10 Common Java Performance Mistakes and How to Fix Them

    07/01/2026

    Picture this: your Java application works perfectly in development. Then it hits production with real traffic, and suddenly response times spike from 50ms to 5 seconds. Users complain. Your monitoring dashboard lights up like a Christmas tree. After hours of debugging, you find the culprit: a simple loop concatenating strings with the + operator inside a method that gets called thousands of times per second.

    I've seen this exact scenario play out more times than I can count. Performance issues in Java are rarely dramatic bugs—they're death by a thousand cuts. Small inefficiencies that seem harmless in isolation compound into serious problems at scale.

    Here are 10 performance mistakes I keep encountering in production Java code, along with the fixes that actually work.

    1. String Concatenation in Loops

    This is the classic performance killer, yet I still see it in codebases from teams who should know better.

    The Problem:

    public String buildReport(List<Transaction> transactions) {
        String report = "";
        for (Transaction tx : transactions) {
            report += tx.getId() + "," + tx.getAmount() + "," + tx.getDate() + "\n";
        }
        return report;
    }

    Every += creates a new String object. With 10,000 transactions, you're creating 10,000 intermediate strings that immediately become garbage. I profiled a similar method once—it was allocating 2GB of memory just to build a 50MB report.

    The Fix:

    public String buildReport(List<Transaction> transactions) {
        StringBuilder report = new StringBuilder(transactions.size() * 50); // Pre-size estimate
        for (Transaction tx : transactions) {
            report.append(tx.getId())
                  .append(",")
                  .append(tx.getAmount())
                  .append(",")
                  .append(tx.getDate())
                  .append("\n");
        }
        return report.toString();
    }

    The trick is pre-sizing the StringBuilder. If you know roughly how large the result will be, set the initial capacity. This avoids internal array resizing.

    Note: Since JDK 9, javac compiles simple String concatenations to invokedynamic (JEP 280), which the JVM optimizes well. But concatenation inside a loop, like the example above, still allocates a new String on every iteration. Don't rely on compiler magic: use StringBuilder for loops.
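
    If you're simply joining pieces with a delimiter, the stream API gives you the same StringBuilder behavior without the manual loop. A minimal sketch, assuming the same Transaction getters as above (requires java.util.stream.Collectors):

    public String buildReport(List<Transaction> transactions) {
        // Collectors.joining accumulates into a single StringBuilder internally
        return transactions.stream()
            .map(tx -> tx.getId() + "," + tx.getAmount() + "," + tx.getDate())
            .collect(Collectors.joining("\n", "", "\n"));
    }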

    2. Autoboxing in Hot Paths

    Autoboxing is convenient, but it's also sneaky. The compiler silently converts between primitives and wrappers, creating objects you never asked for.

    The Problem:

    public long sumValues(List<Long> values) {
        Long sum = 0L;
        for (Long value : values) {
            sum += value; // Unbox value, add, then box the result
        }
        return sum;
    }

    Each iteration unboxes value, performs the addition, then boxes the result back into sum. With a million elements, you've created a million Long objects.

    The Fix:

    public long sumValues(List<Long> values) {
        long sum = 0L; // Primitive accumulator
        for (Long value : values) {
            sum += value; // Only one unbox per iteration
        }
        return sum;
    }

    Even better, if you're dealing with large datasets, consider using primitive collections from libraries like Eclipse Collections or Trove:

    // Using Eclipse Collections
    LongList values = LongLists.mutable.withAll(originalList);
    long sum = values.sum(); // No boxing at all
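
    If pulling in a library isn't an option, the JDK's primitive streams avoid the boxed accumulator as well. A minimal sketch:

    public long sumValues(List<Long> values) {
        // Plain JDK: each element is unboxed once, the accumulator stays primitive
        return values.stream()
                     .mapToLong(Long::longValue)
                     .sum();
    }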

    3. Wrong Collection Type for the Job

    I've lost count of how many times I've seen ArrayList used where HashSet would be orders of magnitude faster.

    Collection Time Complexity Comparison

    Operation           ArrayList   LinkedList   HashSet   HashMap
    Get by index        O(1)        O(n)         N/A       N/A
    Contains            O(n)        O(n)         O(1)      O(1)
    Insert at end       O(1)*       O(1)         O(1)      O(1)
    Insert in middle    O(n)        O(1)         N/A       N/A

    * Amortized O(1), occasional O(n) when the array resizes

    The Problem:

    public boolean hasPermission(String userId, List<String> authorizedUsers) {
        return authorizedUsers.contains(userId); // O(n) scan every time
    }

    If authorizedUsers has 100,000 entries and this method gets called in a request filter, every single request does a linear scan.

    The Fix:

    // Convert once at startup or cache
    private final Set<String> authorizedUsers;

    public PermissionService(List<String> userList) {
        this.authorizedUsers = new HashSet<>(userList); // O(1) lookups now
    }

    public boolean hasPermission(String userId) {
        return authorizedUsers.contains(userId);
    }

    Quick reference for collection choice:

    • Need fast random access by index? ArrayList
    • Need fast contains/lookup? HashSet or HashMap
    • Need sorted iteration? TreeSet or TreeMap
    • Need to preserve insertion order with fast lookup? LinkedHashSet
    • Lots of insertions/deletions in the middle? LinkedList (but honestly, this is rare)
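
    If you want to feel the difference yourself, even a crude timing like the sketch below makes the gap obvious (for real numbers use JMH; this ignores warmup and JIT effects):

    import java.util.*;

    public class ContainsDemo {
        public static void main(String[] args) {
            List<String> list = new ArrayList<>();
            for (int i = 0; i < 100_000; i++) {
                list.add("user-" + i);
            }
            Set<String> set = new HashSet<>(list);

            long t0 = System.nanoTime();
            boolean inList = list.contains("user-99999"); // O(n): walks the whole list
            long t1 = System.nanoTime();
            boolean inSet = set.contains("user-99999");   // O(1): single hash lookup
            long t2 = System.nanoTime();

            System.out.printf("list: %,d ns (%b), set: %,d ns (%b)%n",
                    t1 - t0, inList, t2 - t1, inSet);
        }
    }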

    4. Creating Objects Inside Loops That Could Be Reused

    Some objects are expensive to create. Regex patterns, date formatters, and JSON mappers come to mind.

    The Problem:

    public List<String> extractDates(List<String> logs) {
        List<String> dates = new ArrayList<>();
        for (String log : logs) {
            Pattern pattern = Pattern.compile("\\d{4}-\\d{2}-\\d{2}"); // Compiled every iteration!
            Matcher matcher = pattern.matcher(log);
            if (matcher.find()) {
                dates.add(matcher.group());
            }
        }
        return dates;
    }

    Pattern.compile() is expensive. Doing it inside a loop that runs millions of times is brutal.

    The Fix:

    private static final Pattern DATE_PATTERN = Pattern.compile("\\d{4}-\\d{2}-\\d{2}");

    public List<String> extractDates(List<String> logs) {
        List<String> dates = new ArrayList<>(logs.size());
        for (String log : logs) {
            Matcher matcher = DATE_PATTERN.matcher(log);
            if (matcher.find()) {
                dates.add(matcher.group());
            }
        }
        return dates;
    }

    The same applies to DateTimeFormatter, ObjectMapper, MessageDigest, and similar heavy objects. Create them once, reuse them.

    Gotcha: SimpleDateFormat is NOT thread-safe. If you're caching it as a static field in a multi-threaded context, use DateTimeFormatter (thread-safe) or ThreadLocal<SimpleDateFormat> instead.
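
    Both safe options, as a minimal sketch:

    // DateTimeFormatter is immutable and thread-safe: cache it freely
    private static final DateTimeFormatter ISO_DATE =
            DateTimeFormatter.ofPattern("yyyy-MM-dd");

    // Legacy code stuck on SimpleDateFormat: one instance per thread
    private static final ThreadLocal<SimpleDateFormat> LEGACY_FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));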

    5. N+1 Query Problem in JPA/Hibernate

    This is the most common database performance issue I encounter. It's insidious because it works fine in development with a few records, then destroys production.

    The Problem:

    @Entity
    public class Order {
        @Id
        private Long id;

        @ManyToOne(fetch = FetchType.LAZY)
        private Customer customer;

        @OneToMany(mappedBy = "order", fetch = FetchType.LAZY)
        private List<OrderItem> items;
    }

    // In your service
    public List<OrderDTO> getOrders() {
        List<Order> orders = orderRepository.findAll(); // 1 query
        return orders.stream()
            .map(order -> new OrderDTO(
                order.getId(),
                order.getCustomer().getName(), // +1 query per order!
                order.getItems().size()        // +1 query per order!
            ))
            .toList();
    }

    With 1,000 orders, this executes 2,001 queries. I've seen this bring databases to their knees.

    The Fix:

    // Option 1: Use JOIN FETCH in your repository
    @Query("SELECT o FROM Order o JOIN FETCH o.customer JOIN FETCH o.items")
    List<Order> findAllWithDetails();

    // Option 2: Use @EntityGraph
    @EntityGraph(attributePaths = {"customer", "items"})
    List<Order> findAll();

    // Option 3: Use a DTO projection
    @Query("""
        SELECT new com.example.OrderDTO(o.id, c.name, SIZE(o.items))
        FROM Order o JOIN o.customer c
        """)
    List<OrderDTO> findOrderSummaries();

    Option 3 is often the best for read-only scenarios—it doesn't load entities into the persistence context at all.
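
    The cheapest way to catch N+1 problems before they ship is to log the SQL in development and count the queries a single endpoint fires. Logger names vary by Hibernate version; these should work for recent Spring Boot / Hibernate 6 setups:

    # application.properties (development only -- very verbose)
    logging.level.org.hibernate.SQL=DEBUG
    # Hibernate 6 logs bound parameters under this category
    logging.level.org.hibernate.orm.jdbc.bind=TRACE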

    📚 Related: Check out Database Indexes Deep Dive for more on optimizing database queries.

    6. Synchronizing More Than Necessary

    Thread safety is important, but over-synchronization kills throughput.

    The Problem:

    public class UserCache {
        private final Map<String, User> cache = new HashMap<>();

        public synchronized User getUser(String id) {
            return cache.get(id);
        }

        public synchronized void putUser(String id, User user) {
            cache.put(id, user);
        }

        public synchronized void processAllUsers() {
            // Long-running operation while holding the lock
            for (User user : cache.values()) {
                sendNotification(user); // 100ms per user
            }
        }
    }

    The entire class is synchronized on this. While processAllUsers runs (which could take minutes), no one can even read from the cache.

    The Fix:

    public class UserCache {
        private final ConcurrentHashMap<String, User> cache = new ConcurrentHashMap<>();

        public User getUser(String id) {
            return cache.get(id); // No synchronization needed
        }

        public void putUser(String id, User user) {
            cache.put(id, user); // Thread-safe without blocking reads
        }

        public void processAllUsers() {
            // Take a snapshot for processing
            List<User> snapshot = new ArrayList<>(cache.values());
            for (User user : snapshot) {
                sendNotification(user);
            }
        }
    }

    Use java.util.concurrent collections. They're designed for exactly this. ConcurrentHashMap allows fully concurrent reads, and since Java 8 its writes lock only the affected hash bin rather than the whole map (the old segment design is gone).
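
    ConcurrentHashMap's atomic compound operations also remove the classic reason for external locking: the check-then-act sequence. A sketch, where loadUser is a hypothetical loader method:

    public User getOrLoad(String id) {
        // Atomic "get, or compute and insert if absent" -- no synchronized block,
        // and loadUser runs at most once per key even under contention
        return cache.computeIfAbsent(id, this::loadUser);
    }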

    7. Using Exceptions for Control Flow

    Exceptions are expensive. Creating an exception captures the entire stack trace. Using them for regular control flow is a performance anti-pattern.

    The Problem:

    public int parseOrDefault(String value, int defaultValue) {
        try {
            return Integer.parseInt(value);
        } catch (NumberFormatException e) {
            return defaultValue; // Using exception as control flow
        }
    }

    If 50% of your input is invalid, you're creating exception objects half the time. Stack trace generation is not free.

    The Fix:

    public int parseOrDefault(String value, int defaultValue) {
        if (value == null || value.isEmpty()) {
            return defaultValue;
        }
        // Check if it looks like a number first
        for (int i = 0; i < value.length(); i++) {
            char c = value.charAt(i);
            if (i == 0 && c == '-') {
                if (value.length() == 1) return defaultValue; // a lone "-" isn't a number
                continue;
            }
            if (!Character.isDigit(c)) {
                return defaultValue;
            }
        }
        return Integer.parseInt(value); // Digits only from here (can still overflow on huge inputs)
    }

    // Or use Optional (cleaner, slight overhead)
    public int parseOrDefault(String value, int defaultValue) {
        return Optional.ofNullable(value)
            .filter(s -> s.matches("-?\\d+"))
            .map(Integer::parseInt)
            .orElse(defaultValue);
    }

    When exceptions ARE appropriate:

    • Actual exceptional conditions (file not found, connection refused)
    • Errors that should interrupt normal flow
    • Situations where recovery isn't expected inline

    When exceptions are NOT appropriate:

    • Validating user input
    • Checking if a value exists
    • Any situation where "failure" is a normal, expected outcome
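
    One more escape hatch: if a hot path genuinely must throw, you can at least skip the expensive stack-trace capture. Throwable's protected four-argument constructor (since Java 7) takes a writableStackTrace flag. A sketch:

    public class FastParseException extends RuntimeException {
        public FastParseException(String message) {
            // enableSuppression = false, writableStackTrace = false:
            // skips fillInStackTrace(), the costly part of exception creation
            super(message, null, false, false);
        }
    }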

    8. Not Initializing Collections with Expected Size

    ArrayList and HashMap resize dynamically, but resizing isn't free. Each resize allocates a new array and copies elements.

    The Problem:

    public List<String> processRecords(List<Record> records) {
        List<String> results = new ArrayList<>(); // Default capacity: 10
        for (Record record : records) {
            results.add(record.process());
        }
        return results;
    }

    If you have 10,000 records, the ArrayList will resize roughly 18 times, since capacity grows by about 1.5× on each resize (10 → 15 → 22 → 33 → ... → 14,053).

    The Fix:

    public List<String> processRecords(List<Record> records) {
        List<String> results = new ArrayList<>(records.size()); // Exact size
        for (Record record : records) {
            results.add(record.process());
        }
        return results;
    }

    Same principle applies to HashMap:

    // Bad: default capacity 16, will resize multiple times
    Map<String, User> userMap = new HashMap<>();

    // Good: size / 0.75 to account for the load factor
    Map<String, User> userMap = new HashMap<>((int) (expectedSize / 0.75) + 1);

    // Even better in Java 19+
    Map<String, User> userMap = HashMap.newHashMap(expectedSize);
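
    The same idea carries over to streams: if you know the result size, hand the collector a pre-sized list instead of letting a default one grow. A sketch reusing the Record type from above (requires java.util.stream.Collectors):

    List<String> results = records.stream()
            .map(Record::process)
            .collect(Collectors.toCollection(() -> new ArrayList<>(records.size())));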

    9. Ignoring Connection and Thread Pool Sizing

    Connection pools and thread pools that are too small cause request queuing. Pools that are too large waste resources and can cause contention.

    The Problem:

    # application.properties -- the HikariCP default maximum pool size is 10
    spring.datasource.hikari.maximum-pool-size=10

    If your application handles 100 concurrent requests and each one holds a database connection for 100ms, a pool of 10 can serve at most 100 requests per second (10 connections × 10 requests each). Everything else queues waiting for a connection.

    The Fix:

    The right pool size depends on your workload, but here's a starting formula from HikariCP's documentation:

    connections = ((core_count * 2) + effective_spindle_count)
    

    For a typical 4-core machine, the formula gives around (4 × 2) + 1 ≈ 9 connections (with SSDs the spindle term barely matters), so starting with 10-20 connections is reasonable. Then measure.

    # application.yml
    spring:
      datasource:
        hikari:
          maximum-pool-size: 20
          minimum-idle: 5
          connection-timeout: 30000
          idle-timeout: 600000
          max-lifetime: 1800000

    Important: More connections isn't always better. PostgreSQL, for example, creates a process per connection. 200 connections means 200 processes, which can saturate your database server even if your queries are fast.

    10. Premature Optimization Without Profiling

    The worst performance mistake? Optimizing the wrong thing.

    I once spent a week optimizing a string processing algorithm, shaving off 5ms per call. Felt great. Then I profiled the application and discovered the actual bottleneck was a misconfigured database connection pool that was adding 200ms of wait time per request.

    The Rule: Profile first, optimize second.

    Tools to use:

    • VisualVM – Free, bundled with JDK, good for heap analysis and CPU sampling
    • Java Flight Recorder (JFR) – Built into OpenJDK 11+, low overhead, production-safe
    • async-profiler – Excellent for CPU and allocation profiling without safepoint bias
    • JProfiler / YourKit – Commercial tools with excellent UIs
    # Enable JFR in production (minimal overhead)
    # (the old -XX:+FlightRecorder unlock flag is not needed on JDK 11+)
    java -XX:StartFlightRecording=duration=60s,filename=recording.jfr \
         -jar myapp.jar

    # Analyze with JDK Mission Control
    jmc recording.jfr
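
    async-profiler is driven from the shell; the exact script name and path depend on how you installed it, but an invocation looks roughly like this:

    # Profile CPU for 30 seconds and write a flame graph (script name/path may vary)
    ./profiler.sh -d 30 -e cpu -f flame.html <pid>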

    What to look for:

    1. Methods consuming the most CPU time
    2. Objects with highest allocation rate
    3. Lock contention and thread blocking
    4. GC frequency and pause times

    Don't guess. Measure. Then fix the actual bottleneck.

    Quick Reference: Performance Checklist

    Before your next code review, check for these:

    Issue                           Quick Check
    String concatenation in loops   Search for += with String type in loops
    Autoboxing overhead             Look for Long, Integer, Double in hot paths
    Wrong collection type           List.contains() called frequently? Use Set
    Object creation in loops        Pattern.compile, new SimpleDateFormat inside loops
    N+1 queries                     Enable Hibernate SQL logging, count queries
    Over-synchronization            synchronized methods or large synchronized blocks
    Exception-driven control flow   catch blocks returning default values
    Unsized collections             new ArrayList<>() without size hint
    Pool sizing                     Check connection and thread pool metrics

    Conclusion

    Performance problems rarely announce themselves. They creep in through innocent-looking code that works fine in isolation but falls apart at scale. The patterns I've covered here account for probably 80% of the Java performance issues I encounter in production.

    The best defense isn't memorizing optimization tricks—it's building the habit of profiling before and after changes. Know your hotspots. Measure everything. And when you do optimize, make sure you're fixing the actual bottleneck, not just the code that looks suspicious.


    For more in-depth tutorials on Java performance and optimization, follow me:

    🔗 Blog 🔗 LinkedIn 🔗 Medium 🔗 Github
