HikariCP Connection Pool Tuning for Production Spring Boot Apps


    01/05/2026

    The default settings will eventually burn you

    Spring Boot auto-configures HikariCP with a connection pool of 10. For a small app with low traffic this is fine. For a service handling 200 concurrent requests that each touch the database, 10 connections means 190 requests are queued waiting — and your response times spike even though your queries are fast.

    Most developers set spring.datasource.url, start the app, and forget about the pool. Let's look at the settings that actually matter in production: how to size the pool correctly, what the timeout settings do, how to detect leaks, and what to monitor.


    How connection pooling works

    Without pooling, every database operation opens a TCP connection, authenticates, runs the query, and closes the connection. Opening a connection takes around 20–100ms depending on the database and network. At scale, that overhead can be significant.

    HikariCP maintains a pool of open, authenticated connections. A request borrows one from the pool, runs its queries, and returns it. The connection stays open and warm. The typical borrow/return overhead is under 1ms.
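    The borrow/return cycle can be sketched with a toy pool built on a `BlockingQueue`. This is not how HikariCP is implemented internally (it uses a lock-free `ConcurrentBag`), but it shows the core idea: connections are opened once, borrowed with a bounded wait, and returned still open.

    ```java
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.TimeUnit;

    // Toy pool: a pre-filled queue of "connections". Borrowing blocks until
    // one is free or the timeout expires (HikariCP's connection-timeout).
    public class ToyPool {
        private final BlockingQueue<String> idle;

        public ToyPool(int size) {
            idle = new ArrayBlockingQueue<>(size);
            for (int i = 0; i < size; i++) idle.add("conn-" + i); // opened once, reused forever
        }

        public String borrow(long timeoutMs) throws InterruptedException {
            String conn = idle.poll(timeoutMs, TimeUnit.MILLISECONDS);
            if (conn == null)
                throw new IllegalStateException("pool exhausted: timed out after " + timeoutMs + "ms");
            return conn;
        }

        public void giveBack(String conn) {
            idle.add(conn); // connection stays open and warm
        }

        public static void main(String[] args) throws InterruptedException {
            ToyPool pool = new ToyPool(2);
            String a = pool.borrow(100);
            String b = pool.borrow(100);
            try {
                pool.borrow(50); // pool empty: waits 50ms, then fails fast
            } catch (IllegalStateException e) {
                System.out.println(e.getMessage());
            }
            pool.giveBack(a);
            System.out.println(pool.borrow(100)); // succeeds immediately after the return
        }
    }
    ```

    The same mechanics explain the timeout behavior discussed below: when all connections are out, callers queue, and the only question is how long they wait before failing.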

    [Diagram: HikariCP connection pool flow — HTTP request threads borrow from the HikariCP pool (default max 10 connections) and return connections after use; PostgreSQL itself caps at max_connections = 100. Key insight: pool size ≠ throughput. A pool of 10 with fast queries (1ms) serves 10,000 req/sec; a pool of 100 with slow queries (100ms) serves 1,000 req/sec. Too small: requests queue, timeouts, slow responses. Right size: fast borrow, low wait time. Too large: DB overloaded, context switching overhead.]

    The settings that matter

    Here's the full configuration section with everything labeled:

    spring:
      datasource:
        url: jdbc:postgresql://localhost:5432/mydb
        username: app_user
        password: ${DB_PASSWORD}
        hikari:
          # Pool sizing
          maximum-pool-size: 20        # max connections HikariCP will hold
          minimum-idle: 5              # connections kept open when idle
          # Timeouts
          connection-timeout: 3000     # wait time to get a connection (ms) — default 30000
          idle-timeout: 600000         # how long an idle connection lives (ms) — 10 min
          max-lifetime: 1800000        # max connection age (ms) — 30 min, must be < DB timeout
          keepalive-time: 60000        # ping idle connections to prevent firewall cuts (ms)
          # Debugging
          connection-test-query: SELECT 1  # validation query (JDBC4 drivers don't need this)
          pool-name: MyApp-Pool            # shows in logs and JMX
          leak-detection-threshold: 5000   # warn if connection not returned within 5s

    How to size the pool correctly

    The default of 10 is often wrong in both directions — too small for high-concurrency services, and too large if you have many app instances sharing one database.

    The formula

    A rough starting point from the HikariCP author and PostgreSQL docs:

    pool size = (cores × 2) + effective_spindle_count
    

    For a 4-core app server talking to a network database, treating effective_spindle_count as 1: (4 × 2) + 1 = 9. The PostgreSQL team uses a similar formula and recommends not exceeding your max_connections divided by the number of app instances.

    app_pool_size = (db.max_connections × 0.8) / num_app_instances
    

    For PostgreSQL with max_connections = 100 and 3 app instances:

    (100 × 0.8) / 3 = ~26 connections per instance
    

    This keeps 20% headroom for migrations, admin tasks, and monitoring connections.
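    Both sizing rules are plain arithmetic; here they are as small helper methods, reproducing the worked examples above. The method names are illustrative, not from any library.

    ```java
    // Pool-sizing starting points from the text, as plain arithmetic.
    public class PoolSizing {

        // HikariCP author's rule of thumb: (cores × 2) + effective_spindle_count.
        static int byCore(int cores, int effectiveSpindles) {
            return cores * 2 + effectiveSpindles;
        }

        // Budget rule: 80% of the DB's max_connections, split across app instances.
        static int byDbBudget(int dbMaxConnections, int appInstances) {
            return (int) (dbMaxConnections * 0.8) / appInstances;
        }

        public static void main(String[] args) {
            System.out.println(byCore(4, 1));       // 9  — 4-core server, network DB
            System.out.println(byDbBudget(100, 3)); // 26 — max_connections=100, 3 instances
        }
    }
    ```

    In practice, take the larger of the two as a starting point and then tune against the pending/acquire metrics covered in the monitoring section.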

    IO-bound vs CPU-bound queries

    For mostly IO-bound workloads (typical REST APIs hitting a remote database), connections spend most of their time waiting for the database to respond. A larger pool helps by overlapping that wait time. Don't go above ~50 per instance — at that point, the database itself becomes the bottleneck and you're just shifting the queue.

    For CPU-bound queries (heavy aggregations, analytics), the database can only run as many queries in parallel as it has cores. Adding more connections doesn't help — it adds context switching overhead.

    With virtual threads (Java 21+)

    Virtual threads mean you can have thousands of concurrent HTTP requests. But if your pool is 10, all those virtual threads compete for 10 connections. Your pool size still matters — increase it to match your concurrency target.

    hikari:
      maximum-pool-size: 50  # support higher virtual-thread concurrency
      minimum-idle: 10

    Timeout settings — what each one does

    Getting these wrong causes either connection timeouts in production or stale connection errors on Monday morning after a weekend of low traffic.

    connection-timeout (default: 30,000ms — too long)

    How long a thread waits for a connection from the pool before throwing SQLTransientConnectionException. The default 30 seconds means a user request can hang for 30 seconds before getting an error. Set this to 3,000–5,000ms so failures surface quickly.

    connection-timeout: 3000 # fail fast, don't keep users waiting 30s

    max-lifetime (default: 1,800,000ms — usually fine)

    Maximum age of a connection before HikariCP retires it. This must be shorter than your database's wait_timeout (MySQL) or tcp_keepalives_idle (PostgreSQL). If the database closes a connection due to inactivity and HikariCP doesn't know, the next borrow gets a broken connection.

    Check your database timeout setting and set max-lifetime 30–60 seconds below it:

    # PostgreSQL default tcp keepalives: varies, often 7200s (2h)
    # MySQL default wait_timeout: 28800s (8h)
    max-lifetime: 1800000  # 30 min — well within both defaults
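    The "30–60 seconds below the database timeout" rule is simple enough to encode as a sanity check; this helper method is a sketch (not part of HikariCP), using the larger 60-second margin.

    ```java
    // Derive a safe HikariCP max-lifetime from the database's own timeout,
    // keeping the 60s margin recommended in the text.
    public class MaxLifetime {

        static long safeMaxLifetimeMs(long dbTimeoutMs) {
            if (dbTimeoutMs <= 60_000)
                throw new IllegalArgumentException("DB timeout too short to leave a margin");
            return dbTimeoutMs - 60_000; // retire connections a minute before the DB would
        }

        public static void main(String[] args) {
            long mysqlWaitTimeoutMs = 28_800L * 1000; // MySQL default wait_timeout: 8h
            System.out.println(safeMaxLifetimeMs(mysqlWaitTimeoutMs)); // 28740000
        }
    }
    ```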

    idle-timeout (default: 600,000ms)

    How long a connection can sit idle before being removed from the pool (down to minimum-idle). Relevant for services with variable load — keep it shorter to release connections back to the database during off-peak hours.

    keepalive-time (default: 0 — disabled)

    HikariCP pings idle connections on this interval to prevent firewalls and proxies from cutting the TCP connection silently. Essential in cloud environments (AWS RDS, Azure, GCP) where network infrastructure aggressively closes idle TCP connections.

    keepalive-time: 60000 # ping every 60s, adjust based on your cloud firewall timeout

    Detecting connection leaks

    A connection leak happens when code borrows a connection and never returns it — usually because a transaction wasn't committed/rolled back, or an exception was thrown before the connection was released.

    Symptoms: pool exhaustion under low load, requests hanging, log messages like:

    HikariPool-1 - Connection is not available, request timed out after 3000ms
    

    Enable leak detection:

    hikari:
      leak-detection-threshold: 5000  # warn if connection held > 5s

    HikariCP will log a warning with a stack trace showing where the connection was borrowed. Look for:

    • Missing @Transactional on service methods that need a transaction
    • Manually opened EntityManager that wasn't closed
    • Connection objects obtained directly without try-with-resources
    // Leak source: connection opened, exception thrown, never closed
    Connection conn = dataSource.getConnection();
    String result = runQuery(conn);  // if this throws, conn is never closed

    // Fixed: try-with-resources
    try (Connection conn = dataSource.getConnection()) {
        return runQuery(conn);
    }
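    You can see the difference between the two paths without a real database. This runnable sketch uses a stand-in "connection" (a hypothetical `FakeConnection`, not a JDBC type) that records whether close() was called, even when the query throws.

    ```java
    // Demonstrates why try-with-resources fixes the leak: close() runs
    // on every exit path, including the exception path.
    public class LeakDemo {

        static class FakeConnection implements AutoCloseable {
            boolean closed = false;
            @Override public void close() { closed = true; }
        }

        static String runQuery(FakeConnection conn) {
            throw new RuntimeException("query failed"); // simulate the failure path
        }

        public static void main(String[] args) {
            // Leaky version: close() is dead code once runQuery throws.
            FakeConnection leaked = new FakeConnection();
            try {
                runQuery(leaked);
                leaked.close(); // never reached
            } catch (RuntimeException ignored) { }
            System.out.println("leaked closed? " + leaked.closed); // false

            // try-with-resources closes even when the body throws.
            FakeConnection safe = new FakeConnection();
            try (FakeConnection conn = safe) {
                runQuery(conn);
            } catch (RuntimeException ignored) { }
            System.out.println("safe closed?   " + safe.closed);   // true
        }
    }
    ```

    With a real pool, the leaked connection never returns to HikariCP, which is exactly what leak-detection-threshold flags.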

    Monitoring in production

    Metrics you need to watch:

    Metric                               Normal          Warning
    hikaricp.connections.active          < 80% of max    > 90% of max consistently
    hikaricp.connections.pending         0               > 0 sustained
    hikaricp.connections.timeout         0               any > 0
    hikaricp.connections.acquire (ms)    < 5ms           > 100ms

    HikariCP exposes these via JMX and Spring Actuator. Enable Micrometer integration:

    management:
      endpoints:
        web:
          exposure:
            include: health, metrics

    Then query:

    GET /actuator/metrics/hikaricp.connections.active
    GET /actuator/metrics/hikaricp.connections.pending
    GET /actuator/metrics/hikaricp.connections.timeout
    

    In Grafana, graph hikaricp.connections.active and hikaricp.connections.pending together. If pending is ever non-zero, your pool is undersized for that traffic level.

    For structured logging, HikariCP logs pool stats at DEBUG level. Add this to get pool state every 30 seconds:

    logging.level.com.zaxxer.hikari=DEBUG
    logging.level.com.zaxxer.hikari.HikariConfig=DEBUG

    Multi-datasource configuration

    If you have multiple databases (read replica, analytics DB, etc.), configure separate pools:

    @Configuration
    public class DataSourceConfig {

        @Bean
        @Primary
        @ConfigurationProperties("spring.datasource.primary.hikari")
        public HikariDataSource primaryDataSource() {
            return DataSourceBuilder.create().type(HikariDataSource.class).build();
        }

        @Bean
        @ConfigurationProperties("spring.datasource.readonly.hikari")
        public HikariDataSource readonlyDataSource() {
            return DataSourceBuilder.create().type(HikariDataSource.class).build();
        }
    }

    spring:
      datasource:
        primary:
          url: jdbc:postgresql://primary-db:5432/mydb
          hikari:
            maximum-pool-size: 20
            pool-name: Primary-Pool
        readonly:
          url: jdbc:postgresql://replica-db:5432/mydb
          hikari:
            maximum-pool-size: 10  # read-only, smaller pool fine
            pool-name: ReadOnly-Pool

    Production checklist

    spring:
      datasource:
        hikari:
          maximum-pool-size: 20           # sized for your concurrency, not default 10
          minimum-idle: 5                 # keep some warm, don't hold all 20
          connection-timeout: 3000        # fail fast, not 30s
          max-lifetime: 1800000           # below your DB's connection timeout
          keepalive-time: 60000           # essential in cloud environments
          leak-detection-threshold: 5000  # catch leaks in staging
          pool-name: ${spring.application.name}-Pool  # visible in logs

    The most common production improvements in order of impact:

    1. Fix connection-timeout from 30s to 3s — the easiest win, immediate improvement in failure detection
    2. Size the pool based on (db.max_connections × 0.8) / instances
    3. Enable keepalive-time if running on any cloud database service
    4. Enable leak-detection-threshold in staging to find leaks before they hit production
    5. Add Actuator metrics and alert on connections.pending > 0 and connections.timeout > 0
