Cloud Platform


This is an IBM Automation portal for Cloud Platform products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post your own idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Status Under review
Workspace WebSphere Liberty
Created by Guest
Created on Mar 12, 2026

Adaptive Connection Pool Throttling based on Latency

In high-load environments, when a backend resource (such as a JMS provider or a database) starts slowing down, the standard behavior of a connection pool is to open more connections to handle the queue, up to the configured maxPoolSize. This often creates a "death spiral" where the increased number of concurrent connections further degrades the backend's performance.

Currently, Liberty users must manually tune pools or rely on external circuit breakers. There is no native way to make the pool size sensitive to the health (latency) of the connection it provides.

Implement a latency-based governor (control monitor) within the Liberty connection manager. This feature would monitor the average response time of requests served through the pool.

Threshold Monitoring: Define a configurable latency target threshold (for example, 250 ms).

Dynamic Capping: If the actual average response time exceeds the threshold, the pool should dynamically throttle the effective maxPoolSize to prevent further congestion.

Recovery Phase: Once the average latency falls below the threshold for a sustained period (cool-down), the governor gradually restores maxPoolSize to its original configured value.
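The three behaviors above (threshold monitoring, dynamic capping, recovery) could be sketched roughly as follows. This is a minimal illustration, not a proposal for Liberty's actual API: the class name, the 25% shrink factor, and the one-connection-per-tick recovery rate are all assumptions made for the example.

```java
// Hypothetical sketch of an adaptive pool governor. Nothing here is an
// existing Liberty API; names and tuning constants are illustrative.
public class LatencyGovernor {
    private final int configuredMax;       // the configured maxPoolSize
    private final long thresholdNanos;     // latency target, e.g. 250 ms
    private final long coolDownNanos;      // sustained-recovery period
    private final double shrinkFactor = 0.75; // assumed: cut cap by 25% per breach

    private int effectiveMax;              // the throttled ceiling
    private long belowThresholdSince = -1; // -1 while latency is above threshold

    public LatencyGovernor(int configuredMax, long thresholdMillis, long coolDownMillis) {
        this.configuredMax = configuredMax;
        this.effectiveMax = configuredMax;
        this.thresholdNanos = thresholdMillis * 1_000_000L;
        this.coolDownNanos = coolDownMillis * 1_000_000L;
    }

    /** Called periodically with the rolling average request latency. */
    public synchronized void onSample(long avgLatencyNanos, long nowNanos) {
        if (avgLatencyNanos > thresholdNanos) {
            belowThresholdSince = -1;
            // Dynamic capping: shrink the effective ceiling, never below 1.
            effectiveMax = Math.max(1, (int) (effectiveMax * shrinkFactor));
        } else {
            if (belowThresholdSince < 0) belowThresholdSince = nowNanos;
            // Recovery: after a sustained cool-down below the threshold,
            // grow back toward the configured maximum one connection at a time.
            if (nowNanos - belowThresholdSince >= coolDownNanos
                    && effectiveMax < configuredMax) {
                effectiveMax++;
            }
        }
    }

    public synchronized int effectiveMaxPoolSize() { return effectiveMax; }
}
```

A real implementation would have to choose how aggressively to shrink and how cautiously to recover; asymmetric behavior (fast shrink, slow recovery) is the usual choice so the pool backs off quickly during a spike but does not oscillate afterwards.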

Resilience: Automatically protects backends from being overwhelmed during latency spikes.

Stability: Prevents "cascading failures" across microservices.

Efficiency: Reduces the need for complex manual tuning and improves "out-of-the-box" stability for Liberty in unpredictable backend environments.

Idea priority High
  • Guest
    Mar 13, 2026

    This is a good idea.  I'd like to ask for some further clarification on it.
    Is the problem from the connection pool creating additional new connections to the database to try to handle the load, as suggested by "open more connections to handle the queue"?
    Or is the problem from the connection pool allowing too many existing connections to be used at once, as suggested by "increased number of concurrent connections"?

    In either case, there are going to be situations where measuring response time will not be accurate or useful.  For example, a request might have a lengthy response time because the data it is trying to access is locked, while the owner of that lock wants to make additional requests to the same backend (for data that is not locked) before releasing it.  In that case, throttling could make things worse by delaying the very requests that would release the locks causing the lengthy response times.  Alternatively, some requests might simply take longer than others because they are more complex.  To get this right, the connection pool would likely need to make extra requests that are all identical and don't involve locking -- something like JDBC's Connection.isValid method -- so that the measurements are actually comparable.  The tradeoff is the cost of those extra requests: how often they are made also determines the granularity and responsiveness of the autotuning.
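    The probe-based measurement suggested in this comment could look roughly like the sketch below: time a uniform, lock-free health check and smooth the results with an exponentially weighted moving average, rather than averaging heterogeneous application requests. The probe is injected as a LongSupplier (for instance, a lambda that times a JDBC Connection.isValid call) so the sampler itself stays backend-agnostic; all names and the smoothing scheme are illustrative assumptions, not Liberty behavior.

    ```java
    import java.util.function.LongSupplier;

    // Sketch: fold identical-probe timings into an EWMA so that latency
    // comparisons against a threshold are made over comparable requests.
    public class ProbeLatencySampler {
        private final LongSupplier probeNanos; // runs one probe, returns its latency
        private final double alpha;            // EWMA smoothing factor in (0, 1]
        private double ewmaNanos = -1;         // -1 until the first sample

        public ProbeLatencySampler(LongSupplier probeNanos, double alpha) {
            this.probeNanos = probeNanos;
            this.alpha = alpha;
        }

        /** Runs one probe and folds its latency into the moving average. */
        public double sample() {
            long observed = probeNanos.getAsLong();
            ewmaNanos = (ewmaNanos < 0)
                    ? observed
                    : alpha * observed + (1 - alpha) * ewmaNanos;
            return ewmaNanos;
        }
    }
    ```

    The probe frequency and the smoothing factor together set the tradeoff the comment describes: frequent probes with a large alpha react quickly but cost more backend round trips; infrequent probes with a small alpha are cheap but slow to notice a latency spike.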