Pool: prevent trimming the last idle connection under load #1271
base: master
Conversation
Previously, the inactivity timer could terminate idle connections even when doing so left the pool effectively empty. Under heavy load this forced the pool to create new connections, causing extra overhead and occasional TimeoutErrors during acquire(). This change adds a guard in PoolConnectionHolder so that idle deactivation only happens when it is safe:
- never below pool min_size
- never if there are waiters
- never removing the last idle connection

This ensures the pool retains at least one ready connection and avoids spurious connection churn under load.
Previously, the inactivity timer could terminate idle connections even when doing so left the pool effectively empty. Under heavy load, or after a few minutes of inactivity, this forced the pool to create new connections, causing extra overhead and occasional TimeoutErrors during acquire(). This change adds a guard in PoolConnectionHolder so that idle deactivation only happens when it is safe:
- never below pool min_size
- never if there are waiters
- never removing the last idle connection

This ensures the pool retains at least one ready connection and avoids spurious connection churn after minutes of inactivity or under heavy load.
…fix-empty-connection-pool
- Keep at least one idle connection available (i.e., at least 2 idle holders so
  trimming one still leaves one idle).
"""
pool = getattr(self, "_pool", None)
These attributes are basically always available, so getattr
is superfluous, just access the attributes directly.
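The suggested simplification can be illustrated with a minimal sketch (the class here is a stand-in for asyncpg's PoolConnectionHolder, not the real implementation):

```python
# Sketch of the review suggestion: the holder is always constructed
# with its pool, so a defensive getattr only hides real bugs.
class PoolConnectionHolder:
    def __init__(self, pool):
        self._pool = pool  # always set at construction time

    def pool_via_getattr(self):
        # Superfluous: _pool is guaranteed to exist.
        return getattr(self, "_pool", None)

    def pool_direct(self):
        # Direct access, as the review suggests; an AttributeError
        # here would expose a genuine bug instead of masking it.
        return self._pool


holder = PoolConnectionHolder(pool=object())
print(holder.pool_direct() is holder.pool_via_getattr())  # True
```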
good point thanks
waiters = 0

# Include tasks currently in the process of acquiring.
waiters += int(getattr(pool, "_acquiring", 0) or 0)
Except this one does not seem to actually exist?
Let's add that and it'll let us drop the queue._getters
fiddling above.
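A hypothetical sketch of the reviewer's suggestion: maintain an `_acquiring` counter on the pool so the trim guard can count tasks that have entered acquire() but may not yet be parked on the queue's getters. The `MiniPool` class and its fields are illustrative, not asyncpg's actual implementation:

```python
import asyncio


class MiniPool:
    """Toy pool tracking how many tasks are inside acquire()."""

    def __init__(self, size: int):
        self._acquiring = 0
        self._queue = asyncio.Queue()
        for i in range(size):
            self._queue.put_nowait(f"conn-{i}")

    async def acquire(self):
        # Count this task as an in-flight acquirer for the guard to see.
        self._acquiring += 1
        try:
            return await self._queue.get()
        finally:
            self._acquiring -= 1

    def release(self, conn):
        self._queue.put_nowait(conn)


async def main():
    pool = MiniPool(size=1)
    conn = await pool.acquire()              # take the only connection
    waiter = asyncio.create_task(pool.acquire())
    await asyncio.sleep(0)                   # let the waiter block inside acquire()
    print(pool._acquiring)                   # 1: demand visible to a trim guard
    pool.release(conn)                       # unblock the waiter
    pool.release(await waiter)


asyncio.run(main())
```

With such a counter, the guard could check `pool._acquiring` directly instead of inspecting `queue._getters`.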
self._inactive_callback.cancel()
self._inactive_callback = None

def _can_deactivate_inactive_connection(self) -> bool:
It would probably make sense to move the method to the Pool
class given how much we fiddle with internals here.
Related to #1268
Under heavy load, or when max_inactive_connection_lifetime exceeds the pool's configured connection lifetime (e.g. after a few minutes of inactivity when the lifetime is set to 300 seconds), the inactivity timer may deactivate idle connections too aggressively.
In rare cases this leaves the pool empty, forcing new connections to be created just as clients are waiting. Since connection setup is expensive, this can result in extra overhead and even TimeoutErrors when acquiring a connection.
This change adds a guard to PoolConnectionHolder._deactivate_inactive_connection so that idle deactivation only happens when it is safe:
1. never below min_size
2. never if there are waiters in the pool queue
3. never removing the last idle connection
With these rules, the pool always keeps at least one ready connection, avoiding spurious churn and reducing the risk of TimeoutErrors under load.
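The guard described above can be sketched as a standalone predicate. The `size`, `min_size`, `waiters`, and `idle` fields here are illustrative stand-ins for the pool's private state (`_holders`, `_queue`, `_minsize`); the real method reads those internals instead:

```python
from types import SimpleNamespace


def can_deactivate_inactive_connection(pool) -> bool:
    """Sketch: trim an idle connection only when it is safe to do so."""
    # 1. Never shrink below the configured minimum size.
    if pool.size - 1 < pool.min_size:
        return False
    # 2. Never trim while tasks are waiting to acquire a connection.
    if pool.waiters > 0:
        return False
    # 3. Never remove the last idle connection; keep one ready.
    if pool.idle <= 1:
        return False
    return True


# Example: a pool already at min_size must not be trimmed further.
pool = SimpleNamespace(size=2, min_size=2, waiters=0, idle=2)
print(can_deactivate_inactive_connection(pool))  # False
```

All three checks must pass before the inactivity timer is allowed to close a connection; any single failing rule vetoes the trim.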
The implementation only uses existing private fields (_holders, _queue, _minsize, etc.), so it doesn’t introduce new API surface.
This makes the pool’s behavior more predictable under load without changing existing configuration knobs.