Add maxUses config option to Pool #2157

Merged · 1 commit · Apr 9, 2020
8 changes: 8 additions & 0 deletions README.md
@@ -77,6 +77,14 @@ I will __happily__ accept your pull request if it:

If your change involves breaking backwards compatibility, please point that out in the pull request so we can discuss and plan when and how to release it and what type of documentation or communication it will require.

### Setting up for local development

1. Clone the repo
2. From your workspace root run `yarn` and then `yarn lerna bootstrap`
3. Ensure you have a PostgreSQL instance running with SSL enabled and an empty database for tests
4. Ensure you have the proper environment variables configured for connecting to the instance (see the example below)
5. Run `yarn test` to run all the tests
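
For step 4, `pg` honors the standard libpq-style environment variables, so one way to point the tests at a local instance might look like the following (a sketch only; the host, credentials, and database name are placeholder assumptions, not values the test suite requires):

```
export PGHOST=localhost
export PGPORT=5432
export PGUSER=postgres
export PGPASSWORD=postgres
export PGDATABASE=node_postgres_test   # the empty database from step 3
```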

## Troubleshooting and FAQ

The causes and solutions to common errors can be found among the [Frequently Asked Questions (FAQ)](https://github.com/brianc/node-postgres/wiki/FAQ).
26 changes: 26 additions & 0 deletions packages/pg-pool/README.md
@@ -34,6 +34,7 @@ var pool2 = new Pool({
  max: 20, // set pool max size to 20
  idleTimeoutMillis: 1000, // close idle clients after 1 second
  connectionTimeoutMillis: 1000, // return an error after 1 second if connection could not be established
  maxUses: 7500, // close (and replace) a connection after it has been used 7500 times (see below for discussion)
})

//you can supply a custom client constructor
@@ -330,6 +331,31 @@ var bluebirdPool = new Pool({

__please note:__ in node `<=0.12.x` the pool will throw if you do not provide a promise constructor in one of the two ways mentioned above. In node `>=4.0.0` the pool will use the native promise implementation by default; however, the two methods above still allow you to "bring your own."

## maxUses and read-replica autoscaling (e.g. AWS Aurora)

The `maxUses` config option can help an application instance rebalance load against a replica set that has been auto-scaled after the connection pool is already full of healthy connections.

The mechanism here is that a connection is considered "expended" after it has been acquired and released `maxUses` times. Depending on the load on your system, this gives each connection an approximate lifetime, which in turn creates a window for rebalancing.

Imagine a scenario where you have 10 app instances providing an API running against a replica cluster of 3 nodes that are accessed via a round-robin DNS entry. Each instance runs a connection pool of size 25. With an ambient load of 50 requests per second, the connection pool will likely fill up in a few minutes with healthy connections.

If you have weekly bursts of traffic which peak at 1,000 requests per second, you might want to grow your replicas to 10 during this period. Without setting `maxUses`, the new replicas will not be adopted by the app servers without an intervention -- namely, restarting each in turn in order to build up new connection pools that are balanced against all the replicas. Adding additional app server instances will help to some extent because they will adopt all the replicas in an even way, but the initial app servers will continue to focus additional load on the original replicas.

This is where the `maxUses` configuration option comes into play. Setting `maxUses` to 7500 will ensure that over a period of 30 minutes or so the new replicas will be adopted as the pre-existing connections are closed and replaced with new ones, thus creating a window for eventual balance.

You'll want to test based on your own scenarios, but one way to make a first guess at `maxUses` is to identify an acceptable window for rebalancing and then solve for the value:

```
maxUses = rebalanceWindowSeconds * totalRequestsPerSecond / numAppInstances / poolSize
```

In the example above, assuming we acquire and release one connection per request and are aiming for a 30-minute rebalancing window:

```
maxUses = rebalanceWindowSeconds * totalRequestsPerSecond / numAppInstances / poolSize
7200 = 1800 * 1000 / 10 / 25
```
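
One way to wire the result into the pool configuration (a sketch only; the numbers are the assumptions from the example above, and the connection details are left to environment defaults) might be:

```
// Sketch only: derive maxUses from the example's assumed traffic shape.
const Pool = require('pg-pool')

const rebalanceWindowSeconds = 1800 // target roughly a 30-minute rebalancing window
const totalRequestsPerSecond = 1000 // peak load across the whole fleet
const numAppInstances = 10
const poolSize = 25

const maxUses = Math.floor(
  (rebalanceWindowSeconds * totalRequestsPerSecond) / numAppInstances / poolSize
) // 7200 with the numbers above

const pool = new Pool({
  max: poolSize,
  maxUses: maxUses,
})
```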

## tests

To run the tests, clone the repo, run `npm i` in the working dir, and then run `npm test`.
8 changes: 7 additions & 1 deletion packages/pg-pool/index.js
@@ -77,6 +77,7 @@ class Pool extends EventEmitter {
    }

    this.options.max = this.options.max || this.options.poolSize || 10
    this.options.maxUses = this.options.maxUses || Infinity
    this.log = this.options.log || function () { }
    this.Client = this.options.Client || Client || require('pg').Client
    this.Promise = this.options.Promise || global.Promise
@@ -296,8 +297,13 @@
  _release (client, idleListener, err) {
    client.on('error', idleListener)

    client._poolUseCount = (client._poolUseCount || 0) + 1

    // TODO(bmc): expose a proper, public interface _queryable and _ending
    if (err || this.ending || !client._queryable || client._ending) {
    if (err || this.ending || !client._queryable || client._ending || client._poolUseCount >= this.options.maxUses) {
      if (client._poolUseCount >= this.options.maxUses) {
        this.log('remove expended client')
      }
      this._remove(client)
      this._pulseQueue()
      return
85 changes: 85 additions & 0 deletions packages/pg-pool/test/max-uses.js
@@ -0,0 +1,85 @@
const expect = require('expect.js')
const co = require('co')
const _ = require('lodash')

const describe = require('mocha').describe
const it = require('mocha').it

const Pool = require('../')

describe('maxUses', () => {
  it('can create a single client and use it once', co.wrap(function * () {
    const pool = new Pool({ maxUses: 2 })
    expect(pool.waitingCount).to.equal(0)
    const client = yield pool.connect()
    const res = yield client.query('SELECT $1::text as name', ['hi'])
    expect(res.rows[0].name).to.equal('hi')
    client.release()
    pool.end()
  }))

  it('getting a connection a second time returns the same connection and releasing it also closes it', co.wrap(function * () {
    const pool = new Pool({ maxUses: 2 })
    expect(pool.waitingCount).to.equal(0)
    const client = yield pool.connect()
    client.release()
    const client2 = yield pool.connect()
    expect(client).to.equal(client2)
    expect(client2._ending).to.equal(false)
    client2.release()
    expect(client2._ending).to.equal(true)
    return yield pool.end()
  }))

  it('getting a connection a third time returns a new connection', co.wrap(function * () {
    const pool = new Pool({ maxUses: 2 })
    expect(pool.waitingCount).to.equal(0)
    const client = yield pool.connect()
    client.release()
    const client2 = yield pool.connect()
    expect(client).to.equal(client2)
    client2.release()
    const client3 = yield pool.connect()
    expect(client3).not.to.equal(client2)
    client3.release()
    return yield pool.end()
  }))

  it('getting a connection from a pending request gets a fresh client when the released candidate is expended', co.wrap(function * () {
    const pool = new Pool({ max: 1, maxUses: 2 })
    expect(pool.waitingCount).to.equal(0)
    const client1 = yield pool.connect()
    pool.connect()
      .then(client2 => {
        expect(client2).to.equal(client1)
        expect(pool.waitingCount).to.equal(1)
        // Releasing the client this time should also expend it since maxUses is 2, causing client3 to be a fresh client
        client2.release()
      })
    const client3Promise = pool.connect()
      .then(client3 => {
        // client3 should be a fresh client since client2's release caused the first client to be expended
        expect(pool.waitingCount).to.equal(0)
        expect(client3).not.to.equal(client1)
        return client3.release()
      })
    // There should be two pending requests since we have 3 connect requests but a max size of 1
    expect(pool.waitingCount).to.equal(2)
    // Releasing the client should not yet expend it since maxUses is 2
    client1.release()
    yield client3Promise
    return yield pool.end()
  }))

  it('logs when removing an expended client', co.wrap(function * () {
    const messages = []
    const log = function (msg) {
      messages.push(msg)
    }
    const pool = new Pool({ maxUses: 1, log })
    const client = yield pool.connect()
    client.release()
    expect(messages).to.contain('remove expended client')
    return yield pool.end()
  }))
})