Large TVP parameter consumes memory and processing power #879
Comments
Using mssql 4.3.0, the following code should replicate the issue:

```js
const mssql = require("mssql");

async function test() {
    // Send 100 people to each device.
    // Each association is 1 record.
    const People = 100;
    const Devices = 1000;

    let tblParam = new mssql.Table();
    tblParam.columns.add("fkidPerson", mssql.Int, { nullable: true });
    tblParam.columns.add("fkidDevice", mssql.Int, { nullable: true });
    tblParam.columns.add("foo", mssql.Int, { nullable: true });
    tblParam.columns.add("bar", mssql.Int, { nullable: true });

    for (let i = 0; i < People; i++) {
        for (let j = 0; j < Devices; j++) {
            // Store the two flags as 0/1 since the columns are declared as Int.
            tblParam.rows.add(i + 10000, j + 1000, i % 2 == 0 ? 1 : 0, j % 7 == 0 ? 1 : 0);
        }
    }

    let conn = new mssql.ConnectionPool({
        server: "localhost",
        user: "service.worker",
        password: "Password",
        database: "Database"
    });
    await conn.connect();

    let req = conn.request();
    req.input("paramMyTableParam", mssql.TVP, tblParam);
    await req.execute("procDoSomethingWithMyTableParam");

    await conn.close();
}

test().catch(console.error);
```
Hi @Jamezo97, I have tried the script that you provided. For this issue, should I expect the memory usage to grow exponentially? While running the script, I was watching the memory usage in Task Manager, and the memory performance looks normal. Is this the correct way to reproduce the issue, or do you monitor memory usage in a different way?
Hi @MichaelSun90, thank you for your interest in this issue. Sorry for not making it clear, but I believe that what you're seeing is the correct behaviour. The total memory usage of the process will remain within a reasonable range, because the GC cleans up the old allocated buffers: as each new buffer is allocated, the previous buffers go out of scope and are subsequently collected. The 465 gigabytes I gave above was just an estimate of the total sum of all the memory the operation would allocate if run to completion, not the peak usage at any one time. What you should see is the memory usage of the process fluctuating rapidly, with the swings growing larger over time as the buffers being allocated get bigger. I've run the test using Node v10.13.0 and v8.15.0 (the versions on my Windows and WSL instances respectively) and they both show the same symptoms. I also tried it with v5.0.3 of mssql (which requires tedious v4.2.0) and the issue remains. I wrote a quick 'hotfix' which solves the issue, so maybe try with and without the fix to see if there's a difference?

```js
"use strict";

let TrackingBuffer = require("tedious/lib/tracking-buffer/writable-tracking-buffer");
let OldMakeRoomFor = TrackingBuffer.prototype.makeRoomFor;

// Override the existing prototype function
TrackingBuffer.prototype.makeRoomFor = function() {
    /**
     * Forcibly enable doubleSizeGrowth for all buffers created.
     * This allows us to create a TVP table with 2MB of data by only
     * allocating ~8 MB of memory for it.
     */
    this.doubleSizeGrowth = true;
    OldMakeRoomFor.apply(this, arguments);
};
```

Thanks!
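For reference, one simple way to watch the churn described above is to poll process.memoryUsage() on a timer while the repro runs. This is just a sketch, not part of the original report; it assumes the test() function from the repro script earlier in the thread is in scope:

```js
// Log RSS and heap usage once a second so the rapid buffer
// allocation/GC churn described above becomes visible in the console.
const timer = setInterval(() => {
    const m = process.memoryUsage();
    console.log(
        `rss=${(m.rss / 1048576).toFixed(1)} MB ` +
        `heapUsed=${(m.heapUsed / 1048576).toFixed(1)} MB ` +
        `external=${(m.external / 1048576).toFixed(1)} MB`
    );
}, 1000);

// Run the repro (assumes test() from the earlier script), then stop logging.
test()
    .catch(console.error)
    .then(() => clearInterval(timer));
```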
🎉 This issue has been resolved in version 8.1.0 🎉
The release is available on:
Your semantic-release bot 📦🚀
The Issue
We're trying to execute a stored procedure with approximately 100,000 rows of data, each around 20 bytes, for a total of roughly 2 MB.
However, while assembling the RpcRequestPayload, the NodeJS process maxes out a CPU core and starts allocating insane amounts of memory, chewing through hundreds of megabytes per second. It took over 10 minutes before it sent the request to the database.

I believe I've tracked the issue down to the WritableTrackingBuffer. For all operations except a bulk load, the WritableTrackingBuffer reallocates its internal buffer for every write after the initial buffer (500 bytes) is filled.

If we write 4 bytes per operation, that's approximately 100,000 * (20 / 4) allocations for the entire TVP, allocating 504, 508, 512, 516, 520 and so on, all the way up to 2,000,000 bytes. That eventually totals around 465 gigabytes of NodeJS buffers allocated, not to mention 465 gigabytes of memory copied between successive buffers. All for a ~2 megabyte payload!
This kind of issue seems to have been reported before but I didn't find any mention of increased processing power and memory usage: #475
There's a PR related to this issue; however, it doesn't seem to solve the problem above, as all it does is periodically relinquish control back to the event loop: #845
Solutions
The easiest solution is to force the WritableTrackingBuffer to enable 'doubleSizeGrowth' for all instances.
This is a common strategy I've seen in other applications where the final size of a Buffer or array isn't known up front. It causes the buffer to double in size on each reallocation, so for a 2 MB payload it would allocate 512, 1024, 2048, 4096, 8192 bytes and so on, or something similar. In the worst case this allocates roughly size * 4 bytes in total for a payload of size bytes.
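To illustrate that bound (again my own sketch, using the same model as above but with doubling growth), a 2 MB payload needs only a handful of allocations:

```js
// Same model as before, but the buffer doubles whenever it runs out of
// room instead of growing by 4 bytes at a time.
const INITIAL = 500;
const TARGET = 2000000;

let size = INITIAL;
let totalAllocated = size;
let allocations = 1;
while (size < TARGET) {
    size *= 2;              // double on every reallocation
    totalAllocated += size; // each reallocation allocates a new, larger buffer
    allocations += 1;
}

console.log(allocations, "allocations,",
    (totalAllocated / 1048576).toFixed(1), "MB allocated in total");
// => about 13 allocations and ~3.9 MB of cumulative allocation in this model,
//    comfortably under the ~4x-payload worst case mentioned above.
```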
The second, harder solution is to assemble the RpcRequestPayload twice. On the first pass, just keep track of the size of the buffer to be created without actually allocating anything; this could be done with a class similar to WritableTrackingBuffer whose writeX functions simply increment a counter. On the second pass, allocate the Buffer once and fill it.

Alternatively, a third solution is to allocate new Buffers internally within the WritableTrackingBuffer in chunks of, say, 512 bytes. Each time a chunk is filled or about to overflow, truncate any unused bytes, push the chunk onto an array, allocate a new Buffer and reset the write position. A sketch of the counting-pass idea is below.
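To make the two-pass idea concrete, here is a minimal sketch of a size-counting stand-in for the first pass; the class and method names are hypothetical and only a couple of the writeX methods are shown:

```js
// Hypothetical "measuring" buffer for the first pass: it mirrors a few of
// the writeX methods but only accumulates a byte count instead of writing.
class SizeCountingBuffer {
    constructor() {
        this.length = 0;
    }

    writeUInt8(value) {
        this.length += 1;
    }

    writeInt32LE(value) {
        this.length += 4;
    }

    writeBuffer(buffer) {
        this.length += buffer.length;
    }
}

// First pass: run the payload-building code against the counting buffer,
// then allocate the real buffer exactly once with the size it reported.
const counter = new SizeCountingBuffer();
counter.writeUInt8(1);
counter.writeInt32LE(12345);
const realBuffer = Buffer.alloc(counter.length);
```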
I'm more than happy to make a PR for any of these, or something else if there are better solutions. I haven't tested any of the above either so treat it as pseudo code.
Any thoughts or suggestions are welcome. Thanks!