I am running Jest unit and integration tests on my NodeJS API, and I am facing what looks like a memory leak. I tried upgrading Jest from 26.3.2 to 27.5.1, but that did not help much. I took some heap snapshots from the Chrome DevTools console.
[Screenshots: heap snapshots 1-4, showing memory usage growing across runs]
From the snapshots above I can see that memory usage keeps climbing steeply, but I am unable to understand what is going wrong.
Something seems to be up with String, Object and JSBufferData, but I am not sure what the issue is.
In the case of String, I see this:
There are multiple entries for the stringified version of a library, but where does this come from, and why?
In the case of Object:
The object in the screenshot likely comes from a library I use, countries-list, which gives me a list of countries so I can look up ISO names.
And finally JSBufferData, which points to something like URLSearchParams, even though I am not using that object or any related library anywhere in my application:
Stack I use:
NodeJS: 16.14.2, Jest: 27.5.1, jest-serial-runner: 1.2.0
My team and I were hitting a memory leak with Jest 29.x and Node 18.x. The test suite would run, leak memory, and crash Node at 2GB of usage. Increasing the memory allocated to Node was not an option: the number of tests keeps growing, so the suite would eventually fail again, and 2GB is already too much.
This answer is NOT a direct fix for the leak itself, but it makes the problem effectively harmless, and it took us many hours to get here.
The first thing you need to do is set this option in your jest.config.js file:
module.exports = {
  // ...
  workerIdleMemoryLimit: '512MB',
  // ...
}
I chose 512MB, but pick your value with some headroom, roughly test_suite_ram_startup + 10 * memory_leak_increase. With our final numbers below (30MB at startup, about 14MB leaked per suite), that comes out to about 170MB. Whenever memory usage reaches this limit, Jest restarts the worker and clears the memory; the impact on total test run time is negligible.
This alone means your test suite will never crash, so the immediate problem is pretty much solved.
Now, just as important as not crashing Node is using a reasonable amount of RAM to run your tests. Every change from here on reduced the total RAM used in a full run of the test suite.
Run Jest through Node with these flags:
node --expose-gc --no-compilation-cache ./node_modules/jest-cli/bin/jest.js --logHeapUsage
The "logHeapUsage" will log the memory usage so you can check the improvement.
If you are using TypeScript, use SWC as the transpiler instead of ts-jest. It is much faster and saved us tons of memory, since transpilation was our most memory-intensive step.
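A minimal sketch of the switch, assuming @swc/jest is installed as a dev dependency (this transform mapping is the one documented by @swc/jest):

// jest.config.js
module.exports = {
  // route TS/JS files through SWC instead of ts-jest
  transform: {
    '^.+\\.(t|j)sx?$': '@swc/jest',
  },
}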
Force a garbage-collection pass at the end of every test file (this is why --expose-gc is needed above):

afterAll(() => {
  // global.gc is only defined when node runs with --expose-gc
  global.gc && global.gc()
})
In principle this shouldn't be necessary, but somehow Jest is not collecting as often as it should. With this hook, we reduced the total memory used per test suite executed.
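To avoid repeating that hook in every test file, you can put it in a setup file loaded through Jest's setupFilesAfterEnv option (a sketch; jest.setup.js is a hypothetical filename):

// jest.config.js
module.exports = {
  setupFilesAfterEnv: ['<rootDir>/jest.setup.js'],
}

// jest.setup.js
afterAll(() => {
  global.gc && global.gc()
})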
Some libraries, like aws-sdk v2, import everything even if you are using just a tiny fraction of their code. If you look at where the import happens in the lib, everything is imported into a single file; you then import from that file, and all that unused code sits in your memory. To solve it, trace the exact location where the asset you want lives and import directly from there. In the case of aws-sdk, you can also move to aws-sdk v3, which is modular and solves this problem.
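For example, aws-sdk v2 documents per-client deep imports that load only what you need:

// Pulls the entire SDK into memory:
// const AWS = require('aws-sdk')
// const s3 = new AWS.S3()

// Loads only the S3 client:
const S3 = require('aws-sdk/clients/s3')
const s3 = new S3()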
Sometimes we can't just replace a library, but where we could, moving from class-validator to a tiny library like zod helped.
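As an illustration (a hypothetical schema, not from our codebase), a small class-validator DTO translates to zod like this:

const { z } = require('zod')

const userSchema = z.object({
  email: z.string().email(),
  age: z.number().int().positive(),
})

// parse() throws if the input is invalid
userSchema.parse({ email: 'a@b.c', age: 30 })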
In our case, updating Prisma ORM from 4.10 to 5.0 worked wonders: the new Prisma made our test suite run 2.6x faster and use far less memory.
Just to put some numbers into perspective: our test suite required 1.1GB of RAM on startup, and every test suite leaked another 100MB. Each suite took 10-15s to run. After about 10 suites, Node would crash.
After all of this, each test suite took 2-3s to run.
We went from a crashing run that started at 1.1GB of RAM and climbed 100MB per test suite to a never-crashing one (thanks to the workerIdleMemoryLimit setting I described first) that starts at 30MB and climbs about 14MB per suite. Whenever 512MB is reached, usage immediately drops back to 30MB and slowly starts climbing again. Everything also got much faster.
I really hope this helps anyone struggling with the same problem.