
perf(ramda): Use individual imports #79

Merged
1 commit merged into testing-library:main on Aug 8, 2022

Conversation

@Ne3l (Contributor) commented Jul 13, 2022

Import the functions directly to avoid Jest creating a context for all the Ramda utilities

What:

Import the Ramda functions directly instead of through the global index.

Why:

Jest creates a virtual machine context for every imported file in order to provide a secure context. This has been reported in several Jest issues.

Profiling my project, I see that 500 ms is spent in every test file due to setupFilesAfterEnv (https://jestjs.io/docs/configuration#setupfilesafterenv-array).
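For context, consumers typically load this library's matchers once per test file through setupFilesAfterEnv; a minimal sketch of such a config (the file name is assumed, the setup path is the one documented by this library):

```js
// jest.config.js — sketch of a typical consumer setup
module.exports = {
  // Runs once per test file; this is where the Ramda import cost shows up.
  setupFilesAfterEnv: ['@testing-library/jest-native/extend-expect'],
};
```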

[screenshot: profiler trace of the per-file setup phase]

This is mainly because Jest creates a VM context for each Ramda function file.

[screenshot: profiler trace showing VM creation for individual Ramda modules]

There is a trade-off here: it's a bit of an inconvenience for the maintainers, but since this library is so closely tied to Jest, I feel it's worth paying this small cost in the code to provide faster performance for consumers.

In my project we have over 200 test files, and this change saved more than 30s of CI testing time.

How:

Modify the import path to point directly to the function file.
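For illustration, a sketch of what the change looks like (includes is just one example of the cherry-picked functions):

```js
// Before: importing from the package index causes Jest to create a module
// context for every file in the Ramda package when the matchers load.
import { includes } from 'ramda';

// After: import only the single module that is actually needed.
import includes from 'ramda/src/includes';
```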

Checklist:

  • Documentation added to the docs
  • Typescript definitions updated
  • Tests
  • Ready to be merged

@Ne3l (Contributor, Author) commented Jul 21, 2022

Hello @dcalhoun @mdjastrzebski, could you please take a look at this? I'm not sure who the maintainers of the repo are, but I saw you on other PRs.

@codecov (bot) commented Jul 21, 2022

Codecov Report

Merging #79 (5496927) into main (985a98e) will not change coverage.
The diff coverage is n/a.

@@            Coverage Diff            @@
##              main       #79   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files            8         8           
  Lines          149       149           
  Branches        48        50    +2     
=========================================
  Hits           149       149           
| Flag | Coverage Δ |
| --- | --- |
| node-12 | 100.00% <ø> (ø) |
| node-14 | 100.00% <ø> (ø) |

Flags with carried forward coverage won't be shown.

| Impacted Files | Coverage Δ |
| --- | --- |
| src/to-be-disabled.js | 100.00% <ø> (ø) |
| src/to-be-empty.js | 100.00% <ø> (ø) |
| src/to-contain-element.js | 100.00% <ø> (ø) |
| src/to-have-prop.js | 100.00% <ø> (ø) |
| src/to-have-style.js | 100.00% <ø> (ø) |
| src/to-have-text-content.js | 100.00% <ø> (ø) |

@dcalhoun commented:

Hey @Ne3l. 👋🏻 This looks like a solid improvement. I am not a maintainer of this project myself. I recommend reviewing #31 as it includes discussion on project maintainers.

@thymikee (Collaborator) commented:

Hm, this is quite unintuitive. Are there other libraries that have done something similar to speed up their tests?

@Ne3l (Contributor, Author) commented Jul 22, 2022

@thymikee Not that I'm aware of. I was just checking why the tests in my project are slow (still fixing some other stuff) and found this quick win while profiling.

TBH, it's quite unintuitive unless we add a comment explaining why it's done this way. In my opinion it's worth doing, since we improve the experience of consumers just by modifying some imports; but as I'm not the maintainer and won't have to deal with any consequences of this change, I understand your POV and respect your decision.

In case you want to give it a try, here is how to profile it: https://www.youtube.com/watch?v=RB2g-o39upo from @kentcdodds.

The summary is to (a full example invocation is sketched below this list):

  • Replace jest with node --inspect-brk ./node_modules/jest/bin/jest.js
  • Open chrome://inspect/#devices and open the debug session
  • Start profiling
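A sketch of the full invocation (the --runInBand flag is not part of the steps above; it is a standard Jest flag that runs tests serially, which tends to make a single CPU profile easier to read):

```sh
# Start Jest under the Node inspector and pause before any test code runs.
node --inspect-brk ./node_modules/jest/bin/jest.js --runInBand

# Then open chrome://inspect/#devices in Chrome, attach to the debug session,
# and start a CPU profile before resuming execution.
```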

After applying the change in our project, we went from 425 ms to 121 ms.
[screenshot: profiler measurement after the change]

In the CI it meant a reduction of over 30s.

@mdjastrzebski (Collaborator) commented:

@thymikee it seems like this is a Ramda-specific issue, as they state:

Destructuring imports from ramda does not necessarily prevent importing the entire library. You can manually cherry-pick methods like the following, which would only grab the parts necessary for identity to work:

import identity from 'ramda/src/identity'

Source: https://ramdajs.com/

@mdjastrzebski (Collaborator) commented:

After consideration, I think that despite being quite counterintuitive, I would give it a go.

Long term we should probably swap Ramda for something more performant, maybe Rambda or Remeda. But we would have to measure the actual performance impact of such a change.

@mdjastrzebski merged commit 09d898d into testing-library:main on Aug 8, 2022
@github-actions (bot) commented Aug 8, 2022

🎉 This PR is included in version 4.0.6 🎉


Your semantic-release bot 📦🚀

@thymikee (Collaborator) commented Aug 9, 2022

Thanks for checking that @mdjastrzebski

@Grohden (Contributor) commented Aug 10, 2022

Can't we use babel-transform-imports to transform those imports?
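If this refers to the babel-plugin-transform-imports package, a configuration sketch could look roughly like this (untested; option names should be checked against that plugin's README):

```js
// babel.config.js — sketch assuming babel-plugin-transform-imports
module.exports = {
  plugins: [
    [
      'transform-imports',
      {
        ramda: {
          // Rewrites `import { includes } from 'ramda'`
          // into `import includes from 'ramda/src/includes'`.
          transform: 'ramda/src/${member}',
          preventFullImport: true,
        },
      },
    ],
  ],
};
```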

@mdjastrzebski (Collaborator) commented:

Sounds like a good idea.

@Grohden would you like to prepare a PR for this?

@thymikee (Collaborator) commented Aug 10, 2022

FYI, this library was last updated 3 years ago, so expect some challenges.

@lokeshdangi commented:

Is this issue in any way related to these changes? Downgrading the package to the previous version fixes it.

[screenshot: error output]

@mdjastrzebski (Collaborator) commented:

Yes, this looks related. What is your Node version?

@lokeshdangi commented Aug 10, 2022

tried with v16.10.0, v12.22.7 and v14.18.1

@mdjastrzebski (Collaborator) commented:

@lokeshdangi That's strange, as I've done checks on my side and it works correctly.

  1. Can you check that your node_modules contains a ramda directory and ramda/src/includes.js?
  2. Do you use yarn or npm? Which versions?
  3. Do you have a simple repo or some kind of monorepo?

@lokeshdangi commented:

  1. yes

[screenshot: node_modules listing showing ramda/src/includes.js]

  2. yarn version is 1.22.10
  3. no

One thing I noticed: when we check the whole ramda export, it does not contain an includes key.
[screenshot: logged ramda export without an includes key]

@mdjastrzebski (Collaborator) commented Aug 10, 2022

I wonder if this relates only to the includes function, as we could just use regular JS Array.includes in the code and skip that import from Ramda. But it looks like includes is simply the first import from the first Jest Native export, so the problem probably relates to all of these imports.
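For illustration, the kind of replacement being suggested (a hypothetical snippet with made-up variable names, not the library's actual code):

```js
// Before: Ramda's includes, imported from its own module
import includes from 'ramda/src/includes';
const hasStyle = includes(style, appliedStyles);

// After: plain JS Array.prototype.includes, no Ramda import needed
const hasStyle = appliedStyles.includes(style);
```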

@mdjastrzebski (Collaborator) commented Aug 10, 2022

@lokeshdangi which version of ramda do you have installed? As in your yarn.lock.

@lokeshdangi commented Aug 10, 2022

We are not using it directly in our project; it's being installed as a dependency of @testing-library/jest-native.
[screenshot: yarn.lock entry for ramda]

If I install ramda as a dependency of my project, this issue does not happen.

@mdjastrzebski (Collaborator) commented:

@lokeshdangi I've released v4.0.10, which reverts this change.

@mdjastrzebski mentioned this pull request on Aug 10, 2022
@mdjastrzebski (Collaborator) commented:

@Ne3l I've released v4.0.11, which completely removes Ramda in favor of plain JS. Could you test how that version performs against the original Ramda version?

@Ne3l (Contributor, Author) commented Aug 16, 2022

@mdjastrzebski Thanks for that. Yes, it looks good on my side; here you can see the difference from 4.0.5 to 4.0.11 in the CI of the project I'm working on.

4.0.5
[screenshot: CI test run time on 4.0.5]

4.0.11
[screenshot: CI test run time on 4.0.11]
