# Ideas for faster cold compiler start-up #25658
Comments
The numbers for the blog post were, as you noted, from 3 years ago. I tested it on the TypeScript compiler that was part of the Octane benchmark. I'm sure these numbers are totally outdated, but I'm also sure that the benefits are still very significant. Unfortunately, the steps I used only work on vanilla V8; Node.js doesn't support startup snapshots yet. There are efforts underway to change that, but until that is done, custom startup snapshots for e.g. TypeScript are not possible.
Deno is currently using V8 snapshots of TypeScript for the runtime. At the moment it isn't possible to compare against a non-snapshotted version, but it certainly appears to have improved startup time compared to the old architecture.
Thanks for getting back to us @hashseed! We'll be keen to see any progress there, but there's certainly no rush. 🙂
You can also make the compiler even more incremental by adding a step that indexes symbols in header/declaration files ahead of time. The index file contains the locations of all symbols in one file. When the parser parses a source file, it parses symbols as it encounters them. If a symbol lies in a declaration file, it does a lookup in the index, parses only that specific part, and resolves that type. In this way, only code that is actually used is parsed.
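The lazy-resolution scheme described above can be sketched roughly as follows. None of these names come from the TypeScript compiler; the index shape, `SymbolLocation`, and `resolveSymbol` are all hypothetical, and real parsing is stood in for by a callback:

```typescript
// Hypothetical ahead-of-time symbol index for declaration files:
// maps a symbol name to the span of text that declares it.
interface SymbolLocation {
  file: string;  // declaration file that defines the symbol
  start: number; // offset where the declaration begins
  end: number;   // offset just past the declaration
}

type SymbolIndex = Map<string, SymbolLocation>;

// Resolve a symbol lazily: parse only the indexed span of the declaration
// file instead of the whole file.
function resolveSymbol(
  index: SymbolIndex,
  name: string,
  parseSpan: (loc: SymbolLocation) => string,
): string | undefined {
  const loc = index.get(name);
  return loc === undefined ? undefined : parseSpan(loc);
}

// Usage with an in-memory stand-in for real parsing:
const index: SymbolIndex = new Map([
  ["Promise", { file: "lib.es2015.promise.d.ts", start: 120, end: 480 }],
]);
const resolved = resolveSymbol(index, "Promise", (loc) => `parsed ${loc.file}`);
```

The key property is that declaration files never get parsed eagerly; an unreferenced symbol costs nothing beyond its index entry.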
This is already being done to a certain degree with preparse data. We store function ranges and captured variables to avoid reparsing.
You mean in Node.js? I was referring more to the TS compiler. I haven't been following this project as closely as before, but I think TS doesn't do this.
When you have a lot of TypeScript files, caching some fs results (`fileExists`, `directoryExists`, and so on) may speed up compilation (see TypeStrong/ts-loader#825 (comment)).
The package …
Not sure if this is the right place to talk about this, but I looked into tsconfig.buildinfo and saw it's creating hashes for all the files in node_modules:

```json
{
  "program": {
    "fileInfos": {
      "/home/andrew/Build/dev/scrape-pages/node_modules/typescript/lib/lib.es5.d.ts": {
        "version": "c8665e66018917580e71792b91022bcaf53fb946fab4aaf8dfb0738ed564db88",
        "signature": "c8665e66018917580e71792b91022bcaf53fb946fab4aaf8dfb0738ed564db88"
      },
      "/home/andrew/Build/dev/scrape-pages/node_modules/typescript/lib/lib.es2015.d.ts": {
        "version": "7994d44005046d1413ea31d046577cdda33b8b2470f30281fd9c8b3c99fe2d96",
        "signature": "7994d44005046d1413ea31d046577cdda33b8b2470f30281fd9c8b3c99fe2d96"
      },
      "/home/andrew/Build/dev/scrape-pages/node_modules/typescript/lib/lib.es2016.d.ts": {
        "version": "5f217838d25704474d9ef93774f04164889169ca31475fe423a9de6758f058d1",
        "signature": "5f217838d25704474d9ef93774f04164889169ca31475fe423a9de6758f058d1"
      },
      "/home/andrew/Build/dev/scrape-pages/node_modules/typescript/lib/lib.es2017.d.ts": {
        "version": "459097c7bdd88fc5731367e56591e4f465f2c9de81a35427a7bd473165c34743",
        "signature": "459097c7bdd88fc5731367e56591e4f465f2c9de81a35427a7bd473165c34743"
      },
      ...
    }
  }
}
```

Given that
I am guessing this is not something configurable currently, and I'm guessing the compiler just defaults to creating a hash for every single file it takes in. But if we assume `node_modules` won't change, then we could remove all that signature creation and checking on every build. That would definitely lead to some speed-ups. The only problem I could see is what happens when a node module is updated or removed, but I don't even know if typechecking is relevant to the .buildinfo file, or if it is purely for deciding which files need to be re-compiled.
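The shortcut suggested above would amount to something like the following. This is an illustrative sketch only, not how tsc actually produces `.tsbuildinfo` versions; the `"assumed-immutable"` sentinel and the `fileVersion` helper are both hypothetical:

```typescript
import { createHash } from "node:crypto";

// Hash file contents for project sources, but return a fixed sentinel for
// anything under node_modules that we assume never changes between builds.
function fileVersion(fileName: string, contents: string): string {
  if (fileName.includes("/node_modules/")) {
    // Assumption: installed package contents are immutable, so no hash needed.
    return "assumed-immutable";
  }
  return createHash("sha256").update(contents).digest("hex");
}
```

As the comment notes, the risk is exactly the one raised above: if a package is updated or removed, nothing invalidates the sentinel, so such a scheme would need to be opt-in or tied to the lockfile.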
@hashseed The issue linked to the RFC (nodejs/node#17058) was closed and nodejs/node#35711 was opened as continuation. Is the continuation issue a requirement for the changes proposed in this issue or is the original RFC sufficient for improving TS startup performance? The startup performance is something we are attempting to tackle for Chrome DevTools (#40721) and snapshots could potentially help in this regard.
The continuation is required; in particular, the "Enabling user land snapshot" part is necessary to bundle a pre-loaded TSC so that we can save the time spent loading TSC into memory on startup.
Build-time user-land snapshot compilation is available as of Node 18 (experimental): https://nodejs.org/en/blog/announcements/v18-release-announce/#build-time-user-land-snapshot-experimental
Node v22 ships `NODE_COMPILE_CACHE`, so e.g. … could turn that on, right? Would it be worth adding an option / making this the default in VS Code?
I've closed my attempt at using snapshotting: #55830. The new …

@kurtextrem Not quite; that config is not a shell command, but a path passed to …. Unfortunately, we can't just set …
@joyeecheung Do you see a world in which the compile cache can be enabled from within a running program, or where this option is just "always on" and benefits all programs?
@jakebailey Thank you for making me aware! I tried the following wrapper (…), but in that case TS never finishes the IntelliSense status. Passing just …
I would think you'd need to pass in some sort of relative path or stick that on your PATH. Another update is nodejs/node#53639, which would allow our entrypoints to enable caching themselves.
Sorry, missed the ping - nodejs/node#53639 would allow a script to enable caching for another script/module, so technically it can be enabled from within the same running program. To enable caching of a script by itself, I don't have good ideas - maybe some directive would be possible to implement, but that's another can of worms about how acceptable it would be to effectively add Node.js-specific directives to the JS language, or whether parsing directives can lead to overhead themselves...
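For reference, the API discussed in nodejs/node#53639 landed as `module.enableCompileCache()` in Node.js 22.8. A minimal sketch of the "entrypoint enables caching for itself" pattern, with a defensive guard (my addition, not from the Node docs) so it degrades gracefully on older Node versions:

```typescript
import * as nodeModule from "node:module";

// module.enableCompileCache() turns on Node's on-disk V8 compile cache for
// everything loaded afterwards. With no argument, Node picks a default
// cache directory. Guarded because the API only exists on Node >= 22.8.
let status: number | null = null;
const mod = nodeModule as unknown as {
  enableCompileCache?: () => { status: number };
};
if (typeof mod.enableCompileCache === "function") {
  // status is a numeric constant (see module.constants.compileCacheStatus).
  status = mod.enableCompileCache().status;
}
```

This is the same mechanism the `NODE_COMPILE_CACHE` environment variable activates, just triggered from inside the program instead of from the shell.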
### Background

For some users, cold compile times are getting to be a bit long - so much so that it's impacting people's non-watch-mode experience and giving people a negative perception of the compiler.

Compilation is already a hard sell for JavaScript users. If we can get some speed wins, I think it'd ease a lot of the pain of starting out with TypeScript.
### Automatic `skipDefaultLibCheck`

`lib.d.ts` is a pretty big file, and it's only going to grow. Realistically, most people don't ever declare symbols that conflict with the global scope, so we made the `skipDefaultLibCheck` (and also the `skipLibCheck`) flags for faster compilations.

We can suggest this flag to users, but the truth is that it's not discoverable. It's also often misused, so I want to stop recommending it to people. 😄
It'd be interesting to see if we can get the same results as `skipDefaultLibCheck` based on the code users have written. Any program that doesn't contribute a global augmentation, or a declaration in the global scope, doesn't really need to have `lib.d.ts` checked over again.

@mhegazy and I have discussed this, and it sounds like we have the necessary information after the type-checker undergoes symbol-merging. If no symbols ever get merged outside of lib files, we can make the assumption that `lib` files never need to get checked. But this requires knowing that all lib files have already had symbols merged up front, before any other files the compiler is given.

**Pros**

`skipDefaultLibCheck` removes anywhere between 400-700ms on my machine from a "Hello world" file, so we could expect the same here.
**Cons**

Erroneous changes to `lib.d.ts` wouldn't be seen in a compile (so we'd likely need a `forceDefaultLibCheck`). Since we maintain the `lib.d.ts` files, only our team ever needs to run `forceDefaultLibCheck`, reducing the cost for all other TypeScript users.

### V8 Snapshots
~3 years ago, the V8 team introduced custom startup snapshots. In that post, …

Obviously my machine's not the same as that aforementioned desktop, but I'm getting just a bit over 200ms for running `tsc -v`, so we could possibly minimize a decent chunk of our raw startup cost. Maybe @hashseed or @bmeurer would be able to lend some insight into how difficult this would be.

### Minification
@RyanCavanaugh and I tried some offhand loading benchmarks with Uglify and managed to reduce `typescript.js`'s size by about half. I don't know how impactful 30ms is, but the size reduction sounds appealing.