
Disk Cache for Pyodide Wheels #1851

Merged: 8 commits from ggu/pyodide-disk-cache into main on Apr 2, 2024

Conversation

garrettgu10 (Collaborator) commented on Mar 16, 2024

Test plan:

$ mkdir ~/.cache/workerd
$ ./bazel-bin/src/workerd/server/workerd test --disk-cache-dir ~/.cache/workerd samples/pyodide-fastapi/config.capnp
# disable internet (either by blocking pub-45d734c4145d4285b343833ee450ef38.r2.dev in /etc/hosts or by switching off all internet access)
$ ./bazel-bin/src/workerd/server/workerd test --disk-cache-dir ~/.cache/workerd samples/pyodide-fastapi/config.capnp
# it should still work

garrettgu10 (Collaborator, Author) commented on Mar 18, 2024

I just found out KJ does in fact have file I/O, so I'm in the process of rewriting that section of the PR
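
For reference, KJ's filesystem API (kj/filesystem.h) supports this kind of I/O directly. Below is a minimal sketch of reading and writing cache entries through an already-opened kj::Directory; the helper names and cache layout are hypothetical, not necessarily what the PR ended up using.

#include <kj/filesystem.h>

// Sketch only: read a cached wheel, returning kj::none on a cache miss.
kj::Maybe<kj::Array<kj::byte>> readCachedWheel(
    const kj::Directory& cacheDir, kj::PathPtr key) {
  KJ_IF_SOME(file, cacheDir.tryOpenFile(key)) {
    return file->readAllBytes();
  }
  return kj::none;
}

// Sketch only: store a wheel, creating the file (and parent directories) as needed.
void writeCachedWheel(const kj::Directory& cacheDir, kj::PathPtr key,
                      kj::ArrayPtr<const kj::byte> data) {
  auto file = cacheDir.openFile(key,
      kj::WriteMode::CREATE | kj::WriteMode::MODIFY | kj::WriteMode::CREATE_PARENT);
  file->writeAll(data);
}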

garrettgu10 (Collaborator, Author) commented:

The only thing left to do is to get the cache root from the command line or from the capnp config, and to edit cacheRoot in server.c++ to make it dynamic.
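
A rough sketch of how such a flag could be wired up with kj::MainBuilder follows. The --disk-cache-dir name matches the test plan above, but the surrounding class members (context, fs, server) and the exact wiring in server.c++ are assumptions, not the code that landed.

// Sketch only: register a --disk-cache-dir option and hand the opened directory to the server.
kj::MainBuilder::Validity setDiskCacheDir(kj::StringPtr pathStr) {
  kj::Path path = fs->getCurrentPath().eval(pathStr);
  KJ_IF_SOME(dir, fs->getRoot().tryOpenSubdir(path, kj::WriteMode::MODIFY)) {
    server.setDiskCacheRoot(kj::mv(dir));
    return true;
  }
  return "could not open disk cache directory";
}

kj::MainFunc getMain() {
  return kj::MainBuilder(context, "workerd (sketch)", "Runs the workerd server.")
      .addOptionWithArg({"disk-cache-dir"}, KJ_BIND_METHOD(*this, setDiskCacheDir),
          "<path>", "Use <path> as an on-disk cache for Pyodide packages.")
      .build();
}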

garrettgu10 force-pushed the ggu/pyodide-disk-cache branch 2 times, most recently from a38d264 to e229797, on March 21, 2024 at 19:35
garrettgu10 (Collaborator, Author) commented:
Having trouble testing since devspace is down, but the test plan is at the top and everything should be complete at this point

garrettgu10 marked this pull request as ready for review on March 26, 2024 at 19:33
garrettgu10 requested review from a team as code owners on March 26, 2024 at 19:33
hoodmane (Contributor) left a comment:


Generally looks good to me. Is there a downstream PR? Looks to me like you'll need to add a DisabledDiskCache or something like that so that the disk-cache import won't fail.
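
For illustration, one way the disabled case could be modeled is to make the cache wrap a kj::Maybe<kj::Own<const kj::Directory>> and have every operation degrade to a no-op when the directory is absent. This is a sketch with made-up names, not necessarily the API that landed.

// Sketch only: a disk cache that silently becomes a no-op when no directory is configured.
class DiskCacheSketch {
public:
  DiskCacheSketch() = default;  // disabled: no backing directory
  explicit DiskCacheSketch(kj::Own<const kj::Directory> dir): cacheRoot(kj::mv(dir)) {}

  kj::Maybe<kj::Array<kj::byte>> get(kj::PathPtr key) {
    KJ_IF_SOME(root, cacheRoot) {
      KJ_IF_SOME(file, root->tryOpenFile(key)) {
        return file->readAllBytes();
      }
    }
    return kj::none;  // disabled cache (or missing entry) behaves like a miss
  }

  void put(kj::PathPtr key, kj::ArrayPtr<const kj::byte> data) {
    KJ_IF_SOME(root, cacheRoot) {
      root->openFile(key, kj::WriteMode::CREATE | kj::WriteMode::MODIFY |
                          kj::WriteMode::CREATE_PARENT)->writeAll(data);
    }
    // When disabled, writes are dropped.
  }

private:
  kj::Maybe<kj::Own<const kj::Directory>> cacheRoot;
};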

garrettgu10 (Collaborator, Author) commented:

Relies on #1919

garrettgu10 merged commit 6044ec6 into main on Apr 2, 2024. 10 checks passed.
Comment on lines +932 to +936
void diskCacheDir(kj::StringPtr pathStr) {
kj::Path path = fs->getCurrentPath().eval(pathStr);
kj::Maybe<kj::Own<const kj::Directory>> dir = fs->getRoot().tryOpenSubdir(path, kj::WriteMode::MODIFY);
server.setDiskCacheRoot(kj::mv(dir));
}
A collaborator commented:


Should this parameter have a default value pointing at $TMPDIR/workerd-python-package-cache?

A contributor commented:


On Linux and Mac I'd like to see it default to:

  • $XDG_CACHE_HOME/workerd-python-package-cache if XDG_CACHE_HOME is defined
  • $HOME/.cache/workerd-python-package-cache otherwise.

Unless you have a strong opinion that it should go in /tmp instead.

A collaborator commented:


True, better to follow the XDG spec here.

For macOS I think we want ~/Library/Caches/ and I guess %LOCALAPPDATA% for Windows.
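
A sketch of the default-path resolution discussed above ($XDG_CACHE_HOME, then ~/.cache on Linux; ~/Library/Caches on macOS; %LOCALAPPDATA% on Windows) might look like the following. This is illustrative only, not the logic that landed in the PR.

#include <cstdlib>
#include <kj/string.h>

// Sketch only: pick a platform-appropriate default cache directory, or kj::none if
// nothing sensible is available (in which case the disk cache stays disabled).
kj::Maybe<kj::String> defaultDiskCacheDir() {
#if defined(_WIN32)
  if (const char* localAppData = std::getenv("LOCALAPPDATA")) {
    return kj::str(localAppData, "\\workerd-python-package-cache");
  }
#elif defined(__APPLE__)
  if (const char* home = std::getenv("HOME")) {
    return kj::str(home, "/Library/Caches/workerd-python-package-cache");
  }
#else
  // Linux and other Unixes: follow the XDG Base Directory spec.
  if (const char* xdgCache = std::getenv("XDG_CACHE_HOME")) {
    return kj::str(xdgCache, "/workerd-python-package-cache");
  }
  if (const char* home = std::getenv("HOME")) {
    return kj::str(home, "/.cache/workerd-python-package-cache");
  }
#endif
  return kj::none;
}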

garrettgu10 added a commit that referenced this pull request on May 13, 2024
* Disk Cache for Pyodide Wheels

* Add command line argument for disk cache root

* Better API for instantiating a disabled disk cache

* Remove logs when using disk cache

* Fix bug with reportUndefinedSymbols shim