Replies: 1 comment
-
The file copying is done to ensure that no undeclared dependencies are used, see #86. However, this should only be done for the nix_file_deps of the current repository rule, not transitive ones. So, if the set of files is the same for all packages, then it is effectively part of your nixpkgs repository. Would it be possible to include them in the nix_file_deps of a local_nixpkgs_repository? Then they should only be copied once.
There was a discussion quite a while ago by @thufschmitt about building multiple packages at once to save on evaluation time here: Flakes and evaluation caching seem like an elegant solution to the problem. I'd also be interested to see what kind of API could be designed with bzlmod and flakes.
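To make the suggestion concrete, a WORKSPACE setup along these lines might work. This is only a sketch: `nixpkgs_local_repository` and `nix_file_deps` are the existing rules_nixpkgs attributes, but the file labels and package names here are made up for illustration.

```starlark
load(
    "@io_tweag_rules_nixpkgs//nixpkgs:nixpkgs.bzl",
    "nixpkgs_local_repository",
    "nixpkgs_package",
)

# Declare the shared file set once, on the repository rule itself,
# so the files are only copied once.
nixpkgs_local_repository(
    name = "nixpkgs",
    nix_file = "//nix:default.nix",  # hypothetical entry point
    nix_file_deps = [
        "//nix:overlay.nix",  # hypothetical overlay collecting all packages
        "//nix:pins.json",    # hypothetical version pins
    ],
)

# Individual packages then only reference the repository,
# not the whole file set.
nixpkgs_package(
    name = "hello",
    repository = "@nixpkgs",
)
```

If the shared files only need to be declared on the repository rule, each `nixpkgs_package` invocation stays small and the file copying stops scaling with the number of packages.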
-
Hi,
Our nix+bazel setup has matured since we started using it a couple of years ago. We have slowly departed from upstream nixpkgs, as we need more control over the versions of our dependencies. It was quite simple at first, but evolved into a sizable fork of nixpkgs, with a lot of pinned packages and custom logic. We still use upstream nixpkgs internally, but bazel should only use packages that come from our custom, version-pinned package set.
The problem is that now all the packages are interdependent, and it is no longer possible to define a short list of files needed for a nixpkgs_package invocation. Each and every nixpkgs_package needs to depend on nearly all the files, and the presence of an overlay file collecting all the packages means that any change invalidates every nixpkgs_package.
Now, the packages themselves have not changed, but bazel cannot observe that anymore. It thus re-evaluates the 100+ nixpkgs_package rules on any nix change. Each evaluation is reasonably fast (10-20s), but when 100+ are started in parallel, it adds up to several minutes of wasted time before bazel even starts building anything.
I have looked roughly into the timings, and bazel spends a non-trivial amount of time copying the files into the external repo. Some restarting also happens, but that is a problem for another time.
My idea is that all these nix evaluations do the same thing over and over: they have to load all of nixpkgs, apply an overlay (a costly fixpoint), and extract a tree of .drv files. Evaluating all the packages at once, however, should be much faster, as each additional package has a decreasing marginal evaluation cost.
Assuming the nix evaluation cache supports caching .drv's (nix-instantiate, not nix-build), it would make sense to evaluate all the attributes that are known to be used in one go. This can be a fuzzy approximation; it does not matter much. Packages that are not used are only instantiated, not built, and packages that are used but were not evaluated in that bulk pass still work, just more slowly.
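The bulk evaluation could be a single nix-instantiate call over one expression that loads nixpkgs and applies the overlay once. This is only a sketch: the file names (`bulk.nix`, `overlay.nix`) and the attribute names are hypothetical stand-ins for the real package set.

```nix
# bulk.nix: load nixpkgs and apply the overlay once,
# then expose every attribute Bazel is known to use.
let
  pkgs = import <nixpkgs> { overlays = [ (import ./overlay.nix) ]; };
in {
  # A fuzzy approximation of the used package set is fine here;
  # unused attributes are merely instantiated, never built.
  inherit (pkgs) hello git openssl;
}
```

A single `nix-instantiate bulk.nix` would then produce all the .drv files in one evaluation, paying the nixpkgs load and the overlay fixpoint exactly once instead of 100+ times.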
This has two main advantages: the costly nixpkgs load and overlay fixpoint are paid only once for all packages, and with the evaluation cache, unchanged packages do not need to be re-evaluated at all.
This is not yet perfect, as it only works for flakes that are copied to the store, so we need tooling to copy the local files to the nix store. I am also not sure whether the evaluation cache can be used that way.
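For the copy-to-store step, something along these lines might work with the flakes-era CLI. This is a sketch, not a tested workflow; the `./nix` path is made up, and whether these commands interact with the evaluation cache the way we want is exactly the open question above.

```shell
# Add the checked-in nix files to the store so evaluation
# can be keyed on a content-addressed store path.
nix store add-path ./nix

# Or, for a flake, copy the flake (including its source tree)
# and its inputs to the store in one go.
nix flake archive .
```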
Any further thoughts or comments?