cuda: add module #422
Conversation
We'll need to add the toolkit folder to the top-level.nix imports. Is this ever supposed to work on macOS?
👍
I don't think so, but not sure 😅
Let's add an assertion then if …
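Such an assertion could make the failure explicit at evaluation time. A minimal sketch, assuming the devenv module system exposes NixOS-style `assertions` and that the module's enable option lives at `cuda.enable` (both names are illustrative here):

```nix
{ pkgs, lib, config, ... }: {
  # Fail evaluation early instead of producing a broken environment:
  # CUDA in nixpkgs is Linux-only.
  assertions = [
    {
      assertion = !config.cuda.enable || pkgs.stdenv.isLinux;
      message = "cuda: the CUDA toolkit is only supported on Linux.";
    }
  ];
}
```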
Signed-off-by: Bob van der Linden <[email protected]>
Force-pushed 968411f to 7894a7b.
Requires #383, as cuda is unfree.
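Because `cudatoolkit` carries an unfree license, evaluation aborts unless unfree packages are allowed. A sketch of the underlying nixpkgs-level switch (how devenv should surface this is exactly what #383 is about):

```nix
# Import nixpkgs with unfree packages permitted; without this,
# evaluating pkgs.cudatoolkit fails with an "unfree license" error.
import <nixpkgs> {
  config.allowUnfree = true;
}
```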
Would we want to incorporate any feedback from NixOS/nixpkgs#217780 (comment)? Leaving it for the future is also fine :)
Is CUDA available on macOS?
Hmm, this PR is not really ready. It probably shouldn't support macOS if cuda in nixpkgs doesn't support it. On Discord it was mentioned this method did work for pytorch, but I think using the pytorch package from nixpkgs (and thus the nixpkgs cuda maintainers) doesn't play nicely with poetry (pyproject.toml), so there is no perfect solution yet.

I am interested in looking into this further to get a good setup for cuda + pytorch + rust, but it's not high on my todo list atm. I can leave this PR in draft to keep the discussion for devenv centralized, but I can also open a new issue if that's more appropriate.
Force-pushed e82dccf to 2cae5ec.
Adding …
I think that is a fix that should be in NixOS. It makes no sense for other distros or macOS. Because of that, I'm not sure whether it should be in devenv.
There's no need for that on NixOS: as long as you use a nix-built pytorch, …
Indeed. However, most people want to use pytorch from poetry (having it be part of …). The best of both worlds might be to use …. It avoids having the need for …. It does have its own downside in that it probably will not work very nicely with poetry cli commands like ….
Dunno, I haven't seen these people 😆
This is not exactly correct. Most other systems do indeed merge all libraries into one location. But the reason their pytorch manages to discover e.g. …
I use torch from poetry most of the time because cuda libs are on PyPI now, and the fewer package managers the better.
Also, because ML packages update so fast, it's not viable to do this with nix system packages.
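For a pip/poetry-built pytorch, the usual workaround under discussion is to put the driver and runtime libraries on `LD_LIBRARY_PATH` from the devenv side. A hedged sketch, Linux only, with an illustrative library selection:

```nix
# Make NVIDIA driver and toolchain libraries visible to pre-built
# wheels that dlopen them at runtime (e.g. libcuda.so, libstdc++.so).
env.LD_LIBRARY_PATH = lib.mkIf pkgs.stdenv.isLinux (
  lib.makeLibraryPath [
    pkgs.linuxPackages.nvidia_x11
    pkgs.gcc-unwrapped.lib
  ]
);
```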
Could we somehow detect if the opengl stuff is wired up and error out with a nice message about what to do?
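One hedged possibility: check at shell entry, assuming NixOS's convention of exposing driver libraries under `/run/opengl-driver/lib` (the path and message are illustrative):

```nix
# Warn when no driver-provided libcuda.so is visible; userspace
# CUDA libraries are useless without the matching kernel driver.
enterShell = lib.optionalString pkgs.stdenv.isLinux ''
  if [ ! -e /run/opengl-driver/lib/libcuda.so ]; then
    echo "warning: libcuda.so not found in /run/opengl-driver/lib;" >&2
    echo "on NixOS, enable hardware.opengl and the NVIDIA driver." >&2
  fi
'';
```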
enable = lib.mkEnableOption "CUDA toolkit";

package = lib.mkOption {
  type = lib.types.package;
Sometimes CUDA can't come from Nix, so we'll have to allow a way to set up an FHS environment in those cases.
It's tricky to get right (and won't work on macOS), but it's often required.
Sometimes CUDA can't come from Nix,
Interesting. Any specific examples in mind?
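For a vendor-installed CUDA, an FHS wrapper is the usual escape hatch. A sketch using nixpkgs' `buildFHSUserEnv` (Linux only; the name and package selection here are illustrative, not part of this PR):

```nix
# A shell whose FHS-style /usr/lib layout lets pre-built
# CUDA binaries find libraries at their expected paths.
pkgs.buildFHSUserEnv {
  name = "cuda-fhs";
  targetPkgs = p: [
    p.cudatoolkit
    p.linuxPackages.nvidia_x11
  ];
  runScript = "bash";
}
```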
package = lib.mkOption {
  type = lib.types.package;
  description = "Which package of cuda toolkit to use.";
default = pkgs.cudatoolkit; |
Note: this attribute is almost unmaintained; it's better to use the splayed packages.
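A sketch of defaulting to the splayed set instead, assuming nixpkgs' `cudaPackages` attribute set:

```nix
package = lib.mkOption {
  type = lib.types.package;
  description = "Which CUDA toolkit package to use.";
  # Prefer the maintained, splayed cudaPackages set over the
  # monolithic pkgs.cudatoolkit attribute.
  default = pkgs.cudaPackages.cudatoolkit;
};
```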
env.LD_LIBRARY_PATH = lib.mkIf pkgs.stdenv.isLinux (
  lib.makeLibraryPath [
    pkgs.gcc-unwrapped.lib
pkgs.linuxPackages.nvidia_x11 |
Hm. How is this to be synchronized with config.boot.kernelPackages?
Hey, any news on this PR? It would be a great help to allow cuda to be installed with devenv.
This PR is a draft. It's only here as an initial attempt and to gather feedback / centralize the discussion on the implementation. I don't feel this is ready for anyone to adopt in its current state. Feel free to create your own and I'll close this one.

I don't do much CUDA on Linux anymore and am not involved with the development of CUDA on Nix. I'm not the right person to test this properly (no access to a macOS ARM machine either) 😅 I gather there are people more qualified to say something useful about this integration.
Thanks for your reply @bobvanderlinden |
As discussed on Discord, this configuration is needed to run pytorch in devenv on Linux. It was confirmed to work.
I don't have much knowledge of CUDA itself, so I'm unsure what exactly other libraries need. I did find that CUDA_HOME and CUDA_PATH are used by tensorflow.
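Those variables could be wired up alongside the library path. A sketch, assuming the module exposes the toolkit via a `cuda.package` option (the option name is illustrative; which frameworks consult which variable varies):

```nix
# Point frameworks that locate the toolkit via environment
# variables (e.g. tensorflow) at the Nix-provided toolkit.
env.CUDA_HOME = "${config.cuda.package}";
env.CUDA_PATH = "${config.cuda.package}";
```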
Confirmations that this works for real projects are welcome!