I don't think there's a specific reason for it, other than that it hasn't been a priority as far as we know. Production runs are generally not going to use these builds, which favor portability and reliability over performance.
I think a PR adding separate MPI builds would be welcome, but I'd like to ensure that programs running in serial aren't affected. I'm not actually sure of the best way to do this. In the GROMACS feedstock we just build several variants and muck with the build number, which is not what the build number is meant for, but it works: https://github.com/conda-forge/gromacs-feedstock/tree/main/recipe
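For reference, conda-forge's usual approach is to declare an `mpi` variant key (including a `nompi` entry for the serial build) and give the serial variant a higher build number so the solver prefers it by default. A minimal sketch of that pattern, with illustrative values only (the actual keys and offsets would need to match this feedstock's conventions):

```yaml
# conda_build_config.yaml — declare the variants to build
mpi:
  - nompi
  - mpich
  - openmpi
```

```yaml
# meta.yaml (fragment) — hypothetical sketch, not this feedstock's actual recipe
{% set build = 0 %}
# Bump the serial (nompi) build number so it is preferred
# unless a user explicitly requests an MPI variant.
{% if mpi == 'nompi' %}
{% set build = build + 100 %}
{% set mpi_prefix = 'nompi' %}
{% else %}
{% set mpi_prefix = 'mpi_' + mpi %}
{% endif %}

build:
  number: {{ build }}
  # Encode the variant in the build string so users can pin it,
  # e.g. `conda install pkg=*=mpi_mpich_*`
  string: {{ mpi_prefix }}_h{{ PKG_HASH }}_{{ build }}

requirements:
  host:
    - {{ mpi }}  # [mpi != 'nompi']
```

This keeps the serial build as the default install while still publishing MPI-enabled variants, which is one way to address the "serial users aren't affected" concern above.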
Yeah, it's just a lack of volunteer time to implement and maintain it. Same with the CUDA stuff. AmberTools takes a while to build, so each attempt is very time-consuming when things stop working due to a change somewhere.
It seems to me that MPI is not enabled in this recipe. I'd like to know whether there is a specific reason for this, or whether I could contribute a change to enable MPI.