Torchscript support and OpenMM and LAMMPS #2
The OpenMM interface is currently tested here:
In order to close this issue, we need to add to the README:
@davkovacs @wcwitt @jharrymoore Anything more?
Should we also add the ability to create a LAMMPS potential directly from
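For context, using a MACE model from LAMMPS would presumably go through a pair style along these lines. This is a hypothetical sketch only: the actual pair style name, model file name, and element mapping depend on the final implementation discussed in this thread.

```
# Hypothetical LAMMPS input fragment for a MACE potential.
# "mace" as a pair style name and "my_model.pt" are assumptions.
pair_style      mace
pair_coeff      * * my_model.pt C H
```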
Is this ready to try? I'd be keen to see an example.
@nickhine which one would you like to try? OpenMM or LAMMPS?
I was interested in the LAMMPS version, applied to a periodic system - particularly if the performance for small/medium sized systems is respectable on CPU.
Hi @nickhine, can you email me at [email protected]? I'll send you an example and can help with any problems.
Hi @wcwitt, I recently utilized MACE in my project (using a cutoff of 7 angstrom and 2 layers for bilayer graphene with defects) and was impressed with its high accuracy (energy RMSE = 0.3 meV/atom, force RMSE = 20 meV/A). However, I noticed that the LAMMPS interface only supports CPU without MPI, resulting in a limited speed of 30 atom-steps/second for a 1000-atom system with 48 cores. I have followed the instructions in the manual to perform the MD simulations. May I kindly inquire if there is a GPU implementation available now, as I would be very interested in trying it out? Otherwise, do you have any suggestions on improving the running speed for MD simulations?
Hi @hityingph, Glad you are finding MACE useful, and thanks for the interest in the LAMMPS interface!
This is not strictly true, but I agree that for 1000 atoms you are unlikely (at present) to get much speedup from MPI parallelization.
We do have a LAMMPS GPU version, but we are still testing it internally. You can expect a public version very soon - unfortunately, I'd suggest waiting for that.
Hi @hityingph, Thank you for using MACE! While waiting for the GPU interface for LAMMPS, you might be able to run some faster MD with OpenMM (see https://mace-docs.readthedocs.io/en/latest/guide/openmm.html), which runs on GPU.
@wcwitt Thanks for your prompt response, and I look forward to the GPU version! Hi @ilyes319 , thank you for recommending the OpenMM interface. I'm not familiar with OpenMM, and I'm curious if it has some features similar to those in LAMMPS that I need (such as the "fix spring" function I used for sliding simulations). Nevertheless, I will give it a try to get a general understanding of the running speed of MACE using a GPU. |
I am closing this issue now as all this is officially supported. One can find details of these features in the User guide of the documentation. |
Being able to compile MACE models is a high priority:
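As a rough illustration of what "compiling" means in the TorchScript context: `torch.jit.script` serializes a PyTorch model so it can later be loaded and executed from C++, which is how LAMMPS and OpenMM plugins typically consume a model. A minimal sketch, assuming only PyTorch; the `PairEnergy` module below is a toy stand-in, not MACE itself.

```python
import torch

class PairEnergy(torch.nn.Module):
    """Toy pairwise energy (Lennard-Jones-like), standing in for a real model."""
    def forward(self, r: torch.Tensor) -> torch.Tensor:
        # Sum of (r^-12 - r^-6) over all distances in r.
        return (r.pow(-12) - r.pow(-6)).sum()

# Compile the module to TorchScript and serialize it.
scripted = torch.jit.script(PairEnergy())
scripted.save("pair_energy.pt")

# The saved file can be reloaded in Python, or from C++ via torch::jit::load.
reloaded = torch.jit.load("pair_energy.pt")
energy = reloaded(torch.tensor([1.0, 2.0]))
```

The same mechanism is what makes a TorchScript-compiled model usable from a C++ MD engine without a Python interpreter in the loop.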