TorchScript support and OpenMM and LAMMPS #2

Closed · 9 tasks done
ilyes319 opened this issue Jun 29, 2022 · 12 comments · Fixed by #55
Labels: enhancement (New feature or request)

@ilyes319 (Contributor) commented Jun 29, 2022

Compiling MACE models is a high priority:

  • Compile the internal MACE functionalities with TorchScript.
  • Write tests for the compiled modules.
  • Resolve incompatibilities between TorchScript and AtomicData.
  • Compile the full model with TorchScript.
  • Create a deployed model with metadata (r_cut, species); see the sketch after this list.
  • Load the model in C++ using libtorch.
  • Compile the neighbor list.
  • Create an interface to OpenMM.
  • Create an interface to LAMMPS with a pair potential.
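
To make the deployment items concrete, here is a minimal sketch (an illustration, not MACE's actual code) of compiling a module with TorchScript and bundling the r_cut/species metadata into the saved archive; `ToyModel` and the `metadata.json` layout are assumptions:

```python
import json
import torch

# Toy stand-in for a MACE model: any TorchScript-compatible nn.Module
# deploys the same way.
class ToyModel(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x.sum()

scripted = torch.jit.script(ToyModel())  # compile to TorchScript

# Bundle metadata (cutoff radius, chemical species) with the archive.
metadata = {"r_cut": 5.0, "species": [1, 6, 8]}
torch.jit.save(scripted, "mace_deployed.pt",
               _extra_files={"metadata.json": json.dumps(metadata)})

# Reading it back; torch::jit::load in C++ accepts the same archive.
extra = {"metadata.json": ""}
model = torch.jit.load("mace_deployed.pt", _extra_files=extra)
print(json.loads(extra["metadata.json"]))  # {'r_cut': 5.0, 'species': [1, 6, 8]}
```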
@ilyes319 ilyes319 assigned ilyes319 and davkovacs and unassigned ilyes319 and davkovacs Jun 29, 2022
@ilyes319 ilyes319 added the enhancement New feature or request label Jul 1, 2022
ilyes319 added a commit that referenced this issue Aug 29, 2022
@davkovacs (Collaborator)

The OpenMM interface is currently being tested here:
https://github.com/davkovacs/mace/tree/openmm3

@davkovacs davkovacs added this to the TODO for next release milestone Nov 10, 2022
@ilyes319 (Contributor, Author) commented Jan 6, 2023

To close this issue, we need to add the following to the README:

  • A presentation of the new functionalities (one for OpenMM, one for LAMMPS)
  • How to use them, with links to the associated repos and their installation guides
  • A notebook example would be awesome

@davkovacs @wcwitt @jharrymoore Anything more?

@ilyes319 ilyes319 linked a pull request Jan 8, 2023 that will close this issue
@wcwitt (Collaborator) commented Jan 15, 2023

Should we also add the ability to create a LAMMPS potential directly from run_train? The procedure right now requires an extra script (roughly like the sketch below).
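
For context, a hedged sketch of what such an extra conversion step looks like: wrap the trained model so it exposes what the LAMMPS pair style needs, re-script, and save. `LAMMPSWrapper` and the file names are hypothetical, not MACE's actual implementation:

```python
import torch

# Toy stand-in for a trained model; in practice it would come from the
# run_train checkpoint, e.g. model = torch.load("MACE.model").
class ToyModel(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x.sum()

class LAMMPSWrapper(torch.nn.Module):
    """Hypothetical adapter between LAMMPS inputs and the model."""

    def __init__(self, model: torch.nn.Module):
        super().__init__()
        self.model = model

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # A real wrapper would map LAMMPS neighbor-list data onto the
        # graph format the model expects; identity pass-through here.
        return self.model(x)

scripted = torch.jit.script(LAMMPSWrapper(ToyModel()))
scripted.save("MACE-lammps.pt")  # file consumed by the LAMMPS pair style
```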

@ilyes319 ilyes319 pinned this issue Jan 16, 2023
@nickhine

Is this ready to try? I'd be keen to see an example.

@davkovacs (Collaborator)

@nickhine which one would you like to try? OpenMM or LAMMPS?
OpenMM currently works for non-periodic simulations only, but we are actively working to enable periodic / bulk simulations. LAMMPS works on CPUs, and the GPU implementation is in progress.

@nickhine

I was interested in the LAMMPS version, applied to a periodic system, particularly whether the performance for small- to medium-sized systems is respectable on CPU.

@wcwitt (Collaborator) commented Jan 17, 2023

Hi @nickhine, can you email me at [email protected]? I'll send you an example and can help with any problems.

@hityingph

Hi @wcwitt, I recently used MACE in my project (a 7 Å cutoff and 2 layers for bilayer graphene with defects) and was impressed with its high accuracy (energy RMSE = 0.3 meV/atom, force RMSE = 20 meV/Å). However, I noticed that the LAMMPS interface only supports CPU without MPI, resulting in a limited speed of 30 atom-steps/second for a 1000-atom system with 48 cores. I followed the instructions in the manual to perform the MD simulations. May I kindly inquire if there is a GPU implementation available now? I would be very interested in trying it out. Otherwise, do you have any suggestions for improving the running speed of MD simulations?

@wcwitt (Collaborator) commented Apr 10, 2023

Hi @hityingph,

Glad you are finding MACE useful, and thanks for the interest in the LAMMPS interface!

> only supports CPU without MPI

This is not strictly true, but I agree that for 1000 atoms you are unlikely (at present) to get much speedup from MPI parallelization.

> May I kindly inquire if there is a GPU implementation available now

We do have a LAMMPS GPU version, but we are still testing it internally. You can expect a public version very soon; unfortunately, I'd suggest waiting for that.

@ilyes319 (Contributor, Author)

Hi @hityingph,

Thank you for using MACE!

While waiting for the GPU interface for LAMMPS, you might be able to run faster MD with OpenMM, which runs on GPU (see https://mace-docs.readthedocs.io/en/latest/guide/openmm.html); a rough sketch of the setup is below.
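
As a hedged sketch only: this assumes the MLPotential wrapper from the openmm-ml package, and the "mace" potential name and modelPath argument are assumptions; check the linked docs for the exact entry point.

```python
from openmm import LangevinMiddleIntegrator, unit
from openmm.app import PDBFile, Simulation
from openmmml import MLPotential  # from the openmm-ml package

pdb = PDBFile("molecule.pdb")  # hypothetical input structure

# "mace" / modelPath are assumptions; see the MACE docs linked above.
potential = MLPotential("mace", modelPath="MACE.model")
system = potential.createSystem(pdb.topology)

integrator = LangevinMiddleIntegrator(300 * unit.kelvin,
                                      1.0 / unit.picosecond,
                                      1.0 * unit.femtosecond)
sim = Simulation(pdb.topology, system, integrator)
sim.context.setPositions(pdb.positions)
sim.step(1000)  # run 1000 MD steps (on GPU if a CUDA platform is available)
```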

@hityingph

@wcwitt Thanks for your prompt response, and I look forward to the GPU version!

Hi @ilyes319, thank you for recommending the OpenMM interface. I'm not familiar with OpenMM, and I'm curious whether it has features similar to those I need in LAMMPS (such as the "fix spring" command I used for sliding simulations). Nevertheless, I will give it a try to get a general sense of MACE's running speed on a GPU.

@ilyes319 (Contributor, Author)

I am closing this issue now, as all of this is officially supported. Details of these features can be found in the User Guide section of the documentation.
