update foundations with develop #299

Merged
merged 106 commits
Jan 17, 2024
955fb75
Update Installation
ilyes319 Jul 11, 2023
7543f16
Merge pull request #129 from ACEsuit/ilyes319-patch-1
ilyes319 Jul 11, 2023
42659c3
Update hidden irreps default
ilyes319 Aug 17, 2023
12a6e47
auto-format readme
janosh Oct 11, 2023
0f53ef7
add readme section 'Pretrained Universal MACE Checkpoints'
janosh Oct 11, 2023
6e24ccb
add subsection
ilyes319 Oct 11, 2023
6594a5e
Merge pull request #188 from janosh/mbd-huggingface-doc-links
ilyes319 Oct 11, 2023
bb2295b
fix table of content readme
ilyes319 Oct 11, 2023
ff054d5
Merge pull request #190 from ACEsuit/ilyes319-patch-1
ilyes319 Oct 11, 2023
29d3266
add deprecated model_path arg
davkovacs Nov 2, 2023
deeff35
Merge pull request #209 from ACEsuit/ase_model_path
ilyes319 Nov 2, 2023
3ac2b23
add better error string for no swa checkpoint found
ilyes319 Nov 9, 2023
0f8716c
Merge remote-tracking branch 'origin/main' into develop
ilyes319 Nov 9, 2023
d42e185
add new tutorials link
davkovacs Nov 9, 2023
d9526e4
Merge pull request #142 from ACEsuit/develop
ilyes319 Nov 9, 2023
1abf46f
Update __version__.py
ilyes319 Nov 13, 2023
0f4e241
fix active learning script
davkovacs Nov 13, 2023
dbd74da
extend ase calc committee test
davkovacs Nov 13, 2023
9699b76
Merge pull request #221 from davkovacs/main
ilyes319 Nov 14, 2023
d62758e
Update README.md
ilyes319 Nov 15, 2023
8b5e862
Merge pull request #226 from ACEsuit/ilyes319-patch-3
ilyes319 Nov 15, 2023
91916c1
fix default dtype errors in calculator
ilyes319 Nov 15, 2023
531afc2
fix mace_mp: doesn't allow overriding default_dtype = "float32"
janosh Nov 16, 2023
acc9356
auto-fix typos
janosh Nov 16, 2023
3adeac5
add on-the-fly checkpount download and caching in mace_mp
janosh Nov 16, 2023
d2e275f
add **kwargs to mace_mp function
janosh Nov 16, 2023
91861e3
fix model_path=None case
janosh Nov 16, 2023
9d42e99
auto-pick torch.device
janosh Nov 16, 2023
ff7a423
use torch map_location when loading model in MACECalculator
janosh Nov 16, 2023
ed2c3ba
Update mace.py
ilyes319 Nov 17, 2023
e2bafa8
improve the dtype selection
ilyes319 Nov 17, 2023
bff4115
recover dtype in mace_mp
ilyes319 Nov 17, 2023
c75ab75
if download fails, fall back to foundations_models/2023-08-14-mace-un…
janosh Nov 17, 2023
fdcecb3
load repo MP checkpoint first if exists, then try figshare download, …
janosh Nov 17, 2023
38d1f7e
rename model_path to model
janosh Nov 17, 2023
857700e
os.path.exists->isfile
janosh Nov 17, 2023
edbabc0
Update foundations_models.py
ilyes319 Nov 17, 2023
4ac1873
Merge pull request #230 from janosh/fix-hardcoded-mace-mp-dtype
ilyes319 Nov 17, 2023
ed99fc0
Merge pull request #231 from ACEsuit/develop
ilyes319 Nov 17, 2023
bab23cd
add back map_location is ase calculator
ilyes319 Nov 18, 2023
ac07e3f
Merge pull request #233 from ACEsuit/develop
ilyes319 Nov 18, 2023
7588a2b
add cpu as default map location for robustness
ilyes319 Nov 18, 2023
602da18
Merge pull request #234 from ACEsuit/develop
ilyes319 Nov 20, 2023
ea8f37c
revert back the map location due to further weakness
ilyes319 Nov 20, 2023
d8416e5
add D3 option to mace mp
ilyes319 Nov 20, 2023
e22f2aa
fix typos
janosh Nov 20, 2023
67da6f1
add Yuan Chiang to mace_mp() please cite list
janosh Nov 20, 2023
f37b038
fix deprecation warning from default mace_mp() behavior: model_path a…
janosh Nov 20, 2023
facb5ff
Merge pull request #235 from janosh/fix-mace-mp-deprecation-warning
ilyes319 Nov 20, 2023
a1c2804
fix typo models_paths->model_paths
janosh Nov 20, 2023
2a88218
add test_mace_mp()
janosh Nov 20, 2023
2f8a761
Merge pull request #236 from janosh/test-mace-mp
ilyes319 Nov 20, 2023
c6a242e
fix mace_mp download
janosh Nov 28, 2023
536fc08
Merge pull request #239 from janosh/fix-uni-mace-download
ilyes319 Nov 28, 2023
c2abff8
test_mace_mp() check for expected stdout
janosh Nov 28, 2023
6a909c9
Merge pull request #240 from janosh/improve-mace-mp-test
ilyes319 Nov 30, 2023
77a57df
fix test and add correlation at each layer
ilyes319 Nov 30, 2023
b0d7854
Update arg_parser.py
ilyes319 Nov 30, 2023
bcab2c9
Merge pull request #241 from ACEsuit/develop
ilyes319 Nov 30, 2023
065a9f5
Merge pull request #242 from ACEsuit/develop
ilyes319 Dec 1, 2023
4a4340c
change cutoff of dispersion from 95 Bohr to 40 Bohr
ilyes319 Dec 1, 2023
e970da6
update the mace_mp model to new checkpoints trained with stress
ilyes319 Dec 4, 2023
8a820a5
update comments
ilyes319 Dec 4, 2023
aca120b
Merge pull request #243 from ACEsuit/develop
ilyes319 Dec 4, 2023
d551e55
Update torch_tools.py
sivonxay Dec 5, 2023
1386467
Merge branch 'ACEsuit:main' into sivonxay-patch-1
sivonxay Dec 5, 2023
2ab8ed2
Merge pull request #245 from sivonxay/sivonxay-patch-1
ilyes319 Dec 5, 2023
c6ba5eb
Merge pull request #246 from ACEsuit/develop
ilyes319 Dec 5, 2023
c949b54
Update __version__.py
ilyes319 Dec 6, 2023
1488320
Merge pull request #250 from ACEsuit/develop
ilyes319 Dec 6, 2023
e2fc4bb
Update __version__.py
ilyes319 Dec 6, 2023
022d09b
Merge pull request #251 from ACEsuit/develop
ilyes319 Dec 6, 2023
fa79343
Update to 0.3.2 to overwrite previous tag
ilyes319 Dec 6, 2023
5debea4
Merge pull request #252 from ACEsuit/develop
ilyes319 Dec 6, 2023
42b7ddc
Update foundations_models.py
ilyes319 Dec 11, 2023
dc5d745
Merge pull request #257 from ACEsuit/develop
ilyes319 Dec 11, 2023
2cc2f64
remove "=" from print for old python compatibility
ilyes319 Dec 11, 2023
fba3725
Merge pull request #263 from ACEsuit/develop
ilyes319 Dec 15, 2023
1448637
update the small model to energy model
ilyes319 Dec 15, 2023
68f1bd4
Update README.md, PyTorch 2.1 supported
ilyes319 Jan 2, 2024
bc575ec
Update README.md, PyTorch 2.1 supported
ilyes319 Jan 2, 2024
a079181
add mace_off
davkovacs Jan 4, 2024
673b974
fix download links
davkovacs Jan 5, 2024
8ec5a20
update redame
davkovacs Jan 5, 2024
038dbeb
add option to load raw mace_off
davkovacs Jan 5, 2024
ce169bb
tidy up raw model and fix docstring
davkovacs Jan 5, 2024
2fe611a
add test and clean-up
davkovacs Jan 5, 2024
d9028b0
small clean up
ilyes319 Jan 5, 2024
16bca5e
Merge pull request #275 from davkovacs/mace_off
ilyes319 Jan 5, 2024
94bdc27
Merge pull request #277 from ACEsuit/develop
ilyes319 Jan 5, 2024
05e407e
pypi tests
ilyes319 Jan 8, 2024
58e0b06
Merge pull request #281 from ACEsuit/pypi-test
ilyes319 Jan 8, 2024
e8dc16c
Merge pull request #282 from ACEsuit/develop
ilyes319 Jan 8, 2024
62ca55d
add short description and url to setup.cfg
ilyes319 Jan 8, 2024
a279a14
update installation with PyPI
ilyes319 Jan 8, 2024
fc3c75c
Merge pull request #283 from ACEsuit/develop
ilyes319 Jan 8, 2024
5c7155c
update license info of mace-off
davkovacs Jan 8, 2024
87d8749
Merge pull request #284 from davkovacs/mace_off
ilyes319 Jan 8, 2024
9490653
add ?confirm=yTib query param to gdrive mace_mp checkpoint URLs to av…
janosh Jan 11, 2024
4dc1061
add link to github release
ilyes319 Jan 11, 2024
a93aa29
Merge pull request #290 from janosh/develop
ilyes319 Jan 11, 2024
442a02e
update the readme of foundation models
ilyes319 Jan 12, 2024
8eba737
update readme toc
ilyes319 Jan 12, 2024
7532bfb
edit readme example
ilyes319 Jan 12, 2024
6bdcd5e
add (9,) and (6,) format for stress and virials
ilyes319 Jan 15, 2024
9395221
update version to 0.3.4
ilyes319 Jan 16, 2024
3 changes: 3 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
Expand Up @@ -18,3 +18,6 @@ build/
.vscode/
*.txt
*.log

# Distribution
dist/
128 changes: 94 additions & 34 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,51 +1,70 @@
<span style="font-size:larger;">MACE</span>
========
# <span style="font-size:larger;">MACE</span>

[![GitHub release](https://img.shields.io/github/release/ACEsuit/mace.svg)](https://GitHub.com/ACEsuit/mace/releases/)
[![Paper](https://img.shields.io/badge/Paper-NeurIPs2022-blue)](https://openreview.net/forum?id=YPpSngE-ZU)
[![License](https://img.shields.io/badge/License-MIT%202.0-blue.svg)](https://opensource.org/licenses/mit)
[![GitHub issues](https://img.shields.io/github/issues/ACEsuit/mace.svg)](https://GitHub.com/ACEsuit/mace/issues/)
[![Documentation Status](https://readthedocs.org/projects/mace/badge/)](https://mace-docs.readthedocs.io/en/latest/)

# Table of contents
## Table of contents

- [About MACE](#about-mace)
- [Documentation](#documentation)
- [Installation](#installation)
- [Usage](#usage)
- [Training](#training)
- [Evaluation](#evaluation)
- [Training](#training)
- [Evaluation](#evaluation)
- [Tutorial](#tutorial)
- [Weights and Biases](#weights-and-biases-for-experiment-tracking)
- [Development](#development)
- [Pretrained foundation models](#pretrained-foundation-models)
- [MACE-MP: Materials Project Force Fields](#mace-mp-materials-project-force-fields)
- [MACE-OFF: Transferable Organic Force Fields](#mace-off-transferable-organic-force-fields)
- [References](#references)
- [Contact](#contact)
- [License](#license)

## About MACE

## About MACE
MACE provides fast and accurate machine learning interatomic potentials with higher order equivariant message passing.

This repository contains the MACE reference implementation developed by
Ilyes Batatia, Gregor Simm, and David Kovacs.

Also available:
* [MACE in JAX](https://github.com/ACEsuit/mace-jax), currently about 2x times faster at evaluation, but training is recommended in Pytorch for optimal performances.
* [MACE layers](https://github.com/ACEsuit/mace-layer) for constructing higher order equivariant graph neural networks for arbitrary 3D point clouds.
Also available:

- [MACE in JAX](https://github.com/ACEsuit/mace-jax), currently about 2x faster at evaluation, though training in PyTorch is recommended for optimal performance.
- [MACE layers](https://github.com/ACEsuit/mace-layer) for constructing higher-order equivariant graph neural networks for arbitrary 3D point clouds.

## Documentation

A partial documentation is available at: https://mace-docs.readthedocs.io/en/latest/
Partial documentation is available at: https://mace-docs.readthedocs.io

## Installation

Requirements:
* Python >= 3.7
* [PyTorch](https://pytorch.org/) >= 1.12

- Python >= 3.7
- [PyTorch](https://pytorch.org/) >= 1.12

(for OpenMM, use Python = 3.9)

### pip installation

To install via `pip`, follow the steps below:

```sh
pip install --upgrade pip
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install mace-torch
```

For CPU or MPS (Apple Silicon) installation, use `pip install torch torchvision torchaudio` instead.

### conda installation

If you do not have CUDA pre-installed, it is **recommended** to follow the conda installation process:

```sh
# Create a virtual environment and activate it
conda create --name mace_env
Expand All @@ -58,13 +77,14 @@ conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvi
conda install numpy scipy matplotlib ase opt_einsum prettytable pandas e3nn

# Clone and install MACE (and all required packages)
git clone git@github.com:ACEsuit/mace.git
git clone https://github.com/ACEsuit/mace.git
pip install ./mace
```

### pip installation
### pip installation from source

To install via `pip`, follow the steps below:

```sh
# Create a virtual environment and activate it
python -m venv mace-venv
Expand All @@ -74,15 +94,15 @@ source mace-venv/bin/activate
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116

# Clone and install MACE (and all required packages)
git clone git@github.com:ACEsuit/mace.git
git clone https://github.com/ACEsuit/mace.git
pip install ./mace
```

**Note:** The homonymous package on [PyPI](https://pypi.org/project/MACE/) has nothing to do with this one.

## Usage

### Training
### Training

To train a MACE model, you can use the `mace_run_train` script, which should be in the usual place that pip places binaries (or you can explicitly run `python3 <path_to_cloned_dir>/mace/cli/run_train.py`)

Expand All @@ -108,21 +128,21 @@ mace_run_train \
--device=cuda \
```

To give a specific validation set, use the argument `--valid_file`. To set a larger batch size for evaluating the validation set, specify `--valid_batch_size`.
To give a specific validation set, use the argument `--valid_file`. To set a larger batch size for evaluating the validation set, specify `--valid_batch_size`.

To control the model's size, you need to change `--hidden_irreps`. For most applications, the recommended default model size is `--hidden_irreps='256x0e'` (meaning 256 invariant messages) or `--hidden_irreps='128x0e + 128x1o'`. If the model is not accurate enough, you can include higher order features, e.g., `128x0e + 128x1o + 128x2e`, or increase the number of channels to `256`.
To control the model's size, you need to change `--hidden_irreps`. For most applications, the recommended default model size is `--hidden_irreps='256x0e'` (meaning 256 invariant messages) or `--hidden_irreps='128x0e + 128x1o'`. If the model is not accurate enough, you can include higher order features, e.g., `128x0e + 128x1o + 128x2e`, or increase the number of channels to `256`.

It is usually preferred to add the isolated atoms to the training set, rather than reading in their energies through the command line like in the example above. To label them in the training set, set `config_type=IsolatedAtom` in their info fields. If you prefer not to use or do not know the energies of the isolated atoms, you can use the option `--E0s="average"` which estimates the atomic energies using least squares regression.
It is usually preferred to add the isolated atoms to the training set, rather than reading in their energies through the command line like in the example above. To label them in the training set, set `config_type=IsolatedAtom` in their info fields. If you prefer not to use or do not know the energies of the isolated atoms, you can use the option `--E0s="average"` which estimates the atomic energies using least squares regression.
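
As an illustrative sketch (the lattice and energy values here are made up, not reference data), an isolated-atom entry labelled via the `config_type=IsolatedAtom` info field in an extxyz training file could look like:

```text
1
Lattice="20.0 0.0 0.0 0.0 20.0 0.0 0.0 0.0 20.0" Properties=species:S:1:pos:R:3 config_type=IsolatedAtom energy=-13.6
H 0.0 0.0 0.0
```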

If the keyword `--swa` is enabled, the energy weight of the loss is increased for the last ~20% of the training epochs (from `--start_swa` epochs). This setting usually helps lower the energy errors.
If the keyword `--swa` is enabled, the energy weight of the loss is increased for the last ~20% of the training epochs (from `--start_swa` epochs). This setting usually helps lower the energy errors.

The precision can be changed using the keyword ``--default_dtype``, the default is `float64` but `float32` gives a significant speed-up (usually a factor of x2 in training).
The precision can be changed using the keyword `--default_dtype`, the default is `float64` but `float32` gives a significant speed-up (usually a factor of x2 in training).

The keywords ``--batch_size`` and ``--max_num_epochs`` should be adapted based on the size of the training set. The batch size should be increased when the number of training data increases, and the number of epochs should be decreased. An heuristic for initial settings, is to consider the number of gradient update constant to 200 000, which can be computed as $\text{max-num-epochs}*\frac{\text{num-configs-training}}{\text{batch-size}}$.
The keywords `--batch_size` and `--max_num_epochs` should be adapted based on the size of the training set. The batch size should be increased as the amount of training data grows, and the number of epochs decreased. A heuristic for initial settings is to keep the total number of gradient updates roughly constant at 200,000, which can be computed as $\text{max-num-epochs}*\frac{\text{num-configs-training}}{\text{batch-size}}$.
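
As a back-of-the-envelope sketch (the function name and its defaults are our own, not part of the MACE CLI), the heuristic above can be written as:

```python
def suggest_max_num_epochs(num_configs: int, batch_size: int,
                           target_updates: int = 200_000) -> int:
    """Suggest --max_num_epochs so that the total number of gradient
    updates (epochs * num_configs / batch_size) stays near target_updates."""
    return max(1, round(target_updates * batch_size / num_configs))

# A 10k-configuration set with batch size 10 suggests many more epochs
# than a 1M-configuration set with batch size 128.
print(suggest_max_num_epochs(10_000, 10))
print(suggest_max_num_epochs(1_000_000, 128))
```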

The code can handle training set with heterogeneous labels, for example containing both bulk structures with stress and isolated molecules. In this example, to make the code ignore stress on molecules, append to your molecules configuration a ``config_stress_weight = 0.0``.
The code can handle training sets with heterogeneous labels, for example containing both bulk structures with stress and isolated molecules. In this example, to make the code ignore stress on molecules, add `config_stress_weight = 0.0` to the info fields of your molecule configurations.
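
A sketch of what this looks like in an extxyz file (the geometry and energy are illustrative): the per-configuration weight sits in the info/comment line alongside the other fields:

```text
3
Properties=species:S:1:pos:R:3 energy=-76.4 config_stress_weight=0.0
O 0.000 0.000 0.119
H 0.000 0.763 -0.477
H 0.000 -0.763 -0.477
```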

To use Apple Silicon GPU acceleration make sure to install the latest PyTorch version and specify ``--device=mps``.
To use Apple Silicon GPU acceleration, make sure to install the latest PyTorch version and specify `--device=mps`.

### Evaluation

Expand All @@ -137,22 +157,62 @@ mace_eval_configs \

## Tutorial

You can run our [Colab tutorial](https://colab.research.google.com/drive/1D6EtMUjQPey_GkuxUAbPgld6_9ibIa-V?authuser=1#scrollTo=Z10787RE1N8T) to quickly get started with MACE.
You can run our [Colab tutorial](https://colab.research.google.com/drive/1D6EtMUjQPey_GkuxUAbPgld6_9ibIa-V?authuser=1#scrollTo=Z10787RE1N8T) to quickly get started with MACE.

We also have a more detailed user and developer tutorial at https://github.com/ilyes319/mace-tutorials

## Weights and Biases for experiment tracking

If you would like to use MACE with Weights and Biases to log your experiments simply install with
If you would like to use MACE with Weights and Biases to log your experiments simply install with

```sh
pip install ./mace[wandb]
```

And specify the necessary keyword arguments (`--wandb`, `--wandb_project`, `--wandb_entity`, `--wandb_name`, `--wandb_log_hypers`)


## Pretrained Foundation Models

### MACE-MP: Materials Project Force Fields

We have collaborated with the Materials Project (MP) to train a universal MACE potential covering 89 elements on 1.6 M bulk crystals in the [MPTrj dataset](https://figshare.com/articles/dataset/23713842) selected from MP relaxation trajectories.
The models are released on GitHub at https://github.com/ACEsuit/mace-mp.
If you use them, please cite [our paper](https://arxiv.org/abs/2401.00096), which also contains a large range of example applications and benchmarks.

#### Example usage in ASE
```py
from mace.calculators import mace_mp
from ase import build

atoms = build.molecule('H2O')
calc = mace_mp(model="medium", dispersion=False, default_dtype="float32", device='cuda')
atoms.calc = calc
print(atoms.get_potential_energy())
```

### MACE-OFF: Transferable Organic Force Fields

There is a series (small, medium, large) of transferable organic force fields. These can be used for the simulation of organic molecules, crystals and molecular liquids, or as a starting point for fine-tuning on a new dataset. The models are released under the [ASL license](https://github.com/gabor1/ASL).
The models are released on GitHub at https://github.com/ACEsuit/mace-off.
If you use them, please cite [our paper](https://arxiv.org/abs/2312.15211), which also contains detailed benchmarks and example applications.

#### Example usage in ASE
```py
from mace.calculators import mace_off
from ase import build

atoms = build.molecule('H2O')
calc = mace_off(model="medium", device='cuda')
atoms.calc = calc
print(atoms.get_potential_energy())
```

## Development

We use `black`, `isort`, `pylint`, and `mypy`.
Run the following to format and check your code:

```sh
bash ./scripts/run_checks.sh
```
Expand All @@ -165,15 +225,15 @@ We are happy to accept pull requests under an [MIT license](https://choosealicen
## References

If you use this code, please cite our papers:

```text
@inproceedings{
Batatia2022mace,
title={{MACE}: Higher Order Equivariant Message Passing Neural Networks for Fast and Accurate Force Fields},
author={Ilyes Batatia and David Peter Kovacs and Gregor N. C. Simm and Christoph Ortner and Gabor Csanyi},
booktitle={Advances in Neural Information Processing Systems},
editor={Alice H. Oh and Alekh Agarwal and Danielle Belgrave and Kyunghyun Cho},
year={2022},
url={https://openreview.net/forum?id=YPpSngE-ZU}
@inproceedings{Batatia2022mace,
title={{MACE}: Higher Order Equivariant Message Passing Neural Networks for Fast and Accurate Force Fields},
author={Ilyes Batatia and David Peter Kovacs and Gregor N. C. Simm and Christoph Ortner and Gabor Csanyi},
booktitle={Advances in Neural Information Processing Systems},
editor={Alice H. Oh and Alekh Agarwal and Danielle Belgrave and Kyunghyun Cho},
year={2022},
url={https://openreview.net/forum?id=YPpSngE-ZU}
}

@misc{Batatia2022Design,
Expand Down
2 changes: 1 addition & 1 deletion mace/__version__.py
Original file line number Diff line number Diff line change
@@ -1 +1 @@
__version__ = "0.2.0"
__version__ = "0.3.4"
3 changes: 2 additions & 1 deletion mace/calculators/__init__.py
Original file line number Diff line number Diff line change
@@ -1,10 +1,11 @@
from .foundations_models import mace_anicc, mace_mp, mace_off
from .lammps_mace import LAMMPS_MACE
from .mace import MACECalculator
from .foundations_models import mace_mp, mace_anicc

__all__ = [
"MACECalculator",
"LAMMPS_MACE",
"mace_mp",
"mace_off",
"mace_anicc",
]