Yu/parallelize - close #83 & changed MGP #105

Merged: 10 commits into master, Oct 28, 2019

Conversation

@YuuuXie (Collaborator) commented Oct 25, 2019

There are a few changes:

  1. Parallelized the update_L_alpha function, which required changes to gp.py and gp_algebra.py.
  2. The previous MGP kept the GP model as one of its attributes (i.e. self.GP), which was redundant; I eliminated this.
  3. The previous MGP built the spline functions when a MappedGaussianProcess object was created. I changed it to only create an empty container (i.e. a spline class object without coefficients); the coefficients can then be fitted by calling the set_values function. This change prepares for future parallelization of MGP prediction (see the sketch after this list).
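
The change in item 3 amounts to a deferred-construction pattern: build a cheap, empty spline container up front and fit its coefficients later with set_values, so that the fitting work can eventually be distributed over parallel workers. Below is a minimal sketch of that pattern; the LazySpline class and its use of scipy's CubicSpline are illustrative assumptions, not the actual MGP spline code.

```python
# Minimal sketch of the "empty container + set_values" pattern from item 3.
# LazySpline is a hypothetical stand-in, not the real MGP spline class.
import numpy as np
from scipy.interpolate import CubicSpline


class LazySpline:
    """Empty spline container; coefficients are fitted only when set_values is called."""

    def __init__(self, grid):
        self.grid = grid      # grid points are known at construction time
        self.spline = None    # no coefficients yet

    def set_values(self, values):
        # Fit the spline coefficients on demand (e.g. inside a worker process).
        self.spline = CubicSpline(self.grid, values)

    def __call__(self, x):
        if self.spline is None:
            raise RuntimeError("call set_values before evaluating the spline")
        return self.spline(x)


# Construction is cheap; fitting can be deferred and parallelized later.
container = LazySpline(np.linspace(0.0, 5.0, 20))
container.set_values(np.sin(container.grid))
print(container(2.5))
```
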

@@ -81,6 +82,51 @@ def get_forces_mgp(self, atoms):
        atoms.get_uncertainties = self.get_uncertainties
        return forces

    def get_forces_mgp_par(self, atoms):
        return self.get_forces_mgp_serial(atoms)
        # comm = MPI.COMM_WORLD
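
For reference on the review question below, here is a hedged sketch of what the MPI path hinted at by the commented-out COMM_WORLD line could look like with mpi4py. The function name get_forces_mgp_mpi, the per-atom helper predict_force_on_atom, and the rank-strided loop are illustrative assumptions, not code from this PR.

```python
# Hedged sketch, not the PR's implementation: distributing per-atom force
# predictions over MPI ranks with mpi4py's COMM_WORLD.
# predict_force_on_atom is a hypothetical callable returning a 3-vector.
from mpi4py import MPI
import numpy as np


def get_forces_mgp_mpi(n_atoms, predict_force_on_atom):
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    local = np.zeros((n_atoms, 3))
    # Each rank handles a strided subset of the atoms.
    for i in range(rank, n_atoms, size):
        local[i] = predict_force_on_atom(i)

    # Sum the partial arrays so every rank ends up with the full force array.
    forces = np.empty_like(local)
    comm.Allreduce(local, forces, op=MPI.SUM)
    return forces
```

Note that this approach only works when the script is launched with mpirun, which is the constraint discussed in the comments below.
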
Collaborator commented:

Why are all the MPI commands commented out? Maybe we can leave a note in the pull request that this is yet to be tested.

@YuuuXie (author) replied:

I think we might be able to use multiprocessing.shared_memory instead of mpi4py. I tested it, but there is some conflict between ASE's own parallel setting, the call to QE, and this MGP parallelization.
The tricky part is that with mpi4py we have to launch as mpirun -n xx python **.py. I think we must avoid this, so trying multiprocessing is safer.

BTW, the shared-memory feature of multiprocessing is new (only supported in Python 3.8), so we might need to move to that version.
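
Here is a minimal sketch of the multiprocessing.shared_memory approach mentioned above (Python 3.8+), assuming the data to share is a NumPy array. The array contents and the worker's in-place doubling are purely illustrative.

```python
# Minimal sketch: sharing a NumPy array between processes with
# multiprocessing.shared_memory (Python 3.8+). No mpirun launcher is needed.
from multiprocessing import Process, shared_memory

import numpy as np


def worker(shm_name, shape, dtype):
    # Attach to the existing shared block and view it as an array.
    shm = shared_memory.SharedMemory(name=shm_name)
    arr = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
    arr *= 2.0  # in-place work, visible to the parent process
    shm.close()


if __name__ == "__main__":
    data = np.arange(6, dtype=np.float64)
    shm = shared_memory.SharedMemory(create=True, size=data.nbytes)
    buf = np.ndarray(data.shape, dtype=data.dtype, buffer=shm.buf)
    buf[:] = data

    p = Process(target=worker, args=(shm.name, data.shape, data.dtype))
    p.start()
    p.join()

    print(buf)  # doubled values
    shm.close()
    shm.unlink()
```

Unlike the mpi4py route, this runs under a plain python invocation, which is why it sidesteps the mpirun requirement mentioned above.
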

@@ -12,7 +12,7 @@

 np.random.seed(12345)

-md_engines = ['VelocityVerlet', 'NVTBerendsen', 'NPTBerendsen', 'NPT']
+md_engines = ['VelocityVerlet'] #, 'NVTBerendsen', 'NPTBerendsen', 'NPT']
Collaborator commented:

Is NVT available?

@YuuuXie (author) replied:

It is available. otf_setup.py is not the unit-test file; the unit tests are in test_otf.py, which covers all the MD engines. I modified otf_setup.py only because I use it in the documentation (tutorial).

@YuuuXie merged commit 21504a5 into master on Oct 28, 2019