Make Madam work in a MPI environment #204
Merged
Because of issue #201, Madam files created in an MPI environment do not contain all the TODs. This PR solves the problem by properly iterating over all the MPI processes.
The PR is quite large because the task is complex: Madam requires each detector to have its data in distinct files, numbered with an increasing counter. Therefore, to make the code work, this PR implements an algorithm that walks over all the MPI processes and counts how many observations in each of them contribute to each detector.
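The counting idea can be sketched as follows. This is a hypothetical, simplified illustration, not litebird_sim's actual code: the function name `assign_file_indices` and the data layout are invented for this example, and the real implementation gathers the per-rank information through MPI communication rather than receiving it as a plain list.

```python
def assign_file_indices(obs_per_rank):
    """Assign Madam-style increasing file indices per detector.

    obs_per_rank[r] is the list of observations owned by MPI rank r,
    each observation given as the list of detector names it contains.
    Returns {rank: [per-observation {detector: file_index} dicts]}.
    """
    counters = {}  # per-detector counter, increasing globally across ranks
    result = {}
    # Walk the ranks in a fixed order so every process would compute
    # the same numbering after an allgather of this layout.
    for rank, observations in enumerate(obs_per_rank):
        result[rank] = []
        for detectors in observations:
            indices = {}
            for det in detectors:
                indices[det] = counters.get(det, 0)
                counters[det] = indices[det] + 1
            result[rank].append(indices)
    return result

# Two ranks with overlapping detector sets: det_A appears in three
# observations overall, so its files get indices 0, 1, 2.
layout = [
    [["det_A", "det_B"], ["det_A"]],  # rank 0
    [["det_A", "det_B"]],             # rank 1
]
print(assign_file_indices(layout))
# → {0: [{'det_A': 0, 'det_B': 0}, {'det_A': 1}], 1: [{'det_A': 2, 'det_B': 1}]}
```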
To make the code clearer to read, and to make `litebird_sim` easier to debug, I have added a new method to `Simulation`: `describe_mpi_distribution()`. Its purpose is to build a «map» of all the observations in every MPI process. This map is defined using the new type `MpiDistributionDescr`, which can be printed to get a visual representation of the way the TOD was split across observations and processes.

Things to do before merging this PR:
- Add tests for `MpiDistributionDescr` and all ancillary classes
- Add tests for `describe_mpi_distribution`
- Update `save_simulation_for_madam` so that it uses `describe_mpi_distribution` to properly walk over all the MPI processes
- Document `describe_mpi_distribution` and `MpiDistributionDescr` in the manual
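To illustrate the kind of «map» described above, here is a hypothetical, heavily simplified sketch of a printable per-process distribution description. The class names (`DistributionDescr`, `ProcessDescr`, `ObservationDescr`) and fields are invented for this example; the real `MpiDistributionDescr` in litebird_sim carries more information.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ObservationDescr:
    """One observation: the detectors it covers and its sample count."""
    detectors: List[str]
    num_of_samples: int


@dataclass
class ProcessDescr:
    """All observations owned by one MPI rank."""
    mpi_rank: int
    observations: List[ObservationDescr] = field(default_factory=list)


@dataclass
class DistributionDescr:
    """A printable map of how the TOD is split across MPI processes."""
    processes: List[ProcessDescr] = field(default_factory=list)

    def __str__(self):
        lines = []
        for proc in self.processes:
            lines.append(f"# MPI rank {proc.mpi_rank}")
            for i, obs in enumerate(proc.observations):
                lines.append(
                    f"  obs {i}: detectors={obs.detectors}, "
                    f"samples={obs.num_of_samples}"
                )
        return "\n".join(lines)


descr = DistributionDescr(processes=[
    ProcessDescr(0, [ObservationDescr(["det_A", "det_B"], 1000)]),
    ProcessDescr(1, [ObservationDescr(["det_A"], 1000)]),
])
print(descr)
```

Printing such an object gives a quick textual overview of which detectors and how many samples each rank holds, which is the kind of debugging aid the new method is meant to provide.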