GMG solver pending tasks #17

Closed · 8 tasks done
amartinhuertas opened this issue Jul 29, 2022 · 9 comments
@amartinhuertas (Member) commented Jul 29, 2022

@principejavier @JordiManyer @santiagobadia, FYI, I am filling in a set of pending (short-term) tasks related to geometric multigrid as I go.

Please feel free to add tasks if you see something that might be pending, good to have, a limitation to fix, etc.

You may contribute to some of the current tasks if you are interested. If that is the case, please add your name at the end of the task. I have already added my name to one of them.

  • [Difficulty: easy] GMG! is currently a function. With composability in mind, we should design a type, say GMGLinearSolver, that extends/implements Gridap's LinearSolver interface (a hypothetical sketch follows at the end of this list). @amartinhuertas [2aadaa6]
  • [Difficulty: easy] Test the code whenever the number of tasks in the coarsest-grid level is different from one, i.e., whenever the coarsest-grid level is distributed among a subset of the processes of the global communicator. The code should be prepared to handle this case; however, I did not test it, so minor bug fixes might be needed. @principejavier [Minor fixes into generate mesh hierarchy #19]
  • [Difficulty: easy] GMG! does not work properly whenever there is a single level in the hierarchy (convergence gets stuck). Solved in 687b8a7.
  • [Difficulty: easy/medium] The code is currently written under the assumption that the number of parallel tasks in level 1, i.e., the one corresponding to the finest-grid mesh in the hierarchy, matches the number of MPI tasks in the global communicator. It would be helpful, essentially for testing purposes, to remove this constraint. For example, we could run the parallel program with, say, 4 MPI tasks and set num_parts_x_level=[2,2,2,1]. @principejavier [Minor fixes into generate mesh hierarchy #19]
  • [Difficulty: medium] Extend the GMG solver to 3D. Currently, it only works for 2D problems.
  • [Difficulty: medium/hard] Improve the numerical accuracy of the strategy underlying the change_domain_fine_to_coarse function. This strategy results in the fine-grid FE function being integrated over the coarse grid using the quadrature rule of the latter. In other words, we end up using a standard Gauss quadrature to integrate a piecewise polynomial function with reduced regularity at the interfaces of the children cells. We can improve this by decomposing the integral over each coarse-grid cell into a sum of integrals over its children cells, using the quadrature rules of the latter (see the decomposition written out after this list). To this end, we need to change the domain of the coarse test function to the fine grid, as we do with the trial function in the change_domain_coarse_to_fine function.
  • [Difficulty: hard] Support parallel distributed meshes in the current implementation of PatchBasedLinearSolver. @amartinhuertas
  • [Difficulty: easy] OctreeDistributedDiscreteModel should implement the interface of DistributedDiscreteModel, including quality-of-life methods like get_cells, ...
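
As a companion to the first item, here is a minimal, hypothetical sketch of how the existing GMG! kernel could be wrapped in a type that implements Gridap's LinearSolver interface (Gridap.Algebra). The field names and the cycle! callback are illustrative only; the actual design in the repository may differ.

    # Hypothetical sketch, not the actual implementation. The `cycle!` field is a
    # placeholder standing in for the existing GMG! kernel.
    using Gridap
    using LinearAlgebra: norm

    struct GMGLinearSolver <: Gridap.Algebra.LinearSolver
      cycle!  :: Function   # (x, A, b) -> apply one GMG V-cycle in place
      maxiter :: Int
      rtol    :: Float64
    end

    struct GMGSymbolicSetup <: Gridap.Algebra.SymbolicSetup
      solver :: GMGLinearSolver
    end

    struct GMGNumericalSetup{A} <: Gridap.Algebra.NumericalSetup
      solver :: GMGLinearSolver
      mat    :: A
    end

    Gridap.Algebra.symbolic_setup(s::GMGLinearSolver, mat::AbstractMatrix) = GMGSymbolicSetup(s)
    Gridap.Algebra.numerical_setup(ss::GMGSymbolicSetup, mat::AbstractMatrix) =
      GMGNumericalSetup(ss.solver, mat)

    function Gridap.Algebra.solve!(x::AbstractVector, ns::GMGNumericalSetup, b::AbstractVector)
      s, A = ns.solver, ns.mat
      r0 = norm(b - A * x)
      for _ in 1:s.maxiter
        s.cycle!(x, A, b)                        # one multigrid cycle
        norm(b - A * x) <= s.rtol * r0 && break  # stop on relative residual reduction
      end
      return x
    end

Regarding the change_domain_fine_to_coarse item, the proposed improvement amounts to splitting each coarse-cell integral into a sum over its children cells,

$$\int_{K} v_H\, u_h \,\mathrm{d}x \;=\; \sum_{K' \in \mathrm{children}(K)} \int_{K'} v_H\, u_h \,\mathrm{d}x ,$$

so that each child-cell quadrature rule integrates a smooth polynomial integrand, instead of a single coarse-cell rule being applied to a function that is only piecewise polynomial across the children interfaces.
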
@JordiManyer (Member) commented:

I will not assign myself any task right now, since I will be leaving on vacation for a couple of weeks. However, feel free to leave me any work you want. I am implementing the P-multigrid solver as a LinearSolver right now, so I can definitely do the first task when I come back from vacation.

@principejavier (Member) commented:

I'm working until next Wednesday and then leaving for a couple of weeks; I will try to do the second task before that.

@principejavier (Member) commented:

Everything works out of the box when using more than one coarse-level task. I ran MeshHierarchiesTests.jl, RedistributeToolsTests.jl, and GMGLinearSolversPoissonTests.jl with 2 coarse-level tasks successfully on my desktop (only changes in the insignificant digits of the final residual). Is there any other test to run, @amartinhuertas?

@principejavier (Member) commented:

Regarding the third point, only a minor fix is needed, namely the line

    if GridapP4est.i_am_in(mh.level_parts[1])

so that only the tasks actually assigned to level 1 enter the computation (otherwise the definition of the FESpace breaks). The generation of the ModelHierarchy works directly, because generate_level_parts only requires num_procs_x_level .<= MPI.Comm_size(root_comm).

Then GMGLinearSolversPoissonTests.jl runs successfully with 6 MPI tasks and the following changes:

  parts = get_part_ids(mpi,6)
  num_parts_x_level=[4,4,2,2]
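
To make the guard concrete, here is a purely illustrative sketch of how level-1 work could be protected (only GridapP4est.i_am_in and mh.level_parts come from the actual fix; the get_model accessor and the FE space setup below are placeholders):

    # Illustrative only: skip the finest-level FE space construction on tasks
    # that are not assigned to level 1 of the hierarchy.
    if GridapP4est.i_am_in(mh.level_parts[1])
      model = get_model(mh, 1)                      # placeholder accessor for the level-1 model
      reffe = ReferenceFE(lagrangian, Float64, 1)   # linear Lagrangian reference FE
      Vh    = TestFESpace(model, reffe; dirichlet_tags="boundary")
      # ... assemble and solve the level-1 problem here ...
    end
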

@amartinhuertas (Member, Author) commented:

> Everything works out of the box when using more than one coarse-level task. I ran MeshHierarchiesTests.jl, RedistributeToolsTests.jl, and GMGLinearSolversPoissonTests.jl with 2 coarse-level tasks successfully on my desktop (only changes in the insignificant digits of the final residual).

Fantastic, thanks! I guess that at the coarsest level LUSolver() gathers the distributed linear system onto the master processor, applies LU there, and scatters the result back. Obviously, in a scalable run, one would use a parallel solver such as PETSc's GAMG (see the sketch at the end of this comment).

> Is there any other test to run, @amartinhuertas?

Not that I am aware of. Perhaps we could modify some of these tests in the repo so that they are run with 2 MPI tasks at the last level?
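
As a side note on the coarsest-level solver mentioned above, here is a hedged sketch of what swapping LUSolver() for PETSc's GAMG via GridapPETSc could look like (the PETSc options are standard flags; how the resulting solver is passed to the GMG driver is an assumption, not the actual API):

    # Hypothetical sketch: use PETSc's algebraic multigrid (GAMG) at the coarsest level
    # instead of a gathered LU factorization.
    using GridapPETSc

    options = "-ksp_type cg -pc_type gamg -ksp_rtol 1.0e-8"
    GridapPETSc.with(args=split(options)) do
      coarsest_solver = PETScLinearSolver()   # in place of LUSolver()
      # ... pass `coarsest_solver` to the GMG setup for the coarsest level ...
    end
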

@amartinhuertas (Member, Author) commented:

> I will not assign myself any task right now, since I will be leaving on vacation for a couple of weeks. However, feel free to leave me any work you want. I am implementing the P-multigrid solver as a LinearSolver right now, so I can definitely do the first task when I come back from vacation.

OK, no worries. Enjoy your holidays! I have already assigned you the first task; I may solve it before you come back. In that case, I will check the box.

@amartinhuertas (Member, Author) commented:

> Then GMGLinearSolversPoissonTests.jl runs successfully with 6 MPI tasks and the following changes:

Great, thanks! Can you open a PR into the generate_mesh_hierarchy branch?

@amartinhuertas (Member, Author) commented:

> I may solve it before you come back. In that case, I will check the box.

FYI ... I solved the task

> [Difficulty: easy] GMG! is currently a function. With composability in mind, we should design a type, say GMGLinearSolver, that extends/implements Gridap's LinearSolver interface. @amartinhuertas [https://github.com/gridap/GridapP4est.jl/commit/2aadaa6775b92120d7b2f0d7fcf7756fa037ee74]

Please be aware of the following subtlety:

2aadaa6#diff-b478369422498837742b114d4c734f83605f7cd02d55d72bdd6db034ea6f9e26R249

i.e., the preconditioner versus linear solver modes of GMGLinearSolver (a brief illustration follows below).
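
For context, the difference can be stated in terms of the hypothetical sketch earlier in this thread: used as a stand-alone linear solver, GMG keeps applying V-cycles until a residual tolerance is met, whereas used as a preconditioner inside a Krylov method it typically applies a single (or fixed number of) cycle per application, starting from a zero initial guess. A purely illustrative snippet, reusing the hypothetical fields from that sketch and some placeholder vcycle! kernel:

    # Illustrative only; constructor arguments follow the hypothetical sketch above.
    gmg_as_solver = GMGLinearSolver(vcycle!, 20, 1.0e-8)  # iterate V-cycles down to a relative tolerance
    gmg_as_prec   = GMGLinearSolver(vcycle!, 1, 0.0)      # one cycle per application, preconditioner-style
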

@amartinhuertas (Member, Author) commented:

We have completed all the tasks in this issue. Closing ...
