
Distributed ConstantFESpace #153

Open
janmodderman opened this issue Jul 31, 2024 · 6 comments

@janmodderman

Hi all,

I've been trying to use a ConstantFESpace with the GridapDistributed.jl package, but within the definition of the reference space for the ConstantFESpace I get an error: the function setup_cell_reffe expects a ::Gridap.Geometry.DiscreteModel and cannot handle a ::GridapDistributed.GenericDistributedDiscreteModel. I attached the error I obtained below:

[screenshot of the error attached]

I have a reproduction file, but I cannot attach it here.

@oriolcg @JordiManyer

Best regards,

Jan

@amartinhuertas
Member

Hi @janmodderman ... as far as I know, this space is not actually supported in a parallel distributed-memory setting.

You should use an FE space with the :zeromean constraint instead (assuming you are willing to impose that the mean value of the discrete pressure is zero).
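For reference, what the :zeromean constraint does algebraically can be sketched on a tiny singular system. This is a plain NumPy stand-in (not Gridap code): a graph-Laplacian-like operator whose nullspace is the constants, playing the role of the pure-Neumann pressure problem, with the zero-mean condition imposed through a single Lagrange multiplier:

```python
import numpy as np

# Singular "pressure-like" operator: nullspace = constant vectors
A = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
f = np.array([1.0, 0.0, -1.0])   # compatible RHS: entries sum to zero

n = len(f)
e = np.ones(n)

# Bordered system imposing mean(u) = 0 via one Lagrange multiplier
B = np.block([[A, e[:, None]],
              [e[None, :], np.zeros((1, 1))]])
sol = np.linalg.solve(B, np.concatenate([f, [0.0]]))
u = sol[:n]

assert abs(u.mean()) < 1e-12     # the zero-mean constraint holds
assert np.allclose(A @ u, f)     # and u still solves the original system
```

Because the right-hand side is compatible (orthogonal to the constants), the multiplier comes out as zero and u is simply the zero-mean solution of the singular system.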

@janmodderman
Author

I'm actually interested in a rigid object defined only on the domain boundary, describing all nodes on this boundary with 1 DOF, so I'm not sure the :zeromean constraint will do here 🤔

@amartinhuertas
Member

Ok, I see. We typically use ConstantFESpace to implement Lagrange multipliers; these can be used, e.g., to impose the mean value of the pressure. But here it seems that you are using it for a different purpose?

@janmodderman
Author

Yes, we use the Lagrange multipliers to constrain the nodal DOFs on the object boundary so that they reduce to a rigid body motion (currently up to a maximum of 6 DOFs: 3 translational + 3 rotational).

@JordiManyer
Member

JordiManyer commented Jul 31, 2024

Just to add a comment (maybe @amartinhuertas can correct me):
I believe this is deliberately NOT implemented in distributed. The implementation itself is not the issue here; it is actually quite straightforward.
The issue is that something like what we have in Gridap is never going to be scalable, since the dof would have to tie all processors together (thereby growing the number of neighbors of the processor owning the dof).
Is this correct?

I don't know if there is a workaround that would make this scalable.

@amartinhuertas
Member

amartinhuertas commented Jul 31, 2024

Is this correct?

Yes. Essentially (in the simplest case), you have a new full (dense) row and a new full (dense) column appended to the original matrix (i.e., the one without the constraint).

The new full row has to be owned by one of the processors (as any other row in the matrix).

As a consequence, e.g., when doing matrix-vector products, the processor that owns the full row has to send a point-to-point message with a single entry to every other processor, i.e., a sort of broadcast operation implemented in terms of point-to-point communication. Something similar happens during assembly, but the other way around: all processors send their local contributions to that row to the processor owning the full row. It would be more scalable to use broadcast and gather, respectively, but I think this implies an important refactoring of the assembly process.
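The bordered structure described above can be sketched serially with a small dense example (plain NumPy, no MPI; the sizes and operator are hypothetical). The block-wise mat-vec makes the communication pattern visible: every block-row of the top part needs the single multiplier entry (the "broadcast"), and the last row needs the reduction c·u (the "gather"):

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)      # SPD stand-in for the unconstrained operator
c = np.ones(n)                   # dense constraint column (e.g. mean-value)

# Bordered matrix: one extra dense row and one extra dense column
B = np.zeros((n + 1, n + 1))
B[:n, :n] = A
B[:n, n] = c
B[n, :n] = c

x = rng.standard_normal(n + 1)
u, lam = x[:n], x[n]

# Block-wise mat-vec, mimicking the parallel pattern:
#  - every block-row needs lam (one scalar from the row's owner: broadcast)
#  - the owner of the last row needs sum(c * u) (a gather/reduction)
top = A @ u + c * lam
bottom = c @ u
y = np.concatenate([top, [bottom]])

assert np.allclose(y, B @ x)     # block-wise product matches the full product
```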

I don't know if there is a workaround that would make this scalable.

All this assumes that we are willing to assemble the matrix with the full row and full column. An alternative to consider is whether it might be beneficial to condense the constraint in a clever way (e.g., as in Shephard's '84 paper). At the end of the day, you can think of this condensation as a triple matrix product, C^T A C, where A is the matrix without constraints and C is a matrix built out of the constraints. Indeed, the MFEM library pursues this approach for hanging-node constraints (see https://epubs.siam.org/doi/10.1137/18M1193992). It might also be useful to see how other libraries solve this, e.g., FEniCS, Firedrake, etc.
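The C^T A C condensation can be illustrated on a toy system (NumPy sketch with made-up numbers, not library code). Here two "boundary" dofs are tied to a single reduced dof, mimicking the rigid-body constraint from this thread; the reduced system is solved and the full solution recovered through C:

```python
import numpy as np

# Full system with 3 dofs; constrain u0 == u1 (one shared "rigid" dof)
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
f = np.array([1.0, 2.0, 3.0])

# C maps reduced dofs [u_rigid, u2] to full dofs [u0, u1, u2]
C = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

Ar = C.T @ A @ C                 # condensed (reduced) operator
fr = C.T @ f                     # condensed right-hand side
ur = np.linalg.solve(Ar, fr)
u = C @ ur                       # recover the full vector

assert abs(u[0] - u[1]) < 1e-12              # constraint satisfied exactly
assert np.allclose(C.T @ (A @ u - f), 0.0)   # residual orthogonal to range(C)
```

Compared with the bordered approach, the condensed matrix stays square and SPD (if A is) and introduces no extra dense row/column, at the cost of assembling or applying the triple product.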
