Execution error #104
For this large mesh you'll need more memory for sure. The error occurred before the mesh is partitioned, while the code is computing connectivity.
Also, for this type of large simulation, do not use ParMETIS; use static partitioning for robustness.
-Joseph
Y. Joseph Zhang
Web: schism.wiki
Office: 804 684 7466
From: George Breyiannis
Sent: Tuesday, May 30, 2023 6:39 AM
To: schism-dev/schism
Subject: [schism-dev/schism] Execution error (Issue #104)
We are testing our HR global model with nSCHISM_hgrid_node: 11880520 & nSCHISM_hgrid_face: 14840567.
When executing we get
0: ABORT: AQUIRE_HGRID: ilnd_global allocation failure
This happens either for sanity check (ipre=1) or general run (ipre=0).
Any ideas why this could be happening?
We have tried up to 10 nodes (960 cores) on Azure HPC and it doesn't work.
Please note that this mesh uses the full-resolution GSHHS coastline and has 180491 boundaries. Could that be the reason?
Thanks @josephzhang8. Two questions:
1. Is the mesh loaded in its entirety on one node before partitioning? If so, how much memory might we need for such a big mesh? I suppose this creates a high memory demand on the master node, no?
2. Can you point me to the documentation on how to use static partitioning?
@BREYIANNIS
See below for static partitioning; you need to compile with 'NO_PARMETIS' first:
https://schism-dev.github.io/schism/master/getting-started/pre-processing.html#metis-for-offline-domain-decomposition
Before domain decomposition, all MPI processes need to read in the mesh info, so each core needs a sufficient amount of memory. For an 11M-node mesh, you may need >4 GB/core.
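For orientation, the linked docs describe roughly the following command sequence. Treat this as a sketch only: the utility paths and exact invocations may differ by version, so defer to the pre-processing documentation above.

```shell
# Offline (static) domain decomposition, roughly as in the linked docs.
# 1. Build and run the prep utility (src/Utility/Grid_Scripts/metis_prep.f90)
#    in the run directory; it writes a 'graphinfo' file for the mesh.
./metis_prep
# 2. Partition the graph for the intended number of MPI ranks, e.g. 960:
gpmetis graphinfo 960
# 3. Rebuild SCHISM with the NO_PARMETIS flag and launch as usual.
```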
@josephzhang8 Thank you! The VMs we use have 448 GB RAM and 120 cores, but we can use fewer cores, e.g. 96 per node, which would give us something like 4.5 GB/core. So this amount of RAM per core should be doable. Are the instructions the same for SCHISM 5.9? That's what we currently use.
Yes, using fewer cores per node is a good way to conserve memory.
No, static partitioning was introduced after v5.9.
-Joseph
I tried to follow the instructions for the static partitioning, but the metis preparation step (i.e. step 2 of https://schism-dev.github.io/schism/master/getting-started/pre-processing.html#metis-for-offline-domain-decomposition) is failing with a segmentation fault. The problem is that:
1. We have a global mesh with full-resolution coastlines. This translates to 180491 land boundaries, the smallest consisting of 3 nodes and the largest (Eurasia+Africa) consisting of 1181108 nodes.
2. The code tries to allocate a single 2D array for all the land boundaries, so it needs enough RAM for 8 * 180491 * 1181108 bytes = 1705 GB, and this obviously fails.
The good news is that, from what I understood, the metis preparation script does not really use the ilnd table. So if we comment out the lines referencing ilnd, the script runs and produces the graphinfo file. The relevant block of code is src/Utility/Grid_Scripts/metis_prep.f90, lines 272-290 (at commit 9d7230e):
https://github.com/schism-dev/schism/blob/9d7230eea8b688874737d0703033721e52cd1b55/src/Utility/Grid_Scripts/metis_prep.f90#L272-L290
@josephzhang8 Can you confirm that ilnd is indeed not needed for the metis preparation?
BTW, the segmentation fault happens when we first try to assign a value to ilnd (i.e. line 287). Checking stat after the allocation (line 272) would make it a bit easier to figure out what is going on.
If you want, I can make a PR to remove ilnd or add a check after the allocation. No problem if you'd rather fix it on your end, too.
All that being said, I think that our main problem remains. If I understand the code correctly (and I should mention that my Fortran knowledge is nothing to speak of), the main SCHISM code also tries to do the exact same allocation, in src/Hydro/grid_subs.F90, lines 901-902:
https://github.com/schism-dev/schism/blob/9d7230eea8b688874737d0703033721e52cd1b55/src/Hydro/grid_subs.F90#L901-L902
If this is true, then for the grid in question we do need 1705 GB per process, which unfortunately is not really feasible...
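The arithmetic can be checked in a couple of lines. This plain-Python sketch just reproduces the size of the rectangular ilnd(nland, mnond) table from the figures in this thread, assuming 8-byte elements as stated:

```python
# Size of a rectangular land-boundary table ilnd(nland, mnond),
# using the numbers reported in this thread and 8-byte elements.
nland = 180_491      # number of land boundaries in the mesh
mnond = 1_181_108    # nodes on the largest boundary (Eurasia+Africa)

size_bytes = 8 * nland * mnond
print(f"{size_bytes / 1e9:.0f} GB")  # -> 1705 GB
```

The table is rectangular, so its footprint is driven by the single longest boundary times the total number of boundaries, which is why one 1.18M-node segment dominates everything.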
Thanks @pmav99. This is indeed an extreme case.
I think splitting Eurasia and Africa into separate land boundaries would resolve the issue; 1.18M points is too many for one segment. Meanwhile, I'll try to change ilnd to a derived-type/pointer array in those programs.
-Joseph
@pmav99
It'll take me a while to implement the new land-boundary data type and test it. Meanwhile, there is a workaround you can try: as I suggested before, you can divide the large land segment into smaller pieces (say 1K nodes each) using scripts. This should significantly reduce the memory consumption. Let me know if it works.
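As an illustration of this workaround (plain Python, not a SCHISM utility): split the longest boundary's node list into chunks of at most 1000 nodes, then recompute the rectangular-table size. The assumption that no other boundary exceeds 1000 nodes is mine, for the sake of the estimate:

```python
# Illustrative sketch: split a long land boundary into pieces of at most
# `chunk` nodes, as suggested above.
def split_boundary(nodes, chunk=1000):
    return [nodes[i:i + chunk] for i in range(0, len(nodes), chunk)]

# The Eurasia+Africa boundary from this thread has 1_181_108 nodes.
pieces = split_boundary(list(range(1_181_108)))
print(len(pieces))  # -> 1182 segments of <= 1000 nodes each

# Effect on the rectangular ilnd(nland, mnond) table (8-byte elements):
nland = 180_491 - 1 + len(pieces)  # one boundary replaced by its pieces
mnond = 1000                       # assuming no other boundary is longer
print(f"{8 * nland * mnond / 1e9:.1f} GB")  # -> 1.5 GB instead of ~1705 GB
```

A real script would of course also rewrite the land-boundary section of hgrid.gr3 accordingly; this only shows why the split shrinks the allocation by three orders of magnitude.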
-Joseph
Thank you @josephzhang8
Dear @josephzhang8, we have split the boundaries on the big mesh, and although the sanity check seems to work, we were unable to effectively run it on Azure. You can find the model here: https://static.techrad.eu/global_full.tar.gz. Hopefully you can use it as a test case for possible modifications in SCHISM. If you manage to make it work on your end, we would be interested to try it out. In the meantime we'll try something simpler. Thanks.
Will do...
-Joseph
I need sflux/.
-Joseph
The page shows no content, probably because of the file size?
-Joseph

From: pmav99, Monday, June 12, 2023 5:00 PM:
Thank you for looking into this Joseph. Try this: https://ppwdevarchivesa.blob.core.windows.net/seareport/sflux_sample?sp=r&st=2023-06-12T20:57:49Z&se=2023-07-12T04:57:49Z&spr=https&sv=2022-11-02&sr=d&sig=5FZSchXoh1xv1ylZytrxit92%2FN7zBz5xTRnTcikU0mA%3D&sdd=1
It seems the max # of surrounding elements around a node is 64, which is excessive and blew up the memory. Can you please reduce this?
-Joseph
After making more memory available by reducing the # of MPI processes per node, I'm able to run the mesh with nws=1 (since I did not have sflux). I also changed ibtp to 0, and since I'm using the latest master I had to remove some obsolete parameters (nramp*).
I used ~9 GB/process in order to get over the init (pre-partition) reads. This is because the mesh has a lot of imbalances; it is really an extreme case. The good news is that SCHISM can still run (yay!).
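For reference, the switches mentioned above would appear in param.nml roughly as follows. This is a hedged fragment, not a working configuration; check the sample param.nml shipped with the code for exact names, defaults, and meanings:

```
! param.nml fragment (sketch of the settings described above)
ipre = 0   ! 0 = normal run; 1 = pre-processing (sanity check) mode
nws  = 1   ! atmospheric forcing option used here in lieu of sflux-based input
ibtp = 0   ! set to 0 as described above; see the manual for its exact meaning
! (obsolete nramp* parameters removed when running the latest master)
```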
Regards,
Joseph Zhang
(804) 684 7595 (office)
SCHISM web: http://ccrm.vims.edu/schism/
Great news! I know that by forcing it to follow such a convoluted coastline I am asking for trouble. I will try some ways to make it more manageable and let you know. We'll also try pre-partitioning, and with your estimate of RAM/core we'll give it another try. Based on your experience, could such a mesh work? I know that the mesh is not balanced, and I wonder if the skewness of the elements might also cause problems, both in terms of stability and accuracy.
The sflux size seems to be 24 GB. Reading large sflux files with large x,y dimensions will slow down the model (parallel I/O is not cheap). I have not checked those dimensions yet as it's still downloading.
Also consider using a larger wtiminc: most atmospheric model outputs are at an hourly or coarser time step. This would reduce the I/O cost.
-Joseph

From: pmav99, Tuesday, June 13, 2023 2:34 AM:
@josephzhang8 Try with this: https://ppwdevarchivesa.blob.core.windows.net/seareport/sflux_sample/sflux_air_1.0001.nc?sp=r&st=2023-06-13T06:26:59Z&se=2023-06-17T14:26:59Z&spr=https&sv=2022-11-02&sr=b&sig=YvGUDz5EzKWbLUw3YOt%2BSFzajJTV7txLCszAHcXWHqQ%3D
@josephzhang8 The NetCDF file is indeed 24 GB, but it is uncompressed. Does SCHISM support reading compressed/deflated NetCDF files?
SCHISM accepts the NetCDF-4 classic format, which allows deflation.
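One way to produce such a deflated file is the stock nccopy utility from the netCDF tools; a one-line sketch (the input file name follows the sflux convention used earlier in this thread):

```shell
# Convert to NetCDF-4 classic model (-k 4) with deflation level 4 (-d 4).
nccopy -k 4 -d 4 sflux_air_1.0001.nc sflux_air_1.0001.deflated.nc
```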
-Joseph
Joseph, indeed the meteo forcing is every hour. That means that wtiminc should be 3600?
Correct.
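In param.nml terms, that would be (fragment):

```
wtiminc = 3600.   ! time step (in seconds) of the atmospheric forcing input
```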
-Joseph
Regarding skew elements: SCHISM will run, and usually the 'noise' won't spread. If you demand accuracy in those regions, then you'll have to revise the mesh.
-Joseph