
Add fswthru as four components. #320

Merged (22 commits) Jun 30, 2020

Conversation

dabail10
Contributor

For detailed information about submitting Pull Requests (PRs) to the CICE-Consortium,
please refer to: https://github.com/CICE-Consortium/About-Us/wiki/Resource-Index#information-for-developers

PR checklist

  • Short (1 sentence) summary of your PR:
    MOM6 requires the fswthru split into four components (vis dir, vis dif, nir dir, nir dif).
  • Developer(s):
    dabail10 (D. Bailey)
  • Suggest PR reviewers from list in the column to the right.
  • Please copy the PR test results link or provide a summary of testing completed below.
    https://github.com/CICE-Consortium/Test-Results/wiki/icepack_by_hash_forks#dc9ffdfd7fd35bfb169c6ea65dc160510c2497eb
  • How much do the PR code changes differ from the unmodified code?
    • bit for bit
    • different at roundoff level
    • more substantial
  • Does this PR create or have dependencies on CICE or any other models?
    • Yes
    • No
  • Does this PR add any new test cases?
    • Yes
    • No
  • Is the documentation being updated? ("Documentation" includes information on the wiki or in the .rst files from doc/source/, which are used to create the online technical docs at https://readthedocs.org/projects/cice-consortium-cice/.)
    • Yes
    • No
  • If not, does the documentation need to be updated at a later time?
    • Yes
    • No
  • Please provide any additional information or relevant details below:
    This will need to go with a corresponding CICE tag which is coming.

@apcraig
Contributor

apcraig commented May 29, 2020

It looks like this changes the icepack interfaces and will break backwards compatibility. Can these new fields be added as optional arguments, at least in the public icepack interfaces? My sense is that they are strictly intent(out)? If so, it should be pretty easy to do that.

@dabail10
Contributor Author

Sounds good. I can add them as optional.

@dabail10
Contributor Author

Looks like I have some things to fix in the build as well. I'm a bit concerned about the optional approach, though; it opens up a can of worms. Say, for example, you have an optional argument to icepack_step_radiation.

Then if this is passed on to shortwave_ccsm3, you need a check to see whether it is present, right? And would you then need two calls to shortwave_ccsm3, with and without the argument?

@dabail10
Contributor Author

Looks like the Travis build problem is an issue with Zenodo.

@apcraig
Contributor

apcraig commented May 29, 2020

I've retriggered travis once already. Will try again, but will also assume more changes are coming.

We have developed a strategy for handling the optional arguments. See the last bullet in

https://cice-consortium-icepack.readthedocs.io/en/master/developer_guide/dg_col_phys.html#coding-standard

If you have questions or can suggest improvements to the documentation, let me know. There are examples of this approach to optional arguments already in Icepack.

@apcraig
Contributor

apcraig commented Jun 4, 2020

I haven't looked through everything, but I don't think the optional arguments are being handled correctly yet.

In icepack_step_therm1, the following arguments are not yet optional but should be:

     fswthrunvdr , & ! vis dir SW through ice to ocean            (W/m^2)
     fswthrunvdf , & ! vis dif SW through ice to ocean            (W/m^2)
     fswthrunidr , & ! nir dir SW through ice to ocean            (W/m^2)
     fswthrunidf , & ! nir dif SW through ice to ocean            (W/m^2)
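Making them optional should just be a matter of adding the attribute to the dummy declarations, something like the following (the assumed-shape dimension is my assumption; keep whatever intent they already have):

     real (kind=dbl_kind), optional, dimension(:) :: &
        fswthrunvdr , & ! vis dir SW through ice to ocean            (W/m^2)
        fswthrunvdf , & ! vis dif SW through ice to ocean            (W/m^2)
        fswthrunidr , & ! nir dir SW through ice to ocean            (W/m^2)
        fswthrunidf     ! nir dif SW through ice to ocean            (W/m^2)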

Then you call merge_fluxes with the optional arguments as if they are always passed; I'm not sure this is robust. The way we have proposed to handle optional arguments is the following (a sketch is included after this list):

  • Define local variables, e.g. l_fswthrunvdr, ...
  • If the optional argument is passed, allocate and copy.
  • If the optional argument is not passed, allocate as size 1 and set the values to zero.
  • Use the local variable to pass into subsequent interfaces.
  • Look at isosno in icepack_step_therm1 as a good example.
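
A minimal sketch of that pattern, assuming the usual Icepack kind and constant (dbl_kind, c0) and a 1D per-category array; the local name l_fswthrunvdr and the copy-back at the end are illustrative rather than taken from the actual diff:

     real (kind=dbl_kind), allocatable, dimension(:) :: &
        l_fswthrunvdr   ! local stand-in for the optional argument

     ! caller passed the argument: size the local to match and copy it in
     if (present(fswthrunvdr)) then
        allocate(l_fswthrunvdr(size(fswthrunvdr)))
        l_fswthrunvdr(:) = fswthrunvdr(:)
     ! caller did not pass it: allocate a size-1 placeholder set to zero
     else
        allocate(l_fswthrunvdr(1))
        l_fswthrunvdr(:) = c0
     endif

     ! ... pass l_fswthrunvdr, never the optional dummy, into subsequent interfaces ...

     ! copy back only if the caller asked for the field
     if (present(fswthrunvdr)) fswthrunvdr(:) = l_fswthrunvdr(:)
     deallocate(l_fswthrunvdr)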

You do not need to carry the optional arguments around inside icepack unless they need to be optional because different parts of the model are calling into them. The optional arguments are mostly needed at the public (outside columnphysics) interface level.

The public interfaces (anything that is referenced in icepack_intfc, mostly subroutines that start with icepack_) are the ones where extra focus is needed. I think some other arguments in public subroutines in icepack_shortwave are missing the optional attribute too. Again, when I say public, I mean public outside the columnphysics. Maybe we need a different term for those, external interfaces?

Let me know if you have questions or want some help.

@dabail10
Contributor Author

dabail10 commented Jun 4, 2020

Sorry, I hadn't completed the optional stuff. I only did a partial commit. Still working on it.

@dabail10
Contributor Author

Added the optional argument code for the four penetrating shortwave components. Running test suite now.

@dabail10
Contributor Author

Here are the test results. Everything is bfb on nag, intel, pgi, and gnu on izumi.

https://github.com/CICE-Consortium/Test-Results/wiki/icepack_by_mach_forks#izumi

@eclare108213 left a comment (Contributor)

I request @apcraig to double-check the interface changes, but this looks fine to me.

@apcraig left a comment (Contributor)

I think this is pretty close. I had one comment about whether a size 1 fake array was going to work in one subroutine.

The second comment is that the 8 new variable names are really hard to parse visually: fswthruvdr, fswthruvdf, fswthruidr, fswthruidf, fswthrunvdr, fswthrunvdf, fswthrunidr, fswthrunidf. It would be nice if we found some better character strings for these names, but I will not insist.

Have you tested this without passing in the new variables? I think CICE should be run (unchanged) with this version of Icepack to make sure it is backwards compatible and bit-for-bit. I'm happy to help do that testing, just let me know if you'd like me to do that.

   if (present(fswthruidr)) &
      fswthruidr = fswthruidr + fswthrunidr * aicen
   if (present(fswthruidf)) &
      fswthruidf = fswthruidf + fswthrunidf * aicen

@apcraig (Contributor)

If fswthrun* is size 1 and aicen is not, I think this is risky and will fail on some or all compilers. I recommend adding a check: if fswthrun* matches the size of aicen, this code is run; if fswthrun* is size 1, this code is skipped; anything else should generate an error. Thoughts?
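
Roughly what I have in mind, as a sketch only (the names follow the quoted code and assume fswthrun* and aicen are per-category arrays at this point; the icepack_warnings calls stand in for whatever error handling we prefer):

   if (present(fswthruidr)) then
      if (size(fswthrunidr) == size(aicen)) then
         ! per-category array is fully populated: accumulate as before
         fswthruidr = fswthruidr + fswthrunidr * aicen
      elseif (size(fswthrunidr) /= 1) then
         ! unexpected size: flag an error
         call icepack_warnings_add('merge fluxes: fswthrunidr has unexpected size')
         call icepack_warnings_setabort(.true.)
      endif
      ! size 1 means the caller passed a placeholder, so the accumulation is skipped
   endif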

@dabail10 (Contributor Author)

Fortunately in merge_fluxes, this is called by category and everything is dimension 1. However, I do have a problem in icepack_shortwave.F90 (run_dEdd) here:

        if(present(fswthrun_vdr)) fswthrun_vdr(n) = l_fswthrun_vdr
        if(present(fswthrun_vdf)) fswthrun_vdf(n) = l_fswthrun_vdf
        if(present(fswthrun_idr)) fswthrun_idr(n) = l_fswthrun_idr
        if(present(fswthrun_idf)) fswthrun_idf(n) = l_fswthrun_idf

This is causing a segmentation violation, but it shouldn't be a problem because of the "present" check, should it? We can take this discussion offline.
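
One guess (not confirmed): if the caller hands run_dEdd a size-1 placeholder array, present() is still true here, so indexing element n overruns the array whenever n > 1. A bounds guard along these lines would avoid the overrun:

        if (present(fswthrun_vdr)) then
           if (size(fswthrun_vdr) >= n) fswthrun_vdr(n) = l_fswthrun_vdr
        endif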

@dabail10
Contributor Author

Ok. I will take a look. I agree the names are unwieldy, but it is kind of what we need. I will try testing with CICE to make sure it is all good.

Dave

@eclare108213
Contributor

For the names, how about just adding an underscore? fswthru needs to remain, the following n indicates category values as usual (right?), and the final suffix needs to stand out for readability. So maybe
fswthru_vdr, fswthru_vdf, fswthru_idr, fswthru_idf,
fswthrun_vdr, fswthrun_vdf, fswthrun_idr, fswthrun_idf


@apcraig
Contributor

apcraig commented Jun 24, 2020

I'll run some tests myself just to confirm the results. Can you explain what the two CICE sandboxes are, "with and without the interface changes"?

@dabail10
Contributor Author

Sorry. One sandbox had a CICE tag that does not know about the fswthru components and calls the icepack interfaces without them. The other sandbox is a branch of mine where I do call them with the fswthru components.

@apcraig
Contributor

apcraig commented Jun 24, 2020

OK, thanks. I am testing the current CICE master with the Icepack fswthru version vs. the current CICE master, on three compilers on cheyenne. Testing on multiple compilers can provide additional insight. For instance, if only the intel runs fail a subset of the regression tests, then it's more likely to be a compiler issue. If all compilers fail the same tests, then it's more likely to be a coding issue. I may set up the same full suites on gordon or conrad as well, just to have more data.

@apcraig
Contributor

apcraig commented Jun 26, 2020

The test results are here, https://github.com/CICE-Consortium/Test-Results/wiki/cice_by_hash_forks#3fedc78b1b012f8bd6c0fde7c1b41e79b9db63a9 for conrad and cheyenne.

It's likely just compiler optimization. No debug tests fail regression; beyond that, only a subset of tests fail, and only on some compilers. I also compared some of the differences, and they tend to diverge several days into the simulation rather than on the first timestep. Again, that suggests roundoff-level differences introduced in some computation. I might do a bit of extra work this weekend just to confirm, but I suspect everything is fine.

@dabail10, is this ready to merge once we agree the results are reasonable, or is there still a bit more work to do?

@dabail10
Contributor Author

If you are happy with it, then it is ready to merge. I have the CICE changes nearly ready to go.

@apcraig
Contributor

apcraig commented Jun 30, 2020

I spent a bit of time exploring the non-bit-for-bit results. I am confident that this is a compiler optimization thing. I isolated the optimization to a few lines of code and could trick the compiler into producing bit-for-bit results, but not in a way that we could use for production code. I think we can merge this now and will do so.

@apcraig apcraig merged commit be68b25 into CICE-Consortium:master Jun 30, 2020
@dabail10 dabail10 deleted the fswthru branch December 6, 2022 18:01