Read in full LUH2 dataset for use by FATES #1077
Another potential wrinkle is that FATES really wants the totals, not values allocated out by PFT. |
In fact, at the end of discussion today, we started thinking that maybe what is really needed is to generate a direct regridded mapping of LUH2 data for use in FATES, including other transition information beyond harvest. Any chance that streams could be utilized? That would require conservative remapping.
|
Streams in the MCT coupler can't do conservative mapping. But I'm pretty sure the NUOPC coupler can, and I would think it would be easy to add even if it isn't there yet. Since the NUOPC coupler is the next version for CESM, if it would be OK for these two efforts to happen at the same time, that would be one way forward. Or we could start with bilinear mapping in the MCT coupler and add conservative mapping when it's available in NUOPC. |
The streams are not in the MCT coupler; they are in the data models. The MCT data models cannot do conservative mapping UNLESS you create offline mapping files and point to them as part of the input namelist. The new NUOPC data models (which are being tested right now) can do online conservative remapping from the stream mesh to the model mesh very easily. They can also be called directly from CLM to do this mapping. I am happy to provide more details if needed.
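For intuition, first-order conservative remapping of the kind the NUOPC data models do online can be sketched in one dimension. This is a toy illustration of the technique, not the ESMF implementation: each destination cell takes the overlap-weighted average of the source cells it intersects, so the integral of the field is preserved.

```python
def conservative_remap_1d(src_edges, src_vals, dst_edges):
    """First-order conservative remap in 1-D: each destination cell
    receives the overlap-weighted average of the source cells it
    intersects, so the integral of the field over the domain is preserved."""
    dst_vals = []
    for j in range(len(dst_edges) - 1):
        lo, hi = dst_edges[j], dst_edges[j + 1]
        total = 0.0
        wsum = 0.0
        for i in range(len(src_edges) - 1):
            # width of the overlap between source cell i and destination cell j
            overlap = min(hi, src_edges[i + 1]) - max(lo, src_edges[i])
            if overlap > 0.0:
                total += src_vals[i] * overlap
                wsum += overlap
        dst_vals.append(total / wsum if wsum > 0.0 else 0.0)
    return dst_vals

# Coarsen a 4-cell field onto a 2-cell grid; the integral is unchanged.
out = conservative_remap_1d([0.0, 1.0, 2.0, 3.0, 4.0],
                            [1.0, 2.0, 3.0, 4.0],
                            [0.0, 2.0, 4.0])
```

Bilinear interpolation, by contrast, only samples values and gives no such conservation guarantee, which is why it matters for area and mass fields like the LUH2 states.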
--
Mariana Vertenstein
CESM Software Engineering Group Head
National Center for Atmospheric Research
Boulder, Colorado
|
OK, the plan is we'll bring in Charlie's PR as is, which will be using data that isn't really correct for how it's used. We'll bring in the LUH2 data (that @lawrencepj1 must have downloaded already) as streams data. For the MCT coupler we'll do this with a conservative mapping file; with the NUOPC coupler we'll be able to get this working without the need for mapping files. |
The rawdata files that we already list in the XML database have the total harvest NOT segregated out by PFT, so this might be what we want to use. It is stored in units of mass rather than area, though, so we might need to convert it.
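The mass-to-area conversion could look something like the sketch below. Everything here is illustrative, not the actual CTSM conversion: the function name, the assumption of a single mean aboveground-biomass density per cell, and the clamp at 100% of the cell.

```python
def harvest_mass_to_area_fraction(harvest_mass_kgC, agb_density_kgC_per_m2,
                                  cell_area_m2):
    """Convert a mass-based harvest amount (kg C per grid cell) into the
    fraction of the cell's area harvested, given an assumed mean
    aboveground-biomass density.  Names and units are illustrative."""
    if agb_density_kgC_per_m2 <= 0.0 or cell_area_m2 <= 0.0:
        raise ValueError("density and cell area must be positive")
    harvested_area_m2 = harvest_mass_kgC / agb_density_kgC_per_m2
    # clamp: cannot harvest more area than the cell contains
    return min(harvested_area_m2 / cell_area_m2, 1.0)
```

For example, removing 100 kg C from a 100 m² cell with 2 kg C/m² of biomass implies half the cell was harvested.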
|
Hi @ekluzek, see the metadata for the input4mips land use data that I pasted into an old FATES comment a while ago -- they have it by area too I think? NGEET/fates#491 (comment) |
One thing I wasn't sure about was whether we could have multiple filenames for streams files in CTSM, as we haven't done that before. But I just checked, and it asks for an array of filenames; we just always give it a single one. So we will be able to do that. It's just that we'll have to have a namelist with a long list of filenames. We can make the filename list separate from the directory, but it will still be a long list. But since we use build-namelist to make the namelist for us, and these files are already listed in the XML database, it won't be hard to do. So we just need to make sure the rawdata files have both mass and area units on them. |
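Generating the long filename list for a namelist entry is mechanical. In this rough sketch the filename template, the year chunking, and the `stream_fldfilename` variable name are made up for illustration; the real names come from CTSM's build-namelist and the XML database.

```python
def stream_file_list(template, years, chunk):
    """Build the stream file-name list, one file per `chunk`-year block.
    `template` and the namelist variable name below are illustrative."""
    return [template.format(start=y, end=min(y + chunk - 1, years[-1]))
            for y in range(years[0], years[-1] + 1, chunk)]

files = stream_file_list("LUH2_states_{start}-{end}.nc", range(1850, 2016), 50)
# render as a Fortran-namelist-style comma-separated list
entry = "stream_fldfilename = " + ", ".join("'" + f + "'" for f in files)
```

With multi-year files, 166 years of data collapses to just a handful of entries.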
The metadata that @ekluzek pointed to is from data that has been post-processed by Peter in some manner. We want to use the raw LUH2 data as noted in the FATES issue #491 that @ckoven pointed to. For each scenario (e.g., historical, SSP, alternate scenario) there are 3 files: transitions.nc, states.nc, and management.nc. See here for examples: /glade/p/cesm/sdwg_dev/thesis/data/cesm_tools/lumipfinallcc

Here are the harvest variables, including both area- and mass-based harvest: ncdump -h transitions.nc | grep harvest

All the transitions are in that file as well. Just listing 10 or so here (there are 50+ transitions): ncdump -h transitions.nc | grep transition

I believe that included in these are the transitions required to be able to do a full land cover / land use change implementation (i.e., deforestation for crop or pastureland). Obviously this is a lot of data, so we will need to carefully consider what FATES and/or CTSM needs, but if we can read it in and regrid via streams it will hopefully be relatively straightforward to add the required transitions.

EDIT: These files are still there as of Dec/1/2022. The management file is 1.4GB, states is 5.8GB, and transitions is 16GB. |
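The ncdump | grep step above amounts to filtering variable names by substring. Here is the same idea applied to a made-up subset of the 50+ variables; the names mimic LUH2 conventions (harv for area-based harvest, bioh for biomass-based harvest, _to_ for transitions), but the exact list is hypothetical.

```python
# Hypothetical subset of variable names from a LUH2 transitions.nc file;
# the real file holds 50+ transition variables.
luh2_vars = [
    "primf_harv", "primn_harv", "secmf_harv", "secyf_harv", "secnf_harv",
    "primf_bioh", "primn_bioh", "secmf_bioh", "secyf_bioh", "secnf_bioh",
    "primf_to_range", "primf_to_c3ann", "secdf_to_c3ann", "c3ann_to_urban",
]

def grep_vars(names, pattern):
    """Rough equivalent of `ncdump -h file.nc | grep pattern` applied to a
    list of variable names."""
    return [n for n in names if pattern in n]

harvest_vars = grep_vars(luh2_vars, "harv")   # area-based harvest fields
biomass_vars = grep_vars(luh2_vars, "bioh")   # mass-based harvest fields
transitions  = grep_vars(luh2_vars, "_to_")   # state-to-state transitions
```

The same filter would be the starting point for deciding which of the 50+ transitions FATES/CTSM actually needs to ingest.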
OK, it looks like with a small amount of processing we could use those datasets for streams into CTSM/FATES. There are only a few things I noticed. One is that we need a creation date on the filenames, and we'd need to enter them into CESM inputdata. Another is that time is in units of "years since ..."; we'd need to convert that to "days since ...". The other thing to do for these datasets would be to enter all of them into the XML database. That was one reason I was interested in the rawdata files that @lawrencepj1 processed, because we already have all of those entered in. But since these datasets have multiple years on single files, there still wouldn't be that many datasets to enter in, and of course we'd just enter the ones that are important at the time. Finally, as Dave points out, this dataset has a whole lot more variables, whereas the rawdata files are processed down to just a few. But yes, it looks like we could read in and regrid these datasets using streams without too much work. |
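The time-unit conversion mentioned above is a simple rescaling if the file uses a fixed-length calendar. A sketch, assuming the 365-day noleap calendar CTSM typically uses; a real converter would honor the file's calendar attribute rather than hard-code it.

```python
def years_since_to_days_since(year_offsets, calendar="noleap"):
    """Convert time values expressed as 'years since <ref>' into
    'days since <ref>' for the same reference date.  Only the 365-day
    'noleap' calendar is sketched; this is an assumption, not the
    production conversion."""
    if calendar != "noleap":
        raise NotImplementedError("only the noleap calendar is sketched here")
    return [y * 365.0 for y in year_offsets]

days = years_since_to_days_since([0, 1, 2])
```

The corresponding units attribute would change from "years since ..." to "days since ..." with the reference date left untouched.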
I think this is clear, but I'll restate anyway in case it isn't. One of the benefits of using the LUH2 source data is that it DOES include all these extra fields that will be needed for LUH2 support in FATES but are not needed in CTSM big leaf. Plus, it will also be easier to update to LUH3, or if other new SSPs are created, for example. No extra processing (other than modifying time units, etc.) would be required.
|
One thing that you might consider is that there is no land mask or land frac data on the LUH2 datasets, and the indexing for latitude is in the opposite direction to the raw data: 89.875 ... -89.875 rather than -89.875 ... 89.875.
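Both points are straightforward to handle in pre-processing. A minimal sketch: flipping the latitude axis is exact, while the landmask-from-states heuristic (mark a cell as land wherever the state fractions are positive) is an assumption for illustration, not the production approach.

```python
def flip_to_south_to_north(lats, field):
    """LUH2 grids store latitude from +89.875 down to -89.875; many CTSM
    raw datasets run south to north.  Reverse the latitude axis (and the
    corresponding rows of a lat x lon field) when needed."""
    if lats[0] > lats[-1]:
        lats = lats[::-1]
        field = field[::-1]  # reverse rows to stay consistent with lats
    return lats, field

def land_mask_from_states(state_fraction_sum, tol=1e-6):
    """LUH2 files carry no landmask/landfrac; one simple proxy is to mark
    a cell as land wherever the land-use state fractions sum to something
    positive (illustrative heuristic only)."""
    return [[1 if v > tol else 0 for v in row] for row in state_fraction_sum]
```

A mesh file describing the descending-latitude grid would make the flip unnecessary for the streams path.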
--
Dr Peter Lawrence
Terrestrial Science Section
National Center for Atmospheric Research
|
In ctsm5.1.dev022 we started allowing FATES to run for transient cases. |
In order to use conservative remapping for these streams we will need to resolve #1912. |
In our meeting today, @lawrencepj1 talked about the need to clean up some of the data, so it's cleaned and more CTSM-ified. So there's probably pre-processing that needs to happen before we read it into CTSM. We can use his scripts that clean and process the data to do just that first part. The difference in latitudes shouldn't be a problem; we just need a mesh file that describes what the mesh is and that the latitudes go in the opposite direction from other grids. |
OK, in the meeting we went back to the idea of adding this to the landuse.timeseries files again. @lawrencepj1 could you lay out what the raw data files would look like for this method, and how it would appear on the landuse.timeseries files? I think having that would really help so I could picture what this would look like. |
Hi @ekluzek, thanks for capturing this. My suggested framework would be to add an extra level of detail on the surface and landuse timeseries files, such that we keep the information that is specified in the LUH2 data. Currently we lose that when combining the data into CTSM/CLM format, which has only a single set of PFTs for all natural vegetation. This could look like this: PCT_NAT_VEG We already have the wood harvest carbon amount divided out into primf, primn, secmf, secyf, secnf. |
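The extra level of detail can be pictured as a nested layout in which today's single PCT_NAT_VEG field is the merge over LUH2 states. The state and PFT names below are placeholders for illustration, not the actual file layout.

```python
# Illustrative sketch of carrying LUH2 state detail on the
# landuse.timeseries file instead of one merged natural-vegetation pool.
pct_nat_veg_by_state = {
    "primf": {"broadleaf_evergreen_tree": 20.0, "c3_grass": 5.0},
    "secdf": {"broadleaf_evergreen_tree": 10.0, "c3_grass": 5.0},
    "primn": {"c3_grass": 30.0},
}

# The current CLM-style field is recovered by merging over states,
# which is exactly the step where the LUH2 state information is lost.
pct_nat_veg = {}
for state_fracs in pct_nat_veg_by_state.values():
    for pft, pct in state_fracs.items():
        pct_nat_veg[pft] = pct_nat_veg.get(pft, 0.0) + pct
```

Keeping the outer dimension on file means the merge becomes optional rather than baked in at dataset-creation time.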
And the other element we would like to have is a method of incorporating hillslope hydrology connectivity, aspect, and slope into these datasets. |
I'd love to be able to bring hillslope into this, but can we focus on getting FATES what it needs first to avoid dragging this out too long? |
@wwieder ha ha ha yes just wanted to put that out as a longer term goal |
Hi all, thanks for continuing this discussion. @lawrencepj1 you list out the state data but I'm curious about the relative pros and cons of using the transitions data rather than the state data (or both). In principle I think that the way that FATES will actually use this is as a rate rather than a change in states, so wouldn't that be the more natural data to drive the model with? Dumping out the LUH2 transitions netcdf metadata gives the following. Would it make sense to include that data either in addition to or instead of the state data? Obviously some of the transitions (e.g. to/from urban) won't be something that FATES knows what to do with but for others it will.
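The key difference is that differencing annual states only recovers the net change, while the transitions carry the gross flows. A one-cell toy example, where equal and opposite conversions cancel in the states but not in the transitions:

```python
def net_change_from_states(state_t0, state_t1):
    """Differencing annual state fractions recovers only the NET change."""
    return state_t1 - state_t0

# Hypothetical one-year transition rates (fraction of grid cell per year):
forest_to_crop = 0.05
crop_to_forest = 0.05  # equal and opposite gross flow

gross_turnover = forest_to_crop + crop_to_forest  # visible in transitions
net = net_change_from_states(0.40, 0.40)          # forest state unchanged
```

Here 10% of the cell turns over in a year, yet a state-driven model would see no change at all, which is the argument for driving FATES with the transition rates directly.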
|
Also, we should probably include the info from the LUMIP
|
Thanks @ckoven. Yes, the transitions in and out of each of the LUH2 states could be explicitly captured. The gross changes would have to be maintained as the dataset is regridded to different resolutions by mksrf, which may require some assumptions. The number of states in the transition matrix could be rationalized down to just having in and out of crop, rather than c3ann, c3per, c3nfx, c4ann, and c4per. The individual crop type information could be captured as transitions between the crops within the crop land unit if needed. At present we are capturing the added-tree-cover as additional secdf, so yes the _to_secdf transitions would cover this. Peter |
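Rationalizing the five crop types into a single crop state amounts to renaming and summing transition rates; within-crop swaps disappear at that level of aggregation. The src_to_dst naming scheme mirrors LUH2 convention, but the example rates and helper are illustrative.

```python
CROP_TYPES = {"c3ann", "c3per", "c3nfx", "c4ann", "c4per"}

def aggregate_crop_transitions(transitions):
    """Collapse per-crop-type LUH2-style transitions (e.g. 'secdf_to_c3ann')
    into a single generic 'crop' state, summing the rates.  Within-crop
    swaps (crop -> crop) vanish at this aggregation level."""
    out = {}
    for name, rate in transitions.items():
        src, dst = name.split("_to_")
        src = "crop" if src in CROP_TYPES else src
        dst = "crop" if dst in CROP_TYPES else dst
        if src == dst:
            continue  # e.g. c3ann -> c4per is internal to the crop land unit
        key = src + "_to_" + dst
        out[key] = out.get(key, 0.0) + rate
    return out

agg = aggregate_crop_transitions({
    "secdf_to_c3ann": 0.01, "secdf_to_c4ann": 0.02,
    "c3ann_to_c4per": 0.05, "c3ann_to_secdf": 0.01,
})
```

The dropped crop-to-crop rates could instead be retained as internal transitions within the crop land unit, as suggested above.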
Thanks @lawrencepj1, could you say more what you mean by
So does this mean that the information pipeline is that the land use data tool is run once at high resolution and then regridded to the various lower resolutions? How would the transitions regridding logic differ from the state regridding logic? |
Hi @ckoven. Yes, I was thinking pretty high level without details, but the resolution that we produce the data for and the resolution the model is run at can be different. With simple aggregation you can keep track of biomass or area, but transitions may not be conservative at different resolutions; that was all I was trying to point towards. |
@lawrencepj1 I was thinking more about how FATES will actually process this info, and had a few other questions that you would know the answers to:
Thanks! |
Hi @ckoven Great questions.
For other transitions between PFTs or land units there is an associated fraction of the wood carbon being transferred to product pools as well as litter and coarse woody debris. This is accounted for separately from the explicit wood harvest amounts.
With the fraction of the wood conversion to product pools defined at the PFT level (1-pconv) in:
Additionally, the conversion fluxes include increased fire fluxes in the CTSM fire model, which are not included in the DWT fluxes. For CTSM/ELM/FATES I suggest that we explicitly prescribe primf_to_secdf along with the carbon amounts extracted through wood harvest. The same would apply to primn_harv and the primn_to_secdn transitions, which would need to be calculated.
Hope that clarifies the process a little. |
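The carbon partitioning described above might be sketched like this. pconv is the PFT-level parameter the comment mentions (with 1-pconv the product-pool fraction), while f_litter, the litter-vs-CWD split of the remainder, is an invented illustrative parameter.

```python
def partition_cleared_wood(wood_carbon, pconv, f_litter=0.5):
    """Split wood carbon cleared during a land conversion (beyond explicit
    wood harvest) into product pools vs on-site litter and coarse woody
    debris.  (1 - pconv) is the product-pool fraction per the discussion;
    f_litter is an assumed illustrative split of the on-site remainder."""
    to_products = wood_carbon * (1.0 - pconv)
    remainder = wood_carbon - to_products  # stays on site
    to_litter = remainder * f_litter
    to_cwd = remainder - to_litter
    return {"products": to_products, "litter": to_litter, "cwd": to_cwd}

pools = partition_cleared_wood(100.0, pconv=0.8)
```

By construction the three pools sum back to the cleared wood carbon, so the conversion flux is conserved regardless of the parameter values.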
Thanks @lawrencepj1, that is very helpful! |
Also linking this to FATES wood harvest #869 |
@wwieder Thank you! |
Hi All, just wanted to give a brief update. @glemieux and I have been working on this and have made a lot of progress. Following the design of the FATES LUH2 design document that several of us discussed a few weeks ago, we are working towards initial versions of all of the steps in various repositories:
Code is not ready yet but most of the pieces are in place to drive patch-level changes to land use types in FATES with the LUH2 data directly. |
I changed the title and definition of done for this, since the original thought was to read in the original data as a streams file, but now we are reading it from the landuse.timeseries file. |
Hi @ekluzek thanks! Just two slight things:
|
@ekluzek yep it meets the definition of completeness noted in the first comment. |
FATES is going to use the area option for harvest variables from a streams file rather than the mass option (on the landuse.timeseries files). Hence, we'll need to support both variables on the new landuse.timeseries files.
It looks like what we really need is to read in the raw LUH2 harvest data for FATES as a stream file. In the landuse.timeseries files harvest is per PFT, but what FATES needs is the total, rather than resolved by PFT.
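Collapsing the per-PFT harvest fields into the total FATES wants is just a sum over the PFT dimension; a trivial sketch with hypothetical field names:

```python
def total_harvest(harvest_by_pft):
    """Sum per-PFT harvest values (as on landuse.timeseries) into the
    grid-cell total that FATES consumes.  Keys are illustrative."""
    return sum(harvest_by_pft.values())

total = total_harvest({"pft_1": 0.1, "pft_2": 0.2, "pft_3": 0.0})
```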
Definition of Done: CTSM/FATES reads in full LUH2 data and uses it in FATES