
add page on scene-linear images to website #1683

Merged
merged 10 commits into from
Mar 24, 2024

Conversation

peterhillman
Contributor

@peterhillman peterhillman commented Mar 20, 2024

This is a first pass at adding more information about scene linear images. The intended audience is users of software that supports OpenEXR images, rather than developers.
It probably says too much, but it's easy to delete stuff. It may also be a bit too armwavy and vague with terminology in places.

Website preview: https://openexr--1683.org.readthedocs.build/en/1683/

@kdt3rd
Contributor

kdt3rd commented Mar 20, 2024

I like how this spells things out. I wonder if, instead of trying to use all the same-meaning terms, it'd be better to use just one and have a synonym table somewhere?

Also, the graph does not appear to be working; is that something on my side?

@peterhillman
Contributor Author

You would have needed 'graphviz' for the graph to work. I've now separated that out into a manual step so the website can be rebuilt without that dependency. I've also removed some synonyms and moved the others to footnotes.

@cary-ilm
Member

In general, this is fantastic, a great addition to our documentation, and about the right level of detail. I'll read it over in more detail and make some minor editing suggestions shortly.

I think it would be worth a disclaimer about the discrepancy between the intentions and the reality of what likely appears in .exr files in the wild. One of the audiences for this information is the user who says, "I've been given this .exr file; how do I interpret this data?" The answer is, unfortunately, that you might have to ask the person who created it. The format and library don't enforce much in terms of the numbers in the data; you can store anything you want. But we're saying that a central purpose of the library is to support storing scene-linear data in a way that other formats do not.

Contributor

@lgritz lgritz left a comment


LGTM, this is a great addition

@lji-ilm

lji-ilm commented Mar 21, 2024

Thanks Peter for putting down this new article. As you said, the use of terms here isn't 100% as rigorous as in an academic paper, but I agree that it's at the right level of specificity for a general audience.

I'm working on a slide deck on EXR these days with Cary and here is my two cents about scene linear:

  • The numbers stored in EXR files are claimed to be not display-referred but scene-referred, or scene-linear.
  • “Not display-referred” is easier to explain: the numbers cannot be directly mapped to display voltages without knowing that display’s color profile.
  • “Scene-referred” and “scene-linear” do not have a standard interpretation yet, and they are sometimes confusing. I think they can be thought of in two ways:
  1. The physical perspective: the pixel numbers in EXR are linear in the intensity of light; they’re radiometrically linear measurements of the power of light.
  2. The synthesis perspective: the pixel numbers are linear in the parametric space of graphics calculations:
    if you input the numbers from an EXR directly into a graphics/rendering calculation, the result is what you expect, differing at most by a scalar factor (i.e. “linear”).
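The “not display-referred” point can be made concrete with the standard sRGB transfer curve (IEC 61966-2-1). This is a minimal illustrative sketch, not code from the PR; `srgb_encode` is just a hypothetical helper name, though the formula itself is the published sRGB curve:

```python
def srgb_encode(linear: float) -> float:
    """Standard sRGB opto-electronic transfer function (IEC 61966-2-1).

    Maps a scene-linear value to the nonlinear code value a display expects.
    """
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055


# A scene-linear mid-gray of 0.18 encodes to roughly 0.46 for display:
# the number stored in the EXR and the number sent to the display are
# different quantities, related only through the display's transfer curve.
print(srgb_encode(0.18))
```

This is why the stored EXR values can't be sent to a display directly: the mapping between scene-linear data and display code values depends on the display's profile.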

I think the current exposition leans too heavily into the physical perspective and misses an important aspect of purely synthesized productions such as animation, and especially non-photorealistic animation. In those production contexts there is no concept of either a physical light or a physical reflectance, but of course EXR should still be used because of its high precision and its linear-to-the-synthesis-parametric-space (in this case, a non-photorealistic rendering parametric space) properties.

When studying EXR's history since its 2003 open source debut, I have also noted that when it was first made open source, there were barely any digital cameras or sensors, save for scientific instruments, capable of directly capturing high dynamic range photographs. In the first batch of discussions ILM had about EXR, the point was that EXR was better at holding data from scans of film than the 8-bit image formats, rather than that it could hold a "raw" digital camera file. In that sense, the data is not exactly radiometrically linear to the light in the original on-set conditions, but rather linear to the chemical deposit on the film, upon which all the post-processing designed to work on film will work linearly, as expected. This film perspective has faded out over time, and I did not list it alongside the two perspectives above, but it would be good for this group to keep a note of it.

other nitpicks:

  1. In the flowchart there is a "demosaicking"... is this spelled right ("de-mosaic"?)? What does it mean? Removing the sensor pattern? Does that always happen after linearization, or sometimes before it?
  2. "A flat surface that reflects 90% of the light should be stored with a value of 0.90. .... Typically, bright reflections on metal would read around 10.0,"
    I get what you're trying to say about specular, but a reflectance above 1 is just bizarre (it's not energy-conserving and will eventually blow up the universe). There are two ways around this: the first is to move that footnote into the paragraph and say something about "the reflected light", so it's about light intensity, not reflectance or reflection. The second is to do BRDFs properly. I prefer the first approach.
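The distinction being drawn here (reflected light versus reflectance) can be shown with a back-of-the-envelope sketch. This is a hypothetical illustration, not code or numbers from the page under review: the reflectance stays at or below 1, but a scene-linear pixel stores reflected light, which scales with illumination, so stored values above 1.0 are entirely normal:

```python
def reflected_light(illumination: float, reflectance: float) -> float:
    """Reflected light under a simple Lambertian model.

    `illumination` is in the same scene-linear units as the stored pixels
    (e.g. units where a diffuse white under 'normal' lighting is near 1.0).
    """
    # Physical reflectance is energy-conserving: it never exceeds 1.
    assert 0.0 <= reflectance <= 1.0
    return illumination * reflectance


# Under bright illumination (say, 12x the nominal level), a 90%-reflective
# surface produces a scene-linear pixel value well above 1.0, even though
# its reflectance is perfectly physical:
print(reflected_light(12.0, 0.9))
```

In other words, clipping at 1.0 would be a property of a display-referred encoding; scene-linear data has no such ceiling.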

@peterhillman
Contributor Author

Thanks for those thoughts @lji-ilm . I think there could well be a separate page to cover "OpenEXR in Computer Graphics" that might talk about BRDFs as well as conventions for storing multi-channel textures for shading and non-color channels output by renderers, perhaps touching on USD and also non-photorealistic images. Arguably, that starts to get away from explaining scene-referred linear and output referred images, which is all this text is trying to do. I've avoided talking about digitizing film because there's a lot of complication and history there, and sadly it is not a very common thing these days.

I've tweaked the introduction to mention why an explanation that talks about (real) photographed images is also relevant to photorealistic rendering. I also changed the spelling to demosaicing, because that's the name of the wikipedia page, and taken out the 90% reference, which was vaguely worded. I've left in the description of the brightness of a reflection, since I think that's a clearer explanation and I think it's important to hint at why there are values above 1.

@lji-ilm

lji-ilm commented Mar 22, 2024

Thanks Peter, this looks good and I think it can go onto the website.

And I agree that a more scholarly study of what EXR intends to do/should do/can do will be outside the scope of the website page. Maybe an IEEE column article or something similar. If I keep studying this and stay motivated enough after another while, I'll see if I can give it a shot :)

Amongst these "extra" points, however, the highest-priority one seems to be the "synthesis-linear" perspective. There are a lot of feature animation studios in this group that rely on EXR (Pixar, WDAS, and DreamWorks, to name a few), and it's hard to motivate "radiometrically linear" in feature animation because that entire production is not rooted in photography, or any form of measurement of light, to start with.

I agree the "film-linear" perspective is probably only of scholarly interest by now. It's interesting that when EXR was first invented (late '90s), the radiometric perspective didn't really exist, because advanced digital cameras didn't exist; the goal was almost always to ensure that films were properly scanned and that the CG synthesis calculations played well on top of the scan data. Times certainly have changed.

@meshula
Contributor

meshula commented Mar 22, 2024

This is very nice, thank you!

I wonder if you might also include the EXR version of the fruits and color checker image? It seems appropriate to demonstrate an actual scene-referred image rather than a JPEG simulation.

@peterhillman
Contributor Author

@meshula I've just been looking into that. I've made a PR to add it as an example image to make it downloadable. @cary-ilm would it be best to add this to the Test Images section of the webpage too? I'll have to dig into doing that.

Member

@cary-ilm cary-ilm left a comment


Two small typos, but otherwise looks great.

display-referred, where the values indicate how much light should be used to
display the image on a screen (or how much ink to use to print the image onto
paper), and many image formats apply an encoding to the image so that the
numbers are not linear. Some sources use the term 'input-referred' and
Member


Better to use either ' or " consistently throughout.

[#fterms]_.

This is a brief description of the difference between scene-referred and
display-refrred representations, what linear-light means, and why using
Member


referred

@@ -238,8 +244,14 @@ OpenColorIO.

.. rubric:: Footnotes

.. [#fterms] Color scientists use a bewilderingly large number of special terms and
acronyms. Some use two different terms and mean exactly the same thing; others
might insist there is a subtle but important distinction between them. To keep things brief,
Contributor


LOL! I have nothing to suggest here, just noting that you made me snort tea through my nose

@cary-ilm
Member

@peterhillman, this looks good to go to me, anything else you want to add?

@peterhillman
Contributor Author

I think this is in a good place for a first revision. I added the new example image to the site so I could link to it. I noticed this has also fixed the index of test images, which had 'zips' for multi-scanline ZIP and 'zip' for single-scanline, the wrong way round.

@cary-ilm cary-ilm merged commit 0e92e7d into AcademySoftwareFoundation:main Mar 24, 2024
4 checks passed