Add scalable linear light texture for shaders #9818
FSR is not one of them. It's designed to upscale images that haven't been downscaled (in linear space, in this case). Otherwise gamma light is preferred.
Shader math seems fine to me. But you probably want
I can certainly think about making LINEAR resizable. It doesn't seem too difficult.
Thank you for your reply. However, since I have a very basic understanding of color theory, I don't know what to make of your reply. I am not hooking
Actually, what type of artifacts do you get? Something I just noticed is that
So yes, you do want to make sure you clamp the incoming values. Try adding
I am currently working on porting FidelityFX CAS and FSR to mpv. And yeah, the sRGB assumption comes from the fact that those filters are designed for video games, which use sRGB textures most of the time. But when it comes to video content, you can no longer assume a source colorspace. That's why I don't really want to assume anything or do any in-shader conversions; it's not as simple as in video games. As a best practice, mpv should take care of all the color conversion stuff. The manual states the following:
May I ask what kind of RGB this is? Is it already linear? If so, what's the difference between
Yeah, I am just not going to try doing it in-shader, because even if I did, it would not work with HDR content; I would then have to create another version for HDR, which is a code smell and bad practice. I am surprised
I appreciate the sentiment, but does it really matter? As long as it's vaguely linear, it should be good enough to do the job, right? These are subjective shaders, after all. As long as you exactly invert it on the other end, it still ends up decoding to the right thing. (Though I can definitely see that logic breaking down for PQ-encoded content.)
No, it absolutely isn't. (*) It depends on the source tagging. So you are definitely correct in needing to linearize it yourself if you want linear-light content. If it really matters, for the time being you could make both an HDR and an SDR version of the shader (with PQ and pure power gamma respectively) and choose the right one with an auto-profile or something. Though I will warn you that scaling HDR content in linear light will probably lead to very bad results. It might be best to just throw a gamma 2.4 curve on the content regardless of the source transfer.

But I definitely agree that LINEAR can be made resizable. There's no really good reason not to. The only reason it isn't is that the decision to linearize or not is made as a result of the scaling direction, so we lock down the scaling parameters before we know whether there even is a linear hook. But for libplacebo at least I can change this a bit more easily, to make it also force a linear path (independent of scaling) if there is any shader hook on it. Though that does raise the question of what should happen if you end up transforming the texture in such a way that it no longer wants linearization, especially e.g. for HDR scaling.

(*) Unless you're opening a linear-light floating point EXR image or something...
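As a hedged illustration of the auto-profile idea, something along these lines in mpv.conf could switch between the two shader variants; the sig-peak test and the shader file names are assumptions for illustration, not something prescribed in this thread:

```
# hypothetical profiles: pick the PQ or gamma variant of the shader
# depending on whether the source is tagged HDR (sig-peak > 1)
[shader-hdr]
profile-cond=p["video-params/sig-peak"] > 1
glsl-shaders="~~/shaders/upscaler-pq.glsl"    # assumed file name

[shader-sdr]
profile-cond=p["video-params/sig-peak"] <= 1
glsl-shaders="~~/shaders/upscaler-gamma.glsl" # assumed file name
```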
@haasn What about an alternative? Instead of playing with the pipeline, can you make certain preprocessor macros available that describe the nature of the source content? Then I could read which gamma curve to apply/invert, whether the source is SDR or HDR, etc. I think that's better practice in the long term, since it has more applications beyond just this single use case.
@agyild Yeah that's a good solution, actually. I pushed something like this to libplacebo master which should work for vo_gpu_next. I'll try seeing if I can port it to vo_gpu if you need it. |
Release your FSR already. Gamma or linear, it doesn't matter anyway. Just hook MAIN. You can add the linearize macro later.
As for the time being, until the source-colorspace preprocessor macros become available, I have decided to use the following color conversion algorithms from AMD's code:

```glsl
// BT.709 transfer function: encoded -> linear
vec3 From709(vec3 c) {
    return max(min(c * vec3(1.0 / 4.5), vec3(0.081)),
               pow((c + vec3(0.099)) * (vec3(1.0) / vec3(1.099)), vec3(1.0 / 0.45)));
}

// PQ (SMPTE ST 2084) EOTF on normalized input: encoded -> linear
vec3 FromPq(vec3 x) {
    vec3 p = pow(x, vec3(0.0126833));
    return pow(clamp(p - vec3(0.835938), 0.0, 1.0) /
               (vec3(18.8516) - vec3(18.6875) * p), vec3(6.27739));
}

// sRGB transfer function: encoded -> linear
vec3 FromSrgb(vec3 c) {
    return max(min(c * vec3(1.0 / 12.92), vec3(0.04045)),
               pow((c + vec3(0.055)) * (vec3(1.0) / vec3(1.055)), vec3(2.4)));
}

// BT.709 inverse: linear -> encoded
vec3 To709(vec3 c) {
    return max(min(c * vec3(4.5), vec3(0.018)),
               vec3(1.099) * pow(c, vec3(0.45)) - vec3(0.099));
}

// PQ inverse EOTF: linear -> encoded
vec3 ToPq(vec3 x) {
    vec3 p = pow(x, vec3(0.159302));
    return pow((vec3(0.835938) + vec3(18.8516) * p) /
               (vec3(1.0) + vec3(18.6875) * p), vec3(78.8438));
}

// sRGB inverse: linear -> encoded
vec3 ToSrgb(vec3 c) {
    return max(min(c * vec3(12.92), vec3(0.0031308)),
               vec3(1.055) * pow(c, vec3(0.41666)) - vec3(0.055));
}
```

In this case PQ has the same bug as the one in the previous sRGB implementation, where there is a color-crush artifact when black and white edges appear next to each other, such as in movie credits. Other than that, how do they look? Any thoughts? @haasn

And yes, the artifacts in the previous sRGB-to-linear implementation were probably caused by how
CAS with scaling is nearly finished. I will start FSR after this one. Shouldn't take long.

Edit: So the color-crushing bug in PQ was apparently due to not clamping the normalized input color values during the relinearization pass. Apparently it is needed because that is also how mpv does it. So the problem seems to be solved for now.
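A minimal sketch of that fix, assuming a PQ source and the FromPq() helper above:

```glsl
vec4 hook() {
    vec4 col = HOOKED_tex(HOOKED_pos);
    // clamp the normalized input to [0, 1] before inverting PQ so the
    // pow() calls never see out-of-range values (the color-crush fix)
    col.rgb = FromPq(clamp(col.rgb, 0.0, 1.0));
    return col;
}
```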
I have just re-released CAS, along with scaling support and further AMD optimizations. FYI.
Should be
So I am in the process of porting FSR, and it is nearly finished. However, I have two questions: FSR requires an additional sharpening pass on the upscaled image. Considering the first pass uses hooks
It's basically a dumbed-down version of CAS with optional denoising (which you can port into CAS).
Just port only the EASU part of FSR, and use it like this:
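A rough sketch of that two-pass shape using mpv's user-shader directives; the saved texture name EASUTEX and the stub bodies are placeholders, not AMD's actual code:

```glsl
//!DESC FSR (EASU upscaling pass)
//!HOOK MAIN
//!BIND HOOKED
//!SAVE EASUTEX
//!WIDTH OUTPUT.w
//!HEIGHT OUTPUT.h
vec4 hook() {
    // ... EASU body: 12-tap gather/filter of HOOKED goes here ...
    return HOOKED_tex(HOOKED_pos); // placeholder
}

//!DESC FSR (RCAS sharpening pass)
//!HOOK MAIN
//!BIND EASUTEX
//!WIDTH OUTPUT.w
//!HEIGHT OUTPUT.h
vec4 hook() {
    // ... RCAS body: sharpen the upscaled EASUTEX result here ...
    return EASUTEX_tex(EASUTEX_pos); // placeholder
}
```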
FSR has been ported as well. FYI.
I recommend enabling denoise by default.
Redundant.
Thanks. It was a leftover from debugging code.
From my testing, I did not notice any visual benefit besides reducing the effect. But that might be just me. Did you test it?
Yes.
@igv Do you think that porting FSR and CAS as compute shaders, as they were originally intended, would improve performance? I have never programmed compute shaders before, but I did some research, and people say they are better suited for image processing algorithms such as sharpening, resizing, etc. And if I wanted to create compute shader versions, how would I proceed? I imagine it's just some syntax and texture coordinate modifications, right? Thanks for assisting me, by the way.
No, these shaders are very simple; there is not much to optimize with CS (you can think of them as low-level PS).
Take a look at this version of FSRCNN, where I merged all the "expansion" and "sub-pixel convolution" passes plus "aggregation" into one "reconstruction" pass (compare to the PS version).
They should be somewhat faster as compute shaders if you use local shared memory to share data between adjacent pixels, which is basically what mpv does to speed up normal convolutions (e.g.
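A hedged compute-shader sketch of that shared-memory idea, assuming mpv's compute user-shader conventions (//!COMPUTE, a void hook(), and writing the result via imageStore to out_image, as in shaders like FSRCNNX); the tile size is arbitrary and the apron loads a real filter would need are omitted:

```glsl
//!DESC shared-memory staging sketch
//!HOOK LUMA
//!BIND HOOKED
//!COMPUTE 32 8
shared float tile[8][32]; // one workgroup's worth of input samples

void hook() {
    ivec2 gid = ivec2(gl_GlobalInvocationID.xy);
    ivec2 lid = ivec2(gl_LocalInvocationID.xy);
    // each thread fetches its own pixel once into shared memory
    vec2 pos = (vec2(gid) + 0.5) * HOOKED_pt;
    tile[lid.y][lid.x] = HOOKED_tex(pos).x;
    barrier();
    // a real convolution would now read its neighborhood from tile[][]
    // instead of issuing one texture fetch per tap per pixel
    imageStore(out_image, gid, vec4(tile[lid.y][lid.x], 0.0, 0.0, 1.0));
}
```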
Kernel size is too small; the speed-up will be maybe 5%.
NVIDIA Image Scaling has been ported as well.
@haasn According to ITU-R BT.709, colors should be linearized before luma calculation. Does that mean LUMA is already linear?
BT.709 only describes how to encode video. To do the opposite (e.g. what mpv needs), it's implicitly assumed that we're reversing the order of operations from the spec. That means we decode YCbCr first, and then linearize. (Compare to first delinearizing, then encoding into YCbCr, which is what BT.709 specifies.) And no, LUMA is always non-linear, even for color spaces in which the order of operations is different (e.g. BT.2020-CL). LUMA literally contains the encoded source texture, bit-for-bit.
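To make the distinction concrete: computing BT.709 luminance means linearizing first and then weighting, e.g. with the From709() helper posted earlier, whereas LUMA holds the already-encoded Y' values:

```glsl
// linear-light luminance per BT.709 (linearize, then weight);
// LUMA, by contrast, contains the encoded Y' channel bit-for-bit
float luminance709(vec3 rgb) {
    vec3 lin = From709(rgb);
    return dot(lin, vec3(0.2126, 0.7152, 0.0722));
}
```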
Certain scaling filters are designed to operate in a non-gamma (i.e. linear) color space. Manually trying to compute linear light in shader code interferes with mpv's own color conversions and might break things. I want to be able to request a scalable linear-light texture from mpv, process it, and hand it back to mpv for the necessary color conversions.
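As a sketch of what that request amounts to on the shader side, assuming mpv's existing LINEAR hook point (which is currently only present when mpv itself decides to linearize, and is not resizable):

```glsl
//!DESC example linear-light pass
//!HOOK LINEAR
//!BIND HOOKED
vec4 hook() {
    vec4 col = HOOKED_tex(HOOKED_pos);
    // ... filter here in linear light; mpv converts back afterwards ...
    return col;
}
```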
Alternatively, if you can suggest proper shader code that won't break things, I could use that. I tried implementing mpv's sRGB-to-linear and linear-to-sRGB conversions in shader code, and it mostly worked fine, but I have encountered some cases where it causes artifacts. If I hook and process MAIN with the code below, I get artifacts. I don't understand much color theory, but I presume it is because MAIN is not sRGB but RGB. Is that correct?