WGSL textureSample tests #3940
Conversation
LGTM
(I don't really do the % / math for calculating coords. If they pass local test I assume they are good -_-)
Resolved review threads on:
- src/webgpu/shader/execution/expression/call/builtin/texture_utils.ts (outdated, two threads)
- src/webgpu/shader/execution/expression/call/builtin/textureSample.spec.ts
Force-pushed from 21a8fe1 to 4269864
Add all the textureSample tests. cube and cube-array tests with derivatives are currently skipped or filtered out as the software rasterizer can't handle this case correctly, or at least doesn't match too many GPUs. Rather than increase the tolerances, I'm hoping to find something I can measure, like the mapping between derivative and mix-weight, that will give us some way to test per GPU.

There's a big change to the soft rasterizer to compute derivatives by setting up 2x2 pixels and computing the derivatives by the differences between the directions.
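The 2x2-pixel derivative approach described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual `texture_utils.ts` code: evaluate the sample direction at each pixel of a quad and take per-axis differences, mirroring how GPUs compute coarse `dpdx`/`dpdy`.

```typescript
// Sketch of computing derivatives from a 2x2 pixel quad, as a software
// rasterizer might. All names here are illustrative.
type Vec3 = [number, number, number];

function sub(a: Vec3, b: Vec3): Vec3 {
  return [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
}

// `dirForPixel` computes the sample direction (e.g. a cube-map
// direction) for a given pixel center.
function quadDerivatives(
  x: number,
  y: number,
  dirForPixel: (px: number, py: number) => Vec3
) {
  // Snap to the 2x2 quad that contains (x, y).
  const x0 = Math.floor(x / 2) * 2;
  const y0 = Math.floor(y / 2) * 2;
  const d00 = dirForPixel(x0 + 0.5, y0 + 0.5);
  const d10 = dirForPixel(x0 + 1.5, y0 + 0.5);
  const d01 = dirForPixel(x0 + 0.5, y0 + 1.5);
  // Coarse derivatives: the difference between adjacent pixels'
  // directions along each axis of the quad.
  const ddx = sub(d10, d00);
  const ddy = sub(d01, d00);
  return { ddx, ddy };
}
```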
Force-pushed from 4269864 to 6dd0281
Hey @shrekshao, could you please re-review this? I changed it to use your min/max suggestion. The first pass I tried… Anyway, the tests seem to pass except for legit failures (see run). Legit failures are (1) OpenGL on Linux (compat), where reading depth textures always returns 0, and (2) Linux Vulkan Intel, which seems completely broken. Otherwise, lots of stuff that was failing is no longer failing.

Also, I added some more weight queries, which were an attempt to figure out what the GPU is doing and apply those findings to the software renderer. That code path is used by the non-derivative tests. I didn't change the non-derivative tests to use the min/max path as they are already passing.

Anyway, with such a big change it seemed best to ask for a new review. Thanks!
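The min/max idea referenced above can be sketched like this (hypothetical names, not the PR's actual implementation): rather than comparing the GPU result against a single expected value plus a tolerance, compute the expected result over a small range of plausible mip mix weights and require the GPU value to fall inside that range.

```typescript
// Illustrative min/max comparison for a trilinear sample. `texelLo`
// and `texelHi` are the values the two mip levels would contribute;
// `weightLo`/`weightHi` bound the mix weight the GPU might use.
function expectedRange(
  texelLo: number,
  texelHi: number,
  weightLo: number,
  weightHi: number
): [number, number] {
  const mix = (a: number, b: number, t: number) => a + (b - a) * t;
  const a = mix(texelLo, texelHi, weightLo);
  const b = mix(texelLo, texelHi, weightHi);
  return [Math.min(a, b), Math.max(a, b)];
}

// A GPU result passes if it lies anywhere inside the plausible range.
function passesMinMax(gpuValue: number, lo: number, hi: number): boolean {
  return gpuValue >= lo && gpuValue <= hi;
}
```

This avoids picking one "correct" weight on hardware where the mip selection curve varies between GPUs.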
I don't know if I understand the shaders in `queryMipGradientValuesForDevice`. I had some comments. Please take a look.
Resolved review threads on:
- src/webgpu/shader/execution/expression/call/builtin/texture_utils.ts (outdated, two threads)
```wgsl
let g = mix(0.5, 1.0, mipLevel);
// …
let ndx = v.ndx * ${kNumWeightTypes};
result[ndx + 0] = textureSampleLevel(tex, smp, vec2f(0.5), mipLevel).r;
```
Does `mipLevel` need to be `mipLevelNum` (`f32(v.ndx)`)?
```wgsl
@fragment fn fs(v: VSOutput) -> @location(0) vec4f {
  let mipLevel = f32(v.ndx) / ${kMipGradientSteps};
  let size = textureDimensions(tex);
  let d = mix(0.125, 0.25, mipLevel) * 4.;
```
nit: `d` seems unused
```wgsl
result[u32(pos.x)] = textureSampleLevel(tex, smp, vec2f(0.5), mipLevel).r;
// …
@fragment fn fs(v: VSOutput) -> @location(0) vec4f {
  let mipLevel = f32(v.ndx) / ${kMipGradientSteps};
```
I'm confused here. Say we have:

```wgsl
let mipLevelNum = f32(v.ndx);
let mipLevel = mipLevelNum / ${kMipGradientSteps}; // maybe rename to mipLevelMix?
```
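The normalization being discussed, where each step index `v.ndx` maps to a fractional mip level `ndx / kMipGradientSteps` and the sampled `.r` value is the mix weight the GPU actually applied, can be sketched on the CPU side like this. The table layout, `kMipGradientSteps`, and the lookup function are illustrative assumptions, not the PR's actual code:

```typescript
// Hypothetical use of a queried per-GPU weight table: entry i holds the
// mix weight the GPU applied at fractional mip level i / kMipGradientSteps.
// The table has kMipGradientSteps + 1 entries (levels 0.0 through 1.0).
const kMipGradientSteps = 16;

// Linearly interpolate the table to recover the GPU's effective mix
// weight for an arbitrary fractional mip level in [0, 1].
function gpuMixWeight(table: number[], mipLevelFraction: number): number {
  const pos = mipLevelFraction * kMipGradientSteps; // table index space
  const i = Math.min(Math.floor(pos), kMipGradientSteps - 1);
  const t = pos - i;
  return table[i] + (table[i + 1] - table[i]) * t;
}
```

A software renderer could then use `gpuMixWeight` instead of the ideal linear weight, so its reference results track the specific GPU's mip selection curve.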
```wgsl
let mipLevel = f32(v.ndx) / ${kMipGradientSteps};
let size = textureDimensions(tex);
let d = mix(0.125, 0.25, mipLevel) * 4.;
let u = f32(v.pos.x) * pow(2.0, mipLevel) / f32(size.x);
```
Does `mipLevel` need to be `mipLevelNum` (`f32(v.ndx)`)?
Am I understanding the `queryMipGradientValuesForDevice` shader correctly (based on my questions)?
Your understanding is correct, but given it's been a few days since I asked for this review, I've made so many new changes that I guess we should close this and start a new one with the latest.
Requirements for PR author:
- Missing test coverage is tracked with `.unimplemented()`.
- New helper functions are `/** documented */` and new helper files are found in `helper_index.txt`.

Requirements for reviewer sign-off:

When landing this PR, be sure to make any necessary issue status updates.