
Question on ablation study #2

Open
JHLew opened this issue Jun 16, 2022 · 1 comment

JHLew commented Jun 16, 2022

Hi authors, thank you for your awesome work.

I was going through the VFIformer paper and became curious about something in the ablation study.
It would have been great to attend CVPR and ask in person, but unfortunately I could not, so I am leaving my question here.

In short, to my understanding, the main contribution of the paper is the use of Transformer layers in VFI, with a novel cross-scale window attention, reaching state-of-the-art performance.

As I understand it, the 'Model 1' configuration in Table 2 consists of convolutional layers only, yet it still outperforms the best baseline (36.27 dB vs. 36.18 dB on Vimeo90K).
I came to wonder why this is.
To me, the 'Model 1' configuration did not seem to have anything special (no offense), since it does not contain the proposed modules.

Could you explain this?
What is the difference that led to such a strong base model (Model 1)?
Or did I miss something about the 'Model 1' configuration?

SkyeLu (Collaborator) commented Jun 17, 2022

Hi, thanks for your interest in our work. As mentioned in the appendix of our paper, the main difference between Model 1 and the best baseline model is the flow estimator with the proposed Bilateral Local Refinement Blocks (BLRBs in Fig. 9(b)), which in fact brings about a 0.1 dB improvement. However, we do not claim the BLRB as one of our key contributions, because once the model is equipped with Transformer layers, the contribution of the BLRBs is limited.
