
Hello, why are the predictions of layer i-1 and layer i used to compute the relation embedding? #20

Open
runfeng-q opened this issue Sep 13, 2024 · 2 comments
Labels
question Further information is requested

Comments

@runfeng-q

Question

Hello, may I ask why the predictions of layer i-1 and layer i are used to compute the relation embedding? Is there any extra benefit? What would happen if the relation were computed only by pairing the layer-i predictions with each other?

Additional information

No response

@runfeng-q added the question label Sep 13, 2024
@xiuqhou
Owner

xiuqhou commented Sep 13, 2024

Hi @runfeng-q
Using the $Box$ from layer i-1 and layer i to compute the relation here is inspired by iterative bounding box refinement. In the decoder stage, the network predicts a box refinement $\Delta_i = Box_i - Box_{i-1}$. Since the network's output is the refinement between two layers' boxes, we correspondingly encode the positional relation with the adjacent-layer boxes $Box_i$ and $Box_{i-1}$, which should better help predict the next layer's output $Box_{i+1} - Box_i$.
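For reference, iterative bounding box refinement in DETR-style decoders is commonly done by adding the predicted refinement in inverse-sigmoid space of the normalized box coordinates. A minimal sketch of that update (the function names and the inverse-sigmoid form are assumptions for illustration, not this repository's exact code):

```python
import numpy as np

def inverse_sigmoid(x, eps=1e-5):
    # Map normalized (0, 1) box coordinates back to logit space.
    x = np.clip(x, eps, 1 - eps)
    return np.log(x / (1 - x))

def refine_boxes(box_prev, delta):
    # box_prev: (N, 4) normalized (cx, cy, w, h) from layer i-1.
    # delta:    (N, 4) predicted refinement for layer i, in logit space.
    # Returns the layer-i boxes: sigmoid(inverse_sigmoid(box_prev) + delta).
    return 1.0 / (1.0 + np.exp(-(inverse_sigmoid(box_prev) + delta)))

# With a zero refinement, the box passes through unchanged.
box = np.array([[0.5, 0.5, 0.2, 0.3]])
refined = refine_boxes(box, np.zeros((1, 4)))
```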

If the relation is computed only from the $layer-i$ predictions, the advantage is that one more relation layer can be added, for 6 relation layers in total. As I recall, we ran an ablation on this once, and in the end 6 relation layers using only layer i performed about the same as 5 relation layers using layer i-1/layer i. Since the latter saves one relation computation, layer i-1 and layer i were kept as the default setting.

If you are interested, you are welcome to run the comparison experiment yourself; it is also very simple to implement in code~
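To make the two variants concrete, a pairwise relative-geometry encoding between two sets of boxes might look like the minimal sketch below (the feature set and names are illustrative assumptions, not the repository's actual relation implementation):

```python
import numpy as np

def box_relation_features(boxes_a, boxes_b, eps=1e-6):
    """Pairwise relative geometry between boxes_a and boxes_b.

    Boxes are (N, 4) in normalized (cx, cy, w, h). Returns (N, N, 4)
    features (dcx/w_a, dcy/h_a, log(w_b/w_a), log(h_b/h_a)), a common
    input to position-relation encodings.
    """
    cxa, cya, wa, ha = (boxes_a[:, None, k] for k in range(4))
    cxb, cyb, wb, hb = (boxes_b[None, :, k] for k in range(4))
    return np.stack([
        (cxb - cxa) / (wa + eps),
        (cyb - cya) / (ha + eps),
        np.log((wb + eps) / (wa + eps)),
        np.log((hb + eps) / (ha + eps)),
    ], axis=-1)

prev_boxes = np.array([[0.4, 0.4, 0.2, 0.2], [0.7, 0.6, 0.3, 0.3]])  # layer i-1
curr_boxes = np.array([[0.45, 0.42, 0.22, 0.21], [0.69, 0.6, 0.3, 0.31]])  # layer i

# Cross-layer relation (the default described above) vs the same-layer
# alternative the question asks about (layer i paired with itself).
cross = box_relation_features(prev_boxes, curr_boxes)
same = box_relation_features(curr_boxes, curr_boxes)
```

In the same-layer variant, each box paired with itself yields all-zero features, which is one way the cross-layer pairing carries extra information: it encodes how much each box moved between decoder layers.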

@runfeng-q
Author

Thanks!!
