Hey, thanks for your wonderful work. I ran into some questions while reading the code:

1. What is the difference between query_pos and query_sine_embed? It seems that query_sine_embed is the positional embedding vector, but in your code, in the first decoder layer, q = q_content + q_pos. Why?
2. What is the function of the hyperparameter self.keep_query_pos? In the original paper, your key insight seems to be keeping the content query and positional query separate so that each can compute attention independently.

I would appreciate it if you could give me some insight.
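For context, here is a minimal sketch of the pattern I am asking about, based on my reading of Conditional-DETR-style decoder code. All names (gen_sineembed, tgt, ref_points) and shapes are my assumptions, not the exact implementation:

```python
import math
import torch

def gen_sineembed(pos_tensor, d_model=256):
    # Assumed sketch of the sinusoidal embedding: maps normalized
    # reference points (x, y) in [0, 1] to a d_model-dim vector,
    # half the dimensions for x and half for y.
    scale = 2 * math.pi
    dim_t = torch.arange(d_model // 2, dtype=torch.float32)
    dim_t = 10000 ** (2 * (dim_t // 2) / (d_model // 2))
    pos_x = (pos_tensor[..., 0] * scale)[..., None] / dim_t
    pos_y = (pos_tensor[..., 1] * scale)[..., None] / dim_t
    pos_x = torch.stack((pos_x[..., 0::2].sin(), pos_x[..., 1::2].cos()),
                        dim=-1).flatten(-2)
    pos_y = torch.stack((pos_y[..., 0::2].sin(), pos_y[..., 1::2].cos()),
                        dim=-1).flatten(-2)
    return torch.cat((pos_y, pos_x), dim=-1)  # (..., d_model)

num_queries, d_model = 300, 256
tgt = torch.zeros(num_queries, d_model)        # content part of the query
query_pos = torch.randn(num_queries, d_model)  # learnable positional embedding
ref_points = torch.rand(num_queries, 2)        # normalized (x, y) references

# query_sine_embed: sinusoidal embedding derived from reference points,
# used (as I understand it) to modulate cross-attention.
query_sine_embed = gen_sineembed(ref_points, d_model)

# First decoder layer: the line I am asking about, q = q_content + q_pos.
q_first = tgt + query_pos

# Later layers when keep_query_pos is False: my understanding is that the
# learnable query_pos is dropped and positional information flows through
# query_sine_embed instead -- is that the intended behavior?
q_later = tgt
```

This is just to make the question concrete; please correct me if the sketch misrepresents how the two embeddings are actually used.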