
Concatenation of prefix and embedding_text in torch.cat #79

Open
Tshuning opened this issue Jan 19, 2024 · 0 comments

@Tshuning
First of all, thank you for your valuable contribution! I am currently studying your code and have a question about one particular section. Could you please help me understand the following line:

embedding_cat = torch.cat((prefix_projections, embedding_text), dim=1)

I noticed that prefix_projections and embedding_text are concatenated along dim=1. What is the purpose of combining these two tensors? Is the projected prefix serving as a prompt?
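For context, here is a minimal sketch of what that concatenation does shape-wise. The tensor names match the quoted line, but the batch size, prefix length, text length, and embedding dimension below are made-up illustrative values, not taken from the repository:

```python
import torch

# Hypothetical shapes: (batch, sequence_length, embedding_dim)
batch, prefix_len, text_len, dim = 2, 10, 20, 768
prefix_projections = torch.randn(batch, prefix_len, dim)  # projected prefix tokens
embedding_text = torch.randn(batch, text_len, dim)        # embedded caption tokens

# Concatenating along dim=1 stacks the two sequences end to end,
# so the prefix tokens precede the text tokens in one input sequence.
embedding_cat = torch.cat((prefix_projections, embedding_text), dim=1)
print(embedding_cat.shape)  # torch.Size([2, 30, 768])
```

If my reading is right, this places the projected prefix in front of the text embeddings along the sequence dimension, so the language model attends to it like a soft prompt.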

Once again, thank you for your patience and contribution!

Best regards
