Does it support embeddings as input? #1420
Hi @ltroin, can you be more specific about the model or component you're trying to use? LIT's model wrappers generally assume the data will be sent in as represented in the dataset, so typically text, numbers, or image bytes, not pre-computed tensors. Components (interpreters, projectors, etc.) are more flexible, but they also tend to assume the incoming data will match the dataset or whatever the model outputs. That said, we should be able to help you adapt/customize the LIT code you want to use to fit your needs.
You can use tensor data for
Thank you for the quick response. I'm currently working with the Llama model, which accepts two types of inputs: tokenized text or input embeddings. I'm interested in using the input embeddings to analyze and visualize the patterns of attention weights across different layers, and to create a salience map based on "tokens", where each "token" corresponds to a row of input embeddings.
Does this imply that the LIT framework will interpret this NumPy array as input embeddings rather than raw text?
tl;dr -- Probably not. NumPy arrays are not raw text, and LIT expects to operate over language-native data.

Longer explanation -- LIT operates over a JSON Object representation of examples. Since JSON has very limited support for types, LIT provides its own type system, which our TypeScript and Python codebases use to decide how to handle the different values in the JSON Objects we pass around. Model and Dataset classes declare the shape (i.e., field names and types) of the JSON Objects they provide/accept as Specs, and components (interpreters, generators, metrics, and UI modules) look for specific LIT types in these Specs to determine compatibility and decide how to handle them. As above, you should expect a
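To make the example/Spec pairing concrete, here is a minimal, hypothetical sketch using plain Python dicts. The real `lit_nlp` API declares types with classes in `lit_nlp.api.types`; the string type names and the `is_compatible` helper below are illustrative stand-ins, not actual LIT code.

```python
# Hypothetical sketch of LIT-style examples and Specs using plain dicts.
# In real LIT, types come from lit_nlp.api.types; strings stand in here.

# An example is a JSON-serializable object: field name -> value.
example = {
    "prompt": "The quick brown fox",  # language-native text
    "label": 1,                       # a plain number
}

# A Spec mirrors the example's shape: field name -> declared type.
spec = {
    "prompt": "TextSegment",
    "label": "Scalar",
}

def is_compatible(example: dict, spec: dict) -> bool:
    """A component checks compatibility by matching field names
    against the Spec before deciding how to handle the values."""
    return set(example) == set(spec)

print(is_compatible(example, spec))  # True: field names line up
```

The key point is that compatibility is decided from the declared Spec, not by inspecting the runtime values themselves.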
We don't have a wrapper for Llama in an official release yet, but we're working on adding one in #1421. It's designed to take raw text as the input and then the HF implementation handles tokenization, embedding, generation, etc. It would be possible for you to subclass this (or write your own) so that the wrapper class takes embeddings as input instead of raw text.
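A wrapper that takes embeddings instead of raw text might look roughly like the sketch below. Everything here is hypothetical: `EmbeddingLlamaWrapper`, its method names, and the string-based spec are illustrative only and not part of any LIT release; the per-token "model" is a placeholder reduction, not Llama.

```python
import numpy as np

class EmbeddingLlamaWrapper:
    """Hypothetical wrapper whose input field is a precomputed
    embedding matrix of shape (num_tokens, hidden_dims), not text."""

    def __init__(self, hidden_dims: int = 8):
        self.hidden_dims = hidden_dims

    def input_spec(self) -> dict:
        # Real LIT would declare a lit_nlp.api.types type here.
        return {"input_embs": f"Embeddings(num_tokens, {self.hidden_dims})"}

    def predict(self, inputs: list) -> list:
        outputs = []
        for ex in inputs:
            embs = np.asarray(ex["input_embs"])  # (num_tokens, hidden_dims)
            assert embs.shape[1] == self.hidden_dims
            # Placeholder "model": per-token score = L2 norm of each row.
            scores = np.linalg.norm(embs, axis=1)
            outputs.append({"token_scores": scores.tolist()})
        return outputs

wrapper = EmbeddingLlamaWrapper(hidden_dims=4)
batch = [{"input_embs": [[1.0, 0.0, 0.0, 0.0], [0.0, 2.0, 0.0, 0.0]]}]
print(wrapper.predict(batch)[0]["token_scores"])  # [1.0, 2.0]
```

The design choice to make here is where tokenization/embedding happens: the planned wrapper keeps it inside the model class (raw text in), while a subclass like this moves that responsibility to whoever builds the dataset.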
LIT provides a Sequence Salience module that renders a salience map over tokenized text. Is that the kind of thing you're looking for, or do you want to display a salience map over a matrix of shape (num_tokens, hidden_dims)?
Hi, sorry for the late reply. Thank you so much for the detailed explanation.
This feature will be awesome! And yes, I also want to display a salience map over a matrix of shape (num_tokens, hidden_dims), where certain num_tokens are highlighted.
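One common way to get from a (num_tokens, hidden_dims) matrix to a per-token salience map with highlighted tokens is to reduce over the hidden dimension and then pick the top-scoring rows. The sketch below is an illustration of that reduction, not LIT code; the sum-of-absolute-values reduction and `token_salience` name are assumptions.

```python
import numpy as np

def token_salience(matrix, top_k=2):
    """Collapse a (num_tokens, hidden_dims) matrix to one salience
    score per token (sum of |values| over hidden_dims) and return
    the indices of the top_k tokens to highlight."""
    matrix = np.asarray(matrix)
    scores = np.abs(matrix).sum(axis=1)      # shape: (num_tokens,)
    top = np.argsort(scores)[::-1][:top_k]   # most salient first
    return scores, sorted(top.tolist())

scores, highlighted = token_salience(
    [[0.1, -0.1], [2.0, 1.0], [0.0, 0.5], [1.0, 1.0]], top_k=2)
print(highlighted)  # [1, 3]
```

Other reductions (L2 norm, max, or a gradient-times-input dot product) fit the same shape-collapsing pattern; the choice depends on what the rows of the matrix represent.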
Hello, I am wondering whether there is any plan to support torch tensors as data input?