The notebooks in `python-topic-model/notebook/` contain no small examples of how to infer the topic distribution for a new document, or for the documents the model was trained on.
Something like giving a list of integers (that map to the words of `voca`) as input for a new document, and getting back the probability distribution that document has over the trained topics. Or accessing the topic distributions of all the trained documents.
How can this be achieved for, let's say, LDA or supervised LDA?
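Not an answer from the library itself, but here is a generic sketch of the standard "fold-in" approach: hold the trained topic-word matrix fixed and Gibbs-sample only the new document's topic assignments. The names `phi`, `alpha`, and `infer_theta` are my own, not part of `python-topic-model`'s API — you would need to pull the trained topic-word probabilities out of whatever sampler object the notebook builds.

```python
import numpy as np

def infer_theta(doc, phi, alpha=0.1, n_iter=100, rng=None):
    """Estimate the topic distribution theta for a new document by
    fold-in Gibbs sampling, keeping the trained topic-word matrix fixed.

    doc  : list of word ids (ints mapping into the vocabulary, as in voca)
    phi  : (n_topics, vocab_size) array of trained topic-word probabilities
    alpha: symmetric Dirichlet prior on the document-topic distribution
    """
    rng = rng or np.random.default_rng(0)
    n_topics = phi.shape[0]
    # Random initial topic assignment for each token in the new document.
    z = rng.integers(n_topics, size=len(doc))
    counts = np.bincount(z, minlength=n_topics).astype(float)
    for _ in range(n_iter):
        for i, w in enumerate(doc):
            counts[z[i]] -= 1                 # remove token i from its topic
            p = (counts + alpha) * phi[:, w]  # conditional p(z_i = k | rest)
            p /= p.sum()
            z[i] = rng.choice(n_topics, p=p)  # resample token i's topic
            counts[z[i]] += 1
    # Posterior mean of theta given the final topic counts.
    return (counts + alpha) / (counts.sum() + n_topics * alpha)

# Toy example: topic 0 generates words {0, 1}, topic 1 generates {2, 3}.
phi = np.array([[0.5, 0.5, 0.0, 0.0],
                [0.0, 0.0, 0.5, 0.5]])
theta = infer_theta([0, 1, 0, 1, 0], phi)
print(theta)  # heavily weighted toward topic 0
```

For the training documents themselves, no extra inference should be needed: a collapsed Gibbs sampler already keeps per-document topic counts, so (assuming the sampler object exposes them) the same `(counts + alpha) / (counts.sum() + n_topics * alpha)` normalization gives each trained document's topic distribution.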