
Improve prediction wall-time #13

Open
josephbirkner opened this issue Sep 17, 2017 · 0 comments
josephbirkner commented Sep 17, 2017

Currently, the wall-time of the completer-lstm grows approximately linearly with the length of the prefix string. This could be circumvented by caching the last LSTM state tuple list on the client side and feeding it back to the server when an extended completion is requested, so only the new suffix characters need to be run through the model.
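A minimal sketch of the client-side caching idea, assuming a hypothetical API where the server returns an opaque LSTM state alongside each prediction (names and shapes here are illustrative, not the actual completer-lstm interface):

```python
class StateCache:
    """Caches the LSTM state returned for the last predicted prefix,
    so an extended completion only feeds the new suffix to the model."""

    def __init__(self):
        self.prefix = ""
        self.state = None  # opaque LSTM state tuple list from the server

    def suffix_for(self, prefix):
        """Return (suffix, state) so only unseen characters are re-fed.
        Falls back to the full prefix when the cache does not apply."""
        if self.state is not None and prefix.startswith(self.prefix):
            return prefix[len(self.prefix):], self.state
        return prefix, None

    def update(self, prefix, state):
        """Record the state the server returned for this prefix."""
        self.prefix = prefix
        self.state = state
```

With this, typing one more character costs a single LSTM step instead of re-running the whole prefix.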

Furthermore, a significant share of the wall-time is spent on JSON serialization, transmission, and deserialization of the N×M matrix of (char, probability) predictions, where N is the number of characters to predict and M is the total number of lexical features. To reduce the matrix to its significant entries, M should only cover the top (5?) most probable next characters.
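The pruning step could look like the following sketch, which keeps only the top-k (char, probability) entries per prediction step before serialization (the matrix layout is assumed, not taken from the actual server code):

```python
import heapq

def prune_predictions(matrix, k=5):
    """Reduce each prediction step from M (char, probability) pairs
    to its k most probable entries, shrinking the JSON payload."""
    return [heapq.nlargest(k, step, key=lambda cp: cp[1]) for step in matrix]
```

This shrinks the serialized payload from N×M to N×k entries while preserving the entries a completer would realistically display.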

@josephbirkner josephbirkner self-assigned this Sep 17, 2017
@josephbirkner josephbirkner changed the title Improve prediction performance Improve prediction wall-time Sep 18, 2017