Replies: 3 comments 5 replies
-
@mjto Recently, while working on DJL parameter saving, I made some changes to allow InputStream input for PyTorch and MXNet: 92cc448. However, we haven't implemented a loading mechanism that reads directly from an InputStream; we can add one if needed.
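If it helps, here is a minimal sketch of stream-based parameter saving through `Block.saveParameters(DataOutputStream)`; the class and method names other than the DJL API itself are just for illustration:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import ai.djl.nn.Block;

public final class ParamSaveExample {

    /**
     * Serializes a trained block's parameters into memory instead of a file,
     * using the stream-based saving path referenced in the commit above.
     */
    public static byte[] saveToBytes(Block block) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (DataOutputStream dos = new DataOutputStream(bos)) {
            block.saveParameters(dos);
        }
        return bos.toByteArray();
    }
}
```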
4 replies
-
Hi @mjto, since the PR is merged, you should be able to try loading from PyTorch directly. Let me know if there are any more issues.
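In case it helps, a minimal sketch of what stream-based loading could look like; `Model.load(InputStream)` is my assumption of the entry point the merged PR exposes, and the decrypted byte array stands in for the in-memory decryption step from the question:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import ai.djl.MalformedModelException;
import ai.djl.Model;

public final class StreamLoadExample {

    /**
     * Loads a TorchScript model from bytes that were decrypted in memory.
     * Assumes the default engine is PyTorch and that Model.load(InputStream)
     * is the stream-accepting overload added by the PR.
     */
    public static Model loadFromBytes(byte[] decryptedBytes)
            throws IOException, MalformedModelException {
        Model model = Model.newInstance("decrypted-model");
        try (InputStream is = new ByteArrayInputStream(decryptedBytes)) {
            model.load(is);
        }
        return model;
    }
}
```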
1 reply
-
For the deployment of a deep learning model, I need to encrypt the trained model that is saved locally on the end user's computer. Before running inference, the model is decrypted and then loaded (by ModelZoo.loadModel). Ideally, I'd like to keep the decrypted model in memory only.
This decryption-and-load flow would work smoothly if ModelZoo.loadModel took a stream as input (the result of the decryption). I can see that a low-level function taking a stream does exist, such as torch::jit::load(std::istream &in, …), but it is not wrapped and exposed in DJL.
My question is whether such a feature (ModelZoo.loadModel taking a stream as input) or something similar is planned for DJL in the near future. This seems very useful for certain application scenarios, but correct me if there is already a straightforward way to implement model encryption and decryption while using DJL. A rough sketch of the in-memory decryption step I have in mind follows below.
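This is only an illustrative sketch; the cipher mode, key handling, and the decryptModel helper name are my own assumptions, not anything DJL provides:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.GeneralSecurityException;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public final class ModelDecryptor {

    /**
     * Decrypts the encrypted model file entirely in memory; nothing is
     * written back to disk. A stream-accepting ModelZoo.loadModel could
     * then consume the returned InputStream directly.
     */
    public static InputStream decryptModel(String encryptedPath, byte[] key, byte[] iv)
            throws IOException, GeneralSecurityException {
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.DECRYPT_MODE,
                new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        byte[] plain = cipher.doFinal(Files.readAllBytes(Paths.get(encryptedPath)));
        return new ByteArrayInputStream(plain);
    }
}
```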
Many thanks.