As more people adopt and build on Qwen's open-source models, a number of variants have been appearing, and I think it would be great if this project could support them. One that's getting a lot of attention right now is Nekomata-7b/14b from the Japanese LLM developer Rinna, which is based on Qwen, and it would be nice to be able to run it easily on mobile devices.
I'm not sure how difficult this would be to add, but Qwen is already supported in the original llama.cpp repo here, so that might help a bit.
(Also, English isn't my first language, so sorry for any odd phrasing 🙏)
Thank you!