General Reflection on AI and Fragmentation #713

Closed
vricosti opened this issue Apr 3, 2023 · 0 comments
vricosti commented Apr 3, 2023

First of all, a huge thank you for your work. I stumbled upon it by accident after running into issues with the original Python version, and I also found a Swift fork that seems to have been based on your work initially. I'm new to the AI "game," but I already see fragmentation across OS and hardware. This implementation is pure CPU, and a faster version would need a GPU path, where CUDA is generally preferred, or CoreML in the Apple world. Similarly, if we use CoreML, I understand the models have to be converted into a specific format (see the sketch after my questions below). Since Swift is available on Windows and Linux, I was almost wondering whether it wouldn't be simpler to implement CoreML on these platforms and use the Swift version, ...
Is implementing a GPU version complicated and time-consuming, and if someone does it, will it work on an AMD card, for example?
Will the GPU version be much faster on an Intel Mac with a card like a Radeon RX 5700 XT, for example?
Sorry for all these questions.
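
For context, conversion to the Core ML format is typically done in Python with Apple's coremltools. Below is a minimal sketch of that step; the `TinyEncoder` model, tensor shapes, and file names are placeholders for illustration, not the project's actual Whisper export path.

```python
# Minimal sketch: exporting a traced PyTorch model to the Core ML format
# with coremltools. The model below is a placeholder stand-in, not the
# actual Whisper encoder/decoder used by this project.
import torch
import coremltools as ct


class TinyEncoder(torch.nn.Module):
    """Placeholder model standing in for a real speech encoder."""
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(80, 384)

    def forward(self, mel):
        return self.linear(mel)


model = TinyEncoder().eval()
example_input = torch.rand(1, 3000, 80)          # dummy mel-spectrogram frames
traced = torch.jit.trace(model, example_input)   # TorchScript trace for conversion

# Convert the traced graph into an ML Program package that Core ML can load
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="mel", shape=example_input.shape)],
    convert_to="mlprogram",
)
mlmodel.save("tiny-encoder.mlpackage")
```

The resulting .mlpackage can then be loaded from Swift or Objective-C on Apple platforms; there is no Core ML runtime on Windows or Linux, which is part of the fragmentation described above.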

Repository owner locked and limited conversation to collaborators Apr 14, 2023
ggerganov converted this issue into discussion #762 Apr 14, 2023

