
MacOS M1 cannot find library OpenMP_CXX #13

Open
kofj opened this issue Nov 15, 2023 · 4 comments


@kofj

kofj commented Nov 15, 2023

CMake Error at /opt/homebrew/Cellar/cmake/3.27.4/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:230 (message):
  Could NOT find OpenMP_CXX (missing: OpenMP_CXX_FLAGS OpenMP_CXX_LIB_NAMES)
Call Stack (most recent call first):
  /opt/homebrew/Cellar/cmake/3.27.4/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:600 (_FPHSA_FAILURE_MESSAGE)
  /opt/homebrew/Cellar/cmake/3.27.4/share/cmake/Modules/FindOpenMP.cmake:577 (find_package_handle_standard_args)
  src/lymath/CMakeLists.txt:1 (find_package)


-- Configuring incomplete, errors occurred!
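
For reference, a common workaround for this error on Apple Silicon (not verified against this project) is to install Homebrew's libomp and pass the OpenMP hint variables to CMake explicitly, since Apple's Clang ships without built-in OpenMP support:

  brew install libomp
  cmake -B build \
    -DOpenMP_CXX_FLAGS="-Xpreprocessor -fopenmp -I$(brew --prefix libomp)/include" \
    -DOpenMP_CXX_LIB_NAMES="omp" \
    -DOpenMP_omp_LIBRARY="$(brew --prefix libomp)/lib/libomp.dylib"

The first two -D variables are exactly the ones FindOpenMP reports as missing; OpenMP_omp_LIBRARY supplies the library path that pairs with the "omp" entry in OpenMP_CXX_LIB_NAMES.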
@ling0322
Owner

macOS is not supported yet. Please wait for a future update.

@tongji1907

What's the difference between this and llama.cpp?

@ling0322
Owner

ling0322 commented Mar 7, 2024

We can compile libllm on macOS now, but it is very slow since there is no optimized kernel for it yet. I am implementing the aarch64 kernel now.

@ling0322
Owner

ling0322 commented Mar 7, 2024

What's the difference between this and llama.cpp?

Currently, they are very similar. On my side, I focus on client-side delivery, making local LLMs easy to deploy and run for everyone.
