
build : enable more non-default compiler warnings #3200

Merged: 21 commits, Sep 28, 2023

Conversation

@cebtenzzre (Collaborator) commented Sep 15, 2023

I compiled llama.cpp with clang's -Weverything option, and found some warning flags that I think we should use.

The new warnings are -Wmissing-noreturn and -Wextra-semi for g++ and clang++; -Wunreachable-code-break and -Wunreachable-code-return for clang and clang++; and -Wrange-loop-bind-reference for clang++.

I made a new GGML_UNREACHABLE() macro to cover some of the cases where it should be explicit that code is unreachable. It's basically assert(false), but with the added benefit that it will compile to __builtin_unreachable() when assertions are disabled, both to enable compiler optimizations and to make sure no -Wreturn-type warnings appear because of the lack of a return statement at the end of a function.
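A minimal sketch of the idea (the actual macro definition in the PR may differ, and the enum and function here are made up for illustration):

    #include <assert.h>

    // Sketch: abort via assert in debug builds; in release builds (NDEBUG)
    // tell the compiler the spot is unreachable using the GCC/clang builtin,
    // which also silences -Wreturn-type at the end of non-void functions.
    #ifdef NDEBUG
    #define GGML_UNREACHABLE() __builtin_unreachable()
    #else
    #define GGML_UNREACHABLE() assert(!"unreachable code reached")
    #endif

    // Hypothetical usage: every case returns, and the macro marks the
    // end of the function as unreachable.
    enum example_type { TYPE_A, TYPE_B };

    static int example_size(enum example_type t) {
        switch (t) {
            case TYPE_A: return 1;
            case TYPE_B: return 2;
        }
        GGML_UNREACHABLE();
    }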

Other included changes:

  • Add missing 'static' specifiers
  • Use -Werror=implicit-function-declaration for C
  • Don't pass -Wdouble-promotion to old clang
  • Build q8dot and benchmark-matmult in the Makefile by default

cebtenzzre marked this pull request as draft September 15, 2023 20:30
cebtenzzre marked this pull request as ready for review September 15, 2023 21:57
@Green-Sky (Collaborator) commented:

Did you check which versions of the compilers support the flags?

@cebtenzzre (Collaborator, Author) commented Sep 19, 2023

> Did you check which versions of the compilers support the flags?

It looks like -Wextra-semi was only added in GCC 8.1. master compiles fine with GCC 5. So I'll have to add some version checks to the build scripts.
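A minimal sketch of the kind of guard this implies in CMakeLists.txt (the checks actually added in the PR may be structured differently):

    if (CMAKE_CXX_COMPILER_ID STREQUAL "GNU" AND
        CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 8.1)
        # -Wextra-semi is only understood by GCC 8.1 and newer
        add_compile_options("$<$<COMPILE_LANGUAGE:CXX>:-Wextra-semi>")
    endif()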

@cebtenzzre (Collaborator, Author) commented:

I added some compiler version checks. I've tested with several compilers, including LLVM clang 3.7.0, Apple clang 8.0.0, and gcc 5.5.0.

@ggerganov (Owner) left a review comment:


Should merge after resolving conflicts

cebtenzzre merged commit bc39553 into ggerganov:master Sep 28, 2023
27 of 33 checks passed
cebtenzzre deleted the clang-warnings branch September 28, 2023 21:41
cebtenzzre restored the clang-warnings branch September 28, 2023 21:41
@ggerganov (Owner) commented:

With CUDA builds, I think -Wpedantic now spams the build with the following warning for every line:

...

In file included from tmpxft_001370c3_00000000-7_ggml-cuda.cudafe1.stub.c:1:
/home/ubuntu/ggerganov/llama.cpp/ggml-cuda.cu:1295:3: warning: style of line directive is a GCC extension
 1295 | 
      |   ^
In file included from tmpxft_001370c3_00000000-7_ggml-cuda.cudafe1.stub.c:1:
/tmp/tmpxft_001370c3_00000000-7_ggml-cuda.cudafe1.stub.c:5:3: warning: style of line directive is a GCC extension
    5 | #if !defined(__CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS__)
      |   ^~~~
In file included from tmpxft_001370c3_00000000-7_ggml-cuda.cudafe1.stub.c:1:
/home/ubuntu/ggerganov/llama.cpp/ggml-cuda.cu:1297:3: warning: style of line directive is a GCC extension
 1297 | 
      |   ^   
In file included from tmpxft_001370c3_00000000-7_ggml-cuda.cudafe1.stub.c:1:
/home/ubuntu/ggerganov/llama.cpp/ggml-cuda.cu:1297:3: warning: style of line directive is a GCC extension
 1297 | 
      |   ^   
In file included from tmpxft_001370c3_00000000-7_ggml-cuda.cudafe1.stub.c:1:
/home/ubuntu/ggerganov/llama.cpp/ggml-cuda.cu:1411:3: warning: style of line directive is a GCC extension
 1411 | 
      |   ^
In file included from tmpxft_001370c3_00000000-7_ggml-cuda.cudafe1.stub.c:1:
/tmp/tmpxft_001370c3_00000000-7_ggml-cuda.cudafe1.stub.c:5:3: warning: style of line directive is a GCC extension
    5 | #if !defined(__CUDA_INCLUDE_COMPILER_INTERNAL_HEADERS__)
      |   ^~~~
In file included from tmpxft_001370c3_00000000-7_ggml-cuda.cudafe1.stub.c:1:
/home/ubuntu/ggerganov/llama.cpp/ggml-cuda.cu:1413:3: warning: style of line directive is a GCC extension
 1413 | 

...

@slaren (Collaborator) commented Sep 30, 2023

I also see this when building with CMake with CUDA enabled. The Makefile build disables -Wpedantic for nvcc to avoid this issue:

NVCCFLAGS := $(NVCCFLAGS) $(CXXFLAGS) $(CUDA_CXXFLAGS) -Wno-pedantic -Xcompiler "$(HOST_CXXFLAGS)"

@cebtenzzre (Collaborator, Author) commented:

I could try and change the structure of the flags to match the Makefile better. I wasn't really thinking about CUDA when I was porting my changes to CMake.
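One possible way to keep -Wpedantic away from nvcc in CMake (a sketch of one approach, not necessarily the change that eventually landed) is to scope the warning flags to the C and C++ compile languages so CUDA sources never receive them:

    # Apply -Wpedantic only when compiling C/C++ sources,
    # so nvcc-compiled .cu files are not affected.
    add_compile_options(
        "$<$<COMPILE_LANGUAGE:C>:-Wpedantic>"
        "$<$<COMPILE_LANGUAGE:CXX>:-Wpedantic>"
    )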

joelkuiper added a commit to vortext/llama.cpp that referenced this pull request Oct 2, 2023
…example

* 'master' of github.com:ggerganov/llama.cpp:
  ggml-cuda : perform cublas mat mul of quantized types as f16 (ggerganov#3412)
  llama.cpp : add documentation about rope_freq_base and scale values (ggerganov#3401)
  train : fix KQ_pos allocation (ggerganov#3392)
  llama : quantize up to 31% faster on Linux and Windows with mmap (ggerganov#3206)
  readme : update hot topics + model links (ggerganov#3399)
  readme : add link to grammars app (ggerganov#3388)
  swift : fix build on xcode 15 (ggerganov#3387)
  build : enable more non-default compiler warnings (ggerganov#3200)
  ggml_tensor: update the structure comments. (ggerganov#3283)
  ggml : release the requested thread pool resource (ggerganov#3292)
  llama.cpp : split llama_context_params into model and context params (ggerganov#3301)
  ci : multithreaded builds (ggerganov#3311)
  train : finetune LORA (ggerganov#2632)
  gguf : basic type checking in gguf_get_* (ggerganov#3346)
  gguf : make token scores and types optional (ggerganov#3347)
  ci : disable freeBSD builds due to lack of VMs (ggerganov#3381)
  llama : custom attention mask + parallel decoding + no context swaps (ggerganov#3228)
  docs : mark code as Bash (ggerganov#3375)
  readme : add Mistral AI release 0.1 (ggerganov#3362)
  ggml-cuda : perform cublas fp16 matrix multiplication as fp16 (ggerganov#3370)
yusiwen pushed a commit to yusiwen/llama.cpp that referenced this pull request Oct 7, 2023