llama : revert n_threads_batch logic
ggml-ci
ggerganov committed Nov 27, 2023
1 parent e9b7a5c commit 87f4102
Showing 1 changed file (llama.cpp) with 1 addition and 1 deletion.
```diff
@@ -5433,7 +5433,7 @@ static int llama_decode_internal(

     GGML_ASSERT(n_tokens <= n_batch);

-    int n_threads = n_tokens < 32 ? cparams.n_threads : cparams.n_threads_batch;
+    int n_threads = n_tokens == 1 ? cparams.n_threads : cparams.n_threads_batch;
     GGML_ASSERT((!batch.token && batch.embd) || (batch.token && !batch.embd)); // NOLINT

     const int64_t t_start_us = ggml_time_us();
```
