Merged
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -10,6 +10,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - feat: Update llama.cpp to ggerganov/llama.cpp@5d6f18a63 and sync Python bindings
 - fix: Correct batched embedding outputs for multi-sequence `embed()` calls by @Anai-Guo in #2205
 - fix: Configure embedding contexts with enough sequence slots for batched `embed()` calls
+- fix: Mark all embedding input tokens as outputs to avoid llama.cpp override warnings by @Anai-Guo in #2212
 
 ## [0.3.22]
8 changes: 7 additions & 1 deletion llama_cpp/llama.py
@@ -1040,7 +1040,13 @@ def embed(
 
         # get pooling information
         pooling_type = self.pooling_type()
-        logits_all = pooling_type == llama_cpp.LLAMA_POOLING_TYPE_NONE
+        # In embedding mode every input token must be marked as an output, regardless of
+        # pooling type. llama.cpp would otherwise override per-token `logits[i]` and emit
+        # "embeddings required but some input tokens were not marked as outputs ->
+        # overriding" once per input. Pooling NONE vs MEAN/CLS only changes how the
+        # per-token outputs are read back (see decode_batch below), not whether they are
+        # produced. See abetlen/llama-cpp-python#2208.
+        logits_all = True
 
         if self.context_params.embeddings is False:
            raise RuntimeError(
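The effect of the one-line change can be sketched without loading a model. The sketch below is not the library's actual API: `mark_outputs` and `needs_override_warning` are hypothetical stand-ins for how `logits_all` becomes per-token output flags in the batch and for llama.cpp's check that, in embedding mode, every input token is an output.

```python
# Hypothetical sketch of the logic this diff changes, not real llama.cpp API.
POOLING_NONE, POOLING_MEAN = 0, 1  # stand-ins for llama.cpp pooling enum values


def mark_outputs(n_tokens: int, logits_all: bool) -> list[bool]:
    """Per-token output flags as embed() builds them from logits_all."""
    return [logits_all] * n_tokens


def needs_override_warning(output_flags: list[bool]) -> bool:
    # In embedding mode llama.cpp requires every input token to be an
    # output; any unset flag triggers the per-input "overriding" warning.
    return not all(output_flags)


# Old behaviour: logits_all was tied to the pooling type, so MEAN/CLS
# inputs left their flags unset and warned once per input.
old_flags = mark_outputs(4, logits_all=(POOLING_MEAN == POOLING_NONE))
assert needs_override_warning(old_flags)

# New behaviour: always mark every token as an output; pooling only
# changes how the per-token embeddings are read back afterwards.
new_flags = mark_outputs(4, logits_all=True)
assert not needs_override_warning(new_flags)
```

With pooling NONE the caller reads one embedding per token; with MEAN/CLS the backend reduces them to one per sequence, but in both cases the per-token outputs must exist first, which is why the flag is now unconditional.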