Fix local embedding context rebuilds and batch safety #7

Open

andinux wants to merge 2 commits into main from fix/local-embedding-batch-safety

Fix local embedding context rebuilds and batch safety#7
andinux wants to merge 2 commits into
mainfrom
fix/local-embedding-batch-safety

Conversation


@andinux andinux commented May 14, 2026

Summary

Fixes local embedding stability and context handling for sqlmem / sqlite-memory.

Changes

  • Skip whitespace-only semantic chunks before embedding.
  • Size local llama contexts from max_tokens + overlay_tokens.
  • Configure n_ctx, n_batch, n_ubatch, and token buffers consistently to avoid llama.cpp encoder assertions.
  • Use reusable llama batches with explicit sequence metadata.
  • Truncate over-capacity tokenization safely instead of relying on llama.cpp failures.
  • Move llama diagnostics out of process-global logger user_data into thread-local storage.
  • Rebuild the local engine when max_tokens or overlay_tokens changes after memory_set_model.
  • Roll back option updates if local engine rebuild fails.
  • Clear cached embeddings for the active local provider/model after context rebuilds.
  • Add regression coverage for whitespace-only chunks and stale logger user_data.
  • Bump extension version to 1.2.1.

Why

Local embedding models could crash or return stale results in several cases:

  • whitespace-only chunks could reach the embedding path;
  • encoder models could receive chunks larger than n_ubatch;
  • llama’s process-global logger could retain stale per-engine state;
  • changing token options after model initialization could leave the local engine sized for old chunk limits;
  • cached embeddings could be reused after a local context rebuild even though tokenization/truncation may change.

andinux added 2 commits May 14, 2026 12:56
Prevent empty semantic chunks from reaching embedding providers, where they can produce invalid zero-dimensional results and pollute the vault or cache. The check lives in the parser callback path so all embedding providers and SQLite entry points share the same filtering behavior before provider-specific code runs.
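The shared filtering step can be sketched as a small helper. This is a minimal sketch, and `chunk_is_whitespace_only` is a hypothetical name, not the extension's actual function; the real check lives in the parser callback path described above.

```c
#include <ctype.h>
#include <stdbool.h>
#include <stddef.h>

// Return true when a chunk contains no non-whitespace bytes and should
// be skipped before it reaches any embedding provider. Hypothetical
// helper; mirrors the filtering behavior, not the extension's code.
static bool chunk_is_whitespace_only(const char *text, size_t len) {
    for (size_t i = 0; i < len; i++) {
        if (!isspace((unsigned char)text[i])) {
            return false;
        }
    }
    return true;  // empty or all-whitespace
}
```

Running the check once in the parser callback, before any provider-specific code, is what keeps every provider and SQLite entry point consistent.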
Configure local llama contexts from max_tokens plus overlay_tokens so encoder inputs fit the chunk sizes produced by the parser. The local engine now sizes n_ctx, n_batch, n_ubatch, and token buffers together, caps the context to bounded/model-supported values, prepares reusable batches with sequence metadata, and truncates over-capacity tokenization explicitly instead of relying on llama.cpp assertions.
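The sizing rule can be sketched as plain arithmetic. All names below are illustrative (`size_local_context`, `ctx_sizing` are not the extension's symbols); the real values feed llama.cpp's `n_ctx`, `n_batch`, and `n_ubatch` context parameters, where encoder models require each input to fit within one ubatch.

```c
#include <stdint.h>

// Illustrative sizing for a local llama context so encoder inputs
// always fit. max_tokens/overlay_tokens come from the extension's
// options; n_ctx_train is the model's trained context length.
typedef struct {
    uint32_t n_ctx;    // context window handed to llama.cpp
    uint32_t n_batch;  // logical batch size
    uint32_t n_ubatch; // physical batch size; encoders need the whole
                       // input to fit in a single ubatch
} ctx_sizing;

static uint32_t clamp_u32(uint32_t v, uint32_t lo, uint32_t hi) {
    return v < lo ? lo : (v > hi ? hi : v);
}

static ctx_sizing size_local_context(uint32_t max_tokens,
                                     uint32_t overlay_tokens,
                                     uint32_t n_ctx_train) {
    // Largest chunk the parser can produce.
    uint32_t need = max_tokens + overlay_tokens;
    ctx_sizing s;
    // Cap at the model's trained context; oversize input is handled
    // later by explicit truncation, never by a llama.cpp assertion.
    s.n_ctx = clamp_u32(need, 1, n_ctx_train);
    // Keep batch sizes in lock-step with n_ctx so the encoder
    // assertion (n_tokens <= n_ubatch) cannot fire.
    s.n_batch = s.n_ctx;
    s.n_ubatch = s.n_ctx;
    return s;
}

// Over-capacity tokenization is clipped explicitly.
static uint32_t truncate_tokens(uint32_t n_tokens, uint32_t n_ctx) {
    return n_tokens > n_ctx ? n_ctx : n_tokens;
}
```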

Move llama diagnostics out of per-engine logger user_data because llama_log_set installs a process-global callback. Thread-local diagnostic capture keeps load/context errors useful without writing through stale engine pointers after another connection replaces or frees an engine.
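The thread-local capture pattern can be sketched as follows. The callback shape loosely mirrors the one `llama_log_set` installs; the names and the level filtering are illustrative, and notably no engine pointer is passed through `user_data`, so nothing can dangle when another connection replaces an engine.

```c
#include <string.h>

// llama_log_set installs one process-global callback, so stashing an
// engine pointer in user_data is unsafe once another connection frees
// that engine. Capture diagnostics into thread-local storage instead.
static _Thread_local char tls_last_log[256];

// Illustrative callback; the real one would also filter by log level.
static void local_log_callback(int level, const char *text,
                               void *user_data) {
    (void)level;
    (void)user_data;  // deliberately unused: no per-engine state
    if (text) {
        strncpy(tls_last_log, text, sizeof(tls_last_log) - 1);
        tls_last_log[sizeof(tls_last_log) - 1] = '\0';
    }
}

// Each thread reads back only the diagnostics it produced.
static const char *local_last_log(void) { return tls_last_log; }
```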

Rebuild the local engine when max_tokens or overlay_tokens changes because those options alter chunk sizes after memory_set_model. The option update now runs under a savepoint, rolls back in-memory and persisted settings on rebuild failure, and keeps the previous engine alive unless the replacement is fully ready.
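The in-memory half of that rollback can be sketched as snapshot-then-restore. All names are hypothetical, and `rebuild_engine` is a stand-in for the real engine construction; the actual change additionally wraps the persisted settings in an SQL savepoint.

```c
#include <stdbool.h>

// Illustrative options struct; the real extension tracks more state.
typedef struct {
    int max_tokens;
    int overlay_tokens;
} options;

// Stand-in for the real rebuild so the sketch is self-contained.
static bool rebuild_should_fail = false;
static bool rebuild_engine(const options *opts) {
    (void)opts;
    return !rebuild_should_fail;
}

// Apply new token options; on rebuild failure, restore the previous
// options so the still-alive old engine and the settings stay in sync.
static bool set_token_options(options *current, options desired) {
    options previous = *current;  // snapshot for rollback
    *current = desired;
    if (!rebuild_engine(current)) {
        *current = previous;  // roll back in-memory state
        return false;         // caller also rolls back the savepoint
    }
    return true;
}
```

Keeping the previous engine alive until the replacement is fully ready means a failed rebuild leaves the connection in exactly its prior working state.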

Invalidate cached embeddings for the active local provider/model after a successful context rebuild. The cache key does not include local context sizing, so clearing that provider/model avoids reusing stale embeddings, token counts, or truncation metadata generated under the previous context window.
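The invalidation scope can be sketched over a toy in-memory cache; the real cache is SQLite-backed and these structures are purely illustrative. The point is the key: only entries for the active provider/model pair are dropped.

```c
#include <string.h>

// Illustrative cache entry; the extension's cache lives in SQLite.
typedef struct {
    const char *provider;
    const char *model;
    int live;  // 1 = usable, 0 = invalidated
} cache_entry;

// Drop every cached embedding for one provider/model pair. The cache
// key omits local context sizing, so after a context rebuild this is
// what prevents reuse of stale embeddings or truncation metadata.
static int invalidate_provider_model(cache_entry *entries, int n,
                                     const char *provider,
                                     const char *model) {
    int removed = 0;
    for (int i = 0; i < n; i++) {
        if (entries[i].live &&
            strcmp(entries[i].provider, provider) == 0 &&
            strcmp(entries[i].model, model) == 0) {
            entries[i].live = 0;  // real code deletes the rows
            removed++;
        }
    }
    return removed;
}
```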

Add a logger regression test for stale global logger user_data and bump the extension version to 1.2.1.
@andinux andinux requested a review from marcobambini May 14, 2026 20:08
