Feat: Separate each agent's DB to reduce database size and speed up queries. #40
Open
Arrosam wants to merge 25 commits into adoresever:main from
Conversation
OpenClaw 2026.03.28 passes a `prompt` field to `assemble()` for prompt-aware retrieval. This update adds the field to the method signature and uses it to perform a fresh, accurate recall at assembly time — falling back to the pre-cached result from `before_agent_start` when the field is absent (older OpenClaw versions). Bump version to 1.6.0. https://claude.ai/code/session_01VeZ6kyGGpBYn5H7tTtToLx
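The fallback described above can be sketched as follows. This is a minimal illustration, not the plugin's actual API: the names `assemble`, `recallForPrompt`, and `Session.recalled` are assumptions, and the real recall is asynchronous and hits the graph store.

```typescript
// Hypothetical sketch of the prompt-aware assemble() fallback.
type MemNode = { id: string; text: string };

interface Session {
  recalled: MemNode[]; // result pre-cached by before_agent_start
}

// Stand-in for the real graph recall performed at assembly time.
function recallForPrompt(prompt: string): MemNode[] {
  return prompt.length > 0 ? [{ id: "fresh", text: prompt }] : [];
}

function assemble(session: Session, prompt?: string): MemNode[] {
  if (prompt === undefined) {
    // Older OpenClaw versions omit the field: use the cached recall.
    return session.recalled;
  }
  // Newer versions (2026.03.28+): fresh, accurate recall at assembly time.
  return recallForPrompt(prompt);
}
```

The key point is that the branch is on the *presence* of the field, so the plugin stays compatible with both OpenClaw generations.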
…ty-dlBHr Claude/update openclaw compatibility dl b hr
Security scanner (2026.03.28 fails-closed by default):
- embed.ts: remove process.env.GM_DEBUG access inside network-calling
function; drop the conditional debug log in the probe catch block
- llm.ts: remove process.env.ANTHROPIC_API_KEY from inside the fetch
function; add anthropicApiKey parameter instead
- index.ts: read ANTHROPIC_API_KEY at plugin registration time and pass
it through to createCompleteFn so env access and network send are
never co-located in the same file
Hook pack validation:
- package.json: add "hooks": {} to the "openclaw" manifest field so the
validator does not reject the package as a malformed hook pack
https://claude.ai/code/session_01VeZ6kyGGpBYn5H7tTtToLx
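The env/network separation above can be sketched like this. The names `createCompleteFn` and `registerPlugin` follow the description, but the bodies are illustrative stand-ins (the real `complete` performs a fetch with the key in headers; it is omitted here so the sketch stays offline and self-contained).

```typescript
// Sketch of the fail-closed layering: the env read happens once at
// plugin registration (index.ts side), and the module that talks to the
// network only ever receives the key as a plain parameter (llm.ts side).

// llm.ts side: no process.env access anywhere near the network call.
function createCompleteFn(anthropicApiKey: string) {
  return function complete(prompt: string): string {
    // A real implementation would fetch() here with the key in headers;
    // returning a marker string keeps the sketch runnable offline.
    return `ok:${prompt.length}:${anthropicApiKey.slice(0, 2)}`;
  };
}

// index.ts side: env access co-located with registration, not with fetch.
function registerPlugin(env: Record<string, string | undefined>) {
  const key = env["ANTHROPIC_API_KEY"];
  if (!key) throw new Error("ANTHROPIC_API_KEY is not set");
  return createCompleteFn(key);
}
```

Because the env read and the network send now live in different files, a scanner that flags their co-location has nothing to match.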
Previously, assemble() only updated recalled[] when freshRec had nodes, leaving old cache entries in place when the current prompt had no graph matches. This caused stale nodes from a prior turn to bleed into the assembled context. Always overwrite the session cache with the fresh recall result, regardless of whether it is empty. The catch path is unchanged — a network/DB error still falls through to the cached value. https://claude.ai/code/session_01VeZ6kyGGpBYn5H7tTtToLx
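A minimal sketch of the overwrite fix, with hypothetical names (`refreshRecall`, the session-keyed `Map`) standing in for the plugin's actual cache plumbing:

```typescript
// The session cache is always replaced by the fresh recall, even when
// the fresh result is empty; only the error path falls back to cache.
type Recall = { nodes: string[] };

function refreshRecall(
  cache: Map<string, Recall>,
  sessionId: string,
  fetchFresh: () => Recall, // may throw on network/DB error
): Recall {
  try {
    const freshRec = fetchFresh();
    // Overwrite unconditionally so an empty fresh result clears stale
    // nodes from a prior turn instead of leaving them in place.
    cache.set(sessionId, freshRec);
    return freshRec;
  } catch {
    // catch path unchanged: fall back to the cached value
    return cache.get(sessionId) ?? { nodes: [] };
  }
}
```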
Use anthropicApiKey directly instead of aliasing it to key. https://claude.ai/code/session_01VeZ6kyGGpBYn5H7tTtToLx
The previous fix removed process.env.GM_DEBUG but also silently dropped the error. Log it unconditionally via console.error so embedding failures are always visible. https://claude.ai/code/session_01VeZ6kyGGpBYn5H7tTtToLx
Three bugs caused garbled agent responses after LLM API failures:
1. llm.ts Path B (Anthropic) returned "" on empty content instead of
throwing, unlike Path A. This masked API errors and let the caller
treat the empty response as a successful extraction.
2. extract.ts parseExtract() caught JSON parse failures and returned
{ nodes: [], edges: [] } — indistinguishable from "nothing to
extract." Callers then called markExtracted(), permanently marking
messages as processed even though nothing was extracted. Knowledge
was silently and irreversibly lost.
3. index.ts afterTurn() re-ingested messages already stored by
ingest(), creating duplicates in gm_messages. Duplicate messages
in the extraction prompt confused the LLM.
Fixes:
- Path B now throws on empty content, matching Path A
- parseExtract re-throws parse errors so runTurnExtract and compact
skip markExtracted on failure (messages retry next turn)
- afterTurn no longer re-ingests messages
https://claude.ai/code/session_01VeZ6kyGGpBYn5H7tTtToLx
…ty-dlBHr Claude/update openclaw compatibility dl b hr
Feat: Separate each agent's memory DB
Isolate in-memory caches by DatabaseSyncInstance. Replace global graph cache with a Map keyed by db in src/graph/pagerank.ts (loadGraph now stores per-db GraphStructure; invalidateGraphCache(db) deletes the entry). Update callers in src/graph/maintenance.ts to pass db into invalidateGraphCache. Also switch FTS5 availability caching to a per-db Map in src/store/store.ts to avoid cross-database cache pollution. These changes allow multiple DB instances to be used safely without sharing stale cache state.
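The per-db cache shape can be sketched as follows; `GraphStructure` and the db type are stand-ins for the real types in src/graph/pagerank.ts, and loading is reduced to a placeholder.

```typescript
// Global singleton cache replaced by a Map keyed by the db instance,
// so two databases never see each other's cached graph.
type DbInstance = object;
type GraphStructure = { edges: string[] };

const graphCache = new Map<DbInstance, GraphStructure>();

function loadGraph(db: DbInstance): GraphStructure {
  let g = graphCache.get(db);
  if (g === undefined) {
    g = { edges: [] }; // stand-in for actually reading the graph from db
    graphCache.set(db, g);
  }
  return g;
}

// Maintenance code invalidates only the entry for the db it touched.
function invalidateGraphCache(db: DbInstance): void {
  graphCache.delete(db);
}
```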
Replace Map with WeakMap for caches keyed by DatabaseSyncInstance in src/graph/pagerank.ts and src/store/store.ts. This allows database instances used as keys to be garbage-collected and helps prevent memory leaks while keeping existing cache behavior (e.g., CACHE_TTL) unchanged.
llm: remove anthropicApiKey param and read ANTHROPIC_API_KEY from process.env instead; throw if missing so callers rely on env configuration. db: simplify extension check in resolveAgentDbPath. In getDb, wrap PRAGMA and migrate calls in a try/catch to close the Database on error and rethrow, preventing leaked DB handles when initialization fails.
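The getDb hardening can be sketched like this; the `DbHandle` interface is a stand-in for the real Database type, and `open`/`migrate` are injected so the sketch stays self-contained.

```typescript
// If PRAGMA or migration fails after the database is opened, close the
// handle before rethrowing so a failed initialization cannot leak it.
interface DbHandle {
  exec(sql: string): void;
  close(): void;
}

function getDb(
  open: () => DbHandle,
  migrate: (db: DbHandle) => void,
): DbHandle {
  const db = open();
  try {
    db.exec("PRAGMA journal_mode = WAL");
    migrate(db);
  } catch (err) {
    db.close(); // release the handle before propagating the failure
    throw err;
  }
  return db;
}
```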
- Updated embedding service documentation to clarify the use of apiKey and baseURL, including fallback mechanisms to reuse LLM settings. - Introduced a new function, resolveEffectiveEmbeddingConfig, to streamline the merging of LLM and embedding configurations. - Modified createEmbedFn to utilize the new configuration resolution logic, ensuring proper handling of API keys and base URLs. - Adjusted logging messages to provide clearer information regarding embedding service initialization and fallback behavior.
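The merge logic can be sketched as below. The field names (`apiKey`, `baseURL`) follow the description; the exact function shape is an assumption.

```typescript
// Hypothetical sketch of resolveEffectiveEmbeddingConfig: explicit
// embedding settings win, otherwise the LLM's values are reused.
interface ServiceConfig {
  apiKey?: string;
  baseURL?: string;
}

function resolveEffectiveEmbeddingConfig(
  llm: ServiceConfig,
  embedding: ServiceConfig,
): Required<ServiceConfig> {
  const apiKey = embedding.apiKey ?? llm.apiKey;
  const baseURL = embedding.baseURL ?? llm.baseURL;
  if (!apiKey || !baseURL) {
    throw new Error(
      "embedding config incomplete: set apiKey/baseURL on embedding or llm",
    );
  }
  return { apiKey, baseURL };
}
```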
Change package.json hooks to an array and refactor src/engine/llm.ts to only use LLM credentials from plugin config (plugins.entries.graph-memory.config.llm) instead of env vars. Require llm.apiKey and llm.baseURL, add helpers to detect Anthropic base URLs and construct its messages endpoint, and route requests accordingly: Anthropic → Messages API with x-api-key + anthropic-version and adjusted payload; otherwise call OpenAI-compatible /chat/completions with Bearer auth. Remove reliance on ANTHROPIC_API_KEY, tighten error messages, preserve fetch retry/timeout behavior, and mark the provider parameter as unused (_provider).
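The routing can be sketched as follows. The helper names are assumptions, and the sketch naively appends paths to the base URL (it does not handle a base URL that already contains `/v1`); the Anthropic header names (`x-api-key`, `anthropic-version`) match that API's documented authentication.

```typescript
// Anthropic-looking base URLs go to the Messages API with x-api-key and
// anthropic-version; anything else is treated as an OpenAI-compatible
// endpoint with Bearer auth.
function isAnthropicBaseUrl(baseURL: string): boolean {
  const host = new URL(baseURL).hostname;
  return host === "anthropic.com" || host.endsWith(".anthropic.com");
}

function buildChatRequest(
  baseURL: string,
  apiKey: string,
): { url: string; headers: Record<string, string> } {
  const root = baseURL.replace(/\/+$/, ""); // trim trailing slashes
  if (isAnthropicBaseUrl(baseURL)) {
    return {
      url: `${root}/v1/messages`,
      headers: { "x-api-key": apiKey, "anthropic-version": "2023-06-01" },
    };
  }
  return {
    url: `${root}/chat/completions`,
    headers: { Authorization: `Bearer ${apiKey}` },
  };
}
```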
Bugfix:
Fixed the security error raised at install time. LLM and Embedding now share the same URL and API key by default, and each can also be configured separately.