feat(experimental): small model mode for local inference #884
Draft
Conversation
When onboarding with a local inference provider (Ollama, vLLM), enable small model mode, which reduces system prompt overhead so small models have more context capacity for actual conversation.

Changes:
- Add explicit `ollama-local`/`vllm-local` cases to `getSandboxInferenceConfig` (sketched below)
- New `NEMOCLAW_SMALL_MODEL_MODE` build arg sets `bootstrapMaxChars=4000` and `bootstrapTotalMaxChars=8000` in `openclaw.json`
- Write compact `SOUL.md` and `AGENTS.md` workspace files at build time
- Log `[experimental]` during onboarding when small model mode is active

Ref: NVBUG 6018719
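A rough sketch of the explicit cases: the provider names come from this PR, but the return shape, field names, and endpoints below are assumptions for illustration (11434 and 8000 are the stock Ollama and vLLM ports):

```typescript
// Assumed shape; the real getSandboxInferenceConfig return type may differ.
type SandboxProvider = "ollama-local" | "vllm-local" | "hosted";

interface SandboxInferenceConfig {
  baseUrl: string;
  smallModelMode: boolean;
}

function getSandboxInferenceConfig(provider: SandboxProvider): SandboxInferenceConfig {
  switch (provider) {
    // Previously these fell through to `default`; explicit cases make the
    // local-inference path (and small model mode) intentional.
    case "ollama-local":
      return { baseUrl: "http://localhost:11434/v1", smallModelMode: true };
    case "vllm-local":
      return { baseUrl: "http://localhost:8000/v1", smallModelMode: true };
    default:
      return { baseUrl: "https://api.example.com/v1", smallModelMode: false };
  }
}
```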
Docker's parser interprets heredoc end markers as the end of the RUN instruction, causing lines after the first heredoc to be parsed as unknown Dockerfile instructions. Switch to printf with escaped newlines.
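A minimal sketch of the workaround as a single RUN instruction; the paths and file contents here are placeholders, not the PR's actual compact prompts:

```dockerfile
# Guarded by the build arg: only write the compact files in small model mode.
ARG NEMOCLAW_SMALL_MODEL_MODE
RUN if [ -n "$NEMOCLAW_SMALL_MODEL_MODE" ]; then \
      printf 'Compact persona for small local models.\n' > /workspace/SOUL.md && \
      printf '# Agents\nKeep instructions terse.\n' > /workspace/AGENTS.md ; \
    fi
```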
Summary
- Add explicit `ollama-local`/`vllm-local` cases in `getSandboxInferenceConfig()` (was relying on `default` fallthrough)
- `NEMOCLAW_SMALL_MODEL_MODE` build arg lowers bootstrap token budgets (`bootstrapMaxChars=4000`, `bootstrapTotalMaxChars=8000`) and writes compact workspace files (`SOUL.md`, `AGENTS.md`) at image build time; see the snippet below
- Log `[experimental]` during onboarding when active
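For reference, the two lowered budgets as they would appear in `openclaw.json`; the surrounding structure is assumed, only the key names and values come from this PR:

```json
{
  "bootstrapMaxChars": 4000,
  "bootstrapTotalMaxChars": 8000
}
```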
Context
NVBUG 6018719: with Ollama and qwen2.5:0.5b, inference runs but the answers come back as garbage, because OpenClaw's ~14KB+ default system prompt overwhelms the model's capacity. A/B testing showed the compact prompt saves ~1700 prompt tokens per turn, the difference between ~18 and ~30+ conversation turns before context exhaustion.
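The turn counts follow from a simple model: a conversation exhausts context when the system prompt plus accumulated history fills the window, so cutting the prompt by ~1700 tokens buys roughly 1700 / tokensPerTurn extra turns. A sketch with assumed constants; none of the numbers below are measurements from the bug:

```typescript
// Every constant here is an assumption picked to illustrate the relationship.
const contextWindow = 8192; // assumed effective context window (tokens)
const tokensPerTurn = 140;  // assumed average user + assistant tokens per turn

// Turns until systemPrompt + turns * tokensPerTurn fills the window.
const turnsBeforeExhaustion = (systemPromptTokens: number): number =>
  Math.floor((contextWindow - systemPromptTokens) / tokensPerTurn);

console.log(turnsBeforeExhaustion(5600)); // hypothetical default prompt -> 18
console.log(turnsBeforeExhaustion(3900)); // ~1700 tokens smaller        -> 30
```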
This is experimental — needs design review before graduating from draft.
Test plan