fix: improve activeModelEntries consistency and validation#853
hobostay wants to merge 1 commit into NVIDIA:main from
Conversation
- Add explicit check for non-empty model string before using configured model
- Add `inference/` prefix to default models for consistency with configured models
- Improve code clarity by extracting a `configuredModel` variable
- Add defensive validation against empty/whitespace model strings

This addresses potential edge cases where:
1. A config object exists but `model` is undefined/empty
2. Model entry IDs are inconsistent (default vs. configured)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
📝 Walkthrough

Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@nemoclaw/src/index.ts`:
- Around line 140-145: The code uses the untrimmed configuredModel when building
the menu entry (id: `inference/${configuredModel}` and label: configuredModel)
even though you validate with configuredModel.trim(), so trim and reuse the
cleaned value: compute a trimmedModel = configuredModel.trim() after reading
onboardCfg?.model, check trimmedModel.length > 0, and then use trimmedModel for
the id and label (e.g., `inference/${trimmedModel}` and label: trimmedModel) to
avoid leading/trailing whitespace in IDs and labels.
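As a minimal sketch of the refactor this prompt describes — trim once, validate the trimmed value, and reuse it for both `id` and `label`. The `ModelEntry` and `OnboardConfig` shapes and the default entry here are assumptions for illustration, not the repository's actual definitions:

```typescript
// Hypothetical sketch of the suggested refactor; types are illustrative.
interface ModelEntry {
  id: string;
  label: string;
}

interface OnboardConfig {
  model?: string;
}

const DEFAULT_ENTRIES: ModelEntry[] = [
  {
    id: "inference/nvidia/nemotron-3-super-120b-a12b",
    label: "Nemotron 3 Super 120B (March 2026)",
  },
];

function activeModelEntries(onboardCfg?: OnboardConfig): ModelEntry[] {
  // Compute the trimmed value once; a missing or whitespace-only model
  // falls back to the defaults instead of producing an id like "inference/  ".
  const trimmedModel = onboardCfg?.model?.trim();
  if (trimmedModel && trimmedModel.length > 0) {
    return [{ id: `inference/${trimmedModel}`, label: trimmedModel }];
  }
  return DEFAULT_ENTRIES;
}
```

With this shape there is only one place the model string is cleaned, so the id and label can never disagree with the validation.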
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 3f7487c3-81e0-4dfc-ba6b-5783032f6f7b
📒 Files selected for processing (1)
nemoclaw/src/index.ts
```diff
 const configuredModel = onboardCfg?.model;
 if (configuredModel && configuredModel.trim().length > 0) {
   return [
     {
-      id: "nvidia/nemotron-3-super-120b-a12b",
-      label: "Nemotron 3 Super 120B (March 2026)",
+      id: `inference/${configuredModel}`,
+      label: configuredModel,
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Search for any test fixtures or config examples that might show the expected model format
rg -n "inference/" --type=ts --type=json -C2 | grep -i "model"

# Check how model values are being set in tests
ast-grep --pattern 'model: "$_"'
```

Repository: NVIDIA/NemoClaw
Length of output: 720
🏁 Script executed:

```shell
cat -n nemoclaw/src/index.ts | sed -n '130,180p'
```

Repository: NVIDIA/NemoClaw
Length of output: 1730
🏁 Script executed:

```shell
# Find how loadOnboardConfig loads the model value
rg -n "loadOnboardConfig" --type=ts -A 20 | grep -E "(model|trim|load)" -B 2 -A 2

# Check the definition of loadOnboardConfig
ast-grep --pattern 'function loadOnboardConfig($_) { $$$ }'
```

Repository: NVIDIA/NemoClaw
Length of output: 8081
🏁 Script executed:
cat -n nemoclaw/src/onboard/config.ts | sed -n '85,115p'Repository: NVIDIA/NemoClaw
Length of output: 852
🏁 Script executed:

```shell
# Check the NemoClawOnboardConfig type definition
rg -n "NemoClawOnboardConfig" --type=ts -B 3 -A 10 | head -60

# Check if there's any UI form that accepts model input
rg -n "model" nemoclaw/src/onboard --type=ts | grep -E "(input|prompt|stdin|read)"
```

Repository: NVIDIA/NemoClaw
Length of output: 4114
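An alternative worth noting, which the verification above probes for: normalize the model value at load time so every consumer sees a clean string. This is a hypothetical sketch of such a loader step — the real `loadOnboardConfig` in `nemoclaw/src/onboard/config.ts` may be shaped differently:

```typescript
// Hypothetical normalization step for an onboarding config loader.
// The OnboardConfig shape is an assumption for illustration.
interface OnboardConfig {
  model?: string;
}

function normalizeOnboardConfig(raw: unknown): OnboardConfig {
  const cfg = (raw ?? {}) as { model?: unknown };
  // Coerce to string, trim, and drop whitespace-only values entirely,
  // so call sites never need their own .trim() checks.
  const model = typeof cfg.model === "string" ? cfg.model.trim() : "";
  return model.length > 0 ? { model } : {};
}
```

Normalizing at the boundary is a design choice: it keeps the cleanup in one place, at the cost of making the loader responsible for validation.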
**Trim the configured model value to prevent whitespace in the ID and label.**

The validation checks `configuredModel.trim().length > 0`, but the untrimmed value is used in the ID and label. If the model config has leading/trailing whitespace (e.g., from manual JSON editing), it would create an invalid ID like `inference/ nvidia/model `.
Apply `.trim()` at assignment:

```diff
 const configuredModel = onboardCfg?.model;
 if (configuredModel && configuredModel.trim().length > 0) {
   return [
     {
-      id: `inference/${configuredModel}`,
-      label: configuredModel,
+      id: `inference/${configuredModel.trim()}`,
+      label: configuredModel.trim(),
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
const configuredModel = onboardCfg?.model;
if (configuredModel && configuredModel.trim().length > 0) {
  return [
    {
      id: `inference/${configuredModel.trim()}`,
      label: configuredModel.trim(),
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@nemoclaw/src/index.ts` around lines 140 - 145, The code uses the untrimmed
configuredModel when building the menu entry (id: `inference/${configuredModel}`
and label: configuredModel) even though you validate with
configuredModel.trim(), so trim and reuse the cleaned value: compute a
trimmedModel = configuredModel.trim() after reading onboardCfg?.model, check
trimmedModel.length > 0, and then use trimmedModel for the id and label (e.g.,
`inference/${trimmedModel}` and label: trimmedModel) to avoid leading/trailing
whitespace in IDs and labels.
Summary

This PR improves the `activeModelEntries` function in `nemoclaw/src/index.ts` to:

- validate that the configured model string is non-empty before using it
- use the `inference/` prefix consistently (both default and configured models)
- extract a `configuredModel` variable for better code readability

Changes
Before

- Default model: `nvidia/nemotron-3-super-120b-a12b` (no `inference/` prefix)
- Configured model: `inference/moonshotai/kimi-k2.5`

After

- All entries use the `inference/` prefix: `inference/nvidia/nemotron-3-super-120b-a12b`, `inference/moonshotai/kimi-k2.5`
- Added a `.trim().length > 0` check to handle empty/whitespace strings

Testing
The change is defensive and maintains backward compatibility:
- If `onboardCfg` is null/undefined or the model is empty → returns the default models
- All returned entries carry the `inference/` prefix

Related Issues
This addresses potential edge cases in model registration that could cause confusion when:

- a config object exists but `model` is undefined/empty
- model entry IDs are inconsistent between default and configured models

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>