fix: improve activeModelEntries consistency and validation#853

Open
hobostay wants to merge 1 commit into NVIDIA:main from hobostay:fix/model-entry-consistency

Conversation

@hobostay hobostay commented Mar 25, 2026

Summary

This PR improves the activeModelEntries function in nemoclaw/src/index.ts to:

  1. Add consistency: All model IDs now use the inference/ prefix (both default and configured models)
  2. Add validation: Explicit check for non-empty model string before using configured model
  3. Improve clarity: Extract configuredModel variable for better code readability

Changes

Before

  • Default models had IDs like nvidia/nemotron-3-super-120b-a12b
  • Configured models had IDs like inference/moonshotai/kimi-k2.5
  • Inconsistent format between default and configured models
  • No explicit check for empty strings

After

  • All models consistently use the inference/ prefix
  • Default models: inference/nvidia/nemotron-3-super-120b-a12b
  • Configured models: inference/moonshotai/kimi-k2.5
  • Added .trim().length > 0 check to handle empty/whitespace strings

Testing

The change is defensive and maintains backward compatibility:

  • When onboardCfg is null/undefined or model is empty → returns default models
  • When a valid model is configured → returns that model with inference/ prefix
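The selection behavior described above can be sketched as a self-contained TypeScript snippet. This is a reconstruction from the PR description only; the `ModelEntry` and `OnboardConfig` shapes and the default entry are assumptions, not the actual nemoclaw source:

```typescript
interface ModelEntry {
  id: string;
  label: string;
}

interface OnboardConfig {
  model?: string;
}

// Sketch of the selection logic this PR describes: a non-empty configured
// model wins; otherwise the defaults (now carrying the same `inference/`
// prefix as configured entries) are returned.
function activeModelEntries(onboardCfg?: OnboardConfig | null): ModelEntry[] {
  const configuredModel = onboardCfg?.model;
  if (configuredModel && configuredModel.trim().length > 0) {
    return [{ id: `inference/${configuredModel}`, label: configuredModel }];
  }
  return [
    {
      id: "inference/nvidia/nemotron-3-super-120b-a12b",
      label: "Nemotron 3 Super 120B (March 2026)",
    },
  ];
}
```

With this shape, a null config, an empty string, and a whitespace-only string all fall through to the defaults, matching the backward-compatibility claim above.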

Related Issues

This addresses potential edge cases in model registration that could cause confusion when:

  • A config object exists but the model property is undefined/empty
  • Model entries have inconsistent ID formats

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Summary by CodeRabbit

  • Refactor
    • Model selection logic updated to recognize configured models; default model identifiers and labels standardized.

- Add explicit check for non-empty model string before using configured model
- Add 'inference/' prefix to default models for consistency with configured models
- Improve code clarity by extracting configuredModel variable
- Add defensive validation against empty/whitespace model strings

This addresses potential edge cases where:
1. A config object exists but model is undefined/empty
2. Model entry IDs are inconsistent (default vs configured)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

coderabbitai bot commented Mar 25, 2026

📝 Walkthrough

The activeModelEntries function in index.ts was refactored to prioritize a configured model when onboardCfg.model is present and non-empty. It returns a model entry with inference/${configuredModel} as the ID; otherwise it returns predefined defaults with explicit inference/nvidia/... format IDs. Inline comments document the selection behavior.

Changes

Model Selection Logic (nemoclaw/src/index.ts):
Refactored activeModelEntries to select a single configured model when available, using the explicit inference/nvidia/... ID format for defaults. Added clarifying inline comments documenting the configured vs. default selection behavior.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

Poem

🐰 A model to choose, a default in place,
Configuration blooms with explicit face,
inference/nvidia now shines so bright,
Comments guide readers through logic's light! ✨

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check ✅ Passed: The title accurately describes the main change: improving consistency and validation in the activeModelEntries function.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@nemoclaw/src/index.ts`:
- Around line 140-145: The code uses the untrimmed configuredModel when building
the menu entry (id: `inference/${configuredModel}` and label: configuredModel)
even though you validate with configuredModel.trim(), so trim and reuse the
cleaned value: compute a trimmedModel = configuredModel.trim() after reading
onboardCfg?.model, check trimmedModel.length > 0, and then use trimmedModel for
the id and label (e.g., `inference/${trimmedModel}` and label: trimmedModel) to
avoid leading/trailing whitespace in IDs and labels.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 3f7487c3-81e0-4dfc-ba6b-5783032f6f7b

📥 Commits

Reviewing files that changed from the base of the PR and between 6964d78 and 72b2a31.

📒 Files selected for processing (1)
  • nemoclaw/src/index.ts

Comment on lines +140 to +145

  const configuredModel = onboardCfg?.model;
  if (configuredModel && configuredModel.trim().length > 0) {
    return [
      {
-       id: "nvidia/nemotron-3-super-120b-a12b",
-       label: "Nemotron 3 Super 120B (March 2026)",
+       id: `inference/${configuredModel}`,
+       label: configuredModel,

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for any test fixtures or config examples that might show the expected model format
rg -n "inference/" --type=ts --type=json -C2 | grep -i "model"

# Check how model values are being set in tests
ast-grep --pattern 'model: "$_"'

Repository: NVIDIA/NemoClaw

Length of output: 720


🏁 Script executed:

cat -n nemoclaw/src/index.ts | sed -n '130,180p'

Repository: NVIDIA/NemoClaw

Length of output: 1730


🏁 Script executed:

# Find how loadOnboardConfig loads the model value
rg -n "loadOnboardConfig" --type=ts -A 20 | grep -E "(model|trim|load)" -B 2 -A 2

# Check the definition of loadOnboardConfig
ast-grep --pattern 'function loadOnboardConfig($_) { $$$ }'

Repository: NVIDIA/NemoClaw

Length of output: 8081


🏁 Script executed:

cat -n nemoclaw/src/onboard/config.ts | sed -n '85,115p'

Repository: NVIDIA/NemoClaw

Length of output: 852


🏁 Script executed:

# Check the NemoClawOnboardConfig type definition
rg -n "NemoClawOnboardConfig" --type=ts -B 3 -A 10 | head -60

# Check if there's any UI form that accepts model input
rg -n "model" nemoclaw/src/onboard --type=ts | grep -E "(input|prompt|stdin|read)"

Repository: NVIDIA/NemoClaw

Length of output: 4114


Trim the configured model value to prevent whitespace in ID and label.

The validation checks configuredModel.trim().length > 0, but the untrimmed value is used in the ID and label. If the model config has leading/trailing whitespace (e.g., from manual JSON editing), it would create an invalid ID like "inference/ nvidia/model ".

Apply .trim() at assignment:

Proposed fix

  const configuredModel = onboardCfg?.model;
  if (configuredModel && configuredModel.trim().length > 0) {
    return [
      {
-       id: `inference/${configuredModel}`,
-       label: configuredModel,
+       id: `inference/${configuredModel.trim()}`,
+       label: configuredModel.trim(),
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

- const configuredModel = onboardCfg?.model;
- if (configuredModel && configuredModel.trim().length > 0) {
-   return [
-     {
-       id: `inference/${configuredModel}`,
-       label: configuredModel,
+ const configuredModel = onboardCfg?.model;
+ if (configuredModel && configuredModel.trim().length > 0) {
+   return [
+     {
+       id: `inference/${configuredModel.trim()}`,
+       label: configuredModel.trim(),
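The reviewer's point can be illustrated with a small standalone sketch (a hypothetical helper, not the actual nemoclaw code): trimming once at assignment keeps the validation and the emitted ID and label in sync, so a hand-edited config value like " nvidia/model " cannot leak whitespace into the entry.

```typescript
// Hypothetical helper sketching the suggested fix: trim once, then reuse
// the cleaned value for both the emptiness check and the emitted entry.
function entryForConfiguredModel(
  model: string | undefined
): { id: string; label: string } | null {
  const trimmedModel = model?.trim() ?? "";
  if (trimmedModel.length === 0) {
    return null; // caller falls back to the default entries
  }
  return { id: `inference/${trimmedModel}`, label: trimmedModel };
}
```

Validating the raw value but emitting the trimmed one (or vice versa) is the mismatch the review flags; deriving everything from one cleaned variable removes that class of bug.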

