
fix(security): add rate limiting to Telegram bridge #865

Merged
cv merged 2 commits into NVIDIA:main from fdzdev:fix/telegram-rate-limit
Mar 30, 2026

Conversation

Contributor

@fdzdev fdzdev commented Mar 25, 2026

Summary

  • The Telegram bridge spawns a full SSH + agent session per message with zero throttling (CWE-770, NVBUG 6002809)
  • Each message triggers a cloud inference call — rapid messages cause cost amplification
  • Adds per-chat 5-second cooldown, per-chat busy guard (rejects while session active), and raises poll interval from 100ms to 1000ms
  • /start and /reset commands are unaffected (checked before rate limiter)
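
The cooldown and busy guard described above can be sketched as follows. This is a hypothetical illustration using the names from the PR description (`COOLDOWN_MS`, `lastMessageTime`, `busyChats`), not the actual code in `scripts/telegram-bridge.js`; the busy check is shown ahead of the cooldown check, matching the ordering that was ultimately merged.

```javascript
// Hypothetical sketch of the per-chat throttling described above; names
// follow the PR text, not necessarily the exact bridge code.
const COOLDOWN_MS = 5000;
const lastMessageTime = new Map(); // chatId -> epoch ms of last accepted message
const busyChats = new Set();       // chatIds with an agent session in flight

// Returns null if the message may proceed, otherwise a rejection reply.
// Callers must busyChats.delete(chatId) in a finally block when done.
function checkThrottle(chatId, now = Date.now()) {
  if (busyChats.has(chatId)) {
    return "Still processing your previous message.";
  }
  const last = lastMessageTime.get(chatId) || 0;
  if (now - last < COOLDOWN_MS) {
    const wait = Math.ceil((COOLDOWN_MS - (now - last)) / 1000);
    return `Please wait ${wait}s before sending another message.`;
  }
  lastMessageTime.set(chatId, now);
  busyChats.add(chatId);
  return null;
}
```

Because both maps are keyed by `chatId`, one chat's cooldown or active session never blocks another chat.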

Test plan

  • Send two messages <5s apart from same chat — second gets "Please wait Xs" reply
  • Send message while previous is still processing — gets "Still processing" reply
  • Messages from different chats are not blocked by each other
  • /start and /reset still work instantly regardless of cooldown
  • busyChats is cleaned up in finally even if agent call throws

Summary by CodeRabbit

  • New Features

    • Per-chat rate limiting with a 5-second cooldown to throttle repeated requests and reply “Please wait Ns…”
    • Sequential message processing per chat to prevent concurrent handling and reply “Still processing your previous message.”
  • Performance

    • Increased polling interval to reduce polling frequency and improve efficiency.

Contributor

coderabbitai Bot commented Mar 25, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: c3e93168-b10c-437a-97d0-f4d4c5526866

📥 Commits

Reviewing files that changed from the base of the PR and between 1195812 and 8d89b3f.

📒 Files selected for processing (1)
  • scripts/telegram-bridge.js
🚧 Files skipped from review as they are similar to previous changes (1)
  • scripts/telegram-bridge.js

📝 Walkthrough

Walkthrough

The Telegram bridge now enforces per-chat rate limiting (5s cooldown) and per-chat serialization to prevent concurrent processing. It tracks lastMessageTime and busyChats, sets/clears busy state reliably, and increases the poll reschedule delay from 100ms to 1000ms. No public API changes.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Telegram Bridge Concurrency Control**<br>`scripts/telegram-bridge.js` | Added per-chat 5s cooldown via `lastMessageTime`. Introduced `busyChats` to serialize per-chat processing and reply when busy. Ensured `busyChats` is set before agent execution and cleared in a `finally` block. Increased poll reschedule delay from 100ms to 1000ms. |
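
The set-before-execution / clear-in-`finally` lifecycle can be sketched in isolation; `handleMessage` and the injected `agent` callback below are hypothetical stand-ins for the bridge's real message handler and `runAgentInSandbox`.

```javascript
// Hypothetical sketch of the busy-flag lifecycle: the flag is set before
// agent execution and cleared in a finally block, so it cannot leak even
// when the agent call throws.
async function handleMessage(chatId, text, agent, busyChats) {
  busyChats.add(chatId);
  try {
    return await agent(text); // stand-in for runAgentInSandbox(text, chatId)
  } finally {
    busyChats.delete(chatId); // runs on success and on throw alike
  }
}
```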

Sequence Diagram

```mermaid
sequenceDiagram
    actor Telegram
    participant PollingLoop as "Polling Loop"
    participant ChatState as "Chat State\n(`lastMessageTime`, `busyChats`)"
    participant Agent

    Telegram->>PollingLoop: Deliver message (chatId)
    PollingLoop->>ChatState: Is chatId in busyChats?
    alt chatId busy
        PollingLoop->>Telegram: Send "Still processing your previous message."
    else chatId not busy
        PollingLoop->>ChatState: Check lastMessageTime cooldown
        alt cooldown active
            PollingLoop->>Telegram: Send "Please wait Ns..."
        else cooldown expired
            PollingLoop->>ChatState: Mark chatId as busy
            PollingLoop->>Agent: Execute agent for message
            Agent->>Agent: Process message and generate reply
            Agent->>ChatState: Update lastMessageTime
            PollingLoop->>ChatState: Clear chatId from busyChats (finally)
            Agent->>Telegram: Send reply
        end
    end
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Poem

🐰 I hopped the bridge to tend the chat,
Timers set to keep things pat.
Five seconds pause, one second to poll,
Busy flags guard the processing role,
Cleared at end — now carrots for all! 🥕

🚥 Pre-merge checks | ✅ 3

✅ Passed checks (3 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately summarizes the main change: adding rate limiting (cooldown and busy guards) to the Telegram bridge, which directly addresses the security/resource issue described in the PR. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%. |


Contributor

@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
scripts/telegram-bridge.js (1)

206-239: ⚠️ Potential issue | 🔴 Critical

Don't await the agent run inside poll().

busyChats only works if poll() keeps fetching updates while a chat is in flight. Here the agent call is still awaited inline, so follow-up messages are not examined until after the finally block clears the busy flag; once a run lasts longer than 5s, the queued message can start a second session instead of receiving the "Still processing" reply, and unrelated chats are blocked behind the same await. Dispatch the per-chat work without awaiting it in poll(), and keep the busy check ahead of the cooldown check.

Suggested direction
```diff
-        // Rate limiting: per-chat cooldown
-        const now = Date.now();
-        const lastTime = lastMessageTime.get(chatId) || 0;
-        if (now - lastTime < COOLDOWN_MS) {
-          const wait = Math.ceil((COOLDOWN_MS - (now - lastTime)) / 1000);
-          await sendMessage(chatId, `Please wait ${wait}s before sending another message.`, msg.message_id);
-          continue;
-        }
-
-        // Per-chat serialization: reject if this chat already has an active session
-        if (busyChats.has(chatId)) {
-          await sendMessage(chatId, "Still processing your previous message.", msg.message_id);
-          continue;
-        }
-
-        lastMessageTime.set(chatId, now);
-        busyChats.add(chatId);
-
-        // Send typing indicator
-        await sendTyping(chatId);
-
-        // Keep a typing indicator going while agent runs
-        const typingInterval = setInterval(() => sendTyping(chatId), 4000);
-
-        try {
-          const response = await runAgentInSandbox(msg.text, chatId);
-          clearInterval(typingInterval);
-          console.log(`[${chatId}] agent: ${response.slice(0, 100)}...`);
-          await sendMessage(chatId, response, msg.message_id);
-        } catch (err) {
-          clearInterval(typingInterval);
-          await sendMessage(chatId, `Error: ${err.message}`, msg.message_id);
-        } finally {
-          busyChats.delete(chatId);
-        }
+        // Per-chat serialization: reject if this chat already has an active session
+        if (busyChats.has(chatId)) {
+          await sendMessage(chatId, "Still processing your previous message.", msg.message_id);
+          continue;
+        }
+
+        // Rate limiting: per-chat cooldown
+        const now = Date.now();
+        const lastTime = lastMessageTime.get(chatId) || 0;
+        if (now - lastTime < COOLDOWN_MS) {
+          const wait = Math.ceil((COOLDOWN_MS - (now - lastTime)) / 1000);
+          await sendMessage(chatId, `Please wait ${wait}s before sending another message.`, msg.message_id);
+          continue;
+        }
+
+        lastMessageTime.set(chatId, now);
+        busyChats.add(chatId);
+
+        const handler = (async () => {
+          await sendTyping(chatId);
+          const typingInterval = setInterval(() => sendTyping(chatId), 4000);
+
+          try {
+            const response = await runAgentInSandbox(msg.text, chatId);
+            console.log(`[${chatId}] agent: ${response.slice(0, 100)}...`);
+            await sendMessage(chatId, response, msg.message_id);
+          } catch (err) {
+            await sendMessage(chatId, `Error: ${err.message}`, msg.message_id).catch(() => {});
+          } finally {
+            clearInterval(typingInterval);
+            busyChats.delete(chatId);
+          }
+        })();
+
+        handler.catch((err) => {
+          console.error(`[${chatId}] handler error:`, err.message);
+        });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/telegram-bridge.js` around lines 206 - 239, The polling loop
currently awaits runAgentInSandbox inside poll(), which blocks further update
processing and makes busyChats ineffective; change poll() to (1) check busyChats
before cooldown (move busyChats.has(chatId) above the COOLDOWN_MS check), (2)
dispatch the per-chat work as an unawaited async task so poll() can continue
fetching updates — create an async helper (or inline async IIFE) that sets
lastMessageTime, busyChats.add(chatId), starts sendTyping and the
typingInterval, calls runAgentInSandbox, handles sendMessage on success/error,
clears the typingInterval and finally removes busyChats; call that helper
without awaiting it from poll() so other updates are still processed.
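
The effect of dispatching per-chat work without awaiting it can be demonstrated in isolation. The snippet below is a standalone illustration, not bridge code: `runAgent` and its 50ms timer are stand-ins for the long-running agent call.

```javascript
// Standalone illustration of why an unawaited async IIFE keeps the poll
// loop responsive: poll() records both updates and finishes before either
// slow "agent" call resolves. runAgent is a stand-in for the real call.
const order = [];

function runAgent(chatId) {
  return new Promise((resolve) =>
    setTimeout(() => resolve(`reply:${chatId}`), 50));
}

async function poll(updates) {
  for (const chatId of updates) {
    order.push(`seen:${chatId}`);
    // Fire-and-forget: poll() moves straight on to the next update.
    (async () => {
      order.push(await runAgent(chatId));
    })().catch((err) => console.error(`[${chatId}] handler error:`, err.message));
  }
}

poll([1, 2]).then(() => order.push("poll-done"));
```

Both updates are seen and `poll()` completes before either reply arrives, which is exactly what lets the busy check reject a follow-up message mid-run.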

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: afab6a82-6eef-47a6-b42f-ef513c2fcce7

📥 Commits

Reviewing files that changed from the base of the PR and between da973d7 and 2e4de4f.

📒 Files selected for processing (1)
  • scripts/telegram-bridge.js

The bridge spawns an SSH + agent session per message with zero
throttling — a flood of messages causes resource exhaustion and
cost amplification via cloud inference (CWE-770, NVBUG 6002809).

Add per-chat 5s cooldown, per-chat busy guard to reject messages
while a session is active, and raise the poll interval from 100ms
to 1000ms.

Made-with: Cursor
@fdzdev fdzdev force-pushed the fix/telegram-rate-limit branch from 2e4de4f to 1195812 on March 26, 2026 at 00:12
@cv cv merged commit 86a573e into NVIDIA:main Mar 30, 2026
6 checks passed
quanticsoul4772 pushed a commit to quanticsoul4772/NemoClaw that referenced this pull request Mar 30, 2026
The bridge spawns an SSH + agent session per message with zero
throttling — a flood of messages causes resource exhaustion and
cost amplification via cloud inference (CWE-770, NVBUG 6002809).

Add per-chat 5s cooldown, per-chat busy guard to reject messages
while a session is active, and raise the poll interval from 100ms
to 1000ms.

Made-with: Cursor

Co-authored-by: Facundo Fernandez <facundofernandez@Facundos-MacBook-Pro.local>
Co-authored-by: Carlos Villela <cvillela@nvidia.com>
laitingsheng pushed a commit that referenced this pull request Apr 2, 2026
lakamsani pushed a commit to lakamsani/NemoClaw that referenced this pull request Apr 4, 2026
gemini2026 pushed a commit to gemini2026/NemoClaw that referenced this pull request Apr 14, 2026
