Vulnerable sample for Action test#2
**Walkthrough**

Introduces a Flask application scaffold with a public app instance, a hardcoded `API_KEY` constant, and a new `/ping` HTTP route. The ping handler reads an optional `name` query parameter (default "world"), invokes a shell echo via `os.system`, and responds with "ok".
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
    autonumber
    actor C as Client
    participant F as Flask App
    participant H as ping() Handler
    participant S as OS Shell
    C->>F: GET /ping?name=...
    F->>H: Dispatch request
    H->>H: name = request.args.get("name", "world")
    H->>S: os.system(f"echo Hello {name}")
    S-->>H: exit code
    H-->>F: "ok"
    F-->>C: 200 OK, body: "ok"
```
**Estimated code review effort:** 🎯 3 (Moderate) | ⏱️ ~20 minutes

**Pre-merge checks (2 passed, 1 warning)**

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
🔐 **Secure Code Review (AI)**

Risk Summary: High (3), Medium (1)
Safeguards Checklist:
The diff is small and focused, but the issues identified are critical and should be addressed immediately. Models can make mistakes. Verify before merging.
**Micro-Learning Topic: OS command injection** (Detected by phrase, matched on "Command Injection")

In many situations, applications will rely on OS-provided functions, scripts, macros and utilities instead of reimplementing them in code. While functions would typically be accessed through a native interface library, the remaining three OS-provided features will normally be invoked via the command line or launched as a process. If unsafe inputs are used to construct commands or arguments, it may allow arbitrary OS operations to be performed that can compromise the server.

Try a challenge in Secure Code Warrior.
**Micro-Learning Topic: Hard-coded API key** (Detected by phrase, matched on "Hardcoded API Key")

Poor or missing authentication/authorization schemes allow an adversary to anonymously execute functionality within the mobile app or the backend server it uses. Weak authentication is fairly prevalent in mobile apps due to a mobile device's input form factor. To detect poor authentication schemes, testers can perform binary attacks against the mobile app while it is in 'offline' mode, forcing the app to bypass offline authentication and then execute functionality that should require it. To test for poor authorization schemes, testers can likewise try to execute privileged functionality that should only be available to a user of higher privilege while the app is in 'offline' mode. This kind of vulnerability occurs when the app uses a hardcoded API key to communicate with the server, making it possible for an attacker to take the key and make direct calls to the server.

Try a challenge in Secure Code Warrior.

**Micro-Learning Topic: Injection attack** (Detected by phrase, matched on "injection attack")

Injection flaws, such as SQL, NoSQL, OS, and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker's hostile data can trick the interpreter into executing unintended commands or accessing data without proper authorization. Source: https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project

Try a challenge in Secure Code Warrior.
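As an illustration of the SQL case mentioned above, the standard defense is parameterized queries; a minimal sketch using the stdlib `sqlite3` module (the table, data, and function name are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

def find_user(name: str) -> list:
    # The "?" placeholder binds `name` as data, never as SQL text,
    # so payloads like "' OR '1'='1" cannot change the query's meaning.
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (name,))
    return cur.fetchall()
```

For example, `find_user("' OR '1'='1")` returns no rows instead of dumping the table.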
```python
@app.route("/ping")
def ping():
    name = request.args.get("name", "world")
    os.system(f"echo Hello {name}")
```
**Check failure: Code scanning / CodeQL**

Uncontrolled command line (Critical)

**Copilot Autofix** (AI, 8 months ago)
To prevent command injection, user-supplied data should never be directly interpolated into shell commands. The fix is to avoid interpolating the value of `name` directly in the shell command. Instead, use hard-coded commands, or, if you want to output the supplied name, call the command using a list of individual arguments so that the shell is not invoked, or escape the user input.

Best practice: use the `subprocess` module with an argument list rather than invoking the shell (i.e., do not use `shell=True`). For example, `subprocess.run(["echo", "Hello", name])` safely passes the user input as an argument to `echo` without allowing it to affect command parsing.

Where to edit:

- `app.py`, line 8: replace `os.system(f"echo Hello {name}")` with a safe alternative.
- Add an import for `subprocess`.

What's needed:

- Import the `subprocess` module.
- Update the code to use `subprocess.run(["echo", "Hello", name])` instead of `os.system()`.
- Optionally, escape or safely constrain the value of `name`, but using `subprocess.run` with an argument list suffices for safety in this context.
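The suggested fix can be sketched as a standalone function (names are illustrative; this assumes the handler only needs to echo a greeting):

```python
import subprocess

def safe_echo(name: str) -> str:
    # Argument-list form: no shell is involved, so metacharacters in
    # `name` (";", "&&", "$(...)") reach echo as literal text.
    result = subprocess.run(
        ["echo", "Hello", name],
        capture_output=True,
        text=True,
        check=False,
    )
    return result.stdout.strip()
```

For example, `safe_echo("world; id")` yields the literal greeting text rather than executing `id`.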
```diff
@@ -1,9 +1,10 @@
 from flask import Flask, request
 import os
+import subprocess
 app = Flask(__name__)
 API_KEY = "sk-test-123456"
 @app.route("/ping")
 def ping():
     name = request.args.get("name", "world")
-    os.system(f"echo Hello {name}")
+    subprocess.run(["echo", "Hello", name])
     return "ok"
```
**Micro-Learning Topic: Uncontrolled command line** (Detected by phrase, matched on "Uncontrolled command line")
Using externally controlled strings in a command line may allow a malicious user to change the meaning of the command.
Try a challenge in Secure Code Warrior
Helpful references
- OWASP Command Injection - OWASP community page with comprehensive information about command injection, and links to various OWASP resources to help detect or prevent it.
- OWASP testing for Command Injection - This article is focused on providing testing techniques for identifying command injection flaws in your applications
Actionable comments posted: 2
🧹 Nitpick comments (1)
app.py (1)
`5-5`: **Tighten route and inputs (defense-in-depth).** Make the method explicit and cap input size to avoid log/processing abuse.

```diff
-@app.route("/ping")
+@app.route("/ping", methods=["GET"])
```

```diff
-    name = request.args.get("name", "world")
+    name = request.args.get("name", "world")[:100]
```

Also applies to: 7-7
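The capping suggestion can be sketched independently of Flask (the 100-character limit mirrors the suggested change; the helper name is hypothetical):

```python
def bounded_name(raw, default="world", limit=100):
    # Substitute the default when the query parameter is absent, then
    # truncate so oversized input cannot bloat logs or downstream work.
    value = raw if raw is not None else default
    return value[:limit]
```

In the handler this would be `bounded_name(request.args.get("name"))`.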
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
`app.py` (1 hunks)
🧰 Additional context used
🪛 Ruff (0.12.2)
app.py
8-8: Starting a process with a shell, possible injection detected
(S605)
🔇 Additional comments (1)
app.py (1)
`1-3`: **LGTM on Flask app scaffolding.** App creation/imports are fine.
```python
from flask import Flask, request
import os

app = Flask(__name__)
API_KEY = "sk-test-123456"
```
Hardcoded secret committed — remove and source from env/secrets manager.
Never commit API keys, even placeholders; rotate if real, purge from history, and load via environment/secret manager.
Apply this diff:

```diff
-API_KEY = "sk-test-123456"
+API_KEY = os.environ.get("API_KEY")
+if not API_KEY:
+    raise RuntimeError("API_KEY must be provided via environment/secret manager")
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In app.py around line 4, a hardcoded API_KEY ("sk-test-123456") is committed;
remove the secret literal and instead read the key from an environment variable
or secret manager (e.g., os.environ["API_KEY"] or a secrets client), add a clear
startup check that raises/logs and exits if the env var is missing, update any
README or deployment manifests to document the required secret, and rotate/purge
the exposed key from repo/history if it was real.
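A runnable sketch of that startup check, assuming the key arrives via the `API_KEY` environment variable (the injectable `env` parameter is only there for testability):

```python
import os

def load_api_key(env=os.environ):
    # Fail fast at startup: a missing key should stop the process
    # rather than let the app run without credentials.
    key = env.get("API_KEY")
    if not key:
        raise RuntimeError("API_KEY must be provided via environment/secret manager")
    return key
```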
```python
    name = request.args.get("name", "world")
    os.system(f"echo Hello {name}")
```
Command injection via os.system with untrusted input.
User-controlled name flows into a shell command; this is exploitable (also flagged by Ruff S605).
Replace the shell call; log or handle in Python instead:

```diff
-    name = request.args.get("name", "world")
-    os.system(f"echo Hello {name}")
+    name = request.args.get("name", "world")
+    app.logger.info("Hello %s", name)
```

If you truly must spawn a process, use `subprocess` with `shell=False`:

```python
import subprocess

subprocess.run(["echo", "Hello", name], check=False)
```

🧰 Tools
🪛 Ruff (0.12.2)
8-8: Starting a process with a shell, possible injection detected
(S605)
🤖 Prompt for AI Agents
In app.py around lines 7 to 8, the code uses os.system(f"echo Hello {name}")
with untrusted request.args leading to command injection; replace this by
avoiding shell invocation — either construct the greeting in Python and
log/return it (e.g., greeting = f"Hello {name}") or, if a process must be
spawned, use subprocess.run with shell=False and pass the command as a list
(e.g., ["echo", name]) to avoid shell interpolation; ensure you also validate or
sanitize name if needed and remove the os.system call and import.
**Micro-Learning Topic: External entity injection** (Detected by phrase, matched on "XxE")

An XML External Entity (XXE) attack is a type of attack against an application that parses XML input. This attack occurs when XML input containing a reference to an external entity is processed by a weakly configured XML parser. It may lead to the disclosure of confidential data, denial of service, server-side request forgery, port scanning from the perspective of the machine where the parser is located, and other system impacts.

Try a challenge in Secure Code Warrior.
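XXE is not actually exercised by this PR, but for reference: Python's stdlib `xml.etree.ElementTree` does not expand external entities and raises `ParseError` instead, so the classic file-disclosure payload fails safely (the payload below is illustrative):

```python
import xml.etree.ElementTree as ET

XXE_PAYLOAD = (
    '<?xml version="1.0"?>'
    '<!DOCTYPE root [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>'
    "<root>&xxe;</root>"
)

def parse_safely(xml_text):
    # ElementTree refuses to expand the external entity reference,
    # raising ParseError rather than disclosing file contents.
    try:
        return ET.fromstring(xml_text)
    except ET.ParseError:
        return None
```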
This PR intentionally adds risky patterns for testing ai-secure-code-review-action.
Summary by CodeRabbit
New Features
Chores