fix: improve SDK compatibility and add auth enhancements #629
xuanxiang wants to merge 2 commits into siteboon:main from
Conversation
- Fix SDK spawning child processes by temporarily unsetting CLAUDECODE env var
- Add /register_licc endpoint with signature verification
- Use local proxy for GitHub release API to avoid CORS issues

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
📝 Walkthrough

Added GitLab platform support alongside GitHub with a platform abstraction layer. Introduced file upload functionality via multipart endpoints. Enhanced the database with GitLab token management and a new registration endpoint. Updated the agent route to support both platforms for repository operations. Modified the version check to use a relative release endpoint.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant AgentRoute as Agent Route<br/>(server/routes/agent.js)
    participant Factory as GitPlatformFactory
    participant TokenDB as Token Database
    participant Platform as Git Platform<br/>(GitHub/GitLab)
    participant Git as Git/Remote API
    Client->>AgentRoute: POST /api/agent<br/>(gitUrl, ...)
    AgentRoute->>Factory: detectPlatform(gitUrl)
    Factory-->>AgentRoute: platform (GITHUB|GITLAB)
    AgentRoute->>Factory: createFromUrl(gitUrl, token)
    Factory->>Platform: new GitHubPlatform(token)<br/>OR new GitLabPlatform(token)
    Factory-->>AgentRoute: platform instance
    AgentRoute->>TokenDB: getTokens(userId)<br/>from appropriate DB
    TokenDB-->>AgentRoute: token
    AgentRoute->>Platform: cloneRepo(gitUrl, projectPath)
    Platform->>Git: git clone
    Git-->>Platform: repo cloned
    Platform-->>AgentRoute: success
    AgentRoute->>Platform: createBranch(owner, repo, branchName)
    Platform->>Git: git API call
    Git-->>Platform: branch created
    Platform-->>AgentRoute: branch URL
    AgentRoute->>Platform: createPR(owner, repo, branchName, title, body)
    Platform->>Git: GitHub/GitLab API call
    Git-->>Platform: PR/MR created
    Platform-->>AgentRoute: {number, url}
    AgentRoute-->>Client: stream events (git-branch, git-pr)
```
🚥 Pre-merge checks: 2 passed ✅, 1 failed (inconclusive) ❌
Actionable comments posted: 13
🧹 Nitpick comments (1)
public/repos/siteboon/claudecodeui/releases/latest.json (1)
1-56: Trim this JSON to only fields consumed by the hook.
useVersionCheck only needs tag_name, name, body, html_url, and published_at. Keeping the full GitHub payload adds unnecessary static size and noisy version-control churn.

Lean payload example:
```diff
 {
-  "url": "...",
-  "assets_url": "...",
-  "upload_url": "...",
   "html_url": "https://github.com/siteboon/claudecodeui/releases/tag/v1.28.0",
-  "id": 305083973,
-  "author": { ... },
-  "node_id": "...",
   "tag_name": "v1.28.0",
-  "target_commitish": "main",
   "name": "CloudCLI UI v1.28.0",
-  "draft": false,
-  "immutable": false,
-  "prerelease": false,
-  "created_at": "...",
-  "updated_at": "...",
   "published_at": "2026-04-03T15:41:11Z",
-  "assets": [],
-  "tarball_url": "...",
-  "zipball_url": "...",
-  "reactions": { ... },
   "body": "## [1.28.0](...) ..."
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@public/repos/siteboon/claudecodeui/releases/latest.json` around lines 1 - 56, The JSON release payload is too large; trim public/repos/siteboon/claudecodeui/releases/latest.json to only the fields consumed by useVersionCheck (keep tag_name, name, body, html_url, published_at) and remove everything else (author, assets, urls, reactions, etc.); update the file so it contains a minimal object with only those five keys and their values to reduce repo noise and static size.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@server/claude-sdk.js`:
- Around line 15-19: The module-level deletion of CLAUDECODE
(savedClaudecode/delete process.env.CLAUDECODE) only runs at import and later
restores reintroduce the var; remove the eager delete/save at module scope and
instead ensure process.env.CLAUDECODE is deleted immediately before each call
path that spawns Claude child processes (e.g., inside the Query constructor or
right before Query.prototype.query()/the query() call site), and stop restoring
the savedClaudecode afterward so the environment remains cleared for subsequent
calls; update references to savedClaudecode and the restore logic near the
Query/query implementation accordingly.
- Around line 625-630: Remove the debug console logging that exposes secrets:
delete the console.log('prevClaudecode:', savedClaudecode) and
console.log('env.prevClaudecode:', process.env) statements in the block that
restores process.env.CLAUDECODE (references: savedClaudecode and process.env,
CLAUDECODE). After removing those logs, normalize indentation in that block to
match the file's style (use the existing 2-space indentation elsewhere) so the
restore logic remains the only action performed.
In `@server/database/db.js`:
- Around line 634-636: getActiveGitlabToken is incorrectly fetching
credentialsDb.getActiveCredential(userId, 'github_token') which can return a
GitHub PAT; update the call in getActiveGitlabToken to request the GitLab
credential type (e.g., 'gitlab_token') so the function returns the correct
GitLab token for clone/MR operations and avoids exposing wrong secrets.
In `@server/git-platforms/github-platform.js`:
- Around line 101-107: The code currently builds an authenticated clone URL
(cloneUrl via url.replace and this.token) which embeds the GitHub PAT into the
remote and risks leakage; instead change the cloning flow in the method that
computes cloneUrl so it does not embed this.token into the URL — use a
header/askpass-based auth approach or perform the clone without credentials and
then, if a token is required, immediately run a git command to set
remote.origin.url back to the original credential-free url (or use git -c
http.extraHeader='Authorization: bearer <token>' or an askpass helper) to avoid
persisting the PAT; update the logic around cloneUrl, token and the cloning step
to implement one of these safer auth patterns.
In `@server/git-platforms/gitlab-platform.js`:
- Around line 141-150: The code currently embeds this.token into cloneUrl
(creating https://oauth2:<token>@...) which can be persisted in .git/config;
instead, avoid putting credentials in remote.origin.url by using an
authentication header or helper and/or scrubbing the remote immediately after
clone: when handling cloneUrl and this.token in the GitLab platform method (the
code that sets cloneUrl), change to perform the clone with an auth mechanism
such as passing http.extraHeader or GIT_ASKPASS/credential helper instead of
modifying the URL, and if you must use an authenticated URL ensure you run a
scrub step after cloning to remove the credential from remote.origin.url (e.g.,
set remote.url back to the non-authenticated url) so the token is never left on
disk or in git config.
In `@server/git-platforms/index.js`:
- Around line 31-41: The platform-detection currently treats any non-GitHub URL
as GitLab which can cause credential leakage; update the detection logic in the
platform-detection function (the block returning PLATFORMS.GITLAB) so it only
returns PLATFORMS.GITLAB for explicit GitLab indicators (e.g., exact
'gitlab.com', SELF_HOSTED_GITLAB regex, GITLAB_SSH_PORT) and otherwise return a
safe sentinel (e.g., null or PLATFORMS.UNKNOWN) or throw an error rather than
defaulting to GITLAB; also make callers (e.g., server/routes/agent.js and any
code that calls GitLabPlatform.cloneRepo()) handle that sentinel by rejecting
unsupported remotes and not selecting or attaching stored GitLab tokens.
In `@server/index.js`:
- Around line 408-409: The upload router is mounted without authentication
(app.use('/api/upload', uploadRoutes)) exposing an unauthenticated
arbitrary-write endpoint; protect it by applying the existing authenticateToken
middleware when mounting the route (e.g., use app.use('/api/upload',
authenticateToken, uploadRoutes)) or ensure uploadRoutes is wrapped with
authenticateToken inside its module so all handlers in uploadRoutes require
authentication before allowing file writes.
In `@server/routes/agent.js`:
- Around line 896-905: The code prints sensitive credentials by logging the
variable tokenToUse in server/routes/agent.js; remove the
console.log("tokenToUse:", tokenToUse) call (or replace it with a non-sensitive
indicator), and ensure any logging around the token selection in the token
selection block that computes tokenToUse (using githubToken,
githubTokensDb.getActiveGithubToken, gitlabTokensDb.getActiveGitlabToken) only
logs safe metadata (e.g., presence/length or masked value) rather than the raw
token itself.
In `@server/routes/auth.js`:
- Around line 95-97: The current hardcoded 'sign' check in the auth route should
be replaced with real signature verification: in server/routes/auth.js locate
the route handler that reads the sign variable and replace the static comparison
with verification using an env-backed secret or key (e.g.,
process.env.SIGN_SECRET or process.env.SIGN_PUBLIC_KEY); compute/verify an HMAC
(crypto.createHmac) or verify a digital signature/JWT (crypto.verify or
jwt.verify) against the expected payload (e.g., email+timestamp or the request
body), and compare using a timing-safe comparison (crypto.timingSafeEqual)
before returning errors via res.status; keep key material in env so it can be
rotated without code changes.
- Around line 102-107: The current check uses
userDb.hasUsersByUsername(username) which only prevents duplicate usernames but
allows creating additional accounts; replace it with a global-existence check
(e.g., userDb.hasUsers() or userDb.countUsers() > 0) so the route enforces "only
one user allowed"; keep the transaction rollback (db.prepare('ROLLBACK').run())
and return the same 403 response when any user exists, and update the route
comment to reflect that it checks for any existing users rather than that
specific username.
In `@server/routes/upload.js`:
- Around line 14-23: getDirectoryFromReq currently returns untrusted paths from
req.query.directory or req.headers['x-upload-directory'] which allows absolute
paths and parent-traversal; instead, resolve the provided value against a fixed
upload root (e.g. const UPLOAD_ROOT = '/opt/licc/aicoding/uploads') using
path.resolve(UPLOAD_ROOT, provided || '') and then verify the resolved path is
inside UPLOAD_ROOT (containment check like the one used in server/routes/git.js:
resolvedPath.startsWith(UPLOAD_ROOT + path.sep) or === UPLOAD_ROOT); if the
check fails, reject the request (throw or return an error) and do the same
containment resolution/validation for the duplicate logic at lines ~27-38 so no
code path uses raw user input as a filesystem path.
- Around line 40-42: The upload storage currently uses the client-supplied
file.originalname in the filename callback (filename: (req, file, cb) => { ...
}), which allows overwrites; change this to generate a unique server-side
filename (e.g., using a UUID or timestamp+random suffix) inside the filename
callback and pass that unique name to cb instead, while storing
file.originalname in req/file metadata or a DB field for display; ensure the
storage directory and file extension (from file.mimetype or originalname) are
preserved when composing the generated name.
In `@src/hooks/useVersionCheck.ts`:
- Line 50: The hook useVersionCheck.ts currently fetches a static artifact at
`/repos/${owner}/${repo}/releases/latest.json`, which can become stale; change
the call to hit a runtime proxy endpoint (e.g., an app API route that forwards
to GitHub Releases) instead of the static file and ensure the client fetch
disables caching (use fetch with cache: 'no-store' or equivalent) so each check
queries the runtime proxy; implement the proxy to request the real GitHub latest
release and return fresh JSON, and update the fetch in useVersionCheck (the
fetch(...) line) to call that proxy endpoint with cache disabled.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 0b1dbe3f-2da9-4f2f-bd45-0e025543b634
📒 Files selected for processing (12)
- public/repos/siteboon/claudecodeui/releases/latest.json
- server/claude-sdk.js
- server/database/db.js
- server/git-platforms/constants.js
- server/git-platforms/github-platform.js
- server/git-platforms/gitlab-platform.js
- server/git-platforms/index.js
- server/index.js
- server/routes/agent.js
- server/routes/auth.js
- server/routes/upload.js
- src/hooks/useVersionCheck.ts
```js
// IMPORTANT: Unset CLAUDECODE before importing the SDK to prevent the error:
// "Claude Code cannot be launched inside another Claude Code session"
// This allows the SDK to spawn child claude processes properly
const savedClaudecode = process.env.CLAUDECODE;
delete process.env.CLAUDECODE;
```
Module-level environment manipulation may not cover all cases.
The module-level deletion of CLAUDECODE only happens once at import time. While this prevents the error during SDK import, subsequent Query constructor calls after the first restoration (lines 625-628) will see CLAUDECODE set in the environment again, potentially reintroducing the original error.
Consider deleting CLAUDECODE immediately before each query() call instead of at module load, and avoid restoring it if the goal is to prevent the child process error consistently.
🛠️ Proposed approach: delete before each Query call
```diff
+  // Temporarily unset CLAUDECODE to prevent child process spawn error
+  const savedClaudecode = process.env.CLAUDECODE;
+  delete process.env.CLAUDECODE;
+
   let queryInstance;
   try {
     queryInstance = query({
       prompt: finalCommand,
       options: sdkOptions
     });
```

And at module level, remove the eager deletion:

```diff
-// IMPORTANT: Unset CLAUDECODE before importing the SDK to prevent the error:
-// "Claude Code cannot be launched inside another Claude Code session"
-// This allows the SDK to spawn child claude processes properly
-const savedClaudecode = process.env.CLAUDECODE;
-delete process.env.CLAUDECODE;
-
 import { query } from '@anthropic-ai/claude-agent-sdk';
```
```js
  // Restore CLAUDECODE to its previous value
  if (savedClaudecode !== undefined) {
      process.env.CLAUDECODE = savedClaudecode;
  }
    console.log('prevClaudecode:', savedClaudecode);
    console.log('env.prevClaudecode:', process.env);
```
Critical: Remove debug logging that exposes secrets and entire process.env.
Line 630 logs the entire process.env object, which typically contains sensitive information such as API keys, tokens, database credentials, and other secrets. This is a serious security risk in production.
Additionally, the inconsistent indentation (4-space vs 2-space) should be fixed.
🔒 Proposed fix: remove debug logging

```diff
   // Restore CLAUDECODE to its previous value
   if (savedClaudecode !== undefined) {
-      process.env.CLAUDECODE = savedClaudecode;
+    process.env.CLAUDECODE = savedClaudecode;
   }
-    console.log('prevClaudecode:', savedClaudecode);
-    console.log('env.prevClaudecode:', process.env);
```
```js
getActiveGitlabToken: (userId) => {
  return credentialsDb.getActiveCredential(userId, 'github_token'); // TODO: temporarily using github_token
},
```
Use the GitLab credential type here.
getActiveGitlabToken() is querying 'github_token'. In the GitLab flow this can return a GitHub PAT, causing auth failures and potentially disclosing the wrong secret to GitLab during clone/MR creation.
🐛 Proposed fix

```diff
 getActiveGitlabToken: (userId) => {
-  return credentialsDb.getActiveCredential(userId, 'github_token'); // TODO: temporarily using github_token
+  return credentialsDb.getActiveCredential(userId, 'gitlab_token');
 },
```
```js
// Prepare git clone URL with authentication if token is provided
let cloneUrl = url;
if (this.token) {
  // Convert HTTPS URL to authenticated URL
  // Example: https://github.com/user/repo -> https://token@github.com/user/repo
  cloneUrl = url.replace('https://github.com', `https://${this.token}@github.com`);
}
```
Don't embed the GitHub PAT in the clone URL.
Using https://<token>@github.com/... risks persisting the PAT in the cloned repo's remote config and exposing it through logs/process args. Use header-based auth/askpass instead, or reset remote.origin.url to a credential-free URL immediately after cloning.
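As a sketch of the header-based approach this comment suggests — assuming a `child_process`-style clone helper; `buildCloneArgs` is a hypothetical name, not code from this PR:

```javascript
// Hypothetical helper: pass the token via an HTTP header for this single
// git invocation only, so nothing is written to .git/config and
// remote.origin.url stays credential-free.
function buildCloneArgs(url, token, targetDir) {
  const args = [];
  if (token) {
    // -c sets config for this process only
    args.push('-c', `http.extraHeader=Authorization: Bearer ${token}`);
  }
  args.push('clone', url, targetDir);
  return args;
}

// e.g. spawn('git', buildCloneArgs(gitUrl, token, projectPath))
console.log(buildCloneArgs('https://github.com/user/repo', 'ghp_example', '/tmp/repo'));
```

Because the `-c` config applies only to that one git process, the PAT never lands on disk, though it is still visible in process arguments; a GIT_ASKPASS helper avoids even that.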
```js
let cloneUrl = url;
if (this.token) {
  // For HTTPS URLs, convert to authenticated URL
  if (url.startsWith('https://')) {
    // GitLab uses: https://oauth2:TOKEN@gitlab.com/owner/repo
    cloneUrl = url.replace('https://', `https://oauth2:${this.token}@`);
  } else {
    // For SSH URLs, token is not used for clone
    console.log('ℹ️ Token provided but using SSH URL - ensure SSH key is configured');
  }
}
```
Avoid persisting the GitLab token in remote.origin.url.
This builds https://oauth2:<token>@... for clone. With cleanup=false or a user-provided project path, that credential can remain on disk in .git/config, and it is also easier to leak through logs/process inspection. Prefer http.extraHeader, GIT_ASKPASS, or scrub the remote URL immediately after clone.
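If the authenticated URL is kept for the clone itself, the scrub step this comment describes can be as simple as resetting the remote to a credential-free URL afterward; `stripCredentials` below is a hypothetical helper, not code from this PR:

```javascript
// Hypothetical helper: remove any embedded user:password from a remote URL,
// so it can be written back with `git remote set-url origin <clean-url>`
// right after cloning, leaving no token in .git/config.
function stripCredentials(remoteUrl) {
  const u = new URL(remoteUrl);
  u.username = '';
  u.password = '';
  return u.toString();
}

console.log(stripCredentials('https://oauth2:glpat-example@gitlab.com/owner/repo.git'));
// → https://gitlab.com/owner/repo.git
```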
```js
if (sign !== 'jhpakjseiudsalebkjbg') {
  return res.status(400).json({ error: 'Invalid sign' });
}
```
Replace the hardcoded sign check with real signature verification.
This is a baked-in shared string, not a signature. Anyone who learns it can register accounts, and rotating it requires a code change. Use an env-backed secret/public key and verify an HMAC or signed payload instead.
```js
// Check if users already exist (only allow one user)
const hasUsers = userDb.hasUsersByUsername(username);
if (hasUsers) {
  db.prepare('ROLLBACK').run();
  return res.status(403).json({ error: 'User already exists.' });
}
```
This bypasses the single-user setup restriction.
The route comment says “only allowed if no users exist”, but this only checks whether that username already exists. After the first account is created, /register_licc can still create additional users with new usernames, which breaks the single-user model enforced by /register and surfaced by /status.
🐛 Proposed fix

```diff
-  const hasUsers = userDb.hasUsersByUsername(username);
+  const hasUsers = userDb.hasUsers();
   if (hasUsers) {
     db.prepare('ROLLBACK').run();
-    return res.status(403).json({ error: 'User already exists.' });
+    return res.status(403).json({ error: 'User already exists. This is a single-user system.' });
   }
```
```js
const getDirectoryFromReq = (req) => {
  // Priority: query parameter > request header > default value
  if (req.query.directory) {
    return req.query.directory;
  }
  if (req.headers['x-upload-directory']) {
    return req.headers['x-upload-directory'];
  }
  return '/opt/licc/aicoding/uploads';
};
```
Constrain directory to a fixed upload root.
These lines trust directory from the query/header as a raw filesystem path. Absolute paths and .. segments can both escape the intended upload area, and mkdirSync then creates that location before the file is written. Please resolve against a single trusted root and reject anything outside it, like the containment check already used in server/routes/git.js.
🛡️ Proposed fix

```diff
 const getDirectoryFromReq = (req) => {
-  // Priority: query parameter > request header > default value
-  if (req.query.directory) {
-    return req.query.directory;
-  }
-  if (req.headers['x-upload-directory']) {
-    return req.headers['x-upload-directory'];
-  }
-  return '/opt/licc/aicoding/uploads';
+  const requestedDir = req.query.directory ?? req.headers['x-upload-directory'] ?? '';
+  const uploadRoot = path.resolve('/opt/licc/aicoding/uploads');
+  const resolved = path.resolve(uploadRoot, String(requestedDir));
+  const normalizedRoot = `${uploadRoot}${path.sep}`;
+
+  if (resolved !== uploadRoot && !resolved.startsWith(normalizedRoot)) {
+    throw new Error('Invalid upload directory');
+  }
+
+  return resolved;
 };
@@
   destination: (req, file, cb) => {
-    let uploadDir = getDirectoryFromReq(req);
-
-    // If it is a relative path, resolve it against the server directory
-    if (!path.isAbsolute(uploadDir)) {
-      uploadDir = path.join(__dirname, '..', uploadDir);
-    }
-
-    // Ensure the directory exists
-    fs.mkdirSync(uploadDir, { recursive: true });
-
-    cb(null, uploadDir);
+    try {
+      const uploadDir = getDirectoryFromReq(req);
+      fs.mkdirSync(uploadDir, { recursive: true });
+      cb(null, uploadDir);
+    } catch (error) {
+      cb(error);
+    }
   },
```

Also applies to: 27-38
```js
filename: (req, file, cb) => {
  // Keep the original filename
  cb(null, file.originalname);
}
```
Do not persist uploads under raw originalname.
Line 42 uses the client-supplied filename as the final on-disk path, so an upload can silently overwrite an existing file in the target directory. Generate a unique server-side name, and keep the original name only as metadata if you need to display it later.
🧯 Proposed fix

```diff
 filename: (req, file, cb) => {
-  // Keep the original filename
-  cb(null, file.originalname);
+  const safeName = path.basename(file.originalname).replace(/[^a-zA-Z0-9._-]/g, '_');
+  const uniqueSuffix = `${Date.now()}-${Math.round(Math.random() * 1e9)}`;
+  cb(null, `${uniqueSuffix}-${safeName}`);
 }
```
```diff
 const checkVersion = async () => {
   try {
-    const response = await fetch(`https://api.github.com/repos/${owner}/${repo}/releases/latest`);
+    const response = await fetch(`/repos/${owner}/${repo}/releases/latest.json`);
```
Version checks now depend on a static artifact and can become stale.
At Line 50, the hook fetches /repos/${owner}/${repo}/releases/latest.json (a public static file), so “latest” is only as current as the last deployment/artifact refresh. That can silently suppress update notifications after new releases.
Suggested fix (use runtime proxy + disable cache for this check)

```diff
-      const response = await fetch(`/repos/${owner}/${repo}/releases/latest.json`);
+      const response = await fetch(
+        `/api/github/repos/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}/releases/latest`,
+        { cache: 'no-store' },
+      );
```
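The server half of that suggestion could look like the handler below — an Express-style sketch; the route shape and names are assumptions, since the PR ships only the static file:

```javascript
// Hypothetical server-side proxy for the GitHub "latest release" API.
// Express-style (req, res) signature assumed; requires Node 18+ for fetch.
function githubLatestReleaseUrl(owner, repo) {
  return `https://api.github.com/repos/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}/releases/latest`;
}

async function latestReleaseHandler(req, res) {
  try {
    const upstream = await fetch(githubLatestReleaseUrl(req.params.owner, req.params.repo), {
      headers: { Accept: 'application/vnd.github+json' },
    });
    // Forward GitHub's status and fresh JSON; no server-side caching here
    res.status(upstream.status).json(await upstream.json());
  } catch (err) {
    res.status(502).json({ error: 'Failed to reach GitHub' });
  }
}

console.log(githubLatestReleaseUrl('siteboon', 'claudecodeui'));
```

Mounted at something like `router.get('/repos/:owner/:repo/releases/latest', latestReleaseHandler)` under `/api/github`, this matches the client-side path in the suggested fix.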
Summary
This PR includes several improvements to enhance SDK compatibility and authentication:
- /register_licc endpoint with signature verification for secure user registration

Files Changed

- server/claude-sdk.js: Added CLAUDECODE environment variable handling
- server/routes/auth.js: Added /register_licc endpoint
- src/hooks/useVersionCheck.ts: Changed to use local proxy endpoint
- public/repos/siteboon/claudecodeui/releases/latest.json: Added local release info

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Summary by CodeRabbit
New Features
Refactor