SF-30 add storage, files, and middleware shared packages#82
graphite-app[bot] merged 1 commit into `dev`.
Conversation
Reviewer's Guide

Adds the foundational shared storage, file, and middleware packages for Superfunctions, including a generic storage contract, multiple cloud/local storage adapters (S3, R2, MinIO, GCS, Azure, local FS), a routed storage facade and conformance helpers, a file provider contract with conformance tests, and a configurable KV-backed rate limiter, all wired up with package metadata and basic test configs so downstream features can build against them.

Sequence diagram for routed storage signDownloadUrl delegation:

sequenceDiagram
actor App
participant RoutedStorageAdapter
participant DurableAdapter
participant TemporaryAdapter
App->>RoutedStorageAdapter: signDownloadUrl({ key:"files/b", target:"temporary", expiresInSeconds:60 })
RoutedStorageAdapter->>RoutedStorageAdapter: pickAdapter(target="temporary")
RoutedStorageAdapter->>TemporaryAdapter: signDownloadUrl(inputWithTarget)
TemporaryAdapter-->>RoutedStorageAdapter: { url, headers }
RoutedStorageAdapter-->>App: { url, headers }
App->>RoutedStorageAdapter: statObject({ key:"files/a", target:"durable" })
RoutedStorageAdapter->>RoutedStorageAdapter: pickAdapter(target="durable")
RoutedStorageAdapter->>DurableAdapter: statObject(inputWithTarget)
DurableAdapter-->>RoutedStorageAdapter: StorageObjectStat
RoutedStorageAdapter-->>App: StorageObjectStat
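The delegation shown above is easy to picture in code. The following is a minimal sketch based only on the names in this diagram and the class diagram below; the `createRoutedStorageAdapter` factory, the input shapes, and the error message are assumptions, not the package's actual implementation.

```ts
// Sketch only: route each call to the adapter registered for the requested target.
type StorageTargetName = 'durable' | 'temporary';

interface SignedUrlDescriptor {
  url: string;
  headers?: Record<string, string>;
}

interface StorageAdapter {
  name: string;
  signDownloadUrl(input: { key: string; expiresInSeconds?: number }): Promise<SignedUrlDescriptor>;
}

interface RoutedStorageAdapterConfig {
  adapters: Record<StorageTargetName, StorageAdapter>;
  defaultTarget: StorageTargetName;
}

function createRoutedStorageAdapter(config: RoutedStorageAdapterConfig) {
  // Resolve the adapter for a target, falling back to the configured default.
  const pickAdapter = (target?: StorageTargetName): StorageAdapter => {
    const resolved = target ?? config.defaultTarget;
    const adapter = config.adapters[resolved];
    if (!adapter) throw new Error(`No storage adapter registered for target "${resolved}"`);
    return adapter;
  };

  return {
    signDownloadUrl(input: { key: string; target?: StorageTargetName; expiresInSeconds?: number }) {
      return pickAdapter(input.target).signDownloadUrl(input);
    },
  };
}
```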
Class diagram for core storage adapter contracts and routing:

classDiagram
class StorageAdapterCapabilities {
+boolean signedUploadUrls
+boolean signedDownloadUrls
+boolean multipart
+boolean proxyStreamingUpload
+boolean proxyStreamingDownload
}
class SignedUrlConstraints {
+string contentType
+number contentLength
+string checksumSha256Base64
}
class StorageObjectStat {
+string key
+number size
+string contentType
+string etag
+string lastModifiedAt
}
class StorageAdapter {
<<interface>>
+StorageAdapterName name
+StorageAdapterCapabilities capabilities
+capabilitiesForTarget(target StorageTargetName) StorageAdapterCapabilities
+statObject(input) Promise~StorageObjectStat~
+deleteObject(input) Promise~void~
+signUploadUrl(input) Promise~SignedUrlDescriptor~
+signDownloadUrl(input) Promise~SignedUrlDescriptor~
+createMultipartUpload(input) Promise~MultipartUploadDescriptor~
+signMultipartUploadPartUrl(input) Promise~SignedUrlDescriptor~
+completeMultipartUpload(input) Promise~void~
+abortMultipartUpload(input) Promise~void~
+openUploadStream(input) Promise~WritableStream~
+openDownloadStream(input) Promise~ReadableStream~
}
class RoutedStorageAdapter {
<<interface extension>>
+StorageTargetName defaultTarget
+StorageTargetName[] targets
+capabilitiesForTarget(target StorageTargetName) StorageAdapterCapabilities
}
class RoutedStorageAdapterConfig {
+Record~StorageTargetName, StorageAdapter~ adapters
+StorageTargetName defaultTarget
+StorageAdapterName name
}
class StorageRoutingError {
+string code
+string target
}
class ConformanceResult {
+boolean ok
+ErrorInfo error
+string[] passed
+string[] failed
}
class ErrorInfo {
+string code
+string message
}
class ConformanceTestContext {
+string testKey
+Uint8Array testData
+string testContentType
}
StorageAdapter <|-- RoutedStorageAdapter
RoutedStorageAdapterConfig o--> StorageAdapter
RoutedStorageAdapter ..> RoutedStorageAdapterConfig : createdWith
StorageRoutingError --> StorageAdapter : thrownWhenTargetMissing
StorageAdapter --> StorageAdapterCapabilities
StorageAdapter --> StorageObjectStat
StorageAdapter --> SignedUrlConstraints
ConformanceResult --> ErrorInfo
ConformanceTestContext ..> StorageAdapter : usedBy
class LocalStorageConfig {
+string rootDir
}
class S3StorageConfig {
+string bucket
+string region
+string accessKeyId
+string secretAccessKey
+string endpoint
+boolean forcePathStyle
}
class R2StorageConfig {
+string accountId
+string bucket
+string accessKeyId
+string secretAccessKey
+string jurisdiction
}
class MinIOStorageConfig {
+string endpoint
+string bucket
+string accessKeyId
+string secretAccessKey
+string region
+boolean useSSL
+number port
}
class GCSStorageConfig {
+string bucket
+string projectId
+string keyFilename
+Credentials credentials
}
class AzureStorageConfig {
+string containerName
+string connectionString
+string accountName
+string accountKey
+string sasToken
+string endpoint
}
class Credentials {
+string client_email
+string private_key
}
GCSStorageConfig --> Credentials
LocalStorageConfig ..> StorageAdapter : creates
S3StorageConfig ..> StorageAdapter : creates
R2StorageConfig ..> StorageAdapter : creates
MinIOStorageConfig ..> StorageAdapter : creates
GCSStorageConfig ..> StorageAdapter : creates
AzureStorageConfig ..> StorageAdapter : creates
Class diagram for rate limiting middleware:

classDiagram
class Adapter {
<<interface>>
+findOne(input) Promise~any~
+create(input) Promise~any~
+upsert(input) Promise~any~
+delete(input) Promise~void~
}
class KVStoreAdapter {
<<interface>>
+get(key string) Promise~string~
+set(input) Promise~void~
+delete(key string) Promise~void~
}
class RateLimitConfig {
+number windowMs
+number maxRequests
+string keyPrefix
+Adapter persistence
+KVStoreAdapter persistence
+RateLimitAlgorithm algorithm
+function now
}
class RateLimitResult {
+boolean allowed
+number remaining
+string resetAt
+number total
}
class CheckLimitInput {
+string key
+number windowSeconds
+number limit
}
class RateLimiter {
<<interface>>
+check(input CheckLimitInput) Promise~RateLimitResult~
+reset(key string) Promise~void~
}
class FixedWindowState {
+number count
+number windowStart
}
class SlidingWindowState {
+number[] timestamps
}
class TokenBucketState {
+number tokens
+number lastRefill
}
class InMemoryKVStore {
+Map store
+get(key string) Promise~string~
+set(input) Promise~void~
+delete(key string) Promise~void~
}
RateLimitConfig --> Adapter : optional
RateLimitConfig --> KVStoreAdapter : optional
RateLimiter ..> RateLimitConfig : createdWith
RateLimiter --> RateLimitResult
RateLimiter --> FixedWindowState : usesWhenFixedWindow
RateLimiter --> SlidingWindowState : usesWhenSlidingWindow
RateLimiter --> TokenBucketState : usesWhenTokenBucket
InMemoryKVStore ..|> KVStoreAdapter
KVStoreAdapter <.. Adapter : ensuredByEnsureKV
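To make the relationships above concrete, here is a rough sketch of how a fixed-window check against a `KVStoreAdapter` could work. The field names follow the diagram, but the function itself is illustrative and not the middleware's actual code.

```ts
interface KVStoreAdapter {
  get(key: string): Promise<string | null>;
  set(input: { key: string; value: string; ttlSeconds?: number }): Promise<void>;
}

interface RateLimitResult {
  allowed: boolean;
  remaining: number;
  resetAt: string;
  total: number;
}

// Illustrative fixed-window check; the state shape mirrors FixedWindowState above.
async function checkFixedWindow(
  kv: KVStoreAdapter,
  input: { key: string; windowSeconds: number; limit: number },
  now: () => number = Date.now,
): Promise<RateLimitResult> {
  const windowMs = input.windowSeconds * 1000;
  const current = now();

  const raw = await kv.get(input.key);
  let state = raw ? (JSON.parse(raw) as { count: number; windowStart: number }) : null;
  if (!state || current - state.windowStart >= windowMs) {
    state = { count: 0, windowStart: current }; // start a fresh window
  }

  const allowed = state.count < input.limit;
  if (allowed) state.count += 1;
  await kv.set({ key: input.key, value: JSON.stringify(state), ttlSeconds: input.windowSeconds });

  return {
    allowed,
    remaining: Math.max(0, input.limit - state.count),
    resetAt: new Date(state.windowStart + windowMs).toISOString(),
    total: input.limit,
  };
}
```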
Class diagram for file provider contracts:

classDiagram
class FileProviderContext {
+string principalId
+string tenantId
+string requestId
}
class FileProvider {
<<interface>>
+createUploadSession(input, ctx FileProviderContext) Promise~UploadSessionDescriptor~
+getUploadSessionStatus(input, ctx FileProviderContext) Promise~any~
+signUploadPart(input, ctx FileProviderContext) Promise~SignedUrlDescriptor~
+completeUploadPart(input, ctx FileProviderContext) Promise~void~
+completeUploadSession(input, ctx FileProviderContext) Promise~FileVersionDescriptor~
+abortUploadSession(input, ctx FileProviderContext) Promise~void~
+getFile(input, ctx FileProviderContext) Promise~any~
+listFiles(input, ctx FileProviderContext) Promise~any~
+deleteFile(input, ctx FileProviderContext) Promise~void~
}
class UploadSessionDescriptor {
+string uploadSessionId
}
class FileVersionDescriptor {
+string fileId
+string versionId
}
class FileMetadata {
+string id
+string name
+number size
+string mimeType
+Date createdAt
+Date updatedAt
+Record metadata
}
class FileUpload {
+File file
+Blob file
+Uint8Array file
+string name
+string mimeType
+Record metadata
}
FileProvider --> FileProviderContext
FileProvider --> UploadSessionDescriptor
FileProvider --> FileVersionDescriptor
FileProvider --> FileMetadata : typicalReturn
FileUpload ..> FileMetadata : creates
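As a usage illustration of the contract above, an upload-session round trip might look roughly like this. The input field names and the import path are assumptions; only the method names and the `(input, ctx)` calling convention come from the diagram.

```ts
import type { FileProvider, FileProviderContext } from '@superfunctions/files'; // assumed path

// Sketch: create a session, upload one part via a signed URL, then finalise.
async function uploadOneShot(provider: FileProvider, data: Uint8Array, ctx: FileProviderContext) {
  const session = await provider.createUploadSession(
    { fileName: 'report.pdf', size: data.byteLength, mimeType: 'application/pdf' },
    ctx,
  );

  const part = await provider.signUploadPart(
    { uploadSessionId: session.uploadSessionId, partNumber: 1 },
    ctx,
  );
  await fetch(part.url, { method: 'PUT', headers: part.headers, body: data });

  // Returns the durable identifiers: { fileId, versionId }.
  return provider.completeUploadSession({ uploadSessionId: session.uploadSessionId }, ctx);
}
```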
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough

Adds multiple new packages: file-provider types/tests, a rate-limiting middleware, a storage core (types, conformance, router, errors, s3-compatible), and adapters for Azure, GCS, S3, R2, MinIO, and local filesystem, each with package manifests, tsconfigs, Vitest configs, implementations, and comprehensive tests.

Changes
Sequence Diagram(s):

sequenceDiagram
participant Client
participant RoutedAdapter
participant Adapter
participant Backend
Client->>RoutedAdapter: signDownloadUrl({ key, target? })
RoutedAdapter->>RoutedAdapter: resolve target (input || default)
RoutedAdapter->>Adapter: signDownloadUrl({ key })
Adapter->>Backend: generate presigned URL / SAS / signer call
Backend-->>Adapter: { url, headers?, expiresAt }
Adapter-->>RoutedAdapter: { url, headers?, expiresAt }
RoutedAdapter-->>Client: signed URL response
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Review Summary by Qodo

Add storage, files, and middleware shared packages with multi-cloud adapter support
Walkthroughs — Description

• Add foundational shared packages for storage, files, and middleware to the monorepo
• Implement comprehensive storage adapter system with support for multiple cloud providers:
  - **AWS S3** adapter with multipart uploads and signed URLs
  - **Azure Blob Storage** adapter with multiple authentication methods
  - **Google Cloud Storage** adapter with streaming support
  - **Cloudflare R2** adapter with jurisdiction support
  - **MinIO** adapter for S3-compatible deployments
  - **Local filesystem** adapter with security hardening (path traversal prevention, symlink protection)
• Provide storage routing layer for multi-target deployments with capability merging
• Implement rate-limiting middleware with three algorithms: fixed-window, sliding-window, and token-bucket
• Define file provider interface and types for upload session management
• Include comprehensive test suites for all adapters and middleware with 30+ test cases per component
• Add conformance testing framework to validate adapter implementations
• Configure TypeScript, Vitest, and package metadata for all new packages

Diagram:

flowchart LR
A["Storage Base<br/>Types & Interfaces"] --> B["S3-Compatible<br/>Factory"]
B --> C["AWS S3<br/>Adapter"]
B --> D["MinIO<br/>Adapter"]
B --> E["Cloudflare R2<br/>Adapter"]
A --> F["Azure Blob<br/>Adapter"]
A --> G["GCS<br/>Adapter"]
A --> H["Local FS<br/>Adapter"]
C --> I["Storage Router<br/>Multi-target"]
D --> I
E --> I
F --> I
G --> I
H --> I
J["Files Package<br/>Types"] --> K["FileProvider<br/>Interface"]
L["Middleware<br/>Rate Limiter"] --> M["Fixed/Sliding/<br/>Token-Bucket"]
A --> N["Conformance<br/>Testing"]
File Changes

1. packages/storage-azure/src/adapter.ts
Code Review by Qodo
Warning Review the following alerts detected in dependencies. According to your organization's Security Policy, it is recommended to resolve "Warn" alerts. Learn more about Socket for GitHub.
Hey - I've found 3 issues, and left some high level feedback:

- In `storage-local`'s `resolvePath`/`validateResolvedPath`, the `startsWith(root + '/')` checks assume POSIX-style separators and may misbehave on Windows paths; consider using `path.sep` or `path.relative` to enforce containment in a cross-platform way.
- The Azure adapter's `openUploadStream` buffers the entire blob in memory before a single `upload`, which can be problematic for large objects; you may want to switch to a chunked/streaming upload (e.g., `uploadStream`) to avoid unbounded memory usage.
- In the Azure multipart implementation, `completeMultipartUpload` infers block IDs from `etag` with a loose heuristic; consider making the block ID contract explicit (e.g., documenting that clients must persist the generated block IDs) or storing block IDs separately to avoid accidental mismatch with provider-returned etags.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `storage-local`'s `resolvePath`/`validateResolvedPath`, the `startsWith(root + '/')` checks assume POSIX-style separators and may misbehave on Windows paths; consider using `path.sep` or `path.relative` to enforce containment in a cross-platform way.
- The Azure adapter’s `openUploadStream` buffers the entire blob in memory before a single `upload`, which can be problematic for large objects; you may want to switch to a chunked/streaming upload (e.g., `uploadStream`) to avoid unbounded memory usage.
- In the Azure multipart implementation, `completeMultipartUpload` infers block IDs from `etag` with a loose heuristic; consider making the block ID contract explicit (e.g., documenting that clients must persist the generated block IDs) or storing block IDs separately to avoid accidental mismatch with provider-returned etags.
## Individual Comments
### Comment 1
<location path="packages/storage-local/src/adapter.ts" line_range="62-64" />
<code_context>
+ function resolvePath(key: string): string {
+ validateKey(key);
+
+ const resolved = join(normalizedRoot, normalize(key));
+
+ if (!resolved.startsWith(normalizedRoot + '/') && resolved !== normalizedRoot) {
+ throw new Error('Invalid key: path traversal detected');
+ }
</code_context>
<issue_to_address>
**🚨 issue (security):** Path prefix checks are not portable and can be bypassed on Windows due to hardcoded '/' separators.
These `startsWith(normalizedRoot + '/')` checks rely on POSIX `/`, but `join`/`realpath` return platform-specific paths. On Windows they use `\`, so the prefix check can fail or be bypassed, weakening the traversal protection. Please use `path.resolve` and `path.sep`-aware logic (e.g. `path.relative` or a normalized `root` that ends with `path.sep` and a corresponding `real.startsWith(root)` check). The same adjustment is needed in `validateResolvedPath` and the directory check in `openUploadStream`.
</issue_to_address>
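For illustration, a separator-agnostic containment check along the lines suggested here could look like the sketch below; the helper name is made up and this is not the adapter's actual code.

```ts
import { isAbsolute, relative, resolve, sep } from 'node:path';

// Sketch: reject keys that resolve outside rootDir, independent of '/' vs '\' separators.
function assertInsideRoot(rootDir: string, key: string): string {
  const root = resolve(rootDir);
  const target = resolve(root, key);
  const rel = relative(root, target);

  // Escapes the root if the relative path starts with '..' or lands on another drive (Windows).
  if (rel === '..' || rel.startsWith(`..${sep}`) || isAbsolute(rel)) {
    throw new Error('Invalid key: path traversal detected');
  }
  return target;
}
```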
### Comment 2
<location path="packages/storage-azure/src/adapter.ts" line_range="294-317" />
<code_context>
+ // No explicit cleanup needed for stateless tracking
+ },
+
+ async openUploadStream(input: { key: string; contentType?: string }): Promise<WritableStream> {
+ const blockBlobClient = containerClient.getBlockBlobClient(input.key);
+
+ let buffer: Buffer[] = [];
+
+ return new WritableStream({
+ async write(chunk) {
+ const buf = chunk instanceof Uint8Array ? Buffer.from(chunk) : chunk;
+ buffer.push(buf);
+ },
+ async close() {
+ const content = Buffer.concat(buffer);
+ await blockBlobClient.upload(content, content.length, {
+ blobHTTPHeaders: {
+ blobContentType: input.contentType,
+ },
+ });
+ buffer = [];
+ },
+ abort() {
</code_context>
<issue_to_address>
**suggestion (performance):** The Azure upload stream buffers the entire object in memory before uploading, which can be problematic for large files.
This accumulates all chunks into memory and only uploads once in `close()`, which causes high memory usage and GC pressure for large uploads and undermines the streaming API. Since `BlockBlobClient` supports streaming (e.g., `uploadStream`) and chunked uploads, consider sending chunks directly to Azure as they arrive instead of buffering the entire payload first.
```suggestion
async openUploadStream(input: { key: string; contentType?: string }): Promise<WritableStream> {
const blockBlobClient = containerClient.getBlockBlobClient(input.key);
// Stream to Azure by staging each chunk as a block and committing at close.
// This avoids buffering the entire payload in memory.
const blockIds: string[] = [];
let blockIndex = 0;
const blockIdPrefix = Date.now().toString(36) + "-" + Math.random().toString(36).slice(2, 8);
return new WritableStream<Uint8Array | Buffer>({
async write(chunk) {
const buf = chunk instanceof Uint8Array ? Buffer.from(chunk) : chunk;
const rawId = `${blockIdPrefix}-${blockIndex.toString().padStart(8, "0")}`;
const blockId = Buffer.from(rawId).toString("base64");
blockIndex += 1;
// Stage the current chunk as a block immediately, so we don't accumulate
// large amounts of data in memory.
await blockBlobClient.stageBlock(blockId, buf, buf.length);
blockIds.push(blockId);
},
async close() {
// Commit all staged blocks as the final blob.
await blockBlobClient.commitBlockList(blockIds, {
blobHTTPHeaders: {
blobContentType: input.contentType,
},
});
},
abort() {
// No explicit cleanup: Azure will garbage-collect uncommitted blocks.
blockIds.length = 0;
},
});
},
```
</issue_to_address>
### Comment 3
<location path="packages/middleware/src/rate-limit.ts" line_range="49-58" />
<code_context>
+ lastRefill: number;
+}
+
+function createInMemoryKVStore(): KVStoreAdapter {
+ const store = new Map<string, { value: string; expiresAt?: number }>();
+
+ return {
+ async get(key: string): Promise<string | null> {
+ const entry = store.get(key);
+ if (!entry) return null;
+ if (entry.expiresAt && Date.now() > entry.expiresAt) {
+ store.delete(key);
+ return null;
+ }
+ return entry.value;
+ },
+ async set(input: { key: string; value: string; ttlSeconds?: number }): Promise<void> {
+ store.set(input.key, {
+ value: input.value,
+ expiresAt: input.ttlSeconds ? Date.now() + input.ttlSeconds * 1000 : undefined,
+ });
+ },
</code_context>
<issue_to_address>
**suggestion (bug_risk):** In-memory KV TTL uses Date.now while the rate limiter supports a custom time source, which can cause inconsistent behavior.
The rate limiter uses an injectable `now` function, but both `createInMemoryKVStore` and the adapter-backed KV use `Date.now()` for TTLs. This can cause expirations to drift from rate calculations when `now` is overridden (e.g. in tests or custom time sources). Consider passing the same `now` function into the KV layer so TTLs and rate limiting share a consistent time source.
Suggested implementation:
```typescript
function createInMemoryKVStore(now: () => number = Date.now): KVStoreAdapter {
const store = new Map<string, { value: string; expiresAt?: number }>();
return {
async get(key: string): Promise<string | null> {
const entry = store.get(key);
if (!entry) return null;
const currentTime = now();
if (entry.expiresAt && currentTime > entry.expiresAt) {
store.delete(key);
return null;
}
return entry.value;
},
async set(input: { key: string; value: string; ttlSeconds?: number }): Promise<void> {
const currentTime = now();
store.set(input.key, {
value: input.value,
expiresAt: input.ttlSeconds ? currentTime + input.ttlSeconds * 1000 : undefined,
});
},
async delete(key: string): Promise<void> {
store.delete(key);
},
};
}
```
To fully align TTL handling with the rate limiter's time source, you should also:
1. Identify where `createInMemoryKVStore()` is called in this file (and possibly others) and pass in the same `now` function that the rate limiter uses, e.g. `createInMemoryKVStore(now)` instead of `createInMemoryKVStore()`.
2. If there is a separate "adapter-backed" KV implementation in this file (or related files) that uses `Date.now()` for TTLs, refactor it similarly to accept a `now: () => number` parameter and use that instead of `Date.now()`.
3. Ensure any public factory or configuration API for the rate limiter that exposes `now` is also used to configure the KV store so they always share the same time source in tests and custom time setups.
</issue_to_address>
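As a rough usage sketch of the time-source suggestion above (the `createRateLimiter` factory name and exact config fields are assumptions based on the types in this PR, not a confirmed API):

```ts
// Sketch: share one injectable clock between the limiter and its KV store.
// Assumes createInMemoryKVStore and createRateLimiter are exported by the middleware package.
async function demo() {
  let fakeTime = 0;
  const now = () => fakeTime; // tests advance this deterministically

  const kv = createInMemoryKVStore(now); // TTL expiry now uses the same clock
  const limiter = createRateLimiter({ windowMs: 60_000, maxRequests: 10, persistence: kv, now });

  await limiter.check({ key: 'user:123', windowSeconds: 60, limit: 10 });

  // Advancing the fake clock moves TTL expiry and window math together.
  fakeTime += 61_000;
  await limiter.check({ key: 'user:123', windowSeconds: 60, limit: 10 });
}
```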
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 13559f7c6a
```ts
const blockId = p.etag.startsWith('block-') || p.etag.match(/^[A-Za-z0-9+/=]+$/)
  ? p.etag
  : generateBlockId(p.partNumber);
```
Use block IDs, not ETags, in Azure multipart completion
The multipart completion path currently treats any base64/alphanumeric etag as a block ID, but real part ETags commonly also match that pattern. When clients pass back upload ETags (rather than the generated block IDs), commitBlockList receives identifiers that were never uploaded by signMultipartUploadPartUrl, which causes Azure to reject completion (e.g., invalid block list). This should derive or validate block IDs from partNumber instead of trusting generic ETag text.
```ts
if (!dirRealPath.startsWith(root + '/') && dirRealPath !== root) {
  throw new Error('Invalid key: path traversal detected via symlink');
}
// ...
const nodeStream = createWriteStream(filePath);
```
Validate upload target path before local write stream creation
The upload path checks that the parent directory resolves under rootDir, but it does not validate the final file path before createWriteStream. If a symlink file already exists at that key and points outside the root, Node follows it and writes outside the storage sandbox, bypassing the traversal protection used in the read/delete paths. Add a no-follow file-level check (e.g., lstat/realpath guard) before opening the writer.
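One possible shape for that guard is sketched below (assumed helper name; not the adapter's real code):

```ts
import { lstat } from 'node:fs/promises';

// Sketch: refuse to write through a pre-existing symlink at the target key.
async function assertNotSymlink(filePath: string): Promise<void> {
  const stats = await lstat(filePath).catch((err: NodeJS.ErrnoException) => {
    if (err.code === 'ENOENT') return null; // nothing exists at the key yet — safe to create
    throw err;
  });
  if (stats?.isSymbolicLink()) {
    throw new Error('Invalid key: refusing to write through a symlink');
  }
}

// Usage idea: await assertNotSymlink(filePath) before createWriteStream(filePath).
```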
```ts
return new WritableStream({
  async write(chunk) {
    const buf = chunk instanceof Uint8Array ? Buffer.from(chunk) : chunk;
    buffer.push(buf);
```
Stream Azure uploads instead of buffering whole object
This implementation stores every incoming chunk in memory and concatenates the entire payload on close before uploading. Under large or concurrent uploads, memory usage grows with object size and can trigger process OOMs, which breaks the expected behavior of a proxyStreamingUpload adapter. Switching to Azure's streaming upload API (or piping chunks directly) avoids this unbounded buffering.
```ts
const sortedParts = input.parts.sort((a, b) => a.partNumber - b.partNumber);
const blockList = sortedParts.map(p => {
  // The etag should be the blockId we generated
  // If client recorded blockId as etag, use it; otherwise regenerate
  const blockId = p.etag.startsWith('block-') || p.etag.match(/^[A-Za-z0-9+/=]+$/)
    ? p.etag
    : generateBlockId(p.partNumber);
  return blockId;
});
```
Critical bug: The logic incorrectly handles unencoded block IDs, which will cause Azure API failures.
When p.etag starts with 'block-', it's the raw unencoded string (e.g., 'block-000001'), but Azure requires base64-encoded block IDs. The current code uses this unencoded string directly, which will fail when calling blockBlobClient.commitBlockList().
Fix:

```ts
const blockList = sortedParts.map(p => {
  // If etag is already base64-encoded, use it
  if (p.etag.match(/^[A-Za-z0-9+/=]+$/) && !p.etag.startsWith('block-')) {
    return p.etag;
  }
  // Otherwise regenerate the properly encoded blockId
  return generateBlockId(p.partNumber);
});
```

The test at line 210 demonstrates this issue by passing etag: 'block-000001' (unencoded), which would be accepted but fail at Azure's API.
Greptile Summary

This PR introduces six new shared packages to the monorepo (…)
Confidence Score: 2/5

Not safe to merge as-is — the Azure adapter has a P0 bug that will cause runtime 400 errors when completing multipart uploads against a real Azure endpoint. The overall architecture is clean and the non-Azure packages look correct. However, the Azure completeMultipartUpload block-ID bug is a definite runtime failure (not a potential one): the condition branches to the decoded string form when Azure requires base64, and the fallback that produces the correct form is effectively unreachable. This needs to be fixed before downstream code can rely on the Azure adapter. See packages/storage-azure/src/adapter.ts (lines 277-287 and 294-316) and packages/storage-azure/src/adapter.test.ts (line 211).

Important Files Changed
Flowchart:

%%{init: {'theme': 'neutral'}}%%
flowchart TD
Client([Caller]) --> RA[RoutedStorageAdapter]
RA -- target lookup --> AM{adapter map}
AM -- s3 / r2 / minio --> S3C[S3CompatibleAdapter]
AM -- gcs --> GCS[GCSAdapter]
AM -- azure --> AZ[AzureAdapter]
AM -- local --> LOC[LocalAdapter]
S3C --> S3SDK[@aws-sdk/client-s3]
GCS --> GCSSDK[@google-cloud/storage]
AZ --> AZSDK[@azure/storage-blob]
LOC --> FS[node:fs]
subgraph Capabilities checked at runtime
S3C -. signUploadUrl .-> S3SDK
S3C -. signDownloadUrl .-> S3SDK
S3C -. createMultipartUpload .-> S3SDK
AZ -. signUploadUrl .-> AZSDK
AZ -. openUploadStream .-> AZSDK
AZ -. completeMultipartUpload --> BUG([⚠ blockId decoded not base64])
end
MW[RateLimiter] -. check / reset .-> KV[KVStoreAdapter]
KV -- in-memory default --> MAP[(Map)]
KV -- external --> EXT[(Redis / DB)]
```ts
const blockList = sortedParts.map(p => {
  // The etag should be the blockId we generated
  // If client recorded blockId as etag, use it; otherwise regenerate
  const blockId = p.etag.startsWith('block-') || p.etag.match(/^[A-Za-z0-9+/=]+$/)
    ? p.etag
    : generateBlockId(p.partNumber);
  return blockId;
});
// ...
await blockBlobClient.commitBlockList(blockList);
},
```
completeMultipartUpload commits wrong block IDs to Azure
The condition on line 280 is p.etag.startsWith('block-') || p.etag.match(/^[A-Za-z0-9+/=]+$/). The second branch matches virtually every string (hex ETags, base64, alphanumeric), so the generateBlockId fallback is unreachable.
More critically, the first branch (p.etag.startsWith('block-')) passes the decoded form (e.g. 'block-000001') straight to commitBlockList. Azure requires block IDs to be base64-encoded. generateBlockId correctly returns the base64 form (e.g. 'YmxvY2stMDAwMDAxIwI='), but the test on line 211 passes etag: 'block-000001' (decoded), triggering the first branch and sending a non-base64 string to Azure — causing a 400 InvalidBlockId error at runtime.
The simplest correct fix is to always derive the block ID from the part number, which guarantees the IDs committed here match those embedded in the signed URLs produced by signMultipartUploadPartUrl:
Suggested change:

```ts
const sortedParts = input.parts.sort((a, b) => a.partNumber - b.partNumber);
const blockList = sortedParts.map(p => generateBlockId(p.partNumber));
await blockBlobClient.commitBlockList(blockList);
```
```ts
function parseBlockId(blockId: string): number {
  // Decode block ID to extract part number
  const decoded = Buffer.from(blockId, 'base64').toString('utf-8');
  const match = decoded.match(/block-(\d+)/);
  return match ? parseInt(match[1], 10) : 0;
}
```
parseBlockId is defined but never called anywhere in the codebase. With the corrected completeMultipartUpload logic above (always regenerating from partNumber) it would remain unused. Consider removing it to keep the module clean.
```ts
async openUploadStream(input: { key: string; contentType?: string }): Promise<WritableStream> {
  const blockBlobClient = containerClient.getBlockBlobClient(input.key);

  let buffer: Buffer[] = [];

  return new WritableStream({
    async write(chunk) {
      const buf = chunk instanceof Uint8Array ? Buffer.from(chunk) : chunk;
      buffer.push(buf);
    },
    async close() {
      const content = Buffer.concat(buffer);
      await blockBlobClient.upload(content, content.length, {
        blobHTTPHeaders: {
          blobContentType: input.contentType,
        },
      });
      buffer = [];
    },
    abort() {
      buffer = [];
    },
  });
```
openUploadStream buffers entire file in memory
The write sink appends every chunk to an in-memory Buffer[] array, and only uploads on close. For any file larger than available heap this will either be very slow (GC pressure) or crash the process.
Consider uploading through a block-blob append stream or using the SDK's uploadStream helper instead:

```ts
import { streamToBuffer } from '@azure/storage-blob'; // or use uploadStream
async openUploadStream(input) {
  const blockBlobClient = containerClient.getBlockBlobClient(input.key);
  const { writable } = blockBlobClient.uploadStream(/* bufferSize, maxConcurrency, options */);
  return writable;
}
```

Alternatively, blockBlobClient.uploadStream(readableStream, ...) accepts a Node.js Readable, which you can bridge from the WritableStream sink without buffering everything.
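For context, that bridge could look roughly like the sketch below. It assumes `@azure/storage-blob`'s `uploadStream(readable, bufferSize, maxConcurrency, options)` signature and is not the adapter's actual implementation.

```ts
import { PassThrough } from 'node:stream';
import type { ContainerClient } from '@azure/storage-blob';

// Sketch: feed WritableStream chunks into a PassThrough that Azure consumes incrementally.
function openUploadStreamSketch(
  containerClient: ContainerClient,
  input: { key: string; contentType?: string },
): WritableStream<Uint8Array> {
  const blockBlobClient = containerClient.getBlockBlobClient(input.key);
  const passThrough = new PassThrough();

  const uploadPromise = blockBlobClient.uploadStream(passThrough, 4 * 1024 * 1024, 4, {
    blobHTTPHeaders: { blobContentType: input.contentType },
  });

  return new WritableStream<Uint8Array>({
    write(chunk) {
      // Respect backpressure before accepting the next chunk.
      if (!passThrough.write(Buffer.from(chunk))) {
        return new Promise<void>((resolve) => passThrough.once('drain', resolve));
      }
    },
    async close() {
      passThrough.end();
      await uploadPromise; // surface upload failures to the caller
    },
    abort(reason) {
      passThrough.destroy(reason instanceof Error ? reason : new Error(String(reason)));
      uploadPromise.catch(() => {}); // avoid an unhandled rejection on abort
    },
  });
}
```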
```ts
await adapterB.completeMultipartUpload!({
  key,
  uploadId,
  parts: [{ partNumber: 1, etag: 'block-000001' }],
});
} catch (err: any) {
  expect(err.message).not.toContain('Invalid uploadId');
}
```
Test passes a decoded block ID that will fail with the real Azure API
The test exercises completeMultipartUpload with etag: 'block-000001', which is the decoded form of the block ID. After the fix above (always using generateBlockId), this test will correctly commit YmxvY2stMDAwMDAxIwI= (or equivalent base64) regardless of what the caller passes as etag. Consider updating the test to reflect the expected stateless behaviour:
Suggested change:

```ts
try {
  await adapterB.completeMultipartUpload!({
    key,
    uploadId,
    parts: [{ partNumber: 1, etag: Buffer.from('block-000001').toString('base64') }],
  });
} catch (err: any) {
  expect(err.message).not.toContain('Invalid uploadId');
}
```
```ts
export async function runConformanceTests(
  adapter: StorageAdapter,
  _ctx: ConformanceTestContext = createDefaultContext()
): Promise<ConformanceResult> {
```
_ctx parameter is accepted but never read
runConformanceTests declares a _ctx: ConformanceTestContext parameter (the _ prefix signals intentional non-use), but the current implementation performs only static structural checks and never exercises actual I/O using the supplied key or test data. If live-I/O conformance tests are planned, the parameter will be needed; if the function will remain static-only, removing the parameter simplifies the API and avoids confusing callers who pass a custom context expecting it to take effect.
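If live checks are the eventual goal, the shape might be roughly as follows (illustrative only; the round-trip steps and input shapes are assumptions layered on the `StorageAdapter` and `ConformanceTestContext` types):

```ts
// Sketch: use the supplied context for a real upload → stat → delete round trip.
async function runLiveChecks(adapter: StorageAdapter, ctx: ConformanceTestContext): Promise<string[]> {
  const passed: string[] = [];

  if (adapter.capabilities.signedUploadUrls) {
    const { url, headers } = await adapter.signUploadUrl({
      key: ctx.testKey,
      contentType: ctx.testContentType,
    });
    await fetch(url, { method: 'PUT', headers, body: ctx.testData });
    passed.push('signed-upload');
  }

  const stat = await adapter.statObject({ key: ctx.testKey });
  if (stat.size === ctx.testData.byteLength) passed.push('stat-object');

  await adapter.deleteObject({ key: ctx.testKey });
  passed.push('delete-object');

  return passed;
}
```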
21 issues found across 61 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="packages/storage-local/src/adapter.ts">
<violation number="1" location="packages/storage-local/src/adapter.ts:64">
P2: Path traversal checks use hardcoded `'/'` separator (e.g., `normalizedRoot + '/'`), but `path.resolve`/`realpath` return `'\\'` on Windows. On that platform, the `startsWith` guard will never match, silently disabling traversal protection. Use `path.sep` or `path.relative`-based logic so the check is correct on both POSIX and Windows. This affects `resolvePath`, `validateResolvedPath`, and the directory check in `openUploadStream`.</violation>
<violation number="2" location="packages/storage-local/src/adapter.ts:140">
P1: The upload path validates the parent directory against the root but does not validate the final file path before opening the write stream. Unlike `statObject` and `deleteObject` (which call `validateResolvedPath(filePath)`), a pre-existing symlink at `filePath` pointing outside the root will be followed by `createWriteStream`, writing data outside the storage sandbox. Add a `validateResolvedPath(filePath)` check (or equivalent `lstat`-based no-follow guard) before creating the write stream.</violation>
<violation number="3" location="packages/storage-local/src/adapter.ts:146">
P2: Event listener leak: when `drain` fires, the `once('error', reject)` listener from the same promise is never cleaned up (and vice versa). Over many back-pressured writes, orphaned listeners accumulate on the node stream. Remove the complementary listener when either event fires.</violation>
<violation number="4" location="packages/storage-local/src/adapter.ts:205">
P2: Missing backpressure handling: the node readable stream pushes data into the `ReadableStream` controller unconditionally. For large files with a slow consumer, memory usage grows without bound. Pause the node stream by default and use the `pull` callback to resume it on demand.</violation>
</file>
<file name="packages/storage-azure/README.md">
<violation number="1" location="packages/storage-azure/README.md:21">
P3: The usage example redeclares `const adapter` multiple times in one code block, which makes the snippet invalid TypeScript/JavaScript when copied.</violation>
</file>
<file name="packages/storage/src/conformance.ts">
<violation number="1" location="packages/storage/src/conformance.ts:29">
P3: `runConformanceTests` eagerly creates a default test context that is never used, introducing dead code and unnecessary per-call allocations.</violation>
</file>
<file name="packages/files/src/__tests__/file-provider-conformance.test.ts">
<violation number="1" location="packages/files/src/__tests__/file-provider-conformance.test.ts:4">
P1: This conformance suite only tests a local mock provider, so it can pass while real FileProvider implementations are broken.</violation>
<violation number="2" location="packages/files/src/__tests__/file-provider-conformance.test.ts:125">
P2: This assertion defaults to `true`, so missing/undefined completion results still pass.</violation>
</file>
<file name="packages/storage-minio/src/adapter.test.ts">
<violation number="1" location="packages/storage-minio/src/adapter.test.ts:140">
P2: This test is misleading: it claims to verify `forcePathStyle` but only asserts the adapter name, so path-style addressing regressions would not be caught.</violation>
</file>
<file name="packages/middleware/src/rate-limit.ts">
<violation number="1" location="packages/middleware/src/rate-limit.ts:56">
P2: The default in-memory KV store hardcodes `Date.now()` for TTL checks, but the rate limiter's state calculations use the injectable `now()` clock. When a custom `now()` is provided (e.g., for testing), state can be prematurely evicted by the KV store — resetting rate limits unexpectedly. Consider passing the `now` function into `createInMemoryKVStore` so its TTL logic uses the same clock.</violation>
</file>
<file name="packages/storage-azure/src/adapter.test.ts">
<violation number="1" location="packages/storage-azure/src/adapter.test.ts:207">
P2: This test performs a real Azure network operation with fake credentials, making the suite flaky and environment-dependent. Mock `commitBlockList` (or avoid calling `completeMultipartUpload` here) so the test stays deterministic.</violation>
</file>
<file name="packages/storage-r2/src/adapter.test.ts">
<violation number="1" location="packages/storage-r2/src/adapter.test.ts:15">
P3: This assertion duplicates an existing adapter-name check and does not test S3-compatible API behavior described by the test.</violation>
<violation number="2" location="packages/storage-r2/src/adapter.test.ts:97">
P2: This test name claims content-type constraint coverage, but the assertion only re-checks `signedUploadUrls` and does not validate the claimed behavior.</violation>
</file>
<file name="packages/storage-azure/src/adapter.ts">
<violation number="1" location="packages/storage-azure/src/adapter.ts:276">
P2: `Array.sort()` mutates `input.parts` in-place. Use a copy to avoid unexpected side effects on the caller's data.</violation>
<violation number="2" location="packages/storage-azure/src/adapter.ts:280">
P1: Block ID lookup logic will produce IDs that don't match the ones staged during upload, causing `commitBlockList` to fail. The regex `^[A-Za-z0-9+/=]+$` is too permissive (matches real Azure ETags), and the `'block-'` prefix check uses the decoded form instead of the base64-encoded form. Since block IDs are deterministic from `partNumber`, always regenerate them.</violation>
<violation number="3" location="packages/storage-azure/src/adapter.ts:297">
P2: `openUploadStream` accumulates all chunks in memory and uploads the full concatenated buffer on `close()`. For large files this causes unbounded memory growth and can trigger OOM, undermining the streaming API contract. Consider using Azure's `stageBlock`/`commitBlockList` pattern to send chunks to Azure as they arrive, avoiding full-payload buffering.</violation>
</file>
<file name="packages/storage/src/internal/s3-compatible.ts">
<violation number="1" location="packages/storage/src/internal/s3-compatible.ts:53">
P2: Avoid exposing the shared default capabilities object directly; clone it to prevent cross-instance mutation side effects.</violation>
<violation number="2" location="packages/storage/src/internal/s3-compatible.ts:190">
P2: Do not mutate the caller-provided `parts` array when completing multipart uploads.</violation>
</file>
<file name="packages/storage-minio/src/adapter.ts">
<violation number="1" location="packages/storage-minio/src/adapter.ts:28">
P2: Scheme detection is too broad; `startsWith('http')` misclassifies hostnames that begin with `http` and can generate an invalid S3 endpoint.</violation>
</file>
<file name="packages/storage-local/package.json">
<violation number="1" location="packages/storage-local/package.json:32">
P1: Avoid using `*` for a published runtime dependency; pin to a compatible semver range to prevent pulling incompatible major versions.</violation>
</file>
<file name="packages/storage-gcs/src/adapter.ts">
<violation number="1" location="packages/storage-gcs/src/adapter.ts:130">
P2: Orphaned event listeners accumulate on the Node stream. When `drain` fires, the `error` listener from that write remains, and vice-versa. Over many backpressured writes this leaks listeners and can trigger `MaxListenersExceeded` warnings. Remove the losing listener when the winning one fires.</violation>
</file>
```ts
import { describe, expect, it } from 'vitest';
import type { FileProvider } from '../types.js';

function createConformantProvider(): FileProvider {
```
P1: This conformance suite only tests a local mock provider, so it can pass while real FileProvider implementations are broken.
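One common way to address this, sketched with an invented export name and assumed input fields (not something the package currently provides), is to publish the suite as a factory that any provider package can run against its own implementation:

```ts
import { describe, expect, it } from 'vitest';
import type { FileProvider, FileProviderContext } from '../types.js';

// Hypothetical shared suite: adapter packages import this and pass their own factory.
export function describeFileProviderConformance(
  name: string,
  makeProvider: () => FileProvider | Promise<FileProvider>,
  ctx: FileProviderContext = { principalId: 'user_1', tenantId: 'org_1', requestId: 'req_1' },
): void {
  describe(`FileProvider conformance: ${name}`, () => {
    it('creates and completes an upload session', async () => {
      const provider = await makeProvider();
      const session = await provider.createUploadSession(
        { fileName: 'a.txt', size: 1, mimeType: 'text/plain' },
        ctx,
      );
      expect(session.uploadSessionId).toBeTruthy();

      const version = await provider.completeUploadSession(
        { uploadSessionId: session.uploadSessionId },
        ctx,
      );
      expect(version.fileId).toBeTruthy();
    });
  });
}
```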
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/files/src/__tests__/file-provider-conformance.test.ts, line 4:
<comment>This conformance suite only tests a local mock provider, so it can pass while real FileProvider implementations are broken.</comment>
<file context>
@@ -0,0 +1,149 @@
+import { describe, expect, it } from 'vitest';
+import type { FileProvider } from '../types.js';
+
+function createConformantProvider(): FileProvider {
+ return {
+ async createUploadSession() {
</file context>
```ts
describe('Adapter instantiation (STORAGE-001)', () => {
  it('should create adapter with correct name', () => {
    expect(adapter.name).toBe('r2');
```
P3: This assertion duplicates an existing adapter-name check and does not test S3-compatible API behavior described by the test.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/storage-r2/src/adapter.test.ts, line 15:
<comment>This assertion duplicates an existing adapter-name check and does not test S3-compatible API behavior described by the test.</comment>
<file context>
@@ -0,0 +1,136 @@
+
+ describe('Adapter instantiation (STORAGE-001)', () => {
+ it('should create adapter with correct name', () => {
+ expect(adapter.name).toBe('r2');
+ });
+
</file context>
1 issue found across 11 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="packages/storage-azure/src/adapter.ts">
<violation number="1" location="packages/storage-azure/src/adapter.ts:316">
P1: Unhandled promise rejection: `abort()` destroys the PassThrough stream, which causes `uploadPromise` (from `blockBlobClient.uploadStream`) to reject, but that rejection is never caught. This will trigger a Node.js `unhandledRejection` warning (or crash, depending on the runtime configuration). Silence the rejection in `abort` by catching it.</violation>
</file>
```ts
function createInMemoryKVStore(now: () => number): KVStoreAdapter {
  const store = new Map<string, { value: string; expiresAt?: number }>();

  return {
    async get(key: string): Promise<string | null> {
      const entry = store.get(key);
      if (!entry) return null;
      if (entry.expiresAt && now() > entry.expiresAt) {
        store.delete(key);
        return null;
      }
      return entry.value;
    },
    async set(input: { key: string; value: string; ttlSeconds?: number }): Promise<void> {
      store.set(input.key, {
        value: input.value,
        expiresAt: input.ttlSeconds ? now() + input.ttlSeconds * 1000 : undefined,
      });
    },
    async delete(key: string): Promise<void> {
      store.delete(key);
    },
  };
}
```
createInMemoryKVStore requires a now: () => number parameter but is exported and used without it. The function signature expects now as a required parameter, but the export at line 269 and tests at line 182 call it without arguments, which will cause a runtime error.
```ts
// Fix: Make the parameter optional with a default
export function createInMemoryKVStore(now: () => number = Date.now): KVStoreAdapter {
  const store = new Map<string, { value: string; expiresAt?: number }>();
  // ...
}
```
Actionable comments posted: 6
🧹 Nitpick comments (24)
packages/files/src/types.ts (1)
18-18: Use generic return types to replace `Promise<unknown>` in three FileProvider methods. Lines 18, 41, and 42 currently return
Promise<unknown>, forcing downstream consumers to cast aggressively. Generic parameters withunknowndefaults (<TStatus = unknown>,<TFile = unknown>,<TList = unknown>) maintain backward compatibility while enabling implementers to specify actual return types and improving compile-time safety.♻️ Minimal non-breaking direction (generic return types)
- getUploadSessionStatus(input: { uploadSessionId: string }, ctx: FileProviderContext): Promise<unknown>; + getUploadSessionStatus<TStatus = unknown>( + input: { uploadSessionId: string }, + ctx: FileProviderContext + ): Promise<TStatus>; ... - getFile(input: { fileId: string; versionId?: string }, ctx: FileProviderContext): Promise<unknown>; - listFiles(input: { cursor?: string; limit?: number }, ctx: FileProviderContext): Promise<unknown>; + getFile<TFile = unknown>( + input: { fileId: string; versionId?: string }, + ctx: FileProviderContext + ): Promise<TFile>; + listFiles<TList = unknown>( + input: { cursor?: string; limit?: number }, + ctx: FileProviderContext + ): Promise<TList>;🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/files/src/types.ts` at line 18, Update the three FileProvider method signatures to use generic return types with unknown defaults instead of Promise<unknown>: add a generic type parameter to getUploadSessionStatus (e.g., <TStatus = unknown>) and change its return type to Promise<TStatus>, add <TFile = unknown> to getFile and return Promise<TFile>, and add <TList = unknown> to listFiles (or the corresponding method) and return Promise<TList>; this preserves backward compatibility while allowing implementers to specify concrete return types for getUploadSessionStatus, getFile, and listFiles.packages/storage/src/index.test.ts (1)
4-7: Consider asserting concrete exports, not just module presence.
`toBeDefined()` is a very weak signal for an entrypoint test.

🔍 Stronger smoke assertion

```diff
 describe('storage', () => {
   it('should export types', async () => {
     const mod = await import('./index.js');
-    expect(mod).toBeDefined();
+    expect(mod).toBeDefined();
+    expect(Object.keys(mod).length).toBeGreaterThan(0);
   });
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/storage/src/index.test.ts` around lines 4 - 7: the test in it('should export types') only checks that the module imported from './index.js' is defined; instead assert the actual named exports, e.g. expect(mod).toHaveProperty('<ExportName>') or expect(Object.keys(mod)).toEqual(expect.arrayContaining(['<ExportA>', '<ExportB>'])), using the real export identifiers your package exposes.
packages/files/src/__tests__/file-provider-conformance.test.ts (1)
14-14: Reduce `as any` usage to keep conformance tests type-sensitive.
Right now the suite can mask interface drift because many return values/results are explicitly cast to `any`.

♻️ Example tightening (pattern to apply across the suite)

```diff
-    const session = await provider.createUploadSession(
+    const session = await provider.createUploadSession(
       {
         policy: 'user-avatar',
         fileName: 'avatar.png',
         size: 3,
         mimeType: 'image/png',
       },
       { principalId: 'user_123', tenantId: 'org_123', requestId: 'req_001' },
-    ) as any;
+    );

-  async completeUploadSession() {
-    return { fileId: 'file_0001', versionId: 'ver_0001' };
+  async completeUploadSession() {
+    return {
+      fileId: 'file_0001',
+      versionId: 'ver_0001',
+    } satisfies Awaited<ReturnType<FileProvider['completeUploadSession']>>;
   },
```

Also applies to: 25-25, 28-28, 31-31, 43-43, 46-46, 66-67, 84-85, 103-104, 114-115, 123-124, 134-135, 143-144

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/files/src/__tests__/file-provider-conformance.test.ts` at line 14: several mocks/return objects in the conformance tests are cast to `any`, which masks type drift; replace those casts with explicit, type-safe typings such as the library's FileProvider interface or Partial<FileProvider> for stubs (casting via `as FileProvider` only when necessary), or implement minimal typed mocks that satisfy the required methods. Apply the change to all the listed occurrences so the suite remains type-sensitive.
packages/storage-gcs/src/index.ts (1)
1-1: Prefer explicit entrypoint exports over wildcard.
`export *` can accidentally expose new symbols when adapter.ts changes. Explicit exports make versioning safer.

Suggested change

```diff
-export * from './adapter.js';
+export { createGCSStorageAdapter, type GCSStorageConfig } from './adapter.js';
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/storage-gcs/src/index.ts` at line 1: the file re-exports everything from adapter.js with `export *`, which can accidentally expose new symbols; replace the wildcard with explicit named exports matching adapter.js's actual public API so only intended symbols are exposed.
packages/middleware/src/index.test.ts (1)
11-11: Rename test title to match runtime behavior.
Line 11 mentions interface export, but the test validates method presence on an instance. Consider wording like "should expose `check` and `reset` methods".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/middleware/src/index.test.ts` at line 11: rename the test currently titled "should export RateLimiter interface" to something like "should expose `check` and `reset` methods", since the assertions check method presence on a RateLimiter instance.
packages/storage-azure/README.md (1)
44-53: Clarify `uploadId` as an opaque value to avoid integration mistakes.
The multipart section is good; a slightly tighter wording helps consumers avoid transforming/regenerating `uploadId`.

✏️ Suggested doc tweak

```diff
-- **uploadId**: Azure commitment placeholder (client-generated)
+- **uploadId**: Opaque multipart session identifier; obtain it once and pass it unchanged to part-signing and completion calls
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/storage-azure/README.md` around lines 44 - 53: clarify in the "Azure Block Blob Semantics" section that uploadId is an opaque client-provided identifier that must not be parsed, transformed, or regenerated by integrators; state explicitly that it is used only to correlate parts and should be treated verbatim by clients and servers.
packages/storage-s3/src/adapter.test.ts (1)
7-14: Consider capturing conformance failure details.
Same suggestion as for the GCS adapter: include failure details in the assertion message to aid debugging when conformance tests fail.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/storage-s3/src/adapter.test.ts` around lines 7 - 14: the test currently asserts that runConformanceTests(adapter) returned ok without surfacing failure details; include the failure information (e.g., result.error or JSON.stringify(result)) in the assertion message so failures produce actionable debugging output.
packages/storage-r2/src/adapter.ts (1)
13-27: Consider validating required config fields.
The adapter accepts `accountId`, `accessKeyId`, and `secretAccessKey` as required fields but doesn't validate them before constructing the endpoint or client. Empty strings would result in malformed URLs or confusing auth errors.

Optional validation

```diff
 export function createR2StorageAdapter(config: R2StorageConfig): StorageAdapter {
   const { accountId, bucket, accessKeyId, secretAccessKey, jurisdiction } = config;

+  if (!accountId || !bucket || !accessKeyId || !secretAccessKey) {
+    throw new Error('R2 adapter requires accountId, bucket, accessKeyId, and secretAccessKey');
+  }
+
   const endpoint = jurisdiction
     ? `https://${accountId}.${jurisdiction}.r2.cloudflarestorage.com`
     : `https://${accountId}.r2.cloudflarestorage.com`;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/storage-r2/src/adapter.ts` around lines 13 - 27: in createR2StorageAdapter, validate that the required config fields (accountId, bucket, accessKeyId, secretAccessKey) are present and non-empty before building the endpoint or instantiating the S3Client; if any are missing or empty, throw a clear error naming the missing field(s), while still accepting the optional jurisdiction as-is when computing the endpoint.
packages/storage-gcs/src/adapter.test.ts (1)
7-13: Consider capturing conformance failure details.
When `result.ok` is `false`, the test fails but doesn't reveal which checks failed. Capturing `result.errors` or `result.details` (if available) in the assertion message would improve debuggability.

Suggested improvement

```diff
 it('should pass conformance tests (method existence)', async () => {
   const adapter = createGCSStorageAdapter({
     bucket: 'test-bucket',
   });
   const result = await runConformanceTests(adapter);
-  expect(result.ok).toBe(true);
+  expect(result.ok, `Conformance failures: ${JSON.stringify(result)}`).toBe(true);
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/storage-gcs/src/adapter.test.ts` around lines 7 - 13: the test only asserts result.ok and loses failure details; include result.errors or result.details in the failure message (or log them before the assertion) so failing checks are visible.
packages/storage-r2/package.json (1)
34-34: Inconsistent internal dependency version specifier.
Same issue as other adapter packages: uses `"*"` instead of `"^0.1.0"` for `@superfunctions/storage`.

Suggested fix

```diff
 "dependencies": {
   "@aws-sdk/client-s3": "^3.600.0",
   "@aws-sdk/s3-request-presigner": "^3.600.0",
-  "@superfunctions/storage": "*"
+  "@superfunctions/storage": "^0.1.0"
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/storage-r2/package.json` at line 34: replace the loose "*" version for the "@superfunctions/storage" dependency with the caret-pinned "^0.1.0" so it is consistent with the other adapter packages and allows semver-compatible updates.
packages/middleware/package.json (1)
36-41: Consider version range for peer dependency.
Using `"*"` for `@superfunctions/db` means any version will satisfy the peer dependency. While this is marked optional, once a consumer includes it, there's no guarantee of API compatibility. Consider specifying a minimum version range (e.g., `">=0.1.0"`) once the db package API stabilizes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/middleware/package.json` around lines 36 - 41: replace the wildcard peer dependency for `@superfunctions/db` with a semantically meaningful minimum version range (e.g., ">=0.1.0" or "^0.1.0") so consumers get an API compatibility guarantee; keep the peerDependenciesMeta optional setting as-is.
packages/storage-s3/package.json (1)
34-34: Inconsistent internal dependency version specifier.
This package uses `"*"` for `@superfunctions/storage`, while packages/storage-local/package.json uses `"^0.1.0"`. Using `"*"` can lead to unexpected breaking changes when the core package is updated. Consider using a consistent semver range like `"^0.1.0"` across all adapter packages for predictable dependency resolution.

Suggested fix

```diff
 "dependencies": {
   "@aws-sdk/client-s3": "^3.600.0",
   "@aws-sdk/s3-request-presigner": "^3.600.0",
-  "@superfunctions/storage": "*"
+  "@superfunctions/storage": "^0.1.0"
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/storage-s3/package.json` at line 34: replace the "*" specifier for `@superfunctions/storage` with a consistent semver range (e.g., "^0.1.0") matching the other adapters so all adapter packages use predictable versioning.
packages/storage-azure/package.json (1)
33-33: Inconsistent internal dependency version specifier.
Same issue as in storage-s3: uses `"*"` instead of a semver range for `@superfunctions/storage`.

Suggested fix

```diff
 "dependencies": {
   "@azure/storage-blob": "^12.18.0",
-  "@superfunctions/storage": "*"
+  "@superfunctions/storage": "^0.1.0"
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/storage-azure/package.json` at line 33: replace the "*" specifier for `@superfunctions/storage` with the semver range used across the repo (e.g., "^0.1.0", or the workspace: range your monorepo uses) so it matches the project's versioning policy and mirrors storage-s3.
packages/storage/src/router.test.ts (1)
68-70: Avoid optional chaining on required-path test calls.
At line 68 and line 127, `?.` can hide missing-method regressions by short-circuiting instead of failing fast. Assert method presence, then call directly.

♻️ Suggested test hardening

```diff
-  await storage.createMultipartUpload?.({ key: 'files/a', target: 'durable' });
-  await storage.createMultipartUpload?.({ key: 'files/b', target: 'temporary' });
-  const descriptor = await storage.signDownloadUrl?.({ key: 'files/b', target: 'temporary', expiresInSeconds: 60 });
+  expect(typeof storage.createMultipartUpload).toBe('function');
+  expect(typeof storage.signDownloadUrl).toBe('function');
+  await storage.createMultipartUpload({ key: 'files/a', target: 'durable' });
+  await storage.createMultipartUpload({ key: 'files/b', target: 'temporary' });
+  const descriptor = await storage.signDownloadUrl({ key: 'files/b', target: 'temporary', expiresInSeconds: 60 });
 ...
-  storage.signDownloadUrl?.({ key: 'files/missing', target: 'temporary', expiresInSeconds: 60 }),
+  storage.signDownloadUrl({ key: 'files/missing', target: 'temporary', expiresInSeconds: 60 }),
```

Also applies to: 127-129

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/storage/src/router.test.ts` around lines 68 - 70: the tests use optional chaining on required storage methods, which can mask regressions; assert the methods exist (e.g., expect(typeof storage.createMultipartUpload).toBe('function')) and then call storage.createMultipartUpload(...) and storage.signDownloadUrl(...) directly so missing-method failures surface immediately; apply the same change around lines 127 - 129.
packages/storage-r2/src/adapter.test.ts (1)
127-134: Convert doc-style assertions into behavior checks (or comments).
These tests currently restate existing assertions and don't validate R2-specific behavior. Consider replacing them with concrete R2 edge-case checks or moving the note to documentation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/storage-r2/src/adapter.test.ts` around lines 127 - 134: the tests inside the describe('R2-specific limitations') block only restate docs (checking adapter.capabilities.signedUploadUrls and adapter.name) rather than validating behavior; replace them with real behavior assertions (e.g., generate a signed upload URL and assert that mismatched Content-Type uploads are rejected or that the URL contains the content-type constraint, or exercise an S3-compatible presigned-URL edge case) or convert the two it blocks into plain comments documenting the limitation.
packages/middleware/src/rate-limit.test.ts (1)
200-205: Make TTL expiration test deterministic with fake timers.
The real sleep at line 204 can introduce CI flakiness. Prefer fake timers + controlled advancement.

♻️ Suggested deterministic version

```diff
 it('expires keys after TTL', async () => {
+  vi.useFakeTimers();
   await store.set({ key: 'key1', value: 'value1', ttlSeconds: 0.01 });
   expect(await store.get('key1')).toBe('value1');
-  await new Promise((resolve) => setTimeout(resolve, 20));
+  vi.advanceTimersByTime(20);
   expect(await store.get('key1')).toBeNull();
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/middleware/src/rate-limit.test.ts` around lines 200 - 205: the "expires keys after TTL" test is flaky due to a real sleep; switch to fake timers (vi.useFakeTimers() at the start of the test, advance with vi.advanceTimersByTime(20), flushing pending microtasks if needed) before asserting store.get('key1') is null, and restore real timers afterwards; apply the change in that test only.
packages/storage-minio/src/adapter.test.ts (1)
139-143: Test does not verify the claimed behavior.
The test title claims to verify "path-style addressing (forcePathStyle)" but only asserts that `adapter.name` equals `'minio'`. This doesn't validate that `forcePathStyle: true` is actually configured on the underlying S3 client.
Consider either:
- Removing this test if path-style addressing is an internal implementation detail
- Actually verifying the configuration (e.g., by exposing it or testing URL output format)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/storage-minio/src/adapter.test.ts` around lines 139 - 143: the test title claims to verify path-style addressing but only checks adapter.name; either remove the test or update it to actually assert forcePathStyle on the underlying S3 client (locate the adapter's S3 client and assert its configuration includes forcePathStyle === true, or call a URL-generation method and assert the returned URL uses path-style host/path format rather than virtual-hosted style).
packages/storage/src/internal/s3-compatible.ts (1)
64-70: Defaulting `size` to 0 when `ContentLength` is undefined may be misleading.
When `HeadObjectCommand` doesn't return `ContentLength` (rare, but possible for some S3-compatible backends), the adapter returns `size: 0`. This could lead consumers to believe the object is empty rather than that the size is unknown.
Consider either:
- Throwing an error if `ContentLength` is missing (stricter)
- Making `size` optional in `StorageObjectStat` to represent "unknown" (breaking change)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/storage/src/internal/s3-compatible.ts` around lines 64 - 70: the code defaults ContentLength to 0, which misrepresents unknown sizes; either propagate unknown size (return size as undefined when response.ContentLength is undefined, make size optional in StorageObjectStat, and update callers to handle it) or, for a strict approach, throw an error from the HeadObjectCommand path when ContentLength is missing. Choose one strategy and apply it consistently across the StorageObjectStat definition and all usages.
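For reference, a minimal sketch of the optional-size direction; the response field names follow the AWS SDK's HeadObject output, and the mapper name and local type are hypothetical stand-ins for the real `StorageObjectStat`:

```typescript
// Hypothetical mapper; propagates "unknown" size instead of defaulting to 0.
interface StatLike {
  key: string;
  size?: number; // undefined = unknown, never coerced to 0
  contentType?: string;
  etag?: string;
}

function toStat(
  key: string,
  head: { ContentLength?: number; ContentType?: string; ETag?: string },
): StatLike {
  return {
    key,
    size: head.ContentLength, // stays undefined when the backend omits ContentLength
    contentType: head.ContentType,
    etag: head.ETag,
  };
}
```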
packages/storage/src/types.ts (1)
88-97: Consider adding type parameters to stream types.
The `WritableStream` and `ReadableStream` types lack type parameters, defaulting to `any`. Adding explicit type parameters would improve type safety for consumers.

♻️ Proposed improvement

```diff
 openUploadStream?(input: {
   key: string;
   target?: StorageTargetName;
   contentType?: string;
-}): Promise<WritableStream>;
+}): Promise<WritableStream<Uint8Array>>;
 openDownloadStream?(input: {
   key: string;
   target?: StorageTargetName;
   range?: { start: number; endInclusive: number };
-}): Promise<ReadableStream>;
+}): Promise<ReadableStream<Uint8Array>>;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/storage/src/types.ts` around lines 88 - 97: parameterize the stream types (e.g., Promise<WritableStream<Uint8Array>> and Promise<ReadableStream<Uint8Array>>, or a generic <T = Uint8Array> on each method) so consumers get a safe default chunk type instead of bare any, updating related type imports/exports as needed.
packages/storage-gcs/src/adapter.ts (2)
58-65: Redundant try-catch block.
The try-catch here simply re-throws the error without any transformation. Since `ignoreNotFound: true` already handles the not-found case, this wrapper adds no value.

♻️ Simplify by removing unnecessary try-catch

```diff
 async deleteObject(input: { key: string }): Promise<void> {
-  try {
-    const file = bucket.file(input.key);
-    await file.delete({ ignoreNotFound: true });
-  } catch (err: unknown) {
-    throw err;
-  }
+  const file = bucket.file(input.key);
+  await file.delete({ ignoreNotFound: true });
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/storage-gcs/src/adapter.ts` around lines 58 - 65: remove the try-catch wrapper in async deleteObject that only rethrows; let errors propagate naturally, since ignoreNotFound already handles missing objects, keeping the bucket.file and file.delete calls as-is.
171-176: Redundant catch block with identical branches.
Both branches of this catch block throw `err`, making the entire try-catch unnecessary since `createNotFoundError()` at line 169 already throws.

♻️ Simplify error handling

```diff
 try {
   const [exists] = await file.exists();
   if (!exists) {
     throw createNotFoundError();
   }
-} catch (err: unknown) {
-  if (err instanceof Error && 'code' in err && err.code === 'STORAGE_NOT_FOUND') {
-    throw err;
-  }
-  throw err;
-}
+} catch (err: unknown) {
+  throw err;
+}
```

Or simply remove the try-catch entirely since it adds no value.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/storage-gcs/src/adapter.ts` around lines 171 - 176: the catch block that inspects and rethrows err is redundant because both branches rethrow identically and createNotFoundError() already throws; delete the catch block (or the encompassing try) and let the error propagate to simplify error handling.
packages/storage-azure/src/adapter.ts (1)
335-340: Redundant catch block.
Similar to the GCS adapter, both branches throw `err`, making the conditional check unnecessary.

♻️ Simplify or remove

```diff
 } catch (err: unknown) {
-  if (err instanceof Error && 'code' in err && err.code === 'STORAGE_NOT_FOUND') {
-    throw err;
-  }
   throw err;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/storage-azure/src/adapter.ts` around lines 335 - 340: the conditional in the catch block is redundant because both branches rethrow err; remove the conditional and either drop the catch entirely to let errors bubble or replace it with a single rethrow.
packages/storage/src/router.ts (1)
32-40: Capability intersection may be too restrictive.
Using `every()` means the routed adapter only advertises a capability if all underlying adapters support it. This could be limiting: for example, if one adapter supports multipart and another doesn't, the router reports `multipart: false` even when the target that supports it is used.
Consider whether this is the intended behavior or if capability checks should be deferred to per-target resolution via `capabilitiesForTarget`.
The current design is safe (it won't advertise unsupported features), but consumers should use `getStorageCapabilities(adapter, target)` for accurate per-target capabilities rather than relying on `adapter.capabilities`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/storage/src/router.ts` around lines 32 - 40: mergeCapabilities currently uses every(), which only advertises a capability if all underlying adapters support it; either (A) change mergeCapabilities to a union/any() if features should be advertised when at least one adapter supports them, or (B) keep the conservative every() behavior but add a clear comment above mergeCapabilities and direct consumers (in docs and at call sites that read adapter.capabilities) to the per-target resolver, capabilitiesForTarget or getStorageCapabilities, for accurate target-specific checks.
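As a usage note, here is a structural sketch of the per-target check the comment recommends; the type shapes are simplified stand-ins for the real exports in @superfunctions/storage:

```typescript
// Structural sketch; the real types live in @superfunctions/storage.
type Caps = { multipart: boolean };
type TargetAware = {
  capabilities: Caps;
  capabilitiesForTarget?: (target: string) => Caps;
};

// Prefer the per-target view over the merged every() intersection,
// falling back to the conservative merged capabilities if unavailable.
function supportsMultipart(adapter: TargetAware, target: string): boolean {
  return (adapter.capabilitiesForTarget?.(target) ?? adapter.capabilities).multipart;
}
```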
packages/middleware/src/rate-limit.ts (1)
151-169: Key lock implementation may leak entries over time.
The lock cleanup at lines 165-167 only deletes the key if the current promise matches. However, if multiple concurrent calls queue up and one completes while another is still waiting, the map entry persists. This isn't a functional bug, but the map could grow unbounded with unique keys.

♻️ Consider using WeakRef or periodic cleanup for long-running processes
For high-cardinality key spaces, consider periodically clearing the `keyLocks` map or using a different approach. In practice, this is likely fine for most use cases since the Promise objects are small.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/middleware/src/rate-limit.ts` around lines 151 - 169: withKeyLock can leak map entries for high-cardinality keys because cleanup only runs in the function's finally and may miss removing the entry when promises chain; attach a cleanup handler to the queued promise itself (e.g., queued.finally(...)) that checks keyLocks.get(key) === queued and deletes the key when queued settles, keeping the existing resolveLock/await behavior. Reference symbols: withKeyLock, keyLocks, queued, resolveLock.
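A minimal sketch of the queued.finally(...) shape the prompt describes; the surrounding signature and chaining strategy are assumptions, not the adapter's actual implementation:

```typescript
const keyLocks = new Map<string, Promise<unknown>>();

async function withKeyLock<T>(key: string, fn: () => Promise<T>): Promise<T> {
  const prior = keyLocks.get(key) ?? Promise.resolve();
  // Chain after the prior holder; swallow its errors so the queue keeps moving.
  const queued = prior.catch(() => undefined).then(fn);
  keyLocks.set(key, queued);
  // Delete the entry only if we are still the tail when we settle,
  // so entries cannot outlive the last queued caller.
  queued.catch(() => undefined).finally(() => {
    if (keyLocks.get(key) === queued) keyLocks.delete(key);
  });
  return queued;
}
```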
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/storage-gcs/package.json`:
- Around line 31-34: Update the storage adapter packages to remove the wildcard
dependency on `@superfunctions/storage`: replace the dependency entry
"@superfunctions/storage": "*" with a semver-constrained version such as
"@superfunctions/storage": "^0.1.0" in each package.json for the storage
adapters (refer to the dependency key "@superfunctions/storage" in
packages/storage-gcs/package.json and the equivalent entries in storage-azure,
storage-minio, storage-r2, and storage-s3) so the published packages retain
semver safety and avoid resolving incompatible upstream releases.
In `@packages/storage-gcs/src/adapter.ts`:
- Around line 151-156: The close() implementation on the adapter can miss
synchronous errors because it calls gcstream.end() before attaching the 'error'
listener and never cleans up listeners; update close() to attach the error
handler before calling gcstream.end(), use once for both 'finish'/'close' and
'error', and remove the opposite listener in each handler to avoid
leaks/handled-after-resolve scenarios—i.e., on gcstream.once('error', onErr) and
gcstream.once('finish' or 'close', onSuccess), call removeListener for the
counterpart in each handler and resolve/reject accordingly so gcstream errors
are caught and listeners cleaned up.
In `@packages/storage-local/src/adapter.ts`:
- Around line 182-186: The close() Promise may miss errors because
nodeStream.end() is called before attaching the 'error' listener; change the
order so the error listener (nodeStream.once('error', reject)) is registered
before calling nodeStream.end(), and ensure the listener is removed/cleanup when
end's callback resolves (e.g., remove the error listener inside the end callback
before calling resolve) so synchronous or immediate errors are properly caught
and the Promise always settles.
In `@packages/storage-minio/package.json`:
- Around line 31-35: Update the loosened dependency spec for
`@superfunctions/storage` which is currently "*" to a pinned semver range (e.g.
"^0.1.0") so consumers don't get future incompatible releases; change the
dependency entry named "@superfunctions/storage" in the package.json for
storage-minio (and likewise in storage-azure, storage-s3, storage-r2,
storage-gcs) from "*" to the chosen compatible range "^0.1.0" and run install to
verify lockfile updates.
In `@packages/storage-minio/src/adapter.ts`:
- Around line 28-30: The current logic skips portSuffix when endpoint already
includes a scheme, so fix the construction of fullEndpoint in adapter.ts by
detecting whether the given endpoint (when starting with 'http://' or
'https://') already contains an explicit port and, if not, append portSuffix;
update the code that computes fullEndpoint (which currently checks
endpoint.startsWith and returns endpoint unchanged) to parse or regex-check the
endpoint for an existing :port (or use the URL constructor) and only leave
endpoint as-is if a port is present, otherwise append portSuffix while
preserving the original scheme and host.
In `@packages/storage-s3/src/adapter.ts`:
- Around line 21-24: The credentials branch silently falls back when only one of
accessKeyId or secretAccessKey is provided; add a guard in the adapter
initialization (before constructing the credentials object) that checks the
variables accessKeyId and secretAccessKey and throws a descriptive Error if
exactly one is set, otherwise proceed: if both are set attach { accessKeyId,
secretAccessKey }, if neither set leave credentials undefined to use the default
provider chain. Ensure the check is placed near the existing credentials
construction (the credentials: accessKeyId && secretAccessKey ? ... expression)
so it prevents partial-static credential usage.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d8eb8541-1db6-4b68-9954-74251ef7b47f
📒 Files selected for processing (61)
packages/files/package.json
packages/files/src/__tests__/file-provider-conformance.test.ts
packages/files/src/index.ts
packages/files/src/types.ts
packages/files/tsconfig.json
packages/middleware/package.json
packages/middleware/src/index.test.ts
packages/middleware/src/index.ts
packages/middleware/src/rate-limit.test.ts
packages/middleware/src/rate-limit.ts
packages/middleware/tsconfig.json
packages/middleware/vitest.config.ts
packages/storage-azure/README.md
packages/storage-azure/package.json
packages/storage-azure/src/adapter.test.ts
packages/storage-azure/src/adapter.ts
packages/storage-azure/src/index.ts
packages/storage-azure/tsconfig.json
packages/storage-azure/vitest.config.ts
packages/storage-gcs/package.json
packages/storage-gcs/src/adapter.test.ts
packages/storage-gcs/src/adapter.ts
packages/storage-gcs/src/index.ts
packages/storage-gcs/tsconfig.json
packages/storage-gcs/vitest.config.ts
packages/storage-local/package.json
packages/storage-local/src/adapter.test.ts
packages/storage-local/src/adapter.ts
packages/storage-local/src/index.ts
packages/storage-local/tsconfig.json
packages/storage-local/vitest.config.ts
packages/storage-minio/package.json
packages/storage-minio/src/adapter.test.ts
packages/storage-minio/src/adapter.ts
packages/storage-minio/src/index.ts
packages/storage-minio/tsconfig.json
packages/storage-r2/package.json
packages/storage-r2/src/adapter.test.ts
packages/storage-r2/src/adapter.ts
packages/storage-r2/src/index.ts
packages/storage-r2/tsconfig.json
packages/storage-s3/package.json
packages/storage-s3/src/adapter.test.ts
packages/storage-s3/src/adapter.ts
packages/storage-s3/src/index.ts
packages/storage-s3/tsconfig.json
packages/storage-s3/vitest.config.ts
packages/storage/README.md
packages/storage/package.json
packages/storage/src/conformance.test.ts
packages/storage/src/conformance.ts
packages/storage/src/index.test.ts
packages/storage/src/index.ts
packages/storage/src/internal/errors.ts
packages/storage/src/internal/s3-compatible.test.ts
packages/storage/src/internal/s3-compatible.ts
packages/storage/src/router.test.ts
packages/storage/src/router.ts
packages/storage/src/types.ts
packages/storage/tsconfig.json
packages/storage/vitest.config.ts
| "dependencies": { | ||
| "@google-cloud/storage": "^7.0.0", | ||
| "@superfunctions/storage": "*" | ||
| }, |
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
python - <<'PY'
import json, pathlib
for p in pathlib.Path(".").rglob("package.json"):
    if "node_modules" in p.parts:
        continue
    data = json.loads(p.read_text())
    for section in ("dependencies", "peerDependencies"):
        for name, ver in data.get(section, {}).items():
            if name.startswith("@superfunctions/") and str(ver).strip() == "*":
                print(f"{p}: {section}.{name}={ver}")
PY
```

Repository: 21nCo/super-functions
Length of output: 3451
🏁 Script executed:
```bash
#!/bin/bash
# Check if there's a version constraint defined elsewhere for `@superfunctions/storage`
rg "@superfunctions/storage" --type json -A 2 packages/storage-*/package.json | head -30
```

Repository: 21nCo/super-functions
Length of output: 1638
🏁 Script executed:
```bash
#!/bin/bash
# Check root package.json for workspace or version constraints
if [ -f package.json ]; then
  echo "=== Root package.json ==="
  cat package.json | head -50
fi
# Check if there's a lerna.json or workspace configuration
if [ -f lerna.json ]; then
  echo "=== lerna.json ==="
  cat lerna.json
fi
```

Repository: 21nCo/super-functions
Length of output: 1866
🏁 Script executed:
```bash
#!/bin/bash
# Find and read the `@superfunctions/storage` package.json to check its version
find . -path "*/node_modules" -prune -o -name "package.json" -type f -print \
  | xargs grep -l '"name".*"@superfunctions/storage"' \
  | grep -v node_modules
```

Repository: 21nCo/super-functions
Length of output: 96
🏁 Script executed:
```bash
#!/bin/bash
# Check the version of `@superfunctions/storage` package
cat packages/storage/package.json | head -20
```

Repository: 21nCo/super-functions
Length of output: 680
Wildcard runtime dependency on @superfunctions/storage affects all storage adapter packages.
All storage adapter packages (gcs, azure, minio, r2, s3) use "@superfunctions/storage": "*" in dependencies, which removes semver safety for a published package and can resolve incompatible upstream releases. Since @superfunctions/storage is at version 0.1.0, use a compatible version constraint instead.
🔧 Suggested changes
Fix all storage adapter packages:
- packages/storage-gcs/package.json
- packages/storage-azure/package.json
- packages/storage-minio/package.json
- packages/storage-r2/package.json
- packages/storage-s3/package.json
Replace "@superfunctions/storage": "*" with "@superfunctions/storage": "^0.1.0" in each file's dependencies section.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/storage-gcs/package.json` around lines 31 - 34, Update the storage
adapter packages to remove the wildcard dependency on `@superfunctions/storage`:
replace the dependency entry "@superfunctions/storage": "*" with a
semver-constrained version such as "@superfunctions/storage": "^0.1.0" in each
package.json for the storage adapters (refer to the dependency key
"@superfunctions/storage" in packages/storage-gcs/package.json and the
equivalent entries in storage-azure, storage-minio, storage-r2, and storage-s3)
so the published packages retain semver safety and avoid resolving incompatible
upstream releases.
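Applied to the GCS manifest quoted above, the fix is a one-line change (the same edit repeats in the other four adapter packages):

```diff
 "dependencies": {
   "@google-cloud/storage": "^7.0.0",
-  "@superfunctions/storage": "*"
+  "@superfunctions/storage": "^0.1.0"
 },
```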
2 issues found across 8 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="packages/storage-minio/src/adapter.test.ts">
<violation number="1" location="packages/storage-minio/src/adapter.test.ts:154">
P3: This test’s assertion does not validate the behavior described by the test name (explicit port application), so it can pass without actually verifying port handling.</violation>
</file>
<file name="packages/storage-s3/src/adapter.ts">
<violation number="1" location="packages/storage-s3/src/adapter.ts:16">
P2: Credential presence validation is inconsistent with credential assignment; empty-string keys bypass the new check and silently fall back to ambient credentials.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
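For the storage-s3 finding above, a sketch of a symmetric credential guard; the config field names follow the earlier prompt, and the trim handling is an added assumption to cover the empty-string bypass:

```typescript
// Fail fast on partial or empty static credentials instead of silently
// falling back to the ambient AWS credential provider chain.
function resolveStaticCredentials(accessKeyId?: string, secretAccessKey?: string) {
  const hasAccessKey = Boolean(accessKeyId?.trim());
  const hasSecretKey = Boolean(secretAccessKey?.trim());
  if (hasAccessKey !== hasSecretKey) {
    throw new Error('S3 adapter: provide both accessKeyId and secretAccessKey, or neither');
  }
  // undefined lets the SDK use its default provider chain.
  return hasAccessKey ? { accessKeyId: accessKeyId!, secretAccessKey: secretAccessKey! } : undefined;
}
```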
```ts
  useSSL: false,
});

expect(customAdapter.name).toBe('minio');
```
P3: This test’s assertion does not validate the behavior described by the test name (explicit port application), so it can pass without actually verifying port handling.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/storage-minio/src/adapter.test.ts, line 154:
<comment>This test’s assertion does not validate the behavior described by the test name (explicit port application), so it can pass without actually verifying port handling.</comment>
<file context>
@@ -141,6 +141,19 @@ describe('@superfunctions/storage-minio', () => {
+ useSSL: false,
+ });
+
+ expect(customAdapter.name).toBe('minio');
+ });
+
</file context>
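If the port behavior is worth keeping under test, one direction is to assert on a generated URL rather than on adapter.name; the factory name and config fields here mirror the snippets in this thread but are assumptions:

```typescript
it('applies the explicit port to signed URLs', async () => {
  const adapter = createMinIOStorageAdapter({
    endpoint: 'minio.internal', // no scheme, no port in the raw endpoint
    port: 9000,
    useSSL: false,
    bucket: 'test-bucket',
    accessKeyId: 'test',
    secretAccessKey: 'test',
  });
  const descriptor = await adapter.signDownloadUrl?.({ key: 'files/a', expiresInSeconds: 60 });
  // The configured port should survive into the presigned URL.
  expect(new URL(descriptor!.url).port).toBe('9000');
});
```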
```ts
completeUploadPart(input: {
  uploadSessionId: string;
  partNumber: number;
  etag: string;
  size: number;
  checksumSha256Base64?: string;
}, ctx: FileProviderContext): Promise<void>;
```

Type mismatch: completeUploadPart is declared to return Promise<void>, but the conformance test at file-provider-conformance.test.ts:120-125 expects it to return an object with a `recorded` property. This inconsistency means implementations that follow the interface will still fail the test, since the two expect different behavior.

```ts
// Fix: Update return type to match expected behavior
completeUploadPart(input: {
  uploadSessionId: string;
  partNumber: number;
  etag: string;
  size: number;
  checksumSha256Base64?: string;
}, ctx: FileProviderContext): Promise<{ recorded: boolean }>;
```

Suggested change:

```diff
 completeUploadPart(input: {
   uploadSessionId: string;
   partNumber: number;
   etag: string;
   size: number;
   checksumSha256Base64?: string;
-}, ctx: FileProviderContext): Promise<void>;
+}, ctx: FileProviderContext): Promise<{ recorded: boolean }>;
```
Spotted by Graphite
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 8a840d65ac
```ts
const capabilities: StorageAdapterCapabilities = {
  signedUploadUrls: true,
  signedDownloadUrls: true,
  multipart: true,
```
Gate signed/multipart capabilities on signing credentials
This adapter always advertises signedUploadUrls, signedDownloadUrls, and multipart as supported, but createContainerClient can return without a credential (for example with accountName + sasToken), and in that case the signing methods throw immediately with "SAS URL generation requires ... accountKey". Any caller that selects behavior from capabilities can choose signed multipart mode and then fail at runtime. Capabilities should reflect the active auth mode, or the methods should support the SAS-only path.
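A sketch of gating on the active auth mode; hasSharedKeyCredential is a hypothetical flag the factory would derive while constructing the client, and the streaming values are assumed unaffected:

```typescript
// Advertise signing-dependent features only when SAS generation is possible.
const canSignSasUrls = hasSharedKeyCredential; // false for accountName + sasToken without accountKey
const capabilities: StorageAdapterCapabilities = {
  signedUploadUrls: canSignSasUrls,
  signedDownloadUrls: canSignSasUrls,
  multipart: canSignSasUrls, // multipart here relies on signed part URLs
  proxyStreamingUpload: true, // streaming paths assumed unaffected by auth mode
  proxyStreamingDownload: true,
};
```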
```ts
// Stateless: uploadId encodes the key
const uploadId = Buffer.from(input.key).toString('base64');
return { uploadId };
```
Make Azure upload IDs unique per multipart session
createMultipartUpload derives uploadId only from key, while block IDs are derived only from partNumber, so separate multipart uploads targeting the same key are not isolated. In concurrent uploads, parts from one session can overwrite the other's uncommitted blocks and either completion can commit mixed data. Upload IDs need per-session entropy and block IDs should be scoped to that session.
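One possible shape, sketched with illustrative helper names; the JSON-encoded uploadId matches the format discussed in a later review note, and the fixed-length IDs stay well under Azure's 64-byte block ID limit:

```typescript
import { randomUUID } from 'node:crypto';

// Per-session entropy in the uploadId so concurrent uploads to one key are isolated.
function createUploadId(key: string): string {
  const session = randomUUID().replace(/-/g, '').slice(0, 16);
  return Buffer.from(JSON.stringify({ key, session })).toString('base64');
}

// Block IDs: equal length within a blob, scoped to the session so one session's
// parts cannot overwrite another session's uncommitted blocks.
function generateBlockId(session: string, partNumber: number): string {
  return Buffer.from(`${session}-${String(partNumber).padStart(6, '0')}`).toString('base64');
}
```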
if (accountName && sasToken) {
  const blobServiceClient = new BlobServiceClient(
    `${endpoint || `https://${accountName}.blob.core.windows.net`}?${sasToken}`
Normalize SAS token before composing service URL
The SAS-token path always concatenates ?${sasToken}. If a token already includes a leading ? (a common Azure portal format), this produces ??..., which changes the first query parameter name and breaks authentication. Trimming a leading ? (or building via URL APIs) avoids malformed URLs and makes configuration robust.
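A small sketch of the normalization; `buildServiceUrl` is an illustrative helper rather than the adapter's actual function:

// Strip a leading '?' so tokens copied from the Azure portal do not produce a
// malformed "??sv=..." query string when appended to the service URL.
function buildServiceUrl(accountName: string, sasToken: string, endpoint?: string): string {
  const base = endpoint ?? `https://${accountName}.blob.core.windows.net`;
  const token = sasToken.startsWith('?') ? sasToken.slice(1) : sasToken;
  return `${base}?${token}`;
}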
1 issue found across 2 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
packages/storage-azure/src/adapter.ts, line 54 (violation 1):
P2: Legacy uploadIds (base64 of the key) will be rejected when the key is valid JSON (e.g., "null" or "123") because the fallback only runs on parse errors. Move the legacy fallback outside the catch so any non-conforming JSON also uses the legacy path.
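A sketch of what moving the fallback out of the catch could look like; the structured `{ key, sessionId }` payload mirrors the shape described elsewhere in this thread, and the helper name is illustrative:

interface DecodedUploadId {
  key: string;
  sessionId: string;
}

// Fall back to the legacy bare-key form whenever the payload lacks the expected
// fields, not only when JSON.parse throws.
function decodeUploadId(uploadId: string): DecodedUploadId {
  const decoded = Buffer.from(uploadId, 'base64').toString('utf8');
  let parsed: unknown;
  try {
    parsed = JSON.parse(decoded);
  } catch {
    parsed = undefined;
  }
  if (
    parsed &&
    typeof parsed === 'object' &&
    typeof (parsed as { key?: unknown }).key === 'string' &&
    typeof (parsed as { sessionId?: unknown }).sessionId === 'string'
  ) {
    return parsed as DecodedUploadId;
  }
  if (decoded.length > 0) {
    return { key: decoded, sessionId: uploadId }; // legacy deterministic id
  }
  throw new Error('Invalid uploadId');
}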
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: c86e18f5d7
const headers: Record<string, string> = {};
if (input.constraints?.contentType) {
  headers['x-ms-blob-content-type'] = input.constraints.contentType;
}
Add x-ms-blob-type to Azure signed upload headers
The signed upload URL path builds a plain PUT-blob SAS URL but does not return the required x-ms-blob-type: BlockBlob header, so clients that use this adapter output directly (without Azure-specific custom headers) will fail uploads with 400-level errors from Blob Storage. This breaks the adapter’s provider-agnostic signed-upload contract for normal direct-to-Azure uploads.
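A minimal sketch of the header construction, assuming signed uploads target a block blob; `buildUploadHeaders` is an illustrative name:

// A Put Blob request authorized via SAS still needs the x-ms-blob-type header,
// so return it alongside the signed URL for provider-agnostic clients.
function buildUploadHeaders(contentType?: string): Record<string, string> {
  const headers: Record<string, string> = { 'x-ms-blob-type': 'BlockBlob' };
  if (contentType) {
    headers['x-ms-blob-content-type'] = contentType;
  }
  return headers;
}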
  }
}

const nodeStream = createWriteStream(filePath);
Handle write-stream errors outside backpressure paths
This upload stream only listens for error when write() returns backpressure or during close(), so an asynchronous filesystem failure after a successful write() return (for example ENOSPC or I/O failure) can emit an unhandled stream error and crash the process while the write promise has already resolved. A persistent error listener should be attached for the stream lifetime and propagated to subsequent operations.
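A sketch of the suggested pattern with a lifetime error listener, assuming a Web WritableStream wrapper over fs.createWriteStream; this is illustrative, not the local adapter's actual implementation:

import { createWriteStream } from 'node:fs';

// Keep one long-lived error listener on the Node stream and surface the stored
// failure from later write()/close() calls instead of relying on listeners that
// only exist while waiting on backpressure.
function openUploadStream(filePath: string): WritableStream<Uint8Array> {
  const nodeStream = createWriteStream(filePath);
  let streamError: Error | undefined;
  nodeStream.on('error', (err) => {
    streamError = err;
  });
  return new WritableStream<Uint8Array>({
    async write(chunk) {
      if (streamError) throw streamError;
      if (!nodeStream.write(chunk)) {
        await new Promise<void>((resolve, reject) => {
          nodeStream.once('drain', resolve);
          nodeStream.once('error', reject);
        });
      }
    },
    async close() {
      if (streamError) throw streamError;
      await new Promise<void>((resolve, reject) => {
        nodeStream.once('error', reject);
        nodeStream.end(() => resolve());
      });
    },
    abort(reason) {
      // The persistent listener above keeps this destroy from emitting an
      // unhandled 'error' event.
      nodeStream.destroy(reason instanceof Error ? reason : new Error(String(reason)));
    },
  });
}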
🧹 Nitpick comments (3)
packages/storage-azure/src/adapter.ts (3)
318-321: Clarify misleading comment about etag usage.
The comment states "the etag from each part is actually the blockId we generated" but the implementation actually ignores the etag values entirely and regenerates the block IDs from partNumber using generateBlockId. Consider clarifying this behavior.

📝 Suggested comment clarification

- // In Azure, the etag from each part is actually the blockId we generated
- // Sort parts and commit blocks
+ // Sort parts by partNumber and regenerate block IDs deterministically.
+ // The etag values are ignored since block IDs are derived from uploadId + partNumber.
  const sortedParts = [...input.parts].sort((a, b) => a.partNumber - b.partNumber);
  const blockList = sortedParts.map((part) => generateBlockId(input.uploadId, part.partNumber));
207-214: Minor: Redundant conditional check.
The headers object always contains at least 'x-ms-blob-type': 'BlockBlob', so Object.keys(headers).length > 0 is always true and the ternary will never return undefined.

♻️ Simplify by removing the conditional

- return { url, headers: Object.keys(headers).length > 0 ? headers : undefined };
+ return { url, headers };
379-391: Use createNotFoundError() for consistent error handling.
The error is manually constructed here instead of using the imported createNotFoundError() helper, which is inconsistent with statObject. Additionally, the catch block at lines 386-391 is redundant since exists() returns a boolean and shouldn't throw a not-found error; it just re-throws whatever it catches.

♻️ Simplify using the shared error helper

- try {
-   const exists = await blobClient.exists();
-   if (!exists) {
-     const notFoundError = new Error('Object not found') as Error & { code: string };
-     notFoundError.code = 'STORAGE_NOT_FOUND';
-     throw notFoundError;
-   }
- } catch (err: unknown) {
-   if (err instanceof Error && 'code' in err && err.code === 'STORAGE_NOT_FOUND') {
-     throw err;
-   }
-   throw err;
- }
+ const exists = await blobClient.exists();
+ if (!exists) {
+   throw createNotFoundError();
+ }
📒 Files selected for processing (11)
- packages/storage-azure/src/adapter.test.ts
- packages/storage-azure/src/adapter.ts
- packages/storage-gcs/package.json
- packages/storage-gcs/src/adapter.ts
- packages/storage-local/src/adapter.ts
- packages/storage-minio/package.json
- packages/storage-minio/src/adapter.test.ts
- packages/storage-minio/src/adapter.ts
- packages/storage-s3/src/adapter.test.ts
- packages/storage-s3/src/adapter.ts
- release-packages.json
✅ Files skipped from review due to trivial changes (5)
- packages/storage-gcs/package.json
- packages/storage-s3/src/adapter.test.ts
- packages/storage-local/src/adapter.ts
- release-packages.json
- packages/storage-gcs/src/adapter.ts
🚧 Files skipped from review as they are similar to previous changes (5)
- packages/storage-minio/package.json
- packages/storage-minio/src/adapter.ts
- packages/storage-minio/src/adapter.test.ts
- packages/storage-azure/src/adapter.test.ts
- packages/storage-s3/src/adapter.ts
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 37ecbd1089
const { adapter, target } = pickAdapter(adapterEntries, defaultTarget, input.target);
return adapter.statObject({ ...input, target });
Stop forwarding outer target into delegated adapter calls
After resolving input.target via pickAdapter, this method forwards the same target back into the selected adapter ({ ...input, target }), and the same pattern is repeated across routed operations. If the selected adapter is itself routed (with a different target namespace), the injected outer target is interpreted by the inner router and causes false StorageRoutingError failures, so nested/composed routers cannot work correctly. Once routing is decided at this layer, delegation should not overwrite target with the outer alias.
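A sketch of delegating with the outer target stripped; the `StorageAdapterLike` shape and `routedStatObject` name are simplified stand-ins for the real router:

interface StorageAdapterLike {
  statObject(input: { key: string }): Promise<unknown>;
}

// Once routing is resolved at this layer, drop the outer target before delegating
// so a nested routed adapter applies its own default routing instead of
// re-interpreting the outer alias.
async function routedStatObject(
  adapters: Record<string, StorageAdapterLike>,
  defaultTarget: string,
  input: { key: string; target?: string },
): Promise<unknown> {
  const target = input.target ?? defaultTarget;
  const adapter = adapters[target];
  if (!adapter) {
    throw new Error(`No storage adapter registered for target "${target}"`);
  }
  const { target: _outer, ...delegated } = input;
  return adapter.statObject(delegated);
}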
const ok = gcstream.write(buffer);
if (ok) {
  resolve();
} else {
Attach a persistent error listener for GCS upload streams
In openUploadStream, when gcstream.write() returns true, the write promise resolves immediately and no error listener remains attached until close() is called. If an asynchronous GCS write failure occurs in that window (for example transport or storage-side failure), the stream can emit an unhandled error, potentially crashing the process while the earlier write already reported success. A lifetime error handler should be installed and propagated to subsequent writes/close.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 8492cd0627
const keyLocks = new Map<string, Promise<void>>();

async function withKeyLock<T>(key: string, work: () => Promise<T>): Promise<T> {
  const prior = keyLocks.get(key) ?? Promise.resolve();
  let resolveLock!: () => void;
Use atomic persistence ops for rate-limit checks
The limiter serializes requests with an in-memory keyLocks map, but that lock only works within a single process. When multiple instances share the same persistence backend, each instance can read the same old state and independently allow a request before either write lands, so limits are exceeded under concurrent traffic. This affects all three algorithms because they perform non-atomic read/modify/write cycles; the shared-store path needs atomic primitives (e.g., incr/CAS/transaction) rather than process-local locking.
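A sketch of a fixed-window check built on a single atomic increment; the `AtomicCounterStore.increment` primitive is an assumption here, since the actual `KVStoreAdapter` API from `@superfunctions/db` may expose different operations:

// Assumed shape of an atomic counter primitive shared by all instances.
interface AtomicCounterStore {
  increment(key: string, ttlMs: number): Promise<number>;
}

// Every instance that shares the store sees the same counter, so no
// process-local lock is needed and concurrent requests cannot over-admit.
async function checkFixedWindow(
  store: AtomicCounterStore,
  key: string,
  limit: number,
  windowMs: number,
): Promise<{ allowed: boolean; remaining: number; resetAt: string }> {
  const windowStart = Math.floor(Date.now() / windowMs) * windowMs;
  const count = await store.increment(`${key}:${windowStart}`, windowMs);
  return {
    allowed: count <= limit,
    remaining: Math.max(0, limit - count),
    resetAt: new Date(windowStart + windowMs).toISOString(),
  };
}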
async abortMultipartUpload(input: { key: string; uploadId: string }): Promise<void> {
  // Azure automatically garbage-collects uncommitted blocks after 7 days
  // No explicit cleanup needed for stateless tracking
},
Implement real abort behavior for Azure multipart uploads
abortMultipartUpload is currently a no-op, so calling abort does not invalidate the session or clear uncommitted blocks. A client that uploaded parts can still call completeMultipartUpload later with the same uploadId and successfully create/overwrite the blob, which violates expected abort semantics and can persist data that callers explicitly canceled. Abort should either perform server-side cleanup or record tombstones so completion is rejected after abort.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 3b06713efc
  contentType?: string;
  metadata?: Record<string, string>;
}): Promise<{ uploadId: string }> {
  const uploadId = encodeUploadId(input.key);
Preserve multipart metadata in Azure upload IDs
createMultipartUpload accepts contentType and metadata, but the generated uploadId only encodes key, so completeMultipartUpload has no way to restore those fields and commits with default blob properties. In multipart flows that rely on these inputs (for example setting Content-Type for browser downloads or storing object metadata), Azure uploads will silently lose that information while other adapters persist it, causing inconsistent behavior across providers.
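One possible sketch that keeps the adapter stateless by folding the extra fields into the opaque uploadId; the `MultipartSession` shape and helper names are illustrative, and a real fix might instead store session state elsewhere:

// Carry contentType and metadata inside the opaque uploadId so that
// completeMultipartUpload can restore them when committing the block list.
interface MultipartSession {
  key: string;
  sessionId: string;
  contentType?: string;
  metadata?: Record<string, string>;
}

function encodeUploadId(session: MultipartSession): string {
  return Buffer.from(JSON.stringify(session)).toString('base64url');
}

function decodeUploadIdSession(uploadId: string): MultipartSession {
  return JSON.parse(Buffer.from(uploadId, 'base64url').toString('utf8')) as MultipartSession;
}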
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 309c080274
if (decoded.length > 0) {
  return {
    key: decoded,
    sessionId: uploadId,
  };
Reject legacy deterministic Azure upload IDs
The fallback branch in decodeUploadId accepts any non-empty base64 payload as a valid multipart session (key = decoded, sessionId = uploadId), so callers can forge uploadId values like base64(key) without ever calling createMultipartUpload. That bypasses the per-session isolation added by randomUUID(), allowing predictable IDs that can collide across concurrent uploads to the same key and undermining multipart-session integrity checks in signMultipartUploadPartUrl/completeMultipartUpload.
async openUploadStream(input: { key: string; contentType?: string }): Promise<WritableStream> {
  const blockBlobClient = containerClient.getBlockBlobClient(input.key);
  const stream = new PassThrough();
  const uploadPromise = blockBlobClient.uploadStream(stream, 4 * 1024 * 1024, 5, {
Attach early rejection handling to Azure upload promise
uploadPromise is created eagerly but only observed later in close() (or in abort()), so if uploadStream rejects before either method is called (for example, auth/network failure during stream setup), Node can treat it as an unhandled rejection and terminate the process while prior writes may already have resolved. Install a rejection handler immediately and propagate that stored error through write/close.
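A small sketch of eager rejection tracking; `trackUpload` is a hypothetical helper showing the pattern rather than the adapter's actual code:

// Attach a handler as soon as the promise is created so a rejection that happens
// before close()/abort() is observed instead of surfacing as an unhandled
// rejection; later calls rethrow the stored error.
function trackUpload(uploadPromise: Promise<unknown>) {
  let uploadError: Error | undefined;
  const settled = uploadPromise.catch((err: unknown) => {
    uploadError = err instanceof Error ? err : new Error(String(err));
  });
  return {
    failed: () => uploadError,
    async waitForCompletion(): Promise<void> {
      await settled;
      if (uploadError) throw uploadError;
    },
  };
}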
  state.tokens -= 1;
}

const msUntilNextToken = state.tokens >= 1 ? 0 : Math.ceil((1 - state.tokens) / refillRatePerMs);
Handle zero-limit token bucket without Infinity math
In token-bucket mode, a configured maxRequests (or per-call limit) of 0 makes refillRatePerMs equal to 0, which turns the divisions on this line into Infinity; resetAt then becomes non-finite and new Date(resetAt).toISOString() throws instead of returning a normal rate-limit response. This makes zero-limit configurations crash the limiter rather than cleanly deny requests.
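A sketch of guarding the zero-limit case, using names that approximate the snippet above; the real limiter's variables may differ:

// Treat a zero (or negative) limit as an immediate deny with a finite reset time
// instead of letting refillRatePerMs reach 0 and producing Infinity.
function nextResetAt(limit: number, windowMs: number, tokens: number, now: number): string {
  if (limit <= 0) {
    return new Date(now + windowMs).toISOString();
  }
  const refillRatePerMs = limit / windowMs;
  const msUntilNextToken = tokens >= 1 ? 0 : Math.ceil((1 - tokens) / refillRatePerMs);
  return new Date(now + msUntilNextToken).toISOString();
}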
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 1767ab8fab
const effectiveWindowMs =
  (input.windowSeconds !== undefined ? input.windowSeconds * 1000 : undefined) ?? windowMs;
Guard against non-positive window lengths
check() allows windowSeconds overrides but never validates that the effective window is positive. With windowSeconds <= 0, fixed/sliding-window modes effectively reset on every call (bypassing rate limits), and token-bucket mode can divide by zero or produce non-finite reset timestamps that throw when serialized. This can be triggered by a bad config or user-provided override and causes incorrect enforcement or runtime failures.
You're iterating quickly on this pull request. To help protect your rate limits, cubic has paused automatic reviews on new pushes for now—when you're ready for another review, comment
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 434a89d374
  await uploadPromise;
},
abort(reason) {
  stream.destroy(reason instanceof Error ? reason : new Error(String(reason)));
Add error listener before aborting Azure upload stream
In openUploadStream, abort() calls stream.destroy(error) on a PassThrough that never gets a lifetime 'error' listener, so cancelling an upload (or any path that destroys the stream with an error) can emit an unhandled error event and terminate the Node process. This is reproducible whenever callers invoke writer.abort(...) and makes normal cancellation a crash path; install a persistent error handler on stream and propagate that stored error through write/close.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: d0c754735c
import {
  AbortMultipartUploadCommand,
  CompleteMultipartUploadCommand,
  CreateMultipartUploadCommand,
  DeleteObjectCommand,
  GetObjectCommand,
  HeadObjectCommand,
  PutObjectCommand,
  UploadPartCommand,
} from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
Add runtime AWS deps for exported S3-compatible module
@superfunctions/storage publicly exports ./internal/s3-compatible, and that module imports @aws-sdk/client-s3 and @aws-sdk/s3-request-presigner, but the package manifest does not declare those as runtime dependencies. Under strict resolvers (notably pnpm/Yarn PnP), importing this subpath will fail with MODULE_NOT_FOUND, which breaks S3/R2/MinIO adapters that rely on it even when installation appears successful.
return withKeyLock(key, async () => {
  const effectiveWindowMs =
    (input.windowSeconds !== undefined ? input.windowSeconds * 1000 : undefined) ?? windowMs;
  const effectiveLimit = input.limit ?? maxRequests;
Validate effectiveLimit before rate-limit calculations
check() validates effectiveWindowMs but never validates effectiveLimit; if callers pass a non-finite limit (for example NaN from parsing user input), token-bucket math propagates non-finite values and new Date(resetAt).toISOString() can throw, turning a normal rate-limit decision into an exception path. Guard effectiveLimit as finite/non-negative before entering the algorithm branches.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 453b285610
nodeStream.on('data', (chunk: Buffer | string) => {
  const buffer = typeof chunk === 'string' ? Buffer.from(chunk) : chunk;
  controller.enqueue(new Uint8Array(buffer));
});
Honor backpressure in Azure download stream bridge
openDownloadStream converts a Node readable into a Web ReadableStream by listening to 'data', but it never pauses/resumes the Node stream based on consumer demand. In slow-consumer scenarios (large blobs over constrained clients), Node stays in flowing mode and keeps enqueueing chunks, so memory usage can grow with object size and cause process OOM. Add backpressure-aware flow control (e.g., pause() after enqueue and pull()/resume() or Readable.toWeb).
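A minimal sketch using Node's built-in conversion, which only pulls from the source when the Web stream's consumer asks for more data (available on modern Node versions):

import { Readable } from 'node:stream';

// Readable.toWeb() propagates consumer demand back to the Node stream, so slow
// consumers no longer cause unbounded buffering in the bridge.
function toWebStream(nodeStream: Readable): ReadableStream<Uint8Array> {
  return Readable.toWeb(nodeStream) as unknown as ReadableStream<Uint8Array>;
}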
gcstream.on('data', (chunk: Buffer | string) => {
  const buffer = typeof chunk === 'string' ? Buffer.from(chunk) : chunk;
  controller.enqueue(new Uint8Array(buffer));
});
Honor backpressure in GCS download stream bridge
This Web stream wrapper also subscribes to Node 'data' events without any pause/resume path, so the GCS stream can continue producing while downstream reads are slow. That allows the internal queue to grow unbounded for large downloads, which can significantly increase memory and destabilize workers under load. The bridge should implement backpressure (pause on enqueue, resume in pull) or delegate to a backpressure-aware Node→Web conversion.
Merge activity
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 48df48405e
@@ -0,0 +1,286 @@
import type { Adapter, KVStoreAdapter } from '@superfunctions/db';

export { KVStoreAdapter };
Stop requiring @superfunctions/db for basic TypeScript usage
@superfunctions/middleware marks @superfunctions/db as an optional peer, but this re-export forces the generated .d.ts to reference @superfunctions/db unconditionally. TypeScript consumers who only use the default in-memory limiter (and therefore do not install @superfunctions/db) will get TS2307 during compilation when importing this package at all, which makes the package effectively unusable in its advertised optional-dependency mode.
const accountNameMatch = connectionString.match(/AccountName=([^;]+)/);
const accountKeyMatch = connectionString.match(/AccountKey=([^;]+)/);
const extractedAccountName = accountNameMatch?.[1];
const extractedAccountKey = accountKeyMatch?.[1];
Derive Azure signing creds without regex-parsing connection strings
The connection-string path infers signing capability by regex-extracting AccountName/AccountKey from the raw string, so valid connection-string formats that authenticate successfully but omit those literal fields (for example emulator/development formats) end up with credential unset. In that case the adapter disables signed URLs/multipart and throws from signing methods even though the client is already authenticated, causing feature loss in those environments.
Review the following changes in direct dependencies. Learn more about Socket for GitHub.
## Summary
- add the missing shared file, middleware, and storage packages to `origin/dev`
- include the base storage contract plus local, s3-compatible, azure, gcs, minio, and r2 adapters
- keep the PR scoped to shared packages only so downstream function PRs can build against them

## Validation
- inspected the copied package set in a clean worktree created from `origin/dev`
- attempted a root install probe, but did not complete a reliable monorepo validation in this worktree

## Notes
- no `.conduct` changes included
- opened directly against `origin/dev` with `GH_CONFIG_DIR=~/.config/gh-other`

## Summary by Sourcery
Add shared storage, file, and middleware packages to support multi-provider file storage and rate limiting across the monorepo.

New Features:
- Introduce core @superfunctions/storage package defining the storage adapter contract, routing, and conformance helpers.
- Add cloud and local storage adapter packages for S3, Cloudflare R2, MinIO, Azure Blob Storage, Google Cloud Storage, and local filesystem.
- Add @superfunctions/files package with shared file and upload provider types for FileFn workflows.
- Add @superfunctions/middleware package providing a configurable, pluggable rate limiter with multiple algorithms and persistence backends.

Enhancements:
- Provide a routed storage adapter to fan out logical storage targets to different physical backends with merged capability reporting.
- Add shared error utilities and S3-compatible adapter helper to standardize storage error handling and multipart behavior across providers.

Build:
- Add package manifests, TypeScript configs, and Vitest configs for the new storage, files, and middleware-related packages so they can build and test independently.

Documentation:
- Document the Azure storage adapter usage and capabilities in a new README.
- Add a basic README for the core storage package.

Tests:
- Add conformance-style tests for storage routing and adapter capability validation.
- Add a FileProvider conformance test suite to enforce upload and file API shapes across implementations.
- Wire up Vitest configs for the new storage, files, and middleware-related packages to enable isolated testing.

## Summary by cubic
Adds shared `@superfunctions/storage`, `@superfunctions/files`, and `@superfunctions/middleware` packages to enable a pluggable storage system and rate limiting across the monorepo, and registers release targets (SF-30).

- **New Features**
  - `@superfunctions/storage`: adapter contract, router, conformance helpers, standardized errors, and shared S3-compatible core.
  - Adapters: `@superfunctions/storage-local`, `-s3`, `-gcs`, `-azure`, `-minio`, `-r2` with signed URLs and multipart where supported.
  - `@superfunctions/files`: `FileProvider` types and a conformance test suite.
  - `@superfunctions/middleware`: rate limiter (fixed-window, sliding-window, token-bucket) with pluggable persistence via `@superfunctions/db` `KVStoreAdapter` or in-memory.
- **Bug Fixes**
  - Rate limiter: input validation and hardened token-bucket edge cases.
  - Storage router: fixed target resolution and corrected capability aggregation.
  - Azure adapter: hardened multipart session handling and upload URL compatibility.
  - S3 adapter: reject empty static credentials.
  - Uploads/streaming: persistently track upload stream errors and fix abort handling.

Written for commit 87da4d6. Summary will update on new commits.

## Summary by CodeRabbit
- **New Features**
  - Rate-limiting middleware with multiple algorithms and pluggable persistence.
  - File management API for upload sessions, multipart uploads, signed URLs, and streaming.
  - New storage adapters: S3, MinIO, R2, Azure Blob, GCS, and local filesystem.
  - Storage routing to select adapters per logical target with aggregated capabilities.
- **Documentation**
  - Azure adapter README with installation and usage examples.
- **Tests**
  - Extensive conformance and unit tests across middleware, file provider, and all storage adapters.
87da4d6 to 48410cc (Compare)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 87da4d6696
capabilitiesForTarget(target) {
  return pickAdapter(adapterEntries, defaultTarget, target).adapter.capabilities;
},
Resolve delegated capabilities via child target logic
capabilitiesForTarget returns pickAdapter(...).adapter.capabilities, which is only the child adapter’s aggregate capability set. For nested routed adapters this can under-report what the delegated call path actually supports (because delegated operations strip target and use the child’s default routing), and callers like filefn/server/src/upload-sessions/service.ts use getStorageCapabilities(...) to choose upload mode. In mixed-capability nested setups, this can incorrectly select noSupportedUploadMode even though the delegated adapter can handle uploads.
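A sketch of delegating the capability question to the child's own routing; the `RoutableAdapter` shape and the simplified capability type are assumptions, not the package's real types:

type StorageAdapterCapabilities = Record<string, boolean>; // simplified stand-in

interface RoutableAdapter {
  capabilities: StorageAdapterCapabilities;
  capabilitiesForTarget?(target?: string): StorageAdapterCapabilities;
}

// Report what the child's delegated path supports rather than its aggregate
// capability set; flat adapters fall back to `capabilities`.
function capabilitiesForTarget(
  adapters: Record<string, RoutableAdapter>,
  defaultTarget: string,
  target?: string,
): StorageAdapterCapabilities {
  const resolved = target ?? defaultTarget;
  const adapter = adapters[resolved];
  if (!adapter) {
    throw new Error(`No storage adapter registered for target "${resolved}"`);
  }
  // Delegated calls strip the outer target, so ask the child what its own
  // default routing can do.
  return adapter.capabilitiesForTarget?.(undefined) ?? adapter.capabilities;
}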
const fileStats = await lstat(filePath);
if (fileStats.isSymbolicLink()) {
  throw new Error('Invalid key: upload target cannot be a symlink');
Eliminate symlink race before opening local upload file
The symlink check is a TOCTOU sequence: lstat(filePath) runs first, then the code later opens the path with createWriteStream. If another process can modify entries under rootDir, it can swap the checked path to a symlink after the check and before open, and Node will follow that symlink, allowing writes outside the configured storage root. The open step needs no-follow/atomic validation rather than a pre-open check alone.
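A sketch of a no-follow open, assuming a POSIX platform where `O_NOFOLLOW` is available; parent-directory components would still need separate handling:

import { open } from 'node:fs/promises';
import { constants } from 'node:fs';

// Opening with O_NOFOLLOW makes the kernel reject the open if the final path
// component is a symlink, closing the window between the lstat check and the
// actual write.
async function openUploadTarget(filePath: string) {
  const flags = constants.O_WRONLY | constants.O_CREAT | constants.O_TRUNC | constants.O_NOFOLLOW;
  const handle = await open(filePath, flags, 0o600);
  return handle.createWriteStream();
}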
Summary
- add the missing shared file, middleware, and storage packages to origin/dev
- include the base storage contract plus local, s3-compatible, azure, gcs, minio, and r2 adapters
- keep the PR scoped to shared packages only so downstream function PRs can build against them

Validation
- inspected the copied package set in a clean worktree created from origin/dev
- attempted a root install probe, but did not complete a reliable monorepo validation in this worktree

Notes
- no .conduct changes included
- opened directly against origin/dev with GH_CONFIG_DIR=~/.config/gh-other

Summary by Sourcery
Add shared storage, file, and middleware packages to support multi-provider file storage and rate limiting across the monorepo.
New Features:
Enhancements:
Build:
Documentation:
Tests:

Summary by cubic
Adds shared @superfunctions/storage, @superfunctions/files, and @superfunctions/middleware packages to enable a pluggable storage system and rate limiting across the monorepo, and registers release targets (SF-30).

New Features
- @superfunctions/storage: adapter contract, router with per-target capabilities, conformance helpers, standardized errors, and an S3-compatible shared core.
- @superfunctions/storage-local, -s3, -gcs, -azure, -minio, -r2 with signed URLs and multipart where supported.
- @superfunctions/files: FileProvider types plus a conformance test suite.
- @superfunctions/middleware: rate limiter (fixed-window, sliding-window, token-bucket) with pluggable persistence via @superfunctions/db KVStoreAdapter or in-memory.

Bug Fixes

Written for commit 48410cc. Summary will update on new commits.

Summary by CodeRabbit
New Features
Documentation
Tests