
SF-30 add storage, files, and middleware shared packages#82

Merged
graphite-app[bot] merged 1 commit into dev from SF-30/storage-files-foundations
Apr 13, 2026

Conversation


@thyaravind thyaravind commented Apr 4, 2026

Summary

  • add the missing shared file, middleware, and storage packages to origin/dev
  • include the base storage contract plus local, s3-compatible, azure, gcs, minio, and r2 adapters
  • keep the PR scoped to shared packages only so downstream function PRs can build against them

Validation

  • inspected the copied package set in a clean worktree created from origin/dev
  • attempted a root install probe, but did not complete a reliable monorepo validation in this worktree

Notes

  • no .conduct changes included
  • opened directly against origin/dev with GH_CONFIG_DIR=~/.config/gh-other

Summary by Sourcery

Add shared storage, file, and middleware packages to support multi-provider file storage and rate limiting across the monorepo.

New Features:

  • Introduce core @superfunctions/storage package defining the storage adapter contract, routing, and conformance helpers.
  • Add cloud and local storage adapter packages for S3, Cloudflare R2, MinIO, Azure Blob Storage, Google Cloud Storage, and local filesystem.
  • Add @superfunctions/files package with shared file and upload provider types for FileFn workflows.
  • Add @superfunctions/middleware package providing a configurable, pluggable rate limiter with multiple algorithms and persistence backends.

Enhancements:

  • Provide a routed storage adapter to fan out logical storage targets to different physical backends with merged capability reporting.
  • Add shared error utilities and S3-compatible adapter helper to standardize storage error handling and multipart behavior across providers.

Build:

  • Add package manifests, TypeScript configs, and Vitest configs for the new storage, files, and middleware-related packages so they can build and test independently.

Documentation:

  • Document the Azure storage adapter usage and capabilities in a new README.
  • Add a basic README for the core storage package.

Tests:

  • Add conformance-style tests for storage routing and adapter capability validation.
  • Add a FileProvider conformance test suite to enforce upload and file API shapes across implementations.
  • Wire up Vitest configs for the new storage, files, and middleware-related packages to enable isolated testing.

Summary by cubic

Adds shared @superfunctions/storage, @superfunctions/files, and @superfunctions/middleware packages to enable a pluggable storage system and rate limiting across the monorepo, and registers release targets (SF-30).

  • New Features

    • @superfunctions/storage: adapter contract, router with per-target capabilities, conformance helpers, standardized errors, and an S3‑compatible shared core.
    • Adapters: @superfunctions/storage-local, -s3, -gcs, -azure, -minio, -r2 with signed URLs and multipart where supported.
    • @superfunctions/files: FileProvider types plus a conformance test suite.
    • @superfunctions/middleware: rate limiter (fixed-window, sliding-window, token-bucket) with pluggable persistence via @superfunctions/db KVStoreAdapter or in-memory.
  • Bug Fixes

    • Rate limiter: input validation and token-bucket edge cases.
    • Storage router: target resolution and capability aggregation.
    • Adapters: Azure multipart session and SAS URL handling; S3 rejects empty static credentials; improved upload/streaming error tracking and abort handling.

Written for commit 48410cc. Summary will update on new commits.

Summary by CodeRabbit

  • New Features

    • Rate-limiting middleware with multiple algorithms and pluggable persistence.
    • File management API for upload sessions, multipart uploads, signed URLs, and streaming.
    • New storage adapters: S3, MinIO, R2, Azure Blob, GCS, and local filesystem.
    • Storage routing to select adapters per logical target with aggregated capabilities.
  • Documentation

    • Azure adapter README with installation and usage examples.
  • Tests

    • Extensive conformance and unit tests across middleware, file provider, and all storage adapters.


linear bot commented Apr 4, 2026

SF-30 Shared packages


sourcery-ai bot commented Apr 4, 2026

Reviewer's Guide

Adds the foundational shared storage, file, and middleware packages for Superfunctions: a generic storage contract, multiple cloud and local storage adapters (S3, R2, MinIO, GCS, Azure, local FS), a routed storage facade with conformance helpers, a file provider contract with conformance tests, and a configurable KV-backed rate limiter. Everything is wired up with package metadata and basic test configs so downstream features can build against them.

Sequence diagram for routed storage signDownloadUrl delegation

sequenceDiagram
  actor App
  participant RoutedStorageAdapter
  participant DurableAdapter
  participant TemporaryAdapter

  App->>RoutedStorageAdapter: signDownloadUrl({ key:"files/b", target:"temporary", expiresInSeconds:60 })
  RoutedStorageAdapter->>RoutedStorageAdapter: pickAdapter(target="temporary")
  RoutedStorageAdapter->>TemporaryAdapter: signDownloadUrl(inputWithTarget)
  TemporaryAdapter-->>RoutedStorageAdapter: { url, headers }
  RoutedStorageAdapter-->>App: { url, headers }

  App->>RoutedStorageAdapter: statObject({ key:"files/a", target:"durable" })
  RoutedStorageAdapter->>RoutedStorageAdapter: pickAdapter(target="durable")
  RoutedStorageAdapter->>DurableAdapter: statObject(inputWithTarget)
  DurableAdapter-->>RoutedStorageAdapter: StorageObjectStat
  RoutedStorageAdapter-->>App: StorageObjectStat

Class diagram for core storage adapter contracts and routing

classDiagram
  class StorageAdapterCapabilities {
    +boolean signedUploadUrls
    +boolean signedDownloadUrls
    +boolean multipart
    +boolean proxyStreamingUpload
    +boolean proxyStreamingDownload
  }

  class SignedUrlConstraints {
    +string contentType
    +number contentLength
    +string checksumSha256Base64
  }

  class StorageObjectStat {
    +string key
    +number size
    +string contentType
    +string etag
    +string lastModifiedAt
  }

  class StorageAdapter {
    <<interface>>
    +StorageAdapterName name
    +StorageAdapterCapabilities capabilities
    +capabilitiesForTarget(target StorageTargetName) StorageAdapterCapabilities
    +statObject(input) Promise~StorageObjectStat~
    +deleteObject(input) Promise~void~
    +signUploadUrl(input) Promise~SignedUrlDescriptor~
    +signDownloadUrl(input) Promise~SignedUrlDescriptor~
    +createMultipartUpload(input) Promise~MultipartUploadDescriptor~
    +signMultipartUploadPartUrl(input) Promise~SignedUrlDescriptor~
    +completeMultipartUpload(input) Promise~void~
    +abortMultipartUpload(input) Promise~void~
    +openUploadStream(input) Promise~WritableStream~
    +openDownloadStream(input) Promise~ReadableStream~
  }

  class RoutedStorageAdapter {
    <<interface extension>>
    +StorageTargetName defaultTarget
    +StorageTargetName[] targets
    +capabilitiesForTarget(target StorageTargetName) StorageAdapterCapabilities
  }

  class RoutedStorageAdapterConfig {
    +Record~StorageTargetName, StorageAdapter~ adapters
    +StorageTargetName defaultTarget
    +StorageAdapterName name
  }

  class StorageRoutingError {
    +string code
    +string target
  }

  class ConformanceResult {
    +boolean ok
    +ErrorInfo error
    +string[] passed
    +string[] failed
  }

  class ErrorInfo {
    +string code
    +string message
  }

  class ConformanceTestContext {
    +string testKey
    +Uint8Array testData
    +string testContentType
  }

  StorageAdapter <|-- RoutedStorageAdapter
  RoutedStorageAdapterConfig o--> StorageAdapter
  RoutedStorageAdapter ..> RoutedStorageAdapterConfig : createdWith
  StorageRoutingError --> StorageAdapter : thrownWhenTargetMissing
  StorageAdapter --> StorageAdapterCapabilities
  StorageAdapter --> StorageObjectStat
  StorageAdapter --> SignedUrlConstraints
  ConformanceResult --> ErrorInfo
  ConformanceTestContext ..> StorageAdapter : usedBy

  class LocalStorageConfig {
    +string rootDir
  }

  class S3StorageConfig {
    +string bucket
    +string region
    +string accessKeyId
    +string secretAccessKey
    +string endpoint
    +boolean forcePathStyle
  }

  class R2StorageConfig {
    +string accountId
    +string bucket
    +string accessKeyId
    +string secretAccessKey
    +string jurisdiction
  }

  class MinIOStorageConfig {
    +string endpoint
    +string bucket
    +string accessKeyId
    +string secretAccessKey
    +string region
    +boolean useSSL
    +number port
  }

  class GCSStorageConfig {
    +string bucket
    +string projectId
    +string keyFilename
    +Credentials credentials
  }

  class AzureStorageConfig {
    +string containerName
    +string connectionString
    +string accountName
    +string accountKey
    +string sasToken
    +string endpoint
  }

  class Credentials {
    +string client_email
    +string private_key
  }

  GCSStorageConfig --> Credentials
  LocalStorageConfig ..> StorageAdapter : creates
  S3StorageConfig ..> StorageAdapter : creates
  R2StorageConfig ..> StorageAdapter : creates
  MinIOStorageConfig ..> StorageAdapter : creates
  GCSStorageConfig ..> StorageAdapter : creates
  AzureStorageConfig ..> StorageAdapter : creates
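The routing behavior sketched in the diagrams above can be illustrated in a few lines. This is a hypothetical sketch based only on the names shown here (pickAdapter, defaultTarget, StorageRoutingError); the actual implementation lives in packages/storage/src/router.ts and may differ.

```typescript
// Sketch of routed-target resolution: look up the adapter registered for a
// logical target, falling back to the configured default target.
type StorageTargetName = string;

interface AdapterLike {
  name: string;
}

interface RoutedConfig {
  adapters: Record<StorageTargetName, AdapterLike>;
  defaultTarget: StorageTargetName;
}

class StorageRoutingError extends Error {
  readonly code = "STORAGE_TARGET_NOT_FOUND";
  constructor(readonly target: string) {
    super(`no storage adapter registered for target "${target}"`);
  }
}

function pickAdapter(config: RoutedConfig, target?: StorageTargetName): AdapterLike {
  // Resolve to the explicit target when given, otherwise the default.
  const resolved = target ?? config.defaultTarget;
  const adapter = config.adapters[resolved];
  if (adapter === undefined) throw new StorageRoutingError(resolved);
  return adapter;
}
```

Every other method on the routed adapter (statObject, signDownloadUrl, and so on) would then delegate to the adapter this lookup returns.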

Class diagram for rate limiting middleware

classDiagram
  class Adapter {
    <<interface>>
    +findOne(input) Promise~any~
    +create(input) Promise~any~
    +upsert(input) Promise~any~
    +delete(input) Promise~void~
  }

  class KVStoreAdapter {
    <<interface>>
    +get(key string) Promise~string~
    +set(input) Promise~void~
    +delete(key string) Promise~void~
  }

  class RateLimitConfig {
    +number windowMs
    +number maxRequests
    +string keyPrefix
    +Adapter persistence
    +KVStoreAdapter persistence
    +RateLimitAlgorithm algorithm
    +function now
  }

  class RateLimitResult {
    +boolean allowed
    +number remaining
    +string resetAt
    +number total
  }

  class CheckLimitInput {
    +string key
    +number windowSeconds
    +number limit
  }

  class RateLimiter {
    <<interface>>
    +check(input CheckLimitInput) Promise~RateLimitResult~
    +reset(key string) Promise~void~
  }

  class FixedWindowState {
    +number count
    +number windowStart
  }

  class SlidingWindowState {
    +number[] timestamps
  }

  class TokenBucketState {
    +number tokens
    +number lastRefill
  }

  class InMemoryKVStore {
    +Map store
    +get(key string) Promise~string~
    +set(input) Promise~void~
    +delete(key string) Promise~void~
  }

  RateLimitConfig --> Adapter : optional
  RateLimitConfig --> KVStoreAdapter : optional
  RateLimiter ..> RateLimitConfig : createdWith
  RateLimiter --> RateLimitResult
  RateLimiter --> FixedWindowState : usesWhenFixedWindow
  RateLimiter --> SlidingWindowState : usesWhenSlidingWindow
  RateLimiter --> TokenBucketState : usesWhenTokenBucket
  InMemoryKVStore ..|> KVStoreAdapter
  KVStoreAdapter <.. Adapter : ensuredByEnsureKV
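To make the rate-limiter shape above concrete, here is a deliberately simplified, synchronous, in-memory sketch of the fixed-window algorithm. The real @superfunctions/middleware limiter is async, persists state through a KVStoreAdapter, and also offers sliding-window and token-bucket modes; the injectable `now` mirrors the custom time source in RateLimitConfig.

```typescript
// Simplified fixed-window limiter: count requests per key within a window,
// starting a fresh window once the previous one has elapsed.
interface FixedWindowState {
  count: number;
  windowStart: number;
}

interface CheckResult {
  allowed: boolean;
  remaining: number;
}

function createFixedWindowLimiter(opts: {
  windowMs: number;
  maxRequests: number;
  now?: () => number;
}) {
  const now = opts.now ?? Date.now;
  const states = new Map<string, FixedWindowState>();

  return {
    check(key: string): CheckResult {
      const t = now();
      let state = states.get(key);
      // Start a fresh window when none exists or the current one has elapsed.
      if (!state || t - state.windowStart >= opts.windowMs) {
        state = { count: 0, windowStart: t };
        states.set(key, state);
      }
      if (state.count >= opts.maxRequests) {
        return { allowed: false, remaining: 0 };
      }
      state.count += 1;
      return { allowed: true, remaining: opts.maxRequests - state.count };
    },
    reset(key: string): void {
      states.delete(key);
    },
  };
}
```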

Class diagram for file provider contracts

classDiagram
  class FileProviderContext {
    +string principalId
    +string tenantId
    +string requestId
  }

  class FileProvider {
    <<interface>>
    +createUploadSession(input, ctx FileProviderContext) Promise~UploadSessionDescriptor~
    +getUploadSessionStatus(input, ctx FileProviderContext) Promise~any~
    +signUploadPart(input, ctx FileProviderContext) Promise~SignedUrlDescriptor~
    +completeUploadPart(input, ctx FileProviderContext) Promise~void~
    +completeUploadSession(input, ctx FileProviderContext) Promise~FileVersionDescriptor~
    +abortUploadSession(input, ctx FileProviderContext) Promise~void~
    +getFile(input, ctx FileProviderContext) Promise~any~
    +listFiles(input, ctx FileProviderContext) Promise~any~
    +deleteFile(input, ctx FileProviderContext) Promise~void~
  }

  class UploadSessionDescriptor {
    +string uploadSessionId
  }

  class FileVersionDescriptor {
    +string fileId
    +string versionId
  }

  class FileMetadata {
    +string id
    +string name
    +number size
    +string mimeType
    +Date createdAt
    +Date updatedAt
    +Record metadata
  }

  class FileUpload {
    +File file
    +Blob file
    +Uint8Array file
    +string name
    +string mimeType
    +Record metadata
  }

  FileProvider --> FileProviderContext
  FileProvider --> UploadSessionDescriptor
  FileProvider --> FileVersionDescriptor
  FileProvider --> FileMetadata : typicalReturn
  FileUpload ..> FileMetadata : creates
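A conformance-style shape check, in the spirit of what the FileProvider test suite enforces, might look like the sketch below. The method list is taken from the class diagram above; the helper name is hypothetical, and the real suite lives in packages/files/src/__tests__/.

```typescript
// Every FileProvider implementation must expose the full upload-session and
// file API surface; report any methods a candidate object is missing.
const REQUIRED_FILE_PROVIDER_METHODS = [
  "createUploadSession",
  "getUploadSessionStatus",
  "signUploadPart",
  "completeUploadPart",
  "completeUploadSession",
  "abortUploadSession",
  "getFile",
  "listFiles",
  "deleteFile",
] as const;

function missingFileProviderMethods(candidate: Record<string, unknown>): string[] {
  return REQUIRED_FILE_PROVIDER_METHODS.filter(
    (method) => typeof candidate[method] !== "function",
  );
}
```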

File-Level Changes

Change Details Files
Introduce a unified storage core package with adapter contract, routing, and conformance utilities used by all storage adapters.
  • Define StorageAdapter, capabilities model, multipart and signed URL contracts in a central types module.
  • Implement error helpers and S3-compatible adapter logic including multipart operations and signed URLs.
  • Add a routed storage adapter that dispatches operations to named targets and exposes per-target capabilities, with tests.
  • Provide conformance helpers to validate adapter capabilities and build fake adapters for testing.
  • Expose storage types, router, and conformance utilities via the package index and configure build/test tooling and exports.
packages/storage/src/types.ts
packages/storage/src/internal/errors.ts
packages/storage/src/internal/s3-compatible.ts
packages/storage/src/router.ts
packages/storage/src/router.test.ts
packages/storage/src/conformance.ts
packages/storage/src/index.ts
packages/storage/src/conformance.test.ts
packages/storage/src/internal/s3-compatible.test.ts
packages/storage/src/index.test.ts
packages/storage/package.json
packages/storage/tsconfig.json
packages/storage/README.md
packages/storage/vitest.config.ts
Add concrete storage adapter packages for local filesystem, AWS S3, Cloudflare R2, MinIO, Google Cloud Storage, and Azure Blob Storage built on the shared storage core.
  • Implement a secure local filesystem adapter with path validation, streaming upload/download, and basic stat/delete operations.
  • Implement S3, R2, and MinIO adapters using the shared S3-compatible helper and AWS SDK clients with appropriate configuration surfaces.
  • Implement a GCS adapter using @google-cloud/storage with signed URL and proxy streaming support and not-found normalization.
  • Implement an Azure Blob adapter supporting connection-string/account-key/SAS auth, SAS-based signed URLs, Azure block-based multipart uploads, and streaming APIs.
  • Expose adapter factory functions and configs via each package index, and wire package.json, tsconfig, and vitest configs for each adapter package.
packages/storage-local/src/adapter.ts
packages/storage-local/src/index.ts
packages/storage-local/package.json
packages/storage-local/tsconfig.json
packages/storage-local/vitest.config.ts
packages/storage-s3/src/adapter.ts
packages/storage-s3/src/index.ts
packages/storage-s3/package.json
packages/storage-s3/tsconfig.json
packages/storage-s3/vitest.config.ts
packages/storage-r2/src/adapter.ts
packages/storage-r2/src/index.ts
packages/storage-r2/package.json
packages/storage-r2/tsconfig.json
packages/storage-r2/src/adapter.test.ts
packages/storage-minio/src/adapter.ts
packages/storage-minio/src/index.ts
packages/storage-minio/package.json
packages/storage-minio/tsconfig.json
packages/storage-minio/src/adapter.test.ts
packages/storage-gcs/src/adapter.ts
packages/storage-gcs/src/index.ts
packages/storage-gcs/package.json
packages/storage-gcs/tsconfig.json
packages/storage-gcs/vitest.config.ts
packages/storage-gcs/src/adapter.test.ts
packages/storage-azure/src/adapter.ts
packages/storage-azure/src/index.ts
packages/storage-azure/README.md
packages/storage-azure/package.json
packages/storage-azure/tsconfig.json
packages/storage-azure/vitest.config.ts
packages/storage-azure/src/adapter.test.ts
Introduce a shared file abstraction package with a FileProvider contract and basic domain types, plus conformance tests to lock in provider behavior.
  • Define FileProvider interface, context, and file metadata/upload helper types for FileFn integrations.
  • Add a conformance test suite that validates provider behavior for upload session creation, status, part signing/completion, versioning, listing, and completion shapes using Vitest.
  • Wire up the files package build, exports, and tests via package.json and tsconfig.
packages/files/src/types.ts
packages/files/src/index.ts
packages/files/src/__tests__/file-provider-conformance.test.ts
packages/files/package.json
packages/files/tsconfig.json
Add a shared middleware package with a flexible KV-backed rate limiter supporting fixed window, sliding window, and token bucket algorithms.
  • Implement a KV abstraction with in-memory backing and optional bridging to the @superfunctions/db Adapter for persisted rate limits.
  • Provide a createRateLimiter factory supporting configurable window, max requests, algorithms (fixed-window, sliding-window, token-bucket), key prefixing, and custom time source.
  • Ensure atomicity per key via an in-process locking mechanism and expose utilities for checking limits and resetting keys.
  • Set up middleware package entrypoint, exports, tests scaffold, tsconfig, and package metadata, marking the DB dependency as optional peer.
packages/middleware/src/rate-limit.ts
packages/middleware/src/index.ts
packages/middleware/src/rate-limit.test.ts
packages/middleware/src/index.test.ts
packages/middleware/package.json
packages/middleware/tsconfig.json
packages/middleware/vitest.config.ts

Possibly linked issues

  • #Shared packages: The PR implements the missing shared storage, files, and middleware packages requested by the shared-packages issue.

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.



coderabbitai bot commented Apr 4, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds multiple new packages: file-provider types/tests, a rate-limiting middleware, a storage core (types, conformance, router, errors, s3-compatible), and adapters for Azure, GCS, S3, R2, MinIO, and local filesystem, each with package manifests, tsconfigs, Vitest configs, implementations, and comprehensive tests.

Changes

Cohort / File(s) Summary
Files Package
packages/files/package.json, packages/files/tsconfig.json, packages/files/src/index.ts, packages/files/src/types.ts, packages/files/src/__tests__/file-provider-conformance.test.ts
New @superfunctions/files package: FileProvider types, index re-export, package manifest/tsconfig, and provider conformance tests validating upload lifecycle and method shapes.
Middleware Package
packages/middleware/package.json, packages/middleware/tsconfig.json, packages/middleware/vitest.config.ts, packages/middleware/src/index.ts, packages/middleware/src/rate-limit.ts, packages/middleware/src/index.test.ts, packages/middleware/src/rate-limit.test.ts
New @superfunctions/middleware: rate limiter (fixed-window, sliding-window, token-bucket), KV adapter wrappers, per-key locking, exports and extensive tests.
Storage Core
packages/storage/package.json, packages/storage/tsconfig.json, packages/storage/vitest.config.ts, packages/storage/README.md, packages/storage/src/index.ts, packages/storage/src/types.ts, packages/storage/src/conformance.ts, packages/storage/src/router.ts, packages/storage/src/internal/errors.ts, packages/storage/src/internal/s3-compatible.ts, packages/storage/src/**/*.test.ts
New @superfunctions/storage: core types, conformance utilities, error helpers, S3-compatible adapter implementation, routing layer for multi-target adapters, exports, configs, and tests.
Azure Adapter
packages/storage-azure/package.json, packages/storage-azure/tsconfig.json, packages/storage-azure/vitest.config.ts, packages/storage-azure/README.md, packages/storage-azure/src/index.ts, packages/storage-azure/src/adapter.ts, packages/storage-azure/src/adapter.test.ts
Azure Blob adapter: multiple auth modes, stateless multipart IDs, SAS presign for upload/download, streaming upload/download, and conformance/unit tests.
GCS Adapter
packages/storage-gcs/package.json, packages/storage-gcs/tsconfig.json, packages/storage-gcs/vitest.config.ts, packages/storage-gcs/src/index.ts, packages/storage-gcs/src/adapter.ts, packages/storage-gcs/src/adapter.test.ts
GCS adapter: factory/config, V4 signed upload/download URLs, proxy streaming, multipart disabled, and tests.
Local Adapter
packages/storage-local/package.json, packages/storage-local/tsconfig.json, packages/storage-local/vitest.config.ts, packages/storage-local/src/index.ts, packages/storage-local/src/adapter.ts, packages/storage-local/src/adapter.test.ts
Local filesystem adapter: rootDir realpath normalization, path traversal & symlink protections, streaming upload/download with range support, directory creation, and extensive security/conformance tests.
MinIO Adapter
packages/storage-minio/package.json, packages/storage-minio/tsconfig.json, packages/storage-minio/src/index.ts, packages/storage-minio/src/adapter.ts, packages/storage-minio/src/adapter.test.ts
MinIO/S3-compatible adapter: endpoint/port normalization, S3Client with forcePathStyle, uses S3-compatible plumbing, and tests.
R2 Adapter
packages/storage-r2/package.json, packages/storage-r2/tsconfig.json, packages/storage-r2/src/index.ts, packages/storage-r2/src/adapter.ts, packages/storage-r2/src/adapter.test.ts
Cloudflare R2 adapter: config (accountId, bucket, credentials, optional jurisdiction), S3-compatible wiring, exports and tests.
S3 Adapter
packages/storage-s3/package.json, packages/storage-s3/tsconfig.json, packages/storage-s3/vitest.config.ts, packages/storage-s3/src/index.ts, packages/storage-s3/src/adapter.ts, packages/storage-s3/src/adapter.test.ts
AWS S3 adapter: region/endpoint/forcePathStyle, optional static credential validation, multipart lifecycle, signed URLs, and tests.
Release Manifest
release-packages.json
Appended entries for the new packages to the release manifest.

Sequence Diagram(s)

sequenceDiagram
  participant Client
  participant RoutedAdapter
  participant Adapter
  participant Backend
  Client->>RoutedAdapter: signDownloadUrl({ key, target? })
  RoutedAdapter->>RoutedAdapter: resolve target (input || default)
  RoutedAdapter->>Adapter: signDownloadUrl({ key })
  Adapter->>Backend: generate presigned URL / SAS / signer call
  Backend-->>Adapter: { url, headers?, expiresAt }
  Adapter-->>RoutedAdapter: { url, headers?, expiresAt }
  RoutedAdapter-->>Client: signed URL response

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested labels

Review effort 4/5

Poem

🐰 New packages hopped into the glade,
Files and storage neatly arrayed.
Rate limits hum, adapters sing,
Routes direct each uploaded thing.
A rabbit cheers — the tests all played!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Title check ✅ Passed: the title clearly and accurately summarizes the main changes, adding three major shared packages (storage, files, middleware) to the codebase.
  • Description check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Comment @coderabbitai help to get the list of available commands and usage tips.

@qodo-code-review

Review Summary by Qodo

Add storage, files, and middleware shared packages with multi-cloud adapter support

✨ Enhancement 🧪 Tests


Walkthroughs

Description
• Add foundational shared packages for storage, files, and middleware to the monorepo
• Implement comprehensive storage adapter system with support for multiple cloud providers:
  - **AWS S3** adapter with multipart uploads and signed URLs
  - **Azure Blob Storage** adapter with multiple authentication methods
  - **Google Cloud Storage** adapter with streaming support
  - **Cloudflare R2** adapter with jurisdiction support
  - **MinIO** adapter for S3-compatible deployments
  - **Local filesystem** adapter with security hardening (path traversal prevention, symlink
  protection)
• Provide storage routing layer for multi-target deployments with capability merging
• Implement rate-limiting middleware with three algorithms: fixed-window, sliding-window, and
  token-bucket
• Define file provider interface and types for upload session management
• Include comprehensive test suites for all adapters and middleware with 30+ test cases per
  component
• Add conformance testing framework to validate adapter implementations
• Configure TypeScript, Vitest, and package metadata for all new packages
Diagram
flowchart LR
  A["Storage Base<br/>Types & Interfaces"] --> B["S3-Compatible<br/>Factory"]
  B --> C["AWS S3<br/>Adapter"]
  B --> D["MinIO<br/>Adapter"]
  B --> E["Cloudflare R2<br/>Adapter"]
  A --> F["Azure Blob<br/>Adapter"]
  A --> G["GCS<br/>Adapter"]
  A --> H["Local FS<br/>Adapter"]
  C --> I["Storage Router<br/>Multi-target"]
  D --> I
  E --> I
  F --> I
  G --> I
  H --> I
  J["Files Package<br/>Types"] --> K["FileProvider<br/>Interface"]
  L["Middleware<br/>Rate Limiter"] --> M["Fixed/Sliding/<br/>Token-Bucket"]
  A --> N["Conformance<br/>Testing"]


File Changes

1. packages/storage-azure/src/adapter.ts ✨ Enhancement +372/-0

Azure Blob Storage adapter implementation

• Implements Azure Blob Storage adapter with support for signed URLs, multipart uploads, and
 streaming
• Handles multiple authentication methods: connection string, account key, and SAS token
• Provides stateless multipart upload using base64-encoded block IDs
• Includes proxy streaming for upload and download operations

packages/storage-azure/src/adapter.ts


2. packages/storage-local/src/adapter.test.ts 🧪 Tests +328/-0

Local storage adapter test suite

• Comprehensive test suite for local file storage adapter with 30+ test cases
• Validates path traversal prevention, symlink security, and range request handling
• Tests conformance with storage adapter interface and capability validation
• Covers edge cases like nested directories, invalid ranges, and forbidden characters

packages/storage-local/src/adapter.test.ts


3. packages/middleware/src/rate-limit.ts ✨ Enhancement +269/-0

Rate limiting middleware with multiple algorithms

• Implements rate limiting middleware with three algorithms: fixed-window, sliding-window, and
 token-bucket
• Provides in-memory KV store and adapter-based persistence options
• Includes key locking mechanism for thread-safe concurrent access
• Supports configurable window duration, request limits, and custom key prefixes

packages/middleware/src/rate-limit.ts


4. packages/storage-azure/src/adapter.test.ts 🧪 Tests +237/-0

Azure storage adapter test suite

• Tests Azure adapter instantiation with various authentication methods
• Validates multipart upload semantics and stateless uploadId generation
• Tests signed URL generation and error handling for missing credentials
• Includes cross-instance multipart operation validation

packages/storage-azure/src/adapter.test.ts


5. packages/middleware/src/rate-limit.test.ts 🧪 Tests +208/-0

Rate limit middleware test suite

• Tests all three rate-limiting algorithms with distinct behavior validation
• Validates key separation, reset functionality, and TTL expiration
• Tests custom persistence adapters and in-memory KV store
• Includes fake timer tests for time-dependent behavior

packages/middleware/src/rate-limit.test.ts


6. packages/storage/src/conformance.ts ✨ Enhancement +230/-0

Storage adapter conformance testing framework

• Defines conformance testing framework for storage adapters
• Validates required methods exist and match declared capabilities
• Provides createFakeStorageAdapter factory for testing
• Ensures at least one upload and download mode is supported

packages/storage/src/conformance.ts


7. packages/storage/src/conformance.test.ts 🧪 Tests +216/-0

Storage conformance test suite

• Tests conformance validation for adapters with various capability combinations
• Validates error detection for missing methods and invalid capabilities
• Tests fake adapter factory with different capability profiles
• Ensures stateless and concurrent-safe adapter behavior

packages/storage/src/conformance.test.ts


8. packages/storage-local/src/adapter.ts ✨ Enhancement +224/-0

Local file system storage adapter

• Implements local file system storage adapter with streaming support
• Includes comprehensive path traversal and symlink attack prevention
• Validates keys for forbidden characters, absolute paths, and null bytes
• Supports range requests for partial file downloads

packages/storage-local/src/adapter.ts
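The key-validation rules summarized above (empty keys, null bytes, absolute paths, traversal) might look roughly like this sketch; the adapter's exact rules and error handling may differ.

```typescript
import { isAbsolute, normalize } from "node:path";

// Illustrative key validation in the spirit of the local adapter's checks.
function isValidKey(key: string): boolean {
  if (key.length === 0) return false;
  if (key.includes("\0")) return false; // null bytes can truncate paths in syscalls
  if (isAbsolute(key)) return false; // keys must be relative to the root
  const normalized = normalize(key);
  // After normalization, any remaining leading ".." escapes the root.
  if (normalized === ".." || normalized.startsWith("../") || normalized.startsWith("..\\")) {
    return false;
  }
  return true;
}
```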


9. packages/storage/src/internal/s3-compatible.ts ✨ Enhancement +234/-0

S3-compatible storage adapter implementation

• Provides S3-compatible storage adapter factory for AWS SDK v3
• Supports signed URLs, multipart uploads, and streaming downloads
• Configurable error detection for different S3-like services
• Handles range requests and content type constraints

packages/storage/src/internal/s3-compatible.ts


10. packages/files/src/__tests__/file-provider-conformance.test.ts 🧪 Tests +149/-0

File provider conformance test suite

• Defines conformance test suite for FileProvider interface
• Validates upload session creation, status tracking, and part signing
• Tests version-aware file reads and multipart completion workflow
• Ensures canonical field shapes for upload operations

packages/files/src/__tests__/file-provider-conformance.test.ts



11. packages/storage-gcs/src/adapter.ts ✨ Enhancement +195/-0

Google Cloud Storage adapter implementation

• Implements Google Cloud Storage adapter with signed URLs and streaming
• Supports multiple authentication methods: credentials, keyfile, projectId
• Provides streaming upload and download with range request support
• Does not support S3-style multipart uploads (GCS-native approach)

packages/storage-gcs/src/adapter.ts


12. packages/storage-minio/src/adapter.test.ts 🧪 Tests +166/-0

MinIO storage adapter test suite

• Tests MinIO adapter instantiation with various configurations
• Validates capabilities and conformance with storage interface
• Tests multipart and signed URL method availability
• Covers SSL/non-SSL endpoint support and custom regions

packages/storage-minio/src/adapter.test.ts


13. packages/storage/src/router.ts ✨ Enhancement +152/-0

Storage adapter routing and multi-target support

• Implements storage routing layer for multi-target deployments
• Routes operations to appropriate adapter based on target name
• Merges capabilities across adapters using intersection logic
• Provides target-specific capability queries and error handling

packages/storage/src/router.ts
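The intersection-based capability merging described above can be sketched as follows; this is a simplified model, not the router's actual code.

```typescript
// Sketch: merge capabilities across adapters by intersection, so the router
// only advertises what every configured target supports.
type Capabilities = Record<string, boolean>;

function mergeCapabilities(all: Capabilities[]): Capabilities {
  if (all.length === 0) return {};
  // Collect every capability name seen on any adapter.
  const keys = new Set(all.flatMap((c) => Object.keys(c)));
  const merged: Capabilities = {};
  for (const key of keys) {
    // A capability survives only if every adapter supports it.
    merged[key] = all.every((c) => c[key] === true);
  }
  return merged;
}
```

Target-specific capability queries (also mentioned above) would bypass this merge and report a single adapter's capabilities directly.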


14. packages/storage-r2/src/adapter.test.ts 🧪 Tests +136/-0

Cloudflare R2 storage adapter test suite

• Tests Cloudflare R2 adapter instantiation and configuration
• Validates capabilities and conformance with storage interface
• Tests multipart and signed URL method availability
• Covers jurisdiction option and stateless adapter behavior

packages/storage-r2/src/adapter.test.ts


15. packages/storage/src/router.test.ts 🧪 Tests +134/-0

Storage router test suite

• Tests routing logic for multi-target storage configurations
• Validates capability merging and target-specific capability queries
• Tests error handling for missing or unconfigured targets
• Ensures correct adapter selection and method delegation

packages/storage/src/router.test.ts


16. packages/storage-gcs/src/adapter.test.ts 🧪 Tests +99/-0

Google Cloud Storage adapter test suite

• Tests GCS adapter creation with various authentication methods
• Validates capabilities and conformance with storage interface
• Confirms absence of multipart methods (GCS-specific limitation)
• Tests signed URL and streaming method availability

packages/storage-gcs/src/adapter.test.ts


17. packages/storage-s3/src/adapter.test.ts 🧪 Tests +95/-0

AWS S3 storage adapter test suite

• Tests S3 adapter instantiation with minimal and full configurations
• Validates capabilities and conformance with storage interface
• Tests multipart method availability and validation
• Includes error handling for invalid multipart completion

packages/storage-s3/src/adapter.test.ts


18. packages/storage/src/types.ts ✨ Enhancement +98/-0

Storage adapter types and interfaces

• Defines core storage adapter interface and type definitions
• Specifies capabilities, multipart operations, and streaming methods
• Includes signed URL constraints and object stat structures
• Supports multi-target deployments with optional target parameter

packages/storage/src/types.ts


19. packages/middleware/src/index.ts ✨ Enhancement +1/-0

Middleware package exports

• Exports rate-limit module from middleware package

packages/middleware/src/index.ts


20. packages/files/src/types.ts ✨ Enhancement +61/-0

File provider and metadata type definitions

• Defines FileProviderContext interface for request context (principalId, tenantId, requestId)
• Defines FileProvider interface with methods for upload session management (create, status, sign
 parts, complete, abort)
• Defines FileMetadata interface for file properties (id, name, size, mimeType, timestamps,
 metadata)
• Defines FileUpload interface for file upload input (file, name, mimeType, metadata)

packages/files/src/types.ts


21. packages/storage/src/internal/errors.ts Error handling +59/-0

Storage error handling and validation utilities

• Implements error creation utilities (createStorageError, createNotFoundError,
 createMultipartInvalidError)
• Provides assertMultipartParts validation function for multipart uploads
• Implements isNotFoundError detection with flexible matching options (names, status codes,
 numeric/string codes)

packages/storage/src/internal/errors.ts
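The flexible not-found matching described above (by error name, status code, or string/numeric code) might be shaped like this sketch; the option names are assumptions, not the module's real API.

```typescript
// Sketch: detect provider-specific "not found" errors by any of several
// signals, since S3, Azure, and GCS SDKs report them differently.
interface NotFoundMatchOptions {
  names?: string[];
  statusCodes?: number[];
  codes?: Array<string | number>;
}

function looksLikeNotFound(err: unknown, opts: NotFoundMatchOptions): boolean {
  if (typeof err !== "object" || err === null) return false;
  const e = err as { name?: string; statusCode?: number; code?: string | number };
  if (e.name !== undefined && opts.names?.includes(e.name)) return true;
  if (e.statusCode !== undefined && opts.statusCodes?.includes(e.statusCode)) return true;
  if (e.code !== undefined && opts.codes?.includes(e.code)) return true;
  return false;
}
```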


22. packages/storage-minio/src/adapter.ts ✨ Enhancement +47/-0

MinIO storage adapter implementation

• Defines MinIOStorageConfig interface with endpoint, bucket, credentials, and optional SSL/port
 settings
• Implements createMinIOStorageAdapter factory function that configures S3Client for MinIO
• Constructs endpoint URL with protocol and port handling
• Delegates to S3-compatible adapter with MinIO-specific error names

packages/storage-minio/src/adapter.ts


23. packages/storage/src/internal/s3-compatible.test.ts 🧪 Tests +46/-0

S3-compatible adapter unit tests

• Tests error mapping for S3-compatible adapters (NotFound errors map to STORAGE_NOT_FOUND)
• Tests multipart upload validation (rejects empty parts array with STORAGE_MULTIPART_INVALID)

packages/storage/src/internal/s3-compatible.test.ts


24. packages/storage-r2/src/adapter.ts ✨ Enhancement +36/-0

Cloudflare R2 storage adapter implementation

• Defines R2StorageConfig interface with accountId, bucket, credentials, and optional jurisdiction
• Implements createR2StorageAdapter factory function that constructs Cloudflare R2 endpoint
• Supports EU and FedRAMP jurisdictions with appropriate endpoint URLs
• Delegates to S3-compatible adapter with R2-specific error names

packages/storage-r2/src/adapter.ts


25. packages/storage-s3/src/adapter.ts ✨ Enhancement +34/-0

AWS S3 storage adapter implementation

• Defines S3StorageConfig interface with bucket, region, optional credentials, endpoint, and
 forcePathStyle
• Implements createS3StorageAdapter factory function that configures S3Client
• Handles optional credentials (uses undefined if not provided)
• Delegates to S3-compatible adapter with S3-specific error names

packages/storage-s3/src/adapter.ts


26. packages/middleware/src/index.test.ts 🧪 Tests +16/-0

Middleware rate limiter unit tests

• Tests rate limiter creation with configuration (windowMs, maxRequests)
• Tests rate limiter check method returns allowed boolean
• Verifies RateLimiter interface exports check and reset functions

packages/middleware/src/index.test.ts


27. packages/storage/vitest.config.ts ⚙️ Configuration changes +13/-0

Storage package Vitest configuration

• Configures Vitest with globals enabled and Node environment
• Sets test file patterns for src and __tests__ directories
• Configures coverage reporting with text and lcov formats using v8 provider

packages/storage/vitest.config.ts


28. packages/storage/src/index.test.ts 🧪 Tests +8/-0

Storage package export verification test

• Tests that storage package exports types and utilities
• Verifies module can be imported successfully

packages/storage/src/index.test.ts


29. packages/storage-gcs/vitest.config.ts ⚙️ Configuration changes +9/-0

GCS storage adapter Vitest configuration

• Configures Vitest with globals enabled and Node environment
• Sets test file pattern for src directory

packages/storage-gcs/vitest.config.ts


30. packages/storage-s3/vitest.config.ts ⚙️ Configuration changes +9/-0

S3 storage adapter Vitest configuration

• Configures Vitest with globals enabled and Node environment
• Sets test file pattern for src directory

packages/storage-s3/vitest.config.ts


31. packages/middleware/vitest.config.ts ⚙️ Configuration changes +9/-0

Middleware package Vitest configuration

• Configures Vitest with globals enabled and Node environment
• Sets test file pattern for src directory

packages/middleware/vitest.config.ts


32. packages/storage-local/vitest.config.ts ⚙️ Configuration changes +9/-0

Local storage adapter Vitest configuration

• Configures Vitest with globals enabled and Node environment
• Sets test file pattern for src directory

packages/storage-local/vitest.config.ts


33. packages/storage-azure/vitest.config.ts ⚙️ Configuration changes +8/-0

Azure storage adapter Vitest configuration

• Configures Vitest with globals enabled and Node environment

packages/storage-azure/vitest.config.ts


34. packages/storage-s3/src/index.ts ✨ Enhancement +2/-0

S3 adapter package exports

• Exports createS3StorageAdapter function with alias createS3Storage
• Exports S3StorageConfig type

packages/storage-s3/src/index.ts


35. packages/storage-local/src/index.ts ✨ Enhancement +2/-0

Local storage adapter package exports

• Exports createLocalStorageAdapter function with alias createLocalStorage
• Exports LocalStorageConfig type

packages/storage-local/src/index.ts


36. packages/storage/src/index.ts ✨ Enhancement +3/-0

Storage package main exports

• Exports all types from types module
• Exports conformance utilities
• Exports router utilities

packages/storage/src/index.ts


37. packages/storage-minio/src/index.ts ✨ Enhancement +1/-0

MinIO adapter package exports

• Exports createMinIOStorageAdapter function
• Exports MinIOStorageConfig type

packages/storage-minio/src/index.ts


38. packages/storage-r2/src/index.ts ✨ Enhancement +1/-0

R2 adapter package exports

• Exports createR2StorageAdapter function
• Exports R2StorageConfig type

packages/storage-r2/src/index.ts


39. packages/storage-azure/src/index.ts ✨ Enhancement +1/-0

Azure adapter package exports

• Re-exports all exports from adapter module

packages/storage-azure/src/index.ts


40. packages/storage-gcs/src/index.ts ✨ Enhancement +1/-0

GCS adapter package exports

• Re-exports all exports from adapter module

packages/storage-gcs/src/index.ts


41. packages/files/src/index.ts ✨ Enhancement +1/-0

Files package main exports

• Re-exports all types from types module

packages/files/src/index.ts


42. packages/storage/package.json ⚙️ Configuration changes +50/-0

Storage package configuration and metadata

• Defines storage package metadata (name, version 0.1.0, description)
• Configures ESM module with TypeScript declarations
• Exports main entry point and internal subpaths (s3-compatible, errors, types)
• Includes build, test, typecheck, and clean scripts
• Specifies dev dependencies (TypeScript, Vitest, Node types)

packages/storage/package.json


43. packages/middleware/package.json ⚙️ Configuration changes +47/-0

Middleware package configuration and metadata

• Defines middleware package metadata (name, version 0.1.0, description)
• Configures ESM module with TypeScript declarations
• Exports main entry point and rate-limit subpath
• Includes optional peer dependency on @superfunctions/db
• Specifies build, test, typecheck, and clean scripts

packages/middleware/package.json


44. packages/storage-r2/package.json ⚙️ Configuration changes +41/-0

R2 adapter package configuration and metadata

• Defines R2 adapter package metadata (name, version 0.1.0, description)
• Configures ESM module with TypeScript declarations
• Specifies dependencies on AWS SDK S3 client and @superfunctions/storage
• Includes build, test, typecheck, and clean scripts

packages/storage-r2/package.json


45. packages/storage-minio/package.json ⚙️ Configuration changes +41/-0

MinIO adapter package configuration and metadata

• Defines MinIO adapter package metadata (name, version 0.1.0, description)
• Configures ESM module with TypeScript declarations
• Specifies dependencies on AWS SDK S3 client and @superfunctions/storage
• Includes build, test, typecheck, and clean scripts

packages/storage-minio/package.json


46. packages/storage-s3/package.json ⚙️ Configuration changes +41/-0

S3 adapter package configuration and metadata

• Defines S3 adapter package metadata (name, version 0.1.0, description)
• Configures ESM module with TypeScript declarations
• Specifies dependencies on AWS SDK S3 client and @superfunctions/storage
• Includes build, test, typecheck, and clean scripts

packages/storage-s3/package.json


47. packages/storage-azure/package.json ⚙️ Configuration changes +40/-0

Azure adapter package configuration and metadata

• Defines Azure adapter package metadata (name, version 0.1.0, description)
• Configures ESM module with TypeScript declarations
• Specifies dependency on Azure Blob Storage SDK and @superfunctions/storage
• Includes build, test, typecheck, and clean scripts

packages/storage-azure/package.json


48. packages/storage-gcs/package.json ⚙️ Configuration changes +40/-0

GCS adapter package configuration and metadata

• Defines GCS adapter package metadata (name, version 0.1.0, description)
• Configures ESM module with TypeScript declarations
• Specifies dependency on Google Cloud Storage SDK and @superfunctions/storage
• Includes build, test, typecheck, and clean scripts

packages/storage-gcs/package.json


49. packages/storage-azure/README.md 📝 Documentation +56/-0

Azure storage adapter documentation

• Provides installation instructions for Azure adapter package
• Documents usage examples with connectionString, accountName+key, and SAS token authentication
• Lists supported features (signed URLs, multipart uploads, streaming)
• Explains Azure block blob semantics mapping (uploadId, partNumber, etag)

packages/storage-azure/README.md


50. packages/files/package.json ⚙️ Configuration changes +40/-0

Files package configuration and metadata

• Defines files package metadata (name, version 0.1.0, description)
• Configures ESM module with TypeScript declarations
• Exports main entry point and types subpath
• Includes build, test, typecheck, and clean scripts

packages/files/package.json


51. packages/storage-local/package.json ⚙️ Configuration changes +39/-0

Local storage adapter package configuration and metadata

• Defines local storage adapter package metadata (name, version 0.1.0, description)
• Configures ESM module with TypeScript declarations
• Specifies dependency on @superfunctions/storage
• Includes build, test, typecheck, and clean scripts

packages/storage-local/package.json


52. packages/storage/tsconfig.json ⚙️ Configuration changes +27/-0

Storage package TypeScript configuration

• Configures TypeScript for ES2022 target with ESNext modules
• Enables strict mode with declaration maps and source maps
• Sets bundler module resolution with strict checking (noUnusedLocals, noUnusedParameters)
• Excludes test files from compilation

packages/storage/tsconfig.json


53. packages/storage-minio/tsconfig.json ⚙️ Configuration changes +20/-0

MinIO adapter TypeScript configuration

• Configures TypeScript for ES2022 target with NodeNext modules
• Enables strict mode with declaration maps and source maps
• Sets NodeNext module resolution
• Excludes test files from compilation

packages/storage-minio/tsconfig.json


54. packages/storage-r2/tsconfig.json ⚙️ Configuration changes +20/-0

R2 adapter TypeScript configuration

• Configures TypeScript for ES2022 target with NodeNext modules
• Enables strict mode with declaration maps and source maps
• Sets NodeNext module resolution
• Excludes test files from compilation

packages/storage-r2/tsconfig.json


55. packages/storage-azure/tsconfig.json ⚙️ Configuration changes +14/-0

Azure adapter TypeScript configuration

• Configures TypeScript for ES2022 target with ESNext modules
• Enables strict mode with declaration generation
• Sets bundler module resolution

packages/storage-azure/tsconfig.json


56. packages/storage-gcs/tsconfig.json ⚙️ Configuration changes +14/-0

GCS adapter TypeScript configuration

• Configures TypeScript for ES2022 target with ESNext modules
• Enables strict mode with declaration generation
• Sets bundler module resolution

packages/storage-gcs/tsconfig.json


57. packages/storage-s3/tsconfig.json ⚙️ Configuration changes +14/-0

S3 adapter TypeScript configuration

• Configures TypeScript for ES2022 target with ESNext modules
• Enables strict mode with declaration generation
• Sets bundler module resolution

packages/storage-s3/tsconfig.json


58. packages/middleware/tsconfig.json ⚙️ Configuration changes +14/-0

Middleware package TypeScript configuration

• Configures TypeScript for ES2022 target with ESNext modules
• Enables strict mode with declaration generation
• Sets bundler module resolution

packages/middleware/tsconfig.json


59. packages/storage-local/tsconfig.json ⚙️ Configuration changes +14/-0

Local storage adapter TypeScript configuration

• Configures TypeScript for ES2022 target with ESNext modules
• Enables strict mode with declaration generation
• Sets bundler module resolution

packages/storage-local/tsconfig.json


60. packages/files/tsconfig.json ⚙️ Configuration changes +14/-0

Files package TypeScript configuration

• Configures TypeScript for ES2022 target with ESNext modules
• Enables strict mode with declaration generation
• Sets bundler module resolution

packages/files/tsconfig.json


61. packages/storage/README.md 📝 Documentation +9/-0

Storage package documentation

• Provides basic package description and installation instructions
• Documents npm install command for storage adapter system

packages/storage/README.md



@qodo-code-review

qodo-code-review bot commented Apr 4, 2026

Code Review by Qodo

🐞 Bugs (3)   📘 Rule violations (0)   📎 Requirement gaps (0)   🎨 UX Issues (0)   ⚙ Maintainability (3)



Action required

1. Root check uses '/'🐞
Description
createLocalStorageAdapter mixes platform-dependent normalize()/join() with hard-coded "/" in
root-prefix checks, which can incorrectly reject valid keys on platforms that use "\" separators.
This breaks local storage operations on those platforms by throwing false “path traversal detected”
errors.
Code

packages/storage-local/src/adapter.ts[R36-66]

+  function validateKey(key: string): void {
+    if (!key || key.length === 0) {
+      throw new Error('Invalid key: key cannot be empty');
+    }
+
+    if (isAbsolute(key)) {
+      throw new Error('Invalid key: absolute paths are not allowed');
+    }
+
+    const normalizedKey = normalize(key);
+
+    if (normalizedKey.startsWith('..') || normalizedKey.includes('/..') || normalizedKey.includes('\\..')) {
+      throw new Error('Invalid key: path traversal detected');
+    }
+
+    const forbidden = ['\0', ':', '*', '?', '"', '<', '>', '|'];
+    for (const char of forbidden) {
+      if (key.includes(char)) {
+        throw new Error(`Invalid key: forbidden character '${char}'`);
+      }
+    }
+  }
+
+  function resolvePath(key: string): string {
+    validateKey(key);
+
+    const resolved = join(normalizedRoot, normalize(key));
+
+    if (!resolved.startsWith(normalizedRoot + '/') && resolved !== normalizedRoot) {
+      throw new Error('Invalid key: path traversal detected');
+    }
Evidence
The adapter explicitly anticipates Windows-style separators in traversal detection, but later uses a
forward-slash-only root prefix check when validating the joined path, making the safety check
inconsistent with the path normalization strategy.

packages/storage-local/src/adapter.ts[36-49]
packages/storage-local/src/adapter.ts[59-66]
packages/storage-local/src/adapter.ts[71-77]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`packages/storage-local/src/adapter.ts` enforces root containment using `startsWith(normalizedRoot + '/')` / `startsWith(root + '/')`, which assumes POSIX separators and is inconsistent with platform-dependent `path.normalize()` output.
## Issue Context
This adapter already accounts for `\\..` traversal patterns, indicating it intends to work on Windows as well.
## Fix Focus Areas
- Use `path.relative(root, candidate)`-based containment checks (or `candidate.startsWith(root + path.sep)`) consistently in both `resolvePath()` and `validateResolvedPath()`.
- Ensure the comparison logic works regardless of path separator.
### Fix Focus Areas (locations)
- packages/storage-local/src/adapter.ts[45-77]
- packages/storage-local/src/adapter.ts[59-69]

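The separator-agnostic containment check the review suggests could look like this sketch (using `path.relative` as recommended); it is not the adapter's final fix.

```typescript
import { relative, resolve, isAbsolute, sep } from "node:path";

// A candidate path is inside root iff the relative path from root neither
// climbs upward ("..") nor lands on a different drive/absolute location.
// This works regardless of platform separator, unlike startsWith(root + '/').
function isInsideRoot(root: string, candidate: string): boolean {
  const rel = relative(resolve(root), resolve(candidate));
  if (rel === "") return true; // the root itself
  return !isAbsolute(rel) && rel.split(sep)[0] !== "..";
}
```

Note the `split(sep)` comparison rather than `startsWith("..")`, so legitimate names like `..hidden` directly under the root are not rejected.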


2. Upload symlink gap🐞
Description
openUploadStream validates only the parent directory realpath and never validates the resolved
file path itself, unlike other methods that call validateResolvedPath(filePath). A file-level
symlink at filePath can bypass the root confinement checks during uploads.
Code

packages/storage-local/src/adapter.ts[R128-141]

+    async openUploadStream(input: { key: string; contentType?: string }): Promise<WritableStream> {
+      const filePath = resolvePath(input.key);
+      const dir = dirname(filePath);
+
+      await mkdir(dir, { recursive: true });
+
+      const dirRealPath = await realpath(dir);
+      const root = await getResolvedRoot();
+      if (!dirRealPath.startsWith(root + '/') && dirRealPath !== root) {
+        throw new Error('Invalid key: path traversal detected via symlink');
+      }
+
+      const nodeStream = createWriteStream(filePath);
+
Evidence
statObject, deleteObject, and openDownloadStream all validate the full filePath via
validateResolvedPath(filePath) before acting; openUploadStream does not, and writes directly to
filePath after only checking the directory realpath.

packages/storage-local/src/adapter.ts[90-96]
packages/storage-local/src/adapter.ts[114-120]
packages/storage-local/src/adapter.ts[128-141]
packages/storage-local/src/adapter.ts[167-175]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`openUploadStream()` does not validate the final `filePath` against symlink traversal the way `statObject/deleteObject/openDownloadStream` do, creating an upload-only escape hatch.
## Issue Context
Directory-level realpath validation does not guarantee the file path is not a symlink to an external location.
## Fix Focus Areas
- Call `validateResolvedPath(filePath)` after `mkdir(dir, { recursive: true })` and before `createWriteStream(filePath)`.
- Consider rejecting existing symlink files explicitly (e.g., `lstat` check) before opening the stream.
### Fix Focus Areas (locations)
- packages/storage-local/src/adapter.ts[128-141]
- packages/storage-local/src/adapter.ts[71-84]



3. Azure multipart commit mismatch🐞
Description
Azure multipart signing generates a deterministic base64 blockId in the signed URL, but completion
trusts parts[].etag as the committed block id (accepting any base64-ish string), so multipart
completion can commit the wrong identifiers or fail. This can break multipart uploads even when part
signing succeeds.
Code

packages/storage-azure/src/adapter.ts[R233-287]

+      const blockBlobClient = containerClient.getBlockBlobClient(input.key);
+      const blockId = generateBlockId(input.partNumber);
+
+      const startsOn = new Date();
+      const expiresOn = new Date(startsOn.getTime() + input.expiresInSeconds * 1000);
+
+      const permissions = BlobSASPermissions.parse('w'); // write
+
+      const sasToken = generateBlobSASQueryParameters(
+        {
+          containerName: config.containerName,
+          blobName: input.key,
+          permissions,
+          startsOn,
+          expiresOn,
+          protocol: SASProtocol.Https,
+        },
+        credential
+      ).toString();
+
+      // The URL must include the blockid query parameter for PutBlock operation
+      const url = `${blockBlobClient.url}?comp=block&blockid=${encodeURIComponent(blockId)}&${sasToken}`;
+
+      return { url };
+    },
+
+    async completeMultipartUpload(input: {
+      key: string;
+      uploadId: string;
+      parts: Array<{ partNumber: number; etag: string }>;
+    }): Promise<void> {
+      assertMultipartParts(input.parts);
+
+      // Verify uploadId
+      const decodedKey = Buffer.from(input.uploadId, 'base64').toString('utf-8');
+      if (decodedKey !== input.key) {
+        throw new Error(`Invalid uploadId for key: ${input.key}`);
+      }
+
+      const blockBlobClient = containerClient.getBlockBlobClient(input.key);
+
+      // In Azure, the etag from each part is actually the blockId we generated
+      // Sort parts and commit blocks
+      const sortedParts = input.parts.sort((a, b) => a.partNumber - b.partNumber);
+      const blockList = sortedParts.map(p => {
+        // The etag should be the blockId we generated
+        // If client recorded blockId as etag, use it; otherwise regenerate
+        const blockId = p.etag.startsWith('block-') || p.etag.match(/^[A-Za-z0-9+/=]+$/)
+          ? p.etag
+          : generateBlockId(p.partNumber);
+        return blockId;
+      });
+
+      await blockBlobClient.commitBlockList(blockList);
+    },
Evidence
signMultipartUploadPartUrl generates a blockId internally and only places it in the URL;
completeMultipartUpload then uses the client-provided etag as the block ID via a permissive
check, instead of deterministically recomputing the same blockId from partNumber.

packages/storage-azure/src/adapter.ts[233-256]
packages/storage-azure/src/adapter.ts[259-287]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Multipart part signing uses `generateBlockId(partNumber)` but completion uses `etag` as the block identifier. This is fragile and can cause incorrect `commitBlockList` calls.
## Issue Context
The adapter is stateless; it can deterministically regenerate the exact block IDs from `partNumber` without trusting client input.
## Fix Focus Areas
- In `completeMultipartUpload`, build `blockList` from `generateBlockId(partNumber)` for every part (after sorting/validating parts), and do not use `etag` as the block identifier.
- If you need to validate, compare provided `etag` to expected blockId and throw on mismatch.
### Fix Focus Areas (locations)
- packages/storage-azure/src/adapter.ts[233-256]
- packages/storage-azure/src/adapter.ts[259-287]

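The recommended fix, recomputing block IDs deterministically from `partNumber` instead of trusting client-supplied etags, can be sketched as below. The exact encoding of `generateBlockId` here is an assumption; Azure only requires that all block IDs in a blob be base64-encoded and of equal length.

```typescript
// Sketch: build the commitBlockList input purely from part numbers, so the
// committed identifiers always match what part signing generated.
function generateBlockId(partNumber: number): string {
  // Zero-padding keeps all IDs the same length, as Azure requires.
  return Buffer.from(`block-${String(partNumber).padStart(6, "0")}`).toString("base64");
}

function buildBlockList(parts: Array<{ partNumber: number }>): string[] {
  return [...parts]
    .sort((a, b) => a.partNumber - b.partNumber)
    .map((p) => generateBlockId(p.partNumber)); // ignore client-recorded etags
}
```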



Remediation recommended

4. Azure upload buffers memory🐞
Description
createAzureStorageAdapter().openUploadStream() buffers all chunks in memory and only uploads on
close(), which can OOM on large uploads and is not actually streaming. This contradicts
capabilities.proxyStreamingUpload: true.
Code

packages/storage-azure/src/adapter.ts[R294-316]

+    async openUploadStream(input: { key: string; contentType?: string }): Promise<WritableStream> {
+      const blockBlobClient = containerClient.getBlockBlobClient(input.key);
+
+      let buffer: Buffer[] = [];
+
+      return new WritableStream({
+        async write(chunk) {
+          const buf = chunk instanceof Uint8Array ? Buffer.from(chunk) : chunk;
+          buffer.push(buf);
+        },
+        async close() {
+          const content = Buffer.concat(buffer);
+          await blockBlobClient.upload(content, content.length, {
+            blobHTTPHeaders: {
+              blobContentType: input.contentType,
+            },
+          });
+          buffer = [];
+        },
+        abort() {
+          buffer = [];
+        },
+      });
Evidence
The implementation pushes every written chunk into an array and then Buffer.concats it on close
before uploading, meaning memory scales with full object size.

packages/storage-azure/src/adapter.ts[294-316]
packages/storage-azure/src/adapter.ts[93-99]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Azure adapter's `openUploadStream` buffers the entire payload and uploads at the end, which can cause memory exhaustion and contradicts the declared `proxyStreamingUpload` capability.
## Issue Context
The adapter advertises `proxyStreamingUpload: true`, so downstream code may rely on this for large uploads.
## Fix Focus Areas
- Replace buffering with a streaming implementation (e.g., use an SDK streaming upload API) OR set `proxyStreamingUpload: false` and omit `openUploadStream`.
### Fix Focus Areas (locations)
- packages/storage-azure/src/adapter.ts[93-99]
- packages/storage-azure/src/adapter.ts[294-316]



5. Optional peer breaks typing 🐞
Description
@superfunctions/middleware marks @superfunctions/db as an optional peer dependency, but
rate-limit.ts publicly imports/re-exports DB types, so TypeScript consumers without
@superfunctions/db cannot typecheck this package. This conflicts with the “optional” peer
contract.
Code

packages/middleware/src/rate-limit.ts[R1-15]

+import type { Adapter, KVStoreAdapter } from '@superfunctions/db';
+
+export { KVStoreAdapter };
+
+export type LegacyWindowAlgorithm = 'sliding-window'|'fixed-window';
+export type RateLimitAlgorithm = LegacyWindowAlgorithm | 'token-bucket';
+
+export interface RateLimitConfig {
+  windowMs: number;
+  maxRequests: number;
+  keyPrefix?: string;
+  persistence?: Adapter | KVStoreAdapter;
+  algorithm?: RateLimitAlgorithm;
+  now?: () => number;
+}
Evidence
The middleware package exposes KVStoreAdapter and uses Adapter | KVStoreAdapter in its public
config type, while package.json declares @superfunctions/db optional.

packages/middleware/src/rate-limit.ts[1-15]
packages/middleware/package.json[36-41]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The rate-limiter types require `@superfunctions/db` to be present for TS resolution, but the dependency is marked optional.
## Issue Context
Optional peers are only safe when your public types do not require the peer module to exist.
## Fix Focus Areas
- Either remove `peerDependenciesMeta.optional` (make `@superfunctions/db` required), OR
- Stop importing/re-exporting `@superfunctions/db` types in the public surface: define local minimal interfaces for `KVStoreAdapter` and an adapter-like persistence shape.
### Fix Focus Areas (locations)
- packages/middleware/src/rate-limit.ts[1-15]
- packages/middleware/package.json[36-41]

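The second remediation option, defining a local structural interface so the public types no longer require `@superfunctions/db` to resolve, might look like this sketch. The method names are assumptions based on the review excerpt; TypeScript's structural typing means the real `@superfunctions/db` adapter would still satisfy it.

```typescript
// Sketch: a locally-defined KV shape for the middleware's public surface,
// decoupling type resolution from the optional peer dependency.
interface LocalKVStoreAdapter {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlMs?: number): Promise<void>;
  delete(key: string): Promise<void>;
}

// Minimal in-memory implementation (TTL handling omitted for brevity).
function createMemoryKV(): LocalKVStoreAdapter {
  const store = new Map<string, string>();
  return {
    async get(key) {
      return store.get(key) ?? null;
    },
    async set(key, value) {
      store.set(key, value);
    },
    async delete(key) {
      store.delete(key);
    },
  };
}
```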


6. Token bucket divides by zero🐞
Description
createRateLimiter does not validate windowMs/windowSeconds > 0; token-bucket math divides by
effectiveWindowMs and can produce Infinity/NaN for refill rate and reset calculations. This
results in invalid resetAt values and unstable limiter behavior under misconfiguration.
Code

packages/middleware/src/rate-limit.ts[R175-224]

+        const effectiveWindowMs =
+          (input.windowSeconds !== undefined ? input.windowSeconds * 1000 : undefined) ?? windowMs;
+        const effectiveLimit = input.limit ?? maxRequests;
+        const currentTime = now();
+
+        if (algorithm === 'sliding-window') {
+          const state = (await getState<SlidingWindowState>(kv, key)) ?? { timestamps: [] };
+          const floor = currentTime - effectiveWindowMs;
+          state.timestamps = state.timestamps.filter((timestamp) => timestamp > floor);
+
+          const allowed = state.timestamps.length < effectiveLimit;
+          if (allowed) {
+            state.timestamps.push(currentTime);
+          }
+
+          const oldest = state.timestamps[0] ?? currentTime;
+          const resetAt = oldest + effectiveWindowMs;
+          const ttlMs = Math.max(1, resetAt - currentTime);
+          await setState(kv, key, state, ttlMs);
+
+          return {
+            allowed,
+            remaining: Math.max(0, effectiveLimit - state.timestamps.length),
+            resetAt: new Date(resetAt).toISOString(),
+            total: effectiveLimit,
+          };
+        }
+
+        if (algorithm === 'token-bucket') {
+          const refillRatePerMs = effectiveLimit / effectiveWindowMs;
+          const state = (await getState<TokenBucketState>(kv, key)) ?? {
+            tokens: effectiveLimit,
+            lastRefill: currentTime,
+          };
+
+          const elapsed = Math.max(0, currentTime - state.lastRefill);
+          state.tokens = Math.min(effectiveLimit, state.tokens + elapsed * refillRatePerMs);
+          state.lastRefill = currentTime;
+
+          const allowed = state.tokens >= 1;
+          if (allowed) {
+            state.tokens -= 1;
+          }
+
+          const msUntilNextToken = state.tokens >= 1 ? 0 : Math.ceil((1 - state.tokens) / refillRatePerMs);
+          const msUntilFull = Math.ceil((effectiveLimit - state.tokens) / refillRatePerMs);
+          const resetAt = currentTime + msUntilNextToken;
+
+          await setState(kv, key, state, Math.max(1, msUntilFull));
+
Evidence
The refill rate is computed as effectiveLimit / effectiveWindowMs and later used as a divisor
again, with no guard against 0.

packages/middleware/src/rate-limit.ts[175-224]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Token-bucket rate calculations divide by `effectiveWindowMs` (derived from config/input) without validating it is > 0.
## Issue Context
Misconfiguration should fail fast with a clear error, rather than producing `Infinity/NaN` timestamps.
## Fix Focus Areas
- Validate `windowMs > 0` and `maxRequests > 0` at `createRateLimiter` construction.
- Validate `input.windowSeconds` and `input.limit` overrides when provided.
### Fix Focus Areas (locations)
- packages/middleware/src/rate-limit.ts[138-178]
- packages/middleware/src/rate-limit.ts[203-224]
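The validation the prompt calls for could be sketched as follows (`assertPositive` and `validateWindow` are illustrative names, not the module's actual API):

```typescript
// Illustrative fail-fast guard; not part of the actual rate-limit module.
function assertPositive(name: string, value: number): void {
  if (!Number.isFinite(value) || value <= 0) {
    throw new RangeError(`${name} must be a finite number > 0, received ${value}`);
  }
}

// Applied once at createRateLimiter construction, and again for the
// per-call overrides (input.windowSeconds, input.limit) when provided.
function validateWindow(windowMs: number, maxRequests: number): void {
  assertPositive('windowMs', windowMs);
  assertPositive('maxRequests', maxRequests);
}
```

Failing at construction keeps `effectiveLimit / effectiveWindowMs` from ever evaluating with a zero or NaN divisor.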



7. FileProvider types too weak 🐞
Description
FileProvider return types are mostly unknown or minimal shapes, but the package’s own
conformance tests require concrete fields like uploadMode, chunkSizeBytes, totalParts, and
expiresAt. This prevents TypeScript from enforcing the intended FileProvider contract and forces
consumers/tests to rely on any.
Code

packages/files/src/types.ts[R7-26]

+export interface FileProvider {
+  createUploadSession(input: {
+    policy: string;
+    fileName: string;
+    size: number;
+    mimeType: string;
+    fileId?: string;
+    metadata?: Record<string, unknown>;
+    idempotencyKey?: string;
+  }, ctx: FileProviderContext): Promise<{ uploadSessionId: string }>;
+
+  getUploadSessionStatus(input: { uploadSessionId: string }, ctx: FileProviderContext): Promise<unknown>;
+
+  signUploadPart(input: {
+    uploadSessionId: string;
+    partNumber: number;
+    contentLength: number;
+    checksumSha256Base64?: string;
+  }, ctx: FileProviderContext): Promise<{ url: string; headers?: Record<string, string> }>;
+
Evidence
The interface only guarantees uploadSessionId from createUploadSession and uses Promise<unknown> for
session status and file reads, while tests assert richer required shapes and cast to any to bypass
typing.

packages/files/src/types.ts[7-43]
packages/files/src/tests/file-provider-conformance.test.ts[54-75]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The `FileProvider` interface does not encode the canonical response shapes that the conformance tests enforce, so incorrect implementations can typecheck.
## Issue Context
The test suite asserts specific required fields (e.g., `uploadMode`, `chunkSizeBytes`, `totalParts`, `expiresAt`).
## Fix Focus Areas
- Introduce explicit result types (e.g., `CreateUploadSessionResult`, `UploadSessionStatus`, `GetFileResult`, `ListFilesResult`) and use them in `FileProvider`.
- Update tests to remove `as any` once types match.
### Fix Focus Areas (locations)
- packages/files/src/types.ts[7-43]
- packages/files/src/__tests__/file-provider-conformance.test.ts[54-95]
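The explicit result types could be sketched like this (field names follow the conformance-test assertions quoted in this issue; the exact contract is up to the package authors):

```typescript
// Sketch only: field names come from the conformance tests cited above,
// not from a finalized @superfunctions/files contract.
export interface CreateUploadSessionResult {
  uploadSessionId: string;
  uploadMode: 'single' | 'multipart';
  chunkSizeBytes: number;
  totalParts: number;
  expiresAt: string; // ISO-8601 timestamp
}

// A runtime guard lets the conformance tests drop their `as any` casts.
export function isCreateUploadSessionResult(v: unknown): v is CreateUploadSessionResult {
  const r = v as Partial<CreateUploadSessionResult> | null;
  return typeof r?.uploadSessionId === 'string' &&
    (r?.uploadMode === 'single' || r?.uploadMode === 'multipart') &&
    typeof r?.chunkSizeBytes === 'number' &&
    typeof r?.totalParts === 'number' &&
    typeof r?.expiresAt === 'string';
}
```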




Advisory comments

8. Tests emitted to dist 🐞
Description
Several new packages compile all of src/ without excluding *.test.ts/__tests__, so tsc will
emit test artifacts into dist and publish them. This increases package size and ships code that
imports dev-only modules like vitest.
Code

packages/storage-azure/tsconfig.json[R1-14]

+{
+  "compilerOptions": {
+    "target": "ES2022",
+    "module": "ESNext",
+    "moduleResolution": "bundler",
+    "outDir": "./dist",
+    "rootDir": "./src",
+    "declaration": true,
+    "strict": true,
+    "skipLibCheck": true
+  },
+  "include": ["src"],
+  "exclude": ["node_modules", "dist"]
+}
Evidence
Some packages include src broadly without excluding tests, while others (e.g., storage-minio)
explicitly exclude test files, demonstrating the intended pattern.

packages/storage-azure/tsconfig.json[1-14]
packages/middleware/tsconfig.json[1-14]
packages/storage-minio/tsconfig.json[18-20]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
These packages' `tsconfig.json` includes all `src` and does not exclude `*.test.ts` / `__tests__`, so tests get emitted to `dist`.
## Issue Context
Other packages in this PR already exclude tests (e.g., `storage-minio`).
## Fix Focus Areas
- Add `"**/*.test.ts"` and `"**/__tests__/**"` to `exclude` for packages that ship libraries.
### Fix Focus Areas (locations)
- packages/storage-azure/tsconfig.json[12-13]
- packages/middleware/tsconfig.json[12-13]
- packages/files/tsconfig.json[12-13]
- packages/storage-gcs/tsconfig.json[12-13]
- packages/storage-local/tsconfig.json[12-13]
- packages/storage-s3/tsconfig.json[12-13]
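Concretely, the exclude list for the affected packages could mirror storage-minio's (a sketch; the exact globs may vary per package):

```json
{
  "include": ["src"],
  "exclude": ["node_modules", "dist", "**/*.test.ts", "**/__tests__/**"]
}
```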




@socket-security

socket-security bot commented Apr 4, 2026

Warning

Review the following alerts detected in dependencies.

According to your organization's Security Policy, it is recommended to resolve "Warn" alerts. Learn more about Socket for GitHub.

Action Severity Alert
Warn High
Obfuscated code: npm svelte is 91.0% likely obfuscated

Confidence: 0.91

Location: Package overview

From: npm/@sveltejs/vite-plugin-svelte@3.1.2, npm/svelte-check@3.8.6, npm/svelte@4.2.20


Next steps: Take a moment to review the security alert above. Review the linked package source code to understand the potential risk. Ensure the package is not malicious before proceeding. If you're unsure how to proceed, reach out to your security team or ask the Socket team for help at support@socket.dev.

Suggestion: Packages should not obfuscate their code. Consider not using packages with obfuscated code.

Mark the package as acceptable risk. To ignore this alert only in this pull request, reply with the comment @SocketSecurity ignore npm/svelte@4.2.20. You can also ignore all packages with @SocketSecurity ignore-all. To ignore an alert for all future pull requests, use Socket's Dashboard to change the triage state of this alert.

View full report


@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 3 issues, and left some high level feedback:

  • In storage-local's resolvePath/validateResolvedPath, the startsWith(root + '/') checks assume POSIX-style separators and may misbehave on Windows paths; consider using path.sep or path.relative to enforce containment in a cross-platform way.
  • The Azure adapter’s openUploadStream buffers the entire blob in memory before a single upload, which can be problematic for large objects; you may want to switch to a chunked/streaming upload (e.g., uploadStream) to avoid unbounded memory usage.
  • In the Azure multipart implementation, completeMultipartUpload infers block IDs from etag with a loose heuristic; consider making the block ID contract explicit (e.g., documenting that clients must persist the generated block IDs) or storing block IDs separately to avoid accidental mismatch with provider-returned etags.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- In `storage-local`'s `resolvePath`/`validateResolvedPath`, the `startsWith(root + '/')` checks assume POSIX-style separators and may misbehave on Windows paths; consider using `path.sep` or `path.relative` to enforce containment in a cross-platform way.
- The Azure adapter’s `openUploadStream` buffers the entire blob in memory before a single `upload`, which can be problematic for large objects; you may want to switch to a chunked/streaming upload (e.g., `uploadStream`) to avoid unbounded memory usage.
- In the Azure multipart implementation, `completeMultipartUpload` infers block IDs from `etag` with a loose heuristic; consider making the block ID contract explicit (e.g., documenting that clients must persist the generated block IDs) or storing block IDs separately to avoid accidental mismatch with provider-returned etags.

## Individual Comments

### Comment 1
<location path="packages/storage-local/src/adapter.ts" line_range="62-64" />
<code_context>
+  function resolvePath(key: string): string {
+    validateKey(key);
+
+    const resolved = join(normalizedRoot, normalize(key));
+
+    if (!resolved.startsWith(normalizedRoot + '/') && resolved !== normalizedRoot) {
+      throw new Error('Invalid key: path traversal detected');
+    }
</code_context>
<issue_to_address>
**🚨 issue (security):** Path prefix checks are not portable and can be bypassed on Windows due to hardcoded '/' separators.

These `startsWith(normalizedRoot + '/')` checks rely on POSIX `/`, but `join`/`realpath` return platform-specific paths. On Windows they use `\`, so the prefix check can fail or be bypassed, weakening the traversal protection. Please use `path.resolve` and `path.sep`-aware logic (e.g. `path.relative` or a normalized `root` that ends with `path.sep` and a corresponding `real.startsWith(root)` check). The same adjustment is needed in `validateResolvedPath` and the directory check in `openUploadStream`.
</issue_to_address>
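For illustration, a separator-aware containment helper might look like this (a sketch; `isInsideRoot` is a hypothetical name, not the adapter's actual code):

```typescript
import { isAbsolute, relative, resolve, sep } from 'node:path';

// A candidate is inside root iff the relative path from root to candidate
// neither escapes upward ('..') nor resolves to an absolute path (e.g. a
// different drive on Windows). Works with both '/' and '\' separators.
export function isInsideRoot(root: string, candidate: string): boolean {
  const rel = relative(resolve(root), resolve(candidate));
  if (rel === '') return true; // the root itself
  return !rel.startsWith(`..${sep}`) && rel !== '..' && !isAbsolute(rel);
}
```

The same check can replace the `startsWith(root + '/')` comparisons in `resolvePath`, `validateResolvedPath`, and the directory check in `openUploadStream`.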

### Comment 2
<location path="packages/storage-azure/src/adapter.ts" line_range="294-317" />
<code_context>
+      // No explicit cleanup needed for stateless tracking
+    },
+
+    async openUploadStream(input: { key: string; contentType?: string }): Promise<WritableStream> {
+      const blockBlobClient = containerClient.getBlockBlobClient(input.key);
+
+      let buffer: Buffer[] = [];
+
+      return new WritableStream({
+        async write(chunk) {
+          const buf = chunk instanceof Uint8Array ? Buffer.from(chunk) : chunk;
+          buffer.push(buf);
+        },
+        async close() {
+          const content = Buffer.concat(buffer);
+          await blockBlobClient.upload(content, content.length, {
+            blobHTTPHeaders: {
+              blobContentType: input.contentType,
+            },
+          });
+          buffer = [];
+        },
+        abort() {
</code_context>
<issue_to_address>
**suggestion (performance):** The Azure upload stream buffers the entire object in memory before uploading, which can be problematic for large files.

This accumulates all chunks into memory and only uploads once in `close()`, which causes high memory usage and GC pressure for large uploads and undermines the streaming API. Since `BlockBlobClient` supports streaming (e.g., `uploadStream`) and chunked uploads, consider sending chunks directly to Azure as they arrive instead of buffering the entire payload first.

```suggestion
    async openUploadStream(input: { key: string; contentType?: string }): Promise<WritableStream> {
      const blockBlobClient = containerClient.getBlockBlobClient(input.key);

      // Stream to Azure by staging each chunk as a block and committing at close.
      // This avoids buffering the entire payload in memory.
      const blockIds: string[] = [];
      let blockIndex = 0;
      const blockIdPrefix = Date.now().toString(36) + "-" + Math.random().toString(36).slice(2, 8);

      return new WritableStream<Uint8Array | Buffer>({
        async write(chunk) {
          const buf = chunk instanceof Uint8Array ? Buffer.from(chunk) : chunk;

          const rawId = `${blockIdPrefix}-${blockIndex.toString().padStart(8, "0")}`;
          const blockId = Buffer.from(rawId).toString("base64");
          blockIndex += 1;

          // Stage the current chunk as a block immediately, so we don't accumulate
          // large amounts of data in memory.
          await blockBlobClient.stageBlock(blockId, buf, buf.length);
          blockIds.push(blockId);
        },
        async close() {
          // Commit all staged blocks as the final blob.
          await blockBlobClient.commitBlockList(blockIds, {
            blobHTTPHeaders: {
              blobContentType: input.contentType,
            },
          });
        },
        abort() {
          // No explicit cleanup: Azure will garbage-collect uncommitted blocks.
          blockIds.length = 0;
        },
      });
    },
```
</issue_to_address>

### Comment 3
<location path="packages/middleware/src/rate-limit.ts" line_range="49-58" />
<code_context>
+  lastRefill: number;
+}
+
+function createInMemoryKVStore(): KVStoreAdapter {
+  const store = new Map<string, { value: string; expiresAt?: number }>();
+
+  return {
+    async get(key: string): Promise<string | null> {
+      const entry = store.get(key);
+      if (!entry) return null;
+      if (entry.expiresAt && Date.now() > entry.expiresAt) {
+        store.delete(key);
+        return null;
+      }
+      return entry.value;
+    },
+    async set(input: { key: string; value: string; ttlSeconds?: number }): Promise<void> {
+      store.set(input.key, {
+        value: input.value,
+        expiresAt: input.ttlSeconds ? Date.now() + input.ttlSeconds * 1000 : undefined,
+      });
+    },
</code_context>
<issue_to_address>
**suggestion (bug_risk):** In-memory KV TTL uses Date.now while the rate limiter supports a custom time source, which can cause inconsistent behavior.

The rate limiter uses an injectable `now` function, but both `createInMemoryKVStore` and the adapter-backed KV use `Date.now()` for TTLs. This can cause expirations to drift from rate calculations when `now` is overridden (e.g. in tests or custom time sources). Consider passing the same `now` function into the KV layer so TTLs and rate limiting share a consistent time source.

Suggested implementation:

```typescript
function createInMemoryKVStore(now: () => number = Date.now): KVStoreAdapter {
  const store = new Map<string, { value: string; expiresAt?: number }>();

  return {
    async get(key: string): Promise<string | null> {
      const entry = store.get(key);
      if (!entry) return null;
      const currentTime = now();
      if (entry.expiresAt && currentTime > entry.expiresAt) {
        store.delete(key);
        return null;
      }
      return entry.value;
    },
    async set(input: { key: string; value: string; ttlSeconds?: number }): Promise<void> {
      const currentTime = now();
      store.set(input.key, {
        value: input.value,
        expiresAt: input.ttlSeconds ? currentTime + input.ttlSeconds * 1000 : undefined,
      });
    },
    async delete(key: string): Promise<void> {
      store.delete(key);
    },
  };
}

```

To fully align TTL handling with the rate limiter's time source, you should also:

1. Identify where `createInMemoryKVStore()` is called in this file (and possibly others) and pass in the same `now` function that the rate limiter uses, e.g. `createInMemoryKVStore(now)` instead of `createInMemoryKVStore()`.
2. If there is a separate "adapter-backed" KV implementation in this file (or related files) that uses `Date.now()` for TTLs, refactor it similarly to accept a `now: () => number` parameter and use that instead of `Date.now()`.
3. Ensure any public factory or configuration API for the rate limiter that exposes `now` is also used to configure the KV store so they always share the same time source in tests and custom time setups.
</issue_to_address>


Comment thread packages/storage-local/src/adapter.ts Outdated
Comment thread packages/storage-azure/src/adapter.ts
Comment thread packages/middleware/src/rate-limit.ts Outdated

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 13559f7c6a

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread packages/storage-azure/src/adapter.ts Outdated
Comment on lines +280 to +282
const blockId = p.etag.startsWith('block-') || p.etag.match(/^[A-Za-z0-9+/=]+$/)
? p.etag
: generateBlockId(p.partNumber);

P1 Badge Use block IDs, not ETags, in Azure multipart completion

The multipart completion path currently treats any base64/alphanumeric etag as a block ID, but real part ETags commonly also match that pattern. When clients pass back upload ETags (rather than the generated block IDs), commitBlockList receives identifiers that were never uploaded by signMultipartUploadPartUrl, which causes Azure to reject completion (e.g., invalid block list). This should derive or validate block IDs from partNumber instead of trusting generic ETag text.


Comment thread packages/storage-local/src/adapter.ts Outdated
Comment on lines +136 to +140
if (!dirRealPath.startsWith(root + '/') && dirRealPath !== root) {
throw new Error('Invalid key: path traversal detected via symlink');
}

const nodeStream = createWriteStream(filePath);

P1 Badge Validate upload target path before local write stream creation

The upload path checks that the parent directory resolves under rootDir, but it does not validate the final file path before createWriteStream. If a symlink file already exists at that key and points outside the root, Node follows it and writes outside the storage sandbox, bypassing the traversal protection used in the read/delete paths. Add a no-follow file-level check (e.g., lstat/realpath guard) before opening the writer.
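A minimal no-follow guard could look like this (a sketch; `assertNotSymlink` is an illustrative helper, not existing adapter code):

```typescript
import { lstat } from 'node:fs/promises';

// Refuse to open a writer through an existing symlink at the target path.
// Assumes `filePath` has already passed the root-containment check.
export async function assertNotSymlink(filePath: string): Promise<void> {
  let st;
  try {
    st = await lstat(filePath); // lstat does NOT follow symlinks
  } catch (err) {
    if ((err as { code?: string }).code === 'ENOENT') return; // new file: fine
    throw err;
  }
  if (st.isSymbolicLink()) {
    throw new Error('Invalid key: upload target is a symlink');
  }
}
```

Calling this immediately before `createWriteStream(filePath)` closes the symlink window for the upload path, matching the protection already applied on reads and deletes.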


Comment thread packages/storage-azure/src/adapter.ts Outdated
Comment on lines +298 to +302

return new WritableStream({
async write(chunk) {
const buf = chunk instanceof Uint8Array ? Buffer.from(chunk) : chunk;
buffer.push(buf);

P2 Badge Stream Azure uploads instead of buffering whole object

This implementation stores every incoming chunk in memory and concatenates the entire payload on close before uploading. Under large or concurrent uploads, memory usage grows with object size and can trigger process OOMs, which breaks the expected behavior of a proxyStreamingUpload adapter. Switching to Azure's streaming upload API (or piping chunks directly) avoids this unbounded buffering.


Comment thread packages/storage-azure/src/adapter.ts Outdated
Comment on lines +276 to +284
const sortedParts = input.parts.sort((a, b) => a.partNumber - b.partNumber);
const blockList = sortedParts.map(p => {
// The etag should be the blockId we generated
// If client recorded blockId as etag, use it; otherwise regenerate
const blockId = p.etag.startsWith('block-') || p.etag.match(/^[A-Za-z0-9+/=]+$/)
? p.etag
: generateBlockId(p.partNumber);
return blockId;
});

Critical bug: The logic incorrectly handles unencoded block IDs, which will cause Azure API failures.

When p.etag starts with 'block-', it's the raw unencoded string (e.g., 'block-000001'), but Azure requires base64-encoded block IDs. The current code uses this unencoded string directly, which will fail when calling blockBlobClient.commitBlockList().

Fix:

const blockList = sortedParts.map(p => {
  // If etag is already base64-encoded, use it
  if (p.etag.match(/^[A-Za-z0-9+/=]+$/) && !p.etag.startsWith('block-')) {
    return p.etag;
  }
  // Otherwise regenerate the properly encoded blockId
  return generateBlockId(p.partNumber);
});

The test at line 210 demonstrates this issue by passing etag: 'block-000001' (unencoded), which would be accepted but fail at Azure's API.

Suggested change

Before:

const sortedParts = input.parts.sort((a, b) => a.partNumber - b.partNumber);
const blockList = sortedParts.map(p => {
// The etag should be the blockId we generated
// If client recorded blockId as etag, use it; otherwise regenerate
const blockId = p.etag.startsWith('block-') || p.etag.match(/^[A-Za-z0-9+/=]+$/)
? p.etag
: generateBlockId(p.partNumber);
return blockId;
});

After:

const sortedParts = input.parts.sort((a, b) => a.partNumber - b.partNumber);
const blockList = sortedParts.map(p => {
// If etag is already base64-encoded, use it
if (p.etag.match(/^[A-Za-z0-9+/=]+$/) && !p.etag.startsWith('block-')) {
return p.etag;
}
// Otherwise regenerate the properly encoded blockId
return generateBlockId(p.partNumber);
});


@greptile-apps

greptile-apps bot commented Apr 4, 2026

Greptile Summary

This PR introduces nine new shared packages to the monorepo (@superfunctions/files, @superfunctions/middleware, @superfunctions/storage, @superfunctions/storage-azure, @superfunctions/storage-gcs, @superfunctions/storage-local, @superfunctions/storage-minio, @superfunctions/storage-r2, @superfunctions/storage-s3), providing the foundational storage layer and middleware infrastructure that downstream function packages will build against.

  • Core design is sound: the StorageAdapter interface, routing layer, S3-compatible factory, and local adapter are all well-written
  • Rate-limiter implements three algorithms (fixed-window, sliding-window, token-bucket) with a correct per-key mutex; tests are thorough
  • Critical bug in the Azure adapter's completeMultipartUpload: the block-ID selection condition passes a decoded string to commitBlockList when Azure requires base64-encoded block IDs, and the fallback generateBlockId path is effectively unreachable
  • Memory issue in the Azure openUploadStream: the implementation buffers the entire file in memory before uploading, defeating the purpose of "proxy streaming upload"
  • Dead code: parseBlockId is defined in the Azure adapter but never called

Confidence Score: 2/5

Not safe to merge as-is — the Azure adapter has a P0 bug that will cause runtime 400 errors when completing multipart uploads against a real Azure endpoint.

The overall architecture is clean and the non-Azure packages look correct. However, the Azure completeMultipartUpload block-ID bug is a definite runtime failure (not a potential one): the condition branches to the decoded string form when Azure requires base64, and the fallback that produces the correct form is effectively unreachable. This needs to be fixed before downstream code can rely on the Azure adapter.

packages/storage-azure/src/adapter.ts (lines 277-287 and 294-316) and packages/storage-azure/src/adapter.test.ts (line 211)

Important Files Changed

Filename Overview
packages/storage-azure/src/adapter.ts Azure Block Blob adapter with two bugs: blockId logic in completeMultipartUpload uses decoded string instead of base64, and openUploadStream buffers entire file in memory
packages/storage/src/internal/s3-compatible.ts Well-structured S3-compatible adapter factory shared by S3, R2, and MinIO adapters
packages/storage/src/router.ts Clean routing adapter that multiplexes storage operations across named targets with correct capability merging
packages/middleware/src/rate-limit.ts Rate-limiter supporting three algorithms with per-key mutex serialization; solid implementation
packages/storage/src/conformance.ts Conformance test helper and fake adapter factory; _ctx parameter is accepted but never used in runConformanceTests body
packages/storage-local/src/adapter.ts Local filesystem adapter with robust path-traversal prevention including symlink validation
packages/storage-gcs/src/adapter.ts GCS adapter with correct signed URL generation and streaming support
packages/files/src/types.ts FileProvider interface and supporting types with a clean, well-typed multi-part upload contract
packages/storage/src/types.ts Core StorageAdapter interface with capability-gated optional methods; clean and extensible design
packages/storage-r2/src/adapter.ts Thin R2 adapter that correctly delegates to the shared S3-compatible factory

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    Client([Caller]) --> RA[RoutedStorageAdapter]
    RA -- target lookup --> AM{adapter map}
    AM -- s3 / r2 / minio --> S3C[S3CompatibleAdapter]
    AM -- gcs --> GCS[GCSAdapter]
    AM -- azure --> AZ[AzureAdapter]
    AM -- local --> LOC[LocalAdapter]

    S3C --> S3SDK[@aws-sdk/client-s3]
    GCS --> GCSSDK[@google-cloud/storage]
    AZ --> AZSDK[@azure/storage-blob]
    LOC --> FS[node:fs]

    subgraph Capabilities checked at runtime
      S3C -. signUploadUrl .-> S3SDK
      S3C -. signDownloadUrl .-> S3SDK
      S3C -. createMultipartUpload .-> S3SDK
      AZ -. signUploadUrl .-> AZSDK
      AZ -. openUploadStream .-> AZSDK
      AZ -. completeMultipartUpload .-> BUG([⚠ blockId decoded not base64])
    end

    MW[RateLimiter] -. check / reset .-> KV[KVStoreAdapter]
    KV -- in-memory default --> MAP[(Map)]
    KV -- external --> EXT[(Redis / DB)]

Prompt To Fix All With AI
This is a comment left during a code review.
Path: packages/storage-azure/src/adapter.ts
Line: 277-287

Comment:
**`completeMultipartUpload` commits wrong block IDs to Azure**

The condition on line 280 is `p.etag.startsWith('block-') || p.etag.match(/^[A-Za-z0-9+/=]+$/)`. The second branch matches virtually every string (hex ETags, base64, alphanumeric), so the `generateBlockId` fallback is unreachable.

More critically, the first branch (`p.etag.startsWith('block-')`) passes the *decoded* form (e.g. `'block-000001'`) straight to `commitBlockList`. Azure requires block IDs to be **base64-encoded**. `generateBlockId` correctly returns the base64 form (e.g. `'YmxvY2stMDAwMDAxIwI='`), but the test on line 211 passes `etag: 'block-000001'` (decoded), triggering the first branch and sending a non-base64 string to Azure — causing a `400 InvalidBlockId` error at runtime.

The simplest correct fix is to always derive the block ID from the part number, which guarantees the IDs committed here match those embedded in the signed URLs produced by `signMultipartUploadPartUrl`:

```suggestion
      const sortedParts = input.parts.sort((a, b) => a.partNumber - b.partNumber);
      const blockList = sortedParts.map(p => generateBlockId(p.partNumber));

      await blockBlobClient.commitBlockList(blockList);
```

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: packages/storage-azure/src/adapter.ts
Line: 83-88

Comment:
**`parseBlockId` is dead code**

`parseBlockId` is defined but never called anywhere in the codebase. With the corrected `completeMultipartUpload` logic above (always regenerating from `partNumber`) it would remain unused. Consider removing it to keep the module clean.

```suggestion

```

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: packages/storage-azure/src/adapter.ts
Line: 294-316

Comment:
**`openUploadStream` buffers entire file in memory**

The `write` sink appends every chunk to an in-memory `Buffer[]` array, and only uploads on `close`. For any file larger than available heap this will either be very slow (GC pressure) or crash the process.

Consider uploading through a block-blob append stream or using the SDK's `uploadStream` helper instead:

```typescript
import { PassThrough } from 'node:stream';

// uploadStream consumes the PassThrough as chunks arrive; nothing is buffered in full.
async openUploadStream(input) {
  const blockBlobClient = containerClient.getBlockBlobClient(input.key);
  const pass = new PassThrough();
  const done = blockBlobClient.uploadStream(pass, 4 * 1024 * 1024, 4, {
    blobHTTPHeaders: { blobContentType: input.contentType },
  });
  return new WritableStream({
    write(chunk) { pass.write(chunk instanceof Uint8Array ? Buffer.from(chunk) : chunk); },
    async close() { pass.end(); await done; },
    abort(err) { pass.destroy(err instanceof Error ? err : new Error(String(err))); },
  });
}
```

Note that `blockBlobClient.uploadStream(readableStream, ...)` accepts a Node.js Readable, which you can bridge from the WritableStream sink without buffering everything.

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: packages/storage-azure/src/adapter.test.ts
Line: 207-214

Comment:
**Test passes a decoded block ID that will fail with the real Azure API**

The test exercises `completeMultipartUpload` with `etag: 'block-000001'`, which is the *decoded* form of the block ID. After the fix above (always using `generateBlockId`), this test will correctly commit `YmxvY2stMDAwMDAxIwI=` (or equivalent base64) regardless of what the caller passes as `etag`. Consider updating the test to reflect the expected stateless behaviour:

```suggestion
      try {
        await adapterB.completeMultipartUpload!({
          key,
          uploadId,
          parts: [{ partNumber: 1, etag: Buffer.from('block-000001').toString('base64') }],
        });
      } catch (err: any) {
        expect(err.message).not.toContain('Invalid uploadId');
      }
```

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: packages/storage/src/conformance.ts
Line: 27-30

Comment:
**`_ctx` parameter is accepted but never read**

`runConformanceTests` declares a `_ctx: ConformanceTestContext` parameter (the `_` prefix signals intentional non-use), but the current implementation performs only static structural checks and never exercises actual I/O using the supplied key or test data. If live-I/O conformance tests are planned, the parameter will be needed; if the function will remain static-only, removing the parameter simplifies the API and avoids confusing callers who pass a custom context expecting it to take effect.

How can I resolve this? If you propose a fix, please make it concise.

Reviews (1): Last reviewed commit: "SF-30 add storage, files, and middleware..."

Comment thread packages/storage-azure/src/adapter.ts Outdated
Comment on lines +277 to +287
```typescript
      const blockList = sortedParts.map(p => {
        // The etag should be the blockId we generated
        // If client recorded blockId as etag, use it; otherwise regenerate
        const blockId = p.etag.startsWith('block-') || p.etag.match(/^[A-Za-z0-9+/=]+$/)
          ? p.etag
          : generateBlockId(p.partNumber);
        return blockId;
      });

      await blockBlobClient.commitBlockList(blockList);
    },
```

**P0: `completeMultipartUpload` commits wrong block IDs to Azure**

The condition on line 280 is `p.etag.startsWith('block-') || p.etag.match(/^[A-Za-z0-9+/=]+$/)`. The second branch matches virtually every string (hex ETags, base64, alphanumeric), so the `generateBlockId` fallback is unreachable.

More critically, the first branch (`p.etag.startsWith('block-')`) passes the *decoded* form (e.g. `'block-000001'`) straight to `commitBlockList`. Azure requires block IDs to be **base64-encoded**. `generateBlockId` correctly returns the base64 form (e.g. `'YmxvY2stMDAwMDAxIwI='`), but the test on line 211 passes `etag: 'block-000001'` (decoded), triggering the first branch and sending a non-base64 string to Azure — causing a `400 InvalidBlockId` error at runtime.

The simplest correct fix is to always derive the block ID from the part number, which guarantees the IDs committed here match those embedded in the signed URLs produced by `signMultipartUploadPartUrl`:

```suggestion
      const sortedParts = input.parts.sort((a, b) => a.partNumber - b.partNumber);
      const blockList = sortedParts.map(p => generateBlockId(p.partNumber));

      await blockBlobClient.commitBlockList(blockList);
```
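
For context, a deterministic scheme with the properties this comment relies on can be sketched like so. The exact payload the real `generateBlockId` encodes is an assumption (the shipped IDs carry an extra suffix); only the properties Azure cares about are preserved here: base64 encoding and equal-length IDs within a blob.

```typescript
// Sketch only — not the package's implementation. Azure block IDs must be
// base64-encoded and of equal length within a blob, so deriving them purely
// from the part number keeps sign-part and commit in agreement.
function generateBlockId(partNumber: number): string {
  const raw = `block-${String(partNumber).padStart(6, '0')}`; // fixed width ⇒ equal-length IDs
  return Buffer.from(raw, 'utf-8').toString('base64');
}
```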

Comment thread packages/storage-azure/src/adapter.ts Outdated
Comment on lines +83 to +88
```typescript
function parseBlockId(blockId: string): number {
  // Decode block ID to extract part number
  const decoded = Buffer.from(blockId, 'base64').toString('utf-8');
  const match = decoded.match(/block-(\d+)/);
  return match ? parseInt(match[1], 10) : 0;
}
```

**P2: `parseBlockId` is dead code**

`parseBlockId` is defined but never called anywhere in the codebase. With the corrected `completeMultipartUpload` logic above (always regenerating from `partNumber`) it would remain unused. Consider removing it to keep the module clean.

Suggested change: delete the function.

Comment on lines +294 to +316
```typescript
    async openUploadStream(input: { key: string; contentType?: string }): Promise<WritableStream> {
      const blockBlobClient = containerClient.getBlockBlobClient(input.key);

      let buffer: Buffer[] = [];

      return new WritableStream({
        async write(chunk) {
          const buf = chunk instanceof Uint8Array ? Buffer.from(chunk) : chunk;
          buffer.push(buf);
        },
        async close() {
          const content = Buffer.concat(buffer);
          await blockBlobClient.upload(content, content.length, {
            blobHTTPHeaders: {
              blobContentType: input.contentType,
            },
          });
          buffer = [];
        },
        abort() {
          buffer = [];
        },
      });
```

**P2: `openUploadStream` buffers entire file in memory**

The `write` sink appends every chunk to an in-memory `Buffer[]` array, and only uploads on `close`. For any file larger than available heap this will either be very slow (GC pressure) or crash the process.

Consider uploading through a block-blob append stream or using the SDK's `uploadStream` helper instead:

```typescript
import { streamToBuffer } from '@azure/storage-blob'; // or use uploadStream

async openUploadStream(input) {
  const blockBlobClient = containerClient.getBlockBlobClient(input.key);
  const { writable } = blockBlobClient.uploadStream(/* bufferSize, maxConcurrency, options */);
  return writable;
}
```

Alternatively, `blockBlobClient.uploadStream(readableStream, ...)` accepts a Node.js Readable, which you can bridge from the WritableStream sink without buffering everything.


Comment on lines +207 to +214
```typescript
        await adapterB.completeMultipartUpload!({
          key,
          uploadId,
          parts: [{ partNumber: 1, etag: 'block-000001' }],
        });
      } catch (err: any) {
        expect(err.message).not.toContain('Invalid uploadId');
      }
```

**P1: Test passes a decoded block ID that will fail with the real Azure API**

The test exercises `completeMultipartUpload` with `etag: 'block-000001'`, which is the *decoded* form of the block ID. After the fix above (always using `generateBlockId`), this test will correctly commit `YmxvY2stMDAwMDAxIwI=` (or equivalent base64) regardless of what the caller passes as `etag`. Consider updating the test to reflect the expected stateless behaviour:

```suggestion
      try {
        await adapterB.completeMultipartUpload!({
          key,
          uploadId,
          parts: [{ partNumber: 1, etag: Buffer.from('block-000001').toString('base64') }],
        });
      } catch (err: any) {
        expect(err.message).not.toContain('Invalid uploadId');
      }
```

Comment on lines +27 to +30
```typescript
export async function runConformanceTests(
  adapter: StorageAdapter,
  _ctx: ConformanceTestContext = createDefaultContext()
): Promise<ConformanceResult> {
```

**P2: `_ctx` parameter is accepted but never read**

`runConformanceTests` declares a `_ctx: ConformanceTestContext` parameter (the `_` prefix signals intentional non-use), but the current implementation performs only static structural checks and never exercises actual I/O using the supplied key or test data. If live-I/O conformance tests are planned, the parameter will be needed; if the function will remain static-only, removing the parameter simplifies the API and avoids confusing callers who pass a custom context expecting it to take effect.
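
If live I/O is the intent, a round-trip that actually uses the context might look like this sketch. The `MiniAdapter`/`MiniContext` shapes below are local stand-ins for illustration, not the package's real `StorageAdapter`/`ConformanceTestContext` types.

```typescript
// Local stand-in types — assumptions for illustration only.
interface MiniAdapter {
  putObject(input: { key: string; body: Uint8Array }): Promise<void>;
  getObject(input: { key: string }): Promise<Uint8Array>;
  deleteObject(input: { key: string }): Promise<void>;
}
interface MiniContext { testKey: string; testData: Uint8Array; }

// Write, read back, clean up, and report whether the bytes survived the trip.
async function runLiveRoundTrip(adapter: MiniAdapter, ctx: MiniContext): Promise<boolean> {
  await adapter.putObject({ key: ctx.testKey, body: ctx.testData });
  const got = await adapter.getObject({ key: ctx.testKey });
  await adapter.deleteObject({ key: ctx.testKey });
  return Buffer.compare(Buffer.from(got), Buffer.from(ctx.testData)) === 0;
}
```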



@cubic-dev-ai cubic-dev-ai bot left a comment


21 issues found across 61 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="packages/storage-local/src/adapter.ts">

<violation number="1" location="packages/storage-local/src/adapter.ts:64">
P2: Path traversal checks use hardcoded `'/'` separator (e.g., `normalizedRoot + '/'`), but `path.resolve`/`realpath` return `'\\'` on Windows. On that platform, the `startsWith` guard will never match, silently disabling traversal protection. Use `path.sep` or `path.relative`-based logic so the check is correct on both POSIX and Windows. This affects `resolvePath`, `validateResolvedPath`, and the directory check in `openUploadStream`.</violation>

<violation number="2" location="packages/storage-local/src/adapter.ts:140">
P1: The upload path validates the parent directory against the root but does not validate the final file path before opening the write stream. Unlike `statObject` and `deleteObject` (which call `validateResolvedPath(filePath)`), a pre-existing symlink at `filePath` pointing outside the root will be followed by `createWriteStream`, writing data outside the storage sandbox. Add a `validateResolvedPath(filePath)` check (or equivalent `lstat`-based no-follow guard) before creating the write stream.</violation>

<violation number="3" location="packages/storage-local/src/adapter.ts:146">
P2: Event listener leak: when `drain` fires, the `once('error', reject)` listener from the same promise is never cleaned up (and vice versa). Over many back-pressured writes, orphaned listeners accumulate on the node stream. Remove the complementary listener when either event fires.</violation>

<violation number="4" location="packages/storage-local/src/adapter.ts:205">
P2: Missing backpressure handling: the node readable stream pushes data into the `ReadableStream` controller unconditionally. For large files with a slow consumer, memory usage grows without bound. Pause the node stream by default and use the `pull` callback to resume it on demand.</violation>
</file>
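
A separator-agnostic containment check along the lines the first violation suggests could be sketched as follows (an assumption: `root` and `target` have already been resolved via `path.resolve`/`realpath`):

```typescript
import * as path from 'node:path';

// True when `target` sits at or below `root`, on both POSIX ('/') and
// Windows ('\\'), by inspecting the relative path instead of comparing
// string prefixes against a hardcoded separator.
function isInsideRoot(root: string, target: string): boolean {
  const rel = path.relative(root, target);
  if (rel === '') return true; // target is the root itself
  return !path.isAbsolute(rel) && rel !== '..' && !rel.startsWith('..' + path.sep);
}
```

Unlike a `startsWith(normalizedRoot + '/')` guard, this also rejects sibling directories that merely share the root as a string prefix.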

<file name="packages/storage-azure/README.md">

<violation number="1" location="packages/storage-azure/README.md:21">
P3: The usage example redeclares `const adapter` multiple times in one code block, which makes the snippet invalid TypeScript/JavaScript when copied.</violation>
</file>

<file name="packages/storage/src/conformance.ts">

<violation number="1" location="packages/storage/src/conformance.ts:29">
P3: `runConformanceTests` eagerly creates a default test context that is never used, introducing dead code and unnecessary per-call allocations.</violation>
</file>

<file name="packages/files/src/__tests__/file-provider-conformance.test.ts">

<violation number="1" location="packages/files/src/__tests__/file-provider-conformance.test.ts:4">
P1: This conformance suite only tests a local mock provider, so it can pass while real FileProvider implementations are broken.</violation>

<violation number="2" location="packages/files/src/__tests__/file-provider-conformance.test.ts:125">
P2: This assertion defaults to `true`, so missing/undefined completion results still pass.</violation>
</file>

<file name="packages/storage-minio/src/adapter.test.ts">

<violation number="1" location="packages/storage-minio/src/adapter.test.ts:140">
P2: This test is misleading: it claims to verify `forcePathStyle` but only asserts the adapter name, so path-style addressing regressions would not be caught.</violation>
</file>

<file name="packages/middleware/src/rate-limit.ts">

<violation number="1" location="packages/middleware/src/rate-limit.ts:56">
P2: The default in-memory KV store hardcodes `Date.now()` for TTL checks, but the rate limiter's state calculations use the injectable `now()` clock. When a custom `now()` is provided (e.g., for testing), state can be prematurely evicted by the KV store — resetting rate limits unexpectedly. Consider passing the `now` function into `createInMemoryKVStore` so its TTL logic uses the same clock.</violation>
</file>

<file name="packages/storage-azure/src/adapter.test.ts">

<violation number="1" location="packages/storage-azure/src/adapter.test.ts:207">
P2: This test performs a real Azure network operation with fake credentials, making the suite flaky and environment-dependent. Mock `commitBlockList` (or avoid calling `completeMultipartUpload` here) so the test stays deterministic.</violation>
</file>

<file name="packages/storage-r2/src/adapter.test.ts">

<violation number="1" location="packages/storage-r2/src/adapter.test.ts:15">
P3: This assertion duplicates an existing adapter-name check and does not test S3-compatible API behavior described by the test.</violation>

<violation number="2" location="packages/storage-r2/src/adapter.test.ts:97">
P2: This test name claims content-type constraint coverage, but the assertion only re-checks `signedUploadUrls` and does not validate the claimed behavior.</violation>
</file>

<file name="packages/storage-azure/src/adapter.ts">

<violation number="1" location="packages/storage-azure/src/adapter.ts:276">
P2: `Array.sort()` mutates `input.parts` in-place. Use a copy to avoid unexpected side effects on the caller's data.</violation>

<violation number="2" location="packages/storage-azure/src/adapter.ts:280">
P1: Block ID lookup logic will produce IDs that don't match the ones staged during upload, causing `commitBlockList` to fail. The regex `^[A-Za-z0-9+/=]+$` is too permissive (matches real Azure ETags), and the `'block-'` prefix check uses the decoded form instead of the base64-encoded form. Since block IDs are deterministic from `partNumber`, always regenerate them.</violation>

<violation number="3" location="packages/storage-azure/src/adapter.ts:297">
P2: `openUploadStream` accumulates all chunks in memory and uploads the full concatenated buffer on `close()`. For large files this causes unbounded memory growth and can trigger OOM, undermining the streaming API contract. Consider using Azure's `stageBlock`/`commitBlockList` pattern to send chunks to Azure as they arrive, avoiding full-payload buffering.</violation>
</file>

<file name="packages/storage/src/internal/s3-compatible.ts">

<violation number="1" location="packages/storage/src/internal/s3-compatible.ts:53">
P2: Avoid exposing the shared default capabilities object directly; clone it to prevent cross-instance mutation side effects.</violation>

<violation number="2" location="packages/storage/src/internal/s3-compatible.ts:190">
P2: Do not mutate the caller-provided `parts` array when completing multipart uploads.</violation>
</file>

<file name="packages/storage-minio/src/adapter.ts">

<violation number="1" location="packages/storage-minio/src/adapter.ts:28">
P2: Scheme detection is too broad; `startsWith('http')` misclassifies hostnames that begin with `http` and can generate an invalid S3 endpoint.</violation>
</file>
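
A stricter classification than `startsWith('http')` parses the value as a URL; `toS3Endpoint` is a hypothetical helper name for illustration, not the adapter's API:

```typescript
// Accepts either a full http(s) URL or a bare host[:port]; only a genuine
// http/https scheme is passed through, everything else gets a default scheme.
function toS3Endpoint(endpoint: string): string {
  try {
    const u = new URL(endpoint);
    if (u.protocol === 'http:' || u.protocol === 'https:') return u.origin;
  } catch {
    // not parseable as a URL — treat as a bare host
  }
  return `https://${endpoint}`;
}
```

A hostname like `httpminio.example.com` no longer gets misclassified as already carrying a scheme.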

<file name="packages/storage-local/package.json">

<violation number="1" location="packages/storage-local/package.json:32">
P1: Avoid using `*` for a published runtime dependency; pin to a compatible semver range to prevent pulling incompatible major versions.</violation>
</file>

<file name="packages/storage-gcs/src/adapter.ts">

<violation number="1" location="packages/storage-gcs/src/adapter.ts:130">
P2: Orphaned event listeners accumulate on the Node stream. When `drain` fires, the `error` listener from that write remains, and vice-versa. Over many backpressured writes this leaks listeners and can trigger `MaxListenersExceeded` warnings. Remove the losing listener when the winning one fires.</violation>
</file>
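
The cleanup pattern both listener-leak findings describe, as a standalone sketch:

```typescript
import type { Writable } from 'node:stream';

// Resolve on 'drain', reject on 'error' — and detach the losing listener
// so repeated backpressured writes don't accumulate orphans on the stream.
function onceDrain(stream: Writable): Promise<void> {
  return new Promise<void>((resolve, reject) => {
    const onDrain = () => { stream.off('error', onError); resolve(); };
    const onError = (err: Error) => { stream.off('drain', onDrain); reject(err); };
    stream.once('drain', onDrain);
    stream.once('error', onError);
  });
}
```

Each call leaves zero listeners behind once either event fires, so long-running back-pressured copies never trip `MaxListenersExceeded` warnings.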

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

```typescript
import { describe, expect, it } from 'vitest';
import type { FileProvider } from '../types.js';

function createConformantProvider(): FileProvider {
```

@cubic-dev-ai cubic-dev-ai bot Apr 4, 2026


P1: This conformance suite only tests a local mock provider, so it can pass while real FileProvider implementations are broken.



```typescript
  describe('Adapter instantiation (STORAGE-001)', () => {
    it('should create adapter with correct name', () => {
      expect(adapter.name).toBe('r2');
```

@cubic-dev-ai cubic-dev-ai bot Apr 4, 2026


P3: This assertion duplicates an existing adapter-name check and does not test S3-compatible API behavior described by the test.



@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 11 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="packages/storage-azure/src/adapter.ts">

<violation number="1" location="packages/storage-azure/src/adapter.ts:316">
P1: Unhandled promise rejection: `abort()` destroys the PassThrough stream, which causes `uploadPromise` (from `blockBlobClient.uploadStream`) to reject, but that rejection is never caught. This will trigger a Node.js `unhandledRejection` warning (or crash, depending on the runtime configuration). Silence the rejection in `abort` by catching it.</violation>
</file>
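
The catch this violation asks for, shown in a self-contained sketch where `consume` stands in for `blockBlobClient.uploadStream` (an assumption, not the SDK's signature):

```typescript
import { PassThrough } from 'node:stream';

// WritableStream sink feeding a PassThrough; abort() destroys the stream and
// pre-emptively handles the upload promise's expected rejection.
function makeAbortableSink(consume: (src: PassThrough) => Promise<void>): WritableStream<Uint8Array> {
  const pass = new PassThrough();
  const uploadPromise = consume(pass);
  return new WritableStream<Uint8Array>({
    write(chunk) {
      if (!pass.write(chunk)) {
        return new Promise<void>((resolve) => pass.once('drain', () => resolve()));
      }
    },
    async close() {
      pass.end();
      await uploadPromise;
    },
    abort(reason) {
      uploadPromise.catch(() => {}); // rejection is expected after destroy — keep it handled
      pass.destroy(reason instanceof Error ? reason : new Error(String(reason)));
    },
  });
}
```

Attaching the no-op `catch` before `destroy` guarantees the rejection is observed before it can surface as `unhandledRejection`.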

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Comment on lines +49 to +72
```typescript
function createInMemoryKVStore(now: () => number): KVStoreAdapter {
  const store = new Map<string, { value: string; expiresAt?: number }>();

  return {
    async get(key: string): Promise<string | null> {
      const entry = store.get(key);
      if (!entry) return null;
      if (entry.expiresAt && now() > entry.expiresAt) {
        store.delete(key);
        return null;
      }
      return entry.value;
    },
    async set(input: { key: string; value: string; ttlSeconds?: number }): Promise<void> {
      store.set(input.key, {
        value: input.value,
        expiresAt: input.ttlSeconds ? now() + input.ttlSeconds * 1000 : undefined,
      });
    },
    async delete(key: string): Promise<void> {
      store.delete(key);
    },
  };
}
```

`createInMemoryKVStore` requires a `now: () => number` parameter but is exported and used without it. The function signature expects `now` as a required parameter, but the export at line 269 and tests at line 182 call it without arguments, which will cause a runtime error.

```typescript
// Fix: Make the parameter optional with a default
export function createInMemoryKVStore(now: () => number = Date.now): KVStoreAdapter {
  const store = new Map<string, { value: string; expiresAt?: number }>();
  // ...
}
```
```suggestion
function createInMemoryKVStore(now: () => number = Date.now): KVStoreAdapter {
  const store = new Map<string, { value: string; expiresAt?: number }>();

  return {
    async get(key: string): Promise<string | null> {
      const entry = store.get(key);
      if (!entry) return null;
      if (entry.expiresAt && now() > entry.expiresAt) {
        store.delete(key);
        return null;
      }
      return entry.value;
    },
    async set(input: { key: string; value: string; ttlSeconds?: number }): Promise<void> {
      store.set(input.key, {
        value: input.value,
        expiresAt: input.ttlSeconds ? now() + input.ttlSeconds * 1000 : undefined,
      });
    },
    async delete(key: string): Promise<void> {
      store.delete(key);
    },
  };
}
```
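
With an injectable clock, TTL expiry also becomes deterministic in tests — a usage sketch (self-contained copy of the store with the optional-default signature applied; the key name below is illustrative):

```typescript
interface KVStoreAdapter {
  get(key: string): Promise<string | null>;
  set(input: { key: string; value: string; ttlSeconds?: number }): Promise<void>;
  delete(key: string): Promise<void>;
}

function createInMemoryKVStore(now: () => number = Date.now): KVStoreAdapter {
  const store = new Map<string, { value: string; expiresAt?: number }>();
  return {
    async get(key) {
      const entry = store.get(key);
      if (!entry) return null;
      if (entry.expiresAt && now() > entry.expiresAt) {
        store.delete(key); // lazily evict on read, using the injected clock
        return null;
      }
      return entry.value;
    },
    async set(input) {
      store.set(input.key, {
        value: input.value,
        expiresAt: input.ttlSeconds ? now() + input.ttlSeconds * 1000 : undefined,
      });
    },
    async delete(key) {
      store.delete(key);
    },
  };
}

// Fake clock: advance virtual time manually instead of sleeping in tests.
let virtualMs = 0;
const kv = createInMemoryKVStore(() => virtualMs);
```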

Spotted by Graphite



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🧹 Nitpick comments (24)
packages/files/src/types.ts (1)

18-18: Use generic return types to replace `Promise<unknown>` in three `FileProvider` methods.

Lines 18, 41, and 42 currently return `Promise<unknown>`, forcing downstream consumers to cast aggressively. Generic parameters with `unknown` defaults (`<TStatus = unknown>`, `<TFile = unknown>`, `<TList = unknown>`) maintain backward compatibility while enabling implementers to specify actual return types and improving compile-time safety.

♻️ Minimal non-breaking direction (generic return types)

```diff
-  getUploadSessionStatus(input: { uploadSessionId: string }, ctx: FileProviderContext): Promise<unknown>;
+  getUploadSessionStatus<TStatus = unknown>(
+    input: { uploadSessionId: string },
+    ctx: FileProviderContext
+  ): Promise<TStatus>;
 ...
-  getFile(input: { fileId: string; versionId?: string }, ctx: FileProviderContext): Promise<unknown>;
-  listFiles(input: { cursor?: string; limit?: number }, ctx: FileProviderContext): Promise<unknown>;
+  getFile<TFile = unknown>(
+    input: { fileId: string; versionId?: string },
+    ctx: FileProviderContext
+  ): Promise<TFile>;
+  listFiles<TList = unknown>(
+    input: { cursor?: string; limit?: number },
+    ctx: FileProviderContext
+  ): Promise<TList>;
```
packages/storage/src/index.test.ts (1)

4-7: Consider asserting concrete exports, not just module presence.

toBeDefined() is a very weak signal for an entrypoint test.

🔍 Stronger smoke assertion

```diff
 describe('storage', () => {
   it('should export types', async () => {
     const mod = await import('./index.js');
-    expect(mod).toBeDefined();
+    expect(mod).toBeDefined();
+    expect(Object.keys(mod).length).toBeGreaterThan(0);
   });
 });
```
packages/files/src/__tests__/file-provider-conformance.test.ts (1)

14-14: Reduce `as any` usage to keep conformance tests type-sensitive.

Right now the suite can mask interface drift because many return values/results are explicitly cast to `any`.

♻️ Example tightening (pattern to apply across the suite)

```diff
-    const session = await provider.createUploadSession(
+    const session = await provider.createUploadSession(
       {
         policy: 'user-avatar',
         fileName: 'avatar.png',
         size: 3,
         mimeType: 'image/png',
       },
       { principalId: 'user_123', tenantId: 'org_123', requestId: 'req_001' },
-    ) as any;
+    );

-    async completeUploadSession() {
-      return { fileId: 'file_0001', versionId: 'ver_0001' };
+    async completeUploadSession() {
+      return {
+        fileId: 'file_0001',
+        versionId: 'ver_0001',
+      } satisfies Awaited<ReturnType<FileProvider['completeUploadSession']>>;
     },
```

Also applies to: 25-25, 28-28, 31-31, 43-43, 46-46, 66-67, 84-85, 103-104, 114-115, 123-124, 134-135, 143-144

packages/storage-gcs/src/index.ts (1)

1-1: Prefer explicit entrypoint exports over wildcard.

`export *` can accidentally expose new symbols when `adapter.ts` changes. Explicit exports make versioning safer.

Suggested change

```diff
-export * from './adapter.js';
+export { createGCSStorageAdapter, type GCSStorageConfig } from './adapter.js';
```
packages/middleware/src/index.test.ts (1)

11-11: Rename test title to match runtime behavior.

Line 11 mentions interface export, but the test validates method presence on an instance. Consider wording like “should expose check and reset methods”.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/middleware/src/index.test.ts` at line 11, Update the test title
string for the failing "it" block currently named "should export RateLimiter
interface" to accurately describe what the test asserts — rename it to something
like "should expose `check` and `reset` methods" since the assertions check
method presence on an instance (verify the `check` and `reset` methods on the
RateLimiter instance).
packages/storage-azure/README.md (1)

44-53: Clarify uploadId as an opaque value to avoid integration mistakes.

The multipart section is good; a slightly tighter wording helps consumers avoid transforming/regenerating uploadId.

✏️ Suggested doc tweak
-- **uploadId**: Azure commitment placeholder (client-generated)
+- **uploadId**: Opaque multipart session identifier; obtain it once and pass it unchanged to part-signing and completion calls
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/storage-azure/README.md` around lines 44 - 53, Clarify in the "Azure
Block Blob Semantics" section that uploadId is an opaque client-provided
identifier and must not be parsed, transformed, or regenerated by integrators;
update the description mapping for **uploadId** to explicitly state it is an
opaque value used only to correlate parts and should be treated verbatim by
clients and servers to avoid integration errors.
packages/storage-s3/src/adapter.test.ts (1)

7-14: Consider capturing conformance failure details.

Same suggestion as for GCS adapter: include failure details in the assertion message to aid debugging when conformance tests fail.
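As a sketch of what "capturing failure details" could look like (the result shape here is an assumption; the real conformance harness in `@superfunctions/storage` may expose different field names):

```typescript
// Hypothetical result shape; field names are illustrative, not the
// actual @superfunctions/storage conformance API.
interface ConformanceResult {
  ok: boolean;
  errors?: string[];
}

// Fold failure details into one message so a failing assertion shows
// which checks broke instead of a bare `expected true`.
function describeConformanceFailures(result: ConformanceResult): string {
  if (result.ok) return 'all conformance checks passed';
  const details = result.errors?.length
    ? result.errors.join('; ')
    : '<no details reported>';
  return `Conformance failures: ${details}`;
}
```

In vitest, this could feed the optional second argument of `expect`, e.g. `expect(result.ok, describeConformanceFailures(result)).toBe(true)`.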

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/storage-s3/src/adapter.test.ts` around lines 7 - 14, The test for
createS3StorageAdapter currently asserts runConformanceTests returned ok without
showing failure details; update the assertion in the test that calls
runConformanceTests(adapter) to include the failure information in the expect
message (e.g., include result.error or JSON.stringify(result) in the
expect(...).toBe(true) message) so that when runConformanceTests (the
conformance harness) fails you get the detailed failure output for debugging.
packages/storage-r2/src/adapter.ts (1)

13-27: Consider validating required config fields.

The adapter accepts accountId, accessKeyId, and secretAccessKey as required fields but doesn't validate them before constructing the endpoint or client. Empty strings would result in malformed URLs or confusing auth errors.

Optional validation
 export function createR2StorageAdapter(config: R2StorageConfig): StorageAdapter {
   const { accountId, bucket, accessKeyId, secretAccessKey, jurisdiction } = config;
 
+  if (!accountId || !bucket || !accessKeyId || !secretAccessKey) {
+    throw new Error('R2 adapter requires accountId, bucket, accessKeyId, and secretAccessKey');
+  }
+
   const endpoint = jurisdiction
     ? `https://${accountId}.${jurisdiction}.r2.cloudflarestorage.com`
     : `https://${accountId}.r2.cloudflarestorage.com`;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/storage-r2/src/adapter.ts` around lines 13 - 27, In
createR2StorageAdapter, validate that required config fields (accountId,
accessKeyId, secretAccessKey) from the R2StorageConfig are present and non-empty
before building endpoint or instantiating the S3Client; if any are missing/empty
throw a clear error (or return a failed result) naming the missing field(s) so
callers can fix configuration, and ensure you still accept optional jurisdiction
as-is when computing endpoint. Use the existing symbols (accountId, accessKeyId,
secretAccessKey, jurisdiction, endpoint, S3Client, createR2StorageAdapter) to
locate where to add the checks.
packages/storage-gcs/src/adapter.test.ts (1)

7-13: Consider capturing conformance failure details.

When result.ok is false, the test fails but doesn't reveal which checks failed. Capturing result.errors or result.details (if available) in the assertion message would improve debuggability.

Suggested improvement
     it('should pass conformance tests (method existence)', async () => {
       const adapter = createGCSStorageAdapter({
         bucket: 'test-bucket',
       });
       const result = await runConformanceTests(adapter);
-      expect(result.ok).toBe(true);
+      expect(result.ok, `Conformance failures: ${JSON.stringify(result)}`).toBe(true);
     });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/storage-gcs/src/adapter.test.ts` around lines 7 - 13, The test for
createGCSStorageAdapter calling runConformanceTests only asserts result.ok and
loses failure details; update the assertion to include result.errors or
result.details in the failure message (or log them) so failing checks are
visible. Locate the test around createGCSStorageAdapter and runConformanceTests,
and change the expect to something like expecting result.ok to be true while
passing result.errors/result.details into the expect message or call
console.error(result.errors) before the assertion to surface which checks
failed.
packages/storage-r2/package.json (1)

34-34: Inconsistent internal dependency version specifier.

Same issue as other adapter packages: uses "*" instead of "^0.1.0" for @superfunctions/storage.

Suggested fix
   "dependencies": {
     "@aws-sdk/client-s3": "^3.600.0",
     "@aws-sdk/s3-request-presigner": "^3.600.0",
-    "@superfunctions/storage": "*"
+    "@superfunctions/storage": "^0.1.0"
   },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/storage-r2/package.json` at line 34, Update the dependency spec for
the internal package to match other adapter packages: replace the loose version
"*" for the "@superfunctions/storage" entry in package.json with the
caret-pinned version "^0.1.0" so it becomes consistent and allows
semver-compatible updates; ensure the change is applied to the
"@superfunctions/storage" dependency line in this package.json.
packages/middleware/package.json (1)

36-41: Consider version range for peer dependency.

Using "*" for @superfunctions/db means any version will satisfy the peer dependency. While this is marked optional, once a consumer includes it, there's no guarantee of API compatibility. Consider specifying a minimum version range (e.g., ">=0.1.0") once the db package API stabilizes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/middleware/package.json` around lines 36 - 41, Replace the wildcard
peer dependency for `@superfunctions/db` in package.json with a semantically
meaningful minimum version range (e.g., ">=0.1.0" or a caret range like
"^0.1.0") so consumers get an API compatibility guarantee; keep the
peerDependenciesMeta optional setting as-is and update the "peerDependencies"
entry for "@superfunctions/db" accordingly.
packages/storage-s3/package.json (1)

34-34: Inconsistent internal dependency version specifier.

This package uses "*" for @superfunctions/storage, while packages/storage-local/package.json uses "^0.1.0". Using "*" can lead to unexpected breaking changes when the core package is updated. Consider using a consistent semver range like "^0.1.0" across all adapter packages for predictable dependency resolution.

Suggested fix
   "dependencies": {
     "@aws-sdk/client-s3": "^3.600.0",
     "@aws-sdk/s3-request-presigner": "^3.600.0",
-    "@superfunctions/storage": "*"
+    "@superfunctions/storage": "^0.1.0"
   },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/storage-s3/package.json` at line 34, Update the dependency specifier
for `@superfunctions/storage` in the package.json that currently has
"@superfunctions/storage": "*" to use a consistent semver range (e.g., "^0.1.0")
matching other adapters; locate the dependency entry and replace the "*" value
with the chosen caret range so all adapter packages use the same predictable
versioning.
packages/storage-azure/package.json (1)

33-33: Inconsistent internal dependency version specifier.

Same issue as in storage-s3: uses "*" instead of a semver range for @superfunctions/storage.

Suggested fix
   "dependencies": {
     "@azure/storage-blob": "^12.18.0",
-    "@superfunctions/storage": "*"
+    "@superfunctions/storage": "^0.1.0"
   },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/storage-azure/package.json` at line 33, The dependency entry for
`@superfunctions/storage` in package.json currently uses "*" which is inconsistent
with other packages (like storage-s3); replace the "*" specifier with the same
semver range used across the repo (e.g., a caret range such as ^X.Y.Z or the
workspace: range your monorepo uses) so `@superfunctions/storage` matches the
project's versioning policy; update the dependency value for
"@superfunctions/storage" in packages/storage-azure/package.json and ensure it
mirrors the specifier used in storage-s3.
packages/storage/src/router.test.ts (1)

68-70: Avoid optional chaining on required-path test calls.

At Line 68 and Line 127, `?.` can hide missing-method regressions by short-circuiting instead of failing fast. Assert method presence, then call directly.

♻️ Suggested test hardening
-    await storage.createMultipartUpload?.({ key: 'files/a', target: 'durable' });
-    await storage.createMultipartUpload?.({ key: 'files/b', target: 'temporary' });
-    const descriptor = await storage.signDownloadUrl?.({ key: 'files/b', target: 'temporary', expiresInSeconds: 60 });
+    expect(typeof storage.createMultipartUpload).toBe('function');
+    expect(typeof storage.signDownloadUrl).toBe('function');
+    await storage.createMultipartUpload({ key: 'files/a', target: 'durable' });
+    await storage.createMultipartUpload({ key: 'files/b', target: 'temporary' });
+    const descriptor = await storage.signDownloadUrl({ key: 'files/b', target: 'temporary', expiresInSeconds: 60 });

...
-      storage.signDownloadUrl?.({ key: 'files/missing', target: 'temporary', expiresInSeconds: 60 }),
+      storage.signDownloadUrl({ key: 'files/missing', target: 'temporary', expiresInSeconds: 60 }),

Also applies to: 127-129

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/storage/src/router.test.ts` around lines 68 - 70, The tests use
optional chaining on required storage methods which can mask regressions; change
occurrences of storage.createMultipartUpload?. and storage.signDownloadUrl?.
(and similar uses around lines 127-129) to assert the methods exist and then
call them normally (e.g., verify typeof storage.createMultipartUpload ===
'function' or expect(storage.createMultipartUpload).toBeDefined() before
invoking) so missing-method failures surface immediately; update the test to
call storage.createMultipartUpload(...) and storage.signDownloadUrl(...)
directly after the assertion.
packages/storage-r2/src/adapter.test.ts (1)

127-134: Convert doc-style assertions into behavior checks (or comments).

These tests currently restate existing assertions and don’t validate R2-specific behavior. Consider replacing with concrete R2 edge-case checks or moving the note to documentation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/storage-r2/src/adapter.test.ts` around lines 127 - 134, The tests
inside the "R2-specific limitations" describe block are only restating docs
(checking adapter.capabilities.signedUploadUrls and adapter.name) rather than
validating behavior; replace these doc-style it blocks with either real behavior
assertions—e.g., in the signedUploadUrls test actually generate a signed upload
URL via the adapter and assert that uploads with a mismatched Content-Type are
rejected or that the URL contains the content-type constraint—and for the
S3-compatibility test use the adapter to exercise an S3-compatible API edge case
(presigned PUT/GET headers or path vs host styles) or convert the two it blocks
into plain comments documenting the limitation; locate and update the tests
referencing adapter.capabilities.signedUploadUrls and adapter.name inside the
describe('R2-specific limitations') block accordingly.
packages/middleware/src/rate-limit.test.ts (1)

200-205: Make TTL expiration test deterministic with fake timers.

The real sleep at Line 204 can introduce CI flakiness. Prefer fake timers + controlled advancement.

♻️ Suggested deterministic version
     it('expires keys after TTL', async () => {
+      vi.useFakeTimers();
       await store.set({ key: 'key1', value: 'value1', ttlSeconds: 0.01 });
       expect(await store.get('key1')).toBe('value1');

-      await new Promise((resolve) => setTimeout(resolve, 20));
+      vi.advanceTimersByTime(20);
       expect(await store.get('key1')).toBeNull();
     });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/middleware/src/rate-limit.test.ts` around lines 200 - 205, The TTL
expiration test for "expires keys after TTL" is flaky due to a real sleep;
switch the test to use fake timers from the test runner (e.g., jest/vi): call
useFakeTimers() at the start of the test, call store.set({ key: 'key1', value:
'value1', ttlSeconds: 0.01 }) as before, then advance timers by >=20ms with
advanceTimersByTime(20) and flush pending microtasks if needed (e.g., await
Promise.resolve() or nextTick) before asserting store.get('key1') is null, and
finally restore real timers with useRealTimers()/clearAllTimers; reference the
test name "expires keys after TTL" and the methods store.set and store.get so
the change is applied in that test only.
packages/storage-minio/src/adapter.test.ts (1)

139-143: Test does not verify the claimed behavior.

The test title claims to verify "path-style addressing (forcePathStyle)" but only asserts that adapter.name equals 'minio'. This doesn't validate that forcePathStyle: true is actually configured on the underlying S3 client.

Consider either:

  1. Removing this test if path-style addressing is an internal implementation detail
  2. Actually verifying the configuration (e.g., by exposing it or testing URL output format)
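If the test is kept, one way to make it behavioral is to assert the URL shape that path-style addressing implies (a sketch; `buildObjectUrl` is an illustrative helper, not part of the adapter's API):

```typescript
// Path-style addressing keeps the bucket in the URL path; virtual-hosted
// style moves it into the hostname. MinIO deployments generally require
// path-style, which is what forcePathStyle: true selects on the S3 client.
// buildObjectUrl is a hypothetical helper for illustration only.
function buildObjectUrl(
  endpoint: string,
  bucket: string,
  key: string,
  forcePathStyle: boolean,
): string {
  const url = new URL(endpoint);
  if (forcePathStyle) {
    url.pathname = `/${bucket}/${encodeURIComponent(key)}`;
  } else {
    url.hostname = `${bucket}.${url.hostname}`;
    url.pathname = `/${encodeURIComponent(key)}`;
  }
  return url.toString();
}
```

A behavioral test could then assert that URLs produced for a MinIO endpoint use the `/<bucket>/<key>` path form rather than a `<bucket>.<host>` subdomain.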
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/storage-minio/src/adapter.test.ts` around lines 139 - 143, The test
title claims to verify path-style addressing but only checks adapter.name;
either remove the test or update it to actually assert forcePathStyle on the
underlying S3 client: locate the adapter instance (e.g., adapter or
adapter.s3Client / adapter.s3) and assert the S3 client's configuration includes
forcePathStyle === true (or call a URL-generation method like
adapter.getUrl(bucket, key) and assert the returned URL uses path-style
host/path format rather than virtual-hosted style). Update the spec to reference
adapter.name only if you remove the path-style assertion.
packages/storage/src/internal/s3-compatible.ts (1)

64-70: Defaulting size to 0 when ContentLength is undefined may be misleading.

When HeadObjectCommand doesn't return ContentLength (rare, but possible for some S3-compatible backends), the adapter returns size: 0. This could lead consumers to believe the object is empty rather than that the size is unknown.

Consider either:

  1. Throwing an error if ContentLength is missing (stricter)
  2. Making size optional in StorageObjectStat to represent "unknown" (breaking change)
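A sketch of option 2, using a paraphrased stat shape (this is not the package's actual `StorageObjectStat` definition):

```typescript
// Paraphrased stat shape: `size` is optional so "unknown" stays
// distinguishable from a genuinely empty (0-byte) object.
interface StatLike {
  key: string;
  size?: number;
}

// Map a HeadObject-style ContentLength without inventing a size of 0.
function toStat(key: string, contentLength: number | undefined): StatLike {
  return contentLength === undefined ? { key } : { key, size: contentLength };
}
```

Consumers then have to handle `size === undefined` explicitly, which is the breaking-change cost the comment mentions.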
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/storage/src/internal/s3-compatible.ts` around lines 64 - 70, The
current code defaults ContentLength to 0 which misrepresents unknown sizes;
update the adapter to propagate unknown size instead of defaulting: return size
as undefined when response.ContentLength is undefined, and update the
StorageObjectStat type (make size?: number) and any callers/consumers of the
object (e.g., functions that read size) to handle an optional size;
alternatively, if you prefer a strict approach, throw an error from the method
that issues HeadObjectCommand when ContentLength is missing—choose one strategy
and apply it consistently across the StorageObjectStat definition and all
usages.
packages/storage/src/types.ts (1)

88-97: Consider adding type parameters to stream types.

The `WritableStream` and `ReadableStream` types lack type parameters, defaulting to `any`. Adding explicit type parameters would improve type safety for consumers.

♻️ Proposed improvement
   openUploadStream?(input: {
     key: string;
     target?: StorageTargetName;
     contentType?: string;
-  }): Promise<WritableStream>;
+  }): Promise<WritableStream<Uint8Array>>;
   openDownloadStream?(input: {
     key: string;
     target?: StorageTargetName;
     range?: { start: number; endInclusive: number };
-  }): Promise<ReadableStream>;
+  }): Promise<ReadableStream<Uint8Array>>;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/storage/src/types.ts` around lines 88 - 97, Update the method
signatures to use generic stream element types instead of bare any by adding a
type parameter (with sensible default) to both methods: change openUploadStream
to openUploadStream?<T = Uint8Array>(input: { key: string; target?:
StorageTargetName; contentType?: string; }): Promise<WritableStream<T>> and
change openDownloadStream to openDownloadStream?<T = Uint8Array>(input: { key:
string; target?: StorageTargetName; range?: { start: number; endInclusive:
number }; }): Promise<ReadableStream<T>> so consumers can opt into a specific
chunk type while preserving a safe default; update any related type
imports/exports in types.ts to reflect the generics.
packages/storage-gcs/src/adapter.ts (2)

58-65: Redundant try-catch block.

The try-catch here simply re-throws the error without any transformation. Since ignoreNotFound: true already handles the not-found case, this wrapper adds no value.

♻️ Simplify by removing unnecessary try-catch
     async deleteObject(input: { key: string }): Promise<void> {
-      try {
-        const file = bucket.file(input.key);
-        await file.delete({ ignoreNotFound: true });
-      } catch (err: unknown) {
-        throw err;
-      }
+      const file = bucket.file(input.key);
+      await file.delete({ ignoreNotFound: true });
     },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/storage-gcs/src/adapter.ts` around lines 58 - 65, The deleteObject
method contains a redundant try-catch that only rethrows errors; remove the
try-catch wrapper in async deleteObject (which calls bucket.file(input.key) and
await file.delete({ ignoreNotFound: true })) so that errors propagate naturally
and the ignoreNotFound option handles missing objects—leave the bucket.file and
file.delete call as-is and return Promise<void> from the async function without
catching and rethrowing.

171-176: Redundant catch block with identical branches.

Both branches of this catch block rethrow `err` unchanged, so the conditional adds nothing; and since the `try` body only throws `createNotFoundError()` (line 169), the entire try-catch can be removed.

♻️ Simplify error handling
       try {
         const [exists] = await file.exists();
         if (!exists) {
           throw createNotFoundError();
         }
-      } catch (err: unknown) {
-        if (err instanceof Error && 'code' in err && err.code === 'STORAGE_NOT_FOUND') {
-          throw err;
-        }
-        throw err;
-      }
+      } catch (err: unknown) {
+        throw err;
+      }

Or simply remove the try-catch entirely since it adds no value.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/storage-gcs/src/adapter.ts` around lines 171 - 176, The catch block
that inspects and rethrows err is redundant because createNotFoundError()
already throws; remove the entire try-catch surrounding that call (or at minimum
remove the catch and let the error propagate) so the function/program doesn't
contain an identical rethrow branch; look for the createNotFoundError call and
the catch (err: unknown) block in adapter.ts and delete the catch block (or the
encompassing try) to simplify error handling.
packages/storage-azure/src/adapter.ts (1)

335-340: Redundant catch block.

Similar to the GCS adapter, both branches throw err, making the conditional check unnecessary.

♻️ Simplify or remove
       } catch (err: unknown) {
-        if (err instanceof Error && 'code' in err && err.code === 'STORAGE_NOT_FOUND') {
-          throw err;
-        }
         throw err;
       }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/storage-azure/src/adapter.ts` around lines 335 - 340, The catch
block that tests "if (err instanceof Error && 'code' in err && err.code ===
'STORAGE_NOT_FOUND')" is redundant because both branches rethrow err; simplify
by removing the conditional and either remove the catch entirely to let errors
bubble or replace it with a single rethrow (catch (err: unknown) { throw err; })
in the same function scope where the current try/catch appears (the catch
handling "err" in adapter.ts).
packages/storage/src/router.ts (1)

32-40: Capability intersection may be too restrictive.

Using `every()` means the routed adapter only advertises a capability if all underlying adapters support it. This could be limiting: for example, if one adapter supports multipart and another doesn't, the router reports `multipart: false` even when the target that supports it is used.

Consider whether this is the intended behavior or if capability checks should be deferred to per-target resolution via capabilitiesForTarget.

The current design is safe (won't advertise unsupported features), but consumers should use getStorageCapabilities(adapter, target) for accurate per-target capabilities rather than relying on adapter.capabilities.
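The trade-off can be sketched as follows (the capability names and the per-target resolver mirror the review discussion but are illustrative, not the package's exact API):

```typescript
// Illustrative capability shape, not necessarily the package's types.
interface Capabilities {
  multipart: boolean;
  signedUploadUrls: boolean;
}

// Conservative merge: a capability is advertised only if every backend has it.
function mergeCapabilities(all: Capabilities[]): Capabilities {
  return {
    multipart: all.every((c) => c.multipart),
    signedUploadUrls: all.every((c) => c.signedUploadUrls),
  };
}

// Per-target lookup: accurate for the backend actually serving a target.
function capabilitiesForTarget(
  byTarget: Record<string, Capabilities>,
  target: string,
): Capabilities | undefined {
  return byTarget[target];
}
```

With a multipart-capable durable backend and a simpler temporary one, the merged view reports `multipart: false` even though the durable target supports it; the per-target lookup preserves that information.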

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/storage/src/router.ts` around lines 32 - 40, mergeCapabilities
currently uses every() which only advertises a capability if all underlying
adapters support it; update the codebase and docs to avoid misleading consumers
by either (A) changing mergeCapabilities to compute per-capability availability
more appropriately (e.g., union/any() if you want advertised features when at
least one adapter supports them) or (B) keep the conservative every() behavior
but add a clear comment above mergeCapabilities and update documentation and
call sites to direct consumers to use the per-target resolver
(capabilitiesForTarget or getStorageCapabilities) for accurate target-specific
capability checks; locate mergeCapabilities and any callers that read
adapter.capabilities and update them or their docs to reference
capabilitiesForTarget/getStorageCapabilities.
packages/middleware/src/rate-limit.ts (1)

151-169: Key lock implementation may leak entries over time.

The lock cleanup at line 165-167 only deletes the key if the current promise matches. However, if multiple concurrent calls queue up and one completes while another is still waiting, the map entry persists. This isn't a functional bug, but the map could grow unbounded with unique keys.

♻️ Consider using WeakRef or periodic cleanup for long-running processes

For high-cardinality key spaces, consider periodically clearing the keyLocks map or using a different approach. In practice, this is likely fine for most use cases since the Promise objects are small.
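A sketch of the `.finally`-based cleanup the prompt below describes, assuming a simplified lock map (names echo the review's `keyLocks`/`withKeyLock`, but this is illustrative, not the package's implementation):

```typescript
// Illustrative per-key lock; a sketch, not the shipped rate limiter code.
const keyLocks = new Map<string, Promise<void>>();

async function withKeyLock<T>(key: string, fn: () => Promise<T>): Promise<T> {
  const prior = keyLocks.get(key) ?? Promise.resolve();
  let release!: () => void;
  const tail = new Promise<void>((resolve) => { release = resolve; });
  keyLocks.set(key, tail);
  // Cleanup rides on the tail promise itself: the entry is removed only if
  // the map still points at this call's tail, so the map never outgrows the
  // set of keys with in-flight work.
  void tail.finally(() => {
    if (keyLocks.get(key) === tail) keyLocks.delete(key);
  });
  await prior;
  try {
    return await fn();
  } finally {
    release();
  }
}
```

Each caller waits on the previous tail, so work on the same key is serialized, and the last caller to settle deletes the entry.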

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/middleware/src/rate-limit.ts` around lines 151 - 169, The
withKeyLock implementation can leak map entries for high-cardinality keys
because cleanup only runs in the function's finally and may miss removing the
entry when promises chain; update withKeyLock to attach a cleanup handler to the
queued Promise itself (e.g., queued.finally(...)) that checks keyLocks.get(key)
=== queued and deletes the key when queued settles, and keep the existing
resolveLock/await prior behavior; reference symbols: withKeyLock, keyLocks,
queued, resolveLock.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/storage-gcs/package.json`:
- Around line 31-34: Update the storage adapter packages to remove the wildcard
dependency on `@superfunctions/storage`: replace the dependency entry
"@superfunctions/storage": "*" with a semver-constrained version such as
"@superfunctions/storage": "^0.1.0" in each package.json for the storage
adapters (refer to the dependency key "@superfunctions/storage" in
packages/storage-gcs/package.json and the equivalent entries in storage-azure,
storage-minio, storage-r2, and storage-s3) so the published packages retain
semver safety and avoid resolving incompatible upstream releases.

In `@packages/storage-gcs/src/adapter.ts`:
- Around line 151-156: The close() implementation on the adapter can miss
synchronous errors because it calls gcstream.end() before attaching the 'error'
listener and never cleans up listeners; update close() to attach the error
handler before calling gcstream.end(), use once for both 'finish'/'close' and
'error', and remove the opposite listener in each handler to avoid
leaks/handled-after-resolve scenarios—i.e., on gcstream.once('error', onErr) and
gcstream.once('finish' or 'close', onSuccess), call removeListener for the
counterpart in each handler and resolve/reject accordingly so gcstream errors
are caught and listeners cleaned up.

In `@packages/storage-local/src/adapter.ts`:
- Around line 182-186: The close() Promise may miss errors because
nodeStream.end() is called before attaching the 'error' listener; change the
order so the error listener (nodeStream.once('error', reject)) is registered
before calling nodeStream.end(), and ensure the listener is removed/cleanup when
end's callback resolves (e.g., remove the error listener inside the end callback
before calling resolve) so synchronous or immediate errors are properly caught
and the Promise always settles.

In `@packages/storage-minio/package.json`:
- Around line 31-35: Update the loosened dependency spec for
`@superfunctions/storage` which is currently "*" to a pinned semver range (e.g.
"^0.1.0") so consumers don't get future incompatible releases; change the
dependency entry named "@superfunctions/storage" in the package.json for
storage-minio (and likewise in storage-azure, storage-s3, storage-r2,
storage-gcs) from "*" to the chosen compatible range "^0.1.0" and run install to
verify lockfile updates.

In `@packages/storage-minio/src/adapter.ts`:
- Around line 28-30: The current logic skips portSuffix when endpoint already
includes a scheme, so fix the construction of fullEndpoint in adapter.ts by
detecting whether the given endpoint (when starting with 'http://' or
'https://') already contains an explicit port and, if not, append portSuffix;
update the code that computes fullEndpoint (which currently checks
endpoint.startsWith and returns endpoint unchanged) to parse or regex-check the
endpoint for an existing :port (or use the URL constructor) and only leave
endpoint as-is if a port is present, otherwise append portSuffix while
preserving the original scheme and host.

In `@packages/storage-s3/src/adapter.ts`:
- Around line 21-24: The credentials branch silently falls back when only one of
accessKeyId or secretAccessKey is provided; add a guard in the adapter
initialization (before constructing the credentials object) that checks the
variables accessKeyId and secretAccessKey and throws a descriptive Error if
exactly one is set, otherwise proceed: if both are set attach { accessKeyId,
secretAccessKey }, if neither set leave credentials undefined to use the default
provider chain. Ensure the check is placed near the existing credentials
construction (the credentials: accessKeyId && secretAccessKey ? ... expression)
so it prevents partial-static credential usage.

---

Nitpick comments:
In `@packages/files/src/__tests__/file-provider-conformance.test.ts`:
- Line 14: Several mocks/return objects in the conformance tests are being cast
to `any` (e.g., the object ending with `} as any;`) which masks type drift;
replace those `as any` casts with explicit, type-safe typings such as using the
library's FileProvider interface (or the actual conformance interface name) or
Partial<FileProvider> for stubs and then cast via `as FileProvider` only when
necessary, or better yet implement minimal typed mocks that satisfy the required
methods (e.g., implement the specific methods used in the test rather than
blanket `any`); apply this change to the occurrences you listed (lines
~14,25,28,31,43,46,66-67,84-85,103-104,114-115,123-124,134-135,143-144) so the
test suite remains type-sensitive.

In `@packages/files/src/types.ts`:
- Line 18: Update the three FileProvider method signatures to use generic return
types with unknown defaults instead of Promise<unknown>: add a generic type
parameter to getUploadSessionStatus (e.g., <TStatus = unknown>) and change its
return type to Promise<TStatus>, add <TFile = unknown> to getFile and return
Promise<TFile>, and add <TList = unknown> to listFiles (or the corresponding
method) and return Promise<TList>; this preserves backward compatibility while
allowing implementers to specify concrete return types for
getUploadSessionStatus, getFile, and listFiles.

In `@packages/middleware/package.json`:
- Around line 36-41: Replace the wildcard peer dependency for `@superfunctions/db`
in package.json with a semantically meaningful minimum version range (e.g.,
">=0.1.0" or a caret range like "^0.1.0") so consumers get an API compatibility
guarantee; keep the peerDependenciesMeta optional setting as-is and update the
"peerDependencies" entry for "@superfunctions/db" accordingly.

In `@packages/middleware/src/index.test.ts`:
- Line 11: Update the test title string for the failing "it" block currently
named "should export RateLimiter interface" to accurately describe what the test
asserts — rename it to something like "should expose `check` and `reset`
methods" since the assertions check method presence on an instance (verify the
`check` and `reset` methods on the RateLimiter instance).

In `@packages/middleware/src/rate-limit.test.ts`:
- Around line 200-205: The TTL expiration test for "expires keys after TTL" is
flaky due to a real sleep; switch the test to use fake timers from the test
runner (e.g., jest/vi): call useFakeTimers() at the start of the test, call
store.set({ key: 'key1', value: 'value1', ttlSeconds: 0.01 }) as before, then
advance timers by >=20ms with advanceTimersByTime(20) and flush pending
microtasks if needed (e.g., await Promise.resolve() or nextTick) before
asserting store.get('key1') is null, and finally restore real timers with
useRealTimers()/clearAllTimers; reference the test name "expires keys after TTL"
and the methods store.set and store.get so the change is applied in that test
only.

In `@packages/middleware/src/rate-limit.ts`:
- Around line 151-169: The withKeyLock implementation can leak map entries for
high-cardinality keys because cleanup only runs in the function's finally and
may miss removing the entry when promises chain; update withKeyLock to attach a
cleanup handler to the queued Promise itself (e.g., queued.finally(...)) that
checks keyLocks.get(key) === queued and deletes the key when queued settles, and
keep the existing resolveLock/await prior behavior; reference symbols:
withKeyLock, keyLocks, queued, resolveLock.
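The queued-promise cleanup described above can be sketched like this. It is a simplified withKeyLock; the real version in rate-limit.ts differs in its surrounding details, but the key point is that cleanup is attached to the queued promise itself.

```typescript
// Simplified per-key lock; cleanup runs via queued.finally, not only in the
// caller's finally block, so entries for high-cardinality keys do not leak.
const keyLocks = new Map<string, Promise<void>>();

async function withKeyLock<T>(key: string, fn: () => Promise<T>): Promise<T> {
  const prior = keyLocks.get(key) ?? Promise.resolve();

  let resolveLock!: () => void;
  const queued = new Promise<void>((resolve) => {
    resolveLock = resolve;
  });
  keyLocks.set(key, queued);

  // The map entry is removed exactly once when this link in the chain
  // settles, and only if a newer promise has not already replaced it.
  queued.finally(() => {
    if (keyLocks.get(key) === queued) {
      keyLocks.delete(key);
    }
  });

  await prior;
  try {
    return await fn();
  } finally {
    resolveLock();
  }
}
```

Callers for the same key still execute strictly in order, and once the last queued promise for a key settles the map entry disappears.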

In `@packages/storage-azure/package.json`:
- Line 33: The dependency entry for `@superfunctions/storage` in package.json
currently uses "*" which is inconsistent with other packages (like storage-s3);
replace the "*" specifier with the same semver range used across the repo (e.g.,
a caret range such as ^X.Y.Z or the workspace: range your monorepo uses) so
`@superfunctions/storage` matches the project's versioning policy; update the
dependency value for "@superfunctions/storage" in
packages/storage-azure/package.json and ensure it mirrors the specifier used in
storage-s3.

In `@packages/storage-azure/README.md`:
- Around line 44-53: Clarify in the "Azure Block Blob Semantics" section that
uploadId is an opaque client-provided identifier and must not be parsed,
transformed, or regenerated by integrators; update the description mapping for
**uploadId** to explicitly state it is an opaque value used only to correlate
parts and should be treated verbatim by clients and servers to avoid integration
errors.

In `@packages/storage-azure/src/adapter.ts`:
- Around line 335-340: The catch block that tests "if (err instanceof Error &&
'code' in err && err.code === 'STORAGE_NOT_FOUND')" is redundant because both
branches rethrow err; simplify by removing the conditional and either remove the
catch entirely to let errors bubble or replace it with a single rethrow (catch
(err: unknown) { throw err; }) in the same function scope where the current
try/catch appears (the catch handling "err" in adapter.ts).

In `@packages/storage-gcs/src/adapter.test.ts`:
- Around line 7-13: The test for createGCSStorageAdapter calling
runConformanceTests only asserts result.ok and loses failure details; update the
assertion to include result.errors or result.details in the failure message (or
log them) so failing checks are visible. Locate the test around
createGCSStorageAdapter and runConformanceTests, and change the expect to
something like expecting result.ok to be true while passing
result.errors/result.details into the expect message or call
console.error(result.errors) before the assertion to surface which checks
failed.

In `@packages/storage-gcs/src/adapter.ts`:
- Around line 58-65: The deleteObject method contains a redundant try-catch that
only rethrows errors; remove the try-catch wrapper in async deleteObject (which
calls bucket.file(input.key) and await file.delete({ ignoreNotFound: true })) so
that errors propagate naturally and the ignoreNotFound option handles missing
objects—leave the bucket.file and file.delete call as-is and return
Promise<void> from the async function without catching and rethrowing.
- Around line 171-176: The catch block that inspects and rethrows err is
redundant because createNotFoundError() already throws; remove the entire
try-catch surrounding that call (or at minimum remove the catch and let the
error propagate) so the function/program doesn't contain an identical rethrow
branch; look for the createNotFoundError call and the catch (err: unknown) block
in adapter.ts and delete the catch block (or the encompassing try) to simplify
error handling.

In `@packages/storage-gcs/src/index.ts`:
- Line 1: The file currently re-exports everything from adapter.js using "export
*", which can accidentally expose new symbols; replace the wildcard with
explicit named exports from adapter.js (e.g., export { AdapterClass,
createAdapter, TYPES } — use the actual exported identifiers defined in
adapter.js) so only intended symbols are exposed, updating the export list in
packages/storage-gcs/src/index.ts to match adapter.js's public API.

In `@packages/storage-minio/src/adapter.test.ts`:
- Around line 139-143: The test title claims to verify path-style addressing but
only checks adapter.name; either remove the test or update it to actually assert
forcePathStyle on the underlying S3 client: locate the adapter instance (e.g.,
adapter or adapter.s3Client / adapter.s3) and assert the S3 client's
configuration includes forcePathStyle === true (or call a URL-generation method
like adapter.getUrl(bucket, key) and assert the returned URL uses path-style
host/path format rather than virtual-hosted style). Update the spec to reference
adapter.name only if you remove the path-style assertion.

In `@packages/storage-r2/package.json`:
- Line 34: Update the dependency spec for the internal package to match other
adapter packages: replace the loose version "*" for the
"@superfunctions/storage" entry in package.json with the caret-pinned version
"^0.1.0" so it becomes consistent and allows semver-compatible updates; ensure
the change is applied to the "@superfunctions/storage" dependency line in this
package.json.

In `@packages/storage-r2/src/adapter.test.ts`:
- Around line 127-134: The tests inside the "R2-specific limitations" describe
block are only restating docs (checking adapter.capabilities.signedUploadUrls
and adapter.name) rather than validating behavior; replace these doc-style it
blocks with either real behavior assertions—e.g., in the signedUploadUrls test
actually generate a signed upload URL via the adapter and assert that uploads
with a mismatched Content-Type are rejected or that the URL contains the
content-type constraint—and for the S3-compatibility test use the adapter to
exercise an S3-compatible API edge case (presigned PUT/GET headers or path vs
host styles) or convert the two it blocks into plain comments documenting the
limitation; locate and update the tests referencing
adapter.capabilities.signedUploadUrls and adapter.name inside the
describe('R2-specific limitations') block accordingly.

In `@packages/storage-r2/src/adapter.ts`:
- Around line 13-27: In createR2StorageAdapter, validate that required config
fields (accountId, accessKeyId, secretAccessKey) from the R2StorageConfig are
present and non-empty before building endpoint or instantiating the S3Client; if
any are missing/empty throw a clear error (or return a failed result) naming the
missing field(s) so callers can fix configuration, and ensure you still accept
optional jurisdiction as-is when computing endpoint. Use the existing symbols
(accountId, accessKeyId, secretAccessKey, jurisdiction, endpoint, S3Client,
createR2StorageAdapter) to locate where to add the checks.
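A minimal sketch of the suggested guard, assuming a config shape like R2StorageConfig; the exact field names and the endpoint format used here should be checked against the real adapter.ts before adopting them.

```typescript
// Stand-in for the real R2StorageConfig; illustrative only.
interface R2StorageConfigSketch {
  accountId: string;
  accessKeyId: string;
  secretAccessKey: string;
  jurisdiction?: string;
}

function validateR2ConfigAndBuildEndpoint(config: R2StorageConfigSketch): string {
  // Name every missing/empty required field so callers can fix configuration.
  const missing = (["accountId", "accessKeyId", "secretAccessKey"] as const)
    .filter((field) => !config[field] || config[field].trim() === "");
  if (missing.length > 0) {
    throw new Error(`R2 storage config is missing required field(s): ${missing.join(", ")}`);
  }
  // jurisdiction remains optional and only changes the endpoint host.
  const host = config.jurisdiction
    ? `${config.accountId}.${config.jurisdiction}.r2.cloudflarestorage.com`
    : `${config.accountId}.r2.cloudflarestorage.com`;
  return `https://${host}`;
}
```

The S3Client would then be instantiated only after this check succeeds, so misconfiguration fails fast with an actionable message instead of an opaque signing error.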

In `@packages/storage-s3/package.json`:
- Line 34: Update the dependency specifier for `@superfunctions/storage` in the
package.json that currently has "@superfunctions/storage": "*" to use a
consistent semver range (e.g., "^0.1.0") matching other adapters; locate the
dependency entry and replace the "*" value with the chosen caret range so all
adapter packages use the same predictable versioning.

In `@packages/storage-s3/src/adapter.test.ts`:
- Around line 7-14: The test for createS3StorageAdapter currently asserts
runConformanceTests returned ok without showing failure details; update the
assertion in the test that calls runConformanceTests(adapter) to include the
failure information in the expect message (e.g., include result.error or
JSON.stringify(result) in the expect(...).toBe(true) message) so that when
runConformanceTests (the conformance harness) fails you get the detailed failure
output for debugging.

In `@packages/storage/src/index.test.ts`:
- Around line 4-7: The test in it('should export types') currently only checks
that the module imported from './index.js' is defined; instead assert the actual
named exports from that entrypoint by importing './index.js' and verifying
specific export symbols (e.g., the public API names your package exposes) are
present — use assertions such as expect(mod).toHaveProperty('<ExportName>') or
expect(Object.keys(mod)).toEqual(expect.arrayContaining(['<ExportA>','<ExportB>']))
to assert concrete exports from the module rather than only checking mod is
defined; update the test to reference the real export identifiers your package
exposes.

In `@packages/storage/src/internal/s3-compatible.ts`:
- Around line 64-70: The current code defaults ContentLength to 0 which
misrepresents unknown sizes; update the adapter to propagate unknown size
instead of defaulting: return size as undefined when response.ContentLength is
undefined, and update the StorageObjectStat type (make size?: number) and any
callers/consumers of the object (e.g., functions that read size) to handle an
optional size; alternatively, if you prefer a strict approach, throw an error
from the method that issues HeadObjectCommand when ContentLength is
missing—choose one strategy and apply it consistently across the
StorageObjectStat definition and all usages.
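The "propagate unknown size" option might look like this sketch; StorageObjectStat is stood in by a simplified type, and the response shape mirrors what HeadObject-style calls return.

```typescript
// Simplified stand-in for the real StorageObjectStat, with size made optional.
interface StorageObjectStatSketch {
  key: string;
  size?: number; // undefined when the provider did not report ContentLength
  contentType?: string;
}

interface HeadObjectResponseLike {
  ContentLength?: number;
  ContentType?: string;
}

function toStat(key: string, response: HeadObjectResponseLike): StorageObjectStatSketch {
  return {
    key,
    // Do not default to 0: leaving size undefined lets callers distinguish
    // "empty object" from "size not reported by the provider".
    size: response.ContentLength,
    contentType: response.ContentType,
  };
}
```

Any consumer that reads size would then need an explicit undefined check, which is the point: the ambiguity becomes visible at the type level instead of being masked by a fake 0.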

In `@packages/storage/src/router.test.ts`:
- Around line 68-70: The tests use optional chaining on required storage methods
which can mask regressions; change occurrences of
storage.createMultipartUpload?. and storage.signDownloadUrl?. (and similar uses
around lines 127-129) to assert the methods exist and then call them normally
(e.g., verify typeof storage.createMultipartUpload === 'function' or
expect(storage.createMultipartUpload).toBeDefined() before invoking) so
missing-method failures surface immediately; update the test to call
storage.createMultipartUpload(...) and storage.signDownloadUrl(...) directly
after the assertion.
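A sketch of the stricter assertion pattern, using a hypothetical stand-in for the routed adapter; the real test would assert on the actual storage instance from router.test.ts.

```typescript
// Minimal stand-in for the adapter surface under test.
type MaybeStorage = {
  createMultipartUpload?: (input: { key: string }) => Promise<{ uploadId: string }>;
};

async function assertAndCall(storage: MaybeStorage): Promise<string> {
  // Fail loudly if the method is missing, instead of optional chaining
  // silently skipping the call and letting the test pass vacuously.
  if (typeof storage.createMultipartUpload !== "function") {
    throw new Error("createMultipartUpload is not implemented on this adapter");
  }
  const { uploadId } = await storage.createMultipartUpload({ key: "k" });
  return uploadId;
}
```

With this shape, a regression that drops createMultipartUpload surfaces as an immediate failure rather than an assertion that never ran.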

In `@packages/storage/src/router.ts`:
- Around line 32-40: mergeCapabilities currently uses every() which only
advertises a capability if all underlying adapters support it; update the
codebase and docs to avoid misleading consumers by either (A) changing
mergeCapabilities to compute per-capability availability more appropriately
(e.g., union/any() if you want advertised features when at least one adapter
supports them) or (B) keep the conservative every() behavior but add a clear
comment above mergeCapabilities and update documentation and call sites to
direct consumers to use the per-target resolver (capabilitiesForTarget or
getStorageCapabilities) for accurate target-specific capability checks; locate
mergeCapabilities and any callers that read adapter.capabilities and update them
or their docs to reference capabilitiesForTarget/getStorageCapabilities.
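The two merge strategies can be contrasted in a small sketch; the capability fields are abbreviated and illustrative, since the real StorageAdapterCapabilities in router.ts has more flags.

```typescript
// Illustrative capability shape, not the real one.
interface CapsSketch {
  multipart: boolean;
  signedUploadUrls: boolean;
}

// Option B (current, conservative): a capability is advertised only when
// every underlying adapter supports it.
function mergeCapabilitiesEvery(all: CapsSketch[]): CapsSketch {
  return {
    multipart: all.every((c) => c.multipart),
    signedUploadUrls: all.every((c) => c.signedUploadUrls),
  };
}

// Option A (union): advertise anything at least one adapter supports; callers
// must still consult a per-target resolver before relying on a capability.
function mergeCapabilitiesAny(all: CapsSketch[]): CapsSketch {
  return {
    multipart: all.some((c) => c.multipart),
    signedUploadUrls: all.some((c) => c.signedUploadUrls),
  };
}
```

Either choice is defensible; what matters is documenting which one the routed adapter uses so consumers know whether the merged capabilities are a floor or a ceiling.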

In `@packages/storage/src/types.ts`:
- Around line 88-97: Update the method signatures to use generic stream element
types instead of bare any by adding a type parameter (with sensible default) to
both methods: change openUploadStream to openUploadStream?<T =
Uint8Array>(input: { key: string; target?: StorageTargetName; contentType?:
string; }): Promise<WritableStream<T>> and change openDownloadStream to
openDownloadStream?<T = Uint8Array>(input: { key: string; target?:
StorageTargetName; range?: { start: number; endInclusive: number }; }):
Promise<ReadableStream<T>> so consumers can opt into a specific chunk type while
preserving a safe default; update any related type imports/exports in types.ts
to reflect the generics.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: d8eb8541-1db6-4b68-9954-74251ef7b47f

📥 Commits

Reviewing files that changed from the base of the PR and between e105c93 and 32cddda.

📒 Files selected for processing (61)
  • packages/files/package.json
  • packages/files/src/__tests__/file-provider-conformance.test.ts
  • packages/files/src/index.ts
  • packages/files/src/types.ts
  • packages/files/tsconfig.json
  • packages/middleware/package.json
  • packages/middleware/src/index.test.ts
  • packages/middleware/src/index.ts
  • packages/middleware/src/rate-limit.test.ts
  • packages/middleware/src/rate-limit.ts
  • packages/middleware/tsconfig.json
  • packages/middleware/vitest.config.ts
  • packages/storage-azure/README.md
  • packages/storage-azure/package.json
  • packages/storage-azure/src/adapter.test.ts
  • packages/storage-azure/src/adapter.ts
  • packages/storage-azure/src/index.ts
  • packages/storage-azure/tsconfig.json
  • packages/storage-azure/vitest.config.ts
  • packages/storage-gcs/package.json
  • packages/storage-gcs/src/adapter.test.ts
  • packages/storage-gcs/src/adapter.ts
  • packages/storage-gcs/src/index.ts
  • packages/storage-gcs/tsconfig.json
  • packages/storage-gcs/vitest.config.ts
  • packages/storage-local/package.json
  • packages/storage-local/src/adapter.test.ts
  • packages/storage-local/src/adapter.ts
  • packages/storage-local/src/index.ts
  • packages/storage-local/tsconfig.json
  • packages/storage-local/vitest.config.ts
  • packages/storage-minio/package.json
  • packages/storage-minio/src/adapter.test.ts
  • packages/storage-minio/src/adapter.ts
  • packages/storage-minio/src/index.ts
  • packages/storage-minio/tsconfig.json
  • packages/storage-r2/package.json
  • packages/storage-r2/src/adapter.test.ts
  • packages/storage-r2/src/adapter.ts
  • packages/storage-r2/src/index.ts
  • packages/storage-r2/tsconfig.json
  • packages/storage-s3/package.json
  • packages/storage-s3/src/adapter.test.ts
  • packages/storage-s3/src/adapter.ts
  • packages/storage-s3/src/index.ts
  • packages/storage-s3/tsconfig.json
  • packages/storage-s3/vitest.config.ts
  • packages/storage/README.md
  • packages/storage/package.json
  • packages/storage/src/conformance.test.ts
  • packages/storage/src/conformance.ts
  • packages/storage/src/index.test.ts
  • packages/storage/src/index.ts
  • packages/storage/src/internal/errors.ts
  • packages/storage/src/internal/s3-compatible.test.ts
  • packages/storage/src/internal/s3-compatible.ts
  • packages/storage/src/router.test.ts
  • packages/storage/src/router.ts
  • packages/storage/src/types.ts
  • packages/storage/tsconfig.json
  • packages/storage/vitest.config.ts

Comment on lines +31 to +34
"dependencies": {
  "@google-cloud/storage": "^7.0.0",
  "@superfunctions/storage": "*"
},

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail
python - <<'PY'
import json, pathlib
for p in pathlib.Path(".").rglob("package.json"):
    if "node_modules" in p.parts:
        continue
    data = json.loads(p.read_text())
    for section in ("dependencies", "peerDependencies"):
        for name, ver in data.get(section, {}).items():
            if name.startswith("@superfunctions/") and str(ver).strip() == "*":
                print(f"{p}: {section}.{name}={ver}")
PY

Repository: 21nCo/super-functions

Length of output: 3451


🏁 Script executed:

#!/bin/bash
# Check if there's a version constraint defined elsewhere for `@superfunctions/storage`
rg "@superfunctions/storage" --type json -A 2 packages/storage-*/package.json | head -30

Repository: 21nCo/super-functions

Length of output: 1638


🏁 Script executed:

#!/bin/bash
# Check root package.json for workspace or version constraints
if [ -f package.json ]; then
  echo "=== Root package.json ===" 
  cat package.json | head -50
fi

# Check if there's a lerna.json or workspace configuration
if [ -f lerna.json ]; then
  echo "=== lerna.json ===" 
  cat lerna.json
fi

Repository: 21nCo/super-functions

Length of output: 1866


🏁 Script executed:

#!/bin/bash
# Find and read the `@superfunctions/storage` package.json to check its version
find . -path "*/node_modules" -prune -o -name "package.json" -type f -print | xargs grep -l '"name".*"@superfunctions/storage"' | grep -v node_modules

Repository: 21nCo/super-functions

Length of output: 96


🏁 Script executed:

#!/bin/bash
# Check the version of `@superfunctions/storage` package
cat packages/storage/package.json | head -20

Repository: 21nCo/super-functions

Length of output: 680


Wildcard runtime dependency on @superfunctions/storage affects all storage adapter packages.

All storage adapter packages (gcs, azure, minio, r2, s3) use "@superfunctions/storage": "*" in dependencies, which removes semver safety for a published package and can resolve incompatible upstream releases. Since @superfunctions/storage is at version 0.1.0, use a compatible version constraint instead.

🔧 Suggested changes

Fix all storage adapter packages:

  • packages/storage-gcs/package.json
  • packages/storage-azure/package.json
  • packages/storage-minio/package.json
  • packages/storage-r2/package.json
  • packages/storage-s3/package.json

Replace "@superfunctions/storage": "*" with "@superfunctions/storage": "^0.1.0" in each file's dependencies section.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/storage-gcs/package.json` around lines 31 - 34, Update the storage
adapter packages to remove the wildcard dependency on `@superfunctions/storage`:
replace the dependency entry "@superfunctions/storage": "*" with a
semver-constrained version such as "@superfunctions/storage": "^0.1.0" in each
package.json for the storage adapters (refer to the dependency key
"@superfunctions/storage" in packages/storage-gcs/package.json and the
equivalent entries in storage-azure, storage-minio, storage-r2, and storage-s3)
so the published packages retain semver safety and avoid resolving incompatible
upstream releases.

Comment thread packages/storage-gcs/src/adapter.ts
Comment thread packages/storage-local/src/adapter.ts
Comment thread packages/storage-minio/package.json
Comment thread packages/storage-minio/src/adapter.ts Outdated
Comment thread packages/storage-s3/src/adapter.ts
@cubic-dev-ai cubic-dev-ai bot left a comment

2 issues found across 8 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="packages/storage-minio/src/adapter.test.ts">

<violation number="1" location="packages/storage-minio/src/adapter.test.ts:154">
P3: This test’s assertion does not validate the behavior described by the test name (explicit port application), so it can pass without actually verifying port handling.</violation>
</file>

<file name="packages/storage-s3/src/adapter.ts">

<violation number="1" location="packages/storage-s3/src/adapter.ts:16">
P2: Credential presence validation is inconsistent with credential assignment; empty-string keys bypass the new check and silently fall back to ambient credentials.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Comment thread packages/storage-s3/src/adapter.ts
useSSL: false,
});

expect(customAdapter.name).toBe('minio');
@cubic-dev-ai cubic-dev-ai bot Apr 5, 2026

P3: This test’s assertion does not validate the behavior described by the test name (explicit port application), so it can pass without actually verifying port handling.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/storage-minio/src/adapter.test.ts, line 154:

<comment>This test’s assertion does not validate the behavior described by the test name (explicit port application), so it can pass without actually verifying port handling.</comment>

<file context>
@@ -141,6 +141,19 @@ describe('@superfunctions/storage-minio', () => {
+        useSSL: false,
+      });
+
+      expect(customAdapter.name).toBe('minio');
+    });
+
</file context>

Comment on lines +27 to +33
completeUploadPart(input: {
  uploadSessionId: string;
  partNumber: number;
  etag: string;
  size: number;
  checksumSha256Base64?: string;
}, ctx: FileProviderContext): Promise<void>;

Type mismatch: completeUploadPart is declared to return Promise<void>, but the conformance test at file-provider-conformance.test.ts:120-125 expects it to resolve to an object with a recorded property. Implementers who follow the interface as declared will therefore fail the conformance test; the declaration and the test need to agree.

// Fix: Update return type to match expected behavior
completeUploadPart(input: {
  uploadSessionId: string;
  partNumber: number;
  etag: string;
  size: number;
  checksumSha256Base64?: string;
}, ctx: FileProviderContext): Promise<{ recorded: boolean }>;
Suggested change
   completeUploadPart(input: {
     uploadSessionId: string;
     partNumber: number;
     etag: string;
     size: number;
     checksumSha256Base64?: string;
-  }, ctx: FileProviderContext): Promise<void>;
+  }, ctx: FileProviderContext): Promise<{ recorded: boolean }>;

Spotted by Graphite


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 8a840d65ac

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread packages/storage-azure/src/adapter.ts Outdated
Comment on lines +86 to +89
const capabilities: StorageAdapterCapabilities = {
  signedUploadUrls: true,
  signedDownloadUrls: true,
  multipart: true,

P1 Badge Gate signed/multipart capabilities on signing credentials

This adapter always advertises signedUploadUrls, signedDownloadUrls, and multipart as supported, but createContainerClient can return without a credential (for example with accountName + sasToken), and in that case the signing methods throw immediately with SAS URL generation requires ... accountKey. Any caller that selects behavior from capabilities can choose signed multipart mode and then fail at runtime. Capabilities should reflect the active auth mode, or the methods should support the SAS-only path.


Comment thread packages/storage-azure/src/adapter.ts Outdated
Comment on lines +204 to +206
// Stateless: uploadId encodes the key
const uploadId = Buffer.from(input.key).toString('base64');
return { uploadId };

P1 Badge Make Azure upload IDs unique per multipart session

createMultipartUpload derives uploadId only from key, while block IDs are derived only from partNumber, so separate multipart uploads targeting the same key are not isolated. In concurrent uploads, parts from one session can overwrite the other's uncommitted blocks and either completion can commit mixed data. Upload IDs need per-session entropy and block IDs should be scoped to that session.


Comment thread packages/storage-azure/src/adapter.ts Outdated

if (accountName && sasToken) {
  const blobServiceClient = new BlobServiceClient(
    `${endpoint || `https://${accountName}.blob.core.windows.net`}?${sasToken}`

P2 Badge Normalize SAS token before composing service URL

The SAS-token path always concatenates ?${sasToken}. If a token already includes a leading ? (a common Azure portal format), this produces ??..., which changes the first query parameter name and breaks authentication. Trimming a leading ? (or building via URL APIs) avoids malformed URLs and makes configuration robust.


@cubic-dev-ai cubic-dev-ai bot left a comment

1 issue found across 2 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="packages/storage-azure/src/adapter.ts">

<violation number="1" location="packages/storage-azure/src/adapter.ts:54">
P2: Legacy uploadIds (base64 of the key) will be rejected when the key is valid JSON (e.g., "null" or "123") because the fallback only runs on parse errors. Move the legacy fallback outside the catch so any non-conforming JSON also uses the legacy path.</violation>
</file>


Comment thread packages/storage-azure/src/adapter.ts
@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: c86e18f5d7


Comment thread packages/storage-azure/src/adapter.ts Outdated
Comment on lines +205 to +208
const headers: Record<string, string> = {};
if (input.constraints?.contentType) {
  headers['x-ms-blob-content-type'] = input.constraints.contentType;
}

P1 Badge Add x-ms-blob-type to Azure signed upload headers

The signed upload URL path builds a plain PUT-blob SAS URL but does not return the required x-ms-blob-type: BlockBlob header, so clients that use this adapter output directly (without Azure-specific custom headers) will fail uploads with 400-level errors from Blob Storage. This breaks the adapter’s provider-agnostic signed-upload contract for normal direct-to-Azure uploads.


}
}

const nodeStream = createWriteStream(filePath);

P2 Badge Handle write-stream errors outside backpressure paths

This upload stream only listens for error when write() returns backpressure or during close(), so an asynchronous filesystem failure after a successful write() return (for example ENOSPC or I/O failure) can emit an unhandled stream error and crash the process while the write promise has already resolved. A persistent error listener should be attached for the stream lifetime and propagated to subsequent operations.


@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (3)
packages/storage-azure/src/adapter.ts (3)

318-321: Clarify misleading comment about etag usage.

The comment states "the etag from each part is actually the blockId we generated" but the implementation actually ignores the etag values entirely and regenerates the block IDs from partNumber using generateBlockId. Consider clarifying this behavior.

📝 Suggested comment clarification
-      // In Azure, the etag from each part is actually the blockId we generated
-      // Sort parts and commit blocks
+      // Sort parts by partNumber and regenerate block IDs deterministically.
+      // The etag values are ignored since block IDs are derived from uploadId + partNumber.
       const sortedParts = [...input.parts].sort((a, b) => a.partNumber - b.partNumber);
       const blockList = sortedParts.map((part) => generateBlockId(input.uploadId, part.partNumber));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/storage-azure/src/adapter.ts` around lines 318 - 321, The comment
misleadingly says "the etag from each part is actually the blockId we generated"
while the code actually ignores etag and regenerates block IDs from partNumber;
update the comment near sortedParts/blockList (reference input.parts,
sortedParts, generateBlockId(input.uploadId, part.partNumber), blockList,
uploadId, part.partNumber) to state that we sort by partNumber and regenerate
block IDs using generateBlockId (thus ignoring any etag values) so the behavior
is explicit and not confusing.

207-214: Minor: Redundant conditional check.

The headers object always contains at least 'x-ms-blob-type': 'BlockBlob', so Object.keys(headers).length > 0 is always true and the ternary will never return undefined.

♻️ Simplify by removing the conditional
-      return { url, headers: Object.keys(headers).length > 0 ? headers : undefined };
+      return { url, headers };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/storage-azure/src/adapter.ts` around lines 207 - 214, The return
uses a redundant conditional because the headers object in getUploadUrl (the
local variable headers containing 'x-ms-blob-type': 'BlockBlob') is never empty;
simplify the return by returning { url, headers } directly (remove the
Object.keys(headers).length > 0 ? ... : undefined check) so the code always
returns the headers object and avoids the unnecessary ternary.

379-391: Use createNotFoundError() for consistent error handling.

The error is manually constructed here instead of using the imported createNotFoundError() helper, which is inconsistent with statObject. Additionally, the catch block at lines 386-391 is redundant since exists() returns a boolean and shouldn't throw a not-found error—it just re-throws whatever it catches.

♻️ Simplify using the shared error helper
-      try {
-        const exists = await blobClient.exists();
-        if (!exists) {
-          const notFoundError = new Error('Object not found') as Error & { code: string };
-          notFoundError.code = 'STORAGE_NOT_FOUND';
-          throw notFoundError;
-        }
-      } catch (err: unknown) {
-        if (err instanceof Error && 'code' in err && err.code === 'STORAGE_NOT_FOUND') {
-          throw err;
-        }
-        throw err;
-      }
+      const exists = await blobClient.exists();
+      if (!exists) {
+        throw createNotFoundError();
+      }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/storage-azure/src/adapter.ts` around lines 379 - 391, Replace the
manual construction and redundant catch with the shared helper: call
blobClient.exists() and if it returns false throw createNotFoundError() (the
same helper used by statObject) instead of creating a new Error with code
'STORAGE_NOT_FOUND'; remove the surrounding try/catch that only re-throws errors
from exists() since exists() returns a boolean and the catch is
redundant—preserve other error propagation as-is so only the not-found path uses
createNotFoundError().
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@packages/storage-azure/src/adapter.ts`:
- Around line 318-321: The comment misleadingly says "the etag from each part is
actually the blockId we generated" while the code actually ignores etag and
regenerates block IDs from partNumber; update the comment near
sortedParts/blockList (reference input.parts, sortedParts,
generateBlockId(input.uploadId, part.partNumber), blockList, uploadId,
part.partNumber) to state that we sort by partNumber and regenerate block IDs
using generateBlockId (thus ignoring any etag values) so the behavior is
explicit and not confusing.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: e688fc2a-2493-4c84-86d3-cc2e0366c8db

📥 Commits

Reviewing files that changed from the base of the PR and between 32cddda and 37ecbd1.

📒 Files selected for processing (11)
  • packages/storage-azure/src/adapter.test.ts
  • packages/storage-azure/src/adapter.ts
  • packages/storage-gcs/package.json
  • packages/storage-gcs/src/adapter.ts
  • packages/storage-local/src/adapter.ts
  • packages/storage-minio/package.json
  • packages/storage-minio/src/adapter.test.ts
  • packages/storage-minio/src/adapter.ts
  • packages/storage-s3/src/adapter.test.ts
  • packages/storage-s3/src/adapter.ts
  • release-packages.json
✅ Files skipped from review due to trivial changes (5)
  • packages/storage-gcs/package.json
  • packages/storage-s3/src/adapter.test.ts
  • packages/storage-local/src/adapter.ts
  • release-packages.json
  • packages/storage-gcs/src/adapter.ts
🚧 Files skipped from review as they are similar to previous changes (5)
  • packages/storage-minio/package.json
  • packages/storage-minio/src/adapter.ts
  • packages/storage-minio/src/adapter.test.ts
  • packages/storage-azure/src/adapter.test.ts
  • packages/storage-s3/src/adapter.ts


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 37ecbd1089

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread: packages/storage/src/router.ts (outdated)
Comment on lines +88 to +89
```typescript
const { adapter, target } = pickAdapter(adapterEntries, defaultTarget, input.target);
return adapter.statObject({ ...input, target });
```

P2: Stop forwarding outer target into delegated adapter calls

After resolving input.target via pickAdapter, this method forwards the same target back into the selected adapter ({ ...input, target }), and the same pattern is repeated across routed operations. If the selected adapter is itself routed (with a different target namespace), the injected outer target is interpreted by the inner router and causes false StorageRoutingError failures, so nested/composed routers cannot work correctly. Once routing is decided at this layer, delegation should not overwrite target with the outer alias.

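The fix the finding describes can be sketched as a small helper that strips the already-resolved outer alias before delegation; the types are simplified from the PR's router and the helper name is illustrative:

```typescript
// Once this routing layer has resolved the outer alias, drop it before
// delegating so a nested routed adapter applies its own default routing
// instead of misinterpreting the outer target name.
function delegateInput<T extends { target?: string }>(input: T): Omit<T, 'target'> {
  const { target: _outerAlias, ...rest } = input;
  return rest;
}

// Example: the inner adapter never sees the outer "hot" alias.
const example = delegateInput({ key: 'avatars/1.png', target: 'hot' });
```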

Comment on lines +130 to +133
```typescript
const ok = gcstream.write(buffer);
if (ok) {
  resolve();
} else {
```

P2: Attach a persistent error listener for GCS upload streams

In openUploadStream, when gcstream.write() returns true, the write promise resolves immediately and no error listener remains attached until close() is called. If an asynchronous GCS write failure occurs in that window (for example transport or storage-side failure), the stream can emit an unhandled error, potentially crashing the process while the earlier write already reported success. A lifetime error handler should be installed and propagated to subsequent writes/close.

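One way to implement the suggested lifetime error handler is a small wrapper that remembers any asynchronous stream error and surfaces it on the next write or close; this is an illustrative sketch, not the PR's code:

```typescript
import { PassThrough, type Writable } from 'node:stream';

// Wraps a Node writable with a single lifetime 'error' listener so an
// error emitted between writes is remembered and reported by the next
// write()/close() instead of crashing the process unhandled.
function trackStreamErrors(stream: Writable) {
  let pendingError: Error | null = null;
  stream.on('error', (err: Error) => {
    pendingError = err; // sticky: held until a later call observes it
  });
  return {
    write(chunk: Buffer): Promise<void> {
      if (pendingError) return Promise.reject(pendingError);
      return new Promise<void>((resolve, reject) => {
        const ok = stream.write(chunk, (err) => (err ? reject(err) : resolve()));
        if (ok) resolve();
      });
    },
    close(): Promise<void> {
      if (pendingError) return Promise.reject(pendingError);
      return new Promise<void>((resolve, reject) => {
        stream.end(() => (pendingError ? reject(pendingError) : resolve()));
      });
    },
  };
}

// Example wiring, as in openUploadStream: wrap the stream fed to the SDK.
const writer = trackStreamErrors(new PassThrough());
```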


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 8492cd0627


Comment on lines +149 to +153
```typescript
const keyLocks = new Map<string, Promise<void>>();

async function withKeyLock<T>(key: string, work: () => Promise<T>): Promise<T> {
  const prior = keyLocks.get(key) ?? Promise.resolve();
  let resolveLock!: () => void;
```

P1: Use atomic persistence ops for rate-limit checks

The limiter serializes requests with an in-memory keyLocks map, but that lock only works within a single process. When multiple instances share the same persistence backend, each instance can read the same old state and independently allow a request before either write lands, so limits are exceeded under concurrent traffic. This affects all three algorithms because they perform non-atomic read/modify/write cycles; the shared-store path needs atomic primitives (e.g., incr/CAS/transaction) rather than process-local locking.

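The shape of the fix is to route each check through one atomic store primitive instead of a read-modify-write cycle. A minimal sketch for the fixed-window case, assuming the backend can expose a single increment-with-window operation (as Redis can via a script); the interface and names are illustrative:

```typescript
// One atomic operation per request: no cross-instance race, because the
// store itself serializes the increment.
interface AtomicCounterStore {
  incr(key: string, windowMs: number, now: number): Promise<number>;
}

// In-memory reference implementation (single process); a shared backend
// would implement the same contract atomically (e.g. INCR + EXPIRE).
function memoryStore(): AtomicCounterStore {
  const buckets = new Map<string, { count: number; resetAt: number }>();
  return {
    async incr(key, windowMs, now) {
      const bucket = buckets.get(key);
      if (!bucket || now >= bucket.resetAt) {
        buckets.set(key, { count: 1, resetAt: now + windowMs });
        return 1;
      }
      bucket.count += 1;
      return bucket.count;
    },
  };
}

async function checkFixedWindow(
  store: AtomicCounterStore,
  key: string,
  limit: number,
  windowMs: number,
  now = Date.now(),
): Promise<boolean> {
  return (await store.incr(key, windowMs, now)) <= limit;
}
```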

Comment on lines +326 to +329
```typescript
async abortMultipartUpload(input: { key: string; uploadId: string }): Promise<void> {
  // Azure automatically garbage-collects uncommitted blocks after 7 days
  // No explicit cleanup needed for stateless tracking
},
```

P1: Implement real abort behavior for Azure multipart uploads

abortMultipartUpload is currently a no-op, so calling abort does not invalidate the session or clear uncommitted blocks. A client that uploaded parts can still call completeMultipartUpload later with the same uploadId and successfully create/overwrite the blob, which violates expected abort semantics and can persist data that callers explicitly canceled. Abort should either perform server-side cleanup or record tombstones so completion is rejected after abort.

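The tombstone option the finding mentions can be sketched as follows; a production version would persist the tombstone (for example as a marker blob) rather than hold it in process memory, and all names here are illustrative:

```typescript
// Record aborted session ids so completeMultipartUpload can be rejected,
// since Azure offers no call to discard uncommitted blocks directly.
const abortedUploads = new Set<string>();

async function abortMultipartUpload(input: { key: string; uploadId: string }): Promise<void> {
  abortedUploads.add(input.uploadId);
  // Uncommitted blocks are still garbage-collected by Azure after 7 days.
}

// Call this at the top of completeMultipartUpload.
function assertNotAborted(uploadId: string): void {
  if (abortedUploads.has(uploadId)) {
    throw new Error(`multipart upload ${uploadId} was aborted`);
  }
}
```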


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 3b06713efc


```typescript
  contentType?: string;
  metadata?: Record<string, string>;
}): Promise<{ uploadId: string }> {
  const uploadId = encodeUploadId(input.key);
```

P2: Preserve multipart metadata in Azure upload IDs

createMultipartUpload accepts contentType and metadata, but the generated uploadId only encodes key, so completeMultipartUpload has no way to restore those fields and commits with default blob properties. In multipart flows that rely on these inputs (for example setting Content-Type for browser downloads or storing object metadata), Azure uploads will silently lose that information while other adapters persist it, causing inconsistent behavior across providers.

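Since Azure block uploads have no server-side session object, one way to carry these fields is to encode them into the opaque uploadId itself; the same structured payload also lets decode reject legacy or forged ids such as plain base64(key). A sketch with illustrative field names:

```typescript
import { randomUUID } from 'node:crypto';

// Everything completeMultipartUpload needs travels inside the uploadId:
// the key, a random session id, and the original contentType/metadata.
interface UploadSession {
  key: string;
  sessionId: string;
  contentType?: string;
  metadata?: Record<string, string>;
}

function encodeUploadId(input: Omit<UploadSession, 'sessionId'>): string {
  const session: UploadSession = { ...input, sessionId: randomUUID() };
  return Buffer.from(JSON.stringify(session)).toString('base64url');
}

function decodeUploadId(uploadId: string): UploadSession {
  let payload: unknown;
  try {
    payload = JSON.parse(Buffer.from(uploadId, 'base64url').toString('utf8'));
  } catch {
    throw new Error('invalid multipart uploadId');
  }
  const session = payload as UploadSession;
  if (typeof session?.key !== 'string' || typeof session?.sessionId !== 'string') {
    // Rejects deterministic legacy ids like base64(key).
    throw new Error('invalid multipart uploadId');
  }
  return session;
}
```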


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 309c080274


Comment on lines +71 to +75
```typescript
if (decoded.length > 0) {
  return {
    key: decoded,
    sessionId: uploadId,
  };
```

P1: Reject legacy deterministic Azure upload IDs

The fallback branch in decodeUploadId accepts any non-empty base64 payload as a valid multipart session (key = decoded, sessionId = uploadId), so callers can forge uploadId values like base64(key) without ever calling createMultipartUpload. That bypasses the per-session isolation added by randomUUID(), allowing predictable IDs that can collide across concurrent uploads to the same key and undermining multipart-session integrity checks in signMultipartUploadPartUrl/completeMultipartUpload.


```typescript
async openUploadStream(input: { key: string; contentType?: string }): Promise<WritableStream> {
  const blockBlobClient = containerClient.getBlockBlobClient(input.key);
  const stream = new PassThrough();
  const uploadPromise = blockBlobClient.uploadStream(stream, 4 * 1024 * 1024, 5, {
```

P1: Attach early rejection handling to Azure upload promise

uploadPromise is created eagerly but only observed later in close() (or in abort()), so if uploadStream rejects before either method is called (for example, auth/network failure during stream setup), Node can treat it as an unhandled rejection and terminate the process while prior writes may already have resolved. Install a rejection handler immediately and propagate that stored error through write/close.

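The immediate-handler pattern the finding asks for can be sketched as a small tracker attached the moment the promise is created; the helper name is illustrative:

```typescript
// Observe the eagerly created upload promise right away: any rejection is
// stored instead of becoming an unhandled rejection, and later write()/
// close() calls can consult failure() to surface it.
function trackPromise<T>(promise: Promise<T>): {
  failure: () => Error | null;
  settled: Promise<void>;
} {
  let failure: Error | null = null;
  const settled = promise.then(
    () => undefined,
    (err) => {
      failure = err instanceof Error ? err : new Error(String(err));
    },
  );
  return { failure: () => failure, settled };
}
```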

```typescript
  state.tokens -= 1;
}

const msUntilNextToken = state.tokens >= 1 ? 0 : Math.ceil((1 - state.tokens) / refillRatePerMs);
```

P2: Handle zero-limit token bucket without Infinity math

In token-bucket mode, a configured maxRequests (or per-call limit) of 0 makes refillRatePerMs equal to 0, which turns the divisions on this line into Infinity; resetAt then becomes non-finite and new Date(resetAt).toISOString() throws instead of returning a normal rate-limit response. This makes zero-limit configurations crash the limiter rather than cleanly deny requests.

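A guarded version of that computation can stay finite by treating a non-positive refill rate as "deny with no retry hint"; the function name is illustrative:

```typescript
// Finite wait computation for the token bucket: a zero refill rate means
// tokens never return, so return null (deny, no retry hint) instead of
// letting the division produce Infinity and crash toISOString().
function msUntilNextToken(tokens: number, refillRatePerMs: number): number | null {
  if (tokens >= 1) return 0;
  if (!Number.isFinite(refillRatePerMs) || refillRatePerMs <= 0) return null;
  return Math.ceil((1 - tokens) / refillRatePerMs);
}
```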


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 1767ab8fab


Comment on lines +175 to +176
```typescript
const effectiveWindowMs =
  (input.windowSeconds !== undefined ? input.windowSeconds * 1000 : undefined) ?? windowMs;
```

P2: Guard against non-positive window lengths

check() allows windowSeconds overrides but never validates that the effective window is positive. With windowSeconds <= 0, fixed/sliding-window modes effectively reset on every call (bypassing rate limits), and token-bucket mode can divide by zero or produce non-finite reset timestamps that throw when serialized. This can be triggered by a bad config or user-provided override and causes incorrect enforcement or runtime failures.

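The guard can be centralized in one resolver that fails fast on a non-positive or non-finite window; the function name is illustrative:

```typescript
// Validate the effective window once, up front, so windowSeconds <= 0 or
// NaN fails fast instead of silently resetting windows every call or
// producing non-finite reset timestamps.
function resolveWindowMs(defaultWindowMs: number, windowSeconds?: number): number {
  const ms = windowSeconds !== undefined ? windowSeconds * 1000 : defaultWindowMs;
  if (!Number.isFinite(ms) || ms <= 0) {
    throw new RangeError(`rate-limit window must be a positive, finite duration; got ${ms}ms`);
  }
  return ms;
}
```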

@cubic-dev-ai

cubic-dev-ai bot commented Apr 9, 2026

You're iterating quickly on this pull request. To help protect your rate limits, cubic has paused automatic reviews on new pushes for now—when you're ready for another review, comment @cubic-dev-ai review.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 434a89d374


```typescript
  await uploadPromise;
},
abort(reason) {
  stream.destroy(reason instanceof Error ? reason : new Error(String(reason)));
```

P1: Add error listener before aborting Azure upload stream

In openUploadStream, abort() calls stream.destroy(error) on a PassThrough that never gets a lifetime 'error' listener, so cancelling an upload (or any path that destroys the stream with an error) can emit an unhandled error event and terminate the Node process. This is reproducible whenever callers invoke writer.abort(...) and makes normal cancellation a crash path; install a persistent error handler on stream and propagate that stored error through write/close.



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: d0c754735c


Comment on lines +1 to +11
```typescript
import {
  AbortMultipartUploadCommand,
  CompleteMultipartUploadCommand,
  CreateMultipartUploadCommand,
  DeleteObjectCommand,
  GetObjectCommand,
  HeadObjectCommand,
  PutObjectCommand,
  UploadPartCommand,
} from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
```

P1: Add runtime AWS deps for exported S3-compatible module

@superfunctions/storage publicly exports ./internal/s3-compatible, and that module imports @aws-sdk/client-s3 and @aws-sdk/s3-request-presigner, but the package manifest does not declare those as runtime dependencies. Under strict resolvers (notably pnpm/Yarn PnP), importing this subpath will fail with MODULE_NOT_FOUND, which breaks S3/R2/MinIO adapters that rely on it even when installation appears successful.

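The manifest change the finding calls for looks roughly like this, using the package name from the PR; the version ranges shown are illustrative, not taken from the repo:

```json
{
  "name": "@superfunctions/storage",
  "dependencies": {
    "@aws-sdk/client-s3": "^3.0.0",
    "@aws-sdk/s3-request-presigner": "^3.0.0"
  }
}
```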

```typescript
return withKeyLock(key, async () => {
  const effectiveWindowMs =
    (input.windowSeconds !== undefined ? input.windowSeconds * 1000 : undefined) ?? windowMs;
  const effectiveLimit = input.limit ?? maxRequests;
```

P2: Validate effectiveLimit before rate-limit calculations

check() validates effectiveWindowMs but never validates effectiveLimit; if callers pass a non-finite limit (for example NaN from parsing user input), token-bucket math propagates non-finite values and new Date(resetAt).toISOString() can throw, turning a normal rate-limit decision into an exception path. Guard effectiveLimit as finite/non-negative before entering the algorithm branches.

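The complementary guard for the limit override mirrors the window check; the function name is illustrative:

```typescript
// Reject non-finite or negative limits before any algorithm math runs,
// so a NaN parsed from user input cannot propagate into reset timestamps.
function resolveLimit(defaultLimit: number, override?: number): number {
  const limit = override ?? defaultLimit;
  if (!Number.isFinite(limit) || limit < 0) {
    throw new RangeError(`rate limit must be a finite, non-negative number; got ${limit}`);
  }
  return limit;
}
```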


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 453b285610


Comment on lines +424 to +427
```typescript
nodeStream.on('data', (chunk: Buffer | string) => {
  const buffer = typeof chunk === 'string' ? Buffer.from(chunk) : chunk;
  controller.enqueue(new Uint8Array(buffer));
});
```

P2: Honor backpressure in Azure download stream bridge

openDownloadStream converts a Node readable into a Web ReadableStream by listening to 'data', but it never pauses/resumes the Node stream based on consumer demand. In slow-consumer scenarios (large blobs over constrained clients), Node stays in flowing mode and keeps enqueueing chunks, so memory usage can grow with object size and cause process OOM. Add backpressure-aware flow control (e.g., pause() after enqueue and pull()/resume() or Readable.toWeb).


Comment on lines +211 to +214
```typescript
gcstream.on('data', (chunk: Buffer | string) => {
  const buffer = typeof chunk === 'string' ? Buffer.from(chunk) : chunk;
  controller.enqueue(new Uint8Array(buffer));
});
```

P2: Honor backpressure in GCS download stream bridge

This Web stream wrapper also subscribes to Node 'data' events without any pause/resume path, so the GCS stream can continue producing while downstream reads are slow. That allows the internal queue to grow unbounded for large downloads, which can significantly increase memory and destabilize workers under load. The bridge should implement backpressure (pause on enqueue, resume in pull) or delegate to a backpressure-aware Node→Web conversion.

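Both download-path findings above have the same pull-based fix: drive the Node readable from the Web stream's `pull` hook (via the async iterator) so a slow consumer naturally pauses the source. A sketch, assuming the same Node-to-Web bridge shape as the adapters; on Node 17+ the built-in `Readable.toWeb(nodeStream)` is an alternative:

```typescript
import { Readable } from 'node:stream';

// Backpressure-aware bridge: chunks are pulled on demand instead of pushed
// from 'data' events, so the internal queue cannot grow unbounded.
function nodeToWebStream(nodeStream: Readable): ReadableStream<Uint8Array> {
  const iterator = nodeStream[Symbol.asyncIterator]();
  return new ReadableStream<Uint8Array>({
    async pull(controller) {
      const { value, done } = await iterator.next();
      if (done) {
        controller.close();
        return;
      }
      const buffer = typeof value === 'string' ? Buffer.from(value) : (value as Buffer);
      controller.enqueue(new Uint8Array(buffer));
    },
    async cancel(reason) {
      // Destroys the underlying Node stream when the consumer gives up.
      await iterator.return?.(reason);
    },
  });
}
```

Errors also propagate for free: a rejected `iterator.next()` rejects `pull`, which errors the Web stream instead of leaking an unhandled `'error'` event.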

@graphite-app

graphite-app bot commented Apr 13, 2026

Merge activity


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 48df48405e


```diff
@@ -0,0 +1,286 @@
+import type { Adapter, KVStoreAdapter } from '@superfunctions/db';
+
+export { KVStoreAdapter };
```

P1: Stop requiring @superfunctions/db for basic TypeScript usage

@superfunctions/middleware marks @superfunctions/db as an optional peer, but this re-export forces the generated .d.ts to reference @superfunctions/db unconditionally. TypeScript consumers who only use the default in-memory limiter (and therefore do not install @superfunctions/db) will get TS2307 during compilation when importing this package at all, which makes the package effectively unusable in its advertised optional-dependency mode.


Comment on lines +97 to +100
```typescript
const accountNameMatch = connectionString.match(/AccountName=([^;]+)/);
const accountKeyMatch = connectionString.match(/AccountKey=([^;]+)/);
const extractedAccountName = accountNameMatch?.[1];
const extractedAccountKey = accountKeyMatch?.[1];
```

P2: Derive Azure signing creds without regex-parsing connection strings

The connection-string path infers signing capability by regex-extracting AccountName/AccountKey from the raw string, so valid connection-string formats that authenticate successfully but omit those literal fields (for example emulator/development formats) end up with credential unset. In that case the adapter disables signed URLs/multipart and throws from signing methods even though the client is already authenticated, causing feature loss in those environments.

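A first step toward that is structured parsing of the connection string, so missing fields can be handled deliberately (for example by falling back to the already-authenticated client's credential) instead of silently disabling signing. A sketch with an illustrative function name:

```typescript
// Split on ';' and on the first '=' per segment, so values that themselves
// contain '=' (base64 account keys) survive intact and absent fields are
// simply missing rather than mis-parsed.
function parseConnectionString(connectionString: string): Record<string, string> {
  const fields: Record<string, string> = {};
  for (const segment of connectionString.split(';')) {
    const eq = segment.indexOf('=');
    if (eq <= 0) continue; // skip empty or malformed segments
    fields[segment.slice(0, eq)] = segment.slice(eq + 1);
  }
  return fields;
}
```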

## Summary by Sourcery

Add shared storage, file, and middleware packages to support multi-provider file storage and rate limiting across the monorepo.

New Features:
- Introduce core @superfunctions/storage package defining the storage adapter contract, routing, and conformance helpers.
- Add cloud and local storage adapter packages for S3, Cloudflare R2, MinIO, Azure Blob Storage, Google Cloud Storage, and local filesystem.
- Add @superfunctions/files package with shared file and upload provider types for FileFn workflows.
- Add @superfunctions/middleware package providing a configurable, pluggable rate limiter with multiple algorithms and persistence backends.

Enhancements:
- Provide a routed storage adapter to fan out logical storage targets to different physical backends with merged capability reporting.
- Add shared error utilities and S3-compatible adapter helper to standardize storage error handling and multipart behavior across providers.

Build:
- Add package manifests, TypeScript configs, and Vitest configs for the new storage, files, and middleware-related packages so they can build and test independently.

Documentation:
- Document the Azure storage adapter usage and capabilities in a new README.
- Add a basic README for the core storage package.

Tests:
- Add conformance-style tests for storage routing and adapter capability validation.
- Add a FileProvider conformance test suite to enforce upload and file API shapes across implementations.
- Wire up Vitest configs for the new storage, files, and middleware-related packages to enable isolated testing.

<!-- This is an auto-generated description by cubic. -->
---
## Summary by cubic
Adds shared `@superfunctions/storage`, `@superfunctions/files`, and `@superfunctions/middleware` packages to enable a pluggable storage system and rate limiting across the monorepo, and registers release targets (SF-30).

- **New Features**
  - `@superfunctions/storage`: adapter contract, router, conformance helpers, standardized errors, and shared S3-compatible core.
  - Adapters: `@superfunctions/storage-local`, `-s3`, `-gcs`, `-azure`, `-minio`, `-r2` with signed URLs and multipart where supported.
  - `@superfunctions/files`: `FileProvider` types and a conformance test suite.
  - `@superfunctions/middleware`: rate limiter (fixed-window, sliding-window, token-bucket) with pluggable persistence via `@superfunctions/db` `KVStoreAdapter` or in-memory.

- **Bug Fixes**
  - Rate limiter: input validation and hardened token-bucket edge cases.
  - Storage router: fixed target resolution and corrected capability aggregation.
  - Azure adapter: hardened multipart session handling and upload URL compatibility.
  - S3 adapter: reject empty static credentials.
  - Uploads/streaming: persistently track upload stream errors and fix abort handling.

<sup>Written for commit 87da4d6. Summary will update on new commits.</sup>

<!-- End of auto-generated description by cubic. -->

<!-- This is an auto-generated comment: release notes by coderabbit.ai -->
## Summary by CodeRabbit

* **New Features**
  * Rate-limiting middleware with multiple algorithms and pluggable persistence.
  * File management API for upload sessions, multipart uploads, signed URLs, and streaming.
  * New storage adapters: S3, MinIO, R2, Azure Blob, GCS, and local filesystem.
  * Storage routing to select adapters per logical target with aggregated capabilities.

* **Documentation**
  * Azure adapter README with installation and usage examples.

* **Tests**
  * Extensive conformance and unit tests across middleware, file provider, and all storage adapters.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
@graphite-app graphite-app bot force-pushed the SF-30/storage-files-foundations branch from 87da4d6 to 48410cc Compare April 13, 2026 12:27
@graphite-app graphite-app bot merged commit 48410cc into dev Apr 13, 2026
13 checks passed
@sonarqubecloud

Quality Gate failed

Failed conditions
6.3% Duplication on New Code (required ≤ 3%)

See analysis details on SonarQube Cloud


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 87da4d6696


Comment on lines +91 to +93
```typescript
capabilitiesForTarget(target) {
  return pickAdapter(adapterEntries, defaultTarget, target).adapter.capabilities;
},
```

P2: Resolve delegated capabilities via child target logic

capabilitiesForTarget returns pickAdapter(...).adapter.capabilities, which is only the child adapter’s aggregate capability set. For nested routed adapters this can under-report what the delegated call path actually supports (because delegated operations strip target and use the child’s default routing), and callers like filefn/server/src/upload-sessions/service.ts use getStorageCapabilities(...) to choose upload mode. In mixed-capability nested setups, this can incorrectly select noSupportedUploadMode even though the delegated adapter can handle uploads.

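The delegation the finding describes can be sketched with the storage contract simplified to two flags; when the selected child is itself routed, ask it for its own per-target capabilities rather than returning its merged aggregate. Interface names follow the PR loosely and the shapes here are illustrative:

```typescript
interface Capabilities { signedUrls: boolean; multipart: boolean }
interface StorageAdapterLike {
  capabilities: Capabilities;
  capabilitiesForTarget?(target?: string): Capabilities;
}

// Delegate when the child can resolve per-target capabilities itself
// (with the outer alias already stripped); fall back to the aggregate
// only for plain leaf adapters.
function resolveCapabilities(adapter: StorageAdapterLike, childTarget?: string): Capabilities {
  return adapter.capabilitiesForTarget?.(childTarget) ?? adapter.capabilities;
}
```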

Comment on lines +145 to +147
```typescript
const fileStats = await lstat(filePath);
if (fileStats.isSymbolicLink()) {
  throw new Error('Invalid key: upload target cannot be a symlink');
```

P2: Eliminate symlink race before opening local upload file

The symlink check is a TOCTOU sequence: lstat(filePath) runs first, then the code later opens the path with createWriteStream. If another process can modify entries under rootDir, it can swap the checked path to a symlink after the check and before open, and Node will follow that symlink, allowing writes outside the configured storage root. The open step needs no-follow/atomic validation rather than a pre-open check alone.

