
SF-30 add Python SQLAlchemy shared adapter#85

Merged
graphite-app[bot] merged 1 commit into dev from SF-30/python-sqlalchemy-adapter on Apr 13, 2026
Conversation


@thyaravind thyaravind commented Apr 4, 2026

Summary

  • add the missing shared SQLAlchemy adapter package to origin/dev
  • bring over the Python adapter package as a standalone PR so Python dependency review stays focused

Validation

  • inspected the copied package contents in a clean worktree created from origin/dev
  • did not run a full Python package validation pass in this worktree

Notes

  • no .conduct changes included
  • opened directly against origin/dev with GH_CONFIG_DIR=~/.config/gh-other

Summary by Sourcery

Add a new Python SQLAlchemy adapter package integrating superfunctions.db with SQLAlchemy-based databases.

New Features:

  • Introduce a SQLAlchemy-backed Adapter implementation providing CRUD, batch, counting, transactions, health checks, and basic schema hooks for superfunctions.db.
  • Expose a create_adapter factory and package initializer to simplify constructing the SQLAlchemy adapter from a SQLAlchemy Engine or AsyncEngine.

Enhancements:

  • Document installation, supported databases, and example usage for the new superfunctions-sqlalchemy package in a dedicated README.
  • Define packaging metadata, optional DB-specific extras, and development tooling configuration for the superfunctions-sqlalchemy Python package via pyproject.toml.

Summary by cubic

Adds the shared Python SQLAlchemy adapter superfunctions-sqlalchemy for superfunctions.db, providing CRUD, batch ops, upserts, counts, and transactions (sync/async callbacks) on Postgres/MySQL/SQLite using sync SQLAlchemy engines. Tightens capability flags and single‑row semantics, validates where clauses, maps SQL errors to typed errors, and fixes edge cases per SF‑30.

  • New Features

    • New adapter package (packages/python-sqlalchemy/) exporting create_adapter/SQLAlchemyAdapter, with init/health/close and optional namespace_prefix; includes README usage docs and pyproject.toml with extras (postgres, mysql, asyncpg).
    • Full CRUD, create_many/upsert/count, query builders (where with AND/OR, order_by/limit/offset), transactions that accept sync or async callbacks, and APIs that take params objects or kwargs.
    • Explicit column-key lookups for field resolution, including reserved names.
    • Capability metadata tightened: transactions/batch ops enabled; nested transactions, joins, full‑text search, schema management, and migrations disabled.
  • Bug Fixes

    • Safe single-row update/delete: find by where, target via primary key or fall back to existing row values; reselect by primary key when filters change; treat no‑op updates as success.
    • Strict where validation and OR support: require where for update_many/delete_many, reject unknown fields with clear errors, and honor OR connectors.
    • Preserve typed errors and bulk semantics: map duplicates/constraints to DuplicateKeyError/ConstraintViolationError, surface ConnectionError on drops, run create_many in a transaction and return inserted rows, and retry update on upsert duplicate.
    • Fix delete path connection typing to avoid type issues during execution.
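The duplicate/constraint mapping described above can be sketched with plain SQLAlchemy and an in-memory SQLite engine. `DuplicateKeyError` and `ConstraintViolationError` below are stand-in classes for the superfunctions.db typed errors, which are not shown on this page:

```python
import sqlalchemy as sa
from sqlalchemy.exc import IntegrityError, OperationalError

# Stand-ins for the superfunctions.db typed errors (assumed names).
class DuplicateKeyError(Exception): ...
class ConstraintViolationError(Exception): ...

def map_sql_error(exc: Exception) -> Exception:
    """Translate low-level SQLAlchemy errors into adapter-level typed errors."""
    if isinstance(exc, IntegrityError):
        msg = str(exc.orig).lower()
        if "unique" in msg or "duplicate" in msg:
            return DuplicateKeyError(str(exc.orig))
        return ConstraintViolationError(str(exc.orig))
    if isinstance(exc, OperationalError):
        return ConnectionError(str(exc.orig))
    return exc

engine = sa.create_engine("sqlite:///:memory:")
meta = sa.MetaData()
users = sa.Table(
    "users", meta,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("email", sa.String, unique=True),
)
meta.create_all(engine)

with engine.begin() as conn:
    conn.execute(users.insert().values(id=1, email="a@example.com"))

try:
    with engine.begin() as conn:
        # Second row with the same email violates the unique constraint.
        conn.execute(users.insert().values(id=2, email="a@example.com"))
except IntegrityError as exc:
    err = map_sql_error(exc)
    print(type(err).__name__)  # DuplicateKeyError
```

The real adapter presumably inspects dialect-specific error codes rather than message substrings; the string check here is just the simplest portable illustration.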

Written for commit 8c9d511. Summary will update on new commits.

Summary by CodeRabbit

  • New Features

    • Added a new SQLAlchemy adapter package offering async-capable CRUD, batch operations, upserts, counting, transactions, and optional namespacing for SQL databases.
  • Documentation

    • Included comprehensive package docs with installation, sync/async usage examples, configuration, and feature checklist.
  • Tests

    • Added tests covering create/create_many, update behavior, upsert retry, find_many filtering, transactions, and error handling.
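The upsert-retry-on-duplicate behavior those tests exercise can be approximated against an in-memory SQLite engine; the `upsert` helper below is a hypothetical simplification, not the package's actual implementation:

```python
import sqlalchemy as sa
from sqlalchemy.exc import IntegrityError

engine = sa.create_engine("sqlite:///:memory:")
meta = sa.MetaData()
users = sa.Table(
    "users", meta,
    sa.Column("email", sa.String, primary_key=True),
    sa.Column("name", sa.String),
)
meta.create_all(engine)

def upsert(where: dict, create: dict, update: dict) -> dict:
    """Try an INSERT first; on a duplicate key, retry as an UPDATE."""
    cond = sa.and_(*(users.c[k] == v for k, v in where.items()))
    try:
        with engine.begin() as conn:
            conn.execute(users.insert().values(**create))
    except IntegrityError:
        # Duplicate key: the row already exists, so update it instead.
        with engine.begin() as conn:
            conn.execute(users.update().where(cond).values(**update))
    with engine.begin() as conn:
        row = conn.execute(sa.select(users).where(cond)).mappings().one()
        return dict(row)

upsert({"email": "a@x"}, {"email": "a@x", "name": "first"}, {"name": "second"})
second = upsert({"email": "a@x"}, {"email": "a@x", "name": "first"}, {"name": "second"})
print(second["name"])  # second
```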


linear bot commented Apr 4, 2026

SF-30 Shared packages


sourcery-ai bot commented Apr 4, 2026

Reviewer's Guide

Introduces a new Python package superfunctions-sqlalchemy that implements a SQLAlchemy-based adapter for the superfunctions.db abstraction, exposes it as a distributable package, and documents installation and usage.

Sequence diagram for a create operation via the SQLAlchemy adapter

sequenceDiagram
    actor Application
    participant AuthFn as AuthFnLibrary
    participant Adapter as SQLAlchemyAdapter
    participant SAEngine as SQLAlchemy_Engine
    participant DB as Database

    Application->>AuthFn: create_authfn(AuthFnConfig(database=adapter))
    Application->>AuthFn: auth.create_user(data)
    AuthFn->>Adapter: create(CreateParams)
    Adapter->>Adapter: _get_table(model, namespace)
    Adapter->>SAEngine: begin()
    activate SAEngine
    SAEngine-->>Adapter: Connection
    Adapter->>SAEngine: execute(insert(table).values(data).returning(table))
    SAEngine->>DB: INSERT INTO table ... RETURNING *
    DB-->>SAEngine: Row
    SAEngine-->>Adapter: Result
    Adapter->>Adapter: map row to dict
    Adapter-->>AuthFn: created_record Dict
    AuthFn-->>Application: created_record Dict
    deactivate SAEngine

Class diagram for the new SQLAlchemy adapter implementation

classDiagram
    class SQLAlchemyAdapter {
        - Engine engine
        - AsyncEngine engine
        - str namespace_prefix
        - MetaData metadata
        - float _start_time
        - str id
        - str name
        - str version
        - AdapterCapabilities capabilities
        + SQLAlchemyAdapter(engine, namespace_prefix)
        + create(params) Dict
        + find_one(params) Dict
        + find_many(params) List~Dict~
        + update(params) Dict
        + delete(params) None
        + create_many(params) List~Dict~
        + update_many(params) int
        + delete_many(params) int
        + upsert(params) Dict
        + count(params) int
        + transaction(callback) Any
        + initialize() None
        + is_healthy() HealthStatus
        + close() None
        + get_schema_version(namespace) int
        + set_schema_version(namespace, version) None
        + validate_schema(schema) ValidationResult
        + create_schema(params) SchemaCreation
        - _get_table_name(model, namespace) str
        - _build_where_clause(table, where) Any
        - _get_table(model, namespace) Table
    }

    class SQLAlchemyTransactionAdapter {
        - connection Any
        - SQLAlchemyAdapter parent
        + SQLAlchemyTransactionAdapter(connection, parent_adapter)
        + commit() None
        + rollback() None
        + create(params) Dict
        + find_one(params) Dict
        + find_many(params) List~Dict~
        + update(params) Dict
        + delete(params) None
    }

    class Adapter {
    }

    class TransactionAdapter {
    }

    class Engine {
    }

    class AsyncEngine {
    }

    SQLAlchemyAdapter ..|> Adapter
    SQLAlchemyTransactionAdapter ..|> TransactionAdapter
    SQLAlchemyAdapter o--> Engine
    SQLAlchemyAdapter o--> AsyncEngine
    SQLAlchemyAdapter o--> SQLAlchemyTransactionAdapter
    SQLAlchemyTransactionAdapter o--> SQLAlchemyAdapter

Flow diagram for integration of superfunctions-sqlalchemy into an application

flowchart LR
    App["Application code"] --> AuthFn["superfunctions library (authfn, etc.)"]
    AuthFn --> Adapter["SQLAlchemyAdapter (from superfunctions_sqlalchemy)"]
    Adapter --> Engine["SQLAlchemy Engine or AsyncEngine"]
    Engine --> DB["Relational database (PostgreSQL, MySQL, SQLite)"]

File-Level Changes

Change Details Files
Add SQLAlchemy-backed implementation of the superfunctions.db Adapter interface, including CRUD, batch operations, counting, basic transactions, and health checks.
  • Implement SQLAlchemyAdapter class that wraps a SQLAlchemy Engine or AsyncEngine and wires AdapterCapabilities metadata.
  • Provide helper methods to resolve table names, reflect table metadata, and translate high-level WhereClause filters into SQLAlchemy expressions.
  • Implement async CRUD APIs (create, find_one, find_many, update, delete) plus bulk variants (create_many, update_many, delete_many), upsert, and count, mapping SQLAlchemy errors into superfunctions.db error types.
  • Add transaction(callback) support via a SQLAlchemyTransactionAdapter that runs operations inside an engine.begin() context, delegating operations back to the parent adapter.
  • Stub out schema-related methods (get_schema_version, set_schema_version, validate_schema, create_schema) with TODOs for future implementation.
packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
Expose the adapter as an installable Python package with documentation and configuration for packaging, testing, and linting.
  • Create a README describing installation options (base, postgres, mysql, asyncpg), supported features, and basic usage examples with SQLAlchemy and superfunctions libraries.
  • Define pyproject.toml with project metadata, dependencies on superfunctions and SQLAlchemy, optional extras for different databases and tooling, and configuration for hatch, pytest, mypy, and ruff.
  • Add package init that re-exports SQLAlchemyAdapter and create_adapter, sets version, and includes a usage docstring.
packages/python-sqlalchemy/README.md
packages/python-sqlalchemy/pyproject.toml
packages/python-sqlalchemy/superfunctions_sqlalchemy/__init__.py
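The WhereClause-to-SQLAlchemy translation mentioned in the adapter changes can be sketched roughly as follows. The filter-dict shape, operator names, and the `build_where` helper are assumptions based on the operators listed in this PR, not the package's real API:

```python
import sqlalchemy as sa

# Hypothetical structured filter: {"field": ..., "op": ..., "value": ...},
# combined with an "AND"/"OR" connector, mirroring the WhereClause described above.
OPS = {
    "EQ": lambda c, v: c == v,
    "NE": lambda c, v: c != v,
    "GT": lambda c, v: c > v,
    "LT": lambda c, v: c < v,
    "IN": lambda c, v: c.in_(v),
    "LIKE": lambda c, v: c.like(v),
    "STARTS_WITH": lambda c, v: c.like(f"{v}%"),
}

def build_where(table: sa.Table, conditions: list, connector: str = "AND"):
    preds = []
    for cond in conditions:
        col = table.c[cond["field"]]  # KeyError here surfaces unknown fields
        preds.append(OPS[cond["op"]](col, cond["value"]))
    return sa.or_(*preds) if connector == "OR" else sa.and_(*preds)

engine = sa.create_engine("sqlite:///:memory:")
meta = sa.MetaData()
t = sa.Table(
    "items", meta,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("name", sa.String),
    sa.Column("qty", sa.Integer),
)
meta.create_all(engine)
with engine.begin() as conn:
    conn.execute(t.insert(), [
        {"id": 1, "name": "apple", "qty": 3},
        {"id": 2, "name": "avocado", "qty": 9},
        {"id": 3, "name": "pear", "qty": 1},
    ])
    clause = build_where(t, [
        {"field": "name", "op": "STARTS_WITH", "value": "a"},
        {"field": "qty", "op": "GT", "value": 5},
    ], connector="OR")
    rows = conn.execute(sa.select(t).where(clause)).mappings().all()
    print([r["name"] for r in rows])  # ['apple', 'avocado']
```

Indexing `table.c[...]` (rather than `getattr`) also handles reserved or dunder-colliding column names, which matches the "explicit column-key lookups" item in the summary above.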

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help


coderabbitai bot commented Apr 4, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review

Walkthrough

Adds a new superfunctions-sqlalchemy Python package: SQLAlchemy adapter providing sync/async CRUD, batch ops, upsert, transactions, predicate translation from structured where-clauses, error mapping, lifecycle/health hooks, and tests and packaging metadata.

Changes

Cohort / File(s) Summary
Package docs & metadata
packages/python-sqlalchemy/README.md, packages/python-sqlalchemy/pyproject.toml
Adds package README and pyproject with metadata, dependencies (superfunctions, sqlalchemy), optional extras (postgres/asyncpg/mysql/sqlite), build backend (hatchling), test/mypy/ruff configs, and packaging info.
Public package entry
packages/python-sqlalchemy/superfunctions_sqlalchemy/__init__.py
New package init: sets __version__ = "0.1.0", documents usage, and re-exports SQLAlchemyAdapter, create_adapter via __all__.
Adapter implementation
packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
Implements SQLAlchemyAdapter and SQLAlchemyTransactionAdapter: dynamic table reflection, where-clause → SQLAlchemy predicate translation (EQ/NE/GT/LT/IN/LIKE/ILIKE/NULL/CONTAINS/STARTS_WITH/ENDS_WITH, AND/OR), CRUD (create/find_one/find_many/update/delete), batch ops (create_many/update_many/delete_many), upsert, count, transaction wrapper, error mapping (IntegrityError→DuplicateKey/ConstraintViolation, OperationalError→ConnectionError), init/health/close, schema/version stubs, and factory create_adapter.
Tests
packages/python-sqlalchemy/tests/test_adapter.py
Adds pytest-asyncio tests using in-memory SQLite: fixtures create tables and adapters; tests cover create/create_many, update (reselect logic), upsert retry on duplicate-key, find_many filtering and OR connector handling, unknown-field error, reserved-like column handling, update_many/delete guards, constraint error propagation, and transaction callback behavior.

Sequence Diagram

sequenceDiagram
    participant Client as Client/Application
    participant Adapter as SQLAlchemyAdapter
    participant Engine as SQLAlchemy Engine
    participant DB as Database

    Client->>Adapter: create(params)
    activate Adapter
    Adapter->>Adapter: build INSERT (table, namespace, data)
    Adapter->>Engine: execute(INSERT)
    activate Engine
    Engine->>DB: execute INSERT
    DB-->>Engine: result (lastrowid / row)
    Engine-->>Adapter: execution result
    deactivate Engine
    Adapter->>Adapter: optionally SELECT inserted row by PK
    Adapter-->>Client: inserted row dict
    deactivate Adapter

    Client->>Adapter: find_many(params)
    activate Adapter
    Adapter->>Adapter: translate WhereClause -> predicates
    Adapter->>Engine: execute(SELECT with WHERE/ORDER/LIMIT)
    activate Engine
    Engine->>DB: execute SELECT
    DB-->>Engine: result rows
    Engine-->>Adapter: rows
    deactivate Engine
    Adapter-->>Client: list of row dicts
    deactivate Adapter

    Client->>Adapter: transaction(callback)
    activate Adapter
    Adapter->>Engine: begin transaction / get connection
    Adapter->>Adapter: create SQLAlchemyTransactionAdapter(connection)
    Adapter->>Adapter: invoke callback(transactionAdapter)
    Adapter->>Engine: commit or rollback via connection
    Adapter-->>Client: callback result
    deactivate Adapter

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐇 I nibble on schemas, hopping through rows,

I plant tiny inserts where the wild data grows,
I stitch up transactions with a velvet paw,
Queries hum home and obey every law,
A rabbit-made adapter — swift, tidy, who knows?

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 40.98%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Title check ✅ Passed: the title accurately and concisely summarizes the main addition (a new Python SQLAlchemy adapter package for superfunctions.db) and aligns with all file additions.
  • Description check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@qodo-code-review

Review Summary by Qodo

Add Python SQLAlchemy adapter for superfunctions.db

✨ Enhancement


Walkthroughs

Description
• Add complete SQLAlchemy adapter implementation for superfunctions.db
• Support full CRUD operations with transaction and batch capabilities
• Enable PostgreSQL, MySQL, and SQLite database compatibility
• Include comprehensive documentation and package configuration
Diagram
flowchart LR
  SF["superfunctions.db<br/>Adapter Interface"]
  SA["SQLAlchemyAdapter<br/>Implementation"]
  DB["PostgreSQL/MySQL<br/>SQLite"]
  SF -- "implements" --> SA
  SA -- "uses" --> DB
  SA -- "exports" --> API["create_adapter<br/>factory function"]


File Changes

1. packages/python-sqlalchemy/superfunctions_sqlalchemy/__init__.py ✨ Enhancement +19/-0

Package initialization and public API

• Exports main adapter class and factory function
• Provides module-level documentation with usage examples
• Sets package version to 0.1.0

packages/python-sqlalchemy/superfunctions_sqlalchemy/__init__.py


2. packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py ✨ Enhancement +423/-0

Core SQLAlchemy adapter with database operations

• Implements SQLAlchemyAdapter class with full CRUD operations (create, find_one, find_many, update, delete)
• Supports batch operations (create_many, update_many, delete_many) and upsert functionality
• Implements transaction support via SQLAlchemyTransactionAdapter with connection management
• Provides query building with where clauses, operators (EQ, NE, GT, LT, IN, LIKE, ILIKE, etc.), ordering, limit, and offset
• Includes health checks, schema validation stubs, and error handling with custom exception mapping

packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py


3. packages/python-sqlalchemy/README.md 📝 Documentation +89/-0

Comprehensive package documentation and examples

• Documents installation with optional dependencies for PostgreSQL, MySQL, and async support
• Provides usage examples showing adapter creation and integration with superfunctions libraries
• Lists supported features including CRUD, transactions, batch operations, and SQL operators
• Includes practical code examples for create and query operations

packages/python-sqlalchemy/README.md


4. packages/python-sqlalchemy/pyproject.toml ⚙️ Configuration changes +65/-0

Package configuration and dependency management

• Configures Python package metadata with hatchling build system
• Specifies core dependencies (superfunctions>=0.1.0, sqlalchemy>=2.0.0)
• Defines optional dependencies for database drivers (psycopg2, asyncpg, pymysql) and development tools
• Configures pytest, mypy, and ruff linting tools with Python 3.10+ target

packages/python-sqlalchemy/pyproject.toml




qodo-code-review bot commented Apr 4, 2026

Code Review by Qodo

🐞 Bugs (0) 📘 Rule violations (0) 📎 Requirement gaps (0) 🖥 UI issues (0) 🎨 UX Issues (0)

Grey Divider


Action required

1. Adapter API mismatch🐞
Description
SQLAlchemyAdapter CRUD methods require a single params object (e.g., `create(self, params: CreateParams)`), but existing Python libraries in this repo call adapters with keyword arguments like `update(model=..., where=..., data=..., namespace=...)`. Using this adapter with those libraries will raise TypeError at runtime and contradicts the README's suggested authfn integration.
Code

packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[R128-132]

+    async def create(self, params: CreateParams) -> Dict[str, Any]:
+        """Create a single record."""
+        try:
+            table = self._get_table(params.model, params.namespace)
+            
Evidence
The new adapter’s public CRUD API is *(self, params: XParams) (single positional argument), while
authfn (and its test in-memory adapters) invoke database.update(...) and
database.find_one(...) with keyword arguments (model=, where=, data=, namespace=). Passing
those keywords into SQLAlchemyAdapter.update(self, params) will fail immediately before any SQL
runs.

packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[128-147]
packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[203-209]
authfn/python/authfn/authfn.py[70-87]
authfn/python/tests/support.py[32-67]
packages/python-sqlalchemy/README.md[36-45]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`SQLAlchemyAdapter` exposes CRUD methods that take a single `params` object, but other Python libraries in this repo (e.g., `authfn`) call database adapters using keyword arguments (`model=...`, `where=...`, `data=...`, `namespace=...`). This will raise `TypeError` when the SQLAlchemy adapter is used as the `database` implementation.
### Issue Context
The SQLAlchemy adapter README shows it being passed into `authfn` as `database=adapter`, but `authfn` calls `.update(...)`, `.find_one(...)`, etc. using keyword args.
### Fix Focus Areas
- packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[128-317]
- authfn/python/authfn/authfn.py[70-87]
### Suggested fix
Update `SQLAlchemyAdapter` (and `SQLAlchemyTransactionAdapter`) to accept the keyword-arg style used in-repo, or support both styles safely.
For example:
- Change signatures to `async def create(self, model: str, data: Dict[str, Any], namespace: Optional[str] = None) -> Dict[str, Any]: ...` etc.
- Or implement a small normalization layer that accepts either:
- `create(CreateParams(...))` OR
- `create(model="...", data={...}, namespace="...")`
and converts into a single internal representation.
Ensure `find_many` also accepts `order_by`, `limit`, and `offset` in the same shape the consuming libraries send.
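A minimal sketch of the normalization layer suggested above, assuming a `CreateParams` dataclass shape (the real superfunctions.db types are not shown on this page):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class CreateParams:
    # Assumed shape of the superfunctions.db params object.
    model: str
    data: dict
    namespace: Optional[str] = None

def normalize_create(params: Optional[CreateParams] = None, **kwargs: Any) -> CreateParams:
    """Accept create(CreateParams(...)) or create(model=..., data=..., namespace=...)."""
    if params is not None and kwargs:
        raise TypeError("pass either a params object or keyword arguments, not both")
    return params if params is not None else CreateParams(**kwargs)

# Both call styles normalize to the same internal representation.
a = normalize_create(CreateParams(model="user", data={"id": 1}))
b = normalize_create(model="user", data={"id": 1})
print(a == b)  # True
```

Each CRUD method would funnel through a normalizer like this, so the SQL-building code only ever sees one representation.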

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


2. Transactions ignore connection🐞
Description
transaction() creates a transactional connection but SQLAlchemyTransactionAdapter delegates
operations to the parent adapter, which opens new connections/transactions via
self.engine.begin()/connect(). Operations executed inside transaction() therefore are not part
of the transaction and rollback will not undo them.
Code

packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[R319-325]

+    async def transaction(self, callback: Callable) -> Any:
+        """Execute operations within a transaction."""
+        with self.engine.begin() as conn:
+            # Create transaction adapter
+            trx_adapter = SQLAlchemyTransactionAdapter(conn, self)
+            return await callback(trx_adapter)
+
Evidence
transaction() constructs a transaction adapter with a specific connection, but all transaction adapter methods call `self.parent.*`, and the parent methods open their own `with self.engine.begin()` blocks. That means the transaction connection is unused for CRUD and the transaction wrapper does not provide atomicity.

packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[128-136]
packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[319-325]
packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[373-403]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`SQLAlchemyTransactionAdapter` does not execute queries on the transaction connection passed from `SQLAlchemyAdapter.transaction()`. Instead it delegates to parent methods that open their own connections/transactions, so the callback is not actually transactional.
### Issue Context
`transaction()` is expected to provide atomicity: all CRUD inside the callback should run on the same connection/transaction so rollback works.
### Fix Focus Areas
- packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[128-136]
- packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[319-325]
- packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[373-403]
### Suggested fix
Refactor CRUD execution to accept an optional connection:
- Implement internal helpers like `_create(conn, ...)`, `_find_one(conn, ...)`, etc.
- In normal adapter methods, open a connection/transaction and call helpers.
- In `SQLAlchemyTransactionAdapter`, call helpers with `self.connection` so all operations share the same transaction.
Also ensure `rollback()` and `commit()` semantics match how the connection/transaction is managed.
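The suggested refactor can be illustrated with plain SQLAlchemy: internal helpers take an explicit connection, so the standalone methods and the transaction wrapper share one code path and the callback's work is genuinely atomic. Helper names like `_create` are illustrative:

```python
import sqlalchemy as sa
from sqlalchemy.engine import Connection

engine = sa.create_engine("sqlite:///:memory:")
meta = sa.MetaData()
t = sa.Table(
    "accounts", meta,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("balance", sa.Integer),
)
meta.create_all(engine)

def _create(conn: Connection, row: dict) -> None:
    # Internal helper: always runs on the connection it is handed.
    conn.execute(t.insert().values(**row))

def create(row: dict) -> None:
    # Standalone call opens (and commits) its own transaction.
    with engine.begin() as conn:
        _create(conn, row)

def transaction(callback):
    # Everything inside the callback shares one connection,
    # so an exception rolls back all of its work.
    with engine.begin() as conn:
        return callback(conn)

create({"id": 1, "balance": 100})

try:
    def add_then_fail(conn: Connection):
        _create(conn, {"id": 2, "balance": 50})
        raise RuntimeError("boom")  # forces engine.begin() to roll back
    transaction(add_then_fail)
except RuntimeError:
    pass

with engine.connect() as conn:
    count = conn.execute(sa.select(sa.func.count()).select_from(t)).scalar_one()
print(count)  # 1: the rolled-back insert is gone
```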



3. AsyncEngine support broken🐞
Description
The adapter advertises support for AsyncEngine but uses synchronous context managers (`with
self.engine.begin()/connect()) and calls dispose() without awaiting. Passing an AsyncEngine`
will fail at runtime despite README claiming async support.
Code

packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[R42-50]

+    def __init__(self, engine: Engine | AsyncEngine, namespace_prefix: str = ""):
+        """
+        Initialize SQLAlchemy adapter.
+        
+        Args:
+            engine: SQLAlchemy Engine or AsyncEngine
+            namespace_prefix: Prefix for table names (for namespacing)
+        """
+        self.engine = engine
Evidence
The adapter explicitly types engine as Engine | AsyncEngine and the README claims "Sync and
async engines" / "With async support", but all DB access paths use sync with blocks and never
await connections. This makes the async-engine path nonfunctional.

packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[42-50]
packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[133-135]
packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[161-163]
packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[326-334]
packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[352-355]
packages/python-sqlalchemy/README.md[20-22]
packages/python-sqlalchemy/README.md[49-56]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`SQLAlchemyAdapter` claims to accept `AsyncEngine` and the README advertises async support, but the implementation uses synchronous `with` blocks and synchronous `.execute()` calls.
### Issue Context
Users will reasonably pass an `AsyncEngine` (especially when installing `[asyncpg]`) and then hit runtime failures.
### Fix Focus Areas
- packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[42-50]
- packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[128-355]
- packages/python-sqlalchemy/README.md[20-22]
- packages/python-sqlalchemy/README.md[49-56]
### Suggested fix
Choose one:
1) **Support async engines**: detect `isinstance(engine, AsyncEngine)` and use `async with engine.begin()` / `async with engine.connect()` and `await conn.execute(...)`; also ensure `close()` properly disposes async engines.
2) **Drop async claim**: change the type to `Engine` only and update README/features/extra names to avoid implying AsyncEngine support.
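Option 1 (supporting both engine types) could look roughly like the dispatch below. It is exercised here with a sync SQLite engine; the async branch assumes an async driver (e.g., aiosqlite or asyncpg) is installed when an `AsyncEngine` is actually used:

```python
import asyncio

import sqlalchemy as sa
from sqlalchemy.ext.asyncio import AsyncEngine

async def fetch_all(engine, query):
    """Dispatch on engine type: await async engines, run sync engines directly."""
    if isinstance(engine, AsyncEngine):
        async with engine.connect() as conn:
            result = await conn.execute(query)
            return result.mappings().all()
    with engine.connect() as conn:
        return conn.execute(query).mappings().all()

engine = sa.create_engine("sqlite:///:memory:")
meta = sa.MetaData()
t = sa.Table("pets", meta, sa.Column("name", sa.String))
meta.create_all(engine)
with engine.begin() as conn:
    conn.execute(t.insert().values(name="rex"))

rows = asyncio.run(fetch_all(engine, sa.select(t)))
print([r["name"] for r in rows])  # ['rex']
```

In a real adapter the same branching would apply to `begin()`, `close()` (`engine.dispose()` vs `await engine.dispose()`), and health checks.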



4. find_one where(None) bug🐞
Description
In find_one(), when params.select is provided and params.where is empty, the code calls
.where(None), which can yield an invalid/always-false predicate and return no rows. This breaks
find_one(select=[...]) queries without a filter.
Code

packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[R157-160]

+            if params.select:
+                columns = [getattr(table.c, field) for field in params.select]
+                query = select(*columns).select_from(table).where(where_clause if params.where else None)
+            
Evidence
The query is rebuilt under if params.select: and unconditionally applies `.where(where_clause if
params.where else None). When params.where is falsy, None is passed into .where(...)` instead
of omitting the WHERE clause.

packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[147-165]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`find_one()` applies `.where(None)` when `params.select` is set but `params.where` is not, which can produce incorrect SQL and return no results.
### Issue Context
This only happens in the `params.select` branch because the query is rebuilt there.
### Fix Focus Areas
- packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[147-165]
### Suggested fix
Build the selected-column query without calling `.where()` unless you actually have a where clause:
- Start with `query = select(*columns).select_from(table)`
- If `params.where`: compute `where_clause` and then `query = query.where(where_clause)`
This also reduces duplicated query-building logic.
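A sketch of that fix against a toy table; the `find_one` signature here is illustrative, not the adapter's real one:

```python
import sqlalchemy as sa

engine = sa.create_engine("sqlite:///:memory:")
meta = sa.MetaData()
t = sa.Table(
    "users", meta,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("email", sa.String),
)
meta.create_all(engine)
with engine.begin() as conn:
    conn.execute(t.insert().values(id=1, email="a@x"))

def find_one(select_fields=None, where=None):
    # Projection is chosen up front; WHERE is only attached when one exists.
    cols = [t.c[f] for f in select_fields] if select_fields else [t]
    query = sa.select(*cols).select_from(t)
    if where is not None:
        query = query.where(where)
    with engine.connect() as conn:
        row = conn.execute(query.limit(1)).mappings().first()
        return dict(row) if row else None

print(find_one(select_fields=["email"]))  # {'email': 'a@x'}
```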



5. find_many drops pagination🐞
Description
In find_many(), when params.select is provided the code rebuilds the query and silently drops
previously applied order_by, limit, and offset. This returns wrong result ordering and can
ignore pagination.
Code

packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[R190-195]

+            if params.select:
+                columns = [getattr(table.c, field) for field in params.select]
+                query = select(*columns).select_from(table)
+                if params.where:
+                    query = query.where(self._build_where_clause(table, params.where))
+            
Evidence
The method first applies order_by/limit/offset to `query`, but under `if params.select:` it replaces `query` with a new `select(*columns).select_from(table)` and only re-applies WHERE, losing the other clauses.

packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[169-199]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`find_many()` rebuilds the query when `params.select` is set, which drops `order_by`, `limit`, and `offset` and returns incorrect results.
### Issue Context
This occurs because the code overwrites `query` inside the `params.select` block instead of modifying the existing query.
### Fix Focus Areas
- packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[169-199]
### Suggested fix
Refactor query building so `select` determines the base projection but then you consistently apply:
- WHERE (if any)
- ORDER BY (if any)
- LIMIT/OFFSET (if any)
One simple approach:
- Determine `columns = ...` (either `table` or `*columns`)
- Create `query = select(columns...).select_from(table)`
- Then apply where/order/limit/offset in one flow.
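The one-flow approach might look like this (illustrative `find_many` signature): the projection is decided once, and every other clause is applied afterwards, so ORDER BY, LIMIT, and OFFSET survive regardless of whether a column selection is given.

```python
import sqlalchemy as sa

engine = sa.create_engine("sqlite:///:memory:")
meta = sa.MetaData()
t = sa.Table(
    "logs", meta,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("level", sa.String),
)
meta.create_all(engine)
with engine.begin() as conn:
    conn.execute(t.insert(), [{"id": i, "level": "info"} for i in range(1, 6)])

def find_many(select_fields=None, where=None, order_by=None, limit=None, offset=None):
    # Projection decided once; every other clause applied in one flow.
    cols = [t.c[f] for f in select_fields] if select_fields else [t]
    query = sa.select(*cols).select_from(t)
    if where is not None:
        query = query.where(where)
    if order_by is not None:
        query = query.order_by(order_by)
    if limit is not None:
        query = query.limit(limit)
    if offset is not None:
        query = query.offset(offset)
    with engine.connect() as conn:
        return [dict(r) for r in conn.execute(query).mappings().all()]

rows = find_many(select_fields=["id"], order_by=t.c.id.desc(), limit=2, offset=1)
print(rows)  # [{'id': 4}, {'id': 3}]
```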




Remediation recommended

6. Namespace prefix unused🐞
Description
namespace_prefix is accepted and stored on the adapter but never applied to table naming, so
callers cannot actually configure prefix-based namespacing. This can lead to table collisions when
multiple apps share a database.
Code

packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[R70-74]

+    def _get_table_name(self, model: str, namespace: Optional[str] = None) -> str:
+        """Get full table name with namespace."""
+        if namespace:
+            return f"{namespace}_{model}"
+        return model
Evidence
The constructor stores self.namespace_prefix, but _get_table_name() only uses the per-call
namespace argument and ignores self.namespace_prefix, making the init parameter ineffective.

packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[42-52]
packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[70-74]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`namespace_prefix` is currently unused, so configuring it has no effect on actual table names.
### Issue Context
The adapter is intended for shared databases and namespacing, but prefix configuration is a no-op.
### Fix Focus Areas
- packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[42-52]
- packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[70-74]
### Suggested fix
In `_get_table_name()`, incorporate `self.namespace_prefix` (when non-empty) into the final table name, e.g.:
- `prefix = f"{self.namespace_prefix}_" if self.namespace_prefix else ""`
- then apply `namespace` and `model` after that consistently.
Also add/update README docs to describe the precedence rules (prefix vs namespace vs model).



7. Capabilities not implemented🐞
Description
The adapter advertises schema_management=True and nested_transactions=True, but schema methods
are stubbed and transaction support is incomplete. Consumers using capabilities to decide behavior
may attempt unsupported operations and fail at runtime.
Code

packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[R59-68]

+        self.capabilities = AdapterCapabilities(
+            transactions=True,
+            nested_transactions=True,
+            joins=True,
+            full_text_search=False,
+            json_operations=True,
+            schema_management=True,
+            migration_support=False,
+            batch_operations=True,
+        )
Evidence
Capabilities claim schema management and nested transactions are supported, but
get_schema_version/set_schema_version/validate_schema/create_schema are TODO or always-fail.
This creates a mismatch between declared capabilities and actual behavior.

packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[59-68]
packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[356-370]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The adapter’s declared `capabilities` do not match the current implementation (schema management and nested transactions are not actually implemented).
### Issue Context
Downstream libraries may branch on `capabilities` and attempt schema or nested transaction flows that will fail.
### Fix Focus Areas
- packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[59-68]
- packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[319-403]
- packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py[356-370]
### Suggested fix
Either:
- Implement the advertised capability areas, or
- Set the corresponding capability flags to `False` until implemented (recommended as a first step to avoid misleading consumers).
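
If the second option is taken, the corrected flags might look like this (a sketch; the `AdapterCapabilities` stand-in below only mirrors the field names from the diff above):

```python
from dataclasses import dataclass

# Stand-in mirroring the field names shown in the diff above.
@dataclass(frozen=True)
class AdapterCapabilities:
    transactions: bool
    nested_transactions: bool
    joins: bool
    full_text_search: bool
    json_operations: bool
    schema_management: bool
    migration_support: bool
    batch_operations: bool

# Honest flags until the stubbed areas are actually implemented:
capabilities = AdapterCapabilities(
    transactions=True,
    nested_transactions=False,   # SAVEPOINT support not implemented yet
    joins=True,
    full_text_search=False,
    json_operations=True,
    schema_management=False,     # get/set_schema_version etc. are TODO
    migration_support=False,
    batch_operations=True,
)
```

Consumers branching on these flags then skip the unimplemented paths instead of failing at runtime.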




@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 6 issues and left some high-level feedback:

  • All adapter methods are declared async but use the synchronous Engine API (and ignore AsyncEngine), which will block the event loop; either switch these methods to sync or add proper AsyncEngine handling (e.g., async context managers and .execute calls) to avoid blocking in async contexts.
  • The transaction flow doesn’t actually bind operations to the passed connection: SQLAlchemyTransactionAdapter just delegates back to the parent adapter, which opens new connections, so calls inside transaction won’t participate in the same DB transaction; consider threading the transaction connection through the adapter methods or having the transaction adapter execute queries directly.
  • In find_one, the where_clause variable is only defined when params.where is set but is referenced in the params.select branch regardless, which will raise an UnboundLocalError when select is used without where; refactor the query-building so where_clause is always initialized (or conditionally applied) before use.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- All adapter methods are declared async but use the synchronous Engine API (and ignore AsyncEngine), which will block the event loop; either switch these methods to sync or add proper AsyncEngine handling (e.g., async context managers and `.execute` calls) to avoid blocking in async contexts.
- The transaction flow doesn’t actually bind operations to the passed connection: `SQLAlchemyTransactionAdapter` just delegates back to the parent adapter, which opens new connections, so calls inside `transaction` won’t participate in the same DB transaction; consider threading the transaction connection through the adapter methods or having the transaction adapter execute queries directly.
- In `find_one`, the `where_clause` variable is only defined when `params.where` is set but is referenced in the `params.select` branch regardless, which will raise an `UnboundLocalError` when `select` is used without `where`; refactor the query-building so `where_clause` is always initialized (or conditionally applied) before use.

## Individual Comments

### Comment 1
<location path="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py" line_range="42-51" />
<code_context>
+class SQLAlchemyAdapter:
+    """SQLAlchemy adapter for superfunctions.db"""
+
+    def __init__(self, engine: Engine | AsyncEngine, namespace_prefix: str = ""):
+        """
+        Initialize SQLAlchemy adapter.
+        
+        Args:
+            engine: SQLAlchemy Engine or AsyncEngine
+            namespace_prefix: Prefix for table names (for namespacing)
+        """
+        self.engine = engine
+        self.namespace_prefix = namespace_prefix
+        self.metadata = MetaData()
+        self._start_time = time.time()
+        
</code_context>
<issue_to_address>
**issue (bug_risk):** AsyncEngine support is misleading because all operations use synchronous engine APIs.

`engine` is typed as `Engine | AsyncEngine`, but the calls (`begin()`, `connect()`, `dispose()`) are all sync-only and will not work correctly with an `AsyncEngine`, and they perform blocking I/O from `async def` code. Please either introduce a proper async path using `AsyncEngine`/`AsyncSession` (and `await` the operations), or treat this adapter as sync-only by removing `AsyncEngine` from the type (or offloading sync calls to a thread pool).
</issue_to_address>

### Comment 2
<location path="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py" line_range="70-74" />
<code_context>
+            batch_operations=True,
+        )
+
+    def _get_table_name(self, model: str, namespace: Optional[str] = None) -> str:
+        """Get full table name with namespace."""
+        if namespace:
+            return f"{namespace}_{model}"
+        return model
+
+    def _build_where_clause(self, table: Table, where: List[WhereClause]) -> Any:
</code_context>
<issue_to_address>
**suggestion (bug_risk):** The `namespace_prefix` attribute is never used in table naming.

`namespace_prefix` is accepted and stored but never used in `_get_table_name`, which only considers `namespace` and `model`. If global namespacing is intended (e.g. for multi-tenant usage or avoiding table collisions), this method should include the prefix in the constructed name; otherwise the argument is effectively a no-op and may mislead callers.

Suggested implementation:

```python
    def _get_table_name(self, model: str, namespace: Optional[str] = None) -> str:
        """Get full table name with global namespace_prefix and optional per-call namespace."""
        parts: List[str] = []

        # Global prefix applied to all tables (e.g. for multi-tenant or app-level namespacing)
        if getattr(self, "namespace_prefix", None):
            parts.append(self.namespace_prefix)

        # Optional per-call namespace for finer-grained namespacing
        if namespace:
            parts.append(namespace)

        parts.append(model)
        return "_".join(parts)

```

This change assumes:
1. The adapter class already defines `self.namespace_prefix` in `__init__` (as your comment indicates it is “accepted and stored”). If it is not type-annotated there, you may want to add a `namespace_prefix: Optional[str]` attribute annotation to keep typing consistent.
2. Existing database schemas either already use the `namespace_prefix` convention or you are comfortable with the resulting table name changes. If not, you may need a migration or a backward-compatibility flag to conditionally enable the prefixing behavior.
</issue_to_address>

### Comment 3
<location path="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py" line_range="147-156" />
<code_context>
+        except Exception as e:
+            raise QueryFailedError(f"Create failed: {str(e)}", cause=e)
+
+    async def find_one(self, params: FindOneParams) -> Optional[Dict[str, Any]]:
+        """Find a single record."""
+        try:
+            table = self._get_table(params.model, params.namespace)
+            query = select(table)
+            
+            if params.where:
+                where_clause = self._build_where_clause(table, params.where)
+                query = query.where(where_clause)
+            
+            if params.select:
+                columns = [getattr(table.c, field) for field in params.select]
+                query = select(*columns).select_from(table).where(where_clause if params.where else None)
+            
+            with self.engine.connect() as conn:
</code_context>
<issue_to_address>
**issue (bug_risk):** Calling `.where(None)` when no `where` is provided may cause issues and the conditional is awkward.

In the `params.select` branch:

```python
if params.select:
    columns = [getattr(table.c, field) for field in params.select]
    query = select(*columns).select_from(table).where(where_clause if params.where else None)
```
When `params.where` is falsy this becomes `.where(None)`, which is unnecessary and can raise SQLAlchemy warnings/errors. It also duplicates the WHERE handling.

You can avoid this and keep the logic centralized with:

```python
columns = [getattr(table.c, field) for field in params.select]
query = select(*columns).select_from(table)
if params.where:
    where_clause = self._build_where_clause(table, params.where)
    query = query.where(where_clause)
```
</issue_to_address>

### Comment 4
<location path="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py" line_range="169-199" />
<code_context>
+        except Exception as e:
+            raise QueryFailedError(f"FindOne failed: {str(e)}", cause=e)
+
+    async def find_many(self, params: FindManyParams) -> List[Dict[str, Any]]:
+        """Find multiple records."""
+        try:
+            table = self._get_table(params.model, params.namespace)
+            query = select(table)
+            
+            if params.where:
+                where_clause = self._build_where_clause(table, params.where)
+                query = query.where(where_clause)
</code_context>
<issue_to_address>
**issue (bug_risk):** When `select` is used in `find_many`, ordering/limit/offset logic is silently dropped.

If `params.select` is provided, the code rebuilds `query` with `select(*columns).select_from(table)` and only reapplies `where`, dropping any previously applied `order_by`, `limit`, and `offset`. This changes behavior whenever a projection is requested. Instead, construct the projection first and then apply `where`/`order_by`/`limit`/`offset` once, or adjust the existing `select(table)` to change the selected columns without discarding the rest of the query state.
</issue_to_address>

### Comment 5
<location path="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py" line_range="319-323" />
<code_context>
+        except Exception as e:
+            raise QueryFailedError(f"Count failed: {str(e)}", cause=e)
+
+    async def transaction(self, callback: Callable) -> Any:
+        """Execute operations within a transaction."""
+        with self.engine.begin() as conn:
+            # Create transaction adapter
+            trx_adapter = SQLAlchemyTransactionAdapter(conn, self)
+            return await callback(trx_adapter)
+
</code_context>
<issue_to_address>
**issue (bug_risk):** Transactional operations do not actually use the transaction connection.

`transaction` creates a `SQLAlchemyTransactionAdapter` with a connection from `self.engine.begin()`, but the adapter methods delegate back to `self.parent` (e.g. `parent.create`), which starts its own connection/transaction. As a result, calls made inside the callback are not guaranteed to use the same DB transaction, breaking the expected transactional semantics.

To fix this, the transaction adapter should run queries using `self.connection` directly, or the parent methods should accept an optional connection/transaction and avoid opening a new one when it is provided.
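
A minimal sketch of the "optional connection" variant (class and method names hypothetical; any engine object exposing a `begin()` context manager works):

```python
from contextlib import contextmanager

class ConnectionRouter:
    """Each adapter method would run its SQL inside `with self._begin(conn):`,
    so work done inside transaction() reuses the transaction's connection
    instead of opening a fresh one from the engine."""

    def __init__(self, engine):
        self.engine = engine

    @contextmanager
    def _begin(self, conn=None):
        if conn is not None:
            yield conn  # reuse the outer transaction's connection as-is
        else:
            with self.engine.begin() as new_conn:
                yield new_conn  # standalone call: open-and-commit as before

class FakeEngine:
    """Stand-in engine for the demo below."""
    @contextmanager
    def begin(self):
        yield "fresh-conn"

router = ConnectionRouter(FakeEngine())
with router._begin() as c:
    standalone = c       # connection freshly opened from the engine
with router._begin("trx-conn") as c:
    inside_trx = c       # the caller's transaction connection, reused
```

With this shape, `SQLAlchemyTransactionAdapter` only needs to pass its stored connection down, and all writes inside the callback share one DB transaction.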
</issue_to_address>

### Comment 6
<location path="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py" line_range="281-290" />
<code_context>
+        except Exception as e:
+            raise QueryFailedError(f"DeleteMany failed: {str(e)}", cause=e)
+
+    async def upsert(self, params: UpsertParams) -> Dict[str, Any]:
+        """Upsert a record."""
+        # Try to find existing record
+        existing = await self.find_one(
+            FindOneParams(model=params.model, where=params.where, namespace=params.namespace)
+        )
+        
+        if existing:
+            return await self.update(
+                UpdateParams(
+                    model=params.model,
+                    where=params.where,
+                    data=params.update,
+                    namespace=params.namespace,
+                )
+            )
+        else:
+            return await self.create(
+                CreateParams(model=params.model, data=params.create, namespace=params.namespace)
+            )
</code_context>
<issue_to_address>
**suggestion (bug_risk):** The `upsert` implementation is non-atomic and may race under concurrent access.

This uses a `find_one` followed by `update`/`create`, so two concurrent callers can both see no existing record and then both call `create`, causing duplicates or integrity errors. Where possible, prefer a single atomic upsert (e.g., `ON CONFLICT` / `ON DUPLICATE KEY UPDATE`) or wrap the check-and-create path in a transaction with appropriate locking. If you need to stay DB-agnostic, at least document the non-atomic behavior and normalize any resulting integrity errors to a stable error type so callers can handle them explicitly.

Suggested implementation:

```python
+        except Exception as e:
+            raise QueryFailedError(f"DeleteMany failed: {str(e)}", cause=e)
+
+    async def upsert(self, params: UpsertParams) -> Dict[str, Any]:
+        """
+        Upsert a record.
+
+        Note:
+            This implementation is not fully atomic across all backends because it
+            composes higher-level operations (find / create / update). Under
+            concurrent access, two callers may both attempt to insert the same
+            logical record. In that case, the database should raise an integrity
+            error (e.g., unique constraint violation), which is normalized here
+            to a QueryFailedError with a stable error message so callers can
+            handle it explicitly.
+        """
+        try:
+            # Try to find existing record
+            existing = await self.find_one(
+                FindOneParams(
+                    model=params.model,
+                    where=params.where,
+                    namespace=params.namespace,
+                )
+            )
+
+            if existing:
+                return await self.update(
+                    UpdateParams(
+                        model=params.model,
+                        where=params.where,
+                        data=params.update,
+                        namespace=params.namespace,
+                    )
+                )
+
+            # No existing record found; try to create one.
+            # Under concurrent access, this may raise an IntegrityError if
+            # another transaction inserts a conflicting row first.
+            return await self.create(
+                CreateParams(
+                    model=params.model,
+                    data=params.create,
+                    namespace=params.namespace,
+                )
+            )
+
+        except IntegrityError as e:
+            # Normalize integrity / uniqueness violations to a stable error
+            # type so callers can distinguish them from other failures.
+            raise QueryFailedError(
+                "Upsert failed due to an integrity constraint violation; "
+                "the upsert operation is not atomic under concurrent access.",
+                cause=e,
+            )
+        except Exception as e:
+            raise QueryFailedError(f"Upsert failed: {str(e)}", cause=e)
+
+
+
+class SQLAlchemyAdapter:

```

1. Ensure `IntegrityError` is imported at the top of this file:
   ```python
   from sqlalchemy.exc import IntegrityError
   ```
2. This implementation assumes `FindOneParams`, `UpdateParams`, `CreateParams`, `UpsertParams`, and `QueryFailedError` are already defined and imported in this module, and that `find_one`, `update`, and `create` are `async` methods returning dictionaries or `None`. If their signatures differ, adjust the parameter names and truthiness check on `existing` accordingly.
3. If your project has a more specific error type for integrity/uniqueness violations, you may want to raise that instead of `QueryFailedError`, or wrap `QueryFailedError` in a richer error hierarchy.
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 63ada3c505

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +133 to +134
with self.engine.begin() as conn:
    result = conn.execute(insert(table).values(**params.data).returning(table))

P1: Use async SQLAlchemy flow when engine is AsyncEngine

The adapter accepts Engine | AsyncEngine and the README advertises async-engine support, but methods use synchronous context managers and execution (with self.engine.begin() / conn.execute(...)). When AsyncEngine is passed, this path raises at runtime because async engines require async with and awaited calls, so core CRUD operations fail instead of running.

Useful? React with 👍 / 👎.

Comment on lines +389 to +390
async def create(self, params: CreateParams) -> Dict[str, Any]:
    return await self.parent.create(params)

P1: Route transaction adapter calls through the transaction connection

Inside transaction(), the callback receives SQLAlchemyTransactionAdapter, but its CRUD methods delegate to self.parent.*, which opens fresh engine connections/transactions. That means work inside the callback is not bound to the outer transaction conn, so a rollback/error in the callback can still leave prior writes committed.


Comment on lines +190 to +193
if params.select:
    columns = [getattr(table.c, field) for field in params.select]
    query = select(*columns).select_from(table)
    if params.where:

P2: Preserve pagination and ordering when projecting selected columns

When params.select is set, find_many() rebuilds the query with select(*columns).select_from(table) and only reapplies where. Any previously applied order_by, limit, and offset are dropped, so callers requesting projection plus pagination/sorting get incorrect result sets.


Comment on lines +72 to +74
if namespace:
    return f"{namespace}_{model}"
return model

P2: Apply namespace_prefix when building table names

namespace_prefix is accepted in the constructor and create_adapter(...) but _get_table_name() ignores it and returns only namespace_model or model. As a result, configured prefixes never affect table resolution, which can cause collisions and makes the public option ineffective.


Comment on lines +128 to +145
async def create(self, params: CreateParams) -> Dict[str, Any]:
    """Create a single record."""
    try:
        table = self._get_table(params.model, params.namespace)

        with self.engine.begin() as conn:
            result = conn.execute(insert(table).values(**params.data).returning(table))
            row = result.fetchone()
            return dict(row._mapping) if row else params.data

    except IntegrityError as e:
        if "unique" in str(e).lower() or "duplicate" in str(e).lower():
            raise DuplicateKeyError(str(e), cause=e)
        raise ConstraintViolationError(str(e), cause=e)
    except OperationalError as e:
        raise ConnectionError(str(e), cause=e)
    except Exception as e:
        raise QueryFailedError(f"Create failed: {str(e)}", cause=e)

Critical: Async/sync mismatch - All methods are declared async def but use synchronous SQLAlchemy operations (with self.engine.begin(), with self.engine.connect()). This will:

  1. Block the event loop when used with sync Engine in async contexts
  2. Fail immediately with AsyncEngine (requires async with instead of with)

This pattern is repeated throughout all database operations (create, find_one, find_many, update, delete, etc.).

Fix: Either use synchronous methods or properly support AsyncEngine:

# For AsyncEngine support:
if isinstance(self.engine, AsyncEngine):
    async with self.engine.begin() as conn:
        result = await conn.execute(...)
else:
    with self.engine.begin() as conn:
        result = conn.execute(...)
Suggested change
Before:

```python
async def create(self, params: CreateParams) -> Dict[str, Any]:
    """Create a single record."""
    try:
        table = self._get_table(params.model, params.namespace)
        with self.engine.begin() as conn:
            result = conn.execute(insert(table).values(**params.data).returning(table))
            row = result.fetchone()
            return dict(row._mapping) if row else params.data
    except IntegrityError as e:
        if "unique" in str(e).lower() or "duplicate" in str(e).lower():
            raise DuplicateKeyError(str(e), cause=e)
        raise ConstraintViolationError(str(e), cause=e)
    except OperationalError as e:
        raise ConnectionError(str(e), cause=e)
    except Exception as e:
        raise QueryFailedError(f"Create failed: {str(e)}", cause=e)
```

After:

```python
async def create(self, params: CreateParams) -> Dict[str, Any]:
    """Create a single record."""
    try:
        table = self._get_table(params.model, params.namespace)
        if isinstance(self.engine, AsyncEngine):
            async with self.engine.begin() as conn:
                result = await conn.execute(insert(table).values(**params.data).returning(table))
                row = result.fetchone()
                return dict(row._mapping) if row else params.data
        else:
            with self.engine.begin() as conn:
                result = conn.execute(insert(table).values(**params.data).returning(table))
                row = result.fetchone()
                return dict(row._mapping) if row else params.data
    except IntegrityError as e:
        if "unique" in str(e).lower() or "duplicate" in str(e).lower():
            raise DuplicateKeyError(str(e), cause=e)
        raise ConstraintViolationError(str(e), cause=e)
    except OperationalError as e:
        raise ConnectionError(str(e), cause=e)
    except Exception as e:
        raise QueryFailedError(f"Create failed: {str(e)}", cause=e)
```

Spotted by Graphite


Comment on lines +321 to +324
with self.engine.begin() as conn:
    # Create transaction adapter
    trx_adapter = SQLAlchemyTransactionAdapter(conn, self)
    return await callback(trx_adapter)

Critical: Transaction context not properly shared - The transaction creates a connection with self.engine.begin() but the SQLAlchemyTransactionAdapter operations call self.parent.create(), self.parent.update(), etc. which open their own separate connections via self.engine.begin(). This means operations within the transaction callback are not actually part of the same transaction.

Fix: The transaction adapter needs to use the existing connection, not create new ones:

class SQLAlchemyTransactionAdapter:
    def __init__(self, connection, parent_adapter: SQLAlchemyAdapter):
        self.connection = connection
        self.parent = parent_adapter
    
    async def create(self, params: CreateParams) -> Dict[str, Any]:
        table = self.parent._get_table(params.model, params.namespace)
        result = self.connection.execute(insert(table).values(**params.data).returning(table))
        # ... use self.connection instead of opening new connections

Spotted by Graphite


@greptile-apps

greptile-apps bot commented Apr 4, 2026

Greptile Summary

This PR introduces a new packages/python-sqlalchemy package providing a SQLAlchemy adapter for superfunctions.db. The adapter wraps SQLAlchemy's Core API to implement the Adapter interface (CRUD, transactions, health, schema). While the structure is sound, several critical logic bugs make the implementation unsafe to merge as-is.

Key issues found:

  • Critical – broken transaction isolation: SQLAlchemyTransactionAdapter delegates all operations back to SQLAlchemyAdapter parent methods, which each open their own independent connections. The stored transaction connection is never used for any data operation, so nothing inside a transaction() callback is actually atomic or rollback-safe.
  • Critical – UnboundLocalError in find_one: When params.select is set but params.where is None, the code references where_clause (line 159) which was only assigned inside the if params.where: block (line 154), causing an UnboundLocalError at runtime.
  • High – find_many drops pagination when select is used: The if params.select: block rebuilds the query from scratch, silently discarding any previously applied limit, offset, and order_by.
  • High – MySQL is advertised but .returning() is unsupported by MySQL: The RETURNING clause is used in create, update, and create_many — MySQL does not support this clause and SQLAlchemy will raise a CompileError for the MySQL dialect.
  • Medium – namespace_prefix is stored but never applied: The constructor argument namespace_prefix is stored in self.namespace_prefix but _get_table_name never reads it, making the feature silently inoperative.
  • No tests are included despite pyproject.toml configuring a tests/ directory.
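
The MySQL RETURNING issue can be worked around with a dialect-aware write path; a hedged sketch (helper name hypothetical; assumes SQLAlchemy 2.x, where dialects expose an `insert_returning` flag):

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, insert, select)

def insert_returning_row(conn, table, values):
    # Use RETURNING where the dialect supports it (Postgres, SQLite >= 3.35)...
    if getattr(conn.dialect, "insert_returning", False):
        row = conn.execute(insert(table).values(**values).returning(table)).fetchone()
        return dict(row._mapping) if row else values
    # ...otherwise (e.g. MySQL) fall back to inserted_primary_key + re-select.
    result = conn.execute(insert(table).values(**values))
    pk = dict(zip([c.name for c in table.primary_key], result.inserted_primary_key))
    row = conn.execute(
        select(table).where(*[table.c[k] == v for k, v in pk.items()])
    ).fetchone()
    return dict(row._mapping) if row else values

# Demo against in-memory SQLite; both branches produce the same row shape.
metadata = MetaData()
users = Table("users", metadata,
              Column("id", Integer, primary_key=True),
              Column("name", String))
engine = create_engine("sqlite://")
metadata.create_all(engine)
with engine.begin() as conn:
    created = insert_returning_row(conn, users, {"name": "ada"})
```

Either branch returns the full created row, so `create`/`update`/`create_many` could share one helper instead of compiling RETURNING on a dialect that rejects it.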

Confidence Score: 1/5

Not safe to merge — three P0 bugs will cause runtime crashes or silently broken behaviour in production

Score of 1 reflects: (1) an UnboundLocalError crash in find_one when select is used without where, (2) find_many silently returning wrong results when select and limit/offset/order_by are combined, and (3) SQLAlchemyTransactionAdapter providing zero atomicity because all operations bypass the transaction connection. The MySQL RETURNING incompatibility and unused namespace_prefix add further correctness risk. No tests are present to catch any of these.

adapter.py requires significant rework: fix the find_one UnboundLocalError, rebuild the find_many select branch to preserve pagination, rewrite SQLAlchemyTransactionAdapter to use its own connection, add MySQL-compatible write paths, and wire up namespace_prefix in _get_table_name

Important Files Changed

Filename Overview
packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py Core adapter implementation — contains 3 P0 logic bugs (UnboundLocalError in find_one, find_many drops pagination when select is used, SQLAlchemyTransactionAdapter provides no actual transaction isolation) plus MySQL RETURNING incompatibility and unused namespace_prefix
packages/python-sqlalchemy/superfunctions_sqlalchemy/init.py Clean module init — correctly re-exports SQLAlchemyAdapter and create_adapter with version pinned to 0.1.0
packages/python-sqlalchemy/pyproject.toml Well-structured package config with hatchling, optional extras per DB driver; minor concern is advertising MySQL support that the adapter code does not yet deliver
packages/python-sqlalchemy/README.md Clear usage documentation; claims MySQL support and shows async usage (await adapter.create) which is inconsistent with the synchronous engine usage internally

Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    CA[create_adapter\nengine, namespace_prefix] --> SA[SQLAlchemyAdapter]
    SA -->|CRUD methods| E{engine.begin / engine.connect}
    E --> DB[(Database)]

    SA -->|transaction callback| TA[SQLAlchemyTransactionAdapter\nstores connection]
    TA -->|delegates back to| SA
    SA -->|opens NEW connection| DB
    TA -. stored connection\nnever used .-> DB

    subgraph BUG ["⚠ Transaction Bug"]
        TA
    end

    style BUG fill:#ffcccc,stroke:#cc0000
    style TA fill:#ff9999
```
Prompt To Fix All With AI
This is a comment left during a code review.
Path: packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
Line: 154-159

Comment:
**`UnboundLocalError` when `select` is used without `where`**

When `params.select` is provided but `params.where` is falsy, the expression on line 159 (`where_clause if params.where else None`) references `where_clause` — but that name is only assigned inside the `if params.where:` block on line 154. Python will raise `UnboundLocalError` at runtime for any call like `find_one(FindOneParams(model=..., select=["id", "name"]))`.

```suggestion
        if params.where:
            where_clause = self._build_where_clause(table, params.where)
            query = query.where(where_clause)
        
        if params.select:
            columns = [getattr(table.c, field) for field in params.select]
            query = select(*columns).select_from(table)
            if params.where:
                query = query.where(where_clause)
```

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
Line: 190-194

Comment:
**`select` column projection silently discards `limit`, `offset`, and `order_by`**

The `if params.select:` block replaces `query` with a brand-new `select(*columns).select_from(table)` object, which has no memory of the `.order_by()` (lines 180–182), `.limit()` (line 185), and `.offset()` (line 188) that were just applied. Any `find_many` call combining `select` with `limit`/`offset`/`order_by` will return unsorted, unpaginated results with no error.

```suggestion
            if params.select:
                columns = [getattr(table.c, field) for field in params.select]
                new_query = select(*columns).select_from(table)
                if params.where:
                    new_query = new_query.where(self._build_where_clause(table, params.where))
                if params.order_by:
                    for order in params.order_by:
                        col = getattr(table.c, order.field)
                        new_query = new_query.order_by(col.desc() if order.direction == "desc" else col)
                if params.limit:
                    new_query = new_query.limit(params.limit)
                if params.offset:
                    new_query = new_query.offset(params.offset)
                query = new_query
```

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
Line: 388-402

Comment:
**`SQLAlchemyTransactionAdapter` operations run outside the transaction**

Every method here delegates straight to `self.parent.create()`, `self.parent.find_one()`, etc. Each of those parent methods opens its **own new connection** via `with self.engine.begin() as conn:` or `with self.engine.connect() as conn:`, completely ignoring `self.connection`. The stored connection is used only in `rollback()` (line 386) and nowhere else.

This means operations issued inside a `transaction()` callback are individually auto-committed on separate connections — they are not atomic and cannot be rolled back together. The transaction abstraction provides no isolation guarantee at all.

Each method must use `self.connection` directly, for example:

```python
async def create(self, params: CreateParams) -> Dict[str, Any]:
    table = self.parent._get_table(params.model, params.namespace)
    result = self.connection.execute(
        insert(table).values(**params.data).returning(table)
    )
    row = result.fetchone()
    return dict(row._mapping) if row else params.data
```
All other delegated methods need equivalent treatment.

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
Line: 132-136

Comment:
**`.returning()` is unsupported by MySQL**

`.returning(table)` is used in `create` (line 134), `update` (line 211), and `create_many` (line 247). MySQL does not support the `RETURNING` clause — SQLAlchemy will raise a `CompileError` for the MySQL dialect. Both the README and `pyproject.toml` advertise MySQL as a first-class supported backend (`pip install superfunctions-sqlalchemy[mysql]`), so every MySQL user will hit this on every write operation.

For MySQL compatibility, replace the `RETURNING`-based fetch with a dialect-aware approach such as capturing `result.lastrowid` after the insert and then selecting the row back, or using SQLAlchemy's `Inspector`/`select` pattern.

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
Line: 50-74

Comment:
**`namespace_prefix` is stored but never applied to table names**

`self.namespace_prefix` is accepted and stored in `__init__` (line 51), but `_get_table_name` (lines 70–74) only uses the per-call `namespace` argument — it never reads `self.namespace_prefix`. Any adapter configured with a prefix (e.g., `create_adapter(engine, namespace_prefix="myapp_")`) will have it silently ignored on every query.

```suggestion
    def _get_table_name(self, model: str, namespace: Optional[str] = None) -> str:
        """Get full table name with namespace."""
        if namespace:
            return f"{self.namespace_prefix}{namespace}_{model}"
        return f"{self.namespace_prefix}{model}"
```

How can I resolve this? If you propose a fix, please make it concise.

Reviews (1): Last reviewed commit: "SF-30 add Python SQLAlchemy shared adapt..." | Re-trigger Greptile

Comment on lines +154 to +159
```python
            where_clause = self._build_where_clause(table, params.where)
            query = query.where(where_clause)

        if params.select:
            columns = [getattr(table.c, field) for field in params.select]
            query = select(*columns).select_from(table).where(where_clause if params.where else None)
```
P0 UnboundLocalError when select is used without where

When params.select is provided but params.where is falsy, the expression on line 159 (where_clause if params.where else None) references where_clause — but that name is only assigned inside the if params.where: block on line 154. Python will raise UnboundLocalError at runtime for any call like find_one(FindOneParams(model=..., select=["id", "name"])).

Suggested change

```diff
-            where_clause = self._build_where_clause(table, params.where)
-            query = query.where(where_clause)
-
-        if params.select:
-            columns = [getattr(table.c, field) for field in params.select]
-            query = select(*columns).select_from(table).where(where_clause if params.where else None)
+        if params.where:
+            where_clause = self._build_where_clause(table, params.where)
+            query = query.where(where_clause)
+
+        if params.select:
+            columns = [getattr(table.c, field) for field in params.select]
+            query = select(*columns).select_from(table)
+            if params.where:
+                query = query.where(where_clause)
```

Comment on lines +190 to +194
```python
            if params.select:
                columns = [getattr(table.c, field) for field in params.select]
                query = select(*columns).select_from(table)
                if params.where:
                    query = query.where(self._build_where_clause(table, params.where))
```
P0 select column projection silently discards limit, offset, and order_by

The if params.select: block replaces query with a brand-new select(*columns).select_from(table) object, which has no memory of the .order_by() (lines 180–182), .limit() (line 185), and .offset() (line 188) that were just applied. Any find_many call combining select with limit/offset/order_by will return unsorted, unpaginated results with no error.

Suggested change

```diff
-            if params.select:
-                columns = [getattr(table.c, field) for field in params.select]
-                query = select(*columns).select_from(table)
-                if params.where:
-                    query = query.where(self._build_where_clause(table, params.where))
+            if params.select:
+                columns = [getattr(table.c, field) for field in params.select]
+                new_query = select(*columns).select_from(table)
+                if params.where:
+                    new_query = new_query.where(self._build_where_clause(table, params.where))
+                if params.order_by:
+                    for order in params.order_by:
+                        col = getattr(table.c, order.field)
+                        new_query = new_query.order_by(col.desc() if order.direction == "desc" else col)
+                if params.limit:
+                    new_query = new_query.limit(params.limit)
+                if params.offset:
+                    new_query = new_query.offset(params.offset)
+                query = new_query
```

Comment on lines +388 to +402
```python
    # Delegate all operations to parent adapter
    async def create(self, params: CreateParams) -> Dict[str, Any]:
        return await self.parent.create(params)

    async def find_one(self, params: FindOneParams) -> Optional[Dict[str, Any]]:
        return await self.parent.find_one(params)

    async def find_many(self, params: FindManyParams) -> List[Dict[str, Any]]:
        return await self.parent.find_many(params)

    async def update(self, params: UpdateParams) -> Dict[str, Any]:
        return await self.parent.update(params)

    async def delete(self, params: DeleteParams) -> None:
        return await self.parent.delete(params)
```
P0 SQLAlchemyTransactionAdapter operations run outside the transaction

Every method here delegates straight to self.parent.create(), self.parent.find_one(), etc. Each of those parent methods opens its own new connection via with self.engine.begin() as conn: or with self.engine.connect() as conn:, completely ignoring self.connection. The stored connection is used only in rollback() (line 386) and nowhere else.

This means operations issued inside a transaction() callback are individually auto-committed on separate connections — they are not atomic and cannot be rolled back together. The transaction abstraction provides no isolation guarantee at all.

Each method must use self.connection directly, for example:

```python
async def create(self, params: CreateParams) -> Dict[str, Any]:
    table = self.parent._get_table(params.model, params.namespace)
    result = self.connection.execute(
        insert(table).values(**params.data).returning(table)
    )
    row = result.fetchone()
    return dict(row._mapping) if row else params.data
```

All other delegated methods need equivalent treatment.


Comment on lines +132 to +136

```python
        with self.engine.begin() as conn:
            result = conn.execute(insert(table).values(**params.data).returning(table))
            row = result.fetchone()
            return dict(row._mapping) if row else params.data
```
P1 .returning() is unsupported by MySQL

.returning(table) is used in create (line 134), update (line 211), and create_many (line 247). MySQL does not support the RETURNING clause — SQLAlchemy will raise a CompileError for the MySQL dialect. Both the README and pyproject.toml advertise MySQL as a first-class supported backend (pip install superfunctions-sqlalchemy[mysql]), so every MySQL user will hit this on every write operation.

For MySQL compatibility, replace the RETURNING-based fetch with a dialect-aware approach such as capturing result.lastrowid after the insert and then selecting the row back, or using SQLAlchemy's Inspector/select pattern.


Comment on lines +50 to +74
```python
        self.engine = engine
        self.namespace_prefix = namespace_prefix
        self.metadata = MetaData()
        self._start_time = time.time()

        # Metadata
        self.id = "sqlalchemy"
        self.name = "SQLAlchemy Adapter"
        self.version = "0.1.0"
        self.capabilities = AdapterCapabilities(
            transactions=True,
            nested_transactions=True,
            joins=True,
            full_text_search=False,
            json_operations=True,
            schema_management=True,
            migration_support=False,
            batch_operations=True,
        )

    def _get_table_name(self, model: str, namespace: Optional[str] = None) -> str:
        """Get full table name with namespace."""
        if namespace:
            return f"{namespace}_{model}"
        return model
```
P1 namespace_prefix is stored but never applied to table names

self.namespace_prefix is accepted and stored in __init__ (line 51), but _get_table_name (lines 70–74) only uses the per-call namespace argument — it never reads self.namespace_prefix. Any adapter configured with a prefix (e.g., create_adapter(engine, namespace_prefix="myapp_")) will have it silently ignored on every query.

Suggested change

```diff
     def _get_table_name(self, model: str, namespace: Optional[str] = None) -> str:
         """Get full table name with namespace."""
         if namespace:
-            return f"{namespace}_{model}"
-        return model
+            return f"{self.namespace_prefix}{namespace}_{model}"
+        return f"{self.namespace_prefix}{model}"
```


@cubic-dev-ai cubic-dev-ai bot left a comment


7 issues found across 4 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py">

<violation number="1" location="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py:42">
P1: The type signature accepts `AsyncEngine` but every method uses synchronous context managers (`with self.engine.begin()`, `with self.engine.connect()`). Passing an `AsyncEngine` will fail at runtime because `AsyncEngine` requires `async with`. Even with a sync `Engine`, the `async def` methods perform blocking I/O that will block the event loop.

Either remove `AsyncEngine` from the type signature and document that this adapter is sync-only, or add proper async code paths using `async with engine.begin()` when an `AsyncEngine` is provided.</violation>
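If the sync-only route is taken, the rejection can be made explicit at construction time. A minimal sketch, assuming the adapter stays synchronous; `validate_engine` is a hypothetical helper, not part of the adapter's API.

```python
# Hedged sketch: fail fast when an AsyncEngine is passed to a sync-only
# adapter, instead of failing later inside a query.
from sqlalchemy import create_engine
from sqlalchemy.engine import Engine

def validate_engine(engine):
    """Accept only a synchronous Engine; AsyncEngine requires `async with`."""
    try:
        from sqlalchemy.ext.asyncio import AsyncEngine
        if isinstance(engine, AsyncEngine):
            raise TypeError("SQLAlchemyAdapter is sync-only; pass a sqlalchemy.Engine")
    except ImportError:
        pass  # async extras unavailable, so nothing to reject
    if not isinstance(engine, Engine):
        raise TypeError(f"expected sqlalchemy.Engine, got {type(engine).__name__}")
    return engine

engine = validate_engine(create_engine("sqlite://"))
```

The check turns a confusing mid-query failure into a clear `TypeError` at adapter construction.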

<violation number="2" location="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py:70">
P2: `namespace_prefix` is accepted and stored in `__init__` but `_get_table_name()` only considers `namespace` and `model`, making the prefix option a no-op. Table names will never include the prefix, which can cause collisions in multi-tenant setups.</violation>

<violation number="3" location="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py:128">
P0: Transaction adapter delegates operations back to the parent adapter, which opens its own connections. The `self.connection` passed to `SQLAlchemyTransactionAdapter` is stored but never used, so all operations inside a `transaction()` callback execute outside the transaction on separate connections. Commits and rollbacks have no effect on the actual work performed.

The transaction adapter needs to execute statements on `self.connection` directly instead of delegating to parent methods that create their own connections.</violation>

<violation number="4" location="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py:157">
P1: When `params.select` is provided, the query is rebuilt from scratch, silently discarding any previously applied `order_by`, `limit`, and `offset` clauses. Queries with both column selection and pagination/sorting will return incorrectly ordered or unpaginated results.

Instead of rebuilding the query, apply column selection earlier (before order_by/limit/offset) or use `.with_only_columns()` on the existing query.</violation>

<violation number="5" location="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py:159">
P2: When `params.select` is set but `params.where` is falsy, this passes `None` to `.where()`, which can produce SQLAlchemy warnings or errors. Build the projection query separately and only apply `.where()` conditionally.</violation>
</file>

<file name="packages/python-sqlalchemy/README.md">

<violation number="1" location="packages/python-sqlalchemy/README.md:55">
P2: The README claims AsyncEngine support, but the adapter implementation uses synchronous engine/connection calls and does not implement async SQLAlchemy access patterns.</violation>

<violation number="2" location="packages/python-sqlalchemy/README.md:68">
P2: The example uses `await` at top level, which is not valid in normal Python script usage. Wrap the calls in an async function and run it via `asyncio.run(...)`.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.


@cubic-dev-ai cubic-dev-ai bot left a comment


3 issues found across 2 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py">

<violation number="1" location="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py:142">
P2: Re-selecting after INSERT by matching on all `params.data` fields is fragile: it may match the wrong row when data fields aren't unique, and the `params.data` fallback loses server-generated values (auto-increment IDs, default timestamps). The previous `.returning(table)` approach handled this correctly.</violation>

<violation number="2" location="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py:226">
P1: The re-select after UPDATE uses the original WHERE clause, which will fail to find the row if the update changed any column referenced in that clause. For example, updating `email` when `WHERE email = 'old@...'` causes a spurious `NotFoundError`. The previous code avoided this by using `.returning(table)` on the UPDATE statement itself.</violation>

<violation number="3" location="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py:275">
P2: `create_many` now returns a shallow copy of the input data (`list(params.data)`) instead of the actual inserted rows. Server-generated columns (auto-increment IDs, defaults) are lost. The old code used `.returning(table)` to return the real persisted records.</violation>
</file>
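The re-select-after-UPDATE pitfall in violation 2 can be avoided by pinning the row's primary key before the update and re-selecting by that key. A hedged sketch; the schema and the `update_row` helper are illustrative, not the adapter's real signatures.

```python
# Hedged sketch: resolve the target row's pk first, then UPDATE and re-select
# by pk, so the re-select survives changes to WHERE-referenced columns.
from sqlalchemy import (
    Column, Integer, MetaData, String, Table, create_engine, insert, select, update,
)

metadata = MetaData()
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("email", String(100)),
)
engine = create_engine("sqlite://")
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(insert(users).values(id=1, email="old@example.com"))

def update_row(where_clause, values):
    with engine.begin() as conn:
        # Pin the row by pk; after the UPDATE the original WHERE may no longer match.
        pk = conn.execute(select(users.c.id).where(where_clause)).scalar_one()
        conn.execute(update(users).where(users.c.id == pk).values(**values))
        return dict(conn.execute(select(users).where(users.c.id == pk)).fetchone()._mapping)

row = update_row(users.c.email == "old@example.com", {"email": "new@example.com"})
```

Updating the very column named in the WHERE clause no longer produces a spurious not-found result.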

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Comment on lines +379 to +391
```python
    async def update_many(self, params: Optional[UpdateManyParams] = None, **kwargs: Any) -> int:
        """Update multiple records."""
        normalized = self._coerce_params(UpdateManyParams, params, kwargs)
        try:
            table = self._get_table(normalized.model, normalized.namespace)
            where_clause = self._build_where_clause(table, normalized.where)

            with self.engine.begin() as conn:
                result = conn.execute(update(table).where(where_clause).values(**normalized.data))
                return result.rowcount

        except Exception as e:
            raise QueryFailedError(f"UpdateMany failed: {str(e)}", cause=e)
```

CRITICAL: update_many will update ALL records if where clause is empty

If normalized.where is None or an empty list, _build_where_clause returns None, causing the update to execute without a WHERE clause and modify every record in the table.

```python
async def update_many(self, params: Optional[UpdateManyParams] = None, **kwargs: Any) -> int:
    normalized = self._coerce_params(UpdateManyParams, params, kwargs)
    try:
        table = self._get_table(normalized.model, normalized.namespace)
        where_clause = self._build_where_clause(table, normalized.where)

        # Add validation
        if where_clause is None:
            raise ValueError("update_many requires a where clause to prevent updating all records")

        with self.engine.begin() as conn:
            result = conn.execute(update(table).where(where_clause).values(**normalized.data))
            return result.rowcount
```
Suggested change

```diff
     async def update_many(self, params: Optional[UpdateManyParams] = None, **kwargs: Any) -> int:
         """Update multiple records."""
         normalized = self._coerce_params(UpdateManyParams, params, kwargs)
         try:
             table = self._get_table(normalized.model, normalized.namespace)
             where_clause = self._build_where_clause(table, normalized.where)
 
+            # Add validation
+            if where_clause is None:
+                raise ValueError("update_many requires a where clause to prevent updating all records")
+
             with self.engine.begin() as conn:
                 result = conn.execute(update(table).where(where_clause).values(**normalized.data))
                 return result.rowcount
 
         except Exception as e:
             raise QueryFailedError(f"UpdateMany failed: {str(e)}", cause=e)
```

Spotted by Graphite



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (4)
packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py (3)

468-473: Sync context manager with async callback may block the event loop.

The transaction method uses a synchronous with self.engine.begin() context manager but awaits an async callback inside. Since this adapter targets sync engines, blocking is expected. However, this pattern may cause issues if users expect non-blocking behavior in an async context.

Consider documenting this behavior or using run_in_executor if true async transaction support is needed in the future.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines
468 - 473, The transaction method uses a synchronous context manager (with
self.engine.begin()) while awaiting an async callback, which can block the event
loop; update the transaction(self, callback) implementation or docs: either
document that transaction is blocking/sync-only in the method docstring
(mentioning SQLAlchemyTransactionAdapter and that this adapter targets sync
engines), or change the implementation to offload the blocking work to a thread
executor (e.g., run the with self.engine.begin() and callback(trx_adapter)
inside run_in_executor) to avoid blocking the event loop if true async behavior
is required.
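The run_in_executor suggestion can be sketched as follows, assuming the adapter keeps its sync engine; the `transaction` facade and `_run_transaction` helper names are illustrative, not the adapter's real API.

```python
# Hedged sketch: keep the async facade but run the blocking sync transaction
# in a worker thread via asyncio.to_thread, so awaiting it does not stall
# the event loop.
import asyncio

from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")

def _run_transaction(work):
    # Blocking part: one engine-level transaction, committed or rolled back as a unit.
    with engine.begin() as conn:
        return work(conn)

async def transaction(work):
    # Offload the blocking context manager to a thread; the loop stays free.
    return await asyncio.to_thread(_run_transaction, work)

result = asyncio.run(transaction(lambda conn: conn.execute(text("SELECT 1")).scalar_one()))
```

`asyncio.to_thread` (Python 3.9+) is a thin wrapper over `run_in_executor` with the default thread pool.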

233-240: Use raise ... from e for proper exception chaining.

The custom exceptions accept a cause parameter, but Python's standard raise X from e syntax preserves the full traceback chain. Without it, the original exception context may be lost in some scenarios.

Proposed fix
```diff
         except IntegrityError as e:
             if "unique" in str(e).lower() or "duplicate" in str(e).lower():
-                raise DuplicateKeyError(str(e), cause=e)
-            raise ConstraintViolationError(str(e), cause=e)
+                raise DuplicateKeyError(str(e), cause=e) from e
+            raise ConstraintViolationError(str(e), cause=e) from e
         except OperationalError as e:
-            raise ConnectionError(str(e), cause=e)
+            raise ConnectionError(str(e), cause=e) from e
         except Exception as e:
-            raise QueryFailedError(f"Create failed: {str(e)}", cause=e)
+            raise QueryFailedError(f"Create failed: {str(e)}", cause=e) from e
```

This pattern should be applied to all similar exception handling blocks throughout the file (lines 261, 293, 320-325, 346-349, 374-377, 390-391, 404-405, 465-466, 481-482).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines
233 - 240, The exception handlers in adapter.py (e.g., the block handling
IntegrityError/OperationalError/Exception that currently raises
DuplicateKeyError, ConstraintViolationError, ConnectionError, and
QueryFailedError with a cause= argument) should use Python exception chaining:
replace the current raises that pass cause=e with "raise <CustomError>(... )
from e" so the original traceback is preserved; apply this change consistently
to all similar handlers mentioned (lines around the handlers for
DuplicateKeyError, ConstraintViolationError, ConnectionError, QueryFailedError
and the other listed blocks) and ensure the message text remains the same while
removing or keeping the cause parameter as appropriate.

407-448: Consider using database-native upsert for atomicity.

The current implementation uses a find-then-create pattern with a DuplicateKeyError fallback, which handles the race condition but executes multiple non-atomic queries. SQLAlchemy 2.0+ supports database-native upserts via insert().on_conflict_do_update() (PostgreSQL) and similar constructs for other databases, providing true atomicity.

The current approach is functional and the duplicate-key retry is a reasonable fallback, but database-native upsert would be more robust under high concurrency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines
407 - 448, The upsert method in adapter.py (async def upsert) currently does a
non-atomic find-then-create with a DuplicateKeyError fallback, which risks race
conditions under concurrency; replace this pattern with a database-native atomic
upsert using SQLAlchemy's insert().on_conflict_do_update() (or the DB-specific
equivalent) for the model referenced by normalized.model, building the insert
from normalized.create and the update clause from normalized.update, and return
the selected columns (normalized.select) from that single statement; if your DB
backend does not support on_conflict_do_update, keep the DuplicateKeyError
fallback around the atomic attempt (call find_one/create/update only as a
fallback) so behavior remains correct for unsupported dialects.
packages/python-sqlalchemy/README.md (1)

59-93: Example shows async patterns but feature list says "Sync SQLAlchemy engines".

The example wraps synchronous adapter calls in an async def function and uses await, which is technically correct since the adapter methods are async. However, this may confuse users who see "✅ Sync SQLAlchemy engines" in the feature list (line 55) but then encounter async/await syntax in the example.

Consider either:

  1. Simplifying the example to show synchronous-style usage (if that's the intended primary use case)
  2. Clarifying in the features that the adapter exposes an async API over sync engines
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/python-sqlalchemy/README.md` around lines 59 - 93, The README
example shows async usage (async def main, await
adapter.create/adapter.find_many) while the feature list claims "Sync SQLAlchemy
engines"; update the docs for consistency by either replacing the code sample to
demonstrate synchronous-style usage (call create_adapter and then call
adapter.create and adapter.find_many in a normal, non-async flow) or, if the
library intentionally exposes an async API over sync engines, change the feature
list text to explicitly state that create_adapter returns an async adapter
(mention create_adapter, adapter.create, adapter.find_many, CreateParams,
FindManyParams) and add a short note above the example clarifying that the
adapter methods are async even when using sync SQLAlchemy engines.
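The "async API over a sync engine" shape the note describes can be illustrated generically. This sketch does not reproduce the package's real API (`create_adapter`, `CreateParams`); it only shows the pattern: a public `async` method that offloads blocking SQLAlchemy work to a thread so the event loop stays responsive.

```python
import asyncio

from sqlalchemy import create_engine, text

engine = create_engine("sqlite+pysqlite:///:memory:")


def _fetch_one_sync() -> int:
    # Ordinary blocking call on a sync Engine.
    with engine.connect() as conn:
        return conn.execute(text("SELECT 1")).scalar_one()


async def fetch_one() -> int:
    # asyncio.to_thread keeps the sync driver off the event loop.
    return await asyncio.to_thread(_fetch_one_sync)


result = asyncio.run(fetch_one())
print(result)  # 1
```

A README example in this style would make the "async methods over sync engines" contract explicit rather than leaving readers to infer it from the feature list.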
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py`:
- Around line 215-220: The dict comprehension building the primary-key mapping
uses zip(table.primary_key.columns, inserted_primary_key) which silently
truncates on length mismatch; change that zip call to use zip(..., strict=True)
so a ValueError is raised when table.primary_key.columns and
inserted_primary_key have different lengths (locate the comprehension using
table.primary_key.columns and inserted_primary_key in adapter.py and update the
zip invocation to include strict=True).
- Around line 338-340: The NotFoundError is raised with an unnecessary f-string
literal in the deletion path; update the raise in the delete flow (the code
around result = conn.execute(delete(table).where(where_clause)) and the
NotFoundError) to use a plain string literal (remove the leading "f") so the
message becomes a normal string without interpolation.
- Around line 38-39: The SQLAlchemyAdapter class does not match the Adapter
protocol signatures and should either explicitly inherit and implement Adapter
or a matching Protocol should be declared and used as the factory return type;
to fix, update SQLAlchemyAdapter to "class SQLAlchemyAdapter(Adapter):" and
change its async method signatures (create, find_many, find_one, update, delete,
transaction) to match the Adapter Protocol (e.g., create(self, model: str, data:
Dict[str, Any], namespace: str = "datafn") -> None, etc.), or alternatively
define a new Protocol that matches the current method shapes and change
create_adapter's return annotation from Adapter to that new Protocol; ensure
create_adapter, SQLAlchemyAdapter, and all listed method names are updated
consistently.

---

Nitpick comments:
In `@packages/python-sqlalchemy/README.md`:
- Around line 59-93: The README example shows async usage (async def main, await
adapter.create/adapter.find_many) while the feature list claims "Sync SQLAlchemy
engines"; update the docs for consistency by either replacing the code sample to
demonstrate synchronous-style usage (call create_adapter and then call
adapter.create and adapter.find_many in a normal, non-async flow) or, if the
library intentionally exposes an async API over sync engines, change the feature
list text to explicitly state that create_adapter returns an async adapter
(mention create_adapter, adapter.create, adapter.find_many, CreateParams,
FindManyParams) and add a short note above the example clarifying that the
adapter methods are async even when using sync SQLAlchemy engines.

In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py`:
- Around line 468-473: The transaction method uses a synchronous context manager
(with self.engine.begin()) while awaiting an async callback, which can block the
event loop; update the transaction(self, callback) implementation or docs:
either document that transaction is blocking/sync-only in the method docstring
(mentioning SQLAlchemyTransactionAdapter and that this adapter targets sync
engines), or change the implementation to offload the blocking work to a thread
executor (e.g., run the with self.engine.begin() and callback(trx_adapter)
inside run_in_executor) to avoid blocking the event loop if true async behavior
is required.
- Around line 233-240: The exception handlers in adapter.py (e.g., the block
handling IntegrityError/OperationalError/Exception that currently raises
DuplicateKeyError, ConstraintViolationError, ConnectionError, and
QueryFailedError with a cause= argument) should use Python exception chaining:
replace the current raises that pass cause=e with "raise <CustomError>(... )
from e" so the original traceback is preserved; apply this change consistently
to all similar handlers mentioned (lines around the handlers for
DuplicateKeyError, ConstraintViolationError, ConnectionError, QueryFailedError
and the other listed blocks) and ensure the message text remains the same while
removing or keeping the cause parameter as appropriate.
- Around line 407-448: The upsert method in adapter.py (async def upsert)
currently does a non-atomic find-then-create with a DuplicateKeyError fallback,
which risks race conditions under concurrency; replace this pattern with a
database-native atomic upsert using SQLAlchemy's
insert().on_conflict_do_update() (or the DB-specific equivalent) for the model
referenced by normalized.model, building the insert from normalized.create and
the update clause from normalized.update, and return the selected columns
(normalized.select) from that single statement; if your DB backend does not
support on_conflict_do_update, keep the DuplicateKeyError fallback around the
atomic attempt (call find_one/create/update only as a fallback) so behavior
remains correct for unsupported dialects.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 2cf6a2ff-7cda-408b-bf9c-05629eb41d55

📥 Commits

Reviewing files that changed from the base of the PR and between e105c93 and 3a66514.

📒 Files selected for processing (5)
  • packages/python-sqlalchemy/README.md
  • packages/python-sqlalchemy/pyproject.toml
  • packages/python-sqlalchemy/superfunctions_sqlalchemy/__init__.py
  • packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
  • packages/python-sqlalchemy/tests/test_adapter.py

Comment on lines +38 to +39
class SQLAlchemyAdapter:
"""SQLAlchemy adapter for superfunctions.db"""

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if Adapter is a Protocol or ABC in superfunctions.db
rg -n "class Adapter" --type py -A5 | head -50

Repository: 21nCo/super-functions

Length of output: 399


🏁 Script executed:

#!/bin/bash
# Find the adapter.py file in packages/python-sqlalchemy
find . -type f -name "adapter.py" | grep -i sqlalchemy

Repository: 21nCo/super-functions

Length of output: 130


🏁 Script executed:

#!/bin/bash
# Check the full Adapter protocol definition to see all required methods
rg -n "class Adapter" --type py -A 50 | head -100

Repository: 21nCo/super-functions

Length of output: 2164


🏁 Script executed:

#!/bin/bash
# If adapter.py exists, check SQLAlchemyAdapter class definition
find . -type f -path "*/packages/python-sqlalchemy/*adapter.py" -exec head -60 {} +

Repository: 21nCo/super-functions

Length of output: 3692


🏁 Script executed:

#!/bin/bash
# Get the full SQLAlchemyAdapter class to see all methods
wc -l ./packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py

Repository: 21nCo/super-functions

Length of output: 134


🏁 Script executed:

#!/bin/bash
# Check if the class implements the required protocol methods
rg -n "async def (find_many|find_one|create|update|delete|transaction)" ./packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py

Repository: 21nCo/super-functions

Length of output: 1572


🏁 Script executed:

#!/bin/bash
# Check if Adapter Protocol is marked as runtime_checkable
rg -n "runtime_checkable|@runtime_checkable" ./datafn/python/datafn/db.py -B5

Repository: 21nCo/super-functions

Length of output: 47


🏁 Script executed:

#!/bin/bash
# Check how create_adapter function is defined and if there are type checking issues
rg -n "def create_adapter" ./packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py -A20

Repository: 21nCo/super-functions

Length of output: 736


🏁 Script executed:

#!/bin/bash
# Check full imports at top of db.py to see if Protocol is runtime_checkable
head -40 ./datafn/python/datafn/db.py

Repository: 21nCo/super-functions

Length of output: 1325


🏁 Script executed:

#!/bin/bash
# Check the exact definition of Adapter protocol with decorators
sed -n '1,35p' ./datafn/python/datafn/db.py

Repository: 21nCo/super-functions

Length of output: 1325


🏁 Script executed:

#!/bin/bash
# Check if SQLAlchemyAdapter has a different signature or if there are multiple Adapter definitions
rg -n "class.*Adapter" ./datafn/python/datafn/db.py -A3

Repository: 21nCo/super-functions

Length of output: 158


🏁 Script executed:

#!/bin/bash
# Check if superfunctions.db exports multiple versions of Adapter
cat ./datafn/python/datafn/__init__.py | head -50

Repository: 21nCo/super-functions

Length of output: 141


🏁 Script executed:

#!/bin/bash
# Compare the exact method signature of SQLAlchemyAdapter.create vs Adapter.create
sed -n '242,250p' ./packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py

Repository: 21nCo/super-functions

Length of output: 652


🏁 Script executed:

#!/bin/bash
# Check if there's a method signature mismatch - compare find_one signatures
sed -n '263,270p' ./packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py

Repository: 21nCo/super-functions

Length of output: 580


🏁 Script executed:

#!/bin/bash
# Check if there's parameter transformation happening - look for how params are used
sed -n '240,265p' ./packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py

Repository: 21nCo/super-functions

Length of output: 1340


SQLAlchemyAdapter has incompatible method signatures with the Adapter Protocol.

The create_adapter function declares return type Adapter (line 553), but SQLAlchemyAdapter does not explicitly inherit from or properly implement the Adapter protocol (line 38). More critically, the method signatures are incompatible:

  • Protocol expects: async def create(self, model: str, data: Dict[str, Any], namespace: str = "datafn") -> None
  • Implementation provides: async def create(self, params: Optional[CreateParams] = None, **kwargs: Any) -> Dict[str, Any]

This pattern is repeated across all protocol methods (find_many, find_one, update, delete, transaction). Because the signatures diverge structurally from the protocol, static type checkers (mypy, pyright) will flag the Adapter return annotation as a type error; and since Adapter is not decorated with @runtime_checkable, isinstance checks cannot catch the mismatch at runtime either.

Options:

  1. Declare class SQLAlchemyAdapter(Adapter): and adjust method signatures to match the protocol
  2. Or define a separate Protocol that matches the actual implementation and use that for return type
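Option 2 can be sketched with structural typing. The names below mirror the review discussion but are stand-ins, not the package's real types; the point is that a `Protocol` matching the implementation's actual shapes lets `create_adapter` type-check without changing any method signatures:

```python
import asyncio
from dataclasses import dataclass, field
from typing import Any, Dict, Optional, Protocol


@dataclass
class CreateParams:  # stand-in for the package's params type
    model: str
    data: Dict[str, Any] = field(default_factory=dict)


class ParamsAdapter(Protocol):
    async def create(
        self, params: Optional[CreateParams] = None, **kwargs: Any
    ) -> Dict[str, Any]: ...


class SQLAlchemyAdapter:
    async def create(
        self, params: Optional[CreateParams] = None, **kwargs: Any
    ) -> Dict[str, Any]:
        model = params.model if params else kwargs.get("model")
        return {"model": model}


def create_adapter() -> ParamsAdapter:
    # Structural typing: SQLAlchemyAdapter satisfies ParamsAdapter without
    # inheriting from it, so mypy/pyright accept this return.
    return SQLAlchemyAdapter()


row = asyncio.run(create_adapter().create(CreateParams(model="users")))
print(row)  # {'model': 'users'}
```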
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines
38 - 39, The SQLAlchemyAdapter class does not match the Adapter protocol
signatures and should either explicitly inherit and implement Adapter or a
matching Protocol should be declared and used as the factory return type; to
fix, update SQLAlchemyAdapter to "class SQLAlchemyAdapter(Adapter):" and change
its async method signatures (create, find_many, find_one, update, delete,
transaction) to match the Adapter Protocol (e.g., create(self, model: str, data:
Dict[str, Any], namespace: str = "datafn") -> None, etc.), or alternatively
define a new Protocol that matches the current method shapes and change
create_adapter's return annotation from Adapter to that new Protocol; ensure
create_adapter, SQLAlchemyAdapter, and all listed method names are updated
consistently.


@cubic-dev-ai cubic-dev-ai bot left a comment


3 issues found across 2 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py">

<violation number="1" location="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py:217">
P2: Add `strict=True` to this `zip()` call. If `table.primary_key.columns` and `inserted_primary_key` have different lengths, `zip()` silently truncates, which could produce an incomplete primary-key dict and cause an incorrect row lookup.</violation>

<violation number="2" location="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py:387">
P0: `update_many` will update **all records** in the table when `where` is `None` or empty. `_build_where_clause` returns `None` in that case, and SQLAlchemy's `.where(None)` is a no-op, producing an unfiltered `UPDATE`. Add a guard that raises before executing:

```python
if where_clause is None:
    raise ValueError("update_many requires a non-empty where clause to prevent updating all records")
```</violation>

<violation number="3" location="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py:543">
P1: `SQLAlchemyAdapter.find_many` is the only public method not updated to the `Optional[params] = None, **kwargs` pattern. Callers passing keyword arguments will get a `TypeError` from the main adapter, even though the same call works on `SQLAlchemyTransactionAdapter.find_many`.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (4)
packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py (4)

233-240: Use raise ... from e for proper exception chaining.

Ruff B904 flags these lines. While you pass cause=e to the error constructors, Python's standard exception chaining uses raise X from e which sets __cause__ and suppresses the implicit __context__. If superfunctions.db errors rely on the cause parameter, this may be intentional, but adding from e ensures proper traceback display.

♻️ Proposed fix
         except IntegrityError as e:
             if "unique" in str(e).lower() or "duplicate" in str(e).lower():
-                raise DuplicateKeyError(str(e), cause=e)
-            raise ConstraintViolationError(str(e), cause=e)
+                raise DuplicateKeyError(str(e), cause=e) from e
+            raise ConstraintViolationError(str(e), cause=e) from e
         except OperationalError as e:
-            raise ConnectionError(str(e), cause=e)
+            raise ConnectionError(str(e), cause=e) from e
         except Exception as e:
-            raise QueryFailedError(f"Create failed: {str(e)}", cause=e)
+            raise QueryFailedError(f"Create failed: {str(e)}", cause=e) from e

The same pattern applies to all exception handlers throughout the file (lines 261, 293, 323, 325, 349, 375, 377, 395, 413, 474, 490).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines
233 - 240, The except blocks currently re-raise custom exceptions with a cause=
parameter but don't use Python exception chaining; change each raise to use
"raise <ExceptionClass>(... ) from e" (e.g., replace raising
DuplicateKeyError(str(e), cause=e) with raising DuplicateKeyError(str(e)) from
e, and similarly for ConstraintViolationError, ConnectionError,
QueryFailedError) so __cause__ is set and tracebacks chain properly; apply this
same change to all similar handlers in the file that re-raise with cause=e
(including other occurrences that raise DuplicateKeyError,
ConstraintViolationError, ConnectionError, QueryFailedError).

513-528: Schema management methods are stubbed with TODOs.

The get_schema_version, set_schema_version, validate_schema, and create_schema methods are placeholders. This is acceptable for an initial release if schema management isn't a priority.

Do you want me to help implement these schema management methods, or would you prefer to track this as a follow-up issue?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines
513 - 528, The four schema-management methods (get_schema_version,
set_schema_version, validate_schema, create_schema) are currently stubs;
implement them so the adapter can persist and validate schemas: add a metadata
table or use existing migrations table to store/retrieve namespace version in
get_schema_version and set_schema_version (return int and update row
respectively), implement validate_schema to check a TableSchema object for
required fields/types and return a ValidationResult with valid=False and errors
populated when issues are found, and implement create_schema to run the
necessary DDL (CREATE TABLE/ALTER TABLE) for the provided CreateSchemaParams,
call validate_schema first, persist the new version via set_schema_version on
success, and return a SchemaCreation indicating success and any errors; use the
existing method names (get_schema_version, set_schema_version, validate_schema,
create_schema) to locate where to add this logic.

541-543: Synchronous rollback() call in async method.

The rollback method is declared async but performs a synchronous self.connection.rollback(). This is inconsistent with the async interface. For a sync engine, this works but blocks the event loop.

♻️ Consider making rollback non-blocking or documenting sync behavior

If this adapter is sync-only, document it clearly. Otherwise, wrap the call:

async def rollback(self) -> None:
    """Rollback the transaction."""
    # Note: Synchronous call - blocks event loop with sync engine
    self.connection.rollback()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines
541 - 543, The async rollback method currently calls the blocking
self.connection.rollback() directly; change it to avoid blocking the event loop
by running the synchronous rollback in an executor and awaiting it (e.g. use
asyncio.get_running_loop().run_in_executor(None, self.connection.rollback)) or,
if this adapter is intentionally sync-only, make rollback a regular def (remove
async) and update the docstring to document sync behavior; update the rollback
signature and docstring accordingly so callers and linters reflect the chosen
approach.

354-378: Consider bulk insert for create_many if returning individual rows isn't required.

Currently, create_many inserts rows one-by-one in a loop, which is O(n) round-trips. If the caller doesn't need the returned rows (or can accept just PKs), a single conn.execute(insert(table), normalized.data) would be significantly faster for large batches.

This is acceptable if the current behavior (returning full row data with select projection) is a requirement.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines
354 - 378, The create_many implementation currently loops calling _create for
each row; change it to perform a bulk insert when the caller does not need full
returned rows: in create_many, after normalizing params (CreateManyParams ->
normalized), if normalized.select is None or only requests primary key(s),
obtain the target Table for normalized.model, and use
conn.execute(insert(table), normalized.data) (or insert(...).returning(pk) when
PKs are desired and supported) to insert all rows in one call and return either
the returned PKs or an empty list; otherwise keep the existing per-row loop that
calls self._create for full select projections. Preserve the existing
IntegrityError and generic Exception handling and ensure namespace/transaction
semantics are unchanged.
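The bulk-insert path the prompt describes is a standard SQLAlchemy idiom: passing a list of dicts to `conn.execute(insert(table), rows)` issues a single executemany round-trip instead of N individual inserts. The schema below is illustrative, not the adapter's:

```python
from sqlalchemy import (
    Column, Integer, MetaData, String, Table, create_engine, func, insert,
    select,
)

metadata = MetaData()
items = Table(
    "items",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)

engine = create_engine("sqlite+pysqlite:///:memory:")
metadata.create_all(engine)

rows = [{"name": f"item-{i}"} for i in range(100)]
with engine.begin() as conn:
    # One executemany call, not 100 separate INSERT statements.
    conn.execute(insert(items), rows)

with engine.connect() as conn:
    total = conn.execute(select(func.count()).select_from(items)).scalar_one()
print(total)  # 100
```

The trade-off the comment notes still applies: executemany generally cannot return the full inserted rows, so the per-row loop stays necessary when a select projection is requested.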
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py`:
- Around line 74-114: In _build_where_clause add a clear fallback that raises an
exception when a WhereClause.operator is not handled so unrecognized operators
are not silently ignored; update the function (method _build_where_clause in
adapter.py) to validate each clause.operator and if it doesn't match any of the
handled Operator cases raise a ValueError (or custom error) including the
operator name and the clause.field/value to aid debugging so callers immediately
see unsupported operators instead of getting dropped conditions.
- Around line 428-437: When existing is truthy we call self.update(...) which
can raise NotFoundError if the record was deleted between find_one and update;
wrap the call to self.update (where UpdateParams is used and normalized is
referenced) in a try/except that catches NotFoundError and falls back to calling
self.create(...) with a CreateParams constructed from normalized.model,
normalized.create (or normalized.update as appropriate), normalized.select and
normalized.namespace so the operation succeeds even if the record was removed
before update.
- Around line 79-81: The loop building conditions uses getattr(table.c,
clause.field) which can raise AttributeError for invalid fields; update the
for-loop that iterates over where to validate that clause.field exists on
table.c before calling getattr, and if it does not, raise a QueryFailedError (or
wrap the AttributeError) with a clear message instead of letting AttributeError
bubble up so consumers get a meaningful error; reference the existing variables
and symbols: the for loop over where, table.c, clause.field, conditions, and
QueryFailedError to locate where to add the check and error handling.
- Around line 230-231: The current async method is using the synchronous context
manager with self.engine.begin(), which blocks the event loop; either convert
the adapter to use an AsyncEngine and async context managers or make the method
synchronous. Option A: create an AsyncEngine via create_async_engine and replace
with "async with self.async_engine.begin() as managed_conn" and ensure the
called helper _create accepts/uses the async connection and is awaited. Option
B: if you want synchronous behavior, remove the async/await on the method that
contains "with self.engine.begin()" and call self._create(params, managed_conn)
synchronously (no await), keeping _create as sync. Ensure references to
self.engine.begin(), _create, create_async_engine/AsyncEngine are updated
consistently.
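The first inline comment's "fail loudly on unhandled operators" fallback can be sketched independently of the adapter. `WhereClause` and the operator set below are illustrative stand-ins for the package's types:

```python
from dataclasses import dataclass
from typing import Any, List


@dataclass
class WhereClause:
    field: str
    operator: str
    value: Any


_HANDLED = {"eq", "ne", "gt", "gte", "lt", "lte", "in", "like"}


def validate_where(where: List[WhereClause]) -> None:
    for clause in where:
        if clause.operator not in _HANDLED:
            # Fail loudly instead of silently dropping the condition.
            raise ValueError(
                f"Unsupported operator {clause.operator!r} for field "
                f"{clause.field!r} (value={clause.value!r})"
            )


validate_where([WhereClause("age", "gte", 18)])  # passes silently
try:
    validate_where([WhereClause("age", "between", (1, 2))])
    caught = ""
except ValueError as e:
    caught = str(e)
print(caught)
```

A silently dropped condition is the worst failure mode here, since it widens the query's result set instead of narrowing it.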

---

Nitpick comments:
In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py`:
- Around line 233-240: The except blocks currently re-raise custom exceptions
with a cause= parameter but don't use Python exception chaining; change each
raise to use "raise <ExceptionClass>(... ) from e" (e.g., replace raising
DuplicateKeyError(str(e), cause=e) with raising DuplicateKeyError(str(e)) from
e, and similarly for ConstraintViolationError, ConnectionError,
QueryFailedError) so __cause__ is set and tracebacks chain properly; apply this
same change to all similar handlers in the file that re-raise with cause=e
(including other occurrences that raise DuplicateKeyError,
ConstraintViolationError, ConnectionError, QueryFailedError).
- Around line 513-528: The four schema-management methods (get_schema_version,
set_schema_version, validate_schema, create_schema) are currently stubs;
implement them so the adapter can persist and validate schemas: add a metadata
table or use existing migrations table to store/retrieve namespace version in
get_schema_version and set_schema_version (return int and update row
respectively), implement validate_schema to check a TableSchema object for
required fields/types and return a ValidationResult with valid=False and errors
populated when issues are found, and implement create_schema to run the
necessary DDL (CREATE TABLE/ALTER TABLE) for the provided CreateSchemaParams,
call validate_schema first, persist the new version via set_schema_version on
success, and return a SchemaCreation indicating success and any errors; use the
existing method names (get_schema_version, set_schema_version, validate_schema,
create_schema) to locate where to add this logic.
- Around line 541-543: The async rollback method currently calls the blocking
self.connection.rollback() directly; change it to avoid blocking the event loop
by running the synchronous rollback in an executor and awaiting it (e.g. use
asyncio.get_running_loop().run_in_executor(None, self.connection.rollback)) or,
if this adapter is intentionally sync-only, make rollback a regular def (remove
async) and update the docstring to document sync behavior; update the rollback
signature and docstring accordingly so callers and linters reflect the chosen
approach.
- Around line 354-378: The create_many implementation currently loops calling
_create for each row; change it to perform a bulk insert when the caller does
not need full returned rows: in create_many, after normalizing params
(CreateManyParams -> normalized), if normalized.select is None or only requests
primary key(s), obtain the target Table for normalized.model, and use
conn.execute(insert(table), normalized.data) (or insert(...).returning(pk) when
PKs are desired and supported) to insert all rows in one call and return either
the returned PKs or an empty list; otherwise keep the existing per-row loop that
calls self._create for full select projections. Preserve the existing
IntegrityError and generic Exception handling and ensure namespace/transaction
semantics are unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a1093074-1fb9-432b-91be-546eca914b24

📥 Commits

Reviewing files that changed from the base of the PR and between 3a66514 and f0b4d64.

📒 Files selected for processing (2)
  • packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
  • packages/python-sqlalchemy/tests/test_adapter.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/python-sqlalchemy/tests/test_adapter.py

Comment on lines +230 to +231
with self.engine.begin() as managed_conn:
return await self._create(params, managed_conn)

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

Does SQLAlchemy 2.0 support mixing sync Engine with async/await patterns?

💡 Result:

No, SQLAlchemy 2.0 does not support directly mixing a synchronous Engine with async/await patterns. Async operations require an AsyncEngine created via create_async_engine, which uses async database drivers (e.g., asyncpg, aiosqlite). However, SQLAlchemy provides mechanisms to integrate synchronous code within async contexts and vice versa:

  • AsyncEngine exposes a .sync_engine attribute, a synchronous Engine for running sync operations via run_sync.
  • AsyncSession exposes .sync_session for attaching sync event listeners.
  • AsyncConnection.run_sync or AsyncSession.run_sync execute synchronous functions (e.g., metadata.create_all) in an async context, for example: await conn.run_sync(Base.metadata.create_all).

Directly passing a sync Engine to async_sessionmaker or AsyncSession results in type errors or runtime failures, as confirmed by official docs and issues. Use async_sessionmaker(bind=async_engine) for proper async ORM usage. All examples in the SQLAlchemy 2.0 docs pair AsyncEngine with AsyncSession for async/await.


🏁 Script executed:

# Get an overview of the adapter.py file structure
head -50 packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py

Repository: 21nCo/super-functions

Length of output: 1322


🏁 Script executed:

# Find where self.engine is initialized and its type
rg "self\.engine\s*=" packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py -A 2 -B 2

Repository: 21nCo/super-functions

Length of output: 260


🏁 Script executed:

# Look at the full _create method to understand the context better
sed -n '202,244p' packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py | cat -n

Repository: 21nCo/super-functions

Length of output: 2417


🏁 Script executed:

# Check if AsyncEngine is being used anywhere in the file
rg "AsyncEngine|create_async_engine|async with.*begin" packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py

Repository: 21nCo/super-functions

Length of output: 47


🏁 Script executed:

# Search for the custom exception definitions in superfunctions package
rg "class (DuplicateKeyError|ConstraintViolationError|ConnectionError|QueryFailedError)" --type py -A 5

Repository: 21nCo/super-functions

Length of output: 47


Mixing synchronous engine.begin() with await blocks the event loop.

The sync context manager with self.engine.begin() blocks execution while holding a connection. SQLAlchemy 2.0 does not support mixing synchronous Engine with async/await patterns—this requires AsyncEngine created via create_async_engine with async database drivers (e.g., asyncpg, aiosqlite).

Either convert to AsyncEngine with async with engine.begin(), or remove the async decorator and use synchronous method signatures.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines
230 - 231, The current async method is using the synchronous context manager
with self.engine.begin(), which blocks the event loop; either convert the
adapter to use an AsyncEngine and async context managers or make the method
synchronous. Option A: create an AsyncEngine via create_async_engine and replace
with "async with self.async_engine.begin() as managed_conn" and ensure the
called helper _create accepts/uses the async connection and is awaited. Option
B: if you want synchronous behavior, remove the async/await on the method that
contains "with self.engine.begin()" and call self._create(params, managed_conn)
synchronously (no await), keeping _create as sync. Ensure references to
self.engine.begin(), _create, create_async_engine/AsyncEngine are updated
consistently.

Comment on lines +428 to +437
if existing:
    return await self.update(
        UpdateParams(
            model=normalized.model,
            where=normalized.where,
            data=normalized.update,
            select=normalized.select,
            namespace=normalized.namespace,
        )
    )

⚠️ Potential issue | 🟡 Minor

Potential NotFoundError if record is deleted between find_one and update.

If existing is truthy but the record is deleted by another transaction before update executes, NotFoundError will propagate. Consider catching this and falling back to create:

🛡️ Proposed fix
         if existing:
-            return await self.update(
-                UpdateParams(
-                    model=normalized.model,
-                    where=normalized.where,
-                    data=normalized.update,
-                    select=normalized.select,
-                    namespace=normalized.namespace,
+            try:
+                return await self.update(
+                    UpdateParams(
+                        model=normalized.model,
+                        where=normalized.where,
+                        data=normalized.update,
+                        select=normalized.select,
+                        namespace=normalized.namespace,
+                    )
                 )
-            )
+            except NotFoundError:
+                pass  # Fall through to create
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines
428 - 437, When existing is truthy we call self.update(...) which can raise
NotFoundError if the record was deleted between find_one and update; wrap the
call to self.update (where UpdateParams is used and normalized is referenced) in
a try/except that catches NotFoundError and falls back to calling
self.create(...) with a CreateParams constructed from normalized.model,
normalized.create (or normalized.update as appropriate), normalized.select and
normalized.namespace so the operation succeeds even if the record was removed
before update.

Comment on lines +481 to +486
async def transaction(self, callback: Callable) -> Any:
    """Execute operations within a transaction."""
    with self.engine.begin() as conn:
        # Create transaction adapter
        trx_adapter = SQLAlchemyTransactionAdapter(conn, self)
        return await callback(trx_adapter)

The callback parameter is typed as a bare Callable but its result is awaited unconditionally. If a plain (non-async) callback is passed, awaiting its return value raises a TypeError such as "object NoneType can't be used in 'await' expression".

async def transaction(self, callback: Callable) -> Any:
    """Execute operations within a transaction."""
    with self.engine.begin() as conn:
        trx_adapter = SQLAlchemyTransactionAdapter(conn, self)
        result = callback(trx_adapter)  # Don't await if not async
        if hasattr(result, '__await__'):
            return await result
        return result

Or update the type hint to require an async callable:

from typing import Awaitable

async def transaction(self, callback: Callable[[TransactionAdapter], Awaitable[Any]]) -> Any:
Suggested change
-async def transaction(self, callback: Callable) -> Any:
+async def transaction(self, callback: Callable[..., Awaitable[Any]]) -> Any:
     """Execute operations within a transaction."""
     with self.engine.begin() as conn:
         # Create transaction adapter
         trx_adapter = SQLAlchemyTransactionAdapter(conn, self)
         return await callback(trx_adapter)

Spotted by Graphite



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 9d7e3107f5

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +381 to +382
except Exception as e:
    raise QueryFailedError(f"CreateMany failed: {str(e)}", cause=e)

P2: Keep integrity errors typed in create_many

When create_many() hits a duplicate/constraint violation, _create() already converts DB exceptions into DuplicateKeyError/ConstraintViolationError, but this broad except Exception wraps those again as QueryFailedError. That prevents callers from handling expected integrity conflicts distinctly (for example retrying on duplicates or returning conflict-specific responses) and makes batch error handling inconsistent with create().
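The fix can be as small as re-raising the typed errors before the broad handler. A minimal sketch with stand-in exception classes and a stand-in _create callable (the real types live in the superfunctions package):

```python
class QueryFailedError(Exception): pass
class DuplicateKeyError(QueryFailedError): pass
class ConstraintViolationError(QueryFailedError): pass

def create_many(rows, _create):
    try:
        return [_create(r) for r in rows]
    except (DuplicateKeyError, ConstraintViolationError):
        raise  # keep integrity conflicts typed so callers can handle them
    except Exception as e:
        raise QueryFailedError(f"CreateMany failed: {e}") from e

def dup(_row):
    raise DuplicateKeyError("id already exists")

try:
    create_many([{}], dup)
except DuplicateKeyError:
    print("typed error preserved")
```

With this ordering, only genuinely unexpected exceptions get wrapped, matching create()'s single-row semantics.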


Comment on lines +485 to +486
trx_adapter = SQLAlchemyTransactionAdapter(conn, self)
return await callback(trx_adapter)

P2: Expose full adapter ops on transaction adapter

transaction() passes SQLAlchemyTransactionAdapter to the callback, but that adapter only implements single-row CRUD methods. Callbacks that use batch/advanced methods available on the main adapter (such as create_many, update_many, delete_many, upsert, or count) will fail with AttributeError, which breaks transaction callbacks that rely on the shared adapter surface.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py`:
- Around line 58-65: The Adapter currently advertises schema_management=True but
the related methods (validate_schema, create_schema, set_schema_version,
get_schema_version) are no-ops or failing; change the capability to
schema_management=False in AdapterCapabilities (and any other occurrences around
the similar block at lines ~518-532) until real implementations exist, or
alternatively implement the full schema management hooks: make validate_schema
perform real validation and return accurate valid/errors, implement
create_schema to actually create schema and return success, and have
set_schema_version/get_schema_version persist and return real versions; update
AdapterCapabilities only after those methods are correctly implemented.
- Around line 311-320: The single-record update path uses the original
where_clause which may match multiple rows; change it to target the specific
fetched row by obtaining its primary key from existing (via _fetch_one_by_clause
and the table PK helper) and reissue the UPDATE using
update(table).where(primary_key_clause) with params.data, or raise an error if
the table has no single-column primary key; ensure the subsequent fetch uses
_fetch_one_by_primary_key and fall back to _project_row as before. Apply the
same fix pattern to the single-record DELETE path (mirror the primary-key
targeting in the _delete flow) so both _fetch_one_by_clause,
update(table).where(...).values(...), and delete(table).where(...) are executed
against the reconstructed primary_key_clause rather than the original
where_clause.
- Around line 283-287: The code currently checks params.limit and params.offset
by truthiness, so limit=0 is treated like None and the LIMIT is omitted; update
the checks in the adapter (where params.limit and params.offset are used—e.g.,
in the query-building block in superfunctions_sqlalchemy.adapter) to use
explicit None checks: if params.limit is not None then apply
query.limit(params.limit), and similarly if params.offset is not None apply
query.offset(params.offset), leaving behavior unchanged for None but correctly
applying zero values.
- Around line 359-382: The create_many method is rewrapping adapter-specific
errors from _create (like DuplicateKeyError/ConstraintViolationError) into
QueryFailedError which loses duplicate-key/constraint semantics; modify
create_many to preserve those exceptions by checking for and re-raising existing
adapter exceptions (e.g., DuplicateKeyError, ConstraintViolationError) before
the broad except Exception handler, and only wrap unexpected exceptions as
QueryFailedError; keep the existing IntegrityError handling for raw DB errors
and continue using CreateParams/CreateManyParams and the conn loop as-is.
- Around line 80-84: The code is using getattr(table.c, clause.field) and
hasattr(table.c, clause.field) which is unsafe for user-controlled column names
(they may collide with ColumnCollection attributes); change these checks to use
membership and keyed access: replace hasattr(table.c, field) checks with "field
in table.c" and replace getattr(table.c, field) with table.c[field] for the
where clause handling in adapter.py (and apply the same keyed-lookup pattern to
the other occurrences referenced in the review: the uses around lines with
identifiers for the other checks at the same module — specifically update the
access at the spots currently around lines 142, 146, 162, and 280 to use
table.c[name] and membership testing rather than getattr/hasattr).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 0321c45f-a060-47e1-b9f1-a2cd0d6713fd

📥 Commits

Reviewing files that changed from the base of the PR and between f0b4d64 and 9d7e310.

📒 Files selected for processing (2)
  • packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
  • packages/python-sqlalchemy/tests/test_adapter.py
✅ Files skipped from review due to trivial changes (1)
  • packages/python-sqlalchemy/tests/test_adapter.py


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 98fe5ca275


Comment on lines +258 to +259
except Exception as e:
    raise QueryFailedError(f"Create failed: {str(e)}", cause=e)

P1: Preserve DuplicateKeyError in create for upsert fallback

_create() calls itself recursively when no connection is provided, but the outer frame wraps any exception in QueryFailedError. That means an inner DuplicateKeyError/ConstraintViolationError is lost when create() is used normally, and upsert()'s except DuplicateKeyError retry path will never trigger on real duplicate inserts, causing upserts under contention to fail instead of updating.


else:
    raise QueryFailedError(f"Unsupported operator in where clause: {clause.operator}")

return and_(*conditions) if len(conditions) > 1 else conditions[0]

P2: Honor OR connectors when building where predicates

Multi-clause filters are always combined with and_(...), so any WhereClause entries intended to be OR-connected are silently executed as AND. This changes query semantics for find_*, update*, delete*, and count calls whenever callers rely on connector='OR', returning incorrect results rather than raising an error.
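The semantics at stake can be shown with a stdlib-only sketch; the WhereClause shape below is a hypothetical mirror of the adapter's clause type, evaluating against plain dicts instead of SQL:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class WhereClause:           # hypothetical shape mirroring the adapter's clause type
    field: str
    value: Any
    connector: str = "AND"   # how this clause joins the previous one

def matches(row: dict, clauses: list) -> bool:
    """Evaluate clauses left to right, honoring AND/OR connectors."""
    result = row.get(clauses[0].field) == clauses[0].value
    for c in clauses[1:]:
        hit = row.get(c.field) == c.value
        result = (result or hit) if c.connector == "OR" else (result and hit)
    return result

row = {"status": "active", "role": "admin"}
print(matches(row, [WhereClause("status", "archived"),
                    WhereClause("role", "admin", connector="OR")]))  # True
```

Always folding with and_ would make this example return False. In the adapter itself the same grouping would be expressed with SQLAlchemy's or_(...) and and_(...) rather than Python booleans.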



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: fd17484c19


Comment on lines +333 to +334
target_where_clause = self._build_primary_key_where_clause(table, existing) or where_clause
result = conn.execute(update(table).where(target_where_clause).values(**params.data))

P1: Prevent single-row update from widening on tables without PKs

Do not fall back to the original where for update() when no primary key can be derived. In _update, if the reflected table has no primary key, _build_primary_key_where_clause(...) returns None, so target_where_clause becomes the original predicate and can update multiple rows even though update() is the single-record API. This can silently modify many records on legacy/no-PK tables (and the same pattern is used in _delete).
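One way to keep the single-row guarantee, sketched with plain dicts (the helper names and shapes are hypothetical, not the adapter's): derive the predicate from the fetched row's primary key and refuse to proceed when none exists.

```python
def build_primary_key_where(pk_columns, row):
    """Return a {column: value} predicate for the row's PK, or None without one."""
    if not pk_columns:
        return None
    return {col: row[col] for col in pk_columns}

def single_row_update_target(pk_columns, existing_row):
    target = build_primary_key_where(pk_columns, existing_row)
    if target is None:
        # Never widen to the caller's original where on PK-less tables.
        raise ValueError("single-row update requires a primary key")
    return target

print(single_row_update_target(["id"], {"id": 7, "name": "a"}))  # {'id': 7}
```

Raising on PK-less tables turns a silent multi-row mutation into an explicit, debuggable error; the same guard applies to the single-row delete path.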


Comment on lines +13 to +16
for path in (PACKAGE_ROOT, PYTHON_CORE_ROOT):
    path_str = str(path)
    if path_str not in sys.path:
        sys.path.insert(0, path_str)

P2: Avoid shadowing installed superfunctions package in tests

Inserting PYTHON_CORE_ROOT at the front of sys.path causes tests to import the local packages/python-core/superfunctions package first, but that package does not contain superfunctions.db. As a result, from superfunctions.db import ... fails during test collection once SQLAlchemy is available, so the package tests cannot run in a normal environment.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py (1)

147-149: 🛠️ Refactor suggestion | 🟠 Major

Apply keyed lookup pattern here too.

getattr(table.c, field) has the same collision risk with ColumnCollection method names. Use table.c[field] instead.

Proposed fix
     def _apply_select(self, table: Table, params: FindOneParams | FindManyParams) -> Any:
-        columns = [getattr(table.c, field) for field in params.select] if params.select else [table]
+        columns = [table.c[field] for field in params.select] if params.select else [table]
         return select(*columns).select_from(table)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines
147 - 149, In _apply_select, avoid using getattr(table.c, field) because it can
collide with ColumnCollection attribute names; update the selection logic inside
the _apply_select method to index into the ColumnCollection using table.c[field]
for each field in params.select (falling back to [table] when params.select is
empty) so the correct column object is retrieved without name-collision risk.
🧹 Nitpick comments (4)
packages/python-sqlalchemy/tests/test_adapter.py (2)

28-40: Consider calling adapter.initialize() in the fixture.

The fixture creates the adapter but doesn't call initialize(). While this works for SQLite in-memory, it may mask initialization issues. Adding await adapter.initialize() would better mirror production usage and validate the connection test path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/python-sqlalchemy/tests/test_adapter.py` around lines 28 - 40, The
fixture returns a created adapter but never runs its initialization path; change
the adapter fixture to async (async def adapter():) so you can await
adapter.initialize() after create_adapter(engine) (i.e., call adapter =
create_adapter(engine); await adapter.initialize(); return adapter) to mirror
production usage and surface any connection/init issues; reference the adapter
fixture, create_adapter, and adapter.initialize when updating the test.

10-16: Fragile sys.path manipulation for test imports.

Manually inserting paths into sys.path is brittle and can cause import ordering issues. Consider installing the packages in editable mode (pip install -e .) or using pytest's pythonpath configuration in pyproject.toml instead.
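For reference, pytest (7.0 and later) can resolve the extra roots via configuration instead of runtime sys.path mutation. A sketch with hypothetical relative paths; they would need to match the repository's actual layout:

```toml
# pyproject.toml (hypothetical paths; adjust to the real package layout)
[tool.pytest.ini_options]
pythonpath = [
    ".",               # the package under test
    "../python-core",  # the superfunctions core package
]
```

Combined with an editable install (pip install -e .) in CI, this removes the import-ordering risk entirely.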

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/python-sqlalchemy/tests/test_adapter.py` around lines 10 - 16,
Replace the fragile sys.path hacks (the PACKAGE_ROOT/PYTHON_CORE_ROOT insertion
block that mutates sys.path) with a proper package import setup: remove the
for-loop that touches sys.path in tests/test_adapter.py, and ensure tests run
against an installed (editable) package or via pytest configuration instead —
either add an editable install step (pip install -e .) in the test/CI setup or
declare pythonpath via pytest config (pyproject.toml /
[tool.pytest.ini_options]) so PACKAGE_ROOT and PYTHON_CORE_ROOT are resolved by
normal imports rather than manipulating sys.path at runtime.
packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py (2)

258-265: Consider using raise ... from e for proper exception chaining.

Ruff B904 warns that re-raised exceptions should use raise X from e to set Python's __cause__ attribute. While the adapter exceptions accept a cause parameter, using from e ensures standard traceback chaining.

Example fix
         except IntegrityError as e:
             if "unique" in str(e).lower() or "duplicate" in str(e).lower():
-                raise DuplicateKeyError(str(e), cause=e)
-            raise ConstraintViolationError(str(e), cause=e)
+                raise DuplicateKeyError(str(e), cause=e) from e
+            raise ConstraintViolationError(str(e), cause=e) from e
         except OperationalError as e:
-            raise ConnectionError(str(e), cause=e)
+            raise ConnectionError(str(e), cause=e) from e
         except Exception as e:
-            raise QueryFailedError(f"Create failed: {str(e)}", cause=e)
+            raise QueryFailedError(f"Create failed: {str(e)}", cause=e) from e

Apply this pattern to all exception handlers flagged by static analysis.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines
258 - 265, The exception handlers in adapter.py (the except blocks catching
IntegrityError, OperationalError, and generic Exception) should use Python
exception chaining by raising the new adapter exceptions with "raise
NewException(...) from e" in addition to (or instead of) passing the cause
param; update the IntegrityError branch that currently raises
DuplicateKeyError(str(e), cause=e) and ConstraintViolationError(str(e), cause=e)
to raise DuplicateKeyError(str(e), cause=e) from e or
ConstraintViolationError(...) from e as appropriate, change the OperationalError
branch to raise ConnectionError(str(e), cause=e) from e, and change the generic
except to raise QueryFailedError(f"Create failed: {str(e)}", cause=e) from e so
that __cause__ is set and traceback chaining is preserved.

570-576: commit() no-op is fine, but rollback() may not behave as expected.

With engine.begin() context manager, commit happens on successful exit and rollback happens on exception. The explicit connection.rollback() call here may not work if called after the context manager has already handled the transaction. Consider documenting this behavior or raising an error if called outside the transaction scope.
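A guard along these lines makes the failure mode explicit; the class and attribute names below are hypothetical simplifications of the transaction adapter, with the real connection work stubbed out:

```python
class TransactionAdapter:
    """Sketch: transaction wrapper whose lifetime is bounded by engine.begin()."""

    def __init__(self, connection):
        self.connection = connection
        self._active = True  # would be set False when engine.begin() exits

    def commit(self):
        # No-op: engine.begin() commits on successful context-manager exit.
        pass

    def rollback(self):
        if not self._active:
            raise RuntimeError("rollback() called outside the transaction scope")
        self._active = False  # real code would call self.connection.rollback()

trx = TransactionAdapter(connection=None)
trx.rollback()  # allowed while the transaction scope is active
```

Failing fast here is preferable to a rollback() that silently does nothing after the context manager has already settled the transaction.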

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines
570 - 576, The rollback() implementation may incorrectly call
self.connection.rollback() after an engine.begin() context manager has already
committed/rolled back; update rollback() to check transaction scope before
calling rollback (e.g., verify an active transaction flag or inspect
self.connection.in_transaction() / self._transaction attribute) and raise a
clear error (RuntimeError) if no active transaction exists, or alternatively
document in the commit()/rollback() docstrings that commit is a no-op under
engine.begin() and rollback must only be invoked while the transaction context
is active; reference the commit() and rollback() methods and the use of
engine.begin() in the surrounding code when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py`:
- Around line 298-301: Replace the attribute lookup using getattr(table.c,
order.field) with a keyed lookup table.c[order.field] inside the params.order_by
loop so column resolution uses the ColumnCollection API (i.e., change the line
in the loop that assigns column to use table.c[order.field]); keep the ternary
for desc() unchanged and, if desired, guard or catch KeyError around the lookup
to provide a clearer error message when the column name is invalid.
- Around line 151-155: Replace the hasattr/getattr approach in
_build_data_where_clause with keyed ColumnCollection lookup to avoid attribute
name collisions: iterate data.items(), use "if key in table.c" (or
table.c.get(key) existence check) and build conditions with table.c[key] ==
value (or table.c.get(key) == value), then return None if no conditions and use
and_(*conditions) or conditions[0] as before; update references to the
Table/ColumnCollection via table.c and the function _build_data_where_clause
accordingly.

---

Duplicate comments:
In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py`:
- Around line 147-149: In _apply_select, avoid using getattr(table.c, field)
because it can collide with ColumnCollection attribute names; update the
selection logic inside the _apply_select method to index into the
ColumnCollection using table.c[field] for each field in params.select (falling
back to [table] when params.select is empty) so the correct column object is
retrieved without name-collision risk.

---

Nitpick comments:
In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py`:
- Around line 258-265: The exception handlers in adapter.py (the except blocks
catching IntegrityError, OperationalError, and generic Exception) should use
Python exception chaining by raising the new adapter exceptions with "raise
NewException(...) from e" in addition to (or instead of) passing the cause
param; update the IntegrityError branch that currently raises
DuplicateKeyError(str(e), cause=e) and ConstraintViolationError(str(e), cause=e)
to raise DuplicateKeyError(str(e), cause=e) from e or
ConstraintViolationError(...) from e as appropriate, change the OperationalError
branch to raise ConnectionError(str(e), cause=e) from e, and change the generic
except to raise QueryFailedError(f"Create failed: {str(e)}", cause=e) from e so
that __cause__ is set and traceback chaining is preserved.
- Around line 570-576: The rollback() implementation may incorrectly call
self.connection.rollback() after an engine.begin() context manager has already
committed/rolled back; update rollback() to check transaction scope before
calling rollback (e.g., verify an active transaction flag or inspect
self.connection.in_transaction() / self._transaction attribute) and raise a
clear error (RuntimeError) if no active transaction exists, or alternatively
document in the commit()/rollback() docstrings that commit is a no-op under
engine.begin() and rollback must only be invoked while the transaction context
is active; reference the commit() and rollback() methods and the use of
engine.begin() in the surrounding code when making the change.

In `@packages/python-sqlalchemy/tests/test_adapter.py`:
- Around line 28-40: The fixture returns a created adapter but never runs its
initialization path; change the adapter fixture to async (async def adapter():)
so you can await adapter.initialize() after create_adapter(engine) (i.e., call
adapter = create_adapter(engine); await adapter.initialize(); return adapter) to
mirror production usage and surface any connection/init issues; reference the
adapter fixture, create_adapter, and adapter.initialize when updating the test.
- Around line 10-16: Replace the fragile sys.path hacks (the
PACKAGE_ROOT/PYTHON_CORE_ROOT insertion block that mutates sys.path) with a
proper package import setup: remove the for-loop that touches sys.path in
tests/test_adapter.py, and ensure tests run against an installed (editable)
package or via pytest configuration instead — either add an editable install
step (pip install -e .) in the test/CI setup or declare pythonpath via pytest
config (pyproject.toml / [tool.pytest.ini_options]) so PACKAGE_ROOT and
PYTHON_CORE_ROOT are resolved by normal imports rather than manipulating
sys.path at runtime.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 01a40f17-de66-4d64-af5a-de45035f4f63

📥 Commits

Reviewing files that changed from the base of the PR and between 9d7e310 and 1f4ad81.

📒 Files selected for processing (2)
  • packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
  • packages/python-sqlalchemy/tests/test_adapter.py


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 1f4ad815ac


Comment on lines +82 to +85
if not hasattr(table.c, clause.field):
raise QueryFailedError(f"Unknown field in where clause: {clause.field}")

column = getattr(table.c, clause.field)

P2: Validate where fields by column key instead of attribute lookup

Using hasattr(table.c, clause.field)/getattr(table.c, clause.field) can resolve SQLAlchemy ColumnCollection methods (for example keys, items, values) instead of real columns, so valid columns with those names cannot be filtered correctly and some invalid field names bypass the intended "unknown field" error path. This causes incorrect predicates in find_*, update*, delete*, and count when field names collide with collection attributes.
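The collision is easy to reproduce with a toy stand-in for SQLAlchemy's ColumnCollection (the real table.c likewise supports __contains__ and __getitem__; FakeColumnCollection below is purely illustrative):

```python
class FakeColumnCollection:
    """Toy stand-in: holds real columns but also exposes dict-like methods."""

    def __init__(self, columns):
        self._columns = dict(columns)

    def keys(self):
        return list(self._columns)

    def __contains__(self, name):
        return name in self._columns

    def __getitem__(self, name):
        return self._columns[name]

c = FakeColumnCollection({"id": "<id column>"})
print(hasattr(c, "keys"))  # True: attribute lookup finds the method, not a column
print("keys" in c)         # False: membership only sees real columns
print(c["id"])             # <id column>: keyed access returns the column
```

The membership-plus-keyed-access pattern (field in table.c, then table.c[field]) avoids both failure modes: method names no longer masquerade as columns, and genuinely unknown fields hit the intended error path.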


Comment on lines +246 to +250
if inserted is None:
inserted = self._fetch_one_by_clause(
conn,
table,
self._build_data_where_clause(table, params.data),

P2: Avoid ambiguous requery when create cannot read inserted PK

When inserted_primary_key is unavailable, _create falls back to querying by the inserted data values and returns the first match. If the table allows duplicate payloads (or lacks a PK), this can return a pre-existing row instead of the row just inserted, so callers may receive the wrong record identity after create/create_many.


@socket-security

socket-security bot commented Apr 7, 2026

No dependency changes detected. Learn more about Socket for GitHub.

👍 No dependency changes detected in pull request


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: fc2dd3a0db


Comment on lines +353 to +354
except Exception as e:
    raise QueryFailedError(f"Update failed: {str(e)}", cause=e)

P2: Preserve typed constraint errors in update()

The outer update() path opens a transaction and recursively calls _update, but then catches every exception and re-wraps it as QueryFailedError. In the normal call path (no explicit connection), an inner ConstraintViolationError from a unique/foreign-key conflict is therefore lost, so callers cannot reliably handle expected integrity conflicts the same way they can in other CRUD methods.


Comment on lines +429 to +430
except Exception as e:
    raise QueryFailedError(f"UpdateMany failed: {str(e)}", cause=e)

P2: Map update_many integrity failures to constraint errors

update_many() currently converts all non-QueryFailedError exceptions into QueryFailedError, so database IntegrityError (for example, unique/index or FK violations during bulk updates) is not surfaced as ConstraintViolationError. This makes batch-update conflict handling inconsistent with single-row update() semantics and prevents callers from distinguishing expected constraint failures from generic query failures.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py (2)

597-615: Return type annotation Adapter may cause type checker warnings.

create_adapter returns SQLAlchemyAdapter but is annotated to return Adapter. Since SQLAlchemyAdapter doesn't explicitly inherit from Adapter, type checkers may flag this. Consider updating the return type to SQLAlchemyAdapter or making SQLAlchemyAdapter conform to the Adapter protocol.


261-268: Use exception chaining with from e for proper tracebacks.

While cause=e is passed to the custom exception constructors, Python's standard exception chaining via raise ... from e sets __cause__ for proper traceback propagation.

Proposed fix
         except IntegrityError as e:
             if "unique" in str(e).lower() or "duplicate" in str(e).lower():
-                raise DuplicateKeyError(str(e), cause=e)
-            raise ConstraintViolationError(str(e), cause=e)
+                raise DuplicateKeyError(str(e), cause=e) from e
+            raise ConstraintViolationError(str(e), cause=e) from e
         except OperationalError as e:
-            raise ConnectionError(str(e), cause=e)
+            raise ConnectionError(str(e), cause=e) from e
         except Exception as e:
-            raise QueryFailedError(f"Create failed: {str(e)}", cause=e)
+            raise QueryFailedError(f"Create failed: {str(e)}", cause=e) from e

This pattern should be applied consistently across all exception handlers in the file (lines 289, 321, 352, 354, 382, 410, 412, 430, 448, 509, 526).


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 8877d228-bce1-4f3c-9412-03a89890c2a7

📥 Commits

Reviewing files that changed from the base of the PR and between 1f4ad81 and fc2dd3a.

📒 Files selected for processing (2)
  • packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
  • packages/python-sqlalchemy/tests/test_adapter.py
✅ Files skipped from review due to trivial changes (1)
  • packages/python-sqlalchemy/tests/test_adapter.py


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 19572029cf


Comment on lines +581 to +583
async def commit(self) -> None:
"""Commit handled by context manager."""
pass

P2: Implement transaction adapter commit behavior

SQLAlchemyTransactionAdapter.commit() is currently a no-op, so callbacks that explicitly call await trx.commit() do not actually finalize the transaction at that point. This can produce incorrect outcomes when code expects an early commit boundary (for example, commit then later raise/catch errors), because all writes still depend on the outer context manager exit and may be rolled back unexpectedly.
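One way to make `await trx.commit()` a real boundary is to commit eagerly and record that the transaction is finalized, so the surrounding context manager does not commit or roll back a second time. This is a sketch with a fake transaction object; `_Trx` stands in for whatever `engine.begin()` returns and is not the PR's actual class:

```python
import asyncio

class _Trx:
    """Stand-in for the object returned by engine.begin() (hypothetical)."""
    def __init__(self):
        self.commits = 0
    def commit(self):
        self.commits += 1

class TransactionAdapter:
    """Sketch: commit() finalizes the transaction at the call site."""
    def __init__(self, trx):
        self._trx = trx
        self._finalized = False

    async def commit(self) -> None:
        # Commit eagerly, then mark finalized so the context-manager
        # exit (and repeated commit() calls) become no-ops.
        if not self._finalized:
            self._trx.commit()
            self._finalized = True

trx = _Trx()
adapter = TransactionAdapter(trx)
asyncio.run(adapter.commit())
asyncio.run(adapter.commit())  # second call is a safe no-op
```

Writes made before the `commit()` call would then survive a later exception inside the callback, matching the early-commit expectation described above.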


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 5094bc0688


Comment on lines +394 to +395
except Exception as e:
raise QueryFailedError(f"Delete failed: {str(e)}", cause=e)

P2: Map delete FK violations to ConstraintViolationError

When deleting a row that is still referenced by a foreign key (e.g., ON DELETE RESTRICT), SQLAlchemy raises IntegrityError, but _delete currently falls through to the broad except Exception and rethrows QueryFailedError. That makes delete conflict handling inconsistent with create/update and prevents callers from distinguishing expected constraint conflicts from generic query failures.


Comment on lines +464 to +465
except Exception as e:
raise QueryFailedError(f"DeleteMany failed: {str(e)}", cause=e)

P2: Preserve typed constraint errors in delete_many

delete_many also wraps all non-QueryFailedError exceptions as QueryFailedError, so batch deletes that hit referential-integrity violations lose the ConstraintViolationError signal. In workloads that bulk-delete parent records, this prevents reliable conflict-specific handling and diverges from the typed error behavior already implemented in other mutating methods.


@cubic-dev-ai

cubic-dev-ai bot commented Apr 8, 2026

You're iterating quickly on this pull request. To help protect your rate limits, cubic has paused automatic reviews on new pushes for now—when you're ready for another review, comment @cubic-dev-ai review.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 059011b49c


Comment on lines +361 to +362
except Exception as e:
raise QueryFailedError(f"Update failed: {str(e)}", cause=e)

P2: Map update connection failures to ConnectionError

_update() never catches OperationalError, so transient DB/connectivity failures during update() fall into the generic handler and are rethrown as QueryFailedError. This breaks error-type consistency with other mutating methods (create, delete, update_many, delete_many) and prevents callers from handling retryable connection failures via ConnectionError.


Comment on lines +427 to +428
except Exception as e:
raise QueryFailedError(f"CreateMany failed: {str(e)}", cause=e)

P2: Preserve connection error typing in create_many

create_many() does not have an except OperationalError branch, so failures establishing/starting the transaction at engine.begin() are wrapped by the generic QueryFailedError path instead of surfacing as ConnectionError. In outage/pool-exhaustion scenarios this makes batch-create error handling inconsistent with create() and other write methods.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 7fab47de7e


self.version = "0.1.0"
self.capabilities = AdapterCapabilities(
transactions=True,
nested_transactions=True,

P2: Mark nested_transactions unsupported until savepoints exist

Set nested_transactions to False (or implement savepoint-based nesting) because this adapter currently does not provide a nested transaction path: transaction() always opens a new top-level engine.begin() and the transaction adapter exposes no nested transaction() API. Callers that branch on capabilities.nested_transactions can incorrectly assume inner work shares the same atomic boundary, which can lead to unexpected commit/rollback behavior.
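A savepoint-capable path could look like the sketch below, assuming SQLAlchemy's `Connection.begin_nested()` SAVEPOINT API. The `_Recorder` stubs exist only so the branching is runnable here; they are not the adapter's real engine or connection types:

```python
class _Recorder:
    """Minimal stand-in for both Engine and Connection (hypothetical)."""
    def __init__(self):
        self.calls = []
    def begin(self):
        self.calls.append("begin")
        return "toplevel-tx"
    def begin_nested(self):
        self.calls.append("begin_nested")  # SAVEPOINT in real SQLAlchemy
        return "savepoint"

def start_transaction(engine, conn=None):
    """Open a SAVEPOINT when already inside a transaction, else a new
    top-level transaction. Only with a path like this should the
    adapter advertise nested_transactions=True."""
    if conn is not None:
        return conn.begin_nested()  # inner rollback stays scoped
    return engine.begin()

eng = _Recorder()
outer = start_transaction(eng)            # top-level
inner = start_transaction(eng, conn=eng)  # nested via savepoint
```

Until something equivalent is wired in, reporting `nested_transactions=False` is the honest capability flag.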


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 4fa39d0589


Comment on lines +399 to +400
except Exception as e:
raise QueryFailedError(f"Delete failed: {str(e)}", cause=e)

P2: Re-raise ConnectionError from nested delete calls

In the normal delete() path (no explicit connection), _delete calls itself inside engine.begin(). If the inner call converts an OperationalError to ConnectionError, the outer frame does not preserve it and the broad except Exception re-wraps it as QueryFailedError, so callers lose the retryable connection-failure signal during outages or dropped connections.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 18fd5013d3


Comment on lines +344 to +345
if result.rowcount == 0:
raise NotFoundError("Record not found for update")
The reason will be displayed to describe this comment to others. Learn more.

P2: Stop treating no-op UPDATE as missing row

In _update, the row is already confirmed to exist via _fetch_one_by_clause, but result.rowcount == 0 is still treated as not found. On MySQL-style drivers that report only changed rows, an idempotent update (setting a field to its current value) returns 0, so this path incorrectly raises NotFoundError for an existing record.
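The safer decision rule is to derive "not found" from the earlier pre-read rather than from `rowcount`. A minimal stdlib sketch (the helper name and arguments are illustrative, not the PR's code):

```python
class NotFoundError(Exception): ...

def confirm_update(existing_row, rowcount):
    """Decide 'not found' from the pre-read, not from rowcount.

    MySQL-style drivers report only *changed* rows by default, so an
    idempotent UPDATE (writing a value the row already has) yields
    rowcount 0 for a row that definitely exists.
    """
    if existing_row is None:  # result of the earlier _fetch_one_by_clause
        raise NotFoundError("Record not found for update")
    return True  # rowcount 0 here is just a no-op write: success
```

With this rule, an idempotent update against an existing row succeeds, and a genuinely missing row still raises.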


Comment on lines +289 to +290
except Exception as e:
raise QueryFailedError(f"FindOne failed: {str(e)}", cause=e)
The reason will be displayed to describe this comment to others. Learn more.

P2: Preserve ConnectionError typing on read failures

The _find_one read path wraps all exceptions as QueryFailedError, so transient connectivity failures (OperationalError) are no longer surfaced as ConnectionError. That makes retry/error-handling logic inconsistent with write methods that preserve ConnectionError, and the same broad-except pattern is repeated in _find_many and count.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: fd801174bf


self.capabilities = AdapterCapabilities(
transactions=True,
nested_transactions=False,
joins=True,

P2: Advertise joins only when join queries are implemented

capabilities currently reports joins=True, but the adapter never reads a join configuration and always builds find_one/find_many queries from a single table via _apply_select(...). In environments that branch on adapter.capabilities.joins, this will cause callers to run join-dependent flows and get silently incorrect (unjoined) result sets rather than a clear unsupported-path error. Mark joins unsupported until join handling is actually wired into the query builders.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 05c38e3e25


Comment on lines +351 to +355
refreshed = self._fetch_one_by_primary_key(conn, table, updated_row, params.select)
if refreshed is not None:
return refreshed
return self._project_row(updated_row, params.select)


P2: Fail update when post-write refresh cannot find target row

After the UPDATE executes, the method returns a locally merged updated_row whenever the re-read by primary key returns None. In a concurrent delete race (row read, then removed by another transaction before this update statement runs), this path reports success with synthetic data even though no row exists anymore. For primary-key tables, refreshed is None should be treated as a missing-row failure rather than returning the merged input.
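The proposed policy can be sketched in a few lines; `finalize_update` and its arguments are illustrative stand-ins for the adapter's internals, not its actual API:

```python
class NotFoundError(Exception): ...

def finalize_update(refreshed, merged_row, has_primary_key):
    """Post-write refresh policy sketch.

    For primary-key tables, a None re-read after the UPDATE means the
    row vanished (e.g. a concurrent delete), so fail loudly rather than
    echoing locally merged values as if they were persisted.
    """
    if refreshed is not None:
        return refreshed
    if has_primary_key:
        raise NotFoundError("Row no longer exists after update")
    return merged_row  # PK-less tables cannot be re-read reliably
```

The merged-row fallback then only applies where a reliable re-read is genuinely impossible.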


@graphite-app

graphite-app bot commented Apr 13, 2026

Merge activity

## Summary by cubic

- **New Features**
  - New adapter package (`packages/python-sqlalchemy/`) exporting `create_adapter`/`SQLAlchemyAdapter`, with init/health/close and optional `namespace_prefix`.
  - Full CRUD, `create_many`/`upsert`/`count`, query builders (`where` with AND/OR, `order_by`/`limit`/`offset`), transactions that accept sync or async callbacks, and APIs that take params objects or kwargs.
  - Explicit column-key lookups for field resolution, including reserved names.
  - Capability metadata tightened: transactions/batch ops enabled; nested transactions, joins, full‑text search, schema management, and migrations disabled.

- **Bug Fixes**
  - Safe single-row update/delete: find by `where`, target via primary key or fall back to existing row values; reselect by primary key when filters change; treat no‑op updates as success.
  - Strict `where` validation and OR support: require `where` for `update_many`/`delete_many`, reject unknown fields with clear errors, and honor OR connectors.
  - Preserve typed errors and bulk semantics: map duplicates/constraints to `DuplicateKeyError`/`ConstraintViolationError`, surface `ConnectionError` on drops, run `create_many` in a transaction and return inserted rows, and retry update on upsert duplicate.
  - Fix delete path connection typing to avoid type issues during execution.

Written for commit 05c38e3. Summary will update on new commits.

## Summary by CodeRabbit

* **New Features**
  * Added a new SQLAlchemy adapter package offering async-capable CRUD, batch operations, upserts, counting, transactions, and optional namespacing for SQL databases.

* **Documentation**
  * Included comprehensive package docs with installation, sync/async usage examples, configuration, and feature checklist.

* **Tests**
  * Added tests covering create/create_many, update behavior, upsert retry, find_many filtering, transactions, and error handling.
@graphite-app graphite-app bot force-pushed the SF-30/python-sqlalchemy-adapter branch from 05c38e3 to 8c9d511 Compare April 13, 2026 11:55

@graphite-app graphite-app bot merged commit 8c9d511 into dev Apr 13, 2026
15 checks passed