SF-30 add Python SQLAlchemy shared adapter #85
Conversation
Reviewer's Guide

Introduces a new Python package providing a SQLAlchemy adapter.

Sequence diagram for a create operation via the SQLAlchemy adapter:

```mermaid
sequenceDiagram
    actor Application
    participant AuthFn as AuthFnLibrary
    participant Adapter as SQLAlchemyAdapter
    participant SAEngine as SQLAlchemy_Engine
    participant DB as Database
    Application->>AuthFn: create_authfn(AuthFnConfig(database=adapter))
    Application->>AuthFn: auth.create_user(data)
    AuthFn->>Adapter: create(CreateParams)
    Adapter->>Adapter: _get_table(model, namespace)
    Adapter->>SAEngine: begin()
    activate SAEngine
    SAEngine-->>Adapter: Connection
    Adapter->>SAEngine: execute(insert(table).values(data).returning(table))
    SAEngine->>DB: INSERT INTO table ... RETURNING *
    DB-->>SAEngine: Row
    SAEngine-->>Adapter: Result
    Adapter->>Adapter: map row to dict
    Adapter-->>AuthFn: created_record Dict
    AuthFn-->>Application: created_record Dict
    deactivate SAEngine
```
Class diagram for the new SQLAlchemy adapter implementation:

```mermaid
classDiagram
    class SQLAlchemyAdapter {
        - Engine engine
        - AsyncEngine engine
        - str namespace_prefix
        - MetaData metadata
        - float _start_time
        - str id
        - str name
        - str version
        - AdapterCapabilities capabilities
        + SQLAlchemyAdapter(engine, namespace_prefix)
        + create(params) Dict
        + find_one(params) Dict
        + find_many(params) List~Dict~
        + update(params) Dict
        + delete(params) None
        + create_many(params) List~Dict~
        + update_many(params) int
        + delete_many(params) int
        + upsert(params) Dict
        + count(params) int
        + transaction(callback) Any
        + initialize() None
        + is_healthy() HealthStatus
        + close() None
        + get_schema_version(namespace) int
        + set_schema_version(namespace, version) None
        + validate_schema(schema) ValidationResult
        + create_schema(params) SchemaCreation
        - _get_table_name(model, namespace) str
        - _build_where_clause(table, where) Any
        - _get_table(model, namespace) Table
    }
    class SQLAlchemyTransactionAdapter {
        - connection Any
        - SQLAlchemyAdapter parent
        + SQLAlchemyTransactionAdapter(connection, parent_adapter)
        + commit() None
        + rollback() None
        + create(params) Dict
        + find_one(params) Dict
        + find_many(params) List~Dict~
        + update(params) Dict
        + delete(params) None
    }
    class Adapter {
    }
    class TransactionAdapter {
    }
    class Engine {
    }
    class AsyncEngine {
    }
    SQLAlchemyAdapter ..|> Adapter
    SQLAlchemyTransactionAdapter ..|> TransactionAdapter
    SQLAlchemyAdapter o--> Engine
    SQLAlchemyAdapter o--> AsyncEngine
    SQLAlchemyAdapter o--> SQLAlchemyTransactionAdapter
    SQLAlchemyTransactionAdapter o--> SQLAlchemyAdapter
```
Flow diagram for integration of superfunctions-sqlalchemy into an application:

```mermaid
flowchart LR
    App["Application code"] --> AuthFn["superfunctions library (authfn, etc.)"]
    AuthFn --> Adapter["SQLAlchemyAdapter (from superfunctions_sqlalchemy)"]
    Adapter --> Engine["SQLAlchemy Engine or AsyncEngine"]
    Engine --> DB["Relational database (PostgreSQL, MySQL, SQLite)"]
```
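For orientation, the bottom two layers of this diagram (Engine → database) correspond to plain SQLAlchemy Core usage. A minimal, self-contained sketch against in-memory SQLite (table and column names here are illustrative, not taken from the adapter):

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, insert, select)

engine = create_engine("sqlite:///:memory:")
metadata = MetaData()
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("email", String),
)
metadata.create_all(engine)

# engine.begin() opens a transaction that commits when the block exits cleanly.
with engine.begin() as conn:
    conn.execute(insert(users).values(email="a@example.com"))

# engine.connect() hands out a plain connection for reads.
with engine.connect() as conn:
    rows = conn.execute(select(users)).fetchall()
```

The adapter under review wraps exactly this Engine API behind its CRUD methods.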
📝 Walkthrough

Adds a new SQLAlchemy adapter package.
Sequence Diagram:

```mermaid
sequenceDiagram
    participant Client as Client/Application
    participant Adapter as SQLAlchemyAdapter
    participant Engine as SQLAlchemy Engine
    participant DB as Database
    Client->>Adapter: create(params)
    activate Adapter
    Adapter->>Adapter: build INSERT (table, namespace, data)
    Adapter->>Engine: execute(INSERT)
    activate Engine
    Engine->>DB: execute INSERT
    DB-->>Engine: result (lastrowid / row)
    Engine-->>Adapter: execution result
    deactivate Engine
    Adapter->>Adapter: optionally SELECT inserted row by PK
    Adapter-->>Client: inserted row dict
    deactivate Adapter
    Client->>Adapter: find_many(params)
    activate Adapter
    Adapter->>Adapter: translate WhereClause -> predicates
    Adapter->>Engine: execute(SELECT with WHERE/ORDER/LIMIT)
    activate Engine
    Engine->>DB: execute SELECT
    DB-->>Engine: result rows
    Engine-->>Adapter: rows
    deactivate Engine
    Adapter-->>Client: list of row dicts
    deactivate Adapter
    Client->>Adapter: transaction(callback)
    activate Adapter
    Adapter->>Engine: begin transaction / get connection
    Adapter->>Adapter: create SQLAlchemyTransactionAdapter(connection)
    Adapter->>Adapter: invoke callback(transactionAdapter)
    Adapter->>Engine: commit or rollback via connection
    Adapter-->>Client: callback result
    deactivate Adapter
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Review Summary by Qodo

Add Python SQLAlchemy adapter for superfunctions.db
Description:

- Add complete SQLAlchemy adapter implementation for superfunctions.db
- Support full CRUD operations with transaction and batch capabilities
- Enable PostgreSQL, MySQL, and SQLite database compatibility
- Include comprehensive documentation and package configuration

Diagram:

```mermaid
flowchart LR
    SF["superfunctions.db<br/>Adapter Interface"]
    SA["SQLAlchemyAdapter<br/>Implementation"]
    DB["PostgreSQL/MySQL<br/>SQLite"]
    SF -- "implements" --> SA
    SA -- "uses" --> DB
    SA -- "exports" --> API["create_adapter<br/>factory function"]
```
File Changes

1. packages/python-sqlalchemy/superfunctions_sqlalchemy/__init__.py
Code Review by Qodo
Hey - I've found 6 issues, and left some high level feedback:
- All adapter methods are declared async but use the synchronous Engine API (and ignore AsyncEngine), which will block the event loop; either switch these methods to sync or add proper AsyncEngine handling (e.g., async context managers and `.execute` calls) to avoid blocking in async contexts.
- The transaction flow doesn't actually bind operations to the passed connection: `SQLAlchemyTransactionAdapter` just delegates back to the parent adapter, which opens new connections, so calls inside `transaction` won't participate in the same DB transaction; consider threading the transaction connection through the adapter methods or having the transaction adapter execute queries directly.
- In `find_one`, the `where_clause` variable is only defined when `params.where` is set but is referenced in the `params.select` branch regardless, which will raise an `UnboundLocalError` when `select` is used without `where`; refactor the query-building so `where_clause` is always initialized (or conditionally applied) before use.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- All adapter methods are declared async but use the synchronous Engine API (and ignore AsyncEngine), which will block the event loop; either switch these methods to sync or add proper AsyncEngine handling (e.g., async context managers and `.execute` calls) to avoid blocking in async contexts.
- The transaction flow doesn’t actually bind operations to the passed connection: `SQLAlchemyTransactionAdapter` just delegates back to the parent adapter, which opens new connections, so calls inside `transaction` won’t participate in the same DB transaction; consider threading the transaction connection through the adapter methods or having the transaction adapter execute queries directly.
- In `find_one`, the `where_clause` variable is only defined when `params.where` is set but is referenced in the `params.select` branch regardless, which will raise an `UnboundLocalError` when `select` is used without `where`; refactor the query-building so `where_clause` is always initialized (or conditionally applied) before use.
## Individual Comments
### Comment 1
<location path="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py" line_range="42-51" />
<code_context>
+class SQLAlchemyAdapter:
+ """SQLAlchemy adapter for superfunctions.db"""
+
+ def __init__(self, engine: Engine | AsyncEngine, namespace_prefix: str = ""):
+ """
+ Initialize SQLAlchemy adapter.
+
+ Args:
+ engine: SQLAlchemy Engine or AsyncEngine
+ namespace_prefix: Prefix for table names (for namespacing)
+ """
+ self.engine = engine
+ self.namespace_prefix = namespace_prefix
+ self.metadata = MetaData()
+ self._start_time = time.time()
+
</code_context>
<issue_to_address>
**issue (bug_risk):** AsyncEngine support is misleading because all operations use synchronous engine APIs.
`engine` is typed as `Engine | AsyncEngine`, but the calls (`begin()`, `connect()`, `dispose()`) are all sync-only and will not work correctly with an `AsyncEngine`, and they perform blocking I/O from `async def` code. Please either introduce a proper async path using `AsyncEngine`/`AsyncSession` (and `await` the operations), or treat this adapter as sync-only by removing `AsyncEngine` from the type (or offloading sync calls to a thread pool).
</issue_to_address>
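One of the options named here — treating the adapter as sync-only and offloading blocking Engine calls to a thread pool so `async def` methods don't stall the event loop — can be sketched as follows (a simplified sketch; table and column names are illustrative, not the adapter's real schema):

```python
import asyncio

from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, insert, select)
from sqlalchemy.pool import StaticPool

# StaticPool + check_same_thread=False lets one in-memory SQLite database
# be shared with the worker threads that asyncio.to_thread uses.
engine = create_engine(
    "sqlite:///:memory:",
    connect_args={"check_same_thread": False},
    poolclass=StaticPool,
)
metadata = MetaData()
users = Table("users", metadata,
              Column("id", Integer, primary_key=True),
              Column("email", String))
metadata.create_all(engine)

def _create_sync(data):
    # Blocking Engine API, confined to a worker thread.
    with engine.begin() as conn:
        conn.execute(insert(users).values(**data))

async def create(data):
    # The event loop stays responsive while the sync call runs elsewhere.
    await asyncio.to_thread(_create_sync, data)

async def main():
    await create({"email": "a@example.com"})
    with engine.connect() as conn:
        return conn.execute(select(users)).fetchall()

rows = asyncio.run(main())
```

The alternative is a genuine async path with `create_async_engine` and `async with engine.begin()`; either way, mixing `async def` signatures with unguarded sync I/O is the thing to avoid.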
### Comment 2
<location path="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py" line_range="70-74" />
<code_context>
+ batch_operations=True,
+ )
+
+ def _get_table_name(self, model: str, namespace: Optional[str] = None) -> str:
+ """Get full table name with namespace."""
+ if namespace:
+ return f"{namespace}_{model}"
+ return model
+
+ def _build_where_clause(self, table: Table, where: List[WhereClause]) -> Any:
</code_context>
<issue_to_address>
**suggestion (bug_risk):** The `namespace_prefix` attribute is never used in table naming.
`namespace_prefix` is accepted and stored but never used in `_get_table_name`, which only considers `namespace` and `model`. If global namespacing is intended (e.g. for multi-tenant usage or avoiding table collisions), this method should include the prefix in the constructed name; otherwise the argument is effectively a no-op and may mislead callers.
Suggested implementation:
```python
def _get_table_name(self, model: str, namespace: Optional[str] = None) -> str:
"""Get full table name with global namespace_prefix and optional per-call namespace."""
parts: List[str] = []
# Global prefix applied to all tables (e.g. for multi-tenant or app-level namespacing)
if getattr(self, "namespace_prefix", None):
parts.append(self.namespace_prefix)
# Optional per-call namespace for finer-grained namespacing
if namespace:
parts.append(namespace)
parts.append(model)
return "_".join(parts)
```
This change assumes:
1. The adapter class already defines `self.namespace_prefix` in `__init__` (as your comment indicates it is “accepted and stored”). If it is not type-annotated there, you may want to add a `namespace_prefix: Optional[str]` attribute annotation to keep typing consistent.
2. Existing database schemas either already use the `namespace_prefix` convention or you are comfortable with the resulting table name changes. If not, you may need a migration or a backward-compatibility flag to conditionally enable the prefixing behavior.
</issue_to_address>
### Comment 3
<location path="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py" line_range="147-156" />
<code_context>
+ except Exception as e:
+ raise QueryFailedError(f"Create failed: {str(e)}", cause=e)
+
+ async def find_one(self, params: FindOneParams) -> Optional[Dict[str, Any]]:
+ """Find a single record."""
+ try:
+ table = self._get_table(params.model, params.namespace)
+ query = select(table)
+
+ if params.where:
+ where_clause = self._build_where_clause(table, params.where)
+ query = query.where(where_clause)
+
+ if params.select:
+ columns = [getattr(table.c, field) for field in params.select]
+ query = select(*columns).select_from(table).where(where_clause if params.where else None)
+
+ with self.engine.connect() as conn:
</code_context>
<issue_to_address>
**issue (bug_risk):** Calling `.where(None)` when no `where` is provided may cause issues and the conditional is awkward.
In the `params.select` branch:
```python
if params.select:
columns = [getattr(table.c, field) for field in params.select]
query = select(*columns).select_from(table).where(where_clause if params.where else None)
```
When `params.where` is falsy this becomes `.where(None)`, which is unnecessary and can raise SQLAlchemy warnings/errors. It also duplicates the WHERE handling.
You can avoid this and keep the logic centralized with:
```python
columns = [getattr(table.c, field) for field in params.select]
query = select(*columns).select_from(table)
if params.where:
where_clause = self._build_where_clause(table, params.where)
query = query.where(where_clause)
```
</issue_to_address>
### Comment 4
<location path="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py" line_range="169-153" />
<code_context>
+ except Exception as e:
+ raise QueryFailedError(f"FindOne failed: {str(e)}", cause=e)
+
+ async def find_many(self, params: FindManyParams) -> List[Dict[str, Any]]:
+ """Find multiple records."""
+ try:
+ table = self._get_table(params.model, params.namespace)
+ query = select(table)
+
+ if params.where:
+ where_clause = self._build_where_clause(table, params.where)
+ query = query.where(where_clause)
</code_context>
<issue_to_address>
**issue (bug_risk):** When `select` is used in `find_many`, ordering/limit/offset logic is silently dropped.
If `params.select` is provided, the code rebuilds `query` with `select(*columns).select_from(table)` and only reapplies `where`, dropping any previously applied `order_by`, `limit`, and `offset`. This changes behavior whenever a projection is requested. Instead, construct the projection first and then apply `where`/`order_by`/`limit`/`offset` once, or adjust the existing `select(table)` to change the selected columns without discarding the rest of the query state.
</issue_to_address>
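A sketch of the suggested restructuring — choose the projection first, then apply each clause exactly once (the parameter names below are illustrative stand-ins for the adapter's `FindManyParams` fields):

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, select)

engine = create_engine("sqlite:///:memory:")
metadata = MetaData()
items = Table("items", metadata,
              Column("id", Integer, primary_key=True),
              Column("name", String))

def build_find_many(table, fields=None, order_by=None, limit=None, offset=None):
    # Decide the column list up front so later clauses are never discarded.
    if fields:
        query = select(*[getattr(table.c, f) for f in fields]).select_from(table)
    else:
        query = select(table)
    if order_by:
        for field, direction in order_by:
            col = getattr(table.c, field)
            query = query.order_by(col.desc() if direction == "desc" else col)
    if limit is not None:
        query = query.limit(limit)
    if offset is not None:
        query = query.offset(offset)
    return query

# Projection plus ordering plus limit now survive together in one statement.
sql = str(build_find_many(items, fields=["name"],
                          order_by=[("id", "desc")], limit=5))
```

Because the projection decision happens before the clause pipeline, there is no second `select(...)` call that could throw away previously applied state.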
### Comment 5
<location path="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py" line_range="319-323" />
<code_context>
+ except Exception as e:
+ raise QueryFailedError(f"Count failed: {str(e)}", cause=e)
+
+ async def transaction(self, callback: Callable) -> Any:
+ """Execute operations within a transaction."""
+ with self.engine.begin() as conn:
+ # Create transaction adapter
+ trx_adapter = SQLAlchemyTransactionAdapter(conn, self)
+ return await callback(trx_adapter)
+
</code_context>
<issue_to_address>
**issue (bug_risk):** Transactional operations do not actually use the transaction connection.
`transaction` creates a `SQLAlchemyTransactionAdapter` with a connection from `self.engine.begin()`, but the adapter methods delegate back to `self.parent` (e.g. `parent.create`), which starts its own connection/transaction. As a result, calls made inside the callback are not guaranteed to use the same DB transaction, breaking the expected transactional semantics.
To fix this, the transaction adapter should run queries using `self.connection` directly, or the parent methods should accept an optional connection/transaction and avoid opening a new one when it is provided.
</issue_to_address>
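The second suggested fix — parent methods accept an optional connection and only open their own when none is provided — can be sketched like this (a simplified, sync-only sketch with hypothetical names; the adapter's real params and typing are omitted):

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, insert, select)

engine = create_engine("sqlite:///:memory:")
metadata = MetaData()
accounts = Table("accounts", metadata,
                 Column("id", Integer, primary_key=True),
                 Column("name", String))
metadata.create_all(engine)

def create_row(data, conn=None):
    # When a transaction connection is supplied, reuse it; otherwise
    # open (and auto-commit) a standalone transaction.
    if conn is not None:
        conn.execute(insert(accounts).values(**data))
    else:
        with engine.begin() as own_conn:
            own_conn.execute(insert(accounts).values(**data))

# All work inside the block shares one transaction and rolls back together.
try:
    with engine.begin() as conn:
        create_row({"name": "alice"}, conn=conn)
        raise RuntimeError("force rollback")
except RuntimeError:
    pass

with engine.connect() as check:
    rows = check.execute(select(accounts)).fetchall()
```

With the connection threaded through, the forced error rolls the insert back; with the current delegate-to-parent design, the insert would have committed on its own connection.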
### Comment 6
<location path="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py" line_range="281-290" />
<code_context>
+ except Exception as e:
+ raise QueryFailedError(f"DeleteMany failed: {str(e)}", cause=e)
+
+ async def upsert(self, params: UpsertParams) -> Dict[str, Any]:
+ """Upsert a record."""
+ # Try to find existing record
+ existing = await self.find_one(
+ FindOneParams(model=params.model, where=params.where, namespace=params.namespace)
+ )
+
+ if existing:
+ return await self.update(
+ UpdateParams(
+ model=params.model,
+ where=params.where,
+ data=params.update,
+ namespace=params.namespace,
+ )
+ )
+ else:
+ return await self.create(
+ CreateParams(model=params.model, data=params.create, namespace=params.namespace)
+ )
</code_context>
<issue_to_address>
**suggestion (bug_risk):** The `upsert` implementation is non-atomic and may race under concurrent access.
This uses a `find_one` followed by `update`/`create`, so two concurrent callers can both see no existing record and then both call `create`, causing duplicates or integrity errors. Where possible, prefer a single atomic upsert (e.g., `ON CONFLICT` / `ON DUPLICATE KEY UPDATE`) or wrap the check-and-create path in a transaction with appropriate locking. If you need to stay DB-agnostic, at least document the non-atomic behavior and normalize any resulting integrity errors to a stable error type so callers can handle them explicitly.
Suggested implementation:
```python
+ except Exception as e:
+ raise QueryFailedError(f"DeleteMany failed: {str(e)}", cause=e)
+
+ async def upsert(self, params: UpsertParams) -> Dict[str, Any]:
+ """
+ Upsert a record.
+
+ Note:
+ This implementation is not fully atomic across all backends because it
+ composes higher-level operations (find / create / update). Under
+ concurrent access, two callers may both attempt to insert the same
+ logical record. In that case, the database should raise an integrity
+ error (e.g., unique constraint violation), which is normalized here
+ to a QueryFailedError with a stable error message so callers can
+ handle it explicitly.
+ """
+ try:
+ # Try to find existing record
+ existing = await self.find_one(
+ FindOneParams(
+ model=params.model,
+ where=params.where,
+ namespace=params.namespace,
+ )
+ )
+
+ if existing:
+ return await self.update(
+ UpdateParams(
+ model=params.model,
+ where=params.where,
+ data=params.update,
+ namespace=params.namespace,
+ )
+ )
+
+ # No existing record found; try to create one.
+ # Under concurrent access, this may raise an IntegrityError if
+ # another transaction inserts a conflicting row first.
+ return await self.create(
+ CreateParams(
+ model=params.model,
+ data=params.create,
+ namespace=params.namespace,
+ )
+ )
+
+ except IntegrityError as e:
+ # Normalize integrity / uniqueness violations to a stable error
+ # type so callers can distinguish them from other failures.
+ raise QueryFailedError(
+ "Upsert failed due to an integrity constraint violation; "
+ "the upsert operation is not atomic under concurrent access.",
+ cause=e,
+ )
+ except Exception as e:
+ raise QueryFailedError(f"Upsert failed: {str(e)}", cause=e)
+
+
+
+class SQLAlchemyAdapter:
```
1. Ensure `IntegrityError` is imported at the top of this file:
```python
from sqlalchemy.exc import IntegrityError
```
2. This implementation assumes `FindOneParams`, `UpdateParams`, `CreateParams`, `UpsertParams`, and `QueryFailedError` are already defined and imported in this module, and that `find_one`, `update`, and `create` are `async` methods returning dictionaries or `None`. If their signatures differ, adjust the parameter names and truthiness check on `existing` accordingly.
3. If your project has a more specific error type for integrity/uniqueness violations, you may want to raise that instead of `QueryFailedError`, or wrap `QueryFailedError` in a richer error hierarchy.
</issue_to_address>
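The atomic alternative mentioned above (`ON CONFLICT`) can be sketched with SQLAlchemy's dialect-specific insert constructs. This uses the SQLite variant; PostgreSQL's `sqlalchemy.dialects.postgresql.insert` exposes the same `on_conflict_do_update` API, and MySQL has the analogous `on_duplicate_key_update` (the schema here is illustrative):

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, select)
from sqlalchemy.dialects.sqlite import insert as sqlite_insert

engine = create_engine("sqlite:///:memory:")
metadata = MetaData()
users = Table("users", metadata,
              Column("id", Integer, primary_key=True),
              Column("email", String, unique=True),
              Column("name", String))
metadata.create_all(engine)

def upsert_user(email, name):
    stmt = sqlite_insert(users).values(email=email, name=name)
    # One atomic statement: no find-then-create race window.
    stmt = stmt.on_conflict_do_update(
        index_elements=[users.c.email],
        set_={"name": name},
    )
    with engine.begin() as conn:
        conn.execute(stmt)

upsert_user("a@example.com", "first")
upsert_user("a@example.com", "second")  # updates in place, no duplicate

with engine.connect() as conn:
    rows = conn.execute(select(users)).fetchall()
```

The trade-off is that this path is dialect-specific, which is why the review also suggests documenting the non-atomic behavior if the adapter must stay DB-agnostic.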
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 63ada3c505
```python
with self.engine.begin() as conn:
    result = conn.execute(insert(table).values(**params.data).returning(table))
```

Use async SQLAlchemy flow when engine is AsyncEngine

The adapter accepts `Engine | AsyncEngine` and the README advertises async-engine support, but methods use synchronous context managers and execution (`with self.engine.begin()` / `conn.execute(...)`). When an `AsyncEngine` is passed, this path raises at runtime because async engines require `async with` and awaited calls, so core CRUD operations fail instead of running.
```python
async def create(self, params: CreateParams) -> Dict[str, Any]:
    return await self.parent.create(params)
```

Route transaction adapter calls through the transaction connection

Inside `transaction()`, the callback receives `SQLAlchemyTransactionAdapter`, but its CRUD methods delegate to `self.parent.*`, which opens fresh engine connections/transactions. That means work inside the callback is not bound to the outer transaction `conn`, so a rollback/error in the callback can still leave prior writes committed.
```python
if params.select:
    columns = [getattr(table.c, field) for field in params.select]
    query = select(*columns).select_from(table)
    if params.where:
```

Preserve pagination and ordering when projecting selected columns

When `params.select` is set, `find_many()` rebuilds the query with `select(*columns).select_from(table)` and only reapplies `where`. Any previously applied `order_by`, `limit`, and `offset` are dropped, so callers requesting projection plus pagination/sorting get incorrect result sets.
```python
if namespace:
    return f"{namespace}_{model}"
return model
```

Apply namespace_prefix when building table names

`namespace_prefix` is accepted in the constructor and `create_adapter(...)` but `_get_table_name()` ignores it and returns only `{namespace}_{model}` or `model`. As a result, configured prefixes never affect table resolution, which can cause collisions and makes the public option ineffective.
```python
async def create(self, params: CreateParams) -> Dict[str, Any]:
    """Create a single record."""
    try:
        table = self._get_table(params.model, params.namespace)
        with self.engine.begin() as conn:
            result = conn.execute(insert(table).values(**params.data).returning(table))
            row = result.fetchone()
            return dict(row._mapping) if row else params.data
    except IntegrityError as e:
        if "unique" in str(e).lower() or "duplicate" in str(e).lower():
            raise DuplicateKeyError(str(e), cause=e)
        raise ConstraintViolationError(str(e), cause=e)
    except OperationalError as e:
        raise ConnectionError(str(e), cause=e)
    except Exception as e:
        raise QueryFailedError(f"Create failed: {str(e)}", cause=e)
```

Critical: Async/sync mismatch - All methods are declared `async def` but use synchronous SQLAlchemy operations (`with self.engine.begin()`, `with self.engine.connect()`). This will:

- Block the event loop when used with a sync Engine in async contexts
- Fail immediately with AsyncEngine (requires `async with` instead of `with`)

This pattern is repeated throughout all database operations (create, find_one, find_many, update, delete, etc.).

Fix: Either use synchronous methods or properly support AsyncEngine:

```python
# For AsyncEngine support:
if isinstance(self.engine, AsyncEngine):
    async with self.engine.begin() as conn:
        result = await conn.execute(...)
else:
    with self.engine.begin() as conn:
        result = conn.execute(...)
```

Suggested change:

```python
async def create(self, params: CreateParams) -> Dict[str, Any]:
    """Create a single record."""
    try:
        table = self._get_table(params.model, params.namespace)
        if isinstance(self.engine, AsyncEngine):
            async with self.engine.begin() as conn:
                result = await conn.execute(insert(table).values(**params.data).returning(table))
                row = result.fetchone()
                return dict(row._mapping) if row else params.data
        else:
            with self.engine.begin() as conn:
                result = conn.execute(insert(table).values(**params.data).returning(table))
                row = result.fetchone()
                return dict(row._mapping) if row else params.data
    except IntegrityError as e:
        if "unique" in str(e).lower() or "duplicate" in str(e).lower():
            raise DuplicateKeyError(str(e), cause=e)
        raise ConstraintViolationError(str(e), cause=e)
    except OperationalError as e:
        raise ConnectionError(str(e), cause=e)
    except Exception as e:
        raise QueryFailedError(f"Create failed: {str(e)}", cause=e)
```

Spotted by Graphite
```python
with self.engine.begin() as conn:
    # Create transaction adapter
    trx_adapter = SQLAlchemyTransactionAdapter(conn, self)
    return await callback(trx_adapter)
```

Critical: Transaction context not properly shared - The transaction creates a connection with `self.engine.begin()` but the `SQLAlchemyTransactionAdapter` operations call `self.parent.create()`, `self.parent.update()`, etc., which open their own separate connections via `self.engine.begin()`. This means operations within the transaction callback are not actually part of the same transaction.

Fix: The transaction adapter needs to use the existing connection, not create new ones:

```python
class SQLAlchemyTransactionAdapter:
    def __init__(self, connection, parent_adapter: SQLAlchemyAdapter):
        self.connection = connection
        self.parent = parent_adapter

    async def create(self, params: CreateParams) -> Dict[str, Any]:
        table = self.parent._get_table(params.model, params.namespace)
        result = self.connection.execute(insert(table).values(**params.data).returning(table))
        # ... use self.connection instead of opening new connections
```

Spotted by Graphite
Greptile Summary

This PR introduces a new `superfunctions-sqlalchemy` package. Key issues found:
Confidence Score: 1/5

Not safe to merge — three P0 bugs will cause runtime crashes or silently broken behaviour in production.

Score of 1 reflects: (1) an UnboundLocalError crash in find_one when select is used without where, (2) find_many silently returning wrong results when select and limit/offset/order_by are combined, and (3) SQLAlchemyTransactionAdapter providing zero atomicity because all operations bypass the transaction connection. The MySQL RETURNING incompatibility and unused namespace_prefix add further correctness risk. No tests are present to catch any of these.

adapter.py requires significant rework: fix the find_one UnboundLocalError, rebuild the find_many select branch to preserve pagination, rewrite SQLAlchemyTransactionAdapter to use its own connection, add MySQL-compatible write paths, and wire up namespace_prefix in _get_table_name.

Important Files Changed
Flowchart:

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    CA[create_adapter\nengine, namespace_prefix] --> SA[SQLAlchemyAdapter]
    SA -->|CRUD methods| E{engine.begin / engine.connect}
    E --> DB[(Database)]
    SA -->|transaction callback| TA[SQLAlchemyTransactionAdapter\nstores connection]
    TA -->|delegates back to| SA
    SA -->|opens NEW connection| DB
    TA -. stored connection\nnever used .-> DB
    subgraph BUG ["⚠ Transaction Bug"]
        TA
    end
    style BUG fill:#ffcccc,stroke:#cc0000
    style TA fill:#ff9999
```
Prompt To Fix All With AI

This is a comment left during a code review.
Path: packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
Line: 154-159
Comment:
**`UnboundLocalError` when `select` is used without `where`**
When `params.select` is provided but `params.where` is falsy, the expression on line 159 (`where_clause if params.where else None`) references `where_clause` — but that name is only assigned inside the `if params.where:` block on line 154. Python will raise `UnboundLocalError` at runtime for any call like `find_one(FindOneParams(model=..., select=["id", "name"]))`.
```suggestion
if params.where:
where_clause = self._build_where_clause(table, params.where)
query = query.where(where_clause)
if params.select:
columns = [getattr(table.c, field) for field in params.select]
query = select(*columns).select_from(table)
if params.where:
query = query.where(where_clause)
```
How can I resolve this? If you propose a fix, please make it concise.
---
This is a comment left during a code review.
Path: packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
Line: 190-194
Comment:
**`select` column projection silently discards `limit`, `offset`, and `order_by`**
The `if params.select:` block replaces `query` with a brand-new `select(*columns).select_from(table)` object, which has no memory of the `.order_by()` (lines 180–182), `.limit()` (line 185), and `.offset()` (line 188) that were just applied. Any `find_many` call combining `select` with `limit`/`offset`/`order_by` will return unsorted, unpaginated results with no error.
```suggestion
if params.select:
columns = [getattr(table.c, field) for field in params.select]
new_query = select(*columns).select_from(table)
if params.where:
new_query = new_query.where(self._build_where_clause(table, params.where))
if params.order_by:
for order in params.order_by:
col = getattr(table.c, order.field)
new_query = new_query.order_by(col.desc() if order.direction == "desc" else col)
if params.limit:
new_query = new_query.limit(params.limit)
if params.offset:
new_query = new_query.offset(params.offset)
query = new_query
```
How can I resolve this? If you propose a fix, please make it concise.
---
This is a comment left during a code review.
Path: packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
Line: 388-402
Comment:
**`SQLAlchemyTransactionAdapter` operations run outside the transaction**
Every method here delegates straight to `self.parent.create()`, `self.parent.find_one()`, etc. Each of those parent methods opens its **own new connection** via `with self.engine.begin() as conn:` or `with self.engine.connect() as conn:`, completely ignoring `self.connection`. The stored connection is used only in `rollback()` (line 386) and nowhere else.
This means operations issued inside a `transaction()` callback are individually auto-committed on separate connections — they are not atomic and cannot be rolled back together. The transaction abstraction provides no isolation guarantee at all.
Each method must use `self.connection` directly, for example:
```python
async def create(self, params: CreateParams) -> Dict[str, Any]:
table = self.parent._get_table(params.model, params.namespace)
result = self.connection.execute(
insert(table).values(**params.data).returning(table)
)
row = result.fetchone()
return dict(row._mapping) if row else params.data
```
All other delegated methods need equivalent treatment.
How can I resolve this? If you propose a fix, please make it concise.
---
This is a comment left during a code review.
Path: packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
Line: 132-136
Comment:
**`.returning()` is unsupported by MySQL**
`.returning(table)` is used in `create` (line 134), `update` (line 211), and `create_many` (line 247). MySQL does not support the `RETURNING` clause — SQLAlchemy will raise a `CompileError` for the MySQL dialect. Both the README and `pyproject.toml` advertise MySQL as a first-class supported backend (`pip install superfunctions-sqlalchemy[mysql]`), so every MySQL user will hit this on every write operation.
For MySQL compatibility, replace the `RETURNING`-based fetch with a dialect-aware approach such as capturing `result.lastrowid` after the insert and then selecting the row back, or using SQLAlchemy's `Inspector`/`select` pattern.
How can I resolve this? If you propose a fix, please make it concise.
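The "capture the generated key, then select the row back" fallback suggested above can be sketched with stdlib `sqlite3`, whose cursors expose the same `lastrowid` attribute that MySQL drivers provide; the table and data are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

# Dialects without RETURNING: insert, grab the generated key, re-select.
cur = conn.execute("INSERT INTO users (email) VALUES (?)", ("a@example.com",))
new_id = cur.lastrowid  # server-generated primary key
row = conn.execute(
    "SELECT id, email FROM users WHERE id = ?", (new_id,)
).fetchone()
print(row)  # (1, 'a@example.com')
```

In SQLAlchemy the same idea would branch on the dialect (or its RETURNING support flags) and fall back to this two-step fetch; both statements should run on the same connection so they share the transaction.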
---
This is a comment left during a code review.
Path: packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
Line: 50-74
Comment:
**`namespace_prefix` is stored but never applied to table names**
`self.namespace_prefix` is accepted and stored in `__init__` (line 51), but `_get_table_name` (lines 70–74) only uses the per-call `namespace` argument — it never reads `self.namespace_prefix`. Any adapter configured with a prefix (e.g., `create_adapter(engine, namespace_prefix="myapp_")`) will have it silently ignored on every query.
```suggestion
def _get_table_name(self, model: str, namespace: Optional[str] = None) -> str:
    """Get full table name with namespace."""
    if namespace:
        return f"{self.namespace_prefix}{namespace}_{model}"
    return f"{self.namespace_prefix}{model}"
```
How can I resolve this? If you propose a fix, please make it concise.
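The suggested fix is easy to check in isolation. A free-function sketch of the same naming logic (the function name and defaults are assumptions for the demo, not the package's API):

```python
from typing import Optional

def get_table_name(
    model: str,
    namespace: Optional[str] = None,
    namespace_prefix: str = "",
) -> str:
    """Build the physical table name: [prefix][namespace_]model."""
    if namespace:
        return f"{namespace_prefix}{namespace}_{model}"
    return f"{namespace_prefix}{model}"

print(get_table_name("users", namespace_prefix="myapp_"))                    # myapp_users
print(get_table_name("users", namespace="auth", namespace_prefix="myapp_"))  # myapp_auth_users
```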
---
This is a comment left during a code review.
Path: packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
Line: 154-159
Comment:
**`select` without `where` passes `None` to `.where()`**
When `params.select` is provided but `params.where` is falsy, the expression on line 159 (`where_clause if params.where else None`) evaluates to `None`: Python's conditional expression never evaluates the untaken branch, so the `where_clause` name assigned only inside the `if params.where:` block on line 154 is never read (no `UnboundLocalError`). Instead, the rebuilt query calls `.where(None)`, which SQLAlchemy rejects or warns about at runtime for any call like `find_one(FindOneParams(model=..., select=["id", "name"]))`.
```suggestion
if params.where:
    where_clause = self._build_where_clause(table, params.where)
    query = query.where(where_clause)
if params.select:
    columns = [getattr(table.c, field) for field in params.select]
    query = select(*columns).select_from(table)
    if params.where:
        query = query.where(where_clause)
```
How can I resolve this? If you propose a fix, please make it concise.
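Worth noting how the falsy-`where` path actually behaves: Python's conditional expression skips the untaken branch, so the unbound name is never read and the rebuilt query receives `.where(None)`. A minimal pure-Python check (names are stand-ins for the adapter's locals):

```python
def pick(flag):
    if flag:
        where_clause = "age > 21"  # only bound on this branch
    # Conditional expressions evaluate lazily: with flag=False the
    # `where_clause` branch is skipped, so this yields None rather
    # than raising UnboundLocalError.
    return where_clause if flag else None

print(pick(False))  # None
print(pick(True))   # age > 21
```

Either way the suggestion stands: build the projection query first and apply `.where()` only when a clause exists.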
7 issues found across 4 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py">
<violation number="1" location="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py:42">
P1: The type signature accepts `AsyncEngine` but every method uses synchronous context managers (`with self.engine.begin()`, `with self.engine.connect()`). Passing an `AsyncEngine` will fail at runtime because `AsyncEngine` requires `async with`. Even with a sync `Engine`, the `async def` methods perform blocking I/O that will block the event loop.
Either remove `AsyncEngine` from the type signature and document that this adapter is sync-only, or add proper async code paths using `async with engine.begin()` when an `AsyncEngine` is provided.</violation>
<violation number="2" location="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py:70">
P2: `namespace_prefix` is accepted and stored in `__init__` but `_get_table_name()` only considers `namespace` and `model`, making the prefix option a no-op. Table names will never include the prefix, which can cause collisions in multi-tenant setups.</violation>
<violation number="3" location="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py:128">
P0: Transaction adapter delegates operations back to the parent adapter, which opens its own connections. The `self.connection` passed to `SQLAlchemyTransactionAdapter` is stored but never used, so all operations inside a `transaction()` callback execute outside the transaction on separate connections. Commits and rollbacks have no effect on the actual work performed.
The transaction adapter needs to execute statements on `self.connection` directly instead of delegating to parent methods that create their own connections.</violation>
<violation number="4" location="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py:157">
P1: When `params.select` is provided, the query is rebuilt from scratch, silently discarding any previously applied `order_by`, `limit`, and `offset` clauses. Queries with both column selection and pagination/sorting will return incorrectly ordered or unpaginated results.
Instead of rebuilding the query, apply column selection earlier (before order_by/limit/offset) or use `.with_only_columns()` on the existing query.</violation>
<violation number="5" location="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py:159">
P2: When `params.select` is set but `params.where` is falsy, this passes `None` to `.where()`, which can produce SQLAlchemy warnings or errors. Build the projection query separately and only apply `.where()` conditionally.</violation>
</file>
<file name="packages/python-sqlalchemy/README.md">
<violation number="1" location="packages/python-sqlalchemy/README.md:55">
P2: The README claims AsyncEngine support, but the adapter implementation uses synchronous engine/connection calls and does not implement async SQLAlchemy access patterns.</violation>
<violation number="2" location="packages/python-sqlalchemy/README.md:68">
P2: The example uses `await` at top level, which is not valid in normal Python script usage. Wrap the calls in an async function and run it via `asyncio.run(...)`.</violation>
</file>
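For the README's top-level `await` issue (violation 2 above), the standard script entry point looks like this; the coroutine body is a stand-in, not the adapter's real API:

```python
import asyncio

async def main() -> str:
    # Stand-in for the README's `await adapter.create(...)` calls;
    # asyncio.sleep(0, result=...) just yields control and returns a value.
    return await asyncio.sleep(0, result="created")

# Valid top-level usage; a bare `await` at module scope is a SyntaxError
# outside of async-aware REPLs.
print(asyncio.run(main()))  # created
```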
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
3 issues found across 2 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py">
<violation number="1" location="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py:142">
P2: Re-selecting after INSERT by matching on all `params.data` fields is fragile: it may match the wrong row when data fields aren't unique, and the `params.data` fallback loses server-generated values (auto-increment IDs, default timestamps). The previous `.returning(table)` approach handled this correctly.</violation>
<violation number="2" location="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py:226">
P1: The re-select after UPDATE uses the original WHERE clause, which will fail to find the row if the update changed any column referenced in that clause. For example, updating `email` when `WHERE email = 'old@...'` causes a spurious `NotFoundError`. The previous code avoided this by using `.returning(table)` on the UPDATE statement itself.</violation>
<violation number="3" location="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py:275">
P2: `create_many` now returns a shallow copy of the input data (`list(params.data)`) instead of the actual inserted rows. Server-generated columns (auto-increment IDs, defaults) are lost. The old code used `.returning(table)` to return the real persisted records.</violation>
</file>
**CRITICAL: `update_many` will update ALL records if the `where` clause is empty**

If `normalized.where` is `None` or an empty list, `_build_where_clause` returns `None`, causing the update to execute without a `WHERE` clause and modify every record in the table.

```suggestion
async def update_many(self, params: Optional[UpdateManyParams] = None, **kwargs: Any) -> int:
    """Update multiple records."""
    normalized = self._coerce_params(UpdateManyParams, params, kwargs)
    try:
        table = self._get_table(normalized.model, normalized.namespace)
        where_clause = self._build_where_clause(table, normalized.where)
        # Add validation
        if where_clause is None:
            raise ValueError("update_many requires a where clause to prevent updating all records")
        with self.engine.begin() as conn:
            result = conn.execute(update(table).where(where_clause).values(**normalized.data))
            return result.rowcount
    except Exception as e:
        raise QueryFailedError(f"UpdateMany failed: {str(e)}", cause=e)
```

Spotted by Graphite
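The blast radius is easy to confirm with stdlib `sqlite3` (hypothetical table): an UPDATE with no WHERE clause touches every row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, active INTEGER)")
conn.executemany("INSERT INTO users (active) VALUES (?)", [(1,), (1,), (1,)])

cur = conn.execute("UPDATE users SET active = 0")  # no WHERE: every row is hit
print(cur.rowcount)  # 3 -- the whole table was updated
```

Raising early when the where clause is missing, as the suggestion does, is the cheap guard against this.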
Actionable comments posted: 3
🧹 Nitpick comments (4)
packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py (3)
**468-473: Sync context manager with async callback may block the event loop.**

The `transaction` method uses a synchronous `with self.engine.begin()` context manager but awaits an async callback inside. Since this adapter targets sync engines, blocking is expected. However, this pattern may cause issues if users expect non-blocking behavior in an async context.

Consider documenting this behavior or using `run_in_executor` if true async transaction support is needed in the future.
**233-240: Use `raise ... from e` for proper exception chaining.**

The custom exceptions accept a `cause` parameter, but Python's standard `raise X from e` syntax preserves the full traceback chain. Without it, the original exception context may be lost in some scenarios.

Proposed fix:

```diff
 except IntegrityError as e:
     if "unique" in str(e).lower() or "duplicate" in str(e).lower():
-        raise DuplicateKeyError(str(e), cause=e)
-    raise ConstraintViolationError(str(e), cause=e)
+        raise DuplicateKeyError(str(e), cause=e) from e
+    raise ConstraintViolationError(str(e), cause=e) from e
 except OperationalError as e:
-    raise ConnectionError(str(e), cause=e)
+    raise ConnectionError(str(e), cause=e) from e
 except Exception as e:
-    raise QueryFailedError(f"Create failed: {str(e)}", cause=e)
+    raise QueryFailedError(f"Create failed: {str(e)}", cause=e) from e
```

This pattern should be applied to all similar exception handling blocks throughout the file (lines 261, 293, 320-325, 346-349, 374-377, 390-391, 404-405, 465-466, 481-482).
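The difference `from e` makes is observable on the raised exception itself. A pure-Python sketch with a stand-in `QueryFailedError` (the real class's signature may differ):

```python
class QueryFailedError(Exception):
    """Stand-in for the adapter's custom exception (hypothetical shape)."""
    def __init__(self, message, cause=None):
        super().__init__(message)
        self.cause = cause

def do_query():
    try:
        raise ValueError("bad column")
    except ValueError as e:
        # `from e` sets __cause__, so the original traceback stays attached
        # in addition to the ad-hoc `cause` attribute.
        raise QueryFailedError(f"Create failed: {e}", cause=e) from e

try:
    do_query()
    chained = None
except QueryFailedError as err:
    chained = type(err.__cause__).__name__

print(chained)  # ValueError
```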
**407-448: Consider using database-native upsert for atomicity.**

The current implementation uses a find-then-create pattern with a `DuplicateKeyError` fallback, which handles the race condition but executes multiple non-atomic queries. SQLAlchemy 2.0+ supports database-native upserts via `insert().on_conflict_do_update()` (PostgreSQL) and similar constructs for other databases, providing true atomicity.

The current approach is functional and the duplicate-key retry is a reasonable fallback, but database-native upsert would be more robust under high concurrency.

packages/python-sqlalchemy/README.md (1)
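SQLite 3.24+ supports the same `ON CONFLICT ... DO UPDATE` clause as PostgreSQL, so the atomic-upsert idea can be sketched with only the stdlib; table and columns are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT PRIMARY KEY, name TEXT)")

upsert = """
INSERT INTO users (email, name) VALUES (?, ?)
ON CONFLICT(email) DO UPDATE SET name = excluded.name
"""
conn.execute(upsert, ("a@example.com", "Alice"))
conn.execute(upsert, ("a@example.com", "Alice B."))  # same key: updates in place

row = conn.execute("SELECT email, name FROM users").fetchone()
print(row)  # ('a@example.com', 'Alice B.') -- one row, upserted atomically
```

A single statement does the insert-or-update, so there is no window for a concurrent writer between the find and the create.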
**59-93: Example shows async patterns but feature list says "Sync SQLAlchemy engines".**

The example wraps synchronous adapter calls in an `async def` function and uses `await`, which is technically correct since the adapter methods are `async`. However, this may confuse users who see "✅ Sync SQLAlchemy engines" in the feature list (line 55) but then encounter async/await syntax in the example.

Consider either:
- Simplifying the example to show synchronous-style usage (if that's the intended primary use case)
- Clarifying in the features that the adapter exposes an async API over sync engines
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py`:
- Around line 215-220: The dict comprehension building the primary-key mapping
uses zip(table.primary_key.columns, inserted_primary_key) which silently
truncates on length mismatch; change that zip call to use zip(..., strict=True)
so a ValueError is raised when table.primary_key.columns and
inserted_primary_key have different lengths (locate the comprehension using
table.primary_key.columns and inserted_primary_key in adapter.py and update the
zip invocation to include strict=True).
- Around line 338-340: The NotFoundError is raised with an unnecessary f-string
literal in the deletion path; update the raise in the delete flow (the code
around result = conn.execute(delete(table).where(where_clause)) and the
NotFoundError) to use a plain string literal (remove the leading "f") so the
message becomes a normal string without interpolation.
- Around line 38-39: The SQLAlchemyAdapter class does not match the Adapter
protocol signatures and should either explicitly inherit and implement Adapter
or a matching Protocol should be declared and used as the factory return type;
to fix, update SQLAlchemyAdapter to "class SQLAlchemyAdapter(Adapter):" and
change its async method signatures (create, find_many, find_one, update, delete,
transaction) to match the Adapter Protocol (e.g., create(self, model: str, data:
Dict[str, Any], namespace: str = "datafn") -> None, etc.), or alternatively
define a new Protocol that matches the current method shapes and change
create_adapter's return annotation from Adapter to that new Protocol; ensure
create_adapter, SQLAlchemyAdapter, and all listed method names are updated
consistently.
---
Nitpick comments:
In `@packages/python-sqlalchemy/README.md`:
- Around line 59-93: The README example shows async usage (async def main, await
adapter.create/adapter.find_many) while the feature list claims "Sync SQLAlchemy
engines"; update the docs for consistency by either replacing the code sample to
demonstrate synchronous-style usage (call create_adapter and then call
adapter.create and adapter.find_many in a normal, non-async flow) or, if the
library intentionally exposes an async API over sync engines, change the feature
list text to explicitly state that create_adapter returns an async adapter
(mention create_adapter, adapter.create, adapter.find_many, CreateParams,
FindManyParams) and add a short note above the example clarifying that the
adapter methods are async even when using sync SQLAlchemy engines.
In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py`:
- Around line 468-473: The transaction method uses a synchronous context manager
(with self.engine.begin()) while awaiting an async callback, which can block the
event loop; update the transaction(self, callback) implementation or docs:
either document that transaction is blocking/sync-only in the method docstring
(mentioning SQLAlchemyTransactionAdapter and that this adapter targets sync
engines), or change the implementation to offload the blocking work to a thread
executor (e.g., run the with self.engine.begin() and callback(trx_adapter)
inside run_in_executor) to avoid blocking the event loop if true async behavior
is required.
- Around line 233-240: The exception handlers in adapter.py (e.g., the block
handling IntegrityError/OperationalError/Exception that currently raises
DuplicateKeyError, ConstraintViolationError, ConnectionError, and
QueryFailedError with a cause= argument) should use Python exception chaining:
replace the current raises that pass cause=e with "raise <CustomError>(... )
from e" so the original traceback is preserved; apply this change consistently
to all similar handlers mentioned (lines around the handlers for
DuplicateKeyError, ConstraintViolationError, ConnectionError, QueryFailedError
and the other listed blocks) and ensure the message text remains the same while
removing or keeping the cause parameter as appropriate.
- Around line 407-448: The upsert method in adapter.py (async def upsert)
currently does a non-atomic find-then-create with a DuplicateKeyError fallback,
which risks race conditions under concurrency; replace this pattern with a
database-native atomic upsert using SQLAlchemy's
insert().on_conflict_do_update() (or the DB-specific equivalent) for the model
referenced by normalized.model, building the insert from normalized.create and
the update clause from normalized.update, and return the selected columns
(normalized.select) from that single statement; if your DB backend does not
support on_conflict_do_update, keep the DuplicateKeyError fallback around the
atomic attempt (call find_one/create/update only as a fallback) so behavior
remains correct for unsupported dialects.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2cf6a2ff-7cda-408b-bf9c-05629eb41d55
📒 Files selected for processing (5)
- packages/python-sqlalchemy/README.md
- packages/python-sqlalchemy/pyproject.toml
- packages/python-sqlalchemy/superfunctions_sqlalchemy/__init__.py
- packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
- packages/python-sqlalchemy/tests/test_adapter.py
```python
class SQLAlchemyAdapter:
    """SQLAlchemy adapter for superfunctions.db"""
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if Adapter is a Protocol or ABC in superfunctions.db
rg -n "class Adapter" --type py -A5 | head -50Repository: 21nCo/super-functions
Length of output: 399
🏁 Script executed:
#!/bin/bash
# Find the adapter.py file in packages/python-sqlalchemy
find . -type f -name "adapter.py" | grep -i sqlalchemy
Repository: 21nCo/super-functions
Length of output: 130
🏁 Script executed:
#!/bin/bash
# Check the full Adapter protocol definition to see all required methods
rg -n "class Adapter" --type py -A 50 | head -100Repository: 21nCo/super-functions
Length of output: 2164
🏁 Script executed:
#!/bin/bash
# If adapter.py exists, check SQLAlchemyAdapter class definition
find . -type f -path "*/packages/python-sqlalchemy/*adapter.py" -exec head -60 {} +
Repository: 21nCo/super-functions
Length of output: 3692
🏁 Script executed:
#!/bin/bash
# Get the full SQLAlchemyAdapter class to see all methods
wc -l ./packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
Repository: 21nCo/super-functions
Length of output: 134
🏁 Script executed:
#!/bin/bash
# Check if the class implements the required protocol methods
rg -n "async def (find_many|find_one|create|update|delete|transaction)" ./packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.pyRepository: 21nCo/super-functions
Length of output: 1572
🏁 Script executed:
#!/bin/bash
# Check if Adapter Protocol is marked as runtime_checkable
rg -n "runtime_checkable|@runtime_checkable" ./datafn/python/datafn/db.py -B5Repository: 21nCo/super-functions
Length of output: 47
🏁 Script executed:
#!/bin/bash
# Check how create_adapter function is defined and if there are type checking issues
rg -n "def create_adapter" ./packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py -A20Repository: 21nCo/super-functions
Length of output: 736
🏁 Script executed:
#!/bin/bash
# Check full imports at top of db.py to see if Protocol is runtime_checkable
head -40 ./datafn/python/datafn/db.py
Repository: 21nCo/super-functions
Length of output: 1325
🏁 Script executed:
#!/bin/bash
# Check the exact definition of Adapter protocol with decorators
sed -n '1,35p' ./datafn/python/datafn/db.py
Repository: 21nCo/super-functions
Length of output: 1325
🏁 Script executed:
#!/bin/bash
# Check if SQLAlchemyAdapter has a different signature or if there are multiple Adapter definitions
rg -n "class.*Adapter" ./datafn/python/datafn/db.py -A3Repository: 21nCo/super-functions
Length of output: 158
🏁 Script executed:
#!/bin/bash
# Check if superfunctions.db exports multiple versions of Adapter
cat ./datafn/python/datafn/__init__.py | head -50
Repository: 21nCo/super-functions
Length of output: 141
🏁 Script executed:
#!/bin/bash
# Compare the exact method signature of SQLAlchemyAdapter.create vs Adapter.create
sed -n '242,250p' ./packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
Repository: 21nCo/super-functions
Length of output: 652
🏁 Script executed:
#!/bin/bash
# Check if there's a method signature mismatch - compare find_one signatures
sed -n '263,270p' ./packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
Repository: 21nCo/super-functions
Length of output: 580
🏁 Script executed:
#!/bin/bash
# Check if there's parameter transformation happening - look for how params are used
sed -n '240,265p' ./packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
Repository: 21nCo/super-functions
Length of output: 1340
SQLAlchemyAdapter has incompatible method signatures with the Adapter Protocol.
The create_adapter function declares return type Adapter (line 553), but SQLAlchemyAdapter does not explicitly inherit from or properly implement the Adapter protocol (line 38). More critically, the method signatures are incompatible:
- Protocol expects:
  `async def create(self, model: str, data: Dict[str, Any], namespace: str = "datafn") -> None`
- Implementation provides:
  `async def create(self, params: Optional[CreateParams] = None, **kwargs: Any) -> Dict[str, Any]`
This pattern is repeated across all protocol methods (find_many, find_one, update, delete, transaction). Since Adapter is an undecorated Protocol without @runtime_checkable, static type checkers (mypy, pyright) will flag this as a type error.
Options:
- Declare `class SQLAlchemyAdapter(Adapter):` and adjust method signatures to match the protocol
- Or define a separate Protocol that matches the actual implementation and use that for the return type
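The second option can be sketched with a structural Protocol that matches the params-object signatures and serves as the factory's return type. `FakeAdapter` and the params shapes below are illustrative stand-ins, not the PR's real classes:

```python
import asyncio
from typing import Any, Dict, List, Optional, Protocol

class ParamsAdapter(Protocol):
    """A Protocol matching the adapter's actual params-based method shapes."""
    async def create(self, params: Optional[Any] = None, **kwargs: Any) -> Dict[str, Any]: ...
    async def find_many(self, params: Optional[Any] = None, **kwargs: Any) -> List[Dict[str, Any]]: ...

class FakeAdapter:
    async def create(self, params: Optional[Any] = None, **kwargs: Any) -> Dict[str, Any]:
        return {"id": 1, **kwargs}

    async def find_many(self, params: Optional[Any] = None, **kwargs: Any) -> List[Dict[str, Any]]:
        return [{"id": 1}]

def create_adapter() -> ParamsAdapter:
    # Structural typing: FakeAdapter satisfies ParamsAdapter without inheriting
    # from it, so mypy/pyright accept this return without a nominal base class.
    return FakeAdapter()

adapter = create_adapter()
created = asyncio.run(adapter.create(name="ada"))
print(created)  # {'id': 1, 'name': 'ada'}
```

Because Protocols are checked structurally, no change to `SQLAlchemyAdapter`'s inheritance is needed with this route; only the annotation on `create_adapter` moves.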
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines
38 - 39, The SQLAlchemyAdapter class does not match the Adapter protocol
signatures and should either explicitly inherit and implement Adapter or a
matching Protocol should be declared and used as the factory return type; to
fix, update SQLAlchemyAdapter to "class SQLAlchemyAdapter(Adapter):" and change
its async method signatures (create, find_many, find_one, update, delete,
transaction) to match the Adapter Protocol (e.g., create(self, model: str, data:
Dict[str, Any], namespace: str = "datafn") -> None, etc.), or alternatively
define a new Protocol that matches the current method shapes and change
create_adapter's return annotation from Adapter to that new Protocol; ensure
create_adapter, SQLAlchemyAdapter, and all listed method names are updated
consistently.
3 issues found across 2 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py">
<violation number="1" location="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py:217">
P2: Add `strict=True` to this `zip()` call. If `table.primary_key.columns` and `inserted_primary_key` have different lengths, `zip()` silently truncates, which could produce an incomplete primary-key dict and cause an incorrect row lookup.</violation>
<violation number="2" location="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py:387">
P0: `update_many` will update **all records** in the table when `where` is `None` or empty. `_build_where_clause` returns `None` in that case, and SQLAlchemy's `.where(None)` is a no-op, producing an unfiltered `UPDATE`. Add a guard that raises before executing:
```python
if where_clause is None:
    raise ValueError("update_many requires a non-empty where clause to prevent updating all records")
```</violation>
<violation number="3" location="packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py:543">
P1: `SQLAlchemyAdapter.find_many` is the only public method not updated to the `Optional[params] = None, **kwargs` pattern. Callers passing keyword arguments will get a `TypeError` from the main adapter, even though the same call works on `SQLAlchemyTransactionAdapter.find_many`.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Actionable comments posted: 4
🧹 Nitpick comments (4)
packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py (4)
233-240: Use `raise ... from e` for proper exception chaining.

Ruff B904 flags these lines. While you pass `cause=e` to the error constructors, Python's standard exception chaining uses `raise X from e`, which sets `__cause__` and suppresses the implicit `__context__`. If `superfunctions.db` errors rely on the `cause` parameter, this may be intentional, but adding `from e` ensures proper traceback display.

♻️ Proposed fix

```diff
 except IntegrityError as e:
     if "unique" in str(e).lower() or "duplicate" in str(e).lower():
-        raise DuplicateKeyError(str(e), cause=e)
-    raise ConstraintViolationError(str(e), cause=e)
+        raise DuplicateKeyError(str(e), cause=e) from e
+    raise ConstraintViolationError(str(e), cause=e) from e
 except OperationalError as e:
-    raise ConnectionError(str(e), cause=e)
+    raise ConnectionError(str(e), cause=e) from e
 except Exception as e:
-    raise QueryFailedError(f"Create failed: {str(e)}", cause=e)
+    raise QueryFailedError(f"Create failed: {str(e)}", cause=e) from e
```

The same pattern applies to all exception handlers throughout the file (lines 261, 293, 323, 325, 349, 375, 377, 395, 413, 474, 490).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines 233 - 240, The except blocks currently re-raise custom exceptions with a cause= parameter but don't use Python exception chaining; change each raise to use "raise <ExceptionClass>(... ) from e" (e.g., replace raising DuplicateKeyError(str(e), cause=e) with raising DuplicateKeyError(str(e)) from e, and similarly for ConstraintViolationError, ConnectionError, QueryFailedError) so __cause__ is set and tracebacks chain properly; apply this same change to all similar handlers in the file that re-raise with cause=e (including other occurrences that raise DuplicateKeyError, ConstraintViolationError, ConnectionError, QueryFailedError).
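A tiny self-contained demonstration of why the chaining matters: with `from e`, the original error survives as `__cause__` and shows up in tracebacks. `QueryFailedError` here is a stand-in for the library's error class, not its real definition:

```python
from typing import Optional

class QueryFailedError(Exception):
    def __init__(self, message: str, cause: Optional[Exception] = None):
        super().__init__(message)
        self.cause = cause

def run_query() -> None:
    try:
        raise ValueError("duplicate key value violates unique constraint")
    except ValueError as e:
        # cause= keeps the library's own attribute; `from e` does the chaining,
        # so tracebacks print "The above exception was the direct cause of ..."
        raise QueryFailedError(f"Create failed: {e}", cause=e) from e

try:
    run_query()
except QueryFailedError as err:
    chained = type(err.__cause__).__name__
print(chained)  # ValueError
```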
513-528: Schema management methods are stubbed with TODOs.

The `get_schema_version`, `set_schema_version`, `validate_schema`, and `create_schema` methods are placeholders. This is acceptable for an initial release if schema management isn't a priority.

Do you want me to help implement these schema management methods, or would you prefer to track this as a follow-up issue?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines 513 - 528, The four schema-management methods (get_schema_version, set_schema_version, validate_schema, create_schema) are currently stubs; implement them so the adapter can persist and validate schemas: add a metadata table or use existing migrations table to store/retrieve namespace version in get_schema_version and set_schema_version (return int and update row respectively), implement validate_schema to check a TableSchema object for required fields/types and return a ValidationResult with valid=False and errors populated when issues are found, and implement create_schema to run the necessary DDL (CREATE TABLE/ALTER TABLE) for the provided CreateSchemaParams, call validate_schema first, persist the new version via set_schema_version on success, and return a SchemaCreation indicating success and any errors; use the existing method names (get_schema_version, set_schema_version, validate_schema, create_schema) to locate where to add this logic.
541-543: Synchronous `rollback()` call in async method.

The `rollback` method is declared `async` but performs a synchronous `self.connection.rollback()`. This is inconsistent with the async interface. For a sync engine, this works but blocks the event loop.

♻️ Consider making rollback non-blocking or documenting sync behavior

If this adapter is sync-only, document it clearly. Otherwise, wrap the call:

```python
async def rollback(self) -> None:
    """Rollback the transaction."""
    # Note: Synchronous call - blocks event loop with sync engine
    self.connection.rollback()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines 541 - 543, The async rollback method currently calls the blocking self.connection.rollback() directly; change it to avoid blocking the event loop by running the synchronous rollback in an executor and awaiting it (e.g. use asyncio.get_running_loop().run_in_executor(None, self.connection.rollback)) or, if this adapter is intentionally sync-only, make rollback a regular def (remove async) and update the docstring to document sync behavior; update the rollback signature and docstring accordingly so callers and linters reflect the chosen approach.
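The executor option can be sketched as follows. `FakeConnection` stands in for a SQLAlchemy `Connection`; the point is that `run_in_executor` moves the blocking call onto a worker thread so the event loop stays responsive:

```python
import asyncio

class FakeConnection:
    """Illustrative stand-in for a sync SQLAlchemy Connection."""
    def __init__(self) -> None:
        self.rolled_back = False

    def rollback(self) -> None:  # synchronous; would block the loop if called inline
        self.rolled_back = True

class TransactionAdapter:
    def __init__(self, connection: FakeConnection) -> None:
        self.connection = connection

    async def rollback(self) -> None:
        """Rollback the transaction without blocking the event loop."""
        loop = asyncio.get_running_loop()
        # Run the blocking sync call in the default thread-pool executor
        await loop.run_in_executor(None, self.connection.rollback)

async def main() -> bool:
    adapter = TransactionAdapter(FakeConnection())
    await adapter.rollback()
    return adapter.connection.rolled_back

rolled_back = asyncio.run(main())
print(rolled_back)  # True
```

On Python 3.9+, `asyncio.to_thread(self.connection.rollback)` is an equivalent, slightly shorter spelling.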
354-378: Consider bulk insert for `create_many` if returning individual rows isn't required.

Currently, `create_many` inserts rows one-by-one in a loop, which is O(n) round-trips. If the caller doesn't need the returned rows (or can accept just PKs), a single `conn.execute(insert(table), normalized.data)` would be significantly faster for large batches.

This is acceptable if the current behavior (returning full row data with select projection) is a requirement.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines 354 - 378, The create_many implementation currently loops calling _create for each row; change it to perform a bulk insert when the caller does not need full returned rows: in create_many, after normalizing params (CreateManyParams -> normalized), if normalized.select is None or only requests primary key(s), obtain the target Table for normalized.model, and use conn.execute(insert(table), normalized.data) (or insert(...).returning(pk) when PKs are desired and supported) to insert all rows in one call and return either the returned PKs or an empty list; otherwise keep the existing per-row loop that calls self._create for full select projections. Preserve the existing IntegrityError and generic Exception handling and ensure namespace/transaction semantics are unchanged.
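The suggested bulk path can be sketched like this: passing a list of dicts to `conn.execute(insert(table), data)` issues one executemany-style INSERT instead of a per-row loop. The `items` table and its columns are illustrative:

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine, insert, select

engine = create_engine("sqlite://")
metadata = MetaData()
items = Table(
    "items",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)
metadata.create_all(engine)

data = [{"name": "a"}, {"name": "b"}, {"name": "c"}]
with engine.begin() as conn:
    # One round-trip for the whole batch instead of len(data) separate INSERTs
    conn.execute(insert(items), data)

with engine.connect() as conn:
    inserted = conn.execute(select(items.c.name).order_by(items.c.id)).scalars().all()
print(inserted)  # ['a', 'b', 'c']
```

Note that the executemany form does not return row data on all dialects, which is exactly the trade-off the comment describes: keep the per-row loop when a full select projection is requested.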
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py`:
- Around line 74-114: In _build_where_clause add a clear fallback that raises an
exception when a WhereClause.operator is not handled so unrecognized operators
are not silently ignored; update the function (method _build_where_clause in
adapter.py) to validate each clause.operator and if it doesn't match any of the
handled Operator cases raise a ValueError (or custom error) including the
operator name and the clause.field/value to aid debugging so callers immediately
see unsupported operators instead of getting dropped conditions.
- Around line 428-437: When existing is truthy we call self.update(...) which
can raise NotFoundError if the record was deleted between find_one and update;
wrap the call to self.update (where UpdateParams is used and normalized is
referenced) in a try/except that catches NotFoundError and falls back to calling
self.create(...) with a CreateParams constructed from normalized.model,
normalized.create (or normalized.update as appropriate), normalized.select and
normalized.namespace so the operation succeeds even if the record was removed
before update.
- Around line 79-81: The loop building conditions uses getattr(table.c,
clause.field) which can raise AttributeError for invalid fields; update the
for-loop that iterates over where to validate that clause.field exists on
table.c before calling getattr, and if it does not, raise a QueryFailedError (or
wrap the AttributeError) with a clear message instead of letting AttributeError
bubble up so consumers get a meaningful error; reference the existing variables
and symbols: the for loop over where, table.c, clause.field, conditions, and
QueryFailedError to locate where to add the check and error handling.
- Around line 230-231: The current async method is using the synchronous context
manager with self.engine.begin(), which blocks the event loop; either convert
the adapter to use an AsyncEngine and async context managers or make the method
synchronous. Option A: create an AsyncEngine via create_async_engine and replace
with "async with self.async_engine.begin() as managed_conn" and ensure the
called helper _create accepts/uses the async connection and is awaited. Option
B: if you want synchronous behavior, remove the async/await on the method that
contains "with self.engine.begin()" and call self._create(params, managed_conn)
synchronously (no await), keeping _create as sync. Ensure references to
self.engine.begin(), _create, create_async_engine/AsyncEngine are updated
consistently.
---
Nitpick comments:
In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py`:
- Around line 233-240: The except blocks currently re-raise custom exceptions
with a cause= parameter but don't use Python exception chaining; change each
raise to use "raise <ExceptionClass>(... ) from e" (e.g., replace raising
DuplicateKeyError(str(e), cause=e) with raising DuplicateKeyError(str(e)) from
e, and similarly for ConstraintViolationError, ConnectionError,
QueryFailedError) so __cause__ is set and tracebacks chain properly; apply this
same change to all similar handlers in the file that re-raise with cause=e
(including other occurrences that raise DuplicateKeyError,
ConstraintViolationError, ConnectionError, QueryFailedError).
- Around line 513-528: The four schema-management methods (get_schema_version,
set_schema_version, validate_schema, create_schema) are currently stubs;
implement them so the adapter can persist and validate schemas: add a metadata
table or use existing migrations table to store/retrieve namespace version in
get_schema_version and set_schema_version (return int and update row
respectively), implement validate_schema to check a TableSchema object for
required fields/types and return a ValidationResult with valid=False and errors
populated when issues are found, and implement create_schema to run the
necessary DDL (CREATE TABLE/ALTER TABLE) for the provided CreateSchemaParams,
call validate_schema first, persist the new version via set_schema_version on
success, and return a SchemaCreation indicating success and any errors; use the
existing method names (get_schema_version, set_schema_version, validate_schema,
create_schema) to locate where to add this logic.
- Around line 541-543: The async rollback method currently calls the blocking
self.connection.rollback() directly; change it to avoid blocking the event loop
by running the synchronous rollback in an executor and awaiting it (e.g. use
asyncio.get_running_loop().run_in_executor(None, self.connection.rollback)) or,
if this adapter is intentionally sync-only, make rollback a regular def (remove
async) and update the docstring to document sync behavior; update the rollback
signature and docstring accordingly so callers and linters reflect the chosen
approach.
- Around line 354-378: The create_many implementation currently loops calling
_create for each row; change it to perform a bulk insert when the caller does
not need full returned rows: in create_many, after normalizing params
(CreateManyParams -> normalized), if normalized.select is None or only requests
primary key(s), obtain the target Table for normalized.model, and use
conn.execute(insert(table), normalized.data) (or insert(...).returning(pk) when
PKs are desired and supported) to insert all rows in one call and return either
the returned PKs or an empty list; otherwise keep the existing per-row loop that
calls self._create for full select projections. Preserve the existing
IntegrityError and generic Exception handling and ensure namespace/transaction
semantics are unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a1093074-1fb9-432b-91be-546eca914b24
📒 Files selected for processing (2)
- packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
- packages/python-sqlalchemy/tests/test_adapter.py
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/python-sqlalchemy/tests/test_adapter.py
| with self.engine.begin() as managed_conn: | ||
| return await self._create(params, managed_conn) |
🧩 Analysis chain
🌐 Web query:
Does SQLAlchemy 2.0 support mixing sync Engine with async/await patterns?
💡 Result:
No, SQLAlchemy 2.0 does not support directly mixing a synchronous Engine with async/await patterns. Async operations require an AsyncEngine created via create_async_engine, which uses async database drivers (e.g., asyncpg, aiosqlite). However, SQLAlchemy provides mechanisms to integrate synchronous code within async contexts and vice versa:
- AsyncEngine exposes a .sync_engine attribute, a synchronous Engine for running sync operations via run_sync.
- AsyncSession exposes .sync_session for attaching sync event listeners.
- Use AsyncConnection.run_sync or AsyncSession.run_sync to execute synchronous functions (e.g., metadata.create_all) in an async context.
- Example: await conn.run_sync(Base.metadata.create_all(bind=conn.sync_connection))

Directly passing a sync Engine to async_sessionmaker or AsyncSession results in type errors or runtime failures, as confirmed by official docs and issues. Use async_sessionmaker(bind=async_engine) for proper async ORM usage. All examples in SQLAlchemy 2.0 docs use AsyncEngine with AsyncSession for async/await.
Citations:
- 1: https://www.sqlalchemy.org/docs/07/orm/extensions/asyncio.html
- 2: http://docs.sqlalchemy.org/en/latest/orm/extensions/asyncio.html
- 3: https://docs.sqlalchemy.org/en/20/_modules/examples/asyncio/async_orm.html
- 4: https://docs.sqlalchemy.org/en/stable/orm/extensions/asyncio.html
- 5: https://goldlapel.com/how-to/sqlalchemy-async-slower-than-sync
- 6: https://docs.advanced-alchemy.litestar.dev/latest/_modules/sqlalchemy/ext/asyncio/engine.html
- 7: https://docs.advanced-alchemy.litestar.dev/latest/_modules/sqlalchemy/ext/asyncio/session.html
- 8: type error with async sessions in sqlalchemy 2.0 sqlalchemy/sqlalchemy#9126
- 9: Argument "bind" to "sessionmaker" has incompatible type ... sqlalchemy/sqlalchemy#9713
🏁 Script executed:
# Get an overview of the adapter.py file structure
head -50 packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
Repository: 21nCo/super-functions
Length of output: 1322
🏁 Script executed:
# Find where self.engine is initialized and its type
rg "self\.engine\s*=" packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py -A 2 -B 2Repository: 21nCo/super-functions
Length of output: 260
🏁 Script executed:
# Look at the full _create method to understand the context better
sed -n '202,244p' packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py | cat -n
Repository: 21nCo/super-functions
Length of output: 2417
🏁 Script executed:
# Check if AsyncEngine is being used anywhere in the file
rg "AsyncEngine|create_async_engine|async with.*begin" packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.pyRepository: 21nCo/super-functions
Length of output: 47
🏁 Script executed:
# Search for the custom exception definitions in superfunctions package
rg "class (DuplicateKeyError|ConstraintViolationError|ConnectionError|QueryFailedError)" --type py -A 5Repository: 21nCo/super-functions
Length of output: 47
Mixing synchronous engine.begin() with await blocks the event loop.
The sync context manager with self.engine.begin() blocks execution while holding a connection. SQLAlchemy 2.0 does not support mixing synchronous Engine with async/await patterns—this requires AsyncEngine created via create_async_engine with async database drivers (e.g., asyncpg, aiosqlite).
Either convert to AsyncEngine with async with engine.begin(), or remove the async decorator and use synchronous method signatures.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines
230 - 231, The current async method is using the synchronous context manager
with self.engine.begin(), which blocks the event loop; either convert the
adapter to use an AsyncEngine and async context managers or make the method
synchronous. Option A: create an AsyncEngine via create_async_engine and replace
with "async with self.async_engine.begin() as managed_conn" and ensure the
called helper _create accepts/uses the async connection and is awaited. Option
B: if you want synchronous behavior, remove the async/await on the method that
contains "with self.engine.begin()" and call self._create(params, managed_conn)
synchronously (no await), keeping _create as sync. Ensure references to
self.engine.begin(), _create, create_async_engine/AsyncEngine are updated
consistently.
```python
if existing:
    return await self.update(
        UpdateParams(
            model=normalized.model,
            where=normalized.where,
            data=normalized.update,
            select=normalized.select,
            namespace=normalized.namespace,
        )
    )
```
Potential NotFoundError if record is deleted between find_one and update.
If existing is truthy but the record is deleted by another transaction before update executes, NotFoundError will propagate. Consider catching this and falling back to create:
🛡️ Proposed fix
```diff
 if existing:
-    return await self.update(
-        UpdateParams(
-            model=normalized.model,
-            where=normalized.where,
-            data=normalized.update,
-            select=normalized.select,
-            namespace=normalized.namespace,
-        )
-    )
+    try:
+        return await self.update(
+            UpdateParams(
+                model=normalized.model,
+                where=normalized.where,
+                data=normalized.update,
+                select=normalized.select,
+                namespace=normalized.namespace,
+            )
+        )
+    except NotFoundError:
+        pass  # Fall through to create
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines
428 - 437, When existing is truthy we call self.update(...) which can raise
NotFoundError if the record was deleted between find_one and update; wrap the
call to self.update (where UpdateParams is used and normalized is referenced) in
a try/except that catches NotFoundError and falls back to calling
self.create(...) with a CreateParams constructed from normalized.model,
normalized.create (or normalized.update as appropriate), normalized.select and
normalized.namespace so the operation succeeds even if the record was removed
before update.
| async def transaction(self, callback: Callable) -> Any: | ||
| """Execute operations within a transaction.""" | ||
| with self.engine.begin() as conn: | ||
| # Create transaction adapter | ||
| trx_adapter = SQLAlchemyTransactionAdapter(conn, self) | ||
| return await callback(trx_adapter) |
The callback parameter is typed as Callable but is being awaited. If a non-async callback function is passed, this will fail with "object NoneType can't be used in 'await' expression" or similar error.
```python
async def transaction(self, callback: Callable) -> Any:
    """Execute operations within a transaction."""
    with self.engine.begin() as conn:
        trx_adapter = SQLAlchemyTransactionAdapter(conn, self)
        result = callback(trx_adapter)  # Don't await if not async
        if hasattr(result, '__await__'):
            return await result
        return result
```

Or update the type hint to require an async callable:

```python
from typing import Awaitable

async def transaction(self, callback: Callable[[TransactionAdapter], Awaitable[Any]]) -> Any:
```

Suggested change:

```python
async def transaction(self, callback: Callable[..., Awaitable[Any]]) -> Any:
    """Execute operations within a transaction."""
    with self.engine.begin() as conn:
        # Create transaction adapter
        trx_adapter = SQLAlchemyTransactionAdapter(conn, self)
        return await callback(trx_adapter)
```
Spotted by Graphite
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 9d7e3107f5
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```python
except Exception as e:
    raise QueryFailedError(f"CreateMany failed: {str(e)}", cause=e)
```
Keep integrity errors typed in create_many
When create_many() hits a duplicate/constraint violation, _create() already converts DB exceptions into DuplicateKeyError/ConstraintViolationError, but this broad except Exception wraps those again as QueryFailedError. That prevents callers from handling expected integrity conflicts distinctly (for example retrying on duplicates or returning conflict-specific responses) and makes batch error handling inconsistent with create().
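The "re-raise typed errors before the broad handler" pattern described here can be sketched generically; the exception classes below are local stand-ins for the adapter's, and `create_one` is a hypothetical helper:

```python
class DuplicateKeyError(Exception): ...
class ConstraintViolationError(Exception): ...
class QueryFailedError(Exception): ...

def create_many(rows, create_one):
    created = []
    try:
        for row in rows:
            created.append(create_one(row))
    except (DuplicateKeyError, ConstraintViolationError):
        raise  # keep integrity conflicts typed for callers
    except Exception as e:
        raise QueryFailedError(f"CreateMany failed: {e}") from e
    return created

def create_one(row):
    # Simulates _create() raising a typed duplicate-key error
    if row.get("dup"):
        raise DuplicateKeyError("duplicate id")
    return row

try:
    create_many([{"id": 1}, {"id": 1, "dup": True}], create_one)
except DuplicateKeyError as err:
    caught = type(err).__name__
```

Callers can then branch on `DuplicateKeyError` for batch operations exactly as they do for `create()`.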
```python
trx_adapter = SQLAlchemyTransactionAdapter(conn, self)
return await callback(trx_adapter)
```
Expose full adapter ops on transaction adapter
transaction() passes SQLAlchemyTransactionAdapter to the callback, but that adapter only implements single-row CRUD methods. Callbacks that use batch/advanced methods available on the main adapter (such as create_many, update_many, delete_many, upsert, or count) will fail with AttributeError, which breaks transaction callbacks that rely on the shared adapter surface.
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py`:
- Around line 58-65: The Adapter currently advertises schema_management=True but
the related methods (validate_schema, create_schema, set_schema_version,
get_schema_version) are no-ops or failing; change the capability to
schema_management=False in AdapterCapabilities (and any other occurrences around
the similar block at lines ~518-532) until real implementations exist, or
alternatively implement the full schema management hooks: make validate_schema
perform real validation and return accurate valid/errors, implement
create_schema to actually create schema and return success, and have
set_schema_version/get_schema_version persist and return real versions; update
AdapterCapabilities only after those methods are correctly implemented.
- Around line 311-320: The single-record update path uses the original
where_clause which may match multiple rows; change it to target the specific
fetched row by obtaining its primary key from existing (via _fetch_one_by_clause
and the table PK helper) and reissue the UPDATE using
update(table).where(primary_key_clause) with params.data, or raise an error if
the table has no single-column primary key; ensure the subsequent fetch uses
_fetch_one_by_primary_key and fall back to _project_row as before. Apply the
same fix pattern to the single-record DELETE path (mirror the primary-key
targeting in the _delete flow) so both _fetch_one_by_clause,
update(table).where(...).values(...), and delete(table).where(...) are executed
against the reconstructed primary_key_clause rather than the original
where_clause.
- Around line 283-287: The code currently checks params.limit and params.offset
by truthiness, so limit=0 is treated like None and the LIMIT is omitted; update
the checks in the adapter (where params.limit and params.offset are used—e.g.,
in the query-building block in superfunctions_sqlalchemy.adapter) to use
explicit None checks: if params.limit is not None then apply
query.limit(params.limit), and similarly if params.offset is not None apply
query.offset(params.offset), leaving behavior unchanged for None but correctly
applying zero values.
- Around line 359-382: The create_many method is rewrapping adapter-specific
errors from _create (like DuplicateKeyError/ConstraintViolationError) into
QueryFailedError which loses duplicate-key/constraint semantics; modify
create_many to preserve those exceptions by checking for and re-raising existing
adapter exceptions (e.g., DuplicateKeyError, ConstraintViolationError) before
the broad except Exception handler, and only wrap unexpected exceptions as
QueryFailedError; keep the existing IntegrityError handling for raw DB errors
and continue using CreateParams/CreateManyParams and the conn loop as-is.
- Around line 80-84: The code is using getattr(table.c, clause.field) and
hasattr(table.c, clause.field) which is unsafe for user-controlled column names
(they may collide with ColumnCollection attributes); change these checks to use
membership and keyed access: replace hasattr(table.c, field) checks with "field
in table.c" and replace getattr(table.c, field) with table.c[field] for the
where clause handling in adapter.py (and apply the same keyed-lookup pattern to
the other occurrences referenced in the review: the uses around lines with
identifiers for the other checks at the same module — specifically update the
access at the spots currently around lines 142, 146, 162, and 280 to use
table.c[name] and membership testing rather than getattr/hasattr).
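The keyed-lookup and explicit-`None` patterns recommended in the prompts above can be sketched together; the table, column names, and helper names here are illustrative, not taken from the actual adapter:

```python
from sqlalchemy import (MetaData, Table, Column, Integer, String,
                        create_engine, select)

engine = create_engine("sqlite:///:memory:")
metadata = MetaData()
# A column literally named "keys" collides with ColumnCollection.keys()
users = Table("users", metadata,
              Column("id", Integer, primary_key=True),
              Column("keys", String))
metadata.create_all(engine)

def resolve_column(table, field):
    # Membership test + keyed access instead of hasattr/getattr:
    # getattr(users.c, "keys") would return the .keys() method, not the column
    if field not in table.c:
        raise ValueError(f"Unknown field in where clause: {field}")
    return table.c[field]

def apply_paging(query, limit=None, offset=None):
    # Explicit None checks so limit=0 / offset=0 are still applied
    if limit is not None:
        query = query.limit(limit)
    if offset is not None:
        query = query.offset(offset)
    return query

col = resolve_column(users, "keys")
with engine.connect() as conn:
    rows = conn.execute(apply_paging(select(users).where(col == "x"), limit=10)).all()
```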
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0321c45f-a060-47e1-b9f1-a2cd0d6713fd
📒 Files selected for processing (2)
- packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py
- packages/python-sqlalchemy/tests/test_adapter.py
✅ Files skipped from review due to trivial changes (1)
- packages/python-sqlalchemy/tests/test_adapter.py
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 98fe5ca275
```python
except Exception as e:
    raise QueryFailedError(f"Create failed: {str(e)}", cause=e)
```
Preserve DuplicateKeyError in create for upsert fallback
_create() calls itself recursively when no connection is provided, but the outer frame wraps any exception in QueryFailedError. That means an inner DuplicateKeyError/ConstraintViolationError is lost when create() is used normally, and upsert()'s except DuplicateKeyError retry path will never trigger on real duplicate inserts, causing upserts under contention to fail instead of updating.
```python
else:
    raise QueryFailedError(f"Unsupported operator in where clause: {clause.operator}")

return and_(*conditions) if len(conditions) > 1 else conditions[0]
```
Honor OR connectors when building where predicates
Multi-clause filters are always combined with and_(...), so any WhereClause entries intended to be OR-connected are silently executed as AND. This changes query semantics for find_*, update*, delete*, and count calls whenever callers rely on connector='OR', returning incorrect results rather than raising an error.
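A hedged sketch of connector-aware predicate building — the `WhereClause` shape here is an assumption modeled on the review text, not the package's actual dataclass:

```python
from dataclasses import dataclass
from sqlalchemy import and_, or_, column

@dataclass
class WhereClause:
    field_name: str
    value: object
    connector: str = "AND"  # how this clause joins the previous one

def build_predicate(clauses):
    # Fold clauses left to right, honoring each clause's connector
    predicate = None
    for clause in clauses:
        cond = column(clause.field_name) == clause.value
        if predicate is None:
            predicate = cond
        elif clause.connector.upper() == "OR":
            predicate = or_(predicate, cond)
        else:
            predicate = and_(predicate, cond)
    return predicate

pred = build_predicate([
    WhereClause("status", "active"),
    WhereClause("role", "admin", connector="OR"),
])
```

Left-to-right folding matches how most query builders interpret per-clause connectors; explicit grouping would need a nested clause structure.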
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: fd17484c19
```python
target_where_clause = self._build_primary_key_where_clause(table, existing) or where_clause
result = conn.execute(update(table).where(target_where_clause).values(**params.data))
```
Prevent single-row update from widening on tables without PKs
Do not fall back to the original where for update() when no primary key can be derived. In _update, if the reflected table has no primary key, _build_primary_key_where_clause(...) returns None, so target_where_clause becomes the original predicate and can update multiple rows even though update() is the single-record API. This can silently modify many records on legacy/no-PK tables (and the same pattern is used in _delete).
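The primary-key targeting (and refusal on PK-less tables) described here can be sketched as follows; the table and helper names are illustrative, not the adapter's actual code:

```python
from sqlalchemy import (MetaData, Table, Column, Integer, String,
                        and_, create_engine, insert, select, update)

engine = create_engine("sqlite:///:memory:")
metadata = MetaData()
users = Table("users", metadata,
              Column("id", Integer, primary_key=True),
              Column("status", String))
metadata.create_all(engine)

def pk_clause(table, row):
    # Build a predicate targeting exactly the fetched row's primary key;
    # refuse rather than widen when the table has no primary key
    pk_cols = list(table.primary_key.columns)
    if not pk_cols:
        raise ValueError(f"{table.name} has no primary key; refusing wide update")
    return and_(*[col == row[col.name] for col in pk_cols])

with engine.begin() as conn:
    conn.execute(insert(users), [{"id": 1, "status": "old"},
                                 {"id": 2, "status": "old"}])
    # Single-record update: fetch one matching row, then update by its PK only
    existing = conn.execute(select(users).where(users.c.status == "old")).mappings().first()
    conn.execute(update(users).where(pk_clause(users, existing)).values(status="new"))
    updated = conn.execute(select(users).where(users.c.status == "new")).scalars().all()
```

Even though two rows match the original predicate, only the one fetched row is updated.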
```python
for path in (PACKAGE_ROOT, PYTHON_CORE_ROOT):
    path_str = str(path)
    if path_str not in sys.path:
        sys.path.insert(0, path_str)
```
Avoid shadowing installed superfunctions package in tests
Inserting PYTHON_CORE_ROOT at the front of sys.path causes tests to import the local packages/python-core/superfunctions package first, but that package does not contain superfunctions.db. As a result, from superfunctions.db import ... fails during test collection once SQLAlchemy is available, so the package tests cannot run in a normal environment.
Actionable comments posted: 2
♻️ Duplicate comments (1)
packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py (1)
147-149: 🛠️ Refactor suggestion | 🟠 Major

Apply keyed lookup pattern here too. `getattr(table.c, field)` has the same collision risk with `ColumnCollection` method names. Use `table.c[field]` instead.

Proposed fix:

```diff
 def _apply_select(self, table: Table, params: FindOneParams | FindManyParams) -> Any:
-    columns = [getattr(table.c, field) for field in params.select] if params.select else [table]
+    columns = [table.c[field] for field in params.select] if params.select else [table]
     return select(*columns).select_from(table)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines 147 - 149, In _apply_select, avoid using getattr(table.c, field) because it can collide with ColumnCollection attribute names; update the selection logic inside the _apply_select method to index into the ColumnCollection using table.c[field] for each field in params.select (falling back to [table] when params.select is empty) so the correct column object is retrieved without name-collision risk.
🧹 Nitpick comments (4)
packages/python-sqlalchemy/tests/test_adapter.py (2)
28-40: Consider calling `adapter.initialize()` in the fixture.

The fixture creates the adapter but doesn't call `initialize()`. While this works for SQLite in-memory, it may mask initialization issues. Adding `await adapter.initialize()` would better mirror production usage and validate the connection test path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/python-sqlalchemy/tests/test_adapter.py` around lines 28 - 40, The fixture returns a created adapter but never runs its initialization path; change the adapter fixture to async (async def adapter():) so you can await adapter.initialize() after create_adapter(engine) (i.e., call adapter = create_adapter(engine); await adapter.initialize(); return adapter) to mirror production usage and surface any connection/init issues; reference the adapter fixture, create_adapter, and adapter.initialize when updating the test.
10-16: Fragile `sys.path` manipulation for test imports.

Manually inserting paths into `sys.path` is brittle and can cause import ordering issues. Consider installing the packages in editable mode (`pip install -e .`) or using pytest's `pythonpath` configuration in `pyproject.toml` instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/python-sqlalchemy/tests/test_adapter.py` around lines 10 - 16, Replace the fragile sys.path hacks (the PACKAGE_ROOT/PYTHON_CORE_ROOT insertion block that mutates sys.path) with a proper package import setup: remove the for-loop that touches sys.path in tests/test_adapter.py, and ensure tests run against an installed (editable) package or via pytest configuration instead — either add an editable install step (pip install -e .) in the test/CI setup or declare pythonpath via pytest config (pyproject.toml / [tool.pytest.ini_options]) so PACKAGE_ROOT and PYTHON_CORE_ROOT are resolved by normal imports rather than manipulating sys.path at runtime.

packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py (2)
258-265: Consider using `raise ... from e` for proper exception chaining.

Ruff B904 warns that re-raised exceptions should use `raise X from e` to set Python's `__cause__` attribute. While the adapter exceptions accept a `cause` parameter, using `from e` ensures standard traceback chaining.

Example fix:

```diff
 except IntegrityError as e:
     if "unique" in str(e).lower() or "duplicate" in str(e).lower():
-        raise DuplicateKeyError(str(e), cause=e)
-    raise ConstraintViolationError(str(e), cause=e)
+        raise DuplicateKeyError(str(e), cause=e) from e
+    raise ConstraintViolationError(str(e), cause=e) from e
 except OperationalError as e:
-    raise ConnectionError(str(e), cause=e)
+    raise ConnectionError(str(e), cause=e) from e
 except Exception as e:
-    raise QueryFailedError(f"Create failed: {str(e)}", cause=e)
+    raise QueryFailedError(f"Create failed: {str(e)}", cause=e) from e
```

Apply this pattern to all exception handlers flagged by static analysis.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines 258 - 265, The exception handlers in adapter.py (the except blocks catching IntegrityError, OperationalError, and generic Exception) should use Python exception chaining by raising the new adapter exceptions with "raise NewException(...) from e" in addition to (or instead of) passing the cause param; update the IntegrityError branch that currently raises DuplicateKeyError(str(e), cause=e) and ConstraintViolationError(str(e), cause=e) to raise DuplicateKeyError(str(e), cause=e) from e or ConstraintViolationError(...) from e as appropriate, change the OperationalError branch to raise ConnectionError(str(e), cause=e) from e, and change the generic except to raise QueryFailedError(f"Create failed: {str(e)}", cause=e) from e so that __cause__ is set and traceback chaining is preserved.
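For illustration, a self-contained sketch of the `raise ... from e` chaining pattern; the exception class here mirrors the adapter's naming but is defined locally as an assumption:

```python
class QueryFailedError(Exception):
    def __init__(self, message, cause=None):
        super().__init__(message)
        self.cause = cause

def run_query():
    raise ValueError("boom")  # stand-in for a driver-level error

try:
    try:
        run_query()
    except Exception as e:
        # "from e" sets __cause__ so tracebacks show the original error
        raise QueryFailedError(f"Create failed: {e}", cause=e) from e
except QueryFailedError as err:
    chained = err.__cause__
    same = err.__cause__ is err.cause
```

With `from e`, both the custom `cause` attribute and Python's built-in `__cause__` point at the original exception, so tooling that inspects either one sees the full chain.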
570-576: `commit()` no-op is fine, but `rollback()` may not behave as expected.

With the `engine.begin()` context manager, commit happens on successful exit and rollback happens on exception. The explicit `connection.rollback()` call here may not work if called after the context manager has already handled the transaction. Consider documenting this behavior or raising an error if called outside the transaction scope.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines 570 - 576, The rollback() implementation may incorrectly call self.connection.rollback() after an engine.begin() context manager has already committed/rolled back; update rollback() to check transaction scope before calling rollback (e.g., verify an active transaction flag or inspect self.connection.in_transaction() / self._transaction attribute) and raise a clear error (RuntimeError) if no active transaction exists, or alternatively document in the commit()/rollback() docstrings that commit is a no-op under engine.begin() and rollback must only be invoked while the transaction context is active; reference the commit() and rollback() methods and the use of engine.begin() in the surrounding code when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py`:
- Around line 298-301: Replace the attribute lookup using getattr(table.c,
order.field) with a keyed lookup table.c[order.field] inside the params.order_by
loop so column resolution uses the ColumnCollection API (i.e., change the line
in the loop that assigns column to use table.c[order.field]); keep the ternary
for desc() unchanged and, if desired, guard or catch KeyError around the lookup
to provide a clearer error message when the column name is invalid.
- Around line 151-155: Replace the hasattr/getattr approach in
_build_data_where_clause with keyed ColumnCollection lookup to avoid attribute
name collisions: iterate data.items(), use "if key in table.c" (or
table.c.get(key) existence check) and build conditions with table.c[key] ==
value (or table.c.get(key) == value), then return None if no conditions and use
and_(*conditions) or conditions[0] as before; update references to the
Table/ColumnCollection via table.c and the function _build_data_where_clause
accordingly.
---
Duplicate comments:
In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py`:
- Around line 147-149: In _apply_select, avoid using getattr(table.c, field)
because it can collide with ColumnCollection attribute names; update the
selection logic inside the _apply_select method to index into the
ColumnCollection using table.c[field] for each field in params.select (falling
back to [table] when params.select is empty) so the correct column object is
retrieved without name-collision risk.
---
Nitpick comments:
In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py`:
- Around line 258-265: The exception handlers in adapter.py (the except blocks
catching IntegrityError, OperationalError, and generic Exception) should use
Python exception chaining by raising the new adapter exceptions with "raise
NewException(...) from e" in addition to (or instead of) passing the cause
param; update the IntegrityError branch that currently raises
DuplicateKeyError(str(e), cause=e) and ConstraintViolationError(str(e), cause=e)
to raise DuplicateKeyError(str(e), cause=e) from e or
ConstraintViolationError(...) from e as appropriate, change the OperationalError
branch to raise ConnectionError(str(e), cause=e) from e, and change the generic
except to raise QueryFailedError(f"Create failed: {str(e)}", cause=e) from e so
that __cause__ is set and traceback chaining is preserved.
- Around line 570-576: The rollback() implementation may incorrectly call
self.connection.rollback() after an engine.begin() context manager has already
committed/rolled back; update rollback() to check transaction scope before
calling rollback (e.g., verify an active transaction flag or inspect
self.connection.in_transaction() / self._transaction attribute) and raise a
clear error (RuntimeError) if no active transaction exists, or alternatively
document in the commit()/rollback() docstrings that commit is a no-op under
engine.begin() and rollback must only be invoked while the transaction context
is active; reference the commit() and rollback() methods and the use of
engine.begin() in the surrounding code when making the change.
In `@packages/python-sqlalchemy/tests/test_adapter.py`:
- Around line 28-40: The fixture returns a created adapter but never runs its
initialization path; change the adapter fixture to async (async def adapter():)
so you can await adapter.initialize() after create_adapter(engine) (i.e., call
adapter = create_adapter(engine); await adapter.initialize(); return adapter) to
mirror production usage and surface any connection/init issues; reference the
adapter fixture, create_adapter, and adapter.initialize when updating the test.
- Around line 10-16: Replace the fragile sys.path hacks (the
PACKAGE_ROOT/PYTHON_CORE_ROOT insertion block that mutates sys.path) with a
proper package import setup: remove the for-loop that touches sys.path in
tests/test_adapter.py, and ensure tests run against an installed (editable)
package or via pytest configuration instead — either add an editable install
step (pip install -e .) in the test/CI setup or declare pythonpath via pytest
config (pyproject.toml / [tool.pytest.ini_options]) so PACKAGE_ROOT and
PYTHON_CORE_ROOT are resolved by normal imports rather than manipulating
sys.path at runtime.
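For reference, the pytest-config alternative mentioned above is a one-line ini option (available since pytest 7.0); the path below is illustrative for this repo layout, not verified against it:

```toml
[tool.pytest.ini_options]
pythonpath = ["packages/python-sqlalchemy"]
```

This prepends the listed directories to `sys.path` for the test session only, without runtime mutation inside the test module.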
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 1f4ad815ac
```python
if not hasattr(table.c, clause.field):
    raise QueryFailedError(f"Unknown field in where clause: {clause.field}")

column = getattr(table.c, clause.field)
```
Validate where fields by column key instead of attribute lookup
Using hasattr(table.c, clause.field)/getattr(table.c, clause.field) can resolve SQLAlchemy ColumnCollection methods (for example keys, items, values) instead of real columns, so valid columns with those names cannot be filtered correctly and some invalid field names bypass the intended "unknown field" error path. This causes incorrect predicates in find_*, update*, delete*, and count when field names collide with collection attributes.
```python
if inserted is None:
    inserted = self._fetch_one_by_clause(
        conn,
        table,
        self._build_data_where_clause(table, params.data),
```
Avoid ambiguous requery when create cannot read inserted PK
When inserted_primary_key is unavailable, _create falls back to querying by the inserted data values and returns the first match. If the table allows duplicate payloads (or lacks a PK), this can return a pre-existing row instead of the row just inserted, so callers may receive the wrong record identity after create/create_many.
No dependency changes detected. Learn more about Socket for GitHub.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: fc2dd3a0db
```python
except Exception as e:
    raise QueryFailedError(f"Update failed: {str(e)}", cause=e)
```
Preserve typed constraint errors in update()
The outer update() path opens a transaction and recursively calls _update, but then catches every exception and re-wraps it as QueryFailedError. In the normal call path (no explicit connection), an inner ConstraintViolationError from a unique/foreign-key conflict is therefore lost, so callers cannot reliably handle expected integrity conflicts the same way they can in other CRUD methods.
```python
except Exception as e:
    raise QueryFailedError(f"UpdateMany failed: {str(e)}", cause=e)
```
Map update_many integrity failures to constraint errors
update_many() currently converts all non-QueryFailedError exceptions into QueryFailedError, so database IntegrityError (for example, unique/index or FK violations during bulk updates) is not surfaced as ConstraintViolationError. This makes batch-update conflict handling inconsistent with single-row update() semantics and prevents callers from distinguishing expected constraint failures from generic query failures.
Actionable comments posted: 1
🧹 Nitpick comments (2)
packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py (2)
597-615: Return type annotation `Adapter` may cause type checker warnings.

`create_adapter` returns `SQLAlchemyAdapter` but is annotated to return `Adapter`. Since `SQLAlchemyAdapter` doesn't explicitly inherit from `Adapter`, type checkers may flag this. Consider updating the return type to `SQLAlchemyAdapter` or making `SQLAlchemyAdapter` conform to the `Adapter` protocol.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines 597 - 615, The function create_adapter is annotated to return Adapter but actually returns SQLAlchemyAdapter, which can trigger type-checker warnings; either change the return annotation on create_adapter to SQLAlchemyAdapter or update SQLAlchemyAdapter to explicitly inherit from or implement the Adapter protocol/class (i.e., make SQLAlchemyAdapter subclass Adapter or satisfy its interface), and adjust imports/type hints accordingly so the declared return type matches the concrete SQLAlchemyAdapter type.
261-268: Use exception chaining with `from e` for proper tracebacks.

While `cause=e` is passed to the custom exception constructors, Python's standard exception chaining via `raise ... from e` sets `__cause__` for proper traceback propagation.

Proposed fix:

```diff
 except IntegrityError as e:
     if "unique" in str(e).lower() or "duplicate" in str(e).lower():
-        raise DuplicateKeyError(str(e), cause=e)
-    raise ConstraintViolationError(str(e), cause=e)
+        raise DuplicateKeyError(str(e), cause=e) from e
+    raise ConstraintViolationError(str(e), cause=e) from e
 except OperationalError as e:
-    raise ConnectionError(str(e), cause=e)
+    raise ConnectionError(str(e), cause=e) from e
 except Exception as e:
-    raise QueryFailedError(f"Create failed: {str(e)}", cause=e)
+    raise QueryFailedError(f"Create failed: {str(e)}", cause=e) from e
```

This pattern should be applied consistently across all exception handlers in the file (lines 289, 321, 352, 354, 382, 410, 412, 430, 448, 509, 526).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py` around lines 261 - 268, The except blocks use custom exceptions like DuplicateKeyError, ConstraintViolationError, ConnectionError, and QueryFailedError but only pass the original exception via a cause kwarg; change each raise to use Python exception chaining (raise DuplicateKeyError(str(e), cause=e) from e, etc.) for IntegrityError, OperationalError and the generic Exception handlers (use "from e" with the same arguments), and apply this same "raise ... from e" pattern consistently to all similar except blocks in this file so tracebacks correctly show the original exception.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py`:
- Around line 341-344: The refresh after primary-key update uses the old PKs
from `existing` so `_fetch_one_by_primary_key(conn, table, existing,
params.select)` can miss the row when `params.data` contains PK changes; change
the call to pass merged values (e.g., merged = {**existing, **params.data}) so
`_fetch_one_by_primary_key` uses the new PK values, and keep the fallback
`self._project_row({**existing, **params.data}, params.select)` unchanged.
---
Nitpick comments:
In `@packages/python-sqlalchemy/superfunctions_sqlalchemy/adapter.py`:
- Around line 597-615: The function create_adapter is annotated to return
Adapter but actually returns SQLAlchemyAdapter, which can trigger type-checker
warnings; either change the return annotation on create_adapter to
SQLAlchemyAdapter or update SQLAlchemyAdapter to explicitly inherit from or
implement the Adapter protocol/class (i.e., make SQLAlchemyAdapter subclass
Adapter or satisfy its interface), and adjust imports/type hints accordingly so
the declared return type matches the concrete SQLAlchemyAdapter type.
- Around line 261-268: The except blocks use custom exceptions like
DuplicateKeyError, ConstraintViolationError, ConnectionError, and
QueryFailedError but only pass the original exception via a cause kwarg; change
each raise to use Python exception chaining (raise DuplicateKeyError(str(e),
cause=e) from e, etc.) for IntegrityError, OperationalError and the generic
Exception handlers (use "from e" with the same arguments), and apply this same
"raise ... from e" pattern consistently to all similar except blocks in this
file so tracebacks correctly show the original exception.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 19572029cf
```python
async def commit(self) -> None:
    """Commit handled by context manager."""
    pass
```
Implement transaction adapter commit behavior
SQLAlchemyTransactionAdapter.commit() is currently a no-op, so callbacks that explicitly call await trx.commit() do not actually finalize the transaction at that point. This can produce incorrect outcomes when code expects an early commit boundary (for example, commit then later raise/catch errors), because all writes still depend on the outer context manager exit and may be rolled back unexpectedly.
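One way to make an explicit commit boundary real is to manage the transaction object directly instead of relying on `engine.begin()`'s exit. A sketch under assumed names, not the adapter's actual code (methods are sync here for brevity):

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///:memory:")

class TransactionHandle:
    """Wraps a connection so commit() really ends the current transaction."""
    def __init__(self, connection):
        self.connection = connection
        self._trx = connection.begin()

    def commit(self):
        self._trx.commit()                   # finalize writes now
        self._trx = self.connection.begin()  # reopen for further work

    def rollback(self):
        self._trx.rollback()
        self._trx = self.connection.begin()

with engine.connect() as conn:
    handle = TransactionHandle(conn)
    conn.execute(text("CREATE TABLE t (x INTEGER)"))
    conn.execute(text("INSERT INTO t VALUES (1)"))
    handle.commit()    # row 1 is durable from here on
    conn.execute(text("INSERT INTO t VALUES (2)"))
    handle.rollback()  # row 2 is discarded
    count = conn.execute(text("SELECT COUNT(*) FROM t")).scalar_one()
```

Because the handle owns the `Transaction` object, an early `commit()` genuinely persists prior writes even if a later step rolls back.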
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 5094bc0688
    except Exception as e:
        raise QueryFailedError(f"Delete failed: {str(e)}", cause=e)
Map delete FK violations to ConstraintViolationError
When deleting a row that is still referenced by a foreign key (e.g., ON DELETE RESTRICT), SQLAlchemy raises IntegrityError, but _delete currently falls through to the broad except Exception and rethrows QueryFailedError. That makes delete conflict handling inconsistent with create/update and prevents callers from distinguishing expected constraint conflicts from generic query failures.
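An ordered except ladder along those lines might look like this (all exception classes here are local stand-ins: the real driver errors are sqlalchemy.exc.IntegrityError / OperationalError, and the typed errors come from superfunctions.db):

```python
class IntegrityError(Exception): pass      # stand-in for sqlalchemy.exc.IntegrityError
class OperationalError(Exception): pass    # stand-in for sqlalchemy.exc.OperationalError

class ConstraintViolationError(Exception): pass
class RetryableConnectionError(Exception): pass   # named to avoid shadowing the builtin
class QueryFailedError(Exception): pass


def run_delete(execute):
    try:
        return execute()
    except IntegrityError as e:
        # FK violations (e.g. ON DELETE RESTRICT) become a typed conflict
        raise ConstraintViolationError(f"Delete conflict: {e}") from e
    except OperationalError as e:
        # transient connectivity failures stay distinguishable and retryable
        raise RetryableConnectionError(f"Connection failed: {e}") from e
    except Exception as e:
        raise QueryFailedError(f"Delete failed: {e}") from e
```

Ordering matters: the specific driver errors must be caught before the broad `except Exception`, or every failure collapses into `QueryFailedError`.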
    except Exception as e:
        raise QueryFailedError(f"DeleteMany failed: {str(e)}", cause=e)
Preserve typed constraint errors in delete_many
delete_many also wraps all non-QueryFailedError exceptions as QueryFailedError, so batch deletes that hit referential-integrity violations lose the ConstraintViolationError signal. In workloads that bulk-delete parent records, this prevents reliable conflict-specific handling and diverges from the typed error behavior already implemented in other mutating methods.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 059011b49c
    except Exception as e:
        raise QueryFailedError(f"Update failed: {str(e)}", cause=e)
Map update connection failures to ConnectionError
_update() never catches OperationalError, so transient DB/connectivity failures during update() fall into the generic handler and are rethrown as QueryFailedError. This breaks error-type consistency with other mutating methods (create, delete, update_many, delete_many) and prevents callers from handling retryable connection failures via ConnectionError.
    except Exception as e:
        raise QueryFailedError(f"CreateMany failed: {str(e)}", cause=e)
Preserve connection error typing in create_many
create_many() does not have an except OperationalError branch, so failures establishing/starting the transaction at engine.begin() are wrapped by the generic QueryFailedError path instead of surfacing as ConnectionError. In outage/pool-exhaustion scenarios this makes batch-create error handling inconsistent with create() and other write methods.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 7fab47de7e
    self.version = "0.1.0"
    self.capabilities = AdapterCapabilities(
        transactions=True,
        nested_transactions=True,
Mark nested_transactions unsupported until savepoints exist
Set nested_transactions to False (or implement savepoint-based nesting) because this adapter currently does not provide a nested transaction path: transaction() always opens a new top-level engine.begin() and the transaction adapter exposes no nested transaction() API. Callers that branch on capabilities.nested_transactions can incorrectly assume inner work shares the same atomic boundary, which can lead to unexpected commit/rollback behavior.
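If nesting were implemented, a savepoint-based path could look like this (a sketch: begin_nested() is SQLAlchemy's SAVEPOINT API on an open Connection, and the surrounding adapter wiring is assumed):

```python
class NestedTransactionAdapter:
    """Sketch: a nested transaction() issues a SAVEPOINT on the existing
    connection instead of opening a new top-level engine.begin()."""

    def __init__(self, connection):
        self.connection = connection

    def transaction(self, callback):
        # SQLAlchemy's Connection.begin_nested() emits SAVEPOINT
        savepoint = self.connection.begin_nested()
        try:
            result = callback(self)
            savepoint.commit()      # RELEASE SAVEPOINT
            return result
        except Exception:
            savepoint.rollback()    # ROLLBACK TO SAVEPOINT
            raise
```

A failing inner callback then rolls back only to the savepoint, leaving the outer transaction's work intact.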
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 4fa39d0589
    except Exception as e:
        raise QueryFailedError(f"Delete failed: {str(e)}", cause=e)
Re-raise ConnectionError from nested delete calls
In the normal delete() path (no explicit connection), _delete calls itself inside engine.begin(). If the inner call converts an OperationalError to ConnectionError, the outer frame does not preserve it and the broad except Exception re-wraps it as QueryFailedError, so callers lose the retryable connection-failure signal during outages or dropped connections.
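A simple guard for the outer frame is to re-raise already-typed errors before the broad handler (the typed error classes here are local stand-ins for the adapter's own):

```python
class RetryableConnectionError(Exception): pass   # stand-in typed errors
class ConstraintViolationError(Exception): pass
class QueryFailedError(Exception): pass


def outer_delete(inner):
    try:
        return inner()
    except (RetryableConnectionError, ConstraintViolationError, QueryFailedError):
        # already typed by the inner frame: pass through unchanged
        raise
    except Exception as e:
        raise QueryFailedError(f"Delete failed: {e}") from e
```

The bare `raise` preserves the inner frame's typed error instead of double-wrapping it.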
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 18fd5013d3
    if result.rowcount == 0:
        raise NotFoundError("Record not found for update")
Stop treating no-op UPDATE as missing row
In _update, the row is already confirmed to exist via _fetch_one_by_clause, but result.rowcount == 0 is still treated as not found. On MySQL-style drivers that report only changed rows, an idempotent update (setting a field to its current value) returns 0, so this path incorrectly raises NotFoundError for an existing record.
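A sketch of the suggested semantics (NotFoundError is a stand-in for the adapter's typed error; the point is that existence was already established by the earlier read, so rowcount is not re-checked):

```python
class NotFoundError(Exception):
    """Stand-in for the adapter's typed not-found error."""


def finalize_update(existing_row, rowcount, merged_row):
    # Existence was decided by the earlier _fetch_one_by_clause read
    if existing_row is None:
        raise NotFoundError("Record not found for update")
    # rowcount == 0 on MySQL-style drivers can just mean "no columns changed",
    # so a zero rowcount for an existing row is treated as a successful no-op
    return merged_row
```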
    except Exception as e:
        raise QueryFailedError(f"FindOne failed: {str(e)}", cause=e)
Preserve ConnectionError typing on read failures
The _find_one read path wraps all exceptions as QueryFailedError, so transient connectivity failures (OperationalError) are no longer surfaced as ConnectionError. That makes retry/error-handling logic inconsistent with write methods that preserve ConnectionError, and the same broad-except pattern is repeated in _find_many and count.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: fd801174bf
    self.capabilities = AdapterCapabilities(
        transactions=True,
        nested_transactions=False,
        joins=True,
Advertise joins only when join queries are implemented
capabilities currently reports joins=True, but the adapter never reads a join configuration and always builds find_one/find_many queries from a single table via _apply_select(...). In environments that branch on adapter.capabilities.joins, this will cause callers to run join-dependent flows and get silently incorrect (unjoined) result sets rather than a clear unsupported-path error. Mark joins unsupported until join handling is actually wired into the query builders.
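The intent is simply that the flags mirror what the query builders actually do; assuming a dataclass-style capabilities object, the tightened flags would read:

```python
from dataclasses import dataclass


@dataclass
class AdapterCapabilities:
    """Stand-in for the real capabilities object."""
    transactions: bool = True
    nested_transactions: bool = False   # no savepoint path wired up yet
    joins: bool = False                 # find_one/find_many are single-table only


caps = AdapterCapabilities()
```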
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 05c38e3e25
    refreshed = self._fetch_one_by_primary_key(conn, table, updated_row, params.select)
    if refreshed is not None:
        return refreshed
    return self._project_row(updated_row, params.select)
Fail update when post-write refresh cannot find target row
After the UPDATE executes, the method returns a locally merged updated_row whenever the re-read by primary key returns None. In a concurrent delete race (row read, then removed by another transaction before this update statement runs), this path reports success with synthetic data even though no row exists anymore. For primary-key tables, refreshed is None should be treated as a missing-row failure rather than returning the merged input.
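The fix could be as small as distinguishing "no primary key to re-read by" from "primary-key re-read came back empty" (NotFoundError is a stand-in for the adapter's typed error):

```python
class NotFoundError(Exception):
    """Stand-in for the adapter's typed not-found error."""


def resolve_updated_row(refreshed, has_primary_key, merged_row):
    if refreshed is not None:
        return refreshed
    if has_primary_key:
        # The row was deleted between the initial read and the UPDATE:
        # fail loudly instead of returning synthetic merged data
        raise NotFoundError("Record disappeared during update")
    # No primary key available to re-read by: fall back to the merged row
    return merged_row
```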
Merge activity
## Summary

- add the missing shared SQLAlchemy adapter package to `origin/dev`
- bring over the Python adapter package as a standalone PR so Python dependency review stays focused

## Validation

- inspected the copied package contents in a clean worktree created from `origin/dev`
- did not run a reliable Python package validation pass in this worktree

## Notes

- no `.conduct` changes included
- opened directly against `origin/dev` with `GH_CONFIG_DIR=~/.config/gh-other`

## Summary by Sourcery

Add a new Python SQLAlchemy adapter package integrating superfunctions.db with SQLAlchemy-based databases.

New Features:

- Introduce a SQLAlchemy-backed Adapter implementation providing CRUD, batch, counting, transactions, health checks, and basic schema hooks for superfunctions.db.
- Expose a create_adapter factory and package initializer to simplify constructing the SQLAlchemy adapter from a SQLAlchemy Engine or AsyncEngine.

Enhancements:

- Document installation, supported databases, and example usage for the new superfunctions-sqlalchemy package in a dedicated README.
- Define packaging metadata, optional DB-specific extras, and development tooling configuration for the superfunctions-sqlalchemy Python package via pyproject.toml.

## Summary by cubic

Adds the shared Python SQLAlchemy adapter `superfunctions-sqlalchemy` for `superfunctions.db`, providing CRUD, batch ops, upserts, counts, and transactions (sync/async callbacks) on Postgres/MySQL/SQLite using sync SQLAlchemy engines. Tightens capability flags and single-row semantics, validates `where` clauses, maps SQL errors to typed errors, and fixes edge cases per SF-30.

- **New Features**
  - New adapter package (`packages/python-sqlalchemy/`) exporting `create_adapter`/`SQLAlchemyAdapter`, with init/health/close and optional `namespace_prefix`.
  - Full CRUD, `create_many`/`upsert`/`count`, query builders (`where` with AND/OR, `order_by`/`limit`/`offset`), transactions that accept sync or async callbacks, and APIs that take params objects or kwargs.
  - Explicit column-key lookups for field resolution, including reserved names.
  - Capability metadata tightened: transactions/batch ops enabled; nested transactions, joins, full-text search, schema management, and migrations disabled.
- **Bug Fixes**
  - Safe single-row update/delete: find by `where`, target via primary key or fall back to existing row values; reselect by primary key when filters change; treat no-op updates as success.
  - Strict `where` validation and OR support: require `where` for `update_many`/`delete_many`, reject unknown fields with clear errors, and honor OR connectors.
  - Preserve typed errors and bulk semantics: map duplicates/constraints to `DuplicateKeyError`/`ConstraintViolationError`, surface `ConnectionError` on drops, run `create_many` in a transaction and return inserted rows, and retry update on upsert duplicate.
  - Fix delete path connection typing to avoid type issues during execution.

Written for commit 05c38e3. Summary will update on new commits.

## Summary by CodeRabbit

- **New Features**
  - Added a new SQLAlchemy adapter package offering async-capable CRUD, batch operations, upserts, counting, transactions, and optional namespacing for SQL databases.
- **Documentation**
  - Included comprehensive package docs with installation, sync/async usage examples, configuration, and feature checklist.
- **Tests**
  - Added tests covering create/create_many, update behavior, upsert retry, find_many filtering, transactions, and error handling.
Force-pushed from 05c38e3 to 8c9d511 (Compare)


