
feat: use of engine.yaml in workflow #62

Merged
bmichele merged 8 commits into main from build-command-to-output-yaml
Apr 16, 2026

Conversation


@teppohudsson teppohudsson commented Apr 2, 2026

Summary

This PR centers the Skene workflow on skene/engine.yaml: build merges LLM-generated engine deltas, updates the feature registry, and emits Supabase trigger migrations, while push only ships pre-built artifacts upstream. Status validates the engine document and checks that action-backed features have matching trigger/function SQL in migrations.

Highlights:

  • Engine module — YAML models, merge-by-key semantics, LLM-driven engine deltas, and DB schema discovery from skene/ or skene-context (schema.yaml / schema.md) with a clear warning when schema is missing; shared LLM progress feedback.
  • Build — Writes/merges skene/engine.yaml, upserts skene-context/feature-registry.json, generates trigger SQL (and base schema migration as before). Destination menu fourth option is “I build it myself”; numbered fallback uses “Choose 1–4”.
  • Push — Deploys engine YAML + Feature registry + latest trigger migration only (no migration generation on push). Trigger file naming is supabase/migrations/*_skene_triggers.sql, with find_trigger_migration and fallbacks for legacy *skene_trigger* / *skene_telemetry* names.
  • Status — Validates engine shape and alignment of action features with skene_growth_trg_* / skene_growth_fn_* in migration SQL. The Detail column shows the latest matching migration file and (+N) when the same trigger appears in additional files.
  • Analyze / plan — Use engine-oriented context instead of growth-loop JSON artifacts for the parts refactored in this branch.
  • Skene provider — Normalize base URLs and surface concise HTTP errors; default Skene chat API to production unless base_url is set via CLI, SKENE_BASE_URL, or project .skene.config (fixes push flows that depended on the Skene provider).
  • Docs, Cursor plugin, and tests — Aligned with the above; PyYAML added where needed.
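The merge-by-key semantics mentioned above can be sketched as follows. This is a minimal illustration, not the engine module's actual implementation; the `key` field name and the list-of-dicts shape are assumptions:

```python
def merge_by_key(existing: list[dict], incoming: list[dict], key: str = "key") -> list[dict]:
    """Merge two lists of records: entries with a matching key are updated
    (incoming fields win), new keys are appended, original order is kept."""
    merged = {item[key]: dict(item) for item in existing}
    for item in incoming:
        if item[key] in merged:
            merged[item[key]].update(item)
        else:
            merged[item[key]] = dict(item)
    return list(merged.values())
```

Under a scheme like this, an LLM-generated delta only has to carry the changed entries; features it does not mention survive the merge untouched.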

Test plan

  • uv run ruff check . and uv run ruff format .
  • uv run pytest (or CI equivalent)
  • Build: run uvx skene build on a fixture/small project; confirm skene/engine.yaml, feature registry update, *_skene_triggers.sql (and schema migration) under supabase/migrations/.
  • Status: uvx skene status with and without matching migrations; confirm Detail shows Found in: <file> and (+N) when duplicates exist.
  • Push: with upstream configured, confirm push packages engine + latest eligible trigger migration.
  • Skene provider: push or chat path without custom base_url hits production; override via env/config still works.

Checklist

  • PR is scoped to the engine.yaml workflow refactor (no unrelated churn).
  • Tests updated for behavioral changes (engine, migrations, upstream package, validators).
  • Documentation and Cursor plugin updated for CLI behavior, filenames, and build vs push.
  • Linting and tests pass locally (confirm in your environment before merge).

- Add skene/engine module: YAML models, merge-by-key, LLM engine deltas, and
  schema discovery (skene/ or skene-context schema.yaml|schema.md) with a
  clear warning when missing; shared LLM progress indicator.
- build: merge engine.yaml, update feature registry, generate trigger SQL;
  destination menu fourth option is "I build it myself"; numbered fallback
  prompt uses "Choose 1-4".
- push: deploy engine + trigger migration only; upstream packaging prefers
  *_skene_trigger.sql with legacy *_skene_telemetry.sql fallback.
- status: validate engine.yaml and action trigger alignment in migrations.
- analyze/plan: use engine context instead of growth-loop JSON artifacts.
- Skene provider: normalize base URLs and concise HTTP errors; default skene
  chat API to production unless base_url is set via CLI, SKENE_BASE_URL, or
  project .skene.config.
- Docs and tests updated; add PyYAML dependency.

Made-with: Cursor
- Write build telemetry SQL to *_skene_triggers.sql; add find_trigger_migration
  in push.py and use it from upstream + skene push (keep skene_trigger/skene_telemetry fallbacks).
- Engine status: Found in line shows latest migration file only, with (+N) for other matches.

Made-with: Cursor
- Add feature_registry_json from configured output_dir; registry_path_for_project helper.
- Rename package field telemetry_sql to trigger_sql for upstream deploy payload.
- CLI warns when registry file missing; success message lists JSON keys.
- Update docs, cursor plugin, and tests.

Made-with: Cursor
Use build_push_manifest inside upstream push_to_upstream to keep manifest construction in one place, and remove unused trigger SQL parameters to reduce dead plumbing without changing loops_count behavior.

Made-with: Cursor
@teppohudsson teppohudsson marked this pull request as ready for review April 7, 2026 10:18
@teppohudsson teppohudsson requested a review from bmichele as a code owner April 7, 2026 10:18

bmichele commented Apr 7, 2026

Here is a review from Claude Code. I used review guidelines built according to the current code style and structure (I will add the guidelines to the repo soon).

Can you please check the following items @teppohudsson ?

Code Review Feedback

Nice work on the engine.yaml workflow — the storage models, merge-by-key semantics, and validator are clean. A few things to address:


1. Sync file I/O in async code path

_load_schema_context in engine/generator.py uses schema_path.read_text() (synchronous) but is called from the async generate_engine_delta_with_llm, so it blocks the event loop. It should use aiofiles to stay consistent with the rest of the async pipeline.

2. Duplicate progress indicator

The PR adds progress.py with show_progress_indicator() but analysis_helpers.py still has its own identical _show_progress_indicator(). Now that the shared module exists, analysis_helpers.py should import from progress.py rather than keeping a local copy.

3. os.environ.get("SKENE_BASE_URL") in _resolve_base_url

The new _resolve_base_url in cli/app.py reads the env var directly. The Config class already handles env var merging — could this go through Config instead of special-casing the env read?
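A sketch of the suggested direction. The real Config class's API is not shown in this PR, so the attribute names, the precedence order, and the default URL below are all assumptions made for illustration:

```python
from __future__ import annotations

import os
from dataclasses import dataclass


@dataclass
class Config:
    """Illustrative stand-in for the project's Config class: one place that
    merges CLI flag > environment > project file, so callers such as
    _resolve_base_url never read os.environ directly."""

    cli_base_url: str | None = None
    file_base_url: str | None = None

    @property
    def base_url(self) -> str:
        return (
            self.cli_base_url
            or os.environ.get("SKENE_BASE_URL")
            or self.file_base_url
            or "https://api.skene.example"  # hypothetical production default
        )
```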

4. Test docstrings

New test methods in test_engine_storage.py, test_engine_generator.py, test_engine_validator.py, test_resolve_cli_config.py, and test_skene_provider.py are missing one-line docstrings describing what they verify.

5. Test class grouping

test_engine_generator.py and test_resolve_cli_config.py use bare test functions. The rest of the test suite uses class TestComponentName: grouping — would be good to keep these consistent.
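For illustration, the bare functions could be grouped like this. The class and method names are hypothetical, and the toy resolver exists only to make the example self-contained:

```python
from __future__ import annotations


def resolve(env: str | None, file: str | None) -> str | None:
    """Toy resolver used only to make the example runnable."""
    return env or file


class TestResolveCliConfig:
    """Bare test functions grouped under a class, matching the suite's
    class TestComponentName: convention."""

    def test_env_wins_over_file(self) -> None:
        """The environment value should take precedence over the file value."""
        assert resolve("http://env", "http://file") == "http://env"

    def test_falls_back_to_file(self) -> None:
        """Without an environment value, the file value is used."""
        assert resolve(None, "http://file") == "http://file"
```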

6. Security tests for new file access

engine/storage.py introduces load_engine_document() and write_engine_document() which accept a Path from callers. engine_validator.py reads migration files via glob. Neither has tests for path traversal attempts (e.g. ../../../etc/passwd). Worth adding given the existing coverage in test_codebase/test_explorer.py.

7. No tests for progress.py

New module with no test coverage.

8. Pydantic field descriptions

The new engine models (EngineAction, EngineSubject, EngineFeature, EngineDocument) use Field() for defaults but don't include description=... on fields that end up in the serialized engine.yaml.
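The fix is mechanical. A sketch with Pydantic v2; the field names and description strings here are invented for illustration, the real models live in the engine module:

```python
from pydantic import BaseModel, Field


class EngineAction(BaseModel):
    """Sketch of an engine model whose fields carry descriptions, so the
    serialized engine.yaml and any generated JSON schema are self-documenting."""

    name: str = Field(default="", description="Stable identifier for the action.")
    source: str = Field(
        default="",
        description="DB event the action reacts to, as schema.table.operation.",
    )
```

With description= set, the text travels with the model: EngineAction.model_json_schema() exposes it per property.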


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: f8d7269bbd


Comment thread on src/skene/engine/storage.py (outdated):

    parsed = parse_source_to_db_event(feature.source)
    if not parsed:
        continue
    _, table, operation = parsed

P1: Preserve source schema when building trigger telemetry

engine_features_to_loop_definitions parses features[].source as schema.table.operation but discards the schema here, so downstream SQL generation only receives the table name and emits triggers on public.<table>. For actionable features on non-public schemas (e.g. auth.users.insert), skene build will generate migrations for the wrong table, causing trigger creation to fail or telemetry to never fire.
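A sketch of the fix the bot is pointing at: keep all three components instead of dropping the schema. The real parse_source_to_db_event may have a different return shape; this version only illustrates the qualification logic:

```python
from __future__ import annotations


def parse_source_to_db_event(source: str):
    """Split a features[].source string like 'auth.users.insert' into
    (schema, table, operation); return None when the shape is wrong."""
    parts = source.split(".")
    if len(parts) != 3:
        return None
    schema, table, operation = parts
    return schema, table, operation


def trigger_target(source: str) -> str | None:
    """Qualify the trigger target with the parsed schema instead of
    hard-coding public.<table>."""
    parsed = parse_source_to_db_event(source)
    if not parsed:
        return None
    schema, table, _operation = parsed
    return f"{schema}.{table}"
```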


…ne safety

Use aiofiles for schema loading in the async engine generator so the event loop
is not blocked. Deduplicate progress handling by routing analysis and growth-loop
generation through skene.progress. Track SKENE_BASE_URL via Config instead of
reading the env var in app.py. Add optional project_root checks for engine
load/write, harden migration file reads in the engine validator, and extend
tests (progress module, path traversal, symlink case). Add Pydantic field
descriptions for serialized engine models and align test docstrings and class
grouping with the rest of the suite.
Comment thread on tests/test_progress.py (fixed)
Include schema from features[].source in loop telemetry and qualify CREATE
TRIGGER targets with schema.table instead of public.<table>. Align
trigger/function name slugs with existing public-only naming while prefixing
non-public schemas. Update event_type and trigger_events to schema.table.operation
and keep engine validator expectations in sync with generated migration names.
Comment thread on tests/test_progress.py:

    task = asyncio.create_task(progress.show_progress_indicator(stop_event))
    await first_print.wait()
    stop_event.set()
    await task
@teppohudsson
Contributor Author

@bmichele this should be ready now; I've gone through all the parts and fixed the review issues.

@bmichele
Contributor

I am on it!

@bmichele
Contributor

There is still plenty of AI slop, but I will close my eyes and merge it. I can't really get a good grasp of all the changes without a code walkthrough.


@bmichele bmichele left a comment


This PR is too big to get a grasp of all the changes without a code walkthrough. I reviewed it with AI; some bugs are now fixed, but I expect trouble, as there is still plenty of AI slop in there.
I will close my eyes and merge; we will fix issues as they come up.

@bmichele bmichele merged commit cb85d68 into main Apr 16, 2026
11 checks passed
