This page is for developers extending, testing, or maintaining IronEngine-RL. It complements the architecture and customization pages with workflow-oriented guidance.
When you change the framework, try to preserve these design goals:
- keep the runtime contracts explicit and stable
- keep safety enforcement outside the model whenever possible
- keep the default repository lightweight, with persistence as an opt-in plugin path
- prefer additive naming layers and metadata instead of breaking public runtime types
- keep profiles readable enough that users can inspect them without reading the full source tree
- `README.md` for the project overview and onboarding paths
- `docs/framework-architecture.md` for the system shape
- `docs/api-reference.md` for the main runtime APIs and extension contracts
- `docs/customization.md` for profile and plugin patterns
- `docs/examples-and-workflows.md` for concrete starting points
- `src/ironengine_rl/interfaces/` - core datamodels and contract definitions
- `src/ironengine_rl/framework/` - manifests, validation, compatibility, and runtime factories
- `src/ironengine_rl/core/` - orchestrator, repository, safety, and agent runtime helpers
- `src/ironengine_rl/inference/` - provider selection and trainable or prompt-driven wrappers
- `src/ironengine_rl/evaluations/` - tasks, metrics, and evaluation suite assembly
- `src/ironengine_rl/platforms/` - hardware and simulation platform adapters
- `src/ironengine_rl/plugins/` - plugin loader utilities
- `profiles/` - canonical reusable profiles for validation, tests, and scaffolding baselines
- `examples/` - runnable reference configurations for hardware and inference workflows
- `user_modules/examples/` - example plugin implementations grouped by capability
- `tests/` - regression coverage for manifests, profiles, plugin loading, and runtime behavior
Choose the right folder first:
- `user_modules/examples/inference/` for providers
- `user_modules/examples/agents/` for agents
- `user_modules/examples/metrics/` for metrics
- `user_modules/examples/safety/` for safety policies
- `user_modules/examples/repositories/` for repository integrations
- `user_modules/examples/update/` for update strategies
- `user_modules/examples/tasks/` for evaluation task builders
Then wire the module into a profile with a `custom_plugin` block and add at least one focused test showing that the profile validates and the plugin loads.
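As a rough sketch, a `custom_plugin` block in a profile could look like the following. The field names here (`capability`, `module_path`, `class_name`) are assumptions for illustration, not the confirmed schema; check `docs/customization.md` for the real shape:

```json
{
  "custom_plugin": {
    "capability": "inference",
    "module_path": "user_modules/examples/inference/my_provider.py",
    "class_name": "MyProvider"
  }
}
```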
A new example should usually include:
- a clear `hardware` or normalized runtime block
- an explicit `action_scheme` when phases or interfaces matter
- contracts for custom providers or tasks when the defaults are not enough
- a log directory under `logs/examples/...`
- a validation test in `tests/test_framework.py` or a nearby focused test file
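A focused validation test can start as plain structural checks on the profile before exercising the full validator. The sketch below inlines a stand-in profile so it is self-contained; in the repository you would load the example's actual `profile.json`, and the key names used here are assumptions, not the confirmed schema:

```python
import json
import unittest

# Inline stand-in for an example profile; in a real test you would read
# the example's profile.json from disk. Key names are hypothetical.
SAMPLE_PROFILE = json.loads("""
{
  "hardware": {"platform": "mock"},
  "action_scheme": {"phases": ["plan", "act"]},
  "logging": {"directory": "logs/examples/my_example"}
}
""")

class TestExampleProfile(unittest.TestCase):
    def test_profile_declares_expected_blocks(self):
        # The runtime block and action scheme should be explicit, not implied.
        for key in ("hardware", "action_scheme", "logging"):
            self.assertIn(key, SAMPLE_PROFILE)
```

Run it with `python -m unittest` against the test module before moving on to the broader suite.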
If you add a new concept to the framework surface, update all of the following when relevant:
- runtime datamodel or contract definitions
- manifest generation
- validation checks
- scaffold output if the concept is user-facing
- docs and at least one example profile
- tests that protect the new behavior
Use these commands from an active environment:
```shell
python -m ironengine_rl.validate --profile examples\hardware\armsmart\profile.mock.json --strict
python -m ironengine_rl.describe --profile profiles\framework_customizable\profile.json
python -m unittest tests.test_framework -v
```

When changing only one example or plugin family, prefer adding or running focused tests before broader runs.
A provider should return a valid InferenceResult even when optional dependencies are missing. A graceful fallback is often better than making the whole example unreadable or impossible to validate.
The complete ARMSmart PyTorch example follows this rule by using analytic fallback behavior when a weights file or Torch runtime is absent.
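The pattern is to probe for the optional dependency once at import time and keep a deterministic analytic path that works anywhere. This is a minimal sketch, not the ARMSmart implementation: `InferenceResult` and `infer` are stand-in names, and the real contract lives in `src/ironengine_rl/interfaces/`:

```python
from dataclasses import dataclass
from typing import List, Optional

try:  # optional dependency: its absence must not break validation
    import torch
except ImportError:
    torch = None

@dataclass
class InferenceResult:
    """Stand-in for the framework's real result contract."""
    actions: List[float]
    source: str  # recorded so logs show which path produced the actions

def infer(observation: List[float], weights_path: Optional[str] = None) -> InferenceResult:
    if torch is not None and weights_path is not None:
        # Weights-based path (omitted in this sketch): load the policy
        # network from weights_path and run it on the observation.
        pass
    # Analytic fallback: a deterministic proportional response keeps the
    # example runnable and validatable without Torch or a weights file.
    gain = 0.5
    return InferenceResult(actions=[gain * x for x in observation], source="analytic")
```

The fallback still returns a fully-formed result, so downstream validation and logging behave identically on machines without the optional dependency.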
Prompt-driven providers should treat repository notes, action-scheme metadata, and success history as context, but safety-critical enforcement must still remain in the framework safety layer.
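One way to read that division of labor: the provider folds context into the prompt, while anything that must never be violated stays in the safety layer after inference. A hedged sketch with hypothetical field names (`phases`, the note and history shapes are assumptions):

```python
from typing import Dict, List

def build_prompt(task: str, repo_notes: List[str],
                 action_scheme: Dict, history: List[str]) -> str:
    """Assemble repository notes, scheme metadata, and success history.

    Safety limits are deliberately NOT guaranteed here: anything in the
    prompt is only a hint to the model, and the framework safety layer
    still clips or rejects unsafe actions after inference.
    """
    lines = [f"Task: {task}"]
    lines.append("Known phases: " + ", ".join(action_scheme.get("phases", [])))
    if repo_notes:
        lines.append("Repository notes:")
        lines.extend(f"- {note}" for note in repo_notes)
    if history:
        lines.append("Recent successes:")
        lines.extend(f"- {item}" for item in history[-3:])  # keep the prompt short
    return "\n".join(lines)
```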
Keep the built-in KnowledgeRepository lightweight. If you need persistence, experiment tracking, indexing, or external database integration, implement that as a repository plugin instead of making the default runtime heavier for every user.
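A persistence plugin can be small. The sketch below is a hypothetical repository plugin that appends notes as JSON lines on disk; the class name and method names are assumptions, so match them to the real repository contract in `src/ironengine_rl/interfaces/` before wiring it into a profile:

```python
import json
from pathlib import Path
from typing import List

class JsonlRepository:
    """Hypothetical opt-in repository plugin persisting notes as JSON lines.

    The default in-memory KnowledgeRepository stays untouched; users who
    want durability select this plugin in their profile instead.
    """

    def __init__(self, path: str) -> None:
        self._path = Path(path)
        self._path.parent.mkdir(parents=True, exist_ok=True)

    def add_note(self, note: str) -> None:
        # Append-only writes keep each run's notes durable and ordered.
        with self._path.open("a", encoding="utf-8") as f:
            f.write(json.dumps({"note": note}) + "\n")

    def notes(self) -> List[str]:
        if not self._path.exists():
            return []
        with self._path.open(encoding="utf-8") as f:
            return [json.loads(line)["note"] for line in f if line.strip()]
```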
Use update strategies for trainable or adaptive policies where reward and state feedback legitimately change weights or control parameters. Do not pretend that a hosted or already-trained LLM is applying online weight updates unless you are actually implementing such a mechanism in a custom provider.
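An update strategy in this sense actually changes parameters from feedback. A minimal sketch, assuming a hypothetical strategy class (not the framework's real interface), that nudges a control gain from reward:

```python
class GainUpdateStrategy:
    """Hypothetical online update: adjusts a control gain from reward.

    This qualifies as a real update strategy because the parameter
    genuinely changes in response to feedback; a hosted LLM that only
    rewrites its prompt between calls is not doing this.
    """

    def __init__(self, gain: float = 0.5, learning_rate: float = 0.1) -> None:
        self.gain = gain
        self.learning_rate = learning_rate

    def update(self, reward: float, baseline: float = 0.0) -> float:
        # Move the gain up when reward beats the baseline, down otherwise.
        self.gain += self.learning_rate * (reward - baseline)
        return self.gain
```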
When you add or change framework capabilities, update the docs proactively:
- `README.md` for user-visible starting paths
- `docs/index.md` for docs navigation
- `docs/api-reference.md` when the public API surface changes
- `docs/customization.md` and `docs/plugins-and-extensions.md` for extension patterns
- `docs/examples-and-workflows.md` when new examples are added
Before considering a change complete, verify:
- the relevant profiles validate
- new or changed plugins load correctly
- tests cover the new behavior
- docs explain the new user-facing configuration
- example paths remain consistent with the actual repository layout
If you want developer-oriented reference material, these are currently the most useful examples to study:
- `examples/plugins/persistent_repository/profile.json` for opt-in persistence
- `examples/inference/armsmart_pytorch_complete/profile.json` for a full custom adaptive pipeline
- `examples/inference/armsmart_ollama_complete/profile.json` for local LLM planning with repository context
- `examples/inference/armsmart_cloud_complete/profile.json` for cloud LLM planning with repository context
- `docs/api-reference.md` for symbols and runtime APIs
- `docs/customization.md` for profile examples
- `docs/plugins-and-extensions.md` for plugin layout
- `docs/logging-and-outputs.md` for run artifacts and repository files