
[Testing] Integrate Minimal Fixtures for Deterministic Output Validation #7

@aviralsaxena16

Description


Porting this proposal from the previous repository to the official OSIPI organization. Original discussion: Link

Hi @1brahimmohamed and @jan-petr,

Following up on the regression framework and CI pipeline introduced in PR #20 (which I will be porting over here shortly), I would like to propose the next architectural step toward the objective of comparing generated outputs against expected outputs.

Currently, our CI infrastructure supports regression snapshotting, but we need curated baseline fixtures representing real-world sequences to act as our safety net.

Architectural Considerations (Preventing Repo Bloat):

To keep the CI pipeline lightning-fast and avoid bloating the git history with large binary files:

  • We should strictly prefer reduced JSON/TSV-based metadata inputs where possible.
  • We should avoid storing raw, heavy DICOM datasets directly in the repository for routine unit tests.
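To make the idea concrete, a minimal fixture could pair a small metadata dictionary with its pinned expected outputs in a single JSON file. The field names below (BIDS-style keys like `RepetitionTime`/`EchoTime`, and the `expected` block) are purely illustrative; the actual schema would be settled in the PR:

```json
{
  "sequence": "example_t1w",
  "metadata": {
    "RepetitionTime": 2.0,
    "EchoTime": 0.03,
    "FlipAngle": 9
  },
  "expected": {
    "severity": "ok",
    "methods_text": "TE = 30.0 ms."
  }
}
```

A fixture like this is a few hundred bytes, diffs cleanly in review, and avoids committing any DICOM pixel data.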

The Goal:
Once these deterministic baselines are established, any future PR (such as adding new modalities or tweaking thresholds) that unintentionally alters the mathematical outputs, severity classifications, or method-section text of our base sequences will be caught by CI instantly.
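The comparison step could be sketched roughly as below. Note this is a hypothetical illustration: `generate_report` stands in for whatever the real pipeline entry point turns out to be, and the field names are placeholders; the real fixture layout and loader would come with the PR. The key idea is comparing numeric fields with a tolerance and everything else exactly, so a CI failure names the first field that drifted:

```python
import math


def generate_report(metadata: dict) -> dict:
    """Placeholder for the real pipeline: maps sequence metadata to the
    derived outputs we want to pin (values, severity, methods text)."""
    te_ms = metadata["EchoTime"] * 1000
    return {
        "echo_time_ms": te_ms,
        "severity": "ok" if metadata["EchoTime"] < 0.1 else "warn",
        "methods_text": f"TE = {te_ms:.1f} ms.",
    }


def assert_matches_baseline(generated: dict, baseline: dict, rel_tol: float = 1e-9) -> None:
    """Compare a generated output dict against a stored baseline.

    Floats are compared with a relative tolerance to stay robust across
    platforms; strings and classifications must match exactly.
    """
    assert generated.keys() == baseline.keys(), "output schema changed"
    for key, expected in baseline.items():
        actual = generated[key]
        if isinstance(expected, float):
            assert math.isclose(actual, expected, rel_tol=rel_tol), (
                f"{key}: {actual!r} drifted from baseline {expected!r}"
            )
        else:
            assert actual == expected, (
                f"{key}: {actual!r} != baseline {expected!r}"
            )
```

In a pytest run, each baseline JSON under the fixtures directory would be loaded and fed through `assert_matches_baseline`, so a threshold tweak or text change surfaces as a single failing parametrized test.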

Next Steps:
I am currently extracting and sanitizing these minimal fixtures locally. I plan to open a PR with the initial implementation tomorrow evening. Please let me know in the meantime if you have a preference on which specific sample datasets we should isolate for these baselines!

Best,
Aviral
