Conversation
Updating branch to main
- Improve provenance logging by avoiding duplicate initialization events and handling potentially corrupted provenance files.
- Ensure internal consistency on restart by verifying that species marked as converged have all required output paths, resetting their status otherwise.
- Fix job key generation for reactions (lists of labels) and improve tracking for running conformer jobs.
- Defer TS switching during conformer optimization batches to avoid unnecessary job deletions.
- Ensure that successful and unsuccessful transition state generation methods are listed uniquely and formatted using join to avoid trailing commas in the species report.
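A minimal sketch of that formatting approach (the helper name is hypothetical; ARC's actual report code may differ):

```python
def format_methods(methods):
    """Deduplicate while preserving order, then join so no trailing comma is emitted."""
    seen = []
    for method in methods:
        if method not in seen:
            seen.append(method)
    return ', '.join(seen)

# Duplicates collapse and the joined string has no trailing comma:
print(format_methods(['autotst', 'heuristics', 'autotst']))  # autotst, heuristics
```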
- Update graph logic to correctly link jobs to parent jobs, troubleshooting diamonds, or TS selection decisions instead of always defaulting to the last node.
- Preserve intentional newlines in wrapped labels to improve node readability.
- Ensure the provenance YAML file is saved with an updated timestamp even when the graphviz package is unavailable.
- Add support for visualizing TS guess selection failure events as decision nodes.
- Use stable indices for TS guesses to ensure correct mapping between jobs and guess objects during conformer optimization.
- Add unit tests for provenance deduplication, restart output sanitization, and multi-species label handling in the Scheduler.
- Correct "unsuccessfully" to "unsuccessful" in the transition state report string.
- Update unit tests to reflect the deduplication of generation methods and the removal of trailing commas in the report output.
Pull request overview
Adds provenance tracking to ARC runs, persisting an event log to YAML and optionally rendering a Graphviz (DOT/SVG) visualization at the end of scheduling.
Changes:
- Introduces scheduler-side provenance event recording (job start/finish, troubleshooting, TS guess selection) with persistence and restart behavior.
- Adds plotter support to save provenance artifacts (YAML + Graphviz DOT/SVG) with label wrapping and safe node IDs.
- Updates/extends unit tests to validate provenance logging/rendering and improves TS report formatting (deduped method lists).
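The label wrapping and safe node ID helpers mentioned above could look roughly like this (illustrative names and sanitization rules, not necessarily ARC's exact implementation):

```python
import re
import textwrap

def safe_node_id(label):
    # Graphviz node IDs are safest when restricted to [A-Za-z0-9_]
    return re.sub(r'[^A-Za-z0-9_]', '_', label)

def wrap_label(label, width=30):
    # Wrap each segment separately so intentional newlines are preserved
    return '\n'.join('\n'.join(textwrap.wrap(part, width) or [''])
                     for part in label.split('\n'))
```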
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| environment.yml | Adds conda package for the Python Graphviz bindings used for rendering provenance graphs. |
| arc/species/species.py | Deduplicates TS report method lists and fixes wording for unsuccessful methods. |
| arc/species/species_test.py | Updates expected TS report string to match new formatting. |
| arc/scheduler.py | Implements provenance state/events, restart sanitization for missing paths, and records key scheduling events. |
| arc/scheduler_test.py | Adds tests for provenance restart dedup, restart sanitization, delete-all-jobs reset behavior, and multi-label provenance. |
| arc/plotter.py | Adds provenance artifact generation (YAML + optional DOT/SVG) and helper functions for Graphviz output. |
| arc/plotter_test.py | Adds tests for graph label wrapping and provenance artifact generation/graph structure. |
arc/scheduler.py
Outdated
```diff
                 level_of_theory=self.ts_guess_level,
                 job_type='conf_opt',
-                conformer=i,
+                conformer=tsg.index,
             )
-            tsg.conformer_index = i  # Store the conformer index in the TSGuess object to match them later.
+            tsg.conformer_index = tsg.index  # Use a stable identifier for mapping back to TSGuess.
```
tsg.conformer_index is assigned after run_job(), but run_job() immediately persists the restart file. If ARC is interrupted after spawning these conf_opt jobs but before another save_restart_dict() call, the restart.yml can contain running conf_opt jobs while the corresponding TSGuess objects still have conformer_index=None, and parse_conformer() will then be unable to map conformer results back to a TSGuess. Set tsg.conformer_index (and any fallback tsg.index) before calling run_job() (or explicitly save the restart dict after assignment) so restarts are always consistent.
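A self-contained sketch of the suggested ordering, simulating `run_job()`'s immediate restart-file persistence (the `TSGuess` and `run_job` here are stand-ins, not ARC's real objects):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TSGuess:
    index: int
    conformer_index: Optional[int] = None

restart_snapshots = []

def run_job(guesses):
    # run_job() persists the restart state right away (simulated here)
    restart_snapshots.append([g.conformer_index for g in guesses])

guesses = [TSGuess(index=i) for i in range(3)]
for tsg in guesses:
    tsg.conformer_index = tsg.index  # assign BEFORE spawning the job
    run_job(guesses)

# The guess whose job was just spawned always has its index persisted,
# so a restart can map running conf_opt jobs back to a TSGuess.
assert restart_snapshots[-1] == [0, 1, 2]
```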
arc/scheduler.py
Outdated
```python
    logger.warning('Could not parse existing provenance.yml; starting a fresh provenance log.')
    provenance = None
if isinstance(provenance, dict):
    self.provenance['events'] = provenance.get('events', list())
```
When loading an existing provenance.yml, provenance.get('events', ...) is assigned directly to self.provenance['events'] without validating type/shape. If the file is partially corrupted (e.g., events is not a list of dicts), the set comprehension on the next line can raise and break Scheduler initialization. Consider validating events (must be a list of dicts) and falling back to an empty list if it’s not.
```diff
-self.provenance['events'] = provenance.get('events', list())
+raw_events = provenance.get('events', list())
+if isinstance(raw_events, list) and all(isinstance(e, dict) for e in raw_events):
+    self.provenance['events'] = raw_events
+else:
+    logger.warning('Existing provenance.yml has an invalid "events" structure; '
+                   'starting with an empty event log.')
+    self.provenance['events'] = list()
+# Ensure we always have a list for provenance events
+self.provenance.setdefault('events', list())
+if not isinstance(self.provenance['events'], list):
+    logger.warning('Existing provenance events are not a list; resetting to empty list.')
+    self.provenance['events'] = list()
```
```python
self.provenance['events'].append(event)
self.save_provenance()
```
record_provenance_event() persists provenance.yml on every event. In real runs this could be thousands of events (job starts/finishes, troubleshooting, etc.) and may noticeably slow scheduling due to synchronous disk I/O. Consider buffering events in memory and flushing periodically (e.g., every N events / every M seconds) and/or only persisting on key milestones + finalize, while still ensuring durability on restart.
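One possible buffering scheme along those lines (a sketch with illustrative thresholds, not ARC's API; `persisted` simulates the on-disk state that `save_provenance()` would write):

```python
import time

class BufferedProvenanceLog:
    def __init__(self, flush_every_n=100, flush_every_s=30.0):
        self.events = []      # in-memory event buffer
        self.persisted = []   # what has been written to disk (simulated)
        self.flush_every_n = flush_every_n
        self.flush_every_s = flush_every_s
        self._last_flush = time.monotonic()

    def record(self, event):
        self.events.append(event)
        pending = len(self.events) - len(self.persisted)
        if (pending >= self.flush_every_n
                or time.monotonic() - self._last_flush >= self.flush_every_s):
            self.flush()

    def flush(self):
        # in ARC this would write provenance.yml via save_provenance()
        self.persisted = list(self.events)
        self._last_flush = time.monotonic()

log = BufferedProvenanceLog(flush_every_n=100)
for i in range(250):
    log.record({'event_id': i + 1})
log.flush()  # always flush explicitly on finalize for durability
assert len(log.persisted) == 250
```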
arc/scheduler.py
Outdated
```python
event = {'event_id': len(self.provenance['events']) + 1,
         'event_type': event_type,
         'timestamp': datetime.datetime.now().isoformat(timespec='seconds'),
```
record_provenance_event() derives event_id from len(events) + 1. If an existing provenance log is loaded with non-contiguous or non-1-indexed event_ids (e.g., manual edits or future schema changes), this can generate duplicate IDs. Safer approach: compute the next ID from max(existing_event_id) + 1 (defaulting to 0 when absent).
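The max-based ID computation could look like this (a sketch; the `event_id` field name follows the snippet above):

```python
def next_event_id(events):
    # Tolerate missing or non-integer IDs and non-contiguous numbering
    ids = [e.get('event_id') for e in events if isinstance(e, dict)]
    ids = [i for i in ids if isinstance(i, int)]
    return max(ids, default=0) + 1

print(next_event_id([{'event_id': 3}, {'event_id': 7}]))  # 8
```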
arc/scheduler.py
Outdated
```python
already_initialized = {e['label'] for e in self.provenance['events']
                       if e.get('event_type') == 'species_initialized' and 'label' in e}
```
The already_initialized = {e['label'] ...} comprehension assumes label is always hashable. If provenance.yml contains a parsed-but-invalid event where label is a list/dict, restart will raise TypeError here despite the earlier “robust to parsing errors” intent. Consider filtering to isinstance(label, str) (or coercing to str) before adding to the set.
```diff
-already_initialized = {e['label'] for e in self.provenance['events']
-                       if e.get('event_type') == 'species_initialized' and 'label' in e}
+already_initialized = set()
+for event in self.provenance['events']:
+    if event.get('event_type') == 'species_initialized':
+        label = event.get('label')
+        if isinstance(label, str):
+            already_initialized.add(label)
+        elif label is not None:
+            logger.debug(f"Ignoring provenance event with non-string label in provenance.yml: {label!r}")
```
```python
self.species_dict, self.rxn_dict = dict(), dict()
for species in self.species_list:
    self.species_dict[species.label] = species
for rxn in self.rxn_list:
    self.rxn_dict[rxn.index] = rxn
self._initialize_provenance()
```
_initialize_provenance() is called before TS species are created/added from rxn_list, so those TS labels never get a species_initialized event and the provenance graph/log will be incomplete for reaction runs. Consider moving _initialize_provenance() to after the reaction/TS-species construction block, or explicitly recording species_initialized when a TS species is created and appended to species_list.
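A toy illustration of the proposed ordering (the `Species` class and `record_event` helper are stand-ins for ARC's real API):

```python
events = []

def record_event(event_type, label):
    events.append({'event_type': event_type, 'label': label})

class Species:
    def __init__(self, label):
        self.label = label

species_list = [Species('CH4'), Species('OH')]
# TS species derived from reactions are appended after plain species:
species_list.append(Species('TS0'))

# Initialize provenance only once the full species list (incl. TS) exists,
# so TS labels also receive a species_initialized event:
for spc in species_list:
    record_event('species_initialized', spc.label)

assert [e['label'] for e in events] == ['CH4', 'OH', 'TS0']
```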
Thanks for this awesome addition! For a while we have wanted to add something to visualize ARC's progress. Is this meant to be live, or static at the end of the run? Eventually we want a live HTML portal to track ARC/T3 progress; it would be great to keep that in mind while developing the feature in the present PR so we can build on top of it.
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main     #841      +/-   ##
==========================================
+ Coverage   58.58%   58.81%   +0.22%
==========================================
  Files          97       97
  Lines       29203    29409     +206
  Branches     7752     7800      +48
==========================================
+ Hits        17110    17297     +187
- Misses       9889     9890       +1
- Partials     2204     2222      +18
```
This pull request introduces a provenance tracking and visualization system to the ARC workflow, enabling detailed recording and rendering of the sequence of computational events (such as job launches, completions, troubleshooting, and decision points) in each run. The provenance data is saved in YAML format and, if Graphviz is available, also rendered as a graph (DOT and SVG). The scheduler now records all relevant events and generates these artifacts at the end of a run. Comprehensive tests are included to validate the new functionality.
Key changes include:

Provenance tracking and event recording:
- Added a `provenance` dictionary to the `Scheduler` class to track run metadata and a list of events, with initialization and persistence logic. Events such as species initialization, job start, job finish, troubleshooting, and TS guess selection are now recorded via the new `record_provenance_event` method. [1] [2] [3] [4]

Provenance artifact generation and visualization:
- Added `save_provenance_artifacts` in `arc/plotter.py` to save the provenance event log as YAML and, if possible, render the event graph using Graphviz (DOT and SVG). The graph visualizes the relationships between species, jobs, troubleshooting decisions, and TS guess selections. Helper functions ensure graph labels are readable and node IDs are safe.

Testing and validation:

API and typing improvements:
- Extended `run_job` and related methods in `arc/scheduler.py` to accept provenance-related parameters, with improved docstrings and typing. [1] [2]

Utility and robustness:
These changes lay the foundation for reproducible, auditable ARC runs and provide a clear visual summary of complex computational workflows.
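The graceful Graphviz fallback described above could be sketched like this (hypothetical signature; `json` stands in for ARC's YAML output to keep the example dependency-free):

```python
import json

def save_provenance_artifacts(events, log_path, dot_path=None):
    # Always persist the event log, even when graphviz is not installed
    with open(log_path, 'w') as f:
        json.dump({'events': events}, f)
    try:
        import graphviz  # optional: only needed for DOT/SVG rendering
    except ImportError:
        return False  # log saved; visualization skipped gracefully
    graph = graphviz.Digraph()
    for e in events:
        graph.node(str(e['event_id']), label=str(e.get('event_type', '')))
    if dot_path:
        graph.save(dot_path)
    return True
```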