Conversation
Updating branch to main
- Improve provenance logging by avoiding duplicate initialization events and handling potentially corrupted provenance files.
- Ensure internal consistency on restart by verifying that species marked as converged have all required output paths, resetting their status otherwise.
- Fix job key generation for reactions (lists of labels) and improve tracking for running conformer jobs.
- Defer TS switching during conformer optimization batches to avoid unnecessary job deletions.
Ensure that successful and unsuccessful transition state generation methods are listed uniquely and formatted using join to avoid trailing commas in the species report.
- Update graph logic to correctly link jobs to parent jobs, troubleshooting diamonds, or TS selection decisions instead of always defaulting to the last node.
- Preserve intentional newlines in wrapped labels to improve node readability.
- Ensure the provenance YAML file is saved with an updated timestamp even when the graphviz package is unavailable.
- Add support for visualizing TS guess selection failure events as decision nodes.
- Use stable indices for TS guesses to ensure correct mapping between jobs and guess objects during conformer optimization.
- Add unit tests for provenance deduplication, restart output sanitization, and multi-species label handling in the Scheduler.
- Correct "unsuccessfully" to "unsuccessful" in the transition state report string.
- Update unit tests to reflect the deduplication of generation methods and the removal of trailing commas in the report output.
Pull request overview
Adds provenance tracking to ARC runs, persisting an event log to YAML and optionally rendering a Graphviz (DOT/SVG) visualization at the end of scheduling.
Changes:
- Introduces scheduler-side provenance event recording (job start/finish, troubleshooting, TS guess selection) with persistence and restart behavior.
- Adds plotter support to save provenance artifacts (YAML + Graphviz DOT/SVG) with label wrapping and safe node IDs.
- Updates/extends unit tests to validate provenance logging/rendering and improves TS report formatting (deduped method lists).
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
environment.yml |
Adds conda package for the Python Graphviz bindings used for rendering provenance graphs. |
arc/species/species.py |
Deduplicates TS report method lists and fixes wording for unsuccessful methods. |
arc/species/species_test.py |
Updates expected TS report string to match new formatting. |
arc/scheduler.py |
Implements provenance state/events, restart sanitization for missing paths, and records key scheduling events. |
arc/scheduler_test.py |
Adds tests for provenance restart dedup, restart sanitization, delete-all-jobs reset behavior, and multi-label provenance. |
arc/plotter.py |
Adds provenance artifact generation (YAML + optional DOT/SVG) and helper functions for Graphviz output. |
arc/plotter_test.py |
Adds tests for graph label wrapping and provenance artifact generation/graph structure. |
```python
self.provenance['events'].append(event)
self.save_provenance()
```
record_provenance_event() persists provenance.yml on every event. In real runs this could be thousands of events (job starts/finishes, troubleshooting, etc.) and may noticeably slow scheduling due to synchronous disk I/O. Consider buffering events in memory and flushing periodically (e.g., every N events / every M seconds) and/or only persisting on key milestones + finalize, while still ensuring durability on restart.
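The buffering strategy the comment suggests could look like this minimal sketch. The class and parameter names are hypothetical; `save_provenance` is a stand-in for the Scheduler's actual persistence method:

```python
import time


class BufferedProvenanceWriter:
    """Buffer provenance events in memory and flush them to disk in batches,
    instead of persisting synchronously on every event."""

    def __init__(self, save_provenance, max_events: int = 50, max_seconds: float = 30.0):
        self.save_provenance = save_provenance  # callable that writes provenance.yml
        self.max_events = max_events
        self.max_seconds = max_seconds
        self._pending = 0
        self._last_flush = time.monotonic()

    def record(self, events: list, event: dict) -> None:
        """Append an event and flush if enough events or time have accumulated."""
        events.append(event)
        self._pending += 1
        if (self._pending >= self.max_events
                or time.monotonic() - self._last_flush >= self.max_seconds):
            self.flush()

    def flush(self) -> None:
        """Persist any buffered events (also call this on finalize/shutdown)."""
        if self._pending:
            self.save_provenance()
            self._pending = 0
            self._last_flush = time.monotonic()
```

A `flush()` call at run finalization (and before restart checkpoints) keeps the log durable while cutting disk I/O by roughly the batch factor.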
```python
self.species_dict, self.rxn_dict = dict(), dict()
for species in self.species_list:
    self.species_dict[species.label] = species
for rxn in self.rxn_list:
    self.rxn_dict[rxn.index] = rxn
self._initialize_provenance()
```
_initialize_provenance() is called before TS species are created/added from rxn_list, so those TS labels never get a species_initialized event and the provenance graph/log will be incomplete for reaction runs. Consider moving _initialize_provenance() to after the reaction/TS-species construction block, or explicitly recording species_initialized when a TS species is created and appended to species_list.
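A toy sketch of the suggested ordering, creating TS species from reactions before provenance initialization so every label is covered. All names here are illustrative stand-ins, not ARC's actual API:

```python
def initialize(species_list, rxn_list, make_ts_species):
    """Create TS species from reactions *before* initializing provenance,
    so every TS label gets a species_initialized event."""
    species_dict = {sp['label']: sp for sp in species_list}
    for rxn in rxn_list:
        ts = make_ts_species(rxn)            # e.g. returns {'label': 'TS0'}
        species_list.append(ts)
        species_dict[ts['label']] = ts
    # Provenance initialization happens last, so it sees TS labels too.
    events = [{'event': 'species_initialized', 'label': sp['label']}
              for sp in species_list]
    return species_dict, events
```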
Thanks for this awesome addition! For a while we have wanted to add something to visualize ARC's progress. Is this meant to be live or static at the end of the run? Eventually we want a live HTML portal to track ARC/T3 progress; it would be great to keep that in mind when developing the feature in the present PR so we can build on top of it.
Codecov Report

✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##             main     #841      +/-   ##
==========================================
+ Coverage   60.09%   60.58%   +0.48%
==========================================
  Files         102      105       +3
  Lines       31045    31825     +780
  Branches     8087     8236     +149
==========================================
+ Hits        18658    19282     +624
- Misses      10071    10169      +98
- Partials     2316     2374      +58
```
Pull request overview
Copilot reviewed 11 out of 11 changed files in this pull request and generated 7 comments.
```diff
 successful_tsgs = [tsg for tsg in self.species_dict[label].ts_guesses if tsg.success]
 if len(successful_tsgs) > 1:
     xyzs = [tsg.initial_xyz for tsg in successful_tsgs]
     piped_indices = self.pipe_planner.try_pipe_ts_opt(label, xyzs, self.ts_guess_level)
     if not piped_indices:
         self.job_dict[label]['conf_opt'] = dict()
     for i, tsg in enumerate(successful_tsgs):
         tsg.conformer_index = i  # Store the conformer index to match them later.
         if i in piped_indices:
             continue
         if 'conf_opt' not in self.job_dict[label]:
             self.job_dict[label]['conf_opt'] = dict()
+    self.job_dict[label]['conf_opt'] = dict()
+    for tsg in successful_tsgs:
+        if tsg.index is None:
+            existing_indices = [guess.index for guess in self.species_dict[label].ts_guesses
+                                if guess.index is not None]
+            tsg.index = max(existing_indices or [-1]) + 1
+        tsg.conformer_index = tsg.index  # Set before run_job so restart state is consistent.
         self.run_job(label=label,
                      xyz=tsg.initial_xyz,
                      level_of_theory=self.ts_guess_level,
                      job_type='conf_opt',
-                     conformer=i,
+                     conformer=tsg.index,
                      )
```
In run_ts_conformer_jobs(), the new block resets self.job_dict[label]['conf_opt'] and then loops over successful_tsgs a second time, which effectively ignores piped_indices and will spawn conf_opt jobs even for TS guesses that were supposed to be piped (and also overwrites the earlier tsg.conformer_index assignment). This looks like an indentation/logic error: the job_dict reset and run_job calls should respect piped_indices and avoid wiping any already-planned piped work.
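A simplified sketch of the fix this comment describes, spawning jobs only for non-piped guesses and assigning each conformer index exactly once. The function and class here are stand-ins, not ARC's actual scheduler code:

```python
class Guess:
    """Minimal stand-in for a TS guess object (illustrative only)."""
    def __init__(self):
        self.conformer_index = None


def spawn_conf_opt_jobs(successful_tsgs, piped_indices, run_job):
    """Spawn conf_opt jobs only for TS guesses that were not piped.

    ``run_job`` stands in for the Scheduler's job-spawning method.
    """
    spawned = []
    for i, tsg in enumerate(successful_tsgs):
        tsg.conformer_index = i  # assign once; never overwritten later
        if i in piped_indices:
            continue  # piped guesses are handled by the pipe planner
        run_job(conformer=i)
        spawned.append(i)
    return spawned
```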
arc/scheduler.py (Outdated)
```python
for job_type, spawn_job_type in self.job_types.items():
    if spawn_job_type and not self.output[label]['job_types'][job_type] \
            and not ((self.species_dict[label].is_ts and job_type in ['scan', 'conf_opt'])
                     or (self.species_dict[label].number_of_atoms == 1
                         and job_type in ['conf_opt', 'opt', 'fine', 'freq', 'rotors', 'bde'])
                     or job_type == 'bde' and self.species_dict[label].bdes is None
                     or job_type == 'conf_opt'
                     or job_type == 'irc'
                     or job_type == 'tsg'):
        logger.debug(f'Species {label} did not converge.')
        all_converged = False
        break
```
check_all_done() now iterates over self.job_types twice with the same condition block (the new loop repeats the existing convergence check). This duplication is easy to miss and can complicate future edits; it should be removed or consolidated so the convergence logic is only evaluated once (and the TS E0 special-case remains intact).
arc/provenance/nodes.py (Outdated)
```python
    return val.value if isinstance(val, Enum) else val


# ── Enums ───────────────────────────────��──────────────────────────────────���─
```
The section header comment contains corrupted/unprintable characters ("��"/"���"), which will show up in diffs and can cause encoding noise in editors. Please replace this with plain ASCII/UTF-8 characters so the file remains clean and searchable.
```diff
-# ── Enums ───────────────────────────────��──────────────────────────────────���─
+# -- Enums ------------------------------------------------------------------
```
```python
def render_provenance_graph(prov_graph, run_label: str = 'ARC run') -> 'graphviz.Digraph':
    """
    Render a :class:`ProvenanceGraph` as a Graphviz directed graph.

    Node styling by type:
        - **species**: box / aliceblue
        - **calculation**: box / color by status (honeydew=done, mistyrose=errored, white=pending)
        - **data**: note / cornsilk
        - **decision**: diamond / color by kind (lavender, moccasin, mistyrose)

    Edge styling by type:
        - ``selected_by``: solid green
        - ``rejected_by``: dashed red
        - ``troubleshot_by``: dashed orange
        - ``retried_as`` / ``fine_of``: dotted gray
        - others: solid black

    Args:
        prov_graph: A :class:`ProvenanceGraph` instance.
        run_label (str): Label for the root run node.

    Returns:
        graphviz.Digraph: The rendered graph object.
    """
    gv = graphviz.Digraph(
        name='arc_provenance',
        comment=f'ARC provenance for {run_label}',
        graph_attr={'rankdir': 'LR', 'splines': 'true', 'overlap': 'false'},
        node_attr={'shape': 'box', 'style': 'rounded,filled', 'fillcolor': 'white', 'fontname': 'Helvetica'},
        edge_attr={'fontname': 'Helvetica'},
    )
```
render_provenance_graph() assumes the optional dependency graphviz is available; if the import failed earlier, this will raise an AttributeError when trying to access graphviz.Digraph. Since graphviz is treated as optional elsewhere, consider adding an explicit guard at the top of this function (raise a clear ImportError/RuntimeError) so callers get a helpful message.
arc/plotter.py (Outdated)
```python
)
from arc.species.perceive import perceive_molecule_from_xyz
from arc.species.species import ARCSpecies, rmg_mol_to_dict_repr
from arc.provenance.nodes import _enum_val, NodeType, EdgeType, DecisionKind
```
Unused imports were added from arc.provenance.nodes (_enum_val, NodeType, EdgeType, DecisionKind) but they are not referenced anywhere in this module. Please remove them (or use them) to avoid lint/static-analysis failures and keep the dependency surface minimal.
```diff
-from arc.provenance.nodes import _enum_val, NodeType, EdgeType, DecisionKind
```
arc/provenance/graph.py (Outdated)
```python
def add_species_node(self, label: str, is_ts: bool = False,
                     timestamp: Optional[str] = None) -> str:
    """
    Convenience method to add a species node.

    Args:
        label: Species label.
```
ProvenanceGraph.add_species_node() is annotated as taking label: str, but the new test suite exercises label=None (e.g., to ensure rendering falls back to node_id). To keep typing consistent with actual supported inputs, consider changing the signature to label: Optional[str] (and similarly for other node-creation helpers if None is allowed).
```diff
-def add_species_node(self, label: str, is_ts: bool = False,
+def add_species_node(self, label: Optional[str] = None, is_ts: bool = False,
                      timestamp: Optional[str] = None) -> str:
     """
     Convenience method to add a species node.

     Args:
-        label: Species label.
+        label: Optional species label.
```
arc/species/species.py (Outdated)
```python
    Optional[dict]: A summary dict with keys ``n_before``, ``n_after``, and
        ``merged`` (list of lists of merged indices), or ``None`` if clustering
        was skipped.
```
cluster_tsgs() now always returns a summary dict for TS species, even when no clustering actually occurred (n_before == n_after). The docstring still says it returns None when clustering was skipped, which is no longer accurate. Please update the docstring to match the behavior, or return None when nothing was clustered so callers can rely on the Optional[dict] contract.
```diff
-    Optional[dict]: A summary dict with keys ``n_before``, ``n_after``, and
-        ``merged`` (list of lists of merged indices), or ``None`` if clustering
-        was skipped.
+    Optional[dict]: ``None`` if this species is not a TS or has no TS guesses.
+        Otherwise, returns a summary dict with keys ``n_before``, ``n_after``,
+        and ``merged`` (a list of lists of merged indices), even if no TS
+        guesses were merged.
```
Force-pushed from 795b090 to 60a8226
Also added TS troubleshooting events.
This pull request introduces a provenance tracking and visualization system to the ARC workflow, enabling detailed recording and rendering of the sequence of computational events (such as job launches, completions, troubleshooting, and decision points) in each run. The provenance data is saved in YAML format and, if Graphviz is available, also rendered as a graph (DOT and SVG). The scheduler now records all relevant events and generates these artifacts at the end of a run. Comprehensive tests are included to validate the new functionality.
Key changes include:
Provenance tracking and event recording:
- Added a `provenance` dictionary to the `Scheduler` class to track run metadata and a list of events, with initialization and persistence logic. Events such as species initialization, job start, job finish, troubleshooting, and TS guess selection are now recorded via the new `record_provenance_event` method. [1] [2] [3] [4]

Provenance artifact generation and visualization:
- Added `save_provenance_artifacts` in `arc/plotter.py` to save the provenance event log as YAML and, if possible, render the event graph using Graphviz (DOT and SVG). The graph visualizes the relationships between species, jobs, troubleshooting decisions, and TS guess selections. Helper functions ensure graph labels are readable and node IDs are safe.

Testing and validation:

API and typing improvements:
- `run_job`. [1] [2]
- `arc/scheduler.py`.

Utility and robustness:

These changes lay the foundation for reproducible, auditable ARC runs and provide a clear visual summary of complex computational workflows.
Provenance tracking and event recording
- Added a `provenance` dictionary and event recording methods to the `Scheduler` class, capturing all key events during a run and persisting them to YAML. [1] [2] [3] [4]

Provenance artifact generation and visualization
- Added `save_provenance_artifacts` in `arc/plotter.py` to render provenance graphs (DOT/SVG) and YAML logs, with readable labels and safe node IDs. Handles missing Graphviz gracefully.

Testing

API and typing improvements
- Updated `run_job` and related methods to accept provenance-related parameters and improved docstrings and typing. [1] [2] [3]

Utility and robustness