Problem
I'd like to aggregate reports across multiple processes, e.g. using multiprocessing.Pool.
(In my case, it's a manual worker dispatch based on mp.Process - see here for an old port of the code).
Motivating Example
In my current code, I do something like this:

```py
def worker(values):
    for value in values:
        out, timing = do_work(value)
        yield out, timing

def main():
    values = range(n)  # etc.
    results = parallel_work(worker, values)
    outs, timings = zip(*results)
    # Print aggregation of "timings" reports.
```

More concretely, here's the example code (it doesn't have all deps, but it communicates the intent):
https://github.com/EricCousineau-TRI/repro/blob/54494a5c5154f19e693e4862fbaa79cddcd78d6f/drake_stuff/multibody_plant_prototypes/generate_poses_sink_clutter.py#L507-L516
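For concreteness, here is a minimal, self-contained sketch of the kind of cross-process aggregation I mean, done manually with `multiprocessing.Pool`. Here `do_work` is a hypothetical stand-in for the real per-item work, and the "aggregation" is just a mean/max over the collected timings:

```python
import statistics
import time
from multiprocessing import Pool


def do_work(value):
    # Hypothetical stand-in for the real per-item work.
    start = time.perf_counter()
    out = value * value
    timing = time.perf_counter() - start
    return out, timing


def main():
    values = range(8)
    # Each (out, timing) pair is produced in a worker process and
    # sent back to the parent via the pool's result pipe.
    with Pool(processes=2) as pool:
        results = pool.map(do_work, values)
    outs, timings = zip(*results)
    # Aggregate the per-item timing reports in the parent process.
    print(f"n={len(timings)} "
          f"mean={statistics.mean(timings):.6f}s "
          f"max={max(timings):.6f}s")
    return outs, timings


if __name__ == "__main__":
    main()
```

This works because the timings are plain picklable values; the question is whether the library's report objects can be round-tripped and merged the same way via a public API.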
Request
Is there an easy way to aggregate the results themselves using the public API?
Currently, it looks like aggregation is done internally:
Lines 303 to 306 in 94f59aa:

```py
agg_report = AggregatedReport(self._reported_values, tr_data)
# Stash information internally
self._last_trace_report = self._reported_traces
self._last_aggregated_report = agg_report
```