Support for manually aggregating reports? (e.g. after joining parallel jobs?) #13

Description

@EricCousineau-TRI

Problem

I'd like to aggregate reports across multiple processes, e.g. using multiprocessing.Pool.
(In my case, it's a manual worker dispatch based on mp.Process - see here for an old port of the code).

Motivating Example

In my current code, I do something like this:

def worker(values):
    # Each worker processes its slice of values and yields
    # (result, per-item timing report) pairs.
    for value in values:
        out, timing = do_work(value)
        yield out, timing

def main():
    values = range(n)  # n work items; details elided.
    # parallel_work() is my own dispatch helper (see the link below).
    results = parallel_work(worker, values)
    outs, timings = zip(*results)
    # Print an aggregation of the "timings" reports here.

More concretely, here's the example code (it doesn't include all of its dependencies, but it communicates the intent):
https://github.com/EricCousineau-TRI/repro/blob/54494a5c5154f19e693e4862fbaa79cddcd78d6f/drake_stuff/multibody_plant_prototypes/generate_poses_sink_clutter.py#L507-L516
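
For concreteness, the merge I have in mind is roughly the following, assuming each worker's timing report can be reduced to a plain mapping from label to a list of durations (the names here are hypothetical, not existing stopwatch API):

from collections import defaultdict

def merge_timings(per_worker_timings):
    # Combine per-worker mappings of label -> [durations] into a single
    # mapping, so one summary can be printed for the whole run.
    merged = defaultdict(list)
    for timings in per_worker_timings:
        for label, durations in timings.items():
            merged[label].extend(durations)
    return dict(merged)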

Request

Is there an easy way to aggregate the reports themselves using the public API?

Currently, it looks like aggregation is done internally:

stopwatch/stopwatch.py, lines 303 to 306 at 94f59aa:

agg_report = AggregatedReport(self._reported_values, tr_data)
# Stash information internally
self._last_trace_report = self._reported_traces
self._last_aggregated_report = agg_report
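
As a sketch of the shape I'd like to be able to write, reusing worker and values from the example above (aggregate_reports below is hypothetical, not an existing function; I'm only illustrating the request):

import stopwatch

def main():
    results = parallel_work(worker, values)
    outs, timings = zip(*results)
    # Proposed: a public helper that merges the workers' raw reported
    # values into a single AggregatedReport.
    agg_report = stopwatch.aggregate_reports(timings)
    print(agg_report)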
