Implement ML model to identify performance issues in running Indy services, based on aggregated logs + metrics in ElasticSearch #1657

@geored

Description

Implement an ML model to identify performance issues in running Indy services, based on aggregated logs + metrics in Elasticsearch. Use the ML model to address any or all of the following:

  • Predict performance during performance testing executions
  • Identify performance bottlenecks and recommend areas to optimize
  • Trigger automated investigations when Indy SLOs are breached
  • Provide Jupyter notebook containing initial investigation results, linked to the datasets as appropriate

We will provide aggregated log events and/or metric events. Metric events will contain opentracing.io-compatible spans, with IDs that tie the events together in context. Aggregated log events also carry some contextual information, but there is significant overlap with the metric data. Span data in our metric events will contain measurements from subsystems (and threaded-off sub-processes) for each request. We may also be able to provide aggregated metrics (think Prometheus, not OpenTracing) for system-level metrics like memory usage.
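As a rough starting point for the notebook, something like the following could pull recent span-level metric events out of Elasticsearch and flag anomalously slow subsystems with an unsupervised model. The index pattern, field names (`trace_id`, `subsystem`, `duration_ms`), and thresholds are assumptions for illustration, not the actual Indy event schema:

```python
# Hypothetical sketch: query span-level metric events from Elasticsearch and
# flag unusually slow spans per subsystem. Index and field names are assumed,
# not the real Indy schema.
from elasticsearch import Elasticsearch
import pandas as pd
from sklearn.ensemble import IsolationForest

es = Elasticsearch("http://localhost:9200")  # assumed ES endpoint

# Assumed document shape: one event per span, carrying a trace/span ID, the
# subsystem that produced it, and the measured duration in milliseconds.
resp = es.search(
    index="indy-metrics-*",  # hypothetical index pattern
    size=10_000,
    query={"range": {"@timestamp": {"gte": "now-1h"}}},
)

rows = [
    {
        "trace_id": hit["_source"].get("trace_id"),
        "subsystem": hit["_source"].get("subsystem"),
        "duration_ms": hit["_source"].get("duration_ms"),
    }
    for hit in resp["hits"]["hits"]
]
df = pd.DataFrame(rows).dropna()

# Fit a simple anomaly detector per subsystem on span durations; spans scored
# as outliers become candidates for bottleneck / SLO-breach investigation.
for subsystem, group in df.groupby("subsystem"):
    model = IsolationForest(contamination=0.01, random_state=0)
    group = group.assign(outlier=model.fit_predict(group[["duration_ms"]]))
    slow = group[group["outlier"] == -1]
    if not slow.empty:
        print(
            f"{subsystem}: {len(slow)} anomalous spans, "
            f"e.g. trace {slow.iloc[0]['trace_id']} at "
            f"{slow.iloc[0]['duration_ms']} ms"
        )
```

This is only a first-pass exploration; once the real span schema and the system-level (Prometheus-style) metrics are available, the same per-subsystem framing could feed a proper prediction model for the performance-testing and SLO-trigger use cases above.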
