
Measure and improve DB/API performance #139

@hardbyte


I set up a Locust script that simulates heavy usage to measure and, ideally, improve our API/DB robustness.

I configured locust to create users at a rate of 0.05 users per second, and just keep adding users (up to a thousand or so). I stopped the experiment when the backend locked up, or when there were 5% failures.
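
For reference, a minimal Locust sketch along these lines; the endpoints are the two measured below, but the task weights, wait times, and auth handling are illustrative rather than the actual script:

```python
from locust import HttpUser, between, task

class ApiUser(HttpUser):
    # Illustrative think-time between requests.
    wait_time = between(1, 3)

    @task(3)
    def recommend(self):
        self.client.get("/recommend?limit=5")

    @task(1)
    def me(self):
        # Any auth headers the real script sends are omitted here.
        self.client.get("/auth/me")

# Run with e.g.: locust -f locustfile.py --host=<dev URL> -u 1000 -r 0.05
# (a spawn rate of 0.05 adds roughly one simulated user every 20 seconds, up to 1000 users)
```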

Baseline 1

I established a baseline measuring performance on Cloud Run (development deployment of PR #138). That limits the database connections each container opens to 20, caps the Cloud Run revision's concurrency at 25, and caps the number of containers at 3. The dev database supports 50 connections, so in theory two containers' worth of traffic should be "fine", but somewhere after the third is added we should start seeing errors from the database running out of connections.

As with all experiments, we use a first-generation Cloud Run service with 1 CPU and 512 MiB of memory.
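
For context, a 20-connections-per-container cap in SQLAlchemy would look something like the sketch below; the split between pool_size and max_overflow, and the connection URL, are assumptions rather than what PR #138 actually sets:

```python
from sqlalchemy import create_engine

# Placeholder URL; the real deployment connects to Cloud SQL.
DATABASE_URL = "postgresql+psycopg2://user:password@host/dbname"

engine = create_engine(
    DATABASE_URL,
    pool_size=10,      # connections kept open in the pool (assumed split)
    max_overflow=10,   # extra connections allowed under burst load
    pool_timeout=120,  # seconds to wait for a free connection (the timeout seen later in baseline 3's logs)
)
```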

Hypothesis: this will hopefully break in a way similar to our production environment, i.e. it should work fine for low numbers of concurrent requests and then fall over.

Notes on baseline experiment 1

  • I see fine performance all the way up to 35 users, at about 23 requests per second before the first server 500 occurs.
  • I did see a few sqlalchemy.exc.InterfaceError: (psycopg2.InterfaceError) connection already closed errors.
  • The Cloud SQL System insights shows that many connections are in state "idle in transaction", but we didn't exceed the connection limit of the database.
  • It appears that only one container was active, with one idle.
  • /recommend?limit=5 isn't too bad. About 150ms median response time, 390ms at 99%ile.
  • /auth/me weighs a whopping 65 KiB; trimming it would likely help (see the response-model sketch below).
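
If we do trim /auth/me, and assuming a FastAPI-style endpoint (the framework, route wiring, and field names below are illustrative assumptions), the usual approach is a slimmer response model:

```python
from fastapi import APIRouter, Depends
from pydantic import BaseModel

router = APIRouter()

class CurrentUserOut(BaseModel):
    # Hypothetical trimmed payload; the real /auth/me fields will differ.
    id: int
    username: str
    email: str

def get_current_user():
    # Placeholder for the project's real auth dependency.
    return {"id": 1, "username": "demo", "email": "demo@example.com"}

@router.get("/auth/me", response_model=CurrentUserOut)
def read_current_user(user: dict = Depends(get_current_user)):
    # response_model filters the returned object down to the declared fields,
    # which should shrink the 65 KiB payload considerably.
    return user
```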


Baseline 2

Next we will configure the deployment to allow only 10 DB connections per container, and allow 4 containers.

Hypothesis: based on the container scaling seen in baseline 1, I expect even less throughput, and I expect to see timeouts rather than database connection errors.

Notes on baseline experiment 2

  • I see similar performance up to 14 users, at about 10 requests per second before the responses just stop coming.
  • Similar connection picture: many connections idle and "idle in transaction" in Cloud SQL (peaked at ~38).


Baseline 3

Next, as in baseline 2, we configure the deployment to allow only 10 DB connections per container and up to 4 containers, but this time we lower the Cloud Run concurrency to 5, so all 4 containers should spin up.

Hypothesis: probably the highest throughput of the baselines. I hope to see timeouts rather than database connection errors.

Notes on baseline experiment 3


  • Was worse than baseline 1. Got to 20 users, 10 requests per second before starting to fail with 429 Client Error: Too Many Requests and server 500 errors.
  • Logs had sqlalchemy.exc.TimeoutError: QueuePool limit of size 5 overflow 5 reached, connection timed out, timeout 120.00
  • Saw a few more InterfaceError: (psycopg2.InterfaceError) connection already closed exceptions (in the get_session finally clause).
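
A quick sanity check on the numbers: "size 5 overflow 5" means SQLAlchemy allows at most 5 + 5 = 10 connections per container, which matches the configured limit, and 4 containers × 10 connections = 40, still under the database's 50-connection cap. So these timeouts look like connections being held ("idle in transaction") rather than the database actually running out of connections.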

Improvements

  • We can probably catch and safely ignore the sqlalchemy.exc.InterfaceError: (psycopg2.InterfaceError) connection already closed raised when closing the session in get_session (see the sketch below).
  • We should use the baseline 1 config for now!
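
A minimal sketch of what that could look like in the get_session dependency; the session factory and connection URL here are placeholders, not the project's actual setup:

```python
from sqlalchemy import create_engine
from sqlalchemy.exc import InterfaceError
from sqlalchemy.orm import sessionmaker

engine = create_engine("postgresql+psycopg2://user:password@host/dbname")  # placeholder URL
SessionLocal = sessionmaker(bind=engine)

def get_session():
    session = SessionLocal()
    try:
        yield session
    finally:
        try:
            session.close()
        except InterfaceError:
            # The underlying psycopg2 connection was already closed (as seen in
            # the baseline logs); swallowing it here avoids turning cleanup
            # failures into spurious 500s.
            pass
```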

With these baselines established, we can try to improve the default settings of the production deployment, and we have a good starting point for measuring the effectiveness of particular optimizations.
