Description
We have a use case that requires fetching 100K-1M filtered records directly from Pinot servers with minimal performance impact. Each record has between 5 and 10 columns. We noticed that fetching 500K records through the default path (Pinot servers -> Pinot broker -> client) is a challenge for the broker.
One reason is that the Pinot dbapi client uses HTTP/JSON communication, which is inefficient for large result sets. The Pinot connectors for Presto and Spark fetch large result sets directly from Pinot servers using a more efficient communication method: gRPC + streaming. This method has less impact on Pinot servers and allows fetching larger result sets quickly.
Can you add gRPC + streaming support to the Pinot Python client?
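To make the ask concrete, here is a rough sketch of the client-side flow we have in mind: open a stream per server and consume record batches incrementally, instead of waiting for one large broker-assembled JSON body. The function and batch contents below are illustrative only (they are not real pinotdb APIs); a real implementation would presumably use grpcio with stubs generated from Pinot's server proto, as the Presto/Spark connectors do. The stream is stood in by a plain generator so the sketch runs without a server.

```python
# Hypothetical sketch of gRPC + streaming consumption for the Pinot
# Python client. All names here are illustrative assumptions; the
# generator below stands in for a real server-side streaming RPC
# (e.g. a stub's streaming Submit call in a real implementation).

def server_stream(query):
    """Stand-in for a streaming RPC: yields record batches as they arrive."""
    # In a real client, each yielded chunk would be a serialized data
    # block from one Pinot server, decoded into rows on the client side.
    batches = [
        [(1, "applied"), (2, "screened")],
        [(3, "interviewed"), (4, "offered")],
    ]
    for batch in batches:
        yield batch

def fetch_all(query):
    """Consume the stream batch by batch; no broker-side reduce involved."""
    rows = []
    for batch in server_stream(query):
        rows.extend(batch)  # callers could also process per batch to bound memory
    return rows

rows = fetch_all("SELECT id, stage FROM ApplicationStage WHERE ...")
print(len(rows))  # 4
```

The key property is that the client sees rows batch by batch and can process or discard them as they stream in, rather than materializing the full 100K-1M row result on the broker first.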
[More details]
We noticed high CPU utilization on Pinot brokers. The following chart shows that Pinot brokers spend most of their time on the Reduce operation. Note that the queries in question are simple SELECT + WHERE queries (no aggregations, no group by, no joins).
Reduce operation: Time spent by broker in combining query results from multiple servers.
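For a plain SELECT query, the Reduce step is essentially concatenating every server's rows into one table and serializing the whole thing as a single JSON payload. The toy example below (not Pinot internals, just an illustration of the shape of the work) shows why this cost scales with total result size:

```python
# Toy illustration of the broker's Reduce step for a selection-only
# query: collect full responses from every server, merge them, then
# serialize one JSON payload for the client. For 500K+ rows this
# merge-and-serialize work is what the chart below measures.
# (Made-up data; not Pinot's actual internal representation.)
import json

server_responses = [
    [[1, "applied"], [2, "screened"]],   # rows returned by server A
    [[3, "interviewed"]],                # rows returned by server B
]

# Reduce: concatenate all per-server row blocks into one result table...
merged = [row for resp in server_responses for row in resp]

# ...then serialize the entire result set at once before responding.
payload = json.dumps({"resultTable": {"rows": merged}})

print(len(merged))  # 3
```

With gRPC streaming from the servers, this merge-and-serialize step on the broker is bypassed entirely for large selection queries.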
Broker Avg. P99 reduce operation:

[Chart: broker average P99 reduce time per query type]
To summarize the above chart, the broker spends:
- between 1s and 3.5s combining responses for ApplicationStage queries.
- between 1s and 4.5s combining responses for ApplicationMilestone queries.
- up to 1s combining responses for ATSApplicant queries.
💡 The chart explains where 1s to 3s-4s of ApplicationStage and ApplicationMilestone query latency goes: the broker combining responses and serializing them into JSON before responding to the Reports Pinot client.