**Is your feature request related to a problem? Please describe.**
Right now, it doesn't seem possible to pass a `prompt_cache_key` with `CreateModelResponseQuery`. According to the prompt caching documentation, that parameter is useful for improving cache hit rates: https://platform.openai.com/docs/guides/prompt-caching
**Describe the solution you'd like**
A new optional `prompt_cache_key` parameter on `CreateModelResponseQuery` that is encoded into the request body.
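For illustration, here is a minimal sketch of how the parameter could be encoded, assuming the query type is `Codable` like the library's other query types. `MinimalResponseQuery` and the property name `promptCacheKey` are hypothetical stand-ins for this proposal, not the library's actual API:

```swift
import Foundation

// Hypothetical, stripped-down stand-in for CreateModelResponseQuery,
// reduced to the fields needed to illustrate the proposed change.
struct MinimalResponseQuery: Codable {
    let model: String
    let input: String
    /// Proposed: an optional cache key, serialized as `prompt_cache_key`
    /// and omitted from the JSON body when nil.
    let promptCacheKey: String?

    enum CodingKeys: String, CodingKey {
        case model
        case input
        case promptCacheKey = "prompt_cache_key"
    }
}

// Usage: the key appears in the encoded request body only when it is set,
// since synthesized Codable conformance skips nil optionals.
let query = MinimalResponseQuery(
    model: "gpt-4o",
    input: "Hello",
    promptCacheKey: "my-app-session-42"
)
let data = try! JSONEncoder().encode(query) // force-try for brevity
print(String(data: data, encoding: .utf8)!)
// Prints something like (key order may vary):
// {"model":"gpt-4o","input":"Hello","prompt_cache_key":"my-app-session-42"}
```

Keeping the property optional means existing call sites keep compiling, and requests without a cache key are byte-for-byte identical to today's.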