diff --git a/api-reference/admin-api/control-plane/virtual-keys/create-virtual-key.mdx b/api-reference/admin-api/control-plane/virtual-keys/create-virtual-key.mdx index bf5f5c7e..07d78056 100644 --- a/api-reference/admin-api/control-plane/virtual-keys/create-virtual-key.mdx +++ b/api-reference/admin-api/control-plane/virtual-keys/create-virtual-key.mdx @@ -3,6 +3,10 @@ title: Create Virtual Key openapi: post /virtual-keys --- + +**Deprecated.** Use the [Integrations API](/api-reference/admin-api/control-plane/integrations/create-integration) to store provider credentials and the [Providers API](/api-reference/admin-api/control-plane/providers/create-provider) to create AI Providers in your workspace. Existing virtual keys continue to work — no code changes needed. + + #### Azure OpenAI Create virtual key to access your Azure OpenAI models or deployments, and manage all auth in one place. @@ -247,6 +251,6 @@ main(); #### Vertex AI Create virtual key to access any models available or hosted on Vertex AI. [Docs →](/integrations/llms/vertex-ai) - -Securely store your provider auth in Portkey vault and democratize and streamline access to Gen AI. + +Manage AI providers and models centrally with budget limits, rate limits, and model provisioning. diff --git a/api-reference/admin-api/control-plane/virtual-keys/delete-virtual-key.mdx b/api-reference/admin-api/control-plane/virtual-keys/delete-virtual-key.mdx index f29e0b68..c7f27eea 100644 --- a/api-reference/admin-api/control-plane/virtual-keys/delete-virtual-key.mdx +++ b/api-reference/admin-api/control-plane/virtual-keys/delete-virtual-key.mdx @@ -2,3 +2,7 @@ title: Delete Virtual Key openapi: delete /virtual-keys/{slug} --- + + +**Deprecated.** Use the [Providers API](/api-reference/admin-api/control-plane/providers/delete-provider) instead. Existing virtual keys continue to work — no code changes needed. 
+ diff --git a/api-reference/admin-api/control-plane/virtual-keys/list-virtual-keys.mdx b/api-reference/admin-api/control-plane/virtual-keys/list-virtual-keys.mdx index cc325306..f86b7409 100644 --- a/api-reference/admin-api/control-plane/virtual-keys/list-virtual-keys.mdx +++ b/api-reference/admin-api/control-plane/virtual-keys/list-virtual-keys.mdx @@ -2,3 +2,7 @@ title: List Virtual Key openapi: get /virtual-keys --- + + +**Deprecated.** Use the [Providers API](/api-reference/admin-api/control-plane/providers/list-providers) instead. Existing virtual keys continue to work — no code changes needed. + diff --git a/api-reference/admin-api/control-plane/virtual-keys/retrieve-virtual-key.mdx b/api-reference/admin-api/control-plane/virtual-keys/retrieve-virtual-key.mdx index 42545fb1..7c74fba6 100644 --- a/api-reference/admin-api/control-plane/virtual-keys/retrieve-virtual-key.mdx +++ b/api-reference/admin-api/control-plane/virtual-keys/retrieve-virtual-key.mdx @@ -2,3 +2,7 @@ title: Retrieve Virtual Key openapi: get /virtual-keys/{slug} --- + + +**Deprecated.** Use the [Providers API](/api-reference/admin-api/control-plane/providers/retrieve-provider) instead. Existing virtual keys continue to work — no code changes needed. + diff --git a/api-reference/admin-api/control-plane/virtual-keys/update-virtual-key.mdx b/api-reference/admin-api/control-plane/virtual-keys/update-virtual-key.mdx index 59f57f83..b6e4cd9f 100644 --- a/api-reference/admin-api/control-plane/virtual-keys/update-virtual-key.mdx +++ b/api-reference/admin-api/control-plane/virtual-keys/update-virtual-key.mdx @@ -2,3 +2,7 @@ title: Update Virtual Key openapi: put /virtual-keys/{slug} --- + + +**Deprecated.** Use the [Providers API](/api-reference/admin-api/control-plane/providers/update-provider) instead. Existing virtual keys continue to work — no code changes needed. 
+ diff --git a/api-reference/admin-api/introduction.mdx b/api-reference/admin-api/introduction.mdx index 528ccc21..a66d2a6c 100644 --- a/api-reference/admin-api/introduction.mdx +++ b/api-reference/admin-api/introduction.mdx @@ -19,8 +19,8 @@ At the foundation of Portkey are the resources that define how your AI implement Create and manage configuration profiles that define routing rules, model settings, and more. - - Manage virtual API keys that provide customized access to specific configurations. + + Manage AI providers and credentials across workspaces. Replaces Virtual Keys. Create and manage API keys for accessing Portkey services. @@ -172,7 +172,7 @@ Both key types have different capabilities. This table clarifies which operation | Create/manage workspaces | ✅ | ❌ | | Manage users and permissions | ✅ | ❌ | | Create/manage configs | ✅ (All workspaces) | ✅ (Single workspace) | -| Create/manage virtual keys | ✅ (All workspaces) | ✅ (Single workspace) | +| Create/manage providers | ✅ (All workspaces) | ✅ (Single workspace) | | Access Analytics | ✅ (All workspaces) | ✅ (Single workspace) | | Create/update feedback | ❌ | ✅ | diff --git a/api-reference/inference-api/authentication.mdx b/api-reference/inference-api/authentication.mdx index 2971b75d..2116fb2c 100644 --- a/api-reference/inference-api/authentication.mdx +++ b/api-reference/inference-api/authentication.mdx @@ -31,7 +31,7 @@ import Portkey from 'portkey-ai' const portkey = new Portkey({ apiKey: "PORTKEY_API_KEY", // Replace with your actual API key - virtualKey: "VIRTUAL_KEY" // Optional: Use for virtual key management + provider: "@openai-prod" // Optional: AI Provider slug from Model Catalog }) const chatCompletion = await portkey.chat.completions.create({ @@ -48,7 +48,7 @@ from portkey_ai import Portkey client = Portkey( api_key="PORTKEY_API_KEY", # Replace with your actual API key - provider="@VIRTUAL_KEY" # Optional: Use if virtual keys are set up + provider="@openai-prod" # Optional: AI Provider 
slug from Model Catalog ) chat_completion = client.chat.completions.create( @@ -65,7 +65,7 @@ print(chat_completion.choices[0].message["content"]) curl https://api.portkey.ai/v1/chat/completions \ -H "Content-Type: application/json" \ -H "x-portkey-api-key: $PORTKEY_API_KEY" \ - -H "x-portkey-virtual-key: $VIRTUAL_KEY" \ + -H "x-portkey-provider: @openai-prod" \ -d '{ "model": "gpt-4o", "messages": [ diff --git a/api-reference/inference-api/gateway-for-other-apis.mdx b/api-reference/inference-api/gateway-for-other-apis.mdx index 6c55d7c5..46377596 100644 --- a/api-reference/inference-api/gateway-for-other-apis.mdx +++ b/api-reference/inference-api/gateway-for-other-apis.mdx @@ -39,8 +39,8 @@ Create or log in to your Portkey account. Grab your account's API key from the [ Choose one of these authentication methods: - -Portkey integrates with 40+ LLM providers. Add your provider credentials (such as API key) to Portkey, and get a virtual key that you can use to authenticate and send your requests. + +Add your provider credentials to [Model Catalog](https://app.portkey.ai/model-catalog) and use the AI Provider slug to authenticate your requests. @@ -48,7 +48,7 @@ Portkey integrates with 40+ LLM providers. 
Add your provider credentials (such a curl https://api.portkey.ai/v1/rerank \ -H "Content-Type: application/json" \ -H "x-portkey-api-key: $PORTKEY_API_KEY" \ - -H "x-portkey-virtual-key: $PORTKEY_PROVIDER" \ + -H "x-portkey-provider: @cohere-prod" \ ``` ```py Python @@ -56,7 +56,7 @@ from portkey_ai import Portkey portkey = Portkey( api_key = "PORTKEY_API_KEY", - virtual_key = "PROVIDER" + provider = "@cohere-prod" ) ``` @@ -65,15 +65,15 @@ import Portkey from 'portkey-ai'; const portkey = new Portkey({ apiKey: 'PORTKEY_API_KEY', - virtualKey: 'PROVIDER' + provider: '@cohere-prod' }); ``` -Creating virtual keys lets you: -- Manage all credentials in one place +Model Catalog lets you: +- Manage all provider credentials in one place +- Set budget limits & rate limits per provider - Rotate between different provider keys -- Set custom budget limits & rate limits per key @@ -165,7 +165,7 @@ curl --request POST \ --url https://api.portkey.ai/v1/rerank \ --header 'Content-Type: application/json' \ --header 'x-portkey-api-key: $PORTKEY_API_KEY' \ - --header 'x-portkey-virtual-key: $COHERE_VIRTUAL_KEY' \ + --header 'x-portkey-provider: @cohere-prod' \ --data '{ "model": "rerank-english-v2.0", "query": "What is machine learning?", @@ -181,7 +181,7 @@ curl --request GET \ --url https://api.portkey.ai/v1/collections \ --header 'Content-Type: application/json' \ --header 'x-portkey-api-key: $PORTKEY_API_KEY' \ - --header 'x-portkey-virtual-key: $PROVIDER' + --header 'x-portkey-provider: @provider-prod' ``` ```bash PUT @@ -189,7 +189,7 @@ curl --request PUT \ --url https://api.portkey.ai/v1/collections/my-collection \ --header 'Content-Type: application/json' \ --header 'x-portkey-api-key: $PORTKEY_API_KEY' \ - --header 'x-portkey-virtual-key: $PROVIDER' \ + --header 'x-portkey-provider: @provider-prod' \ --data '{ "metadata": { "description": "Updated collection description" @@ -202,7 +202,7 @@ curl --request DELETE \ --url https://api.portkey.ai/v1/collections/my-collection 
\ --header 'Content-Type: application/json' \ --header 'x-portkey-api-key: $PORTKEY_API_KEY' \ - --header 'x-portkey-virtual-key: $PROVIDER' + --header 'x-portkey-provider: @provider-prod' ``` @@ -279,7 +279,7 @@ import Portkey from 'portkey-ai'; const portkey = new Portkey({ apiKey: "PORTKEY_API_KEY", - virtualKey: "PROVIDER" + provider: "@cohere-prod" }); const response = await portkey.post('/rerank', { @@ -297,7 +297,7 @@ import Portkey from 'portkey-ai'; const portkey = new Portkey({ apiKey: "PORTKEY_API_KEY", - virtualKey: "PROVIDER" + provider: "@cohere-prod" }); const response = await portkey.get('/collections'); @@ -308,7 +308,7 @@ import Portkey from 'portkey-ai'; const portkey = new Portkey({ apiKey: "PORTKEY_API_KEY", - virtualKey: "PROVIDER" + provider: "@cohere-prod" }); const response = await portkey.put('/collections/my-collection', { @@ -323,7 +323,7 @@ import Portkey from 'portkey-ai'; const portkey = new Portkey({ apiKey: "PORTKEY_API_KEY", - virtualKey: "PROVIDER" + provider: "@cohere-prod" }); const response = await portkey.delete('/collections/my-collection'); diff --git a/api-reference/inference-api/headers.mdx b/api-reference/inference-api/headers.mdx index 13a4e895..8b9ace84 100644 --- a/api-reference/inference-api/headers.mdx +++ b/api-reference/inference-api/headers.mdx @@ -90,10 +90,12 @@ const portkey = new Portkey({ -### 2. Virtual Key +### 2. AI Provider - -Save your provider auth on Portkey and use a virtual key to directly make a call. [Docs](/product/ai-gateway/virtual-keys)) + +Specify your AI Provider slug (from [Model Catalog](/product/model-catalog)) to route requests through a managed provider. Use the `@provider-slug` format. ([Docs](/product/model-catalog)) + +The `x-portkey-virtual-key` / `virtual_key` / `virtualKey` parameter is the legacy equivalent and still works for backward compatibility. 
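The precedence between the new `x-portkey-provider` header and the legacy `x-portkey-virtual-key` header can be sketched as a plain helper. The header names are the ones documented on this page; the function itself is purely illustrative, not SDK code:

```python
def build_portkey_headers(api_key, provider_slug=None, virtual_key=None):
    """Assemble Portkey auth headers (illustrative sketch, not SDK code).

    `x-portkey-provider` with an '@'-prefixed slug is the current form;
    `x-portkey-virtual-key` is the legacy equivalent and still works.
    """
    headers = {"x-portkey-api-key": api_key}
    if provider_slug is not None:
        # preferred: AI Provider slug from Model Catalog, e.g. "@openai-prod"
        headers["x-portkey-provider"] = provider_slug
    elif virtual_key is not None:
        # legacy fallback, kept for backward compatibility
        headers["x-portkey-virtual-key"] = virtual_key
    return headers
```

When both are supplied, the sketch prefers the provider slug, mirroring the guidance above to migrate toward `provider`.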
@@ -102,7 +104,7 @@ Save your provider auth on Portkey and use a virtual key to directly make a call ```sh cURL {3} curl https://api.portkey.ai/v1/chat/completions \ -H "x-portkey-api-key: $PORTKEY_API_KEY" \ - -H "x-portkey-virtual-key: openai-virtual-key" \ + -H "x-portkey-provider: @openai-prod" \ ``` ```py Python {5} @@ -110,7 +112,7 @@ from portkey_ai import Portkey portkey = Portkey( api_key = "PORTKEY_API_KEY", # defaults to os.environ.get("PORTKEY_API_KEY") - virtual_key = "openai-virtual-key" + provider = "@openai-prod" # Your AI Provider slug from Model Catalog ) ``` @@ -119,7 +121,7 @@ import Portkey from 'portkey-ai'; const portkey = new Portkey({ apiKey: 'PORTKEY_API_KEY', // defaults to process.env["PORTKEY_API_KEY"] - virtualKey: 'openai-virtual-key' + provider: '@openai-prod' // Your AI Provider slug from Model Catalog }); ``` @@ -229,7 +231,7 @@ An ID you can pass to refer to one or more requests later on. If not provided, P ```sh cURL {4} curl https://api.portkey.ai/v1/chat/completions \ -H "x-portkey-api-key: $PORTKEY_API_KEY" \ - -H "x-portkey-virtual-key: openai-virtual-key" \ + -H "x-portkey-provider: @openai-prod" \ -H "x-portkey-trace-id: test-request" \ ``` @@ -238,7 +240,7 @@ from portkey_ai import Portkey portkey = Portkey( api_key = "PORTKEY_API_KEY", # defaults to os.environ.get("PORTKEY_API_KEY") - virtual_key = "openai-virtual-key", + provider = "@openai-prod", trace_id = "test-request" ) ``` @@ -248,7 +250,7 @@ import Portkey from 'portkey-ai'; const portkey = new Portkey({ apiKey: 'PORTKEY_API_KEY', // defaults to process.env["PORTKEY_API_KEY"] - virtualKey: "openai-virtual-key", + provider: "@openai-prod", traceId: "test-request" }); ``` @@ -266,7 +268,7 @@ You can include the special metadata type `_user` to associate requests with spe ```sh cURL {4} curl https://api.portkey.ai/v1/chat/completions \ -H "x-portkey-api-key: $PORTKEY_API_KEY" \ - -H "x-portkey-virtual-key: openai-virtual-key" \ + -H "x-portkey-provider: @openai-prod" 
\ -H "x-portkey-metadata: {'_user': 'user_id_123', 'foo': 'bar'}" \ ``` @@ -275,7 +277,7 @@ from portkey_ai import Portkey portkey = Portkey( api_key = "PORTKEY_API_KEY", # defaults to os.environ.get("PORTKEY_API_KEY") - virtual_key = "openai-virtual-key", + provider = "@openai-prod", metadata = {"_user": "user_id_123", "foo": "bar"}" ) ``` @@ -285,7 +287,7 @@ import Portkey from 'portkey-ai'; const portkey = new Portkey({ apiKey: 'PORTKEY_API_KEY', // defaults to process.env["PORTKEY_API_KEY"] - virtualKey: "openai-virtual-key", + provider: "@openai-prod", metadata: {"_user": "user_id_123", "foo": "bar"}" }); ``` @@ -303,7 +305,7 @@ Expects `true` or `false` See the caching documentation for more information. ([ ```sh cURL {4} curl https://api.portkey.ai/v1/chat/completions \ -H "x-portkey-api-key: $PORTKEY_API_KEY" \ - -H "x-portkey-virtual-key: openai-virtual-key" \ + -H "x-portkey-provider: @openai-prod" \ -H "x-portkey-cache-force-refresh: true" \ ``` @@ -312,7 +314,7 @@ from portkey_ai import Portkey portkey = Portkey( api_key = "PORTKEY_API_KEY", # defaults to os.environ.get("PORTKEY_API_KEY") - virtual_key = "openai-virtual-key", + provider = "@openai-prod", cache_force_refresh = True ) ``` @@ -322,7 +324,7 @@ import Portkey from 'portkey-ai'; const portkey = new Portkey({ apiKey: 'PORTKEY_API_KEY', // defaults to process.env["PORTKEY_API_KEY"] - virtualKey: "openai-virtual-key", + provider: "@openai-prod", cacheForceRefresh: True }); ``` @@ -339,7 +341,7 @@ Partition your cache store based on custom strings, ignoring metadata and other ```sh cURL {4} curl https://api.portkey.ai/v1/chat/completions \ -H "x-portkey-api-key: $PORTKEY_API_KEY" \ - -H "x-portkey-virtual-key: openai-virtual-key" \ + -H "x-portkey-provider: @openai-prod" \ -H "x-portkey-cache-namespace: any-string" \ ``` @@ -348,7 +350,7 @@ from portkey_ai import Portkey portkey = Portkey( api_key = "PORTKEY_API_KEY", # defaults to os.environ.get("PORTKEY_API_KEY") - virtual_key = 
"openai-virtual-key", + provider = "@openai-prod", cache_namespace = "any-string" ) ``` @@ -358,7 +360,7 @@ import Portkey from 'portkey-ai'; const portkey = new Portkey({ apiKey: 'PORTKEY_API_KEY', // defaults to process.env["PORTKEY_API_KEY"] - virtualKey: "openai-virtual-key", + provider: "@openai-prod", cacheNamespace: "any-string" }); ``` @@ -375,7 +377,7 @@ Set timeout after which a request automatically terminates. The time is set in m ```sh cURL {4} curl https://api.portkey.ai/v1/chat/completions \ -H "x-portkey-api-key: $PORTKEY_API_KEY" \ - -H "x-portkey-virtual-key: openai-virtual-key" \ + -H "x-portkey-provider: @openai-prod" \ -H "x-portkey-request-timeout: 3000" \ ``` @@ -384,7 +386,7 @@ from portkey_ai import Portkey portkey = Portkey( api_key = "PORTKEY_API_KEY", # defaults to os.environ.get("PORTKEY_API_KEY") - virtual_key = "openai-virtual-key", + provider = "@openai-prod", request_timeout = 3000 ) ``` @@ -394,7 +396,7 @@ import Portkey from 'portkey-ai'; const portkey = new Portkey({ apiKey: 'PORTKEY_API_KEY', // defaults to process.env["PORTKEY_API_KEY"] - virtualKey: "openai-virtual-key", + provider: "@openai-prod", reqiestTimeout: 3000 }); ``` @@ -416,7 +418,7 @@ Pass all the headers you want to forward directly in this array. 
([Docs](https:/ ```sh cURL {4-6} curl https://api.portkey.ai/v1/chat/completions \ -H "x-portkey-api-key: $PORTKEY_API_KEY" \ - -H "x-portkey-virtual-key: openai-virtual-key" \ + -H "x-portkey-provider: @openai-prod" \ -H "X-Custom-Header: ...."\ -H "Another-Header: ....."\ -H "x-portkey-forward-headers: ['X-Custom-Header', 'Another-Header']" \ @@ -427,7 +429,7 @@ from portkey_ai import Portkey portkey = Portkey( api_key = "PORTKEY_API_KEY", # defaults to os.environ.get("PORTKEY_API_KEY") - virtual_key = "openai-virtual-key", + provider = "@openai-prod", X_Custom_Header = "....", Another_Header = "....", # The values in forward_headers list must be the original header names @@ -440,7 +442,7 @@ import Portkey from 'portkey-ai'; const portkey = new Portkey({ apiKey: 'PORTKEY_API_KEY', // defaults to process.env["PORTKEY_API_KEY"] - virtualKey: "openai-virtual-key", + provider: "@openai-prod", CustomHeader: "....", AnotherHeader: "....", forwardHeaders: ['CustomHeader', 'AnotherHeader'] @@ -494,7 +496,7 @@ Portkey adheres to language-specific naming conventions: | Parameter | Type | Key | | :--- | :--- | :--- | | **API Key** Your Portkey account's API Key. | stringrequired | `apiKey` | -| **Virtual Key** The virtual key created from Portkey's vault for a specific provider | string | `virtualKey` | +| **Virtual Key** *(Legacy — use `provider` with `@provider-slug` instead)* | string | `virtualKey` | | **Config** The slug or [config object](/api-reference/inference-api/config-object) to use | stringobject | `config` | | **Provider** The AI provider to use for your calls. ([supported providers](/integrations/llms#supported-ai-providers)). | string | `provider` | | **Base URL** You can edit the URL of the gateway to use. 
Needed if you're [self-hosting the AI gateway](https://github.com/Portkey-AI/gateway/blob/main/docs/installation-deployments.md) | string | `baseURL` | @@ -514,7 +516,7 @@ Portkey adheres to language-specific naming conventions: | Parameter | Type | Key | | :--- | :--- | :--- | | **API Key** Your Portkey account's API Key. | stringrequired | `api_key` | -| **Virtual Key** The virtual key created from Portkey's vault for a specific provider | string | `virtual_key` | +| **Virtual Key** *(Legacy — use `provider` with `@provider-slug` instead)* | string | `virtual_key` | | **Config** The slug or [config object](/api-reference/inference-api/config-object) to use | stringobject | `config` | | **Provider** The AI provider to use for your calls. ([supported providers](/integrations/llms#supported-ai-providers)). | string | `provider` | | **Base URL** You can edit the URL of the gateway to use. Needed if you're [self-hosting the AI gateway](https://github.com/Portkey-AI/gateway/blob/main/docs/installation-deployments.md) | string | `base_url` | @@ -532,7 +534,7 @@ Portkey adheres to language-specific naming conventions: | Parameter | Type | Header Key | | :--- | :--- | :--- | | **API Key** Your Portkey account's API Key. | stringrequired | `x-portkey-api-key` | -| **Virtual Key** The virtual key created from Portkey's vault for a specific provider | string | `x-portkey-virtual-key` | +| **Virtual Key** *(Legacy — use `x-portkey-provider` with `@provider-slug` instead)* | string | `x-portkey-virtual-key` | | **Config** The slug or [config object](/api-reference/inference-api/config-object) to use | string | `x-portkey-config` | | **Provider** The AI provider to use for your calls. ([supported providers](/integrations/llms#supported-ai-providers)). | string | `x-portkey-provider` | | **Base URL** You can edit the URL of the gateway to use. 
Needed if you're [self-hosting the AI gateway](https://github.com/Portkey-AI/gateway/blob/main/docs/installation-deployments.md) | string | Change the request URL | @@ -558,7 +560,7 @@ You can send these headers in multiple ways: curl https://api.portkey.ai/v1/chat/completions \ -H "Content-Type: application/json" \ -H "x-portkey-api-key: PORTKEY_API_KEY" \ - -H "x-portkey-virtual-key: VIRTUAL_KEY" \ + -H "x-portkey-provider: @openai-prod" \ -H "x-portkey-trace-id: your_trace_id" \ -H "x-portkey-metadata: {\"_user\": \"user_12345\"}" \ -d '{ @@ -572,7 +574,7 @@ from portkey_ai import Portkey portkey = Portkey( api_key="PORTKEY_API_KEY", - provider="@VIRTUAL_KEY", + provider="@openai-prod", config="CONFIG_ID" ) @@ -592,7 +594,7 @@ import Portkey from 'portkey-ai'; const portkey = new Portkey({ apiKey: "PORTKEY_API_KEY", - virtualKey: "VIRTUAL_KEY", + provider: "@openai-prod", config: "CONFIG_ID" }); @@ -619,7 +621,7 @@ client = OpenAI( base_url="https://api.portkey.ai/v1", default_headers=createHeaders({ "apiKey": "PORTKEY_API_KEY", - "virtualKey": "VIRTUAL_KEY" + "provider": "@openai-prod" }) ) @@ -644,7 +646,7 @@ const client = new OpenAI({ baseURL: "https://api.portkey.ai/v1", defaultHeaders: createHeaders({ apiKey: "PORTKEY_API_KEY", - virtualKey: "VIRTUAL_KEY" + provider: "@openai-prod" }) }); diff --git a/api-reference/sdk/c-sharp.mdx b/api-reference/sdk/c-sharp.mdx index 3dea881d..dc1d7f6d 100644 --- a/api-reference/sdk/c-sharp.mdx +++ b/api-reference/sdk/c-sharp.mdx @@ -432,7 +432,7 @@ messages.Add(new AssistantChatMessage(completion)); ``` -Switching providers is just a matter of swapping out your virtual key. Change the virtual key to Anthropic, set the model name, and start making requests to Anthropic from the OpenAI .NET library. +Switching providers is just a matter of changing your AI Provider slug. Change it to Anthropic, set the model name, and start making requests to Anthropic from the OpenAI .NET library. 
```csharp {41,44} [expandable] using OpenAI; @@ -487,7 +487,7 @@ public class Program ``` -Similarly, just change your virtual key to Vertex virtual key: +Similarly, just change your provider to the Vertex AI Provider: ```csharp {41,44} [expandable] using OpenAI; diff --git a/api-reference/sdk/node.mdx b/api-reference/sdk/node.mdx index 327c7e85..a2236373 100644 --- a/api-reference/sdk/node.mdx +++ b/api-reference/sdk/node.mdx @@ -119,7 +119,7 @@ Here's how you can use these headers with the Node.js SDK: ```js const portkey = new Portkey({ apiKey: "PORTKEY_API_KEY", - virtualKey: "VIRTUAL_KEY", + provider: "@openai-prod", // Add any other headers from the reference }); diff --git a/api-reference/sdk/python.mdx b/api-reference/sdk/python.mdx index 6437e51f..f83a9c25 100644 --- a/api-reference/sdk/python.mdx +++ b/api-reference/sdk/python.mdx @@ -45,7 +45,7 @@ from portkey_ai import Portkey client = Portkey( api_key="your_api_key_here", # Or use the env var PORTKEY_API_KEY - provider="@your_virtual_key_here" # Or use config="cf-***" + provider="@openai-prod" # Or use config="cf-***" ) response = client.chat.completions.create( @@ -54,20 +54,20 @@ response = client.chat.completions.create( ) ``` - You can use either a Virtual Key or a Config object to select your AI provider. Find more info on different authentication mechanisms [here](/api-reference/inference-api/headers#provider-authentication). + Use an AI Provider slug or a Config object to select your AI provider. Find more info on different authentication mechanisms [here](/api-reference/inference-api/headers#provider-authentication). ## Authentication & Configuration The SDK requires: - **Portkey API Key**: Your Portkey API key (env var `PORTKEY_API_KEY` recommended) - **Provider Authentication**: - - **Virtual Key**: The [Virtual Key](/product/ai-gateway/virtual-keys#using-virtual-keys) of your chosen AI provider + - **Provider Slug**: The [AI Provider](/product/model-catalog) slug (e.g. 
`@openai-prod`) from Model Catalog - **Config**: The [Config object](/api-reference/inference-api/config-object) or config slug for advanced routing - **Provider Slug + Auth Headers**: Useful if you do not want to save your API keys to Portkey and make direct requests. ```python -# With Virtual Key -portkey = Portkey(api_key="...", provider="@...") +# With AI Provider slug +portkey = Portkey(api_key="...", provider="@openai-prod") # With Config portkey = Portkey(api_key="...", config="cf-***") @@ -87,7 +87,7 @@ from portkey_ai import AsyncPortkey portkey = AsyncPortkey( api_key="PORTKEY_API_KEY", - provider="@VIRTUAL_KEY" + provider="@openai-prod" ) async def main(): @@ -117,7 +117,7 @@ custom_client = httpx.Client(verify=False) portkey = Portkey( api_key="your_api_key_here", - provider="@your_virtual_key_here", + provider="@openai-prod", http_client=custom_client ) diff --git a/guides/getting-started/function-calling.mdx b/guides/getting-started/function-calling.mdx index c8073c6b..561da22b 100644 --- a/guides/getting-started/function-calling.mdx +++ b/guides/getting-started/function-calling.mdx @@ -20,7 +20,7 @@ import Portkey from "portkey-ai"; const portkey = new Portkey({ apiKey: "PORTKEY_API_KEY", - virtualKey: "ANYSCALE_VIRTUAL_KEY", + provider: "@anyscale-prod", }); // Describing what the Weather API does and expects diff --git a/guides/prompts/llama-prompts.mdx b/guides/prompts/llama-prompts.mdx index fd7208f6..ac95ed53 100644 --- a/guides/prompts/llama-prompts.mdx +++ b/guides/prompts/llama-prompts.mdx @@ -95,10 +95,6 @@ To add a provider: 2. Click **Add Provider** and select your provider (e.g., OpenAI) 3. Enter your API key and name your provider (e.g., `openai-prod`) - - - - Your provider slug will be `@openai-prod` (or whatever name you chose with @ prefix). 
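The naming rule above (provider slug = the provider name with an `@` prefix) can be captured in a one-line helper. This function is hypothetical and only illustrates the stated convention:

```python
def provider_slug(name):
    """Turn a Model Catalog provider name into its '@'-prefixed slug.

    Hypothetical helper: the docs above state the slug is simply the
    provider name with an '@' prefix (e.g. 'openai-prod' -> '@openai-prod').
    Already-prefixed names pass through unchanged.
    """
    return name if name.startswith("@") else "@" + name
```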
diff --git a/guides/use-cases/deepseek-r1.mdx b/guides/use-cases/deepseek-r1.mdx index 22ea41aa..d95e8675 100644 --- a/guides/use-cases/deepseek-r1.mdx +++ b/guides/use-cases/deepseek-r1.mdx @@ -19,7 +19,7 @@ All of this is made possible through Portkey's AI Gateway, which provides a unif ## Accessing DeepSeek R1 Through Multiple Providers -DeepSeek R1 is available across several major cloud providers, and with Portkey's unified API, the implementation remains consistent regardless of your chosen provider. All you need is the appropriate virtual key for your desired provider. +DeepSeek R1 is available across several major cloud providers, and with Portkey's unified API, the implementation remains consistent regardless of your chosen provider. All you need is the AI Provider slug for your desired provider. ### Basic Implementation diff --git a/guides/use-cases/setting-up-resilient-load-balancers-with-failure-mitigating-fallbacks.mdx b/guides/use-cases/setting-up-resilient-load-balancers-with-failure-mitigating-fallbacks.mdx index 86330339..9b2aec7d 100644 --- a/guides/use-cases/setting-up-resilient-load-balancers-with-failure-mitigating-fallbacks.mdx +++ b/guides/use-cases/setting-up-resilient-load-balancers-with-failure-mitigating-fallbacks.mdx @@ -9,7 +9,7 @@ This cookbook will teach you how to utilize Portkey to distribute traffic across Prerequisites: -You should have the [Portkey API Key](https://portkey.ai/docs/api-reference/authentication#obtaining-your-api-key). Please sign up to obtain it. Additionally, you should have stored the OpenAI, Azure OpenAI, and Anthropic details in the [Portkey vault](https://portkey.ai/docs/product/ai-gateway-streamline-llm-integrations/virtual-keys). +You should have the [Portkey API Key](https://portkey.ai/docs/api-reference/authentication#obtaining-your-api-key). Please sign up to obtain it. 
Additionally, you should have added the OpenAI, Azure OpenAI, and Anthropic providers to [Model Catalog](https://app.portkey.ai/model-catalog). ## 1\. Import the SDK and authenticate Portkey diff --git a/guides/use-cases/track-costs-using-metadata.mdx b/guides/use-cases/track-costs-using-metadata.mdx index c5c5fd97..cf4a32a7 100644 --- a/guides/use-cases/track-costs-using-metadata.mdx +++ b/guides/use-cases/track-costs-using-metadata.mdx @@ -37,7 +37,7 @@ from portkey import Portkey portkey = Portkey( api_key="PORTKEY_API_KEY", - provider="@OPENAI_VIRTUAL_KEY" + provider="@openai-prod" ) @@ -48,7 +48,7 @@ response = portkey.with_options( "env": "production" }).chat.completions.create( messages = [{ "role": 'user', "content": 'What is 1729' }], - model = 'gpt-4' + model = 'gpt-4o' ) print(response.choices[0].message) @@ -58,7 +58,7 @@ import {Portkey} from 'portkey-ai' const portkey = new Portkey({ apiKey: "PORTKEY_API_KEY", - virtualKey: "OPENAI_VIRTUAL_KEY" + provider: "@openai-prod" }) const requestOptions = { @@ -71,7 +71,7 @@ const requestOptions = { const chatCompletion = await portkey.chat.completions.create({ messages: [{ role: 'user', content: 'Who was ariadne?' 
}], - model: 'gpt-4', + model: 'gpt-4o', }, requestOptions); console.log(chatCompletion.choices); @@ -81,7 +81,7 @@ console.log(chatCompletion.choices); curl https://api.portkey.ai/v1/chat/completions \ -H "Content-Type: application/json" \ -H "x-portkey-api-key: $PORTKEY_API_KEY" \ - -H "x-portkey-virtual-key: $OPENAI_VIRTUAL_KEY" \ + -H "x-portkey-provider: @openai-prod" \ -H "x-portkey-metadata: {\"_user\":\"USER_ID\", \"organisation\":\"ORG_ID\", \"request_id\":\"1729\"}" \ -d '{ "model": "gpt-4", diff --git a/integrations/agents/crewai.mdx b/integrations/agents/crewai.mdx index 61b005eb..b0ac4f8c 100644 --- a/integrations/agents/crewai.mdx +++ b/integrations/agents/crewai.mdx @@ -653,7 +653,7 @@ researcher = Agent( ### Step 1: Implement Budget Controls & Rate Limits Enable granular control over LLM access at the team/department level. This helps you: -- Set up [budget limits](/product/ai-gateway/virtual-keys/budget-limits) +- Set up [budget limits](/product/model-catalog/budget-limits) - Prevent unexpected usage spikes using Rate limits - Track departmental spending diff --git a/integrations/agents/mastra-agents.mdx b/integrations/agents/mastra-agents.mdx index 22193f82..622db46f 100644 --- a/integrations/agents/mastra-agents.mdx +++ b/integrations/agents/mastra-agents.mdx @@ -648,7 +648,7 @@ export const agent = new Agent({ ### Step 1: Implement Budget Controls & Rate Limits Integrations enable granular control over LLM access at the team/department level. 
This helps you: -- Set up [budget limits](/product/ai-gateway/virtual-keys/budget-limits) +- Set up [budget limits](/product/model-catalog/budget-limits) - Prevent unexpected usage spikes using Rate limits - Track departmental spending diff --git a/integrations/agents/openai-agents-ts.mdx b/integrations/agents/openai-agents-ts.mdx index e8ac52aa..51a9e10c 100644 --- a/integrations/agents/openai-agents-ts.mdx +++ b/integrations/agents/openai-agents-ts.mdx @@ -598,20 +598,14 @@ import { setDefaultOpenAIClient, Agent, run } from '@openai/agents'; // Using OpenAI const openaiConfig = { - "provider": "openai", - "api_key": "YOUR_OPENAI_API_KEY", - "override_params": { - "model": "gpt-4o" - } + "provider": "@openai-prod", + "override_params": {"model": "gpt-4o"} }; // Using Anthropic const anthropicConfig = { - "provider": "anthropic", - "api_key": "YOUR_ANTHROPIC_API_KEY", - "override_params": { - "model": "claude-3-opus-20240229" - } + "provider": "@anthropic-prod", + "override_params": {"model": "claude-3-5-sonnet-latest"} }; // Choose which config to use @@ -672,29 +666,20 @@ Portkey adds a comprehensive governance layer to address these enterprise needs. Portkey allows you to use 1600+ LLMs with your OpenAI Agents setup, with minimal configuration required. Let's set up the core components in Portkey that you'll need for integration. - -Virtual Keys are Portkey's secure way to manage your LLM provider API keys. Think of them like disposable credit cards for your LLM API keys, providing essential controls like: -- Budget limits for API usage + +Add your LLM provider credentials to Model Catalog. This gives you centralized control with: +- Budget limits per provider - Rate limiting capabilities -- Secure API key storage - -To create a virtual key: -Go to [Virtual Keys](https://app.portkey.ai/virtual-keys) in the Portkey App. Save and copy the virtual key ID - - - - +- Secure credential storage - -Save your virtual key ID - you'll need it for the next step. 
- +Go to [Model Catalog](https://app.portkey.ai/model-catalog) in the Portkey App and add your provider. Copy the AI Provider slug — you'll need it for the next step. Configs in Portkey are JSON objects that define how your requests are routed. They help with implementing features like advanced routing, fallbacks, and retries. -We need to create a default config to route our requests to the virtual key created in Step 1. +We need to create a default config to route our requests to the AI Provider added in Step 1. To create your config: 1. Go to [Configs](https://app.portkey.ai/configs) in Portkey dashboard @@ -715,7 +700,7 @@ To create your config: -This basic config connects to your virtual key. You can add more advanced portkey features later. +This basic config connects to your AI Provider. You can add more advanced Portkey features later. @@ -758,20 +743,16 @@ const client = new OpenAI({ ### Step 1: Implement Budget Controls & Rate Limits -Virtual Keys enable granular control over LLM access at the team/department level. This helps you: -- Set up [budget limits](/product/ai-gateway/virtual-keys/budget-limits) -- Prevent unexpected usage spikes using Rate limits +AI Providers in Model Catalog enable granular control over LLM access at the team/department level. This helps you: +- Set up [budget limits](/product/model-catalog/budget-limits) +- Prevent unexpected usage spikes using rate limits - Track departmental spending #### Setting Up Department-Specific Controls: -1. Navigate to [Virtual Keys](https://app.portkey.ai/virtual-keys) in Portkey dashboard -2. Create new Virtual Key for each department with budget limits and rate limits +1. Navigate to [Model Catalog](https://app.portkey.ai/model-catalog) in the Portkey dashboard +2. Add an AI Provider for each department with budget limits and rate limits 3. 
Configure department-specific limits - - - - diff --git a/integrations/agents/openai-agents.mdx b/integrations/agents/openai-agents.mdx index 10b981c0..833461ee 100644 --- a/integrations/agents/openai-agents.mdx +++ b/integrations/agents/openai-agents.mdx @@ -761,20 +761,14 @@ from agents import set_default_openai_client # Using OpenAI openai_config = { - "provider": "openai", - "api_key": "YOUR_OPENAI_API_KEY", - "override_params": { - "model": "gpt-4o" - } + "provider": "@openai-prod", + "override_params": {"model": "gpt-4o"} } # Using Anthropic anthropic_config = { - "provider": "anthropic", - "api_key": "YOUR_ANTHROPIC_API_KEY", - "override_params": { - "model": "claude-3-opus-20240229" - } + "provider": "@anthropic-prod", + "override_params": {"model": "claude-3-5-sonnet-latest"} } # Choose which config to use @@ -961,29 +955,20 @@ Portkey adds a comprehensive governance layer to address these enterprise needs. Portkey allows you to use 1600+ LLMs with your OpenAI Agents setup, with minimal configuration required. Let's set up the core components in Portkey that you'll need for integration. - -Virtual Keys are Portkey's secure way to manage your LLM provider API keys. Think of them like disposable credit cards for your LLM API keys, providing essential controls like: -- Budget limits for API usage + +Add your LLM provider credentials to Model Catalog. This gives you centralized control with: +- Budget limits per provider - Rate limiting capabilities -- Secure API key storage - -To create a virtual key: -Go to [Virtual Keys](https://app.portkey.ai/virtual-keys) in the Portkey App. Save and copy the virtual key ID - - - - +- Secure credential storage - -Save your virtual key ID - you'll need it for the next step. - +Go to [Model Catalog](https://app.portkey.ai/model-catalog) in the Portkey App and add your provider. Copy the AI Provider slug — you'll need it for the next step. Configs in Portkey are JSON objects that define how your requests are routed. 
They help with implementing features like advanced routing, fallbacks, and retries. -We need to create a default config to route our requests to the virtual key created in Step 1. +We need to create a default config to route our requests to the AI Provider added in Step 1. To create your config: 1. Go to [Configs](https://app.portkey.ai/configs) in Portkey dashboard @@ -1004,7 +989,7 @@ To create your config: -This basic config connects to your virtual key. You can add more advanced portkey features later. +This basic config connects to your AI Provider. You can add more advanced Portkey features later. @@ -1050,20 +1035,16 @@ Save your API key securely - you'll need it for OpenAI Agents integration. ### Step 1: Implement Budget Controls & Rate Limits -Virtual Keys enable granular control over LLM access at the team/department level. This helps you: -- Set up [budget limits](/product/ai-gateway/virtual-keys/budget-limits) -- Prevent unexpected usage spikes using Rate limits +AI Providers in Model Catalog enable granular control over LLM access at the team/department level. This helps you: +- Set up [budget limits](/product/model-catalog/budget-limits) +- Prevent unexpected usage spikes using rate limits - Track departmental spending #### Setting Up Department-Specific Controls: -1. Navigate to [Virtual Keys](https://app.portkey.ai/virtual-keys) in Portkey dashboard -2. Create new Virtual Key for each department with budget limits and rate limits +1. Navigate to [Model Catalog](https://app.portkey.ai/model-catalog) in the Portkey dashboard +2. Add an AI Provider for each department with budget limits and rate limits 3. 
Configure department-specific limits - - - - diff --git a/integrations/agents/phidata.mdx b/integrations/agents/phidata.mdx index 24fd7ae1..7a343425 100644 --- a/integrations/agents/phidata.mdx +++ b/integrations/agents/phidata.mdx @@ -62,11 +62,10 @@ To switch to Azure as your provider, add your Azure details to Portkey and create ```py llm = OpenAIChat( base_url=PORTKEY_GATEWAY_URL, - api_key="api_key", #We will be using Virtual Key + api_key="PORTKEY_API_KEY", default_headers=createHeaders( - provider="azure-openai", - api_key="PORTKEY_API_KEY", # Replace with your Portkey API key - provider="@AZURE_OPENAI_KEY" + provider="@AZURE_OPENAI_KEY", # Your AI Provider slug from Model Catalog + api_key="PORTKEY_API_KEY" ) ) ``` @@ -88,11 +87,10 @@ To switch to AWS Bedrock as your provider, add your AWS Bedrock details to Portkey ```py llm = OpenAIChat( base_url=PORTKEY_GATEWAY_URL, - api_key="api_key", #We will be using Virtual Key + api_key="PORTKEY_API_KEY", default_headers=createHeaders( - provider="bedrock", - api_key="PORTKEY_API_KEY", # Replace with your Portkey API key - provider="@BEDROCK_OPENAI_KEY" #Bedrock Virtual Key + provider="@BEDROCK_OPENAI_KEY", # Your AI Provider slug from Model Catalog + api_key="PORTKEY_API_KEY" ) ) ``` diff --git a/integrations/agents/pydantic-ai.mdx b/integrations/agents/pydantic-ai.mdx index e17d6c6c..2d301a9c 100644 --- a/integrations/agents/pydantic-ai.mdx +++ b/integrations/agents/pydantic-ai.mdx @@ -875,7 +875,7 @@ Portkey adds a comprehensive governance layer to address these enterprise needs. -Since Portkey now uses the model format like `@team-name/model-name`, you can specify your team and model directly without needing virtual keys. Create a Portkey API key with an attached config: +Since Portkey uses the model format `@provider-slug/model-name`, you can specify your AI Provider and model directly. Create a Portkey API key with an attached config: 1.
Go to [API Keys](https://app.portkey.ai/api-keys) in Portkey and create a new API key 2. Optionally attach a config for advanced routing, fallbacks, and reliability features @@ -1073,7 +1073,7 @@ Here's a basic configuration to route requests to OpenAI, specifically using GPT - Yes! Portkey uses your own API keys for the various LLM providers. You can configure them through configs and virtual keys, allowing you to easily manage and rotate keys without changing your code. + Yes! Portkey uses your own API keys for the various LLM providers. Add them to Model Catalog to manage, rotate, and set limits without changing your code. diff --git a/integrations/agents/strands-backup.mdx b/integrations/agents/strands-backup.mdx index 5208e550..bbc77a75 100644 --- a/integrations/agents/strands-backup.mdx +++ b/integrations/agents/strands-backup.mdx @@ -59,11 +59,11 @@ First, let's set up your provider keys and settings on Portkey, that you can later -Go to [Virtual Keys](https://app.portkey.ai/virtual-keys) in the Portkey App to add your AI provider key and copy the virtual key ID. +Go to [Model Catalog](https://app.portkey.ai/model-catalog) in the Portkey App to add your AI provider key and copy the AI Provider slug. -Go to [Configs](https://app.portkey.ai/configs) in the Portkey App, create a new config that uses your virtual key, then save the Config ID. +Go to [Configs](https://app.portkey.ai/configs) in the Portkey App, create a new config that uses your AI Provider slug, then save the Config ID. @@ -391,8 +391,8 @@ If you are using Strands inside your organization, you need to consider several ## Enterprise Governance - -Define budget and rate limits with a Virtual Key in the Portkey App. + +Add your provider credentials to [Model Catalog](https://app.portkey.ai/model-catalog) and set budget and rate limits. For SSO/SCIM setup, see @[product/enterprise-offering/org-management/sso.mdx] and @[product/enterprise-offering/org-management/scim/scim.mdx].
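The `@provider-slug/model-name` format introduced in the pydantic-ai hunk above can be illustrated with a tiny parser. This is a minimal sketch: `split_model_ref` is a hypothetical helper written for illustration (not part of the Portkey SDK), and the slug `openai-prod` is a placeholder for a slug from your own Model Catalog.

```py
def split_model_ref(ref: str) -> tuple[str, str]:
    """Split a Model Catalog reference like "@openai-prod/gpt-4o"
    into its provider slug and model name."""
    if not ref.startswith("@") or "/" not in ref:
        raise ValueError(f"expected '@provider-slug/model-name', got {ref!r}")
    slug, model = ref[1:].split("/", 1)
    return slug, model

print(split_model_ref("@openai-prod/gpt-4o"))  # → ('openai-prod', 'gpt-4o')
```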
diff --git a/integrations/agents/strands.mdx b/integrations/agents/strands.mdx index 6f366332..7480d40f 100644 --- a/integrations/agents/strands.mdx +++ b/integrations/agents/strands.mdx @@ -87,7 +87,7 @@ Before using the integration, you need to configure your AI providers and create -Go to [Virtual Keys](https://app.portkey.ai/virtual-keys) in the Portkey dashboard and add your actual AI provider keys (OpenAI, Anthropic, etc.). Each provider key gets a virtual key ID that you'll reference in configs. +Go to [Model Catalog](https://app.portkey.ai/model-catalog) in the Portkey dashboard and add your AI provider keys (OpenAI, Anthropic, etc.). Each provider gets an AI Provider slug that you'll reference in configs. diff --git a/integrations/libraries/autogen.mdx b/integrations/libraries/autogen.mdx index 562a87ae..e7b25e50 100644 --- a/integrations/libraries/autogen.mdx +++ b/integrations/libraries/autogen.mdx @@ -85,11 +85,9 @@ user_proxy.initiate_chat(assistant, message="Say this is also a test - part 2.") # This initiates an automated chat between the two agents to solve the task ``` -## Using a Virtual Key - -[Virtual keys](/product/ai-gateway/virtual-keys) in Portkey allow you to easily switch between providers without manually having to store and change their API keys. Let's use the same Mistral example above, but this time using a Virtual Key. - +## Using Model Catalog +[Model Catalog](/product/model-catalog) in Portkey lets you manage provider credentials centrally. Add your Anyscale API key to Model Catalog and use the AI Provider slug in your config. 
```py from autogen import AssistantAgent, UserProxyAgent, config_list_from_json @@ -99,8 +97,7 @@ from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders config_list = [ { - # Set a dummy value, since we'll pick the API key from the virtual key - "api_key": 'X', + "api_key": "PORTKEY_API_KEY", # Pick the model from the provider of your choice "model": "mistralai/Mistral-7B-Instruct-v0.1", @@ -109,8 +106,8 @@ config_list = [ "default_headers": createHeaders( api_key = "Your Portkey API Key", - # Add your virtual key here - virtual_key = "Your Anyscale Virtual Key", + # Add your AI Provider slug from Model Catalog + provider = "@anyscale-prod", ) } ] @@ -135,7 +132,7 @@ from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders config_list = [ { - # Set a dummy value, since we'll pick the API key from the virtual key + # Set a dummy value, since we'll pick the API key from the config "api_key": 'X', # Pick the model from the provider of your choice diff --git a/integrations/libraries/openclaw.mdx b/integrations/libraries/openclaw.mdx index 55b8bde8..3926ba08 100644 --- a/integrations/libraries/openclaw.mdx +++ b/integrations/libraries/openclaw.mdx @@ -418,7 +418,7 @@ Developers use a simple config — all routing and reliability logic is handled Content filtering - + Spending controls diff --git a/introduction/feature-overview.mdx b/introduction/feature-overview.mdx index ade4a79a..a0e800ec 100644 --- a/introduction/feature-overview.mdx +++ b/introduction/feature-overview.mdx @@ -41,7 +41,7 @@ Connect to 250+ AI models using a single consistent API. 
Set up load balancers, Route requests based on specific conditions - + Set and manage budget limits diff --git a/product/integrations.mdx b/product/integrations.mdx index c97ab09f..446f5410 100644 --- a/product/integrations.mdx +++ b/product/integrations.mdx @@ -24,7 +24,7 @@ This guide walks you through connecting a new provider and making it available t #### **Step 1: Connect the Provider** -If you are an existing Portkey user, this step is similar to creating a Virtual Key, but it's happening at the organization level. +If you are an existing Portkey user, this step is similar to adding an AI Provider in the Model Catalog, but at the organization level. 1. From the **`All`** tab, find the provider you want to connect (e.g., OpenAI, Azure OpenAI, AWS Bedrock) and click **Connect**. diff --git a/snippets/portkey-advanced-features.mdx b/snippets/portkey-advanced-features.mdx index 2460a899..1f4dfc81 100644 --- a/snippets/portkey-advanced-features.mdx +++ b/snippets/portkey-advanced-features.mdx @@ -17,7 +17,7 @@ Portkey adds a comprehensive governance layer to address these enterprise needs. ### Step 1: Implement Budget Controls & Rate Limits Model Catalog enables you to have granular control over LLM access at the team/department level. This helps you: -- Set up [budget limits](/product/ai-gateway/virtual-keys/budget-limits) +- Set up [budget limits](/product/model-catalog/budget-limits) - Prevent unexpected usage spikes using Rate limits - Track departmental spending @@ -175,7 +175,7 @@ Portkey's logging dashboard provides detailed logs for every request made to you ### 3. Unified Access to 1600+ LLMs -You can easily switch between 1600+ LLMs. Call various LLMs such as Anthropic, Gemini, Mistral, Azure OpenAI, Google Vertex AI, AWS Bedrock, and many more by simply changing the `virtual key` in your default `config` object. +You can easily switch between 1600+ LLMs. 
Call various LLMs such as Anthropic, Gemini, Mistral, Azure OpenAI, Google Vertex AI, AWS Bedrock, and many more by simply changing the `provider` slug in your default `config` object. ### 4. Advanced Metadata Tracking @@ -189,7 +189,7 @@ Using Portkey, you can add custom metadata to your LLM requests for detailed tra ### 5. Enterprise Access Management - + Set and manage spending limits across teams and departments. Control costs with granular budget limits and usage tracking. @@ -224,7 +224,7 @@ Comprehensive access control rules and detailed audit logging for security compl Automatic retry handling with exponential backoff for failed requests - + Set and manage budget limits across teams and departments. Control costs with granular budget limits and usage tracking. @@ -248,15 +248,15 @@ Implement real-time protection for your LLM interactions with automatic detectio # FAQs - - You can update your Virtual Key limits at any time from the Portkey dashboard:1. Go to Virtual Keys section2. Click on the Virtual Key you want to modify3. Update the budget or rate limits4. Save your changes + + Update AI Provider limits at any time from [Model Catalog](https://app.portkey.ai/model-catalog): 1. Open the provider you want to modify. 2. Update the budget or rate limits. 3. Save your changes. - Yes! You can create multiple Virtual Keys (one for each provider) and attach them to a single config. This config can then be connected to your API key, allowing you to use multiple providers through a single API key. + Yes! Add multiple AI Providers to Model Catalog (one for each provider) and attach them to a single config. This config can then be connected to your API key, allowing you to use multiple providers through a single API key. 
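The single-config, multiple-providers pattern described in this FAQ can be sketched as a fallback config. This is a hedged example: the `strategy`/`targets` shape follows Portkey's config schema as documented, and the provider slugs are placeholders for slugs from your own Model Catalog.

```py
# Sketch of one config that attaches two AI Providers behind a single
# API key; if the first target fails, requests fall back to the second.
# Provider slugs ("@openai-prod", "@anthropic-prod") are placeholders.
multi_provider_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"provider": "@openai-prod",
         "override_params": {"model": "gpt-4o"}},
        {"provider": "@anthropic-prod",
         "override_params": {"model": "claude-3-5-sonnet-latest"}},
    ],
}

print(multi_provider_config["strategy"]["mode"])  # → fallback
```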
Portkey provides several ways to track team costs: -- Create separate Virtual Keys for each team +- Create separate AI Providers for each team - Use metadata tags in your configs - Set up team-specific API keys - Monitor usage in the analytics dashboard
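The metadata-tagging route from the list above can be sketched as follows. This is an assumption-labeled example: the key names other than `_user` are illustrative, and how the metadata is attached (e.g. `createHeaders(metadata=...)` in the Portkey SDK or a metadata header on raw HTTP calls) depends on your setup.

```py
import json

# Illustrative per-team metadata for cost attribution in the analytics
# dashboard; all key names here are placeholders for this sketch.
team_metadata = {
    "_user": "alice@example.com",
    "team": "data-science",
    "environment": "production",
}

# Serialized form, as it would travel on a request.
header_value = json.dumps(team_metadata)
print(header_value)
```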