6 changes: 3 additions & 3 deletions api-reference/inference-api/authentication.mdx
@@ -31,7 +31,7 @@ import Portkey from 'portkey-ai'

const portkey = new Portkey({
apiKey: "PORTKEY_API_KEY", // Replace with your actual API key
- virtualKey: "VIRTUAL_KEY" // Optional: Use for virtual key management
+ provider: "@openai-prod" // Optional: AI Provider slug from Model Catalog
})

const chatCompletion = await portkey.chat.completions.create({
@@ -48,7 +48,7 @@ from portkey_ai import Portkey

client = Portkey(
api_key="PORTKEY_API_KEY", # Replace with your actual API key
- provider="@VIRTUAL_KEY" # Optional: Use if virtual keys are set up
+ provider="@openai-prod" # Optional: AI Provider slug from Model Catalog
)

chat_completion = client.chat.completions.create(
@@ -65,7 +65,7 @@ print(chat_completion.choices[0].message["content"])
curl https://api.portkey.ai/v1/chat/completions \
-H "Content-Type: application/json" \
-H "x-portkey-api-key: $PORTKEY_API_KEY" \
- -H "x-portkey-virtual-key: $VIRTUAL_KEY" \
+ -H "x-portkey-provider: @openai-prod" \
-d '{
"model": "gpt-4o",
"messages": [
24 changes: 12 additions & 12 deletions api-reference/inference-api/gateway-for-other-apis.mdx
@@ -39,24 +39,24 @@ Create or log in to your Portkey account. Grab your account's API key from the [
Choose one of these authentication methods:

<AccordionGroup>
- <Accordion title="Option 1. Generate a Virtual Key (Recommended)">
- Portkey integrates with 40+ LLM providers. Add your provider credentials (such as API key) to Portkey, and get a virtual key that you can use to authenticate and send your requests.
+ <Accordion title="Option 1. Add an AI Provider (Recommended)">
+ Add your provider credentials to [Model Catalog](https://app.portkey.ai/model-catalog) and use the AI Provider slug to authenticate your requests.

<CodeGroup>

```sh cURL
curl https://api.portkey.ai/v1/rerank \
-H "Content-Type: application/json" \
-H "x-portkey-api-key: $PORTKEY_API_KEY" \
- -H "x-portkey-virtual-key: $PORTKEY_PROVIDER" \
+ -H "x-portkey-provider: @cohere-prod" \
```

```py Python
from portkey_ai import Portkey

portkey = Portkey(
api_key = "PORTKEY_API_KEY",
- virtual_key = "PROVIDER"
+ provider = "@cohere-prod"
)
```

@@ -65,15 +65,15 @@ import Portkey from 'portkey-ai';

const portkey = new Portkey({
apiKey: 'PORTKEY_API_KEY',
- virtualKey: 'PROVIDER'
+ provider: '@cohere-prod'
});
```
</CodeGroup>
<Note>
- Creating virtual keys lets you:
- - Manage all credentials in one place
- - Rotate between different provider keys
- - Set custom budget limits & rate limits per key
+ Model Catalog lets you:
+ - Manage all provider credentials in one place
+ - Set budget limits & rate limits per provider
</Note>
</Accordion>

@@ -165,7 +165,7 @@ curl --request POST \
--url https://api.portkey.ai/v1/rerank \
--header 'Content-Type: application/json' \
--header 'x-portkey-api-key: $PORTKEY_API_KEY' \
- --header 'x-portkey-virtual-key: $COHERE_VIRTUAL_KEY' \
+ --header 'x-portkey-provider: @cohere-prod' \
--data '{
"model": "rerank-english-v2.0",
"query": "What is machine learning?",
@@ -181,15 +181,15 @@ curl --request GET \
--url https://api.portkey.ai/v1/collections \
--header 'Content-Type: application/json' \
--header 'x-portkey-api-key: $PORTKEY_API_KEY' \
- --header 'x-portkey-virtual-key: $PROVIDER'
+ --header 'x-portkey-provider: @provider-prod'
```

```bash PUT
curl --request PUT \
--url https://api.portkey.ai/v1/collections/my-collection \
--header 'Content-Type: application/json' \
--header 'x-portkey-api-key: $PORTKEY_API_KEY' \
- --header 'x-portkey-virtual-key: $PROVIDER' \
+ --header 'x-portkey-provider: @provider-prod' \
--data '{
"metadata": {
"description": "Updated collection description"
@@ -202,7 +202,7 @@ curl --request DELETE \
--url https://api.portkey.ai/v1/collections/my-collection \
--header 'Content-Type: application/json' \
--header 'x-portkey-api-key: $PORTKEY_API_KEY' \
- --header 'x-portkey-virtual-key: $PROVIDER'
+ --header 'x-portkey-provider: @provider-prod'
```
</CodeGroup>
</Tab>
66 changes: 34 additions & 32 deletions api-reference/inference-api/headers.mdx
@@ -90,10 +90,12 @@ const portkey = new Portkey({
</CodeGroup>
</Accordion>

- ### 2. Virtual Key
+ ### 2. AI Provider

- <ResponseField name="x-portkey-virtual-key / virtual_key / virtualKey" type="string">
- Save your provider auth on Portkey and use a virtual key to directly make a call. ([Docs](/product/ai-gateway/virtual-keys))
+ <ResponseField name="x-portkey-provider / provider" type="string">
+ Specify your AI Provider slug (from [Model Catalog](/product/model-catalog)) to route requests through a managed provider. Use the `@provider-slug` format. ([Docs](/product/model-catalog))
+
+ <Note>The `x-portkey-virtual-key` / `virtual_key` / `virtualKey` parameter is the legacy equivalent and still works for backward compatibility.</Note>
</ResponseField>
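For teams migrating off virtual keys, the two auth styles map one-to-one at the header level. A minimal sketch (the `portkey_headers` helper below is illustrative, not part of the Portkey SDK; the header names are the ones documented above):

```py Python
# Illustrative helper: build Portkey auth headers, preferring the new
# provider slug over the legacy virtual key when both are supplied.
def portkey_headers(api_key, provider=None, virtual_key=None):
    headers = {"x-portkey-api-key": api_key}
    if provider is not None:
        headers["x-portkey-provider"] = provider        # e.g. "@openai-prod"
    elif virtual_key is not None:
        headers["x-portkey-virtual-key"] = virtual_key  # legacy form
    return headers

print(portkey_headers("PORTKEY_API_KEY", provider="@openai-prod"))
```

Preferring `provider` when both are present mirrors the recommendation above to use the slug rather than the legacy key.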

<Accordion title="Example">
@@ -102,15 +104,15 @@ Save your provider auth on Portkey and use a virtual key to directly make a call
```sh cURL {3}
curl https://api.portkey.ai/v1/chat/completions \
-H "x-portkey-api-key: $PORTKEY_API_KEY" \
- -H "x-portkey-virtual-key: openai-virtual-key" \
+ -H "x-portkey-provider: @openai-prod" \
```

```py Python {5}
from portkey_ai import Portkey

portkey = Portkey(
api_key = "PORTKEY_API_KEY", # defaults to os.environ.get("PORTKEY_API_KEY")
- virtual_key = "openai-virtual-key"
+ provider = "@openai-prod" # Your AI Provider slug from Model Catalog
)
```

@@ -119,7 +121,7 @@ import Portkey from 'portkey-ai';

const portkey = new Portkey({
apiKey: 'PORTKEY_API_KEY', // defaults to process.env["PORTKEY_API_KEY"]
- virtualKey: 'openai-virtual-key'
+ provider: '@openai-prod' // Your AI Provider slug from Model Catalog
});
```
</CodeGroup>
@@ -229,7 +231,7 @@ An ID you can pass to refer to one or more requests later on. If not provided, P
```sh cURL {4}
curl https://api.portkey.ai/v1/chat/completions \
-H "x-portkey-api-key: $PORTKEY_API_KEY" \
- -H "x-portkey-virtual-key: openai-virtual-key" \
+ -H "x-portkey-provider: @openai-prod" \
-H "x-portkey-trace-id: test-request" \
```

@@ -238,7 +240,7 @@ from portkey_ai import Portkey

portkey = Portkey(
api_key = "PORTKEY_API_KEY", # defaults to os.environ.get("PORTKEY_API_KEY")
- virtual_key = "openai-virtual-key",
+ provider = "@openai-prod",
trace_id = "test-request"
)
```
@@ -248,7 +250,7 @@ import Portkey from 'portkey-ai';

const portkey = new Portkey({
apiKey: 'PORTKEY_API_KEY', // defaults to process.env["PORTKEY_API_KEY"]
- virtualKey: "openai-virtual-key",
+ provider: "@openai-prod",
traceId: "test-request"
});
```
@@ -266,7 +268,7 @@ You can include the special metadata type `_user` to associate requests with spe
```sh cURL {4}
curl https://api.portkey.ai/v1/chat/completions \
-H "x-portkey-api-key: $PORTKEY_API_KEY" \
- -H "x-portkey-virtual-key: openai-virtual-key" \
+ -H "x-portkey-provider: @openai-prod" \
-H "x-portkey-metadata: {\"_user\": \"user_id_123\", \"foo\": \"bar\"}" \
```

@@ -275,7 +277,7 @@ from portkey_ai import Portkey

portkey = Portkey(
api_key = "PORTKEY_API_KEY", # defaults to os.environ.get("PORTKEY_API_KEY")
- virtual_key = "openai-virtual-key",
+ provider = "@openai-prod",
metadata = {"_user": "user_id_123", "foo": "bar"}
)
```
@@ -285,7 +287,7 @@ import Portkey from 'portkey-ai';

const portkey = new Portkey({
apiKey: 'PORTKEY_API_KEY', // defaults to process.env["PORTKEY_API_KEY"]
- virtualKey: "openai-virtual-key",
+ provider: "@openai-prod",
metadata: {"_user": "user_id_123", "foo": "bar"}
});
```
@@ -303,7 +305,7 @@ Expects `true` or `false`. See the caching documentation for more information. ([
```sh cURL {4}
curl https://api.portkey.ai/v1/chat/completions \
-H "x-portkey-api-key: $PORTKEY_API_KEY" \
- -H "x-portkey-virtual-key: openai-virtual-key" \
+ -H "x-portkey-provider: @openai-prod" \
-H "x-portkey-cache-force-refresh: true" \
```

@@ -312,7 +314,7 @@ from portkey_ai import Portkey

portkey = Portkey(
api_key = "PORTKEY_API_KEY", # defaults to os.environ.get("PORTKEY_API_KEY")
- virtual_key = "openai-virtual-key",
+ provider = "@openai-prod",
cache_force_refresh = True
)
```
@@ -322,7 +324,7 @@ import Portkey from 'portkey-ai';

const portkey = new Portkey({
apiKey: 'PORTKEY_API_KEY', // defaults to process.env["PORTKEY_API_KEY"]
- virtualKey: "openai-virtual-key",
+ provider: "@openai-prod",
cacheForceRefresh: true
});
```
@@ -339,7 +341,7 @@ Partition your cache store based on custom strings, ignoring metadata and other 
```sh cURL {4}
curl https://api.portkey.ai/v1/chat/completions \
-H "x-portkey-api-key: $PORTKEY_API_KEY" \
- -H "x-portkey-virtual-key: openai-virtual-key" \
+ -H "x-portkey-provider: @openai-prod" \
-H "x-portkey-cache-namespace: any-string" \
```

@@ -348,7 +350,7 @@ from portkey_ai import Portkey

portkey = Portkey(
api_key = "PORTKEY_API_KEY", # defaults to os.environ.get("PORTKEY_API_KEY")
- virtual_key = "openai-virtual-key",
+ provider = "@openai-prod",
cache_namespace = "any-string"
)
```
@@ -358,7 +360,7 @@ import Portkey from 'portkey-ai';

const portkey = new Portkey({
apiKey: 'PORTKEY_API_KEY', // defaults to process.env["PORTKEY_API_KEY"]
- virtualKey: "openai-virtual-key",
+ provider: "@openai-prod",
cacheNamespace: "any-string"
});
```
@@ -375,7 +377,7 @@ Set timeout after which a request automatically terminates. The time is set in m
```sh cURL {4}
curl https://api.portkey.ai/v1/chat/completions \
-H "x-portkey-api-key: $PORTKEY_API_KEY" \
- -H "x-portkey-virtual-key: openai-virtual-key" \
+ -H "x-portkey-provider: @openai-prod" \
-H "x-portkey-request-timeout: 3000" \
```

@@ -384,7 +386,7 @@ from portkey_ai import Portkey

portkey = Portkey(
api_key = "PORTKEY_API_KEY", # defaults to os.environ.get("PORTKEY_API_KEY")
- virtual_key = "openai-virtual-key",
+ provider = "@openai-prod",
request_timeout = 3000
)
```
@@ -394,7 +396,7 @@ import Portkey from 'portkey-ai';

const portkey = new Portkey({
apiKey: 'PORTKEY_API_KEY', // defaults to process.env["PORTKEY_API_KEY"]
- virtualKey: "openai-virtual-key",
+ provider: "@openai-prod",
requestTimeout: 3000
});
```
@@ -416,7 +418,7 @@ Pass all the headers you want to forward directly in this array. ([Docs](https:/
```sh cURL {4-6}
curl https://api.portkey.ai/v1/chat/completions \
-H "x-portkey-api-key: $PORTKEY_API_KEY" \
- -H "x-portkey-virtual-key: openai-virtual-key" \
+ -H "x-portkey-provider: @openai-prod" \
-H "X-Custom-Header: ...."\
-H "Another-Header: ....."\
-H "x-portkey-forward-headers: ['X-Custom-Header', 'Another-Header']" \
@@ -427,7 +429,7 @@ from portkey_ai import Portkey

portkey = Portkey(
api_key = "PORTKEY_API_KEY", # defaults to os.environ.get("PORTKEY_API_KEY")
- virtual_key = "openai-virtual-key",
+ provider = "@openai-prod",
X_Custom_Header = "....",
Another_Header = "....",
# The values in forward_headers list must be the original header names
@@ -440,7 +442,7 @@ import Portkey from 'portkey-ai';

const portkey = new Portkey({
apiKey: 'PORTKEY_API_KEY', // defaults to process.env["PORTKEY_API_KEY"]
- virtualKey: "openai-virtual-key",
+ provider: "@openai-prod",
CustomHeader: "....",
AnotherHeader: "....",
forwardHeaders: ['CustomHeader', 'AnotherHeader']
@@ -494,7 +496,7 @@ Portkey adheres to language-specific naming conventions:
| Parameter | Type | Key |
| :--- | :--- | :--- |
| **API Key** Your Portkey account's API Key. | string (required) | `apiKey` |
- | **Virtual Key** The virtual key created from Portkey's vault for a specific provider | string | `virtualKey` |
+ | **Virtual Key** *(Legacy — use `provider` with `@provider-slug` instead)* | string | `virtualKey` |
| **Config** The slug or [config object](/api-reference/inference-api/config-object) to use | string or object | `config` |
| **Provider** The AI provider to use for your calls. ([supported providers](/integrations/llms#supported-ai-providers)). | string | `provider` |
| **Base URL** You can edit the URL of the gateway to use. Needed if you're [self-hosting the AI gateway](https://github.com/Portkey-AI/gateway/blob/main/docs/installation-deployments.md) | string | `baseURL` |
@@ -514,7 +516,7 @@ Portkey adheres to language-specific naming conventions:
| Parameter | Type | Key |
| :--- | :--- | :--- |
| **API Key** Your Portkey account's API Key. | string (required) | `api_key` |
- | **Virtual Key** The virtual key created from Portkey's vault for a specific provider | string | `virtual_key` |
+ | **Virtual Key** *(Legacy — use `provider` with `@provider-slug` instead)* | string | `virtual_key` |
| **Config** The slug or [config object](/api-reference/inference-api/config-object) to use | string or object | `config` |
| **Provider** The AI provider to use for your calls. ([supported providers](/integrations/llms#supported-ai-providers)). | string | `provider` |
| **Base URL** You can edit the URL of the gateway to use. Needed if you're [self-hosting the AI gateway](https://github.com/Portkey-AI/gateway/blob/main/docs/installation-deployments.md) | string | `base_url` |
@@ -532,7 +534,7 @@ Portkey adheres to language-specific naming conventions:
| Parameter | Type | Header Key |
| :--- | :--- | :--- |
| **API Key** Your Portkey account's API Key. | string (required) | `x-portkey-api-key` |
- | **Virtual Key** The virtual key created from Portkey's vault for a specific provider | string | `x-portkey-virtual-key` |
+ | **Virtual Key** *(Legacy — use `x-portkey-provider` with `@provider-slug` instead)* | string | `x-portkey-virtual-key` |
| **Config** The slug or [config object](/api-reference/inference-api/config-object) to use | string | `x-portkey-config` |
| **Provider** The AI provider to use for your calls. ([supported providers](/integrations/llms#supported-ai-providers)). | string | `x-portkey-provider` |
| **Base URL** You can edit the URL of the gateway to use. Needed if you're [self-hosting the AI gateway](https://github.com/Portkey-AI/gateway/blob/main/docs/installation-deployments.md) | string | Change the request URL |
@@ -558,7 +560,7 @@ You can send these headers in multiple ways:
curl https://api.portkey.ai/v1/chat/completions \
-H "Content-Type: application/json" \
-H "x-portkey-api-key: PORTKEY_API_KEY" \
- -H "x-portkey-virtual-key: VIRTUAL_KEY" \
+ -H "x-portkey-provider: @openai-prod" \
-H "x-portkey-trace-id: your_trace_id" \
-H "x-portkey-metadata: {\"_user\": \"user_12345\"}" \
-d '{
@@ -572,7 +574,7 @@ from portkey_ai import Portkey

portkey = Portkey(
api_key="PORTKEY_API_KEY",
- provider="@VIRTUAL_KEY",
+ provider="@openai-prod",
config="CONFIG_ID"
)

@@ -592,7 +594,7 @@ import Portkey from 'portkey-ai';

const portkey = new Portkey({
apiKey: "PORTKEY_API_KEY",
- virtualKey: "VIRTUAL_KEY",
+ provider: "@openai-prod",
config: "CONFIG_ID"
});

@@ -619,7 +621,7 @@ client = OpenAI(
base_url="https://api.portkey.ai/v1",
default_headers=createHeaders({
"apiKey": "PORTKEY_API_KEY",
- "virtualKey": "VIRTUAL_KEY"
+ "provider": "@openai-prod"
})
)

@@ -644,7 +646,7 @@ const client = new OpenAI({
baseURL: "https://api.portkey.ai/v1",
defaultHeaders: createHeaders({
apiKey: "PORTKEY_API_KEY",
- virtualKey: "VIRTUAL_KEY"
+ provider: "@openai-prod"
})
});
