Onboarding : take multiple credentials from endpoint #453
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -1,14 +1,14 @@ | ||
| Remove a collection from the platform. This is a two step process: | ||
|
|
||
| 1. Delete all OpenAI resources that were allocated: File's, the Vector | ||
| 1. Delete all OpenAI resources that were allocated: file(s), the Vector | ||
| Store, and the Assistant. | ||
| 2. Delete the collection entry from the AI platform database. | ||
| 2. Delete the collection entry from the kaapi database. | ||
|
|
||
| No action is taken on the documents themselves: the contents of the | ||
| documents that were a part of the collection remain unchanged; those | ||
| documents can still be accessed via the documents endpoints. The response from this | ||
| endpoint will be a `collection_job` object which will contain the collection `job_id` and | ||
| status. when you take the id returned and use the collection job | ||
| info endpoint, if the job is successful, you will get the status as successful. | ||
| status. When you pass the returned id to the `collection job info` endpoint, | ||
| a successful job will be reported with a successful status. | ||
| Additionally, if a `callback_url` was provided in the request body, | ||
| you will receive a message indicating whether the deletion was successful or if it failed. |
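The `collection_job` object described above can be sketched as follows. Only `job_id` and `status` are named in the text, so the placeholder values and the exact status string are illustrative assumptions, not the platform's confirmed schema:

```json
{
  "job_id": "<collection-job-uuid>",
  "status": "pending"
}
```

Polling the collection job info endpoint with this `job_id` would then return the same object with an updated status once deletion completes.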
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -1,4 +1,4 @@ | ||
| Retrieve detailed information about `a specific collection by its ID` from the collection table. This endpoint returns the collection object including its project, organization, | ||
| timestamps, and associated LLM service details (`llm_service_id`). | ||
| Retrieve detailed information about a specific collection by its collection id. This endpoint returns the collection object including its project, organization, | ||
| timestamps, and associated LLM service details (`llm_service_id` and `llm_service_name`). | ||
|
|
||
| Additionally, if the `include_docs` flag in the request body is true then you will get a list of document IDs associated with a given collection as well. Documents returned are not only stored by the AI platform, but also by OpenAI. | ||
| Additionally, if the `include_docs` flag in the request body is true, you will also get a list of document IDs associated with the given collection. Note that the documents returned are stored not only by the AI platform but also by OpenAI. |
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -0,0 +1,34 @@ | ||
| Create a new LLM configuration with an initial version. | ||
|
|
||
| Configurations allow you to store and manage reusable LLM parameters | ||
| (such as temperature, max_tokens, model selection, etc.) with version control. | ||
|
|
||
| **Key Features:** | ||
| * Automatically creates an initial version (v1) with the provided configuration | ||
| * Enforces unique configuration names per project | ||
| * Stores provider-specific parameters as flexible JSON (config_blob) | ||
| * Supports optional commit messages for tracking changes | ||
| * Provider-agnostic storage - params are passed through to the provider as-is | ||
|
|
||
|
|
||
| **Example for the config blob: OpenAI Responses API with File Search** | ||
|
|
||
| ```json | ||
|
|
||
| "config_blob": { | ||
| "completion": { | ||
| "provider": "openai", | ||
| "params": { | ||
| "model": "gpt-4o-mini", | ||
| "instructions": "You are a helpful assistant for farming communities...", | ||
| "temperature": 1, | ||
| "tools": [ | ||
| { | ||
| "type": "file_search", | ||
| "vector_store_ids": ["vs_692d71f3f5708191b1c46525f3c1e196"], | ||
| "max_num_results": 20 | ||
| } | ||
| ] | ||
| } | ||
| } | ||
| } | ||
| ``` | ||
|
|
||
| The configuration name must be unique within your project. Once created, | ||
| you can create additional versions to track parameter changes while | ||
| maintaining the configuration history. | ||
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -0,0 +1,7 @@ | ||
| Create a new version for an existing configuration. | ||
|
|
||
| To create a new version, provide the `config_id` in the URL path and the new | ||
| configuration parameters in the request body. The system will automatically | ||
| create a new version under the same configuration with an incremented version number. | ||
| Version numbers are automatically incremented sequentially (1, 2, 3, etc.) | ||
| and cannot be manually set or skipped. |
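As a sketch, the request body for a new version might carry the same fields used at configuration creation. The exact shape is an assumption based on the Create Config fields described in this PR (`config_blob`, optional commit message):

```json
{
  "config_blob": {
    "completion": {
      "provider": "openai",
      "params": {
        "model": "gpt-4o-mini",
        "temperature": 0.7
      }
    }
  },
  "commit_message": "Lower temperature for more deterministic answers"
}
```

The system would assign this body the next sequential version number automatically.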
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -0,0 +1,5 @@ | ||
| Delete a configuration and all its versions. | ||
|
|
||
| This operation performs a soft delete, marking the configuration and all | ||
| associated versions as deleted in the database while retaining records | ||
| for audit purposes. |
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -0,0 +1,4 @@ | ||
| Delete a specific version of a configuration. | ||
|
|
||
| Performs a soft delete on the version, marking it as deleted while | ||
| retaining the record for audit purposes. |
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -0,0 +1,5 @@ | ||
| Retrieve a specific configuration by its ID. | ||
|
|
||
| Returns the configuration metadata including name, description, and | ||
| timestamps. This endpoint provides configuration-level details but does | ||
| not include version information. |
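A response from this endpoint might look like the following sketch. The prose above only promises name, description, and timestamps; the field names and placeholder values here are assumptions for illustration:

```json
{
  "id": "<config-uuid>",
  "name": "farming-assistant",
  "description": "Prompt and model settings for the farming assistant",
  "created_at": "2024-01-01T00:00:00Z",
  "updated_at": "2024-01-02T00:00:00Z"
}
```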
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -0,0 +1,4 @@ | ||
| Retrieve a specific version of a configuration. | ||
|
|
||
| Returns the complete version details including the full configuration | ||
| blob (config_blob) with all LLM parameters. |
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -0,0 +1,5 @@ | ||
| Retrieve all configurations for the current project. | ||
|
|
||
| Returns a paginated list of configurations ordered by most recently | ||
| updated first. Each configuration includes metadata (name, description, | ||
| timestamps) but excludes version details for performance. |
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -0,0 +1,4 @@ | ||
| List all versions for a specific configuration. | ||
|
|
||
| Returns versions in descending order (newest first), allowing you to | ||
| see the evolution of configuration parameters over time. |
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -0,0 +1,5 @@ | ||
| Update a configuration's metadata (name or description). | ||
|
|
||
| This endpoint modifies only the configuration-level metadata, not the | ||
| LLM parameters themselves. To change LLM parameters, create a new | ||
| version instead. |
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -0,0 +1,42 @@ | ||
| Make an LLM API call using either a stored configuration or an ad-hoc configuration. | ||
|
|
||
| This endpoint initiates an asynchronous LLM call job. The request is queued | ||
| for processing, and results are delivered via the callback URL when complete. | ||
|
|
||
| ### Request Fields | ||
|
|
||
| **`query`** (required) - Query parameters for this LLM call: | ||
| - `input` (required, string, min 1 char): User question/prompt/query | ||
| - `conversation` (optional, object): Conversation configuration | ||
| - `id` (optional, string): Existing conversation ID to continue | ||
| - `auto_create` (optional, boolean, default false): Create new conversation if no ID provided | ||
| - **Note**: Cannot specify both `id` and `auto_create=true` | ||
|
|
||
| **`config`** (required) - Configuration for the LLM call (choose exactly one mode): | ||
|
|
||
| - **Mode 1: Stored Configuration** | ||
| - `id` (UUID): Configuration ID | ||
| - `version` (integer >= 1): Version number | ||
| - **Both required together** | ||
| - **Note**: When using stored configuration, do not include the `blob` field in the request body | ||
|
|
||
| - **Mode 2: Ad-hoc Configuration** | ||
| - `blob` (object): Complete configuration object (see Create Config endpoint documentation for examples) | ||
| - `completion` (required): | ||
| - `provider` (required, string): Currently only "openai" | ||
| - `params` (required, object): Provider-specific parameters (flexible JSON) | ||
| - **Note**: When using ad-hoc configuration, do not include `id` and `version` fields | ||
|
|
||
| **`callback_url`** (optional, HTTPS URL): | ||
| - Webhook endpoint to receive the response | ||
| - Must be a valid HTTPS URL | ||
| - If not provided, response is only accessible through job status | ||
|
|
||
| **`include_provider_raw_response`** (optional, boolean, default false): | ||
| - When true, includes the unmodified raw response from the LLM provider | ||
|
|
||
| **`request_metadata`** (optional, object): | ||
| - Custom JSON metadata | ||
| - Passed through unchanged in the response | ||
|
|
||
| --- |
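Putting the request fields above together, a call using a stored configuration (Mode 1) might look like this sketch. All field names come from the documentation above; the placeholder values are assumptions:

```json
{
  "query": {
    "input": "What crops grow well in sandy soil?",
    "conversation": {
      "auto_create": true
    }
  },
  "config": {
    "id": "<config-uuid>",
    "version": 3
  },
  "callback_url": "https://example.com/webhooks/llm",
  "include_provider_raw_response": false,
  "request_metadata": {
    "request_source": "docs-example"
  }
}
```

For an ad-hoc call (Mode 2), the `config` object would instead carry a `blob` field with a `completion` section, and omit `id` and `version`.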
| Original file line number | Diff line number | Diff line change |
|---|---|---|
|
|
@@ -18,10 +18,31 @@ | |
|
|
||
| --- | ||
|
|
||
| ## 🔑 OpenAI API Key (Optional) | ||
| - If provided, the API key will be **encrypted** and stored as project credentials. | ||
| - If omitted, the project will be created **without OpenAI credentials**. | ||
|
|
||
| ## 🔑 Credentials (Optional) | ||
| - If provided, the given credentials will be **encrypted** and stored as project credentials. | ||
| - The `credential` parameter accepts a list of one or more credentials (e.g., an OpenAI key, Langfuse credentials, etc.). | ||
|
Collaborator: would be good to also show a sample json for this; we can put it in the swagger schema for the credentials model also, if not here.
Collaborator: also, we should have a list of providers that we currently support. For example, do cross-check if this is the way we want the credentials to look.
Author: yes, that's the intended structure for the credentials. The current code already validates both the provider and the required fields for that provider once we receive the request body. You're right that adding the list of supported providers and an example in the endpoint documentation would make it clearer.
Author: i have added them to the documentation |
||
| - If omitted, the project will be created **without credentials**. | ||
| - We’ve also included a list of the providers currently supported by kaapi. | ||
| ### Example: Sending multiple credentials | ||
| ```json | ||
|
|
||
| "credentials": [ | ||
|
|
||
| { | ||
| "openai": { | ||
| "api_key": "sk-proj-..." | ||
| } | ||
| }, | ||
| { | ||
| "langfuse": { | ||
| "public_key": "pk-lf-...", | ||
| "secret_key": "sk-lf-...", | ||
| "host": "https://cloud.langfuse.com" | ||
| } | ||
| } | ||
| ] | ||
| ``` | ||
| ### Supported Providers | ||
| - openai | ||
| - langfuse | ||
| --- | ||
|
|
||
| ## 🔄 Transactional Guarantee | ||
|
|
||