53 changes: 53 additions & 0 deletions backend/app/alembic/versions/008_add_answer_relevance_prompt.py
@@ -0,0 +1,53 @@
"""Add answer_relevance_prompt table

Revision ID: 008
Revises: 007
Create Date: 2026-05-08 00:00:00.000000

"""

from typing import Sequence, Union

import sqlalchemy as sa
from alembic import op

revision: str = "008"
down_revision = "007"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    op.create_table(
        "answer_relevance_prompt",
        sa.Column("id", sa.Uuid(), nullable=False),
        sa.Column("organization_id", sa.Integer(), nullable=False),
        sa.Column("project_id", sa.Integer(), nullable=False),
        sa.Column("name", sa.String(), nullable=False),
        sa.Column("description", sa.String(), nullable=False),
        sa.Column("prompt_template", sa.Text(), nullable=False),
        sa.Column("is_active", sa.Boolean(), nullable=False, server_default=sa.true()),
        sa.Column("created_at", sa.DateTime(), nullable=False),
        sa.Column("updated_at", sa.DateTime(), nullable=False),
        sa.PrimaryKeyConstraint("id"),
    )

    op.create_index(
        "idx_answer_relevance_prompt_org",
        "answer_relevance_prompt",
        ["organization_id"],
    )
    op.create_index(
        "idx_answer_relevance_prompt_project",
        "answer_relevance_prompt",
        ["project_id"],
    )
    op.create_index(
        "idx_answer_relevance_prompt_is_active",
        "answer_relevance_prompt",
        ["is_active"],
    )


def downgrade() -> None:
    # Dropping the table also removes its indexes; no explicit drop_index calls needed.
    op.drop_table("answer_relevance_prompt")
94 changes: 88 additions & 6 deletions backend/app/api/API_USAGE.md
@@ -7,6 +7,7 @@ This guide explains how to use the current API surface for:
- Guardrail execution
- Ban list CRUD for multi-tenant projects
- Topic relevance config CRUD for multi-tenant projects
- Answer relevance prompt config CRUD for multi-tenant projects

## Base URL and Version

@@ -184,8 +185,8 @@ Request fields:
Important:
- Runtime validators use `on_fail`.
- If you pass objects from config APIs, server normalization supports `on_fail_action` and strips non-runtime fields.
- For `topic_relevance`, pass `topic_relevance_config_id` only.
- The API resolves `configuration` + `prompt_schema_version` in `guardrails.py` before validator execution, so the validator always executes with both values.
- For `topic_relevance`, pass `topic_relevance_config_id` only. The API resolves `configuration` + `prompt_schema_version` in `guardrails.py` before validator execution.
- For `answer_relevance_custom_llm`, `input` must be a JSON string `{"query": "...", "answer": "..."}`. Pass `custom_prompt_id` to use a stored tenant prompt, or omit to use the built-in default prompt.
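A quick sketch of building that JSON-string `input` in Python (the `query`/`answer` field names come from this doc; the helper name is illustrative):

```python
import json

def build_answer_relevance_input(query: str, answer: str) -> str:
    # The validator expects a single JSON string, not a nested object.
    return json.dumps({"query": query, "answer": answer})

payload_input = build_answer_relevance_input(
    "What foods are safe during pregnancy?",
    "Cooked fish, pasteurized dairy, and washed produce are generally safe.",
)
```

The resulting string is what you place in the request's `input` field.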

Example:

@@ -421,7 +422,84 @@ curl -X DELETE "http://localhost:8001/api/v1/guardrails/topic_relevance_configs/
-H "X-API-KEY: <api-key>"
```

## 7) End-to-End Usage Pattern
## 7) Answer Relevance Prompt APIs (multi-tenant)

These endpoints manage tenant-scoped custom prompt templates for the `answer_relevance_custom_llm` validator and use `X-API-KEY` auth.

Base path:
- `/api/v1/guardrails/answer_relevance_prompts`

## 7.1 Create answer relevance prompt

Endpoint:
- `POST /api/v1/guardrails/answer_relevance_prompts/`

Example:

```bash
curl -X POST "http://localhost:8001/api/v1/guardrails/answer_relevance_prompts/" \
-H "X-API-KEY: <api-key>" \
-H "Content-Type: application/json" \
-d '{
"name": "Maternal Health Relevance",
"description": "Checks if LLM answer addresses a maternal health query",
"prompt_template": "You are evaluating a maternal health assistant.\nQuery: {query}\nAnswer: {answer}\n\nDoes the answer directly address the maternal health query with accurate information?\nAnswer only YES or NO."
}'
```

## 7.2 List answer relevance prompts

Endpoint:
- `GET /api/v1/guardrails/answer_relevance_prompts/?offset=0&limit=20`

Example:

```bash
curl -X GET "http://localhost:8001/api/v1/guardrails/answer_relevance_prompts/?offset=0&limit=20" \
-H "X-API-KEY: <api-key>"
```

## 7.3 Get answer relevance prompt by id

Endpoint:
- `GET /api/v1/guardrails/answer_relevance_prompts/{id}`

Example:

```bash
curl -X GET "http://localhost:8001/api/v1/guardrails/answer_relevance_prompts/<prompt_id>" \
-H "X-API-KEY: <api-key>"
```

## 7.4 Update answer relevance prompt

Endpoint:
- `PATCH /api/v1/guardrails/answer_relevance_prompts/{id}`

Example:

```bash
curl -X PATCH "http://localhost:8001/api/v1/guardrails/answer_relevance_prompts/<prompt_id>" \
-H "X-API-KEY: <api-key>" \
-H "Content-Type: application/json" \
-d '{
"prompt_template": "Query: {query}\nAnswer: {answer}\n\nIs this answer helpful and relevant?\nAnswer only YES or NO."
}'
```

## 7.5 Delete answer relevance prompt

Endpoint:
- `DELETE /api/v1/guardrails/answer_relevance_prompts/{id}`

Example:

```bash
curl -X DELETE "http://localhost:8001/api/v1/guardrails/answer_relevance_prompts/<prompt_id>" \
-H "X-API-KEY: <api-key>"
```

## 8) End-to-End Usage Pattern

Recommended request flow:
1. Create/update validator configs via `/guardrails/validators/configs`.
@@ -431,15 +509,16 @@ Recommended request flow:
5. If `rephrase_needed=true`, ask user to rephrase.
6. For `ban_list` validators without inline `banned_words`, create/manage a ban list first and pass `ban_list_id`.
7. For `topic_relevance`, create/manage a topic relevance config and pass `topic_relevance_config_id` at runtime. The server resolves the configuration string internally.
8. For `answer_relevance_custom_llm`, format `input` as `{"query": "...", "answer": "..."}`. Optionally create a custom prompt via the Answer Relevance Prompt APIs and pass `custom_prompt_id`. If no `custom_prompt_id` is given, the built-in default prompt is used.

## 8) Common Errors
## 9) Common Errors

- `401 Missing Authorization header`
- Add `Authorization: Bearer <token>`.
- `401 Invalid authorization token`
- Verify plaintext token matches server-side hash.
- `401 Missing X-API-KEY header`
- Add `X-API-KEY: <api-key>` for ban list and topic relevance config endpoints.
- Add `X-API-KEY: <api-key>` for ban list, topic relevance config, and answer relevance prompt endpoints.
- `401 Invalid API key`
- Verify the API key is valid in the upstream Kaapi auth service.
- `Invalid request_id`
@@ -450,8 +529,10 @@ Recommended request flow:
- Confirm `id`, `organization_id`, and `project_id` match.
- `Topic relevance preset not found`
- Confirm topic relevance config `id` exists within your tenant scope.
- `Answer relevance prompt not found`
- Confirm the answer relevance prompt `id` exists within your tenant scope.

## 9) Current Validator Types
## 10) Current Validator Types

From `validators.json`:
- `uli_slur_match`
@@ -463,6 +544,7 @@ From `validators.json`:
- `llamaguard_7b`
- `profanity_free`
- `nsfw_text`
- `answer_relevance_custom_llm`

Source of truth:
- `backend/app/core/validators/validators.json`
43 changes: 43 additions & 0 deletions backend/app/api/docs/answer_relevance_prompts/create_prompt.md
@@ -0,0 +1,43 @@
Creates an answer relevance prompt config for the tenant resolved from `X-API-KEY`.

Behavior notes:
- Stores a custom prompt template used by the `answer_relevance_custom_llm` validator to evaluate whether an LLM answer is relevant to a user query.
- Tenant scope is enforced from the API key context.
- `prompt_template` must contain both `{query}` and `{answer}` placeholders; the server rejects templates missing either.
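The placeholder rule is easy to mirror client-side before calling the API; a minimal sketch (not the server's actual implementation):

```python
REQUIRED_PLACEHOLDERS = ("{query}", "{answer}")

def check_prompt_template(template: str) -> None:
    # Mirror the documented server-side rule: both placeholders must be present.
    missing = [p for p in REQUIRED_PLACEHOLDERS if p not in template]
    if missing:
        raise ValueError(f"prompt_template missing placeholder(s): {', '.join(missing)}")
```

Running this before `POST`/`PATCH` avoids a round trip that would fail schema validation anyway.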

Common failure cases:
- Missing or invalid API key.
- Payload schema validation errors.
- `prompt_template` is missing `{query}` or `{answer}` placeholder.

## Field glossary

**`prompt_template`**
A string with `{query}` and `{answer}` placeholders. At validation time, the guardrail substitutes the user's query and the LLM's answer, then asks the model to respond `YES` (relevant) or `NO` (not relevant).

Default template used when no custom prompt is configured:
```
Query: {query}
Answer: {answer}

Does the answer fully satisfy the query and constraints?
Answer only YES or NO.
```
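Assuming substitution is plain placeholder replacement (an assumption; the validator's internals may differ), rendering the default template amounts to:

```python
DEFAULT_TEMPLATE = (
    "Query: {query}\n"
    "Answer: {answer}\n\n"
    "Does the answer fully satisfy the query and constraints?\n"
    "Answer only YES or NO."
)

def render_prompt(template: str, query: str, answer: str) -> str:
    # Fill the two documented placeholders with the runtime values.
    return template.format(query=query, answer=answer)

rendered = render_prompt(
    DEFAULT_TEMPLATE,
    "Is paracetamol safe in pregnancy?",
    "Yes, at recommended doses it is generally considered safe.",
)
```

The model then sees `rendered` and is expected to reply `YES` or `NO`.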
Comment on lines +19 to +25

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Add language identifiers to fenced code blocks to satisfy markdownlint.

Both fenced blocks should declare a language (e.g., text) to clear MD040 warnings.

Proposed patch:

-```
+```text
 Query: {query}
 Answer: {answer}

 Does the answer fully satisfy the query and constraints?
 Answer only YES or NO.

@@

-```
+```text
 You are evaluating a maternal health assistant.
 Query: {query}
 Answer: {answer}

 Does the answer directly address the maternal health query with accurate information?
 Answer only YES or NO.

Also applies to: 30-37

🧰 Tools
🪛 markdownlint-cli2 (0.22.1)

[warning] 19-19: Fenced code blocks should have a language specified (MD040, fenced-code-language)



NGOs can customise this to add domain-specific constraints, language preferences, or stricter relevance criteria for their use case.

Example custom template:
```
You are evaluating a maternal health assistant.
Query: {query}
Answer: {answer}

Does the answer directly address the maternal health query with accurate information?
Answer only YES or NO.
```

**`name`**
Human-readable label for this prompt config (max 100 characters).

**`description`**
What this prompt evaluates (max 500 characters).
10 changes: 10 additions & 0 deletions backend/app/api/docs/answer_relevance_prompts/delete_prompt.md
@@ -0,0 +1,10 @@
Deletes an answer relevance prompt config by id for the tenant resolved from `X-API-KEY`.

Behavior notes:
- Tenant scope is enforced from the API key context.
- Deletion is permanent; any guardrail configs referencing this `custom_prompt_id` will fail to resolve at runtime after deletion.

Common failure cases:
- Missing or invalid API key.
- Prompt config not found in tenant's scope.
- Invalid id format.
9 changes: 9 additions & 0 deletions backend/app/api/docs/answer_relevance_prompts/get_prompt.md
@@ -0,0 +1,9 @@
Fetches a single answer relevance prompt config by id for the tenant resolved from `X-API-KEY`.

Behavior notes:
- Tenant scope is enforced: only configs belonging to the resolved `organization_id` and `project_id` are accessible.

Common failure cases:
- Missing or invalid API key.
- Prompt config not found in tenant's scope.
- Invalid id format.
12 changes: 12 additions & 0 deletions backend/app/api/docs/answer_relevance_prompts/list_prompts.md
@@ -0,0 +1,12 @@
Lists answer relevance prompt configs for the tenant resolved from `X-API-KEY`.

Behavior notes:
- Returns all prompt configs scoped to the tenant's `organization_id` and `project_id`.
- Supports pagination via `offset` and `limit`.
- `offset` defaults to `0`.
- `limit` is optional; when omitted, no limit is applied.
- Results are ordered by `created_at` ascending, then `id`.
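The `created_at`-then-`id` ordering makes pagination deterministic even when two prompts share a timestamp; an illustrative sketch of the contract (an in-memory stand-in, not the actual database query):

```python
from datetime import datetime

rows = [
    {"id": "b", "created_at": datetime(2026, 5, 8, 12, 0)},
    {"id": "a", "created_at": datetime(2026, 5, 8, 12, 0)},  # same timestamp as "b"
    {"id": "c", "created_at": datetime(2026, 5, 8, 11, 0)},
]

def list_page(rows, offset=0, limit=None):
    # created_at ascending, id as tiebreaker: a stable, repeatable order.
    ordered = sorted(rows, key=lambda r: (r["created_at"], r["id"]))
    end = None if limit is None else offset + limit
    return ordered[offset:end]

page = list_page(rows, offset=0, limit=2)
```

Without the `id` tiebreaker, rows with equal `created_at` could swap between pages across requests.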

Common failure cases:
- Missing or invalid API key.
- Invalid pagination values.
12 changes: 12 additions & 0 deletions backend/app/api/docs/answer_relevance_prompts/update_prompt.md
@@ -0,0 +1,12 @@
Partially updates an answer relevance prompt config by id for the tenant resolved from `X-API-KEY`.

Behavior notes:
- Supports patch-style updates; omitted fields remain unchanged.
- Tenant scope is enforced from the API key context.
- If `prompt_template` is updated, it must still contain both `{query}` and `{answer}` placeholders.

Common failure cases:
- Missing or invalid API key.
- Prompt config not found in tenant's scope.
- Payload schema validation errors.
- Updated `prompt_template` is missing `{query}` or `{answer}` placeholder.
1 change: 1 addition & 0 deletions backend/app/api/docs/guardrails/run_guardrails.md
@@ -8,6 +8,7 @@ Behavior notes:
- For `ban_list`, `ban_list_id` can be resolved to `banned_words` from tenant ban list configs.
- For `topic_relevance`, `topic_relevance_config_id` is required and is resolved to `configuration` + `prompt_schema_version` from tenant topic relevance configs. Requires `OPENAI_API_KEY` to be configured; returns a validation failure with an explicit error if missing.
- For `llm_critic`, `OPENAI_API_KEY` must be configured; returns `success=false` with an explicit error if missing.
- For `answer_relevance_custom_llm`, `input` must be a JSON string `{"query": "...", "answer": "..."}`. Pass `custom_prompt_id` to use a tenant-stored prompt template, or `prompt_template` inline. Requires `OPENAI_API_KEY`.
⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Clarify precedence/mutual exclusivity for custom_prompt_id vs prompt_template.

Line 11 says “or”, but doesn’t define behavior if clients send both. Please document whether they are mutually exclusive or which one wins.

Suggested doc tweak
-- For `answer_relevance_custom_llm`, `input` must be a JSON string `{"query": "...", "answer": "..."}`. Pass `custom_prompt_id` to use a tenant-stored prompt template, or `prompt_template` inline. Requires `OPENAI_API_KEY`.
+- For `answer_relevance_custom_llm`, `input` must be a JSON string `{"query": "...", "answer": "..."}`. Use `custom_prompt_id` for a tenant-stored prompt template or `prompt_template` inline, and document the behavior when both are provided (mutually exclusive vs precedence). Requires `OPENAI_API_KEY`.

- For `llamaguard_7b`, `policies` accepts human-readable policy names (see table below). If omitted, all policies are enforced by default.

| `policies` value | Policy enforced |
2 changes: 2 additions & 0 deletions backend/app/api/main.py
@@ -1,6 +1,7 @@
from fastapi import APIRouter

from app.api.routes import (
    answer_relevance_prompts,
    ban_lists,
    guardrails,
    topic_relevance_configs,
@@ -9,6 +10,7 @@
)

api_router = APIRouter()
api_router.include_router(answer_relevance_prompts.router)
api_router.include_router(ban_lists.router)
api_router.include_router(guardrails.router)
api_router.include_router(topic_relevance_configs.router)