Merged
65 changes: 65 additions & 0 deletions .github/workflows/docs-freeze.yml
@@ -0,0 +1,65 @@
on:
  workflow_dispatch:

name: Docs - Update freeze cache

env:
  UV_VERSION: "0.4.x"
  PYTHON_VERSION: 3.12

permissions:
  contents: write

jobs:
  docs-freeze:
    runs-on: ubuntu-latest

    env:
      OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
      ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}

    steps:
      - name: Check out repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: 🔵 Set up Quarto
        uses: quarto-dev/quarto-actions/setup@v2
        with:
          version: 1.9.37

      - name: 🚀 Install uv
        uses: astral-sh/setup-uv@v3
        with:
          version: ${{ env.UV_VERSION }}

      - name: 🐍 Set up Python ${{ env.PYTHON_VERSION }}
        run: uv python install ${{ env.PYTHON_VERSION }}

      - name: 📦 Install chatlas and dependencies
        run: uv sync --python ${{ env.PYTHON_VERSION }} --all-extras

      - name: 🔌 Activate venv
        run: |
          source .venv/bin/activate
          echo "$VIRTUAL_ENV/bin" >> $GITHUB_PATH
          echo "VIRTUAL_ENV=$VIRTUAL_ENV" >> $GITHUB_ENV

      - name: Run quartodoc
        run: make quartodoc

      - name: 🧊 Render docs to populate freeze cache
        run: quarto render docs

      - name: 💾 Commit updated freeze cache
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add docs/_freeze/
          if git diff --staged --quiet; then
            echo "No freeze cache changes to commit."
          else
            git commit -m "docs: update freeze cache [skip ci]"
            git push origin HEAD
          fi
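The final workflow step above commits and pushes only when the staged diff is non-empty. A minimal local sketch of that guard pattern, run against a throwaway repository (the file name and commit message are illustrative):

```shell
# Reproduce the commit-guard pattern from the workflow in a temporary repo:
# stage a file, then commit only if `git diff --staged --quiet` reports changes.
set -e
tmpdir=$(mktemp -d)
cd "$tmpdir"
git init -q
git config user.name "test"
git config user.email "test@example.com"

echo "cache entry" > freeze.json
git add freeze.json

# --quiet exits non-zero when there are staged changes, so the else branch commits.
if git diff --staged --quiet; then
  echo "No freeze cache changes to commit."
else
  git commit -q -m "docs: update freeze cache [skip ci]"
fi

git log --oneline
```

Because the guard checks the *staged* diff after `git add`, re-running the workflow when the rendered freeze cache is unchanged produces no empty commit.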
2 changes: 1 addition & 1 deletion .github/workflows/docs-publish.yml
@@ -34,7 +34,7 @@ jobs:
      - name: 🔵 Set up Quarto
        uses: quarto-dev/quarto-actions/setup@v2
        with:
-         version: 1.6.26
+         version: 1.9.37

- name: 🚀 Install uv
uses: astral-sh/setup-uv@v3
4 changes: 4 additions & 0 deletions docs/.gitignore
@@ -8,3 +8,7 @@ objects.txt
/reference

CHANGELOG.md
+CHANGELOG.html
+/CHANGELOG_files/
+
+**/*.quarto_ipynb
7 changes: 7 additions & 0 deletions docs/_freeze/site_libs/clipboard/clipboard.min.js

Some generated files are not rendered by default.

@@ -0,0 +1,16 @@
{
"hash": "2bba27dce66fc08e608e5325d4da80f1",
"result": {
"engine": "jupyter",
"markdown": "---\ntitle: Classification\ncallout-appearance: simple\n---\n\nThe following example, which [closely inspired by the Claude documentation](https://github.com/anthropics/anthropic-cookbook/blob/main/tool_use/extracting_structured_json.ipynb), shows how `.chat_structured()` can be used to perform text classification.\n\n::: {#3a69def2 .cell execution_count=1}\n``` {.python .cell-code}\nfrom typing import Literal\n\nfrom chatlas import ChatOpenAI\nfrom pydantic import BaseModel, Field\nimport pandas as pd\n\ntext = \"The new quantum computing breakthrough could revolutionize the tech industry.\"\n\n\nclass Classification(BaseModel):\n name: Literal[\n \"Politics\", \"Sports\", \"Technology\", \"Entertainment\", \"Business\", \"Other\"\n ] = Field(description=\"The category name\")\n\n score: float = Field(\n description=\"The classification score for the category, ranging from 0.0 to 1.0.\"\n )\n\n\nclass Classifications(BaseModel):\n \"\"\"Array of classification results. The scores should sum to 1.\"\"\"\n classifications: list[Classification]\n\n\nchat = ChatOpenAI()\ndata = chat.chat_structured(text, data_model=Classifications)\npd.DataFrame([c.model_dump() for c in data.classifications])\n```\n\n::: {.cell-output .cell-output-display execution_count=1}\n```{=html}\n<div>\n<style scoped>\n .dataframe tbody tr th:only-of-type {\n vertical-align: middle;\n }\n\n .dataframe tbody tr th {\n vertical-align: top;\n }\n\n .dataframe thead th {\n text-align: right;\n }\n</style>\n<table border=\"1\" class=\"dataframe\">\n <thead>\n <tr style=\"text-align: right;\">\n <th></th>\n <th>name</th>\n <th>score</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>0</th>\n <td>Technology</td>\n <td>0.94</td>\n </tr>\n <tr>\n <th>1</th>\n <td>Business</td>\n <td>0.04</td>\n </tr>\n <tr>\n <th>2</th>\n <td>Other</td>\n <td>0.02</td>\n </tr>\n </tbody>\n</table>\n</div>\n```\n:::\n:::\n\n\n",
"supporting": [
"classification_files"
],
"filters": [],
"includes": {
"include-in-header": [
"<script src=\"https://cdn.jsdelivr.net/npm/requirejs@2.3.6/require.min.js\" integrity=\"sha384-c9c+LnTbwQ3aujuU7ULEPVvgLs+Fn6fJUvIGTsuu1ZcCf11fiEubah0ttpca4ntM sha384-6V1/AdqZRWk1KAlWbKBlGhN7VG4iE/yAZcO6NZPMF8od0vukrvr0tg4qY6NSrItx\" crossorigin=\"anonymous\"></script>\n<script src=\"https://cdn.jsdelivr.net/npm/jquery@3.5.1/dist/jquery.min.js\" integrity=\"sha384-ZvpUoO/+PpLXR1lu4jmpXWu80pZlYUAfxl5NsBMWOEPSjUn/6Z/hRTt8+pR6L4N2\" crossorigin=\"anonymous\" data-relocate-top=\"true\"></script>\n<script type=\"application/javascript\">define('jquery', [],function() {return window.jQuery;})</script>\n"
]
}
}
}
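The frozen result above was produced by the classification example embedded in its `markdown` field. As a standalone sketch, the same Pydantic data model can be exercised without any API call by validating a hand-written payload (the payload values here are illustrative, taken from the frozen output table):

```python
# Data model reproduced from the frozen classification example;
# `model_validate` stands in for the structured response a model would return.
from typing import Literal

from pydantic import BaseModel, Field


class Classification(BaseModel):
    name: Literal[
        "Politics", "Sports", "Technology", "Entertainment", "Business", "Other"
    ] = Field(description="The category name")

    score: float = Field(
        description="The classification score for the category, ranging from 0.0 to 1.0."
    )


class Classifications(BaseModel):
    """Array of classification results. The scores should sum to 1."""

    classifications: list[Classification]


# Validate a hand-written payload shaped like a structured-output response.
data = Classifications.model_validate(
    {
        "classifications": [
            {"name": "Technology", "score": 0.94},
            {"name": "Business", "score": 0.04},
            {"name": "Other", "score": 0.02},
        ]
    }
)
print(data.classifications[0].name)  # Technology
```

Because `name` is a `Literal`, an out-of-vocabulary category fails validation, which is what makes the schema useful as a constraint on model output rather than just documentation.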
@@ -0,0 +1,16 @@
{
"hash": "99534743ffcbbb600a2d7fa317246f3d",
"result": {
"engine": "jupyter",
"markdown": "---\ntitle: Entity recognition\ncallout-appearance: simple\n---\n\nThe following example, which [closely inspired by the Claude documentation](https://github.com/anthropics/anthropic-cookbook/blob/main/tool_use/extracting_structured_json.ipynb), shows how `.chat_structured()` can be used to perform entity recognition.\n\n::: {#8a46015f .cell execution_count=1}\n``` {.python .cell-code}\nfrom chatlas import ChatOpenAI\nfrom pydantic import BaseModel, Field\nimport pandas as pd\n\n# | warning: false\ntext = \"John works at Google in New York. He met with Sarah, the CEO of Acme Inc., last week in San Francisco.\"\n\n\nclass NamedEntity(BaseModel):\n \"\"\"Named entity in the text.\"\"\"\n\n name: str = Field(description=\"The extracted entity name\")\n\n type_: str = Field(description=\"The entity type, e.g. 'person', 'location', 'organization'\")\n\n context: str = Field(description=\"The context in which the entity appears in the text.\")\n\n\nclass NamedEntities(BaseModel):\n \"\"\"Named entities in the text.\"\"\"\n\n entities: list[NamedEntity] = Field(description=\"Array of named entities\")\n\n\nchat = ChatOpenAI()\ndata = chat.chat_structured(text, data_model=NamedEntities)\npd.DataFrame([e.model_dump() for e in data.entities])\n```\n\n::: {.cell-output .cell-output-display execution_count=1}\n```{=html}\n<div>\n<style scoped>\n .dataframe tbody tr th:only-of-type {\n vertical-align: middle;\n }\n\n .dataframe tbody tr th {\n vertical-align: top;\n }\n\n .dataframe thead th {\n text-align: right;\n }\n</style>\n<table border=\"1\" class=\"dataframe\">\n <thead>\n <tr style=\"text-align: right;\">\n <th></th>\n <th>name</th>\n <th>type_</th>\n <th>context</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>0</th>\n <td>John</td>\n <td>person</td>\n <td>John works at Google in New York.</td>\n </tr>\n <tr>\n <th>1</th>\n <td>Google</td>\n <td>organization</td>\n <td>John works at Google in New York.</td>\n </tr>\n <tr>\n <th>2</th>\n <td>New York</td>\n 
<td>location</td>\n <td>John works at Google in New York.</td>\n </tr>\n <tr>\n <th>3</th>\n <td>Sarah</td>\n <td>person</td>\n <td>He met with Sarah, the CEO of Acme Inc., last ...</td>\n </tr>\n <tr>\n <th>4</th>\n <td>Acme Inc.</td>\n <td>organization</td>\n <td>He met with Sarah, the CEO of Acme Inc., last ...</td>\n </tr>\n <tr>\n <th>5</th>\n <td>San Francisco</td>\n <td>location</td>\n <td>He met with Sarah, the CEO of Acme Inc., last ...</td>\n </tr>\n </tbody>\n</table>\n</div>\n```\n:::\n:::\n\n\n",
"supporting": [
"entity-recognition_files"
],
"filters": [],
"includes": {
"include-in-header": [
"<script src=\"https://cdn.jsdelivr.net/npm/requirejs@2.3.6/require.min.js\" integrity=\"sha384-c9c+LnTbwQ3aujuU7ULEPVvgLs+Fn6fJUvIGTsuu1ZcCf11fiEubah0ttpca4ntM sha384-6V1/AdqZRWk1KAlWbKBlGhN7VG4iE/yAZcO6NZPMF8od0vukrvr0tg4qY6NSrItx\" crossorigin=\"anonymous\"></script>\n<script src=\"https://cdn.jsdelivr.net/npm/jquery@3.5.1/dist/jquery.min.js\" integrity=\"sha384-ZvpUoO/+PpLXR1lu4jmpXWu80pZlYUAfxl5NsBMWOEPSjUn/6Z/hRTt8+pR6L4N2\" crossorigin=\"anonymous\" data-relocate-top=\"true\"></script>\n<script type=\"application/javascript\">define('jquery', [],function() {return window.jQuery;})</script>\n"
]
}
}
}