

The only open source Python library providing declarative, transactional data infrastructure for building multimodal AI applications — with incremental storage, transformation, indexing, retrieval, and orchestration of data, all with full operational integrity.


Quick Start | Documentation | API Reference | Starter Kit | AI Coding Skill | Pixeltable Cloud


Installation

pip install pixeltable

Pixeltable bundles its own transactional database, orchestration engine, and local dashboard. No Docker, no external services — pip install is all you need. All data is managed in ~/.pixeltable and accessed through the Python SDK. See Working with External Files and Storage Architecture for details.

Quick Start

Define your data processing and AI workflow declaratively using computed columns on tables. Focus on your logic, not the data plumbing.

pip install pixeltable google-genai torch transformers scenedetect

Set your Gemini API key via environment variable or ~/.pixeltable/config.toml. See Configuration for all provider keys and options.
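For example, the key can be set via the environment (the variable name below follows the google-genai client's convention; it is an assumption here — see the Configuration docs for the exact key names and the config.toml equivalent):

```shell
# Set the Gemini API key for the current shell session
export GEMINI_API_KEY="your-api-key"
```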

import pixeltable as pxt
from pixeltable.functions import gemini, huggingface

# 1. Store — structured data + media references, versioned and materialized automatically
videos = pxt.create_table('video_search', {'video': pxt.Video, 'title': pxt.String})

# 2. Orchestrate — computed columns are nodes in the table's DAG; the table is the pipeline
videos.add_computed_column(scenes=videos.video.scene_detect_adaptive())

# 3. AI integration — external API calls with rate limiting, retry, and async parallelism
videos.add_computed_column(
    response=gemini.generate_content(
        [videos.video, 'Describe this video in detail.'], model='gemini-3-flash-preview'
    )
)

# 4. JSON path expressions — extract nested fields with just-in-time typing
videos.add_computed_column(
    description=videos.response.candidates[0].content.parts[0].text
)
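The path expression above is plain nested indexing over the response JSON. On a raw Gemini-style response dict (field names taken from the expression above, payload values purely illustrative), the equivalent Python would be:

```python
# Illustrative response payload shaped like the column expression above
response = {
    'candidates': [
        {'content': {'parts': [{'text': 'A night-time street tour of Bangkok.'}]}}
    ]
}

# Same traversal as videos.response.candidates[0].content.parts[0].text
description = response['candidates'][0]['content']['parts'][0]['text']
```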

# 5. Incremental index maintenance — embedding indexes stay in sync, no ETL pipeline needed
videos.add_embedding_index(
    'video',
    embedding=gemini.embed_content.using(model='gemini-embedding-2-preview')
)

# Insert data — triggers the full pipeline automatically
base_url = 'https://raw.githubusercontent.com/pixeltable/pixeltable/release/docs/resources'
videos.insert([
    {'video': f'{base_url}/bangkok.mp4', 'title': 'Bangkok Street Tour'},
    {'video': f'{base_url}/The-Pursuit-of-Happiness-Video-Extract.mp4', 'title': 'The Pursuit of Happiness'},
])

# 6. Retrieve — structured + unstructured data side by side, with on-the-fly transforms
videos.select(
    videos.video,
    videos.title,
    videos.description,
    detections=huggingface.detr_for_object_detection(
        videos.video.extract_frame(timestamp=2.0),
        model_id='facebook/detr-resnet-50',
    ),
).collect()

# 7. Cross-modal search — find similar videos using a reference image, with filters
sim = videos.video.similarity(image=f'{base_url}/The-Pursuit-of-Happiness-Screenshot.png')
videos.where(videos.description != None).order_by(sim, asc=False).limit(5).collect()

What Pixeltable Does

| You Write | Pixeltable Does |
| --- | --- |
| `pxt.Image`, `pxt.Video`, `pxt.Document` columns | Stores media, handles formats, caches from URLs |
| `add_computed_column(fn(...))` | Runs incrementally, caches results, retries failures |
| `add_embedding_index(column)` | Manages vector storage, keeps index in sync |
| `@pxt.udf` / `@pxt.query` | Creates reusable functions with dependency tracking |
| `table.insert(...)` | Triggers all dependent computations automatically |
| `t.sample(5).select(t.text, summary=udf(t.text))` | Experiment on a sample — nothing stored, calls parallelized and cached |
| `table.select(...).collect()` | Returns structured + unstructured data together |
| (nothing — it's automatic) | Versions all data and schema changes for time travel |

That single workflow replaces most of the typical AI stack:

| Instead of ... | Pixeltable gives you ... |
| --- | --- |
| PostgreSQL / MySQL | `pxt.create_table()` — schema is Python, versioned automatically |
| pgAdmin / Retool | Built-in local dashboard — auto-launches, zero config |
| Pinecone / Weaviate / Qdrant | `add_embedding_index()` — one line, stays in sync |
| S3 / boto3 / blob storage | `pxt.Image` / `Video` / `Audio` / `Document` types with caching; `destination='s3://...'` |
| Airflow / Prefect / Celery | Computed columns trigger on insert — no orchestrator needed |
| LangChain / LlamaIndex (RAG) | `@pxt.query` + `.similarity()` + computed column chaining |
| pandas / polars (multimodal) | `.sample()`, ephemeral UDFs, then `add_computed_column()` |
| DVC / MLflow / W&B | Built-in `history()`, `revert()`, time travel (`table:N`), snapshots |
| Custom retry / rate-limit / caching | Built into every AI integration; results cached, only new rows recomputed |
| Custom ETL / glue code | Declarative schema — Pixeltable handles execution, caching, incremental updates |
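The "only new rows recomputed" behavior boils down to keying results by row. A pure-Python sketch of that contract (a conceptual illustration, not Pixeltable's implementation):

```python
def recompute(rows, cache, fn):
    """Compute fn only for rows whose results are not already cached."""
    for row in rows:
        if row['id'] not in cache:
            cache[row['id']] = fn(row)
    return cache

cache = {}
recompute([{'id': 1, 'x': 2}], cache, lambda r: r['x'] * 10)
# A second pass over old + new rows only computes the new row
recompute([{'id': 1, 'x': 2}, {'id': 2, 'x': 3}], cache, lambda r: r['x'] * 10)
```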

On top of these, Pixeltable ships with built-in functions for media processing (FFmpeg, Pillow, spaCy), embeddings (sentence-transformers, CLIP), and 30+ AI providers (OpenAI, Anthropic, Gemini, Ollama, and more). For anything domain-specific, wrap your own logic with @pxt.udf. You still write the application layer (FastAPI, React, Docker).

Deployment options: Pixeltable can serve as your full backend (managing media locally or syncing with S3/GCS/Azure, plus built-in vector search and orchestration) or as an orchestration layer alongside your existing infrastructure.

Demo

See Pixeltable in action — table creation, computed columns, multimodal processing, and querying in a single workflow:

Pixeltable.2-min.Overview.mp4

Core Capabilities

Store: Unified Multimodal Interface

pxt.Image, pxt.Video, pxt.Audio, pxt.Document, pxt.Json – manage diverse data consistently.

t = pxt.create_table(
    'media',
    {
        'img': pxt.Image,
        'video': pxt.Video,
        'audio': pxt.Audio,
        'document': pxt.Document,
        'metadata': pxt.Json,
    },
)

Type System · Tables & Data

Orchestrate: Declarative Computed Columns

Define processing steps once; they run automatically on new/updated data. Supports API calls (OpenAI, Anthropic, Gemini), local inference (Hugging Face, YOLOX, Whisper), vision models, and any Python logic.

# LLM API call
t.add_computed_column(
    summary=openai.chat_completions(
        messages=[{'role': 'user', 'content': t.text}], model='gpt-4o-mini'
    )
)

# Local model inference
t.add_computed_column(
    classification=huggingface.vit_for_image_classification(t.image)
)

# Vision analysis (multimodal)
t.add_computed_column(
    description=openai.chat_completions(
        messages=[{'role': 'user', 'content': [
            {'type': 'text', 'text': 'Describe this image'},
            {'type': 'image_url', 'image_url': t.image},
        ]}],
        model='gpt-4o-mini'
    )
)

Computed Columns · AI Integrations · Sample App: Prompt Studio

Iterate: Explode & Process Media

Create views with iterators to explode one row into many (video→frames, doc→chunks, audio→segments).

from pixeltable.functions.video import frame_iterator
from pixeltable.functions.document import document_splitter

# Document chunking with overlap & metadata
chunks = pxt.create_view(
    'chunks', docs,
    iterator=document_splitter(
        document=docs.doc,
        separators='sentence,token_limit',
        overlap=50, limit=500
    )
)

# Video frame extraction
frames = pxt.create_view(
    'frames', videos,
    iterator=frame_iterator(video=videos.video, fps=0.5)
)
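Conceptually, an iterator is a one-to-many row expansion: each parent row fans out into child rows that keep a reference to their parent. A minimal pure-Python sketch of that contract (names and splitting logic are illustrative, not Pixeltable's API):

```python
def explode(rows, iterator):
    """Expand each input row into zero or more output rows, keeping parent fields."""
    for row in rows:
        for item in iterator(row):
            yield {**row, **item}

def sentence_splitter(row):
    # Toy stand-in for document_splitter: one output row per sentence
    for pos, text in enumerate(row['doc'].split('. ')):
        yield {'pos': pos, 'text': text}

chunks = list(explode([{'doc': 'First. Second. Third'}], sentence_splitter))
# Each chunk carries its parent document plus chunk-level fields
```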

Views · Iterators · RAG Pipeline

Index: Built-in Vector Search

Add embedding indexes and perform similarity searches directly on tables/views.

t.add_embedding_index(
    'img',
    embedding=clip.using(model_id='openai/clip-vit-base-patch32')
)

sim = t.img.similarity(string='cat playing with yarn')
results = t.order_by(sim, asc=False).limit(10).collect()
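Similarity search is ultimately a rank-by-score ordering. A pure-Python sketch of what `order_by(sim, asc=False).limit(k)` does over precomputed embeddings (cosine similarity assumed; vectors and labels are made up for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

query = [1.0, 0.0]
rows = [('dog', [0.9, 0.1]), ('car', [0.1, 0.9]), ('cat', [0.8, 0.2])]

# Rank rows by similarity to the query, descending, and keep the top 2
top = sorted(rows, key=lambda r: cosine(query, r[1]), reverse=True)[:2]
```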

Embedding Indexes · Semantic Search · Image Search App

Extend: Bring Your Own Code

Extend Pixeltable with UDFs, reusable queries, batch processing, and custom aggregators.

@pxt.udf
def format_prompt(context: list, question: str) -> str:
    return f'Context: {context}\nQuestion: {question}'

@pxt.query
def search_by_topic(topic: str):
    return t.where(t.category == topic).select(t.title, t.summary)

UDFs Guide · Custom Aggregates

Agents & Tools: Tool Calling & MCP Integration

Register @pxt.udf, @pxt.query functions, or MCP servers as callable tools. LLMs decide which tool to invoke; Pixeltable executes and stores results.

# Load tools from MCP server, UDFs, and query functions
mcp_tools = pxt.mcp_udfs('http://localhost:8000/mcp')
tools = pxt.tools(get_weather_udf, search_context_query, *mcp_tools)

# LLM decides which tool to call; Pixeltable executes it
t.add_computed_column(
    tool_output=invoke_tools(tools, t.llm_tool_choice)
)
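Under the hood, tool invocation reduces to dispatching the LLM's structured choice to a named function. A pure-Python sketch of the idea (a conceptual illustration, not Pixeltable's `invoke_tools` implementation):

```python
def invoke(tools, tool_choice):
    """Look up the chosen tool by name and call it with the LLM-provided arguments."""
    fn = tools[tool_choice['name']]
    return fn(**tool_choice['arguments'])

# Hypothetical registry and LLM tool choice
tools = {'get_weather': lambda city: f'Sunny in {city}'}
result = invoke(tools, {'name': 'get_weather', 'arguments': {'city': 'Paris'}})
```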

Tool Calling Cookbook · Agents & MCP · Pixelbot · Pixelagent

Query & Experiment: The Best Path from Prototype to Production

Unlike pandas/polars, Pixeltable persists everything, parallelizes API calls automatically, caches results, and turns your experiment into production with a one-line change. No separate notebook → pipeline handoff:

# Explore with a familiar DSL — filter, sample, apply UDFs ephemerally
results = (
    t.where(t.score > 0.8)
    .order_by(t.timestamp)
    .select(t.image, score=t.score)
    .limit(10)
    .collect()
)

# Sample 5 rows and test a UDF — nothing stored, API calls parallelized and cached
t.sample(5).select(t.text, summary=summarize(t.text)).collect()

# Happy? One line to commit — runs on full dataset, skips already-cached rows
t.add_computed_column(summary=summarize(t.text))

Queries & Expressions · Iterative Workflow · Version Control

Version: Data Persistence & Time Travel

All data is automatically stored and versioned. Query any prior version.

t = pxt.get_table('my_table')  # Get a handle to an existing table
t.revert()  # Undo the last modification

t.history()  # Display all prior versions
old_version = pxt.get_table('my_table:472')  # Query a specific version

Version Control · Data Sharing

Inspect: Local Dashboard

Pixeltable ships with a built-in local dashboard that launches automatically when you start a session. Browse tables, inspect schemas, view media with lightbox navigation, visualize your full data pipeline as a DAG, and track computation errors — all from your browser.

import pixeltable as pxt

# Dashboard launches automatically at http://localhost:22089
pxt.init()

# Disable if needed
pxt.init(config_overrides={'start_dashboard': False})
# Or set environment variable: PIXELTABLE_START_DASHBOARD=false

Highlights: Table browser with sorting & filtering · Media preview (images, video, audio) · Column lineage visualization · Pipeline graph · Per-column error tracking · CSV export · Auto-refresh

No extra dependencies. No setup. It's just there.

Import/Export: I/O & Integration

Import from any source and export to ML formats.

# Import from files, URLs, S3, Hugging Face
t.insert(pxt.io.import_csv('data.csv'))
t.insert(pxt.io.import_huggingface_dataset(dataset))

# Export to analytics/ML formats
pxt.io.export_parquet(table, 'data.parquet')
pytorch_ds = table.to_pytorch_dataset('pt')  # → PyTorch DataLoader ready
coco_path = table.to_coco_dataset()          # → COCO annotations

# ML tool integrations
pxt.create_label_studio_project(table, label_config)  # Annotation
pxt.export_images_as_fo_dataset(table, table.image)   # FiftyOne

Data Import · PyTorch Export · Label Studio · Data Wrangling for ML

Tutorials & Cookbooks

Colab notebooks covering fundamentals and cookbooks, provider guides (OpenAI, Anthropic, Gemini, Ollama, DeepSeek, and more), and sample apps on GitHub. See the documentation for the full lists.

External Storage and Pixeltable Cloud

Supported storage backends: S3, GCS, Azure, R2, B2, Tigris.

Store computed media using the destination parameter on columns, or set defaults globally via PIXELTABLE_OUTPUT_MEDIA_DEST and PIXELTABLE_INPUT_MEDIA_DEST. See Configuration.

Data Sharing: Publish datasets to Pixeltable Cloud for team collaboration or public sharing. Replicate public datasets instantly — no account needed for replication.

import pixeltable as pxt

# Replicate a public dataset (no account required)
coco = pxt.replicate(
    remote_uri='pxt://pixeltable:fiftyone/coco_mini_2017',
    local_path='coco-copy'
)

# Publish your own dataset (requires free account)
pxt.publish(source='my-table', destination_uri='pxt://myorg/my-dataset')

# Store computed media in external cloud storage
t.add_computed_column(
    thumbnail=t.image.resize((256, 256)),
    destination='s3://my-bucket/thumbnails/'
)

Data Sharing Guide | Cloud Storage | Public Datasets

Built with Pixeltable

| Project | Description |
| --- | --- |
| Starter Kit | Production-ready FastAPI + React app with deployment configs for Docker, Helm, Terraform (EKS/GKE/AKS), and AWS CDK |
| Pixelbot | Multimodal AI agent: an interactive data studio with on-demand ML inference, media generation, and a database explorer |
| Pixelagent | Lightweight agent framework with built-in memory and tool orchestration |
| Pixelmemory | Persistent memory layer for AI applications |
| Skill | AI coding skill for Cursor, Claude Code, Copilot, Windsurf, and other AI IDEs — reduces hallucination and generates accurate Pixeltable code |
| MCP Server | Model Context Protocol server for Claude, Cursor, and other AI IDEs |

Contributing

We love contributions! Whether it's reporting bugs, suggesting features, improving documentation, or submitting code changes, please check out our Contributing Guide and join the Discussions or our Discord Server.

License

Pixeltable is licensed under the Apache 2.0 License.
