Sinapsis Chatbots

A monorepo for Sinapsis chatbot packages, integrations, and demo webapps.

🐍 Installation • 📦 Packages • 🌐 Webapps • 📙 Documentation • 🔍 License

The sinapsis-chatbots project groups the Sinapsis chatbot packages in one workspace. It covers shared chatbot abstractions, provider-specific templates, retrieval and memory integrations, and demo webapps built on top of those packages.

🐍 Installation

This monorepo includes the following workspace packages:

  • sinapsis-anthropic
  • sinapsis-chatbots-base
  • sinapsis-chat-history
  • sinapsis-llama-cpp
  • sinapsis-llama-index
  • sinapsis-mem0
  • sinapsis-vllm

Install using your preferred package manager. We strongly recommend using uv. To install uv, refer to the official documentation.

Install the root package:

uv pip install sinapsis-chatbots --extra-index-url https://pypi.sinapsis.tech

Or with raw pip:

pip install sinapsis-chatbots --extra-index-url https://pypi.sinapsis.tech

Important

The root package exposes these extras:

  • integrations: installs the package integrations in this workspace
  • webapp: installs the dependencies required to run the demo webapps
  • all: installs both extras

For example, to install everything:

uv pip install sinapsis-chatbots[all] --extra-index-url https://pypi.sinapsis.tech

Tip

If you only want one integration package, install it directly by name, for example sinapsis-vllm or sinapsis-llama-cpp.

📦 Packages

This repository is structured into modular packages, each facilitating the integration of AI-driven chatbots with various LLM frameworks. These packages provide flexible and easy-to-use templates for building and deploying chatbot solutions. Below is an overview of the available packages:

Sinapsis Anthropic

This package offers templates for building text-to-text, image-to-text, and tool-enabled conversational chatbots using Anthropic's Claude models.

  • AnthropicTextGeneration: Template for text and code generation with Claude models using the Anthropic API.

  • AnthropicMultiModal: Template for multimodal chat processing using Anthropic's Claude models.

  • AnthropicWithMCP: Template for Claude chat workflows that expose LLMConversationPacket.tools and consume tool calls/results.
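
These templates wrap Anthropic's Messages API. As a rough illustration of the request shape involved, the sketch below builds a minimal Messages API body as a plain dict; the model name is only an example, and no HTTP call is made:

```python
# Minimal sketch of the request body the Anthropic Messages API expects.
# Built as a plain dict for illustration only; nothing is sent over the network.

def build_claude_request(prompt: str, system: str = "You are a helpful assistant.") -> dict:
    return {
        "model": "claude-3-5-sonnet-latest",  # example model name, not a project default
        "max_tokens": 1024,                   # required by the Messages API
        "system": system,
        "messages": [
            {"role": "user", "content": prompt},
        ],
    }

request_body = build_claude_request("Summarize the Sinapsis chatbot packages.")
```

The templates handle this plumbing for you; the sketch only shows what travels over the wire.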

For specific instructions and further details, see the README.md.

Sinapsis Chatbots Base

This package provides the shared chatbot foundation for building and processing LLMConversationPacket flows.

  • LLMConversationInput: Template for creating conversation packets from prompts, system prompts, and identity fields.

  • QueryContextualizeFromFile: Template for attaching document context from preloaded generic_data entries.

  • EndpointLLMCompletion: Template for connecting to OpenAI-compatible LLM endpoints such as OpenAI, Ollama, llama.cpp server, vLLM server, Gemini, and similar APIs.
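
As a sketch of what an OpenAI-compatible completion request looks like, the snippet below builds the /v1/chat/completions payload shared by the endpoints listed above. The model name and prompt are placeholders, and nothing is sent over the network:

```python
# Sketch of an OpenAI-compatible /v1/chat/completions request body, the shape
# that EndpointLLMCompletion-style templates target. Placeholder values only.

def build_chat_request(prompt: str, model: str = "llama3") -> dict:
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,
    }

# The same payload works against OpenAI, Ollama, llama.cpp server, or vLLM;
# only the base URL (e.g. http://localhost:11434/v1 for Ollama) changes.
payload = build_chat_request("Hello")
```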

For specific instructions and further details, see the README.md.

Sinapsis llama-cpp

This package provides local GGUF-backed inference templates built on top of llama-cpp-python.

  • LLaMACPPTextCompletion: Local text completion with structured-output support.

  • LLaMACPPStreamingTextCompletion: Streaming text completion for partial packet updates during generation.

  • LLaMACPPTextCompletionWithMCP: Local text completion with packet-native MCP tool state.

For specific instructions and further details, see the README.md.

Sinapsis llama-index

This package provides ingestion, retrieval, and reranking templates built on top of LlamaIndex.

  • CodeEmbeddingNodeGenerator: Template for generating embedding nodes from a codebase.

  • EmbeddingNodeGenerator: Template for generating text embeddings using a HuggingFace model.

  • LLaMAIndexInsertNodes: Template for inserting embeddings (nodes) into PostgreSQL vector tables through PGVectorStore.

  • LLaMAIndexNodeRetriever: Template for retrieving nodes and attaching them as packet contexts.

  • LLaMAIndexReranker: Template for reranking selected retrieved contexts before a downstream LLM consumes them.

  • LLaMAIndexSemanticCacheLookup: Reuse cached LLM responses from PGVector when a semantically similar prompt is found under the same request scope.

  • LLaMAIndexSemanticCacheWrite: Persist completed LLM responses into a PGVector-backed semantic cache for future reuse.
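
The semantic cache templates above rest on embedding similarity. The toy sketch below shows the lookup idea with hand-made vectors and cosine similarity; the real templates store embeddings in PGVector rather than a Python list:

```python
import math

# Toy semantic cache: store (embedding, response) pairs and reuse a cached
# response when a new prompt's embedding is close enough. The vectors here
# are hand-made stand-ins for real embedding model output.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

cache = [
    ([1.0, 0.0, 0.2], "Cached answer about installation."),
]

def lookup(query_embedding, threshold=0.9):
    best = max(cache, key=lambda entry: cosine(entry[0], query_embedding))
    score = cosine(best[0], query_embedding)
    return best[1] if score >= threshold else None

hit = lookup([0.9, 0.05, 0.21])   # near the cached vector -> cache hit
miss = lookup([0.0, 1.0, 0.0])    # unrelated direction -> cache miss
```

A cache hit skips the LLM call entirely, which is where the latency and cost savings come from.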

For specific instructions and further details, see the README.md.

Sinapsis Mem0

This package provides persistent memory functionality for Sinapsis agents using Mem0, supporting both managed (Mem0 platform) and self-hosted backends.

  • Managed templates: Mem0ManagedAdd, Mem0ManagedGetAll, Mem0ManagedGetMemory, Mem0ManagedSearch, Mem0ManagedDeleteAll, Mem0ManagedDeleteMemory, Mem0ManagedReset
  • OSS templates: Mem0OSSAdd, Mem0OSSGetAll, Mem0OSSGetMemory, Mem0OSSSearch, Mem0OSSDeleteAll, Mem0OSSDeleteMemory, Mem0OSSReset
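
As a rough illustration of the add / search / delete call pattern these templates expose, here is a toy in-memory store. Real backends use Mem0 with embedding-based search; the substring match below is only a stand-in:

```python
# Toy in-memory sketch of the add / search / delete memory operations the
# Mem0 templates expose. Illustrative only: real deployments back these with
# the Mem0 platform or a self-hosted store and semantic search.

class ToyMemoryStore:
    def __init__(self):
        self._memories: dict[str, list[str]] = {}

    def add(self, user_id: str, text: str) -> None:
        self._memories.setdefault(user_id, []).append(text)

    def search(self, user_id: str, query: str) -> list[str]:
        # Real backends match by embedding similarity, not substrings.
        return [m for m in self._memories.get(user_id, []) if query.lower() in m.lower()]

    def delete_all(self, user_id: str) -> None:
        self._memories.pop(user_id, None)

store = ToyMemoryStore()
store.add("user-1", "Prefers answers in Spanish")
store.add("user-1", "Works on the vLLM integration")
results = store.search("user-1", "spanish")
```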

For specific instructions and further details, see the README.md.

Sinapsis Chat History

This package provides persistent chat history storage for Sinapsis agents using LLMConversationPacket workflows across SQL backends.

  • ChatHistoryFetch: Template for retrieving stored chat history and attaching it to packet messages.

  • ChatHistorySave: Template for persisting the current conversation turn from a packet.

  • ChatHistoryDelete: Template for deleting stored chat history using explicit user_id / session_id scope and optional filters.

  • ChatHistoryReset: Template for dropping and recreating the configured chat history table.

The package supports sqlite by default, with postgres and supabase available through the optional postgres extra.
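
The save / fetch cycle can be sketched against the default sqlite backend with the standard library alone. The table and column names below are illustrative assumptions, not the package's actual schema:

```python
import sqlite3

# Sketch of the save / fetch pattern behind the chat history templates, using
# an in-memory sqlite database. Schema is illustrative, not the real one.

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE chat_history (
           user_id TEXT, session_id TEXT, role TEXT, content TEXT
       )"""
)

def save_turn(user_id, session_id, role, content):
    conn.execute(
        "INSERT INTO chat_history VALUES (?, ?, ?, ?)",
        (user_id, session_id, role, content),
    )

def fetch_history(user_id, session_id):
    rows = conn.execute(
        "SELECT role, content FROM chat_history WHERE user_id = ? AND session_id = ?",
        (user_id, session_id),
    )
    return [{"role": r, "content": c} for r, c in rows]

save_turn("u1", "s1", "user", "Hello")
save_turn("u1", "s1", "assistant", "Hi! How can I help?")
history = fetch_history("u1", "s1")
```

Scoping rows by user_id and session_id is what lets the delete and reset templates target one conversation without touching others.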

For specific instructions and further details, see the README.md.

Sinapsis vLLM

This package offers a suite of templates for running LLMs using vLLM, a high-throughput and memory-efficient inference engine for serving large language models.

  • vLLMTextCompletion: Template for text completion using vLLM with support for structured outputs.

  • vLLMBatchTextCompletion: Template for batched text completion using vLLM's continuous batching engine. Processes multiple conversations in a single batch for improved throughput.

  • vLLMStreamingTextCompletion: Streaming version of vLLMTextCompletion for real-time response generation.

  • vLLMMultiModal: Template for multimodal (text + image) completion using vLLM. Supports vision-language models like Qwen-VL.
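
Continuous batching, which vLLMBatchTextCompletion relies on, lets new requests join a running batch as soon as a slot frees up instead of waiting for the whole batch to drain. The toy scheduler below illustrates only that scheduling idea; the real vLLM engine also manages KV-cache memory and actual token generation:

```python
from collections import deque

# Toy continuous-batching scheduler. Each step advances every active request
# by one "token" and admits waiting requests the moment a slot opens, rather
# than waiting for the current batch to finish. Purely illustrative.

def run_continuous_batching(requests, max_batch_size=2):
    waiting = deque(requests)          # (request_id, tokens_to_generate)
    active, finished, steps = [], [], 0
    while waiting or active:
        # Admit waiting requests into any free batch slots.
        while waiting and len(active) < max_batch_size:
            active.append(list(waiting.popleft()))
        steps += 1
        for req in active:             # one decode step for the whole batch
            req[1] -= 1
        done = [r for r in active if r[1] == 0]
        active = [r for r in active if r[1] > 0]
        finished.extend(r[0] for r in done)
    return finished, steps

# "b" finishes after one step, so "c" joins mid-flight instead of waiting.
finished, steps = run_continuous_batching([("a", 3), ("b", 1), ("c", 2)])
```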

For specific instructions and further details, see the README.md.

🌐 Webapps

The webapps included in this project showcase how the package templates can be combined into runnable chatbot demos.

Important

To run the app you first need to clone this repository:

git clone git@github.com:Sinapsis-ai/sinapsis-chatbots.git
cd sinapsis-chatbots

Note

If you'd like to enable external app sharing in Gradio, set GRADIO_SHARE_APP=true.

Note

The generic webapps/chatbot.py entrypoint can run different provider variants by changing AGENT_CONFIG_PATH. Current chatbot variants are:

  • webapps/configs/llama_cpp/llama_cpp_chatbot.yaml
  • webapps/configs/vllm/vllm_text_generation.yaml
  • webapps/configs/vllm/vllm_multimodal.yaml
  • webapps/configs/anthropic/anthropic_text_generation.yaml
  • webapps/configs/anthropic/anthropic_multimodal.yaml

The dedicated entrypoints are:

  • webapps/chatbot_with_mem0.py (Mem0 chatbot)
  • webapps/rag_chatbot.py (RAG chatbot)

Important

Provider-specific credentials:

  • Anthropic variants require ANTHROPIC_API_KEY
  • Mem0 requires MEM0_API_KEY
  • vLLM multimodal and local llama-cpp variants may require model or GPU tuning depending on your hardware

🐳 Docker

IMPORTANT: This Docker image depends on the sinapsis-nvidia:base image. For detailed instructions, please refer to the Sinapsis README.

  1. Build the sinapsis-chatbots image:
docker compose -f docker/compose.yaml build
  2. Start one webapp service:
  • llama-cpp chatbot:
docker compose -f docker/compose_apps.yaml up chatbot-llama-cpp -d
  • vLLM chatbot:
docker compose -f docker/compose_apps.yaml up chatbot-vllm -d
  • vLLM multimodal chatbot:
docker compose -f docker/compose_apps.yaml up chatbot-vllm-multimodal -d
  • Anthropic chatbot:
export ANTHROPIC_API_KEY=your_api_key
docker compose -f docker/compose_apps.yaml up chatbot-anthropic -d
  • Anthropic multimodal chatbot:
export ANTHROPIC_API_KEY=your_api_key
docker compose -f docker/compose_apps.yaml up chatbot-anthropic-multimodal -d
  • Mem0 chatbot:
export MEM0_API_KEY=your_api_key
docker compose -f docker/compose_apps.yaml up chatbot-mem0 -d
  • RAG chatbot:
docker compose -f docker/compose_apps.yaml up chatbot-rag -d
  3. Check the logs:
  • llama-cpp chatbot:
docker logs -f sinapsis-chatbot-llama-cpp
  • vLLM chatbot:
docker logs -f sinapsis-chatbot-vllm
  • vLLM multimodal chatbot:
docker logs -f sinapsis-chatbot-vllm-multimodal
  • Anthropic chatbot:
docker logs -f sinapsis-chatbot-anthropic
  • Anthropic multimodal chatbot:
docker logs -f sinapsis-chatbot-anthropic-multimodal
  • Mem0 chatbot:
docker logs -f sinapsis-chatbot-mem0
  • RAG chatbot:
docker logs -f sinapsis-chatbot-rag
  4. The logs will display the URL to access the webapp, e.g.:
Running on local URL:  http://127.0.0.1:7860

NOTE: The URL may differ; check the log output for the correct address.

  5. To stop the app:
docker compose -f docker/compose_apps.yaml down

To run a different variant with the generic chatbot entrypoint, update AGENT_CONFIG_PATH in the service environment to point to the desired YAML file under webapps/configs/.

💻 UV

To run the webapp using the uv package manager, follow these steps:

  1. Sync the virtual environment:
uv sync --frozen
  2. Install the workspace packages and webapp dependencies:
uv pip install sinapsis-chatbots[all] --extra-index-url https://pypi.sinapsis.tech
  3. Run one webapp variant:
  • llama-cpp chatbot:
uv run webapps/chatbot.py
  • vLLM chatbot:
export AGENT_CONFIG_PATH=webapps/configs/vllm/vllm_text_generation.yaml
uv run webapps/chatbot.py
  • vLLM multimodal chatbot:
export AGENT_CONFIG_PATH=webapps/configs/vllm/vllm_multimodal.yaml
uv run webapps/chatbot.py
  • Anthropic chatbot:
export ANTHROPIC_API_KEY=your_api_key
export AGENT_CONFIG_PATH=webapps/configs/anthropic/anthropic_text_generation.yaml
uv run webapps/chatbot.py
  • Anthropic multimodal chatbot:
export ANTHROPIC_API_KEY=your_api_key
export AGENT_CONFIG_PATH=webapps/configs/anthropic/anthropic_multimodal.yaml
uv run webapps/chatbot.py
  • Mem0 chatbot:
export MEM0_API_KEY=your_api_key
uv run webapps/chatbot_with_mem0.py
  • RAG chatbot:
uv run webapps/rag_chatbot.py
  4. The terminal will display the URL to access the webapp, e.g.:
Running on local URL:  http://127.0.0.1:7860

NOTE: The URL may vary; check the terminal output for the correct address.

To switch the generic chatbot entrypoint to a different provider or modality, change AGENT_CONFIG_PATH to the corresponding file under webapps/configs/.

📙 Documentation

Documentation for this and other Sinapsis packages is available on the sinapsis website.

Tutorials for different projects within Sinapsis are available on the sinapsis tutorials page.

πŸ” License

This project is licensed under the AGPLv3 license, which encourages open collaboration and sharing. For more details, please refer to the LICENSE file.

For commercial use, please refer to our official Sinapsis website for information on obtaining a commercial license.
