A powerful, production-ready chatbot system with CLI and Web UI, powered by Google Generative AI
- Features
- Installation
- Quick Start
- Configuration
- CLI Commands
- Slash Commands
- Web UI
- Advanced Features
- Architecture
- Troubleshooting
- License
- ✅ Multiple Interfaces: CLI (Typer) + Web UI (Streamlit)
- ✅ AI Models: Google Generative AI (Gemini) integration
- ✅ RAG System: ChromaDB for document retrieval and embedding
- ✅ Memory Management: Persistent session and conversation history
- ✅ Configuration Management: Easy setup with `pixella config`
- ✅ Rich Styling: Beautiful terminal output with colors and formatting
- ✅ Error Handling: Comprehensive error handling with helpful messages
- ✅ Production Ready: Full logging, type hints, modular architecture
- 🔍 RAG (Retrieval-Augmented Generation): Import and query documents
- 💾 Session Management: Save and manage conversation sessions
- 🎯 User Personas: Customize AI responses with user context
- 🔧 Slash Commands: Discord-style commands in interactive mode
- 📊 Statistics: Session and usage tracking
- 🌐 Web Settings: Configure everything from the web UI
- 📄 Document Upload: Import documents directly from the web UI
- Python 3.11 or higher (required)
- Git (for cloning the repository)
- Internet connection (for dependencies)
The installation script will handle pip and virtual environment setup automatically.
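If you're unsure whether your interpreter meets the 3.11 requirement, a quick programmatic check (plain Python, nothing Pixella-specific):

```python
# Check that the running interpreter satisfies the 3.11+ requirement.
import sys

ok = sys.version_info >= (3, 11)
print("Python", sys.version.split()[0], "- OK" if ok else "- too old, need 3.11+")
```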
Use the provided installation script:

```bash
# Navigate to the Pixella directory (if not already there)
# cd Pixella

# Run the installation script
bash scripts/install.sh
```

This script will:
- Detect your operating system (Linux, macOS, Windows/WSL/Git Bash).
- Check for compatible Python (3.11+) and Git installations.
- Clone the repository (if running remotely).
- Create and activate a Python virtual environment (`.venv`).
- Install all Python dependencies into the virtual environment.
- Create the necessary data directories.
- Generate a `.env` template if one doesn't exist.
- Prompt for your Google API key and save it to `.env`.
- Create a `pixella` command wrapper in `bin/` and add it to your shell's PATH.
- Verify the installation.

We strongly recommend the automated installation script, which can also be run directly from the repository:

```bash
bash -c "$(curl -fsSL https://raw.githubusercontent.com/ObaTechHub-inc/Pixella-chatbot/main/scripts/install.sh)"
```

This script will set up everything you need for development. See the Installation docs for more details.
It's highly recommended to use a Python virtual environment to manage dependencies.

```bash
# Navigate to the project directory
cd Pixella

# Create a virtual environment
python3 -m venv .venv

# Activate the virtual environment
# On macOS/Linux:
source .venv/bin/activate

# On Windows (Git Bash/WSL):
source .venv/Scripts/activate

# On Windows (Cmd/PowerShell):
# .\.venv\Scripts\activate.bat   (Cmd)
# .\.venv\Scripts\Activate.ps1   (PowerShell)

# Install Python dependencies
pip install -r requirements.txt
```

Dependencies included:
- `typer[all]` - CLI framework
- `streamlit` - Web interface
- `langchain` - LLM integration
- `langchain-google-genai` - Google AI models
- `chromadb` - Vector database for RAG
- `googlegenerativeaiembeddings` - Embedding models
- `python-dotenv` - Environment variables
- `rich` - Terminal styling
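After installing, you can sanity-check that the core dependencies are importable with a short script like this (an illustrative sketch, not part of Pixella; note that the pip package `python-dotenv` is imported as `dotenv`):

```python
# Check which of a list of Python modules can actually be imported.
import importlib

def find_missing(module_names):
    """Return the subset of module_names that cannot be imported."""
    missing = []
    for name in module_names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

# Module names as imported in Python.
required = ["typer", "streamlit", "langchain", "chromadb", "dotenv", "rich"]

missing = find_missing(required)
if missing:
    print("Missing:", ", ".join(missing))
else:
    print("All core dependencies are importable.")
```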
```bash
python main.py
```

This runs the verification hub that checks:
- Python version (3.11+)
- Dependencies installed
- Configuration ready
- All systems operational
```bash
pixella config --init
```

Or set the API key directly:

```bash
pixella config --google-api-key "your-api-key-here"
```

Interactive CLI mode:

```bash
pixella cli --interactive
```

Single question:

```bash
pixella chat "What is machine learning?"
```

Web UI:

```bash
pixella ui
```

Create a `.env` file in the Pixella directory:
```bash
# Required
GOOGLE_API_KEY=your_api_key_here

# Optional (with defaults)
GOOGLE_AI_MODEL=gemini-2.5-flash
DB_PATH=./db/chroma
USER_NAME=User
USER_PERSONA=
MEMORY_PATH=./data/memory
EMBEDDING_MODEL=all-MiniLM-L6-v2
MODELS_CACHE_DIR=./models
ALWAYS_DEBUG=false
DISABLE_COLORS=false
```

```bash
# Interactive setup
pixella config --init

# View current configuration
pixella config --show

# Set specific values
pixella config --google-api-key "key-here"
pixella config --user-name "John"
pixella config --db-path "./custom/path"

# Reset to defaults
pixella config --reset

# Export configuration
pixella config --export settings.json

# Generate .env template
pixella config --template

# List all available options
pixella config --list
```
```bash
# Show version
pixella --version

# Show all commands
pixella --help

# Send a single message
pixella chat "Your question here"
pixella chat "What is Python?" --debug

# Start interactive mode
pixella cli
pixella cli --interactive
pixella cli --debug

# Launch the web interface
pixella ui
pixella ui --background   # Run in background
pixella ui --end          # Stop background UI
pixella ui --debug

# Run verification
python main.py

# Run tests
pixella test
```

```
/name, /n [text]        Set your name
/persona, /p [text]     Set your persona/context
/clear, /c              Clear conversation history
/stats, /st             Show session statistics
/sessions, /s           List all saved sessions
/rag, /ra               Show RAG system status
/import, /i [file]      Import documents for RAG
/export, /ex [file]     Export RAG data
/models                 List available embedding models
/debug, /d              Toggle debug logging on/off
/model, /m [name]       Switch AI model
/exit, /quit, /q, /x    End the session
/help, /h, /?           Show all commands
```
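Under the hood, Discord-style commands like these are typically handled by splitting off the leading token and dispatching through an alias table. The sketch below is illustrative only, not Pixella's actual `cli.py`:

```python
# Minimal slash-command parser: map aliases to canonical command names.
ALIASES = {
    "/name": "name", "/n": "name",
    "/clear": "clear", "/c": "clear",
    "/exit": "exit", "/quit": "exit", "/q": "exit", "/x": "exit",
}

def parse(line: str):
    """Return (command, argument) for a slash command, or (None, line) for chat."""
    if not line.startswith("/"):
        return None, line                   # plain chat message
    head, _, rest = line.partition(" ")
    return ALIASES.get(head.lower()), rest.strip()

print(parse("/name Alice"))   # ('name', 'Alice')
print(parse("hello there"))   # (None, 'hello there')
```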
```
pixella cli --interactive

You: /name Alice
You: /persona I'm a Python expert with 10 years of experience
You: /import ~/documents/python_guide.txt
You: What are decorators?
You: /rag
You: /stats
You: /exit
```

```bash
# Start the Streamlit UI
pixella ui

# Run in background
pixella ui --background

# Stop the background UI
pixella ui --end

# With debug logging
pixella ui --debug
```

Access at: http://localhost:8501
- Send messages to the chatbot
- View chat history
- Set user name and persona
- Clear chat history
- View all sessions
- Create new sessions
- See session statistics
- Clear all memory
- Upload documents (txt, md, pdf)
- View document count
- View RAG collection info
- Clear RAG database
- Export collection data
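The retrieval step behind these RAG features works by scoring stored document chunks against a query and returning the best matches. Pixella uses ChromaDB with embedding models (e.g. `all-MiniLM-L6-v2`) for this; the stdlib-only toy below illustrates the idea with simple word overlap instead of embeddings:

```python
# Toy retrieval: score each stored chunk against a query by word overlap
# and return the best match. Real RAG uses vector embeddings instead.
def score(query: str, doc: str) -> float:
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / (len(q) or 1)

docs = [
    "Decorators wrap a function to extend its behaviour.",
    "ChromaDB stores embeddings for similarity search.",
]
query = "how do decorators wrap a function"
best = max(docs, key=lambda doc: score(query, doc))
print(best)
```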
Import documents to enhance chatbot responses (note: this feature is experimental and not fully working yet):

```bash
# From the CLI (interactive mode)
/import ~/documents/report.txt
/import ~/documents/guide.pdf
```

From the Web UI:
1. Go to the RAG tab
2. Click the file uploader
3. Select a document
4. Click Import

Save and manage conversations (this feature currently has some known issues):

```bash
# List sessions
/sessions

# Create a new session
/new

# Clear the current session
/clear

# Export a session
/export sessions_backup.json
```

Customize AI responses:

```bash
# Set your name
/name "Your Name"

# Set your persona
/persona "I am a senior software engineer specializing in Python and distributed systems"
```

```bash
# View the current model
/model

# List available models
/models

# Change the model (from config)
pixella config --google-model "gemini-2.5-flash"
```

```
Pixella/
├── main.py             # Verification hub & central entrypoint
├── test.py             # Test script
├── entrypoint.py       # Main CLI router
├── cli.py              # CLI interface with slash commands
├── app.py              # Streamlit web UI
├── chatbot.py          # Core AI chatbot
├── config.py           # Configuration management
├── chromadb_rag.py     # RAG system with ChromaDB
├── memory.py           # Session & memory management
├── requirements.txt    # Python dependencies
├── .env                # Environment variables (create this)
├── .env.template       # Template with all options
├── LICENSE             # MIT License
├── README.md           # This file
├── bin/
│   └── pixella         # Global command wrapper
├── scripts/
│   └── install.sh      # Installation script
├── db/
│   └── chroma/         # ChromaDB storage (created on first use)
├── data/
│   └── memory/         # Session data (created on first use)
└── models/             # Embedding models (created on first use)
```
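The `memory.py` component persists sessions to disk. As a rough illustration, a minimal JSON-file session store could look like the following (the schema and filenames here are assumptions, not Pixella's actual format):

```python
# Illustrative JSON-file session store: save and reload a message list.
import json
import tempfile
from pathlib import Path

def save_session(path: Path, messages: list[dict]) -> None:
    """Write a session's messages to a JSON file."""
    path.write_text(json.dumps({"messages": messages}, indent=2))

def load_session(path: Path) -> list[dict]:
    """Read a session back; an absent file means an empty session."""
    if not path.exists():
        return []
    return json.loads(path.read_text())["messages"]

with tempfile.TemporaryDirectory() as d:
    f = Path(d) / "session-001.json"
    save_session(f, [{"role": "user", "content": "What are decorators?"}])
    print(load_session(f)[0]["content"])  # What are decorators?
```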
```
User Input (CLI/Web UI)
        │
        ├── Config Management (config.py)
        ├── Memory System (memory.py)
        │       ├── SQLite Database / JSON Files
        │       └── Session Persistence
        ├── RAG System (chromadb_rag.py)
        │       ├── ChromaDB Vector Store
        │       └── Document Retrieval
        └── Chatbot (chatbot.py)
                └── Google Generative AI API
```

```
entrypoint.py → cli.py → chatbot.py
                             ↓
                         config.py
                         memory.py
                         chromadb_rag.py

app.py (Streamlit UI)
        ↓
chatbot.py, config.py, memory.py, chromadb_rag.py
```
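As a minimal illustration of the Chatbot → Google Generative AI hop in the diagram above, a direct one-shot call through `langchain-google-genai` might look like this. This is a sketch, not Pixella's `chatbot.py`; it requires a valid `GOOGLE_API_KEY` and uses the README's default model name:

```python
# One-shot Gemini call via langchain-google-genai, guarded on the API key.
import os

def ask_gemini(prompt: str) -> str:
    """Send a single prompt to Gemini and return the text reply."""
    if not os.getenv("GOOGLE_API_KEY"):
        return "GOOGLE_API_KEY not set - skipping API call"
    from langchain_google_genai import ChatGoogleGenerativeAI
    llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash")
    return llm.invoke(prompt).content

print(ask_gemini("In one sentence, what is retrieval-augmented generation?"))
```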
The chatbot uses Google's Generative AI API (formerly PaLM).
Get your API key:
- Go to Google AI Studio
- Click "Get API Key"
- Create a new API key
- Add it to `.env` as `GOOGLE_API_KEY`

Supported models:
- `gemini-2.5-flash` (default, recommended)
- `gemini-2.5-pro`
- Other Gemini models (check the Google AI docs)
If you see an error like `Python 3.11 or higher is required`, you are using an older version of Python. Check your version with `python3 --version`. If you have multiple Python versions installed, run the application with `python3.11` or `python3.12` explicitly.
If you see an error like `ModuleNotFoundError: No module named 'langchain'`, the required dependencies are not installed. Install them with `pip install -r requirements.txt`.
If you are having issues with ChromaDB, try clearing the cache and re-downloading the models:

```bash
rm -rf models/
rm -rf db/
```

Then reinstall the `sentence-transformers` package:

```bash
pip install --force-reinstall sentence-transformers
```

If you see an error like `429 Resource exhausted`, you have exceeded your API quota. Check your usage and limits in Google AI Studio.
If you see an error like `pixella: command not found`, the `pixella` command is not in your PATH or your shell hasn't reloaded its configuration.
For Linux/macOS users, reload your shell configuration:

```bash
# For zsh
source ~/.zshrc

# For bash
source ~/.bashrc

# For other shells, or if the above doesn't work, try sourcing ~/.profile
source ~/.profile
```

You may need to restart your terminal for the changes to take full effect.
For Windows (Git Bash/WSL) users:
The `install.sh` script attempts to add `pixella` to your PATH within your Bash/WSL environment. If the command is not found after installation, try reloading your shell configuration as above, or restart your terminal.
For Windows (Cmd/PowerShell) users:
The `install.sh` script does NOT automatically add `pixella` to the system PATH for native Windows command prompts (Cmd, PowerShell). You will need to manually add the `Pixella/bin` directory to your system's PATH environment variable. The full path is typically `C:\Users\<YourUsername>\.pixella\bin` (if installed remotely) or `path\to\your\cloned\repo\Pixella\bin` (if installed locally).
If you can't connect to the Streamlit UI, make sure port 8501 is not already in use. You can check with `lsof -i :8501`. If the port is in use, kill the process and try again.
Run the verification hub to check everything:
```bash
python main.py
```

This checks:
- ✅ Python version (3.11+)
- ✅ `.env` file present
- ✅ Environment variables set
- ✅ All modules installed
- ✅ Directories exist
- ✅ Chatbot initializes
- ✅ RAG system ready
- ✅ Memory system ready
Contributions are welcome! Feel free to:
- Report bugs
- Suggest features
- Submit pull requests
- Improve documentation
Please read the CONTRIBUTING.md for guidelines.
This project is licensed under the MIT License - see the LICENSE file for details.
You are free to:
- ✅ Use commercially
- ✅ Modify the code
- ✅ Distribute
- ✅ Use privately

You must:
- ✅ Include the license notice
- ✅ Include the copyright notice

Limitations:
- ❌ No warranty
- ❌ No liability
Built with:
- Google Generative AI - LLM backend
- LangChain - LLM integration framework
- ChromaDB - Vector database for RAG
- Streamlit - Web UI framework
- Typer - CLI framework
- Rich - Terminal styling
- HuggingFace - Embedding models
For issues and questions:
- Check the Troubleshooting section
- Run `pixella --help` for command help
- Use `pixella config --show` to view configuration
- Run `python main.py` to verify installation
- Read the Troubleshooting docs: https://obatechhub-inc.github.io/Pixella-chatbot/troubleshooting.html
Planned features:
- Voice input/output support
- Advanced RAG with multi-document search
- Session export/import
- Custom model selection UI
- Plugin system
- Analytics dashboard
- Batch processing
- API server mode
Made with ❤️ by Pixella Contributors

Last updated: December 2025 · Version 1.20.8 · Python 3.11+ | MIT License