Clone your memory and personality using AI - A fully local application that fine-tunes large language models on your personal memories from Notion to create an AI representation of yourself.
- 🔗 Notion Integration - Browser-based API key authentication to fetch all your notes
- 🎯 Personalized Training - Uses your personal details (name, age, location, hobbies, etc.) to create context
- 🚀 Powered by Unsloth - 2x faster fine-tuning with 60% less memory usage
- 💡 LoRA & Full Fine-tuning - Choose between fast LoRA fine-tuning or comprehensive full fine-tuning
- 🎨 Beautiful UI - Clean Gradio interface with tabs for training, settings, and logs
- 🔒 100% Private - All processing happens locally. NO data is collected on our servers
- 📤 HuggingFace Integration - Push your trained models directly to HuggingFace Hub
- ⚙️ Fully Configurable - All settings stored in `charisma.toml` and editable via UI
- 📊 Real-time Console - Live training logs displayed directly in the UI
Larger models produce better, more coherent responses!
- Small models (270M - 1B parameters): May closely mimic your writing style from memories but can struggle with general conversation. Best for quick testing or limited hardware.
- Medium models (3B - 8B parameters): Good balance between performance and quality. Can handle both your memories and general conversation well.
- Large models (12B+ parameters): Excellent understanding, natural responses, and best personality representation. Recommended for production use.
Example Models:
- Small: `unsloth/gemma-3-270m-it` (testing only)
- Medium: `unsloth/gemma-3-4b-it` (recommended for most users)
- Large: `unsloth/Llama-3.1-8B` (best quality)
- Huge: `unsloth/Llama-3.3-70B` (requires a powerful GPU)
If your AI clone doesn't respond the way you expect:
1. Adjust Training Parameters in the Settings tab:
- Learning rate (higher = faster learning, but less stable)
- Number of epochs (more = better learning, but risk overfitting)
- Max steps (increase for more training iterations)
- LoRA rank (higher = more capacity to learn)
2. Customize System Prompt in Settings → Prompt Configuration:
- Define exactly how your AI should behave
- Use placeholders: `{name}`, `{age}`, `{gender}`, `{location}`, etc.
- Click "🔄 Refresh Config" to reload from `charisma.toml`
3. Adjust Inference Settings in the Inference tab:
- Temperature (1.0 recommended for Gemma-3): Higher = more creative
- Top P (0.95 recommended): Nucleus sampling - samples from the smallest set of tokens whose cumulative probability exceeds P
- Top K (64 recommended): Limits sampling to the K most likely tokens
- Max Tokens: Control response length
4. Use Better Training Data:
- Add more detailed, conversational memories to Notion
- Format as Q&A pairs for better results
- Use the "🔄 Refresh Memories" button to reload cached data
5. Edit `charisma.toml` Directly:
- All settings are in one file for easy tweaking
- Use "🔄 Refresh Config" button to reload after manual edits
- No need to restart the application!
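The inference settings in step 3 all act on the model's next-token probability distribution. As a rough, self-contained illustration in plain Python (a sketch of what the knobs control, not Charisma's or Unsloth's actual sampling code), temperature scaling plus top-k/top-p filtering looks like this:

```python
import math

def filter_logits(logits, temperature=1.0, top_k=64, top_p=0.95):
    """Illustrative next-token filtering: temperature scaling, then
    top-k truncation, then top-p (nucleus) truncation. A sketch of
    what the inference settings control, not Charisma's real code."""
    # Temperature scaling: <1 sharpens the distribution, >1 flattens it
    scaled = [x / temperature for x in logits]
    # Softmax to probabilities (subtract max for numerical stability)
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Keep only the top_k most likely token indices
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]
    # Within those, keep the smallest set whose cumulative probability >= top_p
    kept, cumulative = [], 0.0
    for i in ranked:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    return kept  # token indices the sampler may choose from

# A peaked distribution with top_p=0.5 keeps only the dominant token
print(filter_logits([5.0, 1.0, 0.5, 0.1], top_k=3, top_p=0.5))
```

Lowering Top P or Top K narrows the candidate set and makes responses more deterministic; raising Temperature flattens the distribution before the filtering happens.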
For Best Results:
- Use at least 10-20 diverse memories for training
- Include different topics and writing styles in your Notion pages
- Larger batch sizes (if your GPU allows) = more stable training
- Monitor training loss - it should decrease over time
- Try different models - bigger isn't always better for your use case!
IMPORTANT: Charisma is designed with privacy as the top priority:
- ✅ All data processing happens locally on your machine
- ✅ Your Notion data never leaves your computer
- ✅ Personal information is only used for training
- ✅ No telemetry, no analytics, no data collection
- ✅ You control where your models are saved
See NOTICE.md for complete privacy details.
- Python: 3.10 or higher
- CUDA: NVIDIA GPU with CUDA support (recommended)
- RAM: 8GB minimum, 16GB+ recommended
- VRAM: 4GB+ for default model, varies by model size
- Storage: 10GB+ free space
# Clone the repository
git clone https://github.com/muhammad-fiaz/charisma.git
cd charisma
# Install dependencies using uv (recommended)
uv sync
# Or using pip
pip install -e .
uv run launch.py

🚀 Quick Start: Click the badge above to open Charisma in Google Colab and start creating your AI clone immediately!
Charisma uses Internal Integration for secure, private access to your Notion workspace.
- Go to https://www.notion.so/profile/integrations
- Login to your Notion account if not already logged in
- Click "+ New integration"
- Fill in the details:
  - Name: `Charisma` (or any name you prefer)
  - Associated workspace: Select your workspace/organization
- Under "Integration type", select "Internal"
- ℹ️ This keeps your integration private - only you can use it
- Click "Submit" to create the integration
- You'll land on the Configuration tab after creating the integration
- Under "Capabilities", make sure these are enabled:
- ✅ Read content (REQUIRED - enable this!)
- ✅ Read comments (optional)
- ✅ No user information (recommended for privacy)
- Copy your Internal Integration Secret (looks like: `secret_xxxxxxxxxxxxx`)
- Save this securely - you'll paste it in Charisma Settings
- Click on the "Access" tab at the top
- Here you'll see which Notion pages/databases your integration can access
- Important: You must manually allow access to your memory pages:
Method A (Recommended) - Share Individual Pages:
- Open each memory page in Notion
- Click `•••` (three dots) at the top right
- Select "Add connections"
- Find and select your Charisma integration
- Click "Confirm"
- Repeat for ALL your memory pages
Method B - Share Parent Folder:
- Share the parent folder/database containing all memories
- All child pages automatically get access
- Easier if you have many pages
- Select ALL your memory pages from your organization/workspace
- Verify in the Access tab that all pages are listed
Make sure you've granted access to:
- ✅ All daily memory pages (e.g., "Mem 30-10-2025", "Mem 29-10-2025", etc.)
- ✅ Any databases containing memories
- ✅ Your workspace/organization if using private workspace
Some models require authentication to download. Create a free token:
- Go to https://huggingface.co/settings/tokens
- Create an account or login if you haven't already
- Click "New token"
- Give it a name (e.g., "Charisma")
- Select "Read" permission (or "Write" if you want to upload models later)
- Click "Generate"
- Copy the token (looks like: `hf_xxxxxxxxxxxxx`)
- Save this - you'll paste it in Charisma Settings
Why is this needed?
- Some models on HuggingFace are gated (require agreement to terms)
- Token allows Charisma to download these models automatically
- Without it, some models may fail to download
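As a small illustration of the expected token shape (the helper below is hypothetical and not part of Charisma), a token can be sanity-checked before pasting it into Settings:

```python
import re

def looks_like_hf_token(token: str) -> bool:
    """Hypothetical sanity check: HuggingFace user access tokens
    start with 'hf_' followed by alphanumeric characters."""
    return re.fullmatch(r"hf_[A-Za-z0-9]+", token) is not None

print(looks_like_hf_token("hf_abc123XYZ"))   # expected: True
print(looks_like_hf_token("secret_abc123"))  # expected: False
```

In practice the token is simply passed through to the HuggingFace Hub client so that gated model downloads are authenticated.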
Important: For best AI clone results, organize your memories properly!
Memory Page Structure:
- Each daily memory should be a separate page in Notion
- Use clear, date-based naming (any format works):
  - ✅ `Mem 30-10-2025`
  - ✅ `October 30, 2025 - Daily Journal`
  - ✅ `2025-10-30 Memories`
  - ✅ Any descriptive name you prefer
- Do NOT put all memories in one giant page - this confuses the AI
Recommended Setup:
📁 My Workspace (Private recommended)
├─ 📄 Mem 30-10-2025
├─ 📄 Mem 29-10-2025
├─ 📄 Mem 28-10-2025
├─ 📄 Mem 27-10-2025
└─ ... (one page per day/memory)
Tips:
- Use a private workspace for personal memories (more secure)
- Write naturally - the AI learns from your writing style
- Include thoughts, experiences, opinions, and daily events
- More memories = better AI clone quality (recommend at least 10-20 pages)
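One practical benefit of date-based titles is that pages can be ordered chronologically before training. The helper below is a hypothetical sketch (not Charisma's actual code) showing how dates in the first two naming styles above could be recovered:

```python
import re
from datetime import date, datetime

def title_date(title: str):
    """Hypothetical helper: recover a date from page titles like
    'Mem 30-10-2025' or '2025-10-30 Memories' so memories can be
    sorted chronologically. Free-form titles simply return None."""
    m = re.search(r"\b(\d{2})-(\d{2})-(\d{4})\b", title)
    if m:
        return datetime.strptime(m.group(0), "%d-%m-%Y").date()
    m = re.search(r"\b(\d{4})-(\d{2})-(\d{2})\b", title)
    if m:
        return datetime.strptime(m.group(0), "%Y-%m-%d").date()
    return None

pages = ["Mem 30-10-2025", "2025-10-28 Memories", "Random thoughts"]
dated = sorted((p for p in pages if title_date(p)), key=title_date)
print(dated)  # oldest first
```

Titles without a recognizable date still work as training data; they just cannot be sorted by day.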
# Launch locally
charisma
# Launch with public URL (for Google Colab)
charisma --live
# Custom port
charisma --port 8080
# Or run directly with Python
uv run python launch.py

The UI will open in your browser at http://127.0.0.1:7860
- Navigate to the Settings tab
- Under "Notion API Key":
  - Paste your Internal Integration Secret (from Step 2 above)
  - Format: `secret_xxxxxxxxxxxxx`
- (Optional) Add your HuggingFace Token
  - Format: `hf_xxxxxxxxxxxxx`
- Adjust training parameters if needed (defaults work great):
- Max steps: 100
- Learning rate: 2e-4
- Batch size: 2
- Click "💾 Save All Settings"
Enter Personal Information:
- Name: Your full name
- Age: Your age
- Country: Your country
- Location: Your city
- Hobbies: Your hobbies (e.g., "Reading, Coding, Photography")
- Favorites: Your favorite things (e.g., "Pizza, Sci-fi movies, Python")
Connect to Notion:
- Click "🔗 Connect to Notion"
- Connection happens automatically using your API key
- You'll see a success message with your workspace info: `✅ Connected to Notion Workspace: Your Workspace Name | Pages: 25`
- Important: Only pages you shared with the integration (in Step 3 above) will be visible
- If you see "0 pages", make sure you've shared your memory pages with the Charisma integration
Select Memories:
- All accessible memory pages are listed with checkboxes
- By default, all are selected
- Uncheck any pages you don't want to include in training
- Tip: Include at least 10-20 memory pages for best results
Choose Model & Configure Training:
- Model Selection:
  - Default: `unsloth/gemma-3-270m-it` (270M params, ~4GB VRAM)
  - Or choose from 10+ pre-configured models
  - Or enter any HuggingFace model ID
- Training Mode:
- ✅ LoRA Fine-tune - Fast, efficient (recommended)
- ⬜ Full Fine-tune - Thorough but slower
- Output Model Name:
  - Enter a name for your model (e.g., `my-memory-clone`)
Generate Your Clone:
- Click "✨ Generate AI Clone"
- Watch real-time training progress in the console output below
- Training logs show:
- Data processing steps
- Model loading progress
- Training metrics (loss, learning rate)
- Completion status
- Wait for completion (typically 5-30 minutes depending on model size)
- View detailed training logs
- Select different log files from the dropdown
- Monitor progress and debug any issues
- Logs are automatically saved to the `logs/` directory
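A minimal sketch of this pattern (assuming one timestamped file per run, which may differ from Charisma's actual logger implementation):

```python
import logging
import os
import tempfile
from datetime import datetime

def make_logger(log_dir: str) -> logging.Logger:
    """Sketch of per-run file logging: each run writes a fresh
    train_<timestamp>.log inside the logs/ directory. Assumed
    layout, not Charisma's exact code."""
    os.makedirs(log_dir, exist_ok=True)
    path = os.path.join(log_dir, f"train_{datetime.now():%Y%m%d_%H%M%S}.log")
    logger = logging.getLogger("charisma-sketch")
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(path, encoding="utf-8")
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger

# Demo in a temporary directory so nothing is left behind
log_dir = os.path.join(tempfile.mkdtemp(), "logs")
logger = make_logger(log_dir)
logger.info("training started")
print(os.listdir(log_dir))  # one train_*.log file
```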
| Model | Parameters | VRAM | Description |
|---|---|---|---|
| `unsloth/gemma-3-270m-it` | 270M | ~4GB | Default - Fast & efficient |
| `unsloth/gemma-2-2b-it` | 2B | ~6GB | Balanced performance |
| `unsloth/Llama-3.2-1B-Instruct` | 1B | ~4GB | Compact & fast |
| `unsloth/Llama-3.2-3B-Instruct` | 3B | ~8GB | Better quality |
| `unsloth/Meta-Llama-3.1-8B-Instruct` | 8B | ~16GB | High quality |
| `unsloth/Qwen2.5-7B-Instruct` | 7B | ~14GB | Excellent reasoning |
| `unsloth/Phi-3.5-mini-instruct` | 3.8B | ~8GB | Microsoft Phi |
| `unsloth/mistral-7b-instruct-v0.3` | 7B | ~14GB | Strong general model |
| `unsloth/Ministral-8B-Instruct-2410` | 8B | ~16GB | Latest Ministral |
| `unsloth/Llama-3.3-70B-Instruct` | 70B | ~40GB | Best quality (needs large GPU) |
You can also use any custom model from HuggingFace!
All settings are stored in charisma.toml (created automatically):
[project]
name = "charisma"
version = "0.1.0"
[model]
max_seq_length = 2048
load_in_4bit = true
[training]
batch_size = 2
learning_rate = 0.0002
num_epochs = 1
max_steps = 60
[lora]
r = 16
lora_alpha = 16
lora_dropout = 0
[notion]
api_key = ""
[huggingface]
token = ""
default_repo = "my-charisma-model"
private = true

Edit these values in the Settings tab or directly in the file.
charisma/
├── charisma/
│ ├── __init__.py
│ ├── main.py # Entry point
│ ├── config/ # Configuration management
│ │ ├── config_manager.py
│ │ └── models.py
│ ├── core/ # Core training logic
│ │ ├── data_processor.py
│ │ ├── model_manager.py
│ │ └── trainer.py
│ ├── integrations/ # External integrations
│ │ ├── notion_client.py
│ │ └── huggingface_client.py
│ ├── ui/ # Gradio UI
│ │ ├── app.py
│ │ └── tabs/
│ │ ├── main_tab.py
│ │ ├── settings_tab.py
│ │ └── logs_tab.py
│ └── utils/ # Utilities
│ ├── logger.py
│ └── validators.py
├── outputs/ # Trained models (created at runtime)
├── logs/ # Application logs (created at runtime)
├── charisma.toml # Configuration file (created at runtime)
├── pyproject.toml # Project metadata
├── NOTICE.md # Privacy notice
└── README.md # This file
# Install in editable mode
uv sync
# Or with pip
pip install -e .
# Run directly
python -m charisma.main

charisma --help
Options:
--live Create public URL (for Colab)
--port PORT Port number (default: 7860)
--config PATH Config file path (default: charisma.toml)
--server-name IP Server IP (default: 127.0.0.1)
--debug Enable debug mode

- Use a smaller model (e.g., `unsloth/Llama-3.2-1B-Instruct`)
- Reduce `batch_size` in Settings
- Enable `load_in_4bit` in Settings
- Use LoRA fine-tuning instead of full fine-tuning
- Verify your API token in Settings
- Ensure you've shared pages with your Notion integration
- Test connection using the "🧪 Test" button
- LoRA fine-tuning is much faster than full fine-tuning
- Reduce `max_steps` or `num_epochs`
- Use a smaller model
- Ensure you have a CUDA-capable GPU
- Check that your Notion pages are shared with the integration
- Ensure the integration has read permissions
- Refresh the connection
- Data Collection: Fetches your notes from Notion via the Notion API
- Data Processing: Converts memories into conversation format with your personal context
- Model Loading: Loads an Unsloth FastLanguageModel with optional LoRA adapters
- Training: Fine-tunes the model on your memories using supervised fine-tuning (SFT)
- Saving: Saves the trained model locally (and optionally to HuggingFace)
The training uses the Gemma-3 chat template format:
System: You are [your name], [your details]...
User: Tell me about [date/topic]
Assistant: [Your memory content]
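This format can be sketched as a small function that turns one memory into a training example (role and field names below follow the common chat-message convention and are assumptions for illustration, not Charisma's exact schema):

```python
def memory_to_example(name: str, details: str, topic: str,
                      memory_text: str) -> list[dict]:
    """Sketch of the conversation format described above: a system
    prompt carrying personal context, a user question about a date
    or topic, and the memory content as the assistant's answer."""
    return [
        {"role": "system", "content": f"You are {name}, {details}."},
        {"role": "user", "content": f"Tell me about {topic}"},
        {"role": "assistant", "content": memory_text},
    ]

example = memory_to_example(
    "Alex", "a developer from Berlin", "30-10-2025",
    "Today I refactored the trainer module and went for a long walk.",
)
print([m["role"] for m in example])
```

During training, lists like this are rendered through the model's chat template (Gemma-3's, in the default setup) before being fed to the supervised fine-tuning trainer.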
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the AGPL License - see the LICENSE file for details.
- Unsloth - For 2x faster and more memory-efficient LLM fine-tuning
- Gradio - For the amazing UI framework
- HuggingFace - For transformers and model hosting
- Notion - For the API that makes memory collection possible
Muhammad Fiaz
- Email: contact@muhammadfiaz.com
- GitHub: @muhammad-fiaz
This tool is for personal use or educational purposes only. By using Charisma:
- You are responsible for your Notion data and API usage
- You agree that all processing is done locally at your own risk
- The authors are not responsible for any data loss or misuse
- Ensure you comply with Notion's and HuggingFace's terms of service