A terminal-based text editor built with Python's Textual framework, featuring efficient document editing through a piece table data structure and AI-powered text completion.
- Efficient Text Editing: Uses a piece table implementation for O(1) insert/delete operations
- AI Text Completion: Real-time text suggestions powered by local LLM (Ollama) or external APIs
- Ghost Text Preview: View AI suggestions before accepting them
- Auto-generation: Automatic text suggestions after pausing (debounced)
- Terminal-Based UI: Clean, responsive interface with keyboard shortcuts
- File Operations: Save, load, and manage text files with modification tracking
For Docker installation:

- Docker Desktop (Windows/Mac) or Docker Engine (Linux)
- Docker Compose

For manual installation:

- Python 3.9 or higher
- pip (Python package installer)
- Ollama (for local AI completions) - download from [ollama.ai](https://ollama.ai)
Docker installation handles all dependencies automatically and includes Ollama configuration.
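For orientation, here is a minimal sketch of the kind of setup the included docker-compose.yml provides: an `ollama` service (the hostname `ollama` matches the endpoint used in txtarea.py) plus the editor container. The service layout, volume names, and container paths below are assumptions for illustration - refer to the repository's actual docker-compose.yml.

```yaml
# Illustrative sketch only - see the repository's docker-compose.yml
services:
  ollama:
    image: ollama/ollama            # official Ollama image
    ports:
      - "11434:11434"               # Ollama's API port
    volumes:
      - ollama-data:/root/.ollama   # persist pulled models across restarts

  editor:
    build: .                        # built from the project's Dockerfile
    depends_on:
      - ollama                      # reachable as http://ollama:11434
    volumes:
      - ./my-files:/app/my-files    # your documents (container path assumed)

volumes:
  ollama-data:
```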
Windows:

- Install Docker Desktop
  - Download from Docker Desktop for Windows
  - Run the installer and follow the prompts
  - Restart your computer if prompted
- Clone or Download the Project

  ```
  git clone https://github.com/vmanvs/Texter.git
  cd Texter
  ```
- Build and Start Services

  ```
  # Using PowerShell script
  .\run.ps1 build
  .\run.ps1 up

  # Or using docker compose directly
  docker compose build
  docker compose up -d
  ```
- Run the Editor

  ```
  # Start with new file
  .\run.ps1 edit

  # Open existing file
  .\run.ps1 edit myfile.txt
  ```
Linux/Mac:

- Install Docker

  ```
  # Linux (Ubuntu/Debian)
  sudo apt update
  sudo apt install docker.io docker-compose

  # Mac - Download Docker Desktop from docker.com
  ```

- Clone or Download the Project

  ```
  git clone https://github.com/vmanvs/Texter.git
  cd Texter
  ```

- Build and Start Services

  ```
  # Using Makefile
  make build
  make up

  # Or using docker compose directly
  docker compose build
  docker compose up -d
  ```

- Run the Editor

  ```
  # Start with new file
  make edit

  # Open existing file
  make edit myfile.txt
  ```
Manual installation gives you more control but requires setting up Ollama separately.
Windows:

- Install Python
  - Download Python 3.9+ from python.org
  - During installation, check "Add Python to PATH"

- Install Ollama
  - Download from ollama.ai
  - Install and start Ollama
  - Pull the model:

    ```
    ollama pull gemma3:1b
    ```

- Install Project Dependencies

  ```
  # Using PowerShell script
  .\run.ps1 install-local

  # Or using pip directly
  pip install -r requirements.txt
  ```

- Configure API Endpoint

  Open `txtarea.py` and change line 110 from:

  ```
  f"http://ollama:11434/api/generate",
  ```

  to:

  ```
  f"http://localhost:11434/api/generate",
  ```

- Run the Editor

  ```
  # Using PowerShell script
  .\run.ps1 run-local

  # Or directly with Python
  python txtarea.py

  # Open specific file
  python txtarea.py myfile.txt
  ```
Linux/Mac:

- Install Python

  ```
  # Linux (Ubuntu/Debian)
  sudo apt update
  sudo apt install python3 python3-pip

  # Mac (using Homebrew)
  brew install python3
  ```

- Install Ollama

  ```
  # Linux
  curl -fsSL https://ollama.ai/install.sh | sh

  # Mac
  brew install ollama

  # Pull the model
  ollama pull gemma3:1b

  # Start Ollama service (if not auto-started)
  ollama serve
  ```

- Install Project Dependencies

  ```
  # Using Makefile
  make install-local

  # Or using pip directly
  pip install -r requirements.txt
  ```

- Configure API Endpoint

  Open `txtarea.py` and change line 110 from:

  ```
  f"http://ollama:11434/api/generate",
  ```

  to:

  ```
  f"http://localhost:11434/api/generate",
  ```

- Run the Editor

  ```
  # Using Makefile
  make run-local

  # Or directly with Python
  python txtarea.py

  # Open specific file
  python txtarea.py myfile.txt
  ```
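After a manual install, you can confirm Ollama is reachable before launching the editor by sending a test request to the same endpoint txtarea.py uses (a standard Ollama API call; adjust the model name if you pulled a different one):

```
curl http://localhost:11434/api/generate \
  -d '{"model": "gemma3:1b", "prompt": "Hello", "stream": false}'
```

A JSON reply with a `response` field means completions should work in the editor.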
Docker:

```
# Windows
.\run.ps1 edit [filename]

# Linux/Mac
make edit [filename]
```

Manual:

```
# Windows
.\run.ps1 run-local [filename]

# Linux/Mac
make run-local [filename]
python txtarea.py [filename]
```

- Manual Generation: Press `Ctrl+G` to request an AI completion
- Auto-Generation: Stop typing for 2 seconds to trigger automatic suggestions (see the debounce sketch after this list)
- Accepting Suggestions: Press `Tab` to accept the grey ghost text
- Dismissing Suggestions: Type any key to clear the ghost text
- Cancelling Generation: Press any key during generation to cancel
All files are stored in the my-files/ directory.
- Opening Files: Place files in the `my-files/` directory:

  ```
  # File structure
  project-root/
  ├── my-files/
  │   ├── myfile.txt   # Your files here
  │   └── notes.txt
  └── txtarea.py
  ```

- Saving Files: When you save with `Ctrl+S`, the file is automatically written to `my-files/` (see the path sketch after this list)
  - If the file doesn't exist, you'll be prompted for a filename
  - The `.txt` extension is added automatically
  - Files are saved as: `my-files/yourfilename.txt`

- Creating the Directory:

  ```
  # Windows
  mkdir my-files

  # Linux/Mac
  mkdir -p my-files
  ```
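The save rules above amount to a small path normalization. A hedged sketch of that logic (the helper name is illustrative, not taken from txtarea.py):

```python
from pathlib import Path

SAVE_DIR = Path("my-files")

def resolve_save_path(filename: str) -> Path:
    """Apply the save rules described above: everything lands in
    my-files/ and gets a .txt extension if it lacks one."""
    SAVE_DIR.mkdir(exist_ok=True)   # create the directory if missing
    name = Path(filename).name      # strip any directory components
    if not name.endswith(".txt"):
        name += ".txt"              # extension added automatically
    return SAVE_DIR / name

# resolve_save_path("notes")     -> my-files/notes.txt
# resolve_save_path("draft.txt") -> my-files/draft.txt
```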
- `Ctrl+S`: Save current file
- `Ctrl+Q`: Quit (prompts if unsaved changes)
- `Ctrl+D`: Force quit without saving (when in quit dialog)
Edit these settings in `txtarea.py`:

AI Model (Line 96):

```
"model": "gemma3:1b"  # Change to your preferred Ollama model
```

Context Size (Line 653):

```
context_size = 3000  # Characters before cursor sent as context
```

Auto-generation Delay (Line 56):

```
self._auto_generate_delay = 2.0  # Seconds of inactivity before auto-gen
```
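For intuition, the context setting bounds how much text the model sees per request. A hedged sketch of the idea (the function is illustrative; txtarea.py's actual prompt assembly may differ):

```python
context_size = 3000  # characters before the cursor sent as context

def build_context(text: str, cursor: int) -> str:
    """Take at most context_size characters preceding the cursor."""
    return text[max(0, cursor - context_size):cursor]
```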
You can configure the editor to use external AI APIs such as Claude, GPT-4, or Gemini instead of local Ollama.

- Locate the API Configuration

  Open `txtarea.py` and find the `get_completion` method (around lines 96-110).

- Replace the API Endpoint and Payload

  Current Ollama configuration (lines 96-110):

  ````python
  payload = {
      "model": "gemma3:1b",
      "prompt": prompt,
      "stream": False,
      "options": {
          "temperature": 0.3,
          "num_predict": 500,
          "stop": ["\n\n\n", "```"]
      }
  }
  async with httpx.AsyncClient(timeout=20.0) as client:
      response = await client.post(
          f"http://localhost:11434/api/generate",  # Line 110
          json=payload,
          timeout=20
      )
  ````
Replace `get_completion` with one of the following, depending on your provider.

Claude (Anthropic):

```python
async def get_completion(self, context_before: str, context_after: str = "") -> Optional[str]:
    """Get completion from Claude API"""
    try:
        prompt = context_before
        payload = {
            "model": "claude-sonnet-4-20250514",
            "max_tokens": 500,
            "temperature": 0.3,
            "messages": [
                {"role": "user", "content": prompt}
            ]
        }
        async with httpx.AsyncClient(timeout=20.0) as client:
            response = await client.post(
                "https://api.anthropic.com/v1/messages",
                json=payload,
                headers={
                    "x-api-key": "YOUR_ANTHROPIC_API_KEY",
                    "anthropic-version": "2023-06-01",
                    "content-type": "application/json"
                },
                timeout=20
            )
        if response.status_code == 200:
            data = response.json()
            completion = data['content'][0]['text'].strip()
            return completion
        return None
    except Exception:  # network errors, unexpected response shapes, etc.
        return None
```

OpenAI (GPT-4):

```python
async def get_completion(self, context_before: str, context_after: str = "") -> Optional[str]:
    """Get completion from OpenAI API"""
    try:
        prompt = context_before
        payload = {
            "model": "gpt-4",
            "messages": [
                {"role": "system", "content": "You are a helpful text completion assistant."},
                {"role": "user", "content": prompt}
            ],
            "temperature": 0.3,
            "max_tokens": 500
        }
        async with httpx.AsyncClient(timeout=20.0) as client:
            response = await client.post(
                "https://api.openai.com/v1/chat/completions",
                json=payload,
                headers={
                    "Authorization": "Bearer YOUR_OPENAI_API_KEY",
                    "Content-Type": "application/json"
                },
                timeout=20
            )
        if response.status_code == 200:
            data = response.json()
            completion = data['choices'][0]['message']['content'].strip()
            return completion
        return None
    except Exception:
        return None
```

Google Gemini:

```python
async def get_completion(self, context_before: str, context_after: str = "") -> Optional[str]:
    """Get completion from Google Gemini API"""
    try:
        prompt = context_before
        payload = {
            "contents": [
                {"parts": [{"text": prompt}]}
            ],
            "generationConfig": {
                "temperature": 0.3,
                "maxOutputTokens": 500
            }
        }
        async with httpx.AsyncClient(timeout=20.0) as client:
            response = await client.post(
                "https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=YOUR_GEMINI_API_KEY",
                json=payload,
                timeout=20
            )
        if response.status_code == 200:
            data = response.json()
            completion = data['candidates'][0]['content']['parts'][0]['text'].strip()
            return completion
        return None
    except Exception:
        return None
```

- API Keys: Replace placeholder API keys with your actual keys
- Cost: External APIs typically charge per request - monitor your usage
- Rate Limits: Be aware of API rate limits to avoid service interruptions
- Timeouts: Adjust the `timeout` parameter if you experience frequent timeouts
- Error Handling: The current implementation has basic error handling; consider adding more robust logging and retries (see the sketch after this list)
- Security: Never commit API keys to version control - use environment variables:

  ```python
  import os
  api_key = os.getenv("ANTHROPIC_API_KEY")  # Set via environment variable
  ```
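One way to handle rate limits and transient network failures is a small retry wrapper with logging. This is a hedged sketch, not part of txtarea.py; the helper name, logger name, and retry counts are illustrative:

```python
import asyncio
import logging

import httpx

logger = logging.getLogger("texter.completion")

async def post_with_retries(url: str, payload: dict,
                            headers: dict | None = None,
                            attempts: int = 3) -> httpx.Response | None:
    """Retry transient failures with exponential backoff, logging each one."""
    for attempt in range(1, attempts + 1):
        try:
            async with httpx.AsyncClient(timeout=20.0) as client:
                response = await client.post(url, json=payload, headers=headers)
            if response.status_code == 429:   # rate limited: back off, retry
                logger.warning("rate limited (attempt %d/%d)", attempt, attempts)
            else:
                return response
        except httpx.RequestError as exc:
            logger.warning("request failed (attempt %d/%d): %s",
                           attempt, attempts, exc)
        await asyncio.sleep(2 ** attempt)     # 2s, 4s, 8s between attempts
    return None
```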
| Shortcut | Action |
|---|---|
| `Ctrl+S` | Save file |
| `Ctrl+Q` | Quit (prompts if unsaved) |
| `Ctrl+G` | Manually trigger AI completion |
| `Tab` | Accept ghost text suggestion |
| `Ctrl+D` | Force quit without saving (in quit dialog) |
| `Esc` | Cancel dialog/dismiss ghost text |
Container won't start:
```
# Check container logs
docker compose logs

# Rebuild containers
docker compose down
docker compose build --no-cache
docker compose up -d
```

Ollama model not loading:

```
# Enter container and pull model manually
docker compose exec ollama ollama pull gemma3:1b
```

"Module not found" errors:

```
# Reinstall dependencies
pip install -r requirements.txt --force-reinstall
```

Ollama connection refused:

```
# Check if Ollama is running
ollama list

# Start Ollama service
ollama serve

# Verify endpoint in txtarea.py is set to localhost:11434
```

Permission errors on Linux:

```
# Add execute permissions to scripts
chmod +x *.sh

# Run Python with correct permissions
python3 txtarea.py
```

No AI suggestions appearing:
- Verify Ollama/API is running and accessible
- Check the endpoint URL in `txtarea.py` (line 110)
- Ensure the model is downloaded: `ollama list`
- Check for errors in the application logs
Slow AI responses:
- Try a smaller model (e.g., `gemma3:1b` instead of larger models)
- Reduce context size in configuration
- Check your system resources (CPU/RAM usage)
Files not saving:
- Ensure the `my-files/` directory exists
- Check write permissions on the directory
- Verify disk space availability
Can't open files:
- Ensure files are in the `my-files/` directory
- Use the correct filename: `python txtarea.py myfile.txt` (not `my-files/myfile.txt`)
```
project-root/
├── my-files/              # Your text files (create this directory)
│   └── *.txt
├── PieceTable.py          # Core piece table implementation
├── pt_for_textarea.py     # Textual DocumentBase adapter
├── txtarea.py             # Main editor application
├── sysprompt.txt          # AI system prompt
├── requirements.txt       # Python dependencies
├── Dockerfile             # Docker configuration
├── docker-compose.yml     # Docker Compose setup
├── run.ps1                # Windows convenience script
├── Makefile.txt           # Linux/Mac convenience commands
└── README.md              # This file
```
- Insert/Delete: O(1) - only modifies the piece array, never copies document text (see the sketch below)
- Get Text: O(n), where n = number of pieces
- Memory: O(m), where m = total edited characters
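For intuition, here is a deliberately minimal piece-table sketch (illustrative; PieceTable.py is more complete). The key point: an insert appends to an add buffer and splits at most one piece, so document text is never copied - though locating the split point in this naive version is a linear scan over the pieces.

```python
class MiniPieceTable:
    """Minimal piece table: original text is immutable; edits append to
    an add buffer and split/re-link (buffer, start, length) pieces."""

    def __init__(self, text: str):
        self.original = text
        self.added = ""  # all inserted text, append-only
        self.pieces = [("orig", 0, len(text))] if text else []

    def insert(self, pos: int, text: str) -> None:
        new_piece = ("add", len(self.added), len(text))
        self.added += text
        offset = 0
        for i, (buf, start, length) in enumerate(self.pieces):
            if offset + length >= pos:  # found the piece to split
                split = pos - offset
                self.pieces[i:i + 1] = [p for p in (
                    (buf, start, split),                   # left half
                    new_piece,                             # inserted text
                    (buf, start + split, length - split),  # right half
                ) if p[2] > 0]
                return
            offset += length
        self.pieces.append(new_piece)  # empty table or insert at end

    def get_text(self) -> str:
        bufs = {"orig": self.original, "add": self.added}
        return "".join(bufs[b][s:s + n] for b, s, n in self.pieces)

pt = MiniPieceTable("hello world")
pt.insert(5, ",")        # splits one piece, appends "," to the add buffer
print(pt.get_text())     # hello, world
```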
MIT License - Feel free to use and modify for your projects.
- Built with Textual
- AI powered by Ollama
- Piece table concept from Charles Crowley's research