Throughline is a memory layer for what you’re building and thinking.
Instead of trying to write perfect posts every day, you log small, messy check-ins about your work, ideas, learnings, or struggles. Throughline remembers those fragments over time, finds patterns across days and weeks, and turns them into coherent stories in your own voice.
It’s not a journaling app. It’s not just another AI writer. It’s a system that treats your daily progress as data and helps it compound into narrative.
You don’t write stories here. You live them. Throughline connects the dots.
- Capture lightweight daily check-ins (raw, messy, unpolished)
- Remember your activity over time
- Synthesize weekly and monthly narratives
- Generate “base posts” from your real progress
- Adapt those posts for platforms like LinkedIn, X, and Reddit
- Learn your voice from samples and tone preferences
- Let you regenerate, refine, like/dislike, and evolve output
- Visualize momentum (streaks, calendar, activity)
The loop is simple:
Log what matters → We remember → We connect → You share
Most tools generate content from a prompt.
Throughline generates content from you over time.
- You don’t start from a blank page
- You don’t need perfect thoughts
- You don’t have to remember everything
You just show up honestly. Throughline handles continuity, context, and synthesis.
This repository contains two main components:
- Backend (`/backend`): Express.js API with AI orchestration
- Frontend (`/frontend`): React + TypeScript web application
- Runtime: Node.js (ESM modules)
- Framework: Express.js
- Database: MySQL with Prisma ORM
- AI/LLM: Mastra framework with support for multiple providers (OpenRouter, Google Gemini, OpenAI, Anthropic, Groq)
- Authentication: JWT + Google OAuth
- Email: Resend API
- Error Tracking: Sentry
- Scheduling: node-cron (with optional external cron support)
- Framework: React 18 + Vite
- Language: TypeScript
- Styling: Tailwind CSS
- UI Components: Radix UI + shadcn/ui
- Routing: React Router v6
- Animations: Framer Motion
- State Management: React Query (@tanstack/react-query)
Before you begin, ensure you have:
- Node.js 18+ and npm installed
- MySQL database (local or cloud-based like PlanetScale, Railway, etc.)
- Google OAuth credentials (for Google login)
- LLM API Key (at least one of: Groq, OpenRouter, Google Gemini, OpenAI, or Anthropic)
- Resend API Key (for email verification and password reset)
- Sentry DSN (optional but recommended for error tracking)
```bash
cd backend
npm install
```

Create a `.env` file in the backend root directory by copying the example:

```bash
cp .env.example .env
```

Edit the `.env` file with your actual values:
```env
# Server Configuration
PORT=3000
NODE_ENV=development
TZ=Asia/Kolkata

# CORS Configuration
ALLOWED_ORIGINS=http://localhost:5173

# Application Mode
MODE=self-hosted # or "saas" for managed service

# Application URLs
API_URL=http://localhost:3000
FRONTEND_URL=http://localhost:5173

# Database Configuration (MySQL)
DATABASE_URL=mysql://username:password@host:port/database

# JWT Secrets (MUST be at least 32 characters each)
JWT_SECRET=your_super_secret_jwt_key_min_32_chars_12345678
JWT_REFRESH_SECRET=your_super_secret_refresh_key_min_32_chars_12345678
JWT_VERIFICATION_SECRET=your_super_secret_verification_key_min_32_chars_12345678
JWT_PASSWORD_RESET_SECRET=your_super_secret_password_reset_key_min_32_chars_12345678

# Google OAuth Configuration
GOOGLE_CLIENT_ID=your-google-client-id.apps.googleusercontent.com
GOOGLE_CLIENT_SECRET=your-google-client-secret
GOOGLE_CALLBACK_URL=http://localhost:3000/auth/google/callback

# LLM Configuration (Choose at least one)

# Option 1: Groq (Free, Fast - Recommended for testing)
GROQ_API_KEY=gsk_...
# GROQ_MODEL=llama-3.3-70b-versatile # Optional, this is the default

# Option 2: OpenRouter (Access to many models)
# OPENROUTER_API_KEY=sk-or-v1-...
# OPENROUTER_MODEL=meta-llama/llama-3.1-8b-instruct:free

# Option 3: Google Gemini
# GOOGLE_GENERATIVE_AI_API_KEY=AIza...
# GOOGLE_MODEL=gemini-2.0-flash

# Option 4: OpenAI
# OPENAI_API_KEY=sk-proj-...
# OPENAI_MODEL=gpt-4o-mini

# Option 5: Anthropic
# ANTHROPIC_API_KEY=sk-ant-...
# ANTHROPIC_MODEL=claude-sonnet-4

# Email Configuration (Resend)
RESEND_API_KEY=re_...

# Sentry Configuration (Optional)
SENTRY_DSN=https://...@sentry.io/...

# Scheduler Configuration (Optional)
# DISABLE_INTERNAL_CRON=false # Set to true to use external cron
```

Generate Prisma client and run migrations:

```bash
# Generate Prisma client
npm run build

# Push database schema (creates tables)
npx prisma db push

# Optional: Open Prisma Studio to view your database
npx prisma studio
```

Start the server:

```bash
npm start
# or for development with auto-reload
npm run dev
```

The backend will start on http://localhost:3000.
You should see output like:
```
============================================================
Starting Throughline Backend...
============================================================
Environment Configuration:
  • Mode: self-hosted
  • Node Environment: development
  • Port: 3000
  • Database: configured
  • Frontend URL: http://localhost:5173
  • Timezone: Asia/Kolkata
  • LLM Providers: Groq
Server running on port 3000
API URL: http://localhost:3000
Frontend URL: http://localhost:5173
============================================================
```
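Before starting the server, note that the four JWT secrets in the backend `.env` must each be at least 32 characters. One way to generate strong random values (a sketch, assuming `openssl` is available on your system):

```shell
# 32 random bytes, hex-encoded -> a 64-character secret
openssl rand -hex 32
```

Run it once per secret and paste each value into your `.env`.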
```bash
cd frontend
npm install
```

Create a `.env` file in the frontend root directory:

```bash
cp .env.example .env
```

Edit the `.env` file:

```env
VITE_API_URL=http://localhost:3000
VITE_GOOGLE_CLIENT_ID=your-google-client-id.apps.googleusercontent.com
```

Important: The `VITE_GOOGLE_CLIENT_ID` must match the one configured in your backend.

Start the dev server:

```bash
npm run dev
```

The frontend will start on http://localhost:5173.
- Open http://localhost:5173 in your browser
- Click "Sign Up" and create an account
- Check your email for a verification link (if Resend is configured)
- Verify your email and log in
After logging in for the first time, you'll go through onboarding:
- Add sample posts (examples of your writing style)
- The system will extract your tone profile from these samples
- Configure your post generation schedule (daily, weekly, monthly)
- Daily Check-ins: Log your progress, thoughts, and activities
- Generated Posts: View AI-generated narratives based on your check-ins
- Tone Profile: Customize how the AI writes in your voice
- Settings: Configure notifications and generation schedules
To enable Google login, you need to set up OAuth credentials:
- Go to Google Cloud Console
- Create a new project or select existing one
- Enable the Google People API (the legacy Google+ API has been shut down)
- Navigate to "APIs & Services" → "Credentials"
- Click "Create Credentials" → "OAuth client ID"
- Application type: "Web application"
- Authorized JavaScript origins:
  - http://localhost:5173 (for local development)
  - Your production frontend URL
- Authorized redirect URIs:
  - http://localhost:3000/auth/google/callback (for local)
  - Your production backend URL + `/auth/google/callback`
Copy the Client ID and Client Secret to your .env files:
- Backend: `GOOGLE_CLIENT_ID` and `GOOGLE_CLIENT_SECRET`
- Frontend: `VITE_GOOGLE_CLIENT_ID`
Throughline supports multiple LLM providers. Configure at least one:
- Sign up at https://groq.com
- Get API key from https://console.groq.com/keys
- Add to backend `.env`: `GROQ_API_KEY=gsk_...`
- Sign up at https://openrouter.ai
- Get API key from https://openrouter.ai/keys
- Add to backend `.env`: `OPENROUTER_API_KEY=sk-or-v1-...`
- Get API key from https://aistudio.google.com/app/apikey
- Add to backend `.env`: `GOOGLE_GENERATIVE_AI_API_KEY=AIza...`
- Get API key from https://platform.openai.com/api-keys
- Add to backend `.env`: `OPENAI_API_KEY=sk-proj-...`
- Get API key from https://console.anthropic.com/
- Add to backend `.env`: `ANTHROPIC_API_KEY=sk-ant-...`
```
DATABASE_URL=mysql://USERNAME:PASSWORD@HOST:PORT/DATABASE
```
Examples:
Local MySQL:

```
DATABASE_URL=mysql://root:password@localhost:3306/throughline
```

PlanetScale:

```
DATABASE_URL=mysql://username:password@aws.connect.psdb.cloud/throughline?sslaccept=strict
```

Railway:

```
DATABASE_URL=mysql://root:password@containers-us-west-123.railway.app:1234/railway
```
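If your database password contains characters that are special in URLs (such as `@`, `:`, or `#`), they must be percent-encoded in `DATABASE_URL`. A quick way to encode a password (a sketch, assuming `python3` is on your PATH; `p@ss:word` is just an example value):

```shell
python3 -c 'import urllib.parse; print(urllib.parse.quote("p@ss:word", safe=""))'
# prints p%40ss%3Aword
```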
Make sure your MySQL database exists before running Prisma commands. You can create it using:

```sql
CREATE DATABASE throughline CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
```

Backend:

```bash
cd backend
npm install
npm run build
npx prisma db push
npm start
```

Frontend:

```bash
cd frontend
npm install
npm run build
```

The build output will be in the `frontend/dist/` directory. Deploy it to any static hosting service (Vercel, Netlify, Cloudflare Pages, etc.).
Users configure their own LLM API keys. The system detects which provider is available and uses it.
Provider Priority Order:
- Groq (checked first)
- OpenRouter
- Google Gemini
- OpenAI
- Anthropic
Admin configures a single LLM provider for all users.
Required additional variables:

```env
MODE=saas
SAAS_LLM_PROVIDER=openrouter # or google, openai, anthropic, groq
SAAS_OPENROUTER_MODEL=meta-llama/llama-3.1-8b-instruct:free
```

Throughline uses scheduled jobs for automatic post generation.
Jobs run inside the Node.js process using node-cron:
- Daily posts: 9:00 PM (user's timezone)
- Weekly posts: Sunday 8:00 PM
- Monthly posts: 28th of month at 8:00 PM
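For reference, those default schedules expressed in standard five-field cron syntax (minute, hour, day of month, month, day of week) would look roughly like:

```
0 21 * * *    # daily post, 9:00 PM
0 20 * * 0    # weekly post, Sunday 8:00 PM
0 20 28 * *   # monthly post, 28th at 8:00 PM
```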
For better reliability and resource usage:

- Set in backend `.env`: `DISABLE_INTERNAL_CRON=true`
- Set up external cron jobs to call:

```bash
# Daily job (run at 9:00 PM)
curl -X POST http://your-api-url/api/cron/daily

# Weekly job (run Sunday 8:00 PM)
curl -X POST http://your-api-url/api/cron/weekly

# Monthly job (run on 28th at 8:00 PM)
curl -X POST http://your-api-url/api/cron/monthly
```

The backend exposes these main routes:
Authentication:

- `POST /auth/signup` - Create account
- `POST /auth/login` - Email/password login
- `GET /auth/google` - Initiate Google OAuth
- `GET /auth/google/callback` - Google OAuth callback
- `POST /auth/refresh` - Refresh access token
- `POST /auth/logout` - Logout
- `POST /auth/verify-email` - Verify email
- `POST /auth/forgot-password` - Request password reset
- `POST /auth/reset-password` - Reset password

Profile:

- `GET /profile` - Get user profile
- `PATCH /profile` - Update profile
- `PATCH /profile/photo` - Update profile photo

Check-ins:

- `POST /checkin` - Create check-in
- `GET /checkin` - List check-ins
- `GET /checkin/:id` - Get specific check-in
- `DELETE /checkin/:id` - Delete check-in

Sample posts:

- `POST /sample` - Add sample post
- `GET /sample` - List sample posts
- `DELETE /sample/:id` - Delete sample post

Tone profile:

- `POST /tone/extract` - Extract tone from samples
- `GET /tone` - Get tone profile
- `PATCH /tone` - Update tone profile

Generation:

- `GET /generation/posts` - List generated posts
- `GET /generation/posts/:id` - Get specific post
- `POST /generation/regenerate/:id` - Regenerate post
- `GET /generation/limits` - Check regeneration limits

Notifications:

- `GET /notifications` - Get notification settings
- `PATCH /notifications` - Update settings

Schedule:

- `GET /schedule` - Get generation schedule
- `PATCH /schedule` - Update schedule

Feedback:

- `POST /feedback` - Submit feedback on posts

Health:

- `GET /health` - Health check endpoint
User: Stores user accounts and authentication data
- Email/password or Google OAuth
- Profile information (name, bio, photo)
- Email verification status
- Onboarding completion status
CheckIn: Daily user check-ins/logs
- User-submitted content
- Timestamped entries
SamplePost: Example posts in user's writing style
- Used for tone extraction
- User-provided examples
ToneProfile: AI-extracted writing style
- Voice, sentence style, emotional range
- Manual customization options
- Writing goals and preferences
GeneratedPost: AI-generated narratives
- Daily, weekly, or monthly posts
- Base content and metadata
- Version tracking
- Token usage tracking
GenerationSchedule: User's post generation preferences
- Daily/weekly/monthly timing
- Timezone configuration
GenerationJob: Background job tracking
- Job status (pending, processing, completed, failed)
- Error logging
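As an illustration, a model like `CheckIn` might be declared in `schema.prisma` along these lines (a sketch only; the field names here are assumptions, not the project's actual schema):

```prisma
model CheckIn {
  id        String   @id @default(cuid())
  userId    String
  content   String   @db.Text          // user-submitted check-in content
  createdAt DateTime @default(now())   // timestamped entry
  user      User     @relation(fields: [userId], references: [id])
}
```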
Error: Environment validation failed
- Check all required environment variables are set
- Ensure JWT secrets are at least 32 characters
- Verify database connection string is correct
Error: Cannot connect to database
- Verify MySQL is running
- Check DATABASE_URL format
- Ensure database exists
- Test connection: `npx prisma db push`
Error: No LLM provider configured
- Add at least one LLM API key to `.env`
- Verify the API key is valid
Error: Cannot connect to backend
- Ensure backend is running on the correct port
- Check `VITE_API_URL` in frontend `.env`
- Verify CORS is configured correctly in backend
Build errors
- Delete `node_modules` and `package-lock.json`
- Run `npm install` again
- Check Node.js version (18+ required)
- Verify Google Client ID matches in both frontend and backend
- Check authorized redirect URIs in Google Console
- Ensure callback URL format: `{API_URL}/auth/google/callback`
- Clear browser cookies and try again
- Verify RESEND_API_KEY is set correctly
- Check Resend dashboard for errors
- Ensure email domain is verified (for production)
- Check LLM API key is valid and has credits
- Review logs for specific error messages
- Verify user has check-ins in the relevant time period
- Check token usage limits haven't been exceeded
```bash
# Generate Prisma client
npx prisma generate

# Push schema changes to database
npx prisma db push

# Create a migration
npx prisma migrate dev --name description

# Open Prisma Studio (database GUI)
npx prisma studio

# Reset database (WARNING: deletes all data)
npx prisma db push --force-reset
```

Enable debug logging:

```env
DEBUG=* # Backend (in .env)
```

View Sentry errors:
- Check Sentry dashboard if configured
- Look for error logs in console
Rate limits are applied to:
- Auth endpoints: 5 requests per 15 minutes
- LLM endpoints: 10 requests per hour
- Global: 100 requests per 15 minutes
To disable during development:
```env
SKIP_RATE_LIMIT=true
```

For production deployments:

- Use strong, unique JWT secrets (32+ characters)
- Enable HTTPS for both frontend and backend
- Configure proper CORS origins (no wildcards)
- Set `NODE_ENV=production`
- Use environment variables, never commit secrets
- Configure rate limiting appropriately
- Set up database backups
- Use secure database connection (SSL/TLS)
- Keep dependencies updated (`npm audit`)
Password handling:

- Passwords must be at least 8 characters long
- Passwords are hashed with bcrypt before storage
- Tone Extraction: User provides sample posts → AI extracts writing style
- Check-in Collection: User logs daily activities
- Scheduled Generation: Cron jobs trigger at configured times
- AI Generation: Mastra agents generate posts using:
- User's tone profile
- Recent check-ins
- Configured platform specs
- Version Management: Posts can be regenerated (with limits)
- Token Tracking: All LLM usage is tracked for cost monitoring
- Short-term: Check-ins from recent period
- Long-term: All historical check-ins in MySQL
- Synthesis: AI generates narratives connecting fragments over time
- Tone Persistence: User's writing style maintained across generations
For issues or questions:
- Check this README first
- Review error logs in console/Sentry
- Verify all environment variables are correct
- Check database connectivity with `npx prisma studio`
Make sure to:

- Set all environment variables in platform settings
- Configure build commands:
  - Backend: `npm run build && npx prisma db push`
  - Frontend: `npm run build`
- Set start commands:
  - Backend: `npm start`
  - Frontend: Static site (no start command needed)