You're setting up your own personal AI that:
- Runs on your home computer - processes audio, stores memories, runs AI models
- Connects to your phone - where you use the Chronicle app and OMI device
- Works everywhere - your phone can access your home AI from anywhere
Think of it like having Siri/Alexa, but it's your own AI running on your hardware with your data.
On your home computer:
- Docker - Runs all the AI services (like having multiple apps in containers)
- Chronicle Backend - The main AI brain (transcription, memory, processing)
- Tailscale - Creates a secure tunnel so your phone can reach home
On your phone:
- Tailscale - Connects securely to your home computer
- Chronicle Mobile App - Interface for your OMI device and conversations
Option A: Cloud Services (Easiest - Recommended for Beginners)
- Deepgram - Speech-to-text ($200 free credits, then pay per use)
- OpenAI - Memory extraction (~$1-5/month typical usage)
- Best quality, minimal setup, small monthly cost
Option B: Local Services (Free but More Complex)
- Parakeet ASR - Offline speech-to-text (runs on your computer)
- Ollama - Local AI models (runs on your computer)
- Completely free and private, requires more powerful hardware
Optional Add-ons (Both Paths)
- Hugging Face - Speaker recognition (free API key)
Git (Downloads code from the internet):
- Windows/Mac: Download Git
- Linux:
sudo apt install git or sudo yum install git
Docker (Runs the AI services):
- Windows/Mac: Download Docker Desktop
- Linux: Install Docker
- After install: Make sure Docker Desktop is running
WSL Users: Chronicle services will fail to start unless Docker is installed and running inside WSL2 (or Docker Desktop with WSL integration enabled).
uv (Python package manager):
curl -LsSf https://astral.sh/uv/install.sh | sh
This downloads and runs Python programs for you
Tailscale (Connects your phone to home computer):
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
Follow the login prompts - this gives your computer a special IP address
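With all four tools installed, a quick sanity check now saves debugging later. This is a minimal sketch that only confirms each command is on your PATH (tool names as used in this guide):

```shell
#!/bin/sh
# Preflight: confirm each required tool is on PATH before continuing.
preflight() {
  for tool in git docker uv tailscale; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "$tool: found"
    else
      echo "$tool: MISSING - revisit the install step above"
    fi
  done
}

preflight
```

If anything prints MISSING, re-run the matching install step (and restart your terminal) before moving on.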
Tailscale App:
- iPhone: App Store - Tailscale
- Android: Google Play - Tailscale
Log in with the same account you used on your computer
Deepgram (Speech-to-Text)
- Go to console.deepgram.com
- Sign up for free account (get $200 free credits)
- Go to "API Keys" → Create new key
- Copy the key - you'll need it in setup
OpenAI (AI Brain)
- Go to platform.openai.com
- Create account and add payment method (typically costs $1-5/month)
- Go to "API Keys" → Create new key
- Copy the key - you'll need it in setup
Optional: Hugging Face (Speaker Recognition)
- Go to huggingface.co
- Create free account
- Go to Settings → Access Tokens → Create new token
- Copy the token - for identifying different speakers
No API keys needed! Everything runs on your computer.
The setup wizard will automatically download and configure:
- Parakeet ASR - Local speech-to-text service
- Ollama - Local AI model runner
Note: First-time setup will download AI models (this can take time and storage space)
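Since first-time setup pulls multi-gigabyte models, it's worth checking free disk space before you start. A rough sketch using POSIX df - the 25 GB threshold is an assumed safe margin, not a Chronicle requirement:

```shell
#!/bin/sh
# Report free space (in GB) on the filesystem holding a path, via POSIX df -Pk.
free_gb() {
  df -Pk "${1:-.}" | awk 'NR==2 { printf "%d\n", $4 / 1024 / 1024 }'
}

# Ollama models alone can run 5-20 GB; 25 GB free is an assumed safe margin.
if [ "$(free_gb .)" -lt 25 ]; then
  echo "Warning: under 25 GB free - model downloads may fail"
else
  echo "Disk space looks OK"
fi
```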
Download the code:
git clone https://github.com/chronicle-ai/chronicle.git
cd chronicle
Run the setup wizard:
# Using convenience script (recommended)
./wizard.sh
# Or use direct command:
uv run --with-requirements setup-requirements.txt python wizard.py
Note: Convenience scripts (./wizard.sh, ./start.sh, ./restart.sh, ./stop.sh, ./status.sh) are wrappers around wizard.py and services.py that simplify the longer uv run commands.
The wizard will ask questions - here's what to answer:
"Admin email": Your email (for logging into the web dashboard)
"Admin password": Password for the web dashboard (8+ characters)
If you chose Option A (Cloud Services):
"Choose transcription provider": Choose deepgram
"Deepgram API key": Paste the key you got from Deepgram
"Choose LLM provider": Choose openai
"OpenAI API key": Paste the key you got from OpenAI
"OpenAI model": Keep default (gpt-4o-mini)
If you chose Option B (Local Services):
"Choose transcription provider": Choose parakeet
- The wizard will configure local Parakeet ASR service
- No API key needed
"Choose LLM provider": Choose ollama
- The wizard will configure local Ollama
- No API key needed
- Default model: llama3.2 (will be downloaded automatically)
"Enable Speaker Recognition": Say Yes if you got a Hugging Face token
"Hugging Face token": Paste your token (if you got one)
"Enable HTTPS": Say Yes (needed for phone connection)
"Server IP for SSL certificate":
- Run tailscale ip in another terminal
- Copy the IP that starts with 100. (like 100.64.1.5)
- Paste that IP here
The wizard creates all the configuration files automatically
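Before pasting the IP, you can sanity-check that it really is a Tailscale address. Tailscale assigns IPs from the 100.64.0.0/10 range, so the first octet is 100 and the second falls between 64 and 127 - a rough sketch:

```shell
#!/bin/sh
# Loose check that an IPv4 address falls in Tailscale's 100.64.0.0/10 range.
is_tailscale_ip() {
  case "$1" in
    100.*.*.*) ;;     # must start with 100.
    *) return 1 ;;
  esac
  second="$(printf '%s' "$1" | cut -d. -f2)"
  [ "$second" -ge 64 ] 2>/dev/null && [ "$second" -le 127 ] 2>/dev/null
}

is_tailscale_ip "100.64.1.5" && echo "100.64.1.5 looks like a Tailscale IP"
```

If your IP fails this check, you probably copied a LAN address (192.168.x.x) instead - run tailscale ip again.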
Start the services:
# Option 1: Using convenience script (recommended)
./start.sh
# Option 2: Direct command
uv run --with-requirements setup-requirements.txt python services.py start --all --build
This downloads and starts all the AI services - the first run takes 5-10 minutes
Before connecting your phone, make sure everything works:
- Visit https://[your-tailscale-ip] (like https://100.64.1.5). Your browser will warn about an "unsafe certificate" - click "Advanced" → "Proceed anyway"
- You should see the Chronicle dashboard
- Click "Live Recording" in the sidebar
- Test your microphone - record a short clip
- Check that it gets transcribed and appears in "Conversations"
Only proceed to phone setup when this works perfectly!
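If you'd rather verify reachability from a terminal, here's a minimal curl sketch. The -k flag skips validation of the wizard's self-signed certificate, and the IP in the example is an assumption - substitute your own Tailscale IP:

```shell
#!/bin/sh
# Ask the backend for any HTTP response; "000" means curl never connected.
check_backend() {
  code="$(curl -sk -o /dev/null -w '%{http_code}' --max-time 5 "$1")" || code=000
  if [ "$code" = "000" ]; then
    echo "Backend not reachable at $1 - are the services running?"
    return 1
  fi
  echo "Backend answered with HTTP $code"
}

# Example: check_backend "https://100.64.1.5"
```

Any HTTP status (even 401 or 404) means the services are up and TLS is working; only "not reachable" indicates a real problem.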
No development setup needed - just download and install!
- Go to GitHub Releases
- Find the latest release and download chronicle-android.apk
- Install the APK on your phone:
- Enable "Install from unknown sources" in Android settings
- Tap the downloaded APK file to install
- Go to GitHub Releases
- Find the latest release and download chronicle-ios.ipa
- Install using a sideloading tool:
  - AltStore (recommended): altstore.io
  - Sideloadly: sideloadly.io
Note: iOS requires sideloading since we're not on App Store yet
- First: Make sure Tailscale is running on your phone
- Open the Chronicle app
- Go to Settings → Backend Configuration
- Enter the Backend URL: https://[your-tailscale-ip] - use the same IP as your web dashboard, like https://100.64.1.5
- Tap "Test Connection" - it should show a green checkmark
- If the connection fails, double-check:
  - Tailscale is running on the phone
  - You're using the same IP as the web dashboard
  - You're using https:// (not http://)
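That checklist can be automated a little. This sketch catches the two most common URL mistakes - http:// instead of https://, and a non-Tailscale host:

```shell
#!/bin/sh
# Flag common backend-URL mistakes before tapping "Test Connection".
check_url() {
  case "$1" in
    http://*)
      echo "Use https:// - http:// will not work"; return 1 ;;
    https://100.*)
      echo "URL looks right" ;;
    https://*)
      echo "Host is not a 100.x Tailscale address - re-run 'tailscale ip'"; return 1 ;;
    *)
      echo "Not a valid backend URL"; return 1 ;;
  esac
}

check_url "https://100.64.1.5"
```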
- Turn on your OMI/Friend device (make sure it's charged)
- Open Chronicle app on your phone
- Go to "Devices" tab → "Add New Device"
- Follow Bluetooth pairing instructions
- Once connected, start a conversation!
- Check your web dashboard - the conversation should appear there
What you now have:
- ✅ Personal AI running on your home computer
- ✅ Phone app connected securely via Tailscale
- ✅ OMI device streaming audio to your AI
- ✅ All conversations processed privately and stored locally
- ✅ Access from anywhere via your phone
Next steps:
- Explore the web dashboard features
- Try voice commands and see memories get extracted
- Invite others to test conversations (if you enabled speaker recognition)
- "Command not found": Make sure Docker Desktop is running
- "Permission denied": Try sudo before commands (Linux/Mac)
- "uv not found": Restart the terminal after installing uv
- Phone can't reach backend: Check Tailscale is running on both devices
- Certificate warnings: Click "Advanced" → "Proceed" in browser
- Test connection fails: Verify you're using https:// and the correct Tailscale IP
General Service Management:
- Services not responding: Try restarting with ./restart.sh
- Check service status: Use ./status.sh
- Stop all services: Use ./stop.sh
Full commands (what the convenience scripts wrap):
- Restart: uv run --with-requirements setup-requirements.txt python services.py restart --all
- Status: uv run --with-requirements setup-requirements.txt python services.py status
- Stop: uv run --with-requirements setup-requirements.txt python services.py stop --all
Cloud Services (Deepgram/OpenAI):
- Transcription not working: Check Deepgram API key is correct
- No memories created: Check OpenAI API key and account has credits
- High costs: Switch to the gpt-4o-mini model for cheaper processing
Local Services (Parakeet/Ollama):
- Parakeet not starting: Check docker compose ps - the Parakeet container should be running
- Slow transcription: Local ASR is slower than cloud services; this is normal
- Ollama model download stuck: Check internet connection, models can be large (5-20GB)
- Out of memory errors: Local services need sufficient RAM, try smaller Ollama models
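To script the docker compose ps check, you can scan its output for the service's state. A small sketch - the service name "parakeet" is an assumption, so match it to the name in the repository's docker-compose.yml:

```shell
#!/bin/sh
# Read `docker compose ps` output on stdin and report whether a service is up.
service_up() {
  grep -i "$1" | grep -qiE 'running|up'
}

# Example (with Docker running):
#   docker compose ps | service_up parakeet && echo "parakeet is up"
```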
- Full Documentation: CLAUDE.md - Complete technical reference
- Architecture Details: Docs/overview.md - How everything works
- Advanced Setup: Docs/init-system.md - Power user options