diff --git a/.gitignore b/.gitignore
index 9362c14e..8363d1ff 100644
--- a/.gitignore
+++ b/.gitignore
@@ -21,4 +21,12 @@ transcription_results.csv
**/hub/*
*.log
**/speaker_data/**
-**/.venv/*
\ No newline at end of file
+**/.venv/*
+**metrics_report**
+
+*.db
+**/advanced_omi_backend.egg-info/
+**/dist/*
+**/build/*
+
+untracked/*
\ No newline at end of file
diff --git a/CLAUDE.md b/CLAUDE.md
new file mode 100644
index 00000000..eccd740a
--- /dev/null
+++ b/CLAUDE.md
@@ -0,0 +1,169 @@
+# CLAUDE.md
+
+This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
+
+## Project Overview
+
+Friend-Lite is an AI-powered wearable ecosystem for audio capture, transcription, memory extraction, and action item detection. The system features real-time audio streaming from OMI devices via Bluetooth, intelligent conversation processing, and a comprehensive web dashboard for management.
+
+## Development Commands
+
+### Backend Development (Advanced Backend - Primary)
+```bash
+cd backends/advanced-backend
+
+# Start full stack with Docker
+docker compose up --build -d
+
+# Development with live reload
+uv run python src/main.py
+
+# Code formatting and linting
+uv run black src/
+uv run isort src/
+
+# Run tests
+uv run pytest
+uv run pytest tests/test_memory_service.py # Single test file
+uv run pytest test_endpoints.py # Integration tests
+uv run pytest test_failure_recovery.py # Failure recovery tests
+uv run pytest test_memory_debug.py # Memory debug tests
+
+# Environment setup
+cp .env.template .env # Configure environment variables
+
+# Reset data (development)
+sudo rm -rf ./audio_chunks/ ./mongo_data/ ./qdrant_data/
+```
+
+### Mobile App Development
+```bash
+cd friend-lite
+
+# Start Expo development server
+npm start
+
+# Platform-specific builds
+npm run android
+npm run ios
+npm run web
+```
+
+### Additional Services
+```bash
+# ASR Services
+cd extras/asr-services
+docker compose up moonshine # Offline ASR with Moonshine
+docker compose up parakeet # Offline ASR with Parakeet
+
+# Speaker Recognition
+cd extras/speaker-recognition
+docker compose up --build
+
+# HAVPE Relay (ESP32 bridge)
+cd extras/havpe-relay
+docker compose up --build
+```
+
+## Architecture Overview
+
+### Core Structure
+- **backends/advanced-backend/**: Primary FastAPI backend with real-time audio processing
+ - `src/main.py`: Central FastAPI application with WebSocket audio streaming
+ - `src/auth.py`: Email-based authentication with JWT tokens
+ - `src/memory/`: LLM-powered conversation memory system using mem0
+ - `src/failure_recovery/`: Robust processing pipeline with SQLite tracking
+ - `webui/streamlit_app.py`: Web dashboard for conversation and user management
+
+### Key Components
+- **Audio Pipeline**: Real-time Opus/PCM → Deepgram WebSocket transcription → memory extraction
+- **Transcription**: Deepgram Nova-3 model with Wyoming ASR fallback, auto-reconnection
+- **Authentication**: Email-based login with MongoDB ObjectId user system
+- **Client Management**: Auto-generated client IDs as `{user_id_suffix}-{device_name}`, centralized ClientManager
+- **Data Storage**: MongoDB (conversations), Qdrant (vector memory), SQLite (failure recovery)
+- **Web Interface**: Streamlit dashboard with authentication and real-time monitoring
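
The client-ID convention above can be sketched in a few lines (an illustration only; `make_client_id` and the 6-character suffix length are assumptions, not the backend's actual helper):

```python
def make_client_id(user_id: str, device_name: str, suffix_len: int = 6) -> str:
    """Build a client ID as {user_id_suffix}-{device_name}.

    Assumes user_id is a MongoDB ObjectId hex string; the last
    `suffix_len` characters keep IDs short while staying user-scoped.
    """
    suffix = user_id[-suffix_len:]
    device = device_name.strip().lower().replace(" ", "-")
    return f"{suffix}-{device}"

# Example: a 24-character ObjectId and a human-readable device name
print(make_client_id("665f1c2ab3d94e5f6a7b8c9d", "OMI Pendant"))  # -> 7b8c9d-omi-pendant
```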
+
+### Service Dependencies
+```yaml
+Required:
+ - MongoDB: User data and conversations
+ - FastAPI Backend: Core audio processing
+
+Recommended:
+ - Qdrant: Vector storage for semantic memory
+ - Ollama: LLM for memory extraction and action items
+ - Deepgram: Primary transcription service (Nova-3 WebSocket)
+ - Wyoming ASR: Fallback transcription service (offline)
+
+Optional:
+ - Speaker Recognition: Voice identification service
+ - Nginx Proxy: Load balancing and routing
+```
+
+## Data Flow Architecture
+
+1. **Audio Ingestion**: OMI devices stream Opus audio via WebSocket with JWT auth
+2. **Real-time Processing**: Per-client queues handle transcription and buffering
+3. **Conversation Management**: Automatic timeout-based conversation segmentation
+4. **Memory Extraction**: LLM processes completed conversations for semantic storage
+5. **Action Items**: Automatic task detection with "Simon says" trigger phrases
+6. **Audio Optimization**: Speech segment extraction removes silence automatically
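
Step 3's timeout-based segmentation reduces to a simple time comparison, sketched here (hypothetical helper; the real timeout is configurable via `NEW_CONVERSATION_TIMEOUT_MINUTES`):

```python
def should_start_new_conversation(last_audio_ts: float, now: float,
                                  timeout_minutes: float = 1.5) -> bool:
    """Return True when the silence gap exceeds the conversation timeout."""
    return (now - last_audio_ts) > timeout_minutes * 60

# A 2-minute gap with the default 1.5-minute timeout closes the current
# conversation and opens a new one on the next audio packet
print(should_start_new_conversation(last_audio_ts=0.0, now=120.0))  # -> True
```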
+
+## Authentication & Security
+
+- **User System**: Email-based authentication with MongoDB ObjectId user IDs
+- **Client Registration**: Automatic `{objectid_suffix}-{device_name}` format
+- **Data Isolation**: All data scoped by user_id with efficient permission checking
+- **API Security**: JWT tokens required for all endpoints and WebSocket connections
+- **Admin Bootstrap**: Automatic admin account creation with ADMIN_EMAIL/ADMIN_PASSWORD
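
How a signed token is assembled can be sketched with the standard library (illustrative only; the backend presumably uses a dedicated JWT library, and HS256 signing with `AUTH_SECRET_KEY` is an assumption here):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, per JWT compact serialization."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    """Produce a compact HS256 JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

token = sign_jwt({"sub": "665f1c2ab3d94e5f6a7b8c9d", "is_superuser": False},
                 "your-super-secret-jwt-key-here")
print(token.count("."))  # a compact JWT has exactly two dots
```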
+
+## Configuration
+
+### Required Environment Variables
+```bash
+AUTH_SECRET_KEY=your-super-secret-jwt-key-here
+ADMIN_PASSWORD=your-secure-admin-password
+ADMIN_EMAIL=admin@example.com
+```
+
+### Optional Service Configuration
+```bash
+# Transcription (Deepgram primary, Wyoming fallback)
+DEEPGRAM_API_KEY=your-deepgram-key-here
+OFFLINE_ASR_TCP_URI=tcp://host.docker.internal:8765
+
+# LLM Processing
+OLLAMA_BASE_URL=http://ollama:11434
+
+# Vector Storage
+QDRANT_BASE_URL=qdrant
+
+# Speaker Recognition
+SPEAKER_SERVICE_URL=http://speaker-recognition:8001
+```
+
+## Development Notes
+
+### Package Management
+- **Backend**: Uses `uv` for Python dependency management (faster than pip)
+- **Mobile**: Uses `npm` with React Native and Expo
+- **Docker**: Primary deployment method with docker-compose
+
+### Testing Strategy
+- **Integration Tests**: `test_endpoints.py` covers API functionality
+- **Unit Tests**: Individual service tests in `tests/` directory
+- **System Tests**: `test_failure_recovery.py` and `test_memory_debug.py`
+
+### Code Style
+- **Python**: Black formatter with 100-character line length, isort for imports
+- **TypeScript**: Standard React Native conventions
+
+### Health Monitoring
+The system includes comprehensive health checks:
+- `/readiness`: Service dependency validation
+- `/health`: Basic application status
+- Failure recovery system with SQLite tracking
+- Memory debug system for transcript processing monitoring
+
+### Cursor Rule Integration
+Project includes `.cursor/rules/always-plan-first.mdc` requiring understanding before coding. Always explain the task and confirm approach before implementation.
\ No newline at end of file
diff --git a/README.md b/README.md
index eae5cfda..a4549c4e 100644
--- a/README.md
+++ b/README.md
@@ -109,6 +109,7 @@ Choose one based on your needs:
- Requires multiple services (MongoDB, Qdrant, Ollama)
- Higher resource requirements
- Steeper learning curve
+- Authentication setup required
---
diff --git a/backends/advanced-backend/.dockerignore b/backends/advanced-backend/.dockerignore
index cbc4c356..f1e845d9 100644
--- a/backends/advanced-backend/.dockerignore
+++ b/backends/advanced-backend/.dockerignore
@@ -5,4 +5,6 @@
!pyproject.toml
!pyproject.blackwell.toml
!README.md
-!src
\ No newline at end of file
+!src
+!.env
+!memory_config.yaml
\ No newline at end of file
diff --git a/backends/advanced-backend/.env.template b/backends/advanced-backend/.env.template
index 57903a1c..d5337c11 100644
--- a/backends/advanced-backend/.env.template
+++ b/backends/advanced-backend/.env.template
@@ -1,6 +1,70 @@
-OFFLINE_ASR_TCP_URI=
-OLLAMA_BASE_URL=
-NGROK_AUTHTOKEN=
-HF_TOKEN=
-SPEAKER_SERVICE_URL=
-MONGODB_URI=
\ No newline at end of file
+# This key is used to sign your JWT tokens; use a long, random string
+AUTH_SECRET_KEY=
+
+# This is the password for the admin user
+ADMIN_PASSWORD=
+
+# Admin email (defaults to admin@example.com if not set)
+ADMIN_EMAIL=admin@example.com
+
+# ========================================
+# LLM CONFIGURATION (Choose one)
+# ========================================
+
+# LLM Provider: "openai" or "ollama" (default: ollama)
+LLM_PROVIDER=openai
+
+# For OpenAI (recommended for best memory extraction)
+OPENAI_API_KEY=
+OPENAI_MODEL=gpt-4o
+
+# For Ollama (local LLM)
+OLLAMA_BASE_URL=http://ollama:11434
+# OLLAMA_MODEL=gemma3n:e4b
+
+# ========================================
+# SPEECH-TO-TEXT CONFIGURATION (Choose one)
+# ========================================
+
+# Option 1: Deepgram (recommended for best transcription quality)
+DEEPGRAM_API_KEY=
+
+# Option 2: Local ASR service from extras/asr-services
+# OFFLINE_ASR_TCP_URI=tcp://localhost:8765
+
+# ========================================
+# DATABASE CONFIGURATION
+# ========================================
+
+# MongoDB for conversations and user data (defaults to mongodb://mongo:27017)
+MONGODB_URI=mongodb://mongo:27017
+
+# Qdrant for vector memory storage (defaults to qdrant)
+QDRANT_BASE_URL=qdrant
+
+# ========================================
+# OPTIONAL FEATURES
+# ========================================
+
+# Debug directory for troubleshooting
+DEBUG_DIR=./debug_dir
+
+# Ngrok for external access (if using ngrok from docker-compose)
+# NGROK_AUTHTOKEN=
+
+# Speaker recognition service
+# HF_TOKEN=
+# SPEAKER_SERVICE_URL=http://speaker-recognition:8001
+
+# Audio processing settings
+# NEW_CONVERSATION_TIMEOUT_MINUTES=1.5
+# AUDIO_CROPPING_ENABLED=true
+# MIN_SPEECH_SEGMENT_DURATION=1.0
+# CROPPING_CONTEXT_PADDING=0.1
+
+# Server settings
+# HOST=0.0.0.0
+# PORT=8000
+
+# Memory settings
+# MEM0_TELEMETRY=False
\ No newline at end of file
diff --git a/backends/advanced-backend/Dockerfile b/backends/advanced-backend/Dockerfile
index df85267b..35e26d78 100644
--- a/backends/advanced-backend/Dockerfile
+++ b/backends/advanced-backend/Dockerfile
@@ -16,17 +16,21 @@ COPY --from=ghcr.io/astral-sh/uv:0.6.10 /uv /uvx /bin/
# Set up the working directory
WORKDIR /app
-# Copy dependency files
-COPY pyproject.toml .
+# Copy package structure and dependency files first
+COPY pyproject.toml README.md ./
+RUN mkdir -p src/advanced_omi_backend
+COPY src/advanced_omi_backend/__init__.py src/advanced_omi_backend/
-# Install dependencies using uv
+# Install dependencies using uv with deepgram extra
RUN --mount=type=cache,target=/root/.cache/uv \
- uv sync
+ uv sync --extra deepgram
-
-# Copy application code
+# Copy all application code
COPY . .
+# Copy memory config to the expected location
+COPY memory_config.yaml src/
+
# Run the application
-CMD ["uv", "run", "python3", "src/main.py"]
+CMD ["uv", "run", "python3", "src/advanced_omi_backend/main.py"]
diff --git a/backends/advanced-backend/Dockerfile.blackwell b/backends/advanced-backend/Dockerfile.blackwell
index 48a6c632..613600bd 100644
--- a/backends/advanced-backend/Dockerfile.blackwell
+++ b/backends/advanced-backend/Dockerfile.blackwell
@@ -18,6 +18,7 @@ WORKDIR /app
# Copy dependency files
COPY pyproject.blackwell.toml pyproject.toml
+COPY README.md .
# Install dependencies using uv
RUN --mount=type=cache,target=/root/.cache/uv \
diff --git a/backends/advanced-backend/Dockerfile.webui b/backends/advanced-backend/Dockerfile.webui
new file mode 100644
index 00000000..4f1de396
--- /dev/null
+++ b/backends/advanced-backend/Dockerfile.webui
@@ -0,0 +1,24 @@
+FROM python:3.11-slim
+
+WORKDIR /app
+
+# Install uv
+COPY --from=ghcr.io/astral-sh/uv:0.6.10 /uv /uvx /bin/
+
+# Copy pyproject.toml and README.md from the current directory (advanced-backend)
+COPY pyproject.toml README.md ./
+
+# Copy the entire src directory to make advanced_omi_backend package available
+RUN mkdir -p src/advanced_omi_backend
+COPY src/advanced_omi_backend/__init__.py src/advanced_omi_backend/
+
+# Install dependencies using uv with webui extra
+RUN --mount=type=cache,target=/root/.cache/uv \
+ uv sync --extra webui
+
+# Copy the source tree and set PYTHONPATH so package imports resolve
+COPY src/ /app/src/
+ENV PYTHONPATH=/app/src
+
+CMD ["uv", "run", "streamlit", "run", "src/webui/streamlit_app.py", \
+ "--server.address=0.0.0.0", "--server.port=8501"]
\ No newline at end of file
diff --git a/backends/advanced-backend/Docs/README.md b/backends/advanced-backend/Docs/README.md
new file mode 100644
index 00000000..7a50b126
--- /dev/null
+++ b/backends/advanced-backend/Docs/README.md
@@ -0,0 +1,220 @@
+# Friend-Lite Backend Documentation Guide
+
+## **New Developer Reading Order**
+
+Welcome to friend-lite! This guide provides the optimal reading sequence to understand the complete voice → transcript → memories + action items system.
+
+---
+
+## **Start Here: System Overview**
+
+### 1. **[Overview & Quick Start](./quickstart.md)** ⭐ *START HERE*
+**Read first** - Complete system overview and setup guide
+- What the system does (voice → memories + action items)
+- Key features and capabilities
+- Basic setup and configuration
+- **Code References**: `main.py`, `memory_config.yaml`, `docker-compose.yml`
+
+### 2. **[System Architecture](./architecture.md)**
+**Read second** - Complete technical architecture with diagrams
+- Component relationships and data flow
+- Authentication and security architecture
+- Deployment structure and containers
+- **Code References**: `main.py:1-100`, `auth.py`, `users.py`
+
+---
+
+## **Core Components Deep Dive**
+
+### 3. **[Memory System](./memories.md)**
+**Memory extraction and semantic search**
+- How conversations become memories
+- Mem0 integration and vector storage
+- Configuration and customization options
+- **Code References**:
+ - `src/memory/memory_service.py:159-282` (main processing)
+ - `main.py:1047-1065` (conversation end trigger)
+ - `main.py:1163-1195` (background processing)
+
+### 4. **[Action Items System](./action-items.md)**
+**Real-time task extraction and management**
+- How action items are detected and extracted
+- MongoDB storage and CRUD operations
+- Trigger phrases and configuration
+- **Code References**:
+ - `src/action_items_service.py` (primary handler)
+ - `main.py:1341-1378` (real-time processing)
+ - `main.py:2671-2800` (API endpoints)
+
+### 5. **[Authentication System](./auth.md)**
+**User management and security**
+- Dual authentication (email + user_id)
+- JWT tokens and OAuth integration
+- User-centric data architecture
+- **Code References**:
+ - `src/auth.py` (authentication logic)
+ - `src/users.py` (user management)
+ - `main.py:1555-1563` (auth router setup)
+
+---
+
+## **Advanced Topics**
+
+### 6. **Memory Debug System** → `../MEMORY_DEBUG_IMPLEMENTATION.md`
+**Pipeline tracking and debugging**
+- How to track transcript → memory conversion
+- Debug database schema and API endpoints
+- Performance monitoring and troubleshooting
+- **Code References**:
+ - `src/memory_debug.py` (SQLite tracking)
+ - `src/memory_debug_api.py` (debug endpoints)
+ - `main.py:1562-1563` (debug router integration)
+
+### 7. **Action Items Architecture** → `../ACTION_ITEMS_CLEANUP_SUMMARY.md`
+**Clean architecture explanation**
+- Why action items were moved out of memory service
+- Current single-responsibility design
+- How components interact
+- **Code References**: `src/action_items_service.py` vs removed functions
+
+---
+
+## **Configuration & Customization**
+
+### 8. **Configuration File** → `../memory_config.yaml`
+**Central configuration for all extraction**
+- Memory extraction settings and prompts
+- Action item triggers and configuration
+- Quality control and debug settings
+- **Code References**:
+ - `src/memory_config_loader.py` (config loading)
+ - `src/memory/memory_service.py:176-204` (config usage)
+
+---
+
+## **Quick Reference by Use Case**
+
+### **"I want to understand the system quickly"** (30 min)
+1. [quickstart.md](./quickstart.md) - System overview
+2. [architecture.md](./architecture.md) - Technical architecture
+3. `main.py:1-200` - Core imports and setup
+4. `memory_config.yaml` - Configuration overview
+
+### **"I want to work on memory extraction"**
+1. [memories.md](./memories.md) - Memory system details
+2. `../memory_config.yaml` - Memory configuration
+3. `src/memory/memory_service.py` - Implementation
+4. `main.py:1047-1065, 1163-1195` - Processing triggers
+
+### **"I want to work on action items"**
+1. [action-items.md](./action-items.md) - Action items system
+2. `../memory_config.yaml` - Action item configuration
+3. `src/action_items_service.py` - Implementation
+4. `main.py:1341-1378` - Real-time processing
+
+### **"I want to debug pipeline issues"**
+1. `../MEMORY_DEBUG_IMPLEMENTATION.md` - Debug system overview
+2. `src/memory_debug.py` - Debug tracking implementation
+3. API: `GET /api/debug/memory/stats` - Live debugging
+4. `src/memory_debug_api.py` - Debug endpoints
+
+### **"I want to understand authentication"**
+1. [auth.md](./auth.md) - Authentication system
+2. `src/auth.py` - Authentication implementation
+3. `src/users.py` - User management
+4. `main.py:1555-1563` - Auth router setup
+
+---
+
+## **File Organization Reference**
+
+```
+backends/advanced-backend/
+├── Docs/                           # Documentation
+│   ├── README.md                   # This file (start here)
+│   ├── quickstart.md               # System overview & setup
+│   ├── architecture.md             # Technical architecture
+│   ├── memories.md                 # Memory system details
+│   ├── action-items.md             # Action items system
+│   └── auth.md                     # Authentication system
+│
+├── src/                            # Source Code
+│   ├── main.py                     # Core application (WebSocket, API)
+│   ├── auth.py                     # Authentication system
+│   ├── users.py                    # User management
+│   ├── action_items_service.py     # Action items (MongoDB)
+│   ├── memory/
+│   │   └── memory_service.py       # Memory system (Mem0)
+│   ├── memory_debug.py             # Debug tracking (SQLite)
+│   ├── memory_debug_api.py         # Debug API endpoints
+│   └── memory_config_loader.py     # Configuration loading
+│
+├── memory_config.yaml              # Central configuration
+├── MEMORY_DEBUG_IMPLEMENTATION.md  # Debug system details
+└── ACTION_ITEMS_CLEANUP_SUMMARY.md # Architecture cleanup
+```
+
+---
+
+## **Key Code Entry Points**
+
+### **Audio Processing Pipeline**
+- **Entry**: WebSocket endpoints in `main.py:1562+`
+- **Transcription**: `main.py:1258-1340` (transcription processor)
+- **Memory Trigger**: `main.py:1047-1065` (conversation end)
+- **Action Items**: `main.py:1341-1378` (real-time processing)
+
+### **Data Storage**
+- **Memories**: `src/memory/memory_service.py` → Mem0 → Qdrant
+- **Action Items**: `src/action_items_service.py` → MongoDB
+- **Debug Data**: `src/memory_debug.py` → SQLite
+
+### **Configuration**
+- **Loading**: `src/memory_config_loader.py`
+- **File**: `memory_config.yaml`
+- **Usage**: `src/memory/memory_service.py:176-204`
+
+### **Authentication**
+- **Setup**: `src/auth.py`
+- **Users**: `src/users.py`
+- **Integration**: `main.py:1555-1563`
+
+---
+
+## **Reading Tips**
+
+1. **Follow the references**: Each doc links to specific code files and line numbers
+2. **Use the debug API**: `GET /api/debug/memory/stats` shows live system status
+3. **Check configuration first**: Many behaviors are controlled by `memory_config.yaml`
+4. **Understand the dual pipeline**: Memories (end-of-conversation) vs Action Items (real-time)
+5. **Test with curl**: All API endpoints have curl examples in the docs
+
+---
+
+## **After Reading This Guide**
+
+### **Next Steps for New Developers**
+
+1. **Set up the system**: Follow [quickstart.md](./quickstart.md) to get everything running
+2. **Test the API**: Use the curl examples in the documentation to test endpoints
+3. **Explore the debug system**: Check `GET /api/debug/memory/stats` to see live data
+4. **Modify configuration**: Edit `memory_config.yaml` to see how it affects extraction
+5. **Read the code**: Start with `main.py` and follow the references in each doc
+
+### **Contributing Guidelines**
+
+- **Add code references**: When updating docs, include file paths and line numbers
+- **Test your changes**: Use the debug API to verify your modifications work
+- **Update configuration**: Add new settings to `memory_config.yaml` when needed
+- **Follow the architecture**: Keep memories and action items in their respective services
+
+### **Getting Help**
+
+- **Debug API**: `GET /api/debug/memory/*` endpoints show real-time system status
+- **Configuration**: Check `memory_config.yaml` for behavior controls
+- **Logs**: Check Docker logs with `docker compose logs friend-backend`
+- **Documentation**: Each doc file links to relevant code sections
+
+---
+
+This documentation structure ensures you understand both the **big picture** and **implementation details** in a logical progression!
\ No newline at end of file
diff --git a/backends/advanced-backend/README_speaker_enrollment.md b/backends/advanced-backend/Docs/README_speaker_enrollment.md
similarity index 100%
rename from backends/advanced-backend/README_speaker_enrollment.md
rename to backends/advanced-backend/Docs/README_speaker_enrollment.md
diff --git a/backends/advanced-backend/Docs/UI.md b/backends/advanced-backend/Docs/UI.md
new file mode 100644
index 00000000..f41684ac
--- /dev/null
+++ b/backends/advanced-backend/Docs/UI.md
@@ -0,0 +1,241 @@
+# Streamlit Web Dashboard Documentation
+
+## Overview
+
+The Friend-Lite web dashboard provides a comprehensive interface for managing conversations, memories, users, and system debugging. Built with Streamlit, it offers real-time access to audio processing pipelines and administrative functions.
+
+## Access & Authentication
+
+### Dashboard URL
+- **Local**: `http://localhost:8501`
+- **Audio playback**: Set the `BACKEND_PUBLIC_URL` environment variable to a public URL for your server; your browser must be able to reach this URL for audio playback to work.
+
+### Authentication Methods
+1. **Email/Password Login**: Standard authentication via backend API
+2. **JWT Token**: Direct token authentication (for programmatic access)
+3. **Google OAuth**: Social login integration (if configured)
+
+### User Roles
+- **Regular Users**: Access to own conversations, memories, and action items
+- **Admin Users**: Full system access including debug tools and user management
+
+## Dashboard Sections
+
+### 1. Conversations Tab
+**Purpose**: View and manage audio conversations and transcripts
+
+**Features**:
+- Real-time conversation listing with metadata
+- Audio playback and transcript viewing
+- Conversation status tracking (open/closed)
+- Speaker identification and timing information
+- Audio file upload for processing existing recordings
+
+**Admin Features**:
+- View all users' conversations
+- Advanced filtering and search capabilities
+
+### 2. Memories Tab
+**Purpose**: Browse and search extracted conversation memories
+
+**Features**:
+- Semantic search across all memories
+- Memory categorization and tagging
+- Temporal filtering and sorting
+- Memory source tracking (which conversation)
+- Export capabilities
+
+**Admin Features**:
+- **Admin Debug Section**: Load and view all user memories for debugging
+- **System-wide Memory View**: Access all memories across users
+- **Memory Statistics**: Processing success rates and performance metrics
+
+### 3. User Management Tab
+**Purpose**: Administrative user and client management
+
+**Features**:
+- **Create New Users**: Email-based user registration
+- **User Listing**: View all registered users with details
+- **User Deletion**: Remove users and optionally clean up their data
+- **Client Management**: View active audio clients and connections
+
+**Admin Only**: This entire tab requires superuser privileges
+
+### 4. Conversation Management Tab
+**Purpose**: Real-time conversation and client control
+
+**Features**:
+- **Active Clients**: View currently connected audio clients
+- **Conversation Control**: Manually close open conversations
+- **Connection Monitoring**: Real-time client status and metadata
+- **WebSocket Information**: Authentication tokens and connection details
+
+### 5. System State Tab (Admin Only)
+**Purpose**: Real-time system monitoring, debugging, and failure recovery status
+
+**Important**: This tab uses **lazy loading** - click the buttons to load specific data sections. This design prevents performance issues and allows selective monitoring.
+
+**Features**:
+
+#### System Overview (Click "Load Debug Stats")
+- **Processing Metrics**: Total memory sessions, success rates, processing times
+- **Failure Analysis**: Failed extractions and error tracking
+- **Performance Monitoring**: Average processing times and bottlenecks
+- **Live Statistics**: Real-time system performance data
+
+#### Recent Memory Sessions (Click "Load Recent Sessions")
+- **Session Listing**: Recent memory processing attempts with status
+- **Session Details**: Deep dive into specific processing sessions with full JSON data
+- **Pipeline Tracing**: Step-by-step processing flow analysis
+- **Error Debugging**: Detailed error messages and stack traces for failed sessions
+
+#### System Configuration (Click "Load Memory Config")
+- **Memory Config**: Current memory extraction settings and LLM configuration
+- **Action Item Config**: Action item detection configuration and trigger phrases
+- **Debug Settings**: System debug mode, logging levels, and performance settings
+- **Live Config**: Real-time configuration without restart required
+
+#### Failure Recovery System (Click "Load System Overview")
+- **System Health**: Overall system status (healthy/degraded/critical)
+- **Queue Statistics**: Processing queue metrics, backlogs, and throughput
+- **Service Health**: Real-time health checks for all dependencies:
+ - MongoDB connectivity and response times
+ - Qdrant vector database status and performance
+ - Ollama/OpenAI API availability and model status
+ - ASR service connectivity and transcription status
+- **Recovery Metrics**: Automatic recovery attempts and success rates
+
+#### Service Health Monitoring (Click "Check Service Health")
+- **Service Status Grid**: Visual status indicators for all services
+- **Response Times**: Real-time latency metrics for each service
+- **Failure Tracking**: Consecutive failure counts and error messages
+- **Circuit Breaker Status**: Service protection states and thresholds
+
+#### Usage Tips
+- **Click buttons to load data**: Content appears only after clicking section buttons
+- **Refresh data**: Use the "Refresh Debug Data" button to clear cache and reload
+- **Monitor continuously**: Regularly check different sections for system health
+- **Error investigation**: Use session details to debug processing failures
+
+## API Integration
+
+### Debug API Endpoints (Admin)
+The dashboard integrates with comprehensive debug APIs:
+
+**Memory Debug APIs:**
+- `GET /api/debug/memory/stats` - Processing statistics
+- `GET /api/debug/memory/sessions` - Recent sessions
+- `GET /api/debug/memory/session/{uuid}` - Session details
+- `GET /api/debug/memory/config` - Configuration
+- `GET /api/debug/memory/pipeline/{uuid}` - Pipeline trace
+
+**Failure Recovery APIs:**
+- `GET /api/failure-recovery/system-overview` - System status
+- `GET /api/failure-recovery/queue-stats` - Queue metrics
+- `GET /api/failure-recovery/health` - Service health
+- `GET /api/failure-recovery/circuit-breakers` - Circuit breaker status
+
+### Authentication Requirements
+- All debug APIs require admin authentication
+- JWT tokens must have `is_superuser: true`
+- Regular users see filtered data based on their user ID
+
+## Advanced Features
+
+### Real-time Updates
+- **Auto-refresh**: Configurable refresh intervals for live data
+- **WebSocket Status**: Live connection monitoring
+- **Health Monitoring**: Real-time service status updates
+
+### Data Export
+- **Memory Export**: Download memories in JSON format
+- **Conversation Export**: Export transcripts and audio metadata
+- **Debug Reports**: Export system performance reports
+
+### Troubleshooting Tools
+- **Connection Testing**: Verify backend API connectivity
+- **Authentication Debugging**: Token validation and user info display
+- **Service Diagnostics**: Health checks for all system components
+
+## Configuration
+
+### Environment Variables
+```bash
+# Backend API connection
+BACKEND_API_URL=http://localhost:8000
+BACKEND_PUBLIC_URL=http://your-domain:8000
+
+# Debug mode
+DEBUG=true # Enable detailed logging
+
+# Authentication (inherited from backend)
+AUTH_SECRET_KEY=your-secret-key
+ADMIN_PASSWORD=your-admin-password
+```
+
+### Streamlit Configuration
+- **Port**: Default 8501
+- **Logs**: Stored in `./logs/streamlit.log`
+- **Session State**: Manages authentication and UI state
+- **Caching**: 5-minute cache for API responses
+
+## Usage Patterns
+
+### Admin Workflow
+1. **Login** with superuser credentials
+2. **System Health**: Check the System State tab for service status
+3. **Monitor Processing**: Review memory debug statistics
+4. **User Management**: Create/manage user accounts as needed
+5. **Troubleshooting**: Use debug tools to investigate issues
+
+### User Workflow
+1. **Authentication**: Login via sidebar
+2. **View Conversations**: Browse recent audio sessions
+3. **Search Memories**: Find relevant conversation insights
+4. **Manage Action Items**: Track and update tasks
+5. **Connect Clients**: Use provided tokens for audio devices
+
+## Security Considerations
+
+### Access Control
+- **Role-based UI**: Admin features hidden from regular users
+- **API Security**: All requests include proper authentication headers
+- **Token Management**: Secure token storage and automatic refresh
+
+### Data Privacy
+- **User Isolation**: Non-admin users only see their own data
+- **Audit Logging**: All admin actions logged for accountability
+- **Secure Communication**: HTTPS recommended for production
+
+## Troubleshooting
+
+### Common Issues
+
+#### Connection Problems
+- Verify `BACKEND_API_URL` points to running backend
+- Check firewall/port settings
+- Ensure backend health endpoint responds
+
+#### Authentication Failures
+- Verify admin credentials in backend `.env`
+- Check JWT token expiration (1-hour default)
+- Confirm user has appropriate permissions
+
+#### Missing Debug Tab
+- Only visible to admin users (`is_superuser: true`)
+- Verify authentication with admin account
+- Check backend user creation and superuser flag
+
+#### API Errors
+- Check backend logs for detailed error information
+- Verify all required services are running (MongoDB, Qdrant, etc.)
+- Test API endpoints directly with curl for debugging
+
+### Debug Steps
+1. **Check Logs**: `./logs/streamlit.log` for frontend issues
+2. **Backend Health**: Use `/health` endpoint to verify backend status
+3. **API Testing**: Test endpoints directly with admin token
+4. **Service Status**: Use debug tab to check component health
+5. **Configuration**: Verify all environment variables are set correctly
+
+This dashboard provides comprehensive system management capabilities with particular strength in debugging and monitoring the audio processing pipeline and memory extraction systems.
\ No newline at end of file
diff --git a/backends/advanced-backend/Docs/action-items.md b/backends/advanced-backend/Docs/action-items.md
new file mode 100644
index 00000000..ed27adcd
--- /dev/null
+++ b/backends/advanced-backend/Docs/action-items.md
@@ -0,0 +1,345 @@
+# Action Items Configuration and Usage
+
+> **Prerequisite**: Read [quickstart.md](./quickstart.md) first for system overview.
+
+## Overview
+
+The friend-lite backend includes a comprehensive action items system that automatically extracts tasks and commitments from conversations. This system operates in **real-time** alongside the memory extraction system, providing immediate task detection and management capabilities.
+
+**Code References**:
+- **Main Implementation**: `src/action_items_service.py` (MongoDB-based storage and processing)
+- **Real-time Processing**: `main.py:1341-1378` (per-transcript-segment processing)
+- **API Endpoints**: `main.py:2671-2800` (action items CRUD operations)
+- **Configuration**: `memory_config.yaml` (action_item_extraction section)
+
+## Architecture
+
+### Dual Processing System
+
+The action items system operates in parallel with memory extraction:
+
+```
+Audio → Transcription → Dual Processing
+                        ├─ Memory Pipeline (end-of-conversation)
+                        └─ Action Item Pipeline (real-time per-segment)
+```
+
+### Key Components
+
+1. **Real-time Detection**: Each transcript segment is checked for action item triggers
+2. **Configurable Extraction**: YAML-based configuration for prompts and triggers
+3. **MongoDB Storage**: Action items stored in dedicated collection with full CRUD
+4. **Debug Tracking**: SQLite-based tracking of extraction process
+5. **User-Centric Design**: All action items keyed by user_id, not client_id
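+
+A minimal sketch of the trigger check in component 1, assuming simple case-insensitive substring matching (the real logic lives in `src/action_items_service.py` and may be more involved):
+
+```python
+def contains_trigger(segment: str, trigger_phrases: list[str]) -> bool:
+    """Return True if any configured trigger phrase appears in the segment."""
+    text = segment.lower()
+    return any(phrase.lower() in text for phrase in trigger_phrases)
+
+# Phrase list abbreviated from memory_config.yaml
+TRIGGERS = ["simon says", "action item", "todo", "follow up"]
+contains_trigger("Simon says we need to schedule a meeting", TRIGGERS)  # True
+```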
+
+### Architecture Cleanup
+
+**Previous Issue**: The system had duplicated action item processing in two places:
+- `ActionItemsService` (MongoDB-based, primary handler)
+- `MemoryService` (Mem0-based, unused legacy code)
+
+**Current Architecture**: Clean separation of concerns:
+- **`ActionItemsService`**: Handles ALL action item operations (MongoDB-based)
+- **`MemoryService`**: Handles ONLY memory operations (Mem0-based)
+- **Debug System**: Tracks both memories and action items in unified SQLite database
+
+## Configuration
+
+### Basic Configuration (`memory_config.yaml`)
+
+**Configuration Loading**: See `src/memory_config_loader.py:get_action_item_extraction_config()` for how this configuration is loaded and used.
+
+```yaml
+action_item_extraction:
+ # Enable/disable action item extraction
+ enabled: true
+
+ # Trigger phrases that indicate action items
+ trigger_phrases:
+ - "simon says" # Primary trigger (case-insensitive)
+ - "action item" # Explicit action item
+ - "todo" # Simple todo
+ - "follow up" # Follow-up tasks
+ - "next step" # Next steps
+ - "homework" # Assignments
+ - "deliverable" # Project deliverables
+ - "deadline" # Time-sensitive tasks
+ - "schedule" # Scheduling tasks
+ - "reminder" # Reminders
+
+ # LLM extraction prompt
+ prompt: |
+ Extract actionable tasks and commitments from this conversation.
+
+ Look for:
+ - Explicit commitments ("I'll send you the report")
+ - Requested actions ("Can you review the document?")
+ - Scheduled tasks ("We need to meet next week")
+ - Follow-up items ("Let's check on this tomorrow")
+ - Deliverables mentioned ("The presentation is due Friday")
+
+ For each action item, determine:
+ - What needs to be done (clear, specific description)
+ - Who is responsible (assignee)
+ - When it's due (deadline if mentioned)
+ - Priority level (high/medium/low)
+
+ Return ONLY valid JSON array. If no action items found, return [].
+
+ Example format:
+ [
+ {
+ "description": "Send project status report to team",
+ "assignee": "John",
+ "due_date": "Friday",
+ "priority": "high",
+ "context": "Discussed in weekly team meeting"
+ }
+ ]
+
+ # LLM settings for action item extraction
+ llm_settings:
+ temperature: 0.1 # Low temperature for consistent extraction
+ max_tokens: 1000 # Sufficient for multiple action items
+ model: "llama3.1:latest" # Can be overridden by environment
+```
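+
+The loader in `src/memory_config_loader.py` consumes this section. A hedged sketch of what such a loader might do with the already-parsed YAML dict (the defaults below are illustrative, not the backend's actual fallbacks):
+
+```python
+def get_action_item_extraction_config(config: dict) -> dict:
+    """Pull the action_item_extraction section out of the parsed YAML,
+    falling back to safe defaults when keys are missing (illustrative defaults)."""
+    section = config.get("action_item_extraction", {})
+    llm = section.get("llm_settings", {})
+    return {
+        "enabled": section.get("enabled", False),
+        "trigger_phrases": section.get("trigger_phrases", []),
+        "prompt": section.get("prompt", ""),
+        "llm_settings": {
+            "temperature": llm.get("temperature", 0.1),
+            "max_tokens": llm.get("max_tokens", 1000),
+            "model": llm.get("model", "llama3.1:latest"),
+        },
+    }
+
+cfg = get_action_item_extraction_config(
+    {"action_item_extraction": {"enabled": True, "trigger_phrases": ["simon says"]}}
+)
+```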
+
+### Advanced Configuration
+
+```yaml
+action_item_extraction:
+ enabled: true
+
+ # Enhanced trigger detection
+ trigger_phrases:
+ - "simon says"
+ - "action item"
+ - "i need to"
+ - "we should"
+ - "let's"
+ - "can you"
+ - "please"
+ - "remember to"
+ - "don't forget"
+ - "make sure"
+
+ # Custom extraction prompt with specific instructions
+ prompt: |
+ You are an expert task manager. Extract actionable items from this conversation.
+
+ Focus on:
+ 1. Specific commitments with clear ownership
+ 2. Time-bound tasks with deadlines
+ 3. Follow-up actions requiring completion
+ 4. Deliverables with clear outcomes
+
+ For each action item, provide:
+ - description: Clear, specific task description
+ - assignee: Person responsible (use "unassigned" if unclear)
+ - due_date: Deadline if mentioned (use "not_specified" if not clear)
+ - priority: Based on urgency (high/medium/low/not_specified)
+ - context: Brief context about when/why this was mentioned
+
+ Return ONLY valid JSON array. Empty array if no action items found.
+
+ # Fine-tuned LLM parameters
+ llm_settings:
+ temperature: 0.05 # Very low for consistent extraction
+ max_tokens: 1500 # More tokens for detailed extraction
+ model: "llama3.1:latest"
+```
+
+## Usage Examples
+
+### Trigger Phrase Examples
+
+The system detects action items when trigger phrases are present:
+
+```
+✅ "Simon says we need to schedule a follow-up meeting"
+✅ "Action item: John will send the report by Friday"
+✅ "Todo: Review the contract before tomorrow"
+✅ "Follow up with the client about their requirements"
+✅ "Next step is to finalize the budget proposal"
+✅ "Can you please update the documentation?"
+✅ "Let's schedule a review meeting for next week"
+✅ "Don't forget to submit the quarterly report"
+```
+
+### Action Item Data Structure
+
+```json
+{
+ "description": "Send project status report to team",
+ "assignee": "John Smith",
+ "due_date": "Friday, December 15th",
+ "priority": "high",
+ "status": "open",
+ "context": "Discussed in weekly team meeting",
+ "audio_uuid": "audio_12345",
+ "client_id": "user1-laptop",
+ "user_id": "user1",
+ "created_at": 1703548800,
+ "updated_at": 1703548800
+}
+```
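+
+For illustration, the same document shape as a Python dataclass with a status-update helper (hypothetical; the backend stores plain MongoDB documents rather than this class):
+
+```python
+import time
+from dataclasses import dataclass, field
+
+@dataclass
+class ActionItem:
+    """Mirrors the document fields shown above (illustrative stand-in)."""
+    description: str
+    user_id: str
+    client_id: str
+    audio_uuid: str
+    assignee: str = "unassigned"
+    due_date: str = "not_specified"
+    priority: str = "not_specified"
+    status: str = "open"
+    context: str = ""
+    created_at: int = field(default_factory=lambda: int(time.time()))
+    updated_at: int = field(default_factory=lambda: int(time.time()))
+
+    def set_status(self, status: str) -> None:
+        # Keep updated_at in sync whenever the status changes
+        self.status = status
+        self.updated_at = int(time.time())
+```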
+
+## API Endpoints
+
+### Action Items Management
+
+**API Implementation**: See `main.py:2671-2800` for complete CRUD endpoint implementations.
+
+```bash
+# Get user's action items
+GET /api/action_items?status=open&limit=20
+
+# Get specific action item
+GET /api/action_items/{action_item_id}
+
+# Update action item status
+PUT /api/action_items/{action_item_id}
+Content-Type: application/json
+{
+ "status": "completed"
+}
+
+# Search action items
+GET /api/action_items/search?query=report&status=open
+
+# Delete action item
+DELETE /api/action_items/{action_item_id}
+```
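+
+A small helper can map these operations onto request tuples for any HTTP client (e.g. `requests` or `httpx`). The `build_request` name and return shape are illustrative, not part of the backend:
+
+```python
+def build_request(base_url: str, token: str, action: str,
+                  item_id: str | None = None, **params):
+    """Translate the CRUD operations above into (method, url, headers, json) tuples."""
+    headers = {"Authorization": f"Bearer {token}"}
+    if action == "list":
+        qs = "&".join(f"{k}={v}" for k, v in params.items())
+        return ("GET", f"{base_url}/api/action_items?{qs}", headers, None)
+    if action == "update":
+        return ("PUT", f"{base_url}/api/action_items/{item_id}", headers, params)
+    if action == "delete":
+        return ("DELETE", f"{base_url}/api/action_items/{item_id}", headers, None)
+    raise ValueError(f"unknown action: {action}")
+```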
+
+### Debug & Monitoring
+
+```bash
+# View action item extraction stats
+GET /api/debug/memory/stats
+
+# View recent action item sessions
+GET /api/debug/memory/sessions
+
+# Debug specific session
+GET /api/debug/memory/session/{audio_uuid}
+
+# View pipeline trace
+GET /api/debug/memory/pipeline/{audio_uuid}
+```
+
+## Debug Tracking
+
+The system tracks all action item extraction attempts:
+
+### What's Tracked
+
+- **Extraction Attempts**: Success/failure of each extraction
+- **Processing Time**: How long each extraction takes
+- **Prompt Used**: Which prompt was used for extraction
+- **LLM Model**: Which model performed the extraction
+- **Transcript Length**: Size of input text
+- **Error Details**: Specific error messages for failed extractions
+
+### Debug Database Schema
+
+```sql
+-- Action item extractions are stored as memory_extractions with type='action_item'
+SELECT
+ audio_uuid,
+ memory_text,
+ extraction_prompt,
+ metadata_json,
+ created_at
+FROM memory_extractions
+WHERE memory_type = 'action_item';
+
+-- Processing attempts show success/failure patterns
+SELECT
+ audio_uuid,
+ attempt_type,
+ success,
+ error_message,
+ processing_time_ms
+FROM extraction_attempts
+WHERE attempt_type = 'action_item_extraction';
+```
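+
+The queries above can be tried against an in-memory stand-in; the table layout below is inferred from the SELECT statements, and the real debug database may carry more columns:
+
+```python
+import sqlite3
+
+conn = sqlite3.connect(":memory:")
+conn.execute("""CREATE TABLE memory_extractions (
+    audio_uuid TEXT, memory_text TEXT, extraction_prompt TEXT,
+    metadata_json TEXT, memory_type TEXT, created_at INTEGER)""")
+conn.execute("INSERT INTO memory_extractions VALUES "
+             "('audio_12345', 'Send report', 'prompt', '{}', 'action_item', 1703548800)")
+conn.execute("INSERT INTO memory_extractions VALUES "
+             "('audio_12345', 'Likes coffee', 'prompt', '{}', 'memory', 1703548800)")
+
+# Only rows tagged as action items come back, as in the first query above
+rows = conn.execute(
+    "SELECT audio_uuid, memory_text FROM memory_extractions WHERE memory_type = 'action_item'"
+).fetchall()
+```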
+
+## Performance Optimization
+
+### Configuration Tips
+
+1. **Adjust Trigger Phrases**: Add domain-specific triggers for your use case
+2. **Tune LLM Parameters**: Lower temperature for consistency, higher for creativity
+3. **Optimize Prompts**: Include examples specific to your workflow
+4. **Monitor Processing Time**: Use debug endpoints to identify bottlenecks
+
+### Quality Control
+
+```yaml
+quality_control:
+ # Skip very short transcripts
+ min_conversation_length: 10
+
+ # Skip transcripts with low meaningful content
+ skip_low_content: true
+ min_content_ratio: 0.2
+
+ # Skip common filler patterns
+ skip_patterns:
+ - "^(um|uh|hmm|yeah|ok|okay)\\s*$"
+ - "^test\\s*$"
+```
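+
+A sketch of how these quality-control rules might be applied before spending LLM tokens (`min_content_ratio` is omitted since its exact definition isn't shown here):
+
+```python
+import re
+
+SKIP_PATTERNS = [r"^(um|uh|hmm|yeah|ok|okay)\s*$", r"^test\s*$"]
+
+def passes_quality_control(transcript: str, min_length: int = 10) -> bool:
+    """Reject very short transcripts and common filler-only utterances."""
+    text = transcript.strip()
+    if len(text) < min_length:
+        return False
+    return not any(re.match(p, text, re.IGNORECASE) for p in SKIP_PATTERNS)
+```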
+
+## Integration with Memory System
+
+Action items and memories work together:
+
+1. **Shared Debug Tracking**: Both use the same SQLite debug database
+2. **Coordinated Processing**: Both respect the same quality control settings
+3. **User-Centric Storage**: Both keyed by user_id for proper isolation
+4. **Unified Configuration**: Single YAML file controls both systems
+
+## Troubleshooting
+
+### Common Issues
+
+1. **No Action Items Detected**
+ - Check if trigger phrases are present in transcript
+ - Verify `action_item_extraction.enabled: true` in config
+ - Check debug logs for extraction attempts
+
+2. **JSON Parsing Errors**
+ - Review extraction prompt for clarity
+ - Lower LLM temperature for more consistent output
+ - Check debug database for exact error messages
+
+3. **Performance Issues**
+ - Monitor processing times in debug stats
+ - Adjust `max_tokens` and `temperature` settings
+ - Consider using quality control to filter low-value transcripts
+
+### Debug Commands
+
+```bash
+# Test action item configuration
+curl -H "Authorization: Bearer $TOKEN" \
+ "http://localhost:8000/api/debug/memory/config/test?test_text=Simon%20says%20we%20need%20to%20schedule%20a%20meeting"
+
+# View extraction statistics
+curl -H "Authorization: Bearer $TOKEN" \
+ http://localhost:8000/api/debug/memory/stats
+
+# Check recent processing
+curl -H "Authorization: Bearer $TOKEN" \
+ http://localhost:8000/api/debug/memory/sessions?limit=10
+```
+
+## Best Practices
+
+1. **Use Specific Trigger Phrases**: Add domain-specific triggers for your use case
+2. **Test Prompts Regularly**: Use the debug API to test prompt effectiveness
+3. **Monitor Performance**: Check debug stats for processing times and success rates
+4. **Customize for Your Workflow**: Adjust prompts and triggers based on your conversation patterns
+5. **Regular Configuration Updates**: Reload configuration without restart using the API
+
+This action items system provides comprehensive task management capabilities with full configurability and debugging support, integrating seamlessly with the memory extraction pipeline.
\ No newline at end of file
diff --git a/backends/advanced-backend/Docs/architecture.md b/backends/advanced-backend/Docs/architecture.md
new file mode 100644
index 00000000..0ac47b79
--- /dev/null
+++ b/backends/advanced-backend/Docs/architecture.md
@@ -0,0 +1,703 @@
+# Friend-Lite Backend Architecture
+
+> **Prerequisite**: Read [quickstart.md](./quickstart.md) first for basic system understanding.
+
+## System Overview
+
+Friend-Lite is a comprehensive real-time conversation processing system that captures audio streams, performs speech-to-text transcription, extracts memories, and generates action items. The system features a FastAPI backend with WebSocket audio streaming, a Streamlit web dashboard for management, and complete user authentication with role-based access control.
+
+**Core Implementation**: The complete system is implemented in `src/main.py` with supporting services in dedicated modules.
+
+## Architecture Diagram
+
+```mermaid
+graph TB
+ %% External Clients
+    Client[Audio Client<br/>Mobile/Desktop Apps]
+    WebUI[Streamlit Dashboard<br/>Web Interface]
+
+    %% Main Backend Components
+    subgraph "Core Application"
+        Main[main.py<br/>FastAPI Backend]
+        Auth[auth.py<br/>Authentication System]
+        StreamlitApp[streamlit_app.py<br/>Web Dashboard]
+    end
+
+    %% Audio Processing Pipeline
+    subgraph "Audio Processing Pipeline"
+        OpusDecoder[Opus Decoder<br/>Realtime Audio]
+        AudioChunks[Audio Chunks<br/>Per-Client Queues]
+        Transcription[Transcription Manager<br/>Deepgram WebSocket/Wyoming Fallback]
+        ClientState[Per-Client State<br/>Conversation Management]
+        AudioCropping[Audio Cropping<br/>Speech Segment Extraction]
+    end
+
+    %% Business Logic Services
+    subgraph "Intelligence Services"
+        ActionItems[action_items_service.py<br/>Task Extraction]
+        Memory[memory/<br/>Conversation Memory]
+        Metrics[metrics.py<br/>System Monitoring]
+    end
+
+    %% Data Layer
+    subgraph "Data Models & Security"
+        Users[users.py<br/>User Management]
+        ChunkRepo[ChunkRepo<br/>Audio Data Access]
+    end
+
+    %% External Services
+    subgraph "External Services"
+        MongoDB[(MongoDB<br/>Users & Conversations)]
+        Ollama[Ollama<br/>LLM Processing]
+        Qdrant[Qdrant<br/>Vector Memory Store]
+        ASR[Wyoming ASR/<br/>Deepgram API]
+    end
+
+    %% Docker Container Structure
+    subgraph "Docker Deployment"
+        BackendContainer[friend-backend<br/>FastAPI + Python]
+        StreamlitContainer[streamlit<br/>Web Dashboard]
+        ProxyContainer[nginx<br/>Reverse Proxy]
+        MongoContainer[mongo<br/>Database]
+        QdrantContainer[qdrant<br/>Vector DB]
+    end
+
+ %% Authentication & API Flow
+ Client -->|WebSocket + JWT| Auth
+ WebUI -->|HTTP + JWT| Auth
+ Auth -->|User Validation| Main
+ StreamlitApp -->|Backend API| Main
+
+ %% Audio Processing Flow
+ Client -->|Opus/PCM Audio| Main
+ Main -->|Audio Packets| OpusDecoder
+ OpusDecoder -->|PCM Data| AudioChunks
+ AudioChunks -->|Per-Client Processing| ClientState
+ ClientState -->|Audio Chunks| Transcription
+ ClientState -->|Speech Segments| AudioCropping
+
+ %% WebSocket & REST Endpoints
+ Main -->|/ws, /ws_pcm| Client
+ Main -->|/api/* endpoints| WebUI
+
+ %% Service Integration
+ Transcription -->|Transcript Text| ActionItems
+ Transcription -->|Conversation Data| Memory
+ ActionItems -->|Tasks| MongoDB
+ Memory -->|Memory Storage| Ollama
+ Memory -->|Vector Storage| Qdrant
+
+ %% External Service Connections
+ Main -->|User & Conversation Data| MongoDB
+ Transcription -->|Speech Processing| ASR
+ Memory -->|Embeddings| Qdrant
+ ActionItems -->|LLM Analysis| Ollama
+
+ %% Monitoring & Metrics
+ Main -->|System Metrics| Metrics
+ Metrics -->|Performance Data| MongoDB
+
+ %% Container Relationships
+ BackendContainer -->|Internal Network| MongoContainer
+ BackendContainer -->|Internal Network| QdrantContainer
+ StreamlitContainer -->|HTTP API| BackendContainer
+ ProxyContainer -->|Load Balance| BackendContainer
+ ProxyContainer -->|Route /dashboard| StreamlitContainer
+```
+
+## Component Descriptions
+
+### Core Application
+
+#### FastAPI Backend (`main.py`)
+- **Authentication-First Design**: All endpoints require JWT authentication
+- **WebSocket Audio Streaming**: Real-time Opus/PCM audio ingestion with per-client isolation (`main.py:1562+`)
+- **Conversation Management**: Automatic conversation lifecycle with timeout handling (`main.py:1018-1149`)
+- **REST API Suite**: Comprehensive endpoints for user, conversation, memory, and action item management (`main.py:1700+`)
+- **Health Monitoring**: Detailed service health checks and performance metrics (`main.py:2500+`)
+- **Audio Cropping**: Intelligent speech segment extraction using FFmpeg (`main.py:174-200`)
+
+#### Authentication System (`auth.py`)
+- **FastAPI-Users Integration**: Complete user lifecycle management
+- **Email Authentication**: User authentication via email and password
+- **Multi-Authentication**: JWT tokens and cookie-based sessions
+- **Role-Based Access Control**: Admin vs regular user permissions with data isolation
+- **WebSocket Security**: Custom authentication for real-time connections with token/cookie support
+- **Admin User Bootstrap**: Automatic admin account creation
+- **Client ID Generation**: Automatic `objectid_suffix-device_name` format for client identification
+
+> **Read more**: [Authentication Architecture](./auth.md) for complete authentication system details
+
+#### Streamlit Dashboard (`streamlit_app.py`)
+- **User-Friendly Interface**: Complete web-based management interface
+- **Authentication Integration**: Login with backend JWT tokens or Google OAuth
+- **Real-Time Monitoring**: Live client status and conversation management
+- **Data Management**: User, conversation, memory, and action item interfaces
+- **Audio Playback**: Smart audio player with original/cropped audio options
+- **System Health**: Visual service status and configuration display
+
+### Audio Processing Pipeline
+
+#### Transcription Architecture
+
+The system implements a dual transcription approach with Deepgram as primary and Wyoming ASR as fallback:
+
+**Deepgram Batch Processing**:
+- **Model**: Nova-3 (Deepgram's latest high-accuracy model)
+- **Features**: Smart formatting, punctuation, speaker diarization
+- **Processing**: Collect-then-process approach using REST API
+- **Timeout**: 1.5-minute collection timeout for optimal quality
+- **Client Manager Integration**: Uses centralized ClientManager for clean client state access
+- **Configuration**: Auto-enables when `DEEPGRAM_API_KEY` environment variable is present
+
+**Wyoming ASR Fallback**:
+- **Purpose**: Offline fallback when Deepgram unavailable
+- **Protocol**: TCP connection to self-hosted Wyoming ASR service
+- **Event-Driven**: Asynchronous event processing with background queue management
+- **Graceful Degradation**: Seamless fallback without service interruption
+
+**TranscriptionManager Architecture**:
+```python
+# Clean dependency injection pattern
+TranscriptionManager(
+ action_item_callback=callback_func,
+ chunk_repo=database_repo,
+ # Uses get_client_manager() singleton for client state access
+)
+```
+
+#### Client Manager Architecture
+
+The system uses a centralized **ClientManager** for managing active client connections and state:
+
+**Centralized Client Management**:
+```python
+# Singleton pattern for global client state access
+client_manager = get_client_manager()
+
+# Client state management
+client_state = ClientState(
+ client_id="user_id_suffix-device_name",
+ chunk_repo=database_repo,
+ action_items_service=action_service,
+ chunk_dir=audio_storage_path
+)
+```
+
+**Client ID Format**: `{objectid_suffix}-{device_name}`
+- Uses last 6 characters of MongoDB ObjectId + device name
+- Examples: `cd7994-laptop`, `e26efe-upload-001`
+- Ensures unique identification across users and devices
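+
+The format described above can be sketched in a few lines (the actual helper lives in `users.py`; the function name here is illustrative):
+
+```python
+def make_client_id(user_objectid: str, device_name: str) -> str:
+    """Build the {objectid_suffix}-{device_name} client ID:
+    last 6 hex characters of the MongoDB ObjectId plus the device name."""
+    return f"{user_objectid[-6:]}-{device_name}"
+```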
+
+**Key Features**:
+- **Connection Tracking**: Real-time monitoring of active clients
+- **State Isolation**: Per-client queues and processing pipelines
+- **Resource Management**: Automatic cleanup on client disconnect
+- **Multi-Device Support**: Single user can have multiple active clients
+- **Thread-Safe Operations**: Concurrent client access with proper synchronization
+
+#### Per-Client State Management
+```mermaid
+stateDiagram-v2
+ [*] --> Connected: WebSocket Auth
+ Connected --> Processing: Audio Received
+ Processing --> Transcribing: Chunk Buffered
+ Transcribing --> ActiveConversation: Transcript Generated
+ ActiveConversation --> Processing: Continue Audio
+ ActiveConversation --> ConversationTimeout: 1.5min Silence
+ Processing --> ManualClose: User Action
+ ConversationTimeout --> ProcessingMemory: Extract Insights
+ ManualClose --> ProcessingMemory: Extract Insights
+ ProcessingMemory --> AudioCropping: Remove Silence
+ AudioCropping --> ConversationClosed: Cleanup Complete
+ ConversationClosed --> Connected: Ready for New
+ Connected --> [*]: Client Disconnect
+```
+
+#### Audio Processing Queues (Per-Client)
+- **Chunk Queue**: Raw audio buffering with client isolation
+- **Transcription Queue**: Audio chunks for real-time ASR processing with quality validation
+- **Memory Queue**: Completed conversations for LLM memory extraction (with transcript validation)
+- **Action Item Queue**: Transcript analysis for task detection
+- **Quality Control**: Multi-stage validation prevents empty/invalid transcripts from consuming LLM resources
+
+#### Speech Processing Features
+- **Voice Activity Detection**: Automatic silence removal and speech segment extraction
+- **Audio Cropping**: FFmpeg-based processing to create concise audio files
+- **Multiple Format Support**: Opus (compressed) and PCM (uncompressed) audio input
+- **Conversation Chunking**: 60-second segments with seamless processing
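+
+A sketch of how detected speech segments might drive the FFmpeg crop, using the `aselect` audio filter; the backend's actual invocation (`main.py:174-200`) may differ:
+
+```python
+def crop_command(src: str, dst: str, segments: list[tuple[float, float]]) -> list[str]:
+    """Build an ffmpeg argv that keeps only the given (start, end) speech segments.
+    aselect drops samples outside the segments; asetpts re-times what remains."""
+    keep = "+".join(f"between(t,{start},{end})" for start, end in segments)
+    return ["ffmpeg", "-y", "-i", src,
+            "-af", f"aselect='{keep}',asetpts=N/SR/TB",
+            dst]
+```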
+
+### Intelligence Services
+
+#### Action Items Service (`action_items_service.py`)
+- **User-Centric Storage**: Action items stored with database user_id (not client_id)
+- **LLM-Powered Extraction**: Uses Ollama for intelligent task identification
+- **Trigger Recognition**: Special "Simon says" keyphrase detection for explicit task creation
+- **Task Management**: Full CRUD operations with status tracking (open, in-progress, completed, cancelled)
+- **Client Metadata**: Client and user information stored for reference and debugging
+- **Context Preservation**: Links action items to original conversations and audio segments
+
+> **Read more**: [Action Items Documentation](./action-items.md) for detailed task extraction features
+
+#### Memory Management (`src/memory/memory_service.py`)
+- **User-Centric Storage**: All memories keyed by database user_id (not client_id)
+- **Conversation Summarization**: Automatic memory extraction using mem0 framework
+- **Vector Storage**: Semantic memory search with Qdrant embeddings
+- **Client Metadata**: Client information stored in memory metadata for reference
+- **User Isolation**: Complete data separation between users via user_id
+- **Temporal Memory**: Long-term conversation history with semantic retrieval
+- **Processing Trigger**: `main.py:1047-1065` (conversation end) โ `main.py:1163-1195` (background processing)
+
+> **Read more**: [Memory System Documentation](./memories.md) for detailed memory extraction and storage
+
+#### Metrics System (`metrics.py`)
+- **Performance Tracking**: Audio processing latency, transcription success rates
+- **Service Health Monitoring**: External service connectivity and response times
+- **User Analytics**: Connection patterns, conversation statistics
+- **Resource Monitoring**: System resource usage and bottleneck identification
+
+### Data Models & Access
+
+#### User Management (`users.py`)
+- **Beanie ODM**: MongoDB document modeling with type safety
+- **User ID System**: MongoDB ObjectId-based user identification
+- **Authentication Data**: Secure password hashing, email verification, email-based login
+- **Profile Management**: User preferences, display names, and permissions
+- **Client Registration**: Tracking of registered clients per user with device names
+- **Data Ownership**: All data (conversations, memories, action items) associated via user_id
+- **Client ID Generation**: Helper functions for `objectid_suffix-device_name` format
+
+#### Conversation Data Access (`ChunkRepo`)
+- **Audio Metadata**: File paths, timestamps, duration tracking
+- **Transcript Management**: Speaker identification and timing information
+- **Memory Links**: Connection between conversations and extracted memories
+- **Action Item Relations**: Task tracking per conversation
+
+#### Permission System
+- **Dictionary-Based Mapping**: Clean client-user relationship tracking via in-memory dictionaries
+- **Active Client Tracking**: `client_to_user_mapping` for currently connected clients
+- **Persistent Tracking**: `all_client_user_mappings` for database query permission checks
+- **Ownership Validation**: Simple dictionary lookup instead of regex pattern matching
+- **Data Isolation**: User-scoped queries using client ID lists for efficient permission filtering
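+
+The dictionary-based scheme above reduces ownership checks to plain lookups. A minimal sketch using the two mapping names from this section (the surrounding registration flow is simplified):
+
+```python
+client_to_user_mapping: dict[str, str] = {}    # currently connected clients
+all_client_user_mappings: dict[str, str] = {}  # every client seen, for DB query checks
+
+def register_client(client_id: str, user_id: str) -> None:
+    """Record the client-user relationship in both maps on connect."""
+    client_to_user_mapping[client_id] = user_id
+    all_client_user_mappings[client_id] = user_id
+
+def user_owns_client(user_id: str, client_id: str) -> bool:
+    """Ownership validation is a dictionary lookup, not regex matching."""
+    return all_client_user_mappings.get(client_id) == user_id
+```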
+
+## Deployment Architecture
+
+### Docker Compose Structure
+
+```mermaid
+graph LR
+ subgraph "Docker Network"
+        Backend[friend-backend<br/>uv + FastAPI]
+        Streamlit[streamlit<br/>Dashboard UI]
+        Proxy[nginx<br/>Load Balancer]
+        Mongo[mongo:4.4.18<br/>Primary Database]
+        Qdrant[qdrant<br/>Vector Store]
+    end
+
+    subgraph "External Services"
+        Ollama[ollama<br/>LLM Service]
+        ASRService[ASR Services<br/>extras/asr-services]
+    end
+
+    subgraph "Client Access"
+        WebBrowser[Web Browser<br/>Dashboard]
+        AudioClient[Audio Client<br/>Mobile/Desktop]
+    end
+
+ WebBrowser -->|Port 8501| Streamlit
+ WebBrowser -->|Port 80| Proxy
+ AudioClient -->|Port 8000| Backend
+
+ Proxy --> Backend
+ Proxy --> Streamlit
+ Backend --> Mongo
+ Backend --> Qdrant
+ Backend -.->|Optional| Ollama
+ Backend -.->|Optional| ASRService
+```
+
+### Container Specifications
+
+#### Backend Container (`friend-backend`)
+- **Base**: Python 3.12 slim with uv package manager
+- **Dependencies**: FastAPI, WebSocket libraries, audio processing tools
+- **Volumes**: Audio chunk storage, debug directories
+- **Health Checks**: Automated readiness and liveness probes
+- **Environment**: All configuration via environment variables
+
+#### Streamlit Container (`streamlit`)
+- **Purpose**: Web dashboard interface
+- **Dependencies**: Streamlit, requests, pandas for data visualization
+- **Backend Integration**: HTTP API client with authentication
+- **Configuration**: Backend URL configuration for API calls
+
+#### Infrastructure Containers
+- **MongoDB 4.4.18**: Primary data storage with persistence
+- **Qdrant Latest**: Vector database for memory embeddings
+- **Nginx Alpine**: Reverse proxy and load balancing
+
+## Detailed Data Flow Architecture
+
+> **Reference Documentation**:
+> - [Authentication Details](./auth.md) - Complete authentication system documentation
+
+### Complete System Data Flow Diagram
+
+```mermaid
+flowchart TB
+ %% External Clients
+    Client[Audio Client<br/>Mobile/Desktop/HAVPE]
+    WebUI[Web Dashboard<br/>Streamlit Interface]
+
+    %% Authentication Gateway
+    subgraph "Authentication Layer"
+        AuthGW[JWT/Cookie Auth<br/>1hr token lifetime]
+        ClientGen[Client ID Generator<br/>user_suffix-device_name]
+        UserDB[(User Database<br/>MongoDB ObjectId)]
+    end
+
+    %% Audio Processing Pipeline
+    subgraph "Audio Processing Pipeline"
+        WSAuth[WebSocket Auth<br/>Connection timeout: 30s]
+        OpusDecoder[Opus/PCM Decoder<br/>Real-time Processing]
+
+        subgraph "Per-Client State Management"
+            ClientState[Client State<br/>Conversation timeout: 1.5min]
+            AudioQueue[Audio Chunk Queue<br/>60s segments]
+            ConversationTimer[Conversation Timer<br/>Auto-timeout tracking]
+        end
+
+        subgraph "Transcription Layer"
+            ASRManager[Transcription Manager<br/>Init timeout: 60s]
+            DeepgramWS[Deepgram WebSocket<br/>Nova-3 Model, Smart Format<br/>Auto-reconnect on disconnect]
+            OfflineASR[Wyoming ASR Fallback<br/>Connect timeout: 5s]
+            ClientManager[Client Manager<br/>Centralized client state access]
+            TranscriptValidation[Transcript Validation<br/>Min 10 chars]
+        end
+    end
+
+    %% Intelligence Services
+    subgraph "Intelligence Processing"
+        subgraph "Memory Pipeline"
+            MemoryService[Memory Service<br/>Init timeout: 60s<br/>Processing timeout: 20min]
+            MemoryValidation[Memory Validation<br/>Min conversation length]
+            LLMProcessor[Ollama LLM<br/>Circuit breaker protection]
+            VectorStore[Qdrant Vector Store<br/>Semantic search]
+        end
+
+        subgraph "Action Items Pipeline"
+            ActionService[Action Items Service<br/>"Simon says" detection]
+            TaskExtraction[Task Extraction<br/>LLM-powered analysis]
+        end
+    end
+
+    %% Failure Recovery System
+    subgraph "Failure Recovery System"
+        QueueTracker[Queue Tracker<br/>SQLite tracking]
+        PersistentQueue[Persistent Queue<br/>Survives restarts]
+        RecoveryManager[Recovery Manager<br/>Auto-retry with backoff]
+        HealthMonitor[Health Monitor<br/>Service health checks]
+        CircuitBreaker[Circuit Breaker<br/>Fast-fail protection]
+        DeadLetter[Dead Letter Queue<br/>Persistent failures]
+    end
+
+    %% Data Storage
+    subgraph "Data Storage Layer"
+        MongoDB[(MongoDB<br/>Users & Conversations<br/>Health check: 5s)]
+        QdrantDB[(Qdrant<br/>Vector Embeddings<br/>Semantic memory)]
+        SQLiteTracking[(SQLite<br/>Failure Recovery Tracking<br/>Performance metrics)]
+        AudioFiles[Audio Files<br/>Chunk storage + cropping]
+    end
+
+    %% Connection Flow with Timeouts
+    Client -->|Auth Token| AuthGW
+    AuthGW -->|"❌ 401 Unauthorized<br/>Invalid/expired token"| Client
+    AuthGW -->|"✅ Validated"| ClientGen
+    ClientGen -->|Generate client_id| WSAuth
+
+    %% Audio Processing Flow
+    Client -->|"Opus/PCM Stream<br/>30s connection timeout"| WSAuth
+    WSAuth -->|"❌ 1008 Policy Violation<br/>Auth required"| Client
+    WSAuth -->|"✅ Authenticated"| OpusDecoder
+    OpusDecoder -->|Audio chunks| ClientState
+    ClientState -->|1.5min timeout check| ConversationTimer
+    ConversationTimer -->|Timeout exceeded| ClientState
+
+    %% Transcription Flow with Failure Points
+    ClientState -->|Audio data| ASRManager
+    ASRManager -->|Primary connection| DeepgramWS
+    ASRManager -->|Fallback connection| OfflineASR
+    ASRManager -->|Client state access| ClientManager
+    DeepgramWS -->|"❌ WebSocket disconnect<br/>Auto-reconnect after 2s"| ASRManager
+    OfflineASR -->|"❌ TCP connection timeout<br/>5s limit"| ASRManager
+    ASRManager -->|Raw transcript| TranscriptValidation
+    TranscriptValidation -->|"❌ Too short (<10 chars)<br/>Skip processing"| QueueTracker
+    TranscriptValidation -->|"✅ Valid transcript"| MemoryService
+
+    %% Memory Processing with Timeouts
+    MemoryService -->|20min timeout| LLMProcessor
+    LLMProcessor -->|"❌ Model stopped<br/>Circuit breaker trip"| CircuitBreaker
+    LLMProcessor -->|"❌ Empty response<br/>Fallback memory"| MemoryService
+    LLMProcessor -->|"✅ Memory extracted"| VectorStore
+    MemoryService -->|Track processing| QueueTracker
+
+    %% Action Items Flow
+    TranscriptValidation -->|Valid transcript| ActionService
+    ActionService -->|"Simon says detected"| TaskExtraction
+    TaskExtraction -->|"✅ Task extracted"| MongoDB
+
+    %% Failure Recovery Integration
+    QueueTracker -->|Track all items| PersistentQueue
+    PersistentQueue -->|Failed items| RecoveryManager
+    RecoveryManager -->|Exponential backoff retry| MemoryService
+    RecoveryManager -->|Max retries exceeded| DeadLetter
+    HealthMonitor -->|"Service health checks<br/>5s MongoDB<br/>8s Ollama<br/>5s ASR"| CircuitBreaker
+    CircuitBreaker -->|"Service unavailable<br/>Fast-fail mode"| RecoveryManager
+
+    %% Disconnect and Cleanup Flow
+    Client -->|Disconnect| ClientState
+    ClientState -->|"Cleanup tasks<br/>Background memory: 5min<br/>Transcription queue: 60s"| ASRManager
+    ASRManager -->|"Graceful disconnect<br/>2s timeout"| DeepgramWS
+    ClientState -->|Final conversation processing| MemoryService
+
+    %% Storage Integration
+    MemoryService -->|Store memories| MongoDB
+    VectorStore -->|Embeddings| QdrantDB
+    QueueTracker -->|Metrics & tracking| SQLiteTracking
+    ClientState -->|Audio segments| AudioFiles
+    ActionService -->|Tasks| MongoDB
+
+    %% Web Dashboard Flow
+    WebUI -->|"Cookie/JWT auth<br/>1hr lifetime"| AuthGW
+    WebUI -->|API calls| MongoDB
+    WebUI -->|Audio playback| AudioFiles
+```
+
+### Critical Timeout and Failure Points
+
+#### **Timeout Configuration**
+| Component | Timeout Value | Failure Behavior | Recovery Action |
+|-----------|---------------|------------------|-----------------|
+| **JWT Tokens** | 1 hour | 401 Unauthorized | Client re-authentication required |
+| **WebSocket Connection** | 30 seconds | Connection dropped | Client reconnection with auth |
+| **Conversation Auto-Close** | 1.5 minutes | New conversation started | Memory processing triggered |
+| **Transcription Queue** | 60 seconds | Queue processing timeout | Graceful degradation |
+| **Memory Service Init** | 60 seconds | Service unavailable | Health check failure |
+| **Ollama Processing** | 20 minutes | LLM timeout | Fallback memory creation |
+| **Background Memory Task** | 5 minutes | Task cancellation | Partial processing retained |
+| **MongoDB Health Check** | 5 seconds | Service marked unhealthy | Circuit breaker activation |
+| **Ollama Health Check** | 8 seconds | Service marked unhealthy | Circuit breaker activation |
+| **ASR Connection** | 5 seconds | Connection failure | Fallback ASR or degraded mode |
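+
+The recovery manager retries failed items with exponential backoff. The exact schedule isn't documented; a sketch with illustrative parameters:
+
+```python
+def backoff_delays(base: float = 1.0, factor: float = 2.0,
+                   max_retries: int = 5, cap: float = 60.0) -> list[float]:
+    """Delays (in seconds) before each retry attempt, capped so a long
+    outage doesn't push waits out indefinitely."""
+    return [min(cap, base * factor ** attempt) for attempt in range(max_retries)]
+```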
+
+#### **Disconnection Scenarios**
+1. **Client Disconnect**: Graceful cleanup with conversation finalization
+2. **Network Interruption**: Auto-reconnection with exponential backoff
+3. **Service Failure**: Circuit breaker protection and alternative routing
+4. **Authentication Expiry**: Forced re-authentication with clear error codes
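+
+Scenario 3's circuit-breaker protection can be sketched as a minimal fast-fail wrapper (illustrative, not the backend's actual class; thresholds are assumptions):
+
+```python
+import time
+
+class CircuitBreaker:
+    """Open after N consecutive failures; fast-fail until a cooldown elapses."""
+    def __init__(self, threshold: int = 3, reset_after: float = 30.0):
+        self.threshold = threshold
+        self.reset_after = reset_after
+        self.failures = 0
+        self.opened_at: float | None = None
+
+    def allow(self) -> bool:
+        if self.opened_at is None:
+            return True
+        if time.monotonic() - self.opened_at >= self.reset_after:
+            self.opened_at, self.failures = None, 0  # half-open: let one call through
+            return True
+        return False
+
+    def record(self, success: bool) -> None:
+        if success:
+            self.failures, self.opened_at = 0, None
+        else:
+            self.failures += 1
+            if self.failures >= self.threshold:
+                self.opened_at = time.monotonic()
+```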
+
+
+### Audio Ingestion & Processing
+1. **Client Authentication**: JWT token validation for WebSocket connection (email or user_id based)
+2. **Client ID Generation**: Automatic `user_id-device_name` format creation for client identification
+3. **Permission Registration**: Client-user relationship tracking in permission dictionaries
+4. **Audio Streaming**: Real-time Opus/PCM packets over WebSocket with user context
+5. **Per-Client Processing**: Isolated audio queues and state management per user
+6. **Transcription Pipeline**: Configurable ASR service integration with user-scoped storage
+7. **Conversation Lifecycle**: Automatic timeout handling and memory processing
+8. **Audio Optimization**: Speech segment extraction and silence removal
+
+### Memory & Intelligence Processing
+1. **Conversation Completion**: End-of-session trigger for memory extraction
+2. **Transcript Validation**: Multi-layer validation prevents empty/short transcripts from reaching LLM
+ - Individual transcript filtering during collection (`main.py:594, 717, 858`)
+ - Full conversation length validation before memory processing (`main.py:1224`)
+ - Memory service validation with 10-character minimum (`memory_service.py:242`)
+3. **User Resolution**: Client-ID to database user mapping for proper data association
+4. **LLM Processing**: Ollama-based conversation summarization with user context (only for validated transcripts)
+5. **Vector Storage**: Semantic embeddings stored in Qdrant keyed by user_id
+6. **Action Item Analysis**: Automatic task detection with user-centric storage
+7. **Metadata Enhancement**: Client information and user email stored in metadata
+8. **Search & Retrieval**: User-scoped semantic memory search capabilities
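The validation layers in step 2 boil down to a length guard applied before any LLM call. A minimal sketch (the helper name is hypothetical; the 10-character minimum is the one cited from `memory_service.py:242`):

```python
MIN_TRANSCRIPT_CHARS = 10  # minimum enforced before memory processing

def is_processable(transcript: str) -> bool:
    """Reject empty, whitespace-only, or too-short transcripts so they
    never reach the LLM memory pipeline."""
    return len(transcript.strip()) >= MIN_TRANSCRIPT_CHARS

assert is_processable("Met Alice to plan the Q3 launch.")
assert not is_processable("   ")   # whitespace-only
assert not is_processable("ok")    # below the 10-character minimum
```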
+
+### User Management & Security
+1. **Registration**: Admin-controlled user creation with email/password and auto-generated user_id
+2. **Dual Authentication**: JWT token generation for both email and user_id login methods
+3. **Client Association**: Automatic client ID generation as `user_id_suffix-device_name`
+4. **Permission Tracking**: Dictionary-based client-user relationship management
+5. **Authorization**: Per-endpoint permission checking with simplified ownership validation
+6. **Data Isolation**: User-scoped data access via client ID mapping and ownership validation
+
+## Security Architecture
+
+### Authentication Layers
+- **API Gateway**: JWT middleware on all protected endpoints with email/user_id support
+- **WebSocket Security**: Custom authentication handler for real-time connections (token + cookie support)
+- **Client ID Management**: Automatic generation and validation of the `user_id_suffix-device_name` format
+- **Permission Mapping**: Dictionary-based client-user relationship tracking
+- **Role Validation**: Admin vs user permission matrix enforcement
+- **Data Scoping**: Efficient user context filtering via client ID mapping
+
+### Access Control Matrix
+| Resource | Regular User | Superuser |
+|----------|-------------|-----------|
+| Own Conversations | Full Access | Full Access |
+| Other Users' Conversations | No Access | Full Access |
+| User Management | Profile Only | Full CRUD |
+| System Administration | Health Check Only | Full Access |
+| Active Client Management | Own Clients Only | All Clients |
+| Memory Management | Own Memories Only | All Memories (with client info) |
+| Action Items | Own Items Only | All Items (with client info) |
+
+### Data Protection
+- **Token Signing**: JWT tokens signed with a configurable secret key
+- **Password Security**: Bcrypt hashing with salt rounds
+- **User Identification**: MongoDB ObjectId-based user system
+- **Data Isolation**: User ID validation on all data operations via client mapping
+- **Permission Efficiency**: Dictionary-based ownership checking instead of regex patterns
+- **Audit Logging**: Comprehensive request and authentication logging with user_id tracking
+
+## Configuration & Environment
+
+### Required Environment Variables
+```bash
+AUTH_SECRET_KEY=your-super-secret-jwt-key-here-make-it-long-and-random
+ADMIN_PASSWORD=your-secure-admin-password
+```
+
+### Optional Service Configuration
+```bash
+# Database
+MONGODB_URI=mongodb://mongo:27017
+
+# LLM Processing
+OLLAMA_BASE_URL=http://ollama:11434
+
+# Vector Storage
+QDRANT_BASE_URL=qdrant
+
+# Transcription Services (Deepgram Primary, Wyoming Fallback)
+DEEPGRAM_API_KEY=your-deepgram-api-key-here
+OFFLINE_ASR_TCP_URI=tcp://host.docker.internal:8765
+
+```
+
+### Service Dependencies
+
+#### Critical Services (Required for Core Functionality)
+- **MongoDB**: User data, conversations, action items
+- **Authentication**: JWT token validation and user sessions
+
+#### Enhanced Services (Optional but Recommended)
+- **Ollama**: Memory processing and action item extraction
+- **Qdrant**: Vector storage for semantic memory search
+- **Deepgram**: Primary speech-to-text transcription service (WebSocket streaming)
+- **Wyoming ASR**: Fallback transcription service (self-hosted)
+
+#### External Services (Optional)
+- **Ngrok**: Public internet access for development
+- **HAVPE Relay**: ESP32 audio streaming bridge with authentication (`extras/havpe-relay/`)
+
+### HAVPE Relay Integration
+The HAVPE relay (`extras/havpe-relay/main.py`) provides ESP32 audio streaming capabilities:
+
+- **Authentication**: Supports both `AUTH_EMAIL` and `AUTH_USER_ID` environment variables
+- **Client ID Generation**: Creates the client ID automatically from the user's ObjectId suffix plus the `havpe` device name (e.g. `f86cd7-havpe`)
+- **Audio Processing**: Converts ESP32 32-bit stereo to 16-bit mono for backend
+- **Reconnection**: Automatic JWT token refresh and WebSocket reconnection on auth failures
+- **Device Name**: Configurable device identifier for multi-device support
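The 32-bit-stereo to 16-bit-mono conversion mentioned above can be sketched with the standard library. The relay's real implementation may differ in rounding and buffering; this shows the shape of the transform:

```python
import struct

def esp32_to_backend(frame: bytes) -> bytes:
    """Convert interleaved 32-bit stereo PCM to 16-bit mono by averaging
    channels and keeping the top 16 bits of each 32-bit sample."""
    out = bytearray()
    # Each stereo frame is two little-endian signed 32-bit samples (L, R)
    for i in range(0, len(frame), 8):
        left, right = struct.unpack_from("<ii", frame, i)
        mono = (left + right) // 2
        out += struct.pack("<h", mono >> 16)  # 32-bit -> 16-bit
    return bytes(out)

stereo = struct.pack("<ii", 1 << 16, 3 << 16)  # one frame: L and R samples
assert esp32_to_backend(stereo) == struct.pack("<h", 2)  # averaged, truncated
```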
+
+## REST API Architecture
+
+The system provides a comprehensive REST API organized into functional modules:
+
+### API Organization
+```
+/api/
+├── /users               # User management (admin only)
+├── /clients/active      # Active client monitoring
+├── /conversations       # Conversation CRUD operations
+├── /memories            # Memory management and search
+│   ├── /admin           # Admin view (all users)
+│   └── /search          # Semantic memory search
+├── /action_items        # Task management
+├── /admin/              # Admin compatibility endpoints
+│   ├── /memories        # Consolidated admin memory view
+│   └── /memories/debug  # Legacy debug endpoint
+└── /active_clients      # Client monitoring (compatibility)
+```
+
+### Key Endpoints
+
+#### User & Authentication
+- `POST /auth/jwt/login` - Email/password authentication
+- `GET /api/users` - User management (admin only)
+- `POST /api/create_user` - User creation (admin only)
+
+#### Client Management
+- `GET /api/clients/active` - Active client monitoring
+- `GET /api/active_clients` - Compatibility endpoint for Streamlit UI
+
+#### Memory Management
+- `GET /api/memories` - User memories (with user_id filter for admin)
+- `GET /api/memories/admin` - All memories grouped by user (admin only)
+- `GET /api/admin/memories` - Consolidated admin view with debug info
+- `GET /api/memories/search?query=` - Semantic memory search
+
+#### Audio & Conversations
+- `GET /api/conversations` - User conversations
+- `POST /api/process-audio-files` - Batch audio file processing
+- WebSocket `/ws` - Real-time Opus audio streaming
+- WebSocket `/ws_pcm` - Real-time PCM audio streaming
+
+### Authentication & Authorization
+- **JWT Tokens**: All API endpoints require valid JWT authentication
+- **User Isolation**: Regular users see only their own data
+- **Admin Access**: Superusers can access cross-user data with `user_id` filters
+- **WebSocket Auth**: Token or cookie-based authentication for real-time connections
+
+### Data Formats
+```json
+// Active clients response
+{
+ "clients": [
+ {
+ "client_id": "439011-laptop",
+ "user_id": "507f1f77bcf86cd799439011",
+ "connected_at": "2025-01-15T10:30:00Z",
+ "conversation_count": 3
+ }
+ ],
+ "active_clients_count": 1,
+ "total_count": 1
+}
+
+// Admin memories response
+{
+ "memories": [...], // Flat list for compatibility
+ "user_memories": {...}, // Grouped by user_id
+ "stats": {
+ "total_memories": 150,
+ "total_users": 5,
+ "debug_tracker_initialized": true,
+ "users_with_memories": ["user1", "user2"],
+ "client_ids_with_memories": ["cd7994-laptop", "e26efe-upload"]
+ }
+}
+```
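As a usage sketch, the active-clients payload above can be consumed with nothing but the standard library (no live server required; the aggregation shown is illustrative):

```python
import json
from collections import Counter

# Sample payload mirroring the /api/clients/active response format
payload = json.loads("""
{
  "clients": [
    {"client_id": "cd7994-laptop",
     "user_id": "507f1f77bcf86cd799439011",
     "connected_at": "2025-01-15T10:30:00Z",
     "conversation_count": 3}
  ],
  "active_clients_count": 1,
  "total_count": 1
}
""")

# Count simultaneous connections per user across all their devices
connections_per_user = Counter(c["user_id"] for c in payload["clients"])
assert connections_per_user["507f1f77bcf86cd799439011"] == 1
```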
+
+## Performance & Scalability
+
+### Client Isolation Design
+- **Per-Client Queues**: Independent processing pipelines prevent cross-client interference
+- **Async Processing**: Non-blocking audio ingestion with background processing
+- **Resource Management**: Configurable timeouts and cleanup procedures
+- **State Management**: Memory-efficient client state with automatic cleanup
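The first two bullets can be pictured as one asyncio queue plus one consumer task per connected client, so a slow client never blocks another. A minimal, self-contained sketch (not the backend's actual pipeline code):

```python
import asyncio

processed = []

def process(client_id, chunk):
    """Stand-in for transcription/storage of one audio chunk."""
    processed.append((client_id, chunk))

async def client_pipeline(client_id: str, queue: asyncio.Queue):
    """Consume one client's audio chunks independently of all other clients."""
    while (chunk := await queue.get()) is not None:  # None is the shutdown signal
        process(client_id, chunk)

async def main():
    queues = {cid: asyncio.Queue() for cid in ("439011-phone", "439011-laptop")}
    tasks = [asyncio.create_task(client_pipeline(cid, q)) for cid, q in queues.items()]
    await queues["439011-phone"].put(b"audio-1")
    for q in queues.values():
        await q.put(None)  # shut every pipeline down cleanly
    await asyncio.gather(*tasks)

asyncio.run(main())
```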
+
+### Monitoring & Observability
+- **Health Checks**: Comprehensive service dependency validation
+- **Performance Metrics**: Audio processing latency, transcription accuracy
+- **Resource Tracking**: Memory usage, connection counts, processing queues
+- **Error Handling**: Graceful degradation with detailed logging
+- **System Tracking**: Debug tracking and monitoring via SystemTracker
+
+This architecture supports a fully-featured conversation processing system with enterprise-grade authentication, real-time audio processing, and intelligent content analysis, all deployable via a single Docker Compose command.
\ No newline at end of file
diff --git a/backends/advanced-backend/Docs/auth.md b/backends/advanced-backend/Docs/auth.md
new file mode 100644
index 00000000..4a3f7267
--- /dev/null
+++ b/backends/advanced-backend/Docs/auth.md
@@ -0,0 +1,338 @@
+# Authentication Architecture
+
+## Overview
+
+Friend-Lite uses a comprehensive authentication system built on `fastapi-users` with support for multiple authentication methods including JWT tokens and cookies. The system provides secure user management with proper data isolation and role-based access control using MongoDB ObjectIds for user identification.
+
+## Architecture Components
+
+### 1. User Model (`users.py`)
+
+```python
+class User(BeanieBaseUser, Document):
+ # Standard fastapi-users fields
+ email: str
+ hashed_password: str
+ is_active: bool = True
+ is_superuser: bool = False
+ is_verified: bool = False
+
+ # Custom fields
+ display_name: Optional[str] = None
+ registered_clients: dict[str, dict] = Field(default_factory=dict)
+
+ @property
+ def user_id(self) -> str:
+ """Return string representation of MongoDB ObjectId for backward compatibility."""
+ return str(self.id)
+```
+
+**Key Features:**
+- **Email-based Authentication**: Users authenticate using email addresses
+- **MongoDB ObjectId**: Uses MongoDB's native ObjectId as unique identifier
+- **MongoDB Integration**: Uses Beanie ODM for document storage
+- **Backward Compatibility**: user_id property provides ObjectId as string
+
+### 2. Authentication Manager (`auth.py`)
+
+```python
+class UserManager(BaseUserManager[User, PydanticObjectId]):
+ async def authenticate(self, credentials: dict) -> Optional[User]:
+ """Authenticate with email+password"""
+ username = credentials.get("username")
+ # Email-based authentication only
+
+ async def get_by_email(self, email: str) -> Optional[User]:
+ """Get user by email address"""
+```
+
+**Key Features:**
+- **Email Authentication**: Login with email address only
+- **Password Management**: Secure password hashing and verification
+- **Standard fastapi-users**: Uses standard user creation without custom IDs
+- **MongoDB ObjectId**: Relies on MongoDB's native unique ID generation
+
+### 3. Authentication Backends
+
+#### JWT Bearer Token
+- **Endpoint**: `/auth/jwt/login`
+- **Transport**: Authorization header (`Bearer <token>`)
+- **Lifetime**: 1 hour
+- **Usage**: API calls, WebSocket authentication
+
+#### Cookie Authentication
+- **Endpoint**: `/auth/cookie/login`
+- **Transport**: HTTP cookies
+- **Lifetime**: 1 hour
+- **Usage**: Web dashboard, browser-based clients
+
+
+## Authentication Flow
+
+### 1. User Registration
+
+**Admin-Only Registration:**
+```bash
+# Create user with auto-generated MongoDB ObjectId
+curl -X POST "http://localhost:8000/api/create_user" \
+ -H "Authorization: Bearer $ADMIN_TOKEN" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "email": "user@example.com",
+ "password": "userpass",
+ "display_name": "John Doe"
+ }'
+
+# Response: {"id": "507f1f77bcf86cd799439011", "email": "user@example.com", ...}
+```
+
+### 2. Authentication Methods
+
+#### Email-based Login
+```bash
+curl -X POST "http://localhost:8000/auth/jwt/login" \
+ -H "Content-Type: application/x-www-form-urlencoded" \
+ -d "username=user@example.com&password=userpass"
+```
+
+
+### 3. WebSocket Authentication
+
+#### Token-based (Recommended)
+```javascript
+const ws = new WebSocket('ws://localhost:8000/ws_pcm?token=JWT_TOKEN&device_name=phone');
+```
+
+#### Cookie-based
+```javascript
+// Requires existing cookie from web login
+const ws = new WebSocket('ws://localhost:8000/ws_pcm?device_name=phone');
+```
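Combining the two steps, a client first exchanges credentials for a JWT at `/auth/jwt/login` and then appends it as a query parameter. The URL assembly reduces to a few stdlib lines (helper name hypothetical):

```python
from urllib.parse import urlencode

def ws_url(base: str, token: str, device_name: str) -> str:
    """Build the authenticated PCM WebSocket URL from a JWT and device name."""
    return f"{base}/ws_pcm?{urlencode({'token': token, 'device_name': device_name})}"

url = ws_url("ws://localhost:8000", "JWT_TOKEN", "phone")
assert url == "ws://localhost:8000/ws_pcm?token=JWT_TOKEN&device_name=phone"
```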
+
+## Client ID Management
+
+### Format: `user_id_suffix-device_name`
+
+The system automatically generates client IDs by combining:
+- **user_id_suffix**: Last 6 characters of MongoDB ObjectId
+- **device_name**: Sanitized device identifier
+
+**Examples:**
+- `a39011-phone` (user ObjectId ending in a39011, device: phone)
+- `cd7994-desktop` (user ObjectId ending in cd7994, device: desktop)
+- `f86cd7-havpe` (user ObjectId ending in f86cd7, device: havpe)
+
+### Benefits:
+- **User Association**: Clear mapping between clients and users
+- **Device Tracking**: Multiple devices per user
+- **Data Isolation**: Each user only accesses their own data
+- **Audit Trail**: Track activity by user and device
+- **Unique IDs**: MongoDB ObjectId ensures global uniqueness
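The generation rule above can be sketched in a few lines. The suffix logic follows the text; the sanitization shown (lowercase alphanumerics and dashes) is an assumption, not the backend's exact rule:

```python
import re

def make_client_id(user_object_id: str, device_name: str) -> str:
    """Combine the last 6 chars of the user's ObjectId with a sanitized device name."""
    suffix = user_object_id[-6:]
    device = re.sub(r"[^a-z0-9-]", "", device_name.lower()) or "device"
    return f"{suffix}-{device}"

assert make_client_id("507f1f77bcf86cd799439011", "Phone") == "439011-phone"
assert make_client_id("507f1f77bcf86cd799439011", "My Desktop!") == "439011-mydesktop"
```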
+
+## Security Features
+
+### 1. Password Security
+- **Bcrypt Hashing**: Secure password storage with salt
+- **Password Verification**: Constant-time comparison
+- **Hash Updates**: Automatic rehashing on login when needed
+
+### 2. Token Security
+- **JWT Tokens**: Signed with secret key
+- **Short Lifetime**: 1-hour expiration
+- **Secure Transport**: HTTPS recommended for production
+
+### 3. Data Isolation
+- **User Scoping**: All data scoped to user's MongoDB ObjectId
+- **Client Filtering**: Users only see their own clients
+- **Admin Override**: Superusers can access all data
+
+### 4. WebSocket Security
+- **Authentication Required**: All WebSocket connections require auth
+- **Token Validation**: JWT tokens validated on connection
+- **Graceful Rejection**: Unauthenticated connections rejected with reason
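These rules condense to "resolve a user from the token or cookie, otherwise close with policy-violation code 1008". A framework-free sketch (all names hypothetical, not the backend's actual handler):

```python
WS_POLICY_VIOLATION = 1008  # WebSocket close code for failed authentication

def decode_jwt(token: str):
    """Stand-in for real JWT validation: returns a user id, or None if invalid."""
    return "507f1f77bcf86cd799439011" if token == "valid-token" else None

def resolve_ws_user(token=None, cookie_user=None):
    """Return (user, None) when authenticated, else (None, close_code)."""
    if token is not None:
        user = decode_jwt(token)
        return (user, None) if user else (None, WS_POLICY_VIOLATION)
    if cookie_user is not None:         # fall back to the session cookie
        return cookie_user, None
    return None, WS_POLICY_VIOLATION    # reject with a reason

assert resolve_ws_user(token="valid-token") == ("507f1f77bcf86cd799439011", None)
assert resolve_ws_user(token="expired") == (None, 1008)
assert resolve_ws_user() == (None, 1008)
```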
+
+## Environment Configuration
+
+### Required Variables
+```bash
+# JWT secret key (minimum 32 characters)
+AUTH_SECRET_KEY=your-super-secret-jwt-key-here-make-it-long-and-random
+
+# Admin user credentials
+ADMIN_PASSWORD=your-secure-admin-password
+ADMIN_EMAIL=admin@example.com
+```
+
+### Optional Variables
+```bash
+# Cookie security (set to true for HTTPS)
+COOKIE_SECURE=false
+
+```
+
+## API Endpoints
+
+### Authentication
+- `POST /auth/jwt/login` - JWT token authentication
+- `POST /auth/cookie/login` - Cookie-based authentication
+- `POST /auth/logout` - Logout (clear cookies)
+
+### User Management
+- `POST /api/create_user` - Create new user (admin only)
+- `GET /api/users/me` - Get current user info
+- `PATCH /api/users/me` - Update user profile
+
+### WebSocket Endpoints
+- `ws://host/ws` - Opus audio stream with auth
+- `ws://host/ws_pcm` - PCM audio stream with auth
+
+## Error Handling
+
+### Authentication Errors
+- **401 Unauthorized**: Invalid credentials or expired token
+- **403 Forbidden**: Insufficient permissions
+- **422 Validation Error**: Invalid request format
+
+### WebSocket Errors
+- **1008 Policy Violation**: Authentication required
+- **1011 Server Error**: Internal authentication error
+
+## Best Practices
+
+### 1. Client Implementation
+```python
+# Use email for authentication
+AUTH_USERNAME = "user@example.com" # Email address
+AUTH_PASSWORD = "secure_password"
+
+# Use single AUTH_USERNAME variable for email authentication
+```
+
+### 2. Token Management
+```python
+import requests  # hypothetical client-side helper
+
+def get(url: str, token: str) -> requests.Response:
+    """GET with a Bearer token; re-login once on 401 (token lifetime is 1 hour).
+    `login()` stands in for a helper that re-authenticates via /auth/jwt/login."""
+    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
+    if resp.status_code == 401:
+        resp = requests.get(url, headers={"Authorization": f"Bearer {login()}"})
+    return resp
+# Store tokens securely (keychain/env var), never in version control.
+```
+
+### 3. Production Deployment
+```bash
+# Use strong secrets
+AUTH_SECRET_KEY=$(openssl rand -base64 32)
+
+# Enable HTTPS
+COOKIE_SECURE=true
+
+# Use environment variables
+# Never commit secrets to version control
+```
+
+### 4. Admin User Setup
+```bash
+# Create admin during startup
+ADMIN_PASSWORD=secure_admin_password
+ADMIN_EMAIL=admin@yourdomain.com
+```
+
+## Troubleshooting
+
+### Common Issues
+
+#### 1. Authentication Failures
+```bash
+# Check credentials
+curl -X POST "http://localhost:8000/auth/jwt/login" \
+ -H "Content-Type: application/x-www-form-urlencoded" \
+ -d "username=user@example.com&password=test"
+
+# Verify user exists by email
+# Check password is correct
+# Ensure user is active
+```
+
+#### 2. WebSocket Connection Issues
+```javascript
+// Check token validity
+// Verify URL format
+// Test with curl first
+```
+
+#### 3. Admin User Creation
+```bash
+# Check logs for admin creation
+docker compose logs friend-backend | grep -i admin
+
+# Verify environment variables
+echo $ADMIN_PASSWORD
+```
+
+### Debug Commands
+```bash
+# Check user database
+docker exec -it mongo-container mongosh friend-lite
+
+# View authentication logs
+docker compose logs friend-backend | grep -i auth
+
+# Test API endpoints
+curl -H "Authorization: Bearer $TOKEN" http://localhost:8000/api/users/me
+```
+
+## Migration Guide
+
+### From Custom user_id to Email-Only Authentication
+
+1. **Update Environment Variables**
+ ```bash
+ # Old
+ AUTH_USERNAME=abc123 # Custom user_id (deprecated)
+
+ # New
+ AUTH_USERNAME=user@example.com # Email address only
+ ```
+
+2. **Update Client Code**
+ ```python
+ # Old
+ username = AUTH_USERNAME # Could be email or user_id
+
+ # New
+ username = AUTH_USERNAME # Email address only
+ ```
+
+3. **Test Authentication**
+ ```bash
+ # Only email authentication works now
+ curl -X POST "http://localhost:8000/auth/jwt/login" \
+ -d "username=user@example.com&password=pass"
+ ```
+
+## Advanced Features
+
+### 1. Role-Based Access Control
+```python
+# Regular user - can only access own data
+@app.get("/api/data")
+async def get_data(user: User = Depends(current_active_user)):
+ return get_user_data(user.user_id)
+
+# Admin user - can access all data
+@app.get("/api/admin/data")
+async def get_all_data(user: User = Depends(current_superuser)):
+ return get_all_data()
+```
+
+### 2. OAuth Integration
+```python
+# Automatic email verification for OAuth users
+```
+
+### 3. Multi-Device Support
+```python
+# Single user, multiple devices
+# Client IDs: a39011-phone, a39011-tablet, a39011-desktop
+# Separate conversation streams per device
+# Unified user dashboard
+```
+
+This authentication system provides enterprise-grade security with developer-friendly APIs, supporting email/password authentication and modern OAuth flows while maintaining proper data isolation and user management capabilities using MongoDB's robust ObjectId system.
\ No newline at end of file
diff --git a/backends/advanced-backend/Docs/contribution.md b/backends/advanced-backend/Docs/contribution.md
new file mode 100644
index 00000000..488d3b19
--- /dev/null
+++ b/backends/advanced-backend/Docs/contribution.md
@@ -0,0 +1,44 @@
+ 1. Docs/quickstart.md (15 min)
+ 2. Docs/architecture.md (20 min)
+ 3. main.py - just the imports and WebSocket sections (15 min)
+ 4. memory_config.yaml (10 min)
+
+ "I want to work on memory extraction"
+
+ 1. Docs/quickstart.md → Docs/memories.md
+ 2. memory_config.yaml (memory_extraction section)
+ 3. main.py lines 1047-1065 (trigger)
+ 4. main.py lines 1163-1195 (processing)
+ 5. src/memory/memory_service.py
+ 6. src/memory_debug.py (for tracking)
+
+ "I want to work on action items"
+
+ 1. Docs/quickstart.md → Docs/action-items.md
+ 2. memory_config.yaml (action_item_extraction section)
+ 3. main.py lines 1341-1378 (real-time processing)
+ 4. src/action_items_service.py
+ 5. ACTION_ITEMS_CLEANUP_SUMMARY.md (architecture)
+
+ "I want to debug pipeline issues"
+
+ 1. MEMORY_DEBUG_IMPLEMENTATION.md
+ 2. src/memory_debug.py
+ 3. src/memory_debug_api.py
+ 4. API endpoints: /api/debug/memory/*
+
+ "I want to understand the full architecture"
+
+ 1. Docs/architecture.md
+ 2. main.py (full file, focusing on class structures)
+ 3. src/auth.py (authentication flow)
+ 4. src/users.py (user management)
+ 5. All service files (memory_service.py, action_items_service.py)
+
+ Key Concepts to Understand
+
+ Data Flow
+
+ Audio → Transcription → Dual Processing
+   ├─ Memory Pipeline (end-of-conversation)
+   └─ Action Item Pipeline (real-time per-segment)
\ No newline at end of file
diff --git a/backends/advanced-backend/Docs/memories.md b/backends/advanced-backend/Docs/memories.md
new file mode 100644
index 00000000..0b0fd983
--- /dev/null
+++ b/backends/advanced-backend/Docs/memories.md
@@ -0,0 +1,680 @@
+# Memory Service Configuration and Customization
+
+> ๐ **Prerequisite**: Read [quickstart.md](./quickstart.md) first for system overview.
+
+This document explains how to configure and customize the memory service in the friend-lite backend.
+
+**Code References**:
+- **Main Implementation**: `src/memory/memory_service.py`
+- **Processing Trigger**: `main.py:1047-1065` (conversation end)
+- **Background Processing**: `main.py:1163-1195` (memory extraction)
+- **Configuration**: `memory_config.yaml` + `src/memory_config_loader.py`
+
+## Overview
+
+The memory service uses [Mem0](https://mem0.ai/) to store, retrieve, and search conversation memories. It integrates with Ollama for embeddings and LLM processing, and Qdrant for vector storage.
+
+**Key Architecture Change**: All memories are now keyed by the database user_id instead of client_id, with client information stored in metadata for reference.
+
+## Architecture
+
+```
+┌──────────────────┐     ┌───────────────┐     ┌──────────────────┐
+│   Transcripts    │     │    Ollama     │     │      Qdrant      │
+│  (Audio Input)   │────▶│    (LLM +     │────▶│  (Vector Store)  │
+│  + User Context  │     │  Embeddings)  │     │     (user_id     │
+│                  │     │               │     │      keyed)      │
+└──────────────────┘     └───────────────┘     └──────────────────┘
+                                 │
+                                 ▼
+                       ┌──────────────────┐
+                       │   Mem0 Memory    │
+                       │     Service      │
+                       │  (User-Centric)  │
+                       └──────────────────┘
+```
+
+## Configuration
+
+### Environment Variables
+
+The memory service is configured via environment variables:
+
+```bash
+# Ollama Configuration
+OLLAMA_BASE_URL=http://192.168.0.110:11434
+
+# Qdrant Configuration (optional)
+QDRANT_BASE_URL=localhost
+
+# Mem0 Organization Settings (optional)
+MEM0_ORGANIZATION_ID=friend-lite-org
+MEM0_PROJECT_ID=audio-conversations
+MEM0_APP_ID=omi-backend
+
+# Disable telemetry (privacy)
+MEM0_TELEMETRY=False
+```
+
+### Memory Service Configuration
+
+The core configuration is in `src/memory/memory_service.py:45-81`:
+
+```python
+MEM0_CONFIG = {
+ "llm": {
+ "provider": "ollama",
+ "config": {
+ "model": "llama3.1:latest",
+ "ollama_base_url": OLLAMA_BASE_URL,
+ "temperature": 0,
+ "max_tokens": 2000,
+ },
+ },
+ "embedder": {
+ "provider": "ollama",
+ "config": {
+ "model": "nomic-embed-text:latest",
+ "embedding_dims": 768,
+ "ollama_base_url": OLLAMA_BASE_URL,
+ },
+ },
+ "vector_store": {
+ "provider": "qdrant",
+ "config": {
+ "collection_name": "omi_memories",
+ "embedding_model_dims": 768,
+ "host": QDRANT_BASE_URL,
+ "port": 6333,
+ },
+ },
+}
+```
+
+## Mem0 Custom Prompts Configuration
+
+### Understanding Mem0 Prompts
+
+Mem0 uses two types of custom prompts:
+
+1. **`custom_fact_extraction_prompt`**: Controls how facts are extracted from conversations
+2. **`custom_update_memory_prompt`**: Controls how memories are updated/merged
+
+### Key Discovery: Fact Extraction Format
+
+The `custom_fact_extraction_prompt` must follow a specific JSON format with few-shot examples:
+
+```python
+custom_fact_extraction_prompt = """
+Please extract relevant facts from the conversation.
+Here are some few shot examples:
+
+Input: Hi.
+Output: {"facts" : []}
+
+Input: I need to buy groceries tomorrow.
+Output: {"facts" : ["Need to buy groceries tomorrow"]}
+
+Input: The meeting is at 3 PM on Friday.
+Output: {"facts" : ["Meeting scheduled for 3 PM on Friday"]}
+
+Now extract facts from the following conversation. Return only JSON format with "facts" key.
+"""
+```
+
+### Configuration Parameters
+
+Mem0 configuration requires these specific parameters:
+
+- `custom_fact_extraction_prompt`: For fact extraction (if enabled)
+- `version`: Should be set to "v1.1"
+- Standard LLM, embedder, and vector_store configurations
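Putting these parameters together, the prompt plugs into the top level of the Mem0 config. A sketch (abbreviated; the `llm`, `embedder`, and `vector_store` sections are as defined earlier in this document):

```python
custom_fact_extraction_prompt = """Please extract relevant facts from the conversation.
...few-shot JSON examples as shown above...
Now extract facts from the following conversation. Return only JSON format with "facts" key."""

MEM0_CONFIG = {
    "version": "v1.1",  # required alongside custom prompts
    "custom_fact_extraction_prompt": custom_fact_extraction_prompt,
    # "llm", "embedder", "vector_store": as configured in memory_service.py
}

# Common pitfall: "custom_prompt" is the wrong key name for fact extraction
# and silently yields empty results.
assert "custom_prompt" not in MEM0_CONFIG
```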
+
+### Common Issues
+
+1. **Using `custom_prompt` instead of `custom_fact_extraction_prompt`**: Will cause empty results
+2. **Missing JSON format examples**: Facts won't be extracted properly
+3. **Setting `custom_fact_extraction_prompt` to empty string**: Disables fact extraction entirely
+
+## Customization Options
+
+### 1. LLM Model Configuration
+
+#### Change the LLM Model
+
+To use a different Ollama model for memory processing:
+
+```python
+# In memory_service.py
+MEM0_CONFIG["llm"]["config"]["model"] = "llama3.2:latest" # or any other model
+```
+
+#### Switch to OpenAI GPT-4o (Recommended for JSON Reliability)
+
+For better JSON parsing and reduced errors, switch to OpenAI:
+
+```bash
+# In your .env file
+LLM_PROVIDER=openai
+OPENAI_API_KEY=your-openai-api-key
+OPENAI_MODEL=gpt-4o # Recommended for reliable JSON output
+
+# Alternative models
+# OPENAI_MODEL=gpt-4o-mini # Faster, cheaper option
+# OPENAI_MODEL=gpt-3.5-turbo # Budget option
+```
+
+Or configure via `memory_config.yaml`:
+
+```yaml
+memory_extraction:
+ llm_settings:
+ model: "gpt-4o" # When LLM_PROVIDER=openai
+ temperature: 0.1
+ max_tokens: 2000
+
+action_item_extraction:
+ enabled: true
+ llm_settings:
+ model: "gpt-4o" # Better JSON reliability
+ temperature: 0.1
+ max_tokens: 1000
+
+fact_extraction:
+ enabled: true # Safe to enable with GPT-4o
+ llm_settings:
+ model: "gpt-4o"
+ temperature: 0.0
+ max_tokens: 1500
+```
+
+#### Adjust LLM Parameters
+
+```python
+MEM0_CONFIG["llm"]["config"].update({
+ "temperature": 0.1, # Higher for more creative summaries
+ "max_tokens": 4000, # More tokens for longer memories
+ "top_p": 0.9, # Nucleus sampling
+})
+```
+
+#### Benefits of OpenAI GPT-4o
+
+**Improved JSON Reliability:**
+- Consistent JSON formatting reduces parsing errors
+- Better instruction following for structured output
+- Built-in understanding of JSON requirements
+- Reduced need to disable fact extraction
+
+**When to Use GPT-4o:**
+- Experiencing frequent JSON parsing errors
+- Need reliable action item extraction
+- Want to enable fact extraction safely
+- Require consistent structured output
+
+**Monitoring JSON Success:**
+```bash
+# Check for parsing errors
+docker logs advanced-backend | grep "JSONDecodeError"
+
+# Verify OpenAI usage
+docker logs advanced-backend | grep "Using OpenAI provider"
+
+# Monitor action item extraction
+docker logs advanced-backend | grep "OpenAI response"
+```
+
+### 2. Embedding Model Configuration
+
+#### Change Embedding Model
+
+```python
+MEM0_CONFIG["embedder"]["config"]["model"] = "mxbai-embed-large:latest"
+```
+
+#### Adjust Embedding Dimensions
+
+```python
+# Must match your embedding model's output dimensions
+MEM0_CONFIG["embedder"]["config"]["embedding_dims"] = 1024
+MEM0_CONFIG["vector_store"]["config"]["embedding_model_dims"] = 1024
+```
+
+### 3. Memory Processing Customization
+
+#### Custom Memory Prompt
+
+You can customize how memories are extracted from conversations:
+
+```python
+# In src/memory/memory_service.py:207-225 (_add_memory_to_store function)
+process_memory.add(
+ transcript,
+ user_id=user_id, # Database user_id (not client_id)
+ metadata={
+ "client_id": client_id, # Stored in metadata
+ "user_email": user_email,
+ # ... other metadata
+ },
+ prompt="Please extract key information and relationships from this conversation"
+)
+```
+
+#### Memory Metadata
+
+Enrich memories with custom metadata:
+
+```python
+metadata = {
+ "source": "offline_streaming",
+ "client_id": client_id, # Client ID stored in metadata
+ "user_email": user_email, # User email for identification
+ "audio_uuid": audio_uuid,
+ "timestamp": int(time.time()),
+ "conversation_context": "audio_transcription",
+ "device_type": "audio_recording",
+ "mood": "professional", # Custom field
+ "topics": ["sales", "meetings"], # Custom field
+ "organization_id": MEM0_ORGANIZATION_ID,
+ "project_id": MEM0_PROJECT_ID,
+ "app_id": MEM0_APP_ID,
+}
+```
+
+### 4. Vector Store Configuration
+
+#### Change Collection Name
+
+```python
+MEM0_CONFIG["vector_store"]["config"]["collection_name"] = "my_custom_memories"
+```
+
+#### Qdrant Advanced Configuration
+
+```python
+MEM0_CONFIG["vector_store"]["config"].update({
+ "url": "http://localhost:6333", # Full URL
+ "api_key": "your-api-key", # If using Qdrant Cloud
+ "prefer_grpc": True, # Use gRPC instead of HTTP
+})
+```
+
+### 5. Search and Retrieval Customization
+
+#### Custom Search Filters
+
+```python
+def search_memories_with_filters(self, query: str, user_id: str, topic: str = None):
+ filters = {"metadata.type": {"$ne": "action_item"}}
+
+ if topic:
+ filters["metadata.topics"] = {"$in": [topic]}
+
+ return self.memory.search(
+ query=query,
+ user_id=user_id,
+ filters=filters,
+ limit=20
+ )
+```
+
+#### Memory Ranking
+
+```python
+def get_important_memories(self, user_id: str):
+ """Get memories sorted by importance/frequency"""
+ memories = self.memory.get_all(user_id=user_id)
+
+ # Custom scoring logic
+ for memory in memories:
+ score = 0
+ if "meeting" in memory.get('memory', '').lower():
+ score += 2
+ if "deadline" in memory.get('memory', '').lower():
+ score += 3
+ memory['importance_score'] = score
+
+ return sorted(memories, key=lambda x: x.get('importance_score', 0), reverse=True)
+```
+
+## User-Centric Memory Architecture
+
+### Key Changes
+
+**All memories are now keyed by database user_id instead of client_id:**
+
+- **Memory Storage**: `user_id` parameter identifies the memory owner
+- **Client Information**: Stored in metadata for reference and debugging
+- **User Email**: Included in metadata for easy identification
+- **Backward Compatibility**: Admin debug shows both user and client information
+
+### Client-User Mapping
+
+The system maintains a mapping between client IDs and database users:
+
+```python
+# Client ID format: objectid_suffix-device_name
+client_id = "439011-laptop"  # Maps to user_id="507f1f77bcf86cd799439011" (last 6 chars of the ObjectId)
+
+# Memory storage uses database user_id (full ObjectId)
+process_memory.add(
+ transcript,
+ user_id="507f1f77bcf86cd799439011", # Database user_id (MongoDB ObjectId)
+ metadata={
+ "client_id": "cd7994-laptop", # Client reference
+ "user_email": "user@example.com",
+ # ... other metadata
+ }
+)
+```
+
+## Memory Types and Structure
+
+### Standard Memory Structure
+
+```json
+{
+ "id": "01b76e66-8a9c-4567-b890-123456789abc",
+ "memory": "Planning a vacation to Italy in September",
+ "user_id": "507f1f77bcf86cd799439011",
+ "created_at": "2025-07-10T07:44:15.316499-07:00",
+ "metadata": {
+ "source": "offline_streaming",
+ "client_id": "439011-laptop",
+ "user_email": "user@example.com",
+ "audio_uuid": "test_audio_6e38c2c8",
+ "timestamp": 1720616655,
+ "conversation_context": "audio_transcription",
+ "device_type": "audio_recording",
+ "organization_id": "friend-lite-org",
+ "project_id": "audio-conversations",
+ "app_id": "omi-backend"
+ }
+}
+```
+
+### Action Item Memory Structure
+
+```json
+{
+ "id": "5e8db55f-1234-5678-9abc-def012345678",
+ "memory": "Action Item: Complete user authentication module (Status: open)",
+ "user_id": "507f1f77bcf86cd799439011",
+ "metadata": {
+ "type": "action_item",
+ "client_id": "439011-laptop",
+ "user_email": "user@example.com",
+ "action_item_data": {
+ "description": "Complete user authentication module",
+ "assignee": "development_team",
+ "due_date": "not_specified",
+ "priority": "high",
+ "status": "open"
+ }
+ }
+}
+```
+
+## Advanced Customization
+
+### 1. Custom Memory Processing Pipeline
+
+Create a custom processing function:
+
+```python
+def custom_memory_processor(transcript: str, client_id: str, audio_uuid: str, user_id: str, user_email: str):
+ # Extract entities
+ entities = extract_named_entities(transcript)
+
+ # Classify conversation type
+ conv_type = classify_conversation(transcript)
+
+ # Generate custom summary
+ summary = generate_custom_summary(transcript, conv_type)
+
+ # Store with enriched metadata
+ process_memory.add(
+ summary,
+ user_id=user_id, # Database user_id
+ metadata={
+ "client_id": client_id,
+ "user_email": user_email,
+ "entities": entities,
+ "conversation_type": conv_type,
+ "audio_uuid": audio_uuid,
+ "processing_version": "v2.0"
+ }
+ )
+```
+
+### 2. Multiple Memory Collections
+
+Configure different collections for different types of memories:
+
+```python
+import copy
+
+def init_specialized_memory_services():
+    # Deep copies so the nested vector_store config of each service
+    # stays independent (a shallow .copy() would share it)
+    personal_config = copy.deepcopy(MEM0_CONFIG)
+    personal_config["vector_store"]["config"]["collection_name"] = "personal_memories"
+
+    # Work memories
+    work_config = copy.deepcopy(MEM0_CONFIG)
+    work_config["vector_store"]["config"]["collection_name"] = "work_memories"
+    work_config["custom_prompt"] = "Focus on work-related tasks, meetings, and projects"
+
+    return {
+        "personal": Memory.from_config(personal_config),
+        "work": Memory.from_config(work_config),
+    }
+```
+
+### 3. Memory Lifecycle Management
+
+Implement automatic memory cleanup:
+
+```python
+def cleanup_old_memories(self, user_id: str, days_old: int = 365):
+    """Remove memories older than the specified number of days."""
+    cutoff_timestamp = int(time.time()) - (days_old * 24 * 60 * 60)
+
+    memories = self.get_all_memories(user_id)
+    for memory in memories:
+        if memory.get("metadata", {}).get("timestamp", 0) < cutoff_timestamp:
+            self.delete_memory(memory["id"])
+```
+
+## Testing Memory Configuration
+
+Use the provided test script to verify your configuration:
+
+```bash
+# Run the memory test script
+python test_memory_creation.py
+```
+
+This will:
+- Test connectivity to Ollama and Qdrant
+- Create sample memories with database user IDs (not client IDs)
+- Test memory retrieval and search functionality
+- Verify the new user-centric memory structure and metadata
+- Validate client-user mapping functionality
+
+## Troubleshooting
+
+### Common Issues
+
+1. **Connection Timeouts**
+   - Check Ollama is running: `curl http://localhost:11434/api/version`
+   - Check Qdrant is accessible: `curl http://localhost:6333/collections`
+
+2. **Memory Not Created**
+   - Check Ollama has required models: `ollama list`
+   - Verify Qdrant collection exists
+   - Check memory service logs for errors
+
+3. **Search Not Working**
+   - Ensure embedding model is available in Ollama
+   - Check vector dimensions match between embedder and Qdrant
+   - Verify collection has vectors: `curl http://localhost:6333/collections/omi_memories`
+
+### Required Ollama Models
+
+Make sure these models are available:
+
+```bash
+# LLM for memory processing
+ollama pull llama3.1:latest
+
+# Embedding model for semantic search
+ollama pull nomic-embed-text:latest
+```
+
+### Memory Service Logs
+
+Enable debug logging to troubleshoot issues:
+
+```python
+import logging
+logging.getLogger("memory_service").setLevel(logging.DEBUG)
+```
+
+## Performance Optimization
+
+### 1. Batch Processing
+
+Process multiple memories at once:
+
+```python
+async def batch_add_memories(self, transcripts_data: List[Dict]):
+    tasks = []
+    for data in transcripts_data:
+        task = self.add_memory(
+            data["transcript"],
+            data["client_id"],
+            data["audio_uuid"],
+            data["user_id"],  # Database user_id
+            data["user_email"],  # User email
+        )
+        tasks.append(task)
+
+    results = await asyncio.gather(*tasks, return_exceptions=True)
+    return results
+```
+
+### 2. Memory Compression
+
+Implement memory consolidation:
+
+```python
+def consolidate_memories(self, user_id: str, time_window_hours: int = 24):
+    """Consolidate related memories from the same time period."""
+    recent_memories = self.get_recent_memories(user_id, time_window_hours)
+
+    if len(recent_memories) > 5:  # If many memories in a short time
+        consolidated = self.summarize_memories(recent_memories)
+
+        # Delete individual memories and store the consolidated version
+        for memory in recent_memories:
+            self.delete_memory(memory["id"])
+
+        return self.add_consolidated_memory(consolidated, user_id)
+```
+
+## API Endpoints
+
+The memory service exposes these endpoints:
+
+- `GET /api/memories` - Get user memories (keyed by database user_id)
+- `GET /api/memories/search?query={query}` - Search memories (user-scoped)
+- `DELETE /api/memories/{memory_id}` - Delete specific memory (requires authentication)
+- `GET /api/admin/memories` - Admin view of all memories across all users (superuser only)
+- `GET /api/admin/memories/debug` - Admin debug view with user and client information (superuser only)
+
+### Admin Endpoints
+
+#### All Memories Endpoint (`/api/admin/memories`)
+
+Returns all memories across all users in a clean, searchable format:
+
+```json
+{
+  "total_memories": 25,
+  "total_users": 3,
+  "memories": [
+    {
+      "id": "memory-uuid",
+      "memory": "Planning vacation to Italy in September",
+      "user_id": "abc123",
+      "created_at": "2025-07-10T14:30:00Z",
+      "owner_user_id": "abc123",
+      "owner_email": "user@example.com",
+      "owner_display_name": "John Doe",
+      "metadata": {
+        "client_id": "abc123-laptop",
+        "user_email": "user@example.com",
+        "audio_uuid": "audio-uuid"
+      }
+    }
+  ]
+}
+```
+
+#### Debug Endpoint (`/api/admin/memories/debug`)
+
+The admin debug endpoint provides comprehensive debugging information:
+
+```json
+{
+  "total_users": 2,
+  "total_memories": 15,
+  "admin_user": {
+    "id": "admin1",
+    "email": "admin@example.com",
+    "is_superuser": true
+  },
+  "users_with_memories": [
+    {
+      "user_id": "abc123",
+      "email": "user@example.com",
+      "memory_count": 10,
+      "memories": [...],
+      "registered_clients": [
+        {
+          "client_id": "abc123-laptop",
+          "device_name": "laptop",
+          "last_seen": "2025-07-10T14:30:00Z"
+        }
+      ],
+      "client_count": 1
+    }
+  ]
+}
+```
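The debug payload above is essentially the flat memory list grouped by owner. A minimal sketch of that aggregation (function and field names are illustrative, not the backend's actual implementation; the real endpoint also attaches client information):

```python
from collections import defaultdict

def build_debug_view(memories, users):
    """Group a flat memory list by user_id into a debug-style structure.

    `memories` are dicts containing a 'user_id'; `users` maps user_id -> email.
    Illustrative sketch only.
    """
    by_user = defaultdict(list)
    for m in memories:
        by_user[m["user_id"]].append(m)
    return {
        "total_users": len(by_user),
        "total_memories": len(memories),
        "users_with_memories": [
            {
                "user_id": uid,
                "email": users.get(uid, "unknown"),
                "memory_count": len(ms),
                "memories": ms,
            }
            for uid, ms in by_user.items()
        ],
    }
```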
+
+## Conclusion
+
+The memory service is highly customizable and can be adapted for various use cases. Key areas for customization include:
+
+- LLM and embedding models
+- Memory processing prompts
+- Metadata enrichment
+- Search and retrieval logic
+- Storage collections and structure
+
+For more advanced use cases, consider implementing custom processing pipelines, multiple memory types, or integration with external knowledge bases.
+
+## Migration from Client-Based to User-Based Storage
+
+If migrating from an existing system where memories were keyed by client_id:
+
+1. **Clean existing data**: Remove old memories from Qdrant
+2. **Restart services**: Ensure new architecture is active
+3. **Test with fresh data**: Verify memories are properly keyed by user_id
+4. **Admin verification**: Use `/api/admin/memories/debug` to confirm proper storage
+
+The new architecture ensures proper user isolation and simplifies admin debugging while maintaining all client information in metadata.
+
+
+Both admin views are useful and complement each other: the debug view helps you understand how the system is
+working, while the clean view helps you understand what content is being stored.
\ No newline at end of file
diff --git a/backends/advanced-backend/Docs/quickstart.md b/backends/advanced-backend/Docs/quickstart.md
new file mode 100644
index 00000000..0c20070f
--- /dev/null
+++ b/backends/advanced-backend/Docs/quickstart.md
@@ -0,0 +1,539 @@
+# Friend-Lite Backend Quickstart Guide
+
+> **New to friend-lite?** This is your starting point! After reading this, continue with [architecture.md](./architecture.md) for technical details.
+
+## Overview
+
+Friend-Lite is an ecosystem of services that supports "AI wearable" agents and functionality.
+At the moment, the basic functionalities are:
+- Audio capture (via WebSocket, from an OMI device, files, or a laptop)
+- Audio transcription
+- Memory extraction
+- Action item extraction
+- Streamlit web dashboard
+- Basic user management
+
+**Core Implementation**: See `src/main.py` for the complete FastAPI application and WebSocket handling.
+
+## Prerequisites
+
+- Docker and Docker Compose
+- (Optional) Deepgram API key for high-quality cloud transcription
+- (Optional) Ollama for local LLM processing (memory extraction, action items)
+- (Optional) Wyoming ASR for offline speech-to-text processing
+
+## Quick Start
+
+### 1. Environment Setup
+
+Copy the `.env.template` file to `.env` and configure the required values:
+
+**Required Environment Variables:**
+```bash
+AUTH_SECRET_KEY=your-super-secret-jwt-key-here
+ADMIN_PASSWORD=your-secure-admin-password
+ADMIN_EMAIL=admin@example.com
+```
+
+**LLM Configuration (Choose One):**
+```bash
+# Option 1: OpenAI (Recommended for best memory extraction)
+LLM_PROVIDER=openai
+OPENAI_API_KEY=your-openai-api-key-here
+OPENAI_MODEL=gpt-4o
+
+# Option 2: Local Ollama
+LLM_PROVIDER=ollama
+OLLAMA_BASE_URL=http://ollama:11434
+```
+
+**Transcription Services (Choose One):**
+```bash
+# Option 1: Deepgram (Recommended for best transcription quality)
+DEEPGRAM_API_KEY=your-deepgram-api-key-here
+
+# Option 2: Local ASR service
+OFFLINE_ASR_TCP_URI=tcp://host.docker.internal:8765
+```
+
+**Important Notes:**
+- **OpenAI is strongly recommended** for LLM processing as it provides much better memory extraction and eliminates JSON parsing errors
+- If `DEEPGRAM_API_KEY` is provided, the system automatically uses Deepgram's Nova-3 model for transcription
+- The system falls back to offline services if cloud APIs are not configured
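The selection rule described in these notes can be sketched as a small helper (names are illustrative; the real backend wires this into its transcription pipeline):

```python
import os

def pick_transcription_provider(env=os.environ):
    """Mirror the rule above: use Deepgram (Nova-3) when an API key is set,
    otherwise fall back to the offline ASR endpoint. Illustrative sketch."""
    if env.get("DEEPGRAM_API_KEY"):
        return ("deepgram", "nova-3")
    return ("offline_asr", env.get("OFFLINE_ASR_TCP_URI", "tcp://host.docker.internal:8765"))
```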
+
+### 2. Start the System
+
+**Recommended: Docker Compose**
+```bash
+cd backends/advanced-backend
+docker compose up --build -d
+```
+
+This starts:
+- **Backend API**: `http://localhost:8000`
+- **Web Dashboard**: `http://localhost:8501`
+- **MongoDB**: `localhost:27017`
+- **Qdrant**: `localhost:6333`
+- **Ollama** (optional): commented out in `docker-compose.yml` by default
+
+**Implementation**: See `docker-compose.yml` for complete service configuration and `src/main.py` for FastAPI application setup.
+
+### 3. Optional: Start ASR Service
+
+For self-hosted speech recognition, see the instructions in `extras/asr-services/`.
+
+## Using the System
+
+### Web Dashboard
+
+1. Open `http://localhost:8501`
+2. **Login** using the sidebar:
+   - **Admin**: `admin@example.com` / `your-admin-password`
+   - **Create new users** via admin interface
+
+### Dashboard Features
+
+- **Conversations**: View audio recordings, transcripts, and cropped audio
+- **Memories**: Search extracted conversation memories
+- **Action Items**: Manage automatically detected tasks
+- **User Management**: Create/delete users and their data
+- **Client Management**: View active connections and close conversations
+
+### Audio Client Connection
+
+Connect audio clients via WebSocket with authentication:
+
+**WebSocket URLs:**
+```javascript
+// Opus audio stream
+ws://your-server-ip:8000/ws?token=YOUR_JWT_TOKEN&device_name=YOUR_DEVICE_NAME
+
+// PCM audio stream
+ws://your-server-ip:8000/ws_pcm?token=YOUR_JWT_TOKEN&device_name=YOUR_DEVICE_NAME
+```
+
+**Authentication Methods:**
+The system uses email-based authentication with JWT tokens:
+
+```bash
+# Login with email
+curl -X POST "http://localhost:8000/auth/jwt/login" \
+ -H "Content-Type: application/x-www-form-urlencoded" \
+ -d "username=admin@example.com&password=your-admin-password"
+
+# Response: {"access_token": "eyJhbGciOiJIUzI1NiIs...", "token_type": "bearer"}
+```
+
+**Authentication Flow:**
+1. **User Registration**: Admin creates users via API or dashboard
+2. **Login**: Users authenticate with email and password
+3. **Token Usage**: Include JWT token in API calls and WebSocket connections
+4. **Data Access**: Users can only access their own data (admins see all)
+
+For detailed authentication documentation, see [`auth.md`](./auth.md).
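Putting the flow together, a client logs in once and then embeds the JWT in the WebSocket URL. A minimal sketch of the URL construction using only the standard library (`build_ws_url` is an illustrative helper, not a backend API):

```python
from urllib.parse import urlencode

def build_ws_url(host: str, token: str, device_name: str, pcm: bool = False) -> str:
    """Compose the authenticated WebSocket URL described above."""
    path = "/ws_pcm" if pcm else "/ws"  # PCM vs Opus endpoint
    query = urlencode({"token": token, "device_name": device_name})
    return f"ws://{host}:8000{path}?{query}"
```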
+
+**Create User Account:**
+```bash
+export ADMIN_TOKEN="your-admin-token"
+
+# Create user
+curl -X POST "http://localhost:8000/api/create_user" \
+ -H "Authorization: Bearer $ADMIN_TOKEN" \
+ -H "Content-Type: application/json" \
+ -d '{"email": "user@example.com", "password": "userpass", "display_name": "John Doe"}'
+
+# Response includes the user_id (MongoDB ObjectId)
+# {"message": "User user@example.com created successfully", "user": {"id": "507f1f77bcf86cd799439011", ...}}
+```
+
+**Client ID Format:**
+The system automatically generates client IDs using the last 6 characters of the MongoDB ObjectId plus device name (e.g., `439011-phone`, `439011-desktop`). This ensures proper user-client association and data isolation.
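That derivation can be sketched in one line (illustrative helper, not the backend's actual function):

```python
def make_client_id(user_object_id: str, device_name: str) -> str:
    """Derive a client ID as described above: last 6 characters of the
    MongoDB ObjectId plus the device name."""
    return f"{user_object_id[-6:]}-{device_name}"
```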
+
+## Add Existing Data
+
+### Audio File Upload & Processing
+
+The system supports processing existing audio files through the file upload API. This allows you to import and process pre-recorded conversations without requiring a live WebSocket connection.
+
+**Upload and Process WAV Files:**
+```bash
+export USER_TOKEN="your-jwt-token"
+
+# Upload single WAV file
+curl -X POST "http://localhost:8000/api/process-audio-files" \
+ -H "Authorization: Bearer $USER_TOKEN" \
+ -F "files=@/path/to/audio.wav" \
+ -F "device_name=file_upload"
+
+# Upload multiple WAV files
+curl -X POST "http://localhost:8000/api/process-audio-files" \
+ -H "Authorization: Bearer $USER_TOKEN" \
+ -F "files=@/path/to/recording1.wav" \
+ -F "files=@/path/to/recording2.wav" \
+ -F "device_name=import_batch"
+```
+
+**Response Example:**
+```json
+{
+  "message": "Successfully processed 2 audio files",
+  "processed_files": [
+    {
+      "filename": "recording1.wav",
+      "sample_rate": 16000,
+      "channels": 1,
+      "duration_seconds": 120.5,
+      "size_bytes": 3856000
+    },
+    {
+      "filename": "recording2.wav",
+      "sample_rate": 44100,
+      "channels": 2,
+      "duration_seconds": 85.2,
+      "size_bytes": 7532800
+    }
+  ],
+  "client_id": "user01-import_batch"
+}
+```
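The per-file fields in the response can be derived directly from the WAV header; a sketch using the standard-library `wave` module (`wav_metadata` is an illustrative name, not the backend's actual code):

```python
import io
import wave

def wav_metadata(wav_bytes: bytes, filename: str) -> dict:
    """Compute the per-file fields shown in the response above from a WAV header."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as w:
        frames = w.getnframes()
        rate = w.getframerate()
        return {
            "filename": filename,
            "sample_rate": rate,
            "channels": w.getnchannels(),
            "duration_seconds": round(frames / rate, 1),
            "size_bytes": len(wav_bytes),
        }
```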
+
+## System Features
+
+### Audio Processing
+- **Real-time streaming**: WebSocket audio ingestion
+- **Multiple formats**: Opus and PCM audio support
+- **Per-client processing**: Isolated conversation management
+- **Speech detection**: Automatic silence removal
+- **Audio cropping**: Extract only speech segments
+
+**Implementation**: See `src/main.py:1562+` for WebSocket endpoints and `src/main.py:895-1340` for audio processing pipeline.
+
+### Transcription Options
+- **Deepgram API**: Cloud-based batch processing, high accuracy (recommended)
+- **Self-hosted ASR**: Local Wyoming protocol services with real-time processing
+- **Collection timeout**: 1.5-minute collection window for optimal Deepgram quality
+
+### Conversation Management
+- **Automatic chunking**: 60-second audio segments
+- **Conversation timeouts**: Auto-close after 1.5 minutes of silence
+- **Speaker identification**: Track multiple speakers per conversation
+- **Manual controls**: Close conversations via API or dashboard
+
+### Memory & Intelligence
+- **Enhanced Memory Extraction**: Improved fact extraction with granular, specific memories instead of generic transcript storage
+- **User-centric storage**: All memories and action items keyed by database user_id
+- **Memory extraction**: Automatic conversation summaries using LLM with enhanced prompts
+- **Semantic search**: Vector-based memory retrieval
+- **Action item detection**: Automatic task extraction with "Simon says" triggers
+- **Configurable extraction**: YAML-based configuration for memory and action item extraction
+- **Debug tracking**: SQLite-based tracking of transcript → memory/action item conversion
+- **Client metadata**: Device information preserved for debugging and reference
+- **User isolation**: All data scoped to individual users with multi-device support
+- **No more fallbacks**: System now creates proper memories instead of generic transcript placeholders
+
+**Implementation**:
+- **Memory System**: `src/memory/memory_service.py` + `main.py:1047-1065, 1163-1195`
+- **Action Items**: `src/action_items_service.py` + `main.py:1341-1378`
+- **Configuration**: `memory_config.yaml` + `src/memory_config_loader.py`
+- **Debug Tracking**: `src/memory_debug.py` + API endpoints at `/api/debug/memory/*`
+
+### Authentication & Security
+- **Email Authentication**: Login with email and password
+- **JWT tokens**: Secure API and WebSocket authentication with 1-hour expiration
+- **Role-based access**: Admin vs regular user permissions
+- **Data isolation**: Users can only access their own data
+- **Client ID Management**: Automatic client-user association via `objectid_suffix-device_name` format
+- **Multi-device support**: Single user can connect multiple devices
+- **Security headers**: Proper CORS, cookie security, and token validation
+
+**Implementation**: See `src/auth.py` for authentication logic, `src/users.py` for user management, and [`auth.md`](./auth.md) for comprehensive documentation.
+
+## Verification
+
+```bash
+# System health check
+curl http://localhost:8000/health
+
+# Web dashboard
+open http://localhost:8501
+
+# View active clients (requires auth token)
+curl -H "Authorization: Bearer your-token" http://localhost:8000/api/active_clients
+
+# Alternative endpoint (same data)
+curl -H "Authorization: Bearer your-token" http://localhost:8000/api/clients/active
+```
+
+## HAVPE Relay Configuration
+
+For ESP32 audio streaming using the HAVPE relay (`extras/havpe-relay/`):
+
+```bash
+# Environment variables for HAVPE relay
+export AUTH_USERNAME="user@example.com" # Email address
+export AUTH_PASSWORD="your-password"
+export DEVICE_NAME="havpe" # Device identifier
+
+# Run the relay
+cd extras/havpe-relay
+python main.py --backend-url http://your-server:8000 --backend-ws-url ws://your-server:8000
+```
+
+The relay will automatically:
+- Authenticate using `AUTH_USERNAME` (email address)
+- Generate client ID as `objectid_suffix-havpe`
+- Forward ESP32 audio to the backend with proper authentication
+- Handle token refresh and reconnection
+
+## Development tip
+
+Sync optional dependency groups with `uv sync --group <group>` (for example, `uv sync --group deepgram`).
+
+## Troubleshooting
+
+**Service Issues:**
+- Check logs: `docker compose logs friend-backend`
+- Restart services: `docker compose restart`
+- View all services: `docker compose ps`
+
+**Authentication Issues:**
+- Verify `AUTH_SECRET_KEY` is set and long enough (minimum 32 characters)
+- Check admin credentials match `.env` file
+- Ensure user email/password combinations are correct
+
+**ASR Issues:**
+- **Deepgram**: Verify API key is valid
+- **Self-hosted**: Ensure ASR service is running on port 8765
+- Check ASR connection in health endpoint
+
+**Memory Issues:**
+- Ensure Ollama is running and model is pulled
+- Check Qdrant connection in health endpoint
+- Memory processing happens at conversation end
+
+**Connection Issues:**
+- Use server's IP address, not localhost for mobile clients
+- Ensure WebSocket connections include authentication token
+- Check firewall/port settings for remote connections
+
+## Data Architecture
+
+The friend-lite backend uses a **user-centric data architecture**:
+
+- **All memories and action items are keyed by database user_id** (not client_id)
+- **Client information is stored in metadata** for reference and debugging
+- **User email is included** for easy identification in admin interfaces
+- **Multi-device support**: Users can access their data from any registered device
+
+For detailed information, see [User Data Architecture](user-data-architecture.md).
+
+## Memory & Action Item Configuration
+
+The system uses **centralized configuration** via `memory_config.yaml` for all memory and action item extraction settings. All hardcoded values have been removed from the code to ensure consistent, configurable behavior.
+
+### Configuration File Location
+- **Path**: `backends/advanced-backend/memory_config.yaml`
+- **Hot-reload**: Changes are applied on next processing cycle (no restart required)
+- **Fallback**: If file is missing, system uses safe defaults with environment variables
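The fallback behavior can be sketched as a recursive overlay of the parsed YAML onto safe defaults, so a missing or partial file never breaks processing (illustrative helper and default values, not the backend's actual loader):

```python
# Illustrative defaults; the real loader's values live in memory_config_loader.py
DEFAULTS = {
    "memory_extraction": {"enabled": True, "llm_settings": {"temperature": 0.1}},
    "processing": {"max_concurrent_tasks": 3, "processing_timeout": 300},
}

def merge_config(defaults: dict, overrides: dict) -> dict:
    """Recursively overlay values from memory_config.yaml onto safe defaults."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], value)
        else:
            merged[key] = value
    return merged
```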
+
+### LLM Provider & Model Configuration
+
+**OpenAI is STRONGLY RECOMMENDED** for optimal memory extraction performance.
+
+The system supports **multiple LLM providers** - configure via environment variables:
+
+```bash
+# In your .env file
+LLM_PROVIDER=openai # RECOMMENDED: Use "openai" for best results
+OPENAI_API_KEY=your-openai-api-key
+OPENAI_MODEL=gpt-4o # RECOMMENDED: "gpt-4o" for better memory extraction
+
+# Alternative: Local Ollama (may have reduced memory quality)
+LLM_PROVIDER=ollama
+OLLAMA_BASE_URL=http://ollama:11434
+OLLAMA_MODEL=gemma3n:e4b # Fallback if YAML config fails to load
+```
+
+**Why OpenAI is recommended:**
+- **Enhanced memory extraction**: Creates multiple granular memories instead of fallback transcripts
+- **Better fact extraction**: More reliable JSON parsing and structured output
+- **No more "fallback memories"**: Eliminates generic transcript-based memory entries
+- **Improved conversation understanding**: Better context awareness and detail extraction
+
+**YAML Configuration** (provider-specific models):
+```yaml
+memory_extraction:
+  enabled: true
+  prompt: |
+    Extract anything relevant about this conversation that would be valuable to remember.
+    Focus on key topics, people, decisions, dates, and emotional context.
+  llm_settings:
+    # Model selection based on LLM_PROVIDER:
+    #   - Ollama: "gemma3n:e4b", "llama3.1:latest", "llama3.2:latest", etc.
+    #   - OpenAI: "gpt-4o" (recommended for JSON reliability), "gpt-4o-mini", "gpt-3.5-turbo", etc.
+    model: "gemma3n:e4b"
+    temperature: 0.1
+    max_tokens: 2000
+
+fact_extraction:
+  enabled: false  # Disabled to avoid JSON parsing issues
+  # RECOMMENDATION: Enable with OpenAI GPT-4o for better JSON reliability
+  llm_settings:
+    model: "gemma3n:e4b"  # Auto-switches based on LLM_PROVIDER
+    temperature: 0.0  # Lower for factual accuracy
+    max_tokens: 1500
+
+action_item_extraction:
+  enabled: true
+  # RECOMMENDATION: Works best with OpenAI GPT-4o for reliable JSON parsing
+  trigger_phrases:
+    - "simon says"
+    - "action item"
+    - "todo"
+    - "follow up"
+    - "next step"
+    - "homework"
+    - "deliverable"
+    - "task"
+    - "assignment"
+  llm_settings:
+    model: "gemma3n:e4b"  # Auto-switches based on LLM_PROVIDER
+    temperature: 0.1
+    max_tokens: 1000
+```
+
+**Provider-Specific Behavior:**
+- **Ollama**: Uses local models with Ollama embeddings (nomic-embed-text)
+- **OpenAI**: Uses OpenAI models with OpenAI embeddings (text-embedding-3-small)
+- **Embeddings**: Automatically selected based on provider (768 dims for Ollama, 1536 for OpenAI)
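That provider-to-embedder mapping can be sketched as follows (illustrative helper; the backend derives this internally from `LLM_PROVIDER`):

```python
def embedder_config(provider: str) -> dict:
    """Pick the embedding model and vector size described above."""
    if provider == "openai":
        return {"model": "text-embedding-3-small", "dims": 1536}
    # Default to the local Ollama embedder
    return {"model": "nomic-embed-text", "dims": 768}
```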
+
+#### Fixing JSON Parsing Errors
+
+If you experience JSON parsing errors in action items or fact extraction:
+
+1. **Switch to OpenAI GPT-4o** (recommended solution):
+   ```bash
+   # In your .env file
+   LLM_PROVIDER=openai
+   OPENAI_API_KEY=your-openai-api-key
+   OPENAI_MODEL=gpt-4o
+   ```
+
+2. **Enable fact extraction** with reliable JSON output:
+   ```yaml
+   # In memory_config.yaml
+   fact_extraction:
+     enabled: true  # Safe to enable with GPT-4o
+   ```
+
+3. **Monitor logs** for JSON parsing success:
+   ```bash
+   # Check for JSON parsing errors
+   docker logs advanced-backend | grep "JSONDecodeError"
+
+   # Verify OpenAI usage
+   docker logs advanced-backend | grep "OpenAI response"
+   ```
+
+**Why GPT-4o helps with JSON errors:**
+- More consistent JSON formatting
+- Better instruction following for structured output
+- Reduced malformed JSON responses
+- Built-in JSON mode for reliable parsing
+
+#### Testing OpenAI Configuration
+
+To verify your OpenAI setup is working:
+
+1. **Check logs for OpenAI usage**:
+   ```bash
+   # Start the backend and check logs
+   docker logs advanced-backend | grep -i "openai"
+
+   # You should see:
+   # "Using OpenAI provider with model: gpt-4o"
+   ```
+
+2. **Test memory extraction** with a conversation:
+   ```bash
+   # The health endpoint includes LLM provider info
+   curl http://localhost:8000/health
+
+   # Response should include: "llm_provider": "openai"
+   ```
+
+3. **Monitor memory processing**:
+   ```bash
+   # After a conversation ends, check for successful processing
+   docker logs advanced-backend | grep "memory processing"
+   ```
+
+If you see errors about missing API keys or models, verify your `.env` file has:
+```bash
+LLM_PROVIDER=openai
+OPENAI_API_KEY=sk-your-actual-api-key-here
+OPENAI_MODEL=gpt-4o
+```
+
+### Quality Control Settings
+```yaml
+quality_control:
+  min_conversation_length: 50     # Skip very short conversations
+  max_conversation_length: 50000  # Skip extremely long conversations
+  skip_low_content: true          # Skip conversations with mostly filler words
+  min_content_ratio: 0.3          # Minimum meaningful content ratio
+  skip_patterns:                  # Regex patterns to skip
+    - "^(um|uh|hmm|yeah|ok|okay)\\s*$"
+    - "^test\\s*$"
+    - "^testing\\s*$"
+```
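These rules amount to a simple gate in front of the LLM; a sketch of how they might be applied (illustrative helper, not the backend's actual filter, which also computes the content ratio):

```python
import re

# Patterns from the quality_control section above
SKIP_PATTERNS = [r"^(um|uh|hmm|yeah|ok|okay)\s*$", r"^test\s*$", r"^testing\s*$"]

def should_process(transcript: str, min_length: int = 50, max_length: int = 50000) -> bool:
    """Decide whether a transcript is worth sending to the LLM."""
    text = transcript.strip().lower()
    if not (min_length <= len(text) <= max_length):
        return False
    return not any(re.match(p, text) for p in SKIP_PATTERNS)
```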
+
+### Processing & Performance
+```yaml
+processing:
+  parallel_processing: true  # Enable concurrent processing
+  max_concurrent_tasks: 3    # Limit concurrent LLM requests
+  processing_timeout: 300    # Timeout for memory extraction (seconds)
+  retry_failed: true         # Retry failed extractions
+  max_retries: 2             # Maximum retry attempts
+  retry_delay: 5             # Delay between retries (seconds)
+```
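The semaphore-plus-retry behavior these settings describe can be sketched with `asyncio` (illustrative helper, not the backend's actual scheduler):

```python
import asyncio

async def run_with_limits(tasks, max_concurrent=3, max_retries=2, retry_delay=0):
    """Run extraction coroutine factories with the concurrency and retry
    settings above. `tasks` are zero-argument callables returning coroutines."""
    sem = asyncio.Semaphore(max_concurrent)  # max_concurrent_tasks

    async def run_one(factory):
        async with sem:
            for attempt in range(max_retries + 1):
                try:
                    return await factory()
                except Exception:
                    if attempt == max_retries:
                        raise  # exhausted retries; surfaced via gather
                    await asyncio.sleep(retry_delay)

    return await asyncio.gather(*(run_one(f) for f in tasks), return_exceptions=True)
```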
+
+### Debug & Monitoring
+```yaml
+debug:
+  enabled: true
+  db_path: "/app/debug/memory_debug.db"
+  log_level: "INFO"              # DEBUG, INFO, WARNING, ERROR
+  log_full_conversations: false  # Privacy consideration
+  log_extracted_memories: true   # Log successful extractions
+```
+
+### Configuration Validation
+The system validates configuration on startup and provides detailed error messages for invalid settings. Use the debug API to verify your configuration:
+
+```bash
+# Check current configuration
+curl -H "Authorization: Bearer $ADMIN_TOKEN" \
+ http://localhost:8000/api/debug/memory/config
+```
+
+### API Endpoints for Debugging
+- `GET /api/debug/memory/stats` - Processing statistics
+- `GET /api/debug/memory/sessions` - Recent memory sessions
+- `GET /api/debug/memory/session/{audio_uuid}` - Detailed session info
+- `GET /api/debug/memory/config` - Current configuration
+- `GET /api/debug/memory/pipeline/{audio_uuid}` - Pipeline trace
+
+**Implementation**: See `src/memory_debug_api.py` for debug endpoints and `../MEMORY_DEBUG_IMPLEMENTATION.md` for complete debug system documentation.
+
+## Next Steps
+
+- **Configure Google OAuth** for easy user login
+- **Set up Ollama** for local memory processing
+- **Deploy ASR service** for self-hosted transcription
+- **Connect audio clients** using the WebSocket API
+- **Explore the dashboard** to manage conversations and users
+- **Review the user data architecture** for understanding data organization
+- **Customize memory extraction** by editing `memory_config.yaml`
+- **Monitor processing performance** using debug API endpoints
\ No newline at end of file
diff --git a/backends/advanced-backend/Docs/system-tracker.md b/backends/advanced-backend/Docs/system-tracker.md
new file mode 100644
index 00000000..1f32982b
--- /dev/null
+++ b/backends/advanced-backend/Docs/system-tracker.md
@@ -0,0 +1,385 @@
+# Debug System Tracker
+
+The **Debug System Tracker** provides centralized monitoring and debugging for the audio processing pipeline in the Friend-Lite backend. It tracks transactions through the complete pipeline from audio reception to memory/action item creation, giving you comprehensive visibility into system health and bottlenecks.
+
+## Overview
+
+The Debug System Tracker replaces scattered debug systems with a unified approach that:
+- **Tracks complete pipeline transactions** from audio → transcription → memory → action items
+- **Provides real-time monitoring** via the Streamlit dashboard
+- **Captures detailed failure information** for debugging
+- **Detects stalled transactions** automatically
+- **Thread-safe and performant** with background monitoring
+- **Exports debug dumps** for detailed analysis
+
+## Architecture
+
+```
+Audio Ingestion → Transcription → Memory → Action Items
+       ↓               ↓             ↓           ↓
+AUDIO_RECEIVED → TRANSCRIPTION_* → MEMORY_* → ACTION_ITEMS_*
+       ↓               ↓             ↓           ↓
+              Debug System Tracker Events
+                        ↓
+               Dashboard Visualization
+```
+
+## Core Components
+
+### Pipeline Stages
+
+The tracker monitors these stages in the audio processing pipeline:
+
+```python
+class PipelineStage(Enum):
+    AUDIO_RECEIVED = "audio_received"
+    TRANSCRIPTION_STARTED = "transcription_started"
+    TRANSCRIPTION_COMPLETED = "transcription_completed"
+    MEMORY_STARTED = "memory_started"
+    MEMORY_COMPLETED = "memory_completed"
+    ACTION_ITEMS_STARTED = "action_items_started"
+    ACTION_ITEMS_COMPLETED = "action_items_completed"
+    CONVERSATION_ENDED = "conversation_ended"
+```
+
+### Transaction States
+
+```python
+class TransactionStatus(Enum):
+    IN_PROGRESS = "in_progress"
+    COMPLETED = "completed"
+    FAILED = "failed"
+    STALLED = "stalled"  # Started but no progress in reasonable time
+```
+
+## Usage
+
+### Getting the Debug Tracker
+
+```python
+from advanced_omi_backend.debug_system_tracker import get_debug_tracker, PipelineStage
+
+# Get the global tracker instance
+tracker = get_debug_tracker()
+```
+
+### Basic Transaction Tracking
+
+```python
+# Create a new transaction
+transaction_id = tracker.create_transaction(
+    user_id="507f1f77bcf86cd799439011",
+    client_id="439011-laptop",  # last 6 chars of the ObjectId + device name
+    conversation_id="conv_123",
+)
+
+# Track events through the pipeline
+tracker.track_event(transaction_id, PipelineStage.AUDIO_RECEIVED, True,
+                    chunk_size=1024)
+
+tracker.track_event(transaction_id, PipelineStage.TRANSCRIPTION_STARTED)
+
+# Mark successful completion
+tracker.track_event(transaction_id, PipelineStage.TRANSCRIPTION_COMPLETED, True,
+                    transcript_length=500, processing_time=2.5)
+
+# Mark failure with error
+tracker.track_event(transaction_id, PipelineStage.MEMORY_STARTED)
+tracker.track_event(transaction_id, PipelineStage.MEMORY_COMPLETED, False,
+                    error_message="Ollama connection timeout")
+```
+
+### Convenience Methods
+
+```python
+# Track audio chunks
+tracker.track_audio_chunk(transaction_id, chunk_size=1024)
+
+# Track WebSocket connections
+tracker.track_websocket_connected(user_id, client_id)
+tracker.track_websocket_disconnected(client_id)
+```
+
+### Real Usage Example (Memory Processing)
+
+```python
+# From memory_service.py
+debug_tracker = get_debug_tracker()
+
+# Start memory session tracking
+session_id = debug_tracker.start_memory_session(
+    audio_uuid, client_id, user_id, user_email
+)
+debug_tracker.start_memory_processing(session_id)
+
+try:
+    # Process memory
+    result = process_memory.add(transcript, user_id=user_id, ...)
+
+    # Record successful completion
+    debug_tracker.complete_memory_processing(session_id, True)
+
+except Exception as e:
+    # Record failure
+    debug_tracker.complete_memory_processing(session_id, False, str(e))
+```
+
+## Dashboard Integration
+
+The Debug System Tracker integrates with the Streamlit dashboard to provide real-time monitoring:
+
+### System Metrics
+- **Uptime** - System running time in hours
+- **Total Transactions** - All transactions processed
+- **Active Transactions** - Currently in progress
+- **Completed/Failed/Stalled** - Transaction outcomes
+- **Active WebSockets** - Current connections
+- **Processing Counts** - Audio chunks, transcriptions, memories, action items
+
+### Recent Activity
+- **Recent Transactions** - Last 10 transactions with status and timing
+- **Recent Issues** - Last 10 problems detected with descriptions
+- **Active Users** - Users active in the last 5 minutes
+
+### Transaction Details
+Each transaction shows:
+- **Transaction ID** (first 8 characters)
+- **User ID** (last 6 characters for privacy)
+- **Current Status** and **Stage**
+- **Creation Time**
+- **Issue Description** (if any problems detected)
+
+## Advanced Features
+
+### Automatic Stall Detection
+
+The tracker automatically detects stalled transactions:
+
+```python
+# Background monitoring detects transactions stuck for >60 seconds
+async def _monitor_stalled_transactions(self):
+    while self._monitoring:
+        for transaction in self.transactions.values():
+            if transaction.status == TransactionStatus.IN_PROGRESS:
+                elapsed = (now - transaction.updated_at).total_seconds()
+                if elapsed > 60:  # 1 minute without progress
+                    transaction.status = TransactionStatus.STALLED
+```
+
+### Issue Pattern Detection
+
+The tracker identifies common failure patterns:
+
+```python
+def get_issue_description(self) -> Optional[str]:
+    # Detects patterns like:
+    #   - "Transcription completed but memory creation failed"
+    #   - "Transcription completed but memory processing stalled"
+    #   - Stage-specific failures with error messages
+```
+
+### Debug Data Export
+
+Export comprehensive debug information:
+
+```python
+# Export all system state to JSON file
+debug_file = tracker.export_debug_dump()
+# Creates: debug_dumps/debug_dump_<timestamp>.json
+
+# Contains:
+# - All transactions with complete event history
+# - System metrics and timing
+# - Recent issues and patterns
+# - Active WebSocket connections
+# - User activity tracking
+```
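
A minimal sketch of a timestamped dump writer (the real tracker serializes its full state; the directory and field names below are assumptions):

```python
import json
import time
from pathlib import Path

def export_debug_dump(state: dict, dump_dir: str = "debug_dumps") -> Path:
    """Write the given state dict to a timestamped JSON file and return its path."""
    out_dir = Path(dump_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    path = out_dir / f"debug_dump_{time.strftime('%Y%m%d_%H%M%S')}.json"
    # default=str makes non-JSON types such as datetimes serializable
    path.write_text(json.dumps(state, indent=2, default=str))
    return path
```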
+
+## Configuration
+
+### Environment Variables
+
+```bash
+# Debug dump directory (optional)
+DEBUG_DUMP_DIR=debug_dumps
+```
+
+### Initialization
+
+The tracker is automatically initialized in `main.py`:
+
+```python
+# Startup
+init_debug_tracker()
+
+# Shutdown
+shutdown_debug_tracker()
+```
+
+## API Reference
+
+### Core Methods
+
+#### `get_debug_tracker() -> DebugSystemTracker`
+Get the global debug tracker singleton instance.
+
+#### `create_transaction(user_id: str, client_id: str, conversation_id: Optional[str] = None) -> str`
+Create a new pipeline transaction and return its ID.
+
+#### `track_event(transaction_id: str, stage: PipelineStage, success: bool = True, error_message: Optional[str] = None, **metadata)`
+Track an event in a transaction with optional metadata.
+
+#### `track_audio_chunk(transaction_id: str, chunk_size: int = 0)`
+Convenience method to track audio chunk processing.
+
+#### `track_websocket_connected(user_id: str, client_id: str)`
+Track WebSocket connection establishment.
+
+#### `track_websocket_disconnected(client_id: str)`
+Track WebSocket disconnection.
+
+### Dashboard Data
+
+#### `get_dashboard_data() -> Dict`
+Get formatted data for the Streamlit dashboard including:
+- System metrics
+- Recent transactions (last 10)
+- Recent issues (last 10)
+- Active user count
+
+#### `get_transaction(transaction_id: str) -> Optional[Transaction]`
+Get a specific transaction by ID.
+
+#### `get_user_transactions(user_id: str, limit: int = 10) -> List[Transaction]`
+Get recent transactions for a specific user.
+
+### Debug Export
+
+#### `export_debug_dump() -> Path`
+Export comprehensive debug data to a timestamped JSON file.
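
As a consolidated illustration of the call sequence above, here is a toy in-memory stand-in (not the real `DebugSystemTracker`; storage and return shapes are simplified assumptions):

```python
import uuid
from typing import Dict, List, Optional

class ToyTracker:
    """Toy stand-in for DebugSystemTracker, for illustration only."""

    def __init__(self):
        self.transactions: Dict[str, dict] = {}

    def create_transaction(self, user_id: str, client_id: str,
                           conversation_id: Optional[str] = None) -> str:
        txn_id = uuid.uuid4().hex
        self.transactions[txn_id] = {"user_id": user_id, "client_id": client_id,
                                     "conversation_id": conversation_id, "events": []}
        return txn_id

    def track_event(self, txn_id: str, stage: str, success: bool = True,
                    error_message: Optional[str] = None, **metadata) -> None:
        self.transactions[txn_id]["events"].append(
            {"stage": stage, "success": success,
             "error_message": error_message, "metadata": metadata})

    def get_user_transactions(self, user_id: str, limit: int = 10) -> List[dict]:
        txns = [t for t in self.transactions.values() if t["user_id"] == user_id]
        return txns[-limit:]

# Intended call sequence
tracker = ToyTracker()
txn = tracker.create_transaction("user-1", "client-1")
tracker.track_event(txn, "AUDIO_RECEIVED")
tracker.track_event(txn, "TRANSCRIPTION_COMPLETED", True, transcript_length=128)
```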
+
+## Integration Points
+
+The Debug System Tracker is currently integrated into:
+
+### WebSocket Audio Handling (`main.py:1782+`)
+```python
+tracker = get_debug_tracker()
+tracker.track_websocket_connected(user.user_id, client_id)
+# ... on disconnect:
+tracker.track_websocket_disconnected(client_id)
+```
+
+### Audio Processing Pipeline (`main.py:1039+`)
+```python
+tracker = get_debug_tracker()
+transaction_id = tracker.create_transaction(user.user_id, client_id)
+tracker.track_event(transaction_id, PipelineStage.AUDIO_RECEIVED)
+```
+
+### Memory Processing (`memory_service.py:230+`)
+```python
+debug_tracker = get_debug_tracker()
+session_id = debug_tracker.start_memory_session(audio_uuid, client_id, user_id, user_email)
+debug_tracker.start_memory_processing(session_id)
+```
+
+### API Router (`api_router.py:415+`)
+```python
+debug_tracker = get_debug_tracker()
+session_summary = debug_tracker.get_session_summary(audio_uuid)
+```
+
+## Best Practices
+
+### 1. Track All Critical Pipeline Stages
+```python
+# Good - Complete pipeline tracking
+transaction_id = tracker.create_transaction(user_id, client_id)
+tracker.track_event(transaction_id, PipelineStage.AUDIO_RECEIVED)
+tracker.track_event(transaction_id, PipelineStage.TRANSCRIPTION_STARTED)
+# ... continue through all stages
+```
+
+### 2. Include Rich Metadata
+```python
+# Good - Detailed metadata for debugging
+tracker.track_event(transaction_id, PipelineStage.TRANSCRIPTION_COMPLETED, True,
+                    transcript_length=len(transcript),
+                    processing_time_ms=elapsed_ms,
+                    model_used="deepgram",
+                    audio_duration=duration_seconds)
+```
+
+### 3. Handle Both Success and Failure
+```python
+try:
+    result = await process_transcription()
+    tracker.track_event(transaction_id, PipelineStage.TRANSCRIPTION_COMPLETED, True,
+                        result_length=len(result))
+except Exception as e:
+    tracker.track_event(transaction_id, PipelineStage.TRANSCRIPTION_COMPLETED, False,
+                        error_message=str(e), retry_count=attempt_num)
+```
+
+### 4. Use Proper Transaction Lifecycle
+```python
+# Create transaction when pipeline starts
+transaction_id = tracker.create_transaction(user_id, client_id, conversation_id)
+
+# Track through all stages
+# Always end with CONVERSATION_ENDED for completion
+tracker.track_event(transaction_id, PipelineStage.CONVERSATION_ENDED, True)
+```
+
+## Troubleshooting
+
+### Common Issues
+
+**Q: Transactions stuck in IN_PROGRESS**
+A: Check that your code calls `track_event()` with success/failure for all pipeline stages. Stalled transactions are automatically detected after 60 seconds.
+
+**Q: Missing transactions in dashboard**
+A: Ensure you're importing from the correct module: `from advanced_omi_backend.debug_system_tracker import get_debug_tracker`
+
+**Q: Memory usage growing**
+A: The tracker automatically limits history to the 100 most recent transactions and 50 most recent issues, so memory use should plateau. Sustained growth under high volume may indicate transactions that never reach a terminal state.
+
+**Q: Background monitoring not working**
+A: Ensure `init_debug_tracker()` is called at startup. Check logs for monitoring task errors.
+
+### Debug Tips
+
+1. **Check recent issues**: Use `get_dashboard_data()["recent_issues"]` to see detected problems
+2. **Monitor transaction patterns**: Use `get_user_transactions()` to see user-specific pipeline flow
+3. **Export debug dumps**: Use `export_debug_dump()` for detailed offline analysis
+4. **Watch stall detection**: Transactions with no progress for >60 seconds are automatically flagged
+
+## Migration Notes
+
+This system replaces various old debug tracking approaches:
+
+### From Old Memory Debug System
+```python
+# Old approach
+memory_debug.start_session(audio_uuid)
+memory_debug.log_processing(...)
+
+# New approach
+tracker = get_debug_tracker()
+transaction_id = tracker.create_transaction(user_id, client_id)
+tracker.track_event(transaction_id, PipelineStage.MEMORY_STARTED)
+```
+
+### From Scattered Logging
+```python
+# Old approach
+logger.info(f"Processing audio for {user_id}")
+logger.info(f"Transcription completed: {len(result)}")
+
+# New approach (includes logging + structured tracking)
+tracker.track_event(transaction_id, PipelineStage.TRANSCRIPTION_COMPLETED, True,
+                    transcript_length=len(result))
+```
+
+The Debug System Tracker provides comprehensive visibility into the audio processing pipeline while maintaining performance and thread safety.
\ No newline at end of file
diff --git a/backends/advanced-backend/README.md b/backends/advanced-backend/README.md
index bcec9606..0478225c 100644
--- a/backends/advanced-backend/README.md
+++ b/backends/advanced-backend/README.md
@@ -33,4 +33,129 @@ To setup the backend, you need to do the following:
1. Change the directory to the backend,
`cd backends/advanced-backend`
2. Fill out the .env variables as you require (check the .env.template for the required variables)
-3. Run the backend with `docker compose up --build -d`. This will take a couple minutes, be patient.
\ No newline at end of file
+3. Run the backend with `docker compose up --build -d`. This will take a couple of minutes; be patient.
+
+
+# Backend Walkthrough
+
+## Architecture Overview
+
+This is a real-time audio processing backend built with FastAPI that handles continuous audio streams, transcription, memory storage, and conversation management. The system is being designed for 24/7 operation with robust recovery mechanisms.
+
+## Core Services (Docker Compose)
+
+- **friend-backend**: Main FastAPI application serving the audio processing pipeline
+- **streamlit**: Web UI for conversation management, speaker enrollment, and system monitoring
+- **proxy**: Nginx reverse proxy handling external requests
+- **qdrant**: Vector database for semantic memory storage and retrieval
+- **mongo**: Document database for conversations, users, speakers, and action items
+- **Optional services**: speaker-recognition (GPU-based), ollama (LLM inference)
+
+## Audio Processing Flow
+
+### 1. Audio Ingestion
+- Clients connect via WebSocket endpoints:
+ - `/ws`: Opus-encoded audio streams (from mobile apps)
+ - `/ws_pcm`: Raw PCM audio streams (from desktop clients)
+- Each client gets a `ClientState` managing their processing pipeline
+- Audio data flows into central queues to decouple ingestion from processing
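
The ingestion/processing decoupling described above can be sketched with an `asyncio.Queue` (names here are illustrative, not the backend's actual classes):

```python
import asyncio

async def ingest(queue: asyncio.Queue, chunks):
    """Stand-in for a WebSocket handler pushing received frames onto a queue."""
    for chunk in chunks:
        await queue.put(chunk)
    await queue.put(None)  # sentinel: client disconnected

async def process(queue: asyncio.Queue):
    """Stand-in for a consumer (saver/transcriber) draining the queue."""
    received = []
    while (chunk := await queue.get()) is not None:
        received.append(chunk)
    return received

async def main():
    q = asyncio.Queue()
    chunks = [b"frame1", b"frame2", b"frame3"]
    _, out = await asyncio.gather(ingest(q, chunks), process(q))
    return out
```

Because producer and consumer only share the queue, a slow consumer never blocks the WebSocket read loop.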
+
+### 2. Parallel Processing Pipeline
+The system runs multiple async consumers processing audio in parallel:
+
+**Audio Saver Consumer** (`_audio_saver`):
+- Buffers incoming PCM audio data
+- Writes 60-second WAV chunks to `./audio_chunks/` directory
+- Tracks speech segments for audio cropping
+- Generates unique audio UUIDs for each chunk
+
+**Transcription Consumer** (`_transcription_processor`):
+- Sends audio chunks to Wyoming ASR service via TCP
+- Planned fallback to the Deepgram API (not yet implemented)
+- Handles real-time transcription with segment timing
+- Processes voice activity detection (VAD) events
+
+**Memory Consumer** (`_memory_processor`):
+- Stores completed transcripts in mem0 vector database
+- Creates semantic memories for long-term retrieval
+- Manages conversation context and user associations
+- Handles background memory processing
+
+### 3. Advanced Features
+
+**Speaker Recognition**:
+- Voice enrollment via audio samples
+- Real-time speaker identification during conversations
+- Speaker diarization and transcript attribution
+
+**Audio Cropping**:
+- Removes silence using speech segment detection
+- Preserves only voice activity with configurable padding
+- Reduces storage requirements and improves processing efficiency
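
A minimal sketch of the cropping logic: pad each detected speech segment and merge overlapping results (times in milliseconds; the padding value is illustrative, not the backend's setting):

```python
def crop_segments(segments, padding=200):
    """segments: list of (start, end) speech times in ms; returns padded, merged intervals."""
    kept = []
    for start, end in sorted(segments):
        start, end = max(0, start - padding), end + padding
        if kept and start <= kept[-1][1]:  # overlaps the previous interval: merge
            kept[-1] = (kept[-1][0], max(kept[-1][1], end))
        else:
            kept.append((start, end))
    return kept
```

Only the returned intervals need to be written to disk, which is where the storage savings come from.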
+
+**Action Items Extraction**:
+- Uses LLM (Ollama) to extract tasks from conversations
+- Tracks action item status and assignments
+- Provides API for task management
+
+**Conversation Management**:
+- Automatic conversation segmentation based on silence timeouts
+- Session state management across client connections
+- Conversation closing and archival
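
Silence-based segmentation can be sketched as grouping speech timestamps whenever the gap exceeds a timeout (the 90-second threshold below is an assumption, not the backend's configured value):

```python
def segment_conversations(speech_times, timeout=90.0):
    """Group speech timestamps (seconds) into conversations separated by silence gaps."""
    conversations = []
    current = []
    for t in speech_times:
        if current and t - current[-1] >= timeout:  # silence gap: close conversation
            conversations.append(current)
            current = []
        current.append(t)
    if current:
        conversations.append(current)
    return conversations
```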
+
+### 4. Data Storage
+
+**MongoDB Collections**:
+- `audio_chunks`: Audio file metadata, transcripts, timing, speakers
+- `users`: User profiles and settings
+- `speakers`: Voice enrollment data and models
+- `action_items`: Extracted tasks with status tracking
+
+**File System**:
+- `./audio_chunks/`: Raw and cropped WAV files
+- `./qdrant_data/`: Vector database storage
+- `./mongo_data/`: Document database storage
+
+### 5. Health & Monitoring
+
+Current health checks verify:
+- MongoDB connectivity (critical service)
+- ASR service availability (Wyoming protocol)
+- Memory service (mem0 + Qdrant + Ollama)
+- Speaker recognition service
+- File system access
+
+## Key Classes & Components
+
+- `ClientState`: Per-client audio processing state and queues
+- `TranscriptionManager`: ASR service management and reconnection logic
+- `ChunkRepo`: MongoDB operations for audio chunks and metadata
+- `MemoryService`: mem0 integration for semantic memory
+- `SpeakerService`: speaker recognition and enrollment
+- `ActionItemsService`: LLM-based task extraction and management
+
+## Recovery & Reliability
+TODO
+
+## Metrics & Monitoring Plan
+
+### Target: 24 Hours Uninterrupted Audio Processing
+
+The primary goal is to achieve at least 24 hours of continuous audio recording and processing without interruptions. The metrics system will track:
+
+### Core Metrics to Implement
+
+**System Uptime Metrics**:
+- Total system uptime vs. total recording time
+- Service-level uptime for each component (friend-backend, mongo, qdrant, ASR, etc.)
+- Connection uptime per client
+- WebSocket connection stability and reconnection events
+
+**Audio Processing Metrics**:
+- Total audio recorded (duration in hours/minutes)
+- Total voice activity detected vs. silence
+- Audio chunks successfully processed vs. failed
+- Transcription success rate and latency
+- Memory storage success rate
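
The rate metrics above reduce to simple ratios; a sketch with assumed field names:

```python
def transcription_success_rate(succeeded: int, failed: int) -> float:
    """Fraction of audio chunks that transcribed successfully."""
    total = succeeded + failed
    return succeeded / total if total else 0.0

def uptime_ratio(recording_seconds: float, window_seconds: float) -> float:
    """Recording time as a fraction of the observation window (e.g. 24 h = 86400 s)."""
    return recording_seconds / window_seconds if window_seconds else 0.0
```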
+
+To reset the system during development, you can run `sudo rm -rf ./audio_chunks/ ./mongo_data/ ./qdrant_data/`. Note that this permanently deletes all recorded audio, conversations, and memories.
\ No newline at end of file
diff --git a/backends/advanced-backend/README_laptop_client.md b/backends/advanced-backend/README_laptop_client.md
deleted file mode 100644
index f945f068..00000000
--- a/backends/advanced-backend/README_laptop_client.md
+++ /dev/null
@@ -1,44 +0,0 @@
-# Laptop Client Usage Guide
-
-## Basic Usage (without User ID)
-```bash
-python laptop_client.py
-```
-This connects to `ws://localhost:8000/ws_pcm` using a random client ID.
-
-## Usage with User ID
-```bash
-python laptop_client.py --user-id john_doe
-```
-This connects to `ws://localhost:8000/ws_pcm?user_id=john_doe` and associates audio with the user "john_doe".
-
-## Advanced Options
-```bash
-python laptop_client.py --host 192.168.1.100 --port 8001 --endpoint /ws --user-id alice
-```
-This connects to `ws://192.168.1.100:8001/ws?user_id=alice`.
-
-## Command Line Arguments
-- `--host`: WebSocket server host (default: localhost)
-- `--port`: WebSocket server port (default: 8000)
-- `--endpoint`: WebSocket endpoint (default: /ws_pcm)
-- `--user-id`: User ID for audio session (optional)
-
-## Examples
-
-### Test with different users:
-```bash
-# Terminal 1 - User Alice
-python laptop_client.py --user-id alice
-
-# Terminal 2 - User Bob
-python laptop_client.py --user-id bob
-```
-
-### Connect to remote server:
-```bash
-python laptop_client.py --host your-server.com --user-id remote_user
-```
-
-## Backward Compatibility
-The client works exactly as before when no `--user-id` is provided, maintaining full backward compatibility with existing setups.
\ No newline at end of file
diff --git a/backends/advanced-backend/docker-compose.yml b/backends/advanced-backend/docker-compose.yml
index 1c5f1e7c..080877c6 100644
--- a/backends/advanced-backend/docker-compose.yml
+++ b/backends/advanced-backend/docker-compose.yml
@@ -8,12 +8,19 @@ services:
volumes:
- ./audio_chunks:/app/audio_chunks
- ./debug_dir:/app/debug_dir
+ - ./data:/app/data
environment:
- DEEPGRAM_API_KEY=${DEEPGRAM_API_KEY}
- OFFLINE_ASR_TCP_URI=${OFFLINE_ASR_TCP_URI}
- OLLAMA_BASE_URL=${OLLAMA_BASE_URL}
- HF_TOKEN=${HF_TOKEN}
- SPEAKER_SERVICE_URL=${SPEAKER_SERVICE_URL}
+ - ADMIN_PASSWORD=${ADMIN_PASSWORD}
+ - ADMIN_EMAIL=${ADMIN_EMAIL}
+ - AUTH_SECRET_KEY=${AUTH_SECRET_KEY}
+ - LLM_PROVIDER=${LLM_PROVIDER}
+ - OPENAI_API_KEY=${OPENAI_API_KEY}
+ - OPENAI_MODEL=${OPENAI_MODEL}
depends_on:
qdrant:
condition: service_started
@@ -24,17 +31,19 @@ services:
interval: 10s
timeout: 5s
retries: 5
- start_period: 15s
-
+ start_period: 5s
+ restart: unless-stopped
+
streamlit:
build:
- context: ./webui
- dockerfile: Dockerfile
+ context: .
+ dockerfile: Dockerfile.webui
ports:
- "8501:8501"
environment:
- BACKEND_API_URL=http://friend-backend:8000
- - BACKEND_PUBLIC_URL=http://localhost:8000
+ - BACKEND_PUBLIC_URL=http://100.99.62.5:8000 # Your BROWSER should be able to access this (Only for displaying audio)
+ - STREAMLIT_SERVER_ENABLE_CORS=false
depends_on:
friend-backend:
condition: service_healthy
@@ -42,8 +51,13 @@ services:
condition: service_started
qdrant:
condition: service_started
+
+ proxy:
+ image: nginx:alpine
+ depends_on: [friend-backend, streamlit]
volumes:
- - ./webui:/app
+ - ./nginx.conf:/etc/nginx/nginx.conf:ro
+ ports: ["80:80"] # publish once; ngrok points here
# speaker-recognition:
# build:
@@ -89,28 +103,35 @@ services:
# - driver: nvidia
# count: all
# capabilities: [gpu]
+
+
qdrant:
image: qdrant/qdrant:latest
ports:
- "6333:6333" # gRPC
- "6334:6334" # HTTP
volumes:
- - ./qdrant_data:/qdrant/storage # Qdrant will store its data in this named volume
+ - ./qdrant_data:/qdrant/storage
mongo:
image: mongo:4.4.18
ports:
- "27017:27017"
volumes:
- ./mongo_data:/data/db
- ngrok:
- image: ngrok/ngrok:latest
- ports:
- - "4040:4040" # Ngrok web interface
- environment:
- - NGROK_AUTHTOKEN=${NGROK_AUTHTOKEN}
- command: "http friend-backend:8000 --url=intelligent-hypervisor.ngrok.app"
- depends_on:
- - friend-backend
+
+ # Use tailscale instead
+ # UNCOMMENT OUT FOR LOCAL DEMO - EXPOSES to internet
+ # ngrok:
+ # image: ngrok/ngrok:latest
+ # depends_on: [friend-backend, proxy]
+ # ports:
+ # - "4040:4040" # Ngrok web interface
+ # environment:
+ # - NGROK_AUTHTOKEN=${NGROK_AUTHTOKEN}
+ # command: "http proxy:80 --url=${NGROK_URL} --basic-auth=${NGROK_BASIC_AUTH}"
+
+
+# Question: These are named volumes, but they are not being used, right? Can we remove them?
volumes:
ollama_data:
driver: local
diff --git a/backends/advanced-backend/memory_config.yaml b/backends/advanced-backend/memory_config.yaml
new file mode 100644
index 00000000..226c3987
--- /dev/null
+++ b/backends/advanced-backend/memory_config.yaml
@@ -0,0 +1,204 @@
+# Memory Extraction Configuration
+# This file controls how memories and facts are extracted from conversations
+
+# General memory extraction settings
+memory_extraction:
+ # Whether to extract general memories (conversation summaries, topics, etc.)
+ enabled: true
+
+ # Main prompt for memory extraction - MODIFIED to be more aggressive
+ prompt: "Extract key information from this conversation."
+
+ # LLM parameters for memory extraction
+ # Provider is controlled by LLM_PROVIDER environment variable (ollama/openai)
+ llm_settings:
+ temperature: 0.1 # Lower temperature for more consistent extraction
+ max_tokens: 2000
+ # Model selection based on provider:
+ # - Ollama: "gemma3n:e4b", "llama3.1:latest", "llama3.2:latest", etc.
+ # - OpenAI: "gpt-4o" (recommended for JSON reliability), "gpt-4o-mini", "gpt-3.5-turbo", etc.
+ #
+ # RECOMMENDATION: Use "gpt-4o" with OpenAI provider to minimize JSON parsing errors
+ # Set environment variables: LLM_PROVIDER=openai and OPENAI_MODEL=gpt-4o
+ # model: "gemma3n:e4b"
+ model: "gpt-4o"
+
+# Fact extraction settings (structured information)
+fact_extraction:
+ # Whether to extract structured facts separately from general memories
+ # ENABLED: Using proper fact extraction prompt format
+ enabled: true
+
+ # Prompt for extracting structured facts
+ prompt: |
+ Extract specific, verifiable facts from this conversation. Focus on:
+ - Names of people and their roles/titles
+ - Company names and organizations
+ - Dates and specific times
+ - Locations and addresses
+ - Numbers, quantities, and measurements
+ - Contact information (emails, phone numbers)
+ - Project names and code names
+ - Technical specifications or requirements
+
+ Return the facts in JSON format as an array of strings. If no specific facts are mentioned, return an empty JSON array [].
+
+ Examples of JSON output:
+ ["John Smith works as Software Engineer at Acme Corp", "Project deadline is December 15th, 2024", "Meeting scheduled for 2 PM EST on Monday", "Budget approved for $50,000"]
+
+ # LLM parameters for fact extraction
+ llm_settings:
+ temperature: 0.0 # Very low temperature for factual accuracy
+ max_tokens: 1500
+ # RECOMMENDATION: Use "gpt-4o" for more reliable JSON output
+ # model: "gemma3n:e4b" # Model based on LLM_PROVIDER (ollama/openai)
+ model: "gpt-4o"
+
+# Action item extraction settings
+action_item_extraction:
+ # Whether to extract action items from conversations
+ # RECOMMENDATION: Works best with OpenAI GPT-4o for reliable JSON parsing
+ enabled: true
+
+ # Trigger phrases that indicate action items in conversation
+ trigger_phrases:
+ - "simon says"
+ - "action item"
+ - "todo"
+ - "follow up"
+ - "next step"
+ - "homework"
+ - "deliverable"
+ - "task"
+ - "assignment"
+ - "need to"
+ - "should do"
+ - "remember to"
+
+ # Prompt for extracting action items
+ prompt: |
+ Extract action items from this conversation. Look for tasks, assignments, or things that need to be done.
+
+ Return a JSON array where each item has:
+ - description: What needs to be done
+ - assignee: Who should do it ("unassigned" if unclear)
+ - due_date: When it should be done ("not_specified" if not mentioned)
+ - priority: high/medium/low/not_specified
+ - context: Why or how this task came up
+ - tool: Required tool ("check_email", "set_alarm", "none")
+
+ Return only valid JSON. No explanations or extra text.
+
+ # LLM parameters for action item extraction
+ llm_settings:
+ temperature: 0.1
+ max_tokens: 1000
+ # RECOMMENDATION: Use "gpt-4o" for reliable JSON output in action items
+ model: "gpt-4o" # Model based on LLM_PROVIDER (ollama/openai)
+
+# Memory categorization settings
+categorization:
+ # Whether to automatically categorize memories
+ enabled: true
+
+ # Predefined categories
+ categories:
+ - personal
+ - work
+ - meeting
+ - project
+ - learning
+ - social
+ - health
+ - finance
+ - travel
+ - other
+
+ # Prompt for categorizing memories
+ prompt: |
+ Categorize this conversation into one or more of these categories:
+ personal, work, meeting, project, learning, social, health, finance, travel, other
+
+ Return only the category names, comma-separated.
+ Examples: "work, meeting" or "personal, health" or "project"
+
+ # LLM parameters for categorization
+ llm_settings:
+ temperature: 0.2
+ max_tokens: 100
+ # model: "gemma3n:e4b" # Model based on LLM_PROVIDER (ollama/openai)
+ model: "gpt-4o" # Model based on LLM_PROVIDER (ollama/openai)
+
+# Quality control settings
+quality_control:
+ # Minimum conversation length (in characters) to process
+ # MODIFIED: Reduced from 50 to 1 to process almost all transcripts
+ min_conversation_length: 1
+
+ # Maximum conversation length (in characters) to process
+ max_conversation_length: 50000
+
+ # Whether to skip conversations that are mostly silence/filler
+ # MODIFIED: Disabled to ensure all transcripts are processed
+ skip_low_content: false
+
+ # Minimum meaningful content ratio (0.0-1.0)
+ # MODIFIED: Reduced to 0.0 to process all content
+ min_content_ratio: 0.0
+
+ # Skip conversations with these patterns
+ # MODIFIED: Removed most patterns to ensure all transcripts are processed
+ skip_patterns:
+ # Only skip completely empty patterns - removed test patterns to ensure all content is processed
+ []
+
+# Processing settings
+processing:
+ # Whether to process memories in parallel
+ parallel_processing: true
+
+ # Maximum number of concurrent processing tasks - reduced to avoid overwhelming Ollama
+ max_concurrent_tasks: 1
+
+ # Timeout for memory processing (seconds) - generous timeout for Ollama processing
+ processing_timeout: 600
+
+ # Whether to retry failed extractions
+ retry_failed: true
+
+ # Maximum number of retries
+ max_retries: 2
+
+ # Delay between retries (seconds)
+ retry_delay: 5
+
+# Storage settings
+storage:
+ # Whether to store detailed extraction metadata
+ store_metadata: true
+
+ # Whether to store the original prompts used
+ store_prompts: true
+
+ # Whether to store LLM responses
+ store_llm_responses: true
+
+ # Whether to store processing timing information
+ store_timing: true
+
+# Debug settings
+debug:
+ # Whether to enable debug tracking
+ enabled: true
+
+ # Debug database path
+ db_path: "/app/debug/memory_debug.db"
+
+ # Log level for memory processing
+ log_level: "INFO" # DEBUG, INFO, WARNING, ERROR
+
+ # Whether to log full conversations (privacy consideration)
+ log_full_conversations: false
+
+ # Whether to log extracted memories
+ log_extracted_memories: true
\ No newline at end of file
diff --git a/backends/advanced-backend/pyproject.toml b/backends/advanced-backend/pyproject.toml
index e299c056..2c443408 100644
--- a/backends/advanced-backend/pyproject.toml
+++ b/backends/advanced-backend/pyproject.toml
@@ -1,13 +1,13 @@
[project]
name = "advanced-omi-backend"
version = "0.1.0"
-description = "Add your description here"
+description = "AI-powered wearable ecosystem for audio capture, transcription, and memory extraction"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
- "easy-audio-interfaces>=0.5.1",
+ "easy-audio-interfaces>=0.7.1",
"fastapi>=0.115.12",
- "mem0ai>=0.1.111",
+ "mem0ai>=0.1.114",
"motor>=3.7.1",
"ollama>=0.4.8",
"omi-sdk>=0.1.5",
@@ -15,15 +15,19 @@ dependencies = [
"uvicorn>=0.34.2",
"wyoming>=1.6.1",
"aiohttp>=3.8.0",
+ "fastapi-users[beanie]>=14.0.1",
+ "PyYAML>=6.0.1",
]
-[dependency-groups]
+[project.optional-dependencies]
deepgram = [
"deepgram-sdk>=4.0.0",
]
dev = [
"black>=25.1.0",
"isort>=6.0.1",
+ "pytest>=8.4.1",
+ "pytest-asyncio>=1.0.0",
]
webui = [
"streamlit>=1.45.1",
@@ -33,6 +37,20 @@ tests = [
"pytest-asyncio>=1.0.0",
]
+[build-system]
+requires = ["setuptools>=61.0", "wheel"]
+build-backend = "setuptools.build_meta"
+
+[tool.setuptools]
+package-dir = {"" = "src"}
+
+[tool.setuptools.packages.find]
+where = ["src"]
+
+
[tool.isort]
profile = "black"
+
+[tool.black]
+line-length = 100
diff --git a/backends/advanced-backend/run_with_deepgram.sh b/backends/advanced-backend/run_with_deepgram.sh
deleted file mode 100755
index 3eaec73a..00000000
--- a/backends/advanced-backend/run_with_deepgram.sh
+++ /dev/null
@@ -1,15 +0,0 @@
-#!/bin/bash
-
-echo "Installing Deepgram SDK for advanced transcription features..."
-
-# Check if uv is available
-if command -v uv &> /dev/null; then
- echo "Using uv to install Deepgram SDK..."
- uv sync --group deepgram
-else
- echo "uv not found, using pip..."
- pip install deepgram-sdk
-fi
-
-echo "Deepgram SDK installation complete!"
-echo "Don't forget to set your DEEPGRAM_API_KEY environment variable."
\ No newline at end of file
diff --git a/backends/advanced-backend/src/enroll_speaker.py b/backends/advanced-backend/scripts/enroll_speaker.py
similarity index 85%
rename from backends/advanced-backend/src/enroll_speaker.py
rename to backends/advanced-backend/scripts/enroll_speaker.py
index d6bbc456..1993a579 100644
--- a/backends/advanced-backend/src/enroll_speaker.py
+++ b/backends/advanced-backend/scripts/enroll_speaker.py
@@ -3,20 +3,20 @@
Speaker enrollment script for the OMI backend.
This script helps enroll speakers by:
-1. Recording audio from microphone
+1. Recording audio from microphone
2. Using existing audio files
3. Calling the enrollment API
Usage examples:
# Enroll from an existing audio file
python enroll_speaker.py --id john_doe --name "John Doe" --file audio_chunks/sample.wav
-
+
# Enroll from a specific segment of an audio file
python enroll_speaker.py --id jane_smith --name "Jane Smith" --file audio_chunks/sample.wav --start 10.0 --end 15.0
-
+
# Record new audio for enrollment (requires microphone)
python enroll_speaker.py --id bob_jones --name "Bob Jones" --record --duration 5.0
-
+
# List enrolled speakers
python enroll_speaker.py --list
"""
@@ -39,22 +39,30 @@
DEFAULT_HOST = "localhost"
DEFAULT_PORT = 8000
-async def enroll_speaker_api(host: str, port: int, speaker_id: str, speaker_name: str,
- audio_file_path: str, start_time=None, end_time=None):
+
+async def enroll_speaker_api(
+ host: str,
+ port: int,
+ speaker_id: str,
+ speaker_name: str,
+ audio_file_path: str,
+ start_time=None,
+ end_time=None,
+):
"""Call the API to enroll a speaker."""
url = f"http://{host}:{port}/api/speakers/enroll"
-
+
data = {
"speaker_id": speaker_id,
"speaker_name": speaker_name,
- "audio_file_path": audio_file_path
+ "audio_file_path": audio_file_path,
}
-
+
if start_time is not None:
data["start_time"] = start_time
if end_time is not None:
data["end_time"] = end_time
-
+
async with aiohttp.ClientSession() as session:
async with session.post(url, json=data) as response:
result = await response.json()
@@ -65,10 +73,11 @@ async def enroll_speaker_api(host: str, port: int, speaker_id: str, speaker_name
                logger.error(f"❌ Failed to enroll speaker: {result}")
return False
+
async def list_speakers_api(host: str, port: int):
"""List all enrolled speakers."""
url = f"http://{host}:{port}/api/speakers"
-
+
async with aiohttp.ClientSession() as session:
async with session.get(url) as response:
result = await response.json()
@@ -80,8 +89,9 @@ async def list_speakers_api(host: str, port: int):
for speaker in speakers:
enrolled_time = ""
if speaker.get("enrolled_at"):
- enrolled_time = time.strftime("%Y-%m-%d %H:%M:%S",
- time.localtime(speaker["enrolled_at"]))
+ enrolled_time = time.strftime(
+ "%Y-%m-%d %H:%M:%S", time.localtime(speaker["enrolled_at"])
+ )
print(f"ID: {speaker['id']}")
print(f"Name: {speaker['name']}")
print(f"Audio File: {speaker.get('audio_file_path', 'N/A')}")
@@ -94,19 +104,20 @@ async def list_speakers_api(host: str, port: int):
                logger.error(f"❌ Failed to list speakers: {result}")
return False
-async def identify_speaker_api(host: str, port: int, audio_file_path: str, start_time=None, end_time=None):
+
+async def identify_speaker_api(
+ host: str, port: int, audio_file_path: str, start_time=None, end_time=None
+):
"""Test speaker identification."""
url = f"http://{host}:{port}/api/speakers/identify"
-
- data = {
- "audio_file_path": audio_file_path
- }
-
+
+ data = {"audio_file_path": audio_file_path}
+
if start_time is not None:
data["start_time"] = start_time
if end_time is not None:
data["end_time"] = end_time
-
+
async with aiohttp.ClientSession() as session:
async with session.post(url, json=data) as response:
result = await response.json()
@@ -122,52 +133,56 @@ async def identify_speaker_api(host: str, port: int, audio_file_path: str, start
                logger.error(f"❌ Failed to identify speaker: {result}")
return False
+
def record_audio(duration: float, output_file: Path):
"""Record audio from microphone."""
try:
import numpy as np
import sounddevice as sd
import soundfile as sf
-
+
logger.info(f"๐ค Recording audio for {duration} seconds...")
logger.info("๐ก Speak clearly into your microphone now!")
-
+
# Record audio
sample_rate = 16000 # Same as backend configuration
- audio_data = sd.rec(int(duration * sample_rate), samplerate=sample_rate, channels=1, dtype=np.float32)
+ audio_data = sd.rec(
+ int(duration * sample_rate), samplerate=sample_rate, channels=1, dtype=np.float32
+ )
sd.wait() # Wait until recording is finished
-
+
# Save to file
sf.write(output_file, audio_data, sample_rate)
logger.info(f"โ
Audio saved to: {output_file}")
return True
-
+
except Exception as e:
        logger.error(f"❌ Failed to record audio: {e}")
return False
+
async def main():
parser = argparse.ArgumentParser(description="Speaker enrollment for OMI backend")
parser.add_argument("--host", default=DEFAULT_HOST, help="Server host")
parser.add_argument("--port", type=int, default=DEFAULT_PORT, help="Server port")
-
+
# Speaker enrollment options
parser.add_argument("--id", help="Speaker ID (unique identifier)")
parser.add_argument("--name", help="Speaker name (human readable)")
parser.add_argument("--file", help="Audio file path (relative to audio_chunks/)")
parser.add_argument("--start", type=float, help="Start time in seconds")
parser.add_argument("--end", type=float, help="End time in seconds")
-
+
# Recording option
parser.add_argument("--record", action="store_true", help="Record new audio")
parser.add_argument("--duration", type=float, default=5.0, help="Recording duration in seconds")
-
+
# Utility options
parser.add_argument("--list", action="store_true", help="List enrolled speakers")
parser.add_argument("--identify", help="Test speaker identification on audio file")
-
+
args = parser.parse_args()
-
+
# Check server connection
try:
response = requests.get(f"http://{args.host}:{args.port}/health", timeout=5)
@@ -178,42 +193,40 @@ async def main():
        logger.error(f"❌ Cannot connect to server at {args.host}:{args.port}")
logger.error(" Make sure the backend is running!")
return
-
+
    logger.info(f"✅ Connected to server at {args.host}:{args.port}")
-
+
# Handle different operations
if args.list:
await list_speakers_api(args.host, args.port)
-
+
elif args.identify:
await identify_speaker_api(args.host, args.port, args.identify, args.start, args.end)
-
+
elif args.record:
if not args.id or not args.name:
            logger.error("❌ --id and --name are required for recording")
return
-
+
# Generate filename based on speaker ID and timestamp
timestamp = int(time.time())
audio_file = Path(f"speaker_enrollment_{args.id}_{timestamp}.wav")
-
+
if record_audio(args.duration, audio_file):
# Enroll speaker using recorded audio
await enroll_speaker_api(
- args.host, args.port, args.id, args.name,
- str(audio_file), args.start, args.end
+ args.host, args.port, args.id, args.name, str(audio_file), args.start, args.end
)
-
+
elif args.file:
if not args.id or not args.name:
logger.error("❌ --id and --name are required for enrollment")
return
-
+
await enroll_speaker_api(
- args.host, args.port, args.id, args.name,
- args.file, args.start, args.end
+ args.host, args.port, args.id, args.name, args.file, args.start, args.end
)
-
+
else:
parser.print_help()
print("\n💡 Quick start examples:")
@@ -221,10 +234,11 @@ async def main():
print(f" python {parser.prog} --list")
print(f" ")
print(f" # Enroll from existing audio:")
- print(f" python {parser.prog} --id alice --name \"Alice\" --file sample.wav")
+ print(f' python {parser.prog} --id alice --name "Alice" --file sample.wav')
print(f" ")
print(f" # Record and enroll:")
- print(f" python {parser.prog} --id bob --name \"Bob\" --record --duration 5")
+ print(f' python {parser.prog} --id bob --name "Bob" --record --duration 5')
+
if __name__ == "__main__":
- asyncio.run(main())
\ No newline at end of file
+ asyncio.run(main())
diff --git a/backends/advanced-backend/scripts/laptop_client.py b/backends/advanced-backend/scripts/laptop_client.py
new file mode 100644
index 00000000..09a676e3
--- /dev/null
+++ b/backends/advanced-backend/scripts/laptop_client.py
@@ -0,0 +1,193 @@
+import argparse
+import asyncio
+import json
+import logging
+
+import aiohttp
+import websockets
+import websockets.exceptions
+from easy_audio_interfaces.extras.local_audio import InputMicStream
+
+logger = logging.getLogger(__name__)
+logging.basicConfig(level=logging.INFO)
+
+# Default WebSocket settings
+DEFAULT_HOST = "localhost"
+DEFAULT_PORT = 8000
+DEFAULT_ENDPOINT = "/ws_pcm"
+
+
+def build_websocket_uri(
+ host: str, port: int, endpoint: str, token: str | None = None, device_name: str = "laptop"
+) -> str:
+ """Build WebSocket URI with JWT token authentication."""
+ base_uri = f"ws://{host}:{port}{endpoint}"
+ params = []
+ if token:
+ params.append(f"token={token}")
+ if device_name:
+ params.append(f"device_name={device_name}")
+
+ if params:
+ base_uri += "?" + "&".join(params)
+ return base_uri
+
+
+async def authenticate_with_credentials(host: str, port: int, username: str, password: str) -> str:
+ """Authenticate with username/password and return JWT token."""
+ auth_url = f"http://{host}:{port}/auth/jwt/login"
+
+ # Prepare form data for authentication
+ form_data = aiohttp.FormData()
+ form_data.add_field("username", username)
+ form_data.add_field("password", password)
+
+ try:
+ async with aiohttp.ClientSession() as session:
+ async with session.post(auth_url, data=form_data) as response:
+ if response.status == 200:
+ result = await response.json()
+ token = result.get("access_token")
+ if token:
+ logger.info(f"Successfully authenticated user '{username}'")
+ return token
+ else:
+ raise Exception("No access token received from server")
+ elif response.status == 400:
+ error_detail = await response.text()
+ raise Exception(f"Authentication failed: Invalid credentials - {error_detail}")
+ else:
+ error_detail = await response.text()
+ raise Exception(
+ f"Authentication failed with status {response.status}: {error_detail}"
+ )
+ except aiohttp.ClientError as e:
+ raise Exception(f"Failed to connect to authentication server: {e}")
+
+
+def validate_auth_args(args):
+ """Validate that exactly one authentication method is provided."""
+ has_token = bool(args.token)
+ has_credentials = bool(args.username and args.password)
+
+ if not has_token and not has_credentials:
+ raise ValueError(
+ "Authentication required: Please provide either --token OR both --username and --password"
+ )
+
+ if has_token and has_credentials:
+ raise ValueError(
+ "Conflicting authentication methods: Please provide either --token OR --username/--password, not both"
+ )
+
+ if args.username and not args.password:
+ raise ValueError(
+ "Username provided but password missing: Both --username and --password are required"
+ )
+
+ if args.password and not args.username:
+ raise ValueError(
+ "Password provided but username missing: Both --username and --password are required"
+ )
+
+
+async def main():
+ # Parse command line arguments
+ parser = argparse.ArgumentParser(
+ description="Laptop audio client for OMI backend with dual authentication modes"
+ )
+ parser.add_argument("--host", default=DEFAULT_HOST, help="WebSocket server host")
+ parser.add_argument("--port", type=int, default=DEFAULT_PORT, help="WebSocket server port")
+ parser.add_argument("--endpoint", default=DEFAULT_ENDPOINT, help="WebSocket endpoint")
+
+ # Authentication options (mutually exclusive)
+ auth_group = parser.add_argument_group("authentication", "Choose one authentication method")
+ auth_group.add_argument("--token", help="JWT authentication token")
+ auth_group.add_argument("--username", help="Username for login authentication")
+ auth_group.add_argument("--password", help="Password for login authentication")
+
+ parser.add_argument(
+ "--device-name", default="laptop", help="Device name for client identification"
+ )
+ args = parser.parse_args()
+
+ # Validate authentication arguments
+ try:
+ validate_auth_args(args)
+ except ValueError as e:
+ logger.error(f"Authentication error: {e}")
+ parser.print_help()
+ return
+
+ # Get or obtain authentication token
+ token = None
+
+ if args.token:
+ # Use provided token directly
+ token = args.token
+ print(
+ f"Using provided JWT token: {token[:20]}...{token[-10:] if len(token) > 30 else token}"
+ )
+
+ elif args.username and args.password:
+ # Authenticate with username/password to get token
+ print(f"Authenticating with username: {args.username}")
+ try:
+ token = await authenticate_with_credentials(
+ args.host, args.port, args.username, args.password
+ )
+ print(
+ f"Authentication successful! Received token: {token[:20]}...{token[-10:] if len(token) > 30 else token}"
+ )
+ except Exception as e:
+ logger.error(f"Authentication failed: {e}")
+ return
+
+ # Build WebSocket URI
+ ws_uri = build_websocket_uri(args.host, args.port, args.endpoint, token, args.device_name)
+ print(f"Connecting to {ws_uri}")
+ print(f"Using device name: {args.device_name}")
+
+ try:
+ async with websockets.connect(ws_uri) as websocket:
+ print("Connected to WebSocket")
+
+ async def send_audio():
+ """Capture audio from microphone and send raw PCM bytes over WebSocket"""
+ async with InputMicStream(chunk_size=512) as stream:
+ while True:
+ try:
+ data = await stream.read()
+ if data and data.audio:
+ # Send raw PCM bytes directly to WebSocket
+ await websocket.send(data.audio)
+ logger.debug(f"Sent audio chunk: {len(data.audio)} bytes")
+ await asyncio.sleep(0.01) # Small delay to prevent overwhelming
+ except websockets.exceptions.ConnectionClosed:
+ logger.info("WebSocket connection closed during audio sending")
+ break
+ except Exception as e:
+ logger.error(f"Error sending audio: {e}")
+ break
+
+ async def receive_messages():
+ """Receive any messages from the WebSocket server"""
+ try:
+ async for message in websocket:
+ print(f"Received message: {message}")
+ except websockets.exceptions.ConnectionClosed:
+ logger.info("WebSocket connection closed during message receiving")
+ except Exception as e:
+ logger.error(f"Error receiving messages: {e}")
+
+ # Run both audio sending and message receiving concurrently
+ await asyncio.gather(send_audio(), receive_messages())
+
+ except ConnectionRefusedError:
+ logger.error(f"Could not connect to {ws_uri}. Make sure the server is running.")
+ except Exception as e:
+ logger.error(f"Error connecting to WebSocket: {e}")
+
+
+if __name__ == "__main__":
+ asyncio.run(main())
diff --git a/backends/advanced-backend/src/advanced_omi_backend/__init__.py b/backends/advanced-backend/src/advanced_omi_backend/__init__.py
new file mode 100644
index 00000000..46178127
--- /dev/null
+++ b/backends/advanced-backend/src/advanced_omi_backend/__init__.py
@@ -0,0 +1,7 @@
+"""Advanced OMI Backend - AI-powered wearable ecosystem for audio capture and processing."""
+
+__version__ = "0.1.0"
+
+from .database import AudioChunksCollection
+
+__all__ = ["AudioChunksCollection"]
diff --git a/backends/advanced-backend/src/action_items_service.py b/backends/advanced-backend/src/advanced_omi_backend/action_items_service.py
similarity index 59%
rename from backends/advanced-backend/src/action_items_service.py
rename to backends/advanced-backend/src/advanced_omi_backend/action_items_service.py
index 6f9ef342..b9cbe59b 100644
--- a/backends/advanced-backend/src/action_items_service.py
+++ b/backends/advanced-backend/src/advanced_omi_backend/action_items_service.py
@@ -1,30 +1,42 @@
-import time
+import asyncio
import json
-from typing import List, Dict, Any, Optional
-from datetime import datetime
-from motor.motor_asyncio import AsyncIOMotorCollection
import logging
+import re
+import time
+from concurrent.futures import ThreadPoolExecutor
+from datetime import datetime
+from typing import Any, Dict, List, Optional
+
import ollama
+from motor.motor_asyncio import AsyncIOMotorCollection
# Set up logging
action_items_logger = logging.getLogger("action_items")
+# Timeout configurations
+OLLAMA_TIMEOUT_SECONDS = 30 # Timeout for Ollama operations
+EXTRACTION_TIMEOUT_SECONDS = 45 # Timeout for action item extraction
+
+# Thread pool for blocking operations
+_ACTION_EXECUTOR = ThreadPoolExecutor(max_workers=2, thread_name_prefix="action_ops")
+
+
class ActionItemsService:
"""
MongoDB-based action items service with full CRUD operations.
Replaces the Mem0-based implementation for better update capabilities.
"""
-
+
def __init__(self, collection: AsyncIOMotorCollection, ollama_client: ollama.Client):
self.collection = collection
self.ollama_client = ollama_client
self._initialized = False
-
+
async def initialize(self):
"""Initialize the service and create indexes for performance."""
if self._initialized:
return
-
+
try:
# Create indexes for better query performance
await self.collection.create_index([("user_id", 1), ("created_at", -1)])
@@ -32,40 +44,117 @@ async def initialize(self):
await self.collection.create_index([("user_id", 1), ("assignee", 1)])
await self.collection.create_index([("audio_uuid", 1)])
await self.collection.create_index([("description", "text")]) # Text search index
-
+
self._initialized = True
action_items_logger.info("Action items service initialized with MongoDB")
except Exception as e:
action_items_logger.error(f"Failed to initialize action items service: {e}")
raise
-
- async def extract_and_store_action_items(self, transcript: str, client_id: str, audio_uuid: str) -> int:
+
+ async def process_transcript_for_action_items(
+ self, transcript_text: str, client_id: str, audio_uuid: str, user_id: str, user_email: str
+ ) -> int:
"""
- Extract action items from transcript and store them in MongoDB.
- Returns the number of action items extracted and stored.
+ Process a transcript segment for action items with special keyphrase detection.
+
+ This method:
+ - Checks for the special keyphrase 'Simon says' (case-insensitive)
+ - If found, processes the modified text for action item extraction
+ - Returns the number of action items extracted and stored
"""
if not self._initialized:
await self.initialize()
-
+
try:
- # Extract action items from the transcript
- action_items = await self._extract_action_items_from_transcript(transcript, client_id, audio_uuid)
-
- if not action_items:
- action_items_logger.info(f"No action items found in transcript for {audio_uuid}")
+            # Check for the special keyphrase 'Simon says' (case-insensitive)
+ keyphrase_pattern = re.compile(r"\bSimon says\b", re.IGNORECASE)
+
+ if keyphrase_pattern.search(transcript_text):
+                # Normalize the keyphrase's capitalization to "Simon says"
+ modified_text = keyphrase_pattern.sub("Simon says", transcript_text)
+ action_items_logger.info(
+                    f"🔑 'Simon says' keyphrase detected in transcript for {audio_uuid}. Extracting action items from: '{modified_text.strip()}'"
+ )
+
+ try:
+ action_item_count = await self.extract_and_store_action_items(
+ modified_text.strip(), client_id, audio_uuid, user_id, user_email
+ )
+ if action_item_count > 0:
+ action_items_logger.info(
+                        f"🎯 Extracted {action_item_count} action items from 'Simon says' transcript segment for {audio_uuid}"
+ )
+ else:
+ action_items_logger.debug(
+                        f"ℹ️ No action items found in 'Simon says' transcript segment for {audio_uuid}"
+ )
+ return action_item_count
+ except Exception as e:
+ action_items_logger.error(
+                    f"❌ Error processing 'Simon says' action items for transcript segment in {audio_uuid}: {e}"
+ )
+ return 0
+ else:
+ # No keyphrase found, no action items to extract
+ action_items_logger.debug(
+ f"No 'Simon says' keyphrase found in transcript for {audio_uuid}"
+ )
return 0
-
- # Store action items in MongoDB
- success_count = await self._store_action_items(action_items, client_id, audio_uuid)
-
- action_items_logger.info(f"Successfully extracted and stored {success_count}/{len(action_items)} action items for {audio_uuid}")
- return success_count
-
+
+ except Exception as e:
+ action_items_logger.error(
+ f"Error processing transcript for action items in {audio_uuid}: {e}"
+ )
+ return 0
+
+ async def extract_and_store_action_items(
+ self, transcript: str, client_id: str, audio_uuid: str, user_id: str, user_email: str
+ ) -> int:
+ """
+ Extract action items from transcript and store them in MongoDB with timeout protection.
+ Returns the number of action items extracted and stored.
+ """
+ if not self._initialized:
+ await self.initialize()
+
+ try:
+ # Extract and store action items with overall timeout
+ async def _extract_and_store():
+ # Extract action items from the transcript
+ action_items = await self._extract_action_items_from_transcript(
+ transcript, client_id, audio_uuid
+ )
+
+ if not action_items:
+ action_items_logger.info(
+ f"No action items found in transcript for {audio_uuid}"
+ )
+ return 0
+
+ # Store action items in MongoDB
+ success_count = await self._store_action_items(
+ action_items, client_id, audio_uuid, user_id, user_email
+ )
+
+ action_items_logger.info(
+ f"Successfully extracted and stored {success_count}/{len(action_items)} action items for {audio_uuid}"
+ )
+ return success_count
+
+ return await asyncio.wait_for(_extract_and_store(), timeout=EXTRACTION_TIMEOUT_SECONDS)
+
+ except asyncio.TimeoutError:
+ action_items_logger.error(
+ f"Action item extraction and storage timed out after {EXTRACTION_TIMEOUT_SECONDS}s for {audio_uuid}"
+ )
+ return 0
except Exception as e:
action_items_logger.error(f"Error extracting action items for {audio_uuid}: {e}")
return 0
-
- async def _extract_action_items_from_transcript(self, transcript: str, client_id: str, audio_uuid: str) -> List[Dict[str, Any]]:
+
+ async def _extract_action_items_from_transcript(
+ self, transcript: str, client_id: str, audio_uuid: str
+ ) -> List[Dict[str, Any]]:
"""Extract action items from transcript using Ollama."""
try:
extraction_prompt = f"""
@@ -96,65 +185,112 @@ async def _extract_action_items_from_transcript(self, transcript: str, client_id
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
"""
- response = self.ollama_client.generate(
- model="llama3.1:latest",
- prompt=extraction_prompt,
- options={"temperature": 0.1}
- )
-
- response_text = response['response'].strip()
+
+ # Use Ollama API
+ if self.ollama_client is None:
+ action_items_logger.error(f"No Ollama client available for {audio_uuid}")
+                return []
+
+            def _ollama_generate():
+ return self.ollama_client.generate(
+ model="llama3.1:latest", prompt=extraction_prompt, options={"temperature": 0.1}
+ )
+
+ loop = asyncio.get_running_loop()
+ response = await asyncio.wait_for(
+ loop.run_in_executor(_ACTION_EXECUTOR, _ollama_generate),
+ timeout=OLLAMA_TIMEOUT_SECONDS,
+ )
+
+ if response is None or "response" not in response:
+ action_items_logger.error(f"Invalid Ollama response for {audio_uuid}")
+ return []
+
+ response_text = response["response"].strip()
+
# Handle empty responses
- if not response_text or response_text.lower() in ['none', 'no action items', '[]']:
+ if not response_text or response_text.lower() in ["none", "no action items", "[]"]:
return []
-
+
# Parse JSON response
action_items = json.loads(response_text)
-
+
# Validate response format
if not isinstance(action_items, list):
- action_items_logger.warning(f"Action item extraction returned non-list for {audio_uuid}: {type(action_items)}")
+ action_items_logger.warning(
+ f"Action item extraction returned non-list for {audio_uuid}: {type(action_items)}"
+ )
return []
-
+
# Enrich each action item with metadata
for i, item in enumerate(action_items):
- item.update({
- "id": f"action_{audio_uuid}_{i}_{int(time.time())}",
- "status": "open",
- "created_at": int(time.time()),
- "updated_at": int(time.time()),
- "source": "transcript_extraction"
- })
-
+ item.update(
+ {
+ "id": f"action_{audio_uuid}_{i}_{int(time.time())}",
+ "status": "open",
+ "created_at": int(time.time()),
+ "updated_at": int(time.time()),
+ "source": "transcript_extraction",
+ }
+ )
+
# TODO: Handle all tools here, these can be imported from other files
- # Handle set_alarm tool, this can be another llm call to mcp with description as input
+ # Handle set_alarm tool, this can be another llm call to mcp with description as input
# Also handle sending notification via app or TTS
if item.get("tool") == "set_alarm":
description = item.get("description", "")
- action_items_logger.info(f"Calling set alarm service with description: {description}")
-
- action_items_logger.info(f"Extracted {len(action_items)} action items from {audio_uuid}")
+ action_items_logger.info(
+ f"Calling set alarm service with description: {description}"
+ )
+
+ action_items_logger.info(
+ f"Extracted {len(action_items)} action items from {audio_uuid}"
+ )
return action_items
-
+
+ except asyncio.TimeoutError:
+ action_items_logger.error(
+ f"Action item extraction timed out after {OLLAMA_TIMEOUT_SECONDS}s for {audio_uuid}"
+ )
+ return []
except json.JSONDecodeError as e:
action_items_logger.error(f"Failed to parse action items JSON for {audio_uuid}: {e}")
return []
except Exception as e:
- action_items_logger.error(f"Error extracting action items from transcript for {audio_uuid}: {e}")
+ action_items_logger.error(
+ f"Error extracting action items from transcript for {audio_uuid}: {e}"
+ )
return []
-
- async def _store_action_items(self, action_items: List[Dict[str, Any]], client_id: str, audio_uuid: str) -> int:
- """Store action items in MongoDB."""
+
+ async def _store_action_items(
+ self,
+ action_items: List[Dict[str, Any]],
+ client_id: str,
+ audio_uuid: str,
+ user_id: str,
+ user_email: str,
+ ) -> int:
+ """Store action items in MongoDB.
+
+ Args:
+ action_items: List of action item dictionaries
+ client_id: The client ID that generated the audio
+ audio_uuid: Unique identifier for the audio
+ user_id: Database user ID to associate the action items with
+ user_email: User email for identification
+ """
try:
if not action_items:
return 0
-
+
# Prepare documents for insertion
documents = []
for item in action_items:
document = {
"action_item_id": item.get("id"),
- "user_id": client_id,
+ "user_id": user_id, # Use database user_id instead of client_id
+ "client_id": client_id, # Store client_id for reference
+ "user_email": user_email, # Store user email for easy identification
"audio_uuid": audio_uuid,
"description": item.get("description", ""),
"assignee": item.get("assignee", "unassigned"),
@@ -164,53 +300,59 @@ async def _store_action_items(self, action_items: List[Dict[str, Any]], client_i
"context": item.get("context", ""),
"source": item.get("source", "transcript_extraction"),
"created_at": item.get("created_at", int(time.time())),
- "updated_at": item.get("updated_at", int(time.time()))
+ "updated_at": item.get("updated_at", int(time.time())),
}
documents.append(document)
-
+
# Insert all action items
result = await self.collection.insert_many(documents)
success_count = len(result.inserted_ids)
-
+
action_items_logger.info(f"Stored {success_count} action items for {audio_uuid}")
return success_count
-
+
except Exception as e:
action_items_logger.error(f"Error storing action items for {audio_uuid}: {e}")
return 0
-
- async def get_action_items(self, user_id: str, limit: int = 50, status_filter: Optional[str] = None) -> List[Dict[str, Any]]:
+
+ async def get_action_items(
+ self, user_id: str, limit: int = 50, status_filter: Optional[str] = None
+ ) -> List[Dict[str, Any]]:
"""Get action items for a user with optional status filtering."""
if not self._initialized:
await self.initialize()
-
+
try:
# Build query filter
query = {"user_id": user_id}
if status_filter:
query["status"] = status_filter
-
+
# Execute query with sorting (newest first)
cursor = self.collection.find(query).sort("created_at", -1).limit(limit)
action_items = []
-
+
async for doc in cursor:
# Convert MongoDB ObjectId to string and remove it
doc["_id"] = str(doc["_id"])
action_items.append(doc)
-
- action_items_logger.info(f"Retrieved {len(action_items)} action items for user {user_id} (status_filter: {status_filter})")
+
+ action_items_logger.info(
+ f"Retrieved {len(action_items)} action items for user {user_id} (status_filter: {status_filter})"
+ )
return action_items
-
+
except Exception as e:
action_items_logger.error(f"Error fetching action items for user {user_id}: {e}")
return []
-
- async def update_action_item_status(self, action_item_id: str, new_status: str, user_id: Optional[str] = None) -> bool:
+
+ async def update_action_item_status(
+ self, action_item_id: str, new_status: str, user_id: Optional[str] = None
+ ) -> bool:
"""Update the status of an action item."""
if not self._initialized:
await self.initialize()
-
+
try:
# Build query - use action_item_id or _id
query = {}
@@ -219,41 +361,43 @@ async def update_action_item_status(self, action_item_id: str, new_status: str,
else:
# Assume it's a MongoDB ObjectId
from bson import ObjectId
+
try:
query["_id"] = ObjectId(action_item_id)
except:
query["action_item_id"] = action_item_id
-
+
# Add user_id to query if provided for additional security
if user_id:
query["user_id"] = user_id
-
+
# Update the document
- update_data = {
- "$set": {
- "status": new_status,
- "updated_at": int(time.time())
- }
- }
-
+ update_data = {"$set": {"status": new_status, "updated_at": int(time.time())}}
+
result = await self.collection.update_one(query, update_data)
-
+
if result.modified_count > 0:
- action_items_logger.info(f"Updated action item {action_item_id} status to {new_status}")
+ action_items_logger.info(
+ f"Updated action item {action_item_id} status to {new_status}"
+ )
return True
else:
action_items_logger.warning(f"No action item found with id {action_item_id}")
return False
-
+
except Exception as e:
- action_items_logger.error(f"Error updating action item status for {action_item_id}: {e}")
+ action_items_logger.error(
+ f"Error updating action item status for {action_item_id}: {e}"
+ )
return False
-
- async def search_action_items(self, query: str, user_id: str, limit: int = 20) -> List[Dict[str, Any]]:
+
+ async def search_action_items(
+ self, query: str, user_id: str, limit: int = 20
+ ) -> List[Dict[str, Any]]:
"""Search action items by text query using MongoDB text search."""
if not self._initialized:
await self.initialize()
-
+
try:
# Use MongoDB text search if available, otherwise regex search
search_query = {
@@ -261,29 +405,31 @@ async def search_action_items(self, query: str, user_id: str, limit: int = 20) -
"$or": [
{"description": {"$regex": query, "$options": "i"}},
{"context": {"$regex": query, "$options": "i"}},
- {"assignee": {"$regex": query, "$options": "i"}}
- ]
+ {"assignee": {"$regex": query, "$options": "i"}},
+ ],
}
-
+
cursor = self.collection.find(search_query).sort("created_at", -1).limit(limit)
action_items = []
-
+
async for doc in cursor:
doc["_id"] = str(doc["_id"])
action_items.append(doc)
-
- action_items_logger.info(f"Search found {len(action_items)} action items for query '{query}'")
+
+ action_items_logger.info(
+ f"Search found {len(action_items)} action items for query '{query}'"
+ )
return action_items
-
+
except Exception as e:
action_items_logger.error(f"Error searching action items for user {user_id}: {e}")
return []
-
+
async def delete_action_item(self, action_item_id: str, user_id: Optional[str] = None) -> bool:
"""Delete a specific action item."""
if not self._initialized:
await self.initialize()
-
+
try:
# Build query - use action_item_id or _id
query = {}
@@ -291,39 +437,46 @@ async def delete_action_item(self, action_item_id: str, user_id: Optional[str] =
query["action_item_id"] = action_item_id
else:
from bson import ObjectId
+
try:
query["_id"] = ObjectId(action_item_id)
except:
query["action_item_id"] = action_item_id
-
+
# Add user_id to query if provided for additional security
if user_id:
query["user_id"] = user_id
-
+
result = await self.collection.delete_one(query)
-
+
if result.deleted_count > 0:
action_items_logger.info(f"Deleted action item with id {action_item_id}")
return True
else:
action_items_logger.warning(f"No action item found with id {action_item_id}")
return False
-
+
except Exception as e:
action_items_logger.error(f"Error deleting action item {action_item_id}: {e}")
return False
-
- async def create_action_item(self, user_id: str, description: str, assignee: str = "unassigned",
- due_date: str = "not_specified", priority: str = "medium",
- context: str = "") -> Optional[Dict[str, Any]]:
+
+ async def create_action_item(
+ self,
+ user_id: str,
+ description: str,
+ assignee: str = "unassigned",
+ due_date: str = "not_specified",
+ priority: str = "medium",
+ context: str = "",
+ ) -> Optional[Dict[str, Any]]:
"""Create a new action item manually."""
if not self._initialized:
await self.initialize()
-
+
try:
current_time = int(time.time())
action_item_id = f"manual_{user_id}_{current_time}"
-
+
document = {
"action_item_id": action_item_id,
"user_id": user_id,
@@ -336,84 +489,87 @@ async def create_action_item(self, user_id: str, description: str, assignee: str
"context": context,
"source": "manual_creation",
"created_at": current_time,
- "updated_at": current_time
+ "updated_at": current_time,
}
-
+
result = await self.collection.insert_one(document)
-
+
if result.inserted_id:
document["_id"] = str(result.inserted_id)
- action_items_logger.info(f"Created manual action item {action_item_id} for user {user_id}")
+ action_items_logger.info(
+ f"Created manual action item {action_item_id} for user {user_id}"
+ )
return document
else:
action_items_logger.error(f"Failed to create action item for user {user_id}")
return None
-
+
except Exception as e:
action_items_logger.error(f"Error creating action item for user {user_id}: {e}")
return None
-
+
async def get_action_item_stats(self, user_id: str) -> Dict[str, Any]:
"""Get comprehensive statistics for user's action items."""
if not self._initialized:
await self.initialize()
-
+
try:
# Use aggregation pipeline for statistics
pipeline = [
{"$match": {"user_id": user_id}},
- {"$group": {
- "_id": None,
- "total": {"$sum": 1},
- "by_status": {"$push": "$status"},
- "by_priority": {"$push": "$priority"},
- "by_assignee": {"$push": "$assignee"}
- }}
+ {
+ "$group": {
+ "_id": None,
+ "total": {"$sum": 1},
+ "by_status": {"$push": "$status"},
+ "by_priority": {"$push": "$priority"},
+ "by_assignee": {"$push": "$assignee"},
+ }
+ },
]
-
+
result = await self.collection.aggregate(pipeline).to_list(length=1)
-
+
if not result:
return {
"total": 0,
"by_status": {},
"by_priority": {},
"by_assignee": {},
- "recent_count": 0
+ "recent_count": 0,
}
-
+
data = result[0]
-
+
# Count by status
status_counts = {}
for status in data["by_status"]:
status_counts[status] = status_counts.get(status, 0) + 1
-
+
# Count by priority
priority_counts = {}
for priority in data["by_priority"]:
priority_counts[priority] = priority_counts.get(priority, 0) + 1
-
+
# Count by assignee
assignee_counts = {}
for assignee in data["by_assignee"]:
assignee_counts[assignee] = assignee_counts.get(assignee, 0) + 1
-
+
# Get recent count (last 7 days)
seven_days_ago = int(time.time()) - (7 * 24 * 60 * 60)
- recent_count = await self.collection.count_documents({
- "user_id": user_id,
- "created_at": {"$gte": seven_days_ago}
- })
-
+ recent_count = await self.collection.count_documents(
+ {"user_id": user_id, "created_at": {"$gte": seven_days_ago}}
+ )
+
return {
"total": data["total"],
"by_status": status_counts,
"by_priority": priority_counts,
"by_assignee": assignee_counts,
- "recent_count": recent_count
+ "recent_count": recent_count,
}
-
+
except Exception as e:
action_items_logger.error(f"Error getting action item stats for user {user_id}: {e}")
return {
@@ -421,9 +577,8 @@ async def get_action_item_stats(self, user_id: str) -> Dict[str, Any]:
"by_status": {},
"by_priority": {},
"by_assignee": {},
- "recent_count": 0
- }
-
+ "recent_count": 0,
+ }
# import pyperclip
@@ -456,4 +611,4 @@ async def get_action_item_stats(self, user_id: str) -> Dict[str, Any]:
# <|eot_id|>
# <|start_header_id|>assistant<|end_header_id|>
# """
-# pyperclip.copy(extraction_prompt)
\ No newline at end of file
+# pyperclip.copy(extraction_prompt)
diff --git a/backends/advanced-backend/src/advanced_omi_backend/audio_cropping_utils.py b/backends/advanced-backend/src/advanced_omi_backend/audio_cropping_utils.py
new file mode 100644
index 00000000..1961ec8b
--- /dev/null
+++ b/backends/advanced-backend/src/advanced_omi_backend/audio_cropping_utils.py
@@ -0,0 +1,190 @@
+###############################################################################
+# AUDIO PROCESSING FUNCTIONS
+###############################################################################
+
+import asyncio
+import os
+import logging
+
+logger = logging.getLogger(__name__)
+
+
+async def _process_audio_cropping_with_relative_timestamps(
+ original_path: str,
+ speech_segments: list[tuple[float, float]],
+ output_path: str,
+ audio_uuid: str,
+) -> bool:
+ """
+ Process audio cropping with automatic relative timestamp conversion.
+ This function handles both live processing and reprocessing scenarios.
+ """
+ try:
+ # Convert absolute timestamps to relative timestamps
+ # Extract file start time from filename: timestamp_client_uuid.wav
+ filename = original_path.split("/")[-1]
+        logger.info(f"🔍 Parsing filename: {filename}")
+ filename_parts = filename.split("_")
+ if len(filename_parts) < 3:
+ logger.error(
+            f"Invalid filename format: {filename}. Expected format: timestamp_client_id_audio_uuid.wav"
+ )
+ return False
+
+ try:
+ file_start_timestamp = float(filename_parts[0])
+ except ValueError as e:
+        logger.error(f"Cannot parse timestamp from filename {filename}: {e}")
+ return False
+
+ # Convert speech segments to relative timestamps
+ relative_segments = []
+ for start_abs, end_abs in speech_segments:
+ # Validate input timestamps
+ if start_abs >= end_abs:
+ logger.warning(
+                    f"⚠️ Invalid speech segment: start={start_abs} >= end={end_abs}, skipping"
+ )
+ continue
+
+ start_rel = start_abs - file_start_timestamp
+ end_rel = end_abs - file_start_timestamp
+
+ # Ensure relative timestamps are positive (sanity check)
+ if start_rel < 0:
+ logger.warning(
+                    f"⚠️ Negative start timestamp: {start_rel} (absolute: {start_abs}, file_start: {file_start_timestamp}), clamping to 0.0"
+ )
+ start_rel = 0.0
+ if end_rel < 0:
+ logger.warning(
+                    f"⚠️ Negative end timestamp: {end_rel} (absolute: {end_abs}, file_start: {file_start_timestamp}), skipping segment"
+ )
+ continue
+
+ relative_segments.append((start_rel, end_rel))
+
+ logger.info(
+            f"🔄 Converting timestamps for {audio_uuid}: file_start={file_start_timestamp}"
+ )
+        logger.info(f"📍 Absolute segments: {speech_segments}")
+        logger.info(f"📍 Relative segments: {relative_segments}")
+
+ # Validate that we have valid relative segments after conversion
+ if not relative_segments:
+ logger.warning(
+ f"No valid relative segments after timestamp conversion for {audio_uuid}"
+ )
+ return False
+
+ success = await _crop_audio_with_ffmpeg(original_path, relative_segments, output_path)
+ if success:
+ # Update database with cropped file info (keep original absolute timestamps for reference)
+ cropped_filename = output_path.split("/")[-1]
+ await chunk_repo.update_cropped_audio(audio_uuid, cropped_filename, speech_segments)
+ logger.info(f"Successfully processed cropped audio: {cropped_filename}")
+ return True
+ else:
+ logger.error(f"Failed to crop audio for {audio_uuid}")
+ return False
+ except Exception as e:
+ logger.error(f"Error in audio cropping task for {audio_uuid}: {e}", exc_info=True)
+ return False
+
+
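The absolute-to-relative conversion above can be sketched as a standalone helper, useful for unit testing the same rules in isolation (a minimal sketch; `to_relative` is a hypothetical name, and the clamping/skip behavior mirrors the code above):

```python
def to_relative(segments, file_start):
    """Convert absolute epoch-second segments to file-relative ones.

    Invalid segments (start >= end) are dropped, negative starts are
    clamped to 0.0, and segments ending before the file started are skipped.
    """
    relative = []
    for start_abs, end_abs in segments:
        if start_abs >= end_abs:
            continue  # invalid segment, skip
        start_rel = max(0.0, start_abs - file_start)  # clamp negatives to 0.0
        end_rel = end_abs - file_start
        if end_rel < 0:
            continue  # segment ended before the file started
        relative.append((start_rel, end_rel))
    return relative
```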
+async def _crop_audio_with_ffmpeg(
+ original_path: str, speech_segments: list[tuple[float, float]], output_path: str
+) -> bool:
+ """Use ffmpeg to crop audio - runs as async subprocess, no GIL issues"""
+ logger.info(
+ f"Cropping audio {original_path} with {len(speech_segments)} speech segments"
+ )
+
+ if not speech_segments:
+ logger.warning(f"No speech segments to crop for {original_path}")
+ return False
+
+ # Check if the original file exists
+ if not os.path.exists(original_path):
+ logger.error(f"Original audio file does not exist: {original_path}")
+ return False
+
+ # Filter out segments that are too short
+ filtered_segments = []
+ for start, end in speech_segments:
+ duration = end - start
+ if duration >= MIN_SPEECH_SEGMENT_DURATION:
+ # Add padding around speech segments
+ padded_start = max(0, start - CROPPING_CONTEXT_PADDING)
+ padded_end = end + CROPPING_CONTEXT_PADDING
+ filtered_segments.append((padded_start, padded_end))
+ else:
+ logger.debug(
+ f"Skipping short segment: {start}-{end} ({duration:.2f}s < {MIN_SPEECH_SEGMENT_DURATION}s)"
+ )
+
+ if not filtered_segments:
+ logger.warning(
+ f"No segments meet minimum duration ({MIN_SPEECH_SEGMENT_DURATION}s) for {original_path}"
+ )
+ return False
+
+ logger.info(
+ f"Cropping audio {original_path} with {len(filtered_segments)} speech segments (filtered from {len(speech_segments)})"
+ )
+
+ try:
+ # Build ffmpeg filter for concatenating speech segments
+ filter_parts = []
+ for i, (start, end) in enumerate(filtered_segments):
+ duration = end - start
+ filter_parts.append(
+ f"[0:a]atrim=start={start}:duration={duration},asetpts=PTS-STARTPTS[seg{i}]"
+ )
+
+ # Concatenate all segments
+ inputs = "".join(f"[seg{i}]" for i in range(len(filtered_segments)))
+ concat_filter = f"{inputs}concat=n={len(filtered_segments)}:v=0:a=1[out]"
+
+ full_filter = ";".join(filter_parts + [concat_filter])
+
+ # Run ffmpeg as async subprocess
+ cmd = [
+ "ffmpeg",
+ "-y", # -y = overwrite output
+ "-i",
+ original_path,
+ "-filter_complex",
+ full_filter,
+ "-map",
+ "[out]",
+ "-c:a",
+ "pcm_s16le", # Keep same format as original
+ output_path,
+ ]
+
+ logger.info(f"Running ffmpeg command: {' '.join(cmd)}")
+
+ process = await asyncio.create_subprocess_exec(
+ *cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
+ )
+
+ stdout, stderr = await process.communicate()
+ if stdout:
+ logger.debug(f"FFMPEG stdout: {stdout.decode()}")
+
+ if process.returncode == 0:
+ # Calculate cropped duration
+ cropped_duration = sum(end - start for start, end in filtered_segments)
+ logger.info(
+ f"Successfully cropped {original_path} -> {output_path} ({cropped_duration:.1f}s from {len(filtered_segments)} segments)"
+ )
+ return True
+ else:
+ error_msg = stderr.decode() if stderr else "Unknown ffmpeg error"
+ logger.error(f"ffmpeg failed for {original_path}: {error_msg}")
+ return False
+
+ except Exception as e:
+ logger.error(f"Error running ffmpeg on {original_path}: {e}", exc_info=True)
+ return False
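The two core steps of `_crop_audio_with_ffmpeg`, duration filtering with context padding and assembling the `filter_complex` string, can be sketched as pure helpers (hypothetical function names; the thresholds are passed in rather than read from the module constants):

```python
def filter_and_pad(segments, min_duration, padding):
    # Drop segments shorter than min_duration, pad survivors with context
    # on both sides (start clamped to 0.0).
    out = []
    for start, end in segments:
        if end - start >= min_duration:
            out.append((max(0.0, start - padding), end + padding))
    return out


def build_filter(segments):
    # One atrim+asetpts branch per segment, then concat all branches into [out].
    parts = [
        f"[0:a]atrim=start={start}:duration={end - start},asetpts=PTS-STARTPTS[seg{i}]"
        for i, (start, end) in enumerate(segments)
    ]
    inputs = "".join(f"[seg{i}]" for i in range(len(segments)))
    parts.append(f"{inputs}concat=n={len(segments)}:v=0:a=1[out]")
    return ";".join(parts)
```

`asetpts=PTS-STARTPTS` rebases each trimmed branch to start at t=0 so the `concat` filter can splice them back to back without gaps.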
diff --git a/backends/advanced-backend/src/advanced_omi_backend/auth.py b/backends/advanced-backend/src/advanced_omi_backend/auth.py
new file mode 100644
index 00000000..1aaee209
--- /dev/null
+++ b/backends/advanced-backend/src/advanced_omi_backend/auth.py
@@ -0,0 +1,220 @@
+"""Authentication configuration for fastapi-users with email/password and JWT."""
+
+import logging
+import os
+import re
+from typing import Literal, Optional, overload
+
+from beanie import PydanticObjectId
+from dotenv import load_dotenv
+from fastapi import Depends, Request
+from fastapi_users import BaseUserManager, FastAPIUsers
+from fastapi_users.authentication import (
+ AuthenticationBackend,
+ BearerTransport,
+ CookieTransport,
+ JWTStrategy,
+)
+
+from advanced_omi_backend.users import User, UserCreate, get_user_db
+
+logger = logging.getLogger(__name__)
+
+load_dotenv()
+
+
+@overload
+def _verify_configured(var_name: str, *, optional: Literal[False] = False) -> str: ...
+@overload
+def _verify_configured(var_name: str, *, optional: Literal[True]) -> Optional[str]: ...
+
+
+def _verify_configured(var_name: str, *, optional: bool = False) -> Optional[str]:
+ value = os.getenv(var_name)
+ if not optional and not value:
+ raise ValueError(f"{var_name} is not set")
+ return value
+
+
+# Configuration from environment variables
+SECRET_KEY = _verify_configured("AUTH_SECRET_KEY")
+COOKIE_SECURE = _verify_configured("COOKIE_SECURE", optional=True) == "true"
+
+# Admin user configuration
+ADMIN_PASSWORD = _verify_configured("ADMIN_PASSWORD")
+ADMIN_EMAIL = _verify_configured("ADMIN_EMAIL", optional=True) or "admin@example.com"
+
+
+class UserManager(BaseUserManager[User, PydanticObjectId]):
+ """User manager with minimal customization for fastapi-users."""
+
+ reset_password_token_secret = SECRET_KEY
+ verification_token_secret = SECRET_KEY
+
+ def parse_id(self, value: str) -> PydanticObjectId:
+ """Parse string ID to PydanticObjectId for MongoDB compatibility."""
+ try:
+ return PydanticObjectId(value)
+ except Exception as e:
+ raise ValueError(f"Invalid ObjectId format: {value}") from e
+
+ async def on_after_register(self, user: User, request: Optional[Request] = None):
+ """Called after a user registers."""
+ logger.info(f"User {user.user_id} ({user.email}) has registered.")
+
+ async def on_after_forgot_password(
+ self, user: User, token: str, request: Optional[Request] = None
+ ):
+ """Called after a user requests password reset."""
+ logger.info(
+ f"User {user.user_id} ({user.email}) has forgot their password. Reset token: {token}"
+ )
+
+ async def on_after_request_verify(
+ self, user: User, token: str, request: Optional[Request] = None
+ ):
+ """Called after a user requests verification."""
+ logger.info(
+ f"Verification requested for user {user.user_id} ({user.email}). Verification token: {token}"
+ )
+
+
+async def get_user_manager(user_db=Depends(get_user_db)):
+ """Get user manager instance for dependency injection."""
+ yield UserManager(user_db)
+
+
+# Transport configurations
+cookie_transport = CookieTransport(
+ cookie_max_age=3600, # 1 hour
+ cookie_secure=COOKIE_SECURE, # Set to False in development if not using HTTPS
+ cookie_httponly=True,
+ cookie_samesite="lax",
+)
+
+bearer_transport = BearerTransport(tokenUrl="auth/jwt/login")
+
+
+def get_jwt_strategy() -> JWTStrategy:
+ """Get JWT strategy for token generation and validation."""
+ return JWTStrategy(secret=SECRET_KEY, lifetime_seconds=3600)
+
+
+# Authentication backends
+cookie_backend = AuthenticationBackend(
+ name="cookie",
+ transport=cookie_transport,
+ get_strategy=get_jwt_strategy,
+)
+
+bearer_backend = AuthenticationBackend(
+ name="bearer",
+ transport=bearer_transport,
+ get_strategy=get_jwt_strategy,
+)
+
+# FastAPI Users instance
+fastapi_users = FastAPIUsers[User, PydanticObjectId](
+ get_user_manager,
+ [cookie_backend, bearer_backend],
+)
+
+# User dependencies for protecting endpoints
+current_active_user = fastapi_users.current_user(active=True)
+current_superuser = fastapi_users.current_user(active=True, superuser=True)
+
+
+def get_accessible_user_ids(user: User) -> list[str] | None:
+ """
+ Get list of user IDs that the current user can access data for.
+ Returns None for superusers (can access all), or [user.id] for regular users.
+ """
+ if user.is_superuser:
+ return None # Can access all data
+ else:
+ return [str(user.id)] # Can only access own data
+
+
+async def create_admin_user_if_needed():
+ """Create admin user during startup if it doesn't exist and credentials are provided."""
+ if not ADMIN_PASSWORD:
+ logger.warning("Skipping admin user creation - ADMIN_PASSWORD not set")
+ return
+
+ try:
+ # Get user database
+ user_db_gen = get_user_db()
+ user_db = await user_db_gen.__anext__()
+
+ # Check if admin user already exists by email
+ existing_admin = await user_db.get_by_email(ADMIN_EMAIL)
+
+ if existing_admin:
+ logger.info(
+ f"Admin user already exists: {existing_admin.user_id} ({existing_admin.email})"
+ )
+ return
+
+ # Create admin user
+ user_manager_gen = get_user_manager(user_db)
+ user_manager = await user_manager_gen.__anext__()
+
+ admin_create = UserCreate(
+ email=ADMIN_EMAIL,
+ password=ADMIN_PASSWORD,
+ is_superuser=True,
+ is_verified=True,
+ display_name="Administrator",
+ )
+
+ admin_user = await user_manager.create(admin_create)
+ logger.info(
+ f"Created admin user: {admin_user.user_id} ({admin_user.email}) (ID: {admin_user.id})"
+ )
+
+ except Exception as e:
+ logger.error(f"Failed to create admin user: {e}")
+
+
+async def websocket_auth(websocket, token: Optional[str] = None) -> Optional[User]:
+ """
+ WebSocket authentication that supports both cookie and token-based auth.
+ Returns None if authentication fails (allowing graceful handling).
+ """
+ strategy = get_jwt_strategy()
+
+ # Try JWT token from query parameter first
+ if token:
+ logger.debug("Attempting WebSocket auth with query token.")
+ try:
+ user_db_gen = get_user_db()
+ user_db = await user_db_gen.__anext__()
+ user_manager = UserManager(user_db)
+ user = await strategy.read_token(token, user_manager)
+ if user and user.is_active:
+ logger.info(f"WebSocket auth successful for user {user.user_id} using query token.")
+ return user
+ except Exception as e:
+ logger.warning(f"WebSocket auth with query token failed: {e}")
+
+ # Try cookie authentication
+ logger.debug("Attempting WebSocket auth with cookie.")
+ try:
+ # Starlette's Headers mapping is str-keyed and case-insensitive,
+ # so no manual byte decoding is needed here.
+ cookie_header = websocket.headers.get("cookie")
+ if cookie_header:
+ match = re.search(r"fastapiusersauth=([^;]+)", cookie_header)
+ if match:
+ user_db_gen = get_user_db()
+ user_db = await user_db_gen.__anext__()
+ user_manager = UserManager(user_db)
+ user = await strategy.read_token(match.group(1), user_manager)
+ if user and user.is_active:
+ logger.info(f"WebSocket auth successful for user {user.user_id} using cookie.")
+ return user
+ except Exception as e:
+ logger.warning(f"WebSocket auth with cookie failed: {e}")
+
+ logger.warning("WebSocket authentication failed.")
+ return None
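The cookie fallback in `websocket_auth` reduces to a single regex pull of the fastapi-users JWT out of a raw `Cookie` header. A minimal sketch (`extract_auth_cookie` is a hypothetical name; the cookie name `fastapiusersauth` matches the code above):

```python
import re
from typing import Optional


def extract_auth_cookie(cookie_header: str) -> Optional[str]:
    # Pull the fastapiusersauth cookie value (everything up to the next ';').
    match = re.search(r"fastapiusersauth=([^;]+)", cookie_header)
    return match.group(1) if match else None
```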
diff --git a/backends/advanced-backend/src/advanced_omi_backend/client.py b/backends/advanced-backend/src/advanced_omi_backend/client.py
new file mode 100644
index 00000000..45b5658b
--- /dev/null
+++ b/backends/advanced-backend/src/advanced_omi_backend/client.py
@@ -0,0 +1,734 @@
+import asyncio
+import logging
+import os
+import time
+import uuid
+from pathlib import Path
+from typing import Optional, Tuple
+
+from easy_audio_interfaces.filesystem.filesystem_interfaces import LocalFileSink
+from wyoming.audio import AudioChunk
+
+from advanced_omi_backend.audio_cropping_utils import (
+ _process_audio_cropping_with_relative_timestamps,
+)
+from advanced_omi_backend.debug_system_tracker import (
+ PipelineStage,
+ get_debug_tracker,
+)
+from advanced_omi_backend.memory import get_memory_service
+from advanced_omi_backend.transcription import TranscriptionManager
+from advanced_omi_backend.users import get_user_by_client_id
+
+# Get loggers
+audio_logger = logging.getLogger("audio_processing")
+
+# Configuration constants
+NEW_CONVERSATION_TIMEOUT_MINUTES = float(os.getenv("NEW_CONVERSATION_TIMEOUT_MINUTES", "1.5"))
+AUDIO_CROPPING_ENABLED = os.getenv("AUDIO_CROPPING_ENABLED", "true").lower() == "true"
+MIN_SPEECH_SEGMENT_DURATION = float(os.getenv("MIN_SPEECH_SEGMENT_DURATION", "1.0"))
+CROPPING_CONTEXT_PADDING = float(os.getenv("CROPPING_CONTEXT_PADDING", "0.1"))
+
+# Audio configuration constants
+OMI_SAMPLE_RATE = 16_000
+OMI_CHANNELS = 1
+OMI_SAMPLE_WIDTH = 2
+SEGMENT_SECONDS = 60
+TARGET_SAMPLES = OMI_SAMPLE_RATE * SEGMENT_SECONDS
+
+# Get services
+memory_service = get_memory_service()
+
+class ClientState:
+ """Manages all state for a single client connection."""
+
+ def __init__(self, client_id: str, audio_chunks_db_collection, action_items_service, chunk_dir: Path, user_id: Optional[str] = None, user_email: Optional[str] = None):
+ self.client_id = client_id
+ self.connected = True
+ self.chunk_repo = audio_chunks_db_collection
+ self.action_items_service = action_items_service
+ self.chunk_dir = chunk_dir
+ # Store minimal user data needed for memory processing (avoids tight coupling to User model)
+ self.user_id = user_id
+ self.user_email = user_email
+
+ # Per-client queues
+ self.chunk_queue = asyncio.Queue[Optional[AudioChunk]]()
+ self.transcription_queue = asyncio.Queue[Tuple[Optional[str], Optional[AudioChunk]]]()
+ self.memory_queue = asyncio.Queue[
+ Tuple[Optional[str], Optional[str], Optional[str]]
+ ]() # (transcript, client_id, audio_uuid)
+ self.action_item_queue = asyncio.Queue[
+ Tuple[Optional[str], Optional[str], Optional[str]]
+ ]() # (transcript_text, client_id, audio_uuid)
+
+ # Per-client file sink
+ self.file_sink: Optional[LocalFileSink] = None
+ self.current_audio_uuid: Optional[str] = None
+
+ # Per-client transcription manager
+ self.transcription_manager: Optional[TranscriptionManager] = None
+
+ # Conversation timeout tracking
+ self.last_transcript_time: Optional[float] = None
+ self.conversation_start_time: float = time.time()
+
+ # Prevent double conversation closure
+ self.conversation_closed: bool = False
+
+ # Speech segment tracking for audio cropping
+ self.speech_segments: dict[str, list[tuple[float, float]]] = (
+ {}
+ ) # audio_uuid -> [(start, end), ...]
+ self.current_speech_start: dict[str, Optional[float]] = {} # audio_uuid -> start_time
+
+ # Conversation transcript collection for end-of-conversation memory processing
+ self.conversation_transcripts: list[str] = (
+ []
+ ) # Collect all transcripts for this conversation
+
+ # Tasks for this client
+ self.saver_task: Optional[asyncio.Task] = None
+ self.transcription_task: Optional[asyncio.Task] = None
+ self.memory_task: Optional[asyncio.Task] = None
+ self.action_item_task: Optional[asyncio.Task] = None
+ self.background_memory_task: Optional[asyncio.Task] = None
+
+ # Debug tracking
+ self.transaction_id: Optional[str] = None
+
+ def _new_local_file_sink(self, file_path):
+ """Create a properly configured LocalFileSink with all wave parameters set."""
+ # TODO: Use client.sample_rate etc here
+ sink = LocalFileSink(
+ file_path=file_path,
+ sample_rate=int(OMI_SAMPLE_RATE),
+ channels=int(OMI_CHANNELS),
+ sample_width=int(OMI_SAMPLE_WIDTH),
+ )
+ return sink
+
+ def record_speech_start(self, audio_uuid: str, timestamp: float):
+ """Record the start of a speech segment."""
+ self.current_speech_start[audio_uuid] = timestamp
+ audio_logger.info(f"Recorded speech start for {audio_uuid}: {timestamp}")
+
+ def record_speech_end(self, audio_uuid: str, timestamp: float):
+ """Record the end of a speech segment."""
+ if (
+ audio_uuid in self.current_speech_start
+ and self.current_speech_start[audio_uuid] is not None
+ ):
+ start_time = self.current_speech_start[audio_uuid]
+ if start_time is not None: # Type guard
+ if audio_uuid not in self.speech_segments:
+ self.speech_segments[audio_uuid] = []
+ self.speech_segments[audio_uuid].append((start_time, timestamp))
+ self.current_speech_start[audio_uuid] = None
+ duration = timestamp - start_time
+ audio_logger.info(
+ f"Recorded speech segment for {audio_uuid}: {start_time:.3f} -> {timestamp:.3f} (duration: {duration:.3f}s)"
+ )
+ else:
+ audio_logger.warning(f"Speech end recorded for {audio_uuid} but no start time found")
+
+ async def start_processing(self):
+ """Start the processing tasks for this client."""
+ self.saver_task = asyncio.create_task(self._audio_saver())
+ self.transcription_task = asyncio.create_task(self._transcription_processor())
+ self.memory_task = asyncio.create_task(self._memory_processor())
+ self.action_item_task = asyncio.create_task(self._action_item_processor())
+ audio_logger.info(f"Started processing tasks for client {self.client_id}")
+
+ async def disconnect(self):
+ """Clean disconnect of client state."""
+ if not self.connected:
+ return
+
+ self.connected = False
+ audio_logger.info(f"Disconnecting client {self.client_id}")
+
+ # Close current conversation with all processing before signaling shutdown
+ await self._close_current_conversation()
+
+ # Signal processors to stop
+ await self.chunk_queue.put(None)
+ await self.transcription_queue.put((None, None))
+ await self.memory_queue.put((None, None, None))
+ await self.action_item_queue.put((None, None, None))
+
+ # Wait for tasks to complete gracefully, with cancellation fallback
+ # Use longer timeouts for transcription tasks that may be waiting on Deepgram API
+ transcription_timeout = 60.0 # 1 minute for transcription (Deepgram can take time for large files)
+ saver_timeout = 60.0 # 1 minute for saver (handles conversation closure and memory processing)
+ default_timeout = 15.0 # 15 seconds for other tasks (increased from 3s)
+
+ tasks_to_cleanup = []
+ if self.saver_task:
+ tasks_to_cleanup.append(("saver", self.saver_task, saver_timeout))
+ if self.transcription_task:
+ tasks_to_cleanup.append(("transcription", self.transcription_task, transcription_timeout))
+ if self.memory_task:
+ tasks_to_cleanup.append(("memory", self.memory_task, default_timeout))
+ if self.action_item_task:
+ tasks_to_cleanup.append(("action_item", self.action_item_task, default_timeout))
+
+ # Background memory task gets much longer timeout since it could be doing Ollama processing
+ if self.background_memory_task:
+ tasks_to_cleanup.append(
+ ("background_memory", self.background_memory_task, 300.0)
+ ) # 5 minutes
+
+ for task_name, task, timeout in tasks_to_cleanup:
+ try:
+ # Try to wait for graceful completion with task-specific timeout
+ await asyncio.wait_for(task, timeout=timeout)
+ audio_logger.debug(
+ f"Task {task_name} completed gracefully for client {self.client_id}"
+ )
+ except asyncio.TimeoutError:
+ audio_logger.warning(
+ f"Task {task_name} did not complete gracefully after {timeout}s, cancelling for client {self.client_id}"
+ )
+ task.cancel()
+ try:
+ await task
+ except asyncio.CancelledError:
+ audio_logger.debug(
+ f"Task {task_name} cancelled successfully for client {self.client_id}"
+ )
+ except Exception as e:
+ audio_logger.error(
+ f"Error waiting for task {task_name} to complete for client {self.client_id}: {e}"
+ )
+ task.cancel()
+
+ # Clean up transcription manager
+ if self.transcription_manager:
+ await self.transcription_manager.disconnect()
+ self.transcription_manager = None
+
+ # Clean up any remaining speech segment tracking
+ self.speech_segments.clear()
+ self.current_speech_start.clear()
+ self.conversation_transcripts.clear() # Clear conversation transcripts
+
+ audio_logger.info(f"Client {self.client_id} disconnected and cleaned up")
+
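The per-task cleanup loop in `disconnect` follows a wait-then-cancel pattern: give each task a grace period, then cancel and await the cancellation. A standalone sketch of that pattern (`wait_or_cancel` is a hypothetical name):

```python
import asyncio


async def wait_or_cancel(task: asyncio.Task, timeout: float) -> bool:
    """Wait up to `timeout` seconds for `task`; cancel it on timeout.

    Returns True if the task finished gracefully, False if it was cancelled.
    """
    try:
        await asyncio.wait_for(task, timeout=timeout)
        return True
    except asyncio.TimeoutError:
        task.cancel()
        try:
            await task
        except asyncio.CancelledError:
            pass  # expected after cancellation
        return False
```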
+ def _should_start_new_conversation(self) -> bool:
+ """Check if we should start a new conversation based on timeout."""
+ if self.last_transcript_time is None:
+ return False # No transcript yet, keep current conversation
+
+ current_time = time.time()
+ time_since_last_transcript = current_time - self.last_transcript_time
+ timeout_seconds = NEW_CONVERSATION_TIMEOUT_MINUTES * 60
+
+ return time_since_last_transcript > timeout_seconds
+
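The timeout check above can be written with the clock injected, which makes it trivially testable (a sketch; the real method reads `self.last_transcript_time` and the `NEW_CONVERSATION_TIMEOUT_MINUTES` constant):

```python
import time
from typing import Optional


def should_start_new_conversation(
    last_transcript_time: Optional[float],
    timeout_minutes: float,
    now: Optional[float] = None,
) -> bool:
    # No transcript yet: keep the current conversation open.
    if last_transcript_time is None:
        return False
    now = time.time() if now is None else now
    return (now - last_transcript_time) > timeout_minutes * 60
```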
+ async def _close_current_conversation(self):
+ """Close the current conversation with proper cleanup including audio cropping and speaker processing."""
+ # Prevent double closure
+ if self.conversation_closed:
+ audio_logger.debug(f"Conversation already closed for client {self.client_id}, skipping")
+ return
+
+ self.conversation_closed = True
+
+ if self.file_sink:
+ # Store current audio info before closing
+ current_uuid = self.current_audio_uuid
+ current_path = self.file_sink.file_path
+
+ audio_logger.info(f"Closing conversation {current_uuid}, file: {current_path}")
+
+ # Flush any remaining transcript from ASR before waiting for queue
+ if self.transcription_manager:
+ try:
+ # Calculate audio duration for proportional timeout
+ audio_duration = time.time() - self.conversation_start_time
+ audio_logger.info(
+ f"Flushing final transcript for {current_uuid} (duration: {audio_duration:.1f}s)"
+ )
+ await self.transcription_manager.flush_final_transcript(audio_duration)
+ except Exception as e:
+ audio_logger.error(f"Error flushing final transcript for {current_uuid}: {e}")
+
+ # Wait for transcription queue to finish with timeout to prevent hanging
+ try:
+ await asyncio.wait_for(
+ self.transcription_queue.join(), timeout=60.0
+ ) # Increased timeout for final transcript
+ audio_logger.info("Transcription queue processing completed")
+ except asyncio.TimeoutError:
+ audio_logger.warning(
+ f"Transcription queue join timed out after 60 seconds for {current_uuid}"
+ )
+
+ # Small delay to allow final processing to complete
+ await asyncio.sleep(0.5)
+
+ # Process memory at end of conversation if we have transcripts
+ if self.conversation_transcripts and current_uuid:
+ full_conversation = " ".join(self.conversation_transcripts).strip()
+
+ # MODIFIED: Process all transcripts for memory storage, regardless of length
+ # Additional safety check - ensure we have some content
+ if len(full_conversation) < 1:
+ audio_logger.info(
+ f"Skipping memory processing for conversation {current_uuid} - completely empty"
+ )
+ else:
+ # Process even very short conversations to ensure all transcripts are stored
+ audio_logger.info(
+ f"Queuing memory processing for conversation {current_uuid} with {len(self.conversation_transcripts)} transcript segments (length: {len(full_conversation)} chars)"
+ )
+ audio_logger.info(f"Individual transcripts: {self.conversation_transcripts}")
+ audio_logger.info(
+ f"Full conversation text: {full_conversation[:200]}..."
+ ) # Log first 200 chars
+
+ # Use stored user information instead of database lookup
+ # This prevents lookup failures after client cleanup
+ if self.user_id and self.user_email:
+ # Process memory in background to avoid blocking conversation close
+ self.background_memory_task = asyncio.create_task(
+ self._process_memory_background(full_conversation, current_uuid, self.user_id, self.user_email)
+ )
+ else:
+ audio_logger.error(
+ f"Cannot process memory for {current_uuid}: no user information stored for client {self.client_id}"
+ )
+
+ audio_logger.info(f"Memory processing queued in background for {current_uuid}")
+ else:
+ audio_logger.info(
+ f"No transcripts to process for memory in conversation {current_uuid}"
+ )
+
+ if self.file_sink:
+ await self.file_sink.close()
+ else:
+ audio_logger.warning(f"File sink was None during close for client {self.client_id}")
+
+ # Track successful audio chunk save in metrics
+ try:
+ # Removed old metrics call - using SystemTracker instead
+ file_path = Path(current_path)
+ if file_path.exists():
+ # Estimate duration (each segment file targets SEGMENT_SECONDS of audio)
+ duration_seconds = SEGMENT_SECONDS
+
+ # Calculate voice activity if we have speech segments
+ voice_activity_seconds = 0
+ if current_uuid and current_uuid in self.speech_segments:
+ for start, end in self.speech_segments[current_uuid]:
+ voice_activity_seconds += end - start
+
+ audio_logger.debug(
+ f"Recorded audio chunk metrics: {duration_seconds}s total, {voice_activity_seconds}s voice activity"
+ )
+ else:
+ audio_logger.warning(f"Audio file not found after save: {current_path}")
+ except Exception as e:
+ audio_logger.error(f"Error recording audio metrics: {e}")
+
+ self.file_sink = None
+
+ # Process audio cropping if we have speech segments
+ if current_uuid and current_path:
+ if current_uuid in self.speech_segments:
+ speech_segments = self.speech_segments[current_uuid]
+ audio_logger.info(
+ f"Found {len(speech_segments)} speech segments for {current_uuid}: {speech_segments}"
+ )
+ audio_logger.info(f"Audio file path: {current_path}")
+ if speech_segments: # Only crop if we have speech segments
+ cropped_path = str(current_path).replace(".wav", "_cropped.wav")
+
+ # Process in background - won't block
+ asyncio.create_task(
+ self._process_audio_cropping(
+ f"{self.chunk_dir}/{current_path}",
+ speech_segments,
+ f"{self.chunk_dir}/{cropped_path}",
+ current_uuid,
+ )
+ )
+ audio_logger.info(
+ f"Queued audio cropping for {current_path} with {len(speech_segments)} speech segments"
+ )
+ else:
+ audio_logger.info(
+ f"Empty speech segments list found for {current_path}, skipping cropping"
+ )
+
+ # Clean up segments for this conversation
+ del self.speech_segments[current_uuid]
+ if current_uuid in self.current_speech_start:
+ del self.current_speech_start[current_uuid]
+ else:
+ audio_logger.info(
+ f"No speech segments found for {current_path} (uuid: {current_uuid}), skipping cropping"
+ )
+
+ else:
+ audio_logger.info(f"No active file sink to close for client {self.client_id}")
+
+ async def start_new_conversation(self):
+ """Start a new conversation by closing current conversation and resetting state."""
+ await self._close_current_conversation()
+
+ # Reset conversation state
+ self.current_audio_uuid = None
+ self.conversation_start_time = time.time()
+ self.last_transcript_time = None
+ self.conversation_transcripts.clear() # Clear collected transcripts for new conversation
+ self.conversation_closed = False # Reset closure flag for new conversation
+
+ audio_logger.info(
+ f"Client {self.client_id}: Started new conversation due to {NEW_CONVERSATION_TIMEOUT_MINUTES}min timeout"
+ )
+
+ async def _process_audio_cropping(
+ self,
+ original_path: str,
+ speech_segments: list[tuple[float, float]],
+ output_path: str,
+ audio_uuid: str,
+ ):
+ """Background task for audio cropping using ffmpeg."""
+ await _process_audio_cropping_with_relative_timestamps(
+ original_path, speech_segments, output_path, audio_uuid
+ )
+
+ async def _process_memory_background(self, full_conversation: str, audio_uuid: str, user_id: str, user_email: str):
+ """Background task for memory processing to avoid blocking conversation close."""
+ start_time = time.time()
+
+ # User information is now passed directly to avoid database lookup issues after cleanup
+
+ tracker = get_debug_tracker()
+ transaction_id = tracker.create_transaction(user_id, self.client_id, audio_uuid)
+ tracker.track_event(
+ transaction_id,
+ PipelineStage.MEMORY_STARTED,
+ metadata={"conversation_length": len(full_conversation)},
+ )
+
+ try:
+ # Track memory storage request
+ # Removed old metrics call - using SystemTracker instead
+
+ # Add general memory with fallback handling
+ memory_result = await memory_service.add_memory(
+ full_conversation, self.client_id, audio_uuid, user_id, user_email,
+ chunk_repo=self.chunk_repo
+ )
+
+ if memory_result:
+ audio_logger.info(f"Successfully added conversation memory for {audio_uuid}")
+ tracker.track_event(
+ transaction_id,
+ PipelineStage.MEMORY_COMPLETED,
+ metadata={"processing_time": time.time() - start_time},
+ )
+ else:
+ audio_logger.warning(
+ f"Memory service returned False for {audio_uuid} - may have timed out"
+ )
+ tracker.track_event(
+ transaction_id,
+ PipelineStage.MEMORY_COMPLETED,
+ success=False,
+ error_message="Memory service returned False",
+ metadata={"processing_time": time.time() - start_time},
+ )
+
+ except Exception as e:
+ audio_logger.error(f"Error processing memory for {audio_uuid}: {e}")
+ tracker.track_event(
+ transaction_id,
+ PipelineStage.MEMORY_COMPLETED,
+ success=False,
+ error_message=f"Exception during memory processing: {str(e)}",
+ metadata={"processing_time": time.time() - start_time},
+ )
+
+ # Log processing summary
+ processing_time_ms = (time.time() - start_time) * 1000
+ audio_logger.info(
+ f"Completed background memory processing for {audio_uuid} in {processing_time_ms:.1f}ms"
+ )
+
+ async def _audio_saver(self):
+ """Per-client audio saver consumer."""
+ try:
+ while self.connected:
+ audio_chunk = await self.chunk_queue.get()
+
+ if audio_chunk is None: # Disconnect signal
+ self.chunk_queue.task_done()
+ break
+
+ try:
+ # Check if we should start a new conversation due to timeout
+ if self._should_start_new_conversation():
+ await self.start_new_conversation()
+
+ if self.file_sink is None:
+ # Create new file sink for this client
+ self.current_audio_uuid = uuid.uuid4().hex
+ timestamp = audio_chunk.timestamp or int(time.time())
+ wav_filename = f"{timestamp}_{self.client_id}_{self.current_audio_uuid}.wav"
+ audio_logger.info(
+ f"Creating file sink with: rate={int(OMI_SAMPLE_RATE)}, channels={int(OMI_CHANNELS)}, width={int(OMI_SAMPLE_WIDTH)}"
+ )
+ self.file_sink = self._new_local_file_sink(f"{self.chunk_dir}/{wav_filename}")
+ await self.file_sink.open()
+
+ # Reset conversation closure flag when starting new audio
+ self.conversation_closed = False
+
+ await self.chunk_repo.create_chunk(
+ audio_uuid=self.current_audio_uuid,
+ audio_path=wav_filename,
+ client_id=self.client_id,
+ timestamp=timestamp,
+ )
+
+ await self.file_sink.write(audio_chunk)
+
+ # Queue for transcription
+ await self.transcription_queue.put((self.current_audio_uuid, audio_chunk))
+
+ except Exception as e:
+ audio_logger.error(
+ f"Error processing audio chunk for client {self.client_id}: {e}"
+ )
+ finally:
+ # Always mark task as done
+ self.chunk_queue.task_done()
+
+ except Exception as e:
+ audio_logger.error(
+ f"Error in audio saver for client {self.client_id}: {e}", exc_info=True
+ )
+ finally:
+ # Close current conversation with all processing when audio saver ends
+ await self._close_current_conversation()
+
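`_audio_saver` writes files named `{timestamp}_{client_id}_{audio_uuid}.wav`, and the cropping code later recovers the start timestamp from that same prefix. A round-trip sketch of the convention (hypothetical helper names):

```python
def make_wav_filename(timestamp: int, client_id: str, audio_uuid: str) -> str:
    # Filename convention shared by the saver and the cropping code.
    return f"{timestamp}_{client_id}_{audio_uuid}.wav"


def parse_start_timestamp(filename: str) -> float:
    # The cropping side splits on "_" and reads the leading timestamp.
    parts = filename.split("_")
    if len(parts) < 3:
        raise ValueError(f"Invalid filename format: {filename}")
    return float(parts[0])
```

Note that because the parser only reads the first `_`-delimited field, a `client_id` containing underscores still round-trips correctly for the timestamp.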
+ async def _transcription_processor(self):
+ """Per-client transcription processor."""
+ try:
+ while self.connected:
+ try:
+ audio_uuid, chunk = await self.transcription_queue.get()
+
+ if audio_uuid is None or chunk is None: # Disconnect signal
+ self.transcription_queue.task_done()
+ break
+
+ try:
+ # Track audio processing
+ user = await get_user_by_client_id(self.client_id)
+ transaction_id = None
+
+ # Get or create transcription manager
+ if self.transcription_manager is None:
+ # Create callback function to queue action items
+ async def action_item_callback(transcript_text, client_id, audio_uuid):
+ try:
+ await self.action_item_queue.put(
+ (transcript_text, client_id, audio_uuid)
+ )
+ except Exception:
+ pass # Ignore errors during shutdown
+
+ self.transcription_manager = TranscriptionManager(
+ action_item_callback=action_item_callback,
+ chunk_repo=self.chunk_repo
+ )
+ try:
+ await self.transcription_manager.connect(self.client_id)
+ except Exception as e:
+ audio_logger.error(
+ f"Failed to create transcription manager for client {self.client_id}: {e}"
+ )
+ self.transcription_queue.task_done()
+ continue
+
+ # Process transcription
+ try:
+ await self.transcription_manager.transcribe_chunk(
+ audio_uuid, chunk, self.client_id
+ )
+ # Track transcription success
+ pass
+ except Exception as e:
+ audio_logger.error(
+ f"Error transcribing for client {self.client_id}: {e}"
+ )
+ # Track transcription failure
+ pass
+ # Recreate transcription manager on error
+ if self.transcription_manager:
+ await self.transcription_manager.disconnect()
+ self.transcription_manager = None
+
+ except Exception as e:
+ audio_logger.error(
+ f"Error processing transcription item for client {self.client_id}: {e}"
+ )
+ finally:
+ # Always mark task as done
+ self.transcription_queue.task_done()
+
+ except asyncio.CancelledError:
+ # Handle cancellation gracefully
+ audio_logger.debug(
+ f"Transcription processor cancelled for client {self.client_id}"
+ )
+ break
+ except Exception as e:
+ audio_logger.error(
+ f"Error in transcription processor loop for client {self.client_id}: {e}",
+ exc_info=True,
+ )
+
+ except asyncio.CancelledError:
+ audio_logger.debug(f"Transcription processor cancelled for client {self.client_id}")
+ except Exception as e:
+ audio_logger.error(
+ f"Error in transcription processor for client {self.client_id}: {e}",
+ exc_info=True,
+ )
+ finally:
+ audio_logger.debug(f"Transcription processor stopped for client {self.client_id}")
+
+ async def _memory_processor(self):
+ """Per-client memory processor - currently unused as memory processing happens at conversation end."""
+ try:
+ while self.connected:
+ try:
+ transcript, client_id, audio_uuid = await self.memory_queue.get()
+
+ if (
+ transcript is None or client_id is None or audio_uuid is None
+ ): # Disconnect signal
+ self.memory_queue.task_done()
+ break
+
+ try:
+ # Memory processing now happens at conversation end, so this is effectively a no-op
+ # Keeping the processor running to avoid breaking the queue system
+ audio_logger.debug(
+ f"Memory processor received item but processing is now done at conversation end"
+ )
+ except Exception as e:
+ audio_logger.error(
+ f"Error processing memory item for client {self.client_id}: {e}"
+ )
+ finally:
+ # Always mark task as done
+ self.memory_queue.task_done()
+
+ except asyncio.CancelledError:
+ # Handle cancellation gracefully
+ audio_logger.debug(f"Memory processor cancelled for client {self.client_id}")
+ break
+ except Exception as e:
+ audio_logger.error(
+ f"Error in memory processor loop for client {self.client_id}: {e}",
+ exc_info=True,
+ )
+
+ except asyncio.CancelledError:
+ audio_logger.debug(f"Memory processor cancelled for client {self.client_id}")
+ except Exception as e:
+ audio_logger.error(
+ f"Error in memory processor for client {self.client_id}: {e}",
+ exc_info=True,
+ )
+ finally:
+ audio_logger.debug(f"Memory processor stopped for client {self.client_id}")
+
+ async def _action_item_processor(self):
+ """
+ Processes transcript segments from the per-client action item queue.
+
+ This processor handles queue management and delegates the actual
+ action item processing to the ActionItemsService.
+ """
+ try:
+ while self.connected:
+ try:
+ transcript_text, client_id, audio_uuid = await self.action_item_queue.get()
+
+ if (
+ transcript_text is None or client_id is None or audio_uuid is None
+ ): # Disconnect signal
+ self.action_item_queue.task_done()
+ break
+
+ try:
+ # Resolve client_id to user information
+ user = await get_user_by_client_id(client_id)
+ if not user:
+ audio_logger.error(
+ f"Could not resolve client_id {client_id} to user for action item processing"
+ )
+ continue
+
+ # Track action item processing start
+
+ try:
+ # Delegate action item processing to the service
+ action_item_count = (
+ await self.action_items_service.process_transcript_for_action_items(
+ transcript_text, client_id, audio_uuid, user.user_id, user.email
+ )
+ )
+
+ if action_item_count > 0:
+ audio_logger.info(
+ f"๐ฏ Action item processor completed: {action_item_count} items processed for {audio_uuid}"
+ )
+ else:
+ audio_logger.debug(
+ f"โน๏ธ Action item processor completed: no items found for {audio_uuid}"
+ )
+
+ except Exception as e:
+ audio_logger.error(
+ f"Error processing action item for client {self.client_id}: {e}"
+ )
+ except Exception as e:
+ audio_logger.error(
+ f"Error handling action item queue entry for client {self.client_id}: {e}"
+ )
+ finally:
+ # Always mark task as done
+ self.action_item_queue.task_done()
+
+ except asyncio.CancelledError:
+ # Handle cancellation gracefully
+ audio_logger.debug(
+ f"Action item processor cancelled for client {self.client_id}"
+ )
+ break
+ except Exception as e:
+ audio_logger.error(
+ f"Error in action item processor loop for client {self.client_id}: {e}",
+ exc_info=True,
+ )
+
+ except asyncio.CancelledError:
+ audio_logger.debug(f"Action item processor cancelled for client {self.client_id}")
+ except Exception as e:
+ audio_logger.error(
+ f"Error in action item processor for client {self.client_id}: {e}",
+ exc_info=True,
+ )
+ finally:
+ audio_logger.debug(f"Action item processor stopped for client {self.client_id}")
diff --git a/backends/advanced-backend/src/advanced_omi_backend/client_manager.py b/backends/advanced-backend/src/advanced_omi_backend/client_manager.py
new file mode 100644
index 00000000..5d7c98ba
--- /dev/null
+++ b/backends/advanced-backend/src/advanced_omi_backend/client_manager.py
@@ -0,0 +1,371 @@
+"""
+Client manager service for centralizing active_clients access and client-user relationships.
+
+This service provides a centralized way to manage active client connections,
+their state, and client-user relationships, allowing API endpoints to access
+this information without tight coupling to the main.py module.
+"""
+
+import logging
+from typing import TYPE_CHECKING, Dict, Optional
+
+if TYPE_CHECKING:
+ # Import ClientState type for type hints without circular import
+ from advanced_omi_backend.main import ClientState
+
+logger = logging.getLogger(__name__)
+
+# Global client-to-user mappings
+# These will be initialized by main.py
+_client_to_user_mapping: Dict[str, str] = {} # Active clients only
+_all_client_user_mappings: Dict[str, str] = {} # All clients including disconnected
+
+
+class ClientManager:
+ """
+ Centralized manager for active client connections and client-user relationships.
+
+ This service provides thread-safe access to active client information
+ and client-user relationship management for use in API endpoints and other services.
+ """
+
+ def __init__(self):
+ self._active_clients: Dict[str, "ClientState"] = {}
+ self._initialized = False
+
+ def initialize(self, active_clients_dict: Dict[str, "ClientState"]):
+ """
+ Initialize the client manager with a reference to the active_clients dict.
+
+ This should be called from main.py during startup to provide access
+ to the global active_clients dictionary.
+ """
+ self._active_clients = active_clients_dict
+ self._initialized = True
+ logger.info("ClientManager initialized with active_clients reference")
+
+ def is_initialized(self) -> bool:
+ """Check if the client manager has been initialized."""
+ return self._initialized
+
+ def get_client(self, client_id: str) -> Optional["ClientState"]:
+ """
+ Get a specific client by ID.
+
+ Args:
+ client_id: The client ID to lookup
+
+ Returns:
+ ClientState object if found, None otherwise
+ """
+ if not self._initialized:
+ logger.warning("ClientManager not initialized, cannot get client")
+ return None
+ return self._active_clients.get(client_id)
+
+ def has_client(self, client_id: str) -> bool:
+ """
+ Check if a client is currently active.
+
+ Args:
+ client_id: The client ID to check
+
+ Returns:
+ True if client is active, False otherwise
+ """
+ if not self._initialized:
+ logger.warning("ClientManager not initialized, cannot check client")
+ return False
+ return client_id in self._active_clients
+
+ def get_all_clients(self) -> Dict[str, "ClientState"]:
+ """
+ Get all active clients.
+
+ Returns:
+ Dictionary of client_id -> ClientState mappings
+ """
+ if not self._initialized:
+ logger.warning("ClientManager not initialized, returning empty dict")
+ return {}
+ return self._active_clients.copy()
+
+ def get_client_count(self) -> int:
+ """
+ Get the number of active clients.
+
+ Returns:
+ Number of active clients
+ """
+ if not self._initialized:
+ logger.warning("ClientManager not initialized, returning 0")
+ return 0
+ return len(self._active_clients)
+
+ def get_client_info_summary(self) -> list:
+ """
+ Get summary information about all active clients.
+
+ Returns:
+ List of client info dictionaries suitable for API responses
+ """
+ if not self._initialized:
+ logger.warning("ClientManager not initialized, returning empty list")
+ return []
+
+ client_info = []
+ for client_id, client_state in self._active_clients.items():
+ current_audio_uuid = client_state.current_audio_uuid
+ client_data = {
+ "client_id": client_id,
+ "connected": getattr(client_state, "connected", True),
+ "current_audio_uuid": current_audio_uuid,
+ "last_transcript_time": client_state.last_transcript_time,
+ "conversation_start_time": client_state.conversation_start_time,
+ "has_active_conversation": current_audio_uuid is not None,
+ "conversation_transcripts_count": len(
+ getattr(client_state, "conversation_transcripts", [])
+ ),
+ "queues": {
+ "chunk_queue_size": (
+ client_state.chunk_queue.qsize()
+ ),
+ "transcription_queue_size": (
+ client_state.transcription_queue.qsize()
+ ),
+ "memory_queue_size": (
+ client_state.memory_queue.qsize()
+ ),
+ "action_item_queue_size": (
+ client_state.action_item_queue.qsize()
+ ),
+ },
+ }
+ client_info.append(client_data)
+
+ return client_info
+
+ # Client-user relationship methods
+ def client_belongs_to_user(self, client_id: str, user_id: str) -> bool:
+ """
+ Check if a client belongs to a specific user.
+
+ Args:
+ client_id: The client ID to check
+ user_id: The user ID to check ownership against
+
+ Returns:
+ True if the client belongs to the user, False otherwise
+ """
+ # Check in all mappings (includes disconnected clients)
+ mapped_user_id = _all_client_user_mappings.get(client_id)
+ if mapped_user_id is None:
+ logger.warning(f"Client {client_id} not found in user mapping")
+ return False
+
+ return mapped_user_id == user_id
+
+ def get_user_clients_all(self, user_id: str) -> list[str]:
+ """
+ Get all client IDs (active and inactive) that belong to a specific user.
+
+ Args:
+ user_id: The user ID to get clients for
+
+ Returns:
+ List of client IDs belonging to the user
+ """
+ return [
+ client_id
+ for client_id, mapped_user_id in _all_client_user_mappings.items()
+ if mapped_user_id == user_id
+ ]
+
+ def get_user_clients_active(self, user_id: str) -> list[str]:
+ """
+ Get active client IDs that belong to a specific user.
+
+ Args:
+ user_id: The user ID to get clients for
+
+ Returns:
+ List of active client IDs belonging to the user
+ """
+ return [
+ client_id
+ for client_id, mapped_user_id in _client_to_user_mapping.items()
+ if mapped_user_id == user_id
+ ]
+
+
+# Global instance
+_client_manager: Optional[ClientManager] = None
+
+
+def get_client_manager() -> ClientManager:
+ """
+ Get the global client manager instance.
+
+ Returns:
+ ClientManager singleton instance
+ """
+ global _client_manager
+ if _client_manager is None:
+ _client_manager = ClientManager()
+ return _client_manager
+
+
+def init_client_manager(active_clients_dict: Dict[str, "ClientState"]):
+ """
+ Initialize the global client manager with active_clients reference.
+
+ This should be called from main.py during startup.
+
+ Args:
+ active_clients_dict: Reference to the global active_clients dictionary
+ """
+ client_manager = get_client_manager()
+ client_manager.initialize(active_clients_dict)
+ return client_manager
+
+
+# Client-user relationship initialization and utility functions
+def init_client_user_mapping(
+ active_mapping_dict: Dict[str, str], all_mapping_dict: Optional[Dict[str, str]] = None
+):
+ """
+ Initialize the client-user mapping with references to the global mappings.
+
+ This should be called from main.py during startup.
+
+ Args:
+ active_mapping_dict: Reference to the active client_to_user_mapping dictionary
+ all_mapping_dict: Reference to the all_client_user_mappings dictionary (optional)
+ """
+ global _client_to_user_mapping, _all_client_user_mappings
+ _client_to_user_mapping = active_mapping_dict
+ if all_mapping_dict is not None:
+ _all_client_user_mappings = all_mapping_dict
+ logger.info("Client-user mapping initialized")
+
+
+def register_client_user_mapping(client_id: str, user_id: str):
+ """
+ Register a client-user mapping for active clients.
+
+ Args:
+ client_id: The client ID
+ user_id: The user ID that owns this client
+ """
+ _client_to_user_mapping[client_id] = user_id
+ logger.debug(f"Registered active client {client_id} to user {user_id}")
+
+
+def unregister_client_user_mapping(client_id: str):
+ """
+ Unregister a client-user mapping from active clients.
+
+ Args:
+ client_id: The client ID to unregister
+ """
+ if client_id in _client_to_user_mapping:
+ user_id = _client_to_user_mapping.pop(client_id)
+ logger.debug(f"Unregistered active client {client_id} from user {user_id}")
+
+
+def track_client_user_relationship(client_id: str, user_id: str):
+ """
+ Track that a client belongs to a user (persists after disconnection for database queries).
+
+ Args:
+ client_id: The client ID
+ user_id: The user ID that owns this client
+ """
+ _all_client_user_mappings[client_id] = user_id
+ logger.debug(f"Tracked client {client_id} relationship to user {user_id}")
+
+
+def client_belongs_to_user(client_id: str, user_id: str) -> bool:
+ """
+ Check if a client belongs to a specific user.
+
+ Args:
+ client_id: The client ID to check
+ user_id: The user ID to check ownership against
+
+ Returns:
+ True if the client belongs to the user, False otherwise
+ """
+ # Check in all mappings (includes disconnected clients)
+ mapped_user_id = _all_client_user_mappings.get(client_id)
+ if mapped_user_id is None:
+ logger.warning(f"Client {client_id} not found in user mapping")
+ return False
+
+ return mapped_user_id == user_id
+
+
+def get_user_clients_all(user_id: str) -> list[str]:
+ """
+ Get all client IDs (active and inactive) that belong to a specific user.
+
+ Args:
+ user_id: The user ID to get clients for
+
+ Returns:
+ List of client IDs belonging to the user
+ """
+ return [
+ client_id
+ for client_id, mapped_user_id in _all_client_user_mappings.items()
+ if mapped_user_id == user_id
+ ]
+
+
+def get_user_clients_active(user_id: str) -> list[str]:
+ """
+ Get active client IDs that belong to a specific user.
+
+ Args:
+ user_id: The user ID to get clients for
+
+ Returns:
+ List of active client IDs belonging to the user
+ """
+ return [
+ client_id
+ for client_id, mapped_user_id in _client_to_user_mapping.items()
+ if mapped_user_id == user_id
+ ]
+
+
+def get_client_owner(client_id: str) -> Optional[str]:
+ """
+ Get the user ID that owns a specific client.
+
+ Args:
+ client_id: The client ID to look up
+
+ Returns:
+ User ID if found, None otherwise
+ """
+ return _all_client_user_mappings.get(client_id)
+
+
+# FastAPI dependency function
+async def get_client_manager_dependency() -> ClientManager:
+ """
+ FastAPI dependency to inject the client manager into route handlers.
+
+ Usage:
+ @router.get("/some-endpoint")
+ async def some_endpoint(client_manager: ClientManager = Depends(get_client_manager_dependency)):
+ clients = client_manager.get_all_clients()
+ ...
+ """
+ client_manager = get_client_manager()
+ if not client_manager.is_initialized():
+ logger.error("ClientManager dependency requested but not initialized")
+ # In a real application, you might want to raise an exception here
+ # For now, we'll return the uninitialized manager and let the caller handle it
+ return client_manager
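The module above works because `initialize()` stores a reference to main.py's `active_clients` dict rather than a copy, so later mutations by the connection handlers are visible through the singleton without re-registration. A stripped-down sketch of that pattern (`MiniClientManager` and `get_manager` are illustrative stand-ins, not the real API):

```python
from typing import Dict, Optional

class MiniClientManager:
    """Minimal sketch of the reference-holding singleton used by ClientManager."""

    def __init__(self) -> None:
        self._active_clients: Dict[str, object] = {}
        self._initialized = False

    def initialize(self, active_clients: Dict[str, object]) -> None:
        # Store a reference, not a copy: the owner keeps mutating the same dict.
        self._active_clients = active_clients
        self._initialized = True

    def has_client(self, client_id: str) -> bool:
        return self._initialized and client_id in self._active_clients

_manager: Optional[MiniClientManager] = None

def get_manager() -> MiniClientManager:
    """Lazily construct the process-wide singleton."""
    global _manager
    if _manager is None:
        _manager = MiniClientManager()
    return _manager
```

One consequence of this design: `get_all_clients()` in the real module returns `.copy()` precisely because the internal dict is shared with main.py and callers must not mutate it.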
diff --git a/backends/advanced-backend/src/advanced_omi_backend/database.py b/backends/advanced-backend/src/advanced_omi_backend/database.py
new file mode 100644
index 00000000..90c53428
--- /dev/null
+++ b/backends/advanced-backend/src/advanced_omi_backend/database.py
@@ -0,0 +1,188 @@
+"""
+Database configuration and utilities for the Friend-Lite backend.
+
+This module provides centralized database access to avoid duplication
+across main.py and router modules.
+"""
+
+import logging
+import os
+from datetime import UTC, datetime
+
+from motor.motor_asyncio import AsyncIOMotorClient
+
+logger = logging.getLogger(__name__)
+
+# MongoDB Configuration
+MONGODB_URI = os.getenv("MONGODB_URI", "mongodb://mongo:27017")
+mongo_client = AsyncIOMotorClient(MONGODB_URI)
+db = mongo_client.get_default_database("friend-lite")
+
+# Collection references
+chunks_col = db["audio_chunks"]
+users_col = db["users"]
+speakers_col = db["speakers"]
+action_items_col = db["action_items"]
+
+
+def get_database():
+ """Get the MongoDB database instance."""
+ return db
+
+
+def get_collections():
+ """Get commonly used collection references."""
+ return {
+ "chunks_col": chunks_col,
+ "users_col": users_col,
+ "speakers_col": speakers_col,
+ "action_items_col": action_items_col,
+ }
+
+
+class AudioChunksCollection:
+ """Async helpers for the audio_chunks collection."""
+
+ def __init__(self, collection):
+ self.col = collection
+
+ async def create_chunk(
+ self,
+ *,
+ audio_uuid,
+ audio_path,
+ client_id,
+ timestamp,
+ transcript=None,
+ speakers_identified=None,
+ memories=None,
+ ):
+ doc = {
+ "audio_uuid": audio_uuid,
+ "audio_path": audio_path,
+ "client_id": client_id,
+ "timestamp": timestamp,
+ "transcript": transcript or [], # List of conversation segments
+ "speakers_identified": speakers_identified or [], # List of identified speakers
+ "memories": memories or [], # List of memory references created from this audio
+ }
+ await self.col.insert_one(doc)
+
+ async def add_transcript_segment(self, audio_uuid, transcript_segment):
+ """Add a single transcript segment to the conversation."""
+ await self.col.update_one(
+ {"audio_uuid": audio_uuid}, {"$push": {"transcript": transcript_segment}}
+ )
+
+ async def add_speaker(self, audio_uuid, speaker_id):
+ """Add a speaker to the speakers_identified list if not already present."""
+ await self.col.update_one(
+ {"audio_uuid": audio_uuid},
+ {"$addToSet": {"speakers_identified": speaker_id}},
+ )
+
+ async def add_memory_reference(self, audio_uuid: str, memory_id: str, status: str = "created"):
+ """Add memory reference to audio chunk."""
+ memory_ref = {
+ "memory_id": memory_id,
+ "created_at": datetime.now(UTC).isoformat(),
+ "status": status,
+ "updated_at": datetime.now(UTC).isoformat(),
+ }
+ result = await self.col.update_one(
+ {"audio_uuid": audio_uuid},
+ {"$push": {"memories": memory_ref}}
+ )
+ if result.modified_count > 0:
+ logger.info(f"Added memory reference {memory_id} to audio {audio_uuid}")
+ return result.modified_count > 0
+
+ async def update_memory_status(self, audio_uuid: str, memory_id: str, status: str):
+ """Update memory status in audio chunk."""
+ result = await self.col.update_one(
+ {"audio_uuid": audio_uuid, "memories.memory_id": memory_id},
+ {"$set": {
+ "memories.$.status": status,
+ "memories.$.updated_at": datetime.now(UTC).isoformat()
+ }}
+ )
+ if result.modified_count > 0:
+ logger.info(f"Updated memory {memory_id} status to {status} for audio {audio_uuid}")
+ return result.modified_count > 0
+
+ async def remove_memory_reference(self, audio_uuid: str, memory_id: str):
+ """Remove memory reference from audio chunk."""
+ result = await self.col.update_one(
+ {"audio_uuid": audio_uuid},
+ {"$pull": {"memories": {"memory_id": memory_id}}}
+ )
+ if result.modified_count > 0:
+ logger.info(f"Removed memory reference {memory_id} from audio {audio_uuid}")
+ return result.modified_count > 0
+
+ async def get_chunk_by_audio_uuid(self, audio_uuid: str):
+ """Get a chunk document by audio_uuid."""
+ return await self.col.find_one({"audio_uuid": audio_uuid})
+
+ async def get_chunks_with_memories(self, client_ids: list | None = None, limit: int = 100):
+ """Get chunks that have memory references, optionally filtered by client IDs."""
+ query = {"memories": {"$exists": True, "$not": {"$size": 0}}}
+ if client_ids:
+ query["client_id"] = {"$in": client_ids}
+
+ cursor = self.col.find(query).sort("timestamp", -1).limit(limit)
+ return await cursor.to_list(length=limit)
+
+ async def update_transcript(self, audio_uuid, full_transcript):
+ """Update the entire transcript list (for compatibility)."""
+ await self.col.update_one(
+ {"audio_uuid": audio_uuid}, {"$set": {"transcript": full_transcript}}
+ )
+
+ async def update_segment_timing(self, audio_uuid, segment_index, start_time, end_time):
+ """Update timing information for a specific transcript segment."""
+ await self.col.update_one(
+ {"audio_uuid": audio_uuid},
+ {
+ "$set": {
+ f"transcript.{segment_index}.start": start_time,
+ f"transcript.{segment_index}.end": end_time,
+ }
+ },
+ )
+
+ async def update_segment_speaker(self, audio_uuid, segment_index, speaker_id):
+ """Update the speaker for a specific transcript segment."""
+ result = await self.col.update_one(
+ {"audio_uuid": audio_uuid},
+ {"$set": {f"transcript.{segment_index}.speaker": speaker_id}},
+ )
+ if result.modified_count > 0:
+ logger.info(f"Updated segment {segment_index} speaker to {speaker_id} for {audio_uuid}")
+ return result.modified_count > 0
+
+ async def update_cropped_audio(
+ self,
+ audio_uuid: str,
+ cropped_path: str,
+ speech_segments: list[tuple[float, float]],
+ ):
+ """Update the chunk with cropped audio information."""
+ cropped_duration = sum(end - start for start, end in speech_segments)
+
+ result = await self.col.update_one(
+ {"audio_uuid": audio_uuid},
+ {
+ "$set": {
+ "cropped_audio_path": cropped_path,
+ "speech_segments": [
+ {"start": start, "end": end} for start, end in speech_segments
+ ],
+ "cropped_duration": cropped_duration,
+ "cropped_at": datetime.now(UTC),
+ }
+ },
+ )
+ if result.modified_count > 0:
+ logger.info(f"Updated cropped audio info for {audio_uuid}: {cropped_path}")
+ return result.modified_count > 0
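The memory-reference helpers above lean on two MongoDB update shapes: a `$push` of a timestamped sub-document onto the `memories` array, and the positional `$` operator, which updates whichever array element the `"memories.memory_id"` part of the filter matched. A sketch that builds the same documents without a live database (the `build_*` helper names are illustrative):

```python
from datetime import datetime, timezone  # timezone.utc ≡ datetime.UTC used in the module

def build_memory_ref(memory_id: str, status: str = "created") -> dict:
    """Sub-document shape that add_memory_reference pushes onto "memories"."""
    now = datetime.now(timezone.utc).isoformat()
    return {
        "memory_id": memory_id,
        "created_at": now,
        "status": status,
        "updated_at": now,
    }

def build_status_update(status: str) -> dict:
    """Update document used by update_memory_status; the positional $ targets
    the array element matched by "memories.memory_id" in the query filter."""
    return {
        "$set": {
            "memories.$.status": status,
            "memories.$.updated_at": datetime.now(timezone.utc).isoformat(),
        }
    }
```

Checking `result.modified_count > 0` after each update, as the class does, is what lets callers distinguish a real write from a filter that matched nothing.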
diff --git a/backends/advanced-backend/src/advanced_omi_backend/debug_system_tracker.py b/backends/advanced-backend/src/advanced_omi_backend/debug_system_tracker.py
new file mode 100644
index 00000000..92071567
--- /dev/null
+++ b/backends/advanced-backend/src/advanced_omi_backend/debug_system_tracker.py
@@ -0,0 +1,466 @@
+"""
+Debug System Tracker - Single source for all system monitoring and debugging
+
+This module provides centralized tracking for the audio processing pipeline:
+Audio → Transcription → Memory → Action Items
+
+Tracks transactions and highlights issues such as "transcription completed but memory creation failed".
+"""
+
+import asyncio
+import json
+import os
+import threading
+import time
+from collections import deque
+from dataclasses import dataclass, field
+from datetime import UTC, datetime
+from enum import Enum
+from pathlib import Path
+from typing import Dict, List, Optional, Set
+from uuid import uuid4
+
+
+class PipelineStage(Enum):
+ """Pipeline stages for tracking audio processing flow"""
+
+ AUDIO_RECEIVED = "audio_received"
+ TRANSCRIPTION_STARTED = "transcription_started"
+ TRANSCRIPTION_COMPLETED = "transcription_completed"
+ MEMORY_STARTED = "memory_started"
+ MEMORY_COMPLETED = "memory_completed"
+ ACTION_ITEMS_STARTED = "action_items_started"
+ ACTION_ITEMS_COMPLETED = "action_items_completed"
+ CONVERSATION_ENDED = "conversation_ended"
+
+
+class TransactionStatus(Enum):
+ """Status of a pipeline transaction"""
+
+ IN_PROGRESS = "in_progress"
+ COMPLETED = "completed"
+ FAILED = "failed"
+ STALLED = "stalled" # Started but no progress in reasonable time
+
+
+@dataclass
+class PipelineEvent:
+ """Single event in the pipeline"""
+
+ timestamp: datetime = field(default_factory=lambda: datetime.now(UTC))
+ stage: PipelineStage = PipelineStage.AUDIO_RECEIVED
+ success: bool = True
+ error_message: Optional[str] = None
+ metadata: Dict = field(default_factory=dict)
+
+
+@dataclass
+class Transaction:
+ """Complete transaction through the pipeline"""
+
+ user_id: str
+ client_id: str
+ transaction_id: str = field(default_factory=lambda: str(uuid4()))
+ conversation_id: Optional[str] = None
+ created_at: datetime = field(default_factory=lambda: datetime.now(UTC))
+ updated_at: datetime = field(default_factory=lambda: datetime.now(UTC))
+ events: List[PipelineEvent] = field(default_factory=list)
+ status: TransactionStatus = TransactionStatus.IN_PROGRESS
+ current_stage: Optional[PipelineStage] = None
+
+ def add_event(
+ self,
+ stage: PipelineStage,
+ success: bool = True,
+ error_message: Optional[str] = None,
+ **metadata,
+ ):
+ """Add an event to this transaction"""
+ event = PipelineEvent(
+ stage=stage, success=success, error_message=error_message, metadata=metadata
+ )
+ self.events.append(event)
+ self.current_stage = stage
+ self.updated_at = datetime.now(UTC)
+
+ if not success:
+ self.status = TransactionStatus.FAILED
+ elif stage == PipelineStage.CONVERSATION_ENDED and success:
+ self.status = TransactionStatus.COMPLETED
+
+ def get_stage_status(self, stage: PipelineStage) -> Optional[bool]:
+ """Get success status for a specific stage, None if not reached"""
+ for event in reversed(self.events):
+ if event.stage == stage:
+ return event.success
+ return None
+
+ def get_issue_description(self) -> Optional[str]:
+ """Get human-readable description of any pipeline issues"""
+ if self.status == TransactionStatus.COMPLETED:
+ return None
+
+ # Check for specific failure patterns
+ transcription_done = self.get_stage_status(PipelineStage.TRANSCRIPTION_COMPLETED)
+ memory_done = self.get_stage_status(PipelineStage.MEMORY_COMPLETED)
+
+ if transcription_done and memory_done is False:
+ return "Transcription completed but memory creation failed"
+
+ if transcription_done and memory_done is None:
+ elapsed = (datetime.now(UTC) - self.updated_at).total_seconds()
+ if elapsed > 30: # 30 seconds without memory processing
+ return "Transcription completed but memory processing stalled"
+
+ # Check for other patterns
+ for event in self.events:
+ if not event.success:
+ return f"Failed at {event.stage.value}: {event.error_message or 'Unknown error'}"
+
+ return None
+
+
+@dataclass
+class SystemMetrics:
+ """Current system metrics and status"""
+
+ system_start_time: datetime = field(default_factory=lambda: datetime.now(UTC))
+ total_transactions: int = 0
+ active_transactions: int = 0
+ completed_transactions: int = 0
+ failed_transactions: int = 0
+ stalled_transactions: int = 0
+ active_websockets: int = 0
+ total_audio_chunks_processed: int = 0
+ total_transcriptions: int = 0
+ total_memories_created: int = 0
+ total_action_items_created: int = 0
+ last_activity: Optional[datetime] = None
+
+ def uptime_hours(self) -> float:
+ """Get system uptime in hours"""
+ return (datetime.now(UTC) - self.system_start_time).total_seconds() / 3600
+
+
+class DebugSystemTracker:
+ """
+ Single source for all system monitoring and debugging.
+
+ Thread-safe tracker for the audio processing pipeline with real-time issue detection.
+ """
+
+ def __init__(self):
+ self.lock = threading.RLock()
+ self.metrics = SystemMetrics()
+
+ # Transaction tracking
+ self.transactions: Dict[str, Transaction] = {}
+ self.active_websockets: Set[str] = set()
+
+ # Recent activity for dashboard
+ self.recent_transactions = deque(maxlen=100) # Last 100 transactions
+ self.recent_issues = deque(maxlen=50) # Last 50 issues
+
+ # Per-user tracking
+ self.user_activity: Dict[str, datetime] = {}
+
+ # Debug dump directory
+ self.debug_dir = Path(os.getenv("DEBUG_DUMP_DIR", "debug_dumps"))
+ self.debug_dir.mkdir(parents=True, exist_ok=True)
+
+ # Background task for stalled transaction detection
+ self._monitor_task = None
+ self._monitoring = False
+
+ def start_monitoring(self):
+ """Start background monitoring for stalled transactions"""
+ if self._monitoring:
+ return
+ self._monitoring = True
+ self._monitor_task = asyncio.create_task(self._monitor_stalled_transactions())
+
+ def stop_monitoring(self):
+ """Stop background monitoring"""
+ self._monitoring = False
+ if self._monitor_task:
+ self._monitor_task.cancel()
+
+ async def _monitor_stalled_transactions(self):
+ """Background task to detect stalled transactions"""
+ while self._monitoring:
+ try:
+ now = datetime.now(UTC)
+ with self.lock:
+ for transaction in self.transactions.values():
+ if transaction.status == TransactionStatus.IN_PROGRESS:
+ elapsed = (now - transaction.updated_at).total_seconds()
+ if elapsed > 60: # 1 minute without progress
+ transaction.status = TransactionStatus.STALLED
+ self.metrics.stalled_transactions += 1
+ self.metrics.active_transactions -= 1
+
+ issue = f"Transaction {transaction.transaction_id[:8]} stalled after {transaction.current_stage.value if transaction.current_stage else 'unknown stage'}"
+ self.recent_issues.append(
+ {
+ "timestamp": now.isoformat(),
+ "transaction_id": transaction.transaction_id,
+ "user_id": transaction.user_id,
+ "issue": issue,
+ }
+ )
+
+ await asyncio.sleep(30) # Check every 30 seconds
+
+ except asyncio.CancelledError:
+ break
+ except Exception:
+ # Back off before retrying so a persistent error cannot busy-loop the monitor
+ await asyncio.sleep(30)
+
+ def create_transaction(
+ self, user_id: str, client_id: str, conversation_id: Optional[str] = None
+ ) -> str:
+ """Create a new pipeline transaction"""
+ with self.lock:
+ transaction = Transaction(
+ user_id=user_id, client_id=client_id, conversation_id=conversation_id
+ )
+
+ self.transactions[transaction.transaction_id] = transaction
+ self.recent_transactions.append(transaction.transaction_id)
+
+ self.metrics.total_transactions += 1
+ self.metrics.active_transactions += 1
+ self.metrics.last_activity = datetime.now(UTC)
+ self.user_activity[user_id] = datetime.now(UTC)
+
+ return transaction.transaction_id
+
+ def track_event(
+ self,
+ transaction_id: str,
+ stage: PipelineStage,
+ success: bool = True,
+ error_message: Optional[str] = None,
+ **metadata,
+ ):
+ """Track an event in a transaction"""
+ with self.lock:
+ if transaction_id not in self.transactions:
+ return
+
+ transaction = self.transactions[transaction_id]
+ transaction.add_event(stage, success, error_message, **metadata)
+
+ # Update metrics based on stage
+ if success:
+ if stage == PipelineStage.TRANSCRIPTION_COMPLETED:
+ self.metrics.total_transcriptions += 1
+ elif stage == PipelineStage.MEMORY_COMPLETED:
+ self.metrics.total_memories_created += 1
+ elif stage == PipelineStage.ACTION_ITEMS_COMPLETED:
+ self.metrics.total_action_items_created += 1
+ elif stage == PipelineStage.CONVERSATION_ENDED:
+ self.metrics.completed_transactions += 1
+ self.metrics.active_transactions -= 1
+ else:
+ # Track failure
+ if transaction.status == TransactionStatus.FAILED:
+ self.metrics.failed_transactions += 1
+ self.metrics.active_transactions -= 1
+
+ # Add to recent issues
+ issue_desc = transaction.get_issue_description()
+ if issue_desc:
+ self.recent_issues.append(
+ {
+ "timestamp": datetime.now(UTC).isoformat(),
+ "transaction_id": transaction_id,
+ "user_id": transaction.user_id,
+ "issue": issue_desc,
+ }
+ )
+
+ self.metrics.last_activity = datetime.now(UTC)
+
+ def track_audio_chunk(self, transaction_id: str, chunk_size: int = 0):
+ """Track audio chunk processing"""
+ with self.lock:
+ self.metrics.total_audio_chunks_processed += 1
+ self.track_event(
+ transaction_id, PipelineStage.AUDIO_RECEIVED, metadata={"chunk_size": chunk_size}
+ )
+
+ def track_websocket_connected(self, user_id: str, client_id: str):
+ """Track WebSocket connection"""
+ with self.lock:
+ self.active_websockets.add(client_id)
+ self.metrics.active_websockets = len(self.active_websockets)
+ self.user_activity[user_id] = datetime.now(UTC)
+
+ def track_websocket_disconnected(self, client_id: str):
+ """Track WebSocket disconnection"""
+ with self.lock:
+ self.active_websockets.discard(client_id)
+ self.metrics.active_websockets = len(self.active_websockets)
+
+ def get_dashboard_data(self) -> Dict:
+ """Get formatted data for Streamlit dashboard"""
+ with self.lock:
+ # Update stalled count
+ now = datetime.now(UTC)
+ stalled_count = 0
+ for transaction in self.transactions.values():
+ if (
+ transaction.status == TransactionStatus.IN_PROGRESS
+ and (now - transaction.updated_at).total_seconds() > 60
+ ):
+ stalled_count += 1
+
+ return {
+ "system_metrics": {
+ "uptime_hours": self.metrics.uptime_hours(),
+ "total_transactions": self.metrics.total_transactions,
+ "active_transactions": self.metrics.active_transactions,
+ "completed_transactions": self.metrics.completed_transactions,
+ "failed_transactions": self.metrics.failed_transactions,
+ "stalled_transactions": stalled_count,
+ "active_websockets": self.metrics.active_websockets,
+ "total_audio_chunks": self.metrics.total_audio_chunks_processed,
+ "total_transcriptions": self.metrics.total_transcriptions,
+ "total_memories": self.metrics.total_memories_created,
+ "total_action_items": self.metrics.total_action_items_created,
+ "last_activity": (
+ self.metrics.last_activity.isoformat()
+ if self.metrics.last_activity
+ else None
+ ),
+ },
+ "recent_transactions": [
+ {
+ "id": t_id[:8],
+ "user_id": (
+ self.transactions[t_id].user_id[-6:]
+ if t_id in self.transactions
+ else "unknown"
+ ),
+ "status": (
+ self.transactions[t_id].status.value
+ if t_id in self.transactions
+ else "unknown"
+ ),
+ "current_stage": (
+ self.transactions[t_id].current_stage.value
+ if t_id in self.transactions
+ and self.transactions[t_id].current_stage is not None
+ else "none"
+ ),
+ "created_at": (
+ self.transactions[t_id].created_at.isoformat()
+ if t_id in self.transactions
+ else "unknown"
+ ),
+ "issue": (
+ self.transactions[t_id].get_issue_description()
+ if t_id in self.transactions
+ else None
+ ),
+ }
+ for t_id in list(self.recent_transactions)[-10:] # Last 10 transactions
+ if t_id in self.transactions
+ ],
+ "recent_issues": list(self.recent_issues)[-10:], # Last 10 issues
+ "active_users": len(
+ [
+ uid
+ for uid, last_seen in self.user_activity.items()
+ if (now - last_seen).total_seconds() < 300 # Active in last 5 minutes
+ ]
+ ),
+ }
+
+ def get_transaction(self, transaction_id: str) -> Optional[Transaction]:
+ """Get transaction by ID"""
+ with self.lock:
+ return self.transactions.get(transaction_id)
+
+ def get_user_transactions(self, user_id: str, limit: int = 10) -> List[Transaction]:
+ """Get recent transactions for a user"""
+ with self.lock:
+ user_transactions = [t for t in self.transactions.values() if t.user_id == user_id]
+ user_transactions.sort(key=lambda x: x.created_at, reverse=True)
+ return user_transactions[:limit]
+
+ def export_debug_dump(self) -> Path:
+ """Export comprehensive debug data to JSON file"""
+        # get_dashboard_data() acquires self.lock itself, so call it before
+        # taking the lock here; otherwise a non-reentrant lock would deadlock.
+        system_metrics = self.get_dashboard_data()["system_metrics"]
+        with self.lock:
+            dump_data = {
+                "export_metadata": {
+                    "generated_at": datetime.now(UTC).isoformat(),
+                    "system_start_time": self.metrics.system_start_time.isoformat(),
+                    "uptime_hours": self.metrics.uptime_hours(),
+                },
+                "system_metrics": system_metrics,
+ "transactions": [
+ {
+ "transaction_id": t.transaction_id,
+ "user_id": t.user_id,
+ "client_id": t.client_id,
+ "conversation_id": t.conversation_id,
+ "created_at": t.created_at.isoformat(),
+ "updated_at": t.updated_at.isoformat(),
+ "status": t.status.value,
+ "current_stage": t.current_stage.value if t.current_stage else None,
+ "issue": t.get_issue_description(),
+ "events": [
+ {
+ "timestamp": e.timestamp.isoformat(),
+ "stage": e.stage.value,
+ "success": e.success,
+ "error_message": e.error_message,
+ "metadata": e.metadata,
+ }
+ for e in t.events
+ ],
+ }
+ for t in self.transactions.values()
+ ],
+ "recent_issues": list(self.recent_issues),
+ "active_websockets": list(self.active_websockets),
+ "user_activity": {
+ uid: last_seen.isoformat() for uid, last_seen in self.user_activity.items()
+ },
+ }
+
+ dump_file = self.debug_dir / f"debug_dump_{int(time.time())}.json"
+ with open(dump_file, "w") as f:
+ json.dump(dump_data, f, indent=2)
+
+ return dump_file
+
+
+# Global instance
+_debug_tracker: Optional[DebugSystemTracker] = None
+
+
+def get_debug_tracker() -> DebugSystemTracker:
+ """Get the global debug tracker instance"""
+ global _debug_tracker
+ if _debug_tracker is None:
+ _debug_tracker = DebugSystemTracker()
+ return _debug_tracker
+
+
+def init_debug_tracker():
+ """Initialize the debug tracker (called at startup)"""
+ global _debug_tracker
+ _debug_tracker = DebugSystemTracker()
+ _debug_tracker.start_monitoring()
+ return _debug_tracker
+
+
+def shutdown_debug_tracker():
+ """Shutdown the debug tracker (called at shutdown)"""
+ global _debug_tracker
+ if _debug_tracker:
+ _debug_tracker.stop_monitoring()
diff --git a/backends/advanced-backend/src/enroll_speaker_service.py b/backends/advanced-backend/src/advanced_omi_backend/enroll_speaker_service.py
similarity index 87%
rename from backends/advanced-backend/src/enroll_speaker_service.py
rename to backends/advanced-backend/src/advanced_omi_backend/enroll_speaker_service.py
index 7c213d15..36673142 100644
--- a/backends/advanced-backend/src/enroll_speaker_service.py
+++ b/backends/advanced-backend/src/advanced_omi_backend/enroll_speaker_service.py
@@ -7,13 +7,13 @@
Usage examples:
# Enroll from an existing audio file
python enroll_speaker_service.py --id john_doe --name "John Doe" --file audio_chunks/sample.wav
-
+
# Enroll from a specific segment of an audio file
python enroll_speaker_service.py --id jane_smith --name "Jane Smith" --file audio_chunks/sample.wav --start 10.0 --end 15.0
-
+
# List enrolled speakers
python enroll_speaker_service.py --list
-
+
# Remove a speaker
python enroll_speaker_service.py --remove john_doe
"""
@@ -36,22 +36,30 @@
DEFAULT_SPEAKER_HOST = "localhost"
DEFAULT_SPEAKER_PORT = 8001
-async def enroll_speaker_api(speaker_host: str, speaker_port: int, speaker_id: str,
- speaker_name: str, audio_file_path: str, start_time=None, end_time=None):
+
+async def enroll_speaker_api(
+ speaker_host: str,
+ speaker_port: int,
+ speaker_id: str,
+ speaker_name: str,
+ audio_file_path: str,
+ start_time=None,
+ end_time=None,
+):
"""Call the speaker service API to enroll a speaker."""
url = f"http://{speaker_host}:{speaker_port}/enroll"
-
+
data = {
"speaker_id": speaker_id,
"speaker_name": speaker_name,
- "audio_file_path": audio_file_path
+ "audio_file_path": audio_file_path,
}
-
+
if start_time is not None:
data["start_time"] = start_time
if end_time is not None:
data["end_time"] = end_time
-
+
async with aiohttp.ClientSession() as session:
try:
async with session.post(url, json=data) as response:
@@ -66,10 +74,11 @@ async def enroll_speaker_api(speaker_host: str, speaker_port: int, speaker_id: s
logger.error(f"❌ Connection error: {e}")
return False
+
async def list_speakers_api(speaker_host: str, speaker_port: int):
"""List all enrolled speakers."""
url = f"http://{speaker_host}:{speaker_port}/speakers"
-
+
async with aiohttp.ClientSession() as session:
try:
async with session.get(url) as response:
@@ -93,10 +102,11 @@ async def list_speakers_api(speaker_host: str, speaker_port: int):
logger.error(f"❌ Connection error: {e}")
return False
+
async def remove_speaker_api(speaker_host: str, speaker_port: int, speaker_id: str):
"""Remove a speaker."""
url = f"http://{speaker_host}:{speaker_port}/speakers/{speaker_id}"
-
+
async with aiohttp.ClientSession() as session:
try:
async with session.delete(url) as response:
@@ -111,20 +121,20 @@ async def remove_speaker_api(speaker_host: str, speaker_port: int, speaker_id: s
logger.error(f"❌ Connection error: {e}")
return False
-async def identify_speaker_api(speaker_host: str, speaker_port: int, audio_file_path: str,
- start_time=None, end_time=None):
+
+async def identify_speaker_api(
+ speaker_host: str, speaker_port: int, audio_file_path: str, start_time=None, end_time=None
+):
"""Test speaker identification."""
url = f"http://{speaker_host}:{speaker_port}/identify"
-
- data = {
- "audio_file_path": audio_file_path
- }
-
+
+ data = {"audio_file_path": audio_file_path}
+
if start_time is not None:
data["start_time"] = start_time
if end_time is not None:
data["end_time"] = end_time
-
+
async with aiohttp.ClientSession() as session:
try:
async with session.post(url, json=data) as response:
@@ -144,10 +154,11 @@ async def identify_speaker_api(speaker_host: str, speaker_port: int, audio_file_
logger.error(f"❌ Connection error: {e}")
return False
+
async def check_service_health(speaker_host: str, speaker_port: int):
"""Check if the speaker recognition service is running."""
url = f"http://{speaker_host}:{speaker_port}/health"
-
+
async with aiohttp.ClientSession() as session:
try:
async with session.get(url, timeout=5) as response:
@@ -165,55 +176,67 @@ async def check_service_health(speaker_host: str, speaker_port: int):
logger.error(f"❌ Cannot connect to speaker service: {e}")
return False
+
async def main():
- parser = argparse.ArgumentParser(description="Speaker enrollment for OMI backend via speaker service")
+ parser = argparse.ArgumentParser(
+ description="Speaker enrollment for OMI backend via speaker service"
+ )
parser.add_argument("--speaker-host", default=DEFAULT_SPEAKER_HOST, help="Speaker service host")
- parser.add_argument("--speaker-port", type=int, default=DEFAULT_SPEAKER_PORT, help="Speaker service port")
-
+ parser.add_argument(
+ "--speaker-port", type=int, default=DEFAULT_SPEAKER_PORT, help="Speaker service port"
+ )
+
# Speaker enrollment options
parser.add_argument("--id", help="Speaker ID (unique identifier)")
parser.add_argument("--name", help="Speaker name (human readable)")
parser.add_argument("--file", help="Audio file path (relative to shared audio directory)")
parser.add_argument("--start", type=float, help="Start time in seconds")
parser.add_argument("--end", type=float, help="End time in seconds")
-
+
# Utility options
parser.add_argument("--list", action="store_true", help="List enrolled speakers")
parser.add_argument("--identify", help="Test speaker identification on audio file")
parser.add_argument("--remove", help="Remove a speaker by ID")
-
+
args = parser.parse_args()
-
+
# Check speaker service connection
if not await check_service_health(args.speaker_host, args.speaker_port):
logger.error(" Make sure the speaker recognition service is running!")
logger.error(" Try: docker compose up speaker-recognition")
return
-
+
# Handle different operations
if args.list:
await list_speakers_api(args.speaker_host, args.speaker_port)
-
+
elif args.identify:
- await identify_speaker_api(args.speaker_host, args.speaker_port, args.identify, args.start, args.end)
-
+ await identify_speaker_api(
+ args.speaker_host, args.speaker_port, args.identify, args.start, args.end
+ )
+
elif args.remove:
await remove_speaker_api(args.speaker_host, args.speaker_port, args.remove)
-
+
elif args.id and args.name and args.file:
# Convert relative path to absolute path
audio_file_path = os.path.abspath(args.file)
if not os.path.exists(audio_file_path):
logger.error(f"❌ Audio file not found: {audio_file_path}")
return
-
+
await enroll_speaker_api(
- args.speaker_host, args.speaker_port,
- args.id, args.name, audio_file_path,
- args.start, args.end
+ args.speaker_host,
+ args.speaker_port,
+ args.id,
+ args.name,
+ audio_file_path,
+ args.start,
+ args.end,
)
else:
parser.print_help()
+
if __name__ == "__main__":
- asyncio.run(main())
\ No newline at end of file
+ asyncio.run(main())
diff --git a/backends/advanced-backend/src/advanced_omi_backend/main.py b/backends/advanced-backend/src/advanced_omi_backend/main.py
new file mode 100644
index 00000000..ef19f668
--- /dev/null
+++ b/backends/advanced-backend/src/advanced_omi_backend/main.py
@@ -0,0 +1,716 @@
+#!/usr/bin/env python3
+"""Unified Omi-audio service
+
+* Accepts Opus packets over a WebSocket (`/ws_omi`) or PCM over a WebSocket (`/ws_pcm`).
+* Uses a central queue to decouple audio ingestion from processing.
+* A saver consumer buffers PCM and writes 60-second WAV chunks to `./audio_chunks/`.
+* A transcription consumer sends each chunk to a Wyoming ASR service.
+* The transcript is stored in **mem0** and MongoDB.
+
+"""
+import logging
+
+logging.basicConfig(level=logging.INFO)
+
+import asyncio
+import concurrent.futures
+import os
+import time
+import uuid
+from contextlib import asynccontextmanager
+from functools import partial
+from pathlib import Path
+from typing import Optional, Tuple
+
+import ollama
+
+# Import Beanie for user management
+from beanie import init_beanie
+from dotenv import load_dotenv
+from easy_audio_interfaces.filesystem.filesystem_interfaces import LocalFileSink
+from fastapi import (
+ FastAPI,
+ Query,
+ WebSocket,
+ WebSocketDisconnect,
+)
+from fastapi.responses import JSONResponse
+from fastapi.staticfiles import StaticFiles
+from motor.motor_asyncio import AsyncIOMotorClient
+from omi.decoder import OmiOpusDecoder
+from wyoming.audio import AudioChunk
+from wyoming.client import AsyncTcpClient
+
+from advanced_omi_backend.action_items_service import ActionItemsService
+from advanced_omi_backend.client import ClientState
+
+# Import authentication components
+from advanced_omi_backend.auth import (
+ bearer_backend,
+ cookie_backend,
+ create_admin_user_if_needed,
+ fastapi_users,
+ websocket_auth,
+)
+from advanced_omi_backend.database import AudioChunksCollection
+from advanced_omi_backend.debug_system_tracker import (
+ get_debug_tracker,
+ init_debug_tracker,
+ shutdown_debug_tracker,
+)
+from advanced_omi_backend.memory import (
+ get_memory_service,
+ init_memory_config,
+ shutdown_memory_service,
+)
+from advanced_omi_backend.users import (
+ User,
+ generate_client_id,
+ register_client_to_user,
+)
+
+###############################################################################
+# SETUP
+###############################################################################
+
+# Load environment variables first
+load_dotenv()
+
+# Logging setup (basicConfig is already applied at the top of the module)
+logger = logging.getLogger("advanced-backend")
+audio_logger = logging.getLogger("audio_processing")
+
+# Conditional Deepgram import
+try:
+ from deepgram import DeepgramClient, FileSource, PrerecordedOptions # type: ignore
+
+ DEEPGRAM_AVAILABLE = True
+ logger.info("✅ Deepgram SDK available")
+except ImportError:
+ DEEPGRAM_AVAILABLE = False
+ logger.warning("Deepgram SDK not available. Install with: pip install deepgram-sdk")
+audio_cropper_logger = logging.getLogger("audio_cropper")
+
+
+###############################################################################
+# CONFIGURATION
+###############################################################################
+
+# MongoDB Configuration
+MONGODB_URI = os.getenv("MONGODB_URI", "mongodb://mongo:27017")
+mongo_client = AsyncIOMotorClient(MONGODB_URI)
+db = mongo_client.get_default_database("friend-lite")
+chunks_col = db["audio_chunks"]
+users_col = db["users"]
+speakers_col = db["speakers"]
+action_items_col = db["action_items"]
+
+# Audio Configuration
+OMI_SAMPLE_RATE = 16_000 # Hz
+OMI_CHANNELS = 1
+OMI_SAMPLE_WIDTH = 2 # bytes (16โbit)
+SEGMENT_SECONDS = 60 # length of each stored chunk
+TARGET_SAMPLES = OMI_SAMPLE_RATE * SEGMENT_SECONDS
+
+# Conversation timeout configuration
+NEW_CONVERSATION_TIMEOUT_MINUTES = float(os.getenv("NEW_CONVERSATION_TIMEOUT_MINUTES", "1.5"))
+
+# Audio cropping configuration
+AUDIO_CROPPING_ENABLED = os.getenv("AUDIO_CROPPING_ENABLED", "true").lower() == "true"
+MIN_SPEECH_SEGMENT_DURATION = float(os.getenv("MIN_SPEECH_SEGMENT_DURATION", "1.0")) # seconds
+CROPPING_CONTEXT_PADDING = float(
+ os.getenv("CROPPING_CONTEXT_PADDING", "0.1")
+) # seconds of padding around speech
+
+# Directory where WAV chunks are written
+CHUNK_DIR = Path("./audio_chunks")
+CHUNK_DIR.mkdir(parents=True, exist_ok=True)
+
+
+# ASR Configuration
+DEEPGRAM_API_KEY = os.getenv("DEEPGRAM_API_KEY")
+USE_DEEPGRAM = bool(DEEPGRAM_API_KEY)
+OFFLINE_ASR_TCP_URI = os.getenv("OFFLINE_ASR_TCP_URI", "tcp://localhost:8765")
+
+# Deepgram client placeholder (not needed for WebSocket implementation)
+deepgram_client = None
+
+# Ollama & Qdrant Configuration
+OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://ollama:11434")
+QDRANT_BASE_URL = os.getenv("QDRANT_BASE_URL", "qdrant")
+
+# Memory configuration is now handled in the memory module
+# Initialize it with our Ollama and Qdrant URLs
+init_memory_config(
+ ollama_base_url=OLLAMA_BASE_URL,
+ qdrant_base_url=QDRANT_BASE_URL,
+)
+
+# Speaker service configuration
+
+# Thread pool executors
+_DEC_IO_EXECUTOR = concurrent.futures.ThreadPoolExecutor(
+ max_workers=os.cpu_count() or 4,
+ thread_name_prefix="opus_io",
+)
+
+# Initialize memory service, speaker service, and ollama client
+memory_service = get_memory_service()
+ollama_client = ollama.Client(host=OLLAMA_BASE_URL)
+
+action_items_service = ActionItemsService(action_items_col, ollama_client)
+
+###############################################################################
+# UTILITY FUNCTIONS & HELPER CLASSES
+###############################################################################
+
+
+# Initialize repository and global state
+audio_chunks_db_collection = AudioChunksCollection(chunks_col)
+active_clients: dict[str, ClientState] = {}
+
+# Client-to-user mapping for reliable permission checking
+client_to_user_mapping: dict[str, str] = {} # client_id -> user_id
+
+# Initialize client manager with active_clients reference
+from advanced_omi_backend.client_manager import init_client_manager
+
+init_client_manager(active_clients)
+
+# Initialize client utilities with the mapping dictionaries
+from advanced_omi_backend.client_manager import (
+ client_belongs_to_user,
+ get_user_clients_all,
+ init_client_user_mapping,
+ register_client_user_mapping,
+ track_client_user_relationship,
+ unregister_client_user_mapping,
+)
+
+# Client ownership tracking for database records
+# Since we're in development, we'll track all client-user relationships in memory
+# This will be populated when clients connect and persisted in database records
+all_client_user_mappings: dict[str, str] = (
+ {}
+) # client_id -> user_id (includes disconnected clients)
+
+# Initialize client user mapping with both dictionaries
+init_client_user_mapping(client_to_user_mapping, all_client_user_mappings)
+
+
+def get_user_clients(user_id: str) -> list[str]:
+ """Get all currently active client IDs that belong to a specific user."""
+ return [
+ client_id
+ for client_id, mapped_user_id in client_to_user_mapping.items()
+ if mapped_user_id == user_id
+ ]
+
+
+async def create_client_state(
+ client_id: str, user: User, device_name: Optional[str] = None
+) -> ClientState:
+ """Create and register a new client state."""
+    client_state = ClientState(
+        client_id,
+        audio_chunks_db_collection,
+        action_items_service,
+        CHUNK_DIR,
+        user.user_id,
+        user.email,
+    )
+ active_clients[client_id] = client_state
+
+ # Register client-user mapping (for active clients)
+ register_client_user_mapping(client_id, user.user_id)
+
+ # Also track in persistent mapping (for database queries)
+ track_client_user_relationship(client_id, user.user_id)
+
+ # Register client in user model (persistent)
+ await register_client_to_user(user, client_id, device_name)
+
+ await client_state.start_processing()
+
+ return client_state
+
+
+async def cleanup_client_state(client_id: str):
+ """Clean up and remove client state."""
+ if client_id in active_clients:
+ client_state = active_clients[client_id]
+ await client_state.disconnect()
+ del active_clients[client_id]
+
+ # Unregister client-user mapping
+ unregister_client_user_mapping(client_id)
+
+
+###############################################################################
+# CORE APPLICATION LOGIC
+###############################################################################
+
+
+@asynccontextmanager
+async def lifespan(app: FastAPI):
+ """Manage application lifespan events."""
+ # Startup
+ audio_logger.info("Starting application...")
+
+ # Initialize Beanie for user management
+ try:
+ await init_beanie(
+ database=mongo_client.get_default_database("friend-lite"),
+ document_models=[User],
+ )
+ audio_logger.info("Beanie initialized for user management")
+ except Exception as e:
+ audio_logger.error(f"Failed to initialize Beanie: {e}")
+ raise
+
+ # Create admin user if needed
+ try:
+ await create_admin_user_if_needed()
+ except Exception as e:
+ audio_logger.error(f"Failed to create admin user: {e}")
+ # Don't raise here as this is not critical for startup
+
+ # Start metrics collection
+ # Initialize debug tracker
+ init_debug_tracker()
+ audio_logger.info("Metrics collection started")
+
+ # Pre-initialize memory service to avoid blocking during first use
+ try:
+ audio_logger.info("Pre-initializing memory service...")
+ await asyncio.wait_for(
+ memory_service.initialize(), timeout=120
+ ) # 2 minute timeout for startup
+ audio_logger.info("Memory service pre-initialized successfully")
+ except asyncio.TimeoutError:
+ audio_logger.warning(
+ "Memory service pre-initialization timed out - will initialize on first use"
+ )
+ except Exception as e:
+ audio_logger.warning(
+ f"Memory service pre-initialization failed: {e} - will initialize on first use"
+ )
+
+ # SystemTracker is used for monitoring and debugging
+ audio_logger.info("Using SystemTracker for monitoring and debugging")
+
+ audio_logger.info("Application ready - clients will have individual processing pipelines.")
+
+ try:
+ yield
+ finally:
+ # Shutdown
+ audio_logger.info("Shutting down application...")
+
+ # Clean up all active clients
+ for client_id in list(active_clients.keys()):
+ await cleanup_client_state(client_id)
+
+ # Stop metrics collection and save final report
+ # Shutdown debug tracker
+ shutdown_debug_tracker()
+ audio_logger.info("Metrics collection stopped")
+
+ # Shutdown memory service and speaker service
+ shutdown_memory_service()
+ audio_logger.info("Memory and speaker services shut down.")
+
+ audio_logger.info("Shutdown complete.")
+
+
+# FastAPI Application
+app = FastAPI(lifespan=lifespan)
+app.mount("/audio", StaticFiles(directory=CHUNK_DIR), name="audio")
+
+# Add authentication routers
+app.include_router(
+ fastapi_users.get_auth_router(cookie_backend),
+ prefix="/auth/cookie",
+ tags=["auth"],
+)
+app.include_router(
+ fastapi_users.get_auth_router(bearer_backend),
+ prefix="/auth/jwt",
+ tags=["auth"],
+)
+
+
+# API endpoints
+from advanced_omi_backend.routers.api_router import router as api_router
+
+app.include_router(api_router)
+
+
+@app.websocket("/ws_omi")
+async def ws_endpoint(
+ ws: WebSocket,
+ token: Optional[str] = Query(None),
+ device_name: Optional[str] = Query(None),
+):
+ """Accepts WebSocket connections, decodes Opus audio, and processes per-client."""
+ # TODO: Accept parameters or some type of "audio config" message from the client to setup
+ # the proper file sink.
+
+ # Authenticate user before accepting WebSocket connection
+ user = await websocket_auth(ws, token)
+ if not user:
+ await ws.close(code=1008, reason="Authentication required")
+ return
+
+ await ws.accept()
+
+ # Generate proper client_id using user and device_name
+ client_id = generate_client_id(user, device_name)
+ audio_logger.info(
+ f"WebSocket connection accepted - User: {user.user_id} ({user.email}), Client: {client_id}"
+ )
+
+ decoder = OmiOpusDecoder()
+ _decode_packet = partial(decoder.decode_packet, strip_header=False)
+
+ # Create client state and start processing
+ client_state = await create_client_state(client_id, user, device_name)
+
+ # Track WebSocket connection
+ # tracker = get_debug_tracker()
+ # tracker.track_websocket_connected(user.user_id, client_id)
+
+ try:
+ packet_count = 0
+ total_bytes = 0
+ while True:
+ packet = await ws.receive_bytes()
+ packet_count += 1
+ total_bytes += len(packet)
+
+ start_time = time.time()
+ loop = asyncio.get_running_loop()
+ pcm_data = await loop.run_in_executor(_DEC_IO_EXECUTOR, _decode_packet, packet)
+ decode_time = time.time() - start_time
+
+ if pcm_data:
+ audio_logger.debug(
+ f"🎵 Decoded packet #{packet_count}: {len(packet)} bytes -> {len(pcm_data)} PCM bytes (took {decode_time:.3f}s)"
+ )
+ chunk = AudioChunk(
+ audio=pcm_data,
+ rate=OMI_SAMPLE_RATE,
+ width=OMI_SAMPLE_WIDTH,
+ channels=OMI_CHANNELS,
+ timestamp=int(time.time()),
+ )
+ await client_state.chunk_queue.put(chunk)
+
+ # # Track audio chunk with debug tracker
+ # if packet_count == 1: # Create transaction on first audio chunk
+ # client_state.transaction_id = tracker.create_transaction(
+ # user.user_id, client_id
+ # )
+ # if hasattr(client_state, "transaction_id") and client_state.transaction_id:
+ # tracker.track_audio_chunk(client_state.transaction_id, len(pcm_data))
+
+ # Log every 1000th packet to avoid spam
+ if packet_count % 1000 == 0:
+ audio_logger.info(
+ f"Processed {packet_count} packets ({total_bytes} bytes total) for client {client_id}"
+ )
+
+ except WebSocketDisconnect:
+ audio_logger.info(
+ f"WebSocket disconnected - Client: {client_id}, Packets: {packet_count}, Total bytes: {total_bytes}"
+ )
+ except Exception as e:
+ audio_logger.error(f"❌ WebSocket error for client {client_id}: {e}", exc_info=True)
+ finally:
+ # # Track WebSocket disconnection
+ # tracker = get_debug_tracker()
+ # tracker.track_websocket_disconnected(client_id)
+
+ # Clean up client state
+ await cleanup_client_state(client_id)
+
+
+@app.websocket("/ws_pcm")
+async def ws_endpoint_pcm(
+ ws: WebSocket, token: Optional[str] = Query(None), device_name: Optional[str] = Query(None)
+):
+ """Accepts WebSocket connections, processes PCM audio per-client."""
+ # Authenticate user before accepting WebSocket connection
+ user = await websocket_auth(ws, token)
+ if not user:
+ await ws.close(code=1008, reason="Authentication required")
+ return
+
+ await ws.accept()
+
+ # Generate proper client_id using user and device_name
+ client_id = generate_client_id(user, device_name)
+ audio_logger.info(
+ f"PCM WebSocket connection accepted - User: {user.user_id} ({user.email}), Client: {client_id}"
+ )
+
+ # Create client state and start processing
+ client_state = await create_client_state(client_id, user, device_name)
+
+ # Track WebSocket connection
+ tracker = get_debug_tracker()
+ tracker.track_websocket_connected(user.user_id, client_id)
+
+ try:
+ packet_count = 0
+ total_bytes = 0
+ while True:
+ packet = await ws.receive_bytes()
+ packet_count += 1
+ total_bytes += len(packet)
+
+ if packet:
+ audio_logger.debug(f"🎵 Received PCM packet #{packet_count}: {len(packet)} bytes")
+ chunk = AudioChunk(
+ audio=packet,
+ rate=16000,
+ width=2,
+ channels=1,
+ timestamp=int(time.time()),
+ )
+ await client_state.chunk_queue.put(chunk)
+
+ # Track audio chunk with debug tracker
+ if packet_count == 1: # Create transaction on first audio chunk
+ client_state.transaction_id = tracker.create_transaction(
+ user.user_id, client_id
+ )
+ if hasattr(client_state, "transaction_id") and client_state.transaction_id:
+ tracker.track_audio_chunk(client_state.transaction_id, len(packet))
+
+ # Log every 1000th packet to avoid spam
+ if packet_count % 1000 == 0:
+ audio_logger.info(
+ f"๐ Processed {packet_count} PCM packets ({total_bytes} bytes total) for client {client_id}"
+ )
+ except WebSocketDisconnect:
+ audio_logger.info(
+ f"PCM WebSocket disconnected - Client: {client_id}, Packets: {packet_count}, Total bytes: {total_bytes}"
+ )
+ except Exception as e:
+ audio_logger.error(f"❌ PCM WebSocket error for client {client_id}: {e}", exc_info=True)
+ finally:
+ # Track WebSocket disconnection
+ tracker = get_debug_tracker()
+ tracker.track_websocket_disconnected(client_id)
+
+ # Clean up client state
+ await cleanup_client_state(client_id)
+
+
+@app.get("/health")
+async def health_check():
+ """Comprehensive health check for all services."""
+ health_status = {
+ "status": "healthy",
+ "timestamp": int(time.time()),
+ "services": {},
+ "config": {
+ "mongodb_uri": MONGODB_URI,
+ "ollama_url": OLLAMA_BASE_URL,
+ "qdrant_url": f"http://{QDRANT_BASE_URL}:6333",
+ "transcription_service": ("Deepgram WebSocket" if USE_DEEPGRAM else "Offline ASR"),
+ "asr_uri": (OFFLINE_ASR_TCP_URI if not USE_DEEPGRAM else "wss://api.deepgram.com"),
+ "deepgram_enabled": USE_DEEPGRAM,
+ "chunk_dir": str(CHUNK_DIR),
+ "active_clients": len(active_clients),
+ "new_conversation_timeout_minutes": NEW_CONVERSATION_TIMEOUT_MINUTES,
+ "action_items_enabled": True,
+ "audio_cropping_enabled": AUDIO_CROPPING_ENABLED,
+            "llm_provider": os.getenv("LLM_PROVIDER", "ollama"),
+            "llm_model": (
+                os.getenv("OPENAI_MODEL", "gpt-4o")
+                if os.getenv("LLM_PROVIDER", "ollama").lower() == "openai"
+                else os.getenv("OLLAMA_MODEL", "gemma3n:e4b")
+            ),
+ },
+ }
+
+ overall_healthy = True
+ critical_services_healthy = True
+
+ # Check MongoDB (critical service)
+ try:
+ await asyncio.wait_for(mongo_client.admin.command("ping"), timeout=5.0)
+ health_status["services"]["mongodb"] = {
+ "status": "✅ Connected",
+ "healthy": True,
+ "critical": True,
+ }
+ except asyncio.TimeoutError:
+ health_status["services"]["mongodb"] = {
+ "status": "❌ Connection Timeout (5s)",
+ "healthy": False,
+ "critical": True,
+ }
+ overall_healthy = False
+ critical_services_healthy = False
+ except Exception as e:
+ health_status["services"]["mongodb"] = {
+ "status": f"❌ Connection Failed: {str(e)}",
+ "healthy": False,
+ "critical": True,
+ }
+ overall_healthy = False
+ critical_services_healthy = False
+
+ # Check Ollama (non-critical service - may not be running)
+ try:
+ # Run in executor to avoid blocking the main thread
+ loop = asyncio.get_running_loop()
+ models = await asyncio.wait_for(loop.run_in_executor(None, ollama_client.list), timeout=8.0)
+ model_count = len(models.get("models", []))
+ health_status["services"]["ollama"] = {
+ "status": "✅ Connected",
+ "healthy": True,
+ "models": model_count,
+ "critical": False,
+ }
+ except asyncio.TimeoutError:
+ health_status["services"]["ollama"] = {
+ "status": "⚠️ Connection Timeout (8s) - Service may not be running",
+ "healthy": False,
+ "critical": False,
+ }
+ overall_healthy = False
+ except Exception as e:
+ health_status["services"]["ollama"] = {
+ "status": f"⚠️ Connection Failed: {str(e)} - Service may not be running",
+ "healthy": False,
+ "critical": False,
+ }
+ overall_healthy = False
+
+ # Check mem0 (depends on Ollama and Qdrant)
+ try:
+ # Test memory service connection with timeout
+ test_success = await memory_service.test_connection()
+ if test_success:
+ health_status["services"]["mem0"] = {
+ "status": "✅ Connected",
+ "healthy": True,
+ "critical": False,
+ }
+ else:
+ health_status["services"]["mem0"] = {
+ "status": "⚠️ Connection Test Failed",
+ "healthy": False,
+ "critical": False,
+ }
+ overall_healthy = False
+ except asyncio.TimeoutError:
+ health_status["services"]["mem0"] = {
+ "status": "⚠️ Connection Test Timeout (60s) - Depends on Ollama/Qdrant",
+ "healthy": False,
+ "critical": False,
+ }
+ overall_healthy = False
+ except Exception as e:
+ health_status["services"]["mem0"] = {
+ "status": f"⚠️ Connection Test Failed: {str(e)} - Check Ollama/Qdrant services",
+ "healthy": False,
+ "critical": False,
+ }
+ overall_healthy = False
+
+ # Check ASR service based on configuration
+ if USE_DEEPGRAM:
+ # Check Deepgram WebSocket connectivity
+ if DEEPGRAM_API_KEY:
+ health_status["services"]["deepgram"] = {
+ "status": "✅ API Key Configured",
+ "healthy": True,
+ "type": "WebSocket",
+ "critical": False,
+ }
+ else:
+ health_status["services"]["deepgram"] = {
+ "status": "❌ API Key Missing",
+ "healthy": False,
+ "type": "WebSocket",
+ "critical": False,
+ }
+ overall_healthy = False
+ else:
+ # Check offline ASR service (non-critical - may be external)
+ try:
+ test_client = AsyncTcpClient.from_uri(OFFLINE_ASR_TCP_URI)
+ await asyncio.wait_for(test_client.connect(), timeout=5.0)
+ await test_client.disconnect()
+ health_status["services"]["asr"] = {
+ "status": "✅ Connected",
+ "healthy": True,
+ "uri": OFFLINE_ASR_TCP_URI,
+ "critical": False,
+ }
+ except asyncio.TimeoutError:
+ health_status["services"]["asr"] = {
+ "status": "⚠️ Connection Timeout (5s) - Check external ASR service",
+ "healthy": False,
+ "uri": OFFLINE_ASR_TCP_URI,
+ "critical": False,
+ }
+ overall_healthy = False
+ except Exception as e:
+ health_status["services"]["asr"] = {
+ "status": f"⚠️ Connection Failed: {str(e)} - Check external ASR service",
+ "healthy": False,
+ "uri": OFFLINE_ASR_TCP_URI,
+ "critical": False,
+ }
+ overall_healthy = False
+
+ # Track health check results in debug tracker
+ try:
+ tracker = get_debug_tracker()
+ # Can add health check tracking to debug tracker if needed
+ pass
+ except Exception as e:
+ audio_logger.error(f"Failed to record health check metrics: {e}")
+
+ # Set overall status
+ health_status["overall_healthy"] = overall_healthy
+ health_status["critical_services_healthy"] = critical_services_healthy
+
+ if not critical_services_healthy:
+ health_status["status"] = "critical"
+ elif not overall_healthy:
+ health_status["status"] = "degraded"
+ else:
+ health_status["status"] = "healthy"
+
+ # Add helpful messages
+ if not overall_healthy:
+ messages = []
+ if not critical_services_healthy:
+ messages.append(
+ "Critical services (MongoDB) are unavailable - core functionality will not work"
+ )
+
+ unhealthy_optional = [
+ name
+ for name, service in health_status["services"].items()
+ if not service["healthy"] and not service.get("critical", True)
+ ]
+ if unhealthy_optional:
+ messages.append(f"Optional services unavailable: {', '.join(unhealthy_optional)}")
+
+ health_status["message"] = "; ".join(messages)
+
+ return JSONResponse(content=health_status, status_code=200)
+
+
+@app.get("/readiness")
+async def readiness_check():
+ """Simple readiness check for container orchestration."""
+ return JSONResponse(content={"status": "ready", "timestamp": int(time.time())}, status_code=200)
+
+
+if __name__ == "__main__":
+ import uvicorn
+
+ host = os.getenv("HOST", "0.0.0.0")
+ port = int(os.getenv("PORT", "8000"))
+ audio_logger.info("Starting Omi unified service at ws://%s:%s/ws", host, port)
+ uvicorn.run("main:app", host=host, port=port, reload=False)
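The health endpoint above rolls per-service results into an overall status of `healthy`, `degraded`, or `critical` (critical services down → `critical`; only optional services down → `degraded`). A minimal sketch of that rollup logic, detached from FastAPI; `classify_health` is a hypothetical helper name, not part of the backend:

```python
# Sketch of the overall-status rollup used by the /health endpoint.
# classify_health is illustrative, not a function in the backend.

def classify_health(services: dict) -> str:
    """Map per-service {"healthy": bool, "critical": bool} entries to an overall status."""
    # Services default to critical=True, matching service.get("critical", True) above.
    critical_ok = all(s["healthy"] for s in services.values() if s.get("critical", True))
    overall_ok = all(s["healthy"] for s in services.values())
    if not critical_ok:
        return "critical"
    if not overall_ok:
        return "degraded"
    return "healthy"

services = {
    "mongodb": {"healthy": True, "critical": True},
    "asr": {"healthy": False, "critical": False},
}
print(classify_health(services))  # prints "degraded": only an optional service is down
```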
diff --git a/backends/advanced-backend/src/memory/README.md b/backends/advanced-backend/src/advanced_omi_backend/memory/README.md
similarity index 100%
rename from backends/advanced-backend/src/memory/README.md
rename to backends/advanced-backend/src/advanced_omi_backend/memory/README.md
diff --git a/backends/advanced-backend/src/memory/__init__.py b/backends/advanced-backend/src/advanced_omi_backend/memory/__init__.py
similarity index 93%
rename from backends/advanced-backend/src/memory/__init__.py
rename to backends/advanced-backend/src/advanced_omi_backend/memory/__init__.py
index 4fccd040..7bb74b44 100644
--- a/backends/advanced-backend/src/memory/__init__.py
+++ b/backends/advanced-backend/src/advanced_omi_backend/memory/__init__.py
@@ -8,14 +8,14 @@
from .memory_service import (
MemoryService,
- init_memory_config,
get_memory_service,
+ init_memory_config,
shutdown_memory_service,
)
__all__ = [
"MemoryService",
- "init_memory_config",
+ "init_memory_config",
"get_memory_service",
"shutdown_memory_service",
-]
\ No newline at end of file
+]
diff --git a/backends/advanced-backend/src/advanced_omi_backend/memory/memory_service.py b/backends/advanced-backend/src/advanced_omi_backend/memory/memory_service.py
new file mode 100644
index 00000000..2a04e3c7
--- /dev/null
+++ b/backends/advanced-backend/src/advanced_omi_backend/memory/memory_service.py
@@ -0,0 +1,1309 @@
+"""Memory service implementation for Omi-audio service.
+
+This module provides:
+- Memory configuration and initialization
+- Memory operations (add, get, search, delete)
+- Action item extraction and management
+- Debug tracking and configurable extraction
+"""
+
+import asyncio
+import json
+import logging
+import os
+import time
+from concurrent.futures import ThreadPoolExecutor
+from typing import Optional
+
+from mem0 import Memory
+
+# Import debug tracker and config loader
+from advanced_omi_backend.debug_system_tracker import PipelineStage, get_debug_tracker
+from advanced_omi_backend.memory_config_loader import get_config_loader
+
+# Configure Mem0 telemetry based on environment variable
+# Set default to False for privacy unless explicitly enabled
+if not os.getenv("MEM0_TELEMETRY"):
+ os.environ["MEM0_TELEMETRY"] = "False"
+
+# Enable detailed mem0 logging to capture LLM responses
+mem0_logger = logging.getLogger("mem0")
+mem0_logger.setLevel(logging.DEBUG)
+
+# Also enable detailed ollama client logging
+ollama_logger = logging.getLogger("ollama")
+ollama_logger.setLevel(logging.DEBUG)
+
+# Enable httpx logging to see raw HTTP requests/responses to Ollama
+httpx_logger = logging.getLogger("httpx")
+httpx_logger.setLevel(logging.DEBUG)
+
+# Logger for memory operations
+memory_logger = logging.getLogger("memory_service")
+
+# Memory configuration
+MEM0_ORGANIZATION_ID = os.getenv("MEM0_ORGANIZATION_ID", "friend-lite-org")
+MEM0_PROJECT_ID = os.getenv("MEM0_PROJECT_ID", "audio-conversations")
+MEM0_APP_ID = os.getenv("MEM0_APP_ID", "omi-backend")
+
+# Ollama & Qdrant Configuration (these should match main config)
+OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://ollama:11434")
+QDRANT_BASE_URL = os.getenv("QDRANT_BASE_URL", "qdrant")
+
+# Timeout configurations
+OLLAMA_TIMEOUT_SECONDS = 1200 # Timeout for Ollama operations
+MEMORY_INIT_TIMEOUT_SECONDS = 60 # Timeout for memory initialization
+
+# Thread pool for blocking operations
+_MEMORY_EXECUTOR = ThreadPoolExecutor(max_workers=2, thread_name_prefix="memory_ops")
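`_MEMORY_EXECUTOR` is later combined with `loop.run_in_executor` and `asyncio.wait_for` so that blocking mem0 calls cannot stall the event loop and are bounded by a timeout. A self-contained sketch of that pattern; `slow_double` is a stand-in worker, not a function from this codebase:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

_EXECUTOR = ThreadPoolExecutor(max_workers=2, thread_name_prefix="memory_ops")

def slow_double(x: int) -> int:
    # Stand-in for a blocking mem0 operation (illustrative only).
    return x * 2

async def run_blocking(x: int, timeout: float = 5.0) -> int:
    loop = asyncio.get_running_loop()
    # Offload the blocking call to the pool and bound the wait,
    # mirroring how the memory service wraps Memory.from_config and add().
    return await asyncio.wait_for(
        loop.run_in_executor(_EXECUTOR, slow_double, x), timeout=timeout
    )

print(asyncio.run(run_blocking(21)))  # prints 42
```

On timeout, `asyncio.wait_for` raises `asyncio.TimeoutError`, which the service catches to log and fall back rather than crash.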
+
+
+def _build_mem0_config() -> dict:
+ """Build Mem0 configuration from YAML config and environment variables."""
+ config_loader = get_config_loader()
+ memory_config = config_loader.get_memory_extraction_config()
+ fact_config = config_loader.get_fact_extraction_config()
+ llm_settings = memory_config.get("llm_settings", {})
+
+ # Get LLM provider from environment or config
+ llm_provider = os.getenv("LLM_PROVIDER", "ollama").lower()
+
+ # Build LLM configuration based on provider
+ if llm_provider == "openai":
+ # Use dedicated OPENAI_MODEL environment variable with GPT-4o as default for better JSON parsing
+ openai_model = os.getenv("OPENAI_MODEL", "gpt-4o")
+
+ # Allow YAML config to override environment variable
+ model = llm_settings.get("model", openai_model)
+
+ memory_logger.info(f"Using OpenAI provider with model: {model}")
+
+ llm_config = {
+ "provider": "openai",
+ "config": {
+ "model": model,
+ "api_key": os.getenv("OPENAI_API_KEY"),
+ "temperature": llm_settings.get("temperature", 0.1),
+ "max_tokens": llm_settings.get("max_tokens", 2000),
+ },
+ }
+ # For OpenAI, use OpenAI embeddings
+ embedder_config = {
+ "provider": "openai",
+ "config": {
+ "model": "text-embedding-3-small",
+ "embedding_dims": 1536,
+ "api_key": os.getenv("OPENAI_API_KEY"),
+ },
+ }
+ embedding_dims = 1536
+ else: # Default to ollama
+ # Use dedicated OLLAMA_MODEL environment variable with fallback
+ ollama_model = os.getenv("OLLAMA_MODEL", "gemma3n:e4b")
+
+ # Allow YAML config to override environment variable
+ model = llm_settings.get("model", ollama_model)
+
+ memory_logger.info(f"Using Ollama provider with model: {model}")
+
+ llm_config = {
+ "provider": "ollama",
+ "config": {
+ "model": model,
+ "ollama_base_url": OLLAMA_BASE_URL,
+ "temperature": llm_settings.get("temperature", 0.1),
+ "max_tokens": llm_settings.get("max_tokens", 2000),
+ },
+ }
+ # For Ollama, use Ollama embeddings
+ embedder_config = {
+ "provider": "ollama",
+ "config": {
+ "model": "nomic-embed-text:latest",
+ "embedding_dims": 768,
+ "ollama_base_url": OLLAMA_BASE_URL,
+ },
+ }
+ embedding_dims = 768
+
+ # Valid mem0 configuration format based on official documentation
+ # See: https://docs.mem0.ai/platform/quickstart and https://github.com/mem0ai/mem0
+ mem0_config = {
+ "llm": llm_config,
+ "embedder": embedder_config,
+ "vector_store": {
+ "provider": "qdrant",
+ "config": {
+ "collection_name": "omi_memories",
+ "embedding_model_dims": embedding_dims,
+ "host": QDRANT_BASE_URL,
+ "port": 6333,
+ },
+ },
+ "version": "v1.1"
+ }
+
+ # Configure fact extraction - ALWAYS ENABLE for proper memory creation
+ fact_enabled = config_loader.is_fact_extraction_enabled()
+ memory_logger.info(f"YAML fact extraction enabled: {fact_enabled}")
+
+ # FORCE ENABLE fact extraction with working prompt format - UPDATED for more inclusive extraction
+ # Using custom_fact_extraction_prompt as documented in mem0 repo: https://github.com/mem0ai/mem0
+ formatted_fact_prompt = """
+Please extract ALL relevant facts from the conversation, including topics discussed, activities mentioned, people referenced, emotions expressed, and any other notable details.
+Extract granular, specific facts rather than broad summaries. Be inclusive and extract multiple facts even from casual conversations.
+
+Here are some few shot examples:
+
+Input: Hi.
+Output: {"facts" : ["Greeting exchanged"]}
+
+Input: I need to buy groceries tomorrow.
+Output: {"facts" : ["Need to buy groceries tomorrow", "Shopping task mentioned", "Time reference to tomorrow"]}
+
+Input: The meeting is at 3 PM on Friday.
+Output: {"facts" : ["Meeting scheduled for 3 PM on Friday", "Business meeting mentioned", "Specific time commitment", "Friday scheduling"]}
+
+Input: We are talking about unicorns.
+Output: {"facts" : ["Conversation about unicorns", "Fantasy topic discussed", "Mythical creatures mentioned"]}
+
+Input: My alarm keeps ringing.
+Output: {"facts" : ["Alarm is ringing", "Audio disturbance mentioned", "Repetitive sound issue", "Device malfunction or setting"]}
+
+Input: Bro, he just did it for the funny. Every move does not need to be perfect.
+Output: {"facts" : ["Gaming strategy discussed", "Casual conversation with friend", "Philosophy about game moves", "Humorous game action mentioned", "Perfectionism topic", "Gaming advice given"]}
+
+Now extract facts from the following conversation. Return only JSON format with "facts" key. Be thorough and extract multiple specific facts. ALWAYS extract at least one fact unless the input is completely empty or meaningless.
+"""
+ mem0_config["custom_fact_extraction_prompt"] = formatted_fact_prompt
+ memory_logger.info(f"โ
FORCED fact extraction enabled with working JSON prompt format")
+
+ memory_logger.debug(f"Final mem0_config: {json.dumps(mem0_config, indent=2)}")
+ return mem0_config
+
+
+# Global memory configuration - built dynamically from YAML config
+MEM0_CONFIG = _build_mem0_config()
+
+# Action item extraction is now handled by ActionItemsService
+# using configuration from memory_config.yaml
+
+# Global instances
+_memory_service = None
+_process_memory = None # For worker processes
+
+
+def init_memory_config(
+ ollama_base_url: Optional[str] = None,
+ qdrant_base_url: Optional[str] = None,
+ organization_id: Optional[str] = None,
+ project_id: Optional[str] = None,
+ app_id: Optional[str] = None,
+) -> dict:
+ """Initialize and return memory configuration with optional overrides."""
+ global MEM0_CONFIG, MEM0_ORGANIZATION_ID, MEM0_PROJECT_ID, MEM0_APP_ID
+
+ memory_logger.info(
+ f"Initializing MemoryService with Qdrant URL: {qdrant_base_url} and Ollama base URL: {ollama_base_url}"
+ )
+
+ if ollama_base_url:
+ MEM0_CONFIG["llm"]["config"]["ollama_base_url"] = ollama_base_url
+ MEM0_CONFIG["embedder"]["config"]["ollama_base_url"] = ollama_base_url
+
+ if qdrant_base_url:
+ MEM0_CONFIG["vector_store"]["config"]["host"] = qdrant_base_url
+
+ if organization_id:
+ MEM0_ORGANIZATION_ID = organization_id
+
+ if project_id:
+ MEM0_PROJECT_ID = project_id
+
+ if app_id:
+ MEM0_APP_ID = app_id
+
+ return MEM0_CONFIG
+
+
+def _init_process_memory():
+ """Initialize memory instance once per worker process."""
+ global _process_memory
+ if _process_memory is None:
+ # Build fresh config to ensure we get latest YAML settings
+ config = _build_mem0_config()
+ # Log config in chunks to avoid truncation
+ memory_logger.info("=== MEM0 CONFIG START ===")
+ for key, value in config.items():
+ memory_logger.info(f" {key}: {json.dumps(value, indent=4)}")
+ memory_logger.info("=== MEM0 CONFIG END ===")
+ _process_memory = Memory.from_config(config)
+ return _process_memory
+
+
+def _add_memory_to_store(
+ transcript: str,
+ client_id: str,
+ audio_uuid: str,
+ user_id: str,
+ user_email: str,
+ allow_update: bool = False,
+) -> tuple[bool, list[str]]:
+ """
+ Function to add memory in a separate process.
+ This function will be pickled and run in a process pool.
+ Uses a persistent memory instance per process.
+
+ Args:
+ transcript: The conversation transcript
+ client_id: The client ID that generated the audio
+ audio_uuid: Unique identifier for the audio
+ user_id: Database user ID to associate the memory with
+ user_email: User email for easy identification
+
+ Returns:
+ tuple: (success: bool, memory_ids: list[str])
+ """
+ start_time = time.time()
+ created_memory_ids = []
+
+ try:
+ # Get configuration and debug tracker
+ config_loader = get_config_loader()
+ debug_tracker = get_debug_tracker()
+
+ # Create a transaction for memory processing tracking
+ transaction_id = debug_tracker.create_transaction(
+ user_id=user_id,
+ client_id=client_id,
+ conversation_id=audio_uuid, # Use audio_uuid as conversation_id
+ )
+
+ # Start memory processing stage
+ debug_tracker.track_event(
+ transaction_id,
+ PipelineStage.MEMORY_STARTED,
+ True,
+ transcript_length=len(transcript) if transcript else 0,
+ user_email=user_email,
+ audio_uuid=audio_uuid,
+ )
+
+ # Check if transcript is empty or too short to be meaningful
+ # MODIFIED: Reduced minimum length from 10 to 1 character to process almost all transcripts
+ if not transcript or len(transcript.strip()) < 1:
+ debug_tracker.track_event(
+ transaction_id,
+ PipelineStage.MEMORY_COMPLETED,
+ False,
+ error_message=f"Transcript empty: {len(transcript.strip()) if transcript else 0} chars",
+ )
+ memory_logger.info(
+ f"Skipping memory processing for {audio_uuid} - transcript completely empty: {len(transcript.strip()) if transcript else 0} chars"
+ )
+ return True, [] # Not an error, just skipped
+
+ # Check if conversation should be skipped - BUT always process if we have any content
+ # MODIFIED: Only skip if explicitly disabled, not based on quality control for short transcripts
+ if config_loader.should_skip_conversation(transcript):
+ # If transcript is very short (< 10 chars), force processing anyway to ensure all transcripts are stored
+ if len(transcript.strip()) < 10:
+ memory_logger.info(
+ f"Overriding quality control skip for short transcript {audio_uuid} - ensuring all transcripts are stored"
+ )
+ else:
+ debug_tracker.track_event(
+ transaction_id,
+ PipelineStage.MEMORY_COMPLETED,
+ False,
+ error_message="Conversation skipped due to quality control",
+ )
+ memory_logger.info(
+ f"Skipping memory processing for {audio_uuid} due to quality control"
+ )
+ return True, [] # Not an error, just skipped
+
+ # Get memory extraction configuration
+ memory_config = config_loader.get_memory_extraction_config()
+ if not memory_config.get("enabled", True):
+ debug_tracker.track_event(
+ transaction_id,
+ PipelineStage.MEMORY_COMPLETED,
+ False,
+ error_message="Memory extraction disabled",
+ )
+ memory_logger.info(f"Memory extraction disabled for {audio_uuid}")
+ return True, []
+
+ # Get or create the persistent memory instance for this process
+ process_memory = _init_process_memory()
+
+ # Use configured prompt or default
+ prompt = memory_config.get(
+ "prompt", "Please extract summary of the conversation - any topics or names"
+ )
+
+ # Get LLM settings for logging and testing
+ llm_settings = memory_config.get("llm_settings", {})
+ model_name = llm_settings.get("model", "gemma3n:e4b")
+
+ # Add the memory with configured settings and error handling
+ memory_logger.info(f"Adding memory for {audio_uuid} with prompt: {prompt[:100]}...")
+ memory_logger.info(f"Transcript length: {len(transcript)} chars")
+ memory_logger.info(f"Transcript preview: {transcript[:300]}...")
+
+ # Additional validation - transcript quality has already been checked above
+ memory_logger.info(f"Processing transcript with {len(transcript.strip())} characters")
+
+ # Log LLM model being used
+ memory_logger.info(f"Using LLM model: {model_name}")
+
+ memory_logger.info(f"Starting mem0 processing for {audio_uuid}...")
+ mem0_start_time = time.time()
+
+ # DEBUGGING: Test OpenAI directly before Mem0 call
+ memory_logger.info(f"๐ DEBUGGING: Testing OpenAI connection directly...")
+ try:
+ import openai
+ import os
+
+ openai_api_key = os.getenv("OPENAI_API_KEY")
+ llm_provider = os.getenv("LLM_PROVIDER", "").lower()
+ openai_model = os.getenv("OPENAI_MODEL", "gpt-4o")
+
+ memory_logger.info(f"๐ OpenAI API Key present: {bool(openai_api_key)}")
+ memory_logger.info(f"๐ LLM Provider: {llm_provider}")
+ memory_logger.info(f"๐ OpenAI Model: {openai_model}")
+ memory_logger.info(f"๐ Full prompt being sent: {prompt}")
+ memory_logger.info(f"๐ Full transcript being processed: {transcript}")
+
+ if llm_provider == "openai" and openai_api_key:
+ # Test direct OpenAI call with same system prompt mem0 uses
+ client = openai.OpenAI(api_key=openai_api_key)
+
+ # Try the exact same call that mem0 would make for memory extraction
+ memory_extraction_prompt = f"""
+ You are an expert at extracting memories from conversations.
+
+ Instructions:
+ 1. Extract key facts, topics, and insights from the conversation
+ 2. Focus on memorable information that could be useful later
+ 3. Include names, places, events, preferences, and important details
+ 4. Format as clear, concise memories
+ 5. If the conversation contains meaningful content, always extract something
+
+ Custom prompt: {prompt}
+
+ Extract memories from this conversation:
+ """
+
+ test_response = client.chat.completions.create(
+ model=openai_model,
+ messages=[
+ {"role": "system", "content": memory_extraction_prompt},
+ {"role": "user", "content": transcript}
+ ],
+ temperature=0.1,
+ max_tokens=1000
+ )
+
+ response_content = test_response.choices[0].message.content
+ memory_logger.info(f"๐ DIRECT OpenAI Response: {response_content}")
+ memory_logger.info(f"๐ OpenAI Response Usage: {test_response.usage}")
+ memory_logger.info(f"๐ Response Length: {len(response_content) if response_content else 0} chars")
+
+ # Also test with a simpler prompt to see if it's a prompt issue
+ simple_response = client.chat.completions.create(
+ model=openai_model,
+ messages=[
+ {"role": "system", "content": "Extract key information from this conversation as bullet points:"},
+ {"role": "user", "content": transcript}
+ ],
+ temperature=0.1,
+ max_tokens=500
+ )
+
+ simple_content = simple_response.choices[0].message.content
+ memory_logger.info(f"๐ SIMPLE OpenAI Response: {simple_content}")
+
+ else:
+ memory_logger.warning(f"๐ OpenAI not configured properly for direct test")
+
+ except Exception as e:
+ memory_logger.error(f"๐ Direct OpenAI test failed: {e}")
+
+ try:
+ memory_logger.info(f"๐ Now calling Mem0 with the same transcript...")
+
+ # Log the mem0 configuration being used
+ memory_logger.info(f"๐ Mem0 config LLM provider: {MEM0_CONFIG.get('llm', {}).get('provider', 'unknown')}")
+ memory_logger.info(f"๐ Mem0 config LLM model: {MEM0_CONFIG.get('llm', {}).get('config', {}).get('model', 'unknown')}")
+ memory_logger.info(f"๐ Mem0 config custom prompt: {MEM0_CONFIG.get('custom_prompt', 'none')}")
+ memory_logger.info(f"๐ Mem0 fact extraction disabled: {MEM0_CONFIG.get('custom_fact_extraction_prompt', 'not_set') == ''}")
+
+ # Log the exact parameters being passed to mem0
+ metadata = {
+ "source": "offline_streaming",
+ "client_id": client_id,
+ "user_email": user_email,
+ "audio_uuid": audio_uuid,
+ "timestamp": int(time.time()),
+ "conversation_context": "audio_transcription",
+ "device_type": "audio_recording",
+ "organization_id": MEM0_ORGANIZATION_ID,
+ "project_id": MEM0_PROJECT_ID,
+ "app_id": MEM0_APP_ID,
+ "extraction_method": "configurable",
+ "config_enabled": True,
+ }
+
+ memory_logger.info(f"๐ Mem0 add() parameters:")
+ memory_logger.info(f"๐ - transcript: {transcript}")
+ memory_logger.info(f"๐ - user_id: {user_id}")
+ memory_logger.info(f"๐ - metadata: {json.dumps(metadata, indent=2)}")
+ memory_logger.info(f"๐ - prompt: {prompt}")
+
+ result = process_memory.add(
+ transcript,
+ user_id=user_id, # Use database user_id instead of client_id
+ metadata=metadata,
+ prompt=prompt,
+ )
+
+ mem0_duration = time.time() - mem0_start_time
+ memory_logger.info(f"Mem0 processing completed in {mem0_duration:.2f}s")
+ memory_logger.info(
+ f"Successfully added memory for {audio_uuid}, result type: {type(result)}"
+ )
+
+ # Log detailed memory result to understand what's being stored
+ memory_logger.info(f"Raw mem0 result for {audio_uuid}: {result}")
+ memory_logger.info(
+ f"Result keys: {list(result.keys()) if isinstance(result, dict) else 'not a dict'}"
+ )
+
+ # Extract memory IDs from the result
+ if isinstance(result, dict):
+ # Check for multiple memories in results list
+ results_list = result.get("results", [])
+ if results_list:
+ for memory_item in results_list:
+ memory_id = memory_item.get("id")
+ if memory_id:
+ created_memory_ids.append(memory_id)
+ memory_logger.info(f"Extracted memory ID: {memory_id}")
+ else:
+ # Check for single memory (old format or fallback)
+ memory_id = result.get("id")
+ if memory_id:
+ created_memory_ids.append(memory_id)
+ memory_logger.info(f"Extracted single memory ID: {memory_id}")
+
+ # Check if mem0 returned empty results (this can be legitimate)
+ if isinstance(result, dict) and result.get("results") == []:
+ memory_logger.info(
+ f"Mem0 returned empty results for {audio_uuid} - LLM determined no memorable content"
+ )
+ # Create a minimal tracking entry for debugging purposes
+ # MODIFIED: Enhanced to create a proper memory entry that will be visible in UI
+ import uuid
+
+ unique_suffix = str(uuid.uuid4())[:8]
+
+ # Create a more descriptive memory entry for transcripts without memorable content
+ memory_text = f"Conversation transcript: {transcript}"
+ if len(memory_text) > 200:
+ memory_text = f"Conversation transcript: {transcript[:180]}... (truncated)"
+
+ fallback_memory_id = f"transcript_{audio_uuid}_{int(time.time() * 1000)}_{unique_suffix}"
+ created_memory_ids.append(fallback_memory_id)
+
+ result = {
+ "id": fallback_memory_id,
+ "memory": memory_text,
+ "user_id": user_id, # Ensure user_id is included for proper retrieval
+ "metadata": {
+ "empty_results": True,
+ "audio_uuid": audio_uuid,
+ "client_id": client_id,
+ "user_email": user_email,
+ "timestamp": int(time.time()),
+ "llm_model": model_name,
+ "reason": "llm_returned_empty_results",
+ "source": "offline_streaming",
+ "conversation_context": "audio_transcription",
+ "device_type": "audio_recording",
+ "organization_id": MEM0_ORGANIZATION_ID,
+ "project_id": MEM0_PROJECT_ID,
+ "app_id": MEM0_APP_ID,
+ "full_transcript": transcript, # Store full transcript for reference
+ "transcript_length": len(transcript),
+ "processing_forced": True, # Indicate this was processed despite empty results
+ },
+ "results": [], # Keep the original empty results for consistency
+ "created_at": time.strftime("%Y-%m-%dT%H:%M:%S.%fZ"),
+ }
+ memory_logger.info(
+ f"Created enhanced memory entry for transcript without memorable content: {result['id']}"
+ )
+
+ # Also try to store this in the actual mem0 system as a basic memory
+ try:
+ # Create a simple memory entry that mem0 can store
+ fallback_result = process_memory.add(
+ f"Transcript recorded: {transcript[:100]}{'...' if len(transcript) > 100 else ''}",
+ user_id=user_id,
+ metadata={
+ "source": "offline_streaming",
+ "client_id": client_id,
+ "user_email": user_email,
+ "audio_uuid": audio_uuid,
+ "timestamp": int(time.time()),
+ "conversation_context": "audio_transcription",
+ "device_type": "audio_recording",
+ "organization_id": MEM0_ORGANIZATION_ID,
+ "project_id": MEM0_PROJECT_ID,
+ "app_id": MEM0_APP_ID,
+ "forced_storage": True,
+ "original_transcript": transcript,
+ "processing_reason": "ensure_all_transcripts_stored",
+ },
+ prompt="Store this transcript as a basic memory entry.",
+ )
+ if fallback_result and isinstance(fallback_result, dict):
+ fallback_memory_id = fallback_result.get("id")
+ if fallback_memory_id and fallback_memory_id not in created_memory_ids:
+ created_memory_ids.append(fallback_memory_id)
+ memory_logger.info(
+ f"Successfully stored fallback memory entry for {audio_uuid}"
+ )
+ result = fallback_result # Use the successful mem0 result
+ else:
+ memory_logger.info(
+ f"Fallback memory storage failed, using tracking entry for {audio_uuid}"
+ )
+ except Exception as fallback_error:
+ memory_logger.warning(
+ f"Failed to store fallback memory for {audio_uuid}: {fallback_error}"
+ )
+ # Continue with the tracking entry we created above
+
+ if isinstance(result, dict):
+ results_list = result.get("results", [])
+ if results_list:
+ memory_count = len(results_list)
+ memory_logger.info(
+ f"Successfully created {memory_count} memories for {audio_uuid}"
+ )
+
+ # Log details of each memory
+ for i, memory_item in enumerate(results_list):
+ memory_id = memory_item.get("id", "unknown")
+ memory_text = memory_item.get("memory", "unknown")
+ event_type = memory_item.get("event", "unknown")
+ memory_logger.info(
+ f"Memory {i+1}: ID={memory_id[:8]}..., Event={event_type}, Text={memory_text[:80]}..."
+ )
+ else:
+ # Check for old format (direct id/memory keys)
+ memory_id = result.get("id", result.get("memory_id", "unknown"))
+ memory_text = result.get(
+ "memory", result.get("text", result.get("content", "unknown"))
+ )
+ memory_logger.info(
+ f"Single memory - ID: {memory_id}, Text: {memory_text[:100] if isinstance(memory_text, str) else memory_text}..."
+ )
+
+ memory_logger.info(f"Memory metadata: {result.get('metadata', {})}")
+
+ # Check for other possible keys in result
+ for key, value in result.items():
+ if key not in ["results", "id", "memory", "metadata"]:
+ memory_logger.info(f"Additional result key '{key}': {str(value)[:100]}...")
+
+ except TimeoutError:
+ # Handle timeout gracefully
+ error_type = "TimeoutError"
+ memory_logger.error(f"Timeout while adding memory for {audio_uuid}")
+
+ # Create a fallback memory entry
+ try:
+ # Store the transcript as a basic memory without using mem0
+ import uuid
+
+ unique_suffix = str(uuid.uuid4())[:8]
+ fallback_memory_id = f"fallback_{audio_uuid}_{int(time.time() * 1000)}_{unique_suffix}"
+ created_memory_ids.append(fallback_memory_id)
+
+ result = {
+ "id": fallback_memory_id,
+ "memory": f"Conversation summary: {transcript[:500]}{'...' if len(transcript) > 500 else ''}",
+ "metadata": {
+ "fallback_reason": error_type,
+ "original_error": "Timeout during memory processing",
+ "audio_uuid": audio_uuid,
+ "client_id": client_id,
+ "user_email": user_email,
+ "timestamp": int(time.time()),
+ "mem0_bypassed": True,
+ },
+ }
+ memory_logger.warning(
+ f"Created fallback memory for {audio_uuid} due to timeout"
+ )
+ except Exception as fallback_error:
+ memory_logger.error(
+ f"Failed to create fallback memory for {audio_uuid}: {fallback_error}"
+ )
+ raise TimeoutError(f"Memory processing timeout for {audio_uuid}")
+
+ except Exception as error:
+ # Handle other errors gracefully
+ error_type = type(error).__name__
+ memory_logger.error(f"Error while adding memory for {audio_uuid}: {error}")
+
+ # Create a fallback memory entry
+ try:
+ # Store the transcript as a basic memory without using mem0
+ import uuid
+
+ unique_suffix = str(uuid.uuid4())[:8]
+ fallback_memory_id = f"fallback_{audio_uuid}_{int(time.time() * 1000)}_{unique_suffix}"
+ created_memory_ids.append(fallback_memory_id)
+
+ result = {
+ "id": fallback_memory_id,
+ "memory": f"Conversation summary: {transcript[:500]}{'...' if len(transcript) > 500 else ''}",
+ "metadata": {
+ "fallback_reason": error_type,
+ "original_error": str(error),
+ "audio_uuid": audio_uuid,
+ "client_id": client_id,
+ "user_email": user_email,
+ "timestamp": int(time.time()),
+ "mem0_bypassed": True,
+ },
+ }
+ memory_logger.warning(
+ f"Created fallback memory for {audio_uuid} due to mem0 error: {error_type}"
+ )
+ except Exception as fallback_error:
+ memory_logger.error(
+ f"Failed to create fallback memory for {audio_uuid}: {fallback_error}"
+ )
+ raise error # Re-raise original error if fallback fails
+
+ # Record successful memory completion
+ processing_time_ms = (time.time() - start_time) * 1000
+
+ # Record the memory extraction
+ memory_id = result.get("id") if isinstance(result, dict) else str(result)
+ memory_text = result.get("memory") if isinstance(result, dict) else str(result)
+
+ debug_tracker.track_event(
+ transaction_id,
+ PipelineStage.MEMORY_COMPLETED,
+ True,
+ processing_time_ms=processing_time_ms,
+ memory_id=memory_id,
+ memory_text=str(memory_text)[:100] if memory_text else "none",
+ transcript_length=len(transcript),
+ llm_model=memory_config.get("llm_settings", {}).get("model", "llama3.1:latest"),
+ )
+
+ memory_logger.info(f"Successfully processed memory for {audio_uuid}, created {len(created_memory_ids)} memories: {created_memory_ids}")
+ return True, created_memory_ids
+
+ except Exception as e:
+ processing_time_ms = (time.time() - start_time) * 1000
+ memory_logger.error(f"Error adding memory for {audio_uuid}: {e}")
+
+ # Record debug information for failure
+ debug_tracker.track_event(
+ transaction_id,
+ PipelineStage.MEMORY_COMPLETED,
+ False,
+ error_message=str(e),
+ processing_time_ms=processing_time_ms,
+ transcript_length=len(transcript) if transcript else 0,
+ )
+
+ return False, []
+
+
+# Action item extraction functions removed - now handled by ActionItemsService
+# See action_items_service.py for the main action item processing logic
+
+
+# Action item storage functions removed - now handled by ActionItemsService
+# See action_items_service.py for the main action item processing logic
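The result parsing in `_add_memory_to_store` accepts two shapes from mem0's `add()`: the current `{"results": [{"id": ...}, ...]}` list and an older flat `{"id": ...}` fallback. That branching can be sketched as a small pure function; `extract_memory_ids` is a hypothetical helper name, not a function in the source:

```python
# Sketch of the memory-ID extraction in _add_memory_to_store (illustrative helper).

def extract_memory_ids(result) -> list[str]:
    """Collect memory IDs from either mem0 result shape; empty list for anything else."""
    if not isinstance(result, dict):
        return []
    # Newer format: a "results" list of memory items.
    ids = [item["id"] for item in result.get("results", []) if item.get("id")]
    # Older format: a single flat "id" key.
    if not ids and result.get("id"):
        ids.append(result["id"])
    return ids

print(extract_memory_ids({"results": [{"id": "a1"}, {"id": "b2"}]}))  # prints ['a1', 'b2']
```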
+
+
+class MemoryService:
+ """Service class for managing memory operations."""
+
+ def __init__(self):
+ self.memory = None
+ self._initialized = False
+
+ async def initialize(self):
+ """Initialize the memory service with timeout protection."""
+ if self._initialized:
+ return
+
+ try:
+ # Log Qdrant and LLM URLs
+ # Fall back to a provider label rather than logging the OpenAI API key
+ llm_url = MEM0_CONFIG['llm']['config'].get('ollama_base_url', 'OpenAI')
+ memory_logger.info(
+ f"Initializing MemoryService with Qdrant URL: {MEM0_CONFIG['vector_store']['config']['host']} and LLM: {llm_url}"
+ )
+
+ # Initialize main memory instance with timeout protection
+ loop = asyncio.get_running_loop()
+ # Build fresh config to ensure we get latest YAML settings
+ config = _build_mem0_config()
+ self.memory = await asyncio.wait_for(
+ loop.run_in_executor(_MEMORY_EXECUTOR, Memory.from_config, config),
+ timeout=MEMORY_INIT_TIMEOUT_SECONDS,
+ )
+ self._initialized = True
+ memory_logger.info("Memory service initialized successfully")
+
+ except asyncio.TimeoutError:
+ memory_logger.error(
+ f"Memory service initialization timed out after {MEMORY_INIT_TIMEOUT_SECONDS}s"
+ )
+ raise Exception("Memory service initialization timeout")
+ except Exception as e:
+ memory_logger.error(f"Failed to initialize memory service: {e}")
+ raise
+
+ async def add_memory(
+ self,
+ transcript: str,
+ client_id: str,
+ audio_uuid: str,
+ user_id: str,
+ user_email: str,
+ allow_update: bool = False,
+ chunk_repo=None,
+ ) -> bool:
+ """Add memory in background process (non-blocking).
+
+ Args:
+ transcript: The conversation transcript
+ client_id: The client ID that generated the audio
+ audio_uuid: Unique identifier for the audio
+ user_id: Database user ID to associate the memory with
+ user_email: User email for identification
+ allow_update: Whether to allow updating existing memories for this audio_uuid
+ chunk_repo: ChunkRepo instance to update database relationships (optional)
+ """
+ if not self._initialized:
+ try:
+ await asyncio.wait_for(self.initialize(), timeout=MEMORY_INIT_TIMEOUT_SECONDS)
+ except asyncio.TimeoutError:
+ memory_logger.error(f"Memory initialization timed out for {audio_uuid}")
+ return False
+
+ try:
+ # Run the blocking operation in executor with timeout
+ loop = asyncio.get_running_loop()
+ success, created_memory_ids = await asyncio.wait_for(
+ loop.run_in_executor(
+ _MEMORY_EXECUTOR,
+ _add_memory_to_store,
+ transcript,
+ client_id,
+ audio_uuid,
+ user_id,
+ user_email,
+ allow_update,
+ ),
+ timeout=OLLAMA_TIMEOUT_SECONDS,
+ )
+ if success:
+ memory_logger.info(
+ f"Added transcript for {audio_uuid} to mem0 (user: {user_email}, client: {client_id})"
+ )
+ # Update the database relationship if memories were created and chunk_repo is available
+ if created_memory_ids and chunk_repo:
+ try:
+ for memory_id in created_memory_ids:
+ await chunk_repo.add_memory_reference(audio_uuid, memory_id, "created")
+ memory_logger.info(f"Added memory reference {memory_id} to audio chunk {audio_uuid}")
+ except Exception as db_error:
+ memory_logger.error(f"Failed to update database relationship for {audio_uuid}: {db_error}")
+ # Don't fail the entire operation if database update fails
+ elif created_memory_ids and not chunk_repo:
+ memory_logger.warning(f"Created memories {created_memory_ids} for {audio_uuid} but no chunk_repo provided to update database relationship")
+ else:
+ memory_logger.error(f"Failed to add memory for {audio_uuid}")
+ return success
+ except asyncio.TimeoutError:
+ memory_logger.error(
+ f"Memory addition timed out after {OLLAMA_TIMEOUT_SECONDS}s for {audio_uuid}"
+ )
+ return False
+ except Exception as e:
+ memory_logger.error(f"Error adding memory for {audio_uuid}: {e}")
+ return False
+
+ # Action item methods (get_action_items, update_action_item_status,
+ # search_action_items, delete_action_item) removed - now handled by
+ # ActionItemsService. See action_items_service.py for the processing logic.
+
+ def get_all_memories(self, user_id: str, limit: int = 100) -> list:
+ """Get all memories for a user, filtering and prioritizing semantic memories over fallback transcript memories."""
+ if not self._initialized:
+ # This is a sync method, so we need to handle initialization differently
+ loop = asyncio.get_event_loop()
+ if loop.is_running():
+ # If we're in an async context, we can't call initialize() directly
+ # This should be handled by the caller
+ raise Exception("Memory service not initialized - call await initialize() first")
+ else:
+ # We're in a sync context, run the async initialize
+ loop.run_until_complete(self.initialize())
+
+ assert self.memory is not None, "Memory service not initialized"
+ try:
+ # Get more memories than requested to account for filtering
+ fetch_limit = min(limit * 3, 500) # Get up to 3x requested amount for filtering
+ memories_response = self.memory.get_all(user_id=user_id, limit=fetch_limit)
+
+ # Handle different response formats from Mem0
+ raw_memories = []
+ if isinstance(memories_response, dict):
+ if "results" in memories_response:
+ # New paginated format - take the results list
+ raw_memories = memories_response["results"]
+ else:
+ # Old format - convert dict values to list
+ raw_memories = list(memories_response.values()) if memories_response else []
+ elif isinstance(memories_response, list):
+ # Already a list
+ raw_memories = memories_response
+ else:
+ memory_logger.warning(
+ f"Unexpected memory response format: {type(memories_response)}"
+ )
+ return []
+
+ # Filter and prioritize memories
+ semantic_memories = []
+ fallback_memories = []
+
+ for memory in raw_memories:
+ metadata = memory.get("metadata", {})
+ memory_id = memory.get("id", "")
+
+ # Check if this is a fallback transcript memory
+ is_fallback = (
+ metadata.get("empty_results") is True
+ or metadata.get("reason") == "llm_returned_empty_results"
+ or str(memory_id).startswith("transcript_")
+ )
+
+ if is_fallback:
+ fallback_memories.append(memory)
+ else:
+ semantic_memories.append(memory)
+
+ # Prioritize semantic memories, but include fallback if no semantic memories exist
+ if semantic_memories:
+ # Return semantic memories first, up to the limit
+ result = semantic_memories[:limit]
+ memory_logger.info(f"Returning {len(result)} semantic memories for user {user_id} (filtered out {len(fallback_memories)} fallback memories)")
+ else:
+ # If no semantic memories, return fallback memories
+ result = fallback_memories[:limit]
+ memory_logger.info(f"No semantic memories found for user {user_id}, returning {len(result)} fallback memories")
+
+ return result
+
+ except Exception as e:
+ memory_logger.error(f"Error fetching memories for user {user_id}: {e}")
+ raise
+
+ def get_all_memories_unfiltered(self, user_id: str, limit: int = 100) -> list:
+ """Get all memories for a user without filtering fallback memories (for debugging)."""
+ if not self._initialized:
+ # This is a sync method, so we need to handle initialization differently
+ loop = asyncio.get_event_loop()
+ if loop.is_running():
+ # If we're in an async context, we can't call initialize() directly
+ # This should be handled by the caller
+ raise Exception("Memory service not initialized - call await initialize() first")
+ else:
+ # We're in a sync context, run the async initialize
+ loop.run_until_complete(self.initialize())
+
+ assert self.memory is not None, "Memory service not initialized"
+ try:
+ memories_response = self.memory.get_all(user_id=user_id, limit=limit)
+
+ # Handle different response formats from Mem0
+ if isinstance(memories_response, dict):
+ if "results" in memories_response:
+ # New paginated format - return the results list
+ return memories_response["results"]
+ else:
+ # Old format - convert dict values to list
+ return list(memories_response.values()) if memories_response else []
+ elif isinstance(memories_response, list):
+ # Already a list
+ return memories_response
+ else:
+ memory_logger.warning(
+ f"Unexpected memory response format: {type(memories_response)}"
+ )
+ return []
+
+ except Exception as e:
+ memory_logger.error(f"Error fetching unfiltered memories for user {user_id}: {e}")
+ raise
+
+ def search_memories(self, query: str, user_id: str, limit: int = 10) -> list:
+ """Search memories using semantic similarity, prioritizing semantic memories over fallback."""
+ if not self._initialized:
+ # This is a sync method, so we need to handle initialization differently
+ loop = asyncio.get_event_loop()
+ if loop.is_running():
+ # If we're in an async context, we can't call initialize() directly
+ # This should be handled by the caller
+ raise Exception("Memory service not initialized - call await initialize() first")
+ else:
+ # We're in a sync context, run the async initialize
+ loop.run_until_complete(self.initialize())
+
+ assert self.memory is not None, "Memory service not initialized"
+ try:
+ # Get more results than requested to account for filtering
+ search_limit = min(limit * 3, 100)
+ memories_response = self.memory.search(query=query, user_id=user_id, limit=search_limit)
+
+ # Handle different response formats from Mem0
+ raw_memories = []
+ if isinstance(memories_response, dict):
+ if "results" in memories_response:
+ # New paginated format - take the results list
+ raw_memories = memories_response["results"]
+ else:
+ # Old format - convert dict values to list
+ raw_memories = list(memories_response.values()) if memories_response else []
+ elif isinstance(memories_response, list):
+ # Already a list
+ raw_memories = memories_response
+ else:
+ memory_logger.warning(
+ f"Unexpected search response format: {type(memories_response)}"
+ )
+ return []
+
+ # Filter and prioritize memories
+ semantic_memories = []
+ fallback_memories = []
+
+ for memory in raw_memories:
+ metadata = memory.get("metadata", {})
+ memory_id = memory.get("id", "")
+
+ # Check if this is a fallback transcript memory
+ is_fallback = (
+ metadata.get("empty_results") is True
+ or metadata.get("reason") == "llm_returned_empty_results"
+ or str(memory_id).startswith("transcript_")
+ )
+
+ if is_fallback:
+ fallback_memories.append(memory)
+ else:
+ semantic_memories.append(memory)
+
+ # Prioritize semantic memories in search results
+ if semantic_memories:
+ result = semantic_memories[:limit]
+ memory_logger.info(f"Search returned {len(result)} semantic memories for query '{query}' (filtered out {len(fallback_memories)} fallback memories)")
+ else:
+ # If no semantic memories match, include fallback memories
+ result = fallback_memories[:limit]
+ memory_logger.info(f"Search found no semantic memories for query '{query}', returning {len(result)} fallback memories")
+
+ return result
+
+ except Exception as e:
+ memory_logger.error(f"Error searching memories for user {user_id}: {e}")
+ raise
+
+ def delete_memory(self, memory_id: str) -> bool:
+ """Delete a specific memory by ID."""
+ if not self._initialized:
+ # This is a sync method, so we need to handle initialization differently
+ loop = asyncio.get_event_loop()
+ if loop.is_running():
+ # If we're in an async context, we can't call initialize() directly
+ # This should be handled by the caller
+ raise Exception("Memory service not initialized - call await initialize() first")
+ else:
+ # We're in a sync context, run the async initialize
+ loop.run_until_complete(self.initialize())
+
+ assert self.memory is not None, "Memory service not initialized"
+ try:
+ self.memory.delete(memory_id=memory_id)
+ memory_logger.info(f"Deleted memory {memory_id}")
+ return True
+ except Exception as e:
+ memory_logger.error(f"Error deleting memory {memory_id}: {e}")
+ raise
+
+ def get_all_memories_debug(self, limit: int = 200) -> list:
+ """Get all memories across all users for admin debugging. Admin only."""
+ if not self._initialized:
+ # This is a sync method, so we need to handle initialization differently
+ loop = asyncio.get_event_loop()
+ if loop.is_running():
+ # If we're in an async context, we can't call initialize() directly
+ # This should be handled by the caller
+ raise Exception("Memory service not initialized - call await initialize() first")
+ else:
+ # We're in a sync context, run the async initialize
+ loop.run_until_complete(self.initialize())
+
+ assert self.memory is not None, "Memory service not initialized"
+ try:
+ all_memories = []
+
+ # Enumerate users from the database, then use the per-user
+ # memory service methods to collect their memories.
+
+ from advanced_omi_backend.users import get_all_users
+
+ # Get all users from the database
+ users = get_all_users()
+ memory_logger.info(f"Found {len(users)} users for admin debug")
+
+ for user in users:
+ user_id = str(user.id)
+ try:
+ # Use the proper memory service method for each user
+ user_memories = self.get_all_memories(user_id)
+
+ # Add user metadata to each memory for admin debugging
+ for memory in user_memories:
+ memory_text = memory.get('memory', 'No content')
+ memory_logger.debug(f"Memory structure: {memory}")
+ memory_logger.debug(f"Memory text extracted: '{memory_text}'")
+
+ memory_entry = {
+ 'id': memory.get('id', 'unknown'),
+ 'memory': memory_text,
+ 'user_id': user_id,
+ 'client_id': memory.get('metadata', {}).get('client_id', 'unknown'),
+ 'audio_uuid': memory.get('metadata', {}).get('audio_uuid', 'unknown'),
+ 'created_at': memory.get('created_at', 'unknown'),
+ 'owner_email': user.email,
+ 'metadata': memory.get('metadata', {}),
+ 'collection': 'omi_memories',
+ }
+ all_memories.append(memory_entry)
+
+ except Exception as e:
+ memory_logger.warning(f"Error getting memories for user {user_id}: {e}")
+ continue
+
+ # Limit total memories returned
+ if len(all_memories) >= limit:
+ break
+
+ memory_logger.info(f"Retrieved {len(all_memories)} memories for admin debug view using proper memory service methods")
+ return all_memories[:limit] # Ensure we don't exceed limit
+
+ except Exception as e:
+ memory_logger.error(f"Error fetching all memories for admin: {e}")
+ # Return empty list instead of raising to avoid breaking admin interface
+ return []
+
+ def delete_all_user_memories(self, user_id: str) -> int:
+ """Delete all memories for a user and return count of deleted memories."""
+ if not self._initialized:
+ # This is a sync method, so we need to handle initialization differently
+ loop = asyncio.get_event_loop()
+ if loop.is_running():
+ # If we're in an async context, we can't call initialize() directly
+ # This should be handled by the caller
+ raise Exception("Memory service not initialized - call await initialize() first")
+ else:
+ # We're in a sync context, run the async initialize
+ loop.run_until_complete(self.initialize())
+
+ try:
+ assert self.memory is not None, "Memory service not initialized"
+ # Get all memories first to count them
+ user_memories_response = self.memory.get_all(user_id=user_id)
+ memory_count = 0
+
+ # Handle different response formats from get_all
+ if isinstance(user_memories_response, dict):
+ if "results" in user_memories_response:
+ # New paginated format
+ memory_count = len(user_memories_response["results"])
+ else:
+ # Old dict format (deprecated)
+ memory_count = len(user_memories_response)
+ elif isinstance(user_memories_response, list):
+ # Just in case it returns a list
+ memory_count = len(user_memories_response)
+ else:
+ memory_count = 0
+
+ # Delete all memories for this user
+ if memory_count > 0:
+ self.memory.delete_all(user_id=user_id)
+ memory_logger.info(f"Deleted {memory_count} memories for user {user_id}")
+
+ return memory_count
+
+ except Exception as e:
+ memory_logger.error(f"Error deleting memories for user {user_id}: {e}")
+ raise
+
+ async def test_connection(self) -> bool:
+ """Test memory service connection with timeout protection."""
+ try:
+ if not self._initialized:
+ await asyncio.wait_for(self.initialize(), timeout=MEMORY_INIT_TIMEOUT_SECONDS)
+ return True
+ except asyncio.TimeoutError:
+ memory_logger.error(
+ f"Memory service connection test timed out after {MEMORY_INIT_TIMEOUT_SECONDS}s"
+ )
+ return False
+ except Exception as e:
+ memory_logger.error(f"Memory service connection test failed: {e}")
+ return False
+
+ def shutdown(self):
+ """Shutdown the memory service."""
+ self._initialized = False
+ memory_logger.info("Memory service shut down")
+
+ async def get_memories_with_transcripts(self, user_id: str, limit: int = 100) -> list:
+ """Get memories with their source transcripts using database relationship."""
+ if not self._initialized:
+ await asyncio.wait_for(self.initialize(), timeout=MEMORY_INIT_TIMEOUT_SECONDS)
+
+ assert self.memory is not None, "Memory service not initialized"
+
+ try:
+ # Get all memories for the user (this is sync)
+ memories = self.get_all_memories(user_id, limit)
+
+ # Import Motor connection here to avoid circular imports
+ from advanced_omi_backend.database import chunks_col
+
+ enriched_memories = []
+
+ for memory in memories:
+ # Create enriched memory entry
+ enriched_memory = {
+ "memory_id": memory.get("id", "unknown"),
+ "memory_text": memory.get("memory", memory.get("text", "")),
+ "created_at": memory.get("created_at", ""),
+ "metadata": memory.get("metadata", {}),
+ "audio_uuid": None,
+ "transcript": None,
+ "client_id": None,
+ "user_email": None,
+ "compression_ratio": 0,
+ "transcript_length": 0,
+ "memory_length": 0
+ }
+
+ # Extract audio_uuid from memory metadata
+ metadata = memory.get("metadata", {})
+ audio_uuid = metadata.get("audio_uuid")
+
+ if audio_uuid:
+ enriched_memory["audio_uuid"] = audio_uuid
+ enriched_memory["client_id"] = metadata.get("client_id")
+ enriched_memory["user_email"] = metadata.get("user_email")
+
+ # Get transcript from database using Motor (async)
+ try:
+ memory_logger.debug(f"Looking up transcript for audio_uuid: {audio_uuid}")
+
+ # Use existing Motor connection instead of creating new PyMongo clients
+ chunk = await chunks_col.find_one({"audio_uuid": audio_uuid})
+
+ if chunk:
+ memory_logger.debug(f"Found chunk for {audio_uuid}, extracting transcript segments")
+ # Extract transcript from chunk
+ transcript_segments = chunk.get("transcript", [])
+ if transcript_segments:
+ # Combine all transcript segments into a single text
+ full_transcript = " ".join([
+ segment.get("text", "")
+ for segment in transcript_segments
+ if isinstance(segment, dict) and segment.get("text")
+ ])
+
+ if full_transcript.strip():
+ enriched_memory["transcript"] = full_transcript
+ enriched_memory["transcript_length"] = len(full_transcript)
+
+ memory_text = enriched_memory["memory_text"]
+ enriched_memory["memory_length"] = len(memory_text)
+
+ # Calculate compression ratio
+ if len(full_transcript) > 0:
+ enriched_memory["compression_ratio"] = round(
+ (len(memory_text) / len(full_transcript)) * 100, 1
+ )
+ memory_logger.debug(f"Successfully enriched memory {audio_uuid} with {len(full_transcript)} char transcript")
+ else:
+ memory_logger.debug(f"Empty transcript found for {audio_uuid}")
+ else:
+ memory_logger.debug(f"No transcript segments found for {audio_uuid}")
+ else:
+ memory_logger.debug(f"No chunk found for audio_uuid: {audio_uuid}")
+
+ except Exception as db_error:
+ memory_logger.warning(f"Failed to get transcript for audio_uuid {audio_uuid}: {db_error}")
+ # Continue processing other memories even if one fails
+
+ enriched_memories.append(enriched_memory)
+
+ transcript_count = sum(1 for m in enriched_memories if m.get("transcript"))
+ memory_logger.info(f"Enriched {len(enriched_memories)} memories with transcripts for user {user_id} ({transcript_count} with actual transcript data)")
+ return enriched_memories
+
+ except Exception as e:
+ memory_logger.error(f"Error getting memories with transcripts for user {user_id}: {e}")
+ raise
+
+
+# Global service instance
+def get_memory_service() -> MemoryService:
+ """Get the global memory service instance."""
+ global _memory_service
+ if _memory_service is None:
+ _memory_service = MemoryService()
+ return _memory_service
+
+
+def shutdown_memory_service():
+ """Shutdown the global memory service."""
+ global _memory_service
+ if _memory_service:
+ _memory_service.shutdown()
+ _memory_service = None
diff --git a/backends/advanced-backend/src/advanced_omi_backend/memory_config_loader.py b/backends/advanced-backend/src/advanced_omi_backend/memory_config_loader.py
new file mode 100644
index 00000000..33eff3c8
--- /dev/null
+++ b/backends/advanced-backend/src/advanced_omi_backend/memory_config_loader.py
@@ -0,0 +1,349 @@
+"""
+Memory Configuration Loader
+
+This module loads and manages memory extraction configuration from YAML files.
+"""
+
+import logging
+import os
+from typing import Any, Dict
+
+import yaml
+
+# Logger for configuration
+config_logger = logging.getLogger("memory_config")
+
+
+class MemoryConfigLoader:
+ """
+ Loads and manages memory extraction configuration from YAML files.
+ """
+
+ def __init__(self, config_path: str | None = None):
+ """
+ Initialize the config loader.
+
+ Args:
+ config_path: Path to the configuration YAML file
+ """
+ if config_path is None:
+ # Default to memory_config.yaml in the backend root
+ config_path = os.path.join(
+ os.path.dirname(os.path.dirname(__file__)), "memory_config.yaml"
+ )
+
+ self.config_path = config_path
+ self.config = self._load_config()
+
+ # Set up logging level from config
+ debug_config = self.config.get("debug", {})
+ log_level = debug_config.get("log_level", "INFO")
+ numeric_level = getattr(logging, log_level.upper(), logging.INFO)
+ config_logger.setLevel(numeric_level)
+
+ config_logger.info(f"Loaded memory configuration from {config_path}")
+
+ def _load_config(self) -> Dict[str, Any]:
+ """Load configuration from YAML file."""
+ try:
+ with open(self.config_path, "r") as file:
+ config = yaml.safe_load(file)
+ if not config:
+ # safe_load returns None for an empty file; fall back to defaults
+ config_logger.warning(f"Empty configuration file: {self.config_path}; using defaults")
+ return self._get_default_config()
+ return config
+ except FileNotFoundError:
+ config_logger.error(f"Configuration file not found: {self.config_path}")
+ return self._get_default_config()
+ except yaml.YAMLError as e:
+ config_logger.error(f"Error parsing YAML configuration: {e}")
+ return self._get_default_config()
+
+ def _get_default_config(self) -> Dict[str, Any]:
+ """Return default configuration if file loading fails."""
+ # Get model from environment or use a fallback
+ default_model = os.getenv("OLLAMA_MODEL", "gemma3n:e4b")
+
+ return {
+ "memory_extraction": {
+ "enabled": True,
+ "prompt": "Extract anything relevant about this conversation.",
+ "llm_settings": {"temperature": 0.1, "max_tokens": 2000, "model": default_model},
+ },
+ "fact_extraction": {
+ "enabled": False,
+ "prompt": "Extract specific facts from this conversation.",
+ "llm_settings": {"temperature": 0.0, "max_tokens": 1500, "model": default_model},
+ },
+ "action_item_extraction": {
+ "enabled": True,
+ "trigger_phrases": ["simon says", "action item", "todo"],
+ "prompt": "Extract action items from this conversation.",
+ "llm_settings": {"temperature": 0.1, "max_tokens": 1000, "model": default_model},
+ },
+ "categorization": {
+ "enabled": False,
+ "categories": ["work", "personal", "meeting", "other"],
+ "prompt": "Categorize this conversation.",
+ "llm_settings": {"temperature": 0.2, "max_tokens": 100, "model": default_model},
+ },
+ "quality_control": {
+ "min_conversation_length": 50,
+ "max_conversation_length": 50000,
+ "skip_low_content": True,
+ "min_content_ratio": 0.3,
+ "skip_patterns": ["^(um|uh|hmm|yeah|ok|okay)\\s*$"],
+ },
+ "processing": {
+ "parallel_processing": True,
+ "max_concurrent_tasks": 1,
+ "processing_timeout": 600,
+ "retry_failed": True,
+ "max_retries": 2,
+ "retry_delay": 5,
+ },
+ "storage": {
+ "store_metadata": True,
+ "store_prompts": True,
+ "store_llm_responses": True,
+ "store_timing": True,
+ },
+ "debug": {
+ "enabled": True,
+ "db_path": "/app/debug/memory_debug.db",
+ "log_level": "INFO",
+ "log_full_conversations": False,
+ "log_extracted_memories": True,
+ },
+ }
+
+ def reload_config(self) -> bool:
+ """Reload configuration from file."""
+ try:
+ self.config = self._load_config()
+ config_logger.info("Configuration reloaded successfully")
+ return True
+ except Exception as e:
+ config_logger.error(f"Failed to reload configuration: {e}")
+ return False
+
+ def get_memory_extraction_config(self) -> Dict[str, Any]:
+ """Get memory extraction configuration."""
+ return self.config.get("memory_extraction", {})
+
+ def get_fact_extraction_config(self) -> Dict[str, Any]:
+ """Get fact extraction configuration."""
+ return self.config.get("fact_extraction", {})
+
+ def get_action_item_extraction_config(self) -> Dict[str, Any]:
+ """Get action item extraction configuration."""
+ return self.config.get("action_item_extraction", {})
+
+ def get_categorization_config(self) -> Dict[str, Any]:
+ """Get categorization configuration."""
+ return self.config.get("categorization", {})
+
+ def get_quality_control_config(self) -> Dict[str, Any]:
+ """Get quality control configuration."""
+ return self.config.get("quality_control", {})
+
+ def get_processing_config(self) -> Dict[str, Any]:
+ """Get processing configuration."""
+ return self.config.get("processing", {})
+
+ def get_storage_config(self) -> Dict[str, Any]:
+ """Get storage configuration."""
+ return self.config.get("storage", {})
+
+ def get_debug_config(self) -> Dict[str, Any]:
+ """Get debug configuration."""
+ return self.config.get("debug", {})
+
+ def is_memory_extraction_enabled(self) -> bool:
+ """Check if memory extraction is enabled."""
+ return self.get_memory_extraction_config().get("enabled", True)
+
+ def is_fact_extraction_enabled(self) -> bool:
+ """Check if fact extraction is enabled."""
+ return self.get_fact_extraction_config().get("enabled", False)
+
+ def is_action_item_extraction_enabled(self) -> bool:
+ """Check if action item extraction is enabled."""
+ return self.get_action_item_extraction_config().get("enabled", True)
+
+ def is_categorization_enabled(self) -> bool:
+ """Check if categorization is enabled."""
+ return self.get_categorization_config().get("enabled", False)
+
+ def is_debug_enabled(self) -> bool:
+ """Check if debug tracking is enabled."""
+ return self.get_debug_config().get("enabled", True)
+
+ def get_memory_prompt(self) -> str:
+ """Get the memory extraction prompt."""
+ return self.get_memory_extraction_config().get(
+ "prompt", "Extract anything relevant about this conversation."
+ )
+
+ def get_fact_prompt(self) -> str:
+ """Get the fact extraction prompt."""
+ return self.get_fact_extraction_config().get(
+ "prompt", "Extract specific facts from this conversation."
+ )
+
+ def get_action_item_prompt(self) -> str:
+ """Get the action item extraction prompt."""
+ return self.get_action_item_extraction_config().get(
+ "prompt", "Extract action items from this conversation."
+ )
+
+ def get_categorization_prompt(self) -> str:
+ """Get the categorization prompt."""
+ return self.get_categorization_config().get("prompt", "Categorize this conversation.")
+
+ def get_llm_settings(self, extraction_type: str) -> Dict[str, Any]:
+ """
+ Get LLM settings for a specific extraction type.
+
+ Args:
+ extraction_type: One of 'memory', 'fact', 'action_item', 'categorization'
+ """
+ # "categorization" is the only section whose key is not "<type>_extraction"
+ if extraction_type == "categorization":
+ config_key = "categorization"
+ else:
+ config_key = f"{extraction_type}_extraction"
+
+ extraction_config = self.config.get(config_key, {})
+ return extraction_config.get("llm_settings", {})
+
+ def should_skip_conversation(self, conversation_text: str) -> bool:
+ """
+ Check if a conversation should be skipped based on quality control settings.
+
+ Args:
+ conversation_text: The full conversation text
+
+ Returns:
+ True if the conversation should be skipped
+ """
+ quality_config = self.get_quality_control_config()
+
+ # Check length constraints
+ min_length = quality_config.get("min_conversation_length", 50)
+ max_length = quality_config.get("max_conversation_length", 50000)
+
+ if len(conversation_text) < min_length:
+ config_logger.debug(
+ f"Skipping conversation: too short ({len(conversation_text)} < {min_length})"
+ )
+ return True
+
+ if len(conversation_text) > max_length:
+ config_logger.debug(
+ f"Skipping conversation: too long ({len(conversation_text)} > {max_length})"
+ )
+ return True
+
+ # Check skip patterns
+ skip_patterns = quality_config.get("skip_patterns", [])
+ if skip_patterns:
+ import re
+
+ for pattern in skip_patterns:
+ if re.match(pattern, conversation_text.strip(), re.IGNORECASE):
+ config_logger.debug(f"Skipping conversation: matches skip pattern '{pattern}'")
+ return True
+
+ # Check content ratio (if enabled)
+ if quality_config.get("skip_low_content", False):
+ min_content_ratio = quality_config.get("min_content_ratio", 0.3)
+
+ # Simple heuristic: calculate ratio of meaningful words to total words
+ words = conversation_text.split()
+ if len(words) > 0:
+ filler_words = {
+ "um",
+ "uh",
+ "hmm",
+ "yeah",
+ "ok",
+ "okay",
+ "like",
+ "you",
+ "know",
+ "so",
+ "well",
+ }
+ meaningful_words = [
+ word for word in words if word.lower() not in filler_words and len(word) > 2
+ ]
+ content_ratio = len(meaningful_words) / len(words)
+
+ if content_ratio < min_content_ratio:
+ config_logger.debug(
+ f"Skipping conversation: low content ratio ({content_ratio:.2f} < {min_content_ratio})"
+ )
+ return True
+
+ return False
+
+ def get_action_item_triggers(self) -> list[str]:
+ """Get action item trigger phrases."""
+ return self.get_action_item_extraction_config().get("trigger_phrases", [])
+
+ def has_action_item_triggers(self, conversation_text: str) -> bool:
+ """Check if conversation contains action item trigger phrases."""
+ triggers = self.get_action_item_triggers()
+ conversation_lower = conversation_text.lower()
+
+ for trigger in triggers:
+ if trigger.lower() in conversation_lower:
+ return True
+
+ return False
+
+ def get_categories(self) -> list[str]:
+ """Get available categories for classification."""
+ return self.get_categorization_config().get("categories", [])
+
+ def get_debug_db_path(self) -> str:
+ """Get the debug database path."""
+ return self.get_debug_config().get("db_path", "/app/debug/memory_debug.db")
+
+ def should_log_full_conversations(self) -> bool:
+ """Check if full conversations should be logged."""
+ return self.get_debug_config().get("log_full_conversations", False)
+
+ def should_log_extracted_memories(self) -> bool:
+ """Check if extracted memories should be logged."""
+ return self.get_debug_config().get("log_extracted_memories", True)
+
+ def get_processing_timeout(self) -> int:
+ """Get the processing timeout in seconds."""
+ return self.get_processing_config().get("processing_timeout", 600)
+
+ def should_retry_failed(self) -> bool:
+ """Check if failed extractions should be retried."""
+ return self.get_processing_config().get("retry_failed", True)
+
+ def get_max_retries(self) -> int:
+ """Get the maximum number of retries."""
+ return self.get_processing_config().get("max_retries", 2)
+
+ def get_retry_delay(self) -> int:
+ """Get the delay between retries in seconds."""
+ return self.get_processing_config().get("retry_delay", 5)
+
+
+# Global instance
+_config_loader = None
+
+
+def get_config_loader() -> MemoryConfigLoader:
+ """Get the global configuration loader instance."""
+ global _config_loader
+ if _config_loader is None:
+ _config_loader = MemoryConfigLoader()
+ return _config_loader
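The content-ratio heuristic in `should_skip_conversation` can be exercised standalone; this sketch mirrors the filler-word logic above (the word set and the 0.3 threshold are taken from the defaults in this file):

```python
FILLER_WORDS = {"um", "uh", "hmm", "yeah", "ok", "okay", "like", "you", "know", "so", "well"}

def content_ratio(text: str) -> float:
    """Fraction of words that are meaningful: not filler and longer than 2 characters."""
    words = text.split()
    if not words:
        return 0.0
    meaningful = [w for w in words if w.lower() not in FILLER_WORDS and len(w) > 2]
    return len(meaningful) / len(words)

# Pure filler scores 0.0 and falls below the default min_content_ratio of 0.3
assert content_ratio("um yeah ok okay") == 0.0
# A substantive sentence scores well above it
assert content_ratio("schedule the quarterly planning meeting") > 0.3
```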
diff --git a/backends/advanced-backend/src/advanced_omi_backend/routers/api_router.py b/backends/advanced-backend/src/advanced_omi_backend/routers/api_router.py
new file mode 100644
index 00000000..d889fb71
--- /dev/null
+++ b/backends/advanced-backend/src/advanced_omi_backend/routers/api_router.py
@@ -0,0 +1,143 @@
+"""
+Main API router for Friend-Lite backend.
+
+This module aggregates all the functional router modules and provides
+a single entry point for the API endpoints.
+"""
+
+import asyncio
+import logging
+
+from fastapi import APIRouter, Depends
+from fastapi.responses import JSONResponse
+
+from advanced_omi_backend.auth import current_active_user, current_superuser
+from advanced_omi_backend.debug_system_tracker import get_debug_tracker
+from advanced_omi_backend.memory import get_memory_service
+from advanced_omi_backend.users import User
+
+from .modules import (
+ client_router,
+ conversation_router,
+ memory_router,
+ system_router,
+ user_router,
+)
+
+logger = logging.getLogger(__name__)
+audio_logger = logging.getLogger("audio_processing")
+
+# Create main API router
+router = APIRouter(prefix="/api", tags=["api"])
+
+# Include all sub-routers
+router.include_router(user_router)
+router.include_router(client_router)
+router.include_router(conversation_router)
+router.include_router(memory_router)
+router.include_router(system_router)
+
+# Admin endpoints for backward compatibility with Streamlit UI
+@router.get("/admin/memories")
+async def get_admin_memories(current_user: User = Depends(current_superuser), limit: int = 200):
+ """Get all memories across all users for admin review. Admin only. Compatibility endpoint."""
+ try:
+ memory_service = get_memory_service()
+
+ # Get debug tracker for additional context
+ debug_tracker = get_debug_tracker()
+
+ # Get all memories without user filtering
+ all_memories = await asyncio.get_running_loop().run_in_executor(
+ None, memory_service.get_all_memories_debug, limit
+ )
+
+ # Group by user for easier admin review
+ user_memories = {}
+ users_with_memories = set()
+ client_ids_with_memories = set()
+
+ for memory in all_memories:
+ user_id = memory.get("user_id", "unknown")
+ client_id = memory.get("client_id", "unknown")
+
+ if user_id not in user_memories:
+ user_memories[user_id] = []
+ user_memories[user_id].append(memory)
+
+ # Track users and clients for debug info
+ users_with_memories.add(user_id)
+ client_ids_with_memories.add(client_id)
+
+ # Enhanced stats combining both admin and debug information
+ stats = {
+ "total_memories": len(all_memories),
+ "total_users": len(user_memories),
+ "debug_tracker_initialized": debug_tracker is not None,
+ "users_with_memories": sorted(list(users_with_memories)),
+ "client_ids_with_memories": sorted(list(client_ids_with_memories)),
+ }
+
+ return {
+ "memories": all_memories, # Flat list for compatibility
+ "user_memories": user_memories, # Grouped by user
+ "stats": stats,
+ "total_users": len(user_memories),
+ "total_memories": len(all_memories),
+ "limit": limit,
+ }
+
+ except Exception as e:
+ audio_logger.error(f"Error fetching admin memories: {e}", exc_info=True)
+ return JSONResponse(status_code=500, content={"message": "Error fetching admin memories"})
+
+
+@router.get("/admin/memories/debug")
+async def get_admin_memories_debug(current_user: User = Depends(current_superuser), limit: int = 200):
+ """Get all memories across all users for debugging. Admin only. Compatibility endpoint that redirects to main admin endpoint."""
+ # This is now just a redirect to the main admin endpoint for compatibility
+ return await get_admin_memories(current_user, limit)
+
+
+# Active clients compatibility endpoint
+@router.get("/active_clients")
+async def get_active_clients_compat(current_user: User = Depends(current_active_user)):
+ """Get active clients. Compatibility endpoint for Streamlit UI."""
+ try:
+ from advanced_omi_backend.client_manager import get_client_manager, get_user_clients_active
+
+ client_manager = get_client_manager()
+
+ if not client_manager.is_initialized():
+ return JSONResponse(
+ status_code=503,
+ content={"error": "Client manager not available"},
+ )
+
+ if current_user.is_superuser:
+ # Admin: return all active clients
+ clients_info = client_manager.get_client_info_summary()
+ else:
+ # Regular user: return only their own clients
+ user_active_clients = get_user_clients_active(current_user.user_id)
+ all_clients = client_manager.get_client_info_summary()
+
+ # Filter to only the user's clients
+ clients_info = [
+ client for client in all_clients if client["client_id"] in user_active_clients
+ ]
+
+ return {
+ "clients": clients_info,
+ "active_clients_count": len(clients_info),
+ "total_count": len(clients_info),
+ }
+
+ except Exception as e:
+ audio_logger.error(f"Error getting active clients: {e}", exc_info=True)
+ return JSONResponse(
+ status_code=500,
+ content={"error": "Failed to get active clients"},
+ )
+
+logger.info("API router initialized with all sub-modules")
diff --git a/backends/advanced-backend/src/advanced_omi_backend/routers/modules/__init__.py b/backends/advanced-backend/src/advanced_omi_backend/routers/modules/__init__.py
new file mode 100644
index 00000000..f166a40d
--- /dev/null
+++ b/backends/advanced-backend/src/advanced_omi_backend/routers/modules/__init__.py
@@ -0,0 +1,18 @@
+"""
+Router modules for Friend-Lite API.
+
+This package contains organized router modules for different functional areas:
+- user_routes: User management and authentication
+- client_routes: Active client monitoring and management
+- conversation_routes: Conversation CRUD and audio processing
+- memory_routes: Memory management, search, and debug
+- system_routes: System utilities, metrics, and file processing
+"""
+
+from .client_routes import router as client_router
+from .conversation_routes import router as conversation_router
+from .memory_routes import router as memory_router
+from .system_routes import router as system_router
+from .user_routes import router as user_router
+
+__all__ = ["user_router", "client_router", "conversation_router", "memory_router", "system_router"]
diff --git a/backends/advanced-backend/src/advanced_omi_backend/routers/modules/client_routes.py b/backends/advanced-backend/src/advanced_omi_backend/routers/modules/client_routes.py
new file mode 100644
index 00000000..90b15014
--- /dev/null
+++ b/backends/advanced-backend/src/advanced_omi_backend/routers/modules/client_routes.py
@@ -0,0 +1,64 @@
+"""
+Client management routes for Friend-Lite API.
+
+Handles active client monitoring and management.
+"""
+
+import logging
+
+from fastapi import APIRouter, Depends
+from fastapi.responses import JSONResponse
+
+from advanced_omi_backend.auth import current_active_user
+from advanced_omi_backend.client_manager import (
+ ClientManager,
+ get_client_manager_dependency,
+ get_user_clients_active,
+)
+from advanced_omi_backend.users import User
+
+logger = logging.getLogger(__name__)
+
+router = APIRouter(prefix="/clients", tags=["clients"])
+
+
+@router.get("/active")
+async def get_active_clients(
+ current_user: User = Depends(current_active_user),
+ client_manager: ClientManager = Depends(get_client_manager_dependency),
+):
+ """Get information about active clients. Users see only their own clients, admins see all."""
+ try:
+ if not client_manager.is_initialized():
+ return JSONResponse(
+ status_code=503,
+ content={"error": "Client manager not available"},
+ )
+
+ if current_user.is_superuser:
+ # Admin: return all active clients
+ return {
+ "active_clients": client_manager.get_client_info_summary(),
+ "total_count": client_manager.get_client_count(),
+ }
+ else:
+ # Regular user: return only their own clients
+ user_active_clients = get_user_clients_active(current_user.user_id)
+ all_clients = client_manager.get_client_info_summary()
+
+ # Filter to only the user's clients
+ user_clients = [
+ client for client in all_clients if client["client_id"] in user_active_clients
+ ]
+
+ return {
+ "active_clients": user_clients,
+ "total_count": len(user_clients),
+ }
+
+ except Exception as e:
+ logger.error(f"Error getting active clients: {e}")
+ return JSONResponse(
+ status_code=500,
+ content={"error": "Failed to get active clients"},
+ )
diff --git a/backends/advanced-backend/src/advanced_omi_backend/routers/modules/conversation_routes.py b/backends/advanced-backend/src/advanced_omi_backend/routers/modules/conversation_routes.py
new file mode 100644
index 00000000..24afbf36
--- /dev/null
+++ b/backends/advanced-backend/src/advanced_omi_backend/routers/modules/conversation_routes.py
@@ -0,0 +1,309 @@
+"""
+Conversation management routes for Friend-Lite API.
+
+Handles conversation CRUD operations, audio processing, and transcript management.
+"""
+
+import asyncio
+import logging
+import time
+from pathlib import Path
+from typing import Optional
+
+from fastapi import APIRouter, Depends
+from fastapi.responses import JSONResponse
+
+from advanced_omi_backend.audio_cropping_utils import (
+ _process_audio_cropping_with_relative_timestamps,
+)
+from advanced_omi_backend.auth import current_active_user
+from advanced_omi_backend.client_manager import (
+ ClientManager,
+ client_belongs_to_user,
+ get_client_manager_dependency,
+ get_user_clients_all,
+)
+from advanced_omi_backend.database import AudioChunksCollection, chunks_col
+from advanced_omi_backend.users import User
+
+logger = logging.getLogger(__name__)
+audio_logger = logging.getLogger("audio_processing")
+
+router = APIRouter(prefix="/conversations", tags=["conversations"])
+
+# Initialize chunk repository
+chunk_repo = AudioChunksCollection(chunks_col)
+
+
+@router.post("/{client_id}/close")
+async def close_current_conversation(
+ client_id: str,
+ current_user: User = Depends(current_active_user),
+ client_manager: ClientManager = Depends(get_client_manager_dependency),
+):
+ """Close the current conversation for a specific client. Users can only close their own conversations."""
+ # Validate client ownership
+ if not current_user.is_superuser and not client_belongs_to_user(
+ client_id, current_user.user_id
+ ):
+ logger.warning(
+ f"User {current_user.user_id} attempted to close conversation for client {client_id} without permission"
+ )
+ return JSONResponse(
+ content={
+ "error": "Access forbidden. You can only close your own conversations.",
+ "details": f"Client '{client_id}' does not belong to your account.",
+ },
+ status_code=403,
+ )
+
+ if not client_manager.has_client(client_id):
+ return JSONResponse(
+ content={"error": f"Client '{client_id}' not found or not connected"},
+ status_code=404,
+ )
+
+ client_state = client_manager.get_client(client_id)
+ if client_state is None:
+ return JSONResponse(
+ content={"error": f"Client '{client_id}' not found or not connected"},
+ status_code=404,
+ )
+
+ if not client_state.connected:
+ return JSONResponse(
+ content={"error": f"Client '{client_id}' is not connected"}, status_code=400
+ )
+
+ try:
+ # Close the current conversation
+ await client_state._close_current_conversation()
+
+ # Reset conversation state but keep client connected
+ client_state.current_audio_uuid = None
+ client_state.conversation_start_time = time.time()
+ client_state.last_transcript_time = None
+
+ logger.info(
+ f"Manually closed conversation for client {client_id} by user {current_user.id}"
+ )
+
+ return JSONResponse(
+ content={
+ "message": f"Successfully closed current conversation for client '{client_id}'",
+ "client_id": client_id,
+ "timestamp": int(time.time()),
+ }
+ )
+
+ except Exception as e:
+ logger.error(f"Error closing conversation for client {client_id}: {e}")
+ return JSONResponse(
+ content={"error": f"Failed to close conversation: {str(e)}"},
+ status_code=500,
+ )
+
+
+@router.get("")
+async def get_conversations(current_user: User = Depends(current_active_user)):
+ """Get conversations. Admins see all conversations, users see only their own."""
+ try:
+ # Build query based on user permissions
+ if not current_user.is_superuser:
+ # Regular users can only see their own conversations
+ user_client_ids = get_user_clients_all(current_user.user_id)
+ if not user_client_ids:
+ # User has no clients, return empty result
+ return {"conversations": {}}
+ query = {"client_id": {"$in": user_client_ids}}
+ else:
+ query = {}
+
+ # Get audio chunks and group by client_id
+ cursor = chunks_col.find(query).sort("timestamp", -1)
+ conversations = {}
+
+ async for chunk in cursor:
+ client_id = chunk["client_id"]
+ if client_id not in conversations:
+ conversations[client_id] = []
+
+ conversations[client_id].append(
+ {
+ "audio_uuid": chunk["audio_uuid"],
+ "audio_path": chunk["audio_path"],
+ "timestamp": chunk["timestamp"],
+ "transcript": chunk.get("transcript", []),
+ "speakers_identified": chunk.get("speakers_identified", []),
+ "cropped_audio_path": chunk.get("cropped_audio_path"),
+ "speech_segments": chunk.get("speech_segments"),
+ "cropped_duration": chunk.get("cropped_duration"),
+ "memories": chunk.get("memories", []), # Include memory references if they exist
+ "has_memory": bool(chunk.get("memories", [])), # Quick boolean check for UI
+ }
+ )
+
+ return {"conversations": conversations}
+
+ except Exception as e:
+ logger.error(f"Error fetching conversations: {e}")
+ return JSONResponse(status_code=500, content={"error": "Error fetching conversations"})
+
+
+@router.get("/{audio_uuid}/cropped")
+async def get_cropped_audio_info(
+ audio_uuid: str, current_user: User = Depends(current_active_user)
+):
+ """Get cropped audio information for a conversation. Users can only access their own conversations."""
+ try:
+ # Find the conversation
+ chunk = await chunks_col.find_one({"audio_uuid": audio_uuid})
+ if not chunk:
+ return JSONResponse(status_code=404, content={"error": "Conversation not found"})
+
+ # Check ownership for non-admin users
+ if not current_user.is_superuser:
+ if not client_belongs_to_user(chunk["client_id"], current_user.user_id):
+ return JSONResponse(status_code=404, content={"error": "Conversation not found"})
+
+ return {
+ "audio_uuid": audio_uuid,
+ "cropped_audio_path": chunk.get("cropped_audio_path"),
+ "speech_segments": chunk.get("speech_segments", []),
+ "cropped_duration": chunk.get("cropped_duration"),
+ "cropped_at": chunk.get("cropped_at"),
+ "original_audio_path": chunk.get("audio_path"),
+ }
+
+ except Exception as e:
+ logger.error(f"Error fetching cropped audio info: {e}")
+ return JSONResponse(status_code=500, content={"error": "Error fetching cropped audio info"})
+
+
+@router.post("/{audio_uuid}/reprocess")
+async def reprocess_audio_cropping(
+ audio_uuid: str, current_user: User = Depends(current_active_user)
+):
+ """Reprocess audio cropping for a conversation. Users can only reprocess their own conversations."""
+ try:
+ # Find the conversation
+ chunk = await chunks_col.find_one({"audio_uuid": audio_uuid})
+ if not chunk:
+ return JSONResponse(status_code=404, content={"error": "Conversation not found"})
+
+ # Check ownership for non-admin users
+ if not current_user.is_superuser:
+ if not client_belongs_to_user(chunk["client_id"], current_user.user_id):
+ return JSONResponse(status_code=404, content={"error": "Conversation not found"})
+
+ audio_path = chunk.get("audio_path")
+ if not audio_path:
+ return JSONResponse(
+ status_code=400, content={"error": "No audio file found for this conversation"}
+ )
+
+ # Check if file exists
+ if not Path(audio_path).exists():
+ return JSONResponse(status_code=404, content={"error": "Audio file not found on disk"})
+
+ # Reprocess the audio cropping
+ try:
+ result = await asyncio.get_running_loop().run_in_executor(
+ None, _process_audio_cropping_with_relative_timestamps, audio_path, audio_uuid
+ )
+
+ if result:
+ audio_logger.info(f"Successfully reprocessed audio cropping for {audio_uuid}")
+ return JSONResponse(
+ content={"message": f"Audio cropping reprocessed for {audio_uuid}"}
+ )
+ else:
+ audio_logger.error(f"Failed to reprocess audio cropping for {audio_uuid}")
+ return JSONResponse(
+ status_code=500, content={"error": "Failed to reprocess audio cropping"}
+ )
+
+ except Exception as processing_error:
+ audio_logger.error(f"Error during audio cropping reprocessing: {processing_error}")
+ return JSONResponse(
+ status_code=500,
+ content={"error": f"Audio processing failed: {str(processing_error)}"},
+ )
+
+ except Exception as e:
+ logger.error(f"Error reprocessing audio cropping: {e}")
+ return JSONResponse(status_code=500, content={"error": "Error reprocessing audio cropping"})
+
+
+@router.post("/{audio_uuid}/speakers")
+async def add_speaker_to_conversation(
+ audio_uuid: str, speaker_id: str, current_user: User = Depends(current_active_user)
+):
+ """Add a speaker to the speakers_identified list for a conversation. Users can only modify their own conversations."""
+ try:
+ # Find the conversation first
+ chunk = await chunks_col.find_one({"audio_uuid": audio_uuid})
+ if not chunk:
+ return JSONResponse(status_code=404, content={"error": "Conversation not found"})
+
+ # Check ownership for non-admin users
+ if not current_user.is_superuser:
+ if not client_belongs_to_user(chunk["client_id"], current_user.user_id):
+ return JSONResponse(status_code=404, content={"error": "Conversation not found"})
+
+ await chunk_repo.add_speaker(audio_uuid, speaker_id)
+ return JSONResponse(
+ content={"message": f"Speaker {speaker_id} added to conversation {audio_uuid}"}
+ )
+ except Exception as e:
+ audio_logger.error(f"Error adding speaker: {e}", exc_info=True)
+ return JSONResponse(status_code=500, content={"message": "Error adding speaker"})
+
+
+@router.put("/{audio_uuid}/transcript/{segment_index}")
+async def update_transcript_segment(
+ audio_uuid: str,
+ segment_index: int,
+ current_user: User = Depends(current_active_user),
+ speaker_id: Optional[str] = None,
+ start_time: Optional[float] = None,
+ end_time: Optional[float] = None,
+):
+ """Update a specific transcript segment with speaker or timing information. Users can only modify their own conversations."""
+ try:
+ # Find the conversation first
+ chunk = await chunks_col.find_one({"audio_uuid": audio_uuid})
+ if not chunk:
+ return JSONResponse(status_code=404, content={"error": "Conversation not found"})
+
+ # Check ownership for non-admin users
+ if not current_user.is_superuser:
+ if not client_belongs_to_user(chunk["client_id"], current_user.user_id):
+ return JSONResponse(status_code=404, content={"error": "Conversation not found"})
+
+ update_doc = {}
+
+ if speaker_id is not None:
+ update_doc[f"transcript.{segment_index}.speaker"] = speaker_id
+ # Also add to speakers_identified if not already present
+ await chunk_repo.add_speaker(audio_uuid, speaker_id)
+
+ if start_time is not None:
+ update_doc[f"transcript.{segment_index}.start"] = start_time
+
+ if end_time is not None:
+ update_doc[f"transcript.{segment_index}.end"] = end_time
+
+ if not update_doc:
+ return JSONResponse(status_code=400, content={"error": "No update parameters provided"})
+
+ result = await chunks_col.update_one({"audio_uuid": audio_uuid}, {"$set": update_doc})
+
+ if result.modified_count == 0:
+ return JSONResponse(status_code=400, content={"error": "No changes were made"})
+
+ return JSONResponse(content={"message": "Transcript segment updated successfully"})
+
+ except Exception as e:
+ audio_logger.error(f"Error updating transcript segment: {e}")
+ return JSONResponse(status_code=500, content={"error": "Internal server error"})
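The `update_transcript_segment` handler above builds a `$set` document with MongoDB dot notation (`transcript.<index>.<field>`) so a single array element is patched in place. A sketch of that document construction (the helper name is illustrative):

```python
from typing import Optional

def build_segment_update(
    segment_index: int,
    speaker_id: Optional[str] = None,
    start_time: Optional[float] = None,
    end_time: Optional[float] = None,
) -> dict:
    """Build a MongoDB $set payload targeting one transcript array element."""
    update_doc = {}
    if speaker_id is not None:
        update_doc[f"transcript.{segment_index}.speaker"] = speaker_id
    if start_time is not None:
        update_doc[f"transcript.{segment_index}.start"] = start_time
    if end_time is not None:
        update_doc[f"transcript.{segment_index}.end"] = end_time
    return update_doc

# Would be passed as update_one({"audio_uuid": ...}, {"$set": build_segment_update(...)})
print(build_segment_update(2, speaker_id="speaker_1", start_time=12.5))
```

Returning an empty dict when no parameters are given is what lets the route reject the request with a 400 before touching the database.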
diff --git a/backends/advanced-backend/src/advanced_omi_backend/routers/modules/memory_routes.py b/backends/advanced-backend/src/advanced_omi_backend/routers/modules/memory_routes.py
new file mode 100644
index 00000000..70c7d197
--- /dev/null
+++ b/backends/advanced-backend/src/advanced_omi_backend/routers/modules/memory_routes.py
@@ -0,0 +1,227 @@
+"""
+Memory management routes for Friend-Lite API.
+
+Handles memory CRUD operations, search, and debug functionality.
+"""
+
+import asyncio
+import logging
+from typing import Optional
+
+from fastapi import APIRouter, Depends, Query
+from fastapi.responses import JSONResponse
+
+from advanced_omi_backend.auth import current_active_user, current_superuser
+from advanced_omi_backend.debug_system_tracker import get_debug_tracker
+from advanced_omi_backend.memory import get_memory_service
+from advanced_omi_backend.users import User
+
+logger = logging.getLogger(__name__)
+audio_logger = logging.getLogger("audio_processing")
+
+router = APIRouter(prefix="/memories", tags=["memories"])
+
+
+@router.get("")
+async def get_memories(
+ current_user: User = Depends(current_active_user),
+ limit: int = Query(default=50, ge=1, le=1000),
+ user_id: Optional[str] = Query(default=None, description="User ID filter (admin only)"),
+):
+ """Get memories. Users see only their own memories, admins can see all or filter by user."""
+ try:
+ memory_service = get_memory_service()
+
+ # Determine which user's memories to fetch
+ target_user_id = current_user.user_id
+ if current_user.is_superuser and user_id:
+ target_user_id = user_id
+
+ # Execute memory retrieval in thread pool to avoid blocking
+ memories = await asyncio.get_running_loop().run_in_executor(
+ None, memory_service.get_all_memories, target_user_id, limit
+ )
+
+ return {"memories": memories, "count": len(memories), "user_id": target_user_id}
+
+ except Exception as e:
+ audio_logger.error(f"Error fetching memories: {e}", exc_info=True)
+ return JSONResponse(status_code=500, content={"message": "Error fetching memories"})
+
+
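`get_memories` above offloads the synchronous `memory_service` call to the default thread pool via `run_in_executor` so the event loop stays responsive. The pattern in isolation (the blocking function is a stand-in, not the real service):

```python
import asyncio
import time

def blocking_fetch(user_id: str, limit: int) -> list:
    """Stand-in for a synchronous vector-store / memory-service call."""
    time.sleep(0.05)  # simulated blocking I/O
    return [{"id": f"m{i}", "user_id": user_id} for i in range(limit)]

async def get_memories_async(user_id: str, limit: int) -> list:
    loop = asyncio.get_running_loop()
    # Positional args after the callable are forwarded to it in the worker thread
    return await loop.run_in_executor(None, blocking_fetch, user_id, limit)

memories = asyncio.run(get_memories_async("alice", 3))
print(len(memories))  # 3
```

Passing `None` as the executor uses the loop's default `ThreadPoolExecutor`, which matches what the handlers above do.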
+@router.get("/with-transcripts")
+async def get_memories_with_transcripts(
+ current_user: User = Depends(current_active_user),
+ limit: int = Query(default=50, ge=1, le=1000),
+ user_id: Optional[str] = Query(default=None, description="User ID filter (admin only)"),
+):
+ """Get memories with their source transcripts. Users see only their own memories, admins can see all or filter by user."""
+ try:
+ memory_service = get_memory_service()
+
+ # Determine which user's memories to fetch
+ target_user_id = current_user.user_id
+ if current_user.is_superuser and user_id:
+ target_user_id = user_id
+
+ # Execute memory retrieval directly (now async)
+ memories_with_transcripts = await memory_service.get_memories_with_transcripts(target_user_id, limit)
+
+ return {
+ "memories": memories_with_transcripts, # Streamlit expects 'memories' key
+ "count": len(memories_with_transcripts),
+ "user_id": target_user_id,
+ }
+
+ except Exception as e:
+ audio_logger.error(f"Error fetching memories with transcripts: {e}", exc_info=True)
+ return JSONResponse(
+ status_code=500, content={"message": "Error fetching memories with transcripts"}
+ )
+
+
+@router.get("/search")
+async def search_memories(
+ query: str = Query(..., description="Search query"),
+ current_user: User = Depends(current_active_user),
+ limit: int = Query(default=20, ge=1, le=100),
+ user_id: Optional[str] = Query(default=None, description="User ID filter (admin only)"),
+):
+ """Search memories by text query. Users can only search their own memories, admins can search all or filter by user."""
+ try:
+ memory_service = get_memory_service()
+
+ # Determine which user's memories to search
+ target_user_id = current_user.user_id
+ if current_user.is_superuser and user_id:
+ target_user_id = user_id
+
+ # Execute search in thread pool to avoid blocking
+ search_results = await asyncio.get_running_loop().run_in_executor(
+ None, memory_service.search_memories, query, target_user_id, limit
+ )
+
+ return {
+ "query": query,
+ "results": search_results,
+ "count": len(search_results),
+ "user_id": target_user_id,
+ }
+
+ except Exception as e:
+ audio_logger.error(f"Error searching memories: {e}", exc_info=True)
+ return JSONResponse(status_code=500, content={"message": "Error searching memories"})
+
+
+@router.delete("/{memory_id}")
+async def delete_memory(memory_id: str, current_user: User = Depends(current_active_user)):
+ """Delete a memory by ID. Users can only delete their own memories, admins can delete any."""
+ try:
+ memory_service = get_memory_service()
+
+ # For non-admin users, verify memory ownership before deletion
+ if not current_user.is_superuser:
+ # Check if memory belongs to current user
+ user_memories = await asyncio.get_running_loop().run_in_executor(
+ None, memory_service.get_all_memories, current_user.user_id, 1000
+ )
+
+ memory_ids = [str(mem.get("id", mem.get("memory_id", ""))) for mem in user_memories]
+ if memory_id not in memory_ids:
+ return JSONResponse(status_code=404, content={"message": "Memory not found"})
+
+ # Delete the memory
+ success = await asyncio.get_running_loop().run_in_executor(
+ None, memory_service.delete_memory, memory_id
+ )
+
+ if success:
+ return JSONResponse(content={"message": f"Memory {memory_id} deleted successfully"})
+ else:
+ return JSONResponse(status_code=404, content={"message": "Memory not found"})
+
+ except Exception as e:
+ audio_logger.error(f"Error deleting memory: {e}", exc_info=True)
+ return JSONResponse(status_code=500, content={"message": "Error deleting memory"})
+
+
+@router.get("/unfiltered")
+async def get_memories_unfiltered(
+ current_user: User = Depends(current_active_user),
+ limit: int = Query(default=50, ge=1, le=1000),
+ user_id: Optional[str] = Query(default=None, description="User ID filter (admin only)"),
+):
+ """Get all memories including fallback transcript memories (for debugging). Users see only their own memories, admins can see all or filter by user."""
+ try:
+ memory_service = get_memory_service()
+
+ # Determine which user's memories to fetch
+ target_user_id = current_user.user_id
+ if current_user.is_superuser and user_id:
+ target_user_id = user_id
+
+ # Execute memory retrieval in thread pool to avoid blocking
+ memories = await asyncio.get_running_loop().run_in_executor(
+ None, memory_service.get_all_memories_unfiltered, target_user_id, limit
+ )
+
+ return {"memories": memories, "count": len(memories), "user_id": target_user_id, "includes_fallback": True}
+
+ except Exception as e:
+ audio_logger.error(f"Error fetching unfiltered memories: {e}", exc_info=True)
+ return JSONResponse(status_code=500, content={"message": "Error fetching unfiltered memories"})
+
+
+@router.get("/admin")
+async def get_all_memories_admin(current_user: User = Depends(current_superuser), limit: int = 200):
+ """Get all memories across all users for admin review. Admin only."""
+ try:
+ memory_service = get_memory_service()
+
+ # Get debug tracker for additional context
+ debug_tracker = get_debug_tracker()
+
+ # Get all memories without user filtering
+ all_memories = await asyncio.get_running_loop().run_in_executor(
+ None, memory_service.get_all_memories_debug, limit
+ )
+
+ # Group by user for easier admin review
+ user_memories = {}
+ users_with_memories = set()
+ client_ids_with_memories = set()
+
+ for memory in all_memories:
+ user_id = memory.get("user_id", "unknown")
+ client_id = memory.get("client_id", "unknown")
+
+ if user_id not in user_memories:
+ user_memories[user_id] = []
+ user_memories[user_id].append(memory)
+
+ # Track users and clients for debug info
+ users_with_memories.add(user_id)
+ client_ids_with_memories.add(client_id)
+
+ # Enhanced stats combining both admin and debug information
+ stats = {
+ "total_memories": len(all_memories),
+ "total_users": len(user_memories),
+ "debug_tracker_initialized": debug_tracker is not None,
+ "users_with_memories": sorted(list(users_with_memories)),
+ "client_ids_with_memories": sorted(list(client_ids_with_memories)),
+ }
+
+ return {
+ "memories": all_memories, # Flat list for compatibility
+ "user_memories": user_memories, # Grouped by user
+ "stats": stats,
+ "total_users": len(user_memories),
+ "total_memories": len(all_memories),
+ "limit": limit,
+ }
+
+ except Exception as e:
+ audio_logger.error(f"Error fetching admin memories: {e}", exc_info=True)
+ return JSONResponse(status_code=500, content={"message": "Error fetching admin memories"})
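The admin grouping loop above can be expressed with `dict.setdefault`, which is equivalent to the explicit membership check; a small sketch with fabricated sample records:

```python
all_memories = [
    {"id": "m1", "user_id": "alice", "client_id": "alice-phone"},
    {"id": "m2", "user_id": "bob", "client_id": "bob-watch"},
    {"id": "m3", "user_id": "alice", "client_id": "alice-phone"},
]

user_memories = {}
for memory in all_memories:
    # Missing user_id falls back to "unknown", as in the handler above
    user_memories.setdefault(memory.get("user_id", "unknown"), []).append(memory)

stats = {
    "total_memories": len(all_memories),
    "total_users": len(user_memories),
    "users_with_memories": sorted(user_memories),
    "client_ids_with_memories": sorted({m.get("client_id", "unknown") for m in all_memories}),
}
print(stats["total_users"])  # 2
```

Grouping once and deriving the stats from the grouped dict avoids a second pass over the memory list.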
diff --git a/backends/advanced-backend/src/advanced_omi_backend/routers/modules/system_routes.py b/backends/advanced-backend/src/advanced_omi_backend/routers/modules/system_routes.py
new file mode 100644
index 00000000..0832f591
--- /dev/null
+++ b/backends/advanced-backend/src/advanced_omi_backend/routers/modules/system_routes.py
@@ -0,0 +1,265 @@
+"""
+System and utility routes for Friend-Lite API.
+
+Handles metrics, auth config, file processing, and other system utilities.
+"""
+
+import asyncio
+import io
+import logging
+import time
+import wave
+
+import numpy as np
+from fastapi import APIRouter, Depends, File, Query, UploadFile
+from fastapi.responses import JSONResponse
+from wyoming.audio import AudioChunk
+
+from advanced_omi_backend.auth import current_superuser
+from advanced_omi_backend.debug_system_tracker import get_debug_tracker
+from advanced_omi_backend.users import User, generate_client_id
+
+logger = logging.getLogger(__name__)
+audio_logger = logging.getLogger("audio_processing")
+
+router = APIRouter(tags=["system"])
+
+
+@router.get("/metrics")
+async def get_current_metrics(current_user: User = Depends(current_superuser)):
+ """Get current system metrics. Admin only."""
+ try:
+ debug_tracker = get_debug_tracker()
+
+ # Get basic system metrics
+ metrics = {
+ "timestamp": int(time.time()),
+ "debug_tracker_available": debug_tracker is not None,
+ }
+
+ if debug_tracker:
+ # Add debug tracker metrics if available
+ recent_transactions = debug_tracker.get_recent_transactions(limit=10)
+ metrics.update(
+ {
+ "recent_transactions_count": len(recent_transactions),
+ "recent_transactions": recent_transactions,
+ }
+ )
+
+ return metrics
+
+ except Exception as e:
+ audio_logger.error(f"Error fetching metrics: {e}")
+ return JSONResponse(status_code=500, content={"error": "Failed to fetch metrics"})
+
+
+@router.get("/auth/config")
+async def get_auth_config():
+ """Get authentication configuration for frontend."""
+ return {
+ "auth_method": "email",
+ "registration_enabled": False, # Only admin can create users
+ "features": {
+ "email_login": True,
+ "user_id_login": False, # Deprecated
+ "registration": False,
+ },
+ }
+
+
+@router.post("/process-audio-files")
+async def process_audio_files(
+ current_user: User = Depends(current_superuser),
+ files: list[UploadFile] = File(...),
+ device_name: str = Query(default="upload"),
+ auto_generate_client: bool = Query(default=True),
+):
+ """Process uploaded audio files through the transcription pipeline. Admin only."""
+ # Import client state management functions
+ from advanced_omi_backend.main import create_client_state, cleanup_client_state
+ # Process files through complete transcription pipeline like WebSocket clients
+ try:
+ if not files:
+ return JSONResponse(status_code=400, content={"error": "No files provided"})
+
+ processed_files = []
+ processed_conversations = []
+
+ for file_index, file in enumerate(files):
+ try:
+ # Validate file type (only WAV for now)
+ if not file.filename or not file.filename.lower().endswith(".wav"):
+ processed_files.append(
+ {
+ "filename": file.filename or "unknown",
+ "status": "error",
+ "error": "Only WAV files are currently supported",
+ }
+ )
+ continue
+
+ # Generate unique client ID for each file to create separate conversations
+ file_device_name = f"{device_name}-{file_index + 1:03d}"
+ client_id = generate_client_id(current_user, file_device_name)
+
+ # Create separate client state for this file
+ client_state = await create_client_state(client_id, current_user, file_device_name)
+
+ audio_logger.info(
+ f"Processing file {file_index + 1}/{len(files)}: {file.filename} with client_id: {client_id}"
+ )
+
+ # Read file content
+ content = await file.read()
+
+ # Process WAV file
+ with wave.open(io.BytesIO(content), "rb") as wav_file:
+ # Get audio parameters
+ sample_rate = wav_file.getframerate()
+ sample_width = wav_file.getsampwidth()
+ channels = wav_file.getnchannels()
+
+ # Read all audio data
+ audio_data = wav_file.readframes(wav_file.getnframes())
+
+ # Convert to mono if stereo
+ if channels == 2:
+ # Convert stereo to mono by averaging channels
+ if sample_width == 2:
+ audio_array = np.frombuffer(audio_data, dtype=np.int16)
+ else:
+ audio_array = np.frombuffer(audio_data, dtype=np.int32)
+
+ # Reshape to separate channels and average
+ audio_array = audio_array.reshape(-1, 2)
+ audio_data = (
+ np.mean(audio_array, axis=1).astype(audio_array.dtype).tobytes()
+ )
+ channels = 1
+
+ # Ensure sample rate is 16kHz (resample if needed)
+ if sample_rate != 16000:
+ audio_logger.warning(
+ f"File {file.filename} has sample rate {sample_rate}Hz, expected 16kHz. Processing anyway."
+ )
+
+ # Process audio in larger chunks for faster file processing
+ # Use larger chunks (32KB) for optimal performance
+ chunk_size = 32 * 1024 # 32KB chunks
+ base_timestamp = int(time.time())
+
+ for i in range(0, len(audio_data), chunk_size):
+ chunk_data = audio_data[i : i + chunk_size]
+
+ # Calculate relative timestamp for this chunk
+ chunk_offset_bytes = i
+ chunk_offset_seconds = chunk_offset_bytes / (
+ sample_rate * sample_width * channels
+ )
+ chunk_timestamp = base_timestamp + int(chunk_offset_seconds)
+
+ # Create AudioChunk
+ chunk = AudioChunk(
+ audio=chunk_data,
+ rate=sample_rate,
+ width=sample_width,
+ channels=channels,
+ timestamp=chunk_timestamp,
+ )
+
+ # Add to processing queue - this starts the transcription pipeline
+ await client_state.chunk_queue.put(chunk)
+
+ # Yield control occasionally to prevent blocking the event loop
+ if i % (chunk_size * 10) == 0: # Every 10 chunks (~320KB)
+ await asyncio.sleep(0)
+
+ processed_files.append(
+ {
+ "filename": file.filename,
+ "sample_rate": sample_rate,
+ "channels": channels,
+ "duration_seconds": len(audio_data)
+ / (sample_rate * sample_width * channels),
+ "size_bytes": len(audio_data),
+ "client_id": client_id,
+ "status": "processed",
+ }
+ )
+
+ audio_logger.info(
+ f"Processed audio file: {file.filename} ({len(audio_data)} bytes)"
+ )
+
+ # Wait for this file's transcription processing to complete
+ audio_logger.info(f"Waiting for transcription to process file: {file.filename}")
+
+ # Wait for chunks to be processed by the audio saver
+ await asyncio.sleep(1.0)
+
+ # Wait for transcription queue to be processed for this file
+ max_wait_time = 60 # 1 minute per file
+ wait_interval = 0.5
+ elapsed_time = 0
+
+ while elapsed_time < max_wait_time:
+ if (
+ client_state.transcription_queue.empty()
+ and client_state.chunk_queue.empty()
+ ):
+ audio_logger.info(f"Transcription completed for file: {file.filename}")
+ break
+
+ await asyncio.sleep(wait_interval)
+ elapsed_time += wait_interval
+
+ if elapsed_time >= max_wait_time:
+ audio_logger.warning(f"Transcription timed out for file: {file.filename}")
+
+ # Close this conversation by sending None to chunk queue
+ await client_state.chunk_queue.put(None)
+
+ # Give cleanup time to complete
+ await asyncio.sleep(0.5)
+
+ # Track conversation created
+ conversation_info = {
+ "client_id": client_id,
+ "filename": file.filename,
+ "status": "completed" if elapsed_time < max_wait_time else "timed_out",
+ }
+ processed_conversations.append(conversation_info)
+
+ # Clean up client state to prevent accumulation of active clients
+ await cleanup_client_state(client_id)
+ audio_logger.info(
+ f"Completed processing file {file_index + 1}/{len(files)}: {file.filename} - client cleaned up"
+ )
+
+ except Exception as e:
+ audio_logger.error(f"Error processing file {file.filename}: {e}")
+ # Clean up client state even on error to prevent accumulation
+ if "client_state" in locals():
+ await cleanup_client_state(client_id)
+ processed_files.append(
+ {"filename": file.filename or "unknown", "status": "error", "error": str(e)}
+ )
+
+ return {
+ "message": f"Processed {len(files)} files",
+ "files": processed_files,
+ "conversations": processed_conversations,
+ "successful": len([f for f in processed_files if f.get("status") != "error"]),
+ "failed": len([f for f in processed_files if f.get("status") == "error"]),
+ }
+
+ except Exception as e:
+ audio_logger.error(f"Error in process_audio_files: {e}")
+ return JSONResponse(status_code=500, content={"error": f"File processing failed: {str(e)}"})
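The upload handler above derives each chunk's timestamp from its byte offset into the PCM stream. A minimal standalone sketch of that arithmetic (function name and defaults are illustrative, not part of the backend):

```python
def chunk_offsets(total_bytes: int, sample_rate: int = 16000,
                  sample_width: int = 2, channels: int = 1,
                  chunk_size: int = 32 * 1024):
    """Yield (byte_offset, seconds_offset) pairs for fixed-size PCM chunks.

    Mirrors the handler above: the offset in seconds is the bytes already
    emitted divided by the byte rate, truncated to whole seconds.
    """
    bytes_per_second = sample_rate * sample_width * channels
    for i in range(0, total_bytes, chunk_size):
        yield i, int(i / bytes_per_second)

# 16 kHz 16-bit mono is 32000 bytes/s, so the second 32 KB chunk
# starts at byte 32768, i.e. 1.024 s in, giving a whole-second offset of 1.
offsets = list(chunk_offsets(64 * 1024))
```

In the handler each seconds offset is then added to `int(time.time())` captured once at the start of the file, so chunks within one upload share a consistent base timestamp.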
diff --git a/backends/advanced-backend/src/advanced_omi_backend/routers/modules/user_routes.py b/backends/advanced-backend/src/advanced_omi_backend/routers/modules/user_routes.py
new file mode 100644
index 00000000..61397db4
--- /dev/null
+++ b/backends/advanced-backend/src/advanced_omi_backend/routers/modules/user_routes.py
@@ -0,0 +1,164 @@
+"""
+User management routes for Friend-Lite API.
+
+Handles user CRUD operations and admin user management.
+"""
+
+import asyncio
+import logging
+
+from bson import ObjectId
+from fastapi import APIRouter, Depends, HTTPException
+from fastapi.responses import JSONResponse
+
+from advanced_omi_backend.auth import (
+ ADMIN_EMAIL,
+ current_active_user,
+ current_superuser,
+ get_user_db,
+ get_user_manager,
+)
+from advanced_omi_backend.client_manager import get_user_clients_all
+from advanced_omi_backend.database import chunks_col, db, users_col
+from advanced_omi_backend.memory import get_memory_service
+from advanced_omi_backend.users import User, UserCreate
+
+logger = logging.getLogger(__name__)
+
+router = APIRouter(prefix="/users", tags=["users"])
+
+
+@router.get("", response_model=list[User])
+async def get_users(current_user: User = Depends(current_superuser)):
+ """Get all users. Admin only."""
+ try:
+ users = []
+ async for user_doc in users_col.find():
+ user = User(**user_doc)
+ users.append(user)
+ return users
+ except Exception as e:
+ logger.error(f"Error fetching users: {e}")
+ raise HTTPException(status_code=500, detail="Error fetching users")
+
+
+@router.post("/create")
+async def create_user(user_data: UserCreate, current_user: User = Depends(current_superuser)):
+ """Create a new user. Admin only."""
+ try:
+ user_manager = get_user_manager()
+
+ # Check if user already exists
+ existing_user = await user_manager.get_by_email(user_data.email)
+ if existing_user is not None:
+ return JSONResponse(
+ status_code=409,
+ content={"message": f"User with email {user_data.email} already exists"},
+ )
+
+ # Create the user through the user manager
+ user = await user_manager.create(user_data)
+
+ return JSONResponse(
+ status_code=201,
+ content={
+ "message": f"User {user.email} created successfully",
+ "user_id": str(user.id),
+ "user_email": user.email,
+ },
+ )
+
+ except Exception as e:
+ logger.error(f"Error creating user: {e}")
+ return JSONResponse(
+ status_code=500,
+ content={"message": f"Error creating user: {str(e)}"},
+ )
+
+
+@router.delete("/{user_id}")
+async def delete_user(
+ user_id: str,
+ current_user: User = Depends(current_superuser),
+ delete_conversations: bool = False,
+ delete_memories: bool = False,
+):
+ """Delete a user and optionally their associated data. Admin only."""
+ try:
+ # Validate ObjectId format
+ try:
+ object_id = ObjectId(user_id)
+ except Exception:
+ return JSONResponse(
+ status_code=400,
+ content={
+ "message": f"Invalid user_id format: {user_id}. Must be a valid ObjectId."
+ },
+ )
+
+ # Check if user exists
+ existing_user = await users_col.find_one({"_id": object_id})
+ if not existing_user:
+ return JSONResponse(status_code=404, content={"message": f"User {user_id} not found"})
+
+ # Prevent deletion of administrator user
+ user_email = existing_user.get("email", "")
+ is_superuser = existing_user.get("is_superuser", False)
+
+ if is_superuser or user_email == ADMIN_EMAIL:
+ return JSONResponse(
+ status_code=403,
+ content={
+ "message": f"Cannot delete administrator user. Admin users are protected from deletion."
+ },
+ )
+
+ deleted_data = {}
+
+ # Delete user from users collection
+ user_result = await users_col.delete_one({"_id": object_id})
+ deleted_data["user_deleted"] = user_result.deleted_count > 0
+
+ if delete_conversations:
+ # Delete all conversations (audio chunks) for this user
+ conversations_result = await chunks_col.delete_many({"client_id": user_id})
+ deleted_data["conversations_deleted"] = conversations_result.deleted_count
+
+ if delete_memories:
+ # Delete all memories for this user using the memory service
+ try:
+ memory_service = get_memory_service()
+ memory_count = await asyncio.get_running_loop().run_in_executor(
+ None, memory_service.delete_all_user_memories, user_id
+ )
+ deleted_data["memories_deleted"] = memory_count
+ except Exception as mem_error:
+ logger.error(f"Error deleting memories for user {user_id}: {mem_error}")
+ deleted_data["memories_deleted"] = 0
+ deleted_data["memory_deletion_error"] = str(mem_error)
+
+ # Build message based on what was deleted
+ message = f"User {user_id} deleted successfully"
+ deleted_items = []
+ if delete_conversations and deleted_data.get("conversations_deleted", 0) > 0:
+ deleted_items.append(f"{deleted_data['conversations_deleted']} conversations")
+ if delete_memories and deleted_data.get("memories_deleted", 0) > 0:
+ deleted_items.append(f"{deleted_data['memories_deleted']} memories")
+
+ if deleted_items:
+ message += f" along with {', '.join(deleted_items)}"
+
+ return JSONResponse(
+ content={
+ "message": message,
+ "deleted_data": deleted_data,
+ }
+ )
+
+ except Exception as e:
+ logger.error(f"Error deleting user {user_id}: {e}")
+ return JSONResponse(
+ status_code=500,
+ content={"message": f"Error deleting user: {str(e)}"},
+ )
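The delete endpoint above assembles its response message from whichever optional deletions ran. A small standalone sketch of that string-building step (the helper name is illustrative, not part of the router):

```python
def deletion_message(user_id: str, conversations: int = 0, memories: int = 0) -> str:
    """Build the user-deletion summary the way the endpoint above does:
    base message, plus an "along with ..." clause for any nonzero counts."""
    message = f"User {user_id} deleted successfully"
    deleted_items = []
    if conversations > 0:
        deleted_items.append(f"{conversations} conversations")
    if memories > 0:
        deleted_items.append(f"{memories} memories")
    if deleted_items:
        message += f" along with {', '.join(deleted_items)}"
    return message
```

Counts of zero are omitted from the clause, matching the `> 0` guards in the endpoint.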
diff --git a/backends/advanced-backend/src/advanced_omi_backend/transcription.py b/backends/advanced-backend/src/advanced_omi_backend/transcription.py
new file mode 100644
index 00000000..6fb7f6c6
--- /dev/null
+++ b/backends/advanced-backend/src/advanced_omi_backend/transcription.py
@@ -0,0 +1,574 @@
+import asyncio
+import logging
+import os
+import time
+from typing import Optional
+import httpx
+
+from wyoming.asr import Transcribe, Transcript
+from wyoming.audio import AudioChunk, AudioStart, AudioStop
+from wyoming.client import AsyncTcpClient
+from wyoming.vad import VoiceStarted, VoiceStopped
+
+from advanced_omi_backend.debug_system_tracker import PipelineStage, get_debug_tracker
+from advanced_omi_backend.client_manager import get_client_manager
+
+# ASR Configuration
+OFFLINE_ASR_TCP_URI = os.getenv("OFFLINE_ASR_TCP_URI", "tcp://192.168.0.110:8765/")
+DEEPGRAM_API_KEY = os.getenv("DEEPGRAM_API_KEY")
+USE_DEEPGRAM = bool(DEEPGRAM_API_KEY)
+
+logger = logging.getLogger(__name__)
+
+
+class TranscriptionManager:
+ """Manages transcription using either Deepgram batch API or offline ASR service."""
+
+ def __init__(self, action_item_callback=None, chunk_repo=None):
+ self.client = None
+ self._current_audio_uuid = None
+ self.use_deepgram = USE_DEEPGRAM
+ self._audio_buffer = [] # Buffer for collecting audio chunks
+ self._audio_start_time = None # Track when audio collection started
+ self._max_collection_time = 90.0 # 1.5 minutes timeout
+ self.action_item_callback = action_item_callback # Callback to queue action items
+ self._current_transaction_id = None # Track current debug transaction
+ self.chunk_repo = chunk_repo # Database repository for chunks
+ self.client_manager = get_client_manager() # Cached client manager instance
+
+ # Event-driven ASR event handling for offline ASR
+ self._event_queue = asyncio.Queue()
+ self._event_reader_task = None
+ self._stop_event = asyncio.Event()
+ self._client_id = None
+
+ # Collection state tracking
+ self._collecting = False
+ self._collection_task = None
+
+ def _get_current_client(self):
+ """Get the current client state using ClientManager."""
+ if not self._client_id:
+ return None
+ return self.client_manager.get_client(self._client_id)
+
+ def _get_or_create_transaction(self, user_id: str, client_id: str, audio_uuid: str) -> str:
+ """Get or create a debug transaction for tracking transcription progress."""
+ if not self._current_transaction_id:
+ debug_tracker = get_debug_tracker()
+ self._current_transaction_id = debug_tracker.create_transaction(
+ user_id=user_id, client_id=client_id, conversation_id=audio_uuid
+ )
+ return self._current_transaction_id
+
+ def _track_transcription_event(
+ self, stage: PipelineStage, success: bool = True, error_message: Optional[str] = None, **metadata
+ ):
+ """Track a transcription event using the debug tracker."""
+ if self._current_transaction_id:
+ debug_tracker = get_debug_tracker()
+ debug_tracker.track_event(
+ self._current_transaction_id, stage, success, error_message, **metadata
+ )
+
+ async def connect(self, client_id: str | None = None):
+ """Initialize transcription service for the client."""
+ self._client_id = client_id
+
+ if self.use_deepgram:
+ # For Deepgram batch processing, we just need to validate the API key
+ if not DEEPGRAM_API_KEY:
+ raise Exception("DEEPGRAM_API_KEY is required for Deepgram transcription")
+ logger.info(f"Deepgram batch transcription initialized for client {self._client_id}")
+ return
+
+ try:
+ self.client = AsyncTcpClient.from_uri(OFFLINE_ASR_TCP_URI)
+ await self.client.connect()
+ logger.info(f"Connected to offline ASR service at {OFFLINE_ASR_TCP_URI}")
+
+ # Start the background event reader task for offline ASR
+ self._stop_event.clear()
+ self._event_reader_task = asyncio.create_task(self._read_events_continuously())
+ except Exception as e:
+ logger.error(f"Failed to connect to offline ASR service: {e}")
+ self.client = None
+ raise
+
+ async def flush_final_transcript(self, audio_duration_seconds: Optional[float] = None):
+ """Process collected audio and generate final transcript."""
+ if self.use_deepgram:
+ await self._process_collected_audio()
+ else:
+ await self._flush_offline_asr(audio_duration_seconds)
+
+ async def _process_collected_audio(self):
+ """Process all collected audio chunks using Deepgram file upload API."""
+ if not self._audio_buffer:
+ logger.info(f"No audio data collected for client {self._client_id}")
+ return
+
+ try:
+ logger.info(f"Processing {len(self._audio_buffer)} audio chunks for client {self._client_id}")
+
+ # Combine all audio chunks into a single buffer
+ combined_audio = b''.join(chunk.audio for chunk in self._audio_buffer if chunk.audio)
+
+ if not combined_audio:
+ logger.warning(f"No valid audio data found for client {self._client_id}")
+ return
+
+ # Send to Deepgram using file upload API
+ transcript_text = await self._transcribe_with_deepgram_api(combined_audio)
+
+ if transcript_text and self._current_audio_uuid:
+ logger.info(f"Deepgram batch transcript for {self._current_audio_uuid}: {transcript_text}")
+
+ # Create transcript segment
+ transcript_segment = {
+ "speaker": f"speaker_{self._client_id}",
+ "text": transcript_text,
+ "start": 0.0,
+ "end": 0.0,
+ }
+
+ # Store in database
+ if self.chunk_repo:
+ await self.chunk_repo.add_transcript_segment(
+ self._current_audio_uuid, transcript_segment
+ )
+ await self.chunk_repo.add_speaker(self._current_audio_uuid, f"speaker_{self._client_id}")
+
+ # Update client state
+ current_client = self._get_current_client()
+ if current_client:
+ current_client.last_transcript_time = time.time()
+ current_client.conversation_transcripts.append(transcript_text)
+
+ logger.info(f"Added Deepgram batch transcript for {self._current_audio_uuid} to DB")
+
+ except Exception as e:
+ logger.error(f"Error processing collected audio: {e}")
+ finally:
+ # Clear the buffer
+ self._audio_buffer.clear()
+ self._audio_start_time = None
+ self._collecting = False
+
+ async def _flush_offline_asr(self, audio_duration_seconds: Optional[float] = None):
+ """Flush final transcript from offline ASR by sending AudioStop."""
+ if self.client and self._current_audio_uuid:
+ try:
+ logger.info(
+ f"Flushing final transcript from offline ASR for audio {self._current_audio_uuid}"
+ )
+ # Send AudioStop to signal end of audio stream
+ audio_stop = AudioStop(timestamp=int(time.time()))
+ await self.client.write_event(audio_stop.event())
+
+ # Calculate proportional timeout: 5 seconds per 30 seconds of audio
+ # Ratio: 5/30 = 1/6 ≈ 0.167
+ if audio_duration_seconds:
+ proportional_timeout = audio_duration_seconds / 6.0
+ # Set reasonable bounds: minimum 3 seconds, maximum 60 seconds
+ max_wait = max(3.0, min(60.0, proportional_timeout))
+ logger.info(
+ f"Calculated timeout: {max_wait:.1f}s for {audio_duration_seconds:.1f}s of audio"
+ )
+ else:
+ max_wait = 5.0 # Default fallback
+ logger.info("Using default timeout: 5.0s (no audio duration provided)")
+
+ start_time = time.time()
+
+ # Wait for events from the background queue instead of direct reading
+ # This avoids conflicts with the background event reader task
+ while (time.time() - start_time) < max_wait:
+ try:
+ # Try to get event from queue with a short timeout
+ event = await asyncio.wait_for(self._event_queue.get(), timeout=0.5)
+
+ logger.info(f"Final flush - received event type: {event.type}")
+ if Transcript.is_type(event.type):
+ transcript_obj = Transcript.from_event(event)
+ transcript_text = transcript_obj.text.strip()
+ if transcript_text:
+ logger.info(f"Final transcript: {transcript_text}")
+
+ # Process final transcript the same way
+ transcript_segment = {
+ "speaker": f"speaker_{self._client_id}",
+ "text": transcript_text,
+ "start": 0.0,
+ "end": 0.0,
+ }
+
+ if self.chunk_repo:
+ await self.chunk_repo.add_transcript_segment(
+ self._current_audio_uuid, transcript_segment
+ )
+
+ # Update client state
+ current_client = self._get_current_client()
+ if current_client:
+ current_client.conversation_transcripts.append(transcript_text)
+ logger.info("Added final transcript to conversation")
+
+ except asyncio.TimeoutError:
+ # No more events available
+ break
+
+ logger.info(f"Finished flushing ASR for {self._current_audio_uuid}")
+ except Exception as e:
+ logger.error(f"Error flushing offline ASR transcript: {e}")
+
+ async def disconnect(self):
+ """Cleanly disconnect from ASR service."""
+ if self.use_deepgram:
+ # For batch processing, just process any remaining audio
+ if self._collecting or self._audio_buffer:
+ await self._process_collected_audio()
+
+ # Cancel collection task if running
+ if self._collection_task and not self._collection_task.done():
+ self._collection_task.cancel()
+ try:
+ await self._collection_task
+ except asyncio.CancelledError:
+ pass
+
+ logger.info(f"Deepgram batch transcription disconnected for client {self._client_id}")
+ return
+
+ # Stop the background event reader task
+ if self._event_reader_task:
+ self._stop_event.set()
+ try:
+ await asyncio.wait_for(self._event_reader_task, timeout=2.0)
+ logger.debug("Event reader task completed gracefully")
+ except asyncio.TimeoutError:
+ logger.warning("Event reader task did not stop gracefully, cancelling")
+ self._event_reader_task.cancel()
+ try:
+ await self._event_reader_task
+ except asyncio.CancelledError:
+ logger.debug("Event reader task cancelled successfully")
+ except Exception as e:
+ logger.error(f"Error stopping event reader task: {e}")
+ self._event_reader_task.cancel()
+ finally:
+ self._event_reader_task = None
+
+ if self.client:
+ try:
+ await self.client.disconnect()
+ logger.info("Disconnected from offline ASR service")
+ except Exception as e:
+ logger.error(f"Error disconnecting from offline ASR service: {e}")
+ finally:
+ self.client = None
+
+ async def _read_events_continuously(self):
+ """Background task that continuously reads events from ASR and puts them in queue."""
+ logger.info("Started background ASR event reader task")
+ try:
+ while not self._stop_event.is_set() and self.client:
+ try:
+ # Read events without timeout - this maximizes streaming bandwidth
+ event = await self.client.read_event()
+ if event is None:
+ break
+
+ # Put event in queue for processing
+ await self._event_queue.put(event)
+
+ except Exception as e:
+ if not self._stop_event.is_set():
+ logger.error(f"Error reading ASR event: {e}")
+ # Brief pause before retry to avoid tight error loop
+ await asyncio.sleep(0.1)
+ break
+ except asyncio.CancelledError:
+ logger.info("Background ASR event reader task cancelled")
+ finally:
+ logger.info("Background ASR event reader task stopped")
+
+ async def _process_events_from_queue(self, audio_uuid: str, client_id: str):
+ """Process any available events from the queue (non-blocking)."""
+ try:
+ while True:
+ try:
+ # Get events from queue without blocking
+ event = self._event_queue.get_nowait()
+ await self._process_asr_event(event, audio_uuid, client_id)
+ except asyncio.QueueEmpty:
+ # No more events available, return
+ break
+ except Exception as e:
+ logger.error(f"Error processing events from queue: {e}")
+
+ async def _process_asr_event(self, event, audio_uuid: str, client_id: str):
+ """Process a single ASR event."""
+ logger.info(f"Received ASR event type: {event.type} for {audio_uuid}")
+
+ if Transcript.is_type(event.type):
+ transcript_obj = Transcript.from_event(event)
+ transcript_text = transcript_obj.text.strip()
+
+ # Handle both Transcript and StreamingTranscript types
+ # Check the 'final' attribute from the event data, not the reconstructed object
+ is_final = event.data.get("final", True) # Default to True for standard Transcript
+
+ # Only process final transcripts, ignore partial ones
+ if not is_final:
+ logger.info(f"Ignoring partial transcript for {audio_uuid}: {transcript_text}")
+ return
+
+ if transcript_text:
+ logger.info(f"Transcript for {audio_uuid}: {transcript_text} (final: {is_final})")
+
+ # Track successful transcription
+ # Note: Transaction tracking requires user_id which isn't available here
+ # Individual transcription success tracked in main processing pipeline
+
+ # Create transcript segment with new format
+ transcript_segment = {
+ "speaker": f"speaker_{client_id}",
+ "text": transcript_text,
+ "start": 0.0,
+ "end": 0.0,
+ }
+
+ # Store transcript segment in DB immediately
+ if self.chunk_repo:
+ await self.chunk_repo.add_transcript_segment(audio_uuid, transcript_segment)
+ await self.chunk_repo.add_speaker(audio_uuid, f"speaker_{client_id}")
+ logger.info(f"Added transcript segment for {audio_uuid} to DB.")
+
+ # Update transcript time for conversation timeout tracking
+ current_client = self.client_manager.get_client(client_id)
+ if current_client:
+ current_client.last_transcript_time = time.time()
+ # Collect transcript for end-of-conversation memory processing
+ current_client.conversation_transcripts.append(transcript_text)
+ logger.info(f"Added transcript to conversation collection: '{transcript_text}'")
+
+ elif VoiceStarted.is_type(event.type):
+ logger.info(f"VoiceStarted event received for {audio_uuid}")
+ current_time = time.time()
+ current_client = self.client_manager.get_client(client_id)
+ if current_client:
+ current_client.record_speech_start(audio_uuid, current_time)
+ logger.info(f"Voice started for {audio_uuid} at {current_time}")
+ else:
+ logger.warning(
+ f"Client {client_id} not found in active_clients for VoiceStarted event"
+ )
+
+ elif VoiceStopped.is_type(event.type):
+ logger.info(f"VoiceStopped event received for {audio_uuid}")
+ current_time = time.time()
+ current_client = self.client_manager.get_client(client_id)
+ if current_client:
+ current_client.record_speech_end(audio_uuid, current_time)
+ logger.info(f"Voice stopped for {audio_uuid} at {current_time}")
+ else:
+ logger.warning(
+ f"Client {client_id} not found in active_clients for VoiceStopped event"
+ )
+
+ async def _collection_timeout_handler(self):
+ """Handle collection timeout - process audio after 1.5 minutes."""
+ try:
+ await asyncio.sleep(self._max_collection_time)
+ if self._collecting and self._audio_buffer:
+ logger.info(f"Collection timeout reached for client {self._client_id}, processing audio")
+ await self._process_collected_audio()
+ except asyncio.CancelledError:
+ logger.debug(f"Collection timeout cancelled for client {self._client_id}")
+ except Exception as e:
+ logger.error(f"Error in collection timeout handler: {e}")
+
+ async def _transcribe_with_deepgram_api(self, audio_data: bytes) -> str:
+ """Transcribe audio using Deepgram's REST API."""
+ try:
+ url = "https://api.deepgram.com/v1/listen"
+
+ params = {
+ "model": "nova-3",
+ "language": "multi",
+ "smart_format": "true",
+ "punctuate": "true",
+ "diarize": "true",
+ "encoding": "linear16",
+ "sample_rate": "16000",
+ "channels": "1",
+ }
+
+ headers = {
+ "Authorization": f"Token {DEEPGRAM_API_KEY}",
+ "Content-Type": "audio/raw"
+ }
+
+ logger.info(f"Sending {len(audio_data)} bytes to Deepgram API for client {self._client_id}")
+
+ # Calculate dynamic timeout based on audio file size
+ # Estimate: ~1-2 seconds processing time per second of audio
+ # Audio duration estimate: bytes / (sample_rate * sample_width * channels)
+ estimated_duration = len(audio_data) / (16000 * 2 * 1) # 16kHz, 16-bit, mono
+ processing_timeout = max(120, int(estimated_duration * 3)) # Minimum 2 minutes, 3x audio duration
+
+ # Configure differentiated timeouts for different phases
+ # The issue was using a single timeout - large files need more time to WRITE (upload)
+ timeout_config = httpx.Timeout(
+ connect=30.0, # 30 seconds to establish connection
+ read=processing_timeout, # Dynamic timeout for reading response (based on audio length)
+ write=max(180.0, int(len(audio_data) / 16000)), # Upload timeout: 3 min minimum, or ~1 s per 16 KB of audio
+ pool=10.0 # 10 seconds to acquire connection from pool
+ )
+
+ logger.info(f"Estimated audio duration: {estimated_duration:.1f}s")
+ logger.info(f"Timeout config - read: {processing_timeout}s, write: {timeout_config.write}s, connect: {timeout_config.connect}s")
+
+ async with httpx.AsyncClient(timeout=timeout_config) as client:
+ response = await client.post(
+ url,
+ params=params,
+ headers=headers,
+ content=audio_data
+ )
+
+ if response.status_code == 200:
+ result = response.json()
+
+ # Extract transcript from response
+ if (result.get("results", {}).get("channels", []) and
+ result["results"]["channels"][0].get("alternatives", [])):
+
+ alternative = result["results"]["channels"][0]["alternatives"][0]
+ transcript = alternative.get("transcript", "").strip()
+
+ if transcript:
+ logger.info(f"Deepgram API transcription successful: {len(transcript)} characters")
+ return transcript
+ else:
+ logger.warning("Deepgram API returned empty transcript")
+ return ""
+ else:
+ logger.warning("Deepgram API response missing expected transcript structure")
+ return ""
+ else:
+ logger.error(f"Deepgram API error: {response.status_code} - {response.text}")
+ return ""
+
+ except asyncio.TimeoutError:
+ logger.error(f"Deepgram API timeout for {len(audio_data)} bytes - check timeout configuration")
+ return ""
+ except httpx.TimeoutException as e:
+ # More specific timeout error reporting
+ timeout_type = "unknown"
+ if "connect" in str(e).lower():
+ timeout_type = "connection"
+ elif "read" in str(e).lower():
+ timeout_type = "read"
+ elif "write" in str(e).lower():
+ timeout_type = "write (upload)"
+ elif "pool" in str(e).lower():
+ timeout_type = "connection pool"
+ logger.error(f"HTTP {timeout_type} timeout during Deepgram API call for {len(audio_data)} bytes: {e}")
+ return ""
+ except Exception as e:
+ logger.error(f"Error calling Deepgram API: {e}")
+ return ""
+
+
+ async def transcribe_chunk(self, audio_uuid: str, chunk: AudioChunk, client_id: str):
+ """Collect audio chunk for batch processing or transcribe using offline ASR."""
+ if self.use_deepgram:
+ await self._collect_audio_chunk(audio_uuid, chunk, client_id)
+ else:
+ await self._transcribe_chunk_offline(audio_uuid, chunk, client_id)
+
+ async def _collect_audio_chunk(self, audio_uuid: str, chunk: AudioChunk, client_id: str):
+ """Collect audio chunk for batch processing."""
+ try:
+ # Update current audio UUID
+ if self._current_audio_uuid != audio_uuid:
+ self._current_audio_uuid = audio_uuid
+ logger.info(f"New audio_uuid for Deepgram batch: {audio_uuid}")
+
+ # Reset collection state for new audio session
+ self._audio_buffer.clear()
+ self._audio_start_time = time.time()
+ self._collecting = True
+
+ # Start collection timeout task
+ if self._collection_task and not self._collection_task.done():
+ self._collection_task.cancel()
+ self._collection_task = asyncio.create_task(self._collection_timeout_handler())
+
+ # Add chunk to buffer if we have audio data
+ if chunk.audio and len(chunk.audio) > 0:
+ self._audio_buffer.append(chunk)
+ logger.debug(f"Collected {len(chunk.audio)} bytes for {audio_uuid} (total chunks: {len(self._audio_buffer)})")
+ else:
+ logger.warning(f"Empty audio chunk received for {audio_uuid}")
+
+ except Exception as e:
+ logger.error(f"Error collecting audio chunk for {audio_uuid}: {e}")
+
+ async def _transcribe_chunk_offline(self, audio_uuid: str, chunk: AudioChunk, client_id: str):
+ """Transcribe using offline ASR service."""
+ if not self.client:
+ logger.error(f"No ASR connection available for {audio_uuid}")
+ # Track transcription failure handled by main pipeline
+ return
+
+ # Track transcription request
+ start_time = time.time()
+ # Note: Transcription requests tracked by main pipeline
+
+ try:
+ if self._current_audio_uuid != audio_uuid:
+ self._current_audio_uuid = audio_uuid
+ logger.info(f"New audio_uuid: {audio_uuid}")
+ transcribe = Transcribe()
+ await self.client.write_event(transcribe.event())
+ audio_start = AudioStart(
+ rate=chunk.rate,
+ width=chunk.width,
+ channels=chunk.channels,
+ timestamp=chunk.timestamp,
+ )
+ await self.client.write_event(audio_start.event())
+
+ # Send the audio chunk
+ logger.debug(f"Sending {len(chunk.audio)} bytes audio chunk to ASR for {audio_uuid}")
+ await self.client.write_event(chunk.event())
+
+ # Process any available events from the background queue (non-blocking)
+ await self._process_events_from_queue(audio_uuid, client_id)
+
+ except Exception as e:
+ logger.error(f"Error in offline transcribe_chunk for {audio_uuid}: {e}")
+ # Track transcription failure handled by main pipeline
+ # Attempt to reconnect on error
+ await self._reconnect()
+
+ async def _reconnect(self):
+ """Attempt to reconnect to ASR service."""
+ if self.use_deepgram:
+ # For batch processing, no reconnection needed
+ logger.info("Deepgram batch processing - no reconnection required")
+ return
+
+ logger.info("Attempting to reconnect to ASR service...")
+
+ await self.disconnect()
+ await asyncio.sleep(2) # Brief delay before reconnecting
+ try:
+ await self.connect()
+ except Exception as e:
+ logger.error(f"Reconnection failed: {e}")
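`_flush_offline_asr` above scales its wait for the final transcript with the audio length: roughly 5 s of waiting per 30 s of audio, clamped to a sane range, with a fixed fallback when the duration is unknown. A minimal sketch of that policy in isolation (the function name is illustrative):

```python
from typing import Optional

def flush_timeout(audio_duration_seconds: Optional[float] = None) -> float:
    """Proportional ASR flush timeout, as in _flush_offline_asr above:
    duration / 6 (i.e. ~5 s per 30 s of audio), clamped to [3, 60] seconds,
    defaulting to 5 s when no duration is available."""
    if audio_duration_seconds is None:
        return 5.0
    return max(3.0, min(60.0, audio_duration_seconds / 6.0))
```

The clamp keeps short clips from timing out instantly and very long recordings from stalling the pipeline for minutes.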
diff --git a/backends/advanced-backend/src/advanced_omi_backend/users.py b/backends/advanced-backend/src/advanced_omi_backend/users.py
new file mode 100644
index 00000000..0b6f012f
--- /dev/null
+++ b/backends/advanced-backend/src/advanced_omi_backend/users.py
@@ -0,0 +1,130 @@
+"""User models for fastapi-users integration with Beanie and MongoDB."""
+
+import logging
+import random
+import string
+from datetime import UTC, datetime
+from typing import Optional
+
+from beanie import Document, PydanticObjectId
+from fastapi_users.db import BeanieBaseUser, BeanieUserDatabase
+from fastapi_users.schemas import BaseUserCreate
+from pydantic import Field
+
+logger = logging.getLogger(__name__)
+
+
+class UserCreate(BaseUserCreate):
+ """Schema for creating new users."""
+
+ display_name: Optional[str] = None
+
+
+class User(BeanieBaseUser, Document):
+ """User model extending fastapi-users BeanieBaseUser with custom fields."""
+
+ display_name: Optional[str] = None
+ # Client tracking for audio devices
+ registered_clients: dict[str, dict] = Field(default_factory=dict)
+
+ @property
+ def user_id(self) -> str:
+ """Return string representation of MongoDB ObjectId for backward compatibility."""
+ return str(self.id)
+
+ def register_client(self, client_id: str, device_name: Optional[str] = None) -> None:
+ """Register a new client for this user."""
+ # Check if client already exists
+ if client_id in self.registered_clients:
+ # Update existing client
+ logger.info(f"Updating existing client {client_id} for user {self.user_id}")
+ self.registered_clients[client_id]["last_seen"] = datetime.now(UTC)
+ self.registered_clients[client_id]["device_name"] = (
+ device_name or self.registered_clients[client_id].get("device_name")
+ )
+ return
+
+ # Add new client
+ self.registered_clients[client_id] = {
+ "client_id": client_id,
+ "device_name": device_name,
+ "first_seen": datetime.now(UTC),
+ "last_seen": datetime.now(UTC),
+ "is_active": True,
+ }
+
+ def get_client_ids(self) -> list[str]:
+ """Get all client IDs registered to this user."""
+ return list(self.registered_clients.keys())
+
+ # def has_client(self, client_id: str) -> bool:
+ # """Check if a client is registered to this user."""
+ # return client_id in self.registered_clients
+
+ class Settings:
+ name = "users" # Collection name in MongoDB - standardized from "fastapi_users"
+ email_collation = {"locale": "en", "strength": 2} # Case-insensitive comparison
+
+
+async def get_user_db():
+ """Get the user database instance for dependency injection."""
+ yield BeanieUserDatabase(User) # type: ignore
+
+
+async def get_user_by_id(user_id: str) -> Optional[User]:
+ """Get user by MongoDB ObjectId string."""
+ try:
+ return await User.get(PydanticObjectId(user_id))
+ except Exception:
+ return None
+
+
+async def get_user_by_client_id(client_id: str) -> Optional[User]:
+ """Find the user that owns a specific client_id."""
+ return await User.find_one({"registered_clients.client_id": client_id})
+
+
+async def register_client_to_user(
+ user: User, client_id: str, device_name: Optional[str] = None
+) -> None:
+ """Register a client to a user and save to database."""
+ user.register_client(client_id, device_name)
+ await user.save()
+
+
+def generate_client_id(user: User, device_name: Optional[str] = None) -> str:
+ """
+ Generate a unique client_id in the format: user_id_suffix-device_suffix[-counter]
+
+ Args:
+ user: The User object
+ device_name: Optional device name (e.g., 'havpe', 'phone', 'tablet')
+
+ Returns:
+ client_id in format: user_id_suffix-device_suffix or user_id_suffix-device_suffix-N for duplicates
+ """
+ # Use last 6 characters of MongoDB ObjectId as user identifier
+ user_id_suffix = str(user.id)[-6:]
+
+ if device_name:
+ # Sanitize device name: lowercase, alphanumeric + hyphens only, max 10 chars
+ sanitized_device = "".join(c for c in device_name.lower() if c.isalnum() or c == "-")[:10]
+ base_client_id = f"{user_id_suffix}-{sanitized_device}"
+
+ # Check for existing client IDs to avoid conflicts
+ existing_client_ids = user.get_client_ids()
+
+ # If base client_id doesn't exist, use it
+ if base_client_id not in existing_client_ids:
+ return base_client_id
+
+ # If it exists, find the next available counter
+ counter = 2
+ while f"{base_client_id}-{counter}" in existing_client_ids:
+ counter += 1
+
+ return f"{base_client_id}-{counter}"
+ else:
+ # Generate random 4-character suffix if no device name provided
+ suffix = "".join(random.choices(string.ascii_lowercase + string.digits, k=4))
+ return f"{user_id_suffix}-{suffix}"
diff --git a/backends/advanced-backend/src/laptop_client.py b/backends/advanced-backend/src/laptop_client.py
deleted file mode 100644
index 9c4cd476..00000000
--- a/backends/advanced-backend/src/laptop_client.py
+++ /dev/null
@@ -1,83 +0,0 @@
-import argparse
-import asyncio
-import logging
-
-import websockets
-import websockets.exceptions
-from easy_audio_interfaces.extras.local_audio import InputMicStream
-
-logger = logging.getLogger(__name__)
-logging.basicConfig(level=logging.INFO)
-
-# Default WebSocket settings
-DEFAULT_HOST = "localhost"
-DEFAULT_PORT = 8000
-DEFAULT_ENDPOINT = "/ws_pcm"
-
-
-def build_websocket_uri(host: str, port: int, endpoint: str, user_id: str | None = None) -> str:
- """Build WebSocket URI with optional user_id parameter."""
- base_uri = f"ws://{host}:{port}{endpoint}"
- if user_id:
- base_uri += f"?user_id={user_id}"
- return base_uri
-
-
-async def main():
- # Parse command line arguments
- parser = argparse.ArgumentParser(description="Laptop audio client for OMI backend")
- parser.add_argument("--host", default=DEFAULT_HOST, help="WebSocket server host")
- parser.add_argument("--port", type=int, default=DEFAULT_PORT, help="WebSocket server port")
- parser.add_argument("--endpoint", default=DEFAULT_ENDPOINT, help="WebSocket endpoint")
- parser.add_argument("--user-id", help="User ID for audio session (optional)")
- args = parser.parse_args()
-
- # Build WebSocket URI
- ws_uri = build_websocket_uri(args.host, args.port, args.endpoint, args.user_id)
- print(f"Connecting to {ws_uri}")
- if args.user_id:
- print(f"Using User ID: {args.user_id}")
-
- try:
- async with websockets.connect(ws_uri) as websocket:
- print("Connected to WebSocket")
-
- async def send_audio():
- """Capture audio from microphone and send raw PCM bytes over WebSocket"""
- async with InputMicStream(chunk_size=512) as stream:
- while True:
- try:
- data = await stream.read()
- if data and data.audio:
- # Send raw PCM bytes directly to WebSocket
- await websocket.send(data.audio)
- logger.debug(f"Sent audio chunk: {len(data.audio)} bytes")
- await asyncio.sleep(0.01)  # Small delay to avoid overwhelming the server
- except websockets.exceptions.ConnectionClosed:
- logger.info("WebSocket connection closed during audio sending")
- break
- except Exception as e:
- logger.error(f"Error sending audio: {e}")
- break
-
- async def receive_messages():
- """Receive any messages from the WebSocket server"""
- try:
- async for message in websocket:
- print(f"Received message: {message}")
- except websockets.exceptions.ConnectionClosed:
- logger.info("WebSocket connection closed during message receiving")
- except Exception as e:
- logger.error(f"Error receiving messages: {e}")
-
- # Run both audio sending and message receiving concurrently
- await asyncio.gather(send_audio(), receive_messages())
-
- except ConnectionRefusedError:
- logger.error(f"Could not connect to {ws_uri}. Make sure the server is running.")
- except Exception as e:
- logger.error(f"Error connecting to WebSocket: {e}")
-
-
-if __name__ == "__main__":
- asyncio.run(main())
\ No newline at end of file
diff --git a/backends/advanced-backend/src/main.py b/backends/advanced-backend/src/main.py
deleted file mode 100644
index 87c3a016..00000000
--- a/backends/advanced-backend/src/main.py
+++ /dev/null
@@ -1,2016 +0,0 @@
-#!/usr/bin/env python3
-"""Unified Omi-audio service
-
-* Accepts Opus packets over a WebSocket (`/ws`) or PCM over a WebSocket (`/ws_pcm`).
-* Uses a central queue to decouple audio ingestion from processing.
-* A saver consumer buffers PCM and writes 30-second WAV chunks to `./audio_chunks/`.
-* A transcription consumer sends each chunk to a Wyoming ASR service.
-* The transcript is stored in **mem0** and MongoDB.
-
-"""
-
-import asyncio
-import concurrent.futures
-import logging
-import os
-import time
-import uuid
-from contextlib import asynccontextmanager
-from functools import partial
-from pathlib import Path
-from typing import Optional, Tuple
-import re
-
-import ollama
-from dotenv import load_dotenv
-from easy_audio_interfaces.filesystem.filesystem_interfaces import LocalFileSink
-from fastapi import FastAPI, Query, WebSocket, WebSocketDisconnect
-from fastapi.responses import JSONResponse
-from fastapi.staticfiles import StaticFiles
-from motor.motor_asyncio import AsyncIOMotorClient
-from omi.decoder import OmiOpusDecoder
-from wyoming.asr import Transcribe, Transcript
-from wyoming.audio import AudioChunk, AudioStart
-from wyoming.client import AsyncTcpClient
-from wyoming.vad import VoiceStarted, VoiceStopped
-
-# from debug_utils import memory_debug
-from memory import get_memory_service, init_memory_config, shutdown_memory_service
-from metrics import (
- get_metrics_collector,
- start_metrics_collection,
- stop_metrics_collection,
-)
-from action_items_service import ActionItemsService
-
-###############################################################################
-# SETUP
-###############################################################################
-
-# Load environment variables first
-load_dotenv()
-
-# Mem0 telemetry configuration is now handled in the memory module
-
-# Logging setup
-logging.basicConfig(level=logging.INFO)
-logger = logging.getLogger("advanced-backend")
-audio_logger = logging.getLogger("audio_processing")
-
-# Conditional Deepgram import
-try:
- from deepgram import DeepgramClient, FileSource, PrerecordedOptions # type: ignore
-
- DEEPGRAM_AVAILABLE = True
- logger.info("✅ Deepgram SDK available")
-except ImportError:
- DEEPGRAM_AVAILABLE = False
- logger.warning("Deepgram SDK not available. Install with: pip install deepgram-sdk")
-audio_cropper_logger = logging.getLogger("audio_cropper")
-
-
-###############################################################################
-# CONFIGURATION
-###############################################################################
-
-# MongoDB Configuration
-MONGODB_URI = os.getenv("MONGODB_URI", "mongodb://mongo:27017")
-mongo_client = AsyncIOMotorClient(MONGODB_URI)
-db = mongo_client.get_default_database("friend-lite")
-chunks_col = db["audio_chunks"]
-users_col = db["users"]
-speakers_col = db["speakers"] # New collection for speaker management
-action_items_col = db["action_items"] # New collection for action items
-
-# Audio Configuration
-OMI_SAMPLE_RATE = 16_000 # Hz
-OMI_CHANNELS = 1
-OMI_SAMPLE_WIDTH = 2 # bytes (16-bit)
-SEGMENT_SECONDS = 60 # length of each stored chunk
-TARGET_SAMPLES = OMI_SAMPLE_RATE * SEGMENT_SECONDS
-
-# Conversation timeout configuration
-NEW_CONVERSATION_TIMEOUT_MINUTES = float(
- os.getenv("NEW_CONVERSATION_TIMEOUT_MINUTES", "1.5")
-)
-
-# Audio cropping configuration
-AUDIO_CROPPING_ENABLED = os.getenv("AUDIO_CROPPING_ENABLED", "true").lower() == "true"
-MIN_SPEECH_SEGMENT_DURATION = float(
- os.getenv("MIN_SPEECH_SEGMENT_DURATION", "1.0")
-) # seconds
-CROPPING_CONTEXT_PADDING = float(
- os.getenv("CROPPING_CONTEXT_PADDING", "0.1")
-) # seconds of padding around speech
-
-# Directory where WAV chunks are written
-CHUNK_DIR = Path("./audio_chunks")
-CHUNK_DIR.mkdir(parents=True, exist_ok=True)
-
-# ASR Configuration
-OFFLINE_ASR_TCP_URI = os.getenv("OFFLINE_ASR_TCP_URI", "tcp://192.168.0.110:8765/")
-DEEPGRAM_API_KEY = os.getenv("DEEPGRAM_API_KEY")
-
-# Determine transcription strategy based on environment variables
-USE_DEEPGRAM = bool(DEEPGRAM_API_KEY and DEEPGRAM_AVAILABLE)
-if DEEPGRAM_API_KEY and not DEEPGRAM_AVAILABLE:
- audio_logger.error(
- "DEEPGRAM_API_KEY provided but Deepgram SDK not available. Falling back to offline ASR."
- )
-audio_logger.info(
- f"Transcription strategy: {'Deepgram' if USE_DEEPGRAM else 'Offline ASR'}"
-)
-
-# Deepgram client placeholder (not implemented)
-deepgram_client = None
-if USE_DEEPGRAM:
- audio_logger.warning(
- "Deepgram transcription requested but not yet implemented. Falling back to offline ASR."
- )
- USE_DEEPGRAM = False
-
-# Ollama & Qdrant Configuration
-OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://ollama:11434")
-QDRANT_BASE_URL = os.getenv("QDRANT_BASE_URL", "qdrant")
-
-# Memory configuration is now handled in the memory module
-# Initialize it with our Ollama and Qdrant URLs
-init_memory_config(
- ollama_base_url=OLLAMA_BASE_URL,
- qdrant_base_url=QDRANT_BASE_URL,
-)
-
-# Speaker service configuration
-
-# Thread pool executors
-_DEC_IO_EXECUTOR = concurrent.futures.ThreadPoolExecutor(
- max_workers=os.cpu_count() or 4,
- thread_name_prefix="opus_io",
-)
-
-# Initialize memory service, speaker service, and ollama client
-memory_service = get_memory_service()
-ollama_client = ollama.Client(host=OLLAMA_BASE_URL)
-
-action_items_service = ActionItemsService(action_items_col, ollama_client)
-
-###############################################################################
-# AUDIO PROCESSING FUNCTIONS
-###############################################################################
-
-
-async def _process_audio_cropping_with_relative_timestamps(
- original_path: str,
- speech_segments: list[tuple[float, float]],
- output_path: str,
- audio_uuid: str,
-) -> bool:
- """
- Process audio cropping with automatic relative timestamp conversion.
- This function handles both live processing and reprocessing scenarios.
- """
- try:
- # Convert absolute timestamps to relative timestamps
- # Extract file start time from filename: timestamp_client_uuid.wav
- filename = original_path.split("/")[-1]
- file_start_timestamp = float(filename.split("_")[0])
-
- # Convert speech segments to relative timestamps
- relative_segments = []
- for start_abs, end_abs in speech_segments:
- start_rel = start_abs - file_start_timestamp
- end_rel = end_abs - file_start_timestamp
-
- # Ensure relative timestamps are positive (sanity check)
- if start_rel < 0:
- audio_logger.warning(
- f"⚠️ Negative start timestamp: {start_rel}, clamping to 0.0"
- )
- start_rel = 0.0
- if end_rel < 0:
- audio_logger.warning(
- f"⚠️ Negative end timestamp: {end_rel}, skipping segment"
- )
- continue
-
- relative_segments.append((start_rel, end_rel))
-
- audio_logger.info(
- f"Converting timestamps for {audio_uuid}: file_start={file_start_timestamp}"
- )
- audio_logger.info(f"Absolute segments: {speech_segments}")
- audio_logger.info(f"Relative segments: {relative_segments}")
-
- success = await _crop_audio_with_ffmpeg(
- original_path, relative_segments, output_path
- )
- if success:
- # Update database with cropped file info (keep original absolute timestamps for reference)
- cropped_filename = output_path.split("/")[-1]
- await chunk_repo.update_cropped_audio(
- audio_uuid, cropped_filename, speech_segments
- )
- audio_logger.info(
- f"Successfully processed cropped audio: {cropped_filename}"
- )
- return True
- else:
- audio_logger.error(f"Failed to crop audio for {audio_uuid}")
- return False
- except Exception as e:
- audio_logger.error(f"Error in audio cropping task for {audio_uuid}: {e}")
- return False
-
-
-async def _crop_audio_with_ffmpeg(
- original_path: str, speech_segments: list[tuple[float, float]], output_path: str
-) -> bool:
- """Use ffmpeg to crop audio - runs as async subprocess, no GIL issues"""
- audio_cropper_logger.info(
- f"Cropping audio {original_path} with {len(speech_segments)} speech segments"
- )
-
- if not AUDIO_CROPPING_ENABLED:
- audio_cropper_logger.info(f"Audio cropping disabled, skipping {original_path}")
- return False
-
- if not speech_segments:
- audio_cropper_logger.warning(f"No speech segments to crop for {original_path}")
- return False
-
- # Filter out segments that are too short
- filtered_segments = []
- for start, end in speech_segments:
- duration = end - start
- if duration >= MIN_SPEECH_SEGMENT_DURATION:
- # Add padding around speech segments
- padded_start = max(0, start - CROPPING_CONTEXT_PADDING)
- padded_end = end + CROPPING_CONTEXT_PADDING
- filtered_segments.append((padded_start, padded_end))
- else:
- audio_cropper_logger.debug(
- f"Skipping short segment: {start}-{end} ({duration:.2f}s < {MIN_SPEECH_SEGMENT_DURATION}s)"
- )
-
- if not filtered_segments:
- audio_cropper_logger.warning(
- f"No segments meet minimum duration ({MIN_SPEECH_SEGMENT_DURATION}s) for {original_path}"
- )
- return False
-
- audio_cropper_logger.info(
- f"Cropping audio {original_path} with {len(filtered_segments)} speech segments (filtered from {len(speech_segments)})"
- )
-
- try:
- # Build ffmpeg filter for concatenating speech segments
- filter_parts = []
- for i, (start, end) in enumerate(filtered_segments):
- duration = end - start
- filter_parts.append(
- f"[0:a]atrim=start={start}:duration={duration},asetpts=PTS-STARTPTS[seg{i}]"
- )
-
- # Concatenate all segments
- inputs = "".join(f"[seg{i}]" for i in range(len(filtered_segments)))
- concat_filter = f"{inputs}concat=n={len(filtered_segments)}:v=0:a=1[out]"
-
- full_filter = ";".join(filter_parts + [concat_filter])
-
- # Run ffmpeg as async subprocess
- cmd = [
- "ffmpeg",
- "-y", # -y = overwrite output
- "-i",
- original_path,
- "-filter_complex",
- full_filter,
- "-map",
- "[out]",
- "-c:a",
- "pcm_s16le", # Keep same format as original
- output_path,
- ]
-
- audio_cropper_logger.info(f"Running ffmpeg command: {' '.join(cmd)}")
-
- process = await asyncio.create_subprocess_exec(
- *cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
- )
-
- stdout, stderr = await process.communicate()
- if stdout:
- audio_cropper_logger.debug(f"FFMPEG stdout: {stdout.decode()}")
-
- if process.returncode == 0:
- # Calculate cropped duration
- cropped_duration = sum(end - start for start, end in filtered_segments)
- audio_cropper_logger.info(
- f"Successfully cropped {original_path} -> {output_path} ({cropped_duration:.1f}s from {len(filtered_segments)} segments)"
- )
- return True
- else:
- error_msg = stderr.decode() if stderr else "Unknown ffmpeg error"
- audio_logger.error(f"ffmpeg failed for {original_path}: {error_msg}")
- return False
-
- except Exception as e:
- audio_logger.error(f"Error running ffmpeg on {original_path}: {e}")
- return False
-
-
-###############################################################################
-# UTILITY FUNCTIONS & HELPER CLASSES
-###############################################################################
-
-
-def _new_local_file_sink(file_path):
- """Create a properly configured LocalFileSink with all wave parameters set."""
- sink = LocalFileSink(
- file_path=file_path,
- sample_rate=int(OMI_SAMPLE_RATE),
- channels=int(OMI_CHANNELS),
- sample_width=int(OMI_SAMPLE_WIDTH),
- )
- return sink
-
-
-class ChunkRepo:
- """Async helpers for the audio_chunks collection."""
-
- def __init__(self, collection):
- self.col = collection
-
- async def create_chunk(
- self,
- *,
- audio_uuid,
- audio_path,
- client_id,
- timestamp,
- transcript=None,
- speakers_identified=None,
- ):
- doc = {
- "audio_uuid": audio_uuid,
- "audio_path": audio_path,
- "client_id": client_id,
- "timestamp": timestamp,
- "transcript": transcript or [], # List of conversation segments
- "speakers_identified": speakers_identified
- or [], # List of identified speakers
- }
- await self.col.insert_one(doc)
-
- async def add_transcript_segment(self, audio_uuid, transcript_segment):
- """Add a single transcript segment to the conversation."""
- await self.col.update_one(
- {"audio_uuid": audio_uuid}, {"$push": {"transcript": transcript_segment}}
- )
-
- async def add_speaker(self, audio_uuid, speaker_id):
- """Add a speaker to the speakers_identified list if not already present."""
- await self.col.update_one(
- {"audio_uuid": audio_uuid},
- {"$addToSet": {"speakers_identified": speaker_id}},
- )
-
- async def update_transcript(self, audio_uuid, full_transcript):
- """Update the entire transcript list (for compatibility)."""
- await self.col.update_one(
- {"audio_uuid": audio_uuid}, {"$set": {"transcript": full_transcript}}
- )
-
- async def update_segment_timing(
- self, audio_uuid, segment_index, start_time, end_time
- ):
- """Update timing information for a specific transcript segment."""
- await self.col.update_one(
- {"audio_uuid": audio_uuid},
- {
- "$set": {
- f"transcript.{segment_index}.start": start_time,
- f"transcript.{segment_index}.end": end_time,
- }
- },
- )
-
- async def update_segment_speaker(self, audio_uuid, segment_index, speaker_id):
- """Update the speaker for a specific transcript segment."""
- result = await self.col.update_one(
- {"audio_uuid": audio_uuid},
- {"$set": {f"transcript.{segment_index}.speaker": speaker_id}},
- )
- if result.modified_count > 0:
- audio_logger.info(
- f"Updated segment {segment_index} speaker to {speaker_id} for {audio_uuid}"
- )
- return result.modified_count > 0
-
- async def update_cropped_audio(
- self,
- audio_uuid: str,
- cropped_path: str,
- speech_segments: list[tuple[float, float]],
- ):
- """Update the chunk with cropped audio information."""
- cropped_duration = sum(end - start for start, end in speech_segments)
-
- result = await self.col.update_one(
- {"audio_uuid": audio_uuid},
- {
- "$set": {
- "cropped_audio_path": cropped_path,
- "speech_segments": [
- {"start": start, "end": end} for start, end in speech_segments
- ],
- "cropped_duration": cropped_duration,
- "cropped_at": time.time(),
- }
- },
- )
- if result.modified_count > 0:
- audio_logger.info(
- f"Updated cropped audio info for {audio_uuid}: {cropped_path}"
- )
- return result.modified_count > 0
-
-
-class TranscriptionManager:
- """Manages transcription using either Deepgram or offline ASR service."""
-
- def __init__(self, action_item_callback=None):
- self.client = None
- self._current_audio_uuid = None
- self._streaming = False
- self.use_deepgram = USE_DEEPGRAM
- self.deepgram_client = deepgram_client
- self._audio_buffer = [] # Buffer for Deepgram batch processing
- self.action_item_callback = action_item_callback # Callback to queue action items
-
- async def connect(self):
- """Establish connection to ASR service (only for offline ASR)."""
- if self.use_deepgram:
- audio_logger.info("Using Deepgram transcription - no connection needed")
- return
-
- try:
- self.client = AsyncTcpClient.from_uri(OFFLINE_ASR_TCP_URI)
- await self.client.connect()
- audio_logger.info(
- f"Connected to offline ASR service at {OFFLINE_ASR_TCP_URI}"
- )
- except Exception as e:
- audio_logger.error(f"Failed to connect to offline ASR service: {e}")
- self.client = None
- raise
-
- async def disconnect(self):
- """Cleanly disconnect from ASR service."""
- if self.use_deepgram:
- audio_logger.info("Using Deepgram - no disconnection needed")
- return
-
- if self.client:
- try:
- await self.client.disconnect()
- audio_logger.info("Disconnected from offline ASR service")
- except Exception as e:
- audio_logger.error(f"Error disconnecting from offline ASR service: {e}")
- finally:
- self.client = None
-
- async def transcribe_chunk(
- self, audio_uuid: str, chunk: AudioChunk, client_id: str
- ):
- """Transcribe a single chunk using either Deepgram or offline ASR."""
- if self.use_deepgram:
- await self._transcribe_chunk_deepgram(audio_uuid, chunk, client_id)
- else:
- await self._transcribe_chunk_offline(audio_uuid, chunk, client_id)
-
- async def _transcribe_chunk_deepgram(
- self, audio_uuid: str, chunk: AudioChunk, client_id: str
- ):
- """Transcribe using Deepgram API."""
- raise NotImplementedError(
- "Deepgram transcription is not yet implemented. Please use offline ASR by not setting DEEPGRAM_API_KEY."
- )
-
- async def _process_deepgram_buffer(self, audio_uuid: str, client_id: str):
- """Process buffered audio with Deepgram."""
- raise NotImplementedError("Deepgram transcription is not yet implemented.")
-
- async def _transcribe_chunk_offline(
- self, audio_uuid: str, chunk: AudioChunk, client_id: str
- ):
- """Transcribe using offline ASR service."""
- if not self.client:
- audio_logger.error(f"No ASR connection available for {audio_uuid}")
- # Track transcription failure
- metrics_collector = get_metrics_collector()
- metrics_collector.record_transcription_result(False)
- return
-
- # Track transcription request
- start_time = time.time()
- metrics_collector = get_metrics_collector()
- metrics_collector.record_transcription_request()
-
- try:
- if self._current_audio_uuid != audio_uuid:
- self._current_audio_uuid = audio_uuid
- audio_logger.info(f"New audio_uuid: {audio_uuid}")
- transcribe = Transcribe()
- await self.client.write_event(transcribe.event())
- audio_start = AudioStart(
- rate=chunk.rate,
- width=chunk.width,
- channels=chunk.channels,
- timestamp=chunk.timestamp,
- )
- await self.client.write_event(audio_start.event())
-
- # Send the audio chunk
- await self.client.write_event(chunk.event())
-
- # Read and process any available events (non-blocking)
- try:
- while True:
- event = await asyncio.wait_for(
- self.client.read_event(), timeout=0.001
- ) # quick poll; an event-driven read loop would be cleaner
- if event is None:
- break
-
- if Transcript.is_type(event.type):
- transcript_obj = Transcript.from_event(event)
- transcript_text = transcript_obj.text.strip()
-
- # Handle both Transcript and StreamingTranscript types
- # Check the 'final' attribute from the event data, not the reconstructed object
- is_final = event.data.get(
- "final", True
- ) # Default to True for standard Transcript
-
- # Only process final transcripts, ignore partial ones
- if not is_final:
- audio_logger.info(
- f"Ignoring partial transcript for {audio_uuid}: {transcript_text}"
- )
- continue
-
- if transcript_text:
- audio_logger.info(
- f"Transcript for {audio_uuid}: {transcript_text} (final: {is_final})"
- )
-
- # Track successful transcription with latency
- latency_ms = (time.time() - start_time) * 1000
- metrics_collector.record_transcription_result(
- True, latency_ms
- )
-
- # Create transcript segment with new format
- transcript_segment = {
- "speaker": f"speaker_{client_id}",
- "text": transcript_text,
- "start": 0.0,
- "end": 0.0,
- }
-
- # Store transcript segment in DB immediately
-
- await chunk_repo.add_transcript_segment(audio_uuid, transcript_segment)
-
- # Queue for action item processing using callback (async, non-blocking)
- if self.action_item_callback:
- await self.action_item_callback(transcript_text, client_id, audio_uuid)
-
- await chunk_repo.add_speaker(audio_uuid, f"speaker_{client_id}")
- audio_logger.info(f"Added transcript segment for {audio_uuid} to DB.")
-
- # Update transcript time for conversation timeout tracking
- if client_id in active_clients:
- active_clients[client_id].last_transcript_time = (
- time.time()
- )
- # Collect transcript for end-of-conversation memory processing
- active_clients[
- client_id
- ].conversation_transcripts.append(transcript_text)
- audio_logger.info(
- f"Added transcript to conversation collection: '{transcript_text}'"
- )
-
- elif VoiceStarted.is_type(event.type):
- audio_logger.info(
- f"VoiceStarted event received for {audio_uuid}"
- )
- current_time = time.time()
- if client_id in active_clients:
- active_clients[client_id].record_speech_start(
- audio_uuid, current_time
- )
- audio_logger.info(
- f"🎤 Voice started for {audio_uuid} at {current_time}"
- )
-
- elif VoiceStopped.is_type(event.type):
- audio_logger.info(
- f"VoiceStopped event received for {audio_uuid}"
- )
- current_time = time.time()
- if client_id in active_clients:
- active_clients[client_id].record_speech_end(
- audio_uuid, current_time
- )
- audio_logger.info(
- f"Voice stopped for {audio_uuid} at {current_time}"
- )
-
- except asyncio.TimeoutError:
- # No events available right now, that's fine
- pass
-
- except Exception as e:
- audio_logger.error(
- f"Error in offline transcribe_chunk for {audio_uuid}: {e}"
- )
- # Track transcription failure
- metrics_collector.record_transcription_result(False)
- # Attempt to reconnect on error
- await self._reconnect()
-
- async def _reconnect(self):
- """Attempt to reconnect to ASR service."""
- audio_logger.info("Attempting to reconnect to ASR service...")
-
- # Track reconnection attempt
- metrics_collector = get_metrics_collector()
- metrics_collector.record_service_reconnection("asr-service")
-
- await self.disconnect()
- await asyncio.sleep(2) # Brief delay before reconnecting
- try:
- await self.connect()
- except Exception as e:
- audio_logger.error(f"Reconnection failed: {e}")
-
-
-class ClientState:
- """Manages all state for a single client connection."""
-
- def __init__(self, client_id: str):
- self.client_id = client_id
- self.connected = True
-
- # Per-client queues
- self.chunk_queue = asyncio.Queue[Optional[AudioChunk]]()
- self.transcription_queue = asyncio.Queue[Tuple[Optional[str], Optional[AudioChunk]]]()
- self.memory_queue = asyncio.Queue[Tuple[Optional[str], Optional[str], Optional[str]]]() # (transcript, client_id, audio_uuid)
- self.action_item_queue = asyncio.Queue[Tuple[Optional[str], Optional[str], Optional[str]]]() # (transcript_text, client_id, audio_uuid)
-
- # Per-client file sink
- self.file_sink: Optional[LocalFileSink] = None
- self.current_audio_uuid: Optional[str] = None
-
- # Per-client transcription manager
- self.transcription_manager: Optional[TranscriptionManager] = None
-
- # Conversation timeout tracking
- self.last_transcript_time: Optional[float] = None
- self.conversation_start_time: float = time.time()
-
- # Speech segment tracking for audio cropping
- self.speech_segments: dict[str, list[tuple[float, float]]] = (
- {}
- ) # audio_uuid -> [(start, end), ...]
- self.current_speech_start: dict[str, Optional[float]] = (
- {}
- ) # audio_uuid -> start_time
-
- # Conversation transcript collection for end-of-conversation memory processing
- self.conversation_transcripts: list[str] = (
- []
- ) # Collect all transcripts for this conversation
-
- # Tasks for this client
- self.saver_task: Optional[asyncio.Task] = None
- self.transcription_task: Optional[asyncio.Task] = None
- self.memory_task: Optional[asyncio.Task] = None
- self.action_item_task: Optional[asyncio.Task] = None
-
- def record_speech_start(self, audio_uuid: str, timestamp: float):
- """Record the start of a speech segment."""
- self.current_speech_start[audio_uuid] = timestamp
- audio_logger.info(f"Recorded speech start for {audio_uuid}: {timestamp}")
-
- def record_speech_end(self, audio_uuid: str, timestamp: float):
- """Record the end of a speech segment."""
- if (
- audio_uuid in self.current_speech_start
- and self.current_speech_start[audio_uuid] is not None
- ):
- start_time = self.current_speech_start[audio_uuid]
- if start_time is not None: # Type guard
- if audio_uuid not in self.speech_segments:
- self.speech_segments[audio_uuid] = []
- self.speech_segments[audio_uuid].append((start_time, timestamp))
- self.current_speech_start[audio_uuid] = None
- duration = timestamp - start_time
- audio_logger.info(
- f"Recorded speech segment for {audio_uuid}: {start_time:.3f} -> {timestamp:.3f} (duration: {duration:.3f}s)"
- )
- else:
- audio_logger.warning(
- f"Speech end recorded for {audio_uuid} but no start time found"
- )
-
- async def start_processing(self):
- """Start the processing tasks for this client."""
- self.saver_task = asyncio.create_task(self._audio_saver())
- self.transcription_task = asyncio.create_task(self._transcription_processor())
- self.memory_task = asyncio.create_task(self._memory_processor())
- self.action_item_task = asyncio.create_task(self._action_item_processor())
- audio_logger.info(f"Started processing tasks for client {self.client_id}")
-
- async def disconnect(self):
- """Clean disconnect of client state."""
- if not self.connected:
- return
-
- self.connected = False
- audio_logger.info(f"Disconnecting client {self.client_id}")
-
- # Close current conversation with all processing before signaling shutdown
- await self._close_current_conversation()
-
- # Signal processors to stop
- await self.chunk_queue.put(None)
- await self.transcription_queue.put((None, None))
- await self.memory_queue.put((None, None, None))
- await self.action_item_queue.put((None, None, None))
-
- # Wait for tasks to complete
- if self.saver_task:
- await self.saver_task
- if self.transcription_task:
- await self.transcription_task
- if self.memory_task:
- await self.memory_task
- if self.action_item_task:
- await self.action_item_task
-
- # Clean up transcription manager
- if self.transcription_manager:
- await self.transcription_manager.disconnect()
- self.transcription_manager = None
-
- # Clean up any remaining speech segment tracking
- self.speech_segments.clear()
- self.current_speech_start.clear()
- self.conversation_transcripts.clear() # Clear conversation transcripts
-
- audio_logger.info(f"Client {self.client_id} disconnected and cleaned up")
-
- def _should_start_new_conversation(self) -> bool:
- """Check if we should start a new conversation based on timeout."""
- if self.last_transcript_time is None:
- return False # No transcript yet, keep current conversation
-
- current_time = time.time()
- time_since_last_transcript = current_time - self.last_transcript_time
- timeout_seconds = NEW_CONVERSATION_TIMEOUT_MINUTES * 60
-
- return time_since_last_transcript > timeout_seconds
-
- async def _close_current_conversation(self):
- """Close the current conversation with proper cleanup including audio cropping and speaker processing."""
- if self.file_sink:
- # Store current audio info before closing
- current_uuid = self.current_audio_uuid
- current_path = self.file_sink.file_path
-
- audio_logger.info(
- f"Closing conversation {current_uuid}, file: {current_path}"
- )
-
- # Process memory at end of conversation if we have transcripts
- if self.conversation_transcripts and current_uuid:
- full_conversation = " ".join(self.conversation_transcripts)
- audio_logger.info(
- f"💭 Processing memory for conversation {current_uuid} with {len(self.conversation_transcripts)} transcript segments"
- )
- audio_logger.info(
- f"💭 Individual transcripts: {self.conversation_transcripts}"
- )
- audio_logger.info(
- f"💭 Full conversation text: {full_conversation[:200]}..."
- ) # Log first 200 chars
-
- start_time = time.time()
- memories_created = []
- action_items_created = []
- processing_success = True
- error_message = None
-
- try:
- # Track memory storage request
- metrics_collector = get_metrics_collector()
- metrics_collector.record_memory_storage_request()
-
- # Add general memory
- memory_result = memory_service.add_memory(
- full_conversation, self.client_id, current_uuid
- )
- if memory_result:
- audio_logger.info(
- f"✅ Successfully added conversation memory for {current_uuid}"
- )
- metrics_collector.record_memory_storage_result(True)
-
- # Use the actual memory objects returned from mem0's add() method
- memory_results = memory_result.get("results", [])
- memories_created = []
-
- for mem in memory_results:
- memory_text = mem.get("memory", "Memory text unavailable")
- memory_id = mem.get("id", "unknown")
- event = mem.get("event", "UNKNOWN")
- memories_created.append(
- {"id": memory_id, "text": memory_text, "event": event}
- )
-
- audio_logger.info(
- f"Created {len(memories_created)} memory objects: {[m['event'] for m in memories_created]}"
- )
- else:
- audio_logger.error(
- f"❌ Failed to add conversation memory for {current_uuid}"
- )
- metrics_collector.record_memory_storage_result(False)
- processing_success = False
- error_message = "Failed to add general memory"
-
- except Exception as e:
- audio_logger.error(
- f"❌ Error processing memory and action items for {current_uuid}: {e}"
- )
- processing_success = False
- error_message = str(e)
-
- # Log debug information
- processing_time_ms = (time.time() - start_time) * 1000
- # memory_debug.log_memory_processing(
- # user_id=self.client_id,
- # audio_uuid=current_uuid,
- # transcript_text=full_conversation,
- # memories_created=memories_created,
- # action_items_created=action_items_created,
- # processing_success=processing_success,
- # error_message=error_message,
- # processing_time_ms=processing_time_ms,
- # )
- else:
- audio_logger.info(
- f"ℹ️ No transcripts to process for memory in conversation {current_uuid}"
- )
- # Log empty processing for debug
- if current_uuid:
- pass
- # memory_debug.log_memory_processing(
- # user_id=self.client_id,
- # audio_uuid=current_uuid,
- # transcript_text="",
- # memories_created=[],
- # action_items_created=[],
- # processing_success=True,
- # error_message="No transcripts available for processing",
- # processing_time_ms=0,
- # )
-
- await self.file_sink.close()
-
- # Track successful audio chunk save in metrics
- try:
- metrics_collector = get_metrics_collector()
- file_path = Path(current_path)
- if file_path.exists():
- # Use the configured segment length (SEGMENT_SECONDS) as the duration estimate
- duration_seconds = SEGMENT_SECONDS
-
- # Calculate voice activity if we have speech segments
- voice_activity_seconds = 0
- if current_uuid and current_uuid in self.speech_segments:
- for start, end in self.speech_segments[current_uuid]:
- voice_activity_seconds += end - start
-
- metrics_collector.record_audio_chunk_saved(
- duration_seconds, voice_activity_seconds
- )
- audio_logger.debug(
- f"Recorded audio chunk metrics: {duration_seconds}s total, {voice_activity_seconds}s voice activity"
- )
- else:
- metrics_collector.record_audio_chunk_failed()
- audio_logger.warning(
- f"Audio file not found after save: {current_path}"
- )
- except Exception as e:
- audio_logger.error(f"Error recording audio metrics: {e}")
-
- self.file_sink = None
-
- # Process audio cropping if we have speech segments
- if current_uuid and current_path:
- if current_uuid in self.speech_segments:
- speech_segments = self.speech_segments[current_uuid]
- audio_logger.info(
- f"🎯 Found {len(speech_segments)} speech segments for {current_uuid}: {speech_segments}"
- )
- if speech_segments: # Only crop if we have speech segments
- cropped_path = str(current_path).replace(".wav", "_cropped.wav")
-
- # Process in background - won't block
- asyncio.create_task(
- self._process_audio_cropping(
- f"{CHUNK_DIR}/{current_path}",
- speech_segments,
- f"{CHUNK_DIR}/{cropped_path}",
- current_uuid,
- )
- )
- audio_logger.info(
- f"✂️ Queued audio cropping for {current_path} with {len(speech_segments)} speech segments"
- )
- else:
- audio_logger.info(
- f"⚠️ Empty speech segments list found for {current_path}, skipping cropping"
- )
-
- # Clean up segments for this conversation
- del self.speech_segments[current_uuid]
- if current_uuid in self.current_speech_start:
- del self.current_speech_start[current_uuid]
- else:
- audio_logger.info(
- f"⚠️ No speech segments found for {current_path} (uuid: {current_uuid}), skipping cropping"
- )
-
- else:
- audio_logger.info(
- f"No active file sink to close for client {self.client_id}"
- )
-
- async def start_new_conversation(self):
- """Start a new conversation by closing current conversation and resetting state."""
- await self._close_current_conversation()
-
- # Reset conversation state
- self.current_audio_uuid = None
- self.conversation_start_time = time.time()
- self.last_transcript_time = None
- self.conversation_transcripts.clear() # Clear collected transcripts for new conversation
-
- audio_logger.info(
- f"Client {self.client_id}: Started new conversation due to {NEW_CONVERSATION_TIMEOUT_MINUTES}min timeout"
- )
-
- async def _process_audio_cropping(
- self,
- original_path: str,
- speech_segments: list[tuple[float, float]],
- output_path: str,
- audio_uuid: str,
- ):
- """Background task for audio cropping using ffmpeg."""
- await _process_audio_cropping_with_relative_timestamps(
- original_path, speech_segments, output_path, audio_uuid
- )
-
- async def _audio_saver(self):
- """Per-client audio saver consumer."""
- try:
- while self.connected:
- audio_chunk = await self.chunk_queue.get()
-
- if audio_chunk is None: # Disconnect signal
- break
-
- # Check if we should start a new conversation due to timeout
- if self._should_start_new_conversation():
- await self.start_new_conversation()
-
- if self.file_sink is None:
- # Create new file sink for this client
- self.current_audio_uuid = uuid.uuid4().hex
- timestamp = audio_chunk.timestamp or int(time.time())
- wav_filename = (
- f"{timestamp}_{self.client_id}_{self.current_audio_uuid}.wav"
- )
- audio_logger.info(
- f"Creating file sink with: rate={int(OMI_SAMPLE_RATE)}, channels={int(OMI_CHANNELS)}, width={int(OMI_SAMPLE_WIDTH)}"
- )
- self.file_sink = _new_local_file_sink(f"{CHUNK_DIR}/{wav_filename}")
- await self.file_sink.open()
-
- await chunk_repo.create_chunk(
- audio_uuid=self.current_audio_uuid,
- audio_path=wav_filename,
- client_id=self.client_id,
- timestamp=timestamp,
- )
-
- await self.file_sink.write(audio_chunk)
-
- # Queue for transcription
- await self.transcription_queue.put(
- (self.current_audio_uuid, audio_chunk)
- )
-
- except Exception as e:
- audio_logger.error(
- f"Error in audio saver for client {self.client_id}: {e}", exc_info=True
- )
- finally:
- # Close current conversation with all processing when audio saver ends
- await self._close_current_conversation()
-
- async def _transcription_processor(self):
- """Per-client transcription processor."""
- try:
- while self.connected:
- audio_uuid, chunk = await self.transcription_queue.get()
-
- if audio_uuid is None or chunk is None: # Disconnect signal
- break
-
- # Get or create transcription manager
- if self.transcription_manager is None:
- # Create callback function to queue action items
- async def action_item_callback(transcript_text, client_id, audio_uuid):
- await self.action_item_queue.put((transcript_text, client_id, audio_uuid))
-
- self.transcription_manager = TranscriptionManager(action_item_callback=action_item_callback)
- try:
- await self.transcription_manager.connect()
- except Exception as e:
- audio_logger.error(
- f"Failed to create transcription manager for client {self.client_id}: {e}"
- )
- continue
-
- # Process transcription
- try:
- await self.transcription_manager.transcribe_chunk(
- audio_uuid, chunk, self.client_id
- )
- except Exception as e:
- audio_logger.error(
- f"Error transcribing for client {self.client_id}: {e}"
- )
- # Recreate transcription manager on error
- if self.transcription_manager:
- await self.transcription_manager.disconnect()
- self.transcription_manager = None
-
- except Exception as e:
- audio_logger.error(
- f"Error in transcription processor for client {self.client_id}: {e}",
- exc_info=True,
- )
-
- async def _memory_processor(self):
- """Per-client memory processor - currently unused as memory processing happens at conversation end."""
- try:
- while self.connected:
- transcript, client_id, audio_uuid = await self.memory_queue.get()
-
- if (
- transcript is None or client_id is None or audio_uuid is None
- ): # Disconnect signal
- break
-
- # Memory processing now happens at conversation end, so this is effectively a no-op
- # Keeping the processor running to avoid breaking the queue system
- audio_logger.debug(
- "Memory processor received item but processing is now done at conversation end"
- )
-
- except Exception as e:
- audio_logger.error(
- f"Error in memory processor for client {self.client_id}: {e}",
- exc_info=True,
- )
-
- async def _action_item_processor(self):
- """
- Processes transcript segments from the per-client action item queue.
-
- For each transcript segment, this processor:
- - Checks if the special keyphrase 'Simon says' (case-insensitive, as a phrase) appears in the text.
- - If found, it normalizes each occurrence of the keyphrase to the canonical casing 'Simon says' and extracts action items from the normalized text.
- - Logs the detection and extraction process for this special case.
- - If the keyphrase is not found, it extracts action items from the original transcript text.
- - All extraction is performed using the action_items_service.
- - Logs the number of action items extracted or any errors encountered.
- """
- try:
- while self.connected:
- transcript_text, client_id, audio_uuid = await self.action_item_queue.get()
-
- if transcript_text is None or client_id is None or audio_uuid is None: # Disconnect signal
- break
-
- # Check for the special keyphrase 'Simon says' (case-insensitive, as a whole phrase)
- keyphrase_pattern = re.compile(r'\bSimon says\b', re.IGNORECASE)
- if keyphrase_pattern.search(transcript_text):
- # Normalize every occurrence of the keyphrase to its canonical casing
- modified_text = keyphrase_pattern.sub('Simon says', transcript_text)
- audio_logger.info(f"'Simon says' keyphrase detected in transcript for {audio_uuid}. Extracting action items from: '{modified_text.strip()}'")
- try:
- action_item_count = await action_items_service.extract_and_store_action_items(
- modified_text.strip(), client_id, audio_uuid
- )
- if action_item_count > 0:
- audio_logger.info(f"🎯 Extracted {action_item_count} action items from 'Simon says' transcript segment for {audio_uuid}")
- else:
- audio_logger.debug(f"ℹ️ No action items found in 'Simon says' transcript segment for {audio_uuid}")
- except Exception as e:
- audio_logger.error(f"❌ Error processing 'Simon says' action items for transcript segment in {audio_uuid}: {e}")
- continue # Skip the normal extraction for this case
-
- # Normal extraction path (per the docstring): no keyphrase, process the original transcript text
- try:
- action_item_count = await action_items_service.extract_and_store_action_items(
- transcript_text.strip(), client_id, audio_uuid
- )
- audio_logger.debug(f"Extracted {action_item_count} action items from transcript segment for {audio_uuid}")
- except Exception as e:
- audio_logger.error(f"❌ Error extracting action items for transcript segment in {audio_uuid}: {e}")
-
- except Exception as e:
- audio_logger.error(f"Error in action item processor for client {self.client_id}: {e}", exc_info=True)
-
-
-# Initialize repository and global state
-chunk_repo = ChunkRepo(chunks_col)
-active_clients: dict[str, ClientState] = {}
-
-
-async def create_client_state(client_id: str) -> ClientState:
- """Create and register a new client state."""
- client_state = ClientState(client_id)
- active_clients[client_id] = client_state
- await client_state.start_processing()
-
- # Track client connection in metrics
- metrics_collector = get_metrics_collector()
- metrics_collector.record_client_connection(client_id)
-
- return client_state
-
-
-async def cleanup_client_state(client_id: str):
- """Clean up and remove client state."""
- if client_id in active_clients:
- client_state = active_clients[client_id]
- await client_state.disconnect()
- del active_clients[client_id]
-
- # Track client disconnection in metrics
- metrics_collector = get_metrics_collector()
- metrics_collector.record_client_disconnection(client_id)
-
-
-###############################################################################
-# CORE APPLICATION LOGIC
-###############################################################################
-
-
-@asynccontextmanager
-async def lifespan(app: FastAPI):
- """Manage application lifespan events."""
- # Startup
- audio_logger.info("Starting application...")
-
- # Start metrics collection
- await start_metrics_collection()
- audio_logger.info("Metrics collection started")
-
- audio_logger.info(
- "Application ready - clients will have individual processing pipelines."
- )
-
- try:
- yield
- finally:
- # Shutdown
- audio_logger.info("Shutting down application...")
-
- # Clean up all active clients
- for client_id in list(active_clients.keys()):
- await cleanup_client_state(client_id)
-
- # Stop metrics collection and save final report
- await stop_metrics_collection()
- audio_logger.info("Metrics collection stopped")
-
- # Shutdown memory service and speaker service
- shutdown_memory_service()
- audio_logger.info("Memory and speaker services shut down.")
-
- audio_logger.info("Shutdown complete.")
-
-
-# FastAPI Application
-app = FastAPI(lifespan=lifespan)
-app.mount("/audio", StaticFiles(directory=CHUNK_DIR), name="audio")
-
-
-@app.websocket("/ws")
-async def ws_endpoint(ws: WebSocket, user_id: Optional[str] = Query(None)):
- """Accepts WebSocket connections, decodes Opus audio, and processes per-client."""
- await ws.accept()
-
- # Use user_id if provided, otherwise generate a random client_id
- client_id = user_id if user_id else f"client_{str(uuid.uuid4())}"
- audio_logger.info(f"WebSocket connection accepted - Client: {client_id}, User ID: {user_id}")
-
- decoder = OmiOpusDecoder()
- _decode_packet = partial(decoder.decode_packet, strip_header=False)
-
- # Create client state and start processing
- client_state = await create_client_state(client_id)
-
- try:
- packet_count = 0
- total_bytes = 0
- while True:
- packet = await ws.receive_bytes()
- packet_count += 1
- total_bytes += len(packet)
-
- start_time = time.time()
- loop = asyncio.get_running_loop()
- pcm_data = await loop.run_in_executor(
- _DEC_IO_EXECUTOR, _decode_packet, packet
- )
- decode_time = time.time() - start_time
-
- if pcm_data:
- audio_logger.debug(f"🎵 Decoded packet #{packet_count}: {len(packet)} bytes -> {len(pcm_data)} PCM bytes (took {decode_time:.3f}s)")
- chunk = AudioChunk(
- audio=pcm_data,
- rate=OMI_SAMPLE_RATE,
- width=OMI_SAMPLE_WIDTH,
- channels=OMI_CHANNELS,
- timestamp=int(time.time()),
- )
- await client_state.chunk_queue.put(chunk)
-
- # Log every 1000th packet to avoid spam
- if packet_count % 1000 == 0:
- audio_logger.info(f"Processed {packet_count} packets ({total_bytes} bytes total) for client {client_id}")
-
- # Track audio chunk received in metrics
- metrics_collector = get_metrics_collector()
- metrics_collector.record_audio_chunk_received(client_id)
- metrics_collector.record_client_activity(client_id)
-
- except WebSocketDisconnect:
- audio_logger.info(f"WebSocket disconnected - Client: {client_id}, Packets: {packet_count}, Total bytes: {total_bytes}")
- except Exception as e:
- audio_logger.error(f"❌ WebSocket error for client {client_id}: {e}", exc_info=True)
- finally:
- # Clean up client state
- await cleanup_client_state(client_id)
-
-
-@app.websocket("/ws_pcm")
-async def ws_endpoint_pcm(ws: WebSocket, user_id: Optional[str] = Query(None)):
- """Accepts WebSocket connections, processes PCM audio per-client."""
- await ws.accept()
-
- # Use user_id if provided, otherwise generate a random client_id
- client_id = user_id if user_id else f"client_{uuid.uuid4().hex[:8]}"
- audio_logger.info(f"PCM WebSocket connection accepted - Client: {client_id}, User ID: {user_id}")
-
- # Create client state and start processing
- client_state = await create_client_state(client_id)
-
- try:
- packet_count = 0
- total_bytes = 0
- while True:
- packet = await ws.receive_bytes()
- packet_count += 1
- total_bytes += len(packet)
-
- if packet:
- audio_logger.debug(f"🎵 Received PCM packet #{packet_count}: {len(packet)} bytes")
- chunk = AudioChunk(
- audio=packet,
- rate=16000,
- width=2,
- channels=1,
- timestamp=int(time.time()),
- )
- await client_state.chunk_queue.put(chunk)
-
- # Log every 1000th packet to avoid spam
- if packet_count % 1000 == 0:
- audio_logger.info(f"Processed {packet_count} PCM packets ({total_bytes} bytes total) for client {client_id}")
-
- # Track audio chunk received in metrics
- metrics_collector = get_metrics_collector()
- metrics_collector.record_audio_chunk_received(client_id)
- metrics_collector.record_client_activity(client_id)
- except WebSocketDisconnect:
- audio_logger.info(f"PCM WebSocket disconnected - Client: {client_id}, Packets: {packet_count}, Total bytes: {total_bytes}")
- except Exception as e:
- audio_logger.error(f"❌ PCM WebSocket error for client {client_id}: {e}", exc_info=True)
- finally:
- # Clean up client state
- await cleanup_client_state(client_id)
-
-
-@app.get("/api/conversations")
-async def get_conversations():
- """Get all conversations grouped by client_id."""
- try:
- # Get all audio chunks and group by client_id
- cursor = chunks_col.find({}).sort("timestamp", -1)
- conversations = {}
-
- async for chunk in cursor:
- client_id = chunk.get("client_id", "unknown")
- if client_id not in conversations:
- conversations[client_id] = []
-
- conversations[client_id].append(
- {
- "audio_uuid": chunk["audio_uuid"],
- "audio_path": chunk["audio_path"],
- "cropped_audio_path": chunk.get("cropped_audio_path"),
- "timestamp": chunk["timestamp"],
- "transcript": chunk.get("transcript", []),
- "speakers_identified": chunk.get("speakers_identified", []),
- "speech_segments": chunk.get("speech_segments", []),
- "cropped_duration": chunk.get("cropped_duration"),
- }
- )
-
- return {"conversations": conversations}
- except Exception as e:
- audio_logger.error(f"Error getting conversations: {e}")
- return JSONResponse(status_code=500, content={"error": str(e)})
-
-
-@app.get("/api/conversations/{audio_uuid}/cropped")
-async def get_cropped_audio_info(audio_uuid: str):
- """Get cropped audio information for a specific conversation."""
- try:
- chunk = await chunks_col.find_one({"audio_uuid": audio_uuid})
- if not chunk:
- return JSONResponse(
- status_code=404, content={"error": "Conversation not found"}
- )
-
- return {
- "audio_uuid": audio_uuid,
- "original_audio_path": chunk["audio_path"],
- "cropped_audio_path": chunk.get("cropped_audio_path"),
- "speech_segments": chunk.get("speech_segments", []),
- "cropped_duration": chunk.get("cropped_duration"),
- "cropped_at": chunk.get("cropped_at"),
- "has_cropped_version": bool(chunk.get("cropped_audio_path")),
- }
- except Exception as e:
- audio_logger.error(f"Error getting cropped audio info: {e}")
- return JSONResponse(status_code=500, content={"error": str(e)})
-
-
-@app.post("/api/conversations/{audio_uuid}/reprocess")
-async def reprocess_audio_cropping(audio_uuid: str):
- """Trigger reprocessing of audio cropping for a specific conversation."""
- try:
- chunk = await chunks_col.find_one({"audio_uuid": audio_uuid})
- if not chunk:
- return JSONResponse(
- status_code=404, content={"error": "Conversation not found"}
- )
-
- original_path = f"{CHUNK_DIR}/{chunk['audio_path']}"
- if not Path(original_path).exists():
- return JSONResponse(
- status_code=404, content={"error": "Original audio file not found"}
- )
-
- # Check if we have speech segments
- speech_segments = chunk.get("speech_segments", [])
- if not speech_segments:
- return JSONResponse(
- status_code=400,
- content={"error": "No speech segments available for cropping"},
- )
-
- # Convert speech segments from dict format to tuple format
- speech_segments_tuples = [(seg["start"], seg["end"]) for seg in speech_segments]
-
- cropped_filename = chunk["audio_path"].replace(".wav", "_cropped.wav")
- cropped_path = f"{CHUNK_DIR}/{cropped_filename}"
-
- # Process in background using shared logic
- async def reprocess_task():
- audio_logger.info(f"Starting reprocess for {audio_uuid}")
- await _process_audio_cropping_with_relative_timestamps(
- original_path, speech_segments_tuples, cropped_path, audio_uuid
- )
-
- asyncio.create_task(reprocess_task())
-
- return {"message": "Reprocessing started", "audio_uuid": audio_uuid}
- except Exception as e:
- audio_logger.error(f"Error reprocessing audio: {e}")
- return JSONResponse(status_code=500, content={"error": str(e)})
-
-
-@app.get("/api/users")
-async def get_users():
- """Retrieves all users from the database."""
- try:
- cursor = users_col.find()
- users = []
- for doc in await cursor.to_list(length=100):
- doc["_id"] = str(doc["_id"]) # Convert ObjectId to string
- users.append(doc)
- return JSONResponse(content=users)
- except Exception as e:
- audio_logger.error(f"Error fetching users: {e}", exc_info=True)
- return JSONResponse(
- status_code=500, content={"message": "Error fetching users"}
- )
-
-
-@app.post("/api/create_user")
-async def create_user(user_id: str):
- """Creates a new user in the database."""
- try:
- # Check if user already exists
- existing_user = await users_col.find_one({"user_id": user_id})
- if existing_user:
- return JSONResponse(
- status_code=409, content={"message": f"User {user_id} already exists"}
- )
-
- # Create new user
- result = await users_col.insert_one({"user_id": user_id})
- return JSONResponse(
- status_code=201,
- content={
- "message": f"User {user_id} created successfully",
- "id": str(result.inserted_id),
- },
- )
- except Exception as e:
- audio_logger.error(f"Error creating user: {e}", exc_info=True)
- return JSONResponse(status_code=500, content={"message": "Error creating user"})
-
-
-@app.delete("/api/delete_user")
-async def delete_user(
- user_id: str, delete_conversations: bool = False, delete_memories: bool = False
-):
- """Deletes a user from the database with optional data cleanup."""
- try:
- # Check if user exists
- existing_user = await users_col.find_one({"user_id": user_id})
- if not existing_user:
- return JSONResponse(
- status_code=404, content={"message": f"User {user_id} not found"}
- )
-
- deleted_data = {}
-
- # Delete user from users collection
- user_result = await users_col.delete_one({"user_id": user_id})
- deleted_data["user_deleted"] = user_result.deleted_count > 0
-
- if delete_conversations:
- # Delete all conversations (audio chunks) for this user
- conversations_result = await chunks_col.delete_many({"client_id": user_id})
- deleted_data["conversations_deleted"] = conversations_result.deleted_count
-
- if delete_memories:
- # Delete all memories for this user using the memory service
- try:
- memory_count = memory_service.delete_all_user_memories(user_id)
- deleted_data["memories_deleted"] = memory_count
- except Exception as mem_error:
- audio_logger.error(
- f"Error deleting memories for user {user_id}: {mem_error}"
- )
- deleted_data["memories_deleted"] = 0
- deleted_data["memory_deletion_error"] = str(mem_error)
-
- # Build message based on what was deleted
- message = f"User {user_id} deleted successfully"
- deleted_items = []
- if delete_conversations and deleted_data.get("conversations_deleted", 0) > 0:
- deleted_items.append(
- f"{deleted_data['conversations_deleted']} conversations"
- )
- if delete_memories and deleted_data.get("memories_deleted", 0) > 0:
- deleted_items.append(f"{deleted_data['memories_deleted']} memories")
-
- if deleted_items:
- message += f" along with {' and '.join(deleted_items)}"
-
- return JSONResponse(
- status_code=200, content={"message": message, "deleted_data": deleted_data}
- )
- except Exception as e:
- audio_logger.error(f"Error deleting user: {e}", exc_info=True)
- return JSONResponse(status_code=500, content={"message": "Error deleting user"})
-
-
-@app.get("/api/memories")
-async def get_memories(user_id: str, limit: int = 100):
- """Retrieves memories from the mem0 store with optional filtering."""
- try:
- all_memories = memory_service.get_all_memories(user_id=user_id, limit=limit)
- return JSONResponse(content=all_memories)
- except Exception as e:
- audio_logger.error(f"Error fetching memories: {e}", exc_info=True)
- return JSONResponse(
- status_code=500, content={"message": "Error fetching memories"}
- )
-
-
-@app.get("/api/memories/search")
-async def search_memories(user_id: str, query: str, limit: int = 10):
- """Search memories using semantic similarity for better retrieval."""
- try:
- relevant_memories = memory_service.search_memories(
- query=query, user_id=user_id, limit=limit
- )
- return JSONResponse(content=relevant_memories)
- except Exception as e:
- audio_logger.error(f"Error searching memories: {e}", exc_info=True)
- return JSONResponse(
- status_code=500, content={"message": "Error searching memories"}
- )
-
-
-@app.delete("/api/memories/{memory_id}")
-async def delete_memory(memory_id: str):
- """Delete a specific memory by ID."""
- try:
- memory_service.delete_memory(memory_id=memory_id)
- return JSONResponse(
- content={"message": f"Memory {memory_id} deleted successfully"}
- )
- except Exception as e:
- audio_logger.error(f"Error deleting memory {memory_id}: {e}", exc_info=True)
- return JSONResponse(
- status_code=500, content={"message": "Error deleting memory"}
- )
-
-
-@app.post("/api/conversations/{audio_uuid}/speakers")
-async def add_speaker_to_conversation(audio_uuid: str, speaker_id: str):
- """Add a speaker to the speakers_identified list for a conversation."""
- try:
- await chunk_repo.add_speaker(audio_uuid, speaker_id)
- return JSONResponse(
- content={
- "message": f"Speaker {speaker_id} added to conversation {audio_uuid}"
- }
- )
- except Exception as e:
- audio_logger.error(f"Error adding speaker: {e}", exc_info=True)
- return JSONResponse(
- status_code=500, content={"message": "Error adding speaker"}
- )
-
-
-@app.put("/api/conversations/{audio_uuid}/transcript/{segment_index}")
-async def update_transcript_segment(
- audio_uuid: str,
- segment_index: int,
- speaker_id: Optional[str] = None,
- start_time: Optional[float] = None,
- end_time: Optional[float] = None,
-):
- """Update a specific transcript segment with speaker or timing information."""
- try:
- update_doc = {}
-
- if speaker_id is not None:
- update_doc[f"transcript.{segment_index}.speaker"] = speaker_id
- # Also add to speakers_identified if not already present
- await chunk_repo.add_speaker(audio_uuid, speaker_id)
-
- if start_time is not None:
- update_doc[f"transcript.{segment_index}.start"] = start_time
-
- if end_time is not None:
- update_doc[f"transcript.{segment_index}.end"] = end_time
-
- if not update_doc:
- return JSONResponse(
- status_code=400, content={"error": "No update parameters provided"}
- )
-
- result = await chunks_col.update_one(
- {"audio_uuid": audio_uuid}, {"$set": update_doc}
- )
-
- if result.matched_count == 0:
- return JSONResponse(
- status_code=404, content={"error": "Conversation not found"}
- )
-
- return JSONResponse(
- content={"message": "Transcript segment updated successfully"}
- )
-
- except Exception as e:
- audio_logger.error(f"Error updating transcript segment: {e}")
- return JSONResponse(status_code=500, content={"error": "Internal server error"})
-
-
-# class SpeakerEnrollmentRequest(BaseModel):
-# speaker_id: str
-# speaker_name: str
-# audio_file_path: str
-# start_time: Optional[float] = None
-# end_time: Optional[float] = None
-
-
-# class SpeakerIdentificationRequest(BaseModel):
-# audio_file_path: str
-# start_time: Optional[float] = None
-# end_time: Optional[float] = None
-
-
-# class ActionItemUpdateRequest(BaseModel):
-# status: str # "open", "in_progress", "completed", "cancelled"
-
-
-# class ActionItemCreateRequest(BaseModel):
-# description: str
-# assignee: Optional[str] = "unassigned"
-# due_date: Optional[str] = "not_specified"
-# priority: Optional[str] = "medium"
-# context: Optional[str] = ""
-
-
-@app.get("/health")
-async def health_check():
- """Comprehensive health check for all services."""
- health_status = {
- "status": "healthy",
- "timestamp": int(time.time()),
- "services": {},
- "config": {
- "mongodb_uri": MONGODB_URI,
- "ollama_url": OLLAMA_BASE_URL,
- "qdrant_url": f"http://{QDRANT_BASE_URL}:6333",
- "asr_uri": OFFLINE_ASR_TCP_URI,
- "chunk_dir": str(CHUNK_DIR),
- "active_clients": len(active_clients),
- "new_conversation_timeout_minutes": NEW_CONVERSATION_TIMEOUT_MINUTES,
- "action_items_enabled": True,
- "audio_cropping_enabled": AUDIO_CROPPING_ENABLED,
- },
- }
-
- overall_healthy = True
- critical_services_healthy = True
-
- # Check MongoDB (critical service)
- try:
- await asyncio.wait_for(mongo_client.admin.command("ping"), timeout=5.0)
- health_status["services"]["mongodb"] = {
- "status": "✅ Connected",
- "healthy": True,
- "critical": True,
- }
- except asyncio.TimeoutError:
- health_status["services"]["mongodb"] = {
- "status": "❌ Connection Timeout (5s)",
- "healthy": False,
- "critical": True,
- }
- overall_healthy = False
- critical_services_healthy = False
- except Exception as e:
- health_status["services"]["mongodb"] = {
- "status": f"❌ Connection Failed: {str(e)}",
- "healthy": False,
- "critical": True,
- }
- overall_healthy = False
- critical_services_healthy = False
-
- # Check Ollama (non-critical service - may not be running)
- try:
- # Run in executor to avoid blocking the main thread
- loop = asyncio.get_running_loop()
- models = await asyncio.wait_for(
- loop.run_in_executor(None, ollama_client.list), timeout=8.0
- )
- model_count = len(models.get("models", []))
- health_status["services"]["ollama"] = {
- "status": "✅ Connected",
- "healthy": True,
- "models": model_count,
- "critical": False,
- }
- except asyncio.TimeoutError:
- health_status["services"]["ollama"] = {
- "status": "⚠️ Connection Timeout (8s) - Service may not be running",
- "healthy": False,
- "critical": False,
- }
- overall_healthy = False
- except Exception as e:
- health_status["services"]["ollama"] = {
- "status": f"⚠️ Connection Failed: {str(e)} - Service may not be running",
- "healthy": False,
- "critical": False,
- }
- overall_healthy = False
-
- # Check mem0 (depends on Ollama and Qdrant)
- try:
- # Test memory service connection with timeout
- test_success = memory_service.test_connection()
- if test_success:
- health_status["services"]["mem0"] = {
- "status": "✅ Connected",
- "healthy": True,
- "critical": False,
- }
- else:
- health_status["services"]["mem0"] = {
- "status": "⚠️ Connection Test Failed",
- "healthy": False,
- "critical": False,
- }
- overall_healthy = False
- except asyncio.TimeoutError:
- health_status["services"]["mem0"] = {
- "status": "⚠️ Connection Test Timeout (10s) - Depends on Ollama/Qdrant",
- "healthy": False,
- "critical": False,
- }
- overall_healthy = False
- except Exception as e:
- health_status["services"]["mem0"] = {
- "status": f"⚠️ Connection Test Failed: {str(e)} - Check Ollama/Qdrant services",
- "healthy": False,
- "critical": False,
- }
- overall_healthy = False
-
- # Check ASR service (non-critical - may be external)
- try:
- test_client = AsyncTcpClient.from_uri(OFFLINE_ASR_TCP_URI)
- await asyncio.wait_for(test_client.connect(), timeout=5.0)
- await test_client.disconnect()
- health_status["services"]["asr"] = {
- "status": "✅ Connected",
- "healthy": True,
- "uri": OFFLINE_ASR_TCP_URI,
- "critical": False,
- }
- except asyncio.TimeoutError:
- health_status["services"]["asr"] = {
- "status": "⚠️ Connection Timeout (5s) - Check external ASR service",
- "healthy": False,
- "uri": OFFLINE_ASR_TCP_URI,
- "critical": False,
- }
- overall_healthy = False
- except Exception as e:
- health_status["services"]["asr"] = {
- "status": f"⚠️ Connection Failed: {str(e)} - Check external ASR service",
- "healthy": False,
- "uri": OFFLINE_ASR_TCP_URI,
- "critical": False,
- }
- overall_healthy = False
-
- # Track health check results in metrics
- try:
- metrics_collector = get_metrics_collector()
- for service_name, service_info in health_status["services"].items():
- success = service_info.get("healthy", False)
- failure_reason = (
- None if success else service_info.get("status", "Unknown failure")
- )
- metrics_collector.record_service_health_check(
- service_name, success, failure_reason
- )
-
- # Also track overall system health
- metrics_collector.record_service_health_check(
- "friend-backend", overall_healthy, "System health check"
- )
- except Exception as e:
- audio_logger.error(f"Failed to record health check metrics: {e}")
-
- # Set overall status
- health_status["overall_healthy"] = overall_healthy
- health_status["critical_services_healthy"] = critical_services_healthy
-
- if not critical_services_healthy:
- health_status["status"] = "critical"
- elif not overall_healthy:
- health_status["status"] = "degraded"
- else:
- health_status["status"] = "healthy"
-
- # Add helpful messages
- if not overall_healthy:
- messages = []
- if not critical_services_healthy:
- messages.append(
- "Critical services (MongoDB) are unavailable - core functionality will not work"
- )
-
- unhealthy_optional = [
- name
- for name, service in health_status["services"].items()
- if not service["healthy"] and not service.get("critical", True)
- ]
- if unhealthy_optional:
- messages.append(
- f"Optional services unavailable: {', '.join(unhealthy_optional)}"
- )
-
- health_status["message"] = "; ".join(messages)
-
- return JSONResponse(content=health_status, status_code=200)
-
-
-@app.get("/readiness")
-async def readiness_check():
- """Simple readiness check for container orchestration."""
- return JSONResponse(
- content={"status": "ready", "timestamp": int(time.time())}, status_code=200
- )
-
-
-@app.post("/api/close_conversation")
-async def close_current_conversation(client_id: str):
- """Close the current conversation for a specific client."""
- if client_id not in active_clients:
- return JSONResponse(
- content={"error": f"Client '{client_id}' not found or not connected"},
- status_code=404,
- )
-
- client_state = active_clients[client_id]
- if not client_state.connected:
- return JSONResponse(
- content={"error": f"Client '{client_id}' is not connected"}, status_code=400
- )
-
- try:
- # Close the current conversation
- await client_state._close_current_conversation()
-
- # Reset conversation state but keep client connected
- client_state.current_audio_uuid = None
- client_state.conversation_start_time = time.time()
- client_state.last_transcript_time = None
-
- logger.info(f"Manually closed conversation for client {client_id}")
-
- return JSONResponse(
- content={
- "message": f"Successfully closed current conversation for client '{client_id}'",
- "client_id": client_id,
- "timestamp": int(time.time()),
- }
- )
-
- except Exception as e:
- logger.error(f"Error closing conversation for client {client_id}: {e}")
- return JSONResponse(
- content={"error": f"Failed to close conversation: {str(e)}"},
- status_code=500,
- )
-
-
-@app.get("/api/active_clients")
-async def get_active_clients():
- """Get list of currently active/connected clients."""
- client_info = {}
-
- for client_id, client_state in active_clients.items():
- client_info[client_id] = {
- "connected": client_state.connected,
- "current_audio_uuid": client_state.current_audio_uuid,
- "conversation_start_time": client_state.conversation_start_time,
- "last_transcript_time": client_state.last_transcript_time,
- "has_active_conversation": client_state.current_audio_uuid is not None,
- }
-
- return JSONResponse(
- content={"active_clients_count": len(active_clients), "clients": client_info}
- )
-
-
-@app.get("/api/debug/speech_segments")
-async def debug_speech_segments():
- """Debug endpoint to check current speech segments for all active clients."""
- debug_info = {
- "active_clients": len(active_clients),
- "audio_cropping_enabled": AUDIO_CROPPING_ENABLED,
- "min_speech_duration": MIN_SPEECH_SEGMENT_DURATION,
- "cropping_padding": CROPPING_CONTEXT_PADDING,
- "clients": {},
- }
-
- for client_id, client_state in active_clients.items():
- debug_info["clients"][client_id] = {
- "current_audio_uuid": client_state.current_audio_uuid,
- "speech_segments": {
- uuid: segments
- for uuid, segments in client_state.speech_segments.items()
- },
- "current_speech_start": dict(client_state.current_speech_start),
- "connected": client_state.connected,
- "last_transcript_time": client_state.last_transcript_time,
- }
-
- return JSONResponse(content=debug_info)
-
-
-@app.get("/api/debug/memory-processing")
-async def debug_memory_processing(
- user_id: Optional[str] = None,
- limit: int = 50,
- since_timestamp: Optional[int] = None,
-):
- """Get debug information about memory processing operations."""
- try:
- # debug_entries = memory_debug.get_debug_entries(
- # user_id=user_id, limit=limit, since_timestamp=since_timestamp
- # )
-
- pass
- # return JSONResponse(
- # content={
- # "debug_entries": debug_entries,
- # "total_entries": len(debug_entries),
- # "user_filter": user_id,
- # "limit": limit,
- # "since_timestamp": since_timestamp,
- # }
- # )
-
- except Exception as e:
- audio_logger.error(f"Error getting memory processing debug info: {e}")
- return JSONResponse(
- status_code=500, content={"error": "Failed to get debug information"}
- )
-
-
-@app.get("/api/debug/memory-processing/stats")
-async def debug_memory_processing_stats(user_id: Optional[str] = None):
- """Get statistics about memory processing operations."""
- try:
- # stats = memory_debug.get_debug_stats(user_id=user_id)
-
- pass
- # return JSONResponse(content={"user_id": user_id, "statistics": stats})
-
- except Exception as e:
- audio_logger.error(f"Error getting memory processing stats: {e}")
- return JSONResponse(
- status_code=500, content={"error": "Failed to get debug statistics"}
- )
-
-
-@app.get("/api/metrics")
-async def get_current_metrics():
- """Get current metrics summary for monitoring dashboard."""
- try:
- metrics_collector = get_metrics_collector()
- metrics_summary = metrics_collector.get_current_metrics_summary()
- return metrics_summary
- except Exception as e:
- audio_logger.error(f"Error getting current metrics: {e}")
- return JSONResponse(status_code=500, content={"error": str(e)})
-
-
-###############################################################################
-# ENTRYPOINT
-###############################################################################
-
-if __name__ == "__main__":
- import uvicorn
-
- host = os.getenv("HOST", "0.0.0.0")
- port = int(os.getenv("PORT", "8000"))
- audio_logger.info("Starting Omi unified service at ws://%s:%s/ws", host, port)
- uvicorn.run("main:app", host=host, port=port, reload=False)
diff --git a/backends/advanced-backend/src/memory/memory_service.py b/backends/advanced-backend/src/memory/memory_service.py
deleted file mode 100644
index f2a3f12f..00000000
--- a/backends/advanced-backend/src/memory/memory_service.py
+++ /dev/null
@@ -1,712 +0,0 @@
-"""Memory service implementation for Omi-audio service.
-
-This module provides:
-- Memory configuration and initialization
-- Memory operations (add, get, search, delete)
-- Action item extraction and management
-"""
-
-import logging
-import os
-import time
-import json
-from typing import Optional, List, Dict, Any
-
-from mem0 import Memory
-import ollama
-
-# Configure Mem0 telemetry based on environment variable
-# Set default to False for privacy unless explicitly enabled
-if not os.getenv("MEM0_TELEMETRY"):
- os.environ["MEM0_TELEMETRY"] = "False"
-
-# Logger for memory operations
-memory_logger = logging.getLogger("memory_service")
-
-# Memory configuration
-MEM0_ORGANIZATION_ID = os.getenv("MEM0_ORGANIZATION_ID", "friend-lite-org")
-MEM0_PROJECT_ID = os.getenv("MEM0_PROJECT_ID", "audio-conversations")
-MEM0_APP_ID = os.getenv("MEM0_APP_ID", "omi-backend")
-
-# Ollama & Qdrant Configuration (these should match main config)
-OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://ollama:11434")
-QDRANT_BASE_URL = os.getenv("QDRANT_BASE_URL", "qdrant")
-
-# Global memory configuration
-MEM0_CONFIG = {
- "llm": {
- "provider": "ollama",
- "config": {
- "model": "llama3.1:latest",
- "ollama_base_url": OLLAMA_BASE_URL,
- "temperature": 0,
- "max_tokens": 2000,
- },
- },
- "embedder": {
- "provider": "ollama",
- "config": {
- "model": "nomic-embed-text:latest",
- "embedding_dims": 768,
- "ollama_base_url": OLLAMA_BASE_URL,
- },
- },
- "vector_store": {
- "provider": "qdrant",
- "config": {
- "collection_name": "omi_memories",
- "embedding_model_dims": 768,
- "host": QDRANT_BASE_URL,
- "port": 6333,
- },
- },
- "custom_prompt": "Extract action items from the conversation. Don't extract likes and dislikes.",
-}
-
-# Action item extraction configuration
-ACTION_ITEM_EXTRACTION_PROMPT = """
-You are an AI assistant specialized in extracting actionable tasks from meeting transcripts and conversations.
-
-Analyze the following conversation transcript and extract all action items, tasks, and commitments mentioned.
-
-For each action item you find, return a JSON object with these fields:
-- "description": A clear, specific description of the task
-- "assignee": The person responsible (use "unassigned" if not specified)
-- "due_date": The deadline if mentioned (use "not_specified" if not mentioned)
-- "priority": The urgency level ("high", "medium", "low", or "not_specified")
-- "status": Always set to "open" for new items
-- "context": A brief context about when/why this was mentioned
-
-Return ONLY a valid JSON array of action items. If no action items are found, return an empty array [].
-
-Examples of action items to look for:
-- "I'll send you the report by Friday"
-- "We need to schedule a follow-up meeting"
-- "Can you review the document before tomorrow?"
-- "Let's get that bug fixed"
-- "I'll call the client next week"
-
-Transcript:
-{transcript}
-"""
-
-# Global instances
-_memory_service = None
-_process_memory = None # For worker processes
-
-
-def init_memory_config(
- ollama_base_url: Optional[str] = None,
- qdrant_base_url: Optional[str] = None,
- organization_id: Optional[str] = None,
- project_id: Optional[str] = None,
- app_id: Optional[str] = None,
-) -> dict:
- """Initialize and return memory configuration with optional overrides."""
- global MEM0_CONFIG, MEM0_ORGANIZATION_ID, MEM0_PROJECT_ID, MEM0_APP_ID
-
- memory_logger.info(f"Initializing memory config with Qdrant URL: {qdrant_base_url} and Ollama base URL: {ollama_base_url}")
-
- if ollama_base_url:
- MEM0_CONFIG["llm"]["config"]["ollama_base_url"] = ollama_base_url
- MEM0_CONFIG["embedder"]["config"]["ollama_base_url"] = ollama_base_url
-
- if qdrant_base_url:
- MEM0_CONFIG["vector_store"]["config"]["host"] = qdrant_base_url
-
- if organization_id:
- MEM0_ORGANIZATION_ID = organization_id
-
- if project_id:
- MEM0_PROJECT_ID = project_id
-
- if app_id:
- MEM0_APP_ID = app_id
-
- return MEM0_CONFIG
-
-
-def _init_process_memory():
- """Initialize memory instance once per worker process."""
- global _process_memory
- if _process_memory is None:
- _process_memory = Memory.from_config(MEM0_CONFIG)
- return _process_memory
-
-
-def _add_memory_to_store(transcript: str, client_id: str, audio_uuid: str) -> bool:
- """
- Function to add memory in a separate process.
- This function will be pickled and run in a process pool.
- Uses a persistent memory instance per process.
- """
- try:
- # Get or create the persistent memory instance for this process
- process_memory = _init_process_memory()
- process_memory.add(
- transcript,
- user_id=client_id,
- metadata={
- "source": "offline_streaming",
- "audio_uuid": audio_uuid,
- "timestamp": int(time.time()),
- "conversation_context": "audio_transcription",
- "device_type": "audio_recording",
- "organization_id": MEM0_ORGANIZATION_ID,
- "project_id": MEM0_PROJECT_ID,
- "app_id": MEM0_APP_ID,
- },
- )
- return True
- except Exception as e:
- memory_logger.error(f"Error adding memory for {audio_uuid}: {e}")
- return False
-
-
-def _extract_action_items_from_transcript(transcript: str, client_id: str, audio_uuid: str) -> List[Dict[str, Any]]:
- """
- Extract action items from transcript using Ollama.
- This function will be used in the processing pipeline.
- """
- try:
- # Get or create the persistent memory instance for this process
- process_memory = _init_process_memory()
-
- # Initialize Ollama client with the same config as Mem0
- ollama_client = ollama.Client(host=OLLAMA_BASE_URL)
-
- # Format the prompt with the transcript
- prompt = ACTION_ITEM_EXTRACTION_PROMPT.format(transcript=transcript)
-
- # Call Ollama to extract action items
- response = ollama_client.chat(
- model="llama3.1:latest",
- messages=[
- {"role": "system", "content": "You are an expert at extracting action items from conversations. Always return valid JSON."},
- {"role": "user", "content": prompt}
- ],
- options={
- "temperature": 0.1, # Low temperature for consistent extraction
- "num_predict": 1000, # Enough tokens for multiple action items
- }
- )
-
- # Parse the response
- response_text = response['message']['content'].strip()
-
- # Try to parse JSON from the response
- try:
- # Clean up the response if it has markdown formatting
- if response_text.startswith('```json'):
- response_text = response_text.replace('```json', '').replace('```', '').strip()
- elif response_text.startswith('```'):
- response_text = response_text.replace('```', '').strip()
-
- action_items = json.loads(response_text)
-
- # Validate that we got a list
- if not isinstance(action_items, list):
- memory_logger.warning(f"Action item extraction returned non-list for {audio_uuid}: {type(action_items)}")
- return []
-
- # Add metadata to each action item
- for idx, item in enumerate(action_items):
- if isinstance(item, dict):
- item.update({
- "audio_uuid": audio_uuid,
- "client_id": client_id,
- "created_at": int(time.time()),
- "source": "transcript_extraction",
- "id": f"action_{audio_uuid}_{idx}_{int(time.time())}"
- })
-
- memory_logger.info(f"Extracted {len(action_items)} action items from {audio_uuid}")
- return action_items
-
- except json.JSONDecodeError as e:
- memory_logger.error(f"Failed to parse action items JSON for {audio_uuid}: {e}")
- memory_logger.error(f"Raw response: {response_text}")
- return []
-
- except Exception as e:
- memory_logger.error(f"Error extracting action items for {audio_uuid}: {e}")
- return []
-
-
-def _add_action_items_to_store(action_items: List[Dict[str, Any]], client_id: str, audio_uuid: str) -> bool:
- """
- Store extracted action items in Mem0 with proper metadata.
- """
- try:
- if not action_items:
- return True # Nothing to store, but not an error
-
- # Get or create the persistent memory instance for this process
- process_memory = _init_process_memory()
-
- for item in action_items:
- # Format the action item as a message for Mem0
- action_text = f"Action Item: {item.get('description', 'No description')}"
- if item.get('assignee') and item.get('assignee') != 'unassigned':
- action_text += f" (Assigned to: {item['assignee']})"
- if item.get('due_date') and item.get('due_date') != 'not_specified':
- action_text += f" (Due: {item['due_date']})"
-
- # Store in Mem0 with infer=False to preserve exact content
- process_memory.add(
- action_text,
- user_id=client_id,
- metadata={
- "type": "action_item",
- "source": "transcript_extraction",
- "audio_uuid": audio_uuid,
- "timestamp": int(time.time()),
- "action_item_data": item, # Store the full action item data
- "organization_id": MEM0_ORGANIZATION_ID,
- "project_id": MEM0_PROJECT_ID,
- "app_id": MEM0_APP_ID,
- },
- infer=False # Don't let Mem0 modify our action items
- )
-
- memory_logger.info(f"Stored {len(action_items)} action items for {audio_uuid}")
- return True
-
- except Exception as e:
- memory_logger.error(f"Error storing action items for {audio_uuid}: {e}")
- return False
-
-
-class MemoryService:
- """Service class for managing memory operations."""
-
- def __init__(self):
- self.memory = None
- self._initialized = False
-
- def initialize(self):
- """Initialize the memory service."""
- if self._initialized:
- return
-
- try:
- # Log Qdrant and Ollama URLs
- memory_logger.info(f"Initializing MemoryService with Qdrant URL: {MEM0_CONFIG['vector_store']['config']['host']} and Ollama base URL: {MEM0_CONFIG['llm']['config']['ollama_base_url']}")
- # Initialize main memory instance
- self.memory = Memory.from_config(MEM0_CONFIG)
- self._initialized = True
- memory_logger.info("Memory service initialized successfully")
-
- except Exception as e:
- memory_logger.error(f"Failed to initialize memory service: {e}")
- raise
-
- def add_memory(self, transcript: str, client_id: str, audio_uuid: str) -> bool:
- """Add memory to the store (blocking call in the current process)."""
- if not self._initialized:
- self.initialize()
-
- try:
- success = _add_memory_to_store(transcript, client_id, audio_uuid)
- if success:
- memory_logger.info(f"Added transcript for {audio_uuid} to mem0 (client: {client_id})")
- else:
- memory_logger.error(f"Failed to add memory for {audio_uuid}")
- return success
- except Exception as e:
- memory_logger.error(f"Error adding memory for {audio_uuid}: {e}")
- return False
-
- def extract_and_store_action_items(self, transcript: str, client_id: str, audio_uuid: str) -> int:
- """
- Extract action items from transcript and store them in Mem0.
- Returns the number of action items extracted and stored.
- """
- if not self._initialized:
- self.initialize()
-
- try:
- # Extract action items from the transcript
- action_items = _extract_action_items_from_transcript(transcript, client_id, audio_uuid)
-
- if not action_items:
- memory_logger.info(f"No action items found in transcript for {audio_uuid}")
- return 0
-
- # Store action items in Mem0
- success = _add_action_items_to_store(action_items, client_id, audio_uuid)
-
- if success:
- memory_logger.info(f"Successfully extracted and stored {len(action_items)} action items for {audio_uuid}")
- return len(action_items)
- else:
- memory_logger.error(f"Failed to store action items for {audio_uuid}")
- return 0
-
- except Exception as e:
- memory_logger.error(f"Error extracting action items for {audio_uuid}: {e}")
- return 0
-
- def get_action_items(self, user_id: str, limit: int = 50, status_filter: Optional[str] = None) -> List[Dict[str, Any]]:
- """
- Get action items for a user with optional status filtering.
- """
- if not self._initialized:
- self.initialize()
-
- assert self.memory is not None, "Memory service not initialized"
- try:
- # First, let's try to get all memories and filter manually to debug the issue
- all_memories = self.memory.get_all(user_id=user_id, limit=200)
-
- memory_logger.info(f"All memories response type: {type(all_memories)}")
- memory_logger.info(f"All memories keys: {list(all_memories.keys()) if isinstance(all_memories, dict) else 'not a dict'}")
-
- # Handle different formats
- if isinstance(all_memories, dict):
- if "results" in all_memories:
- memories_list = all_memories["results"]
- else:
- memories_list = list(all_memories.values())
- else:
- memories_list = all_memories if isinstance(all_memories, list) else []
-
- memory_logger.info(f"Found {len(memories_list)} total memories for user {user_id}")
-
- # Filter for action items manually
- action_item_memories = []
- for memory in memories_list:
- if isinstance(memory, dict):
- metadata = memory.get('metadata', {})
- memory_logger.info(f"Memory {memory.get('id', 'unknown')}: metadata = {metadata}")
-
- if metadata.get('type') == 'action_item':
- action_item_memories.append(memory)
- memory_logger.info(f"Found action item memory: {memory.get('memory', '')}")
-
- memory_logger.info(f"Found {len(action_item_memories)} action item memories")
-
- # Extract action item data from memories
- action_items = []
-
- for memory in action_item_memories:
- metadata = memory.get('metadata', {})
- action_item_data = metadata.get('action_item_data', {})
-
- # If no action_item_data, try to parse from memory text
- if not action_item_data:
- memory_logger.warning(f"No action_item_data found in metadata for memory {memory.get('id')}")
- # Try to create basic action item from memory text
- memory_text = memory.get('memory', '')
- if memory_text.startswith('Action Item:'):
- action_item_data = {
- 'description': memory_text.replace('Action Item:', '').strip(),
- 'status': 'open',
- 'assignee': 'unassigned',
- 'due_date': 'not_specified',
- 'priority': 'not_specified'
- }
-
- # Apply status filter if specified
- if status_filter and action_item_data.get('status') != status_filter:
- continue
-
- # Enrich with memory metadata
- action_item_data.update({
- "memory_id": memory.get('id'),
- "memory_text": memory.get('memory'),
- "created_at": metadata.get('timestamp'),
- "audio_uuid": metadata.get('audio_uuid')
- })
-
- action_items.append(action_item_data)
-
- memory_logger.info(f"Returning {len(action_items)} action items after filtering")
- return action_items
-
- except Exception as e:
- memory_logger.error(f"Error fetching action items for user {user_id}: {e}")
- raise
-
- def update_action_item_status(self, memory_id: str, new_status: str, user_id: Optional[str] = None) -> bool:
- """
- Update the status of an action item using proper Mem0 API.
- """
- if not self._initialized:
- self.initialize()
-
- assert self.memory is not None, "Memory service not initialized"
- try:
- # First, get the current memory to retrieve its metadata
- target_memory = self.memory.get(memory_id=memory_id)
-
- if not target_memory:
- memory_logger.error(f"Action item with memory_id {memory_id} not found")
- return False
-
- # Extract and update the action item data in metadata
- metadata = target_memory.get('metadata', {})
- action_item_data = metadata.get('action_item_data', {})
-
- if not action_item_data:
- memory_logger.error(f"No action_item_data found in memory {memory_id}")
- return False
-
- # Update the status in action_item_data
- action_item_data['status'] = new_status
- action_item_data['updated_at'] = int(time.time())
-
- # Create updated memory text with the new status
- updated_memory_text = f"Action Item: {action_item_data.get('description', 'No description')} (Status: {new_status})"
- if action_item_data.get('assignee') and action_item_data.get('assignee') != 'unassigned':
- updated_memory_text += f" (Assigned to: {action_item_data['assignee']})"
- if action_item_data.get('due_date') and action_item_data.get('due_date') != 'not_specified':
- updated_memory_text += f" (Due: {action_item_data['due_date']})"
-
- # Use Mem0's proper update method
- result = self.memory.update(
- memory_id=memory_id,
- data=updated_memory_text
- )
-
- memory_logger.info(f"Updated action item {memory_id} status to {new_status}")
- return True
-
- except Exception as e:
- memory_logger.error(f"Error updating action item status for {memory_id}: {e}")
- return False
-
- def search_action_items(self, query: str, user_id: str, limit: int = 20) -> List[Dict[str, Any]]:
- """
- Search action items by text query using proper Mem0 search with filters.
- """
- if not self._initialized:
- self.initialize()
-
- assert self.memory is not None, "Memory service not initialized"
- try:
- # Use Mem0's search with filters to find action items
- # According to docs, we can pass custom filters
- memories = self.memory.search(
- query=query,
- user_id=user_id,
- limit=limit,
- filters={"metadata.type": "action_item"}
- )
-
- # Extract action item data
- action_items = []
-
- # Handle different response formats from Mem0 search
- if isinstance(memories, dict) and "results" in memories:
- memories_list = memories["results"]
- elif isinstance(memories, list):
- memories_list = memories
- else:
- memory_logger.warning(f"Unexpected search response format: {type(memories)}")
- memories_list = []
-
- for memory in memories_list:
- if not isinstance(memory, dict):
- memory_logger.warning(f"Skipping non-dict memory: {type(memory)}")
- continue
-
- metadata = memory.get('metadata', {})
-
- # Double-check it's an action item
- if metadata.get('type') != 'action_item':
- continue
-
- action_item_data = metadata.get('action_item_data', {})
-
- # If no structured action item data, try to parse from memory text
- if not action_item_data:
- memory_text = memory.get('memory', '')
- if memory_text.startswith('Action Item:'):
- action_item_data = {
- 'description': memory_text.replace('Action Item:', '').strip(),
- 'status': 'open',
- 'assignee': 'unassigned',
- 'due_date': 'not_specified',
- 'priority': 'not_specified'
- }
-
- # Enrich with memory metadata
- action_item_data.update({
- "memory_id": memory.get('id'),
- "memory_text": memory.get('memory'),
- "relevance_score": memory.get('score', 0),
- "created_at": metadata.get('timestamp'),
- "audio_uuid": metadata.get('audio_uuid')
- })
-
- action_items.append(action_item_data)
-
- memory_logger.info(f"Search found {len(action_items)} action items for query '{query}'")
- return action_items
-
- except Exception as e:
- memory_logger.error(f"Error searching action items for user {user_id} with query '{query}': {e}")
- # Fallback: get all action items and do basic text matching
- try:
- all_action_items = self.get_action_items(user_id=user_id, limit=100)
-
- if not all_action_items:
- return []
-
- # Simple text matching fallback
- search_results = []
- query_lower = query.lower()
-
- for item in all_action_items:
- description = item.get('description', '').lower()
- assignee = item.get('assignee', '').lower()
- context = item.get('context', '').lower()
-
- # Check if query appears in any field
- if (query_lower in description or
- query_lower in assignee or
- query_lower in context):
-
- # Add relevance score based on where the match was found
- relevance_score = 0.0
- if query_lower in description:
- relevance_score += 0.7
- if query_lower in assignee:
- relevance_score += 0.2
- if query_lower in context:
- relevance_score += 0.1
-
- item['relevance_score'] = relevance_score
- search_results.append(item)
-
- # Sort by relevance score (highest first) and limit results
- search_results.sort(key=lambda x: x.get('relevance_score', 0), reverse=True)
- memory_logger.info(f"Fallback search found {len(search_results)} matches")
- return search_results[:limit]
-
- except Exception as fallback_e:
- memory_logger.error(f"Fallback search also failed: {fallback_e}")
- return []
-
- def delete_action_item(self, memory_id: str) -> bool:
- """Delete a specific action item by memory ID."""
- if not self._initialized:
- self.initialize()
-
- assert self.memory is not None, "Memory service not initialized"
- try:
- self.memory.delete(memory_id=memory_id)
- memory_logger.info(f"Deleted action item with memory_id {memory_id}")
- return True
- except Exception as e:
- memory_logger.error(f"Error deleting action item {memory_id}: {e}")
- return False
-
- def get_all_memories(self, user_id: str, limit: int = 100) -> dict:
- """Get all memories for a user."""
- if not self._initialized:
- self.initialize()
-
- assert self.memory is not None, "Memory service not initialized"
- try:
- memories = self.memory.get_all(user_id=user_id, limit=limit)
- return memories
- except Exception as e:
- memory_logger.error(f"Error fetching memories for user {user_id}: {e}")
- raise
-
- def search_memories(self, query: str, user_id: str, limit: int = 10) -> dict:
- """Search memories using semantic similarity."""
- if not self._initialized:
- self.initialize()
-
- assert self.memory is not None, "Memory service not initialized"
- try:
- memories = self.memory.search(query=query, user_id=user_id, limit=limit)
- return memories
- except Exception as e:
- memory_logger.error(f"Error searching memories for user {user_id}: {e}")
- raise
-
- def delete_memory(self, memory_id: str) -> bool:
- """Delete a specific memory by ID."""
- if not self._initialized:
- self.initialize()
-
- assert self.memory is not None, "Memory service not initialized"
- try:
- self.memory.delete(memory_id=memory_id)
- memory_logger.info(f"Deleted memory {memory_id}")
- return True
- except Exception as e:
- memory_logger.error(f"Error deleting memory {memory_id}: {e}")
- raise
-
- def delete_all_user_memories(self, user_id: str) -> int:
- """Delete all memories for a user and return count of deleted memories."""
- if not self._initialized:
- self.initialize()
-
- try:
- assert self.memory is not None, "Memory service not initialized"
- # Get all memories first to count them
- user_memories_response = self.memory.get_all(user_id=user_id)
- memory_count = 0
-
- # Handle different response formats from get_all
- if isinstance(user_memories_response, dict):
- if "results" in user_memories_response:
- # New paginated format
- memory_count = len(user_memories_response["results"])
- else:
- # Old dict format (deprecated)
- memory_count = len(user_memories_response)
- elif isinstance(user_memories_response, list):
- # Just in case it returns a list
- memory_count = len(user_memories_response)
- else:
- memory_count = 0
-
- # Delete all memories for this user
- if memory_count > 0:
- self.memory.delete_all(user_id=user_id)
- memory_logger.info(f"Deleted {memory_count} memories for user {user_id}")
-
- return memory_count
-
- except Exception as e:
- memory_logger.error(f"Error deleting memories for user {user_id}: {e}")
- raise
-
- def test_connection(self) -> bool:
- """Test memory service connection."""
- try:
- if not self._initialized:
- self.initialize()
- return True
- except Exception as e:
- memory_logger.error(f"Memory service connection test failed: {e}")
- return False
-
- def shutdown(self):
- """Shutdown the memory service."""
- self._initialized = False
- memory_logger.info("Memory service shut down")
-
-
-# Global service instance
-def get_memory_service() -> MemoryService:
- """Get the global memory service instance."""
- global _memory_service
- if _memory_service is None:
- _memory_service = MemoryService()
- return _memory_service
-
-
-def shutdown_memory_service():
- """Shutdown the global memory service."""
- global _memory_service
- if _memory_service:
- _memory_service.shutdown()
- _memory_service = None
\ No newline at end of file
diff --git a/backends/advanced-backend/src/metrics.py b/backends/advanced-backend/src/metrics.py
deleted file mode 100644
index 9b4072ba..00000000
--- a/backends/advanced-backend/src/metrics.py
+++ /dev/null
@@ -1,370 +0,0 @@
-import asyncio
-import json
-import logging
-import time
-from collections import deque
-from dataclasses import dataclass, field
-from datetime import datetime
-from pathlib import Path
-from typing import Dict, List, Optional
-
-# Configure metrics logger
-metrics_logger = logging.getLogger("metrics")
-
-
-@dataclass
-class ServiceMetrics:
- """Metrics for individual services"""
- name: str
- start_time: float = field(default_factory=time.time)
- total_uptime_seconds: float = 0.0
- last_health_check: Optional[float] = None
- health_check_successes: int = 0
- health_check_failures: int = 0
- reconnection_attempts: int = 0
- last_failure_time: Optional[float] = None
- failure_reasons: List[str] = field(default_factory=list)
-
-
-@dataclass
-class ClientMetrics:
- """Metrics for individual client connections"""
- client_id: str
- connection_start: float = field(default_factory=time.time)
- connection_end: Optional[float] = None
- total_connection_time: float = 0.0
- websocket_reconnections: int = 0
- audio_chunks_received: int = 0
- last_activity: float = field(default_factory=time.time)
-
-
-@dataclass
-class AudioProcessingMetrics:
- """Audio processing related metrics"""
- total_audio_duration_seconds: float = 0.0
- total_voice_activity_seconds: float = 0.0
- total_silence_seconds: float = 0.0
- chunks_processed_successfully: int = 0
- chunks_failed_processing: int = 0
- transcription_requests: int = 0
- transcription_successes: int = 0
- transcription_failures: int = 0
- memory_storage_requests: int = 0
- memory_storage_successes: int = 0
- memory_storage_failures: int = 0
- average_transcription_latency_ms: float = 0.0
- transcription_latencies: deque = field(default_factory=lambda: deque(maxlen=1000))
-
-
-@dataclass
-class SystemMetrics:
- """Overall system metrics"""
- system_start_time: float = field(default_factory=time.time)
- last_report_time: Optional[float] = None
- services: Dict[str, ServiceMetrics] = field(default_factory=dict)
- clients: Dict[str, ClientMetrics] = field(default_factory=dict)
- audio: AudioProcessingMetrics = field(default_factory=AudioProcessingMetrics)
- active_client_count: int = 0
-
-
-class MetricsCollector:
- """Central metrics collection and reporting system"""
-
- def __init__(self, debug_dir: str | Path):
- self.debug_dir = Path(debug_dir)
- self.debug_dir.mkdir(parents=True, exist_ok=True)
-
- self.metrics = SystemMetrics()
- self._report_task: Optional[asyncio.Task] = None
- self._running = False
-
- # Initialize core services
- self._init_core_services()
-
- metrics_logger.info(f"Metrics collector initialized, reports will be saved to: {self.debug_dir}")
-
- def _init_core_services(self):
- """Initialize metrics tracking for core services"""
- core_services = [
- "friend-backend",
- "mongodb",
- "qdrant",
- "asr-service",
- "memory-service",
- "speaker-service"
- ]
-
- for service_name in core_services:
- self.metrics.services[service_name] = ServiceMetrics(name=service_name)
-
- async def start(self):
- """Start the metrics collection and reporting"""
- if self._running:
- return
-
- self._running = True
- self._report_task = asyncio.create_task(self._periodic_report_loop())
- metrics_logger.info("Metrics collection started")
-
- async def stop(self):
- """Stop metrics collection and save final report"""
- if not self._running:
- return
-
- self._running = False
- if self._report_task:
- self._report_task.cancel()
- try:
- await self._report_task
- except asyncio.CancelledError:
- pass
-
- # Save final report
- await self._generate_report()
- metrics_logger.info("Metrics collection stopped")
-
- # Service Health Tracking
- def record_service_health_check(self, service_name: str, success: bool, failure_reason: str | None = None):
- """Record service health check result"""
- if service_name not in self.metrics.services:
- self.metrics.services[service_name] = ServiceMetrics(name=service_name)
-
- service = self.metrics.services[service_name]
- service.last_health_check = time.time()
-
- if success:
- service.health_check_successes += 1
- else:
- service.health_check_failures += 1
- service.last_failure_time = time.time()
- if failure_reason:
- service.failure_reasons.append(f"{datetime.now().isoformat()}: {failure_reason}")
- # Keep only last 10 failure reasons
- service.failure_reasons = service.failure_reasons[-10:]
-
- def record_service_reconnection(self, service_name: str):
- """Record service reconnection attempt"""
- if service_name not in self.metrics.services:
- self.metrics.services[service_name] = ServiceMetrics(name=service_name)
-
- self.metrics.services[service_name].reconnection_attempts += 1
-
- def update_service_uptime(self, service_name: str, uptime_seconds: float):
- """Update service uptime"""
- if service_name not in self.metrics.services:
- self.metrics.services[service_name] = ServiceMetrics(name=service_name)
-
- self.metrics.services[service_name].total_uptime_seconds = uptime_seconds
-
- # Client Connection Tracking
- def record_client_connection(self, client_id: str):
- """Record new client connection"""
- self.metrics.clients[client_id] = ClientMetrics(client_id=client_id)
- self.metrics.active_client_count = len([c for c in self.metrics.clients.values() if c.connection_end is None])
- metrics_logger.info(f"Client connected: {client_id}, active clients: {self.metrics.active_client_count}")
-
- def record_client_disconnection(self, client_id: str):
- """Record client disconnection"""
- if client_id in self.metrics.clients:
- client = self.metrics.clients[client_id]
- client.connection_end = time.time()
- client.total_connection_time = client.connection_end - client.connection_start
- self.metrics.active_client_count = len([c for c in self.metrics.clients.values() if c.connection_end is None])
- metrics_logger.info(f"Client disconnected: {client_id}, active clients: {self.metrics.active_client_count}")
-
- def record_client_reconnection(self, client_id: str):
- """Record client WebSocket reconnection"""
- if client_id in self.metrics.clients:
- self.metrics.clients[client_id].websocket_reconnections += 1
-
- def record_client_activity(self, client_id: str):
- """Update client last activity time"""
- if client_id in self.metrics.clients:
- self.metrics.clients[client_id].last_activity = time.time()
-
- def record_audio_chunk_received(self, client_id: str):
- """Record audio chunk received from client"""
- if client_id in self.metrics.clients:
- self.metrics.clients[client_id].audio_chunks_received += 1
-
- # Audio Processing Tracking
- def record_audio_chunk_saved(self, duration_seconds: float, voice_activity_seconds: float | None = None):
- """Record successful audio chunk save"""
- self.metrics.audio.total_audio_duration_seconds += duration_seconds
- self.metrics.audio.chunks_processed_successfully += 1
-
- if voice_activity_seconds is not None:
- self.metrics.audio.total_voice_activity_seconds += voice_activity_seconds
- self.metrics.audio.total_silence_seconds += (duration_seconds - voice_activity_seconds)
-
- def record_audio_chunk_failed(self):
- """Record failed audio chunk processing"""
- self.metrics.audio.chunks_failed_processing += 1
-
- def record_transcription_request(self):
- """Record transcription request sent"""
- self.metrics.audio.transcription_requests += 1
-
- def record_transcription_result(self, success: bool, latency_ms: float | None = None):
- """Record transcription result"""
- if success:
- self.metrics.audio.transcription_successes += 1
- else:
- self.metrics.audio.transcription_failures += 1
-
- if latency_ms is not None:
- self.metrics.audio.transcription_latencies.append(latency_ms)
- # Update rolling average
- if self.metrics.audio.transcription_latencies:
- self.metrics.audio.average_transcription_latency_ms = sum(self.metrics.audio.transcription_latencies) / len(self.metrics.audio.transcription_latencies)
-
- def record_memory_storage_request(self):
- """Record memory storage request"""
- self.metrics.audio.memory_storage_requests += 1
-
- def record_memory_storage_result(self, success: bool):
- """Record memory storage result"""
- if success:
- self.metrics.audio.memory_storage_successes += 1
- else:
- self.metrics.audio.memory_storage_failures += 1
-
- # Report Generation
- async def _periodic_report_loop(self):
- """Run periodic report generation loop (every 30 minutes)"""
- while self._running:
- try:
- # Wait 30 minutes between reports
- sleep_seconds = 30 * 60 # 30 minutes in seconds
-
- metrics_logger.info(f"Next metrics report in {sleep_seconds/60:.0f} minutes")
-
- await asyncio.sleep(sleep_seconds)
- await self._generate_report()
-
- except asyncio.CancelledError:
- break
- except Exception as e:
- metrics_logger.error(f"Error in periodic report loop: {e}")
- await asyncio.sleep(1800) # Wait 30 minutes before retry
-
- async def _generate_report(self):
- """Generate and save periodic metrics report"""
- try:
- report_time = datetime.now()
- system_uptime = time.time() - self.metrics.system_start_time
-
- # Calculate derived metrics
- total_recording_time = self.metrics.audio.total_audio_duration_seconds
- total_voice_activity = self.metrics.audio.total_voice_activity_seconds
-
- # Service uptime percentages
- service_uptimes = {}
- for name, service in self.metrics.services.items():
- uptime_percentage = min(100.0, (service.total_uptime_seconds / system_uptime) * 100) if system_uptime > 0 else 0
- service_uptimes[name] = {
- "uptime_seconds": service.total_uptime_seconds,
- "uptime_percentage": round(uptime_percentage, 2),
- "health_check_success_rate": round((service.health_check_successes / max(1, service.health_check_successes + service.health_check_failures)) * 100, 2),
- "reconnection_attempts": service.reconnection_attempts,
- "last_failure": service.last_failure_time,
- "recent_failures": service.failure_reasons[-5:] if service.failure_reasons else []
- }
-
- # Client connection metrics
- client_stats = {
- "active_connections": self.metrics.active_client_count,
- "total_clients_seen": len(self.metrics.clients),
- "total_reconnections": sum(c.websocket_reconnections for c in self.metrics.clients.values()),
- "average_connection_duration_minutes": round(sum(c.total_connection_time for c in self.metrics.clients.values() if c.connection_end) / max(1, len([c for c in self.metrics.clients.values() if c.connection_end])) / 60, 2)
- }
-
- # Audio processing success rates
- audio_stats = {
- "total_recording_time_hours": round(total_recording_time / 3600, 2),
- "total_voice_activity_hours": round(total_voice_activity / 3600, 2),
- "voice_activity_percentage": round((total_voice_activity / max(1, total_recording_time)) * 100, 2),
- "chunk_processing_success_rate": round((self.metrics.audio.chunks_processed_successfully / max(1, self.metrics.audio.chunks_processed_successfully + self.metrics.audio.chunks_failed_processing)) * 100, 2),
- "transcription_success_rate": round((self.metrics.audio.transcription_successes / max(1, self.metrics.audio.transcription_requests)) * 100, 2),
- "memory_storage_success_rate": round((self.metrics.audio.memory_storage_successes / max(1, self.metrics.audio.memory_storage_requests)) * 100, 2),
- "average_transcription_latency_ms": round(self.metrics.audio.average_transcription_latency_ms, 2)
- }
-
- # Generate comprehensive report
- report = {
- "report_metadata": {
- "generated_at": report_time.isoformat(),
- "system_start_time": datetime.fromtimestamp(self.metrics.system_start_time).isoformat(),
- "system_uptime_hours": round(system_uptime / 3600, 2),
- "report_period_hours": round((time.time() - self.metrics.last_report_time) / 3600, 2) if self.metrics.last_report_time else round(system_uptime / 3600, 2)
- },
- "uptime_metrics": {
- "system_uptime_vs_recording_time": {
- "system_uptime_hours": round(system_uptime / 3600, 2),
- "recording_time_hours": round(total_recording_time / 3600, 2),
- "recording_efficiency_percentage": round((total_recording_time / max(1, system_uptime)) * 100, 2)
- },
- "service_uptimes": service_uptimes,
- "client_connections": client_stats
- },
- "audio_processing_metrics": audio_stats,
- "raw_counters": {
- "chunks_processed": self.metrics.audio.chunks_processed_successfully,
- "chunks_failed": self.metrics.audio.chunks_failed_processing,
- "transcription_requests": self.metrics.audio.transcription_requests,
- "transcription_successes": self.metrics.audio.transcription_successes,
- "memory_storage_requests": self.metrics.audio.memory_storage_requests,
- "memory_storage_successes": self.metrics.audio.memory_storage_successes
- }
- }
-
- # Save report to file
- filename = f"metrics_report_{report_time.strftime('%Y%m%d_%H%M%S')}.json"
- filepath = self.debug_dir / filename
-
- with open(filepath, 'w') as f:
- json.dump(report, f, indent=2, default=str)
-
- self.metrics.last_report_time = time.time()
-
- metrics_logger.info(f"Metrics report saved: {filepath}")
- metrics_logger.info(f"System uptime: {system_uptime/3600:.1f}h, Recording: {total_recording_time/3600:.1f}h, Voice activity: {total_voice_activity/3600:.1f}h")
-
- except Exception as e:
- metrics_logger.error(f"Failed to generate metrics report: {e}")
-
- def get_current_metrics_summary(self) -> dict:
- """Get current metrics summary for API endpoints"""
- system_uptime = time.time() - self.metrics.system_start_time
-
- return {
- "system_uptime_hours": round(system_uptime / 3600, 2),
- "recording_time_hours": round(self.metrics.audio.total_audio_duration_seconds / 3600, 2),
- "active_clients": self.metrics.active_client_count,
- "chunks_processed": self.metrics.audio.chunks_processed_successfully,
- "transcription_success_rate": round((self.metrics.audio.transcription_successes / max(1, self.metrics.audio.transcription_requests)) * 100, 2),
- "voice_activity_hours": round(self.metrics.audio.total_voice_activity_seconds / 3600, 2),
- "services_status": {name: service.health_check_successes > service.health_check_failures for name, service in self.metrics.services.items()}
- }
-
-
-# Global metrics collector instance
-_metrics_collector: Optional[MetricsCollector] = None
-
-def get_metrics_collector() -> MetricsCollector:
- """Get the global metrics collector instance"""
- global _metrics_collector
- if _metrics_collector is None:
- debug_dir = "/app/debug_dir" # this is only for docker right now
- _metrics_collector = MetricsCollector(debug_dir)
- return _metrics_collector
-
-async def start_metrics_collection():
- """Start metrics collection"""
- collector = get_metrics_collector()
- await collector.start()
-
-async def stop_metrics_collection():
- """Stop metrics collection"""
- collector = get_metrics_collector()
- await collector.stop()
\ No newline at end of file
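The deleted `MetricsCollector` derives `average_transcription_latency_ms` from a `deque(maxlen=1000)` rolling window. A standalone sketch of that pattern (the `LatencyWindow` name is illustrative, not from the source):

```python
from collections import deque


class LatencyWindow:
    """Rolling average over the most recent N latency samples."""

    def __init__(self, maxlen: int = 1000):
        # Oldest samples fall off automatically once maxlen is reached
        self.samples = deque(maxlen=maxlen)

    def record(self, latency_ms: float) -> None:
        self.samples.append(latency_ms)

    @property
    def average(self) -> float:
        return sum(self.samples) / len(self.samples) if self.samples else 0.0


window = LatencyWindow(maxlen=3)
for ms in (100.0, 200.0, 300.0, 400.0):
    window.record(ms)
print(window.average)  # only the last 3 samples survive: (200+300+400)/3 = 300.0
```

Bounding the window keeps the average responsive to recent conditions and caps memory use, which is why the collector stores latencies this way rather than accumulating every sample.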
diff --git a/backends/advanced-backend/webui/Dockerfile b/backends/advanced-backend/src/webui/Dockerfile
similarity index 100%
rename from backends/advanced-backend/webui/Dockerfile
rename to backends/advanced-backend/src/webui/Dockerfile
diff --git a/backends/advanced-backend/webui/README.md b/backends/advanced-backend/src/webui/README.md
similarity index 100%
rename from backends/advanced-backend/webui/README.md
rename to backends/advanced-backend/src/webui/README.md
diff --git a/backends/advanced-backend/webui/USAGE.md b/backends/advanced-backend/src/webui/USAGE.md
similarity index 100%
rename from backends/advanced-backend/webui/USAGE.md
rename to backends/advanced-backend/src/webui/USAGE.md
diff --git a/backends/advanced-backend/webui/screenshot.png b/backends/advanced-backend/src/webui/screenshot.png
similarity index 100%
rename from backends/advanced-backend/webui/screenshot.png
rename to backends/advanced-backend/src/webui/screenshot.png
diff --git a/backends/advanced-backend/src/webui/streamlit_app.py b/backends/advanced-backend/src/webui/streamlit_app.py
new file mode 100644
index 00000000..8b4431a1
--- /dev/null
+++ b/backends/advanced-backend/src/webui/streamlit_app.py
@@ -0,0 +1,3069 @@
+import json
+import logging
+import os
+import random
+import time
+from datetime import datetime
+from pathlib import Path
+
+import pandas as pd
+import requests
+import streamlit as st
+from dotenv import load_dotenv
+
+from advanced_omi_backend.debug_system_tracker import get_debug_tracker
+
+load_dotenv()
+
+# Create logs directory for Streamlit app
+LOGS_DIR = Path("./logs")
+LOGS_DIR.mkdir(parents=True, exist_ok=True)
+
+# Configure comprehensive logging for Streamlit app
+logging.basicConfig(
+ level=logging.DEBUG if os.getenv("DEBUG", "false").lower() == "true" else logging.INFO,
+ format='%(asctime)s | %(levelname)-8s | %(name)-20s | %(message)s',
+ handlers=[
+ logging.StreamHandler(),
+ logging.FileHandler(LOGS_DIR / 'streamlit.log')
+ ]
+)
+
+logger = logging.getLogger("streamlit-ui")
+logger.info("🚀 Starting Friend-Lite Streamlit Dashboard")
+
+# ---- Configuration ---- #
+BACKEND_API_URL = os.getenv("BACKEND_API_URL", "http://192.168.0.110:8000")
+
+BACKEND_PUBLIC_URL = os.getenv("BACKEND_PUBLIC_URL", BACKEND_API_URL)
+
+logger.info(f"🔧 Configuration loaded - Backend API: {BACKEND_API_URL}, Public URL: {BACKEND_PUBLIC_URL}")
+
+# ---- Authentication Functions ---- #
+def init_auth_state():
+ """Initialize authentication state in session state."""
+ if 'authenticated' not in st.session_state:
+ st.session_state.authenticated = False
+ if 'user_info' not in st.session_state:
+ st.session_state.user_info = None
+ if 'auth_token' not in st.session_state:
+ st.session_state.auth_token = None
+ if 'auth_method' not in st.session_state:
+ st.session_state.auth_method = None
+ if 'auth_config' not in st.session_state:
+ st.session_state.auth_config = None
+
+@st.cache_data(ttl=300) # Cache for 5 minutes
+def get_auth_config():
+ """Get authentication configuration from backend."""
+ try:
+ response = requests.get(f"{BACKEND_API_URL}/api/auth/config", timeout=5)
+ if response.status_code == 200:
+ return response.json()
+ else:
+ logger.warning(f"Failed to get auth config: {response.status_code}")
+ return None
+ except Exception as e:
+ logger.warning(f"Error getting auth config: {e}")
+ return None
+
+def get_auth_headers():
+ """Get authentication headers for API requests."""
+ if st.session_state.get('auth_token'):
+ return {'Authorization': f'Bearer {st.session_state.auth_token}'}
+ return {}
+
+def check_auth_from_url():
+ """Check for authentication token in URL parameters."""
+ try:
+ # Check URL parameters for token
+ query_params = st.query_params
+ if 'token' in query_params:
+ token = query_params['token']
+ logger.info("🔑 Authentication token found in URL parameters")
+
+ # Validate token by calling a protected endpoint
+ headers = {'Authorization': f'Bearer {token}'}
+ response = requests.get(f"{BACKEND_API_URL}/api/users", headers=headers, timeout=5)
+
+ if response.status_code == 200:
+ st.session_state.authenticated = True
+ st.session_state.auth_token = token
+ st.session_state.auth_method = 'token'
+
+ # Try to get user info from token (decode JWT payload)
+ try:
+ import base64
+
+ # Split JWT token and decode payload
+ token_parts = token.split('.')
+ if len(token_parts) >= 2:
+ # Add base64url padding only when the payload length requires it
+ payload = token_parts[1]
+ payload += '=' * (-len(payload) % 4)
+ decoded = base64.urlsafe_b64decode(payload)
+ user_data = json.loads(decoded)
+ st.session_state.user_info = {
+ 'user_id': user_data.get('sub', 'Unknown'),
+ 'email': user_data.get('email', 'Unknown'),
+ 'name': user_data.get('name', user_data.get('email', 'Unknown'))
+ }
+ except Exception as e:
+ logger.warning(f"Could not decode user info from token: {e}")
+ st.session_state.user_info = {'user_id': 'Unknown', 'email': 'Unknown'}
+
+ logger.info("✅ Authentication successful from URL token")
+
+ # Clear the token from URL to avoid confusion
+ st.query_params.clear()
+ st.rerun()
+ return True
+ else:
+ logger.warning("❌ Token validation failed")
+ return False
+
+ # Check for error in URL
+ if 'error' in query_params:
+ error = query_params['error']
+ logger.error(f"❌ Authentication error in URL: {error}")
+ st.error(f"Authentication error: {error}")
+ st.query_params.clear()
+ return False
+
+ except Exception as e:
+ logger.error(f"❌ Error checking authentication from URL: {e}")
+ return False
+
+ return False
+
+def login_with_credentials(email, password):
+ """Login with email and password."""
+ try:
+ logger.info(f"🔐 Attempting login for email: {email}")
+ response = requests.post(
+ f"{BACKEND_API_URL}/auth/jwt/login",
+ data={'username': email, 'password': password},
+ headers={'Content-Type': 'application/x-www-form-urlencoded'},
+ timeout=10
+ )
+
+ if response.status_code == 200:
+ auth_data = response.json()
+ token = auth_data.get('access_token')
+
+ if token:
+ st.session_state.authenticated = True
+ st.session_state.auth_token = token
+ st.session_state.auth_method = 'credentials'
+ st.session_state.user_info = {
+ 'user_id': email,
+ 'email': email,
+ 'name': email
+ }
+ logger.info("✅ Credential login successful")
+ return True, "Login successful!"
+ else:
+ logger.error("❌ No access token in response")
+ return False, "No access token received"
+ else:
+ error_msg = "Invalid credentials"
+ try:
+ error_data = response.json()
+ error_msg = error_data.get('detail', error_msg)
+ except Exception:
+ pass
+ logger.error(f"❌ Login failed: {error_msg}")
+ return False, error_msg
+
+ except requests.exceptions.Timeout:
+ logger.error("❌ Login request timed out")
+ return False, "Login request timed out. Please try again."
+ except requests.exceptions.RequestException as e:
+ logger.error(f"❌ Login request failed: {e}")
+ return False, f"Connection error: {str(e)}"
+ except Exception as e:
+ logger.error(f"❌ Unexpected login error: {e}")
+ return False, f"Unexpected error: {str(e)}"
+
+def logout():
+ """Logout and clear authentication state."""
+ logger.info("🚪 User logging out")
+ st.session_state.authenticated = False
+ st.session_state.auth_token = None
+ st.session_state.user_info = None
+ st.session_state.auth_method = None
+
+def generate_jwt_token(email, password):
+ """Generate JWT token for given credentials."""
+ try:
+ logger.info(f"🔐 Generating JWT token for: {email}")
+ response = requests.post(
+ f"{BACKEND_API_URL}/auth/jwt/login",
+ data={'username': email, 'password': password},
+ headers={'Content-Type': 'application/x-www-form-urlencoded'},
+ timeout=10
+ )
+
+ if response.status_code == 200:
+ auth_data = response.json()
+ token = auth_data.get('access_token')
+ token_type = auth_data.get('token_type', 'bearer')
+
+ if token:
+ logger.info("✅ JWT token generated successfully")
+ return True, token, token_type
+ else:
+ logger.error("❌ No access token in response")
+ return False, "No access token received", None
+ else:
+ error_msg = "Invalid credentials"
+ try:
+ error_data = response.json()
+ error_msg = error_data.get('detail', error_msg)
+ except Exception:
+ pass
+ logger.error(f"❌ Token generation failed: {error_msg}")
+ return False, error_msg, None
+
+ except requests.exceptions.Timeout:
+ logger.error("❌ Token generation request timed out")
+ return False, "Request timed out. Please try again.", None
+ except requests.exceptions.RequestException as e:
+ logger.error(f"❌ Token generation request failed: {e}")
+ return False, f"Connection error: {str(e)}", None
+ except Exception as e:
+ logger.error(f"❌ Unexpected token generation error: {e}")
+ return False, f"Unexpected error: {str(e)}", None
+
+def show_auth_sidebar():
+ """Show authentication status and controls in sidebar."""
+ with st.sidebar:
+ st.header("🔐 Authentication")
+
+ # Get auth configuration from backend
+ auth_config = get_auth_config()
+
+ if st.session_state.get('authenticated', False):
+ user_info = st.session_state.get('user_info', {})
+ user_name = user_info.get('name', 'Unknown User')
+ auth_method = st.session_state.get('auth_method', 'unknown')
+
+ st.success(f"✅ Logged in as **{user_name}**")
+ st.caption(f"Method: {auth_method.title()}")
+
+ # Quick token access for authenticated users
+ current_token = st.session_state.get('auth_token')
+ if current_token:
+ with st.expander("🔑 Your Current Token"):
+ st.text_area(
+ "Current Auth Token:",
+ value=current_token,
+ height=100,
+ help="Your current authentication token",
+ key="current_user_token"
+ )
+
+ col1, col2 = st.columns([1, 1])
+ with col1:
+ if st.button("📋 Copy Current Token", key="copy_current_token", use_container_width=True):
+ copy_current_js = f"""
+
+ """
+ st.components.v1.html(copy_current_js, height=0)
+ st.success("✅ Current token copied!")
+
+ with col2:
+ if st.button("📋 Copy Auth Header", key="copy_current_auth", use_container_width=True):
+ auth_header_current = f"Authorization: Bearer {current_token}"
+ copy_auth_current_js = f"""
+
+ """
+ st.components.v1.html(copy_auth_current_js, height=0)
+ st.success("✅ Auth header copied!")
+
+ st.caption("💡 Use this token for WebSocket connections and API calls")
+
+ if st.button("🚪 Logout", use_container_width=True):
+ logout()
+ st.rerun()
+ else:
+ st.warning("🔒 Not authenticated")
+
+ # Manual token input
+ with st.expander("🔑 Manual Token Entry"):
+ manual_token = st.text_input("JWT Token:", type="password", help="Paste token from generated JWT")
+ if st.button("Submit Token"):
+ if manual_token.strip():
+ # Validate token
+ headers = {'Authorization': f'Bearer {manual_token.strip()}'}
+ try:
+ response = requests.get(f"{BACKEND_API_URL}/api/users", headers=headers, timeout=5)
+ if response.status_code == 200:
+ st.session_state.authenticated = True
+ st.session_state.auth_token = manual_token.strip()
+ st.session_state.auth_method = 'manual'
+ st.session_state.user_info = {'user_id': 'Unknown', 'email': 'Unknown', 'name': 'Manual Login'}
+ st.success("✅ Token validated successfully!")
+ st.rerun()
+ else:
+ st.error("❌ Invalid token")
+ except Exception as e:
+ st.error(f"❌ Error validating token: {e}")
+ else:
+ st.error("Please enter a token")
+
+ # Email/Password login
+ with st.expander("🔐 Email & Password Login", expanded=True):
+ with st.form("login_form"):
+ email = st.text_input("Email:")
+ password = st.text_input("Password:", type="password")
+ login_submitted = st.form_submit_button("🔐 Login")
+
+ if login_submitted:
+ if email.strip() and password.strip():
+ with st.spinner("Logging in..."):
+ success, message = login_with_credentials(email.strip(), password.strip())
+ if success:
+ st.success(message)
+ st.rerun()
+ else:
+ st.error(message)
+ else:
+ st.error("Please enter both email and password")
+
+ # JWT Token Generator
+ with st.expander("🔑 Generate JWT Token"):
+ st.info("Generate JWT tokens for API access or WebSocket connections")
+ with st.form("jwt_token_form"):
+ jwt_email = st.text_input("Email:", placeholder="admin@example.com")
+ jwt_password = st.text_input("Password:", type="password", placeholder="Admin password")
+ generate_submitted = st.form_submit_button("🔑 Generate Token")
+
+ if generate_submitted:
+ if jwt_email.strip() and jwt_password.strip():
+ with st.spinner("Generating JWT token..."):
+ success, result, token_type = generate_jwt_token(jwt_email.strip(), jwt_password.strip())
+ if success:
+ st.success("✅ JWT token generated successfully!")
+
+ # Create a container for the token display
+ token_container = st.container()
+ with token_container:
+ st.write("**Your JWT Token:**")
+
+ # Display token in a text area (read-only)
+ st.text_area(
+ "Access Token:",
+ value=result,
+ height=100,
+ help="Copy this token for API calls or WebSocket connections",
+ key="generated_jwt_token"
+ )
+
+ # Copy functionality with JavaScript
+ col1, col2 = st.columns([1, 1])
+ with col1:
+ copy_button = st.button("📋 Copy Token", key="copy_jwt_token", use_container_width=True)
+ with col2:
+ copy_auth_header = st.button("📋 Copy Auth Header", key="copy_auth_header", use_container_width=True)
+
+ if copy_button:
+ # JavaScript copy functionality
+ copy_js = f"""
+
+ """
+ st.components.v1.html(copy_js, height=0)
+ st.success("✅ Token copied to clipboard!")
+ st.info("💡 **Fallback:** If automatic copy failed, select text in the box above and copy (Ctrl+C)")
+
+ if copy_auth_header:
+ # JavaScript copy functionality for auth header
+ auth_header = f"Authorization: Bearer {result}"
+ copy_auth_js = f"""
+
+ """
+ st.components.v1.html(copy_auth_js, height=0)
+ st.success("✅ Authorization header copied to clipboard!")
+ st.code(f"Authorization: Bearer {result}")
+ st.info("💡 **Fallback:** If automatic copy failed, select text in the code box above and copy (Ctrl+C)")
+
+ # Show usage examples
+ st.divider()
+ st.write("**Usage Examples:**")
+
+ col1, col2 = st.columns(2)
+ with col1:
+ st.write("**WebSocket Connection:**")
+ st.code(f"ws://your-server:8000/ws?token={result[:20]}...")
+
+ with col2:
+ st.write("**API Call:**")
+ st.code(f"""curl -H "Authorization: Bearer {result[:20]}..." \\
+ {BACKEND_API_URL}/api/users""")
+
+ st.write("**Full Token (for copying):**")
+ st.code(result)
+ else:
+ st.error(f"❌ Failed to generate token: {result}")
+ else:
+ st.error("Please enter both email and password")
+
+ # Registration info
+ with st.expander("📝 New User Registration"):
+ st.info("New users can register using the backend API:")
+ st.code(f"POST {BACKEND_API_URL}/auth/register")
+ st.caption("💡 Email/password registration available")
+
+ # Show auth configuration status
+ if auth_config:
+ with st.expander("⚙️ Auth Configuration"):
+ st.write("**Available Methods:**")
+ st.write("• Email/Password: ✅ Enabled")
+ st.write("• Registration: ✅ Enabled")
+ else:
+ st.caption("⚠️ Could not load auth configuration from backend")
+
+# ---- Health Check Functions ---- #
+@st.cache_data(ttl=30) # Cache for 30 seconds to avoid too many requests
+def get_system_health():
+ """Get comprehensive system health from backend."""
+ logger.info("🏥 Performing system health check")
+ start_time = time.time()
+
+ try:
+ # First try the simple readiness check with shorter timeout
+ logger.debug("🔍 Checking backend readiness...")
+ response = requests.get(f"{BACKEND_API_URL}/readiness", timeout=5)
+ if response.status_code == 200:
+ logger.info("✅ Backend readiness check passed")
+ # Backend is responding, now try the full health check with longer timeout
+ try:
+ logger.debug("🔍 Performing full health check...")
+ health_response = requests.get(f"{BACKEND_API_URL}/health", timeout=30)
+ if health_response.status_code == 200:
+ health_data = health_response.json()
+ duration = time.time() - start_time
+ logger.info(f"✅ Full health check completed in {duration:.3f}s")
+ logger.debug(f"Health data: {health_data}")
+ return health_data
+ else:
+ # Health check failed but backend is responsive
+ duration = time.time() - start_time
+ logger.warning(f"⚠️ Health check failed with status {health_response.status_code} in {duration:.3f}s")
+ return {
+ "status": "partial",
+ "overall_healthy": False,
+ "services": {
+ "backend": {
+ "status": f"⚠️ Backend responsive but health check failed: HTTP {health_response.status_code}",
+ "healthy": False
+ }
+ },
+ "error": "Health check endpoint returned unexpected status code"
+ }
+ except requests.exceptions.Timeout:
+ # Health check timed out but backend is responsive
+ duration = time.time() - start_time
+ logger.warning(f"⚠️ Health check timed out in {duration:.3f}s")
+ return {
+ "status": "partial",
+ "overall_healthy": False,
+ "services": {
+ "backend": {
+ "status": "⚠️ Backend responsive but health check timed out (some services may be slow)",
+ "healthy": False
+ }
+ },
+ "error": "Health check timed out - external services may be unavailable"
+ }
+ except Exception as e:
+ duration = time.time() - start_time
+ logger.error(f"❌ Health check error in {duration:.3f}s: {e}")
+ return {
+ "status": "partial",
+ "overall_healthy": False,
+ "services": {
+ "backend": {
+ "status": f"⚠️ Backend responsive but health check failed: {str(e)}",
+ "healthy": False
+ }
+ },
+ "error": str(e)
+ }
+ else:
+ duration = time.time() - start_time
+ logger.error(f"❌ Backend readiness check failed with status {response.status_code} in {duration:.3f}s")
+ return {
+ "status": "unhealthy",
+ "overall_healthy": False,
+ "services": {
+ "backend": {
+ "status": f"❌ Backend API Error: HTTP {response.status_code}",
+ "healthy": False
+ }
+ },
+ "error": "Backend API returned unexpected status code"
+ }
+ except Exception as e:
+ duration = time.time() - start_time
+ logger.error(f"❌ System health check failed in {duration:.3f}s: {e}")
+ return {
+ "status": "unhealthy",
+ "overall_healthy": False,
+ "services": {
+ "backend": {
+ "status": f"❌ Backend API Connection Failed: {str(e)}",
+ "healthy": False
+ }
+ },
+ "error": str(e)
+ }
+
+# ---- Helper Functions ---- #
+def get_data(endpoint: str, require_auth: bool = False):
+ """Helper function to get data from the backend API with retry logic."""
+ logger.debug(f"📡 GET request to endpoint: {endpoint}")
+ start_time = time.time()
+
+ # Check authentication if required
+ if require_auth and not st.session_state.get('authenticated', False):
+ logger.warning(f"❌ Authentication required for endpoint: {endpoint}")
+ st.error(f"🔒 Authentication required to access {endpoint}")
+ return None
+
+ max_retries = 3
+ base_delay = 1
+ headers = get_auth_headers() if require_auth else {}
+
+ for attempt in range(max_retries):
+ try:
+ logger.debug(f"📡 Attempt {attempt + 1}/{max_retries} for GET {endpoint}")
+ response = requests.get(f"{BACKEND_API_URL}{endpoint}", headers=headers)
+
+ # Handle authentication errors
+ if response.status_code == 401:
+ logger.error(f"❌ Authentication failed for {endpoint}")
+ st.error("🔒 Authentication failed. Please login again.")
+ logout() # Clear invalid auth state
+ return None
+ elif response.status_code == 403:
+ logger.error(f"❌ Access forbidden for {endpoint}")
+ st.error("🚫 Access forbidden. You don't have permission for this resource.")
+ return None
+
+ response.raise_for_status()
+ duration = time.time() - start_time
+ logger.info(f"✅ GET {endpoint} successful in {duration:.3f}s")
+ return response.json()
+ except requests.exceptions.RequestException as e:
+ duration = time.time() - start_time
+ if attempt < max_retries - 1:
+ delay = base_delay * (2 ** attempt)
+ logger.warning(f"⚠️ GET {endpoint} attempt {attempt + 1} failed in {duration:.3f}s, retrying in {delay}s: {str(e)}")
+ time.sleep(delay)
+ continue
+ else:
+ logger.error(f"❌ GET {endpoint} failed after {max_retries} attempts in {duration:.3f}s: {e}")
+ if not require_auth: # Only show connection error for public endpoints
+ st.error(f"Could not connect to the backend at `{BACKEND_API_URL}`. Please ensure it's running. Error: {e}")
+ return None
+
+def post_data(endpoint: str, params: dict | None = None, json_data: dict | None = None, require_auth: bool = False):
+ """Helper function to post data to the backend API."""
+ logger.debug(f"📤 POST request to endpoint: {endpoint} with params: {params}")
+ start_time = time.time()
+
+ # Check authentication if required
+ if require_auth and not st.session_state.get('authenticated', False):
+ logger.warning(f"❌ Authentication required for endpoint: {endpoint}")
+ st.error(f"🔒 Authentication required to access {endpoint}")
+ return None
+
+ headers = get_auth_headers() if require_auth else {}
+
+ try:
+ kwargs = {'headers': headers}
+ if params:
+ kwargs['params'] = params
+ if json_data:
+ kwargs['json'] = json_data
+
+ response = requests.post(f"{BACKEND_API_URL}{endpoint}", **kwargs)
+
+ # Handle authentication errors
+ if response.status_code == 401:
+ logger.error(f"Authentication failed for {endpoint}")
+ st.error("Authentication failed. Please login again.")
+ logout() # Clear invalid auth state
+ return None
+ elif response.status_code == 403:
+ logger.error(f"Access forbidden for {endpoint}")
+ st.error("Access forbidden. You don't have permission for this resource.")
+ return None
+
+ # Handle specific HTTP status codes before raising for status
+ if response.status_code == 409:
+ duration = time.time() - start_time
+ logger.error(f"POST {endpoint} failed with 409 Conflict in {duration:.3f}s")
+ # Try to get the specific error message from the response
+ try:
+ error_data = response.json()
+ error_message = error_data.get('message', 'Resource already exists')
+ st.error(error_message)
+ except ValueError: # body was not valid JSON
+ st.error("Resource already exists. Please check your input and try again.")
+ return None
+
+ response.raise_for_status()
+ duration = time.time() - start_time
+ logger.info(f"POST {endpoint} successful in {duration:.3f}s")
+ return response.json()
+ except requests.exceptions.RequestException as e:
+ duration = time.time() - start_time
+ logger.error(f"POST {endpoint} failed in {duration:.3f}s: {e}")
+ st.error(f"Error posting to backend: {e}")
+ return None
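The 409 branch above prefers the backend-supplied message and falls back to a generic duplicate-resource notice. That decision, as a standalone sketch (hypothetical helper, for illustration):

```python
def conflict_message(payload) -> str:
    """Pick the error text to show for a 409 Conflict body (already-parsed JSON)."""
    if isinstance(payload, dict):
        # Backend-provided message wins when present.
        return payload.get("message", "Resource already exists")
    # Non-JSON or empty body: generic notice.
    return "Resource already exists"
```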
+
+def delete_data(endpoint: str, params: dict | None = None, require_auth: bool = False):
+ """Helper function to delete data from the backend API."""
+ logger.debug(f"DELETE request to endpoint: {endpoint} with params: {params}")
+ start_time = time.time()
+
+ # Check authentication if required
+ if require_auth and not st.session_state.get('authenticated', False):
+ logger.warning(f"Authentication required for endpoint: {endpoint}")
+ st.error(f"Authentication required to access {endpoint}")
+ return None
+
+ headers = get_auth_headers() if require_auth else {}
+
+ try:
+ response = requests.delete(f"{BACKEND_API_URL}{endpoint}", params=params, headers=headers)
+
+ # Handle authentication errors
+ if response.status_code == 401:
+ logger.error(f"Authentication failed for {endpoint}")
+ st.error("Authentication failed. Please login again.")
+ logout() # Clear invalid auth state
+ return None
+ elif response.status_code == 403:
+ logger.error(f"Access forbidden for {endpoint}")
+ st.error("Access forbidden. You don't have permission for this resource.")
+ return None
+
+ response.raise_for_status()
+ duration = time.time() - start_time
+ logger.info(f"DELETE {endpoint} successful in {duration:.3f}s")
+ return response.json()
+ except requests.exceptions.RequestException as e:
+ duration = time.time() - start_time
+ logger.error(f"DELETE {endpoint} failed in {duration:.3f}s: {e}")
+ st.error(f"Error deleting from backend: {e}")
+ return None
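All three helpers apply the same auth-error rule: 401 invalidates the session and forces a re-login, while 403 is reported but leaves the session intact. A minimal sketch of that shared rule (hypothetical helper, not used by the dashboard):

```python
def classify_auth_error(status_code: int):
    """Map an HTTP status to the dashboard's auth handling, or None if not an auth error."""
    if status_code == 401:
        return "reauthenticate"  # clear session state and ask the user to log in again
    if status_code == 403:
        return "forbidden"  # keep the session; the user simply lacks permission
    return None
```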
+
+# ---- Streamlit App Configuration ---- #
+logger.info("Configuring Streamlit app...")
+st.set_page_config(
+ page_title="Friend-Lite Dashboard",
+ layout="wide",
+ initial_sidebar_state="expanded"
+)
+
+# Initialize authentication state
+init_auth_state()
+
+# Check for authentication token in URL parameters
+check_auth_from_url()
+
+st.title("Friend-Lite Dashboard")
+logger.info("Dashboard initialized")
+
+# Inject custom CSS for conversation box using Streamlit theme variables
+st.markdown(
+ """
+ <style>
+ /* Minimal reconstruction: the original rules were lost in extraction. */
+ .conversation-box {
+ max-height: 300px;
+ overflow-y: auto;
+ padding: 0.5rem;
+ border: 1px solid var(--secondary-background-color);
+ border-radius: 0.5rem;
+ background-color: var(--secondary-background-color);
+ }
+ </style>
+ """,
+ unsafe_allow_html=True,
+)
+
+# ---- Sidebar with Authentication and Health Checks ---- #
+# Show authentication first
+show_auth_sidebar()
+
+with st.sidebar:
+ st.header("System Health")
+ logger.debug("Loading system health sidebar...")
+
+ with st.expander("Service Status", expanded=True):
+ # Get system health from backend
+ with st.spinner("Checking system health..."):
+ health_data = get_system_health()
+
+ if health_data.get("overall_healthy", False):
+ st.success(f"System Status: {health_data.get('status', 'Unknown').title()}")
+ logger.info("System health check passed")
+ else:
+ st.error(f"System Status: {health_data.get('status', 'Unknown').title()}")
+ logger.warning(f"System health check failed: {health_data.get('error', 'Unknown error')}")
+
+ # Show individual services
+ services = health_data.get("services", {})
+ for service_name, service_info in services.items():
+ status_text = service_info.get("status", "Unknown")
+ st.write(f"**{service_name.title()}:** {status_text}")
+ logger.debug(f"Service {service_name}: {status_text}")
+
+ # Show additional info if available
+ if "models" in service_info:
+ st.caption(f"Models available: {service_info['models']}")
+ logger.debug(f"Service {service_name} models: {service_info['models']}")
+ if "uri" in service_info:
+ st.caption(f"URI: {service_info['uri']}")
+ logger.debug(f"Service {service_name} URI: {service_info['uri']}")
+
+ if st.button("Refresh Health Check"):
+ logger.info("Manual health check refresh requested")
+ st.cache_data.clear()
+ st.rerun()
+
+ st.divider()
+
+ # Close Conversation Section
+ st.header("Close Conversation")
+ logger.debug("Loading close conversation section...")
+
+ with st.expander("Active Clients & Close Conversation", expanded=True):
+ # Get active clients
+ logger.debug("Fetching active clients...")
+ active_clients_data = get_data("/api/active_clients", require_auth=True)
+ clients = active_clients_data["clients"] if active_clients_data and active_clients_data.get("clients") else []
+
+ if clients:
+ logger.info(f"Found {len(clients)} accessible clients")
+
+ # Check if user is authenticated to show appropriate messages
+ if st.session_state.get('authenticated', False):
+ user_info = st.session_state.get('user_info', {})
+ is_admin = user_info.get('is_superuser', False) if isinstance(user_info, dict) else False
+
+ if not is_admin:
+ st.caption("You can only see and manage your own conversations.")
+
+ # Show active clients with conversation status
+ for client_info in clients:
+ client_id = client_info.get('client_id')
+ logger.debug(f"Processing client: {client_id} - Active conversation: {client_info.get('has_active_conversation', False)}")
+
+ col1, col2 = st.columns([2, 1])
+
+ with col1:
+ if client_info.get("has_active_conversation", False):
+ st.write(f"**{client_id}** (Active conversation)")
+ if client_info.get("current_audio_uuid"):
+ st.caption(f"UUID: {client_info['current_audio_uuid'][:8]}...")
+ logger.debug(f"Client {client_id} has active conversation with UUID: {client_info['current_audio_uuid']}")
+ else:
+ st.write(f"**{client_id}** (No active conversation)")
+ logger.debug(f"Client {client_id} has no active conversation")
+
+ with col2:
+ if client_info.get("has_active_conversation", False):
+ close_btn = st.button(
+ "Close",
+ key=f"close_{client_id}",
+ help=f"Close current conversation for {client_id}",
+ type="secondary"
+ )
+
+ if close_btn:
+ logger.info(f"Closing conversation for client: {client_id}")
+ result = post_data(f"/api/conversations/{client_id}/close", require_auth=True)
+ if result:
+ st.success(f"Conversation closed for {client_id}")
+ logger.info(f"Successfully closed conversation for {client_id}")
+ st.rerun()
+ else:
+ st.error(f"Failed to close conversation for {client_id}")
+ logger.error(f"Failed to close conversation for {client_id}")
+ else:
+ st.caption("No active conversation")
+
+ if len(clients) > 0:
+ st.info(f"**Total accessible clients:** {active_clients_data.get('active_clients_count', 0)}")
+ else:
+ if st.session_state.get('authenticated', False):
+ st.info("No active clients found for your account.")
+ st.markdown("""
+ **To see active clients here:**
+ 1. Connect an audio client using your user ID
+ 2. Make sure to include your authentication token in the WebSocket connection
+ 3. Use the format: `ws://localhost:8000/ws?user_id=YOUR_USER_ID&token=YOUR_TOKEN`
+ """)
+ else:
+ st.warning("Please authenticate to view your active clients.")
+ logger.info("No active clients found")
+
+ st.divider()
+
+ # Configuration Info
+ with st.expander("Configuration"):
+ logger.debug("Loading configuration info...")
+ health_data = get_system_health()
+ config = health_data.get("config", {})
+
+ st.code(f"""
+Backend API: {BACKEND_API_URL}
+Backend Public: {BACKEND_PUBLIC_URL}
+Active Clients: {config.get('active_clients', 'Unknown')}
+MongoDB URI: {config.get('mongodb_uri', 'Unknown')[:30]}...
+Ollama URL: {config.get('ollama_url', 'Unknown')}
+Qdrant URL: {config.get('qdrant_url', 'Unknown')}
+ASR URI: {config.get('asr_uri', 'Unknown')}
+Chunk Directory: {config.get('chunk_dir', 'Unknown')}
+ """)
+
+ # Audio connectivity test
+ st.write("**Audio Endpoint Test:**")
+ try:
+ test_url = f"{BACKEND_PUBLIC_URL}/audio/"
+ # requests is already imported at module scope; no local import needed
+ response = requests.head(test_url, timeout=2)
+ if response.status_code in [200, 404]: # 404 is OK for directory listing
+ st.success(f"Audio endpoint reachable: {test_url}")
+ else:
+ st.error(f"Audio endpoint issue (HTTP {response.status_code}): {test_url}")
+ except Exception as e:
+ st.error(f"Cannot reach audio endpoint: {e}")
+ st.caption(f"Trying URL: {BACKEND_PUBLIC_URL}/audio/")
+
+ # Manual override option for audio URL
+ st.write("**Audio URL Override:**")
+ if st.button("Fix Audio URLs"):
+ # Allow user to manually set the correct public URL
+ st.session_state['show_url_override'] = True
+
+ if st.session_state.get('show_url_override', False):
+ custom_url = st.text_input(
+ "Custom Backend Public URL",
+ value=BACKEND_PUBLIC_URL,
+ help="Enter the URL that your browser can access (e.g., http://100.99.62.5:8000)"
+ )
+ if st.button("Apply Custom URL"):
+ st.session_state['custom_backend_url'] = custom_url
+ st.session_state['show_url_override'] = False
+ st.success(f"Audio URLs will now use: {custom_url}")
+ st.rerun()
+
+ logger.debug(f"Configuration displayed - Backend API: {BACKEND_API_URL}")
+
+# Show warning if system is unhealthy
+health_data = get_system_health()
+if not health_data.get("overall_healthy", False):
+ st.error("Some critical services are unavailable. The dashboard may not function properly.")
+ logger.warning("System is unhealthy - some services unavailable")
+
+# Show authentication status and guidance
+if not st.session_state.get('authenticated', False):
+ st.info("**Authentication Required:** Some features require authentication. Please login using the sidebar to access user management, protected conversations, and admin functions.")
+else:
+ user_info = st.session_state.get('user_info', {})
+ st.success(f"**Authenticated as:** {user_info.get('name', 'Unknown User')} - You have access to all features.")
+
+# ---- Main Content ---- #
+logger.info("Loading main dashboard tabs...")
+# Check if user is admin to show debug tab
+is_admin = False
+if st.session_state.get('authenticated', False):
+ user_info = st.session_state.get('user_info', {})
+ if isinstance(user_info, dict):
+ is_admin = user_info.get('is_superuser', False)
+
+ # Check if the token has superuser privileges by trying an admin endpoint
+ if not is_admin:
+ try:
+ test_response = get_data("/api/users", require_auth=True)
+ if test_response and isinstance(test_response, list) and len(test_response) > 0:
+ # Find the current user in the response
+ current_user_email = user_info.get('email')
+ for user in test_response:
+ if user.get('email') == current_user_email and user.get('is_superuser'):
+ is_admin = True
+ break
+ logger.info(f"Admin test via /api/users: response_length={len(test_response) if test_response else 0}, is_admin={is_admin}")
+ except Exception as e:
+ logger.warning(f"Admin test failed: {e}")
+
+# Debug: Show admin detection status
+if st.session_state.get('authenticated', False):
+ user_info = st.session_state.get('user_info', {})
+ st.sidebar.caption(f"Admin status: {'Admin' if is_admin else 'Regular user'}")
+ # Add debug info to help troubleshoot
+ with st.sidebar.expander("Debug User Info", expanded=False):
+ st.write("User Info Type:", type(user_info))
+ if isinstance(user_info, dict):
+ st.write("is_superuser value:", user_info.get('is_superuser', 'NOT_FOUND'))
+ st.write("All user_info keys:", list(user_info.keys()) if user_info else "Empty dict")
+ st.write("Session authenticated:", st.session_state.get('authenticated', False))
+ st.write("Final is_admin:", is_admin)
+
+# Create tabs based on admin status
+if is_admin:
+ tab_convos, tab_mem, tab_users, tab_manage, tab_debug = st.tabs(["Conversations", "Memories", "User Management", "Conversation Management", "System State"])
+else:
+ tab_convos, tab_mem, tab_users, tab_manage = st.tabs(["Conversations", "Memories", "User Management", "Conversation Management"])
+ tab_debug = None # Set to None for non-admin users
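The branch above gives admins a fifth System State tab on top of the standard four. The selection can be sketched as a pure function (hypothetical helper, for illustration only):

```python
def tab_labels(is_admin: bool) -> list[str]:
    """Tab labels the dashboard shows, depending on the admin check."""
    labels = ["Conversations", "Memories", "User Management", "Conversation Management"]
    if is_admin:
        labels.append("System State")  # admin-only debug tab
    return labels
```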
+
+with tab_convos:
+ logger.debug("Loading conversations tab...")
+ st.header("Latest Conversations")
+
+ # Initialize session state for refresh tracking
+ if 'refresh_timestamp' not in st.session_state:
+ st.session_state.refresh_timestamp = 0
+
+ # Add debug mode toggle
+ col1, col2 = st.columns([3, 1])
+ with col1:
+ if st.button("Refresh Conversations"):
+ logger.info("Manual conversation refresh requested")
+ st.session_state.refresh_timestamp = int(time.time())
+ st.session_state.refresh_random = random.randint(1000, 9999)
+ st.rerun()
+ with col2:
+ debug_mode = st.checkbox("Debug Mode",
+ help="Show original audio files instead of cropped versions",
+ key="debug_mode")
+ if debug_mode:
+ logger.debug("Debug mode enabled")
+
+ # Generate cache-busting parameter based on session state
+ if st.session_state.refresh_timestamp > 0:
+ random_component = getattr(st.session_state, 'refresh_random', 0)
+ cache_buster = f"?t={st.session_state.refresh_timestamp}&r={random_component}"
+ st.info("Audio files refreshed - cache cleared for latest versions")
+ logger.info("Audio cache busting applied")
+ else:
+ cache_buster = ""
+
+ logger.debug("Fetching conversations data...")
+ conversations = get_data("/api/conversations", require_auth=True)
+
+ if conversations:
+ logger.info(f"Loaded {len(conversations) if isinstance(conversations, list) else 'grouped'} conversations")
+
+ # Check if conversations is the new grouped format or old format
+ if isinstance(conversations, dict) and "conversations" in conversations:
+ # New grouped format
+ logger.debug("Processing conversations in new grouped format")
+ conversations_data = conversations["conversations"]
+
+ for client_id, client_conversations in conversations_data.items():
+ logger.debug(f"Processing conversations for client: {client_id} ({len(client_conversations)} conversations)")
+ st.subheader(client_id)
+
+ for convo in client_conversations:
+ logger.debug(f"Processing conversation: {convo.get('audio_uuid', 'unknown')}")
+
+ col1, col2 = st.columns([1, 4])
+ with col1:
+ # Format timestamp for better readability
+ ts = datetime.fromtimestamp(convo['timestamp'])
+ st.write(f"**Timestamp:**")
+ st.write(ts.strftime('%Y-%m-%d %H:%M:%S'))
+
+ # Show Audio UUID
+ audio_uuid = convo.get("audio_uuid", "N/A")
+ st.write(f"**Audio UUID:**")
+ st.code(audio_uuid, language=None)
+
+ # Show identified speakers
+ speakers = convo.get("speakers_identified", [])
+ if speakers:
+ st.write("**Speakers:**")
+ for speaker in speakers:
+ st.write(f"`{speaker}`")
+ logger.debug(f"Speakers identified: {speakers}")
+
+ # Show audio duration info if available
+ cropped_duration = convo.get("cropped_duration")
+ if cropped_duration:
+ st.write(f"**Cropped Duration:**")
+ st.write(f"{cropped_duration:.1f}s")
+
+ # Show speech segments count
+ speech_segments = convo.get("speech_segments", [])
+ if speech_segments:
+ st.write(f"**Speech Segments:**")
+ st.write(f"{len(speech_segments)} segments")
+ logger.debug(f"Speech segments: {len(speech_segments)}")
+
+ with col2:
+ # Display conversation transcript with new format
+ transcript = convo.get("transcript", [])
+ if transcript:
+ logger.debug(f"Displaying transcript with {len(transcript)} segments")
+ st.write("**Conversation:**")
+ conversation_text = ""
+ for segment in transcript:
+ speaker = segment.get("speaker", "Unknown")
+ text = segment.get("text", "")
+ start_time = segment.get("start", 0.0)
+ end_time = segment.get("end", 0.0)
+
+ # Format timing if available
+ timing_info = ""
+ if start_time > 0 or end_time > 0:
+ timing_info = f" [{start_time:.1f}s - {end_time:.1f}s]"
+
+ conversation_text += f"{speaker}{timing_info}: {text}\n"
+
+ # Display in a scrollable container with max height
+ st.markdown(
+ f'<div class="conversation-box">{conversation_text}</div>', # wrapper markup reconstructed; class name assumed
+ unsafe_allow_html=True
+ )
+
+ # Smart audio display logic
+ audio_path = convo.get("audio_path")
+ cropped_audio_path = convo.get("cropped_audio_path")
+
+ if audio_path:
+ # Determine which audio to show
+ if debug_mode:
+ # Debug mode: always show original
+ selected_audio_path = audio_path
+ audio_label = "**Original Audio** (Debug Mode)"
+ logger.debug(f"Debug mode: showing original audio: {audio_path}")
+ elif cropped_audio_path:
+ # Normal mode: prefer cropped if available
+ selected_audio_path = cropped_audio_path
+ audio_label = "**Cropped Audio** (Silence Removed)"
+ logger.debug(f"Normal mode: showing cropped audio: {cropped_audio_path}")
+ else:
+ # Fallback: show original if no cropped version
+ selected_audio_path = audio_path
+ audio_label = "**Original Audio** (No cropped version available)"
+ logger.debug(f"Fallback: showing original audio (no cropped version): {audio_path}")
+
+ # Display audio with label and cache-busting
+ st.write(audio_label)
+ # Use custom URL if set, otherwise use detected URL
+ backend_url = st.session_state.get('custom_backend_url', BACKEND_PUBLIC_URL)
+ audio_url = f"{backend_url}/audio/{selected_audio_path}{cache_buster}"
+
+ # Test audio accessibility (requests is already imported at module scope)
+ try:
+ test_response = requests.head(audio_url, timeout=2)
+ if test_response.status_code == 200:
+ st.audio(audio_url, format="audio/wav")
+ logger.debug(f"Audio URL accessible: {audio_url}")
+ else:
+ st.error(f"Audio file not accessible (HTTP {test_response.status_code})")
+ st.code(f"URL: {audio_url}")
+ logger.error(f"Audio URL not accessible: {audio_url} (HTTP {test_response.status_code})")
+ except Exception as e:
+ st.error(f"Cannot reach audio file: {e}")
+ st.code(f"URL: {audio_url}")
+ logger.error(f"Audio URL error: {audio_url} - {e}")
+
+ # Show additional info in debug mode or when both versions exist
+ if debug_mode and cropped_audio_path:
+ st.caption(f"Cropped version available: {cropped_audio_path}")
+ elif not debug_mode and cropped_audio_path:
+ st.caption("Enable debug mode to hear original with silence")
+
+ # Display memory information if available
+ memories = convo.get("memories", [])
+ if memories:
+ st.write("**Memories Created:**")
+ memory_count = len(memories)
+ st.write(f"{memory_count} {'memories' if memory_count != 1 else 'memory'} extracted from this conversation")
+
+ # Show memory details in an expandable section
+ with st.expander(f"View Memory Details ({memory_count} items)", expanded=False):
+ # Fetch actual memory content from the API
+ user_memories_response = get_data("/api/memories", require_auth=True)
+ memory_contents = {}
+
+ if user_memories_response and "memories" in user_memories_response:
+ for mem in user_memories_response["memories"]:
+ memory_contents[mem.get("id")] = mem.get("memory", "No content available")
+
+ for i, memory in enumerate(memories):
+ memory_id = memory.get("memory_id", "Unknown")
+ status = memory.get("status", "unknown")
+ created_at = memory.get("created_at", "Unknown")
+
+ # Get actual memory content
+ memory_text = memory_contents.get(memory_id, "Memory content not found")
+
+ # Display each memory with content
+ st.write(f"**Memory {i+1}:**")
+
+ # Show memory content in a highlighted box
+ if memory_text and memory_text != "Memory content not found" and memory_text != "No content available":
+ st.info(memory_text)
+ else:
+ st.warning(f"ID: `{memory_id}`")
+ st.caption("Memory content not available - this may be a transcript-based fallback")
+
+ st.caption(f"Created: {created_at}")
+
+ # Show status badge
+ if status == "created":
+ st.success(status)
+ else:
+ st.info(status)
+
+ if i < len(memories) - 1: # Add separator between memories
+ st.markdown("---")
+ else:
+ # Show when no memories are available
+ if convo.get("has_memory") is False:
+ st.caption("No memories extracted from this conversation yet")
+
+ st.divider()
+ else:
+ # Old format - single list of conversations
+ logger.debug("Processing conversations in old format")
+ for convo in conversations:
+ logger.debug(f"Processing conversation: {convo.get('audio_uuid', 'unknown')}")
+
+ col1, col2 = st.columns([1, 4])
+ with col1:
+ # Format timestamp for better readability
+ ts = datetime.fromtimestamp(convo['timestamp'])
+ st.write(f"**Timestamp:**")
+ st.write(ts.strftime('%Y-%m-%d %H:%M:%S'))
+
+ # Show client_id with better formatting
+ client_id = convo.get('client_id', 'N/A')
+ if client_id.startswith('client_'):
+ st.write(f"**Client ID:**")
+ st.write(f"`{client_id}`")
+ else:
+ st.write(f"**User ID:**")
+ st.write(f"`{client_id}`")
+
+ # Show Audio UUID
+ audio_uuid = convo.get("audio_uuid", "N/A")
+ st.write(f"**Audio UUID:**")
+ st.code(audio_uuid, language=None)
+
+ # Show identified speakers
+ speakers = convo.get("speakers_identified", [])
+ if speakers:
+ st.write("**Speakers:**")
+ for speaker in speakers:
+ st.write(f"`{speaker}`")
+
+ with col2:
+ # Display conversation transcript with new format
+ transcript = convo.get("transcript", [])
+ if transcript:
+ logger.debug(f"Displaying transcript with {len(transcript)} segments")
+ st.write("**Conversation:**")
+ conversation_text = ""
+ for segment in transcript:
+ speaker = segment.get("speaker", "Unknown")
+ text = segment.get("text", "")
+ start_time = segment.get("start", 0.0)
+ end_time = segment.get("end", 0.0)
+
+ # Format timing if available
+ timing_info = ""
+ if start_time > 0 or end_time > 0:
+ timing_info = f" [{start_time:.1f}s - {end_time:.1f}s]"
+
+ conversation_text += f"{speaker}{timing_info}: {text}\n"
+
+ # Display in a scrollable container with max height
+ st.markdown(
+ f'<div class="conversation-box">{conversation_text}</div>', # wrapper markup reconstructed; class name assumed
+ unsafe_allow_html=True
+ )
+ else:
+ # Fallback for old format
+ old_transcript = convo.get("transcription", "No transcript available.")
+ st.text_area("Transcription", old_transcript, height=150, disabled=True, key=f"transcript_{convo['_id']}")
+
+ # Smart audio display logic (same as above)
+ audio_path = convo.get("audio_path")
+ cropped_audio_path = convo.get("cropped_audio_path")
+
+ if audio_path:
+ # Determine which audio to show
+ if debug_mode:
+ # Debug mode: always show original
+ selected_audio_path = audio_path
+ audio_label = "**Original Audio** (Debug Mode)"
+ logger.debug(f"Debug mode: showing original audio: {audio_path}")
+ elif cropped_audio_path:
+ # Normal mode: prefer cropped if available
+ selected_audio_path = cropped_audio_path
+ audio_label = "**Cropped Audio** (Silence Removed)"
+ logger.debug(f"Normal mode: showing cropped audio: {cropped_audio_path}")
+ else:
+ # Fallback: show original if no cropped version
+ selected_audio_path = audio_path
+ audio_label = "**Original Audio** (No cropped version available)"
+ logger.debug(f"Fallback: showing original audio (no cropped version): {audio_path}")
+
+ # Display audio with label and cache-busting
+ st.write(audio_label)
+ # Use custom URL if set, otherwise use detected URL
+ backend_url = st.session_state.get('custom_backend_url', BACKEND_PUBLIC_URL)
+ audio_url = f"{backend_url}/audio/{selected_audio_path}{cache_buster}"
+
+ # Test audio accessibility (requests is already imported at module scope)
+ try:
+ test_response = requests.head(audio_url, timeout=2)
+ if test_response.status_code == 200:
+ st.audio(audio_url, format="audio/wav")
+ logger.debug(f"Audio URL accessible: {audio_url}")
+ else:
+ st.error(f"Audio file not accessible (HTTP {test_response.status_code})")
+ st.code(f"URL: {audio_url}")
+ logger.error(f"Audio URL not accessible: {audio_url} (HTTP {test_response.status_code})")
+ except Exception as e:
+ st.error(f"Cannot reach audio file: {e}")
+ st.code(f"URL: {audio_url}")
+ logger.error(f"Audio URL error: {audio_url} - {e}")
+
+ # Show additional info in debug mode or when both versions exist
+ if debug_mode and cropped_audio_path:
+ st.caption(f"Cropped version available: {cropped_audio_path}")
+ elif not debug_mode and cropped_audio_path:
+ st.caption("Enable debug mode to hear original with silence")
+
+ # Display memory information if available (same as grouped format)
+ memories = convo.get("memories", [])
+ if memories:
+ st.write("**Memories Created:**")
+ memory_count = len(memories)
+ st.write(f"{memory_count} {'memories' if memory_count != 1 else 'memory'} extracted from this conversation")
+
+ # Show memory details in an expandable section
+ with st.expander(f"View Memory Details ({memory_count} items)", expanded=False):
+ # Fetch actual memory content from the API
+ user_memories_response = get_data("/api/memories", require_auth=True)
+ memory_contents = {}
+
+ if user_memories_response and "memories" in user_memories_response:
+ for mem in user_memories_response["memories"]:
+ memory_contents[mem.get("id")] = mem.get("memory", "No content available")
+
+ for i, memory in enumerate(memories):
+ memory_id = memory.get("memory_id", "Unknown")
+ status = memory.get("status", "unknown")
+ created_at = memory.get("created_at", "Unknown")
+
+ # Get actual memory content
+ memory_text = memory_contents.get(memory_id, "Memory content not found")
+
+ # Display each memory with content
+ st.write(f"**Memory {i+1}:**")
+
+ # Show memory content in a highlighted box
+ if memory_text and memory_text != "Memory content not found" and memory_text != "No content available":
+ st.info(memory_text)
+ else:
+ st.warning(f"ID: `{memory_id}`")
+ st.caption("Memory content not available - this may be a transcript-based fallback")
+
+ st.caption(f"Created: {created_at}")
+
+ # Show status badge
+ if status == "created":
+ st.success(status)
+ else:
+ st.info(status)
+
+ if i < len(memories) - 1: # Add separator between memories
+ st.markdown("---")
+ else:
+ # Show when no memories are available
+ if convo.get("has_memory") is False:
+ st.caption("๐ No memories extracted from this conversation yet")
+
+ st.divider()
+ elif conversations is not None:
+ st.info("No conversations found. The backend is connected but the database might be empty.")
+ logger.info("No conversations found in database")
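Both conversation formats above apply the same audio-selection rule: debug mode always plays the original recording, otherwise the silence-cropped file is preferred when it exists. As a standalone sketch (hypothetical helper, not part of the tab code):

```python
def select_audio(audio_path, cropped_audio_path, debug_mode: bool):
    """Return the path the player should use, or None when nothing was recorded."""
    if not audio_path:
        return None  # nothing to play
    if debug_mode or not cropped_audio_path:
        return audio_path  # original recording (debug mode, or no cropped version)
    return cropped_audio_path  # silence removed
```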
+
+with tab_mem:
+ logger.debug("Loading memories tab...")
+ st.header("Memories & Action Items")
+
+ # Use session state for selected user if available
+ default_user = st.session_state.get('selected_user', '')
+
+ # User selection for memories and action items
+ col1, col2 = st.columns([2, 1])
+ with col1:
+ user_id_input = st.text_input("Enter username to view memories & action items:",
+ value=default_user,
+ placeholder="e.g., john_doe, alice123")
+ with col2:
+ st.write("") # Spacer
+ refresh_mem_btn = st.button("Load Data", key="refresh_memories")
+
+ # Clear the session state after using it
+ if 'selected_user' in st.session_state:
+ del st.session_state['selected_user']
+
+ if refresh_mem_btn:
+ logger.info("Manual memories refresh requested")
+ st.rerun()
+
+ # Get memories and action items based on user selection
+ if user_id_input.strip():
+ logger.info(f"Loading data for user: {user_id_input.strip()}")
+ st.info(f"Showing data for user: **{user_id_input.strip()}**")
+
+ # Load both memories and action items
+ col1, col2 = st.columns([1, 1])
+
+ with col1:
+ with st.spinner("Loading memories..."):
+ logger.debug(f"Fetching memories for user: {user_id_input.strip()}")
+ memories_response = get_data(f"/api/memories?user_id={user_id_input.strip()}", require_auth=True)
+
+ with col2:
+ with st.spinner("Loading action items..."):
+ logger.debug(f"Fetching action items for user: {user_id_input.strip()}")
+ action_items_response = get_data(f"/api/action-items?user_id={user_id_input.strip()}", require_auth=True)
+
+ # Handle the API response format with "results" wrapper for memories
+ if memories_response and isinstance(memories_response, dict) and "results" in memories_response:
+ memories = memories_response["results"]
+ logger.debug(f"Memories response has 'results' wrapper, extracted {len(memories)} memories")
+ else:
+ memories = memories_response
+ logger.debug(f"Memories response format: {type(memories_response)}")
+
+ # Handle action items response
+ if action_items_response and isinstance(action_items_response, dict) and "action_items" in action_items_response:
+ action_items = action_items_response["action_items"]
+ logger.debug(f"Action items response has 'action_items' wrapper, extracted {len(action_items)} items")
+ else:
+ action_items = action_items_response if action_items_response else []
+ logger.debug(f"Action items response format: {type(action_items_response)}")
+ else:
+ # Show instruction to enter a username
+ memories = None
+ action_items = None
+ logger.debug("No user ID provided, showing instructions")
+ st.info("Please enter a username above to view their memories and action items.")
+ st.markdown("**Tip:** You can find existing usernames in the 'User Management' tab.")
+
+ # Admin Debug Section - Show before regular memories
+ if st.session_state.get('authenticated', False):
+ user_info = st.session_state.get('user_info', {})
+
+ # Check if user is admin (look for is_superuser in different possible locations)
+ is_admin = False
+ if isinstance(user_info, dict):
+ is_admin = user_info.get('is_superuser', False)
+
+ # Alternative: Check if the token has superuser privileges by trying an admin endpoint
+ if not is_admin:
+ try:
+ test_response = get_data("/api/users", require_auth=True)
+ is_admin = test_response is not None
+ except Exception:
+ pass
+
+ if is_admin:
+ st.subheader("Admin Debug: All Memories")
+ logger.debug("Admin user detected, showing admin debug section")
+
+ col1, col2 = st.columns([1, 1])
+ with col1:
+ if st.button("View All User Memories (Admin)", key="admin_all_memories"):
+ logger.info("Admin: Loading all memories for all users")
+ st.session_state['show_admin_memories'] = True
+
+ with col2:
+ if st.session_state.get('show_admin_memories', False):
+ if st.button("Hide Admin View", key="hide_admin_views"):
+ st.session_state['show_admin_memories'] = False
+ st.rerun()
+
+ # Show admin memories view if requested
+ if st.session_state.get('show_admin_memories', False):
+ with st.spinner("Loading memories..."):
+ logger.debug("Fetching memories for admin view")
+
+ # Use the working user memories endpoint since admin is a user too
+ user_memories_response = get_data("/api/memories", require_auth=True)
+
+ if user_memories_response and "memories" in user_memories_response:
+ # Get current user info
+ user_info = st.session_state.get('user_info', {})
+ user_id = user_info.get('id', 'unknown')
+ user_email = user_info.get('email', 'unknown')
+
+ memories = user_memories_response["memories"]
+
+ # Format as admin response for compatibility with existing UI
+ admin_memories_response = {
+ "memories": [
+ {
+ "id": memory.get("id"),
+ "memory": memory.get("memory", "No content"),
+ "user_id": user_id,
+ "owner_email": user_email,
+ "created_at": memory.get("created_at"),
+ "client_id": memory.get("metadata", {}).get("client_id", "unknown"),
+ "metadata": memory.get("metadata", {})
+ }
+ for memory in memories
+ ],
+ "user_memories": {
+ user_id: [
+ {
+ "memory": memory.get("memory", "No content"),
+ "created_at": memory.get("created_at"),
+ "client_id": memory.get("metadata", {}).get("client_id", "unknown"),
+ "owner_email": user_email
+ }
+ for memory in memories
+ ]
+ } if memories else {},
+ "total_memories": len(memories),
+ "total_users": 1 if memories else 0,
+ "stats": {
+ "users_with_memories": [user_id] if memories else [],
+ "client_ids_with_memories": []
+ }
+ }
+ else:
+ admin_memories_response = None
+
+ if admin_memories_response:
+ logger.info(f"📊 Admin memories: Loaded {admin_memories_response.get('total_memories', 0)} memories from {admin_memories_response.get('total_users', 0)} users")
+
+ # Display summary stats including debug info
+ col1, col2, col3 = st.columns(3)
+ with col1:
+ st.metric("Total Users", admin_memories_response.get('total_users', 0))
+ with col2:
+ st.metric("Total Memories", admin_memories_response.get('total_memories', 0))
+ with col3:
+ stats = admin_memories_response.get('stats', {})
+ st.metric("Debug Tracker", "✅" if stats.get('debug_tracker_initialized') else "❌")
+
+ st.divider()
+
+ # Add view toggle
+ view_mode = st.radio(
+ "View Mode:",
+ ["👥 By User", "📋 All Memories"],
+ horizontal=True
+ )
+
+ if view_mode == "👥 By User":
+ # Display memories grouped by user
+ user_memories = admin_memories_response.get('user_memories', {})
+ stats = admin_memories_response.get('stats', {})
+
+ if user_memories:
+ st.write("### 👥 Memories by User")
+
+ # Show debug info
+ users_with_memories = stats.get('users_with_memories', [])
+ client_ids_with_memories = stats.get('client_ids_with_memories', [])
+
+ if users_with_memories:
+ st.caption(f"Found users: {', '.join(users_with_memories[:5])}{'...' if len(users_with_memories) > 5 else ''}")
+
+ for user_id, user_memory_list in user_memories.items():
+ memory_count = len(user_memory_list)
+
+ # Get user info from first memory if available
+ user_email = "Unknown"
+ if user_memory_list:
+ user_email = user_memory_list[0].get('owner_email', user_id)
+
+ # User header with collapsible section
+ with st.expander(f"👤 {user_email} ({user_id}) - {memory_count} memories", expanded=False):
+ if user_memory_list:
+ # Show first 10 memories for this user
+ memories_to_show = user_memory_list[:10]
+
+ for i, memory in enumerate(memories_to_show):
+ memory_text = memory.get('memory', 'No content')
+ created_at = memory.get('created_at', 'Unknown')
+ client_id = memory.get('client_id', 'Unknown')
+
+ st.write(f"**{i+1}.** {memory_text[:200]}{'...' if len(memory_text) > 200 else ''}")
+ st.caption(f"📅 {created_at} | 📱 {client_id}")
+
+ if i < len(memories_to_show) - 1:
+ st.markdown("---")
+
+ if memory_count > 10:
+ st.info(f"... and {memory_count - 10} more memories")
+ else:
+ st.info("No memories found for this user.")
+
+ if client_ids_with_memories:
+ st.write("### 🔍 Debug: Client IDs Found")
+ st.caption(f"Client IDs: {', '.join(client_ids_with_memories[:10])}{'...' if len(client_ids_with_memories) > 10 else ''}")
+
+ else:
+ st.info("No memories found across all users.")
+
+ else:
+ # Display all memories in flat view
+ memories = admin_memories_response.get('memories', [])
+
+ if memories:
+ st.write("### 🧠 All User Memories")
+
+ # Create a searchable/filterable view
+ search_term = st.text_input("🔍 Search memories", placeholder="Enter text to search...")
+
+ if search_term:
+ filtered_memories = [
+ m for m in memories
+ if search_term.lower() in m.get('memory', '').lower() or
+ search_term.lower() in m.get('owner_email', '').lower() or
+ search_term.lower() in m.get('user_id', '').lower()
+ ]
+ st.caption(f"Showing {len(filtered_memories)} memories matching '{search_term}'")
+ else:
+ filtered_memories = memories
+ st.caption(f"Showing all {len(memories)} memories")
+
+ # Display memories in a nice format
+ for i, memory in enumerate(filtered_memories[:50]): # Limit to 50 for performance
+ with st.container():
+ # Memory header
+ col1, col2, col3 = st.columns([2, 1, 1])
+ with col1:
+ st.write(f"**Memory {i+1}**")
+ with col2:
+ st.caption(f"👤 {memory.get('owner_email', memory.get('user_id', 'Unknown'))}")
+ with col3:
+ st.caption(f"📅 {memory.get('created_at', 'Unknown')}")
+
+ # Memory content
+ memory_text = memory.get('memory', 'No content')
+ st.write(memory_text)
+
+ # Memory metadata
+ with st.expander("📋 Memory Details", expanded=False):
+ col1, col2 = st.columns(2)
+ with col1:
+ st.write(f"**User ID:** {memory.get('user_id', 'Unknown')}")
+ st.write(f"**Owner Email:** {memory.get('owner_email', 'Unknown')}")
+ st.write(f"**Client ID:** {memory.get('client_id', 'Unknown')}")
+ with col2:
+ st.write(f"**Audio UUID:** {memory.get('audio_uuid', 'Unknown')}")
+ st.write(f"**Memory ID:** {memory.get('id', memory.get('memory_id', 'Unknown'))}")
+ metadata = memory.get('metadata', {})
+ if metadata:
+ st.write(f"**Source:** {metadata.get('source', 'Unknown')}")
+
+ st.divider()
+
+ if len(filtered_memories) > 50:
+ st.info(f"Showing first 50 memories. Total: {len(filtered_memories)}")
+
+ else:
+ st.info("No memories found across all users.")
+
+ else:
+ logger.error("❌ Failed to load admin memories")
+ st.error("❌ Failed to load admin memories. You may not have admin privileges.")
+
+ st.divider()
+
+ # Display Memories Section
+ if memories is not None:
+ logger.debug("🧠 Displaying memories section...")
+ st.subheader("🧠 Discovered Memories")
+
+ if memories:
+ logger.info(f"🧠 Displaying {len(memories)} memories for user {user_id_input.strip()}")
+
+ # Add view options
+ col1, col2 = st.columns([3, 1])
+ with col1:
+ st.markdown(f"Found **{len(memories)}** memories for user **{user_id_input.strip()}**")
+ with col2:
+ view_mode = st.selectbox(
+ "View Mode:",
+ ["Standard View", "Transcript Analysis"],
+ key="memory_view_mode"
+ )
+
+ if view_mode == "Standard View":
+ # Original view
+ df = pd.DataFrame(memories)
+
+ # Make the dataframe more readable
+ if "created_at" in df.columns:
+ df['created_at'] = pd.to_datetime(df['created_at']).dt.strftime('%Y-%m-%d %H:%M:%S')
+
+ # Reorder and rename columns for clarity - handle both "memory" and "text" fields
+ display_cols = {
+ "id": "Memory ID",
+ "created_at": "Created At"
+ }
+
+ # Check which memory field exists and add it to display columns
+ if "memory" in df.columns:
+ display_cols["memory"] = "Memory"
+ logger.debug("🧠 Using 'memory' field for display")
+ elif "text" in df.columns:
+ display_cols["text"] = "Memory"
+ logger.debug("🧠 Using 'text' field for display")
+
+ # Filter for columns that exist in the dataframe
+ cols_to_display = [col for col in display_cols.keys() if col in df.columns]
+
+ if cols_to_display:
+ logger.debug(f"🧠 Displaying columns: {cols_to_display}")
+ st.dataframe(
+ df[cols_to_display].rename(columns=display_cols),
+ use_container_width=True,
+ hide_index=True
+ )
+ else:
+ logger.error(f"⚠️ Unexpected memory data format - missing expected fields. Available columns: {list(df.columns)}")
+ st.error("⚠️ Unexpected memory data format - missing expected fields")
+ st.write("Debug info - Available columns:", list(df.columns))
+
+ else: # Transcript Analysis View
+ with st.spinner("Loading memories with transcript analysis..."):
+ enriched_response = get_data(f"/api/memories/with-transcripts?user_id={user_id_input.strip()}", require_auth=True)
+
+ if enriched_response:
+ enriched_memories = enriched_response.get('memories', [])
+
+ if enriched_memories:
+ # Create enhanced dataframe for transcript analysis
+ analysis_data = []
+ for memory in enriched_memories:
+ analysis_data.append({
+ "Audio UUID": memory.get('audio_uuid', 'N/A')[:12] + "..." if memory.get('audio_uuid') else 'N/A',
+ "Memory Text": memory.get('memory_text', '')[:100] + "..." if len(memory.get('memory_text', '')) > 100 else memory.get('memory_text', ''),
+ "Transcript": memory.get('transcript', '')[:100] + "..." if memory.get('transcript') and len(memory.get('transcript', '')) > 100 else memory.get('transcript', 'N/A')[:100] if memory.get('transcript') else 'N/A',
+ "Transcript Chars": memory.get('transcript_length', 0),
+ "Memory Chars": memory.get('memory_length', 0),
+ "Compression %": f"{memory.get('compression_ratio', 0)}%",
+ "Client ID": memory.get('client_id', 'N/A'),
+ "Created": memory.get('created_at', 'N/A')[:19] if memory.get('created_at') else 'N/A'
+ })
+
+ # Display the enhanced table
+ if analysis_data:
+ analysis_df = pd.DataFrame(analysis_data)
+ st.dataframe(analysis_df, use_container_width=True, hide_index=True)
+
+ # Show detailed expandable views
+ st.subheader("🔍 Detailed Memory Analysis")
+
+ for i, memory in enumerate(enriched_memories):
+ audio_uuid = memory.get('audio_uuid', 'unknown')
+ memory_text = memory.get('memory_text', '')
+ transcript = memory.get('transcript', '')
+ compression_ratio = memory.get('compression_ratio', 0)
+
+ # Create meaningful title
+ title_text = memory_text[:50] + "..." if len(memory_text) > 50 else memory_text
+ if not title_text.strip():
+ title_text = f"Memory {i+1}"
+
+ with st.expander(f"🧠 {title_text} | {compression_ratio}% compression", expanded=False):
+ col1, col2 = st.columns(2)
+
+ with col1:
+ st.markdown("**🎤 Original Transcript**")
+ if transcript and transcript.strip():
+ st.text_area(
+ f"Transcript ({len(transcript)} chars):",
+ value=transcript,
+ height=200,
+ disabled=True,
+ key=f"transcript_{i}"
+ )
+ else:
+ st.info("No transcript available")
+
+ with col2:
+ st.markdown("**🧠 Extracted Memory**")
+ if memory_text and memory_text.strip():
+ st.text_area(
+ f"Memory ({len(memory_text)} chars):",
+ value=memory_text,
+ height=200,
+ disabled=True,
+ key=f"memory_text_{i}"
+ )
+ else:
+ st.warning("No memory text")
+
+ # Additional details
+ st.markdown("**📊 Metadata**")
+ col1, col2, col3 = st.columns(3)
+ with col1:
+ st.metric("Audio UUID", audio_uuid[:12] + "..." if audio_uuid and len(audio_uuid) > 12 else audio_uuid or "N/A")
+ with col2:
+ st.metric("Client ID", memory.get('client_id', 'N/A'))
+ with col3:
+ st.metric("User Email", memory.get('user_email', 'N/A'))
+ else:
+ st.info("No enriched memory data available")
+ else:
+ st.info("No memories with transcript data found")
+ else:
+ st.error("Failed to load enriched memory data")
+ else:
+ logger.info(f"🧠 No memories found for user {user_id_input.strip()}")
+ st.info("No memories found for this user.")
+
+ # Display Action Items Section
+ if action_items is not None:
+ logger.debug("🎯 Displaying action items section...")
+ st.subheader("🎯 Action Items")
+
+ if action_items:
+ logger.info(f"🎯 Displaying {len(action_items)} action items for user {user_id_input.strip()}")
+
+ # Status filter for action items
+ col1, col2, col3 = st.columns([2, 1, 1])
+ with col1:
+ status_filter = st.selectbox(
+ "Filter by status:",
+ options=["All", "open", "in_progress", "completed", "cancelled"],
+ index=0,
+ key="action_items_filter"
+ )
+ with col2:
+ show_stats = st.button("📊 Show Stats", key="show_action_stats")
+ with col3:
+ # Manual action item creation button
+ if st.button("➕ Add Item", key="add_action_item"):
+ logger.info("➕ Manual action item creation requested")
+ st.session_state['show_add_action_item'] = True
+
+ # Filter action items by status
+ if status_filter != "All":
+ filtered_items = [item for item in action_items if item.get('status') == status_filter]
+ logger.debug(f"🎯 Filtered action items by status '{status_filter}': {len(filtered_items)} items")
+ else:
+ filtered_items = action_items
+ logger.debug(f"🎯 Showing all action items: {len(filtered_items)} items")
+
+ # Show statistics if requested
+ if show_stats:
+ logger.info("📊 Action items statistics requested")
+ stats_response = get_data(f"/api/action-items/stats?user_id={user_id_input.strip()}", require_auth=True)
+ if stats_response and "statistics" in stats_response:
+ stats = stats_response["statistics"]
+ logger.debug(f"📊 Action items statistics: {stats}")
+
+ # Display stats in columns
+ col1, col2, col3, col4 = st.columns(4)
+ with col1:
+ st.metric("Total", stats["total"])
+ st.metric("Open", stats["open"])
+ with col2:
+ st.metric("In Progress", stats["in_progress"])
+ st.metric("Completed", stats["completed"])
+ with col3:
+ st.metric("Cancelled", stats["cancelled"])
+ st.metric("Overdue", stats.get("overdue", 0))
+ with col4:
+ st.write("**By Priority:**")
+ for priority, count in stats.get("by_priority", {}).items():
+ if count > 0:
+ st.write(f"โข {priority.title()}: {count}")
+
+ # Assignee breakdown
+ if stats.get("by_assignee"):
+ st.write("**By Assignee:**")
+ assignee_df = pd.DataFrame(list(stats["by_assignee"].items()), columns=["Assignee", "Count"])
+ st.dataframe(assignee_df, hide_index=True, use_container_width=True)
+ else:
+ logger.warning("📊 Action items statistics not available")
+
+ # Manual action item creation form
+ if st.session_state.get('show_add_action_item', False):
+ logger.debug("➕ Showing action item creation form")
+ with st.expander("➕ Create New Action Item", expanded=True):
+ with st.form("create_action_item"):
+ description = st.text_input("Description*:", placeholder="e.g., Send quarterly report to management")
+ col1, col2 = st.columns(2)
+ with col1:
+ assignee = st.text_input("Assignee:", placeholder="e.g., john_doe", value="unassigned")
+ priority = st.selectbox("Priority:", options=["high", "medium", "low", "not_specified"], index=1)
+ with col2:
+ due_date = st.text_input("Due Date:", placeholder="e.g., Friday, 2024-01-15", value="not_specified")
+ context = st.text_input("Context:", placeholder="e.g., Mentioned in team meeting")
+
+ submitted = st.form_submit_button("Create Action Item")
+
+ if submitted:
+ logger.info(f"➕ Creating action item for user {user_id_input.strip()}")
+ if description.strip():
+ create_data = {
+ "description": description.strip(),
+ "assignee": assignee.strip() if assignee.strip() else "unassigned",
+ "due_date": due_date.strip() if due_date.strip() else "not_specified",
+ "priority": priority,
+ "context": context.strip()
+ }
+
+ try:
+ logger.debug(f"📤 Creating action item with data: {create_data}")
+ response = requests.post(
+ f"{BACKEND_API_URL}/api/action-items",
+ json=create_data,
+ headers=get_auth_headers()
+ )
+ response.raise_for_status()
+ result = response.json()
+ st.success(f"✅ Action item created: {result['action_item']['description']}")
+ logger.info(f"✅ Action item created successfully: {result['action_item']['description']}")
+ st.session_state['show_add_action_item'] = False
+ st.rerun()
+ except requests.exceptions.RequestException as e:
+ logger.error(f"❌ Error creating action item: {e}")
+ st.error(f"Error creating action item: {e}")
+ else:
+ logger.warning("⚠️ Action item creation attempted without description")
+ st.error("Please enter a description for the action item")
+
+ if st.button("❌ Cancel", key="cancel_add_action"):
+ logger.debug("❌ Action item creation cancelled")
+ st.session_state['show_add_action_item'] = False
+ st.rerun()
+
+ # Display action items
+ if filtered_items:
+ logger.debug(f"🎯 Displaying {len(filtered_items)} filtered action items")
+ st.write(f"**Showing {len(filtered_items)} action items** (filtered by: {status_filter})")
+
+ for i, item in enumerate(filtered_items):
+ logger.debug(f"🎯 Processing action item {i+1}: {item.get('description', 'No description')[:50]}...")
+
+ with st.container():
+ # Create columns for action item display
+ col1, col2, col3 = st.columns([3, 1, 1])
+
+ with col1:
+ # Description with status badge
+ status = item.get('status', 'open')
+ status_emoji = {
+ 'open': '🔵',
+ 'in_progress': '🟡',
+ 'completed': '✅',
+ 'cancelled': '❌'
+ }.get(status, '🔵')
+
+ st.write(f"**{status_emoji} {item.get('description', 'No description')}**")
+
+ # Additional details
+ details = []
+ if item.get('assignee') and item.get('assignee') != 'unassigned':
+ details.append(f"👤 {item['assignee']}")
+ if item.get('due_date') and item.get('due_date') != 'not_specified':
+ details.append(f"📅 {item['due_date']}")
+ if item.get('priority') and item.get('priority') != 'not_specified':
+ priority_emoji = {'high': '🔴', 'medium': '🟡', 'low': '🟢'}.get(item['priority'], '⚪')
+ details.append(f"{priority_emoji} {item['priority']}")
+ if item.get('context'):
+ details.append(f"💭 {item['context']}")
+
+ if details:
+ st.caption(" | ".join(details))
+
+ # Creation info
+ created_at = item.get('created_at')
+ if created_at:
+ try:
+ if isinstance(created_at, (int, float)):
+ created_time = datetime.fromtimestamp(created_at)
+ else:
+ created_time = pd.to_datetime(created_at)
+ st.caption(f"Created: {created_time.strftime('%Y-%m-%d %H:%M:%S')}")
+ except Exception:
+ st.caption(f"Created: {created_at}")
+
+ with col2:
+ # Status update
+ new_status = st.selectbox(
+ "Status:",
+ options=["open", "in_progress", "completed", "cancelled"],
+ index=["open", "in_progress", "completed", "cancelled"].index(status),
+ key=f"status_{i}_{item.get('memory_id', i)}"
+ )
+
+ if new_status != status:
+ if st.button("Update", key=f"update_{i}_{item.get('memory_id', i)}"):
+ memory_id = item.get('memory_id')
+ if memory_id:
+ logger.info(f"🔄 Updating action item {memory_id} status from {status} to {new_status}")
+ try:
+ response = requests.put(
+ f"{BACKEND_API_URL}/api/action-items/{memory_id}",
+ json={"status": new_status},
+ headers=get_auth_headers()
+ )
+ response.raise_for_status()
+ st.success(f"Status updated to {new_status}")
+ logger.info("✅ Action item status updated successfully")
+ st.rerun()
+ except requests.exceptions.RequestException as e:
+ logger.error(f"❌ Error updating action item status: {e}")
+ st.error(f"Error updating status: {e}")
+ else:
+ logger.error("❌ No memory ID found for action item")
+ st.error("No memory ID found for this action item")
+
+ with col3:
+ # Delete button
+ if st.button("🗑️ Delete", key=f"delete_{i}_{item.get('memory_id', i)}", type="secondary"):
+ memory_id = item.get('memory_id')
+ if memory_id:
+ logger.info(f"🗑️ Deleting action item {memory_id}")
+ try:
+ response = requests.delete(f"{BACKEND_API_URL}/api/action-items/{memory_id}", headers=get_auth_headers())
+ response.raise_for_status()
+ st.success("Action item deleted")
+ logger.info("✅ Action item deleted successfully")
+ st.rerun()
+ except requests.exceptions.RequestException as e:
+ logger.error(f"❌ Error deleting action item: {e}")
+ st.error(f"Error deleting action item: {e}")
+ else:
+ logger.error("❌ No memory ID found for action item")
+ st.error("No memory ID found for this action item")
+
+ st.divider()
+
+ st.caption("💡 **Tip:** Action items are automatically extracted from conversations at the end of each session")
+ else:
+ if status_filter == "All":
+ logger.info(f"🎯 No action items found for user {user_id_input.strip()}")
+ st.info("No action items found for this user.")
+ else:
+ logger.info(f"🎯 No action items found with status '{status_filter}' for user {user_id_input.strip()}")
+ st.info(f"No action items found with status '{status_filter}' for this user.")
+ else:
+ logger.info(f"🎯 No action items found for user {user_id_input.strip()}")
+ st.info("No action items found for this user.")
+
+ # Show option to create manual action item even when none exist
+ if user_id_input.strip() and st.button("➕ Create First Action Item", key="create_first_item"):
+ logger.info("➕ Creating first action item for user")
+ st.session_state['show_add_action_item'] = True
+ st.rerun()
+
+with tab_users:
+ st.header("User Management")
+
+ # Create User Section
+ st.subheader("Create New User")
+ with st.form("create_user_form"):
+ st.write("Create a new user with an email and a temporary password.")
+ new_user_email = st.text_input("New User Email:", placeholder="e.g., john.doe@example.com")
+ new_user_password = st.text_input("Temporary Password:", type="password", value="changeme")
+ create_user_submitted = st.form_submit_button("Create User")
+
+ if create_user_submitted:
+ if new_user_email.strip() and new_user_password.strip():
+ create_data = {"email": new_user_email.strip(), "password": new_user_password.strip()}
+ # This endpoint requires authentication
+ result = post_data("/api/create_user", json_data=create_data, require_auth=True)
+ if result:
+ st.success(f"✅ User '{new_user_email.strip()}' created successfully!")
+ st.rerun()
+ # Note: Error handling for 409 Conflict (user exists) is now handled in post_data function
+ else:
+ st.error("❌ Please provide both email and password.")
+
+ st.divider()
+
+ # List Users Section
+ st.subheader("Existing Users")
+ col1, col2 = st.columns([1, 1])
+ with col1:
+ refresh_users_btn = st.button("Refresh Users", key="refresh_users")
+
+ if refresh_users_btn:
+ st.rerun()
+
+ users = get_data("/api/users", require_auth=True)
+
+ if users:
+ st.write(f"**Total Users:** {len(users)}")
+
+ # Debug: Show first user structure (temporary)
+ with st.expander("🔍 Debug: User Data Structure", expanded=False):
+ if users:
+ st.write("**First user data structure:**")
+ st.json(users[0])
+ st.caption("💡 This shows the actual fields returned by the API")
+
+ # Initialize session state for delete confirmation
+ if 'delete_confirmation' not in st.session_state:
+ st.session_state.delete_confirmation = {}
+
+ # Display users in a nice format
+ for index, user in enumerate(users):
+ # The API returns 'id' (ObjectId), 'email', 'display_name', etc.
+ # Use display_name if available, otherwise email, otherwise the ID
+ user_display = user.get('display_name') or user.get('email', 'Unknown User')
+ user_db_id = str(user.get('id', 'unknown')) # MongoDB ObjectId as string
+ # Create unique key using both user_db_id and index to avoid duplicates
+ unique_key = f"{user_db_id}_{index}"
+
+ col1, col2 = st.columns([3, 1])
+ with col1:
+ st.write(f"👤 **{user_display}**")
+ st.caption(f"Email: {user.get('email', 'No email')}")
+ st.caption(f"ID: {user_db_id}")
+
+ with col2:
+ # Check if we're in confirmation mode for this user (use db_id as key)
+ if user_db_id in st.session_state.delete_confirmation:
+ # Show confirmation dialog in a container
+ with st.container():
+ st.error("⚠️ **Confirm Deletion**")
+ st.write(f"Delete user **{user_display}** and optionally:")
+
+ # Checkboxes for what to delete
+ delete_conversations = st.checkbox(
+ "🗨️ Delete all conversations",
+ key=f"conv_{unique_key}",
+ help="Permanently delete all audio recordings and transcripts"
+ )
+ delete_memories = st.checkbox(
+ "🧠 Delete all memories",
+ key=f"mem_{unique_key}",
+ help="Permanently delete all extracted memories from conversations"
+ )
+
+ # Action buttons
+ col_cancel, col_confirm = st.columns([1, 1])
+
+ with col_cancel:
+ if st.button("❌ Cancel", key=f"cancel_{unique_key}", use_container_width=True, type="secondary"):
+ del st.session_state.delete_confirmation[user_db_id]
+ st.rerun()
+
+ with col_confirm:
+ if st.button("🗑️ Confirm Delete", key=f"confirm_{unique_key}", use_container_width=True, type="primary"):
+ # Build delete parameters - use MongoDB ObjectId
+ params = {
+ "user_id": user_db_id, # MongoDB ObjectId as string
+ "delete_conversations": delete_conversations,
+ "delete_memories": delete_memories
+ }
+
+ # This endpoint requires authentication
+ result = delete_data("/api/delete_user", params, require_auth=True)
+ if result:
+ deleted_data = result.get('deleted_data', {})
+ message = result.get('message', f"User '{user_display}' deleted")
+ st.success(message)
+
+ # Show detailed deletion info
+ if deleted_data.get('conversations_deleted', 0) > 0 or deleted_data.get('memories_deleted', 0) > 0:
+ st.info(f"📊 Deleted: {deleted_data.get('conversations_deleted', 0)} conversations, {deleted_data.get('memories_deleted', 0)} memories")
+
+ del st.session_state.delete_confirmation[user_db_id]
+ st.rerun()
+
+ if delete_conversations or delete_memories:
+ st.caption("⚠️ Selected data will be **permanently deleted** and cannot be recovered!")
+ else:
+ # Show normal delete button
+ delete_btn = st.button("🗑️ Delete", key=f"delete_{unique_key}", type="secondary")
+ if delete_btn:
+ st.session_state.delete_confirmation[user_db_id] = True
+ st.rerun()
+
+ st.divider()
+
+ elif users is not None:
+ st.info("No users found in the system.")
+
+ st.divider()
+
+ # Quick Actions Section
+ st.subheader("Quick Actions")
+ st.write("**View User Memories:**")
+ col1, col2 = st.columns([3, 1])
+ with col1:
+ quick_user_id = st.text_input("User ID to view memories:", placeholder="Enter user ID", key="quick_view_user")
+ with col2:
+ st.write("") # Spacer
+ view_memories_btn = st.button("View Memories", key="view_memories")
+
+ if view_memories_btn and quick_user_id.strip():
+ # Switch to memories tab with this user
+ st.session_state['selected_user'] = quick_user_id.strip()
+ st.info(f"Switch to the 'Memories' tab to view memories for user: {quick_user_id.strip()}")
+
+ # Tips section
+ st.subheader("💡 Tips")
+ st.markdown("""
+ - **User IDs** should be unique identifiers (e.g., usernames, email prefixes)
+ - Users are automatically created when they connect with audio if they don't exist
+ - **Delete Options:**
+ - **User Account**: Always deleted when you click delete
+ - **🗨️ Conversations**: Check to delete all audio recordings and transcripts
+ - **🧠 Memories**: Check to delete all extracted memories from conversations
+ - Mix and match: You can delete just conversations, just memories, or both
+ - Use the 'Memories' tab to view specific user memories
+ """)
+
+ # Authentication information
+ st.subheader("🔐 Authentication System")
+ if st.session_state.get('authenticated', False):
+ st.success("✅ You are authenticated and can use all user management features.")
+ user_info = st.session_state.get('user_info', {})
+ st.info(f"**Current User:** {user_info.get('name', 'Unknown')}")
+ st.info(f"**Auth Method:** {st.session_state.get('auth_method', 'unknown').title()}")
+ else:
+ st.warning("🔒 Authentication required for user management operations.")
+ st.markdown("""
+ **How to authenticate:**
+ 1. **Email/Password**: Use the login form in the sidebar if you have an account
+ 2. **Manual Token**: If you have a JWT token, paste it in the manual entry section
+
+ **Note:** The backend requires authentication for:
+ - Creating new users
+ - Deleting users and their data
+ - WebSocket audio connections
+ """)
+
+ st.markdown("**Authentication Configuration:**")
+ st.code(f"""
+# Required environment variables for backend:
+AUTH_SECRET_KEY=your-secret-key
+ """)
+
+ st.caption("💡 Email/password authentication is enabled by default")
+
+with tab_manage:
+ st.header("Conversation Management")
+
+ st.subheader("🔒 Close Current Conversation")
+
+ # Check if user is authenticated and show appropriate message
+ if st.session_state.get('authenticated', False):
+ user_info = st.session_state.get('user_info', {})
+ is_admin = user_info.get('is_superuser', False) if isinstance(user_info, dict) else False
+
+ if is_admin:
+ st.write("Close the current active conversation for any connected client.")
+ else:
+ st.write("Close the current active conversation for your connected clients.")
+
+ # Get active clients for the dropdown
+ active_clients_data = get_data("/api/active_clients", require_auth=True)
+
+ if active_clients_data and active_clients_data.get("clients"):
+ clients = active_clients_data["clients"]
+
+ # Filter to only clients with active conversations
+ active_conversations = {
+ client_info.get('client_id'): client_info
+ for client_info in clients
+ if client_info.get("has_active_conversation", False)
+ }
+
+ if active_conversations:
+ col1, col2 = st.columns([3, 1])
+
+ with col1:
+ selected_client = st.selectbox(
+ "Select client to close conversation:",
+ options=list(active_conversations.keys()),
+ format_func=lambda x: f"{x} (UUID: {active_conversations[x].get('current_audio_uuid', 'N/A')[:8]}...)"
+ )
+
+ with col2:
+ st.write("") # Spacer
+ close_conversation_btn = st.button("🔒 Close Conversation", key="close_conv_main", type="primary")
+
+ if close_conversation_btn and selected_client:
+ result = post_data(f"/api/conversations/{selected_client}/close", require_auth=True)
+ if result:
+ st.success(f"✅ Successfully closed conversation for client '{selected_client}'!")
+ st.info(f"📋 {result.get('message', 'Conversation closed')}")
+ time.sleep(1) # Brief pause before refresh
+ st.rerun()
+ else:
+ st.error(f"❌ Failed to close conversation for client '{selected_client}'")
+ else:
+ if len(clients) > 0:
+ st.info("🔍 No clients with active conversations found.")
+ st.caption("💡 Your connected clients don't have active conversations at the moment.")
+ else:
+ st.info("🔍 No connected clients found for your account.")
+ st.caption("💡 Connect an audio client with your user ID to manage conversations.")
+
+ # Show all clients status (only if there are clients)
+ if len(clients) > 0:
+ with st.expander("All Connected Clients Status"):
+ for client_info in clients:
+ client_id = client_info.get('client_id')
+ status_icon = "🟢" if client_info.get("has_active_conversation", False) else "⚪"
+ st.write(f"{status_icon} **{client_id}** - {'Active conversation' if client_info.get('has_active_conversation', False) else 'No active conversation'}")
+ if client_info.get("current_audio_uuid"):
+ st.caption(f" Audio UUID: {client_info['current_audio_uuid']}")
+
+ # Show ownership info for non-admin users
+ if not is_admin:
+ st.caption("ℹ️ You can only see and manage clients that belong to your account.")
+ else:
+ st.info("🔍 No accessible clients found for your account.")
+ st.markdown("""
+ **To connect an audio client:**
+ 1. Use your user ID when connecting: `user_id=YOUR_USER_ID`
+ 2. Include your authentication token in the WebSocket connection
+ 3. Example: `ws://localhost:8000/ws?user_id=YOUR_USER_ID&token=YOUR_TOKEN`
+ """)
+
+ if st.session_state.get('auth_token'):
+ st.info("💡 Your authentication token is available - see the WebSocket connection info below.")
+ else:
+ st.warning("⚠️ Please authenticate first to get your token for audio client connections.")
+ else:
+ st.warning("🔒 Authentication required to manage conversations.")
+ st.markdown("""
+ **Please authenticate using the sidebar to:**
+ - View your active audio clients
+ - Close conversations for your clients
+ - Manage your conversation data
+ """)
+ st.info("🔑 Use the authentication options in the sidebar to get started.")
+
+ st.divider()
+
+ st.subheader("Add Speaker to Conversation")
+ st.write("Add speakers to conversations even if they haven't spoken yet.")
+
+ col1, col2, col3 = st.columns([2, 2, 1])
+ with col1:
+ audio_uuid_input = st.text_input("Audio UUID:", placeholder="Enter the audio UUID")
+ with col2:
+ speaker_id_input = st.text_input("Speaker ID:", placeholder="e.g., speaker_1, john_doe")
+ with col3:
+ st.write("") # Spacer
+ add_speaker_btn = st.button("Add Speaker", key="add_speaker")
+
+ if add_speaker_btn:
+ if audio_uuid_input.strip() and speaker_id_input.strip():
+ result = post_data(f"/api/conversations/{audio_uuid_input.strip()}/speakers",
+ params={"speaker_id": speaker_id_input.strip()}, require_auth=True)
+ if result:
+ st.success(f"Speaker '{speaker_id_input.strip()}' added to conversation!")
+ else:
+ st.error("Please enter both Audio UUID and Speaker ID")
+
+ st.divider()
+
+ st.subheader("Update Transcript Segment")
+ st.write("Modify speaker identification or timing information for transcript segments.")
+
+ col1, col2 = st.columns([1, 1])
+ with col1:
+ update_audio_uuid = st.text_input("Audio UUID:", placeholder="Enter the audio UUID", key="update_uuid")
+ segment_index = st.number_input("Segment Index:", min_value=0, value=0, step=1)
+ new_speaker = st.text_input("New Speaker ID (optional):", placeholder="Leave empty to keep current")
+
+ with col2:
+ start_time = st.number_input("Start Time (seconds):", min_value=0.0, value=0.0, step=0.1, format="%.1f")
+ end_time = st.number_input("End Time (seconds):", min_value=0.0, value=0.0, step=0.1, format="%.1f")
+ update_segment_btn = st.button("Update Segment", key="update_segment")
+
+ if update_segment_btn:
+ if update_audio_uuid.strip():
+ params = {}
+ if new_speaker.strip():
+ params["speaker_id"] = new_speaker.strip()
+ if start_time > 0:
+ params["start_time"] = start_time
+ if end_time > 0:
+ params["end_time"] = end_time
+
+ if params:
+ # Use requests.put for this endpoint
+ try:
+ response = requests.put(
+ f"{BACKEND_API_URL}/api/conversations/{update_audio_uuid.strip()}/transcript/{segment_index}",
+ params=params,
+ headers=get_auth_headers()
+ )
+ response.raise_for_status()
+ result = response.json()
+ st.success("Transcript segment updated successfully!")
+ except requests.exceptions.RequestException as e:
+ st.error(f"Error updating segment: {e}")
+ else:
+ st.warning("Please specify at least one field to update")
+ else:
+ st.error("Please enter the Audio UUID")
+
+ st.divider()
+
+ st.subheader("💡 Schema Information")
+ st.markdown("""
+ **New Conversation Schema:**
+ ```json
+ {
+ "audio_uuid": "unique_identifier",
+ "audio_path": "path/to/audio/file.wav",
+ "client_id": "user_or_client_id",
+ "timestamp": 1234567890,
+ "transcript": [
+ {
+ "speaker": "speaker_1",
+ "text": "Hello, how are you?",
+ "start": 0.0,
+ "end": 3.2
+ },
+ {
+ "speaker": "speaker_2",
+ "text": "I'm good, thanks!",
+ "start": 3.3,
+ "end": 5.0
+ }
+ ],
+ "speakers_identified": ["speaker_1", "speaker_2"]
+ }
+ ```
+ """)
+
+ st.info("💡 **Tip**: You can find Audio UUIDs in the conversation details on the 'Conversations' tab.")
+
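As a cross-check of the schema above, here is a minimal sketch that builds a conversation record and derives `speakers_identified` from its transcript segments. The dataclass names are illustrative, not the backend's actual models:

```python
from dataclasses import dataclass, field

@dataclass
class TranscriptSegment:
    speaker: str
    text: str
    start: float
    end: float

@dataclass
class Conversation:
    audio_uuid: str
    audio_path: str
    client_id: str
    timestamp: int
    transcript: list = field(default_factory=list)

    @property
    def speakers_identified(self):
        # De-duplicate speakers while preserving first-seen order
        seen = []
        for seg in self.transcript:
            if seg.speaker not in seen:
                seen.append(seg.speaker)
        return seen

conv = Conversation("unique_identifier", "path/to/audio/file.wav", "user_or_client_id", 1234567890)
conv.transcript.append(TranscriptSegment("speaker_1", "Hello, how are you?", 0.0, 3.2))
conv.transcript.append(TranscriptSegment("speaker_2", "I'm good, thanks!", 3.3, 5.0))
print(conv.speakers_identified)  # ['speaker_1', 'speaker_2']
```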
+ st.divider()
+
+ # Authentication info for WebSocket connections
+ st.subheader("🔐 Authentication & WebSocket Connections")
+ if st.session_state.get('authenticated', False):
+ auth_token = st.session_state.get('auth_token', '')
+ st.success("✅ You are authenticated. Audio clients can use your token for WebSocket connections.")
+
+ with st.expander("WebSocket Connection Info"):
+ st.markdown("**For audio clients, use one of these WebSocket URLs:**")
+ st.code(f"""
+# Opus audio stream (with authentication):
+ws://localhost:8000/ws?token={auth_token[:20]}...
+
+# PCM audio stream (with authentication):
+ws://localhost:8000/ws_pcm?token={auth_token[:20]}...
+
+# Or include in Authorization header:
+Authorization: Bearer {auth_token[:20]}...
+ """)
+ st.caption("⚠️ Keep your token secure and don't share it publicly!")
+
+ st.info("🎵 **Audio clients must now authenticate** to connect to WebSocket endpoints.")
+ else:
+ st.warning("🔒 WebSocket audio connections now require authentication.")
+ st.markdown("""
+ **Important Changes:**
+ - All WebSocket endpoints (`/ws` and `/ws_pcm`) now require authentication
+ - Audio clients must include a JWT token in the connection
+ - Tokens can be passed via query parameter (`?token=...`) or Authorization header
+ - Get a token by logging in via the sidebar or using the backend auth endpoints
+ """)
+
+ st.info("🔑 **Log in using the sidebar** to get your authentication token for audio clients.")
+
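The token-passing rules described above (query parameter or Authorization header) can be sketched as two small helpers. The helper names are illustrative; the endpoint paths and token formats come from the dashboard text itself:

```python
from urllib.parse import urlencode

def ws_url(base: str, endpoint: str, token: str) -> str:
    """Build a WebSocket URL with the JWT passed as a query parameter."""
    return f"{base}{endpoint}?{urlencode({'token': token})}"

def auth_headers(token: str) -> dict:
    """Alternative: pass the JWT via the Authorization header."""
    return {"Authorization": f"Bearer {token}"}

# Example for a PCM audio client:
url = ws_url("ws://localhost:8000", "/ws_pcm", "my-jwt-token")
print(url)  # ws://localhost:8000/ws_pcm?token=my-jwt-token
print(auth_headers("my-jwt-token")["Authorization"])  # Bearer my-jwt-token
```

Either form should work with any WebSocket client library that lets you set the URL or extra headers on connect.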
+# System State Tab
+if tab_debug is not None:
+ with tab_debug:
+ st.header("🔧 System State & Failure Recovery")
+ st.caption("Real-time system monitoring and debug information")
+
+ # Check authentication like other tabs
+ if not st.session_state.get('authenticated', False):
+ st.warning("🔒 Please log in to access system monitoring features")
+ else:
+ # Show immediate system status
+ st.info("💡 **Click the buttons below to load different system monitoring sections**")
+
+ # Get debug system tracker data
+ try:
+ tracker = get_debug_tracker()
+ dashboard_data = tracker.get_dashboard_data()
+ system_metrics = dashboard_data["system_metrics"]
+ recent_transactions = dashboard_data["recent_transactions"]
+ recent_issues = dashboard_data["recent_issues"]
+
+ # Quick system status check (always visible)
+ with st.container():
+ st.subheader("⚡ System Overview")
+ col1, col2, col3, col4 = st.columns(4)
+
+ with col1:
+ st.metric("Total Transactions", system_metrics["total_transactions"])
+
+ with col2:
+ st.metric("Active", system_metrics["active_transactions"])
+
+ with col3:
+ st.metric("Failed", system_metrics["failed_transactions"])
+
+ with col4:
+ st.metric("Completed", system_metrics["completed_transactions"])
+
+ # Additional metrics
+ col1, col2, col3, col4 = st.columns(4)
+ with col1:
+ st.metric("Active WebSockets", system_metrics["active_websockets"])
+ with col2:
+ st.metric("Audio Chunks", system_metrics["total_audio_chunks"])
+ with col3:
+ st.metric("Transcriptions", system_metrics["total_transcriptions"])
+ with col4:
+ st.metric("Memories Created", system_metrics["total_memories"])
+
+ # System uptime and activity
+ col1, col2, col3 = st.columns(3)
+ with col1:
+ st.metric("Uptime (hours)", f"{system_metrics['uptime_hours']:.1f}")
+ with col2:
+ st.metric("Active Users", dashboard_data["active_users"])
+ with col3:
+ st.metric("Stalled", system_metrics["stalled_transactions"])
+
+ st.divider()
+
+ # Recent issues (pipeline problems)
+ if recent_issues:
+ st.subheader("🚨 Recent Issues")
+ st.warning(f"Found {len(recent_issues)} recent issues that need attention:")
+
+ issues_data = []
+ for issue in recent_issues:
+ issues_data.append({
+ "Timestamp": issue["timestamp"][:19].replace("T", " "),
+ "Transaction": issue["transaction_id"][:8],
+ "User": issue["user_id"][-6:] if len(issue["user_id"]) > 6 else issue["user_id"],
+ "Issue": issue["issue"]
+ })
+
+ issues_df = pd.DataFrame(issues_data)
+ st.dataframe(issues_df, use_container_width=True)
+ else:
+ st.success("✅ No recent issues detected!")
+
+ st.divider()
+
+ # Recent transactions
+ st.subheader("📋 Recent Transactions")
+
+ if recent_transactions:
+ transaction_data = []
+ for t in recent_transactions:
+ status_emoji = {
+ "in_progress": "🔄",
+ "completed": "✅",
+ "failed": "❌",
+ "stalled": "⏰"
+ }.get(t["status"], "❓")
+
+ transaction_data.append({
+ "Status": f"{status_emoji} {t['status'].title()}",
+ "Stage": t["current_stage"].replace("_", " ").title(),
+ "User": t["user_id"],
+ "Created": t["created_at"][:19].replace("T", " "),
+ "Issue": t["issue"] or ""
+ })
+
+ df = pd.DataFrame(transaction_data)
+ st.dataframe(df, use_container_width=True)
+ else:
+ st.info("No recent transactions")
+
+ except Exception as e:
+ st.error(f"❌ Error loading system data: {e}")
+ st.write("Debug tracker may not be initialized yet or there may be a configuration issue.")
+
+ # Refresh button
+ if st.button("🔄 Refresh System Stats"):
+ st.rerun()
+
+ col1, col2 = st.columns([1, 1])
+ with col1:
+ if st.button("📊 Load Debug Stats", key="load_debug_stats"):
+ st.session_state['debug_stats_loaded'] = True
+ with col2:
+ if st.button("🔄 Refresh Debug Data", key="refresh_debug_data"):
+ # Clear cached data to force refresh
+ if 'debug_stats_loaded' in st.session_state:
+ del st.session_state['debug_stats_loaded']
+ if 'debug_sessions_loaded' in st.session_state:
+ del st.session_state['debug_sessions_loaded']
+ st.rerun()
+
+ if st.session_state.get('debug_stats_loaded', False):
+ with st.spinner("Loading debug statistics..."):
+ try:
+ debug_stats = get_data("/api/debug/memory/stats", require_auth=True)
+
+ if debug_stats:
+ stats = debug_stats.get('stats', {})
+
+ st.success("✅ Memory processing statistics loaded successfully")
+
+ # Display key metrics
+ col1, col2, col3, col4 = st.columns(4)
+ with col1:
+ st.metric("Total Sessions", stats.get('total_sessions', 0))
+ with col2:
+ success_rate = stats.get('success_rate', 0) or 0
+ st.metric("Success Rate", f"{success_rate:.1f}%",
+ delta=f"{'✅' if success_rate > 80 else '⚠️' if success_rate > 50 else '❌'}")
+ with col3:
+ avg_time = stats.get('avg_processing_time_seconds', 0) or 0
+ st.metric("Avg Processing Time", f"{avg_time:.2f}s")
+ with col4:
+ failed = stats.get('failed_sessions', 0) or 0
+ st.metric("Failed Sessions", failed,
+ delta=f"{'✅' if failed == 0 else '⚠️'}")
+
+ # Show additional metrics
+ col1, col2, col3 = st.columns(3)
+ with col1:
+ st.metric("Total Memories", stats.get('total_memories', 0) or 0)
+ with col2:
+ st.metric("Successful Sessions", stats.get('successful_sessions', 0) or 0)
+ with col3:
+ memories_per_session = stats.get('memories_per_session', 0) or 0
+ st.metric("Memories per Session", f"{memories_per_session:.1f}")
+
+ # Show detailed stats
+ with st.expander("📊 Detailed Statistics", expanded=False):
+ st.json(stats)
+ else:
+ st.error("❌ Failed to load debug statistics - No data received")
+ st.caption("This could indicate an authentication issue or the debug endpoint is not available")
+ except Exception as e:
+ st.error(f"❌ Error loading debug statistics: {str(e)}")
+ st.caption("Check the backend logs for more details")
+
+ st.divider()
+
+ # Recent Sessions Section
+ st.subheader("📝 Recent Memory Sessions")
+
+ col1, col2 = st.columns([1, 1])
+ with col1:
+ session_limit = st.number_input("Number of sessions to load:", min_value=5, max_value=100, value=20, step=5)
+ with col2:
+ if st.button("🔄 Load Recent Sessions", key="load_debug_sessions"):
+ st.session_state['debug_sessions_loaded'] = True
+
+ if st.session_state.get('debug_sessions_loaded', False):
+ with st.spinner("Loading recent memory sessions..."):
+ debug_sessions = get_data(f"/api/debug/memory/sessions?limit={session_limit}", require_auth=True)
+
+ if debug_sessions:
+ sessions = debug_sessions.get('sessions', [])
+
+ if sessions:
+ st.success(f"✅ Loaded {len(sessions)} memory sessions")
+
+ # Display sessions in a table
+ session_data = []
+ for session in sessions:
+ session_data.append({
+ "Audio UUID": session.get('audio_uuid', 'N/A')[:12] + "...",
+ "User ID": session.get('user_id', 'N/A')[:12] + "...",
+ "Status": session.get('status', 'unknown'),
+ "Processing Time": f"{session.get('processing_time', 0):.2f}s",
+ "Transcript Length": session.get('transcript_length', 0),
+ "Memory Count": session.get('memory_count', 0),
+ "Created": session.get('created_at', 'N/A')[:19] if session.get('created_at') else 'N/A'
+ })
+
+ df = pd.DataFrame(session_data)
+ st.dataframe(df, use_container_width=True)
+ else:
+ st.info("No memory sessions found")
+ else:
+ st.error("Failed to load memory sessions")
+
+ st.divider()
+
+ # Transcript vs Memory Comparison Section
+ st.subheader("🔍 Transcript vs Memory Analysis")
+
+ st.markdown("Compare original transcripts with extracted memories to understand memory extraction quality.")
+
+ # Add section for viewing all transcripts vs memories
+ col1, col2, col3 = st.columns([2, 1, 1])
+ with col1:
+ audio_uuid_input = st.text_input(
+ "Enter Audio UUID for analysis:",
+ placeholder="e.g., 84a6fced90aa4232ac00db6bbfcf626b",
+ help="Enter the full audio UUID to analyze transcript vs memory extraction"
+ )
+ with col2:
+ if st.button("🔍 Analyze Session", key="analyze_transcript_memory", disabled=not audio_uuid_input.strip()):
+ st.session_state['transcript_analysis_uuid'] = audio_uuid_input.strip()
+ st.session_state['transcript_analysis_loaded'] = True
+ with col3:
+ if st.button("📄 Show All Transcripts", key="btn_show_all_transcripts", help="Show all transcripts vs memories for comprehensive analysis"):
+ st.session_state['show_all_transcripts_view'] = True
+ st.session_state['transcript_analysis_loaded'] = False # Clear single session analysis
+
+ if st.session_state.get('transcript_analysis_loaded', False) and st.session_state.get('transcript_analysis_uuid'):
+ analysis_uuid = st.session_state['transcript_analysis_uuid']
+
+ with st.spinner(f"Loading transcript vs memory analysis for {analysis_uuid[:12]}..."):
+ transcript_analysis = get_data(f"/api/debug/memory/transcript-vs-memory/{analysis_uuid}", require_auth=True)
+
+ if transcript_analysis:
+ st.success(f"✅ Analysis loaded for session {analysis_uuid[:12]}...")
+
+ # Session Info
+ with st.expander("📋 Session Information", expanded=True):
+ session_info = transcript_analysis.get('session_info', {})
+ col1, col2, col3 = st.columns(3)
+
+ with col1:
+ st.metric("User", transcript_analysis.get('user_email', 'N/A'))
+ st.metric("Client ID", transcript_analysis.get('client_id', 'N/A'))
+
+ with col2:
+ success = session_info.get('memory_processing_success', False)
+ st.metric("Processing Status", "✅ Success" if success else "❌ Failed")
+
+ if session_info.get('memory_processing_error'):
+ st.error(f"Error: {session_info['memory_processing_error']}")
+
+ with col3:
+ analysis = transcript_analysis.get('analysis', {})
+ compression_ratio = transcript_analysis.get('memories', {}).get('compression_ratio_percent', 0)
+ st.metric("Compression Ratio", f"{compression_ratio}%")
+
+ # Transcript vs Memory Comparison
+ st.subheader("🔍 Transcript vs Memory Comparison")
+
+ col1, col2 = st.columns(2)
+
+ with col1:
+ st.markdown("### 🎤 Original Transcript")
+ transcript_data = transcript_analysis.get('transcript', {})
+
+ # Transcript statistics
+ st.markdown(f"""
+ **Statistics:**
+ - **Characters:** {transcript_data.get('character_count', 0):,}
+ - **Words:** {transcript_data.get('word_count', 0):,}
+ - **Segments:** {transcript_data.get('segment_count', 0)}
+ """)
+
+ # Full conversation text
+ full_conversation = transcript_data.get('full_conversation', '')
+ if full_conversation.strip():
+ st.text_area(
+ "Full Conversation:",
+ value=full_conversation,
+ height=300,
+ disabled=True,
+ key="original_transcript"
+ )
+ else:
+ st.warning("No transcript available")
+
+ with col2:
+ st.markdown("### 🧠 Extracted Memories")
+ memories_data = transcript_analysis.get('memories', {})
+
+ # Memory statistics
+ st.markdown(f"""
+ **Statistics:**
+ - **Extractions:** {memories_data.get('extraction_count', 0)}
+ - **Characters:** {memories_data.get('total_memory_characters', 0):,}
+ - **Compression:** {memories_data.get('compression_ratio_percent', 0)}%
+ """)
+
+ # Memory extractions
+ extractions = memories_data.get('extractions', [])
+ if extractions:
+ for i, memory in enumerate(extractions):
+ with st.expander(f"Memory {i+1}: {memory.get('memory_type', 'general')}", expanded=i==0):
+ st.markdown(f"**ID:** `{memory.get('memory_id', 'unknown')}`")
+ st.markdown(f"**Type:** {memory.get('memory_type', 'general')}")
+
+ memory_text = memory.get('memory_text', '')
+ if memory_text:
+ st.text_area(
+ "Memory Text:",
+ value=memory_text,
+ height=100,
+ disabled=True,
+ key=f"memory_{i}"
+ )
+
+ # Show extraction prompt and LLM response in details
+ with st.expander("🔧 Extraction Details"):
+ if memory.get('extraction_prompt'):
+ st.markdown("**Prompt Used:**")
+ st.code(memory['extraction_prompt'], language="text")
+
+ if memory.get('llm_response'):
+ st.markdown("**Raw LLM Response:**")
+ st.code(memory['llm_response'], language="text")
+ else:
+ analysis = transcript_analysis.get('analysis', {})
+ if analysis.get('empty_results'):
+ st.info("🤔 LLM determined no memorable content in this conversation")
+ else:
+ st.warning("No memory extractions found")
+
+ # Analysis Summary
+ st.subheader("📊 Analysis Summary")
+ analysis = transcript_analysis.get('analysis', {})
+
+ col1, col2, col3, col4 = st.columns(4)
+ with col1:
+ has_transcript = analysis.get('has_transcript', False)
+ st.metric("Has Transcript", "✅ Yes" if has_transcript else "❌ No")
+
+ with col2:
+ has_memories = analysis.get('has_memories', False)
+ st.metric("Has Memories", "✅ Yes" if has_memories else "❌ No")
+
+ with col3:
+ processing_successful = analysis.get('processing_successful', False)
+ st.metric("Processing OK", "✅ Yes" if processing_successful else "❌ No")
+
+ with col4:
+ empty_results = analysis.get('empty_results', False)
+ st.metric("Empty Results", "⚠️ Yes" if empty_results else "✅ No")
+
+ # Quality Assessment
+ if has_transcript and processing_successful:
+ if has_memories:
+ compression_ratio = memories_data.get('compression_ratio_percent', 0)
+ if compression_ratio > 50:
+ st.warning("โ ๏ธ High compression ratio - may indicate poor memory extraction")
+ elif compression_ratio < 5:
+ st.warning("โ ๏ธ Very low compression ratio - memories may be too brief")
+ else:
+ st.success("✅ Good compression ratio for memory extraction")
+ elif empty_results:
+ st.info("ℹ️ LLM correctly identified no memorable content")
+ else:
+ st.error("❌ Processing succeeded but no memories or errors recorded")
+
+ else:
+ st.error(f"Failed to load analysis for {analysis_uuid}")
+
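The quality heuristics above can be captured in a small helper. One assumption, hedged here: the compression ratio shown in this tab is memory characters expressed as a percentage of transcript characters:

```python
def compression_ratio(transcript: str, memory: str) -> float:
    """Memory length as a percentage of transcript length (0 if no transcript)."""
    if not transcript:
        return 0.0
    return 100.0 * len(memory) / len(transcript)

def assess(ratio: float) -> str:
    """Mirror the dashboard's thresholds for judging memory extraction quality."""
    if ratio > 50:
        return "high"      # may indicate poor memory extraction
    if ratio < 5:
        return "very low"  # memories may be too brief
    return "good"

print(assess(compression_ratio("a" * 200, "b" * 20)))  # good
```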
+ # Show All Transcripts vs Memories section
+ if st.session_state.get('show_all_transcripts_view', False):
+ st.subheader("📊 All Transcripts vs Memories Analysis")
+
+ # Options for filtering
+ col1, col2, col3 = st.columns([2, 1, 1])
+ with col1:
+ user_filter = st.text_input(
+ "Filter by User (optional):",
+ placeholder="e.g., user@example.com",
+ help="Leave empty to show all users (admin only)"
+ )
+ with col2:
+ limit = st.number_input(
+ "Limit results:",
+ min_value=10,
+ max_value=500,
+ value=50,
+ step=10,
+ help="Maximum number of memories to display"
+ )
+ with col3:
+ st.write("") # Spacer
+ if st.button("🔄 Refresh Data", key="refresh_all_transcripts"):
+ st.session_state['all_transcripts_data'] = None # Clear cache
+ st.rerun()
+
+ # Load all transcripts vs memories
+ if 'all_transcripts_data' not in st.session_state:
+ with st.spinner("Loading all transcripts vs memories..."):
+ try:
+ # Use appropriate endpoint based on user permissions and filters
+ if user_filter.strip():
+ # Filter by specific user
+ endpoint = f"/api/memories/with-transcripts?user_id={user_filter.strip()}&limit={limit}"
+ else:
+ # Show all users (admin only) or current user
+ endpoint = f"/api/memories/with-transcripts?limit={limit}"
+
+ all_data = get_data(endpoint, require_auth=True)
+ st.session_state['all_transcripts_data'] = all_data
+
+ except Exception as e:
+ st.error(f"Error loading data: {str(e)}")
+ st.session_state['all_transcripts_data'] = None
+
+ # Display the data
+ if st.session_state.get('all_transcripts_data'):
+ data = st.session_state['all_transcripts_data']
+ memories = data.get('memories', [])
+
+ if memories:
+ st.success(f"✅ Loaded {len(memories)} memories with transcript analysis")
+
+ # Summary statistics
+ total_memories = len(memories)
+ memories_with_transcripts = sum(1 for m in memories if m.get('transcript') and m.get('transcript').strip())
+ memories_without_transcripts = total_memories - memories_with_transcripts
+ avg_compression = sum(m.get('compression_ratio', 0) for m in memories) / total_memories if total_memories > 0 else 0
+
+ # Display summary metrics
+ col1, col2, col3, col4 = st.columns(4)
+ with col1:
+ st.metric("Total Memories", total_memories)
+ with col2:
+ st.metric("With Transcripts", memories_with_transcripts)
+ with col3:
+ st.metric("Without Transcripts", memories_without_transcripts)
+ with col4:
+ st.metric("Avg Compression", f"{avg_compression:.1f}%")
+
+ # Filter and search options
+ st.subheader("🔍 Filter Options")
+ col1, col2, col3 = st.columns([2, 1, 1])
+ with col1:
+ search_term = st.text_input(
+ "Search in memories/transcripts:",
+ placeholder="Enter text to search...",
+ key="search_all_transcripts"
+ )
+ with col2:
+ show_only_with_transcripts = st.checkbox(
+ "Only with transcripts",
+ value=False,
+ key="filter_with_transcripts"
+ )
+ with col3:
+ compression_filter = st.selectbox(
+ "Compression filter:",
+ ["All", "High (>50%)", "Medium (10-50%)", "Low (<10%)", "Zero (0%)"],
+ key="compression_filter"
+ )
+
+ # Apply filters
+ filtered_memories = memories
+
+ if search_term:
+ filtered_memories = [
+ m for m in filtered_memories
+ if (search_term.lower() in m.get('memory_text', '').lower() or
+ search_term.lower() in m.get('transcript', '').lower())
+ ]
+
+ if show_only_with_transcripts:
+ filtered_memories = [
+ m for m in filtered_memories
+ if m.get('transcript') and m.get('transcript').strip()
+ ]
+
+ if compression_filter != "All":
+ if compression_filter == "High (>50%)":
+ filtered_memories = [m for m in filtered_memories if m.get('compression_ratio', 0) > 50]
+ elif compression_filter == "Medium (10-50%)":
+ filtered_memories = [m for m in filtered_memories if 10 <= m.get('compression_ratio', 0) <= 50]
+ elif compression_filter == "Low (<10%)":
+ filtered_memories = [m for m in filtered_memories if 0 < m.get('compression_ratio', 0) < 10]
+ elif compression_filter == "Zero (0%)":
+ filtered_memories = [m for m in filtered_memories if m.get('compression_ratio', 0) == 0]
+
+ if search_term or show_only_with_transcripts or compression_filter != "All":
+ st.caption(f"Showing {len(filtered_memories)} of {total_memories} memories")
+
+ # Display results in a table format
+ if filtered_memories:
+ st.subheader("📊 Transcript vs Memory Analysis Table")
+
+ # Create summary table
+ table_data = []
+ for memory in filtered_memories:
+ table_data.append({
+ "Audio UUID": memory.get('audio_uuid', 'N/A')[:12] + "..." if memory.get('audio_uuid') else 'N/A',
+ "Memory": memory.get('memory_text', '')[:60] + "..." if len(memory.get('memory_text', '')) > 60 else memory.get('memory_text', ''),
+ "Transcript": memory.get('transcript', 'N/A')[:60] + "..." if memory.get('transcript') and len(memory.get('transcript', '')) > 60 else memory.get('transcript', 'N/A')[:60] if memory.get('transcript') else 'N/A',
+ "T-Chars": memory.get('transcript_length', 0),
+ "M-Chars": memory.get('memory_length', 0),
+ "Compression": f"{memory.get('compression_ratio', 0):.1f}%",
+ "Client": memory.get('client_id', 'N/A')[:8] + "..." if memory.get('client_id') else 'N/A',
+ "Created": memory.get('created_at', 'N/A')[:16] if memory.get('created_at') else 'N/A'
+ })
+
+ # Display table
+ df = pd.DataFrame(table_data)
+ st.dataframe(df, use_container_width=True, hide_index=True)
+
+ # Detailed expandable views
+ st.subheader("🔍 Detailed Analysis")
+
+ for i, memory in enumerate(filtered_memories):
+ audio_uuid = memory.get('audio_uuid', 'unknown')
+ memory_text = memory.get('memory_text', '')
+ transcript = memory.get('transcript', '')
+ compression_ratio = memory.get('compression_ratio', 0)
+ client_id = memory.get('client_id', 'unknown')
+
+ # Create meaningful title
+ title_text = memory_text[:50] + "..." if len(memory_text) > 50 else memory_text
+ if not title_text.strip():
+ title_text = f"Memory {i+1}"
+
+ # Color code based on compression ratio
+ if compression_ratio > 50:
+ status_emoji = "🔴" # High compression
+ elif compression_ratio > 10:
+ status_emoji = "🟡" # Medium compression
+ elif compression_ratio > 0:
+ status_emoji = "🟢" # Good compression
+ else:
+ status_emoji = "⚪" # No compression
+
+ with st.expander(f"{status_emoji} {title_text} | {compression_ratio:.1f}% | {client_id[:8]}...", expanded=False):
+ col1, col2 = st.columns(2)
+
+ with col1:
+ st.markdown("**🎤 Original Transcript**")
+ if transcript and transcript.strip():
+ st.text_area(
+ f"Transcript ({len(transcript)} chars):",
+ value=transcript,
+ height=200,
+ disabled=True,
+ key=f"all_transcript_{i}"
+ )
+ else:
+ st.info("No transcript available")
+
+ with col2:
+ st.markdown("**🧠 Extracted Memory**")
+ if memory_text and memory_text.strip():
+ st.text_area(
+ f"Memory ({len(memory_text)} chars):",
+ value=memory_text,
+ height=200,
+ disabled=True,
+ key=f"all_memory_{i}"
+ )
+ else:
+ st.warning("No memory text")
+
+ # Additional metadata
+ st.markdown("**📋 Metadata**")
+ metadata_col1, metadata_col2 = st.columns(2)
+ with metadata_col1:
+ st.write(f"**Audio UUID:** `{audio_uuid}`")
+ st.write(f"**Client ID:** `{client_id}`")
+ with metadata_col2:
+ st.write(f"**Created:** {memory.get('created_at', 'N/A')}")
+ st.write(f"**User:** {memory.get('user_email', 'N/A')}")
+ else:
+ st.info("No memories match the current filters")
+ else:
+ st.warning("No memories found with the current filters")
+ else:
+ st.error("Failed to load transcript vs memory data")
+
+ # Add option to close the view
+ if st.button("❌ Close Analysis", key="close_all_transcripts"):
+ st.session_state['show_all_transcripts_view'] = False
+ st.rerun()
+
+ st.divider()
+
+ # Help Section
+ st.subheader("📚 Debug API Reference")
+
+ with st.expander("🔍 Available Debug Endpoints", expanded=False):
+ st.markdown("""
+ **Memory Debug APIs:**
+ - `GET /api/debug/memory/stats` - Memory processing statistics
+ - `GET /api/debug/memory/sessions` - Recent memory sessions
+ - `GET /api/debug/memory/session/{uuid}` - Session details
+ - `GET /api/debug/memory/transcript-vs-memory/{uuid}` - Transcript vs memory comparison
+ - `GET /api/debug/memory/config` - Memory configuration
+ - `GET /api/debug/memory/pipeline/{uuid}` - Processing pipeline trace
+
+ All endpoints require authentication.
+ """)
+
+ st.info("💡 **Tip**: Use these debug tools to monitor system performance and troubleshoot issues with memory extraction.")
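For scripting against the debug endpoints listed above, a hedged sketch of one authenticated call. The helper name and host are assumptions; only the path and the Bearer-token header come from the dashboard:

```python
DEBUG_STATS_PATH = "/api/debug/memory/stats"

def build_request(base_url: str, token: str):
    """Return the (url, headers) pair for an authenticated debug-stats call."""
    return (
        f"{base_url}{DEBUG_STATS_PATH}",
        {"Authorization": f"Bearer {token}"},
    )

url, headers = build_request("http://localhost:8000", "my-jwt-token")
print(url)  # http://localhost:8000/api/debug/memory/stats
# With the requests package, the call itself would be:
# resp = requests.get(url, headers=headers, timeout=10)
# resp.raise_for_status()
# stats = resp.json().get("stats", {})
```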
diff --git a/backends/advanced-backend/start_backend.sh b/backends/advanced-backend/start_backend.sh
deleted file mode 100755
index e0c25db4..00000000
--- a/backends/advanced-backend/start_backend.sh
+++ /dev/null
@@ -1,4 +0,0 @@
-#!/bin/bash
-
-# Start the backend
-uv run python3 main.py
\ No newline at end of file
diff --git a/backends/advanced-backend/tests/test_memory_service.py b/backends/advanced-backend/tests/test_memory_service.py
new file mode 100644
index 00000000..0caba186
--- /dev/null
+++ b/backends/advanced-backend/tests/test_memory_service.py
@@ -0,0 +1,556 @@
+#!/usr/bin/env python3
+"""
+Comprehensive test file for debugging memory service issues.
+
+This script tests:
+1. Ollama connectivity and model availability
+2. Qdrant connectivity
+3. Mem0 configuration
+4. Memory service initialization
+5. Memory creation functionality
+6. Action item extraction
+
+Run this from the backend directory:
+python tests/test_memory_service.py
+"""
+
+import asyncio
+import logging
+import os
+import sys
+import time
+import json
+from pathlib import Path
+
+# Load environment variables from .env file
+from dotenv import load_dotenv
+load_dotenv()
+
+# Add src to path so we can import modules
+sys.path.insert(0, str(Path(__file__).parent.parent / "src"))
+
+import requests
+import ollama
+from mem0 import Memory
+
+# Import our modules
+try:
+ from memory.memory_service import (
+ MemoryService,
+ get_memory_service,
+ init_memory_config,
+ _init_process_memory,
+ _add_memory_to_store,
+ _extract_action_items_from_transcript,
+ MEM0_CONFIG,
+ OLLAMA_BASE_URL,
+ QDRANT_BASE_URL,
+ )
+ print("✅ Successfully imported memory service modules")
+except ImportError as e:
+ print(f"❌ Failed to import memory service modules: {e}")
+ sys.exit(1)
+
+# Configure logging
+logging.basicConfig(
+ level=logging.DEBUG,
+ format='%(asctime)s | %(levelname)-8s | %(name)-20s | %(message)s'
+)
+logger = logging.getLogger("memory_test")
+
+class MemoryServiceTester:
+ """Comprehensive memory service tester."""
+
+ def __init__(self):
+ self.ollama_url = OLLAMA_BASE_URL
+ self.qdrant_url = QDRANT_BASE_URL
+ self.test_results = {}
+
+ async def run_all_tests(self):
+ """Run all tests in sequence."""
+ print("🧪 Starting Memory Service Diagnostic Tests")
+ print("=" * 60)
+
+ tests = [
+ ("Configuration Check", self.test_configuration),
+ ("Ollama Connectivity", self.test_ollama_connectivity),
+ ("Ollama Models", self.test_ollama_models),
+ ("Qdrant Connectivity", self.test_qdrant_connectivity),
+ ("Mem0 Configuration", self.test_mem0_config),
+ ("Memory Service Initialization", self.test_memory_service_init),
+ ("Process Memory Initialization", self.test_process_memory_init),
+ ("Basic Memory Creation", self.test_basic_memory_creation),
+ ("Action Item Extraction", self.test_action_item_extraction),
+ ("Full Integration Test", self.test_full_integration),
+ ]
+
+ for test_name, test_func in tests:
+ print(f"\n🔍 Running: {test_name}")
+ print("-" * 40)
+ try:
+ if asyncio.iscoroutinefunction(test_func):
+ result = await test_func()
+ else:
+ result = test_func()
+ self.test_results[test_name] = result
+ status = "✅ PASS" if result else "❌ FAIL"
+ print(f"{status}: {test_name}")
+ except Exception as e:
+ self.test_results[test_name] = False
+ print(f"❌ ERROR in {test_name}: {e}")
+ logger.exception(f"Test failed: {test_name}")
+
+ self.print_summary()
+
+ def test_configuration(self):
+ """Test current configuration values."""
+ print("📋 Current Configuration:")
+ print(f" OLLAMA_BASE_URL: {self.ollama_url}")
+ print(f" QDRANT_BASE_URL: {self.qdrant_url}")
+
+ # Show environment variables from .env file
+ print("\n📁 Environment Variables from .env:")
+ env_vars = [
+ 'OLLAMA_BASE_URL', 'OFFLINE_ASR_TCP_URI', 'HF_TOKEN',
+ 'ADMIN_EMAIL', 'ADMIN_USERNAME', 'DEBUG_DIR'
+ ]
+ for var in env_vars:
+ value = os.getenv(var, 'Not set')
+ if 'TOKEN' in var or 'PASSWORD' in var or 'SECRET' in var:
+ # Mask sensitive values
+ display_value = f"{value[:10]}..." if len(value) > 10 else "***"
+ else:
+ display_value = value
+ print(f" {var}: {display_value}")
+
+ print("\n🔧 Mem0 Config:")
+ print(f" LLM Provider: {MEM0_CONFIG['llm']['provider']}")
+ print(f" LLM Model: {MEM0_CONFIG['llm']['config']['model']}")
+ print(f" LLM Ollama URL: {MEM0_CONFIG['llm']['config']['ollama_base_url']}")
+ print(f" Embedder Provider: {MEM0_CONFIG['embedder']['provider']}")
+ print(f" Embedder Model: {MEM0_CONFIG['embedder']['config']['model']}")
+ print(f" Embedder Ollama URL: {MEM0_CONFIG['embedder']['config']['ollama_base_url']}")
+ print(f" Vector Store: {MEM0_CONFIG['vector_store']['provider']}")
+ print(f" Qdrant Host: {MEM0_CONFIG['vector_store']['config']['host']}")
+ print(f" Qdrant Port: {MEM0_CONFIG['vector_store']['config']['port']}")
+
+ # Check for potential issues
+ issues = []
+ if 'ollama:11434' in self.ollama_url:
+ issues.append("🔧 Ollama URL uses Docker hostname 'ollama' - should be http://192.168.0.110:11434")
+ if self.qdrant_url == 'qdrant':
+ issues.append("🔧 Qdrant URL uses Docker hostname 'qdrant' - may not work outside container")
+
+ # Check if the configuration matches your environment
+ expected_ollama = "http://192.168.0.110:11434"
+ if self.ollama_url == expected_ollama:
+ print(f"\n✅ Ollama URL matches your .env configuration: {expected_ollama}")
+ else:
+ issues.append(f"🔧 Ollama URL mismatch - expected {expected_ollama}, got {self.ollama_url}")
+
+ if issues:
+ print("\n⚠️ Configuration Issues Found:")
+ for issue in issues:
+ print(f" {issue}")
+ else:
+ print("\n✅ Configuration looks good!")
+
+ return len(issues) == 0
+
+ def test_ollama_connectivity(self):
+ """Test Ollama server connectivity."""
+ try:
+ # Test HTTP connectivity first
+ health_url = f"{self.ollama_url}/api/version"
+ print(f"🔍 Testing Ollama HTTP connectivity to: {health_url}")
+
+ response = requests.get(health_url, timeout=10)
+ if response.status_code == 200:
+ version_info = response.json()
+ print(f"✅ Ollama server version: {version_info.get('version', 'unknown')}")
+
+ # Test Ollama Python client
+ print("🔍 Testing Ollama Python client...")
+ client = ollama.Client(host=self.ollama_url)
+
+ # Try to list models
+ models = client.list()
+ print(f"📋 Available models: {len(models.get('models', []))}")
+ return True
+ else:
+ print(f"❌ Ollama HTTP error: {response.status_code}")
+ return False
+
+ except requests.exceptions.ConnectionError:
+ print(f"❌ Cannot connect to Ollama at {self.ollama_url}")
+ print("💡 Suggestion: Update OLLAMA_BASE_URL to http://192.168.0.110:11434")
+ return False
+ except Exception as e:
+ print(f"❌ Ollama connectivity error: {e}")
+ return False
+
+ def test_ollama_models(self):
+ """Test required Ollama models are available."""
+ try:
+ client = ollama.Client(host=self.ollama_url)
+ models_response = client.list()
+ models = models_response.get('models', [])
+ model_names = [model['name'] for model in models]
+
+ required_models = ['llama3.1:latest', 'nomic-embed-text:latest']
+ missing_models = []
+
+ print(f"📋 Available models ({len(model_names)}):")
+ for model_name in model_names:
+ print(f" ✅ {model_name}")
+
+ print("\n🔍 Checking required models:")
+ for required_model in required_models:
+ if any(required_model in model_name for model_name in model_names):
+ print(f" ✅ {required_model} - Found")
+ else:
+ print(f" ❌ {required_model} - Missing")
+ missing_models.append(required_model)
+
+ if missing_models:
+ print("\n💡 To pull missing models, run:")
+ for model in missing_models:
+ print(f" ollama pull {model}")
+ return False
+ else:
+ return True
+
+ except Exception as e:
+ print(f"❌ Error checking models: {e}")
+ return False
+
+ def test_qdrant_connectivity(self):
+ """Test Qdrant server connectivity."""
+ try:
+ # Try different Qdrant URLs
+ qdrant_urls = [
+ f"http://{self.qdrant_url}:6333",
+ f"http://{self.qdrant_url}:6334",
+ "http://localhost:6333",
+ "http://localhost:6334",
+ "http://192.168.0.110:6333",
+ "http://192.168.0.110:6334"
+ ]
+
+ for url in qdrant_urls:
+ try:
+ print(f"🔍 Testing Qdrant connectivity to: {url}")
+ response = requests.get(f"{url}/health", timeout=5)
+ if response.status_code == 200:
+ print(f"✅ Qdrant reachable at {url}")
+
+ # Test collections endpoint
+ collections_response = requests.get(f"{url}/collections", timeout=5)
+ if collections_response.status_code == 200:
+ collections = collections_response.json()
+ print(f"📊 Collections found: {len(collections.get('result', {}).get('collections', []))}")
+ return True
+ else:
+ print(f"⚠️ Qdrant health OK but collections endpoint failed: {collections_response.status_code}")
+ return True # Health is good enough
+ except requests.exceptions.RequestException:
+ continue
+
+ print("❌ Could not connect to Qdrant on any URL")
+ print("💡 Make sure Qdrant is running and accessible")
+ return False
+
+ except Exception as e:
+ print(f"❌ Qdrant connectivity error: {e}")
+ return False
+
+ def test_mem0_config(self):
+ """Test Mem0 configuration validation."""
+ try:
+ print("🔧 Validating Mem0 configuration...")
+
+ # Check required fields
+ required_fields = [
+ ['llm', 'provider'],
+ ['llm', 'config', 'model'],
+ ['llm', 'config', 'ollama_base_url'],
+ ['embedder', 'provider'],
+ ['embedder', 'config', 'model'],
+ ['embedder', 'config', 'ollama_base_url'],
+ ['vector_store', 'provider'],
+ ['vector_store', 'config', 'host'],
+ ['vector_store', 'config', 'port'],
+ ]
+
+ config_valid = True
+ for field_path in required_fields:
+ current = MEM0_CONFIG
+ try:
+ for key in field_path:
+ current = current[key]
+ print(f" ✅ {'.'.join(field_path)}: {current}")
+ except KeyError:
+ print(f" ❌ {'.'.join(field_path)}: Missing")
+ config_valid = False
+
+ # Test if we can create a Memory instance
+ print("\n🏗️ Testing Memory instance creation...")
+ try:
+ # This might fail due to connectivity, but should validate config structure
+ memory = Memory.from_config(MEM0_CONFIG)
+ print("✅ Memory instance created successfully")
+ return config_valid
+ except Exception as e:
+ if "connection" in str(e).lower() or "timeout" in str(e).lower():
+ print(f"⚠️ Memory instance creation failed due to connectivity: {e}")
+ return config_valid # Config is probably OK, just can't connect
+ else:
+ print(f"❌ Memory instance creation failed due to config: {e}")
+ return False
+
+ except Exception as e:
+ print(f"❌ Mem0 config validation error: {e}")
+ return False
+
+ async def test_memory_service_init(self):
+ """Test MemoryService initialization."""
+ try:
+ print("🚀 Testing MemoryService initialization...")
+
+ service = MemoryService()
+ print(f"📊 Initial state - initialized: {service._initialized}")
+
+ # Test initialization with timeout
+ start_time = time.time()
+ try:
+ await asyncio.wait_for(service.initialize(), timeout=30)
+ init_time = time.time() - start_time
+ print(f"✅ MemoryService initialized successfully in {init_time:.2f}s")
+ print(f"📊 Final state - initialized: {service._initialized}")
+ return True
+ except asyncio.TimeoutError:
+ print("❌ MemoryService initialization timed out after 30s")
+ return False
+
+ except Exception as e:
+ print(f"❌ MemoryService initialization error: {e}")
+ return False
+
+ def test_process_memory_init(self):
+ """Test process memory initialization (used in workers)."""
+ try:
+ print("🚀 Testing process memory initialization...")
+
+ process_memory = _init_process_memory()
+ if process_memory:
+ print("✅ Process memory initialized successfully")
+ return True
+ else:
+ print("❌ Process memory initialization returned None")
+ return False
+
+ except Exception as e:
+ print(f"❌ Process memory initialization error: {e}")
+ return False
+
+ def test_basic_memory_creation(self):
+ """Test basic memory creation functionality."""
+ try:
+ print("💾 Testing basic memory creation...")
+
+ # Test data
+ test_transcript = "Hello, this is a test conversation about planning a meeting for next week."
+ test_client_id = "test_client_123"
+ test_audio_uuid = f"test_audio_{int(time.time())}"
+ test_user_id = "test_user_456"
+ test_user_email = "test@example.com"
+
+ print("📝 Test data:")
+ print(f" Transcript: {test_transcript}")
+ print(f" Client ID: {test_client_id}")
+ print(f" Audio UUID: {test_audio_uuid}")
+ print(f" User ID: {test_user_id}")
+ print(f" User Email: {test_user_email}")
+
+ # Test the low-level function
+ result = _add_memory_to_store(
+ test_transcript,
+ test_client_id,
+ test_audio_uuid,
+ test_user_id,
+ test_user_email
+ )
+
+ if result:
+ print("✅ Basic memory creation successful")
+ return True
+ else:
+ print("❌ Basic memory creation failed")
+ return False
+
+ except Exception as e:
+ print(f"❌ Basic memory creation error: {e}")
+ return False
+
+ def test_action_item_extraction(self):
+ """Test action item extraction functionality."""
+ try:
+ print("📋 Testing action item extraction...")
+
+ # Test transcript with obvious action items
+ test_transcript = """
+ John: We need to schedule a meeting for next Tuesday to discuss the project.
+ Mary: I'll send you the agenda by tomorrow.
+ John: Great, and can you also review the budget document before the meeting?
+ Mary: Sure, I'll get that done by Monday.
+ """
+
+ test_client_id = "test_client_123"
+ test_audio_uuid = f"test_action_items_{int(time.time())}"
+
+ print("📝 Test transcript:")
+ print(f" {test_transcript.strip()}")
+
+ action_items = _extract_action_items_from_transcript(
+ test_transcript,
+ test_client_id,
+ test_audio_uuid
+ )
+
+ print(f"📋 Extracted {len(action_items)} action items:")
+ for i, item in enumerate(action_items, 1):
+ print(f" {i}. {item.get('description', 'No description')}")
+ print(f" Assignee: {item.get('assignee', 'unassigned')}")
+ print(f" Due: {item.get('due_date', 'not_specified')}")
+ print(f" Priority: {item.get('priority', 'not_specified')}")
+
+ if len(action_items) > 0:
+ print("✅ Action item extraction successful")
+ return True
+ else:
+ print("⚠️ No action items extracted (might be working correctly)")
+ return True # This might be correct behavior
+
+ except Exception as e:
+ print(f"❌ Action item extraction error: {e}")
+ return False
+
+ async def test_full_integration(self):
+ """Test the full integration flow."""
+ try:
+ print("🚀 Testing full integration flow...")
+
+ # Get the global memory service
+ service = get_memory_service()
+
+ # Test data
+ test_transcript = "This is a full integration test. We discussed planning a project review meeting and setting up the new development environment."
+ test_client_id = "integration_test_client"
+ test_audio_uuid = f"integration_test_{int(time.time())}"
+ test_user_id = "integration_test_user"
+ test_user_email = "integration@test.com"
+
+ print("📝 Integration test data:")
+ print(f" Transcript: {test_transcript}")
+ print(f" User: {test_user_email}")
+
+ # Test memory addition (high-level API)
+ print("💾 Testing high-level memory addition...")
+ memory_result = await service.add_memory(
+ test_transcript,
+ test_client_id,
+ test_audio_uuid,
+ test_user_id,
+ test_user_email
+ )
+
+ if memory_result:
+ print("✅ High-level memory addition successful")
+
+ # Test memory retrieval
+ print("🔍 Testing memory retrieval...")
+ try:
+ memories = service.get_all_memories(test_user_id, limit=10)
+ print(f"📊 Retrieved {len(memories)} memories for user")
+
+ # Look for our test memory
+ found_test_memory = False
+ for memory in memories:
+ if test_audio_uuid in str(memory.get('metadata', {})):
+ found_test_memory = True
+ print("✅ Found test memory in results")
+ break
+
+ if not found_test_memory:
+ print("⚠️ Test memory not found in retrieval results")
+
+ except Exception as retrieval_error:
+ print(f"⚠️ Memory retrieval failed: {retrieval_error}")
+
+ return True
+ else:
+ print("❌ High-level memory addition failed")
+ return False
+
+ except Exception as e:
+ print(f"❌ Full integration test error: {e}")
+ return False
+
+ def print_summary(self):
+ """Print test summary."""
+ print("\n" + "=" * 60)
+ print("📊 TEST SUMMARY")
+ print("=" * 60)
+
+ passed = sum(1 for result in self.test_results.values() if result)
+ total = len(self.test_results)
+
+ for test_name, result in self.test_results.items():
+ status = "✅ PASS" if result else "❌ FAIL"
+ print(f"{status} {test_name}")
+
+ print(f"\n🎯 Overall Result: {passed}/{total} tests passed")
+
+ if passed == total:
+ print("🎉 All tests passed! Memory service should be working correctly.")
+ else:
+ print("\n🔧 RECOMMENDATIONS:")
+
+ # Specific recommendations based on failures
+ if not self.test_results.get("Ollama Connectivity", True):
+ print(" 1. Update OLLAMA_BASE_URL environment variable to: http://192.168.0.110:11434")
+ print(" Add this to your docker-compose.yml environment section.")
+
+ if not self.test_results.get("Ollama Models", True):
+ print(" 2. Pull required Ollama models:")
+ print(" ollama pull llama3.1:latest")
+ print(" ollama pull nomic-embed-text:latest")
+
+ if not self.test_results.get("Qdrant Connectivity", True):
+ print(" 3. Ensure Qdrant is running and accessible")
+ print(" Check docker-compose logs for qdrant service")
+
+ if not self.test_results.get("Memory Service Initialization", True):
+ print(" 4. Memory service initialization failed - check Ollama and Qdrant connectivity")
+
+ print("\n 🔄 After making changes, restart your services:")
+ print(" docker-compose restart friend-backend")
+
+def main():
+ """Main test function."""
+ print("🔬 Memory Service Diagnostic Tool")
+ print("This tool will help identify why memories aren't being created.")
+ print()
+
+ # Check if we're running in the right directory
+ if not Path("src/memory").exists():
+ print("❌ Please run this from the backends/advanced-backend directory")
+ print(" cd backends/advanced-backend")
+ print(" python tests/test_memory_service.py")
+ sys.exit(1)
+
+ tester = MemoryServiceTester()
+ asyncio.run(tester.run_all_tests())
+
+if __name__ == "__main__":
+ main()
\ No newline at end of file
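The `test_qdrant_connectivity` check above probes a list of candidate base URLs and settles on the first one whose `/health` endpoint answers 200. A minimal sketch of that pattern, with the HTTP call injected as a function so it runs without a live Qdrant (hostnames here are illustrative, not from the project config):

```python
# Sketch of the probe-candidates pattern used in test_qdrant_connectivity.
# `fetch` is injected so the sketch is runnable without a live Qdrant.
def probe_first_healthy(candidates, fetch):
    """Return the first base URL whose /health endpoint answers 200, else None."""
    for url in candidates:
        try:
            if fetch(f"{url}/health") == 200:
                return url
        except ConnectionError:
            continue  # mirror the script: skip unreachable candidates
    return None

# Stub fetch: only the localhost candidate is "up".
def fake_fetch(url):
    if url.startswith("http://localhost:6333"):
        return 200
    raise ConnectionError("connection refused")

found = probe_first_healthy(
    ["http://qdrant:6333", "http://localhost:6333"], fake_fetch
)
print(found)  # http://localhost:6333
```

Injecting the fetch callable keeps the ordering logic testable; the real script inlines `requests.get` with a short timeout instead.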
diff --git a/backends/advanced-backend/tests/test_memory_service_fixed.py b/backends/advanced-backend/tests/test_memory_service_fixed.py
new file mode 100644
index 00000000..9ae7edba
--- /dev/null
+++ b/backends/advanced-backend/tests/test_memory_service_fixed.py
@@ -0,0 +1,289 @@
+#!/usr/bin/env python3
+"""
+Fixed version of memory service test using public API.
+
+This script tests:
+1. Memory service initialization via public API
+2. Memory creation through proper channels
+3. Memory retrieval and search
+
+Run this from the backend directory:
+python tests/test_memory_service_fixed.py
+"""
+
+import asyncio
+import logging
+import os
+import sys
+import time
+from pathlib import Path
+
+# Load environment variables from .env file
+from dotenv import load_dotenv
+load_dotenv()
+
+# Add src to path so we can import modules
+sys.path.insert(0, str(Path(__file__).parent.parent / "src"))
+
+import requests
+
+# Import public API only
+try:
+ from memory import (
+ MemoryService,
+ get_memory_service,
+ init_memory_config,
+ )
+ print("✅ Successfully imported memory service modules")
+except ImportError as e:
+ print(f"❌ Failed to import memory service modules: {e}")
+ sys.exit(1)
+
+# Configure logging
+logging.basicConfig(
+ level=logging.INFO,
+ format='%(asctime)s | %(levelname)-8s | %(name)-20s | %(message)s'
+)
+logger = logging.getLogger("memory_test")
+
+class MemoryServiceTester:
+ """Memory service tester using public API."""
+
+ def __init__(self):
+ self.ollama_url = os.getenv("OLLAMA_BASE_URL", "http://ollama:11434")
+ self.qdrant_url = os.getenv("QDRANT_BASE_URL", "qdrant")
+ self.test_results = {}
+
+ async def run_all_tests(self):
+ """Run all tests in sequence."""
+ print("🧪 Starting Memory Service Tests (Public API)")
+ print("=" * 60)
+
+ tests = [
+ ("Configuration Check", self.test_configuration),
+ ("External Dependencies", self.test_external_dependencies),
+ ("Memory Service Initialization", self.test_memory_service_init),
+ ("Memory Creation (Public API)", self.test_memory_creation),
+ ("Memory Retrieval", self.test_memory_retrieval),
+ ("Action Items", self.test_action_items),
+ ]
+
+ for test_name, test_func in tests:
+ print(f"\n🔍 Running: {test_name}")
+ print("-" * 40)
+ try:
+ if asyncio.iscoroutinefunction(test_func):
+ result = await test_func()
+ else:
+ result = test_func()
+ self.test_results[test_name] = result
+ status = "✅ PASS" if result else "❌ FAIL"
+ print(f"{status}: {test_name}")
+ except Exception as e:
+ self.test_results[test_name] = False
+ print(f"❌ ERROR in {test_name}: {e}")
+ logger.exception(f"Test failed: {test_name}")
+
+ self.print_summary()
+
+ def test_configuration(self):
+ """Test configuration setup."""
+ print("📋 Environment Configuration:")
+ print(f" OLLAMA_BASE_URL: {self.ollama_url}")
+ print(f" QDRANT_BASE_URL: {self.qdrant_url}")
+
+ # Test config initialization
+ try:
+ config = init_memory_config(
+ ollama_base_url=self.ollama_url,
+ qdrant_base_url=self.qdrant_url
+ )
+ print("✅ Memory config initialized successfully")
+ return True
+ except Exception as e:
+ print(f"❌ Config initialization failed: {e}")
+ return False
+
+ def test_external_dependencies(self):
+ """Test external service connectivity."""
+ results = []
+
+ # Test Ollama
+ try:
+ print("🔍 Testing Ollama connectivity...")
+ response = requests.get(f"{self.ollama_url}/api/version", timeout=10)
+ if response.status_code == 200:
+ print("✅ Ollama accessible")
+ results.append(True)
+ else:
+ print(f"❌ Ollama returned {response.status_code}")
+ results.append(False)
+ except Exception as e:
+ print(f"❌ Ollama connection failed: {e}")
+ results.append(False)
+
+ # Test Qdrant
+ try:
+ print("🔍 Testing Qdrant connectivity...")
+ # Try different possible URLs
+ qdrant_urls = [
+ f"http://{self.qdrant_url}:6333",
+ "http://localhost:6333",
+ "http://192.168.0.110:6333"
+ ]
+
+ qdrant_accessible = False
+ for url in qdrant_urls:
+ try:
+ response = requests.get(f"{url}/health", timeout=5)
+ if response.status_code == 200:
+ print(f"✅ Qdrant accessible at {url}")
+ qdrant_accessible = True
+ break
+ except requests.exceptions.RequestException:
+ continue
+
+ if not qdrant_accessible:
+ print("❌ Qdrant not accessible on any URL")
+ results.append(qdrant_accessible)
+
+ except Exception as e:
+ print(f"❌ Qdrant connection test failed: {e}")
+ results.append(False)
+
+ return all(results)
+
+ async def test_memory_service_init(self):
+ """Test MemoryService initialization through public API."""
+ try:
+ print("🚀 Testing MemoryService initialization...")
+
+ # Get the global memory service
+ service = get_memory_service()
+ print(f"📊 Service obtained: {type(service).__name__}")
+
+ # Test connection
+ connection_ok = await service.test_connection()
+ print(f"🔍 Connection test: {'✅ OK' if connection_ok else '❌ Failed'}")
+
+ return connection_ok
+
+ except Exception as e:
+ print(f"❌ MemoryService initialization error: {e}")
+ return False
+
+ async def test_memory_creation(self):
+ """Test memory creation through public API."""
+ try:
+ print("💾 Testing memory creation...")
+
+ # Get service
+ service = get_memory_service()
+
+ # Test data
+ test_transcript = "This is a test conversation about planning a project meeting for next week."
+ test_client_id = "test_client_123"
+ test_audio_uuid = f"test_audio_{int(time.time())}"
+ test_user_id = "test_user_456"
+ test_user_email = "test@example.com"
+
+ print(f"📝 Creating memory for: {test_user_email}")
+
+ # Create memory using public API
+ result = await service.add_memory(
+ test_transcript,
+ test_client_id,
+ test_audio_uuid,
+ test_user_id,
+ test_user_email
+ )
+
+ if result:
+ print("✅ Memory creation successful")
+ return True
+ else:
+ print("❌ Memory creation failed")
+ return False
+
+ except Exception as e:
+ print(f"❌ Memory creation error: {e}")
+ return False
+
+ def test_memory_retrieval(self):
+ """Test memory retrieval."""
+ try:
+ print("🔍 Testing memory retrieval...")
+
+ service = get_memory_service()
+ test_user_id = "test_user_456"
+
+ # Get memories
+ memories = service.get_all_memories(test_user_id, limit=10)
+ print(f"📊 Retrieved {len(memories)} memories")
+
+ # Test search
+ search_results = service.search_memories("test conversation", test_user_id, limit=5)
+ print(f"🔍 Search returned {len(search_results)} results")
+
+ return True
+
+ except Exception as e:
+ print(f"❌ Memory retrieval error: {e}")
+ return False
+
+ def test_action_items(self):
+ """Test action item functionality."""
+ try:
+ print("📋 Testing action items...")
+
+ service = get_memory_service()
+ test_user_id = "test_user_456"
+
+ # Get action items
+ action_items = service.get_action_items(test_user_id, limit=10)
+ print(f"📊 Retrieved {len(action_items)} action items")
+
+ return True
+
+ except Exception as e:
+ print(f"❌ Action items test error: {e}")
+ return False
+
+ def print_summary(self):
+ """Print test summary."""
+ print("\n" + "=" * 60)
+ print("📊 TEST SUMMARY")
+ print("=" * 60)
+
+ passed = sum(1 for result in self.test_results.values() if result)
+ total = len(self.test_results)
+
+ for test_name, result in self.test_results.items():
+ status = "✅ PASS" if result else "❌ FAIL"
+ print(f"{status} {test_name}")
+
+ print(f"\n🎯 Overall Result: {passed}/{total} tests passed")
+
+ if passed == total:
+ print("🎉 All tests passed! Memory service is working correctly.")
+ else:
+ print("\n🔧 Issues found - check the failing tests above.")
+
+def main():
+ """Main test function."""
+ print("🔬 Memory Service Test (Public API)")
+ print("This tests the memory service using its intended public interface.")
+ print()
+
+ # Check if we're running in the right directory
+ if not Path("src/memory").exists():
+ print("❌ Please run this from the backends/advanced-backend directory")
+ print(" cd backends/advanced-backend")
+ print(" python tests/test_memory_service_fixed.py")
+ sys.exit(1)
+
+ tester = MemoryServiceTester()
+ asyncio.run(tester.run_all_tests())
+
+if __name__ == "__main__":
+ main()
\ No newline at end of file
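The `run_all_tests` loop above dispatches sync and async test callables uniformly via `asyncio.iscoroutinefunction`. A minimal standalone sketch of that dispatch, using only the standard library (the test names and callables are illustrative):

```python
import asyncio

async def run_tests(tests):
    # Mirror of the runner loop: await coroutine functions, call plain ones.
    results = {}
    for name, func in tests:
        if asyncio.iscoroutinefunction(func):
            results[name] = bool(await func())
        else:
            results[name] = bool(func())
    return results

def sync_check():
    return True

async def async_check():
    return False

results = asyncio.run(run_tests([("sync", sync_check), ("async", async_check)]))
print(results)  # {'sync': True, 'async': False}
```

The real scripts additionally wrap each call in try/except so one failing test cannot abort the whole run.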
diff --git a/backends/advanced-backend/upload_files.py b/backends/advanced-backend/upload_files.py
new file mode 100755
index 00000000..05f61f79
--- /dev/null
+++ b/backends/advanced-backend/upload_files.py
@@ -0,0 +1,235 @@
+#!/usr/bin/env python3
+"""
+Upload audio files to the Friend-Lite backend for processing.
+"""
+
+import os
+import sys
+import requests
+from pathlib import Path
+from typing import Optional
+
+
+def load_env_variables() -> Optional[str]:
+ """Load ADMIN_PASSWORD from .env file."""
+ env_file = Path(".env")
+ if not env_file.exists():
+ print("❌ .env file not found. Please create it with ADMIN_PASSWORD.")
+ return None
+
+ admin_password = None
+ with open(env_file, 'r') as f:
+ for line in f:
+ line = line.strip()
+ if line.startswith('ADMIN_PASSWORD='):
+ admin_password = line.split('=', 1)[1].strip('"\'')
+ break
+
+ if not admin_password:
+ print("❌ ADMIN_PASSWORD not found in .env file.")
+ return None
+
+ return admin_password
+
+
+def get_admin_token(password: str, base_url: str = "http://localhost:8000") -> Optional[str]:
+ """Authenticate and get admin token."""
+ print("🔐 Requesting admin token...")
+
+ auth_url = f"{base_url}/auth/jwt/login"
+
+ try:
+ response = requests.post(
+ auth_url,
+ data={
+ 'username': 'admin@example.com',
+ 'password': password
+ },
+ headers={
+ 'Content-Type': 'application/x-www-form-urlencoded'
+ },
+ timeout=10
+ )
+
+ print(f"📊 Auth response status: {response.status_code}")
+
+ if response.status_code == 200:
+ data = response.json()
+ token = data.get('access_token')
+ if token:
+ print("✅ Admin token obtained.")
+ return token
+ else:
+ print("❌ No access token in response.")
+ print(f"Available fields: {list(data.keys())}")
+ return None
+ else:
+ print(f"❌ Authentication failed with status {response.status_code}")
+ try:
+ error_data = response.json()
+ print(f"Error details: {error_data}")
+ except ValueError:
+ print(f"Response text: {response.text}")
+ return None
+
+ except requests.exceptions.RequestException as e:
+ print(f"❌ Request failed: {e}")
+ return None
+
+
+def collect_wav_files(audio_dir: str, filter_list: Optional[list[str]] = None) -> list[str]:
+ """Collect all .wav files from the specified directory."""
+ print(f"📂 Collecting .wav files from {audio_dir} ...")
+
+ audio_path = Path(audio_dir).expanduser()
+ if not audio_path.exists():
+ print(f"❌ Directory {audio_path} does not exist.")
+ return []
+
+ wav_files = list(audio_path.glob("*.wav"))
+
+ if not wav_files:
+ print(f"⚠️ No .wav files found in {audio_path}")
+ return []
+
+ # Filter files if filter_list is provided, otherwise accept all
+ if filter_list is None:
+ selected_files = wav_files
+ else:
+ selected_files = []
+ for f in wav_files:
+ if f.name in filter_list:
+ selected_files.append(f)
+ else:
+ print(f" ⏭️ Skipping file (not in filter): {f.name}")
+
+ print(f"📦 Total files to upload: {len(selected_files)}")
+ for file_path in selected_files:
+ print(f" ✓ Added file: {file_path}")
+
+ return [str(f) for f in selected_files]
+
+
+def upload_files(files: list[str], token: str, base_url: str = "http://localhost:8000") -> bool:
+ """Upload files to the backend for processing."""
+ if not files:
+ print("❌ No files to upload.")
+ return False
+
+ print(f"🚀 Uploading files to {base_url}/api/process-audio-files ...")
+
+ # Prepare files for upload
+ files_data = []
+ for file_path in files:
+ try:
+ files_data.append(('files', (os.path.basename(file_path), open(file_path, 'rb'), 'audio/wav')))
+ except IOError as e:
+ print(f"❌ Error opening file {file_path}: {e}")
+ continue
+
+ if not files_data:
+ print("❌ No files could be opened for upload.")
+ return False
+
+ try:
+ response = requests.post(
+ f"{base_url}/api/process-audio-files",
+ files=files_data,
+ data={'device_name': 'file_upload_batch'},
+ headers={
+ 'Authorization': f'Bearer {token}'
+ },
+ timeout=300 # 5 minutes timeout for large uploads
+ )
+
+ # Close all file handles
+ for _, file_tuple in files_data:
+ file_tuple[1].close()
+
+ print(f"📤 Upload response status: {response.status_code}")
+
+ if response.status_code == 200:
+ print("✅ File upload completed successfully.")
+ try:
+ result = response.json()
+ print(f"📄 Response: {result}")
+ except ValueError:
+ print(f"📄 Response: {response.text}")
+ return True
+ else:
+ print(f"❌ File upload failed with status {response.status_code}")
+ try:
+ error_data = response.json()
+ print(f"Error details: {error_data}")
+ except ValueError:
+ print(f"Response text: {response.text}")
+ return False
+
+ except requests.exceptions.Timeout:
+ print("❌ Upload request timed out.")
+ return False
+ except requests.exceptions.RequestException as e:
+ print(f"❌ Upload request failed: {e}")
+ return False
+ finally:
+ # Ensure all file handles are closed
+ for _, file_tuple in files_data:
+ try:
+ file_tuple[1].close()
+ except Exception:
+ pass
+
+
+def main():
+ """Main function to orchestrate the upload process."""
+ print("🎵 Friend-Lite Audio File Upload Tool")
+ print("=" * 40)
+
+ # Load environment variables
+ admin_password = load_env_variables()
+ if not admin_password:
+ sys.exit(1)
+
+ # Get admin token
+ token = get_admin_token(admin_password)
+ if not token:
+ sys.exit(1)
+ # Optionally target one specific file in audio_chunks/; "none" falls back to a directory scan
+ specific_file = "none"
+
+ # Check backends/advanced-backend/audio_chunks/ first
+ backend_audio_dir = "./audio_chunks/"
+ audio_dir_path = Path(backend_audio_dir)
+ specific_file_path = audio_dir_path / specific_file
+
+ if specific_file_path.exists():
+ wav_files = [str(specific_file_path)]
+ print(f"📦 Found specific test file: {specific_file_path}")
+ else:
+ # Fallback to original directory
+ audio_dir = os.path.expanduser("~/Some dir/")
+ # Optionally pass a filter_list of filenames to restrict which files are uploaded
+ wav_files = collect_wav_files(audio_dir, filter_list=None)
+ if not wav_files:
+ sys.exit(1)
+
+ if not wav_files:
+ print("❌ None of the test files were found")
+ sys.exit(1)
+
+ print(f"🧪 Testing with {len(wav_files)} files:")
+ for f in wav_files:
+ print(f" - {os.path.basename(f)}")
+
+ success = upload_files(wav_files, token)
+
+ if success:
+ print("\n🎉 Upload process completed successfully!")
+ sys.exit(0)
+ else:
+ print("\n❌ Upload process failed.")
+ sys.exit(1)
+
+
+if __name__ == "__main__":
+ main()
\ No newline at end of file
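The auth-then-upload flow in `upload_files.py` can be previewed without a running backend by preparing (not sending) the request. The endpoint, multipart field names, and `Authorization` header mirror the script above; the token and WAV payload are placeholders:

```python
import io

import requests

def build_upload_request(token, base_url="http://localhost:8000"):
    # Same request shape as upload_files(): multipart "files" entries
    # plus a "device_name" form field and a bearer token header.
    fake_wav = io.BytesIO(b"RIFF\x00\x00\x00\x00WAVEfmt ")  # placeholder payload
    req = requests.Request(
        "POST",
        f"{base_url}/api/process-audio-files",
        files=[("files", ("sample.wav", fake_wav, "audio/wav"))],
        data={"device_name": "file_upload_batch"},
        headers={"Authorization": f"Bearer {token}"},
    )
    return req.prepare()

prepared = build_upload_request("dummy-token")
print(prepared.method, prepared.url)
```

`PreparedRequest` exposes the final method, URL, and headers (including the generated `multipart/form-data` boundary), which makes the wire format inspectable before committing to a real upload.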
diff --git a/backends/advanced-backend/uv.lock b/backends/advanced-backend/uv.lock
index ca43c517..f2f4c4ca 100644
--- a/backends/advanced-backend/uv.lock
+++ b/backends/advanced-backend/uv.lock
@@ -8,27 +8,31 @@ resolution-markers = [
[[package]]
name = "advanced-omi-backend"
version = "0.1.0"
-source = { virtual = "." }
+source = { editable = "." }
dependencies = [
{ name = "aiohttp" },
{ name = "easy-audio-interfaces" },
{ name = "fastapi" },
+ { name = "fastapi-users", extra = ["beanie"] },
{ name = "mem0ai" },
{ name = "motor" },
{ name = "ollama" },
{ name = "omi-sdk" },
{ name = "python-dotenv" },
+ { name = "pyyaml" },
{ name = "uvicorn" },
{ name = "wyoming" },
]
-[package.dev-dependencies]
+[package.optional-dependencies]
deepgram = [
{ name = "deepgram-sdk" },
]
dev = [
{ name = "black" },
{ name = "isort" },
+ { name = "pytest" },
+ { name = "pytest-asyncio" },
]
tests = [
{ name = "pytest" },
@@ -41,29 +45,27 @@ webui = [
[package.metadata]
requires-dist = [
{ name = "aiohttp", specifier = ">=3.8.0" },
- { name = "easy-audio-interfaces", specifier = ">=0.5.1" },
+ { name = "black", marker = "extra == 'dev'", specifier = ">=25.1.0" },
+ { name = "deepgram-sdk", marker = "extra == 'deepgram'", specifier = ">=4.0.0" },
+ { name = "easy-audio-interfaces", specifier = ">=0.7.1" },
{ name = "fastapi", specifier = ">=0.115.12" },
- { name = "mem0ai", specifier = ">=0.1.111" },
+ { name = "fastapi-users", extras = ["beanie"], specifier = ">=14.0.1" },
+ { name = "isort", marker = "extra == 'dev'", specifier = ">=6.0.1" },
+ { name = "mem0ai", specifier = ">=0.1.114" },
{ name = "motor", specifier = ">=3.7.1" },
{ name = "ollama", specifier = ">=0.4.8" },
{ name = "omi-sdk", specifier = ">=0.1.5" },
+ { name = "pytest", marker = "extra == 'dev'", specifier = ">=8.4.1" },
+ { name = "pytest", marker = "extra == 'tests'", specifier = ">=8.4.1" },
+ { name = "pytest-asyncio", marker = "extra == 'dev'", specifier = ">=1.0.0" },
+ { name = "pytest-asyncio", marker = "extra == 'tests'", specifier = ">=1.0.0" },
{ name = "python-dotenv", specifier = ">=1.1.0" },
+ { name = "pyyaml", specifier = ">=6.0.1" },
+ { name = "streamlit", marker = "extra == 'webui'", specifier = ">=1.45.1" },
{ name = "uvicorn", specifier = ">=0.34.2" },
{ name = "wyoming", specifier = ">=1.6.1" },
]
-[package.metadata.requires-dev]
-deepgram = [{ name = "deepgram-sdk", specifier = ">=4.0.0" }]
-dev = [
- { name = "black", specifier = ">=25.1.0" },
- { name = "isort", specifier = ">=6.0.1" },
-]
-tests = [
- { name = "pytest", specifier = ">=8.4.1" },
- { name = "pytest-asyncio", specifier = ">=1.0.0" },
-]
-webui = [{ name = "streamlit", specifier = ">=1.45.1" }]
-
[[package]]
name = "aenum"
version = "3.1.16"
@@ -192,6 +194,39 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/a1/ee/48ca1a7c89ffec8b6a0c5d02b89c305671d5ffd8d3c94acf8b8c408575bb/anyio-4.9.0-py3-none-any.whl", hash = "sha256:9f76d541cad6e36af7beb62e978876f3b41e3e04f2c1fbf0884604c0a9c4d93c", size = 100916 },
]
+[[package]]
+name = "argon2-cffi"
+version = "23.1.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+ { name = "argon2-cffi-bindings" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/31/fa/57ec2c6d16ecd2ba0cf15f3c7d1c3c2e7b5fcb83555ff56d7ab10888ec8f/argon2_cffi-23.1.0.tar.gz", hash = "sha256:879c3e79a2729ce768ebb7d36d4609e3a78a4ca2ec3a9f12286ca057e3d0db08", size = 42798 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/a4/6a/e8a041599e78b6b3752da48000b14c8d1e8a04ded09c88c714ba047f34f5/argon2_cffi-23.1.0-py3-none-any.whl", hash = "sha256:c670642b78ba29641818ab2e68bd4e6a78ba53b7eff7b4c3815ae16abf91c7ea", size = 15124 },
+]
+
+[[package]]
+name = "argon2-cffi-bindings"
+version = "21.2.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+ { name = "cffi" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/b9/e9/184b8ccce6683b0aa2fbb7ba5683ea4b9c5763f1356347f1312c32e3c66e/argon2-cffi-bindings-21.2.0.tar.gz", hash = "sha256:bb89ceffa6c791807d1305ceb77dbfacc5aa499891d2c55661c6459651fc39e3", size = 1779911 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/d4/13/838ce2620025e9666aa8f686431f67a29052241692a3dd1ae9d3692a89d3/argon2_cffi_bindings-21.2.0-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:ccb949252cb2ab3a08c02024acb77cfb179492d5701c7cbdbfd776124d4d2367", size = 29658 },
+ { url = "https://files.pythonhosted.org/packages/b3/02/f7f7bb6b6af6031edb11037639c697b912e1dea2db94d436e681aea2f495/argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9524464572e12979364b7d600abf96181d3541da11e23ddf565a32e70bd4dc0d", size = 80583 },
+ { url = "https://files.pythonhosted.org/packages/ec/f7/378254e6dd7ae6f31fe40c8649eea7d4832a42243acaf0f1fff9083b2bed/argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b746dba803a79238e925d9046a63aa26bf86ab2a2fe74ce6b009a1c3f5c8f2ae", size = 86168 },
+ { url = "https://files.pythonhosted.org/packages/74/f6/4a34a37a98311ed73bb80efe422fed95f2ac25a4cacc5ae1d7ae6a144505/argon2_cffi_bindings-21.2.0-cp36-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:58ed19212051f49a523abb1dbe954337dc82d947fb6e5a0da60f7c8471a8476c", size = 82709 },
+ { url = "https://files.pythonhosted.org/packages/74/2b/73d767bfdaab25484f7e7901379d5f8793cccbb86c6e0cbc4c1b96f63896/argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:bd46088725ef7f58b5a1ef7ca06647ebaf0eb4baff7d1d0d177c6cc8744abd86", size = 83613 },
+ { url = "https://files.pythonhosted.org/packages/4f/fd/37f86deef67ff57c76f137a67181949c2d408077e2e3dd70c6c42912c9bf/argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_i686.whl", hash = "sha256:8cd69c07dd875537a824deec19f978e0f2078fdda07fd5c42ac29668dda5f40f", size = 84583 },
+ { url = "https://files.pythonhosted.org/packages/6f/52/5a60085a3dae8fded8327a4f564223029f5f54b0cb0455a31131b5363a01/argon2_cffi_bindings-21.2.0-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:f1152ac548bd5b8bcecfb0b0371f082037e47128653df2e8ba6e914d384f3c3e", size = 88475 },
+ { url = "https://files.pythonhosted.org/packages/8b/95/143cd64feb24a15fa4b189a3e1e7efbaeeb00f39a51e99b26fc62fbacabd/argon2_cffi_bindings-21.2.0-cp36-abi3-win32.whl", hash = "sha256:603ca0aba86b1349b147cab91ae970c63118a0f30444d4bc80355937c950c082", size = 27698 },
+ { url = "https://files.pythonhosted.org/packages/37/2c/e34e47c7dee97ba6f01a6203e0383e15b60fb85d78ac9a15cd066f6fe28b/argon2_cffi_bindings-21.2.0-cp36-abi3-win_amd64.whl", hash = "sha256:b2ef1c30440dbbcba7a5dc3e319408b59676e2e039e2ae11a8775ecf482b192f", size = 30817 },
+ { url = "https://files.pythonhosted.org/packages/5a/e4/bf8034d25edaa495da3c8a3405627d2e35758e44ff6eaa7948092646fdcc/argon2_cffi_bindings-21.2.0-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:e415e3f62c8d124ee16018e491a009937f8cf7ebf5eb430ffc5de21b900dad93", size = 53104 },
+]
+
[[package]]
name = "attrs"
version = "25.3.0"
@@ -210,6 +245,72 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/df/73/b6e24bd22e6720ca8ee9a85a0c4a2971af8497d8f3193fa05390cbd46e09/backoff-2.2.1-py3-none-any.whl", hash = "sha256:63579f9a0628e06278f7e47b7d7d5b6ce20dc65c5e96a6f3ca99a6adca0396e8", size = 15148 },
]
+[[package]]
+name = "bcrypt"
+version = "4.3.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/bb/5d/6d7433e0f3cd46ce0b43cd65e1db465ea024dbb8216fb2404e919c2ad77b/bcrypt-4.3.0.tar.gz", hash = "sha256:3a3fd2204178b6d2adcf09cb4f6426ffef54762577a7c9b54c159008cb288c18", size = 25697 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/bf/2c/3d44e853d1fe969d229bd58d39ae6902b3d924af0e2b5a60d17d4b809ded/bcrypt-4.3.0-cp313-cp313t-macosx_10_12_universal2.whl", hash = "sha256:f01e060f14b6b57bbb72fc5b4a83ac21c443c9a2ee708e04a10e9192f90a6281", size = 483719 },
+ { url = "https://files.pythonhosted.org/packages/a1/e2/58ff6e2a22eca2e2cff5370ae56dba29d70b1ea6fc08ee9115c3ae367795/bcrypt-4.3.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c5eeac541cefd0bb887a371ef73c62c3cd78535e4887b310626036a7c0a817bb", size = 272001 },
+ { url = "https://files.pythonhosted.org/packages/37/1f/c55ed8dbe994b1d088309e366749633c9eb90d139af3c0a50c102ba68a1a/bcrypt-4.3.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:59e1aa0e2cd871b08ca146ed08445038f42ff75968c7ae50d2fdd7860ade2180", size = 277451 },
+ { url = "https://files.pythonhosted.org/packages/d7/1c/794feb2ecf22fe73dcfb697ea7057f632061faceb7dcf0f155f3443b4d79/bcrypt-4.3.0-cp313-cp313t-manylinux_2_28_aarch64.whl", hash = "sha256:0042b2e342e9ae3d2ed22727c1262f76cc4f345683b5c1715f0250cf4277294f", size = 272792 },
+ { url = "https://files.pythonhosted.org/packages/13/b7/0b289506a3f3598c2ae2bdfa0ea66969812ed200264e3f61df77753eee6d/bcrypt-4.3.0-cp313-cp313t-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:74a8d21a09f5e025a9a23e7c0fd2c7fe8e7503e4d356c0a2c1486ba010619f09", size = 289752 },
+ { url = "https://files.pythonhosted.org/packages/dc/24/d0fb023788afe9e83cc118895a9f6c57e1044e7e1672f045e46733421fe6/bcrypt-4.3.0-cp313-cp313t-manylinux_2_28_x86_64.whl", hash = "sha256:0142b2cb84a009f8452c8c5a33ace5e3dfec4159e7735f5afe9a4d50a8ea722d", size = 277762 },
+ { url = "https://files.pythonhosted.org/packages/e4/38/cde58089492e55ac4ef6c49fea7027600c84fd23f7520c62118c03b4625e/bcrypt-4.3.0-cp313-cp313t-manylinux_2_34_aarch64.whl", hash = "sha256:12fa6ce40cde3f0b899729dbd7d5e8811cb892d31b6f7d0334a1f37748b789fd", size = 272384 },
+ { url = "https://files.pythonhosted.org/packages/de/6a/d5026520843490cfc8135d03012a413e4532a400e471e6188b01b2de853f/bcrypt-4.3.0-cp313-cp313t-manylinux_2_34_x86_64.whl", hash = "sha256:5bd3cca1f2aa5dbcf39e2aa13dd094ea181f48959e1071265de49cc2b82525af", size = 277329 },
+ { url = "https://files.pythonhosted.org/packages/b3/a3/4fc5255e60486466c389e28c12579d2829b28a527360e9430b4041df4cf9/bcrypt-4.3.0-cp313-cp313t-musllinux_1_1_aarch64.whl", hash = "sha256:335a420cfd63fc5bc27308e929bee231c15c85cc4c496610ffb17923abf7f231", size = 305241 },
+ { url = "https://files.pythonhosted.org/packages/c7/15/2b37bc07d6ce27cc94e5b10fd5058900eb8fb11642300e932c8c82e25c4a/bcrypt-4.3.0-cp313-cp313t-musllinux_1_1_x86_64.whl", hash = "sha256:0e30e5e67aed0187a1764911af023043b4542e70a7461ad20e837e94d23e1d6c", size = 309617 },
+ { url = "https://files.pythonhosted.org/packages/5f/1f/99f65edb09e6c935232ba0430c8c13bb98cb3194b6d636e61d93fe60ac59/bcrypt-4.3.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:3b8d62290ebefd49ee0b3ce7500f5dbdcf13b81402c05f6dafab9a1e1b27212f", size = 335751 },
+ { url = "https://files.pythonhosted.org/packages/00/1b/b324030c706711c99769988fcb694b3cb23f247ad39a7823a78e361bdbb8/bcrypt-4.3.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:2ef6630e0ec01376f59a006dc72918b1bf436c3b571b80fa1968d775fa02fe7d", size = 355965 },
+ { url = "https://files.pythonhosted.org/packages/aa/dd/20372a0579dd915dfc3b1cd4943b3bca431866fcb1dfdfd7518c3caddea6/bcrypt-4.3.0-cp313-cp313t-win32.whl", hash = "sha256:7a4be4cbf241afee43f1c3969b9103a41b40bcb3a3f467ab19f891d9bc4642e4", size = 155316 },
+ { url = "https://files.pythonhosted.org/packages/6d/52/45d969fcff6b5577c2bf17098dc36269b4c02197d551371c023130c0f890/bcrypt-4.3.0-cp313-cp313t-win_amd64.whl", hash = "sha256:5c1949bf259a388863ced887c7861da1df681cb2388645766c89fdfd9004c669", size = 147752 },
+ { url = "https://files.pythonhosted.org/packages/11/22/5ada0b9af72b60cbc4c9a399fdde4af0feaa609d27eb0adc61607997a3fa/bcrypt-4.3.0-cp38-abi3-macosx_10_12_universal2.whl", hash = "sha256:f81b0ed2639568bf14749112298f9e4e2b28853dab50a8b357e31798686a036d", size = 498019 },
+ { url = "https://files.pythonhosted.org/packages/b8/8c/252a1edc598dc1ce57905be173328eda073083826955ee3c97c7ff5ba584/bcrypt-4.3.0-cp38-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:864f8f19adbe13b7de11ba15d85d4a428c7e2f344bac110f667676a0ff84924b", size = 279174 },
+ { url = "https://files.pythonhosted.org/packages/29/5b/4547d5c49b85f0337c13929f2ccbe08b7283069eea3550a457914fc078aa/bcrypt-4.3.0-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3e36506d001e93bffe59754397572f21bb5dc7c83f54454c990c74a468cd589e", size = 283870 },
+ { url = "https://files.pythonhosted.org/packages/be/21/7dbaf3fa1745cb63f776bb046e481fbababd7d344c5324eab47f5ca92dd2/bcrypt-4.3.0-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:842d08d75d9fe9fb94b18b071090220697f9f184d4547179b60734846461ed59", size = 279601 },
+ { url = "https://files.pythonhosted.org/packages/6d/64/e042fc8262e971347d9230d9abbe70d68b0a549acd8611c83cebd3eaec67/bcrypt-4.3.0-cp38-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:7c03296b85cb87db865d91da79bf63d5609284fc0cab9472fdd8367bbd830753", size = 297660 },
+ { url = "https://files.pythonhosted.org/packages/50/b8/6294eb84a3fef3b67c69b4470fcdd5326676806bf2519cda79331ab3c3a9/bcrypt-4.3.0-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:62f26585e8b219cdc909b6a0069efc5e4267e25d4a3770a364ac58024f62a761", size = 284083 },
+ { url = "https://files.pythonhosted.org/packages/62/e6/baff635a4f2c42e8788fe1b1633911c38551ecca9a749d1052d296329da6/bcrypt-4.3.0-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:beeefe437218a65322fbd0069eb437e7c98137e08f22c4660ac2dc795c31f8bb", size = 279237 },
+ { url = "https://files.pythonhosted.org/packages/39/48/46f623f1b0c7dc2e5de0b8af5e6f5ac4cc26408ac33f3d424e5ad8da4a90/bcrypt-4.3.0-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:97eea7408db3a5bcce4a55d13245ab3fa566e23b4c67cd227062bb49e26c585d", size = 283737 },
+ { url = "https://files.pythonhosted.org/packages/49/8b/70671c3ce9c0fca4a6cc3cc6ccbaa7e948875a2e62cbd146e04a4011899c/bcrypt-4.3.0-cp38-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:191354ebfe305e84f344c5964c7cd5f924a3bfc5d405c75ad07f232b6dffb49f", size = 312741 },
+ { url = "https://files.pythonhosted.org/packages/27/fb/910d3a1caa2d249b6040a5caf9f9866c52114d51523ac2fb47578a27faee/bcrypt-4.3.0-cp38-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:41261d64150858eeb5ff43c753c4b216991e0ae16614a308a15d909503617732", size = 316472 },
+ { url = "https://files.pythonhosted.org/packages/dc/cf/7cf3a05b66ce466cfb575dbbda39718d45a609daa78500f57fa9f36fa3c0/bcrypt-4.3.0-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:33752b1ba962ee793fa2b6321404bf20011fe45b9afd2a842139de3011898fef", size = 343606 },
+ { url = "https://files.pythonhosted.org/packages/e3/b8/e970ecc6d7e355c0d892b7f733480f4aa8509f99b33e71550242cf0b7e63/bcrypt-4.3.0-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:50e6e80a4bfd23a25f5c05b90167c19030cf9f87930f7cb2eacb99f45d1c3304", size = 362867 },
+ { url = "https://files.pythonhosted.org/packages/a9/97/8d3118efd8354c555a3422d544163f40d9f236be5b96c714086463f11699/bcrypt-4.3.0-cp38-abi3-win32.whl", hash = "sha256:67a561c4d9fb9465ec866177e7aebcad08fe23aaf6fbd692a6fab69088abfc51", size = 160589 },
+ { url = "https://files.pythonhosted.org/packages/29/07/416f0b99f7f3997c69815365babbc2e8754181a4b1899d921b3c7d5b6f12/bcrypt-4.3.0-cp38-abi3-win_amd64.whl", hash = "sha256:584027857bc2843772114717a7490a37f68da563b3620f78a849bcb54dc11e62", size = 152794 },
+ { url = "https://files.pythonhosted.org/packages/6e/c1/3fa0e9e4e0bfd3fd77eb8b52ec198fd6e1fd7e9402052e43f23483f956dd/bcrypt-4.3.0-cp39-abi3-macosx_10_12_universal2.whl", hash = "sha256:0d3efb1157edebfd9128e4e46e2ac1a64e0c1fe46fb023158a407c7892b0f8c3", size = 498969 },
+ { url = "https://files.pythonhosted.org/packages/ce/d4/755ce19b6743394787fbd7dff6bf271b27ee9b5912a97242e3caf125885b/bcrypt-4.3.0-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:08bacc884fd302b611226c01014eca277d48f0a05187666bca23aac0dad6fe24", size = 279158 },
+ { url = "https://files.pythonhosted.org/packages/9b/5d/805ef1a749c965c46b28285dfb5cd272a7ed9fa971f970435a5133250182/bcrypt-4.3.0-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6746e6fec103fcd509b96bacdfdaa2fbde9a553245dbada284435173a6f1aef", size = 284285 },
+ { url = "https://files.pythonhosted.org/packages/ab/2b/698580547a4a4988e415721b71eb45e80c879f0fb04a62da131f45987b96/bcrypt-4.3.0-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:afe327968aaf13fc143a56a3360cb27d4ad0345e34da12c7290f1b00b8fe9a8b", size = 279583 },
+ { url = "https://files.pythonhosted.org/packages/f2/87/62e1e426418204db520f955ffd06f1efd389feca893dad7095bf35612eec/bcrypt-4.3.0-cp39-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:d9af79d322e735b1fc33404b5765108ae0ff232d4b54666d46730f8ac1a43676", size = 297896 },
+ { url = "https://files.pythonhosted.org/packages/cb/c6/8fedca4c2ada1b6e889c52d2943b2f968d3427e5d65f595620ec4c06fa2f/bcrypt-4.3.0-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:f1e3ffa1365e8702dc48c8b360fef8d7afeca482809c5e45e653af82ccd088c1", size = 284492 },
+ { url = "https://files.pythonhosted.org/packages/4d/4d/c43332dcaaddb7710a8ff5269fcccba97ed3c85987ddaa808db084267b9a/bcrypt-4.3.0-cp39-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:3004df1b323d10021fda07a813fd33e0fd57bef0e9a480bb143877f6cba996fe", size = 279213 },
+ { url = "https://files.pythonhosted.org/packages/dc/7f/1e36379e169a7df3a14a1c160a49b7b918600a6008de43ff20d479e6f4b5/bcrypt-4.3.0-cp39-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:531457e5c839d8caea9b589a1bcfe3756b0547d7814e9ce3d437f17da75c32b0", size = 284162 },
+ { url = "https://files.pythonhosted.org/packages/1c/0a/644b2731194b0d7646f3210dc4d80c7fee3ecb3a1f791a6e0ae6bb8684e3/bcrypt-4.3.0-cp39-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:17a854d9a7a476a89dcef6c8bd119ad23e0f82557afbd2c442777a16408e614f", size = 312856 },
+ { url = "https://files.pythonhosted.org/packages/dc/62/2a871837c0bb6ab0c9a88bf54de0fc021a6a08832d4ea313ed92a669d437/bcrypt-4.3.0-cp39-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:6fb1fd3ab08c0cbc6826a2e0447610c6f09e983a281b919ed721ad32236b8b23", size = 316726 },
+ { url = "https://files.pythonhosted.org/packages/0c/a1/9898ea3faac0b156d457fd73a3cb9c2855c6fd063e44b8522925cdd8ce46/bcrypt-4.3.0-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:e965a9c1e9a393b8005031ff52583cedc15b7884fce7deb8b0346388837d6cfe", size = 343664 },
+ { url = "https://files.pythonhosted.org/packages/40/f2/71b4ed65ce38982ecdda0ff20c3ad1b15e71949c78b2c053df53629ce940/bcrypt-4.3.0-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:79e70b8342a33b52b55d93b3a59223a844962bef479f6a0ea318ebbcadf71505", size = 363128 },
+ { url = "https://files.pythonhosted.org/packages/11/99/12f6a58eca6dea4be992d6c681b7ec9410a1d9f5cf368c61437e31daa879/bcrypt-4.3.0-cp39-abi3-win32.whl", hash = "sha256:b4d4e57f0a63fd0b358eb765063ff661328f69a04494427265950c71b992a39a", size = 160598 },
+ { url = "https://files.pythonhosted.org/packages/a9/cf/45fb5261ece3e6b9817d3d82b2f343a505fd58674a92577923bc500bd1aa/bcrypt-4.3.0-cp39-abi3-win_amd64.whl", hash = "sha256:e53e074b120f2877a35cc6c736b8eb161377caae8925c17688bd46ba56daaa5b", size = 152799 },
+]
+
+[[package]]
+name = "beanie"
+version = "1.30.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+ { name = "click" },
+ { name = "lazy-model" },
+ { name = "motor" },
+ { name = "pydantic" },
+ { name = "typing-extensions" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/46/1c/feee03924a8f255d76236a8f71fde310da52ab4e03abd1254cd9309d73e1/beanie-1.30.0.tar.gz", hash = "sha256:33ead17ff2742144c510b4b24e188f6b316dd1b614d86b57a3cfe20bc7b768c9", size = 176743 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/cb/f2/adfea21c19d73ad2e90f5346c166523dadc33493a0b398d543eeb9b67e7a/beanie-1.30.0-py3-none-any.whl", hash = "sha256:385f1b850b36a19dd221aeb83e838c83ec6b47bbf6aeac4e5bf8b8d40bfcfe51", size = 87140 },
+]
+
[[package]]
name = "black"
version = "25.1.0"
@@ -284,6 +385,39 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/4a/7e/3db2bd1b1f9e95f7cddca6d6e75e2f2bd9f51b1246e546d88addca0106bd/certifi-2025.4.26-py3-none-any.whl", hash = "sha256:30350364dfe371162649852c63336a15c70c6510c2ad5015b21c2345311805f3", size = 159618 },
]
+[[package]]
+name = "cffi"
+version = "1.17.1"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+ { name = "pycparser" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/fc/97/c783634659c2920c3fc70419e3af40972dbaf758daa229a7d6ea6135c90d/cffi-1.17.1.tar.gz", hash = "sha256:1c39c6016c32bc48dd54561950ebd6836e1670f2ae46128f67cf49e789c52824", size = 516621 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/5a/84/e94227139ee5fb4d600a7a4927f322e1d4aea6fdc50bd3fca8493caba23f/cffi-1.17.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:805b4371bf7197c329fcb3ead37e710d1bca9da5d583f5073b799d5c5bd1eee4", size = 183178 },
+ { url = "https://files.pythonhosted.org/packages/da/ee/fb72c2b48656111c4ef27f0f91da355e130a923473bf5ee75c5643d00cca/cffi-1.17.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:733e99bc2df47476e3848417c5a4540522f234dfd4ef3ab7fafdf555b082ec0c", size = 178840 },
+ { url = "https://files.pythonhosted.org/packages/cc/b6/db007700f67d151abadf508cbfd6a1884f57eab90b1bb985c4c8c02b0f28/cffi-1.17.1-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1257bdabf294dceb59f5e70c64a3e2f462c30c7ad68092d01bbbfb1c16b1ba36", size = 454803 },
+ { url = "https://files.pythonhosted.org/packages/1a/df/f8d151540d8c200eb1c6fba8cd0dfd40904f1b0682ea705c36e6c2e97ab3/cffi-1.17.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da95af8214998d77a98cc14e3a3bd00aa191526343078b530ceb0bd710fb48a5", size = 478850 },
+ { url = "https://files.pythonhosted.org/packages/28/c0/b31116332a547fd2677ae5b78a2ef662dfc8023d67f41b2a83f7c2aa78b1/cffi-1.17.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d63afe322132c194cf832bfec0dc69a99fb9bb6bbd550f161a49e9e855cc78ff", size = 485729 },
+ { url = "https://files.pythonhosted.org/packages/91/2b/9a1ddfa5c7f13cab007a2c9cc295b70fbbda7cb10a286aa6810338e60ea1/cffi-1.17.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f79fc4fc25f1c8698ff97788206bb3c2598949bfe0fef03d299eb1b5356ada99", size = 471256 },
+ { url = "https://files.pythonhosted.org/packages/b2/d5/da47df7004cb17e4955df6a43d14b3b4ae77737dff8bf7f8f333196717bf/cffi-1.17.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b62ce867176a75d03a665bad002af8e6d54644fad99a3c70905c543130e39d93", size = 479424 },
+ { url = "https://files.pythonhosted.org/packages/0b/ac/2a28bcf513e93a219c8a4e8e125534f4f6db03e3179ba1c45e949b76212c/cffi-1.17.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:386c8bf53c502fff58903061338ce4f4950cbdcb23e2902d86c0f722b786bbe3", size = 484568 },
+ { url = "https://files.pythonhosted.org/packages/d4/38/ca8a4f639065f14ae0f1d9751e70447a261f1a30fa7547a828ae08142465/cffi-1.17.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:4ceb10419a9adf4460ea14cfd6bc43d08701f0835e979bf821052f1805850fe8", size = 488736 },
+ { url = "https://files.pythonhosted.org/packages/86/c5/28b2d6f799ec0bdecf44dced2ec5ed43e0eb63097b0f58c293583b406582/cffi-1.17.1-cp312-cp312-win32.whl", hash = "sha256:a08d7e755f8ed21095a310a693525137cfe756ce62d066e53f502a83dc550f65", size = 172448 },
+ { url = "https://files.pythonhosted.org/packages/50/b9/db34c4755a7bd1cb2d1603ac3863f22bcecbd1ba29e5ee841a4bc510b294/cffi-1.17.1-cp312-cp312-win_amd64.whl", hash = "sha256:51392eae71afec0d0c8fb1a53b204dbb3bcabcb3c9b807eedf3e1e6ccf2de903", size = 181976 },
+ { url = "https://files.pythonhosted.org/packages/8d/f8/dd6c246b148639254dad4d6803eb6a54e8c85c6e11ec9df2cffa87571dbe/cffi-1.17.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f3a2b4222ce6b60e2e8b337bb9596923045681d71e5a082783484d845390938e", size = 182989 },
+ { url = "https://files.pythonhosted.org/packages/8b/f1/672d303ddf17c24fc83afd712316fda78dc6fce1cd53011b839483e1ecc8/cffi-1.17.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:0984a4925a435b1da406122d4d7968dd861c1385afe3b45ba82b750f229811e2", size = 178802 },
+ { url = "https://files.pythonhosted.org/packages/0e/2d/eab2e858a91fdff70533cab61dcff4a1f55ec60425832ddfdc9cd36bc8af/cffi-1.17.1-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d01b12eeeb4427d3110de311e1774046ad344f5b1a7403101878976ecd7a10f3", size = 454792 },
+ { url = "https://files.pythonhosted.org/packages/75/b2/fbaec7c4455c604e29388d55599b99ebcc250a60050610fadde58932b7ee/cffi-1.17.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:706510fe141c86a69c8ddc029c7910003a17353970cff3b904ff0686a5927683", size = 478893 },
+ { url = "https://files.pythonhosted.org/packages/4f/b7/6e4a2162178bf1935c336d4da8a9352cccab4d3a5d7914065490f08c0690/cffi-1.17.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:de55b766c7aa2e2a3092c51e0483d700341182f08e67c63630d5b6f200bb28e5", size = 485810 },
+ { url = "https://files.pythonhosted.org/packages/c7/8a/1d0e4a9c26e54746dc08c2c6c037889124d4f59dffd853a659fa545f1b40/cffi-1.17.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c59d6e989d07460165cc5ad3c61f9fd8f1b4796eacbd81cee78957842b834af4", size = 471200 },
+ { url = "https://files.pythonhosted.org/packages/26/9f/1aab65a6c0db35f43c4d1b4f580e8df53914310afc10ae0397d29d697af4/cffi-1.17.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd398dbc6773384a17fe0d3e7eeb8d1a21c2200473ee6806bb5e6a8e62bb73dd", size = 479447 },
+ { url = "https://files.pythonhosted.org/packages/5f/e4/fb8b3dd8dc0e98edf1135ff067ae070bb32ef9d509d6cb0f538cd6f7483f/cffi-1.17.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:3edc8d958eb099c634dace3c7e16560ae474aa3803a5df240542b305d14e14ed", size = 484358 },
+ { url = "https://files.pythonhosted.org/packages/f1/47/d7145bf2dc04684935d57d67dff9d6d795b2ba2796806bb109864be3a151/cffi-1.17.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:72e72408cad3d5419375fc87d289076ee319835bdfa2caad331e377589aebba9", size = 488469 },
+ { url = "https://files.pythonhosted.org/packages/bf/ee/f94057fa6426481d663b88637a9a10e859e492c73d0384514a17d78ee205/cffi-1.17.1-cp313-cp313-win32.whl", hash = "sha256:e03eab0a8677fa80d646b5ddece1cbeaf556c313dcfac435ba11f107ba117b5d", size = 172475 },
+ { url = "https://files.pythonhosted.org/packages/7c/fc/6a8cb64e5f0324877d503c854da15d76c1e50eb722e320b15345c4d0c6de/cffi-1.17.1-cp313-cp313-win_amd64.whl", hash = "sha256:f6a16c31041f09ead72d69f583767292f750d24913dadacf5756b966aacb3f1a", size = 182009 },
+]
+
[[package]]
name = "charset-normalizer"
version = "3.4.2"
@@ -340,6 +474,41 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335 },
]
+[[package]]
+name = "cryptography"
+version = "45.0.5"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+ { name = "cffi", marker = "platform_python_implementation != 'PyPy'" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/95/1e/49527ac611af559665f71cbb8f92b332b5ec9c6fbc4e88b0f8e92f5e85df/cryptography-45.0.5.tar.gz", hash = "sha256:72e76caa004ab63accdf26023fccd1d087f6d90ec6048ff33ad0445abf7f605a", size = 744903 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/f0/fb/09e28bc0c46d2c547085e60897fea96310574c70fb21cd58a730a45f3403/cryptography-45.0.5-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:101ee65078f6dd3e5a028d4f19c07ffa4dd22cce6a20eaa160f8b5219911e7d8", size = 7043092 },
+ { url = "https://files.pythonhosted.org/packages/b1/05/2194432935e29b91fb649f6149c1a4f9e6d3d9fc880919f4ad1bcc22641e/cryptography-45.0.5-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:3a264aae5f7fbb089dbc01e0242d3b67dffe3e6292e1f5182122bdf58e65215d", size = 4205926 },
+ { url = "https://files.pythonhosted.org/packages/07/8b/9ef5da82350175e32de245646b1884fc01124f53eb31164c77f95a08d682/cryptography-45.0.5-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:e74d30ec9c7cb2f404af331d5b4099a9b322a8a6b25c4632755c8757345baac5", size = 4429235 },
+ { url = "https://files.pythonhosted.org/packages/7c/e1/c809f398adde1994ee53438912192d92a1d0fc0f2d7582659d9ef4c28b0c/cryptography-45.0.5-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:3af26738f2db354aafe492fb3869e955b12b2ef2e16908c8b9cb928128d42c57", size = 4209785 },
+ { url = "https://files.pythonhosted.org/packages/d0/8b/07eb6bd5acff58406c5e806eff34a124936f41a4fb52909ffa4d00815f8c/cryptography-45.0.5-cp311-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:e6c00130ed423201c5bc5544c23359141660b07999ad82e34e7bb8f882bb78e0", size = 3893050 },
+ { url = "https://files.pythonhosted.org/packages/ec/ef/3333295ed58d900a13c92806b67e62f27876845a9a908c939f040887cca9/cryptography-45.0.5-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:dd420e577921c8c2d31289536c386aaa30140b473835e97f83bc71ea9d2baf2d", size = 4457379 },
+ { url = "https://files.pythonhosted.org/packages/d9/9d/44080674dee514dbb82b21d6fa5d1055368f208304e2ab1828d85c9de8f4/cryptography-45.0.5-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:d05a38884db2ba215218745f0781775806bde4f32e07b135348355fe8e4991d9", size = 4209355 },
+ { url = "https://files.pythonhosted.org/packages/c9/d8/0749f7d39f53f8258e5c18a93131919ac465ee1f9dccaf1b3f420235e0b5/cryptography-45.0.5-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:ad0caded895a00261a5b4aa9af828baede54638754b51955a0ac75576b831b27", size = 4456087 },
+ { url = "https://files.pythonhosted.org/packages/09/d7/92acac187387bf08902b0bf0699816f08553927bdd6ba3654da0010289b4/cryptography-45.0.5-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:9024beb59aca9d31d36fcdc1604dd9bbeed0a55bface9f1908df19178e2f116e", size = 4332873 },
+ { url = "https://files.pythonhosted.org/packages/03/c2/840e0710da5106a7c3d4153c7215b2736151bba60bf4491bdb421df5056d/cryptography-45.0.5-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:91098f02ca81579c85f66df8a588c78f331ca19089763d733e34ad359f474174", size = 4564651 },
+ { url = "https://files.pythonhosted.org/packages/2e/92/cc723dd6d71e9747a887b94eb3827825c6c24b9e6ce2bb33b847d31d5eaa/cryptography-45.0.5-cp311-abi3-win32.whl", hash = "sha256:926c3ea71a6043921050eaa639137e13dbe7b4ab25800932a8498364fc1abec9", size = 2929050 },
+ { url = "https://files.pythonhosted.org/packages/1f/10/197da38a5911a48dd5389c043de4aec4b3c94cb836299b01253940788d78/cryptography-45.0.5-cp311-abi3-win_amd64.whl", hash = "sha256:b85980d1e345fe769cfc57c57db2b59cff5464ee0c045d52c0df087e926fbe63", size = 3403224 },
+ { url = "https://files.pythonhosted.org/packages/fe/2b/160ce8c2765e7a481ce57d55eba1546148583e7b6f85514472b1d151711d/cryptography-45.0.5-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:f3562c2f23c612f2e4a6964a61d942f891d29ee320edb62ff48ffb99f3de9ae8", size = 7017143 },
+ { url = "https://files.pythonhosted.org/packages/c2/e7/2187be2f871c0221a81f55ee3105d3cf3e273c0a0853651d7011eada0d7e/cryptography-45.0.5-cp37-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:3fcfbefc4a7f332dece7272a88e410f611e79458fab97b5efe14e54fe476f4fd", size = 4197780 },
+ { url = "https://files.pythonhosted.org/packages/b9/cf/84210c447c06104e6be9122661159ad4ce7a8190011669afceeaea150524/cryptography-45.0.5-cp37-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:460f8c39ba66af7db0545a8c6f2eabcbc5a5528fc1cf6c3fa9a1e44cec33385e", size = 4420091 },
+ { url = "https://files.pythonhosted.org/packages/3e/6a/cb8b5c8bb82fafffa23aeff8d3a39822593cee6e2f16c5ca5c2ecca344f7/cryptography-45.0.5-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:9b4cf6318915dccfe218e69bbec417fdd7c7185aa7aab139a2c0beb7468c89f0", size = 4198711 },
+ { url = "https://files.pythonhosted.org/packages/04/f7/36d2d69df69c94cbb2473871926daf0f01ad8e00fe3986ac3c1e8c4ca4b3/cryptography-45.0.5-cp37-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:2089cc8f70a6e454601525e5bf2779e665d7865af002a5dec8d14e561002e135", size = 3883299 },
+ { url = "https://files.pythonhosted.org/packages/82/c7/f0ea40f016de72f81288e9fe8d1f6748036cb5ba6118774317a3ffc6022d/cryptography-45.0.5-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:0027d566d65a38497bc37e0dd7c2f8ceda73597d2ac9ba93810204f56f52ebc7", size = 4450558 },
+ { url = "https://files.pythonhosted.org/packages/06/ae/94b504dc1a3cdf642d710407c62e86296f7da9e66f27ab12a1ee6fdf005b/cryptography-45.0.5-cp37-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:be97d3a19c16a9be00edf79dca949c8fa7eff621763666a145f9f9535a5d7f42", size = 4198020 },
+ { url = "https://files.pythonhosted.org/packages/05/2b/aaf0adb845d5dabb43480f18f7ca72e94f92c280aa983ddbd0bcd6ecd037/cryptography-45.0.5-cp37-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:7760c1c2e1a7084153a0f68fab76e754083b126a47d0117c9ed15e69e2103492", size = 4449759 },
+ { url = "https://files.pythonhosted.org/packages/91/e4/f17e02066de63e0100a3a01b56f8f1016973a1d67551beaf585157a86b3f/cryptography-45.0.5-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:6ff8728d8d890b3dda5765276d1bc6fb099252915a2cd3aff960c4c195745dd0", size = 4319991 },
+ { url = "https://files.pythonhosted.org/packages/f2/2e/e2dbd629481b499b14516eed933f3276eb3239f7cee2dcfa4ee6b44d4711/cryptography-45.0.5-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:7259038202a47fdecee7e62e0fd0b0738b6daa335354396c6ddebdbe1206af2a", size = 4554189 },
+ { url = "https://files.pythonhosted.org/packages/f8/ea/a78a0c38f4c8736287b71c2ea3799d173d5ce778c7d6e3c163a95a05ad2a/cryptography-45.0.5-cp37-abi3-win32.whl", hash = "sha256:1e1da5accc0c750056c556a93c3e9cb828970206c68867712ca5805e46dc806f", size = 2911769 },
+ { url = "https://files.pythonhosted.org/packages/79/b3/28ac139109d9005ad3f6b6f8976ffede6706a6478e21c889ce36c840918e/cryptography-45.0.5-cp37-abi3-win_amd64.whl", hash = "sha256:90cb0a7bb35959f37e23303b7eed0a32280510030daba3f7fdfbb65defde6a97", size = 3390016 },
+]
+
[[package]]
name = "dataclasses-json"
version = "0.6.7"
@@ -425,7 +594,7 @@ wheels = [
[[package]]
name = "easy-audio-interfaces"
-version = "0.5.1"
+version = "0.7.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "fire" },
@@ -436,9 +605,22 @@ dependencies = [
{ name = "websockets" },
{ name = "wyoming" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/16/6f/12f728ad4f295f6dec764fde9312beeca26233368fd93d01405157bc3a02/easy_audio_interfaces-0.5.1.tar.gz", hash = "sha256:b4969f78c6ac69010be00fca35bab54bac9d3e78e5efe9d7f1ea4ebdaf6824a8", size = 36280 }
+sdist = { url = "https://files.pythonhosted.org/packages/dc/e6/9e3ff12be5b4a3e8579d7504c3f4a8981561ca75339eada4a56452092f98/easy_audio_interfaces-0.7.1.tar.gz", hash = "sha256:04cccc20cf342a89efcf079ab05a4343b57a0be8491f9519cdaf92cd421a8a7f", size = 36620 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/6f/6c/18de57f237cf90dd32a299365707a31a6b42b7b7fff4593f3867818e6afd/easy_audio_interfaces-0.7.1-py3-none-any.whl", hash = "sha256:6ee94d9636da35a3bd0cafb41498c2d0e5b8d16d746ba8f46392891e956fb199", size = 43112 },
+]
+
+[[package]]
+name = "email-validator"
+version = "2.2.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+ { name = "dnspython" },
+ { name = "idna" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/48/ce/13508a1ec3f8bb981ae4ca79ea40384becc868bfae97fd1c942bb3a001b1/email_validator-2.2.0.tar.gz", hash = "sha256:cb690f344c617a714f22e66ae771445a1ceb46821152df8e165c5f9a364582b7", size = 48967 }
wheels = [
- { url = "https://files.pythonhosted.org/packages/e8/6b/ebb733558b5869615a002a10d33249d2fe33bf49b9e8f1470c7b7c920fa6/easy_audio_interfaces-0.5.1-py3-none-any.whl", hash = "sha256:d3ee1a164924a426bc3f2b2b2be991683170545ef53e3a062fb2c75814767dd4", size = 42365 },
+ { url = "https://files.pythonhosted.org/packages/d7/ee/bf0adb559ad3c786f12bcbc9296b3f5675f529199bef03e2df281fa1fadb/email_validator-2.2.0-py3-none-any.whl", hash = "sha256:561977c2d73ce3611850a06fa56b414621e0c8faa9d66f2611407d87465da631", size = 33521 },
]
[[package]]
@@ -455,6 +637,41 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/50/b3/b51f09c2ba432a576fe63758bddc81f78f0c6309d9e5c10d194313bf021e/fastapi-0.115.12-py3-none-any.whl", hash = "sha256:e94613d6c05e27be7ffebdd6ea5f388112e5e430c8f7d6494a9d1d88d43e814d", size = 95164 },
]
+[[package]]
+name = "fastapi-users"
+version = "14.0.1"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+ { name = "email-validator" },
+ { name = "fastapi" },
+ { name = "makefun" },
+ { name = "pwdlib", extra = ["argon2", "bcrypt"] },
+ { name = "pyjwt", extra = ["crypto"] },
+ { name = "python-multipart" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/e4/26/7fe4e6a4f60d9cde2b95f58ba45ff03219b62bd03bea75d914b723ecfa2a/fastapi_users-14.0.1.tar.gz", hash = "sha256:8c032b3a75c6fb2b1f5eab8ffce5321176e9916efe1fe93e7c15ee55f0b02236", size = 120315 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/2c/52/2821d3e95a92567d38f98a33d1ef89302aa3448866bf45ff19a48a5f28f8/fastapi_users-14.0.1-py3-none-any.whl", hash = "sha256:074df59676dccf79412d2880bdcb661ab1fabc2ecec1f043b4e6a23be97ed9e1", size = 38717 },
+]
+
+[package.optional-dependencies]
+beanie = [
+ { name = "fastapi-users-db-beanie" },
+]
+
+[[package]]
+name = "fastapi-users-db-beanie"
+version = "4.0.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+ { name = "beanie" },
+ { name = "fastapi-users" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/8b/fb/31024d9490ef13fe25021984dfdc0d174a0325562a5ec9db2d0a0e8c471e/fastapi_users_db_beanie-4.0.0.tar.gz", hash = "sha256:c2331279359c5988ed427002fffbe5f6928d77df34ae96348db5fac68ba81fcf", size = 9979 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/20/9a/a99e408dd929e133a9ef0768858e886b91328713e4b4464d1806d9042f51/fastapi_users_db_beanie-4.0.0-py3-none-any.whl", hash = "sha256:01db9a8dc1237f7bf604ac038c4fc0dfa7c920169ced03cb1fe75ca921aea39a", size = 5485 },
+]
+
[[package]]
name = "fire"
version = "0.5.0"
@@ -797,6 +1014,27 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/01/0e/b27cdbaccf30b890c40ed1da9fd4a3593a5cf94dae54fb34f8a4b74fcd3f/jsonschema_specifications-2025.4.1-py3-none-any.whl", hash = "sha256:4653bffbd6584f7de83a67e0d620ef16900b390ddc7939d56684d6c81e33f1af", size = 18437 },
]
+[[package]]
+name = "lazy-model"
+version = "0.2.0"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+ { name = "pydantic" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/47/9e/c60681be72f03845c209a86d5ce0404540c8d1818fc29bc64fc95220de5c/lazy-model-0.2.0.tar.gz", hash = "sha256:57c0e91e171530c4fca7aebc3ac05a163a85cddd941bf7527cc46c0ddafca47c", size = 8152 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/0a/13/e37962a20f7051b2d6d286c3feb85754f9ea8c4cac302927971e910cc9f6/lazy_model-0.2.0-py3-none-any.whl", hash = "sha256:5a3241775c253e36d9069d236be8378288a93d4fc53805211fd152e04cc9c342", size = 13719 },
+]
+
+[[package]]
+name = "makefun"
+version = "1.16.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/7b/cf/6780ab8bc3b84a1cce3e4400aed3d64b6db7d5e227a2f75b6ded5674701a/makefun-1.16.0.tar.gz", hash = "sha256:e14601831570bff1f6d7e68828bcd30d2f5856f24bad5de0ccb22921ceebc947", size = 73565 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/b7/c0/4bc973defd1270b89ccaae04cef0d5fa3ea85b59b108ad2c08aeea9afb76/makefun-1.16.0-py2.py3-none-any.whl", hash = "sha256:43baa4c3e7ae2b17de9ceac20b669e9a67ceeadff31581007cca20a07bbe42c4", size = 22923 },
+]
+
[[package]]
name = "markdown-it-py"
version = "3.0.0"
@@ -870,7 +1108,7 @@ wheels = [
[[package]]
name = "mem0ai"
-version = "0.1.111"
+version = "0.1.114"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "openai" },
@@ -880,9 +1118,9 @@ dependencies = [
{ name = "qdrant-client" },
{ name = "sqlalchemy" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/2d/93/ff302f96e02b5ac80a1ad18b94617985296f78aee212f86d83cba1c2a1a5/mem0ai-0.1.111.tar.gz", hash = "sha256:cc4b1a20cd4fd3b980cca4fd9f77ee4c9cff81b92e6f4d30014fd900dce59bba", size = 108299 }
+sdist = { url = "https://files.pythonhosted.org/packages/87/47/81f43e173940d000694eb20a70c0a92149c53edd2095e34b618afa41ca7d/mem0ai-0.1.114.tar.gz", hash = "sha256:b27886132eaec78544e8b8b54f0b14a36728f3c99da54cb7cb417150e2fad7e1", size = 113652 }
wheels = [
- { url = "https://files.pythonhosted.org/packages/2a/f5/185c88df177d0d9ae1226cc1ae75a2b2480280521a5c7690f1ca6a54b6af/mem0ai-0.1.111-py3-none-any.whl", hash = "sha256:53e8ce3551ffe1454b6e28ba90a8a88907280a9052edfeb872241662a4707f14", size = 168161 },
+ { url = "https://files.pythonhosted.org/packages/5e/b7/50d1d1d0600e9e5a861e733644513816011504b9a3d0ba870eadb32a481f/mem0ai-0.1.114-py3-none-any.whl", hash = "sha256:dfb7f0079ee282f5d9782e220f6f09707bcf5e107925d1901dbca30d8dd83f9b", size = 174843 },
]
[[package]]
@@ -1281,6 +1519,23 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/f7/af/ab3c51ab7507a7325e98ffe691d9495ee3d3aa5f589afad65ec920d39821/protobuf-6.31.1-py3-none-any.whl", hash = "sha256:720a6c7e6b77288b85063569baae8536671b39f15cc22037ec7045658d80489e", size = 168724 },
]
+[[package]]
+name = "pwdlib"
+version = "0.2.1"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/82/a0/9daed437a6226f632a25d98d65d60ba02bdafa920c90dcb6454c611ead6c/pwdlib-0.2.1.tar.gz", hash = "sha256:9a1d8a8fa09a2f7ebf208265e55d7d008103cbdc82b9e4902ffdd1ade91add5e", size = 11699 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/01/f3/0dae5078a486f0fdf4d4a1121e103bc42694a9da9bea7b0f2c63f29cfbd3/pwdlib-0.2.1-py3-none-any.whl", hash = "sha256:1823dc6f22eae472b540e889ecf57fd424051d6a4023ec0bcf7f0de2d9d7ef8c", size = 8082 },
+]
+
+[package.optional-dependencies]
+argon2 = [
+ { name = "argon2-cffi" },
+]
+bcrypt = [
+ { name = "bcrypt" },
+]
+
[[package]]
name = "pyarrow"
version = "20.0.0"
@@ -1316,6 +1571,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/37/40/ad395740cd641869a13bcf60851296c89624662575621968dcfafabaa7f6/pyarrow-20.0.0-cp313-cp313t-win_amd64.whl", hash = "sha256:82f1ee5133bd8f49d31be1299dc07f585136679666b502540db854968576faf9", size = 25944982 },
]
+[[package]]
+name = "pycparser"
+version = "2.22"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/1d/b2/31537cf4b1ca988837256c910a668b553fceb8f069bedc4b1c826024b52c/pycparser-2.22.tar.gz", hash = "sha256:491c8be9c040f5390f5bf44a5b07752bd07f56edf992381b05c701439eec10f6", size = 172736 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/13/a3/a812df4e2dd5696d1f351d58b8fe16a405b234ad2886a0dab9183fb78109/pycparser-2.22-py3-none-any.whl", hash = "sha256:c3702b6d3dd8c7abc1afa565d7e63d53a1d0bd86cdc24edd75470f4de499cfcc", size = 117552 },
+]
+
[[package]]
name = "pydantic"
version = "2.11.5"
@@ -1395,6 +1659,20 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/8a/0b/9fcc47d19c48b59121088dd6da2488a49d5f72dacf8262e2790a1d2c7d15/pygments-2.19.1-py3-none-any.whl", hash = "sha256:9ea1544ad55cecf4b8242fab6dd35a93bbce657034b0611ee383099054ab6d8c", size = 1225293 },
]
+[[package]]
+name = "pyjwt"
+version = "2.10.1"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/e7/46/bd74733ff231675599650d3e47f361794b22ef3e3770998dda30d3b63726/pyjwt-2.10.1.tar.gz", hash = "sha256:3cc5772eb20009233caf06e9d8a0577824723b44e6648ee0a2aedb6cf9381953", size = 87785 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/61/ad/689f02752eeec26aed679477e80e632ef1b682313be70793d798c1d5fc8f/PyJWT-2.10.1-py3-none-any.whl", hash = "sha256:dcdd193e30abefd5debf142f9adfcdd2b58004e644f25406ffaebd50bd98dacb", size = 22997 },
+]
+
+[package.optional-dependencies]
+crypto = [
+ { name = "cryptography" },
+]
+
[[package]]
name = "pymongo"
version = "4.13.0"
@@ -1539,6 +1817,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/1e/18/98a99ad95133c6a6e2005fe89faedf294a748bd5dc803008059409ac9b1e/python_dotenv-1.1.0-py3-none-any.whl", hash = "sha256:d7c01d9e2293916c18baf562d95698754b0dbbb5e74d457c45d4f6561fb9d55d", size = 20256 },
]
+[[package]]
+name = "python-multipart"
+version = "0.0.20"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/f3/87/f44d7c9f274c7ee665a29b885ec97089ec5dc034c7f3fafa03da9e39a09e/python_multipart-0.0.20.tar.gz", hash = "sha256:8dd0cab45b8e23064ae09147625994d090fa46f5b0d1e13af944c331a7fa9d13", size = 37158 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/45/58/38b5afbc1a800eeea951b9285d3912613f2603bdf897a4ab0f4bd7f405fc/python_multipart-0.0.20-py3-none-any.whl", hash = "sha256:8a62d3a8335e06589fe01f2a3e178cdcc632f3fbe0d492ad9ee0ec35aab1f104", size = 24546 },
+]
+
[[package]]
name = "pytz"
version = "2025.2"
@@ -1561,6 +1848,32 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/b4/f4/f785020090fb050e7fb6d34b780f2231f302609dc964672f72bfaeb59a28/pywin32-310-cp313-cp313-win_arm64.whl", hash = "sha256:e308f831de771482b7cf692a1f308f8fca701b2d8f9dde6cc440c7da17e47b33", size = 8458152 },
]
+[[package]]
+name = "pyyaml"
+version = "6.0.2"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/54/ed/79a089b6be93607fa5cdaedf301d7dfb23af5f25c398d5ead2525b063e17/pyyaml-6.0.2.tar.gz", hash = "sha256:d584d9ec91ad65861cc08d42e834324ef890a082e591037abe114850ff7bbc3e", size = 130631 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/86/0c/c581167fc46d6d6d7ddcfb8c843a4de25bdd27e4466938109ca68492292c/PyYAML-6.0.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:c70c95198c015b85feafc136515252a261a84561b7b1d51e3384e0655ddf25ab", size = 183873 },
+ { url = "https://files.pythonhosted.org/packages/a8/0c/38374f5bb272c051e2a69281d71cba6fdb983413e6758b84482905e29a5d/PyYAML-6.0.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:ce826d6ef20b1bc864f0a68340c8b3287705cae2f8b4b1d932177dcc76721725", size = 173302 },
+ { url = "https://files.pythonhosted.org/packages/c3/93/9916574aa8c00aa06bbac729972eb1071d002b8e158bd0e83a3b9a20a1f7/PyYAML-6.0.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1f71ea527786de97d1a0cc0eacd1defc0985dcf6b3f17bb77dcfc8c34bec4dc5", size = 739154 },
+ { url = "https://files.pythonhosted.org/packages/95/0f/b8938f1cbd09739c6da569d172531567dbcc9789e0029aa070856f123984/PyYAML-6.0.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9b22676e8097e9e22e36d6b7bda33190d0d400f345f23d4065d48f4ca7ae0425", size = 766223 },
+ { url = "https://files.pythonhosted.org/packages/b9/2b/614b4752f2e127db5cc206abc23a8c19678e92b23c3db30fc86ab731d3bd/PyYAML-6.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:80bab7bfc629882493af4aa31a4cfa43a4c57c83813253626916b8c7ada83476", size = 767542 },
+ { url = "https://files.pythonhosted.org/packages/d4/00/dd137d5bcc7efea1836d6264f049359861cf548469d18da90cd8216cf05f/PyYAML-6.0.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:0833f8694549e586547b576dcfaba4a6b55b9e96098b36cdc7ebefe667dfed48", size = 731164 },
+ { url = "https://files.pythonhosted.org/packages/c9/1f/4f998c900485e5c0ef43838363ba4a9723ac0ad73a9dc42068b12aaba4e4/PyYAML-6.0.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8b9c7197f7cb2738065c481a0461e50ad02f18c78cd75775628afb4d7137fb3b", size = 756611 },
+ { url = "https://files.pythonhosted.org/packages/df/d1/f5a275fdb252768b7a11ec63585bc38d0e87c9e05668a139fea92b80634c/PyYAML-6.0.2-cp312-cp312-win32.whl", hash = "sha256:ef6107725bd54b262d6dedcc2af448a266975032bc85ef0172c5f059da6325b4", size = 140591 },
+ { url = "https://files.pythonhosted.org/packages/0c/e8/4f648c598b17c3d06e8753d7d13d57542b30d56e6c2dedf9c331ae56312e/PyYAML-6.0.2-cp312-cp312-win_amd64.whl", hash = "sha256:7e7401d0de89a9a855c839bc697c079a4af81cf878373abd7dc625847d25cbd8", size = 156338 },
+ { url = "https://files.pythonhosted.org/packages/ef/e3/3af305b830494fa85d95f6d95ef7fa73f2ee1cc8ef5b495c7c3269fb835f/PyYAML-6.0.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:efdca5630322a10774e8e98e1af481aad470dd62c3170801852d752aa7a783ba", size = 181309 },
+ { url = "https://files.pythonhosted.org/packages/45/9f/3b1c20a0b7a3200524eb0076cc027a970d320bd3a6592873c85c92a08731/PyYAML-6.0.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:50187695423ffe49e2deacb8cd10510bc361faac997de9efef88badc3bb9e2d1", size = 171679 },
+ { url = "https://files.pythonhosted.org/packages/7c/9a/337322f27005c33bcb656c655fa78325b730324c78620e8328ae28b64d0c/PyYAML-6.0.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0ffe8360bab4910ef1b9e87fb812d8bc0a308b0d0eef8c8f44e0254ab3b07133", size = 733428 },
+ { url = "https://files.pythonhosted.org/packages/a3/69/864fbe19e6c18ea3cc196cbe5d392175b4cf3d5d0ac1403ec3f2d237ebb5/PyYAML-6.0.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:17e311b6c678207928d649faa7cb0d7b4c26a0ba73d41e99c4fff6b6c3276484", size = 763361 },
+ { url = "https://files.pythonhosted.org/packages/04/24/b7721e4845c2f162d26f50521b825fb061bc0a5afcf9a386840f23ea19fa/PyYAML-6.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:70b189594dbe54f75ab3a1acec5f1e3faa7e8cf2f1e08d9b561cb41b845f69d5", size = 759523 },
+ { url = "https://files.pythonhosted.org/packages/2b/b2/e3234f59ba06559c6ff63c4e10baea10e5e7df868092bf9ab40e5b9c56b6/PyYAML-6.0.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:41e4e3953a79407c794916fa277a82531dd93aad34e29c2a514c2c0c5fe971cc", size = 726660 },
+ { url = "https://files.pythonhosted.org/packages/fe/0f/25911a9f080464c59fab9027482f822b86bf0608957a5fcc6eaac85aa515/PyYAML-6.0.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:68ccc6023a3400877818152ad9a1033e3db8625d899c72eacb5a668902e4d652", size = 751597 },
+ { url = "https://files.pythonhosted.org/packages/14/0d/e2c3b43bbce3cf6bd97c840b46088a3031085179e596d4929729d8d68270/PyYAML-6.0.2-cp313-cp313-win32.whl", hash = "sha256:bc2fa7c6b47d6bc618dd7fb02ef6fdedb1090ec036abab80d4681424b84c1183", size = 140527 },
+ { url = "https://files.pythonhosted.org/packages/fa/de/02b54f42487e3d3c6efb3f89428677074ca7bf43aae402517bc7cca949f3/PyYAML-6.0.2-cp313-cp313-win_amd64.whl", hash = "sha256:8388ee1976c416731879ac16da0aff3f63b286ffdd57cdeb95f3f2e085687563", size = 156446 },
+]
+
[[package]]
name = "qdrant-client"
version = "1.14.2"
diff --git a/backends/advanced-backend/webui/.dockerignore b/backends/advanced-backend/webui/.dockerignore
deleted file mode 100644
index 31b48a7d..00000000
--- a/backends/advanced-backend/webui/.dockerignore
+++ /dev/null
@@ -1,5 +0,0 @@
-*
-!pyproject.toml
-!streamlit_app.py
-!README.md
-!.python-version
\ No newline at end of file
diff --git a/backends/advanced-backend/webui/.python-version b/backends/advanced-backend/webui/.python-version
deleted file mode 100644
index 2c073331..00000000
--- a/backends/advanced-backend/webui/.python-version
+++ /dev/null
@@ -1 +0,0 @@
-3.11
diff --git a/backends/advanced-backend/webui/pyproject.toml b/backends/advanced-backend/webui/pyproject.toml
deleted file mode 100644
index 86c298cc..00000000
--- a/backends/advanced-backend/webui/pyproject.toml
+++ /dev/null
@@ -1,14 +0,0 @@
-[project]
-name = "webui"
-version = "0.1.0"
-description = "Add your description here"
-readme = "README.md"
-requires-python = ">=3.11"
-dependencies = [
- "mem0ai>=0.1.102",
- "ollama>=0.5.1",
- "pandas>=2.2.3",
- "pymongo>=4.13.0",
- "python-dotenv>=1.1.0",
- "streamlit>=1.45.1",
-]
diff --git a/backends/advanced-backend/webui/streamlit_app.py b/backends/advanced-backend/webui/streamlit_app.py
deleted file mode 100644
index 907f548f..00000000
--- a/backends/advanced-backend/webui/streamlit_app.py
+++ /dev/null
@@ -1,1250 +0,0 @@
-import logging
-import os
-import time
-import random
-from datetime import datetime
-from pathlib import Path
-
-import pandas as pd
-import requests
-import streamlit as st
-from dotenv import load_dotenv
-
-load_dotenv()
-
-# Create logs directory for Streamlit app
-LOGS_DIR = Path("./logs")
-LOGS_DIR.mkdir(parents=True, exist_ok=True)
-
-# Configure comprehensive logging for Streamlit app
-logging.basicConfig(
- level=logging.DEBUG if os.getenv("DEBUG", "false").lower() == "true" else logging.INFO,
- format='%(asctime)s | %(levelname)-8s | %(name)-20s | %(message)s',
- handlers=[
- logging.StreamHandler(),
- logging.FileHandler(LOGS_DIR / 'streamlit.log')
- ]
-)
-
-logger = logging.getLogger("streamlit-ui")
-logger.info("Starting Friend-Lite Streamlit Dashboard")
-
-# ---- Configuration ---- #
-BACKEND_API_URL = os.getenv("BACKEND_API_URL", "http://192.168.0.110:8000")
-# For browser-accessible URLs (audio files), use localhost instead of Docker service name
-BACKEND_PUBLIC_URL = os.getenv("BACKEND_PUBLIC_URL", "http://localhost:8000")
-
-logger.info(f"🔧 Configuration loaded - Backend API: {BACKEND_API_URL}, Public URL: {BACKEND_PUBLIC_URL}")
-
-# ---- Health Check Functions ---- #
-@st.cache_data(ttl=30) # Cache for 30 seconds to avoid too many requests
-def get_system_health():
- """Get comprehensive system health from backend."""
- logger.info("🏥 Performing system health check")
- start_time = time.time()
-
- try:
- # First try the simple readiness check with shorter timeout
- logger.debug("Checking backend readiness...")
- response = requests.get(f"{BACKEND_API_URL}/readiness", timeout=5)
- if response.status_code == 200:
- logger.info("✅ Backend readiness check passed")
- # Backend is responding, now try the full health check with longer timeout
- try:
- logger.debug("Performing full health check...")
- health_response = requests.get(f"{BACKEND_API_URL}/health", timeout=30)
- if health_response.status_code == 200:
- health_data = health_response.json()
- duration = time.time() - start_time
- logger.info(f"✅ Full health check completed in {duration:.3f}s")
- logger.debug(f"Health data: {health_data}")
- return health_data
- else:
- # Health check failed but backend is responsive
- duration = time.time() - start_time
- logger.warning(f"⚠️ Health check failed with status {health_response.status_code} in {duration:.3f}s")
- return {
- "status": "partial",
- "overall_healthy": False,
- "services": {
- "backend": {
- "status": f"⚠️ Backend responsive but health check failed: HTTP {health_response.status_code}",
- "healthy": False
- }
- },
- "error": "Health check endpoint returned unexpected status code"
- }
- except requests.exceptions.Timeout:
- # Health check timed out but backend is responsive
- duration = time.time() - start_time
- logger.warning(f"⚠️ Health check timed out in {duration:.3f}s")
- return {
- "status": "partial",
- "overall_healthy": False,
- "services": {
- "backend": {
- "status": "⚠️ Backend responsive but health check timed out (some services may be slow)",
- "healthy": False
- }
- },
- "error": "Health check timed out - external services may be unavailable"
- }
- except Exception as e:
- duration = time.time() - start_time
- logger.error(f"❌ Health check error in {duration:.3f}s: {e}")
- return {
- "status": "partial",
- "overall_healthy": False,
- "services": {
- "backend": {
- "status": f"⚠️ Backend responsive but health check failed: {str(e)}",
- "healthy": False
- }
- },
- "error": str(e)
- }
- else:
- duration = time.time() - start_time
- logger.error(f"❌ Backend readiness check failed with status {response.status_code} in {duration:.3f}s")
- return {
- "status": "unhealthy",
- "overall_healthy": False,
- "services": {
- "backend": {
- "status": f"❌ Backend API Error: HTTP {response.status_code}",
- "healthy": False
- }
- },
- "error": "Backend API returned unexpected status code"
- }
- except Exception as e:
- duration = time.time() - start_time
- logger.error(f"❌ System health check failed in {duration:.3f}s: {e}")
- return {
- "status": "unhealthy",
- "overall_healthy": False,
- "services": {
- "backend": {
- "status": f"❌ Backend API Connection Failed: {str(e)}",
- "healthy": False
- }
- },
- "error": str(e)
- }
-
-# ---- Helper Functions ---- #
-def get_data(endpoint: str):
- """Helper function to get data from the backend API with retry logic."""
- logger.debug(f"📡 GET request to endpoint: {endpoint}")
- start_time = time.time()
-
- max_retries = 3
- base_delay = 1
-
- for attempt in range(max_retries):
- try:
- logger.debug(f"📡 Attempt {attempt + 1}/{max_retries} for GET {endpoint}")
- response = requests.get(f"{BACKEND_API_URL}{endpoint}")
- response.raise_for_status()
- duration = time.time() - start_time
- logger.info(f"✅ GET {endpoint} successful in {duration:.3f}s")
- return response.json()
- except requests.exceptions.RequestException as e:
- duration = time.time() - start_time
- if attempt < max_retries - 1:
- delay = base_delay * (2 ** attempt)
- logger.warning(f"⚠️ GET {endpoint} attempt {attempt + 1} failed in {duration:.3f}s, retrying in {delay}s: {str(e)}")
- time.sleep(delay)
- continue
- else:
- logger.error(f"❌ GET {endpoint} failed after {max_retries} attempts in {duration:.3f}s: {e}")
- st.error(f"Could not connect to the backend at `{BACKEND_API_URL}`. Please ensure it's running. Error: {e}")
- return None
-
-def post_data(endpoint: str, params: dict | None = None):
- """Helper function to post data to the backend API."""
- logger.debug(f"📤 POST request to endpoint: {endpoint} with params: {params}")
- start_time = time.time()
-
- try:
- response = requests.post(f"{BACKEND_API_URL}{endpoint}", params=params)
- response.raise_for_status()
- duration = time.time() - start_time
- logger.info(f"✅ POST {endpoint} successful in {duration:.3f}s")
- return response.json()
- except requests.exceptions.RequestException as e:
- duration = time.time() - start_time
- logger.error(f"❌ POST {endpoint} failed in {duration:.3f}s: {e}")
- st.error(f"Error posting to backend: {e}")
- return None
-
-def delete_data(endpoint: str, params: dict | None = None):
- """Helper function to delete data from the backend API."""
- logger.debug(f"🗑️ DELETE request to endpoint: {endpoint} with params: {params}")
- start_time = time.time()
-
- try:
- response = requests.delete(f"{BACKEND_API_URL}{endpoint}", params=params)
- response.raise_for_status()
- duration = time.time() - start_time
- logger.info(f"✅ DELETE {endpoint} successful in {duration:.3f}s")
- return response.json()
- except requests.exceptions.RequestException as e:
- duration = time.time() - start_time
- logger.error(f"❌ DELETE {endpoint} failed in {duration:.3f}s: {e}")
- st.error(f"Error deleting from backend: {e}")
- return None
-
-# ---- Streamlit App Configuration ---- #
-logger.info("🎨 Configuring Streamlit app...")
-st.set_page_config(
- page_title="Friend-Lite Dashboard",
- layout="wide",
- initial_sidebar_state="expanded"
-)
-
-st.title("Friend-Lite Dashboard")
-logger.info("Dashboard initialized")
-
-# Inject custom CSS for conversation box using Streamlit theme variables
-st.markdown(
- """
-
- """,
- unsafe_allow_html=True,
-)
-
-# ---- Sidebar with Health Checks ---- #
-with st.sidebar:
- st.header("System Health")
- logger.debug("Loading system health sidebar...")
-
- with st.expander("Service Status", expanded=True):
- # Get system health from backend
- with st.spinner("Checking system health..."):
- health_data = get_system_health()
-
- if health_data.get("overall_healthy", False):
- st.success(f"🟢 System Status: {health_data.get('status', 'Unknown').title()}")
- logger.info("🟢 System health check passed")
- else:
- st.error(f"🔴 System Status: {health_data.get('status', 'Unknown').title()}")
- logger.warning(f"🔴 System health check failed: {health_data.get('error', 'Unknown error')}")
-
- # Show individual services
- services = health_data.get("services", {})
- for service_name, service_info in services.items():
- status_text = service_info.get("status", "Unknown")
- st.write(f"**{service_name.title()}:** {status_text}")
- logger.debug(f"Service {service_name}: {status_text}")
-
- # Show additional info if available
- if "models" in service_info:
- st.caption(f"Models available: {service_info['models']}")
- logger.debug(f"Service {service_name} models: {service_info['models']}")
- if "uri" in service_info:
- st.caption(f"URI: {service_info['uri']}")
- logger.debug(f"Service {service_name} URI: {service_info['uri']}")
-
- if st.button("Refresh Health Check"):
- logger.info("Manual health check refresh requested")
- st.cache_data.clear()
- st.rerun()
-
- st.divider()
-
- # Close Conversation Section
- st.header("Close Conversation")
- logger.debug("Loading close conversation section...")
-
- with st.expander("Active Clients & Close Conversation", expanded=True):
- # Get active clients
- logger.debug("📡 Fetching active clients...")
- active_clients_data = get_data("/api/active_clients")
-
- if active_clients_data and active_clients_data.get("clients"):
- clients = active_clients_data["clients"]
- logger.info(f"Found {len(clients)} active clients")
-
- # Show active clients with conversation status
- for client_id, client_info in clients.items():
- logger.debug(f"👤 Processing client: {client_id} - Active conversation: {client_info.get('has_active_conversation', False)}")
-
- col1, col2 = st.columns([2, 1])
-
- with col1:
- if client_info.get("has_active_conversation", False):
- st.write(f"🟢 **{client_id}** (Active conversation)")
- if client_info.get("current_audio_uuid"):
- st.caption(f"UUID: {client_info['current_audio_uuid'][:8]}...")
- logger.debug(f"Client {client_id} has active conversation with UUID: {client_info['current_audio_uuid']}")
- else:
- st.write(f"⚪ **{client_id}** (No active conversation)")
- logger.debug(f"Client {client_id} has no active conversation")
-
- with col2:
- if client_info.get("has_active_conversation", False):
- close_btn = st.button(
- "Close",
- key=f"close_{client_id}",
- help=f"Close current conversation for {client_id}",
- type="secondary"
- )
-
- if close_btn:
- logger.info(f"Closing conversation for client: {client_id}")
- result = post_data("/api/close_conversation", {"client_id": client_id})
- if result:
- st.success(f"✅ Conversation closed for {client_id}")
- logger.info(f"✅ Successfully closed conversation for {client_id}")
- st.rerun()
- else:
- st.error(f"❌ Failed to close conversation for {client_id}")
- logger.error(f"❌ Failed to close conversation for {client_id}")
- else:
- st.caption("No active conversation")
-
- st.info(f"📡 **Total active clients:** {active_clients_data.get('active_clients_count', 0)}")
- else:
- st.info("No active clients found")
- logger.info("No active clients found")
-
- st.divider()
-
- # Configuration Info
- with st.expander("Configuration"):
- logger.debug("🔧 Loading configuration info...")
- health_data = get_system_health()
- config = health_data.get("config", {})
-
- st.code(f"""
-Backend API: {BACKEND_API_URL}
-Backend Public: {BACKEND_PUBLIC_URL}
-Active Clients: {config.get('active_clients', 'Unknown')}
-MongoDB URI: {config.get('mongodb_uri', 'Unknown')[:30]}...
-Ollama URL: {config.get('ollama_url', 'Unknown')}
-Qdrant URL: {config.get('qdrant_url', 'Unknown')}
-ASR URI: {config.get('asr_uri', 'Unknown')}
-Chunk Directory: {config.get('chunk_dir', 'Unknown')}
- """)
- logger.debug(f"🔧 Configuration displayed - Backend API: {BACKEND_API_URL}")
-
-# Show warning if system is unhealthy
-health_data = get_system_health()
-if not health_data.get("overall_healthy", False):
- st.error("⚠️ Some critical services are unavailable. The dashboard may not function properly.")
- logger.warning("⚠️ System is unhealthy - some services unavailable")
-
-# ---- Main Content ---- #
-logger.info("Loading main dashboard tabs...")
-tab_convos, tab_mem, tab_users, tab_manage = st.tabs(["Conversations", "Memories", "User Management", "Conversation Management"])
-
-with tab_convos:
- logger.debug("🗨️ Loading conversations tab...")
- st.header("Latest Conversations")
-
- # Initialize session state for refresh tracking
- if 'refresh_timestamp' not in st.session_state:
- st.session_state.refresh_timestamp = 0
-
- # Add debug mode toggle
- col1, col2 = st.columns([3, 1])
- with col1:
- if st.button("Refresh Conversations"):
- logger.info("Manual conversation refresh requested")
- st.session_state.refresh_timestamp = int(time.time())
- st.session_state.refresh_random = random.randint(1000, 9999)
- st.rerun()
- with col2:
- debug_mode = st.checkbox("🔧 Debug Mode",
- help="Show original audio files instead of cropped versions",
- key="debug_mode")
- if debug_mode:
- logger.debug("🔧 Debug mode enabled")
-
- # Generate cache-busting parameter based on session state
- if st.session_state.refresh_timestamp > 0:
- random_component = getattr(st.session_state, 'refresh_random', 0)
- cache_buster = f"?t={st.session_state.refresh_timestamp}&r={random_component}"
- st.info("Audio files refreshed - cache cleared for latest versions")
- logger.info("Audio cache busting applied")
- else:
- cache_buster = ""
-
- logger.debug("📡 Fetching conversations data...")
- conversations = get_data("/api/conversations")
-
- if conversations:
- logger.info(f"Loaded {len(conversations) if isinstance(conversations, list) else 'grouped'} conversations")
-
- # Check if conversations is the new grouped format or old format
- if isinstance(conversations, dict) and "conversations" in conversations:
- # New grouped format
- logger.debug("Processing conversations in new grouped format")
- conversations_data = conversations["conversations"]
-
- for client_id, client_conversations in conversations_data.items():
- logger.debug(f"👤 Processing conversations for client: {client_id} ({len(client_conversations)} conversations)")
- st.subheader(f"👤 {client_id}")
-
- for convo in client_conversations:
- logger.debug(f"🗨️ Processing conversation: {convo.get('audio_uuid', 'unknown')}")
-
- col1, col2 = st.columns([1, 4])
- with col1:
- # Format timestamp for better readability
- ts = datetime.fromtimestamp(convo['timestamp'])
- st.write(f"**Timestamp:**")
- st.write(ts.strftime('%Y-%m-%d %H:%M:%S'))
-
- # Show Audio UUID
- audio_uuid = convo.get("audio_uuid", "N/A")
- st.write(f"**Audio UUID:**")
- st.code(audio_uuid, language=None)
-
- # Show identified speakers
- speakers = convo.get("speakers_identified", [])
- if speakers:
- st.write(f"**Speakers:**")
- for speaker in speakers:
- st.write(f"👤 `{speaker}`")
- logger.debug(f"👤 Speakers identified: {speakers}")
-
- # Show audio duration info if available
- cropped_duration = convo.get("cropped_duration")
- if cropped_duration:
- st.write(f"**Cropped Duration:**")
- st.write(f"⏱️ {cropped_duration:.1f}s")
-
- # Show speech segments count
- speech_segments = convo.get("speech_segments", [])
- if speech_segments:
- st.write(f"**Speech Segments:**")
- st.write(f"🗣️ {len(speech_segments)} segments")
- logger.debug(f"🗣️ Speech segments: {len(speech_segments)}")
-
- with col2:
- # Display conversation transcript with new format
- transcript = convo.get("transcript", [])
- if transcript:
- logger.debug(f"Displaying transcript with {len(transcript)} segments")
- st.write("**Conversation:**")
- conversation_text = ""
- for segment in transcript:
- speaker = segment.get("speaker", "Unknown")
- text = segment.get("text", "")
- start_time = segment.get("start", 0.0)
- end_time = segment.get("end", 0.0)
-
- # Format timing if available
- timing_info = ""
- if start_time > 0 or end_time > 0:
- timing_info = f" [{start_time:.1f}s - {end_time:.1f}s]"
-
- conversation_text += f"{speaker}{timing_info}: {text}<br>"
-
- # Display in a scrollable container with max height
- st.markdown(
- f'{conversation_text}',
- unsafe_allow_html=True
- )
-
- # Smart audio display logic
- audio_path = convo.get("audio_path")
- cropped_audio_path = convo.get("cropped_audio_path")
-
- if audio_path:
- # Determine which audio to show
- if debug_mode:
- # Debug mode: always show original
- selected_audio_path = audio_path
- audio_label = "🔧 **Original Audio** (Debug Mode)"
- logger.debug(f"🔧 Debug mode: showing original audio: {audio_path}")
- elif cropped_audio_path:
- # Normal mode: prefer cropped if available
- selected_audio_path = cropped_audio_path
- audio_label = "🎵 **Cropped Audio** (Silence Removed)"
- logger.debug(f"🎵 Normal mode: showing cropped audio: {cropped_audio_path}")
- else:
- # Fallback: show original if no cropped version
- selected_audio_path = audio_path
- audio_label = "🎵 **Original Audio** (No cropped version available)"
- logger.debug(f"🎵 Fallback: showing original audio (no cropped version): {audio_path}")
-
- # Display audio with label and cache-busting
- st.write(audio_label)
- audio_url = f"{BACKEND_PUBLIC_URL}/audio/{selected_audio_path}{cache_buster}"
- st.audio(audio_url, format="audio/wav")
- logger.debug(f"🎵 Audio URL: {audio_url}")
-
- # Show additional info in debug mode or when both versions exist
- if debug_mode and cropped_audio_path:
- st.caption(f"💡 Cropped version available: {cropped_audio_path}")
- elif not debug_mode and cropped_audio_path:
- st.caption(f"💡 Enable debug mode to hear original with silence")
-
- st.divider()
- else:
- # Old format - single list of conversations
- logger.debug("Processing conversations in old format")
- for convo in conversations:
- logger.debug(f"🗨️ Processing conversation: {convo.get('audio_uuid', 'unknown')}")
-
- col1, col2 = st.columns([1, 4])
- with col1:
- # Format timestamp for better readability
- ts = datetime.fromtimestamp(convo['timestamp'])
- st.write(f"**Timestamp:**")
- st.write(ts.strftime('%Y-%m-%d %H:%M:%S'))
-
- # Show client_id with better formatting
- client_id = convo.get('client_id', 'N/A')
- if client_id.startswith('client_'):
- st.write(f"**Client ID:**")
- st.write(f"`{client_id}`")
- else:
- st.write(f"**User ID:**")
- st.write(f"👤 `{client_id}`")
-
- # Show Audio UUID
- audio_uuid = convo.get("audio_uuid", "N/A")
- st.write(f"**Audio UUID:**")
- st.code(audio_uuid, language=None)
-
- # Show identified speakers
- speakers = convo.get("speakers_identified", [])
- if speakers:
- st.write(f"**Speakers:**")
- for speaker in speakers:
- st.write(f"👤 `{speaker}`")
-
- with col2:
- # Display conversation transcript with new format
- transcript = convo.get("transcript", [])
- if transcript:
- logger.debug(f"Displaying transcript with {len(transcript)} segments")
- st.write("**Conversation:**")
- conversation_text = ""
- for segment in transcript:
- speaker = segment.get("speaker", "Unknown")
- text = segment.get("text", "")
- start_time = segment.get("start", 0.0)
- end_time = segment.get("end", 0.0)
-
- # Format timing if available
- timing_info = ""
- if start_time > 0 or end_time > 0:
- timing_info = f" [{start_time:.1f}s - {end_time:.1f}s]"
-
- conversation_text += f"{speaker}{timing_info}: {text}<br>"
-
- # Display in a scrollable container with max height
- st.markdown(
- f'<div style="max-height: 300px; overflow-y: auto; white-space: pre-wrap;">{conversation_text}</div>',
- unsafe_allow_html=True
- )
- else:
- # Fallback for old format
- old_transcript = convo.get("transcription", "No transcript available.")
- st.text_area("Transcription", old_transcript, height=150, disabled=True, key=f"transcript_{convo['_id']}")
-
- # Smart audio display logic (same as above)
- audio_path = convo.get("audio_path")
- cropped_audio_path = convo.get("cropped_audio_path")
-
- if audio_path:
- # Determine which audio to show
- if debug_mode:
- # Debug mode: always show original
- selected_audio_path = audio_path
- audio_label = "🎧 **Original Audio** (Debug Mode)"
- logger.debug(f"🎧 Debug mode: showing original audio: {audio_path}")
- elif cropped_audio_path:
- # Normal mode: prefer cropped if available
- selected_audio_path = cropped_audio_path
- audio_label = "🎵 **Cropped Audio** (Silence Removed)"
- logger.debug(f"🎵 Normal mode: showing cropped audio: {cropped_audio_path}")
- else:
- # Fallback: show original if no cropped version
- selected_audio_path = audio_path
- audio_label = "🎵 **Original Audio** (No cropped version available)"
- logger.debug(f"🎵 Fallback: showing original audio (no cropped version): {audio_path}")
-
- # Display audio with label and cache-busting
- st.write(audio_label)
- audio_url = f"{BACKEND_PUBLIC_URL}/audio/{selected_audio_path}{cache_buster}"
- st.audio(audio_url, format="audio/wav")
- logger.debug(f"🎵 Audio URL: {audio_url}")
-
- # Show additional info in debug mode or when both versions exist
- if debug_mode and cropped_audio_path:
- st.caption(f"๐ก Cropped version available: {cropped_audio_path}")
- elif not debug_mode and cropped_audio_path:
- st.caption(f"๐ก Enable debug mode to hear original with silence")
-
- st.divider()
- elif conversations is not None:
- st.info("No conversations found. The backend is connected but the database might be empty.")
- logger.info("🔍 No conversations found in database")
-
-with tab_mem:
- logger.debug("🧠 Loading memories tab...")
- st.header("Memories & Action Items")
-
- # Use session state for selected user if available
- default_user = st.session_state.get('selected_user', '')
-
- # User selection for memories and action items
- col1, col2 = st.columns([2, 1])
- with col1:
- user_id_input = st.text_input("Enter username to view memories & action items:",
- value=default_user,
- placeholder="e.g., john_doe, alice123")
- with col2:
- st.write("") # Spacer
- refresh_mem_btn = st.button("Load Data", key="refresh_memories")
-
- # Clear the session state after using it
- if 'selected_user' in st.session_state:
- del st.session_state['selected_user']
-
- if refresh_mem_btn:
- logger.info("🔄 Manual memories refresh requested")
- st.rerun()
-
- # Get memories and action items based on user selection
- if user_id_input.strip():
- logger.info(f"🧠 Loading data for user: {user_id_input.strip()}")
- st.info(f"Showing data for user: **{user_id_input.strip()}**")
-
- # Load both memories and action items
- col1, col2 = st.columns([1, 1])
-
- with col1:
- with st.spinner("Loading memories..."):
- logger.debug(f"📡 Fetching memories for user: {user_id_input.strip()}")
- memories_response = get_data(f"/api/memories?user_id={user_id_input.strip()}")
-
- with col2:
- with st.spinner("Loading action items..."):
- logger.debug(f"📡 Fetching action items for user: {user_id_input.strip()}")
- action_items_response = get_data(f"/api/action-items?user_id={user_id_input.strip()}")
-
- # Handle the API response format with "results" wrapper for memories
- if memories_response and isinstance(memories_response, dict) and "results" in memories_response:
- memories = memories_response["results"]
- logger.debug(f"🧠 Memories response has 'results' wrapper, extracted {len(memories)} memories")
- else:
- memories = memories_response
- logger.debug(f"🧠 Memories response format: {type(memories_response)}")
-
- # Handle action items response
- if action_items_response and isinstance(action_items_response, dict) and "action_items" in action_items_response:
- action_items = action_items_response["action_items"]
- logger.debug(f"🎯 Action items response has 'action_items' wrapper, extracted {len(action_items)} items")
- else:
- action_items = action_items_response if action_items_response else []
- logger.debug(f"🎯 Action items response format: {type(action_items_response)}")
- else:
- # Show instruction to enter a username
- memories = None
- action_items = None
- logger.debug("📝 No user ID provided, showing instructions")
- st.info("👆 Please enter a username above to view their memories and action items.")
- st.markdown("💡 **Tip:** You can find existing usernames in the 'User Management' tab.")
-
- # Display Memories Section
- if memories is not None:
- logger.debug("🧠 Displaying memories section...")
- st.subheader("🧠 Discovered Memories")
-
- if memories:
- logger.info(f"🧠 Displaying {len(memories)} memories for user {user_id_input.strip()}")
- df = pd.DataFrame(memories)
-
- # Make the dataframe more readable
- if "created_at" in df.columns:
- df['created_at'] = pd.to_datetime(df['created_at']).dt.strftime('%Y-%m-%d %H:%M:%S')
-
- # Reorder and rename columns for clarity - handle both "memory" and "text" fields
- display_cols = {
- "id": "Memory ID",
- "created_at": "Created At"
- }
-
- # Check which memory field exists and add it to display columns
- if "memory" in df.columns:
- display_cols["memory"] = "Memory"
- logger.debug("🧠 Using 'memory' field for display")
- elif "text" in df.columns:
- display_cols["text"] = "Memory"
- logger.debug("🧠 Using 'text' field for display")
-
- # Filter for columns that exist in the dataframe
- cols_to_display = [col for col in display_cols.keys() if col in df.columns]
-
- if cols_to_display:
- logger.debug(f"🧠 Displaying columns: {cols_to_display}")
- st.dataframe(
- df[cols_to_display].rename(columns=display_cols),
- use_container_width=True,
- hide_index=True
- )
-
- # Show additional details
- st.caption(f"📊 Found **{len(memories)}** memories for user **{user_id_input.strip()}**")
- else:
- logger.error(f"⚠️ Unexpected memory data format - missing expected fields. Available columns: {list(df.columns)}")
- st.error("⚠️ Unexpected memory data format - missing expected fields")
- st.write("Debug info - Available columns:", list(df.columns))
- else:
- logger.info(f"🧠 No memories found for user {user_id_input.strip()}")
- st.info("No memories found for this user.")
-
- # Display Action Items Section
- if action_items is not None:
- logger.debug("🎯 Displaying action items section...")
- st.subheader("🎯 Action Items")
-
- if action_items:
- logger.info(f"🎯 Displaying {len(action_items)} action items for user {user_id_input.strip()}")
-
- # Status filter for action items
- col1, col2, col3 = st.columns([2, 1, 1])
- with col1:
- status_filter = st.selectbox(
- "Filter by status:",
- options=["All", "open", "in_progress", "completed", "cancelled"],
- index=0,
- key="action_items_filter"
- )
- with col2:
- show_stats = st.button("📊 Show Stats", key="show_action_stats")
- with col3:
- # Manual action item creation button
- if st.button("➕ Add Item", key="add_action_item"):
- logger.info("➕ Manual action item creation requested")
- st.session_state['show_add_action_item'] = True
-
- # Filter action items by status
- if status_filter != "All":
- filtered_items = [item for item in action_items if item.get('status') == status_filter]
- logger.debug(f"🎯 Filtered action items by status '{status_filter}': {len(filtered_items)} items")
- else:
- filtered_items = action_items
- logger.debug(f"🎯 Showing all action items: {len(filtered_items)} items")
-
- # Show statistics if requested
- if show_stats:
- logger.info("📊 Action items statistics requested")
- stats_response = get_data(f"/api/action-items/stats?user_id={user_id_input.strip()}")
- if stats_response and "statistics" in stats_response:
- stats = stats_response["statistics"]
- logger.debug(f"📊 Action items statistics: {stats}")
-
- # Display stats in columns
- col1, col2, col3, col4 = st.columns(4)
- with col1:
- st.metric("Total", stats["total"])
- st.metric("Open", stats["open"])
- with col2:
- st.metric("In Progress", stats["in_progress"])
- st.metric("Completed", stats["completed"])
- with col3:
- st.metric("Cancelled", stats["cancelled"])
- st.metric("Overdue", stats.get("overdue", 0))
- with col4:
- st.write("**By Priority:**")
- for priority, count in stats.get("by_priority", {}).items():
- if count > 0:
- st.write(f"• {priority.title()}: {count}")
-
- # Assignee breakdown
- if stats.get("by_assignee"):
- st.write("**By Assignee:**")
- assignee_df = pd.DataFrame(list(stats["by_assignee"].items()), columns=["Assignee", "Count"])
- st.dataframe(assignee_df, hide_index=True, use_container_width=True)
- else:
- logger.warning("📊 Action items statistics not available")
-
- # Manual action item creation form
- if st.session_state.get('show_add_action_item', False):
- logger.debug("➕ Showing action item creation form")
- with st.expander("➕ Create New Action Item", expanded=True):
- with st.form("create_action_item"):
- description = st.text_input("Description*:", placeholder="e.g., Send quarterly report to management")
- col1, col2 = st.columns(2)
- with col1:
- assignee = st.text_input("Assignee:", placeholder="e.g., john_doe", value="unassigned")
- priority = st.selectbox("Priority:", options=["high", "medium", "low", "not_specified"], index=1)
- with col2:
- due_date = st.text_input("Due Date:", placeholder="e.g., Friday, 2024-01-15", value="not_specified")
- context = st.text_input("Context:", placeholder="e.g., Mentioned in team meeting")
-
- submitted = st.form_submit_button("Create Action Item")
-
- if submitted:
- logger.info(f"➕ Creating action item for user {user_id_input.strip()}")
- if description.strip():
- create_data = {
- "description": description.strip(),
- "assignee": assignee.strip() if assignee.strip() else "unassigned",
- "due_date": due_date.strip() if due_date.strip() else "not_specified",
- "priority": priority,
- "context": context.strip()
- }
-
- try:
- logger.debug(f"📤 Creating action item with data: {create_data}")
- response = requests.post(
- f"{BACKEND_API_URL}/api/action-items",
- params={"user_id": user_id_input.strip()},
- json=create_data
- )
- response.raise_for_status()
- result = response.json()
- st.success(f"✅ Action item created: {result['action_item']['description']}")
- logger.info(f"✅ Action item created successfully: {result['action_item']['description']}")
- st.session_state['show_add_action_item'] = False
- st.rerun()
- except requests.exceptions.RequestException as e:
- logger.error(f"❌ Error creating action item: {e}")
- st.error(f"Error creating action item: {e}")
- else:
- logger.warning("⚠️ Action item creation attempted without description")
- st.error("Please enter a description for the action item")
-
- if st.button("❌ Cancel", key="cancel_add_action"):
- logger.debug("❌ Action item creation cancelled")
- st.session_state['show_add_action_item'] = False
- st.rerun()
-
- # Display action items
- if filtered_items:
- logger.debug(f"🎯 Displaying {len(filtered_items)} filtered action items")
- st.write(f"**Showing {len(filtered_items)} action items** (filtered by: {status_filter})")
-
- for i, item in enumerate(filtered_items):
- logger.debug(f"🎯 Processing action item {i+1}: {item.get('description', 'No description')[:50]}...")
-
- with st.container():
- # Create columns for action item display
- col1, col2, col3 = st.columns([3, 1, 1])
-
- with col1:
- # Description with status badge
- status = item.get('status', 'open')
- status_emoji = {
- 'open': '🔵',
- 'in_progress': '🟡',
- 'completed': '✅',
- 'cancelled': '❌'
- }.get(status, '🔵')
-
- st.write(f"**{status_emoji} {item.get('description', 'No description')}**")
-
- # Additional details
- details = []
- if item.get('assignee') and item.get('assignee') != 'unassigned':
- details.append(f"👤 {item['assignee']}")
- if item.get('due_date') and item.get('due_date') != 'not_specified':
- details.append(f"📅 {item['due_date']}")
- if item.get('priority') and item.get('priority') != 'not_specified':
- priority_emoji = {'high': '🔴', 'medium': '🟡', 'low': '🟢'}.get(item['priority'], '⚪')
- details.append(f"{priority_emoji} {item['priority']}")
- if item.get('context'):
- details.append(f"💭 {item['context']}")
-
- if details:
- st.caption(" | ".join(details))
-
- # Creation info
- created_at = item.get('created_at')
- if created_at:
- try:
- if isinstance(created_at, (int, float)):
- created_time = datetime.fromtimestamp(created_at)
- else:
- created_time = pd.to_datetime(created_at)
- st.caption(f"Created: {created_time.strftime('%Y-%m-%d %H:%M:%S')}")
- except:
- st.caption(f"Created: {created_at}")
-
- with col2:
- # Status update
- new_status = st.selectbox(
- "Status:",
- options=["open", "in_progress", "completed", "cancelled"],
- index=["open", "in_progress", "completed", "cancelled"].index(status),
- key=f"status_{i}_{item.get('memory_id', i)}"
- )
-
- if new_status != status:
- if st.button("Update", key=f"update_{i}_{item.get('memory_id', i)}"):
- memory_id = item.get('memory_id')
- if memory_id:
- logger.info(f"🔄 Updating action item {memory_id} status from {status} to {new_status}")
- try:
- response = requests.put(
- f"{BACKEND_API_URL}/api/action-items/{memory_id}",
- json={"status": new_status}
- )
- response.raise_for_status()
- st.success(f"Status updated to {new_status}")
- logger.info(f"✅ Action item status updated successfully")
- st.rerun()
- except requests.exceptions.RequestException as e:
- logger.error(f"❌ Error updating action item status: {e}")
- st.error(f"Error updating status: {e}")
- else:
- logger.error(f"❌ No memory ID found for action item")
- st.error("No memory ID found for this action item")
-
- with col3:
- # Delete button
- if st.button("🗑️ Delete", key=f"delete_{i}_{item.get('memory_id', i)}", type="secondary"):
- memory_id = item.get('memory_id')
- if memory_id:
- logger.info(f"🗑️ Deleting action item {memory_id}")
- try:
- response = requests.delete(f"{BACKEND_API_URL}/api/action-items/{memory_id}")
- response.raise_for_status()
- st.success("Action item deleted")
- logger.info(f"✅ Action item deleted successfully")
- st.rerun()
- except requests.exceptions.RequestException as e:
- logger.error(f"❌ Error deleting action item: {e}")
- st.error(f"Error deleting action item: {e}")
- else:
- logger.error(f"❌ No memory ID found for action item")
- st.error("No memory ID found for this action item")
-
- st.divider()
-
- st.caption(f"💡 **Tip:** Action items are automatically extracted from conversations at the end of each session")
- else:
- if status_filter == "All":
- logger.info(f"🎯 No action items found for user {user_id_input.strip()}")
- st.info("No action items found for this user.")
- else:
- logger.info(f"🎯 No action items found with status '{status_filter}' for user {user_id_input.strip()}")
- st.info(f"No action items found with status '{status_filter}' for this user.")
- else:
- logger.info(f"🎯 No action items found for user {user_id_input.strip()}")
- st.info("No action items found for this user.")
-
- # Show option to create manual action item even when none exist
- if user_id_input.strip() and st.button("➕ Create First Action Item", key="create_first_item"):
- logger.info("➕ Creating first action item for user")
- st.session_state['show_add_action_item'] = True
- st.rerun()
-
-with tab_users:
- st.header("User Management")
-
- # Create User Section
- st.subheader("Create New User")
- col1, col2 = st.columns([3, 1])
- with col1:
- new_user_id = st.text_input("New User ID:", placeholder="e.g., john_doe, alice123")
- with col2:
- st.write("") # Spacer
- create_user_btn = st.button("Create User", key="create_user")
-
- if create_user_btn:
- if new_user_id.strip():
- result = post_data("/api/create_user", {"user_id": new_user_id.strip()})
- if result:
- st.success(f"User '{new_user_id.strip()}' created successfully!")
- st.rerun()
- else:
- st.error("Please enter a valid User ID")
-
- st.divider()
-
- # List Users Section
- st.subheader("Existing Users")
- col1, col2 = st.columns([1, 1])
- with col1:
- refresh_users_btn = st.button("Refresh Users", key="refresh_users")
-
- if refresh_users_btn:
- st.rerun()
-
- users = get_data("/api/users")
-
- if users:
- st.write(f"**Total Users:** {len(users)}")
-
- # Initialize session state for delete confirmation
- if 'delete_confirmation' not in st.session_state:
- st.session_state.delete_confirmation = {}
-
- # Display users in a nice format
- for user in users:
- user_id = user.get('user_id', 'Unknown')
- user_db_id = user.get('_id', 'unknown')
-
- col1, col2 = st.columns([3, 1])
- with col1:
- st.write(f"👤 **{user_id}**")
- if '_id' in user:
- st.caption(f"ID: {user['_id']}")
-
- with col2:
- # Check if we're in confirmation mode for this user
- if user_id in st.session_state.delete_confirmation:
- # Show confirmation dialog in a container
- with st.container():
- st.error("⚠️ **Confirm Deletion**")
- st.write(f"Delete user **{user_id}** and optionally:")
-
- # Checkboxes for what to delete
- delete_conversations = st.checkbox(
- "🗨️ Delete all conversations",
- key=f"conv_{user_db_id}",
- help="Permanently delete all audio recordings and transcripts"
- )
- delete_memories = st.checkbox(
- "🧠 Delete all memories",
- key=f"mem_{user_db_id}",
- help="Permanently delete all extracted memories from conversations"
- )
-
- # Action buttons
- col_cancel, col_confirm = st.columns([1, 1])
-
- with col_cancel:
- if st.button("❌ Cancel", key=f"cancel_{user_db_id}", use_container_width=True, type="secondary"):
- del st.session_state.delete_confirmation[user_id]
- st.rerun()
-
- with col_confirm:
- if st.button("🗑️ Confirm Delete", key=f"confirm_{user_db_id}", use_container_width=True, type="primary"):
- # Build delete parameters
- params = {
- "user_id": user_id,
- "delete_conversations": delete_conversations,
- "delete_memories": delete_memories
- }
-
- result = delete_data("/api/delete_user", params)
- if result:
- deleted_data = result.get('deleted_data', {})
- message = result.get('message', f"User '{user_id}' deleted")
- st.success(message)
-
- # Show detailed deletion info
- if deleted_data.get('conversations_deleted', 0) > 0 or deleted_data.get('memories_deleted', 0) > 0:
- st.info(f"📊 Deleted: {deleted_data.get('conversations_deleted', 0)} conversations, {deleted_data.get('memories_deleted', 0)} memories")
-
- del st.session_state.delete_confirmation[user_id]
- st.rerun()
-
- if delete_conversations or delete_memories:
- st.caption("⚠️ Selected data will be **permanently deleted** and cannot be recovered!")
- else:
- # Show normal delete button
- delete_btn = st.button("🗑️ Delete", key=f"delete_{user_db_id}", type="secondary")
- if delete_btn:
- st.session_state.delete_confirmation[user_id] = True
- st.rerun()
-
- st.divider()
-
- elif users is not None:
- st.info("No users found in the system.")
-
- st.divider()
-
- # Quick Actions Section
- st.subheader("Quick Actions")
- st.write("**View User Memories:**")
- col1, col2 = st.columns([3, 1])
- with col1:
- quick_user_id = st.text_input("User ID to view memories:", placeholder="Enter user ID", key="quick_view_user")
- with col2:
- st.write("") # Spacer
- view_memories_btn = st.button("View Memories", key="view_memories")
-
- if view_memories_btn and quick_user_id.strip():
- # Switch to memories tab with this user
- st.session_state['selected_user'] = quick_user_id.strip()
- st.info(f"Switch to the 'Memories' tab to view memories for user: {quick_user_id.strip()}")
-
- # Tips section
- st.subheader("💡 Tips")
- st.markdown("""
- - **User IDs** should be unique identifiers (e.g., usernames, email prefixes)
- - Users are automatically created when they connect with audio if they don't exist
- - **Delete Options:**
- - **User Account**: Always deleted when you click delete
- - **🗨️ Conversations**: Check to delete all audio recordings and transcripts
- - **🧠 Memories**: Check to delete all extracted memories from conversations
- - Mix and match: You can delete just conversations, just memories, or both
- - Use the 'Memories' tab to view specific user memories
- """)
-
-with tab_manage:
- st.header("Conversation Management")
-
- st.subheader("🔒 Close Current Conversation")
- st.write("Close the current active conversation for any connected client.")
-
- # Get active clients for the dropdown
- active_clients_data = get_data("/api/active_clients")
-
- if active_clients_data and active_clients_data.get("clients"):
- clients = active_clients_data["clients"]
-
- # Filter to only clients with active conversations
- active_conversations = {
- client_id: client_info
- for client_id, client_info in clients.items()
- if client_info.get("has_active_conversation", False)
- }
-
- if active_conversations:
- col1, col2 = st.columns([3, 1])
-
- with col1:
- selected_client = st.selectbox(
- "Select client to close conversation:",
- options=list(active_conversations.keys()),
- format_func=lambda x: f"{x} (UUID: {active_conversations[x].get('current_audio_uuid', 'N/A')[:8]}...)"
- )
-
- with col2:
- st.write("") # Spacer
- close_conversation_btn = st.button("🔒 Close Conversation", key="close_conv_main", type="primary")
-
- if close_conversation_btn and selected_client:
- result = post_data("/api/close_conversation", {"client_id": selected_client})
- if result:
- st.success(f"✅ Successfully closed conversation for client '{selected_client}'!")
- st.info(f"📝 {result.get('message', 'Conversation closed')}")
- time.sleep(1) # Brief pause before refresh
- st.rerun()
- else:
- st.error(f"❌ Failed to close conversation for client '{selected_client}'")
- else:
- st.info("🔍 No clients with active conversations found")
-
- # Show all clients status
- with st.expander("All Connected Clients Status"):
- for client_id, client_info in clients.items():
- status_icon = "🟢" if client_info.get("has_active_conversation", False) else "⚪"
- st.write(f"{status_icon} **{client_id}** - {'Active conversation' if client_info.get('has_active_conversation', False) else 'No active conversation'}")
- if client_info.get("current_audio_uuid"):
- st.caption(f" Audio UUID: {client_info['current_audio_uuid']}")
- else:
- st.info("🔍 No active clients found")
-
- st.divider()
-
- st.subheader("Add Speaker to Conversation")
- st.write("Add speakers to conversations even if they haven't spoken yet.")
-
- col1, col2, col3 = st.columns([2, 2, 1])
- with col1:
- audio_uuid_input = st.text_input("Audio UUID:", placeholder="Enter the audio UUID")
- with col2:
- speaker_id_input = st.text_input("Speaker ID:", placeholder="e.g., speaker_1, john_doe")
- with col3:
- st.write("") # Spacer
- add_speaker_btn = st.button("Add Speaker", key="add_speaker")
-
- if add_speaker_btn:
- if audio_uuid_input.strip() and speaker_id_input.strip():
- result = post_data(f"/api/conversations/{audio_uuid_input.strip()}/speakers",
- {"speaker_id": speaker_id_input.strip()})
- if result:
- st.success(f"Speaker '{speaker_id_input.strip()}' added to conversation!")
- else:
- st.error("Please enter both Audio UUID and Speaker ID")
-
- st.divider()
-
- st.subheader("Update Transcript Segment")
- st.write("Modify speaker identification or timing information for transcript segments.")
-
- col1, col2 = st.columns([1, 1])
- with col1:
- update_audio_uuid = st.text_input("Audio UUID:", placeholder="Enter the audio UUID", key="update_uuid")
- segment_index = st.number_input("Segment Index:", min_value=0, value=0, step=1)
- new_speaker = st.text_input("New Speaker ID (optional):", placeholder="Leave empty to keep current")
-
- with col2:
- start_time = st.number_input("Start Time (seconds):", min_value=0.0, value=0.0, step=0.1, format="%.1f")
- end_time = st.number_input("End Time (seconds):", min_value=0.0, value=0.0, step=0.1, format="%.1f")
- update_segment_btn = st.button("Update Segment", key="update_segment")
-
- if update_segment_btn:
- if update_audio_uuid.strip():
- params = {}
- if new_speaker.strip():
- params["speaker_id"] = new_speaker.strip()
- if start_time > 0:
- params["start_time"] = start_time
- if end_time > 0:
- params["end_time"] = end_time
-
- if params:
- # Use requests.put for this endpoint
- try:
- response = requests.put(
- f"{BACKEND_API_URL}/api/conversations/{update_audio_uuid.strip()}/transcript/{segment_index}",
- params=params
- )
- response.raise_for_status()
- result = response.json()
- st.success("Transcript segment updated successfully!")
- except requests.exceptions.RequestException as e:
- st.error(f"Error updating segment: {e}")
- else:
- st.warning("Please specify at least one field to update")
- else:
- st.error("Please enter the Audio UUID")
-
- st.divider()
-
- st.subheader("💡 Schema Information")
- st.markdown("""
- **New Conversation Schema:**
- ```json
- {
- "audio_uuid": "unique_identifier",
- "audio_path": "path/to/audio/file.wav",
- "client_id": "user_or_client_id",
- "timestamp": 1234567890,
- "transcript": [
- {
- "speaker": "speaker_1",
- "text": "Hello, how are you?",
- "start": 0.0,
- "end": 3.2
- },
- {
- "speaker": "speaker_2",
- "text": "I'm good, thanks!",
- "start": 3.3,
- "end": 5.0
- }
- ],
- "speakers_identified": ["speaker_1", "speaker_2"]
- }
- ```
- """)
-
- st.info("💡 **Tip**: You can find Audio UUIDs in the conversation details on the 'Conversations' tab.")
diff --git a/backends/advanced-backend/webui/uv.lock b/backends/advanced-backend/webui/uv.lock
deleted file mode 100644
index 8f7ded7a..00000000
--- a/backends/advanced-backend/webui/uv.lock
+++ /dev/null
@@ -1,1343 +0,0 @@
-version = 1
-revision = 2
-requires-python = ">=3.11"
-resolution-markers = [
- "python_full_version >= '3.13'",
- "python_full_version == '3.12.*'",
- "python_full_version < '3.12'",
-]
-
-[[package]]
-name = "altair"
-version = "5.5.0"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "jinja2" },
- { name = "jsonschema" },
- { name = "narwhals" },
- { name = "packaging" },
- { name = "typing-extensions", marker = "python_full_version < '3.14'" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/16/b1/f2969c7bdb8ad8bbdda031687defdce2c19afba2aa2c8e1d2a17f78376d8/altair-5.5.0.tar.gz", hash = "sha256:d960ebe6178c56de3855a68c47b516be38640b73fb3b5111c2a9ca90546dd73d", size = 705305, upload_time = "2024-11-23T23:39:58.542Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/aa/f3/0b6ced594e51cc95d8c1fc1640d3623770d01e4969d29c0bd09945fafefa/altair-5.5.0-py3-none-any.whl", hash = "sha256:91a310b926508d560fe0148d02a194f38b824122641ef528113d029fcd129f8c", size = 731200, upload_time = "2024-11-23T23:39:56.4Z" },
-]
-
-[[package]]
-name = "annotated-types"
-version = "0.7.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/ee/67/531ea369ba64dcff5ec9c3402f9f51bf748cec26dde048a2f973a4eea7f5/annotated_types-0.7.0.tar.gz", hash = "sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89", size = 16081, upload_time = "2024-05-20T21:33:25.928Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/78/b6/6307fbef88d9b5ee7421e68d78a9f162e0da4900bc5f5793f6d3d0e34fb8/annotated_types-0.7.0-py3-none-any.whl", hash = "sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53", size = 13643, upload_time = "2024-05-20T21:33:24.1Z" },
-]
-
-[[package]]
-name = "anyio"
-version = "4.9.0"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "idna" },
- { name = "sniffio" },
- { name = "typing-extensions", marker = "python_full_version < '3.13'" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/95/7d/4c1bd541d4dffa1b52bd83fb8527089e097a106fc90b467a7313b105f840/anyio-4.9.0.tar.gz", hash = "sha256:673c0c244e15788651a4ff38710fea9675823028a6f08a5eda409e0c9840a028", size = 190949, upload_time = "2025-03-17T00:02:54.77Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/a1/ee/48ca1a7c89ffec8b6a0c5d02b89c305671d5ffd8d3c94acf8b8c408575bb/anyio-4.9.0-py3-none-any.whl", hash = "sha256:9f76d541cad6e36af7beb62e978876f3b41e3e04f2c1fbf0884604c0a9c4d93c", size = 100916, upload_time = "2025-03-17T00:02:52.713Z" },
-]
-
-[[package]]
-name = "attrs"
-version = "25.3.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/5a/b0/1367933a8532ee6ff8d63537de4f1177af4bff9f3e829baf7331f595bb24/attrs-25.3.0.tar.gz", hash = "sha256:75d7cefc7fb576747b2c81b4442d4d4a1ce0900973527c011d1030fd3bf4af1b", size = 812032, upload_time = "2025-03-13T11:10:22.779Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/77/06/bb80f5f86020c4551da315d78b3ab75e8228f89f0162f2c3a819e407941a/attrs-25.3.0-py3-none-any.whl", hash = "sha256:427318ce031701fea540783410126f03899a97ffc6f61596ad581ac2e40e3bc3", size = 63815, upload_time = "2025-03-13T11:10:21.14Z" },
-]
-
-[[package]]
-name = "backoff"
-version = "2.2.1"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/47/d7/5bbeb12c44d7c4f2fb5b56abce497eb5ed9f34d85701de869acedd602619/backoff-2.2.1.tar.gz", hash = "sha256:03f829f5bb1923180821643f8753b0502c3b682293992485b0eef2807afa5cba", size = 17001, upload_time = "2022-10-05T19:19:32.061Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/df/73/b6e24bd22e6720ca8ee9a85a0c4a2971af8497d8f3193fa05390cbd46e09/backoff-2.2.1-py3-none-any.whl", hash = "sha256:63579f9a0628e06278f7e47b7d7d5b6ce20dc65c5e96a6f3ca99a6adca0396e8", size = 15148, upload_time = "2022-10-05T19:19:30.546Z" },
-]
-
-[[package]]
-name = "blinker"
-version = "1.9.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/21/28/9b3f50ce0e048515135495f198351908d99540d69bfdc8c1d15b73dc55ce/blinker-1.9.0.tar.gz", hash = "sha256:b4ce2265a7abece45e7cc896e98dbebe6cead56bcf805a3d23136d145f5445bf", size = 22460, upload_time = "2024-11-08T17:25:47.436Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/10/cb/f2ad4230dc2eb1a74edf38f1a38b9b52277f75bef262d8908e60d957e13c/blinker-1.9.0-py3-none-any.whl", hash = "sha256:ba0efaa9080b619ff2f3459d1d500c57bddea4a6b424b60a91141db6fd2f08bc", size = 8458, upload_time = "2024-11-08T17:25:46.184Z" },
-]
-
-[[package]]
-name = "cachetools"
-version = "5.5.2"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/6c/81/3747dad6b14fa2cf53fcf10548cf5aea6913e96fab41a3c198676f8948a5/cachetools-5.5.2.tar.gz", hash = "sha256:1a661caa9175d26759571b2e19580f9d6393969e5dfca11fdb1f947a23e640d4", size = 28380, upload_time = "2025-02-20T21:01:19.524Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/72/76/20fa66124dbe6be5cafeb312ece67de6b61dd91a0247d1ea13db4ebb33c2/cachetools-5.5.2-py3-none-any.whl", hash = "sha256:d26a22bcc62eb95c3beabd9f1ee5e820d3d2704fe2967cbe350e20c8ffcd3f0a", size = 10080, upload_time = "2025-02-20T21:01:16.647Z" },
-]
-
-[[package]]
-name = "certifi"
-version = "2025.4.26"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/e8/9e/c05b3920a3b7d20d3d3310465f50348e5b3694f4f88c6daf736eef3024c4/certifi-2025.4.26.tar.gz", hash = "sha256:0a816057ea3cdefcef70270d2c515e4506bbc954f417fa5ade2021213bb8f0c6", size = 160705, upload_time = "2025-04-26T02:12:29.51Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/4a/7e/3db2bd1b1f9e95f7cddca6d6e75e2f2bd9f51b1246e546d88addca0106bd/certifi-2025.4.26-py3-none-any.whl", hash = "sha256:30350364dfe371162649852c63336a15c70c6510c2ad5015b21c2345311805f3", size = 159618, upload_time = "2025-04-26T02:12:27.662Z" },
-]
-
-[[package]]
-name = "charset-normalizer"
-version = "3.4.2"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/e4/33/89c2ced2b67d1c2a61c19c6751aa8902d46ce3dacb23600a283619f5a12d/charset_normalizer-3.4.2.tar.gz", hash = "sha256:5baececa9ecba31eff645232d59845c07aa030f0c81ee70184a90d35099a0e63", size = 126367, upload_time = "2025-05-02T08:34:42.01Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/05/85/4c40d00dcc6284a1c1ad5de5e0996b06f39d8232f1031cd23c2f5c07ee86/charset_normalizer-3.4.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:be1e352acbe3c78727a16a455126d9ff83ea2dfdcbc83148d2982305a04714c2", size = 198794, upload_time = "2025-05-02T08:32:11.945Z" },
- { url = "https://files.pythonhosted.org/packages/41/d9/7a6c0b9db952598e97e93cbdfcb91bacd89b9b88c7c983250a77c008703c/charset_normalizer-3.4.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aa88ca0b1932e93f2d961bf3addbb2db902198dca337d88c89e1559e066e7645", size = 142846, upload_time = "2025-05-02T08:32:13.946Z" },
- { url = "https://files.pythonhosted.org/packages/66/82/a37989cda2ace7e37f36c1a8ed16c58cf48965a79c2142713244bf945c89/charset_normalizer-3.4.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d524ba3f1581b35c03cb42beebab4a13e6cdad7b36246bd22541fa585a56cccd", size = 153350, upload_time = "2025-05-02T08:32:15.873Z" },
- { url = "https://files.pythonhosted.org/packages/df/68/a576b31b694d07b53807269d05ec3f6f1093e9545e8607121995ba7a8313/charset_normalizer-3.4.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28a1005facc94196e1fb3e82a3d442a9d9110b8434fc1ded7a24a2983c9888d8", size = 145657, upload_time = "2025-05-02T08:32:17.283Z" },
- { url = "https://files.pythonhosted.org/packages/92/9b/ad67f03d74554bed3aefd56fe836e1623a50780f7c998d00ca128924a499/charset_normalizer-3.4.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fdb20a30fe1175ecabed17cbf7812f7b804b8a315a25f24678bcdf120a90077f", size = 147260, upload_time = "2025-05-02T08:32:18.807Z" },
- { url = "https://files.pythonhosted.org/packages/a6/e6/8aebae25e328160b20e31a7e9929b1578bbdc7f42e66f46595a432f8539e/charset_normalizer-3.4.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0f5d9ed7f254402c9e7d35d2f5972c9bbea9040e99cd2861bd77dc68263277c7", size = 149164, upload_time = "2025-05-02T08:32:20.333Z" },
- { url = "https://files.pythonhosted.org/packages/8b/f2/b3c2f07dbcc248805f10e67a0262c93308cfa149a4cd3d1fe01f593e5fd2/charset_normalizer-3.4.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:efd387a49825780ff861998cd959767800d54f8308936b21025326de4b5a42b9", size = 144571, upload_time = "2025-05-02T08:32:21.86Z" },
- { url = "https://files.pythonhosted.org/packages/60/5b/c3f3a94bc345bc211622ea59b4bed9ae63c00920e2e8f11824aa5708e8b7/charset_normalizer-3.4.2-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:f0aa37f3c979cf2546b73e8222bbfa3dc07a641585340179d768068e3455e544", size = 151952, upload_time = "2025-05-02T08:32:23.434Z" },
- { url = "https://files.pythonhosted.org/packages/e2/4d/ff460c8b474122334c2fa394a3f99a04cf11c646da895f81402ae54f5c42/charset_normalizer-3.4.2-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:e70e990b2137b29dc5564715de1e12701815dacc1d056308e2b17e9095372a82", size = 155959, upload_time = "2025-05-02T08:32:24.993Z" },
- { url = "https://files.pythonhosted.org/packages/a2/2b/b964c6a2fda88611a1fe3d4c400d39c66a42d6c169c924818c848f922415/charset_normalizer-3.4.2-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:0c8c57f84ccfc871a48a47321cfa49ae1df56cd1d965a09abe84066f6853b9c0", size = 153030, upload_time = "2025-05-02T08:32:26.435Z" },
- { url = "https://files.pythonhosted.org/packages/59/2e/d3b9811db26a5ebf444bc0fa4f4be5aa6d76fc6e1c0fd537b16c14e849b6/charset_normalizer-3.4.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:6b66f92b17849b85cad91259efc341dce9c1af48e2173bf38a85c6329f1033e5", size = 148015, upload_time = "2025-05-02T08:32:28.376Z" },
- { url = "https://files.pythonhosted.org/packages/90/07/c5fd7c11eafd561bb51220d600a788f1c8d77c5eef37ee49454cc5c35575/charset_normalizer-3.4.2-cp311-cp311-win32.whl", hash = "sha256:daac4765328a919a805fa5e2720f3e94767abd632ae410a9062dff5412bae65a", size = 98106, upload_time = "2025-05-02T08:32:30.281Z" },
- { url = "https://files.pythonhosted.org/packages/a8/05/5e33dbef7e2f773d672b6d79f10ec633d4a71cd96db6673625838a4fd532/charset_normalizer-3.4.2-cp311-cp311-win_amd64.whl", hash = "sha256:e53efc7c7cee4c1e70661e2e112ca46a575f90ed9ae3fef200f2a25e954f4b28", size = 105402, upload_time = "2025-05-02T08:32:32.191Z" },
- { url = "https://files.pythonhosted.org/packages/d7/a4/37f4d6035c89cac7930395a35cc0f1b872e652eaafb76a6075943754f095/charset_normalizer-3.4.2-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:0c29de6a1a95f24b9a1aa7aefd27d2487263f00dfd55a77719b530788f75cff7", size = 199936, upload_time = "2025-05-02T08:32:33.712Z" },
- { url = "https://files.pythonhosted.org/packages/ee/8a/1a5e33b73e0d9287274f899d967907cd0bf9c343e651755d9307e0dbf2b3/charset_normalizer-3.4.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cddf7bd982eaa998934a91f69d182aec997c6c468898efe6679af88283b498d3", size = 143790, upload_time = "2025-05-02T08:32:35.768Z" },
- { url = "https://files.pythonhosted.org/packages/66/52/59521f1d8e6ab1482164fa21409c5ef44da3e9f653c13ba71becdd98dec3/charset_normalizer-3.4.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:fcbe676a55d7445b22c10967bceaaf0ee69407fbe0ece4d032b6eb8d4565982a", size = 153924, upload_time = "2025-05-02T08:32:37.284Z" },
- { url = "https://files.pythonhosted.org/packages/86/2d/fb55fdf41964ec782febbf33cb64be480a6b8f16ded2dbe8db27a405c09f/charset_normalizer-3.4.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d41c4d287cfc69060fa91cae9683eacffad989f1a10811995fa309df656ec214", size = 146626, upload_time = "2025-05-02T08:32:38.803Z" },
- { url = "https://files.pythonhosted.org/packages/8c/73/6ede2ec59bce19b3edf4209d70004253ec5f4e319f9a2e3f2f15601ed5f7/charset_normalizer-3.4.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4e594135de17ab3866138f496755f302b72157d115086d100c3f19370839dd3a", size = 148567, upload_time = "2025-05-02T08:32:40.251Z" },
- { url = "https://files.pythonhosted.org/packages/09/14/957d03c6dc343c04904530b6bef4e5efae5ec7d7990a7cbb868e4595ee30/charset_normalizer-3.4.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cf713fe9a71ef6fd5adf7a79670135081cd4431c2943864757f0fa3a65b1fafd", size = 150957, upload_time = "2025-05-02T08:32:41.705Z" },
- { url = "https://files.pythonhosted.org/packages/0d/c8/8174d0e5c10ccebdcb1b53cc959591c4c722a3ad92461a273e86b9f5a302/charset_normalizer-3.4.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:a370b3e078e418187da8c3674eddb9d983ec09445c99a3a263c2011993522981", size = 145408, upload_time = "2025-05-02T08:32:43.709Z" },
- { url = "https://files.pythonhosted.org/packages/58/aa/8904b84bc8084ac19dc52feb4f5952c6df03ffb460a887b42615ee1382e8/charset_normalizer-3.4.2-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:a955b438e62efdf7e0b7b52a64dc5c3396e2634baa62471768a64bc2adb73d5c", size = 153399, upload_time = "2025-05-02T08:32:46.197Z" },
- { url = "https://files.pythonhosted.org/packages/c2/26/89ee1f0e264d201cb65cf054aca6038c03b1a0c6b4ae998070392a3ce605/charset_normalizer-3.4.2-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:7222ffd5e4de8e57e03ce2cef95a4c43c98fcb72ad86909abdfc2c17d227fc1b", size = 156815, upload_time = "2025-05-02T08:32:48.105Z" },
- { url = "https://files.pythonhosted.org/packages/fd/07/68e95b4b345bad3dbbd3a8681737b4338ff2c9df29856a6d6d23ac4c73cb/charset_normalizer-3.4.2-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:bee093bf902e1d8fc0ac143c88902c3dfc8941f7ea1d6a8dd2bcb786d33db03d", size = 154537, upload_time = "2025-05-02T08:32:49.719Z" },
- { url = "https://files.pythonhosted.org/packages/77/1a/5eefc0ce04affb98af07bc05f3bac9094513c0e23b0562d64af46a06aae4/charset_normalizer-3.4.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:dedb8adb91d11846ee08bec4c8236c8549ac721c245678282dcb06b221aab59f", size = 149565, upload_time = "2025-05-02T08:32:51.404Z" },
- { url = "https://files.pythonhosted.org/packages/37/a0/2410e5e6032a174c95e0806b1a6585eb21e12f445ebe239fac441995226a/charset_normalizer-3.4.2-cp312-cp312-win32.whl", hash = "sha256:db4c7bf0e07fc3b7d89ac2a5880a6a8062056801b83ff56d8464b70f65482b6c", size = 98357, upload_time = "2025-05-02T08:32:53.079Z" },
- { url = "https://files.pythonhosted.org/packages/6c/4f/c02d5c493967af3eda9c771ad4d2bbc8df6f99ddbeb37ceea6e8716a32bc/charset_normalizer-3.4.2-cp312-cp312-win_amd64.whl", hash = "sha256:5a9979887252a82fefd3d3ed2a8e3b937a7a809f65dcb1e068b090e165bbe99e", size = 105776, upload_time = "2025-05-02T08:32:54.573Z" },
- { url = "https://files.pythonhosted.org/packages/ea/12/a93df3366ed32db1d907d7593a94f1fe6293903e3e92967bebd6950ed12c/charset_normalizer-3.4.2-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:926ca93accd5d36ccdabd803392ddc3e03e6d4cd1cf17deff3b989ab8e9dbcf0", size = 199622, upload_time = "2025-05-02T08:32:56.363Z" },
- { url = "https://files.pythonhosted.org/packages/04/93/bf204e6f344c39d9937d3c13c8cd5bbfc266472e51fc8c07cb7f64fcd2de/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eba9904b0f38a143592d9fc0e19e2df0fa2e41c3c3745554761c5f6447eedabf", size = 143435, upload_time = "2025-05-02T08:32:58.551Z" },
- { url = "https://files.pythonhosted.org/packages/22/2a/ea8a2095b0bafa6c5b5a55ffdc2f924455233ee7b91c69b7edfcc9e02284/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3fddb7e2c84ac87ac3a947cb4e66d143ca5863ef48e4a5ecb83bd48619e4634e", size = 153653, upload_time = "2025-05-02T08:33:00.342Z" },
- { url = "https://files.pythonhosted.org/packages/b6/57/1b090ff183d13cef485dfbe272e2fe57622a76694061353c59da52c9a659/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:98f862da73774290f251b9df8d11161b6cf25b599a66baf087c1ffe340e9bfd1", size = 146231, upload_time = "2025-05-02T08:33:02.081Z" },
- { url = "https://files.pythonhosted.org/packages/e2/28/ffc026b26f441fc67bd21ab7f03b313ab3fe46714a14b516f931abe1a2d8/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c9379d65defcab82d07b2a9dfbfc2e95bc8fe0ebb1b176a3190230a3ef0e07c", size = 148243, upload_time = "2025-05-02T08:33:04.063Z" },
- { url = "https://files.pythonhosted.org/packages/c0/0f/9abe9bd191629c33e69e47c6ef45ef99773320e9ad8e9cb08b8ab4a8d4cb/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e635b87f01ebc977342e2697d05b56632f5f879a4f15955dfe8cef2448b51691", size = 150442, upload_time = "2025-05-02T08:33:06.418Z" },
- { url = "https://files.pythonhosted.org/packages/67/7c/a123bbcedca91d5916c056407f89a7f5e8fdfce12ba825d7d6b9954a1a3c/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:1c95a1e2902a8b722868587c0e1184ad5c55631de5afc0eb96bc4b0d738092c0", size = 145147, upload_time = "2025-05-02T08:33:08.183Z" },
- { url = "https://files.pythonhosted.org/packages/ec/fe/1ac556fa4899d967b83e9893788e86b6af4d83e4726511eaaad035e36595/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:ef8de666d6179b009dce7bcb2ad4c4a779f113f12caf8dc77f0162c29d20490b", size = 153057, upload_time = "2025-05-02T08:33:09.986Z" },
- { url = "https://files.pythonhosted.org/packages/2b/ff/acfc0b0a70b19e3e54febdd5301a98b72fa07635e56f24f60502e954c461/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:32fc0341d72e0f73f80acb0a2c94216bd704f4f0bce10aedea38f30502b271ff", size = 156454, upload_time = "2025-05-02T08:33:11.814Z" },
- { url = "https://files.pythonhosted.org/packages/92/08/95b458ce9c740d0645feb0e96cea1f5ec946ea9c580a94adfe0b617f3573/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:289200a18fa698949d2b39c671c2cc7a24d44096784e76614899a7ccf2574b7b", size = 154174, upload_time = "2025-05-02T08:33:13.707Z" },
- { url = "https://files.pythonhosted.org/packages/78/be/8392efc43487ac051eee6c36d5fbd63032d78f7728cb37aebcc98191f1ff/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:4a476b06fbcf359ad25d34a057b7219281286ae2477cc5ff5e3f70a246971148", size = 149166, upload_time = "2025-05-02T08:33:15.458Z" },
- { url = "https://files.pythonhosted.org/packages/44/96/392abd49b094d30b91d9fbda6a69519e95802250b777841cf3bda8fe136c/charset_normalizer-3.4.2-cp313-cp313-win32.whl", hash = "sha256:aaeeb6a479c7667fbe1099af9617c83aaca22182d6cf8c53966491a0f1b7ffb7", size = 98064, upload_time = "2025-05-02T08:33:17.06Z" },
- { url = "https://files.pythonhosted.org/packages/e9/b0/0200da600134e001d91851ddc797809e2fe0ea72de90e09bec5a2fbdaccb/charset_normalizer-3.4.2-cp313-cp313-win_amd64.whl", hash = "sha256:aa6af9e7d59f9c12b33ae4e9450619cf2488e2bbe9b44030905877f0b2324980", size = 105641, upload_time = "2025-05-02T08:33:18.753Z" },
- { url = "https://files.pythonhosted.org/packages/20/94/c5790835a017658cbfabd07f3bfb549140c3ac458cfc196323996b10095a/charset_normalizer-3.4.2-py3-none-any.whl", hash = "sha256:7f56930ab0abd1c45cd15be65cc741c28b1c9a34876ce8c17a2fa107810c0af0", size = 52626, upload_time = "2025-05-02T08:34:40.053Z" },
-]
-
-[[package]]
-name = "click"
-version = "8.2.1"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "colorama", marker = "sys_platform == 'win32'" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/60/6c/8ca2efa64cf75a977a0d7fac081354553ebe483345c734fb6b6515d96bbc/click-8.2.1.tar.gz", hash = "sha256:27c491cc05d968d271d5a1db13e3b5a184636d9d930f148c50b038f0d0646202", size = 286342, upload_time = "2025-05-20T23:19:49.832Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/85/32/10bb5764d90a8eee674e9dc6f4db6a0ab47c8c4d0d83c27f7c39ac415a4d/click-8.2.1-py3-none-any.whl", hash = "sha256:61a3265b914e850b85317d0b3109c7f8cd35a670f963866005d6ef1d5175a12b", size = 102215, upload_time = "2025-05-20T23:19:47.796Z" },
-]
-
-[[package]]
-name = "colorama"
-version = "0.4.6"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/d8/53/6f443c9a4a8358a93a6792e2acffb9d9d5cb0a5cfd8802644b7b1c9a02e4/colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44", size = 27697, upload_time = "2022-10-25T02:36:22.414Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335, upload_time = "2022-10-25T02:36:20.889Z" },
-]
-
-[[package]]
-name = "distro"
-version = "1.9.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/fc/f8/98eea607f65de6527f8a2e8885fc8015d3e6f5775df186e443e0964a11c3/distro-1.9.0.tar.gz", hash = "sha256:2fa77c6fd8940f116ee1d6b94a2f90b13b5ea8d019b98bc8bafdcabcdd9bdbed", size = 60722, upload_time = "2023-12-24T09:54:32.31Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/12/b3/231ffd4ab1fc9d679809f356cebee130ac7daa00d6d6f3206dd4fd137e9e/distro-1.9.0-py3-none-any.whl", hash = "sha256:7bffd925d65168f85027d8da9af6bddab658135b840670a223589bc0c8ef02b2", size = 20277, upload_time = "2023-12-24T09:54:30.421Z" },
-]
-
-[[package]]
-name = "dnspython"
-version = "2.7.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/b5/4a/263763cb2ba3816dd94b08ad3a33d5fdae34ecb856678773cc40a3605829/dnspython-2.7.0.tar.gz", hash = "sha256:ce9c432eda0dc91cf618a5cedf1a4e142651196bbcd2c80e89ed5a907e5cfaf1", size = 345197, upload_time = "2024-10-05T20:14:59.362Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/68/1b/e0a87d256e40e8c888847551b20a017a6b98139178505dc7ffb96f04e954/dnspython-2.7.0-py3-none-any.whl", hash = "sha256:b4c34b7d10b51bcc3a5071e7b8dee77939f1e878477eeecc965e9835f63c6c86", size = 313632, upload_time = "2024-10-05T20:14:57.687Z" },
-]
-
-[[package]]
-name = "gitdb"
-version = "4.0.12"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "smmap" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/72/94/63b0fc47eb32792c7ba1fe1b694daec9a63620db1e313033d18140c2320a/gitdb-4.0.12.tar.gz", hash = "sha256:5ef71f855d191a3326fcfbc0d5da835f26b13fbcba60c32c21091c349ffdb571", size = 394684, upload_time = "2025-01-02T07:20:46.413Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/a0/61/5c78b91c3143ed5c14207f463aecfc8f9dbb5092fb2869baf37c273b2705/gitdb-4.0.12-py3-none-any.whl", hash = "sha256:67073e15955400952c6565cc3e707c554a4eea2e428946f7a4c162fab9bd9bcf", size = 62794, upload_time = "2025-01-02T07:20:43.624Z" },
-]
-
-[[package]]
-name = "gitpython"
-version = "3.1.44"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "gitdb" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/c0/89/37df0b71473153574a5cdef8f242de422a0f5d26d7a9e231e6f169b4ad14/gitpython-3.1.44.tar.gz", hash = "sha256:c87e30b26253bf5418b01b0660f818967f3c503193838337fe5e573331249269", size = 214196, upload_time = "2025-01-02T07:32:43.59Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/1d/9a/4114a9057db2f1462d5c8f8390ab7383925fe1ac012eaa42402ad65c2963/GitPython-3.1.44-py3-none-any.whl", hash = "sha256:9e0e10cda9bed1ee64bc9a6de50e7e38a9c9943241cd7f585f6df3ed28011110", size = 207599, upload_time = "2025-01-02T07:32:40.731Z" },
-]
-
-[[package]]
-name = "greenlet"
-version = "3.2.2"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/34/c1/a82edae11d46c0d83481aacaa1e578fea21d94a1ef400afd734d47ad95ad/greenlet-3.2.2.tar.gz", hash = "sha256:ad053d34421a2debba45aa3cc39acf454acbcd025b3fc1a9f8a0dee237abd485", size = 185797, upload_time = "2025-05-09T19:47:35.066Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/a3/9f/a47e19261747b562ce88219e5ed8c859d42c6e01e73da6fbfa3f08a7be13/greenlet-3.2.2-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:dcb9cebbf3f62cb1e5afacae90761ccce0effb3adaa32339a0670fe7805d8068", size = 268635, upload_time = "2025-05-09T14:50:39.007Z" },
- { url = "https://files.pythonhosted.org/packages/11/80/a0042b91b66975f82a914d515e81c1944a3023f2ce1ed7a9b22e10b46919/greenlet-3.2.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bf3fc9145141250907730886b031681dfcc0de1c158f3cc51c092223c0f381ce", size = 628786, upload_time = "2025-05-09T15:24:00.692Z" },
- { url = "https://files.pythonhosted.org/packages/38/a2/8336bf1e691013f72a6ebab55da04db81a11f68e82bb691f434909fa1327/greenlet-3.2.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:efcdfb9df109e8a3b475c016f60438fcd4be68cd13a365d42b35914cdab4bb2b", size = 640866, upload_time = "2025-05-09T15:24:48.153Z" },
- { url = "https://files.pythonhosted.org/packages/f8/7e/f2a3a13e424670a5d08826dab7468fa5e403e0fbe0b5f951ff1bc4425b45/greenlet-3.2.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4bd139e4943547ce3a56ef4b8b1b9479f9e40bb47e72cc906f0f66b9d0d5cab3", size = 636752, upload_time = "2025-05-09T15:29:23.182Z" },
- { url = "https://files.pythonhosted.org/packages/fd/5d/ce4a03a36d956dcc29b761283f084eb4a3863401c7cb505f113f73af8774/greenlet-3.2.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:71566302219b17ca354eb274dfd29b8da3c268e41b646f330e324e3967546a74", size = 636028, upload_time = "2025-05-09T14:53:32.854Z" },
- { url = "https://files.pythonhosted.org/packages/4b/29/b130946b57e3ceb039238413790dd3793c5e7b8e14a54968de1fe449a7cf/greenlet-3.2.2-cp311-cp311-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:3091bc45e6b0c73f225374fefa1536cd91b1e987377b12ef5b19129b07d93ebe", size = 583869, upload_time = "2025-05-09T14:53:43.614Z" },
- { url = "https://files.pythonhosted.org/packages/ac/30/9f538dfe7f87b90ecc75e589d20cbd71635531a617a336c386d775725a8b/greenlet-3.2.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:44671c29da26539a5f142257eaba5110f71887c24d40df3ac87f1117df589e0e", size = 1112886, upload_time = "2025-05-09T15:27:01.304Z" },
- { url = "https://files.pythonhosted.org/packages/be/92/4b7deeb1a1e9c32c1b59fdca1cac3175731c23311ddca2ea28a8b6ada91c/greenlet-3.2.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:c23ea227847c9dbe0b3910f5c0dd95658b607137614eb821e6cbaecd60d81cc6", size = 1138355, upload_time = "2025-05-09T14:53:58.011Z" },
- { url = "https://files.pythonhosted.org/packages/c5/eb/7551c751a2ea6498907b2fcbe31d7a54b602ba5e8eb9550a9695ca25d25c/greenlet-3.2.2-cp311-cp311-win_amd64.whl", hash = "sha256:0a16fb934fcabfdfacf21d79e6fed81809d8cd97bc1be9d9c89f0e4567143d7b", size = 295437, upload_time = "2025-05-09T15:00:57.733Z" },
- { url = "https://files.pythonhosted.org/packages/2c/a1/88fdc6ce0df6ad361a30ed78d24c86ea32acb2b563f33e39e927b1da9ea0/greenlet-3.2.2-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:df4d1509efd4977e6a844ac96d8be0b9e5aa5d5c77aa27ca9f4d3f92d3fcf330", size = 270413, upload_time = "2025-05-09T14:51:32.455Z" },
- { url = "https://files.pythonhosted.org/packages/a6/2e/6c1caffd65490c68cd9bcec8cb7feb8ac7b27d38ba1fea121fdc1f2331dc/greenlet-3.2.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da956d534a6d1b9841f95ad0f18ace637668f680b1339ca4dcfb2c1837880a0b", size = 637242, upload_time = "2025-05-09T15:24:02.63Z" },
- { url = "https://files.pythonhosted.org/packages/98/28/088af2cedf8823b6b7ab029a5626302af4ca1037cf8b998bed3a8d3cb9e2/greenlet-3.2.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9c7b15fb9b88d9ee07e076f5a683027bc3befd5bb5d25954bb633c385d8b737e", size = 651444, upload_time = "2025-05-09T15:24:49.856Z" },
- { url = "https://files.pythonhosted.org/packages/4a/9f/0116ab876bb0bc7a81eadc21c3f02cd6100dcd25a1cf2a085a130a63a26a/greenlet-3.2.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:752f0e79785e11180ebd2e726c8a88109ded3e2301d40abced2543aa5d164275", size = 646067, upload_time = "2025-05-09T15:29:24.989Z" },
- { url = "https://files.pythonhosted.org/packages/35/17/bb8f9c9580e28a94a9575da847c257953d5eb6e39ca888239183320c1c28/greenlet-3.2.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9ae572c996ae4b5e122331e12bbb971ea49c08cc7c232d1bd43150800a2d6c65", size = 648153, upload_time = "2025-05-09T14:53:34.716Z" },
- { url = "https://files.pythonhosted.org/packages/2c/ee/7f31b6f7021b8df6f7203b53b9cc741b939a2591dcc6d899d8042fcf66f2/greenlet-3.2.2-cp312-cp312-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:02f5972ff02c9cf615357c17ab713737cccfd0eaf69b951084a9fd43f39833d3", size = 603865, upload_time = "2025-05-09T14:53:45.738Z" },
- { url = "https://files.pythonhosted.org/packages/b5/2d/759fa59323b521c6f223276a4fc3d3719475dc9ae4c44c2fe7fc750f8de0/greenlet-3.2.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:4fefc7aa68b34b9224490dfda2e70ccf2131368493add64b4ef2d372955c207e", size = 1119575, upload_time = "2025-05-09T15:27:04.248Z" },
- { url = "https://files.pythonhosted.org/packages/30/05/356813470060bce0e81c3df63ab8cd1967c1ff6f5189760c1a4734d405ba/greenlet-3.2.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:a31ead8411a027c2c4759113cf2bd473690517494f3d6e4bf67064589afcd3c5", size = 1147460, upload_time = "2025-05-09T14:54:00.315Z" },
- { url = "https://files.pythonhosted.org/packages/07/f4/b2a26a309a04fb844c7406a4501331b9400e1dd7dd64d3450472fd47d2e1/greenlet-3.2.2-cp312-cp312-win_amd64.whl", hash = "sha256:b24c7844c0a0afc3ccbeb0b807adeefb7eff2b5599229ecedddcfeb0ef333bec", size = 296239, upload_time = "2025-05-09T14:57:17.633Z" },
- { url = "https://files.pythonhosted.org/packages/89/30/97b49779fff8601af20972a62cc4af0c497c1504dfbb3e93be218e093f21/greenlet-3.2.2-cp313-cp313-macosx_11_0_universal2.whl", hash = "sha256:3ab7194ee290302ca15449f601036007873028712e92ca15fc76597a0aeb4c59", size = 269150, upload_time = "2025-05-09T14:50:30.784Z" },
- { url = "https://files.pythonhosted.org/packages/21/30/877245def4220f684bc2e01df1c2e782c164e84b32e07373992f14a2d107/greenlet-3.2.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2dc5c43bb65ec3669452af0ab10729e8fdc17f87a1f2ad7ec65d4aaaefabf6bf", size = 637381, upload_time = "2025-05-09T15:24:12.893Z" },
- { url = "https://files.pythonhosted.org/packages/8e/16/adf937908e1f913856b5371c1d8bdaef5f58f251d714085abeea73ecc471/greenlet-3.2.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:decb0658ec19e5c1f519faa9a160c0fc85a41a7e6654b3ce1b44b939f8bf1325", size = 651427, upload_time = "2025-05-09T15:24:51.074Z" },
- { url = "https://files.pythonhosted.org/packages/ad/49/6d79f58fa695b618654adac64e56aff2eeb13344dc28259af8f505662bb1/greenlet-3.2.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6fadd183186db360b61cb34e81117a096bff91c072929cd1b529eb20dd46e6c5", size = 645795, upload_time = "2025-05-09T15:29:26.673Z" },
- { url = "https://files.pythonhosted.org/packages/5a/e6/28ed5cb929c6b2f001e96b1d0698c622976cd8f1e41fe7ebc047fa7c6dd4/greenlet-3.2.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1919cbdc1c53ef739c94cf2985056bcc0838c1f217b57647cbf4578576c63825", size = 648398, upload_time = "2025-05-09T14:53:36.61Z" },
- { url = "https://files.pythonhosted.org/packages/9d/70/b200194e25ae86bc57077f695b6cc47ee3118becf54130c5514456cf8dac/greenlet-3.2.2-cp313-cp313-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:3885f85b61798f4192d544aac7b25a04ece5fe2704670b4ab73c2d2c14ab740d", size = 606795, upload_time = "2025-05-09T14:53:47.039Z" },
- { url = "https://files.pythonhosted.org/packages/f8/c8/ba1def67513a941154ed8f9477ae6e5a03f645be6b507d3930f72ed508d3/greenlet-3.2.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:85f3e248507125bf4af607a26fd6cb8578776197bd4b66e35229cdf5acf1dfbf", size = 1117976, upload_time = "2025-05-09T15:27:06.542Z" },
- { url = "https://files.pythonhosted.org/packages/c3/30/d0e88c1cfcc1b3331d63c2b54a0a3a4a950ef202fb8b92e772ca714a9221/greenlet-3.2.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:1e76106b6fc55fa3d6fe1c527f95ee65e324a13b62e243f77b48317346559708", size = 1145509, upload_time = "2025-05-09T14:54:02.223Z" },
- { url = "https://files.pythonhosted.org/packages/90/2e/59d6491834b6e289051b252cf4776d16da51c7c6ca6a87ff97e3a50aa0cd/greenlet-3.2.2-cp313-cp313-win_amd64.whl", hash = "sha256:fe46d4f8e94e637634d54477b0cfabcf93c53f29eedcbdeecaf2af32029b4421", size = 296023, upload_time = "2025-05-09T14:53:24.157Z" },
- { url = "https://files.pythonhosted.org/packages/65/66/8a73aace5a5335a1cba56d0da71b7bd93e450f17d372c5b7c5fa547557e9/greenlet-3.2.2-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ba30e88607fb6990544d84caf3c706c4b48f629e18853fc6a646f82db9629418", size = 629911, upload_time = "2025-05-09T15:24:22.376Z" },
- { url = "https://files.pythonhosted.org/packages/48/08/c8b8ebac4e0c95dcc68ec99198842e7db53eda4ab3fb0a4e785690883991/greenlet-3.2.2-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:055916fafad3e3388d27dd68517478933a97edc2fc54ae79d3bec827de2c64c4", size = 635251, upload_time = "2025-05-09T15:24:52.205Z" },
- { url = "https://files.pythonhosted.org/packages/37/26/7db30868f73e86b9125264d2959acabea132b444b88185ba5c462cb8e571/greenlet-3.2.2-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2593283bf81ca37d27d110956b79e8723f9aa50c4bcdc29d3c0543d4743d2763", size = 632620, upload_time = "2025-05-09T15:29:28.051Z" },
- { url = "https://files.pythonhosted.org/packages/10/ec/718a3bd56249e729016b0b69bee4adea0dfccf6ca43d147ef3b21edbca16/greenlet-3.2.2-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:89c69e9a10670eb7a66b8cef6354c24671ba241f46152dd3eed447f79c29fb5b", size = 628851, upload_time = "2025-05-09T14:53:38.472Z" },
- { url = "https://files.pythonhosted.org/packages/9b/9d/d1c79286a76bc62ccdc1387291464af16a4204ea717f24e77b0acd623b99/greenlet-3.2.2-cp313-cp313t-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:02a98600899ca1ca5d3a2590974c9e3ec259503b2d6ba6527605fcd74e08e207", size = 593718, upload_time = "2025-05-09T14:53:48.313Z" },
- { url = "https://files.pythonhosted.org/packages/cd/41/96ba2bf948f67b245784cd294b84e3d17933597dffd3acdb367a210d1949/greenlet-3.2.2-cp313-cp313t-musllinux_1_1_aarch64.whl", hash = "sha256:b50a8c5c162469c3209e5ec92ee4f95c8231b11db6a04db09bbe338176723bb8", size = 1105752, upload_time = "2025-05-09T15:27:08.217Z" },
- { url = "https://files.pythonhosted.org/packages/68/3b/3b97f9d33c1f2eb081759da62bd6162159db260f602f048bc2f36b4c453e/greenlet-3.2.2-cp313-cp313t-musllinux_1_1_x86_64.whl", hash = "sha256:45f9f4853fb4cc46783085261c9ec4706628f3b57de3e68bae03e8f8b3c0de51", size = 1125170, upload_time = "2025-05-09T14:54:04.082Z" },
- { url = "https://files.pythonhosted.org/packages/31/df/b7d17d66c8d0f578d2885a3d8f565e9e4725eacc9d3fdc946d0031c055c4/greenlet-3.2.2-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:9ea5231428af34226c05f927e16fc7f6fa5e39e3ad3cd24ffa48ba53a47f4240", size = 269899, upload_time = "2025-05-09T14:54:01.581Z" },
-]
-
-[[package]]
-name = "grpcio"
-version = "1.71.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/1c/95/aa11fc09a85d91fbc7dd405dcb2a1e0256989d67bf89fa65ae24b3ba105a/grpcio-1.71.0.tar.gz", hash = "sha256:2b85f7820475ad3edec209d3d89a7909ada16caab05d3f2e08a7e8ae3200a55c", size = 12549828, upload_time = "2025-03-10T19:28:49.203Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/63/04/a085f3ad4133426f6da8c1becf0749872a49feb625a407a2e864ded3fb12/grpcio-1.71.0-cp311-cp311-linux_armv7l.whl", hash = "sha256:d6aa986318c36508dc1d5001a3ff169a15b99b9f96ef5e98e13522c506b37eef", size = 5210453, upload_time = "2025-03-10T19:24:33.342Z" },
- { url = "https://files.pythonhosted.org/packages/b4/d5/0bc53ed33ba458de95020970e2c22aa8027b26cc84f98bea7fcad5d695d1/grpcio-1.71.0-cp311-cp311-macosx_10_14_universal2.whl", hash = "sha256:d2c170247315f2d7e5798a22358e982ad6eeb68fa20cf7a820bb74c11f0736e7", size = 11347567, upload_time = "2025-03-10T19:24:35.215Z" },
- { url = "https://files.pythonhosted.org/packages/e3/6d/ce334f7e7a58572335ccd61154d808fe681a4c5e951f8a1ff68f5a6e47ce/grpcio-1.71.0-cp311-cp311-manylinux_2_17_aarch64.whl", hash = "sha256:e6f83a583ed0a5b08c5bc7a3fe860bb3c2eac1f03f1f63e0bc2091325605d2b7", size = 5696067, upload_time = "2025-03-10T19:24:37.988Z" },
- { url = "https://files.pythonhosted.org/packages/05/4a/80befd0b8b1dc2b9ac5337e57473354d81be938f87132e147c4a24a581bd/grpcio-1.71.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4be74ddeeb92cc87190e0e376dbc8fc7736dbb6d3d454f2fa1f5be1dee26b9d7", size = 6348377, upload_time = "2025-03-10T19:24:40.361Z" },
- { url = "https://files.pythonhosted.org/packages/c7/67/cbd63c485051eb78663355d9efd1b896cfb50d4a220581ec2cb9a15cd750/grpcio-1.71.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4dd0dfbe4d5eb1fcfec9490ca13f82b089a309dc3678e2edabc144051270a66e", size = 5940407, upload_time = "2025-03-10T19:24:42.685Z" },
- { url = "https://files.pythonhosted.org/packages/98/4b/7a11aa4326d7faa499f764eaf8a9b5a0eb054ce0988ee7ca34897c2b02ae/grpcio-1.71.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:a2242d6950dc892afdf9e951ed7ff89473aaf744b7d5727ad56bdaace363722b", size = 6030915, upload_time = "2025-03-10T19:24:44.463Z" },
- { url = "https://files.pythonhosted.org/packages/eb/a2/cdae2d0e458b475213a011078b0090f7a1d87f9a68c678b76f6af7c6ac8c/grpcio-1.71.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:0fa05ee31a20456b13ae49ad2e5d585265f71dd19fbd9ef983c28f926d45d0a7", size = 6648324, upload_time = "2025-03-10T19:24:46.287Z" },
- { url = "https://files.pythonhosted.org/packages/27/df/f345c8daaa8d8574ce9869f9b36ca220c8845923eb3087e8f317eabfc2a8/grpcio-1.71.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:3d081e859fb1ebe176de33fc3adb26c7d46b8812f906042705346b314bde32c3", size = 6197839, upload_time = "2025-03-10T19:24:48.565Z" },
- { url = "https://files.pythonhosted.org/packages/f2/2c/cd488dc52a1d0ae1bad88b0d203bc302efbb88b82691039a6d85241c5781/grpcio-1.71.0-cp311-cp311-win32.whl", hash = "sha256:d6de81c9c00c8a23047136b11794b3584cdc1460ed7cbc10eada50614baa1444", size = 3619978, upload_time = "2025-03-10T19:24:50.518Z" },
- { url = "https://files.pythonhosted.org/packages/ee/3f/cf92e7e62ccb8dbdf977499547dfc27133124d6467d3a7d23775bcecb0f9/grpcio-1.71.0-cp311-cp311-win_amd64.whl", hash = "sha256:24e867651fc67717b6f896d5f0cac0ec863a8b5fb7d6441c2ab428f52c651c6b", size = 4282279, upload_time = "2025-03-10T19:24:52.313Z" },
- { url = "https://files.pythonhosted.org/packages/4c/83/bd4b6a9ba07825bd19c711d8b25874cd5de72c2a3fbf635c3c344ae65bd2/grpcio-1.71.0-cp312-cp312-linux_armv7l.whl", hash = "sha256:0ff35c8d807c1c7531d3002be03221ff9ae15712b53ab46e2a0b4bb271f38537", size = 5184101, upload_time = "2025-03-10T19:24:54.11Z" },
- { url = "https://files.pythonhosted.org/packages/31/ea/2e0d90c0853568bf714693447f5c73272ea95ee8dad107807fde740e595d/grpcio-1.71.0-cp312-cp312-macosx_10_14_universal2.whl", hash = "sha256:b78a99cd1ece4be92ab7c07765a0b038194ded2e0a26fd654591ee136088d8d7", size = 11310927, upload_time = "2025-03-10T19:24:56.1Z" },
- { url = "https://files.pythonhosted.org/packages/ac/bc/07a3fd8af80467390af491d7dc66882db43884128cdb3cc8524915e0023c/grpcio-1.71.0-cp312-cp312-manylinux_2_17_aarch64.whl", hash = "sha256:dc1a1231ed23caac1de9f943d031f1bc38d0f69d2a3b243ea0d664fc1fbd7fec", size = 5654280, upload_time = "2025-03-10T19:24:58.55Z" },
- { url = "https://files.pythonhosted.org/packages/16/af/21f22ea3eed3d0538b6ef7889fce1878a8ba4164497f9e07385733391e2b/grpcio-1.71.0-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e6beeea5566092c5e3c4896c6d1d307fb46b1d4bdf3e70c8340b190a69198594", size = 6312051, upload_time = "2025-03-10T19:25:00.682Z" },
- { url = "https://files.pythonhosted.org/packages/49/9d/e12ddc726dc8bd1aa6cba67c85ce42a12ba5b9dd75d5042214a59ccf28ce/grpcio-1.71.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d5170929109450a2c031cfe87d6716f2fae39695ad5335d9106ae88cc32dc84c", size = 5910666, upload_time = "2025-03-10T19:25:03.01Z" },
- { url = "https://files.pythonhosted.org/packages/d9/e9/38713d6d67aedef738b815763c25f092e0454dc58e77b1d2a51c9d5b3325/grpcio-1.71.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:5b08d03ace7aca7b2fadd4baf291139b4a5f058805a8327bfe9aece7253b6d67", size = 6012019, upload_time = "2025-03-10T19:25:05.174Z" },
- { url = "https://files.pythonhosted.org/packages/80/da/4813cd7adbae6467724fa46c952d7aeac5e82e550b1c62ed2aeb78d444ae/grpcio-1.71.0-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:f903017db76bf9cc2b2d8bdd37bf04b505bbccad6be8a81e1542206875d0e9db", size = 6637043, upload_time = "2025-03-10T19:25:06.987Z" },
- { url = "https://files.pythonhosted.org/packages/52/ca/c0d767082e39dccb7985c73ab4cf1d23ce8613387149e9978c70c3bf3b07/grpcio-1.71.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:469f42a0b410883185eab4689060a20488a1a0a00f8bbb3cbc1061197b4c5a79", size = 6186143, upload_time = "2025-03-10T19:25:08.877Z" },
- { url = "https://files.pythonhosted.org/packages/00/61/7b2c8ec13303f8fe36832c13d91ad4d4ba57204b1c723ada709c346b2271/grpcio-1.71.0-cp312-cp312-win32.whl", hash = "sha256:ad9f30838550695b5eb302add33f21f7301b882937460dd24f24b3cc5a95067a", size = 3604083, upload_time = "2025-03-10T19:25:10.736Z" },
- { url = "https://files.pythonhosted.org/packages/fd/7c/1e429c5fb26122055d10ff9a1d754790fb067d83c633ff69eddcf8e3614b/grpcio-1.71.0-cp312-cp312-win_amd64.whl", hash = "sha256:652350609332de6dac4ece254e5d7e1ff834e203d6afb769601f286886f6f3a8", size = 4272191, upload_time = "2025-03-10T19:25:13.12Z" },
- { url = "https://files.pythonhosted.org/packages/04/dd/b00cbb45400d06b26126dcfdbdb34bb6c4f28c3ebbd7aea8228679103ef6/grpcio-1.71.0-cp313-cp313-linux_armv7l.whl", hash = "sha256:cebc1b34ba40a312ab480ccdb396ff3c529377a2fce72c45a741f7215bfe8379", size = 5184138, upload_time = "2025-03-10T19:25:15.101Z" },
- { url = "https://files.pythonhosted.org/packages/ed/0a/4651215983d590ef53aac40ba0e29dda941a02b097892c44fa3357e706e5/grpcio-1.71.0-cp313-cp313-macosx_10_14_universal2.whl", hash = "sha256:85da336e3649a3d2171e82f696b5cad2c6231fdd5bad52616476235681bee5b3", size = 11310747, upload_time = "2025-03-10T19:25:17.201Z" },
- { url = "https://files.pythonhosted.org/packages/57/a3/149615b247f321e13f60aa512d3509d4215173bdb982c9098d78484de216/grpcio-1.71.0-cp313-cp313-manylinux_2_17_aarch64.whl", hash = "sha256:f9a412f55bb6e8f3bb000e020dbc1e709627dcb3a56f6431fa7076b4c1aab0db", size = 5653991, upload_time = "2025-03-10T19:25:20.39Z" },
- { url = "https://files.pythonhosted.org/packages/ca/56/29432a3e8d951b5e4e520a40cd93bebaa824a14033ea8e65b0ece1da6167/grpcio-1.71.0-cp313-cp313-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:47be9584729534660416f6d2a3108aaeac1122f6b5bdbf9fd823e11fe6fbaa29", size = 6312781, upload_time = "2025-03-10T19:25:22.823Z" },
- { url = "https://files.pythonhosted.org/packages/a3/f8/286e81a62964ceb6ac10b10925261d4871a762d2a763fbf354115f9afc98/grpcio-1.71.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7c9c80ac6091c916db81131d50926a93ab162a7e97e4428ffc186b6e80d6dda4", size = 5910479, upload_time = "2025-03-10T19:25:24.828Z" },
- { url = "https://files.pythonhosted.org/packages/35/67/d1febb49ec0f599b9e6d4d0d44c2d4afdbed9c3e80deb7587ec788fcf252/grpcio-1.71.0-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:789d5e2a3a15419374b7b45cd680b1e83bbc1e52b9086e49308e2c0b5bbae6e3", size = 6013262, upload_time = "2025-03-10T19:25:26.987Z" },
- { url = "https://files.pythonhosted.org/packages/a1/04/f9ceda11755f0104a075ad7163fc0d96e2e3a9fe25ef38adfc74c5790daf/grpcio-1.71.0-cp313-cp313-musllinux_1_1_i686.whl", hash = "sha256:1be857615e26a86d7363e8a163fade914595c81fec962b3d514a4b1e8760467b", size = 6643356, upload_time = "2025-03-10T19:25:29.606Z" },
- { url = "https://files.pythonhosted.org/packages/fb/ce/236dbc3dc77cf9a9242adcf1f62538734ad64727fabf39e1346ad4bd5c75/grpcio-1.71.0-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:a76d39b5fafd79ed604c4be0a869ec3581a172a707e2a8d7a4858cb05a5a7637", size = 6186564, upload_time = "2025-03-10T19:25:31.537Z" },
- { url = "https://files.pythonhosted.org/packages/10/fd/b3348fce9dd4280e221f513dd54024e765b21c348bc475516672da4218e9/grpcio-1.71.0-cp313-cp313-win32.whl", hash = "sha256:74258dce215cb1995083daa17b379a1a5a87d275387b7ffe137f1d5131e2cfbb", size = 3601890, upload_time = "2025-03-10T19:25:33.421Z" },
- { url = "https://files.pythonhosted.org/packages/be/f8/db5d5f3fc7e296166286c2a397836b8b042f7ad1e11028d82b061701f0f7/grpcio-1.71.0-cp313-cp313-win_amd64.whl", hash = "sha256:22c3bc8d488c039a199f7a003a38cb7635db6656fa96437a8accde8322ce2366", size = 4273308, upload_time = "2025-03-10T19:25:35.79Z" },
-]
-
-[[package]]
-name = "h11"
-version = "0.16.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/01/ee/02a2c011bdab74c6fb3c75474d40b3052059d95df7e73351460c8588d963/h11-0.16.0.tar.gz", hash = "sha256:4e35b956cf45792e4caa5885e69fba00bdbc6ffafbfa020300e549b208ee5ff1", size = 101250, upload_time = "2025-04-24T03:35:25.427Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/04/4b/29cac41a4d98d144bf5f6d33995617b185d14b22401f75ca86f384e87ff1/h11-0.16.0-py3-none-any.whl", hash = "sha256:63cf8bbe7522de3bf65932fda1d9c2772064ffb3dae62d55932da54b31cb6c86", size = 37515, upload_time = "2025-04-24T03:35:24.344Z" },
-]
-
-[[package]]
-name = "h2"
-version = "4.2.0"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "hpack" },
- { name = "hyperframe" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/1b/38/d7f80fd13e6582fb8e0df8c9a653dcc02b03ca34f4d72f34869298c5baf8/h2-4.2.0.tar.gz", hash = "sha256:c8a52129695e88b1a0578d8d2cc6842bbd79128ac685463b887ee278126ad01f", size = 2150682, upload_time = "2025-02-02T07:43:51.815Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/d0/9e/984486f2d0a0bd2b024bf4bc1c62688fcafa9e61991f041fb0e2def4a982/h2-4.2.0-py3-none-any.whl", hash = "sha256:479a53ad425bb29af087f3458a61d30780bc818e4ebcf01f0b536ba916462ed0", size = 60957, upload_time = "2025-02-01T11:02:26.481Z" },
-]
-
-[[package]]
-name = "hpack"
-version = "4.1.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/2c/48/71de9ed269fdae9c8057e5a4c0aa7402e8bb16f2c6e90b3aa53327b113f8/hpack-4.1.0.tar.gz", hash = "sha256:ec5eca154f7056aa06f196a557655c5b009b382873ac8d1e66e79e87535f1dca", size = 51276, upload_time = "2025-01-22T21:44:58.347Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/07/c6/80c95b1b2b94682a72cbdbfb85b81ae2daffa4291fbfa1b1464502ede10d/hpack-4.1.0-py3-none-any.whl", hash = "sha256:157ac792668d995c657d93111f46b4535ed114f0c9c8d672271bbec7eae1b496", size = 34357, upload_time = "2025-01-22T21:44:56.92Z" },
-]
-
-[[package]]
-name = "httpcore"
-version = "1.0.9"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "certifi" },
- { name = "h11" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/06/94/82699a10bca87a5556c9c59b5963f2d039dbd239f25bc2a63907a05a14cb/httpcore-1.0.9.tar.gz", hash = "sha256:6e34463af53fd2ab5d807f399a9b45ea31c3dfa2276f15a2c3f00afff6e176e8", size = 85484, upload_time = "2025-04-24T22:06:22.219Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/7e/f5/f66802a942d491edb555dd61e3a9961140fd64c90bce1eafd741609d334d/httpcore-1.0.9-py3-none-any.whl", hash = "sha256:2d400746a40668fc9dec9810239072b40b4484b640a8c38fd654a024c7a1bf55", size = 78784, upload_time = "2025-04-24T22:06:20.566Z" },
-]
-
-[[package]]
-name = "httpx"
-version = "0.28.1"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "anyio" },
- { name = "certifi" },
- { name = "httpcore" },
- { name = "idna" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/b1/df/48c586a5fe32a0f01324ee087459e112ebb7224f646c0b5023f5e79e9956/httpx-0.28.1.tar.gz", hash = "sha256:75e98c5f16b0f35b567856f597f06ff2270a374470a5c2392242528e3e3e42fc", size = 141406, upload_time = "2024-12-06T15:37:23.222Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/2a/39/e50c7c3a983047577ee07d2a9e53faf5a69493943ec3f6a384bdc792deb2/httpx-0.28.1-py3-none-any.whl", hash = "sha256:d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad", size = 73517, upload_time = "2024-12-06T15:37:21.509Z" },
-]
-
-[package.optional-dependencies]
-http2 = [
- { name = "h2" },
-]
-
-[[package]]
-name = "hyperframe"
-version = "6.1.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/02/e7/94f8232d4a74cc99514c13a9f995811485a6903d48e5d952771ef6322e30/hyperframe-6.1.0.tar.gz", hash = "sha256:f630908a00854a7adeabd6382b43923a4c4cd4b821fcb527e6ab9e15382a3b08", size = 26566, upload_time = "2025-01-22T21:41:49.302Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/48/30/47d0bf6072f7252e6521f3447ccfa40b421b6824517f82854703d0f5a98b/hyperframe-6.1.0-py3-none-any.whl", hash = "sha256:b03380493a519fce58ea5af42e4a42317bf9bd425596f7a0835ffce80f1a42e5", size = 13007, upload_time = "2025-01-22T21:41:47.295Z" },
-]
-
-[[package]]
-name = "idna"
-version = "3.10"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/f1/70/7703c29685631f5a7590aa73f1f1d3fa9a380e654b86af429e0934a32f7d/idna-3.10.tar.gz", hash = "sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9", size = 190490, upload_time = "2024-09-15T18:07:39.745Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/76/c6/c88e154df9c4e1a2a66ccf0005a88dfb2650c1dffb6f5ce603dfbd452ce3/idna-3.10-py3-none-any.whl", hash = "sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3", size = 70442, upload_time = "2024-09-15T18:07:37.964Z" },
-]
-
-[[package]]
-name = "jinja2"
-version = "3.1.6"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "markupsafe" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/df/bf/f7da0350254c0ed7c72f3e33cef02e048281fec7ecec5f032d4aac52226b/jinja2-3.1.6.tar.gz", hash = "sha256:0137fb05990d35f1275a587e9aee6d56da821fc83491a0fb838183be43f66d6d", size = 245115, upload_time = "2025-03-05T20:05:02.478Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/62/a1/3d680cbfd5f4b8f15abc1d571870c5fc3e594bb582bc3b64ea099db13e56/jinja2-3.1.6-py3-none-any.whl", hash = "sha256:85ece4451f492d0c13c5dd7c13a64681a86afae63a5f347908daf103ce6d2f67", size = 134899, upload_time = "2025-03-05T20:05:00.369Z" },
-]
-
-[[package]]
-name = "jiter"
-version = "0.10.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/ee/9d/ae7ddb4b8ab3fb1b51faf4deb36cb48a4fbbd7cb36bad6a5fca4741306f7/jiter-0.10.0.tar.gz", hash = "sha256:07a7142c38aacc85194391108dc91b5b57093c978a9932bd86a36862759d9500", size = 162759, upload_time = "2025-05-18T19:04:59.73Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/1b/dd/6cefc6bd68b1c3c979cecfa7029ab582b57690a31cd2f346c4d0ce7951b6/jiter-0.10.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:3bebe0c558e19902c96e99217e0b8e8b17d570906e72ed8a87170bc290b1e978", size = 317473, upload_time = "2025-05-18T19:03:25.942Z" },
- { url = "https://files.pythonhosted.org/packages/be/cf/fc33f5159ce132be1d8dd57251a1ec7a631c7df4bd11e1cd198308c6ae32/jiter-0.10.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:558cc7e44fd8e507a236bee6a02fa17199ba752874400a0ca6cd6e2196cdb7dc", size = 321971, upload_time = "2025-05-18T19:03:27.255Z" },
- { url = "https://files.pythonhosted.org/packages/68/a4/da3f150cf1d51f6c472616fb7650429c7ce053e0c962b41b68557fdf6379/jiter-0.10.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4d613e4b379a07d7c8453c5712ce7014e86c6ac93d990a0b8e7377e18505e98d", size = 345574, upload_time = "2025-05-18T19:03:28.63Z" },
- { url = "https://files.pythonhosted.org/packages/84/34/6e8d412e60ff06b186040e77da5f83bc158e9735759fcae65b37d681f28b/jiter-0.10.0-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f62cf8ba0618eda841b9bf61797f21c5ebd15a7a1e19daab76e4e4b498d515b2", size = 371028, upload_time = "2025-05-18T19:03:30.292Z" },
- { url = "https://files.pythonhosted.org/packages/fb/d9/9ee86173aae4576c35a2f50ae930d2ccb4c4c236f6cb9353267aa1d626b7/jiter-0.10.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:919d139cdfa8ae8945112398511cb7fca58a77382617d279556b344867a37e61", size = 491083, upload_time = "2025-05-18T19:03:31.654Z" },
- { url = "https://files.pythonhosted.org/packages/d9/2c/f955de55e74771493ac9e188b0f731524c6a995dffdcb8c255b89c6fb74b/jiter-0.10.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:13ddbc6ae311175a3b03bd8994881bc4635c923754932918e18da841632349db", size = 388821, upload_time = "2025-05-18T19:03:33.184Z" },
- { url = "https://files.pythonhosted.org/packages/81/5a/0e73541b6edd3f4aada586c24e50626c7815c561a7ba337d6a7eb0a915b4/jiter-0.10.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4c440ea003ad10927a30521a9062ce10b5479592e8a70da27f21eeb457b4a9c5", size = 352174, upload_time = "2025-05-18T19:03:34.965Z" },
- { url = "https://files.pythonhosted.org/packages/1c/c0/61eeec33b8c75b31cae42be14d44f9e6fe3ac15a4e58010256ac3abf3638/jiter-0.10.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:dc347c87944983481e138dea467c0551080c86b9d21de6ea9306efb12ca8f606", size = 391869, upload_time = "2025-05-18T19:03:36.436Z" },
- { url = "https://files.pythonhosted.org/packages/41/22/5beb5ee4ad4ef7d86f5ea5b4509f680a20706c4a7659e74344777efb7739/jiter-0.10.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:13252b58c1f4d8c5b63ab103c03d909e8e1e7842d302473f482915d95fefd605", size = 523741, upload_time = "2025-05-18T19:03:38.168Z" },
- { url = "https://files.pythonhosted.org/packages/ea/10/768e8818538e5817c637b0df52e54366ec4cebc3346108a4457ea7a98f32/jiter-0.10.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:7d1bbf3c465de4a24ab12fb7766a0003f6f9bce48b8b6a886158c4d569452dc5", size = 514527, upload_time = "2025-05-18T19:03:39.577Z" },
- { url = "https://files.pythonhosted.org/packages/73/6d/29b7c2dc76ce93cbedabfd842fc9096d01a0550c52692dfc33d3cc889815/jiter-0.10.0-cp311-cp311-win32.whl", hash = "sha256:db16e4848b7e826edca4ccdd5b145939758dadf0dc06e7007ad0e9cfb5928ae7", size = 210765, upload_time = "2025-05-18T19:03:41.271Z" },
- { url = "https://files.pythonhosted.org/packages/c2/c9/d394706deb4c660137caf13e33d05a031d734eb99c051142e039d8ceb794/jiter-0.10.0-cp311-cp311-win_amd64.whl", hash = "sha256:9c9c1d5f10e18909e993f9641f12fe1c77b3e9b533ee94ffa970acc14ded3812", size = 209234, upload_time = "2025-05-18T19:03:42.918Z" },
- { url = "https://files.pythonhosted.org/packages/6d/b5/348b3313c58f5fbfb2194eb4d07e46a35748ba6e5b3b3046143f3040bafa/jiter-0.10.0-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:1e274728e4a5345a6dde2d343c8da018b9d4bd4350f5a472fa91f66fda44911b", size = 312262, upload_time = "2025-05-18T19:03:44.637Z" },
- { url = "https://files.pythonhosted.org/packages/9c/4a/6a2397096162b21645162825f058d1709a02965606e537e3304b02742e9b/jiter-0.10.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:7202ae396446c988cb2a5feb33a543ab2165b786ac97f53b59aafb803fef0744", size = 320124, upload_time = "2025-05-18T19:03:46.341Z" },
- { url = "https://files.pythonhosted.org/packages/2a/85/1ce02cade7516b726dd88f59a4ee46914bf79d1676d1228ef2002ed2f1c9/jiter-0.10.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:23ba7722d6748b6920ed02a8f1726fb4b33e0fd2f3f621816a8b486c66410ab2", size = 345330, upload_time = "2025-05-18T19:03:47.596Z" },
- { url = "https://files.pythonhosted.org/packages/75/d0/bb6b4f209a77190ce10ea8d7e50bf3725fc16d3372d0a9f11985a2b23eff/jiter-0.10.0-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:371eab43c0a288537d30e1f0b193bc4eca90439fc08a022dd83e5e07500ed026", size = 369670, upload_time = "2025-05-18T19:03:49.334Z" },
- { url = "https://files.pythonhosted.org/packages/a0/f5/a61787da9b8847a601e6827fbc42ecb12be2c925ced3252c8ffcb56afcaf/jiter-0.10.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6c675736059020365cebc845a820214765162728b51ab1e03a1b7b3abb70f74c", size = 489057, upload_time = "2025-05-18T19:03:50.66Z" },
- { url = "https://files.pythonhosted.org/packages/12/e4/6f906272810a7b21406c760a53aadbe52e99ee070fc5c0cb191e316de30b/jiter-0.10.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0c5867d40ab716e4684858e4887489685968a47e3ba222e44cde6e4a2154f959", size = 389372, upload_time = "2025-05-18T19:03:51.98Z" },
- { url = "https://files.pythonhosted.org/packages/e2/ba/77013b0b8ba904bf3762f11e0129b8928bff7f978a81838dfcc958ad5728/jiter-0.10.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:395bb9a26111b60141757d874d27fdea01b17e8fac958b91c20128ba8f4acc8a", size = 352038, upload_time = "2025-05-18T19:03:53.703Z" },
- { url = "https://files.pythonhosted.org/packages/67/27/c62568e3ccb03368dbcc44a1ef3a423cb86778a4389e995125d3d1aaa0a4/jiter-0.10.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6842184aed5cdb07e0c7e20e5bdcfafe33515ee1741a6835353bb45fe5d1bd95", size = 391538, upload_time = "2025-05-18T19:03:55.046Z" },
- { url = "https://files.pythonhosted.org/packages/c0/72/0d6b7e31fc17a8fdce76164884edef0698ba556b8eb0af9546ae1a06b91d/jiter-0.10.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:62755d1bcea9876770d4df713d82606c8c1a3dca88ff39046b85a048566d56ea", size = 523557, upload_time = "2025-05-18T19:03:56.386Z" },
- { url = "https://files.pythonhosted.org/packages/2f/09/bc1661fbbcbeb6244bd2904ff3a06f340aa77a2b94e5a7373fd165960ea3/jiter-0.10.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:533efbce2cacec78d5ba73a41756beff8431dfa1694b6346ce7af3a12c42202b", size = 514202, upload_time = "2025-05-18T19:03:57.675Z" },
- { url = "https://files.pythonhosted.org/packages/1b/84/5a5d5400e9d4d54b8004c9673bbe4403928a00d28529ff35b19e9d176b19/jiter-0.10.0-cp312-cp312-win32.whl", hash = "sha256:8be921f0cadd245e981b964dfbcd6fd4bc4e254cdc069490416dd7a2632ecc01", size = 211781, upload_time = "2025-05-18T19:03:59.025Z" },
- { url = "https://files.pythonhosted.org/packages/9b/52/7ec47455e26f2d6e5f2ea4951a0652c06e5b995c291f723973ae9e724a65/jiter-0.10.0-cp312-cp312-win_amd64.whl", hash = "sha256:a7c7d785ae9dda68c2678532a5a1581347e9c15362ae9f6e68f3fdbfb64f2e49", size = 206176, upload_time = "2025-05-18T19:04:00.305Z" },
- { url = "https://files.pythonhosted.org/packages/2e/b0/279597e7a270e8d22623fea6c5d4eeac328e7d95c236ed51a2b884c54f70/jiter-0.10.0-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:e0588107ec8e11b6f5ef0e0d656fb2803ac6cf94a96b2b9fc675c0e3ab5e8644", size = 311617, upload_time = "2025-05-18T19:04:02.078Z" },
- { url = "https://files.pythonhosted.org/packages/91/e3/0916334936f356d605f54cc164af4060e3e7094364add445a3bc79335d46/jiter-0.10.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:cafc4628b616dc32530c20ee53d71589816cf385dd9449633e910d596b1f5c8a", size = 318947, upload_time = "2025-05-18T19:04:03.347Z" },
- { url = "https://files.pythonhosted.org/packages/6a/8e/fd94e8c02d0e94539b7d669a7ebbd2776e51f329bb2c84d4385e8063a2ad/jiter-0.10.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:520ef6d981172693786a49ff5b09eda72a42e539f14788124a07530f785c3ad6", size = 344618, upload_time = "2025-05-18T19:04:04.709Z" },
- { url = "https://files.pythonhosted.org/packages/6f/b0/f9f0a2ec42c6e9c2e61c327824687f1e2415b767e1089c1d9135f43816bd/jiter-0.10.0-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:554dedfd05937f8fc45d17ebdf298fe7e0c77458232bcb73d9fbbf4c6455f5b3", size = 368829, upload_time = "2025-05-18T19:04:06.912Z" },
- { url = "https://files.pythonhosted.org/packages/e8/57/5bbcd5331910595ad53b9fd0c610392ac68692176f05ae48d6ce5c852967/jiter-0.10.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5bc299da7789deacf95f64052d97f75c16d4fc8c4c214a22bf8d859a4288a1c2", size = 491034, upload_time = "2025-05-18T19:04:08.222Z" },
- { url = "https://files.pythonhosted.org/packages/9b/be/c393df00e6e6e9e623a73551774449f2f23b6ec6a502a3297aeeece2c65a/jiter-0.10.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5161e201172de298a8a1baad95eb85db4fb90e902353b1f6a41d64ea64644e25", size = 388529, upload_time = "2025-05-18T19:04:09.566Z" },
- { url = "https://files.pythonhosted.org/packages/42/3e/df2235c54d365434c7f150b986a6e35f41ebdc2f95acea3036d99613025d/jiter-0.10.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2e2227db6ba93cb3e2bf67c87e594adde0609f146344e8207e8730364db27041", size = 350671, upload_time = "2025-05-18T19:04:10.98Z" },
- { url = "https://files.pythonhosted.org/packages/c6/77/71b0b24cbcc28f55ab4dbfe029f9a5b73aeadaba677843fc6dc9ed2b1d0a/jiter-0.10.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:15acb267ea5e2c64515574b06a8bf393fbfee6a50eb1673614aa45f4613c0cca", size = 390864, upload_time = "2025-05-18T19:04:12.722Z" },
- { url = "https://files.pythonhosted.org/packages/6a/d3/ef774b6969b9b6178e1d1e7a89a3bd37d241f3d3ec5f8deb37bbd203714a/jiter-0.10.0-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:901b92f2e2947dc6dfcb52fd624453862e16665ea909a08398dde19c0731b7f4", size = 522989, upload_time = "2025-05-18T19:04:14.261Z" },
- { url = "https://files.pythonhosted.org/packages/0c/41/9becdb1d8dd5d854142f45a9d71949ed7e87a8e312b0bede2de849388cb9/jiter-0.10.0-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:d0cb9a125d5a3ec971a094a845eadde2db0de85b33c9f13eb94a0c63d463879e", size = 513495, upload_time = "2025-05-18T19:04:15.603Z" },
- { url = "https://files.pythonhosted.org/packages/9c/36/3468e5a18238bdedae7c4d19461265b5e9b8e288d3f86cd89d00cbb48686/jiter-0.10.0-cp313-cp313-win32.whl", hash = "sha256:48a403277ad1ee208fb930bdf91745e4d2d6e47253eedc96e2559d1e6527006d", size = 211289, upload_time = "2025-05-18T19:04:17.541Z" },
- { url = "https://files.pythonhosted.org/packages/7e/07/1c96b623128bcb913706e294adb5f768fb7baf8db5e1338ce7b4ee8c78ef/jiter-0.10.0-cp313-cp313-win_amd64.whl", hash = "sha256:75f9eb72ecb640619c29bf714e78c9c46c9c4eaafd644bf78577ede459f330d4", size = 205074, upload_time = "2025-05-18T19:04:19.21Z" },
- { url = "https://files.pythonhosted.org/packages/54/46/caa2c1342655f57d8f0f2519774c6d67132205909c65e9aa8255e1d7b4f4/jiter-0.10.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:28ed2a4c05a1f32ef0e1d24c2611330219fed727dae01789f4a335617634b1ca", size = 318225, upload_time = "2025-05-18T19:04:20.583Z" },
- { url = "https://files.pythonhosted.org/packages/43/84/c7d44c75767e18946219ba2d703a5a32ab37b0bc21886a97bc6062e4da42/jiter-0.10.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:14a4c418b1ec86a195f1ca69da8b23e8926c752b685af665ce30777233dfe070", size = 350235, upload_time = "2025-05-18T19:04:22.363Z" },
- { url = "https://files.pythonhosted.org/packages/01/16/f5a0135ccd968b480daad0e6ab34b0c7c5ba3bc447e5088152696140dcb3/jiter-0.10.0-cp313-cp313t-win_amd64.whl", hash = "sha256:d7bfed2fe1fe0e4dda6ef682cee888ba444b21e7a6553e03252e4feb6cf0adca", size = 207278, upload_time = "2025-05-18T19:04:23.627Z" },
- { url = "https://files.pythonhosted.org/packages/1c/9b/1d646da42c3de6c2188fdaa15bce8ecb22b635904fc68be025e21249ba44/jiter-0.10.0-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:5e9251a5e83fab8d87799d3e1a46cb4b7f2919b895c6f4483629ed2446f66522", size = 310866, upload_time = "2025-05-18T19:04:24.891Z" },
- { url = "https://files.pythonhosted.org/packages/ad/0e/26538b158e8a7c7987e94e7aeb2999e2e82b1f9d2e1f6e9874ddf71ebda0/jiter-0.10.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:023aa0204126fe5b87ccbcd75c8a0d0261b9abdbbf46d55e7ae9f8e22424eeb8", size = 318772, upload_time = "2025-05-18T19:04:26.161Z" },
- { url = "https://files.pythonhosted.org/packages/7b/fb/d302893151caa1c2636d6574d213e4b34e31fd077af6050a9c5cbb42f6fb/jiter-0.10.0-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3c189c4f1779c05f75fc17c0c1267594ed918996a231593a21a5ca5438445216", size = 344534, upload_time = "2025-05-18T19:04:27.495Z" },
- { url = "https://files.pythonhosted.org/packages/01/d8/5780b64a149d74e347c5128d82176eb1e3241b1391ac07935693466d6219/jiter-0.10.0-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:15720084d90d1098ca0229352607cd68256c76991f6b374af96f36920eae13c4", size = 369087, upload_time = "2025-05-18T19:04:28.896Z" },
- { url = "https://files.pythonhosted.org/packages/e8/5b/f235a1437445160e777544f3ade57544daf96ba7e96c1a5b24a6f7ac7004/jiter-0.10.0-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e4f2fb68e5f1cfee30e2b2a09549a00683e0fde4c6a2ab88c94072fc33cb7426", size = 490694, upload_time = "2025-05-18T19:04:30.183Z" },
- { url = "https://files.pythonhosted.org/packages/85/a9/9c3d4617caa2ff89cf61b41e83820c27ebb3f7b5fae8a72901e8cd6ff9be/jiter-0.10.0-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ce541693355fc6da424c08b7edf39a2895f58d6ea17d92cc2b168d20907dee12", size = 388992, upload_time = "2025-05-18T19:04:32.028Z" },
- { url = "https://files.pythonhosted.org/packages/68/b1/344fd14049ba5c94526540af7eb661871f9c54d5f5601ff41a959b9a0bbd/jiter-0.10.0-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:31c50c40272e189d50006ad5c73883caabb73d4e9748a688b216e85a9a9ca3b9", size = 351723, upload_time = "2025-05-18T19:04:33.467Z" },
- { url = "https://files.pythonhosted.org/packages/41/89/4c0e345041186f82a31aee7b9d4219a910df672b9fef26f129f0cda07a29/jiter-0.10.0-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:fa3402a2ff9815960e0372a47b75c76979d74402448509ccd49a275fa983ef8a", size = 392215, upload_time = "2025-05-18T19:04:34.827Z" },
- { url = "https://files.pythonhosted.org/packages/55/58/ee607863e18d3f895feb802154a2177d7e823a7103f000df182e0f718b38/jiter-0.10.0-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:1956f934dca32d7bb647ea21d06d93ca40868b505c228556d3373cbd255ce853", size = 522762, upload_time = "2025-05-18T19:04:36.19Z" },
- { url = "https://files.pythonhosted.org/packages/15/d0/9123fb41825490d16929e73c212de9a42913d68324a8ce3c8476cae7ac9d/jiter-0.10.0-cp314-cp314-musllinux_1_1_x86_64.whl", hash = "sha256:fcedb049bdfc555e261d6f65a6abe1d5ad68825b7202ccb9692636c70fcced86", size = 513427, upload_time = "2025-05-18T19:04:37.544Z" },
- { url = "https://files.pythonhosted.org/packages/d8/b3/2bd02071c5a2430d0b70403a34411fc519c2f227da7b03da9ba6a956f931/jiter-0.10.0-cp314-cp314-win32.whl", hash = "sha256:ac509f7eccca54b2a29daeb516fb95b6f0bd0d0d8084efaf8ed5dfc7b9f0b357", size = 210127, upload_time = "2025-05-18T19:04:38.837Z" },
- { url = "https://files.pythonhosted.org/packages/03/0c/5fe86614ea050c3ecd728ab4035534387cd41e7c1855ef6c031f1ca93e3f/jiter-0.10.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:5ed975b83a2b8639356151cef5c0d597c68376fc4922b45d0eb384ac058cfa00", size = 318527, upload_time = "2025-05-18T19:04:40.612Z" },
- { url = "https://files.pythonhosted.org/packages/b3/4a/4175a563579e884192ba6e81725fc0448b042024419be8d83aa8a80a3f44/jiter-0.10.0-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3aa96f2abba33dc77f79b4cf791840230375f9534e5fac927ccceb58c5e604a5", size = 354213, upload_time = "2025-05-18T19:04:41.894Z" },
-]
-
-[[package]]
-name = "jsonschema"
-version = "4.24.0"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "attrs" },
- { name = "jsonschema-specifications" },
- { name = "referencing" },
- { name = "rpds-py" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/bf/d3/1cf5326b923a53515d8f3a2cd442e6d7e94fcc444716e879ea70a0ce3177/jsonschema-4.24.0.tar.gz", hash = "sha256:0b4e8069eb12aedfa881333004bccaec24ecef5a8a6a4b6df142b2cc9599d196", size = 353480, upload_time = "2025-05-26T18:48:10.459Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/a2/3d/023389198f69c722d039351050738d6755376c8fd343e91dc493ea485905/jsonschema-4.24.0-py3-none-any.whl", hash = "sha256:a462455f19f5faf404a7902952b6f0e3ce868f3ee09a359b05eca6673bd8412d", size = 88709, upload_time = "2025-05-26T18:48:08.417Z" },
-]
-
-[[package]]
-name = "jsonschema-specifications"
-version = "2025.4.1"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "referencing" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/bf/ce/46fbd9c8119cfc3581ee5643ea49464d168028cfb5caff5fc0596d0cf914/jsonschema_specifications-2025.4.1.tar.gz", hash = "sha256:630159c9f4dbea161a6a2205c3011cc4f18ff381b189fff48bb39b9bf26ae608", size = 15513, upload_time = "2025-04-23T12:34:07.418Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/01/0e/b27cdbaccf30b890c40ed1da9fd4a3593a5cf94dae54fb34f8a4b74fcd3f/jsonschema_specifications-2025.4.1-py3-none-any.whl", hash = "sha256:4653bffbd6584f7de83a67e0d620ef16900b390ddc7939d56684d6c81e33f1af", size = 18437, upload_time = "2025-04-23T12:34:05.422Z" },
-]
-
-[[package]]
-name = "markupsafe"
-version = "3.0.2"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/b2/97/5d42485e71dfc078108a86d6de8fa46db44a1a9295e89c5d6d4a06e23a62/markupsafe-3.0.2.tar.gz", hash = "sha256:ee55d3edf80167e48ea11a923c7386f4669df67d7994554387f84e7d8b0a2bf0", size = 20537, upload_time = "2024-10-18T15:21:54.129Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/6b/28/bbf83e3f76936960b850435576dd5e67034e200469571be53f69174a2dfd/MarkupSafe-3.0.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:9025b4018f3a1314059769c7bf15441064b2207cb3f065e6ea1e7359cb46db9d", size = 14353, upload_time = "2024-10-18T15:21:02.187Z" },
- { url = "https://files.pythonhosted.org/packages/6c/30/316d194b093cde57d448a4c3209f22e3046c5bb2fb0820b118292b334be7/MarkupSafe-3.0.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:93335ca3812df2f366e80509ae119189886b0f3c2b81325d39efdb84a1e2ae93", size = 12392, upload_time = "2024-10-18T15:21:02.941Z" },
- { url = "https://files.pythonhosted.org/packages/f2/96/9cdafba8445d3a53cae530aaf83c38ec64c4d5427d975c974084af5bc5d2/MarkupSafe-3.0.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2cb8438c3cbb25e220c2ab33bb226559e7afb3baec11c4f218ffa7308603c832", size = 23984, upload_time = "2024-10-18T15:21:03.953Z" },
- { url = "https://files.pythonhosted.org/packages/f1/a4/aefb044a2cd8d7334c8a47d3fb2c9f328ac48cb349468cc31c20b539305f/MarkupSafe-3.0.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a123e330ef0853c6e822384873bef7507557d8e4a082961e1defa947aa59ba84", size = 23120, upload_time = "2024-10-18T15:21:06.495Z" },
- { url = "https://files.pythonhosted.org/packages/8d/21/5e4851379f88f3fad1de30361db501300d4f07bcad047d3cb0449fc51f8c/MarkupSafe-3.0.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e084f686b92e5b83186b07e8a17fc09e38fff551f3602b249881fec658d3eca", size = 23032, upload_time = "2024-10-18T15:21:07.295Z" },
- { url = "https://files.pythonhosted.org/packages/00/7b/e92c64e079b2d0d7ddf69899c98842f3f9a60a1ae72657c89ce2655c999d/MarkupSafe-3.0.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d8213e09c917a951de9d09ecee036d5c7d36cb6cb7dbaece4c71a60d79fb9798", size = 24057, upload_time = "2024-10-18T15:21:08.073Z" },
- { url = "https://files.pythonhosted.org/packages/f9/ac/46f960ca323037caa0a10662ef97d0a4728e890334fc156b9f9e52bcc4ca/MarkupSafe-3.0.2-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:5b02fb34468b6aaa40dfc198d813a641e3a63b98c2b05a16b9f80b7ec314185e", size = 23359, upload_time = "2024-10-18T15:21:09.318Z" },
- { url = "https://files.pythonhosted.org/packages/69/84/83439e16197337b8b14b6a5b9c2105fff81d42c2a7c5b58ac7b62ee2c3b1/MarkupSafe-3.0.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:0bff5e0ae4ef2e1ae4fdf2dfd5b76c75e5c2fa4132d05fc1b0dabcd20c7e28c4", size = 23306, upload_time = "2024-10-18T15:21:10.185Z" },
- { url = "https://files.pythonhosted.org/packages/9a/34/a15aa69f01e2181ed8d2b685c0d2f6655d5cca2c4db0ddea775e631918cd/MarkupSafe-3.0.2-cp311-cp311-win32.whl", hash = "sha256:6c89876f41da747c8d3677a2b540fb32ef5715f97b66eeb0c6b66f5e3ef6f59d", size = 15094, upload_time = "2024-10-18T15:21:11.005Z" },
- { url = "https://files.pythonhosted.org/packages/da/b8/3a3bd761922d416f3dc5d00bfbed11f66b1ab89a0c2b6e887240a30b0f6b/MarkupSafe-3.0.2-cp311-cp311-win_amd64.whl", hash = "sha256:70a87b411535ccad5ef2f1df5136506a10775d267e197e4cf531ced10537bd6b", size = 15521, upload_time = "2024-10-18T15:21:12.911Z" },
- { url = "https://files.pythonhosted.org/packages/22/09/d1f21434c97fc42f09d290cbb6350d44eb12f09cc62c9476effdb33a18aa/MarkupSafe-3.0.2-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:9778bd8ab0a994ebf6f84c2b949e65736d5575320a17ae8984a77fab08db94cf", size = 14274, upload_time = "2024-10-18T15:21:13.777Z" },
- { url = "https://files.pythonhosted.org/packages/6b/b0/18f76bba336fa5aecf79d45dcd6c806c280ec44538b3c13671d49099fdd0/MarkupSafe-3.0.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:846ade7b71e3536c4e56b386c2a47adf5741d2d8b94ec9dc3e92e5e1ee1e2225", size = 12348, upload_time = "2024-10-18T15:21:14.822Z" },
- { url = "https://files.pythonhosted.org/packages/e0/25/dd5c0f6ac1311e9b40f4af06c78efde0f3b5cbf02502f8ef9501294c425b/MarkupSafe-3.0.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1c99d261bd2d5f6b59325c92c73df481e05e57f19837bdca8413b9eac4bd8028", size = 24149, upload_time = "2024-10-18T15:21:15.642Z" },
- { url = "https://files.pythonhosted.org/packages/f3/f0/89e7aadfb3749d0f52234a0c8c7867877876e0a20b60e2188e9850794c17/MarkupSafe-3.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e17c96c14e19278594aa4841ec148115f9c7615a47382ecb6b82bd8fea3ab0c8", size = 23118, upload_time = "2024-10-18T15:21:17.133Z" },
- { url = "https://files.pythonhosted.org/packages/d5/da/f2eeb64c723f5e3777bc081da884b414671982008c47dcc1873d81f625b6/MarkupSafe-3.0.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:88416bd1e65dcea10bc7569faacb2c20ce071dd1f87539ca2ab364bf6231393c", size = 22993, upload_time = "2024-10-18T15:21:18.064Z" },
- { url = "https://files.pythonhosted.org/packages/da/0e/1f32af846df486dce7c227fe0f2398dc7e2e51d4a370508281f3c1c5cddc/MarkupSafe-3.0.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:2181e67807fc2fa785d0592dc2d6206c019b9502410671cc905d132a92866557", size = 24178, upload_time = "2024-10-18T15:21:18.859Z" },
- { url = "https://files.pythonhosted.org/packages/c4/f6/bb3ca0532de8086cbff5f06d137064c8410d10779c4c127e0e47d17c0b71/MarkupSafe-3.0.2-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:52305740fe773d09cffb16f8ed0427942901f00adedac82ec8b67752f58a1b22", size = 23319, upload_time = "2024-10-18T15:21:19.671Z" },
- { url = "https://files.pythonhosted.org/packages/a2/82/8be4c96ffee03c5b4a034e60a31294daf481e12c7c43ab8e34a1453ee48b/MarkupSafe-3.0.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:ad10d3ded218f1039f11a75f8091880239651b52e9bb592ca27de44eed242a48", size = 23352, upload_time = "2024-10-18T15:21:20.971Z" },
- { url = "https://files.pythonhosted.org/packages/51/ae/97827349d3fcffee7e184bdf7f41cd6b88d9919c80f0263ba7acd1bbcb18/MarkupSafe-3.0.2-cp312-cp312-win32.whl", hash = "sha256:0f4ca02bea9a23221c0182836703cbf8930c5e9454bacce27e767509fa286a30", size = 15097, upload_time = "2024-10-18T15:21:22.646Z" },
- { url = "https://files.pythonhosted.org/packages/c1/80/a61f99dc3a936413c3ee4e1eecac96c0da5ed07ad56fd975f1a9da5bc630/MarkupSafe-3.0.2-cp312-cp312-win_amd64.whl", hash = "sha256:8e06879fc22a25ca47312fbe7c8264eb0b662f6db27cb2d3bbbc74b1df4b9b87", size = 15601, upload_time = "2024-10-18T15:21:23.499Z" },
- { url = "https://files.pythonhosted.org/packages/83/0e/67eb10a7ecc77a0c2bbe2b0235765b98d164d81600746914bebada795e97/MarkupSafe-3.0.2-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:ba9527cdd4c926ed0760bc301f6728ef34d841f405abf9d4f959c478421e4efd", size = 14274, upload_time = "2024-10-18T15:21:24.577Z" },
- { url = "https://files.pythonhosted.org/packages/2b/6d/9409f3684d3335375d04e5f05744dfe7e9f120062c9857df4ab490a1031a/MarkupSafe-3.0.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:f8b3d067f2e40fe93e1ccdd6b2e1d16c43140e76f02fb1319a05cf2b79d99430", size = 12352, upload_time = "2024-10-18T15:21:25.382Z" },
- { url = "https://files.pythonhosted.org/packages/d2/f5/6eadfcd3885ea85fe2a7c128315cc1bb7241e1987443d78c8fe712d03091/MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:569511d3b58c8791ab4c2e1285575265991e6d8f8700c7be0e88f86cb0672094", size = 24122, upload_time = "2024-10-18T15:21:26.199Z" },
- { url = "https://files.pythonhosted.org/packages/0c/91/96cf928db8236f1bfab6ce15ad070dfdd02ed88261c2afafd4b43575e9e9/MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:15ab75ef81add55874e7ab7055e9c397312385bd9ced94920f2802310c930396", size = 23085, upload_time = "2024-10-18T15:21:27.029Z" },
- { url = "https://files.pythonhosted.org/packages/c2/cf/c9d56af24d56ea04daae7ac0940232d31d5a8354f2b457c6d856b2057d69/MarkupSafe-3.0.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f3818cb119498c0678015754eba762e0d61e5b52d34c8b13d770f0719f7b1d79", size = 22978, upload_time = "2024-10-18T15:21:27.846Z" },
- { url = "https://files.pythonhosted.org/packages/2a/9f/8619835cd6a711d6272d62abb78c033bda638fdc54c4e7f4272cf1c0962b/MarkupSafe-3.0.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:cdb82a876c47801bb54a690c5ae105a46b392ac6099881cdfb9f6e95e4014c6a", size = 24208, upload_time = "2024-10-18T15:21:28.744Z" },
- { url = "https://files.pythonhosted.org/packages/f9/bf/176950a1792b2cd2102b8ffeb5133e1ed984547b75db47c25a67d3359f77/MarkupSafe-3.0.2-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:cabc348d87e913db6ab4aa100f01b08f481097838bdddf7c7a84b7575b7309ca", size = 23357, upload_time = "2024-10-18T15:21:29.545Z" },
- { url = "https://files.pythonhosted.org/packages/ce/4f/9a02c1d335caabe5c4efb90e1b6e8ee944aa245c1aaaab8e8a618987d816/MarkupSafe-3.0.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:444dcda765c8a838eaae23112db52f1efaf750daddb2d9ca300bcae1039adc5c", size = 23344, upload_time = "2024-10-18T15:21:30.366Z" },
- { url = "https://files.pythonhosted.org/packages/ee/55/c271b57db36f748f0e04a759ace9f8f759ccf22b4960c270c78a394f58be/MarkupSafe-3.0.2-cp313-cp313-win32.whl", hash = "sha256:bcf3e58998965654fdaff38e58584d8937aa3096ab5354d493c77d1fdd66d7a1", size = 15101, upload_time = "2024-10-18T15:21:31.207Z" },
- { url = "https://files.pythonhosted.org/packages/29/88/07df22d2dd4df40aba9f3e402e6dc1b8ee86297dddbad4872bd5e7b0094f/MarkupSafe-3.0.2-cp313-cp313-win_amd64.whl", hash = "sha256:e6a2a455bd412959b57a172ce6328d2dd1f01cb2135efda2e4576e8a23fa3b0f", size = 15603, upload_time = "2024-10-18T15:21:32.032Z" },
- { url = "https://files.pythonhosted.org/packages/62/6a/8b89d24db2d32d433dffcd6a8779159da109842434f1dd2f6e71f32f738c/MarkupSafe-3.0.2-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:b5a6b3ada725cea8a5e634536b1b01c30bcdcd7f9c6fff4151548d5bf6b3a36c", size = 14510, upload_time = "2024-10-18T15:21:33.625Z" },
- { url = "https://files.pythonhosted.org/packages/7a/06/a10f955f70a2e5a9bf78d11a161029d278eeacbd35ef806c3fd17b13060d/MarkupSafe-3.0.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:a904af0a6162c73e3edcb969eeeb53a63ceeb5d8cf642fade7d39e7963a22ddb", size = 12486, upload_time = "2024-10-18T15:21:34.611Z" },
- { url = "https://files.pythonhosted.org/packages/34/cf/65d4a571869a1a9078198ca28f39fba5fbb910f952f9dbc5220afff9f5e6/MarkupSafe-3.0.2-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4aa4e5faecf353ed117801a068ebab7b7e09ffb6e1d5e412dc852e0da018126c", size = 25480, upload_time = "2024-10-18T15:21:35.398Z" },
- { url = "https://files.pythonhosted.org/packages/0c/e3/90e9651924c430b885468b56b3d597cabf6d72be4b24a0acd1fa0e12af67/MarkupSafe-3.0.2-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c0ef13eaeee5b615fb07c9a7dadb38eac06a0608b41570d8ade51c56539e509d", size = 23914, upload_time = "2024-10-18T15:21:36.231Z" },
- { url = "https://files.pythonhosted.org/packages/66/8c/6c7cf61f95d63bb866db39085150df1f2a5bd3335298f14a66b48e92659c/MarkupSafe-3.0.2-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d16a81a06776313e817c951135cf7340a3e91e8c1ff2fac444cfd75fffa04afe", size = 23796, upload_time = "2024-10-18T15:21:37.073Z" },
- { url = "https://files.pythonhosted.org/packages/bb/35/cbe9238ec3f47ac9a7c8b3df7a808e7cb50fe149dc7039f5f454b3fba218/MarkupSafe-3.0.2-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:6381026f158fdb7c72a168278597a5e3a5222e83ea18f543112b2662a9b699c5", size = 25473, upload_time = "2024-10-18T15:21:37.932Z" },
- { url = "https://files.pythonhosted.org/packages/e6/32/7621a4382488aa283cc05e8984a9c219abad3bca087be9ec77e89939ded9/MarkupSafe-3.0.2-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:3d79d162e7be8f996986c064d1c7c817f6df3a77fe3d6859f6f9e7be4b8c213a", size = 24114, upload_time = "2024-10-18T15:21:39.799Z" },
- { url = "https://files.pythonhosted.org/packages/0d/80/0985960e4b89922cb5a0bac0ed39c5b96cbc1a536a99f30e8c220a996ed9/MarkupSafe-3.0.2-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:131a3c7689c85f5ad20f9f6fb1b866f402c445b220c19fe4308c0b147ccd2ad9", size = 24098, upload_time = "2024-10-18T15:21:40.813Z" },
- { url = "https://files.pythonhosted.org/packages/82/78/fedb03c7d5380df2427038ec8d973587e90561b2d90cd472ce9254cf348b/MarkupSafe-3.0.2-cp313-cp313t-win32.whl", hash = "sha256:ba8062ed2cf21c07a9e295d5b8a2a5ce678b913b45fdf68c32d95d6c1291e0b6", size = 15208, upload_time = "2024-10-18T15:21:41.814Z" },
- { url = "https://files.pythonhosted.org/packages/4f/65/6079a46068dfceaeabb5dcad6d674f5f5c61a6fa5673746f42a9f4c233b3/MarkupSafe-3.0.2-cp313-cp313t-win_amd64.whl", hash = "sha256:e444a31f8db13eb18ada366ab3cf45fd4b31e4db1236a4448f68778c1d1a5a2f", size = 15739, upload_time = "2024-10-18T15:21:42.784Z" },
-]
-
-[[package]]
-name = "mem0ai"
-version = "0.1.102"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "openai" },
- { name = "posthog" },
- { name = "pydantic" },
- { name = "pytz" },
- { name = "qdrant-client" },
- { name = "sqlalchemy" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/6d/b3/27e34961b02ddb46d5d4d3ddb06dfeb76345656ac318db97c73a03b0bb7f/mem0ai-0.1.102.tar.gz", hash = "sha256:7358dba4fbe954b9c3f33204c14df7babaf9067e2eb48241d89a32e6bc774988", size = 100162, upload_time = "2025-05-26T17:56:43.845Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/8a/46/7e0f3b56b5275ac92285f4aa3f5ed32d5c489231fdfb30c4ca19975fd19d/mem0ai-0.1.102-py3-none-any.whl", hash = "sha256:1401ccfd2369e2182ce78abb61b817e739fe49508b5a8ad98abcd4f8ad4db0b4", size = 156042, upload_time = "2025-05-26T17:56:41.911Z" },
-]
-
-[[package]]
-name = "narwhals"
-version = "1.41.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/32/fc/7b9a3689911662be59889b1b0b40e17d5dba6f98080994d86ca1f3154d41/narwhals-1.41.0.tar.gz", hash = "sha256:0ab2e5a1757a19b071e37ca74b53b0b5426789321d68939738337dfddea629b5", size = 488446, upload_time = "2025-05-26T12:46:07.43Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/c9/e0/ade8619846645461c012498f02b93a659e50f07d9d9a6ffefdf5ea2c02a0/narwhals-1.41.0-py3-none-any.whl", hash = "sha256:d958336b40952e4c4b7aeef259a7074851da0800cf902186a58f2faeff97be02", size = 357968, upload_time = "2025-05-26T12:46:05.207Z" },
-]
-
-[[package]]
-name = "numpy"
-version = "2.2.6"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/76/21/7d2a95e4bba9dc13d043ee156a356c0a8f0c6309dff6b21b4d71a073b8a8/numpy-2.2.6.tar.gz", hash = "sha256:e29554e2bef54a90aa5cc07da6ce955accb83f21ab5de01a62c8478897b264fd", size = 20276440, upload_time = "2025-05-17T22:38:04.611Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/da/a8/4f83e2aa666a9fbf56d6118faaaf5f1974d456b1823fda0a176eff722839/numpy-2.2.6-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:f9f1adb22318e121c5c69a09142811a201ef17ab257a1e66ca3025065b7f53ae", size = 21176963, upload_time = "2025-05-17T21:31:19.36Z" },
- { url = "https://files.pythonhosted.org/packages/b3/2b/64e1affc7972decb74c9e29e5649fac940514910960ba25cd9af4488b66c/numpy-2.2.6-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:c820a93b0255bc360f53eca31a0e676fd1101f673dda8da93454a12e23fc5f7a", size = 14406743, upload_time = "2025-05-17T21:31:41.087Z" },
- { url = "https://files.pythonhosted.org/packages/4a/9f/0121e375000b5e50ffdd8b25bf78d8e1a5aa4cca3f185d41265198c7b834/numpy-2.2.6-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:3d70692235e759f260c3d837193090014aebdf026dfd167834bcba43e30c2a42", size = 5352616, upload_time = "2025-05-17T21:31:50.072Z" },
- { url = "https://files.pythonhosted.org/packages/31/0d/b48c405c91693635fbe2dcd7bc84a33a602add5f63286e024d3b6741411c/numpy-2.2.6-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:481b49095335f8eed42e39e8041327c05b0f6f4780488f61286ed3c01368d491", size = 6889579, upload_time = "2025-05-17T21:32:01.712Z" },
- { url = "https://files.pythonhosted.org/packages/52/b8/7f0554d49b565d0171eab6e99001846882000883998e7b7d9f0d98b1f934/numpy-2.2.6-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b64d8d4d17135e00c8e346e0a738deb17e754230d7e0810ac5012750bbd85a5a", size = 14312005, upload_time = "2025-05-17T21:32:23.332Z" },
- { url = "https://files.pythonhosted.org/packages/b3/dd/2238b898e51bd6d389b7389ffb20d7f4c10066d80351187ec8e303a5a475/numpy-2.2.6-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ba10f8411898fc418a521833e014a77d3ca01c15b0c6cdcce6a0d2897e6dbbdf", size = 16821570, upload_time = "2025-05-17T21:32:47.991Z" },
- { url = "https://files.pythonhosted.org/packages/83/6c/44d0325722cf644f191042bf47eedad61c1e6df2432ed65cbe28509d404e/numpy-2.2.6-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:bd48227a919f1bafbdda0583705e547892342c26fb127219d60a5c36882609d1", size = 15818548, upload_time = "2025-05-17T21:33:11.728Z" },
- { url = "https://files.pythonhosted.org/packages/ae/9d/81e8216030ce66be25279098789b665d49ff19eef08bfa8cb96d4957f422/numpy-2.2.6-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:9551a499bf125c1d4f9e250377c1ee2eddd02e01eac6644c080162c0c51778ab", size = 18620521, upload_time = "2025-05-17T21:33:39.139Z" },
- { url = "https://files.pythonhosted.org/packages/6a/fd/e19617b9530b031db51b0926eed5345ce8ddc669bb3bc0044b23e275ebe8/numpy-2.2.6-cp311-cp311-win32.whl", hash = "sha256:0678000bb9ac1475cd454c6b8c799206af8107e310843532b04d49649c717a47", size = 6525866, upload_time = "2025-05-17T21:33:50.273Z" },
- { url = "https://files.pythonhosted.org/packages/31/0a/f354fb7176b81747d870f7991dc763e157a934c717b67b58456bc63da3df/numpy-2.2.6-cp311-cp311-win_amd64.whl", hash = "sha256:e8213002e427c69c45a52bbd94163084025f533a55a59d6f9c5b820774ef3303", size = 12907455, upload_time = "2025-05-17T21:34:09.135Z" },
- { url = "https://files.pythonhosted.org/packages/82/5d/c00588b6cf18e1da539b45d3598d3557084990dcc4331960c15ee776ee41/numpy-2.2.6-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:41c5a21f4a04fa86436124d388f6ed60a9343a6f767fced1a8a71c3fbca038ff", size = 20875348, upload_time = "2025-05-17T21:34:39.648Z" },
- { url = "https://files.pythonhosted.org/packages/66/ee/560deadcdde6c2f90200450d5938f63a34b37e27ebff162810f716f6a230/numpy-2.2.6-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:de749064336d37e340f640b05f24e9e3dd678c57318c7289d222a8a2f543e90c", size = 14119362, upload_time = "2025-05-17T21:35:01.241Z" },
- { url = "https://files.pythonhosted.org/packages/3c/65/4baa99f1c53b30adf0acd9a5519078871ddde8d2339dc5a7fde80d9d87da/numpy-2.2.6-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:894b3a42502226a1cac872f840030665f33326fc3dac8e57c607905773cdcde3", size = 5084103, upload_time = "2025-05-17T21:35:10.622Z" },
- { url = "https://files.pythonhosted.org/packages/cc/89/e5a34c071a0570cc40c9a54eb472d113eea6d002e9ae12bb3a8407fb912e/numpy-2.2.6-cp312-cp312-macosx_14_0_x86_64.whl", hash = "sha256:71594f7c51a18e728451bb50cc60a3ce4e6538822731b2933209a1f3614e9282", size = 6625382, upload_time = "2025-05-17T21:35:21.414Z" },
- { url = "https://files.pythonhosted.org/packages/f8/35/8c80729f1ff76b3921d5c9487c7ac3de9b2a103b1cd05e905b3090513510/numpy-2.2.6-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f2618db89be1b4e05f7a1a847a9c1c0abd63e63a1607d892dd54668dd92faf87", size = 14018462, upload_time = "2025-05-17T21:35:42.174Z" },
- { url = "https://files.pythonhosted.org/packages/8c/3d/1e1db36cfd41f895d266b103df00ca5b3cbe965184df824dec5c08c6b803/numpy-2.2.6-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fd83c01228a688733f1ded5201c678f0c53ecc1006ffbc404db9f7a899ac6249", size = 16527618, upload_time = "2025-05-17T21:36:06.711Z" },
- { url = "https://files.pythonhosted.org/packages/61/c6/03ed30992602c85aa3cd95b9070a514f8b3c33e31124694438d88809ae36/numpy-2.2.6-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:37c0ca431f82cd5fa716eca9506aefcabc247fb27ba69c5062a6d3ade8cf8f49", size = 15505511, upload_time = "2025-05-17T21:36:29.965Z" },
- { url = "https://files.pythonhosted.org/packages/b7/25/5761d832a81df431e260719ec45de696414266613c9ee268394dd5ad8236/numpy-2.2.6-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:fe27749d33bb772c80dcd84ae7e8df2adc920ae8297400dabec45f0dedb3f6de", size = 18313783, upload_time = "2025-05-17T21:36:56.883Z" },
- { url = "https://files.pythonhosted.org/packages/57/0a/72d5a3527c5ebffcd47bde9162c39fae1f90138c961e5296491ce778e682/numpy-2.2.6-cp312-cp312-win32.whl", hash = "sha256:4eeaae00d789f66c7a25ac5f34b71a7035bb474e679f410e5e1a94deb24cf2d4", size = 6246506, upload_time = "2025-05-17T21:37:07.368Z" },
- { url = "https://files.pythonhosted.org/packages/36/fa/8c9210162ca1b88529ab76b41ba02d433fd54fecaf6feb70ef9f124683f1/numpy-2.2.6-cp312-cp312-win_amd64.whl", hash = "sha256:c1f9540be57940698ed329904db803cf7a402f3fc200bfe599334c9bd84a40b2", size = 12614190, upload_time = "2025-05-17T21:37:26.213Z" },
- { url = "https://files.pythonhosted.org/packages/f9/5c/6657823f4f594f72b5471f1db1ab12e26e890bb2e41897522d134d2a3e81/numpy-2.2.6-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:0811bb762109d9708cca4d0b13c4f67146e3c3b7cf8d34018c722adb2d957c84", size = 20867828, upload_time = "2025-05-17T21:37:56.699Z" },
- { url = "https://files.pythonhosted.org/packages/dc/9e/14520dc3dadf3c803473bd07e9b2bd1b69bc583cb2497b47000fed2fa92f/numpy-2.2.6-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:287cc3162b6f01463ccd86be154f284d0893d2b3ed7292439ea97eafa8170e0b", size = 14143006, upload_time = "2025-05-17T21:38:18.291Z" },
- { url = "https://files.pythonhosted.org/packages/4f/06/7e96c57d90bebdce9918412087fc22ca9851cceaf5567a45c1f404480e9e/numpy-2.2.6-cp313-cp313-macosx_14_0_arm64.whl", hash = "sha256:f1372f041402e37e5e633e586f62aa53de2eac8d98cbfb822806ce4bbefcb74d", size = 5076765, upload_time = "2025-05-17T21:38:27.319Z" },
- { url = "https://files.pythonhosted.org/packages/73/ed/63d920c23b4289fdac96ddbdd6132e9427790977d5457cd132f18e76eae0/numpy-2.2.6-cp313-cp313-macosx_14_0_x86_64.whl", hash = "sha256:55a4d33fa519660d69614a9fad433be87e5252f4b03850642f88993f7b2ca566", size = 6617736, upload_time = "2025-05-17T21:38:38.141Z" },
- { url = "https://files.pythonhosted.org/packages/85/c5/e19c8f99d83fd377ec8c7e0cf627a8049746da54afc24ef0a0cb73d5dfb5/numpy-2.2.6-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f92729c95468a2f4f15e9bb94c432a9229d0d50de67304399627a943201baa2f", size = 14010719, upload_time = "2025-05-17T21:38:58.433Z" },
- { url = "https://files.pythonhosted.org/packages/19/49/4df9123aafa7b539317bf6d342cb6d227e49f7a35b99c287a6109b13dd93/numpy-2.2.6-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1bc23a79bfabc5d056d106f9befb8d50c31ced2fbc70eedb8155aec74a45798f", size = 16526072, upload_time = "2025-05-17T21:39:22.638Z" },
- { url = "https://files.pythonhosted.org/packages/b2/6c/04b5f47f4f32f7c2b0e7260442a8cbcf8168b0e1a41ff1495da42f42a14f/numpy-2.2.6-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:e3143e4451880bed956e706a3220b4e5cf6172ef05fcc397f6f36a550b1dd868", size = 15503213, upload_time = "2025-05-17T21:39:45.865Z" },
- { url = "https://files.pythonhosted.org/packages/17/0a/5cd92e352c1307640d5b6fec1b2ffb06cd0dabe7d7b8227f97933d378422/numpy-2.2.6-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:b4f13750ce79751586ae2eb824ba7e1e8dba64784086c98cdbbcc6a42112ce0d", size = 18316632, upload_time = "2025-05-17T21:40:13.331Z" },
- { url = "https://files.pythonhosted.org/packages/f0/3b/5cba2b1d88760ef86596ad0f3d484b1cbff7c115ae2429678465057c5155/numpy-2.2.6-cp313-cp313-win32.whl", hash = "sha256:5beb72339d9d4fa36522fc63802f469b13cdbe4fdab4a288f0c441b74272ebfd", size = 6244532, upload_time = "2025-05-17T21:43:46.099Z" },
- { url = "https://files.pythonhosted.org/packages/cb/3b/d58c12eafcb298d4e6d0d40216866ab15f59e55d148a5658bb3132311fcf/numpy-2.2.6-cp313-cp313-win_amd64.whl", hash = "sha256:b0544343a702fa80c95ad5d3d608ea3599dd54d4632df855e4c8d24eb6ecfa1c", size = 12610885, upload_time = "2025-05-17T21:44:05.145Z" },
- { url = "https://files.pythonhosted.org/packages/6b/9e/4bf918b818e516322db999ac25d00c75788ddfd2d2ade4fa66f1f38097e1/numpy-2.2.6-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:0bca768cd85ae743b2affdc762d617eddf3bcf8724435498a1e80132d04879e6", size = 20963467, upload_time = "2025-05-17T21:40:44Z" },
- { url = "https://files.pythonhosted.org/packages/61/66/d2de6b291507517ff2e438e13ff7b1e2cdbdb7cb40b3ed475377aece69f9/numpy-2.2.6-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:fc0c5673685c508a142ca65209b4e79ed6740a4ed6b2267dbba90f34b0b3cfda", size = 14225144, upload_time = "2025-05-17T21:41:05.695Z" },
- { url = "https://files.pythonhosted.org/packages/e4/25/480387655407ead912e28ba3a820bc69af9adf13bcbe40b299d454ec011f/numpy-2.2.6-cp313-cp313t-macosx_14_0_arm64.whl", hash = "sha256:5bd4fc3ac8926b3819797a7c0e2631eb889b4118a9898c84f585a54d475b7e40", size = 5200217, upload_time = "2025-05-17T21:41:15.903Z" },
- { url = "https://files.pythonhosted.org/packages/aa/4a/6e313b5108f53dcbf3aca0c0f3e9c92f4c10ce57a0a721851f9785872895/numpy-2.2.6-cp313-cp313t-macosx_14_0_x86_64.whl", hash = "sha256:fee4236c876c4e8369388054d02d0e9bb84821feb1a64dd59e137e6511a551f8", size = 6712014, upload_time = "2025-05-17T21:41:27.321Z" },
- { url = "https://files.pythonhosted.org/packages/b7/30/172c2d5c4be71fdf476e9de553443cf8e25feddbe185e0bd88b096915bcc/numpy-2.2.6-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e1dda9c7e08dc141e0247a5b8f49cf05984955246a327d4c48bda16821947b2f", size = 14077935, upload_time = "2025-05-17T21:41:49.738Z" },
- { url = "https://files.pythonhosted.org/packages/12/fb/9e743f8d4e4d3c710902cf87af3512082ae3d43b945d5d16563f26ec251d/numpy-2.2.6-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f447e6acb680fd307f40d3da4852208af94afdfab89cf850986c3ca00562f4fa", size = 16600122, upload_time = "2025-05-17T21:42:14.046Z" },
- { url = "https://files.pythonhosted.org/packages/12/75/ee20da0e58d3a66f204f38916757e01e33a9737d0b22373b3eb5a27358f9/numpy-2.2.6-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:389d771b1623ec92636b0786bc4ae56abafad4a4c513d36a55dce14bd9ce8571", size = 15586143, upload_time = "2025-05-17T21:42:37.464Z" },
- { url = "https://files.pythonhosted.org/packages/76/95/bef5b37f29fc5e739947e9ce5179ad402875633308504a52d188302319c8/numpy-2.2.6-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:8e9ace4a37db23421249ed236fdcdd457d671e25146786dfc96835cd951aa7c1", size = 18385260, upload_time = "2025-05-17T21:43:05.189Z" },
- { url = "https://files.pythonhosted.org/packages/09/04/f2f83279d287407cf36a7a8053a5abe7be3622a4363337338f2585e4afda/numpy-2.2.6-cp313-cp313t-win32.whl", hash = "sha256:038613e9fb8c72b0a41f025a7e4c3f0b7a1b5d768ece4796b674c8f3fe13efff", size = 6377225, upload_time = "2025-05-17T21:43:16.254Z" },
- { url = "https://files.pythonhosted.org/packages/67/0e/35082d13c09c02c011cf21570543d202ad929d961c02a147493cb0c2bdf5/numpy-2.2.6-cp313-cp313t-win_amd64.whl", hash = "sha256:6031dd6dfecc0cf9f668681a37648373bddd6421fff6c66ec1624eed0180ee06", size = 12771374, upload_time = "2025-05-17T21:43:35.479Z" },
-]
-
-[[package]]
-name = "ollama"
-version = "0.5.1"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "httpx" },
- { name = "pydantic" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/8d/96/c7fe0d2d1b3053be614822a7b722c7465161b3672ce90df71515137580a0/ollama-0.5.1.tar.gz", hash = "sha256:5a799e4dc4e7af638b11e3ae588ab17623ee019e496caaf4323efbaa8feeff93", size = 41112, upload_time = "2025-05-30T21:32:48.679Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/d6/76/3f96c8cdbf3955d7a73ee94ce3e0db0755d6de1e0098a70275940d1aff2f/ollama-0.5.1-py3-none-any.whl", hash = "sha256:4c8839f35bc173c7057b1eb2cbe7f498c1a7e134eafc9192824c8aecb3617506", size = 13369, upload_time = "2025-05-30T21:32:47.429Z" },
-]
-
-[[package]]
-name = "openai"
-version = "1.82.1"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "anyio" },
- { name = "distro" },
- { name = "httpx" },
- { name = "jiter" },
- { name = "pydantic" },
- { name = "sniffio" },
- { name = "tqdm" },
- { name = "typing-extensions" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/5e/53/fd5318cd79202744711c120f008d9bd987eacc063b15910a820bc9b9f40e/openai-1.82.1.tar.gz", hash = "sha256:ffc529680018e0417acac85f926f92aa0bbcbc26e82e2621087303c66bc7f95d", size = 461322, upload_time = "2025-05-29T16:15:14.526Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/a8/d9/7ec61c010f0d0b0bc57dab8b8dff398f84230d269e8bfa068ad542ff050c/openai-1.82.1-py3-none-any.whl", hash = "sha256:334eb5006edf59aa464c9e932b9d137468d810b2659e5daea9b3a8c39d052395", size = 720466, upload_time = "2025-05-29T16:15:12.531Z" },
-]
-
-[[package]]
-name = "packaging"
-version = "24.2"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/d0/63/68dbb6eb2de9cb10ee4c9c14a0148804425e13c4fb20d61cce69f53106da/packaging-24.2.tar.gz", hash = "sha256:c228a6dc5e932d346bc5739379109d49e8853dd8223571c7c5b55260edc0b97f", size = 163950, upload_time = "2024-11-08T09:47:47.202Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/88/ef/eb23f262cca3c0c4eb7ab1933c3b1f03d021f2c48f54763065b6f0e321be/packaging-24.2-py3-none-any.whl", hash = "sha256:09abb1bccd265c01f4a3aa3f7a7db064b36514d2cba19a2f694fe6150451a759", size = 65451, upload_time = "2024-11-08T09:47:44.722Z" },
-]
-
-[[package]]
-name = "pandas"
-version = "2.2.3"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "numpy" },
- { name = "python-dateutil" },
- { name = "pytz" },
- { name = "tzdata" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/9c/d6/9f8431bacc2e19dca897724cd097b1bb224a6ad5433784a44b587c7c13af/pandas-2.2.3.tar.gz", hash = "sha256:4f18ba62b61d7e192368b84517265a99b4d7ee8912f8708660fb4a366cc82667", size = 4399213, upload_time = "2024-09-20T13:10:04.827Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/a8/44/d9502bf0ed197ba9bf1103c9867d5904ddcaf869e52329787fc54ed70cc8/pandas-2.2.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:66108071e1b935240e74525006034333f98bcdb87ea116de573a6a0dccb6c039", size = 12602222, upload_time = "2024-09-20T13:08:56.254Z" },
- { url = "https://files.pythonhosted.org/packages/52/11/9eac327a38834f162b8250aab32a6781339c69afe7574368fffe46387edf/pandas-2.2.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7c2875855b0ff77b2a64a0365e24455d9990730d6431b9e0ee18ad8acee13dbd", size = 11321274, upload_time = "2024-09-20T13:08:58.645Z" },
- { url = "https://files.pythonhosted.org/packages/45/fb/c4beeb084718598ba19aa9f5abbc8aed8b42f90930da861fcb1acdb54c3a/pandas-2.2.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:cd8d0c3be0515c12fed0bdbae072551c8b54b7192c7b1fda0ba56059a0179698", size = 15579836, upload_time = "2024-09-20T19:01:57.571Z" },
- { url = "https://files.pythonhosted.org/packages/cd/5f/4dba1d39bb9c38d574a9a22548c540177f78ea47b32f99c0ff2ec499fac5/pandas-2.2.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c124333816c3a9b03fbeef3a9f230ba9a737e9e5bb4060aa2107a86cc0a497fc", size = 13058505, upload_time = "2024-09-20T13:09:01.501Z" },
- { url = "https://files.pythonhosted.org/packages/b9/57/708135b90391995361636634df1f1130d03ba456e95bcf576fada459115a/pandas-2.2.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:63cc132e40a2e084cf01adf0775b15ac515ba905d7dcca47e9a251819c575ef3", size = 16744420, upload_time = "2024-09-20T19:02:00.678Z" },
- { url = "https://files.pythonhosted.org/packages/86/4a/03ed6b7ee323cf30404265c284cee9c65c56a212e0a08d9ee06984ba2240/pandas-2.2.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:29401dbfa9ad77319367d36940cd8a0b3a11aba16063e39632d98b0e931ddf32", size = 14440457, upload_time = "2024-09-20T13:09:04.105Z" },
- { url = "https://files.pythonhosted.org/packages/ed/8c/87ddf1fcb55d11f9f847e3c69bb1c6f8e46e2f40ab1a2d2abadb2401b007/pandas-2.2.3-cp311-cp311-win_amd64.whl", hash = "sha256:3fc6873a41186404dad67245896a6e440baacc92f5b716ccd1bc9ed2995ab2c5", size = 11617166, upload_time = "2024-09-20T13:09:06.917Z" },
- { url = "https://files.pythonhosted.org/packages/17/a3/fb2734118db0af37ea7433f57f722c0a56687e14b14690edff0cdb4b7e58/pandas-2.2.3-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:b1d432e8d08679a40e2a6d8b2f9770a5c21793a6f9f47fdd52c5ce1948a5a8a9", size = 12529893, upload_time = "2024-09-20T13:09:09.655Z" },
- { url = "https://files.pythonhosted.org/packages/e1/0c/ad295fd74bfac85358fd579e271cded3ac969de81f62dd0142c426b9da91/pandas-2.2.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:a5a1595fe639f5988ba6a8e5bc9649af3baf26df3998a0abe56c02609392e0a4", size = 11363475, upload_time = "2024-09-20T13:09:14.718Z" },
- { url = "https://files.pythonhosted.org/packages/c6/2a/4bba3f03f7d07207481fed47f5b35f556c7441acddc368ec43d6643c5777/pandas-2.2.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:5de54125a92bb4d1c051c0659e6fcb75256bf799a732a87184e5ea503965bce3", size = 15188645, upload_time = "2024-09-20T19:02:03.88Z" },
- { url = "https://files.pythonhosted.org/packages/38/f8/d8fddee9ed0d0c0f4a2132c1dfcf0e3e53265055da8df952a53e7eaf178c/pandas-2.2.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fffb8ae78d8af97f849404f21411c95062db1496aeb3e56f146f0355c9989319", size = 12739445, upload_time = "2024-09-20T13:09:17.621Z" },
- { url = "https://files.pythonhosted.org/packages/20/e8/45a05d9c39d2cea61ab175dbe6a2de1d05b679e8de2011da4ee190d7e748/pandas-2.2.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:6dfcb5ee8d4d50c06a51c2fffa6cff6272098ad6540aed1a76d15fb9318194d8", size = 16359235, upload_time = "2024-09-20T19:02:07.094Z" },
- { url = "https://files.pythonhosted.org/packages/1d/99/617d07a6a5e429ff90c90da64d428516605a1ec7d7bea494235e1c3882de/pandas-2.2.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:062309c1b9ea12a50e8ce661145c6aab431b1e99530d3cd60640e255778bd43a", size = 14056756, upload_time = "2024-09-20T13:09:20.474Z" },
- { url = "https://files.pythonhosted.org/packages/29/d4/1244ab8edf173a10fd601f7e13b9566c1b525c4f365d6bee918e68381889/pandas-2.2.3-cp312-cp312-win_amd64.whl", hash = "sha256:59ef3764d0fe818125a5097d2ae867ca3fa64df032331b7e0917cf5d7bf66b13", size = 11504248, upload_time = "2024-09-20T13:09:23.137Z" },
- { url = "https://files.pythonhosted.org/packages/64/22/3b8f4e0ed70644e85cfdcd57454686b9057c6c38d2f74fe4b8bc2527214a/pandas-2.2.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f00d1345d84d8c86a63e476bb4955e46458b304b9575dcf71102b5c705320015", size = 12477643, upload_time = "2024-09-20T13:09:25.522Z" },
- { url = "https://files.pythonhosted.org/packages/e4/93/b3f5d1838500e22c8d793625da672f3eec046b1a99257666c94446969282/pandas-2.2.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:3508d914817e153ad359d7e069d752cdd736a247c322d932eb89e6bc84217f28", size = 11281573, upload_time = "2024-09-20T13:09:28.012Z" },
- { url = "https://files.pythonhosted.org/packages/f5/94/6c79b07f0e5aab1dcfa35a75f4817f5c4f677931d4234afcd75f0e6a66ca/pandas-2.2.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:22a9d949bfc9a502d320aa04e5d02feab689d61da4e7764b62c30b991c42c5f0", size = 15196085, upload_time = "2024-09-20T19:02:10.451Z" },
- { url = "https://files.pythonhosted.org/packages/e8/31/aa8da88ca0eadbabd0a639788a6da13bb2ff6edbbb9f29aa786450a30a91/pandas-2.2.3-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f3a255b2c19987fbbe62a9dfd6cff7ff2aa9ccab3fc75218fd4b7530f01efa24", size = 12711809, upload_time = "2024-09-20T13:09:30.814Z" },
- { url = "https://files.pythonhosted.org/packages/ee/7c/c6dbdb0cb2a4344cacfb8de1c5808ca885b2e4dcfde8008266608f9372af/pandas-2.2.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:800250ecdadb6d9c78eae4990da62743b857b470883fa27f652db8bdde7f6659", size = 16356316, upload_time = "2024-09-20T19:02:13.825Z" },
- { url = "https://files.pythonhosted.org/packages/57/b7/8b757e7d92023b832869fa8881a992696a0bfe2e26f72c9ae9f255988d42/pandas-2.2.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:6374c452ff3ec675a8f46fd9ab25c4ad0ba590b71cf0656f8b6daa5202bca3fb", size = 14022055, upload_time = "2024-09-20T13:09:33.462Z" },
- { url = "https://files.pythonhosted.org/packages/3b/bc/4b18e2b8c002572c5a441a64826252ce5da2aa738855747247a971988043/pandas-2.2.3-cp313-cp313-win_amd64.whl", hash = "sha256:61c5ad4043f791b61dd4752191d9f07f0ae412515d59ba8f005832a532f8736d", size = 11481175, upload_time = "2024-09-20T13:09:35.871Z" },
- { url = "https://files.pythonhosted.org/packages/76/a3/a5d88146815e972d40d19247b2c162e88213ef51c7c25993942c39dbf41d/pandas-2.2.3-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:3b71f27954685ee685317063bf13c7709a7ba74fc996b84fc6821c59b0f06468", size = 12615650, upload_time = "2024-09-20T13:09:38.685Z" },
- { url = "https://files.pythonhosted.org/packages/9c/8c/f0fd18f6140ddafc0c24122c8a964e48294acc579d47def376fef12bcb4a/pandas-2.2.3-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:38cf8125c40dae9d5acc10fa66af8ea6fdf760b2714ee482ca691fc66e6fcb18", size = 11290177, upload_time = "2024-09-20T13:09:41.141Z" },
- { url = "https://files.pythonhosted.org/packages/ed/f9/e995754eab9c0f14c6777401f7eece0943840b7a9fc932221c19d1abee9f/pandas-2.2.3-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:ba96630bc17c875161df3818780af30e43be9b166ce51c9a18c1feae342906c2", size = 14651526, upload_time = "2024-09-20T19:02:16.905Z" },
- { url = "https://files.pythonhosted.org/packages/25/b0/98d6ae2e1abac4f35230aa756005e8654649d305df9a28b16b9ae4353bff/pandas-2.2.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1db71525a1538b30142094edb9adc10be3f3e176748cd7acc2240c2f2e5aa3a4", size = 11871013, upload_time = "2024-09-20T13:09:44.39Z" },
- { url = "https://files.pythonhosted.org/packages/cc/57/0f72a10f9db6a4628744c8e8f0df4e6e21de01212c7c981d31e50ffc8328/pandas-2.2.3-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:15c0e1e02e93116177d29ff83e8b1619c93ddc9c49083f237d4312337a61165d", size = 15711620, upload_time = "2024-09-20T19:02:20.639Z" },
- { url = "https://files.pythonhosted.org/packages/ab/5f/b38085618b950b79d2d9164a711c52b10aefc0ae6833b96f626b7021b2ed/pandas-2.2.3-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:ad5b65698ab28ed8d7f18790a0dc58005c7629f227be9ecc1072aa74c0c1d43a", size = 13098436, upload_time = "2024-09-20T13:09:48.112Z" },
-]
-
-[[package]]
-name = "pillow"
-version = "11.2.1"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/af/cb/bb5c01fcd2a69335b86c22142b2bccfc3464087efb7fd382eee5ffc7fdf7/pillow-11.2.1.tar.gz", hash = "sha256:a64dd61998416367b7ef979b73d3a85853ba9bec4c2925f74e588879a58716b6", size = 47026707, upload_time = "2025-04-12T17:50:03.289Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/68/08/3fbf4b98924c73037a8e8b4c2c774784805e0fb4ebca6c5bb60795c40125/pillow-11.2.1-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:35ca289f712ccfc699508c4658a1d14652e8033e9b69839edf83cbdd0ba39e70", size = 3198450, upload_time = "2025-04-12T17:47:37.135Z" },
- { url = "https://files.pythonhosted.org/packages/84/92/6505b1af3d2849d5e714fc75ba9e69b7255c05ee42383a35a4d58f576b16/pillow-11.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e0409af9f829f87a2dfb7e259f78f317a5351f2045158be321fd135973fff7bf", size = 3030550, upload_time = "2025-04-12T17:47:39.345Z" },
- { url = "https://files.pythonhosted.org/packages/3c/8c/ac2f99d2a70ff966bc7eb13dacacfaab57c0549b2ffb351b6537c7840b12/pillow-11.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d4e5c5edee874dce4f653dbe59db7c73a600119fbea8d31f53423586ee2aafd7", size = 4415018, upload_time = "2025-04-12T17:47:41.128Z" },
- { url = "https://files.pythonhosted.org/packages/1f/e3/0a58b5d838687f40891fff9cbaf8669f90c96b64dc8f91f87894413856c6/pillow-11.2.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b93a07e76d13bff9444f1a029e0af2964e654bfc2e2c2d46bfd080df5ad5f3d8", size = 4498006, upload_time = "2025-04-12T17:47:42.912Z" },
- { url = "https://files.pythonhosted.org/packages/21/f5/6ba14718135f08fbfa33308efe027dd02b781d3f1d5c471444a395933aac/pillow-11.2.1-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:e6def7eed9e7fa90fde255afaf08060dc4b343bbe524a8f69bdd2a2f0018f600", size = 4517773, upload_time = "2025-04-12T17:47:44.611Z" },
- { url = "https://files.pythonhosted.org/packages/20/f2/805ad600fc59ebe4f1ba6129cd3a75fb0da126975c8579b8f57abeb61e80/pillow-11.2.1-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:8f4f3724c068be008c08257207210c138d5f3731af6c155a81c2b09a9eb3a788", size = 4607069, upload_time = "2025-04-12T17:47:46.46Z" },
- { url = "https://files.pythonhosted.org/packages/71/6b/4ef8a288b4bb2e0180cba13ca0a519fa27aa982875882392b65131401099/pillow-11.2.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:a0a6709b47019dff32e678bc12c63008311b82b9327613f534e496dacaefb71e", size = 4583460, upload_time = "2025-04-12T17:47:49.255Z" },
- { url = "https://files.pythonhosted.org/packages/62/ae/f29c705a09cbc9e2a456590816e5c234382ae5d32584f451c3eb41a62062/pillow-11.2.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:f6b0c664ccb879109ee3ca702a9272d877f4fcd21e5eb63c26422fd6e415365e", size = 4661304, upload_time = "2025-04-12T17:47:51.067Z" },
- { url = "https://files.pythonhosted.org/packages/6e/1a/c8217b6f2f73794a5e219fbad087701f412337ae6dbb956db37d69a9bc43/pillow-11.2.1-cp311-cp311-win32.whl", hash = "sha256:cc5d875d56e49f112b6def6813c4e3d3036d269c008bf8aef72cd08d20ca6df6", size = 2331809, upload_time = "2025-04-12T17:47:54.425Z" },
- { url = "https://files.pythonhosted.org/packages/e2/72/25a8f40170dc262e86e90f37cb72cb3de5e307f75bf4b02535a61afcd519/pillow-11.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:0f5c7eda47bf8e3c8a283762cab94e496ba977a420868cb819159980b6709193", size = 2676338, upload_time = "2025-04-12T17:47:56.535Z" },
- { url = "https://files.pythonhosted.org/packages/06/9e/76825e39efee61efea258b479391ca77d64dbd9e5804e4ad0fa453b4ba55/pillow-11.2.1-cp311-cp311-win_arm64.whl", hash = "sha256:4d375eb838755f2528ac8cbc926c3e31cc49ca4ad0cf79cff48b20e30634a4a7", size = 2414918, upload_time = "2025-04-12T17:47:58.217Z" },
- { url = "https://files.pythonhosted.org/packages/c7/40/052610b15a1b8961f52537cc8326ca6a881408bc2bdad0d852edeb6ed33b/pillow-11.2.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:78afba22027b4accef10dbd5eed84425930ba41b3ea0a86fa8d20baaf19d807f", size = 3190185, upload_time = "2025-04-12T17:48:00.417Z" },
- { url = "https://files.pythonhosted.org/packages/e5/7e/b86dbd35a5f938632093dc40d1682874c33dcfe832558fc80ca56bfcb774/pillow-11.2.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:78092232a4ab376a35d68c4e6d5e00dfd73454bd12b230420025fbe178ee3b0b", size = 3030306, upload_time = "2025-04-12T17:48:02.391Z" },
- { url = "https://files.pythonhosted.org/packages/a4/5c/467a161f9ed53e5eab51a42923c33051bf8d1a2af4626ac04f5166e58e0c/pillow-11.2.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:25a5f306095c6780c52e6bbb6109624b95c5b18e40aab1c3041da3e9e0cd3e2d", size = 4416121, upload_time = "2025-04-12T17:48:04.554Z" },
- { url = "https://files.pythonhosted.org/packages/62/73/972b7742e38ae0e2ac76ab137ca6005dcf877480da0d9d61d93b613065b4/pillow-11.2.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0c7b29dbd4281923a2bfe562acb734cee96bbb129e96e6972d315ed9f232bef4", size = 4501707, upload_time = "2025-04-12T17:48:06.831Z" },
- { url = "https://files.pythonhosted.org/packages/e4/3a/427e4cb0b9e177efbc1a84798ed20498c4f233abde003c06d2650a6d60cb/pillow-11.2.1-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:3e645b020f3209a0181a418bffe7b4a93171eef6c4ef6cc20980b30bebf17b7d", size = 4522921, upload_time = "2025-04-12T17:48:09.229Z" },
- { url = "https://files.pythonhosted.org/packages/fe/7c/d8b1330458e4d2f3f45d9508796d7caf0c0d3764c00c823d10f6f1a3b76d/pillow-11.2.1-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:b2dbea1012ccb784a65349f57bbc93730b96e85b42e9bf7b01ef40443db720b4", size = 4612523, upload_time = "2025-04-12T17:48:11.631Z" },
- { url = "https://files.pythonhosted.org/packages/b3/2f/65738384e0b1acf451de5a573d8153fe84103772d139e1e0bdf1596be2ea/pillow-11.2.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:da3104c57bbd72948d75f6a9389e6727d2ab6333c3617f0a89d72d4940aa0443", size = 4587836, upload_time = "2025-04-12T17:48:13.592Z" },
- { url = "https://files.pythonhosted.org/packages/6a/c5/e795c9f2ddf3debb2dedd0df889f2fe4b053308bb59a3cc02a0cd144d641/pillow-11.2.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:598174aef4589af795f66f9caab87ba4ff860ce08cd5bb447c6fc553ffee603c", size = 4669390, upload_time = "2025-04-12T17:48:15.938Z" },
- { url = "https://files.pythonhosted.org/packages/96/ae/ca0099a3995976a9fce2f423166f7bff9b12244afdc7520f6ed38911539a/pillow-11.2.1-cp312-cp312-win32.whl", hash = "sha256:1d535df14716e7f8776b9e7fee118576d65572b4aad3ed639be9e4fa88a1cad3", size = 2332309, upload_time = "2025-04-12T17:48:17.885Z" },
- { url = "https://files.pythonhosted.org/packages/7c/18/24bff2ad716257fc03da964c5e8f05d9790a779a8895d6566e493ccf0189/pillow-11.2.1-cp312-cp312-win_amd64.whl", hash = "sha256:14e33b28bf17c7a38eede290f77db7c664e4eb01f7869e37fa98a5aa95978941", size = 2676768, upload_time = "2025-04-12T17:48:19.655Z" },
- { url = "https://files.pythonhosted.org/packages/da/bb/e8d656c9543276517ee40184aaa39dcb41e683bca121022f9323ae11b39d/pillow-11.2.1-cp312-cp312-win_arm64.whl", hash = "sha256:21e1470ac9e5739ff880c211fc3af01e3ae505859392bf65458c224d0bf283eb", size = 2415087, upload_time = "2025-04-12T17:48:21.991Z" },
- { url = "https://files.pythonhosted.org/packages/36/9c/447528ee3776e7ab8897fe33697a7ff3f0475bb490c5ac1456a03dc57956/pillow-11.2.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:fdec757fea0b793056419bca3e9932eb2b0ceec90ef4813ea4c1e072c389eb28", size = 3190098, upload_time = "2025-04-12T17:48:23.915Z" },
- { url = "https://files.pythonhosted.org/packages/b5/09/29d5cd052f7566a63e5b506fac9c60526e9ecc553825551333e1e18a4858/pillow-11.2.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:b0e130705d568e2f43a17bcbe74d90958e8a16263868a12c3e0d9c8162690830", size = 3030166, upload_time = "2025-04-12T17:48:25.738Z" },
- { url = "https://files.pythonhosted.org/packages/71/5d/446ee132ad35e7600652133f9c2840b4799bbd8e4adba881284860da0a36/pillow-11.2.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7bdb5e09068332578214cadd9c05e3d64d99e0e87591be22a324bdbc18925be0", size = 4408674, upload_time = "2025-04-12T17:48:27.908Z" },
- { url = "https://files.pythonhosted.org/packages/69/5f/cbe509c0ddf91cc3a03bbacf40e5c2339c4912d16458fcb797bb47bcb269/pillow-11.2.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d189ba1bebfbc0c0e529159631ec72bb9e9bc041f01ec6d3233d6d82eb823bc1", size = 4496005, upload_time = "2025-04-12T17:48:29.888Z" },
- { url = "https://files.pythonhosted.org/packages/f9/b3/dd4338d8fb8a5f312021f2977fb8198a1184893f9b00b02b75d565c33b51/pillow-11.2.1-cp313-cp313-manylinux_2_28_aarch64.whl", hash = "sha256:191955c55d8a712fab8934a42bfefbf99dd0b5875078240943f913bb66d46d9f", size = 4518707, upload_time = "2025-04-12T17:48:31.874Z" },
- { url = "https://files.pythonhosted.org/packages/13/eb/2552ecebc0b887f539111c2cd241f538b8ff5891b8903dfe672e997529be/pillow-11.2.1-cp313-cp313-manylinux_2_28_x86_64.whl", hash = "sha256:ad275964d52e2243430472fc5d2c2334b4fc3ff9c16cb0a19254e25efa03a155", size = 4610008, upload_time = "2025-04-12T17:48:34.422Z" },
- { url = "https://files.pythonhosted.org/packages/72/d1/924ce51bea494cb6e7959522d69d7b1c7e74f6821d84c63c3dc430cbbf3b/pillow-11.2.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:750f96efe0597382660d8b53e90dd1dd44568a8edb51cb7f9d5d918b80d4de14", size = 4585420, upload_time = "2025-04-12T17:48:37.641Z" },
- { url = "https://files.pythonhosted.org/packages/43/ab/8f81312d255d713b99ca37479a4cb4b0f48195e530cdc1611990eb8fd04b/pillow-11.2.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:fe15238d3798788d00716637b3d4e7bb6bde18b26e5d08335a96e88564a36b6b", size = 4667655, upload_time = "2025-04-12T17:48:39.652Z" },
- { url = "https://files.pythonhosted.org/packages/94/86/8f2e9d2dc3d308dfd137a07fe1cc478df0a23d42a6c4093b087e738e4827/pillow-11.2.1-cp313-cp313-win32.whl", hash = "sha256:3fe735ced9a607fee4f481423a9c36701a39719252a9bb251679635f99d0f7d2", size = 2332329, upload_time = "2025-04-12T17:48:41.765Z" },
- { url = "https://files.pythonhosted.org/packages/6d/ec/1179083b8d6067a613e4d595359b5fdea65d0a3b7ad623fee906e1b3c4d2/pillow-11.2.1-cp313-cp313-win_amd64.whl", hash = "sha256:74ee3d7ecb3f3c05459ba95eed5efa28d6092d751ce9bf20e3e253a4e497e691", size = 2676388, upload_time = "2025-04-12T17:48:43.625Z" },
- { url = "https://files.pythonhosted.org/packages/23/f1/2fc1e1e294de897df39fa8622d829b8828ddad938b0eaea256d65b84dd72/pillow-11.2.1-cp313-cp313-win_arm64.whl", hash = "sha256:5119225c622403afb4b44bad4c1ca6c1f98eed79db8d3bc6e4e160fc6339d66c", size = 2414950, upload_time = "2025-04-12T17:48:45.475Z" },
- { url = "https://files.pythonhosted.org/packages/c4/3e/c328c48b3f0ead7bab765a84b4977acb29f101d10e4ef57a5e3400447c03/pillow-11.2.1-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:8ce2e8411c7aaef53e6bb29fe98f28cd4fbd9a1d9be2eeea434331aac0536b22", size = 3192759, upload_time = "2025-04-12T17:48:47.866Z" },
- { url = "https://files.pythonhosted.org/packages/18/0e/1c68532d833fc8b9f404d3a642991441d9058eccd5606eab31617f29b6d4/pillow-11.2.1-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:9ee66787e095127116d91dea2143db65c7bb1e232f617aa5957c0d9d2a3f23a7", size = 3033284, upload_time = "2025-04-12T17:48:50.189Z" },
- { url = "https://files.pythonhosted.org/packages/b7/cb/6faf3fb1e7705fd2db74e070f3bf6f88693601b0ed8e81049a8266de4754/pillow-11.2.1-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9622e3b6c1d8b551b6e6f21873bdcc55762b4b2126633014cea1803368a9aa16", size = 4445826, upload_time = "2025-04-12T17:48:52.346Z" },
- { url = "https://files.pythonhosted.org/packages/07/94/8be03d50b70ca47fb434a358919d6a8d6580f282bbb7af7e4aa40103461d/pillow-11.2.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:63b5dff3a68f371ea06025a1a6966c9a1e1ee452fc8020c2cd0ea41b83e9037b", size = 4527329, upload_time = "2025-04-12T17:48:54.403Z" },
- { url = "https://files.pythonhosted.org/packages/fd/a4/bfe78777076dc405e3bd2080bc32da5ab3945b5a25dc5d8acaa9de64a162/pillow-11.2.1-cp313-cp313t-manylinux_2_28_aarch64.whl", hash = "sha256:31df6e2d3d8fc99f993fd253e97fae451a8db2e7207acf97859732273e108406", size = 4549049, upload_time = "2025-04-12T17:48:56.383Z" },
- { url = "https://files.pythonhosted.org/packages/65/4d/eaf9068dc687c24979e977ce5677e253624bd8b616b286f543f0c1b91662/pillow-11.2.1-cp313-cp313t-manylinux_2_28_x86_64.whl", hash = "sha256:062b7a42d672c45a70fa1f8b43d1d38ff76b63421cbbe7f88146b39e8a558d91", size = 4635408, upload_time = "2025-04-12T17:48:58.782Z" },
- { url = "https://files.pythonhosted.org/packages/1d/26/0fd443365d9c63bc79feb219f97d935cd4b93af28353cba78d8e77b61719/pillow-11.2.1-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:4eb92eca2711ef8be42fd3f67533765d9fd043b8c80db204f16c8ea62ee1a751", size = 4614863, upload_time = "2025-04-12T17:49:00.709Z" },
- { url = "https://files.pythonhosted.org/packages/49/65/dca4d2506be482c2c6641cacdba5c602bc76d8ceb618fd37de855653a419/pillow-11.2.1-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:f91ebf30830a48c825590aede79376cb40f110b387c17ee9bd59932c961044f9", size = 4692938, upload_time = "2025-04-12T17:49:02.946Z" },
- { url = "https://files.pythonhosted.org/packages/b3/92/1ca0c3f09233bd7decf8f7105a1c4e3162fb9142128c74adad0fb361b7eb/pillow-11.2.1-cp313-cp313t-win32.whl", hash = "sha256:e0b55f27f584ed623221cfe995c912c61606be8513bfa0e07d2c674b4516d9dd", size = 2335774, upload_time = "2025-04-12T17:49:04.889Z" },
- { url = "https://files.pythonhosted.org/packages/a5/ac/77525347cb43b83ae905ffe257bbe2cc6fd23acb9796639a1f56aa59d191/pillow-11.2.1-cp313-cp313t-win_amd64.whl", hash = "sha256:36d6b82164c39ce5482f649b437382c0fb2395eabc1e2b1702a6deb8ad647d6e", size = 2681895, upload_time = "2025-04-12T17:49:06.635Z" },
- { url = "https://files.pythonhosted.org/packages/67/32/32dc030cfa91ca0fc52baebbba2e009bb001122a1daa8b6a79ad830b38d3/pillow-11.2.1-cp313-cp313t-win_arm64.whl", hash = "sha256:225c832a13326e34f212d2072982bb1adb210e0cc0b153e688743018c94a2681", size = 2417234, upload_time = "2025-04-12T17:49:08.399Z" },
- { url = "https://files.pythonhosted.org/packages/a4/ad/2613c04633c7257d9481ab21d6b5364b59fc5d75faafd7cb8693523945a3/pillow-11.2.1-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:80f1df8dbe9572b4b7abdfa17eb5d78dd620b1d55d9e25f834efdbee872d3aed", size = 3181734, upload_time = "2025-04-12T17:49:46.789Z" },
- { url = "https://files.pythonhosted.org/packages/a4/fd/dcdda4471ed667de57bb5405bb42d751e6cfdd4011a12c248b455c778e03/pillow-11.2.1-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:ea926cfbc3957090becbcbbb65ad177161a2ff2ad578b5a6ec9bb1e1cd78753c", size = 2999841, upload_time = "2025-04-12T17:49:48.812Z" },
- { url = "https://files.pythonhosted.org/packages/ac/89/8a2536e95e77432833f0db6fd72a8d310c8e4272a04461fb833eb021bf94/pillow-11.2.1-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:738db0e0941ca0376804d4de6a782c005245264edaa253ffce24e5a15cbdc7bd", size = 3437470, upload_time = "2025-04-12T17:49:50.831Z" },
- { url = "https://files.pythonhosted.org/packages/9d/8f/abd47b73c60712f88e9eda32baced7bfc3e9bd6a7619bb64b93acff28c3e/pillow-11.2.1-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9db98ab6565c69082ec9b0d4e40dd9f6181dab0dd236d26f7a50b8b9bfbd5076", size = 3460013, upload_time = "2025-04-12T17:49:53.278Z" },
- { url = "https://files.pythonhosted.org/packages/f6/20/5c0a0aa83b213b7a07ec01e71a3d6ea2cf4ad1d2c686cc0168173b6089e7/pillow-11.2.1-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:036e53f4170e270ddb8797d4c590e6dd14d28e15c7da375c18978045f7e6c37b", size = 3527165, upload_time = "2025-04-12T17:49:55.164Z" },
- { url = "https://files.pythonhosted.org/packages/58/0e/2abab98a72202d91146abc839e10c14f7cf36166f12838ea0c4db3ca6ecb/pillow-11.2.1-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:14f73f7c291279bd65fda51ee87affd7c1e097709f7fdd0188957a16c264601f", size = 3571586, upload_time = "2025-04-12T17:49:57.171Z" },
- { url = "https://files.pythonhosted.org/packages/21/2c/5e05f58658cf49b6667762cca03d6e7d85cededde2caf2ab37b81f80e574/pillow-11.2.1-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:208653868d5c9ecc2b327f9b9ef34e0e42a4cdd172c2988fd81d62d2bc9bc044", size = 2674751, upload_time = "2025-04-12T17:49:59.628Z" },
-]
-
-[[package]]
-name = "portalocker"
-version = "2.10.1"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "pywin32", marker = "sys_platform == 'win32'" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/ed/d3/c6c64067759e87af98cc668c1cc75171347d0f1577fab7ca3749134e3cd4/portalocker-2.10.1.tar.gz", hash = "sha256:ef1bf844e878ab08aee7e40184156e1151f228f103aa5c6bd0724cc330960f8f", size = 40891, upload_time = "2024-07-13T23:15:34.86Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/9b/fb/a70a4214956182e0d7a9099ab17d50bfcba1056188e9b14f35b9e2b62a0d/portalocker-2.10.1-py3-none-any.whl", hash = "sha256:53a5984ebc86a025552264b459b46a2086e269b21823cb572f8f28ee759e45bf", size = 18423, upload_time = "2024-07-13T23:15:32.602Z" },
-]
-
-[[package]]
-name = "posthog"
-version = "4.2.0"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "backoff" },
- { name = "distro" },
- { name = "python-dateutil" },
- { name = "requests" },
- { name = "six" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/ce/5b/2e9890700b7b55a370edbfbe5948eae780d48af9b46ad06ea2e7970576f4/posthog-4.2.0.tar.gz", hash = "sha256:c4abc95de03294be005b3b7e8735e9d7abab88583da26262112bacce64b0c3b5", size = 80727, upload_time = "2025-05-23T23:23:55.943Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/51/16/7b6c5844acee2d343d463ee0e3143cd8c7c48a6c0d079a2f7daf0c80b95c/posthog-4.2.0-py2.py3-none-any.whl", hash = "sha256:60c7066caac43e43e326e9196d8c1aadeafc8b0be9e5c108446e352711fa456b", size = 96692, upload_time = "2025-05-23T23:23:54.384Z" },
-]
-
-[[package]]
-name = "protobuf"
-version = "6.31.1"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/52/f3/b9655a711b32c19720253f6f06326faf90580834e2e83f840472d752bc8b/protobuf-6.31.1.tar.gz", hash = "sha256:d8cac4c982f0b957a4dc73a80e2ea24fab08e679c0de9deb835f4a12d69aca9a", size = 441797, upload_time = "2025-05-28T19:25:54.947Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/f3/6f/6ab8e4bf962fd5570d3deaa2d5c38f0a363f57b4501047b5ebeb83ab1125/protobuf-6.31.1-cp310-abi3-win32.whl", hash = "sha256:7fa17d5a29c2e04b7d90e5e32388b8bfd0e7107cd8e616feef7ed3fa6bdab5c9", size = 423603, upload_time = "2025-05-28T19:25:41.198Z" },
- { url = "https://files.pythonhosted.org/packages/44/3a/b15c4347dd4bf3a1b0ee882f384623e2063bb5cf9fa9d57990a4f7df2fb6/protobuf-6.31.1-cp310-abi3-win_amd64.whl", hash = "sha256:426f59d2964864a1a366254fa703b8632dcec0790d8862d30034d8245e1cd447", size = 435283, upload_time = "2025-05-28T19:25:44.275Z" },
- { url = "https://files.pythonhosted.org/packages/6a/c9/b9689a2a250264a84e66c46d8862ba788ee7a641cdca39bccf64f59284b7/protobuf-6.31.1-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:6f1227473dc43d44ed644425268eb7c2e488ae245d51c6866d19fe158e207402", size = 425604, upload_time = "2025-05-28T19:25:45.702Z" },
- { url = "https://files.pythonhosted.org/packages/76/a1/7a5a94032c83375e4fe7e7f56e3976ea6ac90c5e85fac8576409e25c39c3/protobuf-6.31.1-cp39-abi3-manylinux2014_aarch64.whl", hash = "sha256:a40fc12b84c154884d7d4c4ebd675d5b3b5283e155f324049ae396b95ddebc39", size = 322115, upload_time = "2025-05-28T19:25:47.128Z" },
- { url = "https://files.pythonhosted.org/packages/fa/b1/b59d405d64d31999244643d88c45c8241c58f17cc887e73bcb90602327f8/protobuf-6.31.1-cp39-abi3-manylinux2014_x86_64.whl", hash = "sha256:4ee898bf66f7a8b0bd21bce523814e6fbd8c6add948045ce958b73af7e8878c6", size = 321070, upload_time = "2025-05-28T19:25:50.036Z" },
- { url = "https://files.pythonhosted.org/packages/f7/af/ab3c51ab7507a7325e98ffe691d9495ee3d3aa5f589afad65ec920d39821/protobuf-6.31.1-py3-none-any.whl", hash = "sha256:720a6c7e6b77288b85063569baae8536671b39f15cc22037ec7045658d80489e", size = 168724, upload_time = "2025-05-28T19:25:53.926Z" },
-]
-
-[[package]]
-name = "pyarrow"
-version = "20.0.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/a2/ee/a7810cb9f3d6e9238e61d312076a9859bf3668fd21c69744de9532383912/pyarrow-20.0.0.tar.gz", hash = "sha256:febc4a913592573c8d5805091a6c2b5064c8bd6e002131f01061797d91c783c1", size = 1125187, upload_time = "2025-04-27T12:34:23.264Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/47/a2/b7930824181ceadd0c63c1042d01fa4ef63eee233934826a7a2a9af6e463/pyarrow-20.0.0-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:24ca380585444cb2a31324c546a9a56abbe87e26069189e14bdba19c86c049f0", size = 30856035, upload_time = "2025-04-27T12:28:40.78Z" },
- { url = "https://files.pythonhosted.org/packages/9b/18/c765770227d7f5bdfa8a69f64b49194352325c66a5c3bb5e332dfd5867d9/pyarrow-20.0.0-cp311-cp311-macosx_12_0_x86_64.whl", hash = "sha256:95b330059ddfdc591a3225f2d272123be26c8fa76e8c9ee1a77aad507361cfdb", size = 32309552, upload_time = "2025-04-27T12:28:47.051Z" },
- { url = "https://files.pythonhosted.org/packages/44/fb/dfb2dfdd3e488bb14f822d7335653092dde150cffc2da97de6e7500681f9/pyarrow-20.0.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5f0fb1041267e9968c6d0d2ce3ff92e3928b243e2b6d11eeb84d9ac547308232", size = 41334704, upload_time = "2025-04-27T12:28:55.064Z" },
- { url = "https://files.pythonhosted.org/packages/58/0d/08a95878d38808051a953e887332d4a76bc06c6ee04351918ee1155407eb/pyarrow-20.0.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b8ff87cc837601532cc8242d2f7e09b4e02404de1b797aee747dd4ba4bd6313f", size = 42399836, upload_time = "2025-04-27T12:29:02.13Z" },
- { url = "https://files.pythonhosted.org/packages/f3/cd/efa271234dfe38f0271561086eedcad7bc0f2ddd1efba423916ff0883684/pyarrow-20.0.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:7a3a5dcf54286e6141d5114522cf31dd67a9e7c9133d150799f30ee302a7a1ab", size = 40711789, upload_time = "2025-04-27T12:29:09.951Z" },
- { url = "https://files.pythonhosted.org/packages/46/1f/7f02009bc7fc8955c391defee5348f510e589a020e4b40ca05edcb847854/pyarrow-20.0.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:a6ad3e7758ecf559900261a4df985662df54fb7fdb55e8e3b3aa99b23d526b62", size = 42301124, upload_time = "2025-04-27T12:29:17.187Z" },
- { url = "https://files.pythonhosted.org/packages/4f/92/692c562be4504c262089e86757a9048739fe1acb4024f92d39615e7bab3f/pyarrow-20.0.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:6bb830757103a6cb300a04610e08d9636f0cd223d32f388418ea893a3e655f1c", size = 42916060, upload_time = "2025-04-27T12:29:24.253Z" },
- { url = "https://files.pythonhosted.org/packages/a4/ec/9f5c7e7c828d8e0a3c7ef50ee62eca38a7de2fa6eb1b8fa43685c9414fef/pyarrow-20.0.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:96e37f0766ecb4514a899d9a3554fadda770fb57ddf42b63d80f14bc20aa7db3", size = 44547640, upload_time = "2025-04-27T12:29:32.782Z" },
- { url = "https://files.pythonhosted.org/packages/54/96/46613131b4727f10fd2ffa6d0d6f02efcc09a0e7374eff3b5771548aa95b/pyarrow-20.0.0-cp311-cp311-win_amd64.whl", hash = "sha256:3346babb516f4b6fd790da99b98bed9708e3f02e734c84971faccb20736848dc", size = 25781491, upload_time = "2025-04-27T12:29:38.464Z" },
- { url = "https://files.pythonhosted.org/packages/a1/d6/0c10e0d54f6c13eb464ee9b67a68b8c71bcf2f67760ef5b6fbcddd2ab05f/pyarrow-20.0.0-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:75a51a5b0eef32727a247707d4755322cb970be7e935172b6a3a9f9ae98404ba", size = 30815067, upload_time = "2025-04-27T12:29:44.384Z" },
- { url = "https://files.pythonhosted.org/packages/7e/e2/04e9874abe4094a06fd8b0cbb0f1312d8dd7d707f144c2ec1e5e8f452ffa/pyarrow-20.0.0-cp312-cp312-macosx_12_0_x86_64.whl", hash = "sha256:211d5e84cecc640c7a3ab900f930aaff5cd2702177e0d562d426fb7c4f737781", size = 32297128, upload_time = "2025-04-27T12:29:52.038Z" },
- { url = "https://files.pythonhosted.org/packages/31/fd/c565e5dcc906a3b471a83273039cb75cb79aad4a2d4a12f76cc5ae90a4b8/pyarrow-20.0.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4ba3cf4182828be7a896cbd232aa8dd6a31bd1f9e32776cc3796c012855e1199", size = 41334890, upload_time = "2025-04-27T12:29:59.452Z" },
- { url = "https://files.pythonhosted.org/packages/af/a9/3bdd799e2c9b20c1ea6dc6fa8e83f29480a97711cf806e823f808c2316ac/pyarrow-20.0.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2c3a01f313ffe27ac4126f4c2e5ea0f36a5fc6ab51f8726cf41fee4b256680bd", size = 42421775, upload_time = "2025-04-27T12:30:06.875Z" },
- { url = "https://files.pythonhosted.org/packages/10/f7/da98ccd86354c332f593218101ae56568d5dcedb460e342000bd89c49cc1/pyarrow-20.0.0-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:a2791f69ad72addd33510fec7bb14ee06c2a448e06b649e264c094c5b5f7ce28", size = 40687231, upload_time = "2025-04-27T12:30:13.954Z" },
- { url = "https://files.pythonhosted.org/packages/bb/1b/2168d6050e52ff1e6cefc61d600723870bf569cbf41d13db939c8cf97a16/pyarrow-20.0.0-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:4250e28a22302ce8692d3a0e8ec9d9dde54ec00d237cff4dfa9c1fbf79e472a8", size = 42295639, upload_time = "2025-04-27T12:30:21.949Z" },
- { url = "https://files.pythonhosted.org/packages/b2/66/2d976c0c7158fd25591c8ca55aee026e6d5745a021915a1835578707feb3/pyarrow-20.0.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:89e030dc58fc760e4010148e6ff164d2f44441490280ef1e97a542375e41058e", size = 42908549, upload_time = "2025-04-27T12:30:29.551Z" },
- { url = "https://files.pythonhosted.org/packages/31/a9/dfb999c2fc6911201dcbf348247f9cc382a8990f9ab45c12eabfd7243a38/pyarrow-20.0.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:6102b4864d77102dbbb72965618e204e550135a940c2534711d5ffa787df2a5a", size = 44557216, upload_time = "2025-04-27T12:30:36.977Z" },
- { url = "https://files.pythonhosted.org/packages/a0/8e/9adee63dfa3911be2382fb4d92e4b2e7d82610f9d9f668493bebaa2af50f/pyarrow-20.0.0-cp312-cp312-win_amd64.whl", hash = "sha256:96d6a0a37d9c98be08f5ed6a10831d88d52cac7b13f5287f1e0f625a0de8062b", size = 25660496, upload_time = "2025-04-27T12:30:42.809Z" },
- { url = "https://files.pythonhosted.org/packages/9b/aa/daa413b81446d20d4dad2944110dcf4cf4f4179ef7f685dd5a6d7570dc8e/pyarrow-20.0.0-cp313-cp313-macosx_12_0_arm64.whl", hash = "sha256:a15532e77b94c61efadde86d10957950392999503b3616b2ffcef7621a002893", size = 30798501, upload_time = "2025-04-27T12:30:48.351Z" },
- { url = "https://files.pythonhosted.org/packages/ff/75/2303d1caa410925de902d32ac215dc80a7ce7dd8dfe95358c165f2adf107/pyarrow-20.0.0-cp313-cp313-macosx_12_0_x86_64.whl", hash = "sha256:dd43f58037443af715f34f1322c782ec463a3c8a94a85fdb2d987ceb5658e061", size = 32277895, upload_time = "2025-04-27T12:30:55.238Z" },
- { url = "https://files.pythonhosted.org/packages/92/41/fe18c7c0b38b20811b73d1bdd54b1fccba0dab0e51d2048878042d84afa8/pyarrow-20.0.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aa0d288143a8585806e3cc7c39566407aab646fb9ece164609dac1cfff45f6ae", size = 41327322, upload_time = "2025-04-27T12:31:05.587Z" },
- { url = "https://files.pythonhosted.org/packages/da/ab/7dbf3d11db67c72dbf36ae63dcbc9f30b866c153b3a22ef728523943eee6/pyarrow-20.0.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b6953f0114f8d6f3d905d98e987d0924dabce59c3cda380bdfaa25a6201563b4", size = 42411441, upload_time = "2025-04-27T12:31:15.675Z" },
- { url = "https://files.pythonhosted.org/packages/90/c3/0c7da7b6dac863af75b64e2f827e4742161128c350bfe7955b426484e226/pyarrow-20.0.0-cp313-cp313-manylinux_2_28_aarch64.whl", hash = "sha256:991f85b48a8a5e839b2128590ce07611fae48a904cae6cab1f089c5955b57eb5", size = 40677027, upload_time = "2025-04-27T12:31:24.631Z" },
- { url = "https://files.pythonhosted.org/packages/be/27/43a47fa0ff9053ab5203bb3faeec435d43c0d8bfa40179bfd076cdbd4e1c/pyarrow-20.0.0-cp313-cp313-manylinux_2_28_x86_64.whl", hash = "sha256:97c8dc984ed09cb07d618d57d8d4b67a5100a30c3818c2fb0b04599f0da2de7b", size = 42281473, upload_time = "2025-04-27T12:31:31.311Z" },
- { url = "https://files.pythonhosted.org/packages/bc/0b/d56c63b078876da81bbb9ba695a596eabee9b085555ed12bf6eb3b7cab0e/pyarrow-20.0.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:9b71daf534f4745818f96c214dbc1e6124d7daf059167330b610fc69b6f3d3e3", size = 42893897, upload_time = "2025-04-27T12:31:39.406Z" },
- { url = "https://files.pythonhosted.org/packages/92/ac/7d4bd020ba9145f354012838692d48300c1b8fe5634bfda886abcada67ed/pyarrow-20.0.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:e8b88758f9303fa5a83d6c90e176714b2fd3852e776fc2d7e42a22dd6c2fb368", size = 44543847, upload_time = "2025-04-27T12:31:45.997Z" },
- { url = "https://files.pythonhosted.org/packages/9d/07/290f4abf9ca702c5df7b47739c1b2c83588641ddfa2cc75e34a301d42e55/pyarrow-20.0.0-cp313-cp313-win_amd64.whl", hash = "sha256:30b3051b7975801c1e1d387e17c588d8ab05ced9b1e14eec57915f79869b5031", size = 25653219, upload_time = "2025-04-27T12:31:54.11Z" },
- { url = "https://files.pythonhosted.org/packages/95/df/720bb17704b10bd69dde086e1400b8eefb8f58df3f8ac9cff6c425bf57f1/pyarrow-20.0.0-cp313-cp313t-macosx_12_0_arm64.whl", hash = "sha256:ca151afa4f9b7bc45bcc791eb9a89e90a9eb2772767d0b1e5389609c7d03db63", size = 30853957, upload_time = "2025-04-27T12:31:59.215Z" },
- { url = "https://files.pythonhosted.org/packages/d9/72/0d5f875efc31baef742ba55a00a25213a19ea64d7176e0fe001c5d8b6e9a/pyarrow-20.0.0-cp313-cp313t-macosx_12_0_x86_64.whl", hash = "sha256:4680f01ecd86e0dd63e39eb5cd59ef9ff24a9d166db328679e36c108dc993d4c", size = 32247972, upload_time = "2025-04-27T12:32:05.369Z" },
- { url = "https://files.pythonhosted.org/packages/d5/bc/e48b4fa544d2eea72f7844180eb77f83f2030b84c8dad860f199f94307ed/pyarrow-20.0.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7f4c8534e2ff059765647aa69b75d6543f9fef59e2cd4c6d18015192565d2b70", size = 41256434, upload_time = "2025-04-27T12:32:11.814Z" },
- { url = "https://files.pythonhosted.org/packages/c3/01/974043a29874aa2cf4f87fb07fd108828fc7362300265a2a64a94965e35b/pyarrow-20.0.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3e1f8a47f4b4ae4c69c4d702cfbdfe4d41e18e5c7ef6f1bb1c50918c1e81c57b", size = 42353648, upload_time = "2025-04-27T12:32:20.766Z" },
- { url = "https://files.pythonhosted.org/packages/68/95/cc0d3634cde9ca69b0e51cbe830d8915ea32dda2157560dda27ff3b3337b/pyarrow-20.0.0-cp313-cp313t-manylinux_2_28_aarch64.whl", hash = "sha256:a1f60dc14658efaa927f8214734f6a01a806d7690be4b3232ba526836d216122", size = 40619853, upload_time = "2025-04-27T12:32:28.1Z" },
- { url = "https://files.pythonhosted.org/packages/29/c2/3ad40e07e96a3e74e7ed7cc8285aadfa84eb848a798c98ec0ad009eb6bcc/pyarrow-20.0.0-cp313-cp313t-manylinux_2_28_x86_64.whl", hash = "sha256:204a846dca751428991346976b914d6d2a82ae5b8316a6ed99789ebf976551e6", size = 42241743, upload_time = "2025-04-27T12:32:35.792Z" },
- { url = "https://files.pythonhosted.org/packages/eb/cb/65fa110b483339add6a9bc7b6373614166b14e20375d4daa73483755f830/pyarrow-20.0.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:f3b117b922af5e4c6b9a9115825726cac7d8b1421c37c2b5e24fbacc8930612c", size = 42839441, upload_time = "2025-04-27T12:32:46.64Z" },
- { url = "https://files.pythonhosted.org/packages/98/7b/f30b1954589243207d7a0fbc9997401044bf9a033eec78f6cb50da3f304a/pyarrow-20.0.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:e724a3fd23ae5b9c010e7be857f4405ed5e679db5c93e66204db1a69f733936a", size = 44503279, upload_time = "2025-04-27T12:32:56.503Z" },
- { url = "https://files.pythonhosted.org/packages/37/40/ad395740cd641869a13bcf60851296c89624662575621968dcfafabaa7f6/pyarrow-20.0.0-cp313-cp313t-win_amd64.whl", hash = "sha256:82f1ee5133bd8f49d31be1299dc07f585136679666b502540db854968576faf9", size = 25944982, upload_time = "2025-04-27T12:33:04.72Z" },
-]
-
-[[package]]
-name = "pydantic"
-version = "2.11.5"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "annotated-types" },
- { name = "pydantic-core" },
- { name = "typing-extensions" },
- { name = "typing-inspection" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/f0/86/8ce9040065e8f924d642c58e4a344e33163a07f6b57f836d0d734e0ad3fb/pydantic-2.11.5.tar.gz", hash = "sha256:7f853db3d0ce78ce8bbb148c401c2cdd6431b3473c0cdff2755c7690952a7b7a", size = 787102, upload_time = "2025-05-22T21:18:08.761Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/b5/69/831ed22b38ff9b4b64b66569f0e5b7b97cf3638346eb95a2147fdb49ad5f/pydantic-2.11.5-py3-none-any.whl", hash = "sha256:f9c26ba06f9747749ca1e5c94d6a85cb84254577553c8785576fd38fa64dc0f7", size = 444229, upload_time = "2025-05-22T21:18:06.329Z" },
-]
-
-[[package]]
-name = "pydantic-core"
-version = "2.33.2"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "typing-extensions" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/ad/88/5f2260bdfae97aabf98f1778d43f69574390ad787afb646292a638c923d4/pydantic_core-2.33.2.tar.gz", hash = "sha256:7cb8bc3605c29176e1b105350d2e6474142d7c1bd1d9327c4a9bdb46bf827acc", size = 435195, upload_time = "2025-04-23T18:33:52.104Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/3f/8d/71db63483d518cbbf290261a1fc2839d17ff89fce7089e08cad07ccfce67/pydantic_core-2.33.2-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:4c5b0a576fb381edd6d27f0a85915c6daf2f8138dc5c267a57c08a62900758c7", size = 2028584, upload_time = "2025-04-23T18:31:03.106Z" },
- { url = "https://files.pythonhosted.org/packages/24/2f/3cfa7244ae292dd850989f328722d2aef313f74ffc471184dc509e1e4e5a/pydantic_core-2.33.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e799c050df38a639db758c617ec771fd8fb7a5f8eaaa4b27b101f266b216a246", size = 1855071, upload_time = "2025-04-23T18:31:04.621Z" },
- { url = "https://files.pythonhosted.org/packages/b3/d3/4ae42d33f5e3f50dd467761304be2fa0a9417fbf09735bc2cce003480f2a/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dc46a01bf8d62f227d5ecee74178ffc448ff4e5197c756331f71efcc66dc980f", size = 1897823, upload_time = "2025-04-23T18:31:06.377Z" },
- { url = "https://files.pythonhosted.org/packages/f4/f3/aa5976e8352b7695ff808599794b1fba2a9ae2ee954a3426855935799488/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a144d4f717285c6d9234a66778059f33a89096dfb9b39117663fd8413d582dcc", size = 1983792, upload_time = "2025-04-23T18:31:07.93Z" },
- { url = "https://files.pythonhosted.org/packages/d5/7a/cda9b5a23c552037717f2b2a5257e9b2bfe45e687386df9591eff7b46d28/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:73cf6373c21bc80b2e0dc88444f41ae60b2f070ed02095754eb5a01df12256de", size = 2136338, upload_time = "2025-04-23T18:31:09.283Z" },
- { url = "https://files.pythonhosted.org/packages/2b/9f/b8f9ec8dd1417eb9da784e91e1667d58a2a4a7b7b34cf4af765ef663a7e5/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3dc625f4aa79713512d1976fe9f0bc99f706a9dee21dfd1810b4bbbf228d0e8a", size = 2730998, upload_time = "2025-04-23T18:31:11.7Z" },
- { url = "https://files.pythonhosted.org/packages/47/bc/cd720e078576bdb8255d5032c5d63ee5c0bf4b7173dd955185a1d658c456/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:881b21b5549499972441da4758d662aeea93f1923f953e9cbaff14b8b9565aef", size = 2003200, upload_time = "2025-04-23T18:31:13.536Z" },
- { url = "https://files.pythonhosted.org/packages/ca/22/3602b895ee2cd29d11a2b349372446ae9727c32e78a94b3d588a40fdf187/pydantic_core-2.33.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:bdc25f3681f7b78572699569514036afe3c243bc3059d3942624e936ec93450e", size = 2113890, upload_time = "2025-04-23T18:31:15.011Z" },
- { url = "https://files.pythonhosted.org/packages/ff/e6/e3c5908c03cf00d629eb38393a98fccc38ee0ce8ecce32f69fc7d7b558a7/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:fe5b32187cbc0c862ee201ad66c30cf218e5ed468ec8dc1cf49dec66e160cc4d", size = 2073359, upload_time = "2025-04-23T18:31:16.393Z" },
- { url = "https://files.pythonhosted.org/packages/12/e7/6a36a07c59ebefc8777d1ffdaf5ae71b06b21952582e4b07eba88a421c79/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:bc7aee6f634a6f4a95676fcb5d6559a2c2a390330098dba5e5a5f28a2e4ada30", size = 2245883, upload_time = "2025-04-23T18:31:17.892Z" },
- { url = "https://files.pythonhosted.org/packages/16/3f/59b3187aaa6cc0c1e6616e8045b284de2b6a87b027cce2ffcea073adf1d2/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:235f45e5dbcccf6bd99f9f472858849f73d11120d76ea8707115415f8e5ebebf", size = 2241074, upload_time = "2025-04-23T18:31:19.205Z" },
- { url = "https://files.pythonhosted.org/packages/e0/ed/55532bb88f674d5d8f67ab121a2a13c385df382de2a1677f30ad385f7438/pydantic_core-2.33.2-cp311-cp311-win32.whl", hash = "sha256:6368900c2d3ef09b69cb0b913f9f8263b03786e5b2a387706c5afb66800efd51", size = 1910538, upload_time = "2025-04-23T18:31:20.541Z" },
- { url = "https://files.pythonhosted.org/packages/fe/1b/25b7cccd4519c0b23c2dd636ad39d381abf113085ce4f7bec2b0dc755eb1/pydantic_core-2.33.2-cp311-cp311-win_amd64.whl", hash = "sha256:1e063337ef9e9820c77acc768546325ebe04ee38b08703244c1309cccc4f1bab", size = 1952909, upload_time = "2025-04-23T18:31:22.371Z" },
- { url = "https://files.pythonhosted.org/packages/49/a9/d809358e49126438055884c4366a1f6227f0f84f635a9014e2deb9b9de54/pydantic_core-2.33.2-cp311-cp311-win_arm64.whl", hash = "sha256:6b99022f1d19bc32a4c2a0d544fc9a76e3be90f0b3f4af413f87d38749300e65", size = 1897786, upload_time = "2025-04-23T18:31:24.161Z" },
- { url = "https://files.pythonhosted.org/packages/18/8a/2b41c97f554ec8c71f2a8a5f85cb56a8b0956addfe8b0efb5b3d77e8bdc3/pydantic_core-2.33.2-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:a7ec89dc587667f22b6a0b6579c249fca9026ce7c333fc142ba42411fa243cdc", size = 2009000, upload_time = "2025-04-23T18:31:25.863Z" },
- { url = "https://files.pythonhosted.org/packages/a1/02/6224312aacb3c8ecbaa959897af57181fb6cf3a3d7917fd44d0f2917e6f2/pydantic_core-2.33.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:3c6db6e52c6d70aa0d00d45cdb9b40f0433b96380071ea80b09277dba021ddf7", size = 1847996, upload_time = "2025-04-23T18:31:27.341Z" },
- { url = "https://files.pythonhosted.org/packages/d6/46/6dcdf084a523dbe0a0be59d054734b86a981726f221f4562aed313dbcb49/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4e61206137cbc65e6d5256e1166f88331d3b6238e082d9f74613b9b765fb9025", size = 1880957, upload_time = "2025-04-23T18:31:28.956Z" },
- { url = "https://files.pythonhosted.org/packages/ec/6b/1ec2c03837ac00886ba8160ce041ce4e325b41d06a034adbef11339ae422/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:eb8c529b2819c37140eb51b914153063d27ed88e3bdc31b71198a198e921e011", size = 1964199, upload_time = "2025-04-23T18:31:31.025Z" },
- { url = "https://files.pythonhosted.org/packages/2d/1d/6bf34d6adb9debd9136bd197ca72642203ce9aaaa85cfcbfcf20f9696e83/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c52b02ad8b4e2cf14ca7b3d918f3eb0ee91e63b3167c32591e57c4317e134f8f", size = 2120296, upload_time = "2025-04-23T18:31:32.514Z" },
- { url = "https://files.pythonhosted.org/packages/e0/94/2bd0aaf5a591e974b32a9f7123f16637776c304471a0ab33cf263cf5591a/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:96081f1605125ba0855dfda83f6f3df5ec90c61195421ba72223de35ccfb2f88", size = 2676109, upload_time = "2025-04-23T18:31:33.958Z" },
- { url = "https://files.pythonhosted.org/packages/f9/41/4b043778cf9c4285d59742281a769eac371b9e47e35f98ad321349cc5d61/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f57a69461af2a5fa6e6bbd7a5f60d3b7e6cebb687f55106933188e79ad155c1", size = 2002028, upload_time = "2025-04-23T18:31:39.095Z" },
- { url = "https://files.pythonhosted.org/packages/cb/d5/7bb781bf2748ce3d03af04d5c969fa1308880e1dca35a9bd94e1a96a922e/pydantic_core-2.33.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:572c7e6c8bb4774d2ac88929e3d1f12bc45714ae5ee6d9a788a9fb35e60bb04b", size = 2100044, upload_time = "2025-04-23T18:31:41.034Z" },
- { url = "https://files.pythonhosted.org/packages/fe/36/def5e53e1eb0ad896785702a5bbfd25eed546cdcf4087ad285021a90ed53/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:db4b41f9bd95fbe5acd76d89920336ba96f03e149097365afe1cb092fceb89a1", size = 2058881, upload_time = "2025-04-23T18:31:42.757Z" },
- { url = "https://files.pythonhosted.org/packages/01/6c/57f8d70b2ee57fc3dc8b9610315949837fa8c11d86927b9bb044f8705419/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:fa854f5cf7e33842a892e5c73f45327760bc7bc516339fda888c75ae60edaeb6", size = 2227034, upload_time = "2025-04-23T18:31:44.304Z" },
- { url = "https://files.pythonhosted.org/packages/27/b9/9c17f0396a82b3d5cbea4c24d742083422639e7bb1d5bf600e12cb176a13/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:5f483cfb75ff703095c59e365360cb73e00185e01aaea067cd19acffd2ab20ea", size = 2234187, upload_time = "2025-04-23T18:31:45.891Z" },
- { url = "https://files.pythonhosted.org/packages/b0/6a/adf5734ffd52bf86d865093ad70b2ce543415e0e356f6cacabbc0d9ad910/pydantic_core-2.33.2-cp312-cp312-win32.whl", hash = "sha256:9cb1da0f5a471435a7bc7e439b8a728e8b61e59784b2af70d7c169f8dd8ae290", size = 1892628, upload_time = "2025-04-23T18:31:47.819Z" },
- { url = "https://files.pythonhosted.org/packages/43/e4/5479fecb3606c1368d496a825d8411e126133c41224c1e7238be58b87d7e/pydantic_core-2.33.2-cp312-cp312-win_amd64.whl", hash = "sha256:f941635f2a3d96b2973e867144fde513665c87f13fe0e193c158ac51bfaaa7b2", size = 1955866, upload_time = "2025-04-23T18:31:49.635Z" },
- { url = "https://files.pythonhosted.org/packages/0d/24/8b11e8b3e2be9dd82df4b11408a67c61bb4dc4f8e11b5b0fc888b38118b5/pydantic_core-2.33.2-cp312-cp312-win_arm64.whl", hash = "sha256:cca3868ddfaccfbc4bfb1d608e2ccaaebe0ae628e1416aeb9c4d88c001bb45ab", size = 1888894, upload_time = "2025-04-23T18:31:51.609Z" },
- { url = "https://files.pythonhosted.org/packages/46/8c/99040727b41f56616573a28771b1bfa08a3d3fe74d3d513f01251f79f172/pydantic_core-2.33.2-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:1082dd3e2d7109ad8b7da48e1d4710c8d06c253cbc4a27c1cff4fbcaa97a9e3f", size = 2015688, upload_time = "2025-04-23T18:31:53.175Z" },
- { url = "https://files.pythonhosted.org/packages/3a/cc/5999d1eb705a6cefc31f0b4a90e9f7fc400539b1a1030529700cc1b51838/pydantic_core-2.33.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:f517ca031dfc037a9c07e748cefd8d96235088b83b4f4ba8939105d20fa1dcd6", size = 1844808, upload_time = "2025-04-23T18:31:54.79Z" },
- { url = "https://files.pythonhosted.org/packages/6f/5e/a0a7b8885c98889a18b6e376f344da1ef323d270b44edf8174d6bce4d622/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0a9f2c9dd19656823cb8250b0724ee9c60a82f3cdf68a080979d13092a3b0fef", size = 1885580, upload_time = "2025-04-23T18:31:57.393Z" },
- { url = "https://files.pythonhosted.org/packages/3b/2a/953581f343c7d11a304581156618c3f592435523dd9d79865903272c256a/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2b0a451c263b01acebe51895bfb0e1cc842a5c666efe06cdf13846c7418caa9a", size = 1973859, upload_time = "2025-04-23T18:31:59.065Z" },
- { url = "https://files.pythonhosted.org/packages/e6/55/f1a813904771c03a3f97f676c62cca0c0a4138654107c1b61f19c644868b/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1ea40a64d23faa25e62a70ad163571c0b342b8bf66d5fa612ac0dec4f069d916", size = 2120810, upload_time = "2025-04-23T18:32:00.78Z" },
- { url = "https://files.pythonhosted.org/packages/aa/c3/053389835a996e18853ba107a63caae0b9deb4a276c6b472931ea9ae6e48/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0fb2d542b4d66f9470e8065c5469ec676978d625a8b7a363f07d9a501a9cb36a", size = 2676498, upload_time = "2025-04-23T18:32:02.418Z" },
- { url = "https://files.pythonhosted.org/packages/eb/3c/f4abd740877a35abade05e437245b192f9d0ffb48bbbbd708df33d3cda37/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9fdac5d6ffa1b5a83bca06ffe7583f5576555e6c8b3a91fbd25ea7780f825f7d", size = 2000611, upload_time = "2025-04-23T18:32:04.152Z" },
- { url = "https://files.pythonhosted.org/packages/59/a7/63ef2fed1837d1121a894d0ce88439fe3e3b3e48c7543b2a4479eb99c2bd/pydantic_core-2.33.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:04a1a413977ab517154eebb2d326da71638271477d6ad87a769102f7c2488c56", size = 2107924, upload_time = "2025-04-23T18:32:06.129Z" },
- { url = "https://files.pythonhosted.org/packages/04/8f/2551964ef045669801675f1cfc3b0d74147f4901c3ffa42be2ddb1f0efc4/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:c8e7af2f4e0194c22b5b37205bfb293d166a7344a5b0d0eaccebc376546d77d5", size = 2063196, upload_time = "2025-04-23T18:32:08.178Z" },
- { url = "https://files.pythonhosted.org/packages/26/bd/d9602777e77fc6dbb0c7db9ad356e9a985825547dce5ad1d30ee04903918/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:5c92edd15cd58b3c2d34873597a1e20f13094f59cf88068adb18947df5455b4e", size = 2236389, upload_time = "2025-04-23T18:32:10.242Z" },
- { url = "https://files.pythonhosted.org/packages/42/db/0e950daa7e2230423ab342ae918a794964b053bec24ba8af013fc7c94846/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:65132b7b4a1c0beded5e057324b7e16e10910c106d43675d9bd87d4f38dde162", size = 2239223, upload_time = "2025-04-23T18:32:12.382Z" },
- { url = "https://files.pythonhosted.org/packages/58/4d/4f937099c545a8a17eb52cb67fe0447fd9a373b348ccfa9a87f141eeb00f/pydantic_core-2.33.2-cp313-cp313-win32.whl", hash = "sha256:52fb90784e0a242bb96ec53f42196a17278855b0f31ac7c3cc6f5c1ec4811849", size = 1900473, upload_time = "2025-04-23T18:32:14.034Z" },
- { url = "https://files.pythonhosted.org/packages/a0/75/4a0a9bac998d78d889def5e4ef2b065acba8cae8c93696906c3a91f310ca/pydantic_core-2.33.2-cp313-cp313-win_amd64.whl", hash = "sha256:c083a3bdd5a93dfe480f1125926afcdbf2917ae714bdb80b36d34318b2bec5d9", size = 1955269, upload_time = "2025-04-23T18:32:15.783Z" },
- { url = "https://files.pythonhosted.org/packages/f9/86/1beda0576969592f1497b4ce8e7bc8cbdf614c352426271b1b10d5f0aa64/pydantic_core-2.33.2-cp313-cp313-win_arm64.whl", hash = "sha256:e80b087132752f6b3d714f041ccf74403799d3b23a72722ea2e6ba2e892555b9", size = 1893921, upload_time = "2025-04-23T18:32:18.473Z" },
- { url = "https://files.pythonhosted.org/packages/a4/7d/e09391c2eebeab681df2b74bfe6c43422fffede8dc74187b2b0bf6fd7571/pydantic_core-2.33.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:61c18fba8e5e9db3ab908620af374db0ac1baa69f0f32df4f61ae23f15e586ac", size = 1806162, upload_time = "2025-04-23T18:32:20.188Z" },
- { url = "https://files.pythonhosted.org/packages/f1/3d/847b6b1fed9f8ed3bb95a9ad04fbd0b212e832d4f0f50ff4d9ee5a9f15cf/pydantic_core-2.33.2-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95237e53bb015f67b63c91af7518a62a8660376a6a0db19b89acc77a4d6199f5", size = 1981560, upload_time = "2025-04-23T18:32:22.354Z" },
- { url = "https://files.pythonhosted.org/packages/6f/9a/e73262f6c6656262b5fdd723ad90f518f579b7bc8622e43a942eec53c938/pydantic_core-2.33.2-cp313-cp313t-win_amd64.whl", hash = "sha256:c2fc0a768ef76c15ab9238afa6da7f69895bb5d1ee83aeea2e3509af4472d0b9", size = 1935777, upload_time = "2025-04-23T18:32:25.088Z" },
- { url = "https://files.pythonhosted.org/packages/7b/27/d4ae6487d73948d6f20dddcd94be4ea43e74349b56eba82e9bdee2d7494c/pydantic_core-2.33.2-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:dd14041875d09cc0f9308e37a6f8b65f5585cf2598a53aa0123df8b129d481f8", size = 2025200, upload_time = "2025-04-23T18:33:14.199Z" },
- { url = "https://files.pythonhosted.org/packages/f1/b8/b3cb95375f05d33801024079b9392a5ab45267a63400bf1866e7ce0f0de4/pydantic_core-2.33.2-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:d87c561733f66531dced0da6e864f44ebf89a8fba55f31407b00c2f7f9449593", size = 1859123, upload_time = "2025-04-23T18:33:16.555Z" },
- { url = "https://files.pythonhosted.org/packages/05/bc/0d0b5adeda59a261cd30a1235a445bf55c7e46ae44aea28f7bd6ed46e091/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2f82865531efd18d6e07a04a17331af02cb7a651583c418df8266f17a63c6612", size = 1892852, upload_time = "2025-04-23T18:33:18.513Z" },
- { url = "https://files.pythonhosted.org/packages/3e/11/d37bdebbda2e449cb3f519f6ce950927b56d62f0b84fd9cb9e372a26a3d5/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bfb5112df54209d820d7bf9317c7a6c9025ea52e49f46b6a2060104bba37de7", size = 2067484, upload_time = "2025-04-23T18:33:20.475Z" },
- { url = "https://files.pythonhosted.org/packages/8c/55/1f95f0a05ce72ecb02a8a8a1c3be0579bbc29b1d5ab68f1378b7bebc5057/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:64632ff9d614e5eecfb495796ad51b0ed98c453e447a76bcbeeb69615079fc7e", size = 2108896, upload_time = "2025-04-23T18:33:22.501Z" },
- { url = "https://files.pythonhosted.org/packages/53/89/2b2de6c81fa131f423246a9109d7b2a375e83968ad0800d6e57d0574629b/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:f889f7a40498cc077332c7ab6b4608d296d852182211787d4f3ee377aaae66e8", size = 2069475, upload_time = "2025-04-23T18:33:24.528Z" },
- { url = "https://files.pythonhosted.org/packages/b8/e9/1f7efbe20d0b2b10f6718944b5d8ece9152390904f29a78e68d4e7961159/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:de4b83bb311557e439b9e186f733f6c645b9417c84e2eb8203f3f820a4b988bf", size = 2239013, upload_time = "2025-04-23T18:33:26.621Z" },
- { url = "https://files.pythonhosted.org/packages/3c/b2/5309c905a93811524a49b4e031e9851a6b00ff0fb668794472ea7746b448/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:82f68293f055f51b51ea42fafc74b6aad03e70e191799430b90c13d643059ebb", size = 2238715, upload_time = "2025-04-23T18:33:28.656Z" },
- { url = "https://files.pythonhosted.org/packages/32/56/8a7ca5d2cd2cda1d245d34b1c9a942920a718082ae8e54e5f3e5a58b7add/pydantic_core-2.33.2-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:329467cecfb529c925cf2bbd4d60d2c509bc2fb52a20c1045bf09bb70971a9c1", size = 2066757, upload_time = "2025-04-23T18:33:30.645Z" },
-]
-
-[[package]]
-name = "pydeck"
-version = "0.9.1"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "jinja2" },
- { name = "numpy" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/a1/ca/40e14e196864a0f61a92abb14d09b3d3da98f94ccb03b49cf51688140dab/pydeck-0.9.1.tar.gz", hash = "sha256:f74475ae637951d63f2ee58326757f8d4f9cd9f2a457cf42950715003e2cb605", size = 3832240, upload_time = "2024-05-10T15:36:21.153Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/ab/4c/b888e6cf58bd9db9c93f40d1c6be8283ff49d88919231afe93a6bcf61626/pydeck-0.9.1-py2.py3-none-any.whl", hash = "sha256:b3f75ba0d273fc917094fa61224f3f6076ca8752b93d46faf3bcfd9f9d59b038", size = 6900403, upload_time = "2024-05-10T15:36:17.36Z" },
-]
-
-[[package]]
-name = "pymongo"
-version = "4.13.0"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "dnspython" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/74/0c/1fb60383ab4b20566407b87f1a95b7f5cda83e8d5594da6fc84e2a543405/pymongo-4.13.0.tar.gz", hash = "sha256:92a06e3709e3c7e50820d352d3d4e60015406bcba69808937dac2a6d22226fde", size = 2166443, upload_time = "2025-05-14T19:11:08.649Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/27/21/422381c97454a56021c50f776847c1db6082f84a0944dda3823ef76b4860/pymongo-4.13.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:46c8bce9af98556110a950939f3eaa3f7648308d60df65feb783c780f8b9bfa9", size = 856909, upload_time = "2025-05-14T19:09:37.257Z" },
- { url = "https://files.pythonhosted.org/packages/c3/e6/b34ab65ad524bc34dc3aa634d3dc411f65c495842ebb25b2d8593fc4bbed/pymongo-4.13.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:dc9e412911f210d9b0eca42d25c22d3725809dda03dedbaf6f9ffa192d461905", size = 857202, upload_time = "2025-05-14T19:09:38.862Z" },
- { url = "https://files.pythonhosted.org/packages/ff/62/17d3f8ff1d2ff67d3ed2985fdf616520362cfe4ae3802df0e9601d5686c9/pymongo-4.13.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b9288188101506a9d1aa3f70f65b7f5f499f8f7d5c23ec86a47551d756e32059", size = 1426272, upload_time = "2025-05-14T19:09:41.103Z" },
- { url = "https://files.pythonhosted.org/packages/51/e2/22582d886d5a382fb605b3025047d75ec38f497cddefe86e29fca39c4363/pymongo-4.13.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5303e2074b85234e337ebe622d353ce38a35696cd47a7d970f84b545288aee01", size = 1477235, upload_time = "2025-05-14T19:09:43.099Z" },
- { url = "https://files.pythonhosted.org/packages/bd/e3/10bce21b8c0bf954c144638619099012a3e247c7d009df044f450fbaf340/pymongo-4.13.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d842e11eb94f7074314ff1d97a05790539a1d74c3048ce50ea9f0da1f4f96b0a", size = 1451677, upload_time = "2025-05-14T19:09:45.417Z" },
- { url = "https://files.pythonhosted.org/packages/30/10/4c54a4adf90a04e6147260e16f9cfeab11cb661d71ddd12a98449a279977/pymongo-4.13.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b63d9d8be87f4be11972c5a63d815974c298ada59a2e1d56ef5b6984d81c544a", size = 1430799, upload_time = "2025-05-14T19:09:47.516Z" },
- { url = "https://files.pythonhosted.org/packages/86/52/99620c5e106663a3679541b2316e0631b39cb49a6be14291597b28a8b428/pymongo-4.13.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c7d740560710be0c514bc9d26f5dcbb3c85dbb6b450c4c3246d8136ca84055bd", size = 1399450, upload_time = "2025-05-14T19:09:49.095Z" },
- { url = "https://files.pythonhosted.org/packages/f1/23/73d0379e46f98eed5339b6d44527e366b553c39327c69ba543f7beafb237/pymongo-4.13.0-cp311-cp311-win32.whl", hash = "sha256:936f7be9ed6919e3be7369b858d1c58ebaa4f3ef231cf4860779b8ba3b4fcd11", size = 834134, upload_time = "2025-05-14T19:09:50.682Z" },
- { url = "https://files.pythonhosted.org/packages/45/bd/d6286b923e852dc080330182a8b57023555870d875b7523454ad1bdd1579/pymongo-4.13.0-cp311-cp311-win_amd64.whl", hash = "sha256:6a8f060f8ad139d1d45f75ef7aa0084bd7f714fc666f98ef00009efc7db34acd", size = 848068, upload_time = "2025-05-14T19:09:52.778Z" },
- { url = "https://files.pythonhosted.org/packages/42/5e/db6871892ec41860339e94e20fabce664b64c193636dc69b572503382f12/pymongo-4.13.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:007450b8c8d17b4e5b779ab6e1938983309eac26b5b8f0863c48effa4b151b07", size = 911769, upload_time = "2025-05-14T19:09:54.483Z" },
- { url = "https://files.pythonhosted.org/packages/86/8b/6960dc8baf2b6e1b809513160913e90234160c5df2dc1f2baf1cf1d25ac9/pymongo-4.13.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:899a5ea9cd32b1b0880015fdceaa36a41140a8c2ce8621626c52f7023724aed6", size = 911464, upload_time = "2025-05-14T19:09:56.253Z" },
- { url = "https://files.pythonhosted.org/packages/41/fb/d682bf1c4cb656f47616796f707a1316862f71b3c1899cb6b6806803dff6/pymongo-4.13.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f0b26cd4e090161927b7a81741a3627a41b74265dfb41c6957bfb474504b4b42", size = 1690111, upload_time = "2025-05-14T19:09:58.331Z" },
- { url = "https://files.pythonhosted.org/packages/03/d4/0047767ee5b6c66e4b5b67a5d85de14da9910ee8f7d8159e7c1d5d627358/pymongo-4.13.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b54e19e0f6c8a7ad0c5074a8cbefb29c12267c784ceb9a1577a62bbc43150161", size = 1754348, upload_time = "2025-05-14T19:10:00.088Z" },
- { url = "https://files.pythonhosted.org/packages/7c/ea/e64f2501eaca552b0f303c2eb828c69963c8bf1a663111686a900502792d/pymongo-4.13.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6208b83e7d566935218c0837f3b74c7d2dda83804d5d843ce21a55f22255ab74", size = 1723390, upload_time = "2025-05-14T19:10:02.28Z" },
- { url = "https://files.pythonhosted.org/packages/d1/5c/fad80bc263281c8b819ce29ed1d88c2023c5576ecc608d15ca1628078e29/pymongo-4.13.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3f33b8c1405d05517dce06756f2800b37dd098216cae5903cd80ad4f0a9dad08", size = 1693367, upload_time = "2025-05-14T19:10:04.405Z" },
- { url = "https://files.pythonhosted.org/packages/c1/3d/4ff09614c996f8574d36008763b9fc01532ec7e954b5edde9254455b279b/pymongo-4.13.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:02f0e1af87280697a1a8304238b863d4eee98c8b97f554ee456c3041c0f3a021", size = 1652496, upload_time = "2025-05-14T19:10:06.528Z" },
- { url = "https://files.pythonhosted.org/packages/f2/2f/c4e54ac337e0ad3d91aae7de59849aaed28de6340112da2e2427f5e0c689/pymongo-4.13.0-cp312-cp312-win32.whl", hash = "sha256:5dea2f6b44697eda38a11ef754d2adfff5373c51b1ffda00b9fedc5facbd605f", size = 880497, upload_time = "2025-05-14T19:10:08.626Z" },
- { url = "https://files.pythonhosted.org/packages/6a/43/6595a52fe144bb0dae4d592e49c6c909f98033c4fa2eaa544b13e22ac6e8/pymongo-4.13.0-cp312-cp312-win_amd64.whl", hash = "sha256:c03e02129ad202d8e146480b398c4a3ea18266ee0754b6a4805de6baf4a6a8c7", size = 898742, upload_time = "2025-05-14T19:10:10.214Z" },
- { url = "https://files.pythonhosted.org/packages/5a/dc/9afa6091bce4adad7cad736dcdc35c139a9b551fc61032ef20c7ba17eae5/pymongo-4.13.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:92f5e75ae265e798be1a8a40a29e2ab934e156f3827ca0e1c47e69d43f4dcb31", size = 965996, upload_time = "2025-05-14T19:10:12.319Z" },
- { url = "https://files.pythonhosted.org/packages/36/69/e4242abffc0ee1501bb426d8a540e712e4f917491735f18622838b17f5a1/pymongo-4.13.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:3d631d879e934b46222f5092d8951cbb9fe83542649697c8d342ea7b5479f118", size = 965702, upload_time = "2025-05-14T19:10:14.051Z" },
- { url = "https://files.pythonhosted.org/packages/fc/3e/0732876b48b1285bada803f4b0d7da5b720cf8f778d2117bbed9e04473a3/pymongo-4.13.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:be048fb78e165243272a8cdbeb40d53eace82424b95417ab3ab6ec8e9b00c59b", size = 1953825, upload_time = "2025-05-14T19:10:16.214Z" },
- { url = "https://files.pythonhosted.org/packages/dc/3b/6713fed92cab64508a1fb8359397c0720202e5f36d7faf4ed71b05875180/pymongo-4.13.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d81d159bd23d8ac53a6e819cccee991cb9350ab2541dfaa25aeb2f712d23b0a5", size = 2031179, upload_time = "2025-05-14T19:10:18.307Z" },
- { url = "https://files.pythonhosted.org/packages/89/2b/1aad904563c312a0dc2ff752acf0f11194f836304d6e15d05dff3a33df08/pymongo-4.13.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8af08ba2886f08d334bc7e5d5c662c60ea2f16e813a2c35106f399463fa11087", size = 1995093, upload_time = "2025-05-14T19:10:20.089Z" },
- { url = "https://files.pythonhosted.org/packages/4c/cc/33786f4ce9a46c776f0d32601b353f8c42b552ea9ff8060c290c912b661e/pymongo-4.13.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6b91f59137e46cd3ff17d5684a18e8006d65d0ee62eb1068b512262d1c2c5ae8", size = 1955820, upload_time = "2025-05-14T19:10:21.788Z" },
- { url = "https://files.pythonhosted.org/packages/2d/dd/9a2a87bd4aab12a2281ac20d179912eed824cc6f67df49edd87fa4879b3e/pymongo-4.13.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:61733c8f1ded90ab671a08033ee99b837073c73e505b3b3b633a55a0326e77f4", size = 1905394, upload_time = "2025-05-14T19:10:23.684Z" },
- { url = "https://files.pythonhosted.org/packages/04/be/0a70db5e4c4e1c162207e31eaa3debf98476e0265b154f6d2252f85969b0/pymongo-4.13.0-cp313-cp313-win32.whl", hash = "sha256:d10d3967e87c21869f084af5716d02626a17f6f9ccc9379fcbece5821c2a9fb4", size = 926840, upload_time = "2025-05-14T19:10:25.505Z" },
- { url = "https://files.pythonhosted.org/packages/dd/a6/fb104175a7f15dd69691c8c32bd4b99c4338ec89fe094b6895c940cf2afb/pymongo-4.13.0-cp313-cp313-win_amd64.whl", hash = "sha256:a9fe172e93551ddfdb94b9ad34dccebc4b7b680dc1d131bc6bd661c4a5b2945c", size = 949383, upload_time = "2025-05-14T19:10:27.234Z" },
- { url = "https://files.pythonhosted.org/packages/62/3f/c89a6121b0143fde431f04c267a0d49159b499f518630a43aa6288709749/pymongo-4.13.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:5adc1349fd5c94d5dfbcbd1ad9858d1df61945a07f5905dcf17bb62eb4c81f93", size = 1022500, upload_time = "2025-05-14T19:10:29.002Z" },
- { url = "https://files.pythonhosted.org/packages/4b/89/8fc36b83768b44805dd3a1caf755f019b110d2111671950b39c8c7781cd9/pymongo-4.13.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:8e11ea726ff8ddc8c8393895cd7e93a57e2558c27273d3712797895c53d25692", size = 1022503, upload_time = "2025-05-14T19:10:30.757Z" },
- { url = "https://files.pythonhosted.org/packages/67/dc/f216cf6218f8ceb4025fd10e3de486553bd5373c3b71a45fef3483b745bb/pymongo-4.13.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c02160ab3a67eca393a2a2bb83dccddf4db2196d0d7c6a980a55157e4bdadc06", size = 2282184, upload_time = "2025-05-14T19:10:32.699Z" },
- { url = "https://files.pythonhosted.org/packages/56/32/08a9045dbcd76a25d36a0bd42c635b56d9aed47126bcca0e630a63e08444/pymongo-4.13.0-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:fca24e4df05501420b2ce2207c03f21fcbdfac1e3f41e312e61b8f416c5b4963", size = 2369224, upload_time = "2025-05-14T19:10:34.942Z" },
- { url = "https://files.pythonhosted.org/packages/16/63/7991853fa6cf5e52222f8f480081840fb452d78c1dcd6803cabe2d3557a6/pymongo-4.13.0-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:50c503b7e809e54740704ec4c87a0f2ccdb910c3b1d36c07dbd2029b6eaa6a50", size = 2328611, upload_time = "2025-05-14T19:10:36.791Z" },
- { url = "https://files.pythonhosted.org/packages/e9/0f/11beecc8d48c7549db3f13f2101fd1c06ccb668697d33a6a5a05bb955574/pymongo-4.13.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:66800de4f4487e7c437991b44bc1e717aadaf06e67451a760efe5cd81ce86575", size = 2279806, upload_time = "2025-05-14T19:10:38.652Z" },
- { url = "https://files.pythonhosted.org/packages/17/a7/0358efc8dba796545e9bd4642d1337a9b67b60928c583799fb0726594855/pymongo-4.13.0-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:82c36928c1c26580ce4f2497a6875968636e87c77108ff253d76b1355181a405", size = 2219131, upload_time = "2025-05-14T19:10:40.444Z" },
- { url = "https://files.pythonhosted.org/packages/58/d5/373cd1cd21eff769e22e4e0924dcbfd770dfa1298566d51a7097857267fc/pymongo-4.13.0-cp313-cp313t-win32.whl", hash = "sha256:1397eac713b84946210ab556666cfdd787eee824e910fbbe661d147e110ec516", size = 975711, upload_time = "2025-05-14T19:10:42.213Z" },
- { url = "https://files.pythonhosted.org/packages/b0/39/1e204091bdf264a0d9eccc21f7da099903a7a30045f055a91178686c0259/pymongo-4.13.0-cp313-cp313t-win_amd64.whl", hash = "sha256:99a52cfbf31579cc63c926048cd0ada6f96c98c1c4c211356193e07418e6207c", size = 1004287, upload_time = "2025-05-14T19:10:45.468Z" },
-]
-
-[[package]]
-name = "python-dateutil"
-version = "2.9.0.post0"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "six" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/66/c0/0c8b6ad9f17a802ee498c46e004a0eb49bc148f2fd230864601a86dcf6db/python-dateutil-2.9.0.post0.tar.gz", hash = "sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3", size = 342432, upload_time = "2024-03-01T18:36:20.211Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/ec/57/56b9bcc3c9c6a792fcbaf139543cee77261f3651ca9da0c93f5c1221264b/python_dateutil-2.9.0.post0-py2.py3-none-any.whl", hash = "sha256:a8b2bc7bffae282281c8140a97d3aa9c14da0b136dfe83f850eea9a5f7470427", size = 229892, upload_time = "2024-03-01T18:36:18.57Z" },
-]
-
-[[package]]
-name = "python-dotenv"
-version = "1.1.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/88/2c/7bb1416c5620485aa793f2de31d3df393d3686aa8a8506d11e10e13c5baf/python_dotenv-1.1.0.tar.gz", hash = "sha256:41f90bc6f5f177fb41f53e87666db362025010eb28f60a01c9143bfa33a2b2d5", size = 39920, upload_time = "2025-03-25T10:14:56.835Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/1e/18/98a99ad95133c6a6e2005fe89faedf294a748bd5dc803008059409ac9b1e/python_dotenv-1.1.0-py3-none-any.whl", hash = "sha256:d7c01d9e2293916c18baf562d95698754b0dbbb5e74d457c45d4f6561fb9d55d", size = 20256, upload_time = "2025-03-25T10:14:55.034Z" },
-]
-
-[[package]]
-name = "pytz"
-version = "2025.2"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/f8/bf/abbd3cdfb8fbc7fb3d4d38d320f2441b1e7cbe29be4f23797b4a2b5d8aac/pytz-2025.2.tar.gz", hash = "sha256:360b9e3dbb49a209c21ad61809c7fb453643e048b38924c765813546746e81c3", size = 320884, upload_time = "2025-03-25T02:25:00.538Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/81/c4/34e93fe5f5429d7570ec1fa436f1986fb1f00c3e0f43a589fe2bbcd22c3f/pytz-2025.2-py2.py3-none-any.whl", hash = "sha256:5ddf76296dd8c44c26eb8f4b6f35488f3ccbf6fbbd7adee0b7262d43f0ec2f00", size = 509225, upload_time = "2025-03-25T02:24:58.468Z" },
-]
-
-[[package]]
-name = "pywin32"
-version = "310"
-source = { registry = "https://pypi.org/simple" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/f7/b1/68aa2986129fb1011dabbe95f0136f44509afaf072b12b8f815905a39f33/pywin32-310-cp311-cp311-win32.whl", hash = "sha256:1e765f9564e83011a63321bb9d27ec456a0ed90d3732c4b2e312b855365ed8bd", size = 8784284, upload_time = "2025-03-17T00:55:53.124Z" },
- { url = "https://files.pythonhosted.org/packages/b3/bd/d1592635992dd8db5bb8ace0551bc3a769de1ac8850200cfa517e72739fb/pywin32-310-cp311-cp311-win_amd64.whl", hash = "sha256:126298077a9d7c95c53823934f000599f66ec9296b09167810eb24875f32689c", size = 9520748, upload_time = "2025-03-17T00:55:55.203Z" },
- { url = "https://files.pythonhosted.org/packages/90/b1/ac8b1ffce6603849eb45a91cf126c0fa5431f186c2e768bf56889c46f51c/pywin32-310-cp311-cp311-win_arm64.whl", hash = "sha256:19ec5fc9b1d51c4350be7bb00760ffce46e6c95eaf2f0b2f1150657b1a43c582", size = 8455941, upload_time = "2025-03-17T00:55:57.048Z" },
- { url = "https://files.pythonhosted.org/packages/6b/ec/4fdbe47932f671d6e348474ea35ed94227fb5df56a7c30cbbb42cd396ed0/pywin32-310-cp312-cp312-win32.whl", hash = "sha256:8a75a5cc3893e83a108c05d82198880704c44bbaee4d06e442e471d3c9ea4f3d", size = 8796239, upload_time = "2025-03-17T00:55:58.807Z" },
- { url = "https://files.pythonhosted.org/packages/e3/e5/b0627f8bb84e06991bea89ad8153a9e50ace40b2e1195d68e9dff6b03d0f/pywin32-310-cp312-cp312-win_amd64.whl", hash = "sha256:bf5c397c9a9a19a6f62f3fb821fbf36cac08f03770056711f765ec1503972060", size = 9503839, upload_time = "2025-03-17T00:56:00.8Z" },
- { url = "https://files.pythonhosted.org/packages/1f/32/9ccf53748df72301a89713936645a664ec001abd35ecc8578beda593d37d/pywin32-310-cp312-cp312-win_arm64.whl", hash = "sha256:2349cc906eae872d0663d4d6290d13b90621eaf78964bb1578632ff20e152966", size = 8459470, upload_time = "2025-03-17T00:56:02.601Z" },
- { url = "https://files.pythonhosted.org/packages/1c/09/9c1b978ffc4ae53999e89c19c77ba882d9fce476729f23ef55211ea1c034/pywin32-310-cp313-cp313-win32.whl", hash = "sha256:5d241a659c496ada3253cd01cfaa779b048e90ce4b2b38cd44168ad555ce74ab", size = 8794384, upload_time = "2025-03-17T00:56:04.383Z" },
- { url = "https://files.pythonhosted.org/packages/45/3c/b4640f740ffebadd5d34df35fecba0e1cfef8fde9f3e594df91c28ad9b50/pywin32-310-cp313-cp313-win_amd64.whl", hash = "sha256:667827eb3a90208ddbdcc9e860c81bde63a135710e21e4cb3348968e4bd5249e", size = 9503039, upload_time = "2025-03-17T00:56:06.207Z" },
- { url = "https://files.pythonhosted.org/packages/b4/f4/f785020090fb050e7fb6d34b780f2231f302609dc964672f72bfaeb59a28/pywin32-310-cp313-cp313-win_arm64.whl", hash = "sha256:e308f831de771482b7cf692a1f308f8fca701b2d8f9dde6cc440c7da17e47b33", size = 8458152, upload_time = "2025-03-17T00:56:07.819Z" },
-]
-
-[[package]]
-name = "qdrant-client"
-version = "1.14.2"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "grpcio" },
- { name = "httpx", extra = ["http2"] },
- { name = "numpy" },
- { name = "portalocker" },
- { name = "protobuf" },
- { name = "pydantic" },
- { name = "urllib3" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/00/80/b84c4c52106b6da291829d8ec632f58a5692d2772e8d3c1d3be4f9a47a2e/qdrant_client-1.14.2.tar.gz", hash = "sha256:da5cab4d367d099d1330b6f30d45aefc8bd76f8b8f9d8fa5d4f813501b93af0d", size = 285531, upload_time = "2025-04-24T14:44:43.307Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/e4/52/f49b0aa96253010f57cf80315edecec4f469e7a39c1ed92bf727fa290e57/qdrant_client-1.14.2-py3-none-any.whl", hash = "sha256:7c283b1f0e71db9c21b85d898fb395791caca2a6d56ee751da96d797b001410c", size = 327691, upload_time = "2025-04-24T14:44:41.794Z" },
-]
-
-[[package]]
-name = "referencing"
-version = "0.36.2"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "attrs" },
- { name = "rpds-py" },
- { name = "typing-extensions", marker = "python_full_version < '3.13'" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/2f/db/98b5c277be99dd18bfd91dd04e1b759cad18d1a338188c936e92f921c7e2/referencing-0.36.2.tar.gz", hash = "sha256:df2e89862cd09deabbdba16944cc3f10feb6b3e6f18e902f7cc25609a34775aa", size = 74744, upload_time = "2025-01-25T08:48:16.138Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/c1/b1/3baf80dc6d2b7bc27a95a67752d0208e410351e3feb4eb78de5f77454d8d/referencing-0.36.2-py3-none-any.whl", hash = "sha256:e8699adbbf8b5c7de96d8ffa0eb5c158b3beafce084968e2ea8bb08c6794dcd0", size = 26775, upload_time = "2025-01-25T08:48:14.241Z" },
-]
-
-[[package]]
-name = "requests"
-version = "2.32.3"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "certifi" },
- { name = "charset-normalizer" },
- { name = "idna" },
- { name = "urllib3" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/63/70/2bf7780ad2d390a8d301ad0b550f1581eadbd9a20f896afe06353c2a2913/requests-2.32.3.tar.gz", hash = "sha256:55365417734eb18255590a9ff9eb97e9e1da868d4ccd6402399eaf68af20a760", size = 131218, upload_time = "2024-05-29T15:37:49.536Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/f9/9b/335f9764261e915ed497fcdeb11df5dfd6f7bf257d4a6a2a686d80da4d54/requests-2.32.3-py3-none-any.whl", hash = "sha256:70761cfe03c773ceb22aa2f671b4757976145175cdfca038c02654d061d6dcc6", size = 64928, upload_time = "2024-05-29T15:37:47.027Z" },
-]
-
-[[package]]
-name = "rpds-py"
-version = "0.25.1"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/8c/a6/60184b7fc00dd3ca80ac635dd5b8577d444c57e8e8742cecabfacb829921/rpds_py-0.25.1.tar.gz", hash = "sha256:8960b6dac09b62dac26e75d7e2c4a22efb835d827a7278c34f72b2b84fa160e3", size = 27304, upload_time = "2025-05-21T12:46:12.502Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/95/e1/df13fe3ddbbea43567e07437f097863b20c99318ae1f58a0fe389f763738/rpds_py-0.25.1-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:5f048bbf18b1f9120685c6d6bb70cc1a52c8cc11bdd04e643d28d3be0baf666d", size = 373341, upload_time = "2025-05-21T12:43:02.978Z" },
- { url = "https://files.pythonhosted.org/packages/7a/58/deef4d30fcbcbfef3b6d82d17c64490d5c94585a2310544ce8e2d3024f83/rpds_py-0.25.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:4fbb0dbba559959fcb5d0735a0f87cdbca9e95dac87982e9b95c0f8f7ad10255", size = 359111, upload_time = "2025-05-21T12:43:05.128Z" },
- { url = "https://files.pythonhosted.org/packages/bb/7e/39f1f4431b03e96ebaf159e29a0f82a77259d8f38b2dd474721eb3a8ac9b/rpds_py-0.25.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d4ca54b9cf9d80b4016a67a0193ebe0bcf29f6b0a96f09db942087e294d3d4c2", size = 386112, upload_time = "2025-05-21T12:43:07.13Z" },
- { url = "https://files.pythonhosted.org/packages/db/e7/847068a48d63aec2ae695a1646089620b3b03f8ccf9f02c122ebaf778f3c/rpds_py-0.25.1-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:1ee3e26eb83d39b886d2cb6e06ea701bba82ef30a0de044d34626ede51ec98b0", size = 400362, upload_time = "2025-05-21T12:43:08.693Z" },
- { url = "https://files.pythonhosted.org/packages/3b/3d/9441d5db4343d0cee759a7ab4d67420a476cebb032081763de934719727b/rpds_py-0.25.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:89706d0683c73a26f76a5315d893c051324d771196ae8b13e6ffa1ffaf5e574f", size = 522214, upload_time = "2025-05-21T12:43:10.694Z" },
- { url = "https://files.pythonhosted.org/packages/a2/ec/2cc5b30d95f9f1a432c79c7a2f65d85e52812a8f6cbf8768724571710786/rpds_py-0.25.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c2013ee878c76269c7b557a9a9c042335d732e89d482606990b70a839635feb7", size = 411491, upload_time = "2025-05-21T12:43:12.739Z" },
- { url = "https://files.pythonhosted.org/packages/dc/6c/44695c1f035077a017dd472b6a3253553780837af2fac9b6ac25f6a5cb4d/rpds_py-0.25.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:45e484db65e5380804afbec784522de84fa95e6bb92ef1bd3325d33d13efaebd", size = 386978, upload_time = "2025-05-21T12:43:14.25Z" },
- { url = "https://files.pythonhosted.org/packages/b1/74/b4357090bb1096db5392157b4e7ed8bb2417dc7799200fcbaee633a032c9/rpds_py-0.25.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:48d64155d02127c249695abb87d39f0faf410733428d499867606be138161d65", size = 420662, upload_time = "2025-05-21T12:43:15.8Z" },
- { url = "https://files.pythonhosted.org/packages/26/dd/8cadbebf47b96e59dfe8b35868e5c38a42272699324e95ed522da09d3a40/rpds_py-0.25.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:048893e902132fd6548a2e661fb38bf4896a89eea95ac5816cf443524a85556f", size = 563385, upload_time = "2025-05-21T12:43:17.78Z" },
- { url = "https://files.pythonhosted.org/packages/c3/ea/92960bb7f0e7a57a5ab233662f12152085c7dc0d5468534c65991a3d48c9/rpds_py-0.25.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:0317177b1e8691ab5879f4f33f4b6dc55ad3b344399e23df2e499de7b10a548d", size = 592047, upload_time = "2025-05-21T12:43:19.457Z" },
- { url = "https://files.pythonhosted.org/packages/61/ad/71aabc93df0d05dabcb4b0c749277881f8e74548582d96aa1bf24379493a/rpds_py-0.25.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:bffcf57826d77a4151962bf1701374e0fc87f536e56ec46f1abdd6a903354042", size = 557863, upload_time = "2025-05-21T12:43:21.69Z" },
- { url = "https://files.pythonhosted.org/packages/93/0f/89df0067c41f122b90b76f3660028a466eb287cbe38efec3ea70e637ca78/rpds_py-0.25.1-cp311-cp311-win32.whl", hash = "sha256:cda776f1967cb304816173b30994faaf2fd5bcb37e73118a47964a02c348e1bc", size = 219627, upload_time = "2025-05-21T12:43:23.311Z" },
- { url = "https://files.pythonhosted.org/packages/7c/8d/93b1a4c1baa903d0229374d9e7aa3466d751f1d65e268c52e6039c6e338e/rpds_py-0.25.1-cp311-cp311-win_amd64.whl", hash = "sha256:dc3c1ff0abc91444cd20ec643d0f805df9a3661fcacf9c95000329f3ddf268a4", size = 231603, upload_time = "2025-05-21T12:43:25.145Z" },
- { url = "https://files.pythonhosted.org/packages/cb/11/392605e5247bead2f23e6888e77229fbd714ac241ebbebb39a1e822c8815/rpds_py-0.25.1-cp311-cp311-win_arm64.whl", hash = "sha256:5a3ddb74b0985c4387719fc536faced33cadf2172769540c62e2a94b7b9be1c4", size = 223967, upload_time = "2025-05-21T12:43:26.566Z" },
- { url = "https://files.pythonhosted.org/packages/7f/81/28ab0408391b1dc57393653b6a0cf2014cc282cc2909e4615e63e58262be/rpds_py-0.25.1-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:b5ffe453cde61f73fea9430223c81d29e2fbf412a6073951102146c84e19e34c", size = 364647, upload_time = "2025-05-21T12:43:28.559Z" },
- { url = "https://files.pythonhosted.org/packages/2c/9a/7797f04cad0d5e56310e1238434f71fc6939d0bc517192a18bb99a72a95f/rpds_py-0.25.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:115874ae5e2fdcfc16b2aedc95b5eef4aebe91b28e7e21951eda8a5dc0d3461b", size = 350454, upload_time = "2025-05-21T12:43:30.615Z" },
- { url = "https://files.pythonhosted.org/packages/69/3c/93d2ef941b04898011e5d6eaa56a1acf46a3b4c9f4b3ad1bbcbafa0bee1f/rpds_py-0.25.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a714bf6e5e81b0e570d01f56e0c89c6375101b8463999ead3a93a5d2a4af91fa", size = 389665, upload_time = "2025-05-21T12:43:32.629Z" },
- { url = "https://files.pythonhosted.org/packages/c1/57/ad0e31e928751dde8903a11102559628d24173428a0f85e25e187defb2c1/rpds_py-0.25.1-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:35634369325906bcd01577da4c19e3b9541a15e99f31e91a02d010816b49bfda", size = 403873, upload_time = "2025-05-21T12:43:34.576Z" },
- { url = "https://files.pythonhosted.org/packages/16/ad/c0c652fa9bba778b4f54980a02962748479dc09632e1fd34e5282cf2556c/rpds_py-0.25.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d4cb2b3ddc16710548801c6fcc0cfcdeeff9dafbc983f77265877793f2660309", size = 525866, upload_time = "2025-05-21T12:43:36.123Z" },
- { url = "https://files.pythonhosted.org/packages/2a/39/3e1839bc527e6fcf48d5fec4770070f872cdee6c6fbc9b259932f4e88a38/rpds_py-0.25.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9ceca1cf097ed77e1a51f1dbc8d174d10cb5931c188a4505ff9f3e119dfe519b", size = 416886, upload_time = "2025-05-21T12:43:38.034Z" },
- { url = "https://files.pythonhosted.org/packages/7a/95/dd6b91cd4560da41df9d7030a038298a67d24f8ca38e150562644c829c48/rpds_py-0.25.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2c2cd1a4b0c2b8c5e31ffff50d09f39906fe351389ba143c195566056c13a7ea", size = 390666, upload_time = "2025-05-21T12:43:40.065Z" },
- { url = "https://files.pythonhosted.org/packages/64/48/1be88a820e7494ce0a15c2d390ccb7c52212370badabf128e6a7bb4cb802/rpds_py-0.25.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1de336a4b164c9188cb23f3703adb74a7623ab32d20090d0e9bf499a2203ad65", size = 425109, upload_time = "2025-05-21T12:43:42.263Z" },
- { url = "https://files.pythonhosted.org/packages/cf/07/3e2a17927ef6d7720b9949ec1b37d1e963b829ad0387f7af18d923d5cfa5/rpds_py-0.25.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:9fca84a15333e925dd59ce01da0ffe2ffe0d6e5d29a9eeba2148916d1824948c", size = 567244, upload_time = "2025-05-21T12:43:43.846Z" },
- { url = "https://files.pythonhosted.org/packages/d2/e5/76cf010998deccc4f95305d827847e2eae9c568099c06b405cf96384762b/rpds_py-0.25.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:88ec04afe0c59fa64e2f6ea0dd9657e04fc83e38de90f6de201954b4d4eb59bd", size = 596023, upload_time = "2025-05-21T12:43:45.932Z" },
- { url = "https://files.pythonhosted.org/packages/52/9a/df55efd84403736ba37a5a6377b70aad0fd1cb469a9109ee8a1e21299a1c/rpds_py-0.25.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:a8bd2f19e312ce3e1d2c635618e8a8d8132892bb746a7cf74780a489f0f6cdcb", size = 561634, upload_time = "2025-05-21T12:43:48.263Z" },
- { url = "https://files.pythonhosted.org/packages/ab/aa/dc3620dd8db84454aaf9374bd318f1aa02578bba5e567f5bf6b79492aca4/rpds_py-0.25.1-cp312-cp312-win32.whl", hash = "sha256:e5e2f7280d8d0d3ef06f3ec1b4fd598d386cc6f0721e54f09109a8132182fbfe", size = 222713, upload_time = "2025-05-21T12:43:49.897Z" },
- { url = "https://files.pythonhosted.org/packages/a3/7f/7cef485269a50ed5b4e9bae145f512d2a111ca638ae70cc101f661b4defd/rpds_py-0.25.1-cp312-cp312-win_amd64.whl", hash = "sha256:db58483f71c5db67d643857404da360dce3573031586034b7d59f245144cc192", size = 235280, upload_time = "2025-05-21T12:43:51.893Z" },
- { url = "https://files.pythonhosted.org/packages/99/f2/c2d64f6564f32af913bf5f3f7ae41c7c263c5ae4c4e8f1a17af8af66cd46/rpds_py-0.25.1-cp312-cp312-win_arm64.whl", hash = "sha256:6d50841c425d16faf3206ddbba44c21aa3310a0cebc3c1cdfc3e3f4f9f6f5728", size = 225399, upload_time = "2025-05-21T12:43:53.351Z" },
- { url = "https://files.pythonhosted.org/packages/2b/da/323848a2b62abe6a0fec16ebe199dc6889c5d0a332458da8985b2980dffe/rpds_py-0.25.1-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:659d87430a8c8c704d52d094f5ba6fa72ef13b4d385b7e542a08fc240cb4a559", size = 364498, upload_time = "2025-05-21T12:43:54.841Z" },
- { url = "https://files.pythonhosted.org/packages/1f/b4/4d3820f731c80fd0cd823b3e95b9963fec681ae45ba35b5281a42382c67d/rpds_py-0.25.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:68f6f060f0bbdfb0245267da014d3a6da9be127fe3e8cc4a68c6f833f8a23bb1", size = 350083, upload_time = "2025-05-21T12:43:56.428Z" },
- { url = "https://files.pythonhosted.org/packages/d5/b1/3a8ee1c9d480e8493619a437dec685d005f706b69253286f50f498cbdbcf/rpds_py-0.25.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:083a9513a33e0b92cf6e7a6366036c6bb43ea595332c1ab5c8ae329e4bcc0a9c", size = 389023, upload_time = "2025-05-21T12:43:57.995Z" },
- { url = "https://files.pythonhosted.org/packages/3b/31/17293edcfc934dc62c3bf74a0cb449ecd549531f956b72287203e6880b87/rpds_py-0.25.1-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:816568614ecb22b18a010c7a12559c19f6fe993526af88e95a76d5a60b8b75fb", size = 403283, upload_time = "2025-05-21T12:43:59.546Z" },
- { url = "https://files.pythonhosted.org/packages/d1/ca/e0f0bc1a75a8925024f343258c8ecbd8828f8997ea2ac71e02f67b6f5299/rpds_py-0.25.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3c6564c0947a7f52e4792983f8e6cf9bac140438ebf81f527a21d944f2fd0a40", size = 524634, upload_time = "2025-05-21T12:44:01.087Z" },
- { url = "https://files.pythonhosted.org/packages/3e/03/5d0be919037178fff33a6672ffc0afa04ea1cfcb61afd4119d1b5280ff0f/rpds_py-0.25.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5c4a128527fe415d73cf1f70a9a688d06130d5810be69f3b553bf7b45e8acf79", size = 416233, upload_time = "2025-05-21T12:44:02.604Z" },
- { url = "https://files.pythonhosted.org/packages/05/7c/8abb70f9017a231c6c961a8941403ed6557664c0913e1bf413cbdc039e75/rpds_py-0.25.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a49e1d7a4978ed554f095430b89ecc23f42014a50ac385eb0c4d163ce213c325", size = 390375, upload_time = "2025-05-21T12:44:04.162Z" },
- { url = "https://files.pythonhosted.org/packages/7a/ac/a87f339f0e066b9535074a9f403b9313fd3892d4a164d5d5f5875ac9f29f/rpds_py-0.25.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d74ec9bc0e2feb81d3f16946b005748119c0f52a153f6db6a29e8cd68636f295", size = 424537, upload_time = "2025-05-21T12:44:06.175Z" },
- { url = "https://files.pythonhosted.org/packages/1f/8f/8d5c1567eaf8c8afe98a838dd24de5013ce6e8f53a01bd47fe8bb06b5533/rpds_py-0.25.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:3af5b4cc10fa41e5bc64e5c198a1b2d2864337f8fcbb9a67e747e34002ce812b", size = 566425, upload_time = "2025-05-21T12:44:08.242Z" },
- { url = "https://files.pythonhosted.org/packages/95/33/03016a6be5663b389c8ab0bbbcca68d9e96af14faeff0a04affcb587e776/rpds_py-0.25.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:79dc317a5f1c51fd9c6a0c4f48209c6b8526d0524a6904fc1076476e79b00f98", size = 595197, upload_time = "2025-05-21T12:44:10.449Z" },
- { url = "https://files.pythonhosted.org/packages/33/8d/da9f4d3e208c82fda311bff0cf0a19579afceb77cf456e46c559a1c075ba/rpds_py-0.25.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:1521031351865e0181bc585147624d66b3b00a84109b57fcb7a779c3ec3772cd", size = 561244, upload_time = "2025-05-21T12:44:12.387Z" },
- { url = "https://files.pythonhosted.org/packages/e2/b3/39d5dcf7c5f742ecd6dbc88f6f84ae54184b92f5f387a4053be2107b17f1/rpds_py-0.25.1-cp313-cp313-win32.whl", hash = "sha256:5d473be2b13600b93a5675d78f59e63b51b1ba2d0476893415dfbb5477e65b31", size = 222254, upload_time = "2025-05-21T12:44:14.261Z" },
- { url = "https://files.pythonhosted.org/packages/5f/19/2d6772c8eeb8302c5f834e6d0dfd83935a884e7c5ce16340c7eaf89ce925/rpds_py-0.25.1-cp313-cp313-win_amd64.whl", hash = "sha256:a7b74e92a3b212390bdce1d93da9f6488c3878c1d434c5e751cbc202c5e09500", size = 234741, upload_time = "2025-05-21T12:44:16.236Z" },
- { url = "https://files.pythonhosted.org/packages/5b/5a/145ada26cfaf86018d0eb304fe55eafdd4f0b6b84530246bb4a7c4fb5c4b/rpds_py-0.25.1-cp313-cp313-win_arm64.whl", hash = "sha256:dd326a81afe332ede08eb39ab75b301d5676802cdffd3a8f287a5f0b694dc3f5", size = 224830, upload_time = "2025-05-21T12:44:17.749Z" },
- { url = "https://files.pythonhosted.org/packages/4b/ca/d435844829c384fd2c22754ff65889c5c556a675d2ed9eb0e148435c6690/rpds_py-0.25.1-cp313-cp313t-macosx_10_12_x86_64.whl", hash = "sha256:a58d1ed49a94d4183483a3ce0af22f20318d4a1434acee255d683ad90bf78129", size = 359668, upload_time = "2025-05-21T12:44:19.322Z" },
- { url = "https://files.pythonhosted.org/packages/1f/01/b056f21db3a09f89410d493d2f6614d87bb162499f98b649d1dbd2a81988/rpds_py-0.25.1-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:f251bf23deb8332823aef1da169d5d89fa84c89f67bdfb566c49dea1fccfd50d", size = 345649, upload_time = "2025-05-21T12:44:20.962Z" },
- { url = "https://files.pythonhosted.org/packages/e0/0f/e0d00dc991e3d40e03ca36383b44995126c36b3eafa0ccbbd19664709c88/rpds_py-0.25.1-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8dbd586bfa270c1103ece2109314dd423df1fa3d9719928b5d09e4840cec0d72", size = 384776, upload_time = "2025-05-21T12:44:22.516Z" },
- { url = "https://files.pythonhosted.org/packages/9f/a2/59374837f105f2ca79bde3c3cd1065b2f8c01678900924949f6392eab66d/rpds_py-0.25.1-cp313-cp313t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:6d273f136e912aa101a9274c3145dcbddbe4bac560e77e6d5b3c9f6e0ed06d34", size = 395131, upload_time = "2025-05-21T12:44:24.147Z" },
- { url = "https://files.pythonhosted.org/packages/9c/dc/48e8d84887627a0fe0bac53f0b4631e90976fd5d35fff8be66b8e4f3916b/rpds_py-0.25.1-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:666fa7b1bd0a3810a7f18f6d3a25ccd8866291fbbc3c9b912b917a6715874bb9", size = 520942, upload_time = "2025-05-21T12:44:25.915Z" },
- { url = "https://files.pythonhosted.org/packages/7c/f5/ee056966aeae401913d37befeeab57a4a43a4f00099e0a20297f17b8f00c/rpds_py-0.25.1-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:921954d7fbf3fccc7de8f717799304b14b6d9a45bbeec5a8d7408ccbf531faf5", size = 411330, upload_time = "2025-05-21T12:44:27.638Z" },
- { url = "https://files.pythonhosted.org/packages/ab/74/b2cffb46a097cefe5d17f94ede7a174184b9d158a0aeb195f39f2c0361e8/rpds_py-0.25.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f3d86373ff19ca0441ebeb696ef64cb58b8b5cbacffcda5a0ec2f3911732a194", size = 387339, upload_time = "2025-05-21T12:44:29.292Z" },
- { url = "https://files.pythonhosted.org/packages/7f/9a/0ff0b375dcb5161c2b7054e7d0b7575f1680127505945f5cabaac890bc07/rpds_py-0.25.1-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:c8980cde3bb8575e7c956a530f2c217c1d6aac453474bf3ea0f9c89868b531b6", size = 418077, upload_time = "2025-05-21T12:44:30.877Z" },
- { url = "https://files.pythonhosted.org/packages/0d/a1/fda629bf20d6b698ae84c7c840cfb0e9e4200f664fc96e1f456f00e4ad6e/rpds_py-0.25.1-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:8eb8c84ecea987a2523e057c0d950bcb3f789696c0499290b8d7b3107a719d78", size = 562441, upload_time = "2025-05-21T12:44:32.541Z" },
- { url = "https://files.pythonhosted.org/packages/20/15/ce4b5257f654132f326f4acd87268e1006cc071e2c59794c5bdf4bebbb51/rpds_py-0.25.1-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:e43a005671a9ed5a650f3bc39e4dbccd6d4326b24fb5ea8be5f3a43a6f576c72", size = 590750, upload_time = "2025-05-21T12:44:34.557Z" },
- { url = "https://files.pythonhosted.org/packages/fb/ab/e04bf58a8d375aeedb5268edcc835c6a660ebf79d4384d8e0889439448b0/rpds_py-0.25.1-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:58f77c60956501a4a627749a6dcb78dac522f249dd96b5c9f1c6af29bfacfb66", size = 558891, upload_time = "2025-05-21T12:44:37.358Z" },
- { url = "https://files.pythonhosted.org/packages/90/82/cb8c6028a6ef6cd2b7991e2e4ced01c854b6236ecf51e81b64b569c43d73/rpds_py-0.25.1-cp313-cp313t-win32.whl", hash = "sha256:2cb9e5b5e26fc02c8a4345048cd9998c2aca7c2712bd1b36da0c72ee969a3523", size = 218718, upload_time = "2025-05-21T12:44:38.969Z" },
- { url = "https://files.pythonhosted.org/packages/b6/97/5a4b59697111c89477d20ba8a44df9ca16b41e737fa569d5ae8bff99e650/rpds_py-0.25.1-cp313-cp313t-win_amd64.whl", hash = "sha256:401ca1c4a20cc0510d3435d89c069fe0a9ae2ee6495135ac46bdd49ec0495763", size = 232218, upload_time = "2025-05-21T12:44:40.512Z" },
- { url = "https://files.pythonhosted.org/packages/49/74/48f3df0715a585cbf5d34919c9c757a4c92c1a9eba059f2d334e72471f70/rpds_py-0.25.1-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:ee86d81551ec68a5c25373c5643d343150cc54672b5e9a0cafc93c1870a53954", size = 374208, upload_time = "2025-05-21T12:45:26.306Z" },
- { url = "https://files.pythonhosted.org/packages/55/b0/9b01bb11ce01ec03d05e627249cc2c06039d6aa24ea5a22a39c312167c10/rpds_py-0.25.1-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:89c24300cd4a8e4a51e55c31a8ff3918e6651b241ee8876a42cc2b2a078533ba", size = 359262, upload_time = "2025-05-21T12:45:28.322Z" },
- { url = "https://files.pythonhosted.org/packages/a9/eb/5395621618f723ebd5116c53282052943a726dba111b49cd2071f785b665/rpds_py-0.25.1-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:771c16060ff4e79584dc48902a91ba79fd93eade3aa3a12d6d2a4aadaf7d542b", size = 387366, upload_time = "2025-05-21T12:45:30.42Z" },
- { url = "https://files.pythonhosted.org/packages/68/73/3d51442bdb246db619d75039a50ea1cf8b5b4ee250c3e5cd5c3af5981cd4/rpds_py-0.25.1-pp311-pypy311_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:785ffacd0ee61c3e60bdfde93baa6d7c10d86f15655bd706c89da08068dc5038", size = 400759, upload_time = "2025-05-21T12:45:32.516Z" },
- { url = "https://files.pythonhosted.org/packages/b7/4c/3a32d5955d7e6cb117314597bc0f2224efc798428318b13073efe306512a/rpds_py-0.25.1-pp311-pypy311_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2a40046a529cc15cef88ac5ab589f83f739e2d332cb4d7399072242400ed68c9", size = 523128, upload_time = "2025-05-21T12:45:34.396Z" },
- { url = "https://files.pythonhosted.org/packages/be/95/1ffccd3b0bb901ae60b1dd4b1be2ab98bb4eb834cd9b15199888f5702f7b/rpds_py-0.25.1-pp311-pypy311_pp73-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:85fc223d9c76cabe5d0bff82214459189720dc135db45f9f66aa7cffbf9ff6c1", size = 411597, upload_time = "2025-05-21T12:45:36.164Z" },
- { url = "https://files.pythonhosted.org/packages/ef/6d/6e6cd310180689db8b0d2de7f7d1eabf3fb013f239e156ae0d5a1a85c27f/rpds_py-0.25.1-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b0be9965f93c222fb9b4cc254235b3b2b215796c03ef5ee64f995b1b69af0762", size = 388053, upload_time = "2025-05-21T12:45:38.45Z" },
- { url = "https://files.pythonhosted.org/packages/4a/87/ec4186b1fe6365ced6fa470960e68fc7804bafbe7c0cf5a36237aa240efa/rpds_py-0.25.1-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:8378fa4a940f3fb509c081e06cb7f7f2adae8cf46ef258b0e0ed7519facd573e", size = 421821, upload_time = "2025-05-21T12:45:40.732Z" },
- { url = "https://files.pythonhosted.org/packages/7a/60/84f821f6bf4e0e710acc5039d91f8f594fae0d93fc368704920d8971680d/rpds_py-0.25.1-pp311-pypy311_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:33358883a4490287e67a2c391dfaea4d9359860281db3292b6886bf0be3d8692", size = 564534, upload_time = "2025-05-21T12:45:42.672Z" },
- { url = "https://files.pythonhosted.org/packages/41/3a/bc654eb15d3b38f9330fe0f545016ba154d89cdabc6177b0295910cd0ebe/rpds_py-0.25.1-pp311-pypy311_pp73-musllinux_1_2_i686.whl", hash = "sha256:1d1fadd539298e70cac2f2cb36f5b8a65f742b9b9f1014dd4ea1f7785e2470bf", size = 592674, upload_time = "2025-05-21T12:45:44.533Z" },
- { url = "https://files.pythonhosted.org/packages/2e/ba/31239736f29e4dfc7a58a45955c5db852864c306131fd6320aea214d5437/rpds_py-0.25.1-pp311-pypy311_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:9a46c2fb2545e21181445515960006e85d22025bd2fe6db23e76daec6eb689fe", size = 558781, upload_time = "2025-05-21T12:45:46.281Z" },
-]
-
-[[package]]
-name = "six"
-version = "1.17.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/94/e7/b2c673351809dca68a0e064b6af791aa332cf192da575fd474ed7d6f16a2/six-1.17.0.tar.gz", hash = "sha256:ff70335d468e7eb6ec65b95b99d3a2836546063f63acc5171de367e834932a81", size = 34031, upload_time = "2024-12-04T17:35:28.174Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/b7/ce/149a00dd41f10bc29e5921b496af8b574d8413afcd5e30dfa0ed46c2cc5e/six-1.17.0-py2.py3-none-any.whl", hash = "sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274", size = 11050, upload_time = "2024-12-04T17:35:26.475Z" },
-]
-
-[[package]]
-name = "smmap"
-version = "5.0.2"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/44/cd/a040c4b3119bbe532e5b0732286f805445375489fceaec1f48306068ee3b/smmap-5.0.2.tar.gz", hash = "sha256:26ea65a03958fa0c8a1c7e8c7a58fdc77221b8910f6be2131affade476898ad5", size = 22329, upload_time = "2025-01-02T07:14:40.909Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/04/be/d09147ad1ec7934636ad912901c5fd7667e1c858e19d355237db0d0cd5e4/smmap-5.0.2-py3-none-any.whl", hash = "sha256:b30115f0def7d7531d22a0fb6502488d879e75b260a9db4d0819cfb25403af5e", size = 24303, upload_time = "2025-01-02T07:14:38.724Z" },
-]
-
-[[package]]
-name = "sniffio"
-version = "1.3.1"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/a2/87/a6771e1546d97e7e041b6ae58d80074f81b7d5121207425c964ddf5cfdbd/sniffio-1.3.1.tar.gz", hash = "sha256:f4324edc670a0f49750a81b895f35c3adb843cca46f0530f79fc1babb23789dc", size = 20372, upload_time = "2024-02-25T23:20:04.057Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/e9/44/75a9c9421471a6c4805dbf2356f7c181a29c1879239abab1ea2cc8f38b40/sniffio-1.3.1-py3-none-any.whl", hash = "sha256:2f6da418d1f1e0fddd844478f41680e794e6051915791a034ff65e5f100525a2", size = 10235, upload_time = "2024-02-25T23:20:01.196Z" },
-]
-
-[[package]]
-name = "sqlalchemy"
-version = "2.0.41"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "greenlet", marker = "(python_full_version < '3.14' and platform_machine == 'AMD64') or (python_full_version < '3.14' and platform_machine == 'WIN32') or (python_full_version < '3.14' and platform_machine == 'aarch64') or (python_full_version < '3.14' and platform_machine == 'amd64') or (python_full_version < '3.14' and platform_machine == 'ppc64le') or (python_full_version < '3.14' and platform_machine == 'win32') or (python_full_version < '3.14' and platform_machine == 'x86_64')" },
- { name = "typing-extensions" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/63/66/45b165c595ec89aa7dcc2c1cd222ab269bc753f1fc7a1e68f8481bd957bf/sqlalchemy-2.0.41.tar.gz", hash = "sha256:edba70118c4be3c2b1f90754d308d0b79c6fe2c0fdc52d8ddf603916f83f4db9", size = 9689424, upload_time = "2025-05-14T17:10:32.339Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/37/4e/b00e3ffae32b74b5180e15d2ab4040531ee1bef4c19755fe7926622dc958/sqlalchemy-2.0.41-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6375cd674fe82d7aa9816d1cb96ec592bac1726c11e0cafbf40eeee9a4516b5f", size = 2121232, upload_time = "2025-05-14T17:48:20.444Z" },
- { url = "https://files.pythonhosted.org/packages/ef/30/6547ebb10875302074a37e1970a5dce7985240665778cfdee2323709f749/sqlalchemy-2.0.41-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:9f8c9fdd15a55d9465e590a402f42082705d66b05afc3ffd2d2eb3c6ba919560", size = 2110897, upload_time = "2025-05-14T17:48:21.634Z" },
- { url = "https://files.pythonhosted.org/packages/9e/21/59df2b41b0f6c62da55cd64798232d7349a9378befa7f1bb18cf1dfd510a/sqlalchemy-2.0.41-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:32f9dc8c44acdee06c8fc6440db9eae8b4af8b01e4b1aee7bdd7241c22edff4f", size = 3273313, upload_time = "2025-05-14T17:51:56.205Z" },
- { url = "https://files.pythonhosted.org/packages/62/e4/b9a7a0e5c6f79d49bcd6efb6e90d7536dc604dab64582a9dec220dab54b6/sqlalchemy-2.0.41-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90c11ceb9a1f482c752a71f203a81858625d8df5746d787a4786bca4ffdf71c6", size = 3273807, upload_time = "2025-05-14T17:55:26.928Z" },
- { url = "https://files.pythonhosted.org/packages/39/d8/79f2427251b44ddee18676c04eab038d043cff0e764d2d8bb08261d6135d/sqlalchemy-2.0.41-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:911cc493ebd60de5f285bcae0491a60b4f2a9f0f5c270edd1c4dbaef7a38fc04", size = 3209632, upload_time = "2025-05-14T17:51:59.384Z" },
- { url = "https://files.pythonhosted.org/packages/d4/16/730a82dda30765f63e0454918c982fb7193f6b398b31d63c7c3bd3652ae5/sqlalchemy-2.0.41-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:03968a349db483936c249f4d9cd14ff2c296adfa1290b660ba6516f973139582", size = 3233642, upload_time = "2025-05-14T17:55:29.901Z" },
- { url = "https://files.pythonhosted.org/packages/04/61/c0d4607f7799efa8b8ea3c49b4621e861c8f5c41fd4b5b636c534fcb7d73/sqlalchemy-2.0.41-cp311-cp311-win32.whl", hash = "sha256:293cd444d82b18da48c9f71cd7005844dbbd06ca19be1ccf6779154439eec0b8", size = 2086475, upload_time = "2025-05-14T17:56:02.095Z" },
- { url = "https://files.pythonhosted.org/packages/9d/8e/8344f8ae1cb6a479d0741c02cd4f666925b2bf02e2468ddaf5ce44111f30/sqlalchemy-2.0.41-cp311-cp311-win_amd64.whl", hash = "sha256:3d3549fc3e40667ec7199033a4e40a2f669898a00a7b18a931d3efb4c7900504", size = 2110903, upload_time = "2025-05-14T17:56:03.499Z" },
- { url = "https://files.pythonhosted.org/packages/3e/2a/f1f4e068b371154740dd10fb81afb5240d5af4aa0087b88d8b308b5429c2/sqlalchemy-2.0.41-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:81f413674d85cfd0dfcd6512e10e0f33c19c21860342a4890c3a2b59479929f9", size = 2119645, upload_time = "2025-05-14T17:55:24.854Z" },
- { url = "https://files.pythonhosted.org/packages/9b/e8/c664a7e73d36fbfc4730f8cf2bf930444ea87270f2825efbe17bf808b998/sqlalchemy-2.0.41-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:598d9ebc1e796431bbd068e41e4de4dc34312b7aa3292571bb3674a0cb415dd1", size = 2107399, upload_time = "2025-05-14T17:55:28.097Z" },
- { url = "https://files.pythonhosted.org/packages/5c/78/8a9cf6c5e7135540cb682128d091d6afa1b9e48bd049b0d691bf54114f70/sqlalchemy-2.0.41-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a104c5694dfd2d864a6f91b0956eb5d5883234119cb40010115fd45a16da5e70", size = 3293269, upload_time = "2025-05-14T17:50:38.227Z" },
- { url = "https://files.pythonhosted.org/packages/3c/35/f74add3978c20de6323fb11cb5162702670cc7a9420033befb43d8d5b7a4/sqlalchemy-2.0.41-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6145afea51ff0af7f2564a05fa95eb46f542919e6523729663a5d285ecb3cf5e", size = 3303364, upload_time = "2025-05-14T17:51:49.829Z" },
- { url = "https://files.pythonhosted.org/packages/6a/d4/c990f37f52c3f7748ebe98883e2a0f7d038108c2c5a82468d1ff3eec50b7/sqlalchemy-2.0.41-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:b46fa6eae1cd1c20e6e6f44e19984d438b6b2d8616d21d783d150df714f44078", size = 3229072, upload_time = "2025-05-14T17:50:39.774Z" },
- { url = "https://files.pythonhosted.org/packages/15/69/cab11fecc7eb64bc561011be2bd03d065b762d87add52a4ca0aca2e12904/sqlalchemy-2.0.41-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:41836fe661cc98abfae476e14ba1906220f92c4e528771a8a3ae6a151242d2ae", size = 3268074, upload_time = "2025-05-14T17:51:51.736Z" },
- { url = "https://files.pythonhosted.org/packages/5c/ca/0c19ec16858585d37767b167fc9602593f98998a68a798450558239fb04a/sqlalchemy-2.0.41-cp312-cp312-win32.whl", hash = "sha256:a8808d5cf866c781150d36a3c8eb3adccfa41a8105d031bf27e92c251e3969d6", size = 2084514, upload_time = "2025-05-14T17:55:49.915Z" },
- { url = "https://files.pythonhosted.org/packages/7f/23/4c2833d78ff3010a4e17f984c734f52b531a8c9060a50429c9d4b0211be6/sqlalchemy-2.0.41-cp312-cp312-win_amd64.whl", hash = "sha256:5b14e97886199c1f52c14629c11d90c11fbb09e9334fa7bb5f6d068d9ced0ce0", size = 2111557, upload_time = "2025-05-14T17:55:51.349Z" },
- { url = "https://files.pythonhosted.org/packages/d3/ad/2e1c6d4f235a97eeef52d0200d8ddda16f6c4dd70ae5ad88c46963440480/sqlalchemy-2.0.41-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:4eeb195cdedaf17aab6b247894ff2734dcead6c08f748e617bfe05bd5a218443", size = 2115491, upload_time = "2025-05-14T17:55:31.177Z" },
- { url = "https://files.pythonhosted.org/packages/cf/8d/be490e5db8400dacc89056f78a52d44b04fbf75e8439569d5b879623a53b/sqlalchemy-2.0.41-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:d4ae769b9c1c7757e4ccce94b0641bc203bbdf43ba7a2413ab2523d8d047d8dc", size = 2102827, upload_time = "2025-05-14T17:55:34.921Z" },
- { url = "https://files.pythonhosted.org/packages/a0/72/c97ad430f0b0e78efaf2791342e13ffeafcbb3c06242f01a3bb8fe44f65d/sqlalchemy-2.0.41-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a62448526dd9ed3e3beedc93df9bb6b55a436ed1474db31a2af13b313a70a7e1", size = 3225224, upload_time = "2025-05-14T17:50:41.418Z" },
- { url = "https://files.pythonhosted.org/packages/5e/51/5ba9ea3246ea068630acf35a6ba0d181e99f1af1afd17e159eac7e8bc2b8/sqlalchemy-2.0.41-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dc56c9788617b8964ad02e8fcfeed4001c1f8ba91a9e1f31483c0dffb207002a", size = 3230045, upload_time = "2025-05-14T17:51:54.722Z" },
- { url = "https://files.pythonhosted.org/packages/78/2f/8c14443b2acea700c62f9b4a8bad9e49fc1b65cfb260edead71fd38e9f19/sqlalchemy-2.0.41-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:c153265408d18de4cc5ded1941dcd8315894572cddd3c58df5d5b5705b3fa28d", size = 3159357, upload_time = "2025-05-14T17:50:43.483Z" },
- { url = "https://files.pythonhosted.org/packages/fc/b2/43eacbf6ccc5276d76cea18cb7c3d73e294d6fb21f9ff8b4eef9b42bbfd5/sqlalchemy-2.0.41-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:4f67766965996e63bb46cfbf2ce5355fc32d9dd3b8ad7e536a920ff9ee422e23", size = 3197511, upload_time = "2025-05-14T17:51:57.308Z" },
- { url = "https://files.pythonhosted.org/packages/fa/2e/677c17c5d6a004c3c45334ab1dbe7b7deb834430b282b8a0f75ae220c8eb/sqlalchemy-2.0.41-cp313-cp313-win32.whl", hash = "sha256:bfc9064f6658a3d1cadeaa0ba07570b83ce6801a1314985bf98ec9b95d74e15f", size = 2082420, upload_time = "2025-05-14T17:55:52.69Z" },
- { url = "https://files.pythonhosted.org/packages/e9/61/e8c1b9b6307c57157d328dd8b8348ddc4c47ffdf1279365a13b2b98b8049/sqlalchemy-2.0.41-cp313-cp313-win_amd64.whl", hash = "sha256:82ca366a844eb551daff9d2e6e7a9e5e76d2612c8564f58db6c19a726869c1df", size = 2108329, upload_time = "2025-05-14T17:55:54.495Z" },
- { url = "https://files.pythonhosted.org/packages/1c/fc/9ba22f01b5cdacc8f5ed0d22304718d2c758fce3fd49a5372b886a86f37c/sqlalchemy-2.0.41-py3-none-any.whl", hash = "sha256:57df5dc6fdb5ed1a88a1ed2195fd31927e705cad62dedd86b46972752a80f576", size = 1911224, upload_time = "2025-05-14T17:39:42.154Z" },
-]
-
-[[package]]
-name = "streamlit"
-version = "1.45.1"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "altair" },
- { name = "blinker" },
- { name = "cachetools" },
- { name = "click" },
- { name = "gitpython" },
- { name = "numpy" },
- { name = "packaging" },
- { name = "pandas" },
- { name = "pillow" },
- { name = "protobuf" },
- { name = "pyarrow" },
- { name = "pydeck" },
- { name = "requests" },
- { name = "tenacity" },
- { name = "toml" },
- { name = "tornado" },
- { name = "typing-extensions" },
- { name = "watchdog", marker = "sys_platform != 'darwin'" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/f0/46/9b3f73886f82d27849ce1e7a74ae7c39f5323e46da0b6e8847ad4c25f44c/streamlit-1.45.1.tar.gz", hash = "sha256:e37d56c0af5240dbc240976880e81366689c290a559376417246f9b3f51b4217", size = 9463953, upload_time = "2025-05-12T20:40:30.562Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/13/e6/69fcbae3dd2fcb2f54283a7cbe03c8b944b79997f1b526984f91d4796a02/streamlit-1.45.1-py3-none-any.whl", hash = "sha256:9ab6951585e9444672dd650850f81767b01bba5d87c8dac9bc2e1c859d6cc254", size = 9856294, upload_time = "2025-05-12T20:40:27.875Z" },
-]
-
-[[package]]
-name = "tenacity"
-version = "9.1.2"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/0a/d4/2b0cd0fe285e14b36db076e78c93766ff1d529d70408bd1d2a5a84f1d929/tenacity-9.1.2.tar.gz", hash = "sha256:1169d376c297e7de388d18b4481760d478b0e99a777cad3a9c86e556f4b697cb", size = 48036, upload_time = "2025-04-02T08:25:09.966Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/e5/30/643397144bfbfec6f6ef821f36f33e57d35946c44a2352d3c9f0ae847619/tenacity-9.1.2-py3-none-any.whl", hash = "sha256:f77bf36710d8b73a50b2dd155c97b870017ad21afe6ab300326b0371b3b05138", size = 28248, upload_time = "2025-04-02T08:25:07.678Z" },
-]
-
-[[package]]
-name = "toml"
-version = "0.10.2"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/be/ba/1f744cdc819428fc6b5084ec34d9b30660f6f9daaf70eead706e3203ec3c/toml-0.10.2.tar.gz", hash = "sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f", size = 22253, upload_time = "2020-11-01T01:40:22.204Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/44/6f/7120676b6d73228c96e17f1f794d8ab046fc910d781c8d151120c3f1569e/toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b", size = 16588, upload_time = "2020-11-01T01:40:20.672Z" },
-]
-
-[[package]]
-name = "tornado"
-version = "6.5.1"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/51/89/c72771c81d25d53fe33e3dca61c233b665b2780f21820ba6fd2c6793c12b/tornado-6.5.1.tar.gz", hash = "sha256:84ceece391e8eb9b2b95578db65e920d2a61070260594819589609ba9bc6308c", size = 509934, upload_time = "2025-05-22T18:15:38.788Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/77/89/f4532dee6843c9e0ebc4e28d4be04c67f54f60813e4bf73d595fe7567452/tornado-6.5.1-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:d50065ba7fd11d3bd41bcad0825227cc9a95154bad83239357094c36708001f7", size = 441948, upload_time = "2025-05-22T18:15:20.862Z" },
- { url = "https://files.pythonhosted.org/packages/15/9a/557406b62cffa395d18772e0cdcf03bed2fff03b374677348eef9f6a3792/tornado-6.5.1-cp39-abi3-macosx_10_9_x86_64.whl", hash = "sha256:9e9ca370f717997cb85606d074b0e5b247282cf5e2e1611568b8821afe0342d6", size = 440112, upload_time = "2025-05-22T18:15:22.591Z" },
- { url = "https://files.pythonhosted.org/packages/55/82/7721b7319013a3cf881f4dffa4f60ceff07b31b394e459984e7a36dc99ec/tornado-6.5.1-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b77e9dfa7ed69754a54c89d82ef746398be82f749df69c4d3abe75c4d1ff4888", size = 443672, upload_time = "2025-05-22T18:15:24.027Z" },
- { url = "https://files.pythonhosted.org/packages/7d/42/d11c4376e7d101171b94e03cef0cbce43e823ed6567ceda571f54cf6e3ce/tornado-6.5.1-cp39-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:253b76040ee3bab8bcf7ba9feb136436a3787208717a1fb9f2c16b744fba7331", size = 443019, upload_time = "2025-05-22T18:15:25.735Z" },
- { url = "https://files.pythonhosted.org/packages/7d/f7/0c48ba992d875521ac761e6e04b0a1750f8150ae42ea26df1852d6a98942/tornado-6.5.1-cp39-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:308473f4cc5a76227157cdf904de33ac268af770b2c5f05ca6c1161d82fdd95e", size = 443252, upload_time = "2025-05-22T18:15:27.499Z" },
- { url = "https://files.pythonhosted.org/packages/89/46/d8d7413d11987e316df4ad42e16023cd62666a3c0dfa1518ffa30b8df06c/tornado-6.5.1-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:caec6314ce8a81cf69bd89909f4b633b9f523834dc1a352021775d45e51d9401", size = 443930, upload_time = "2025-05-22T18:15:29.299Z" },
- { url = "https://files.pythonhosted.org/packages/78/b2/f8049221c96a06df89bed68260e8ca94beca5ea532ffc63b1175ad31f9cc/tornado-6.5.1-cp39-abi3-musllinux_1_2_i686.whl", hash = "sha256:13ce6e3396c24e2808774741331638ee6c2f50b114b97a55c5b442df65fd9692", size = 443351, upload_time = "2025-05-22T18:15:31.038Z" },
- { url = "https://files.pythonhosted.org/packages/76/ff/6a0079e65b326cc222a54720a748e04a4db246870c4da54ece4577bfa702/tornado-6.5.1-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:5cae6145f4cdf5ab24744526cc0f55a17d76f02c98f4cff9daa08ae9a217448a", size = 443328, upload_time = "2025-05-22T18:15:32.426Z" },
- { url = "https://files.pythonhosted.org/packages/49/18/e3f902a1d21f14035b5bc6246a8c0f51e0eef562ace3a2cea403c1fb7021/tornado-6.5.1-cp39-abi3-win32.whl", hash = "sha256:e0a36e1bc684dca10b1aa75a31df8bdfed656831489bc1e6a6ebed05dc1ec365", size = 444396, upload_time = "2025-05-22T18:15:34.205Z" },
- { url = "https://files.pythonhosted.org/packages/7b/09/6526e32bf1049ee7de3bebba81572673b19a2a8541f795d887e92af1a8bc/tornado-6.5.1-cp39-abi3-win_amd64.whl", hash = "sha256:908e7d64567cecd4c2b458075589a775063453aeb1d2a1853eedb806922f568b", size = 444840, upload_time = "2025-05-22T18:15:36.1Z" },
- { url = "https://files.pythonhosted.org/packages/55/a7/535c44c7bea4578e48281d83c615219f3ab19e6abc67625ef637c73987be/tornado-6.5.1-cp39-abi3-win_arm64.whl", hash = "sha256:02420a0eb7bf617257b9935e2b754d1b63897525d8a289c9d65690d580b4dcf7", size = 443596, upload_time = "2025-05-22T18:15:37.433Z" },
-]
-
-[[package]]
-name = "tqdm"
-version = "4.67.1"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "colorama", marker = "sys_platform == 'win32'" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/a8/4b/29b4ef32e036bb34e4ab51796dd745cdba7ed47ad142a9f4a1eb8e0c744d/tqdm-4.67.1.tar.gz", hash = "sha256:f8aef9c52c08c13a65f30ea34f4e5aac3fd1a34959879d7e59e63027286627f2", size = 169737, upload_time = "2024-11-24T20:12:22.481Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/d0/30/dc54f88dd4a2b5dc8a0279bdd7270e735851848b762aeb1c1184ed1f6b14/tqdm-4.67.1-py3-none-any.whl", hash = "sha256:26445eca388f82e72884e0d580d5464cd801a3ea01e63e5601bdff9ba6a48de2", size = 78540, upload_time = "2024-11-24T20:12:19.698Z" },
-]
-
-[[package]]
-name = "typing-extensions"
-version = "4.13.2"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/f6/37/23083fcd6e35492953e8d2aaaa68b860eb422b34627b13f2ce3eb6106061/typing_extensions-4.13.2.tar.gz", hash = "sha256:e6c81219bd689f51865d9e372991c540bda33a0379d5573cddb9a3a23f7caaef", size = 106967, upload_time = "2025-04-10T14:19:05.416Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/8b/54/b1ae86c0973cc6f0210b53d508ca3641fb6d0c56823f288d108bc7ab3cc8/typing_extensions-4.13.2-py3-none-any.whl", hash = "sha256:a439e7c04b49fec3e5d3e2beaa21755cadbbdc391694e28ccdd36ca4a1408f8c", size = 45806, upload_time = "2025-04-10T14:19:03.967Z" },
-]
-
-[[package]]
-name = "typing-inspection"
-version = "0.4.1"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
- { name = "typing-extensions" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/f8/b1/0c11f5058406b3af7609f121aaa6b609744687f1d158b3c3a5bf4cc94238/typing_inspection-0.4.1.tar.gz", hash = "sha256:6ae134cc0203c33377d43188d4064e9b357dba58cff3185f22924610e70a9d28", size = 75726, upload_time = "2025-05-21T18:55:23.885Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/17/69/cd203477f944c353c31bade965f880aa1061fd6bf05ded0726ca845b6ff7/typing_inspection-0.4.1-py3-none-any.whl", hash = "sha256:389055682238f53b04f7badcb49b989835495a96700ced5dab2d8feae4b26f51", size = 14552, upload_time = "2025-05-21T18:55:22.152Z" },
-]
-
-[[package]]
-name = "tzdata"
-version = "2025.2"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/95/32/1a225d6164441be760d75c2c42e2780dc0873fe382da3e98a2e1e48361e5/tzdata-2025.2.tar.gz", hash = "sha256:b60a638fcc0daffadf82fe0f57e53d06bdec2f36c4df66280ae79bce6bd6f2b9", size = 196380, upload_time = "2025-03-23T13:54:43.652Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/5c/23/c7abc0ca0a1526a0774eca151daeb8de62ec457e77262b66b359c3c7679e/tzdata-2025.2-py2.py3-none-any.whl", hash = "sha256:1a403fada01ff9221ca8044d701868fa132215d84beb92242d9acd2147f667a8", size = 347839, upload_time = "2025-03-23T13:54:41.845Z" },
-]
-
-[[package]]
-name = "urllib3"
-version = "2.4.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/8a/78/16493d9c386d8e60e442a35feac5e00f0913c0f4b7c217c11e8ec2ff53e0/urllib3-2.4.0.tar.gz", hash = "sha256:414bc6535b787febd7567804cc015fee39daab8ad86268f1310a9250697de466", size = 390672, upload_time = "2025-04-10T15:23:39.232Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/6b/11/cc635220681e93a0183390e26485430ca2c7b5f9d33b15c74c2861cb8091/urllib3-2.4.0-py3-none-any.whl", hash = "sha256:4e16665048960a0900c702d4a66415956a584919c03361cac9f1df5c5dd7e813", size = 128680, upload_time = "2025-04-10T15:23:37.377Z" },
-]
-
-[[package]]
-name = "watchdog"
-version = "6.0.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/db/7d/7f3d619e951c88ed75c6037b246ddcf2d322812ee8ea189be89511721d54/watchdog-6.0.0.tar.gz", hash = "sha256:9ddf7c82fda3ae8e24decda1338ede66e1c99883db93711d8fb941eaa2d8c282", size = 131220, upload_time = "2024-11-01T14:07:13.037Z" }
-wheels = [
- { url = "https://files.pythonhosted.org/packages/a9/c7/ca4bf3e518cb57a686b2feb4f55a1892fd9a3dd13f470fca14e00f80ea36/watchdog-6.0.0-py3-none-manylinux2014_aarch64.whl", hash = "sha256:7607498efa04a3542ae3e05e64da8202e58159aa1fa4acddf7678d34a35d4f13", size = 79079, upload_time = "2024-11-01T14:06:59.472Z" },
- { url = "https://files.pythonhosted.org/packages/5c/51/d46dc9332f9a647593c947b4b88e2381c8dfc0942d15b8edc0310fa4abb1/watchdog-6.0.0-py3-none-manylinux2014_armv7l.whl", hash = "sha256:9041567ee8953024c83343288ccc458fd0a2d811d6a0fd68c4c22609e3490379", size = 79078, upload_time = "2024-11-01T14:07:01.431Z" },
- { url = "https://files.pythonhosted.org/packages/d4/57/04edbf5e169cd318d5f07b4766fee38e825d64b6913ca157ca32d1a42267/watchdog-6.0.0-py3-none-manylinux2014_i686.whl", hash = "sha256:82dc3e3143c7e38ec49d61af98d6558288c415eac98486a5c581726e0737c00e", size = 79076, upload_time = "2024-11-01T14:07:02.568Z" },
- { url = "https://files.pythonhosted.org/packages/ab/cc/da8422b300e13cb187d2203f20b9253e91058aaf7db65b74142013478e66/watchdog-6.0.0-py3-none-manylinux2014_ppc64.whl", hash = "sha256:212ac9b8bf1161dc91bd09c048048a95ca3a4c4f5e5d4a7d1b1a7d5752a7f96f", size = 79077, upload_time = "2024-11-01T14:07:03.893Z" },
- { url = "https://files.pythonhosted.org/packages/2c/3b/b8964e04ae1a025c44ba8e4291f86e97fac443bca31de8bd98d3263d2fcf/watchdog-6.0.0-py3-none-manylinux2014_ppc64le.whl", hash = "sha256:e3df4cbb9a450c6d49318f6d14f4bbc80d763fa587ba46ec86f99f9e6876bb26", size = 79078, upload_time = "2024-11-01T14:07:05.189Z" },
- { url = "https://files.pythonhosted.org/packages/62/ae/a696eb424bedff7407801c257d4b1afda455fe40821a2be430e173660e81/watchdog-6.0.0-py3-none-manylinux2014_s390x.whl", hash = "sha256:2cce7cfc2008eb51feb6aab51251fd79b85d9894e98ba847408f662b3395ca3c", size = 79077, upload_time = "2024-11-01T14:07:06.376Z" },
- { url = "https://files.pythonhosted.org/packages/b5/e8/dbf020b4d98251a9860752a094d09a65e1b436ad181faf929983f697048f/watchdog-6.0.0-py3-none-manylinux2014_x86_64.whl", hash = "sha256:20ffe5b202af80ab4266dcd3e91aae72bf2da48c0d33bdb15c66658e685e94e2", size = 79078, upload_time = "2024-11-01T14:07:07.547Z" },
- { url = "https://files.pythonhosted.org/packages/07/f6/d0e5b343768e8bcb4cda79f0f2f55051bf26177ecd5651f84c07567461cf/watchdog-6.0.0-py3-none-win32.whl", hash = "sha256:07df1fdd701c5d4c8e55ef6cf55b8f0120fe1aef7ef39a1c6fc6bc2e606d517a", size = 79065, upload_time = "2024-11-01T14:07:09.525Z" },
- { url = "https://files.pythonhosted.org/packages/db/d9/c495884c6e548fce18a8f40568ff120bc3a4b7b99813081c8ac0c936fa64/watchdog-6.0.0-py3-none-win_amd64.whl", hash = "sha256:cbafb470cf848d93b5d013e2ecb245d4aa1c8fd0504e863ccefa32445359d680", size = 79070, upload_time = "2024-11-01T14:07:10.686Z" },
- { url = "https://files.pythonhosted.org/packages/33/e8/e40370e6d74ddba47f002a32919d91310d6074130fe4e17dabcafc15cbf1/watchdog-6.0.0-py3-none-win_ia64.whl", hash = "sha256:a1914259fa9e1454315171103c6a30961236f508b9b623eae470268bbcc6a22f", size = 79067, upload_time = "2024-11-01T14:07:11.845Z" },
-]
-
-[[package]]
-name = "webui"
-version = "0.1.0"
-source = { virtual = "." }
-dependencies = [
- { name = "mem0ai" },
- { name = "ollama" },
- { name = "pandas" },
- { name = "pymongo" },
- { name = "python-dotenv" },
- { name = "streamlit" },
-]
-
-[package.metadata]
-requires-dist = [
- { name = "mem0ai", specifier = ">=0.1.102" },
- { name = "ollama", specifier = ">=0.5.1" },
- { name = "pandas", specifier = ">=2.2.3" },
- { name = "pymongo", specifier = ">=4.13.0" },
- { name = "python-dotenv", specifier = ">=1.1.0" },
- { name = "streamlit", specifier = ">=1.45.1" },
-]
diff --git a/extras/havpe-relay/.dockerignore b/extras/havpe-relay/.dockerignore
index 3e88997c..cdac220a 100644
--- a/extras/havpe-relay/.dockerignore
+++ b/extras/havpe-relay/.dockerignore
@@ -1,4 +1,5 @@
*
!main.py
!pyproject.toml
-!uv.lock
\ No newline at end of file
+!uv.lock
+!.env
\ No newline at end of file
diff --git a/extras/havpe-relay/.env.template b/extras/havpe-relay/.env.template
index ed2b1427..2c13fda5 100644
--- a/extras/havpe-relay/.env.template
+++ b/extras/havpe-relay/.env.template
@@ -1,2 +1,11 @@
-WS_URL="ws://host.docker.internal:8000/ws_pcm"
+# Backend Configuration
+BACKEND_URL="http://host.docker.internal:8000"
+BACKEND_WS_URL="ws://host.docker.internal:8000"
+
+# Authentication
+AUTH_USERNAME=
+AUTH_PASSWORD=
+
+# Device Configuration
+DEVICE_NAME=havpe
TCP_PORT=8989
\ No newline at end of file
diff --git a/extras/havpe-relay/docker-compose.yml b/extras/havpe-relay/docker-compose.yml
index 9d97ea23..a5c0aa10 100644
--- a/extras/havpe-relay/docker-compose.yml
+++ b/extras/havpe-relay/docker-compose.yml
@@ -9,6 +9,9 @@ services:
# Connect to backend running on host (adjust as needed)
- WS_URL=${WS_URL:-ws://host.docker.internal:8000/ws_pcm}
- TCP_PORT=${TCP_PORT:-8989}
+ # Authentication credentials for backend
+ - AUTH_USERNAME=${AUTH_USERNAME}
+ - AUTH_PASSWORD=${AUTH_PASSWORD}
# - VERBOSE=${VERBOSE:-1}
- DEBUG=${DEBUG:-0}
restart: unless-stopped
diff --git a/extras/havpe-relay/main.py b/extras/havpe-relay/main.py
index 1cc072b3..9899385d 100644
--- a/extras/havpe-relay/main.py
+++ b/extras/havpe-relay/main.py
@@ -6,6 +6,7 @@
- Forwards audio to backend
"""
+import os
import argparse
import asyncio
import logging
@@ -13,6 +14,7 @@
from typing import Optional
import numpy as np
+import requests
from wyoming.audio import AudioChunk
from easy_audio_interfaces import RollingFileSink
@@ -25,10 +27,122 @@
SAMP_WIDTH = 2 # bytes (16-bit)
RECONNECT_DELAY = 5 # seconds
+# Authentication configuration
+BACKEND_URL = os.getenv("BACKEND_URL", "http://host.docker.internal:8000")  # Backend API URL
+BACKEND_WS_URL = os.getenv("BACKEND_WS_URL", "ws://host.docker.internal:8000")  # Backend WebSocket URL
+AUTH_USERNAME = os.getenv("AUTH_USERNAME") # Can be email or 6-character user_id
+AUTH_PASSWORD = os.getenv("AUTH_PASSWORD")
+DEVICE_NAME = os.getenv("DEVICE_NAME", "havpe")  # Device name for client ID generation
+
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
+async def get_jwt_token(username: str, password: str, backend_url: str) -> Optional[str]:
+ """
+ Get JWT token from backend using username and password.
+
+ Args:
+ username: User email/username
+ password: User password
+ backend_url: Backend API URL
+
+ Returns:
+ JWT token string or None if authentication failed
+ """
+ try:
+ logger.info(f"🔐 Authenticating with backend as: {username}")
+
+ # Run the blocking request in a thread pool to avoid blocking the event loop
+ loop = asyncio.get_running_loop()
+ response = await loop.run_in_executor(
+ None,
+ lambda: requests.post(
+ f"{backend_url}/auth/jwt/login",
+ data={'username': username, 'password': password},
+ headers={'Content-Type': 'application/x-www-form-urlencoded'},
+ timeout=10
+ )
+ )
+
+ if response.status_code == 200:
+ auth_data = response.json()
+ token = auth_data.get('access_token')
+
+ if token:
+ logger.info("โ
JWT authentication successful")
+ return token
+ else:
+ logger.error("โ No access token in response")
+ return None
+ else:
+ error_msg = "Invalid credentials"
+ try:
+ error_data = response.json()
+ error_msg = error_data.get('detail', error_msg)
+ except:
+ pass
+ logger.error(f"โ Authentication failed: {error_msg}")
+ return None
+
+ except requests.exceptions.Timeout:
+ logger.error("โ Authentication request timed out")
+ return None
+ except requests.exceptions.RequestException as e:
+ logger.error(f"โ Authentication request failed: {e}")
+ return None
+ except Exception as e:
+ logger.error(f"โ Unexpected authentication error: {e}")
+ return None
+
+
+def create_authenticated_websocket_uri(base_ws_url: str, client_id: str, jwt_token: str) -> str:
+ """
+ Create WebSocket URI with JWT authentication.
+
+ Args:
+ base_ws_url: Base WebSocket URL (e.g., "ws://localhost:8000")
+ client_id: Client ID for the connection (not used in URL anymore)
+ jwt_token: JWT token for authentication
+
+ Returns:
+ Authenticated WebSocket URI
+ """
+ return f"{base_ws_url}/ws_pcm?token={jwt_token}&device_name={DEVICE_NAME}"
+
+
+async def get_authenticated_socket_client(
+ backend_url: str,
+ backend_ws_url: str,
+ username: str,
+ password: str
+) -> Optional[SocketClient]:
+ """
+ Create an authenticated WebSocket client for the backend.
+
+ Args:
+ backend_url: Backend API URL for authentication
+ backend_ws_url: Backend WebSocket URL
+ username: Authentication username (email or user_id)
+ password: Authentication password
+
+ Returns:
+ Authenticated SocketClient or None if authentication failed
+ """
+ # Get JWT token
+ jwt_token = await get_jwt_token(username, password, backend_url)
+ if not jwt_token:
+ logger.error("Failed to get JWT token, cannot create authenticated WebSocket client")
+ return None
+
+ # Create authenticated WebSocket URI (client_id will be generated by backend)
+ ws_uri = create_authenticated_websocket_uri(backend_ws_url, "", jwt_token)
+ logger.info(f"๐ Creating WebSocket connection to: {backend_ws_url}/ws_pcm?token={jwt_token[:20]}...&device_name={DEVICE_NAME}")
+
+ # Create socket client
+ return SocketClient(uri=ws_uri)
+
+
class ESP32TCPServer(TCPServer):
"""
A TCP server for ESP32 devices streaming 32-bit stereo audio.
@@ -110,33 +224,83 @@ async def read(self) -> Optional[AudioChunk]:
async def ensure_socket_connection(socket_client: SocketClient) -> bool:
"""Ensure socket client is connected, with retry logic."""
- while True:
+ max_retries = 3
+ for attempt in range(max_retries):
try:
- logger.info("Attempting to connect to socket...")
+ logger.info(f"Attempting to connect to authenticated WebSocket (attempt {attempt + 1}/{max_retries})...")
await socket_client.open()
- logger.info("Socket connection established")
+ logger.info("โ
Authenticated WebSocket connection established")
return True
except Exception as e:
- logger.error(f"Failed to connect to socket: {e}")
- logger.info(f"Retrying in {RECONNECT_DELAY} seconds...")
- await asyncio.sleep(RECONNECT_DELAY)
+ logger.error(f"โ Failed to connect to WebSocket: {e}")
+ if attempt < max_retries - 1:
+ logger.info(f"Retrying in {RECONNECT_DELAY} seconds...")
+ await asyncio.sleep(RECONNECT_DELAY)
+ else:
+ logger.error("โ All WebSocket connection attempts failed")
+ return False
+ return False
-async def send_with_retry(socket_client: SocketClient, chunk: AudioChunk) -> bool:
- """Send chunk with retry logic."""
- max_retries = 3
+async def create_and_connect_socket_client() -> Optional[SocketClient]:
+ """Create a new authenticated socket client and connect it."""
+ if not AUTH_USERNAME:
+ logger.error("โ AUTH_USERNAME is required for authentication")
+ return None
+
+ socket_client = await get_authenticated_socket_client(
+ backend_url=BACKEND_URL,
+ backend_ws_url=BACKEND_WS_URL,
+ username=str(AUTH_USERNAME),
+ password=str(AUTH_PASSWORD)
+ )
+
+ if not socket_client:
+ logger.error("โ Failed to create authenticated socket client")
+ return None
+
+ # Try to connect
+ if await ensure_socket_connection(socket_client):
+ return socket_client
+ else:
+ logger.error("โ Failed to establish connection with new socket client")
+ return None
+
+
+async def send_with_retry(socket_client: SocketClient, chunk: AudioChunk) -> tuple[bool, bool]:
+ """
+ Send chunk with retry logic.
+
+ Returns:
+ Tuple of (success, needs_reconnect)
+ - success: True if chunk was sent successfully
+ - needs_reconnect: True if we should create a new authenticated client
+ """
+ max_retries = 2
for attempt in range(max_retries):
try:
await socket_client.write(chunk)
- return True
+ return True, False # Success, no reconnect needed
except Exception as e:
- logger.warning(f"Failed to send chunk (attempt {attempt + 1}): {e}")
+ error_str = str(e).lower()
+
+ # Check for authentication-related errors
+ if any(auth_err in error_str for auth_err in ['401', 'unauthorized', 'forbidden', 'authentication']):
+ logger.warning(f"โ Authentication error detected: {e}")
+ return False, True # Failed, needs new auth token
+
+ logger.warning(f"โ ๏ธ Failed to send chunk (attempt {attempt + 1}): {e}")
if attempt < max_retries - 1:
- await ensure_socket_connection(socket_client)
+ if await ensure_socket_connection(socket_client):
+ continue # Try again with reconnected client
+ else:
+ logger.warning("๐ Connection failed, will need fresh authentication")
+ return False, True # Connection failed, try new auth
else:
- logger.error("Failed to send chunk after all retries")
- return False
- return False
+ logger.error("โ Failed to send chunk after all retries")
+ return False, True # Failed after retries, try new auth
+
+ return False, True
@@ -148,23 +312,21 @@ async def process_esp32_audio(
asr_client: Optional[AsyncClient] = None,
file_sink: Optional[RollingFileSink] = None
):
- """Process audio chunks from ESP32 server, save to file sink and send to ASR client."""
+ """Process audio chunks from ESP32 server, save to file sink and send to authenticated backend."""
if (not socket_client) and (not asr_client):
raise ValueError("Either socket_client or asr_client must be provided")
-
- if socket_client:
- await ensure_socket_connection(socket_client)
try:
- logger.info("Starting to process ESP32 audio for ASR and file saving...")
+ logger.info("๐ต Starting to process ESP32 audio with authentication...")
chunk_count = 0
failed_sends = 0
+ auth_failures = 0
async for chunk in esp32_server:
chunk_count += 1
- if chunk_count % 10 == 1: # Log every 10th chunk
+ if chunk_count % 100 == 1: # Log every 100th chunk to reduce spam
logger.debug(
- f"Received chunk {chunk_count} from ESP32, size: {len(chunk.audio)} bytes"
+ f"๐ฆ Processed {chunk_count} chunks from ESP32, current chunk size: {len(chunk.audio)} bytes"
)
# Write to rolling file sink
@@ -172,32 +334,56 @@ async def process_esp32_audio(
try:
await file_sink.write(chunk)
except Exception as e:
- logger.warning(f"Failed to write to file sink: {e}")
+ logger.warning(f"โ ๏ธ Failed to write to file sink: {e}")
- # Send to backend
+ # Send to authenticated backend
if socket_client:
- success = await send_with_retry(socket_client, chunk)
- if not success:
- failed_sends += 1
- if failed_sends > 10:
- logger.error("Too many failed sends, reconnecting...")
- await ensure_socket_connection(socket_client)
- failed_sends = 0
- else:
+ success, needs_reconnect = await send_with_retry(socket_client, chunk)
+
+ if success:
failed_sends = 0
+ auth_failures = 0
+ elif needs_reconnect:
+ auth_failures += 1
+ logger.warning(f"๐ Need to re-authenticate (failure #{auth_failures})")
+
+ # Create new authenticated client
+ new_socket_client = await create_and_connect_socket_client()
+ if new_socket_client:
+ socket_client = new_socket_client
+ logger.info("โ
Successfully re-authenticated and reconnected")
+ auth_failures = 0
+
+ # Retry sending this chunk with new client
+ retry_success, _ = await send_with_retry(socket_client, chunk)
+ if retry_success:
+ logger.debug("โ
Chunk sent successfully after re-authentication")
+ else:
+ logger.warning("โ ๏ธ Failed to send chunk even after re-authentication")
+ else:
+ logger.error("โ Failed to re-authenticate, will retry on next chunk")
+ if auth_failures > 5:
+ logger.error("โ Too many authentication failures, stopping audio processor")
+ break
+ else:
+ failed_sends += 1
+ if failed_sends > 20:
+ logger.error("โ Too many consecutive send failures, stopping audio processor")
+ break
- # Send to ASR
+ # Send to ASR (if implemented)
# await asr_client.write_event(chunk.event())
+
except asyncio.CancelledError:
- logger.info("ESP32 audio processor cancelled")
+ logger.info("๐ ESP32 audio processor cancelled")
raise
except Exception as e:
- logger.error(f"Error in ESP32 audio processor: {e}")
+ logger.error(f"โ Error in ESP32 audio processor: {e}")
raise
async def run_audio_processor(args, esp32_file_sink):
- """Run the audio processor with reconnect logic."""
+ """Run the audio processor with authentication and reconnect logic."""
while True:
try:
# Create ESP32 TCP server with automatic I²S swap detection
@@ -209,12 +395,23 @@ async def run_audio_processor(args, esp32_file_sink):
sample_width=4,
)
- socket_client = SocketClient(uri="ws://host.docker.internal:8000/ws_pcm?user_id=havpe")
+ # Create authenticated WebSocket client for sending audio to backend
+ logger.info(f"๐ Setting up authenticated connection to backend...")
+ logger.info(f"๐ก Backend API: {BACKEND_URL}")
+ logger.info(f"๐ Backend WebSocket: {BACKEND_WS_URL}")
+ logger.info(f"๐ค Auth Username: {AUTH_USERNAME}")
+ logger.info(f"๐ง Device: {DEVICE_NAME}")
+
+ socket_client = await create_and_connect_socket_client()
+ if not socket_client:
+ logger.error("โ Failed to create authenticated WebSocket client, retrying...")
+ await asyncio.sleep(RECONNECT_DELAY)
+ continue
# Start ESP32 server
async with esp32_server:
- logger.info(f"ESP32 server listening on {args.host}:{args.port}")
- logger.info("Starting audio recording and processing...")
+ logger.info(f"๐ง ESP32 server listening on {args.host}:{args.port}")
+ logger.info("๐ต Starting authenticated audio recording and processing...")
# Start audio processing task
await process_esp32_audio(
@@ -225,16 +422,19 @@ async def run_audio_processor(args, esp32_file_sink):
)
except KeyboardInterrupt:
- logger.info("Interrupted โ stopping")
+ logger.info("๐ Interrupted โ stopping")
break
except Exception as e:
- logger.error(f"Audio processor error: {e}")
- logger.info(f"Restarting in {RECONNECT_DELAY} seconds...")
+ logger.error(f"โ Audio processor error: {e}")
+ logger.info(f"๐ Restarting in {RECONNECT_DELAY} seconds...")
await asyncio.sleep(RECONNECT_DELAY)
async def main():
- parser = argparse.ArgumentParser(description="TCP WAV recorder with ESP32 I²S swap detection")
+ # Override global constants with command line arguments
+ global BACKEND_URL, BACKEND_WS_URL, AUTH_USERNAME, AUTH_PASSWORD
+
+ parser = argparse.ArgumentParser(description="TCP WAV recorder with ESP32 I²S swap detection and backend authentication")
parser.add_argument(
"--port",
type=int,
@@ -253,12 +453,72 @@ async def main():
default=5,
help="Duration of each audio segment in seconds (default 5)",
)
+ parser.add_argument(
+ "--username",
+ type=str,
+ default=AUTH_USERNAME,
+ help="Backend authentication username (email or 6-character user_id)",
+ )
+ parser.add_argument(
+ "--password",
+ type=str,
+ default=AUTH_PASSWORD,
+ help="Backend authentication password",
+ )
+ parser.add_argument(
+ "--backend-url",
+ type=str,
+ default=BACKEND_URL,
+ help=f"Backend API URL (default: {BACKEND_URL})",
+ )
+ parser.add_argument(
+ "--backend-ws-url",
+ type=str,
+ default=BACKEND_WS_URL,
+ help=f"Backend WebSocket URL (default: {BACKEND_WS_URL})",
+ )
parser.add_argument("-v", "--verbose", action="count", default=0, help="-v: INFO, -vv: DEBUG")
parser.add_argument("--debug-audio", action="store_true", help="Debug audio recording")
args = parser.parse_args()
+
+ BACKEND_URL = args.backend_url
+ BACKEND_WS_URL = args.backend_ws_url
+ AUTH_USERNAME = args.username
+ AUTH_PASSWORD = args.password
loglevel = logging.WARNING - (10 * min(args.verbose, 2))
logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s", level=loglevel)
+
+ # Print startup banner with authentication info
+ logger.info("๐ต ========================================")
+ logger.info("๐ต Friend-Lite HAVPE Relay with Authentication")
+ logger.info("๐ต ========================================")
+ logger.info(f"๐ง ESP32 Server: {args.host}:{args.port}")
+ logger.info(f"๐ก Backend API: {BACKEND_URL}")
+ logger.info(f"๐ Backend WebSocket: {BACKEND_WS_URL}")
+ logger.info(f"๐ค Auth Username: {AUTH_USERNAME}")
+ logger.info(f"๐ง Device: {DEVICE_NAME}")
+ logger.info(f"๐ง Debug Audio: {'Enabled' if args.debug_audio else 'Disabled'}")
+ logger.info("๐ต ========================================")
+
+ # Test authentication on startup
+ logger.info("๐ Testing backend authentication...")
+ try:
+ if not AUTH_USERNAME or not AUTH_PASSWORD:
+ logger.error("โ Missing authentication credentials")
+ logger.error("๐ก Set AUTH_USERNAME and AUTH_PASSWORD environment variables or use command line arguments")
+ return
+ test_token = await get_jwt_token(AUTH_USERNAME, AUTH_PASSWORD, BACKEND_URL)
+ if test_token:
+ logger.info("โ
Authentication test successful! Ready to start.")
+ else:
+ logger.error("โ Authentication test failed! Please check credentials.")
+ logger.error("๐ก Update AUTH_USERNAME and AUTH_PASSWORD constants or use command line arguments")
+ return
+ except Exception as e:
+ logger.error(f"โ Authentication test error: {e}")
+ logger.error("๐ก Make sure the backend is running and accessible")
+ return
# Create recordings directory
recordings = pathlib.Path("audio_chunks")
diff --git a/extras/havpe-relay/pyproject.toml b/extras/havpe-relay/pyproject.toml
index 159ef46f..9dc7a9e8 100644
--- a/extras/havpe-relay/pyproject.toml
+++ b/extras/havpe-relay/pyproject.toml
@@ -6,6 +6,7 @@ readme = "README.md"
requires-python = ">=3.12"
dependencies = [
"easy-audio-interfaces>=0.5",
+ "requests>=2.32.4",
"websockets>=15.0.1",
]
diff --git a/extras/havpe-relay/uv.lock b/extras/havpe-relay/uv.lock
index 7b2901e4..f2432816 100644
--- a/extras/havpe-relay/uv.lock
+++ b/extras/havpe-relay/uv.lock
@@ -25,6 +25,50 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/09/71/54e999902aed72baf26bca0d50781b01838251a462612966e9fc4891eadd/black-25.1.0-py3-none-any.whl", hash = "sha256:95e8176dae143ba9097f351d174fdaf0ccd29efb414b362ae3fd72bf0f710717", size = 207646 },
]
+[[package]]
+name = "certifi"
+version = "2025.7.9"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/de/8a/c729b6b60c66a38f590c4e774decc4b2ec7b0576be8f1aa984a53ffa812a/certifi-2025.7.9.tar.gz", hash = "sha256:c1d2ec05395148ee10cf672ffc28cd37ea0ab0d99f9cc74c43e588cbd111b079", size = 160386 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/66/f3/80a3f974c8b535d394ff960a11ac20368e06b736da395b551a49ce950cce/certifi-2025.7.9-py3-none-any.whl", hash = "sha256:d842783a14f8fdd646895ac26f719a061408834473cfc10203f6a575beb15d39", size = 159230 },
+]
+
+[[package]]
+name = "charset-normalizer"
+version = "3.4.2"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/e4/33/89c2ced2b67d1c2a61c19c6751aa8902d46ce3dacb23600a283619f5a12d/charset_normalizer-3.4.2.tar.gz", hash = "sha256:5baececa9ecba31eff645232d59845c07aa030f0c81ee70184a90d35099a0e63", size = 126367 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/d7/a4/37f4d6035c89cac7930395a35cc0f1b872e652eaafb76a6075943754f095/charset_normalizer-3.4.2-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:0c29de6a1a95f24b9a1aa7aefd27d2487263f00dfd55a77719b530788f75cff7", size = 199936 },
+ { url = "https://files.pythonhosted.org/packages/ee/8a/1a5e33b73e0d9287274f899d967907cd0bf9c343e651755d9307e0dbf2b3/charset_normalizer-3.4.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cddf7bd982eaa998934a91f69d182aec997c6c468898efe6679af88283b498d3", size = 143790 },
+ { url = "https://files.pythonhosted.org/packages/66/52/59521f1d8e6ab1482164fa21409c5ef44da3e9f653c13ba71becdd98dec3/charset_normalizer-3.4.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:fcbe676a55d7445b22c10967bceaaf0ee69407fbe0ece4d032b6eb8d4565982a", size = 153924 },
+ { url = "https://files.pythonhosted.org/packages/86/2d/fb55fdf41964ec782febbf33cb64be480a6b8f16ded2dbe8db27a405c09f/charset_normalizer-3.4.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d41c4d287cfc69060fa91cae9683eacffad989f1a10811995fa309df656ec214", size = 146626 },
+ { url = "https://files.pythonhosted.org/packages/8c/73/6ede2ec59bce19b3edf4209d70004253ec5f4e319f9a2e3f2f15601ed5f7/charset_normalizer-3.4.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4e594135de17ab3866138f496755f302b72157d115086d100c3f19370839dd3a", size = 148567 },
+ { url = "https://files.pythonhosted.org/packages/09/14/957d03c6dc343c04904530b6bef4e5efae5ec7d7990a7cbb868e4595ee30/charset_normalizer-3.4.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cf713fe9a71ef6fd5adf7a79670135081cd4431c2943864757f0fa3a65b1fafd", size = 150957 },
+ { url = "https://files.pythonhosted.org/packages/0d/c8/8174d0e5c10ccebdcb1b53cc959591c4c722a3ad92461a273e86b9f5a302/charset_normalizer-3.4.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:a370b3e078e418187da8c3674eddb9d983ec09445c99a3a263c2011993522981", size = 145408 },
+ { url = "https://files.pythonhosted.org/packages/58/aa/8904b84bc8084ac19dc52feb4f5952c6df03ffb460a887b42615ee1382e8/charset_normalizer-3.4.2-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:a955b438e62efdf7e0b7b52a64dc5c3396e2634baa62471768a64bc2adb73d5c", size = 153399 },
+ { url = "https://files.pythonhosted.org/packages/c2/26/89ee1f0e264d201cb65cf054aca6038c03b1a0c6b4ae998070392a3ce605/charset_normalizer-3.4.2-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:7222ffd5e4de8e57e03ce2cef95a4c43c98fcb72ad86909abdfc2c17d227fc1b", size = 156815 },
+ { url = "https://files.pythonhosted.org/packages/fd/07/68e95b4b345bad3dbbd3a8681737b4338ff2c9df29856a6d6d23ac4c73cb/charset_normalizer-3.4.2-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:bee093bf902e1d8fc0ac143c88902c3dfc8941f7ea1d6a8dd2bcb786d33db03d", size = 154537 },
+ { url = "https://files.pythonhosted.org/packages/77/1a/5eefc0ce04affb98af07bc05f3bac9094513c0e23b0562d64af46a06aae4/charset_normalizer-3.4.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:dedb8adb91d11846ee08bec4c8236c8549ac721c245678282dcb06b221aab59f", size = 149565 },
+ { url = "https://files.pythonhosted.org/packages/37/a0/2410e5e6032a174c95e0806b1a6585eb21e12f445ebe239fac441995226a/charset_normalizer-3.4.2-cp312-cp312-win32.whl", hash = "sha256:db4c7bf0e07fc3b7d89ac2a5880a6a8062056801b83ff56d8464b70f65482b6c", size = 98357 },
+ { url = "https://files.pythonhosted.org/packages/6c/4f/c02d5c493967af3eda9c771ad4d2bbc8df6f99ddbeb37ceea6e8716a32bc/charset_normalizer-3.4.2-cp312-cp312-win_amd64.whl", hash = "sha256:5a9979887252a82fefd3d3ed2a8e3b937a7a809f65dcb1e068b090e165bbe99e", size = 105776 },
+ { url = "https://files.pythonhosted.org/packages/ea/12/a93df3366ed32db1d907d7593a94f1fe6293903e3e92967bebd6950ed12c/charset_normalizer-3.4.2-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:926ca93accd5d36ccdabd803392ddc3e03e6d4cd1cf17deff3b989ab8e9dbcf0", size = 199622 },
+ { url = "https://files.pythonhosted.org/packages/04/93/bf204e6f344c39d9937d3c13c8cd5bbfc266472e51fc8c07cb7f64fcd2de/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eba9904b0f38a143592d9fc0e19e2df0fa2e41c3c3745554761c5f6447eedabf", size = 143435 },
+ { url = "https://files.pythonhosted.org/packages/22/2a/ea8a2095b0bafa6c5b5a55ffdc2f924455233ee7b91c69b7edfcc9e02284/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3fddb7e2c84ac87ac3a947cb4e66d143ca5863ef48e4a5ecb83bd48619e4634e", size = 153653 },
+ { url = "https://files.pythonhosted.org/packages/b6/57/1b090ff183d13cef485dfbe272e2fe57622a76694061353c59da52c9a659/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:98f862da73774290f251b9df8d11161b6cf25b599a66baf087c1ffe340e9bfd1", size = 146231 },
+ { url = "https://files.pythonhosted.org/packages/e2/28/ffc026b26f441fc67bd21ab7f03b313ab3fe46714a14b516f931abe1a2d8/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c9379d65defcab82d07b2a9dfbfc2e95bc8fe0ebb1b176a3190230a3ef0e07c", size = 148243 },
+ { url = "https://files.pythonhosted.org/packages/c0/0f/9abe9bd191629c33e69e47c6ef45ef99773320e9ad8e9cb08b8ab4a8d4cb/charset_normalizer-3.4.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e635b87f01ebc977342e2697d05b56632f5f879a4f15955dfe8cef2448b51691", size = 150442 },
+ { url = "https://files.pythonhosted.org/packages/67/7c/a123bbcedca91d5916c056407f89a7f5e8fdfce12ba825d7d6b9954a1a3c/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:1c95a1e2902a8b722868587c0e1184ad5c55631de5afc0eb96bc4b0d738092c0", size = 145147 },
+ { url = "https://files.pythonhosted.org/packages/ec/fe/1ac556fa4899d967b83e9893788e86b6af4d83e4726511eaaad035e36595/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:ef8de666d6179b009dce7bcb2ad4c4a779f113f12caf8dc77f0162c29d20490b", size = 153057 },
+ { url = "https://files.pythonhosted.org/packages/2b/ff/acfc0b0a70b19e3e54febdd5301a98b72fa07635e56f24f60502e954c461/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:32fc0341d72e0f73f80acb0a2c94216bd704f4f0bce10aedea38f30502b271ff", size = 156454 },
+ { url = "https://files.pythonhosted.org/packages/92/08/95b458ce9c740d0645feb0e96cea1f5ec946ea9c580a94adfe0b617f3573/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:289200a18fa698949d2b39c671c2cc7a24d44096784e76614899a7ccf2574b7b", size = 154174 },
+ { url = "https://files.pythonhosted.org/packages/78/be/8392efc43487ac051eee6c36d5fbd63032d78f7728cb37aebcc98191f1ff/charset_normalizer-3.4.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:4a476b06fbcf359ad25d34a057b7219281286ae2477cc5ff5e3f70a246971148", size = 149166 },
+ { url = "https://files.pythonhosted.org/packages/44/96/392abd49b094d30b91d9fbda6a69519e95802250b777841cf3bda8fe136c/charset_normalizer-3.4.2-cp313-cp313-win32.whl", hash = "sha256:aaeeb6a479c7667fbe1099af9617c83aaca22182d6cf8c53966491a0f1b7ffb7", size = 98064 },
+ { url = "https://files.pythonhosted.org/packages/e9/b0/0200da600134e001d91851ddc797809e2fe0ea72de90e09bec5a2fbdaccb/charset_normalizer-3.4.2-cp313-cp313-win_amd64.whl", hash = "sha256:aa6af9e7d59f9c12b33ae4e9450619cf2488e2bbe9b44030905877f0b2324980", size = 105641 },
+ { url = "https://files.pythonhosted.org/packages/20/94/c5790835a017658cbfabd07f3bfb549140c3ac458cfc196323996b10095a/charset_normalizer-3.4.2-py3-none-any.whl", hash = "sha256:7f56930ab0abd1c45cd15be65cc741c28b1c9a34876ce8c17a2fa107810c0af0", size = 52626 },
+]
+
[[package]]
name = "click"
version = "8.2.1"
@@ -48,7 +92,7 @@ wheels = [
[[package]]
name = "easy-audio-interfaces"
-version = "0.4.0"
+version = "0.6.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "fire" },
@@ -59,9 +103,9 @@ dependencies = [
{ name = "websockets" },
{ name = "wyoming" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/3d/e4/e15a6349080b65ebf784d67299274360997def214ac85deb9e3a01baa239/easy_audio_interfaces-0.4.0.tar.gz", hash = "sha256:be023a2bc4fb7fdf4a658e8271b535583b5bbe774dc1f3d172e6f4c8f7b0672e", size = 31157 }
+sdist = { url = "https://files.pythonhosted.org/packages/cc/0a/ab7313eca395daf3e69ca484ac39d555aa17609e7cd4209a489f33408ca8/easy_audio_interfaces-0.6.0.tar.gz", hash = "sha256:70fbf82df690a59aeb0e41799289a9f18ccb77b58978935269b928c617b5568d", size = 35602 }
wheels = [
- { url = "https://files.pythonhosted.org/packages/88/92/0d7424bb767731dc7caf306cb9c384854ddf620ba1f6410f786776de5946/easy_audio_interfaces-0.4.0-py3-none-any.whl", hash = "sha256:bb1a98352472959a589dd89171d55c092b38d1de0fd70500d544a2b440ec04b3", size = 36568 },
+ { url = "https://files.pythonhosted.org/packages/69/a5/9eaecdbdcdc0d429f129c69e7f6a085578e2a066dffab104ef073d7f53c5/easy_audio_interfaces-0.6.0-py3-none-any.whl", hash = "sha256:146888390f77e9b00dad4d8812e0cdbb1d48c31bb201eb33910408b010bfbd2d", size = 41550 },
]
[[package]]
@@ -80,6 +124,7 @@ version = "0.1.0"
source = { virtual = "." }
dependencies = [
{ name = "easy-audio-interfaces" },
+ { name = "requests" },
{ name = "websockets" },
]
@@ -90,13 +135,23 @@ dev = [
[package.metadata]
requires-dist = [
- { name = "easy-audio-interfaces", specifier = ">=0.4" },
+ { name = "easy-audio-interfaces", specifier = ">=0.5" },
+ { name = "requests", specifier = ">=2.32.4" },
{ name = "websockets", specifier = ">=15.0.1" },
]
[package.metadata.requires-dev]
dev = [{ name = "black", specifier = ">=25.1.0" }]
+[[package]]
+name = "idna"
+version = "3.10"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/f1/70/7703c29685631f5a7590aa73f1f1d3fa9a380e654b86af429e0934a32f7d/idna-3.10.tar.gz", hash = "sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9", size = 190490 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/76/c6/c88e154df9c4e1a2a66ccf0005a88dfb2650c1dffb6f5ce603dfbd452ce3/idna-3.10-py3-none-any.whl", hash = "sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3", size = 70442 },
+]
+
[[package]]
name = "markdown-it-py"
version = "3.0.0"
@@ -210,6 +265,21 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/8a/0b/9fcc47d19c48b59121088dd6da2488a49d5f72dacf8262e2790a1d2c7d15/pygments-2.19.1-py3-none-any.whl", hash = "sha256:9ea1544ad55cecf4b8242fab6dd35a93bbce657034b0611ee383099054ab6d8c", size = 1225293 },
]
+[[package]]
+name = "requests"
+version = "2.32.4"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+ { name = "certifi" },
+ { name = "charset-normalizer" },
+ { name = "idna" },
+ { name = "urllib3" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/e1/0a/929373653770d8a0d7ea76c37de6e41f11eb07559b103b1c02cafb3f7cf8/requests-2.32.4.tar.gz", hash = "sha256:27d0316682c8a29834d3264820024b62a36942083d52caf2f14c0591336d3422", size = 135258 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/7c/e4/56027c4a6b4ae70ca9de302488c5ca95ad4a39e190093d6c1a8ace08341b/requests-2.32.4-py3-none-any.whl", hash = "sha256:27babd3cda2a6d50b30443204ee89830707d396671944c998b5975b031ac2b2c", size = 64847 },
+]
+
[[package]]
name = "rich"
version = "13.9.4"
@@ -295,6 +365,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/4f/bd/de8d508070629b6d84a30d01d57e4a65c69aa7f5abe7560b8fad3b50ea59/termcolor-3.1.0-py3-none-any.whl", hash = "sha256:591dd26b5c2ce03b9e43f391264626557873ce1d379019786f99b0c2bee140aa", size = 7684 },
]
+[[package]]
+name = "urllib3"
+version = "2.5.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/15/22/9ee70a2574a4f4599c47dd506532914ce044817c7752a79b6a51286319bc/urllib3-2.5.0.tar.gz", hash = "sha256:3fc47733c7e419d4bc3f6b3dc2b4f890bb743906a30d56ba4a5bfa4bbff92760", size = 393185 }
+wheels = [
+ { url = "https://files.pythonhosted.org/packages/a7/c2/fe1e52489ae3122415c51f387e221dd0773709bad6c6cdaa599e8a2c5185/urllib3-2.5.0-py3-none-any.whl", hash = "sha256:e6b01673c0fa6a13e374b50871808eb3bf7046c4b125b216f6bf1cc604cff0dc", size = 129795 },
+]
+
[[package]]
name = "websockets"
version = "15.0.1"
diff --git a/extras/speaker-recognition/speaker-recognition.md b/extras/speaker-recognition/speaker-recognition.md
new file mode 100644
index 00000000..25e8d6f6
--- /dev/null
+++ b/extras/speaker-recognition/speaker-recognition.md
@@ -0,0 +1,121 @@
+The speaker recognition service will provide the following functionality:
+- Enroll a speaker
+- List enrolled speakers
+- Remove a speaker
+- Health check
+- Identify speakers in audio segment
+
+The speaker recognition service will be used to identify speakers in chunks of audio.
+The service will make use of SpeechBrain, FAISS, PyTorch, and pyannote to do speaker recognition.
+
+## Flow
+
+### 1. Service Initialization Flow
+1. **Environment Setup**
+ - Load environment variables (HF_TOKEN, SIMILARITY_THRESHOLD)
+
+2. **Model Loading**
+ - Load pyannote speaker diarization pipeline (`pyannote/speaker-diarization-3.1`)
+ - Load SpeechBrain speaker embedding model (`speechbrain/spkrec-ecapa-voxceleb`)
+ - Initialize audio loader with 16kHz sample rate
+
+3. **Database Initialization**
+ - Create FAISS index for vector similarity search (IndexFlatIP with embedding dimension)
+ - Initialize empty enrolled speakers list
+ - Set up FastAPI application with health endpoints
+
+### 2. Speaker Enrollment Flow
+1. **Audio Input**
+ - Receive audio file path or uploaded audio file
+ - Optional: specify time segment (start_time, end_time)
+
+2. **Audio Processing**
+ - Load audio using pyannote Audio loader
+ - Crop to specified segment if time bounds provided
+ - Ensure proper format (16kHz, mono)
+
+3. **Embedding Extraction**
+ - Pass audio waveform through SpeechBrain embedding model
+ - Apply L2 normalization for cosine similarity compatibility
+ - Extract the fixed-length speaker embedding vector (192-dimensional for the ECAPA-VoxCeleb model)
+
+4. **Speaker Registration**
+ - Check if speaker ID already exists (update if found)
+ - Add new speaker entry with ID, name, and embedding
+ - Add embedding to FAISS index for fast similarity search
+ - Rebuild FAISS index if updating existing speaker
+
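The L2-normalization in step 3 is what lets an inner-product index behave as a cosine-similarity index. A minimal sketch of that step and the add-or-update logic of step 4, using a plain dict in place of the real FAISS index and SpeechBrain model (the function names here are illustrative, not the service's actual API):

```python
import math

def l2_normalize(embedding: list[float]) -> list[float]:
    """Scale an embedding to unit length so that the inner product
    of two normalized vectors equals their cosine similarity."""
    norm = math.sqrt(sum(x * x for x in embedding))
    return [x / norm for x in embedding]

def enroll(speakers: dict[str, list[float]], speaker_id: str,
           embedding: list[float]) -> None:
    """Add or update a speaker; updating overwrites the stored vector,
    mirroring the 'rebuild index on update' behaviour described above."""
    speakers[speaker_id] = l2_normalize(embedding)
```

In the real service the normalized vector would be appended to a FAISS `IndexFlatIP`, whose inner-product metric then acts as cosine similarity.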
+### 3. Speaker Identification Flow
+1. **Audio Input**
+ - Receive audio file path and optional time segment
+ - Load and preprocess audio (same as enrollment)
+
+2. **Embedding Extraction**
+ - Extract speaker embedding using same model as enrollment
+ - Apply L2 normalization
+
+3. **Similarity Search**
+ - Query FAISS index with extracted embedding
+ - Find closest match using inner product (cosine similarity)
+ - Compare similarity score against threshold (default: 0.85)
+
+4. **Identity Resolution**
+ - Return speaker ID and info if similarity > threshold
+ - Return "not identified" if no match found
+
+### 4. Speaker Diarization Flow
+1. **Audio Processing**
+ - Run pyannote diarization pipeline on entire audio file
+ - Extract speaker segments with timestamps
+
+2. **Speaker Segmentation**
+ - Identify distinct speakers (SPEAKER_00, SPEAKER_01, etc.)
+ - Get temporal boundaries for each speaker's speech
+
+3. **Speaker Verification**
+ - For each detected speaker:
+ - Find longest speech segment for that speaker
+ - Extract embedding from longest segment
+ - Attempt to identify against enrolled speakers
+ - Assign verified speaker ID or generate unknown speaker ID
+
+4. **Result Compilation**
+ - Return segments with timestamps and speaker assignments
+ - Include both diarization labels and verified speaker IDs
+ - Provide speaker embeddings for further processing
+
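The "find longest speech segment" selection in step 3 of the diarization flow can be sketched independently of pyannote; the `(label, start, end)` triples below are a hypothetical simplification of the pipeline's segment output:

```python
def longest_segment_per_speaker(
    segments: list[tuple[str, float, float]],
) -> dict[str, tuple[float, float]]:
    """Map each diarization label (SPEAKER_00, ...) to its single
    longest (start, end) segment, the one used for verification."""
    best: dict[str, tuple[float, float]] = {}
    for label, start, end in segments:
        if label not in best or (end - start) > (best[label][1] - best[label][0]):
            best[label] = (start, end)
    return best
```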
+### 5. Speaker Management Flow
+1. **List Speakers**
+ - Return all enrolled speakers with IDs and names
+ - Provide count of enrolled speakers
+
+2. **Remove Speaker**
+ - Find speaker by ID in enrolled speakers list
+ - Remove from speakers list
+ - Rebuild FAISS index without removed speaker's embedding
+ - Return success/failure status
+
+### 6. Health Check Flow
+1. **Service Status**
+ - Check if all required models are loaded
+ - Verify device availability (CPU/CUDA)
+ - Report number of enrolled speakers
+ - Return overall service health status
+
+### Data Flow Architecture
+```
+Audio Input → Audio Loader → Embedding Model → FAISS Index
+                                                  ↓
+Enrolled Speakers Database ← Speaker Registration
+        ↓
+Similarity Search → Identity Resolution → API Response
+```
+
+### Key Components
+- **FAISS Index**: Fast similarity search for speaker embeddings
+- **SpeechBrain Model**: Speaker embedding extraction
+- **Pyannote Pipeline**: Speaker diarization and audio processing
+- **Enrolled Speakers DB**: In-memory storage of registered speakers
+- **Similarity Threshold**: Configurable threshold for speaker matching (default: 0.85)
+
+
diff --git a/friend-lite/.gitignore b/friend-lite/.gitignore
index 05647d55..6bf33056 100644
--- a/friend-lite/.gitignore
+++ b/friend-lite/.gitignore
@@ -33,3 +33,5 @@ yarn-error.*
# typescript
*.tsbuildinfo
+
+android/*
\ No newline at end of file