
Merge mycelia memories#178

Merged
10 commits merged into main from merge-mycelia-memories
Dec 10, 2025

Conversation


@AnkushMalaker (Collaborator) commented Dec 8, 2025

Summary by CodeRabbit

  • New Features

    • Added Mycelia OAuth integration for user synchronization
    • Added interactive timeline visualization for memories with multiple view options (Frappe Gantt, React Gantt, Mycelia)
    • Added memory detail page with metadata and navigation
    • Added memory provider configuration interface
    • Added management commands for Mycelia user sync and orphan handling
  • Chores

    • Refactored internal memory service architecture
    • Updated speech detection thresholds for improved accuracy
    • Integrated Mycelia as a Docker test service



coderabbitai bot commented Dec 8, 2025

Walkthrough

This PR integrates Mycelia OAuth backend into the Friend-Lite system, adding user synchronization, API key management, memory service support, and timeline visualization UIs. Changes include a new Mycelia submodule, MongoDB-backed sync service, memory provider implementation, frontend router components for timeline views, and reorganization of memory service imports.

Changes

  • Submodule & Configuration — .gitmodules, Makefile: Adds the Mycelia submodule and five new Makefile targets (mycelia-sync-*, mycelia-check-orphans, mycelia-reassign-orphans) for OAuth synchronization and orphan object management.
  • Environment & Docker Setup — backends/advanced/.env.template, backends/advanced/docker-compose-test.yml: Adds Mycelia environment variables (MYCELIA_URL, MYCELIA_DB, MYCELIA_TIMEOUT) and introduces two new test services (mycelia-backend-test, mycelia-frontend-test) with health checks and dependencies.
  • MongoDB Sync Scripts — backends/advanced/scripts/create_mycelia_api_key.py, backends/advanced/scripts/sync_friendlite_mycelia.py: Implements programmatic API key generation plus comprehensive Friend-Lite to Mycelia user synchronization, orphan detection, and reassignment workflows.
  • Memory Service Reorganization & Imports — backends/advanced/src/advanced_omi_backend/memory/__init__.py, backends/advanced/src/advanced_omi_backend/memory/compat_service.py, backends/advanced/src/advanced_omi_backend/memory/providers/__init__.py, backends/advanced/src/advanced_omi_backend/{app_factory,chat_service,controllers/*,routers/*,services/audio_stream/producer,workers/*}.py: Moves the memory service to the services/ package structure, removes the deprecated compatibility layer, updates imports across controllers/workers/routers, and centralizes memory provider exports.
  • Mycelia Memory Service Implementation — backends/advanced/src/advanced_omi_backend/services/memory/{base,config,prompts,service_factory}.py, backends/advanced/src/advanced_omi_backend/services/memory/providers/{mycelia,mcp_client,friend_lite,openmemory_mcp,vector_stores}.py: Adds MyceliaMemoryService with JWT auth, async API calls, LLM-based memory extraction, and temporal entity support; extends base/provider interfaces with get_memory, update_memory, and enhanced delete_memory signatures.
  • Mycelia Sync Service — backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py: Introduces MyceliaSyncService and a sync_admin_on_startup hook for automatic admin user OAuth provisioning at app startup.
  • Memory Controllers & Routes — backends/advanced/src/advanced_omi_backend/controllers/{memory_controller,system_controller}.py, backends/advanced/src/advanced_omi_backend/routers/modules/{memory_routes,system_routes,health_routes}.py: Adds an add_memory endpoint, memory provider get/set admin endpoints, MemoryEntry.to_dict serialization, health checks for the Mycelia provider, and a new AddMemoryRequest model.
  • Authentication & Config Updates — backends/advanced/src/advanced_omi_backend/{auth,config,models/conversation}.py: Adds generate_jwt_for_user, tightens speech detection thresholds, and adds Mycelia to the MemoryProvider enum.
  • Transcription Service Relocation — backends/advanced/src/advanced_omi_backend/services/transcription/{__init__,deepgram,parakeet}.py: Relocates the BaseTranscriptionProvider import from the models package to the services package.
  • Worker Job Enhancements — backends/advanced/src/advanced_omi_backend/workers/{audio_jobs,conversation_jobs,memory_jobs,transcription_jobs}.py, backends/advanced/src/advanced_omi_backend/utils/job_utils.py: Adds zombie job detection via a check_job_alive utility, validates speech duration/quality before conversation processing, and improves memory entry text extraction.
  • Frontend Routing & Pages — backends/advanced/webui/src/pages/{ConversationsRouter,ConversationsTimeline,MemoriesRouter,MemoryDetail,TimelineRouter,FrappeGanttTimeline,ReactGanttTimeline,MyceliaTimeline}.tsx: New routable components for tabbed conversations/timeline views, a memory detail page, and three timeline visualization implementations (Frappé Gantt, React Gantt, Mycelia D3).
  • Frontend Hooks & Utilities — backends/advanced/webui/src/hooks/{useD3Zoom,useAudioRecording,useSimpleAudioRecording}.ts: Adds useD3Zoom for D3 zoom/pan and updates interval ref types.
  • Frontend Layout & Context — backends/advanced/webui/src/components/layout/Layout.tsx, backends/advanced/webui/src/contexts/AuthContext.tsx: Adds a Timeline navigation item with a Calendar icon; stores the Mycelia JWT in localStorage on login.
  • Frontend App & Services — backends/advanced/webui/src/App.tsx, backends/advanced/webui/src/services/api.ts, backends/advanced/webui/src/pages/Memories.tsx, backends/advanced/webui/src/pages/System.tsx: Routes updated to use the new routers; Memories are now clickable with navigation; the System page gains a memory provider selector UI; adds API endpoints for provider get/set.
  • Frontend Typing — backends/advanced/webui/src/types/react-gantt-timeline.d.ts: Ambient TypeScript module declarations for the react-gantt-timeline library.
  • Styling & Assets — backends/advanced/webui/public/frappe-gantt.css: Comprehensive CSS theming for the Frappé Gantt chart (variables, colors, layout, z-index, interactivity).

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User (Frontend)
    participant BackendAPI as Friend-Lite Backend
    participant MongoDB as MongoDB
    participant MyceliaSync as MyceliaSyncService
    participant MyceliaBackend as Mycelia Backend

    User->>BackendAPI: Request sync user to Mycelia<br/>(make mycelia-sync-user EMAIL=...)
    BackendAPI->>MyceliaSync: sync_user_to_mycelia(user_id, email)
    MyceliaSync->>MongoDB: Check existing active API key
    alt Existing Key Found
        MyceliaSync->>User: Return existing credentials
    else No Key
        MyceliaSync->>MyceliaSync: Generate API key & salt<br/>Hash with SHA-256
        MyceliaSync->>MongoDB: Create api_keys document<br/>(hashedKey, salt, openPrefix, policies)
        MyceliaSync->>MongoDB: Update user with Mycelia<br/>client_id metadata
        MyceliaSync->>User: Display new credentials &<br/>test setup instructions
    end

    Note over User,MyceliaBackend: Memory Operations Flow
    User->>BackendAPI: Add memory (content)
    BackendAPI->>BackendAPI: Route to MyceliaMemoryService
    BackendAPI->>BackendAPI: Extract facts via LLM
    BackendAPI->>BackendAPI: Extract temporal entities
    BackendAPI->>BackendAPI: Get JWT for user
    BackendAPI->>MyceliaBackend: Create memory objects<br/>(POST /resource)
    MyceliaBackend->>MongoDB: Store memory documents
    MyceliaBackend-->>BackendAPI: Return created IDs
    BackendAPI-->>User: Success response with IDs
```
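The key-provisioning branch of the diagram can be sketched in a few lines of Python. This is illustrative only: the field names (hashedKey, salt, openPrefix) mirror the api_keys document shown in the flow, but the exact key format and salting scheme used by MyceliaSyncService are assumptions.

```python
import hashlib
import secrets


def generate_api_key(prefix: str = "myc") -> dict:
    """Generate an API key, per-key salt, and SHA-256 hash.

    Sketch of the "Generate API key & salt / Hash with SHA-256" step;
    the real service's key format may differ.
    """
    raw_key = secrets.token_urlsafe(32)  # secret, returned to the user once
    salt = secrets.token_hex(16)         # random per-key salt
    hashed = hashlib.sha256((salt + raw_key).encode()).hexdigest()
    return {
        "api_key": f"{prefix}_{raw_key}",         # full key, shown only at creation
        "hashedKey": hashed,                      # stored in MongoDB for verification
        "salt": salt,
        "openPrefix": f"{prefix}_{raw_key[:8]}",  # non-secret prefix for lookup/display
    }
```

Only hashedKey, salt, and openPrefix would be persisted; the raw key is displayed once and never stored.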

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Areas requiring extra attention:

  • backends/advanced/src/advanced_omi_backend/services/memory/ — Major refactoring of memory service architecture: removal of compat layer, reorganization of provider exports, new base class methods, and comprehensive Mycelia provider implementation with async/await patterns, JWT generation, and LLM integration.
  • backends/advanced/webui/src/pages/{TimelineRouter,FrappeGanttTimeline,MyceliaTimeline,ReactGanttTimeline}.tsx — Multiple new frontend timeline visualization pages with D3, Frappé Gantt, and custom timeline logic; coordinate validation with API expectations and CSS custom properties.
  • backends/advanced/scripts/sync_friendlite_mycelia.py — Complex MongoDB-based sync orchestration with orphan detection, deactivation logic, and multi-step user migration workflows; verify data consistency and error recovery paths.
  • backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py — Service initialization, JWT handling, and startup hook integration; confirm non-fatal exception handling doesn't mask failures.
  • Worker job enhancements (audio_jobs.py, transcription_jobs.py, conversation_jobs.py) — New zombie job detection and speech validation flows; verify job cleanup and state consistency across failure scenarios.
  • Memory provider signature changes across base.py, mcp_client.py, openmemory_mcp.py, vector_stores.py — User context parameters added to delete/update methods; check backward compatibility and all consumer code paths.

Possibly related PRs

  • PR #69 — Adds OpenMemory MCP memory provider; both modify memory service factory, provider architecture, and config handling.
  • PR #121 — Modifies RQ-based job handling and worker architecture; both introduce zombie job detection and conversation/audio job enhancements.
  • PR #74 — Extends memory service layer (search/count APIs, MemoryProvider enum, provider implementations); overlapping changes to memory routes and controller signatures.

Suggested reviewers

  • thestumonkey

Poem

🐰 Mycelia blooms where Friend-Lite grows,
With OAuth keys and sync flows,
Timelines dance in D3 delight,
Memories dance from day to night,
Zombie jobs are checked with care—
A full-stack feast of features rare!

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check — ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: The title 'Merge mycelia memories' directly describes the primary change of integrating Mycelia into the memory system, though it uses a generic 'Merge' prefix.
  • Docstring Coverage — ✅ Passed: Docstring coverage is 87.93%, above the required threshold of 80.00%.

Comment @coderabbitai help to get the list of available commands and usage tips.

@AnkushMalaker (Collaborator, Author) commented:

@coderabbitai review please


coderabbitai bot commented Dec 8, 2025

@AnkushMalaker: I'll review the changes in this PR right away!

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai bot left a comment


Actionable comments posted: 17

Note

Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
backends/advanced/src/advanced_omi_backend/services/memory/providers/vector_stores.py (2)

246-258: Remove duplicate import - uuid is already imported at module level.

The uuid module is imported at line 11. As per coding guidelines, imports should not be placed in the middle of functions.

         try:
             # Convert memory_id to proper format for Qdrant
-            import uuid
             try:
                 # Try to parse as UUID first
                 uuid.UUID(memory_id)

326-339: Remove duplicate import - same issue as in delete_memory.

             # Convert memory_id to proper format for Qdrant
             # Qdrant accepts either UUID strings or unsigned integers
-            import uuid
             try:
                 # Try to parse as UUID first
                 uuid.UUID(memory_id)
backends/advanced/src/advanced_omi_backend/services/memory/providers/openmemory_mcp.py (1)

366-385: Unused user_id and user_email parameters in delete_memory.

Unlike other methods in this class (e.g., get_memory, update_memory), delete_memory accepts user_id and user_email parameters but doesn't use them to set the MCP client context. This is inconsistent with the pattern established elsewhere.

-    async def delete_memory(self, memory_id: str, user_id: Optional[str] = None, user_email: Optional[str] = None) -> bool:
+    async def delete_memory(self, memory_id: str, user_id: Optional[str] = None, user_email: Optional[str] = None) -> bool:
         """Delete a specific memory by ID.
         
         Args:
             memory_id: Unique identifier of the memory to delete
+            user_id: Optional user ID for context
+            user_email: Optional user email (not used by OpenMemory MCP)
             
         Returns:
             True if successfully deleted, False otherwise
         """
         if not self._initialized:
             await self.initialize()
         
+        # Update MCP client user context for this operation
+        original_user_id = self.mcp_client.user_id
+        self.mcp_client.user_id = user_id or self.user_id
+        
         try:
             success = await self.mcp_client.delete_memory(memory_id)
             if success:
                 memory_logger.info(f"🗑️ Deleted memory {memory_id} via MCP")
             return success
         except Exception as e:
             memory_logger.error(f"Delete memory failed: {e}")
             return False
+        finally:
+            self.mcp_client.user_id = original_user_id
backends/advanced/src/advanced_omi_backend/services/memory/providers/mcp_client.py (1)

47-52: Move os import to the top of the file.

The os module is imported inside the __init__ method, which violates the coding guidelines. All imports must be at the top of the file.

Add to the imports at the top of the file (after line 10):

import os

Then apply this diff to remove the inline import:

         # Use custom CA certificate if available
-        import os
         ca_bundle = os.getenv('REQUESTS_CA_BUNDLE')

Based on coding guidelines: "NEVER import modules in the middle of functions or files - ALL imports must be at the top of the file."

🟡 Minor comments (10)
backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py-366-367 (1)

366-367: Parameters are not used in this implementation, indicating incomplete feature parity.

The allow_update and db_helper parameters are accepted but unused in mycelia.py, while the friend_lite provider actively implements both. Implement the missing logic to support memory updates and database relationship tracking, or document why these features are unsupported in the Mycelia provider.

backends/advanced/webui/src/pages/System.tsx-360-366 (1)

360-366: Fix option text rendering - missing fallback for unknown providers.

The current logic uses separate conditional expressions that don't provide fallback text for unknown providers. If a new provider is added, it would render as empty text.

                       {availableProviders.map((provider) => (
                         <option key={provider} value={provider}>
-                          {provider === 'friend_lite' && 'Friend-Lite (Sophisticated)'}
-                          {provider === 'openmemory_mcp' && 'OpenMemory MCP (Cross-client)'}
-                          {provider === 'mycelia' && 'Mycelia (Advanced)'}
+                          {provider === 'friend_lite' ? 'Friend-Lite (Sophisticated)' :
+                           provider === 'openmemory_mcp' ? 'OpenMemory MCP (Cross-client)' :
+                           provider === 'mycelia' ? 'Mycelia (Advanced)' :
+                           provider}
                         </option>
                       ))}
backends/advanced/src/advanced_omi_backend/controllers/system_controller.py-504-511 (1)

504-511: Fragile .env path resolution using os.getcwd().

Using os.getcwd() to locate the .env file is fragile - the working directory depends on how the service is started. Consider using a path relative to the module or a configured environment variable for the project root.

-        # Path to .env file (assuming we're running from backends/advanced/)
-        env_path = os.path.join(os.getcwd(), ".env")
+        # Path to .env file relative to this module's location
+        module_dir = os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
+        env_path = os.path.join(module_dir, ".env")

Alternatively, use a dedicated environment variable like PROJECT_ROOT or leverage a configuration constant.

backends/advanced/webui/public/frappe-gantt.css-1-1 (1)

1-1: CSS variable syntax error: missing var() function.

The .holiday-label rule uses background:--g-weekend-label-color which is invalid CSS. CSS custom properties must be accessed using the var() function for the value to be applied.

Apply this fix (shown in readable form - the minified file would need the same correction):

-.holiday-label{...background:--g-weekend-label-color;...}
+.holiday-label{...background:var(--g-weekend-label-color);...}
backends/advanced/src/advanced_omi_backend/services/memory/config.py-127-141 (1)

127-141: Fix type hint for optional api_key parameter.

The api_key parameter has an implicit Optional type which violates PEP 484. Use explicit Optional[str] or str | None.

Apply this diff:

+from typing import Any, Dict, Optional
 ...
 def create_mycelia_config(
     api_url: str = "http://localhost:8080",
-    api_key: str = None,
+    api_key: Optional[str] = None,
     timeout: int = 30,
     **kwargs
 ) -> Dict[str, Any]:

Or using Python 3.10+ syntax:

-    api_key: str = None,
+    api_key: str | None = None,
backends/advanced/webui/src/pages/ConversationsTimeline.tsx-219-230 (1)

219-230: Edge case: formatDate returns Invalid Date for empty/missing timestamps.

When created_at is undefined or empty string (line 289: formatDate(conv.created_at || '')), this function will attempt to parse an empty string, resulting in Invalid Date. The ISO string detection logic on line 221 is also fragile for edge cases.

Consider adding validation:

 const formatDate = (timestamp: number | string): Date => {
+  if (!timestamp) {
+    return new Date()
+  }
   if (typeof timestamp === 'string') {
     const isoString = timestamp.endsWith('Z') || timestamp.includes('+') || timestamp.includes('T') && timestamp.split('T')[1].includes('-')
       ? timestamp
       : timestamp + 'Z'
     return new Date(isoString)
   }
backends/advanced/src/advanced_omi_backend/controllers/memory_controller.py-168-185 (1)

168-185: Use time.time() instead of deprecated asyncio.get_event_loop().time().

asyncio.get_event_loop() is deprecated in Python 3.10+ and may create a new event loop unexpectedly. For generating a unique timestamp, time.time() is simpler and appropriate.

+import time
+
 async def add_memory(content: str, user: User, source_id: Optional[str] = None):
     """Add a memory directly from content text. Extracts structured memories from the provided content."""
     try:
         memory_service = get_memory_service()
 
         # Use source_id or generate a unique one
-        memory_source_id = source_id or f"manual_{user.user_id}_{int(asyncio.get_event_loop().time())}"
+        memory_source_id = source_id or f"manual_{user.user_id}_{int(time.time())}"

Committable suggestion skipped: line range outside the PR's diff.

backends/advanced/src/advanced_omi_backend/services/memory/prompts.py-554-554 (1)

554-554: Module-level initialization captures stale date context.

TEMPORAL_ENTITY_EXTRACTION_PROMPT is initialized once at module import time with datetime.now(). In long-running services, this prompt will contain an outdated "current date" context, potentially causing incorrect temporal resolution.

Consider calling build_temporal_extraction_prompt() at usage time instead, or document that this constant requires service restart for date accuracy:

-TEMPORAL_ENTITY_EXTRACTION_PROMPT = build_temporal_extraction_prompt(datetime.now())
+# Note: Call build_temporal_extraction_prompt(datetime.now()) at usage time
+# for accurate date context in long-running services
+def get_temporal_extraction_prompt() -> str:
+    """Get temporal extraction prompt with current date context."""
+    return build_temporal_extraction_prompt(datetime.now())

Committable suggestion skipped: line range outside the PR's diff.

backends/advanced/webui/src/pages/FrappeGanttTimeline.tsx-603-611 (1)

603-611: Remove duplicate focus:ring-2 class.

There's a duplicate CSS class on line 606.

-              className="px-3 py-2 border border-gray-300 dark:border-gray-600 rounded-lg bg-white dark:bg-gray-700 text-gray-900 dark:text-gray-100 text-sm focus:ring-2 focus:ring-2 focus:ring-blue-500 focus:border-blue-500"
+              className="px-3 py-2 border border-gray-300 dark:border-gray-600 rounded-lg bg-white dark:bg-gray-700 text-gray-900 dark:text-gray-100 text-sm focus:ring-2 focus:ring-blue-500 focus:border-blue-500"
backends/advanced/scripts/sync_friendlite_mycelia.py-344-353 (1)

344-353: Current URI parsing handles actual usage patterns adequately, though lacks robustness for future complex formats.

The string-based parsing (lines 348-351) works correctly for the MongoDB URIs actually used in this codebase (e.g., mongodb://mongo-test:27017/test_db, mongodb://...svc.cluster.local:27017/friend-lite). However, it would fail for SRV connections (mongodb+srv://) or URIs with authentication containing special characters. If the system may evolve to use such formats, consider using urllib.parse.urlparse() to extract the path component and parse the database name more robustly. For current usage, the fallback to "friend-lite" provides a safety net.
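The urlparse-based approach suggested above can be sketched as follows; the function name and "friend-lite" default are illustrative, but the parsing relies only on the standard library.

```python
from urllib.parse import urlparse


def extract_db_name(mongo_uri: str, default: str = "friend-lite") -> str:
    """Extract the database name from a MongoDB URI.

    Handles both mongodb:// and mongodb+srv:// schemes, and ignores
    query options after '?', unlike naive string splitting.
    """
    path = urlparse(mongo_uri).path  # e.g. "/test_db", or "" if no db given
    db_name = path.lstrip("/")
    return db_name or default
```

This covers the URIs cited in the review (mongodb://mongo-test:27017/test_db, cluster-local DNS names) and degrades gracefully to the default when no database is present.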

🧹 Nitpick comments (52)
backends/advanced/webui/src/hooks/useD3Zoom.ts (1)

42-55: Unstable dependencies may cause unnecessary re-creation of zoom behavior.

The wheelDelta default (line 14) and scaleExtent array are created fresh on each render when not provided by the caller. This causes zoomBehavior to be recreated every render, triggering the useEffect repeatedly.

Memoize the defaults or use refs for stable references:

+const defaultWheelDelta = useCallback(
+  (event: WheelEvent) => -event.deltaY * 0.002,
+  []
+)
+const defaultScaleExtent: [number, number] = useMemo(() => [0.5, 5], [])
+
 const {
   onZoom,
-  scaleExtent = [0.5, 5],
-  wheelDelta = (event) => -event.deltaY * 0.002
+  scaleExtent = defaultScaleExtent,
+  wheelDelta = defaultWheelDelta
 } = options

Alternatively, define the defaults as module-level constants:

const DEFAULT_SCALE_EXTENT: [number, number] = [0.5, 5]
const DEFAULT_WHEEL_DELTA = (event: WheelEvent) => -event.deltaY * 0.002
backends/advanced/webui/src/components/layout/Layout.tsx (1)

11-23: Timeline nav entry is consistent; consider nested-route active matching

The new /timeline navigation item (with icon + label) is consistent with the rest of the sidebar and integrates cleanly into navigationItems. If you expect nested timeline routes (e.g., /timeline/:id), consider updating the active check from location.pathname === path to something like location.pathname.startsWith(path) for all items so the tab stays highlighted on sub-pages.

backends/advanced/src/advanced_omi_backend/services/transcription/__init__.py (1)

13-22: Consider using relative imports consistently for same-package modules.

Lines 14-22 use absolute imports (advanced_omi_backend.services.transcription.deepgram) while line 13 uses a relative import (.base). For consistency, consider using relative imports for all modules within the same package.

Apply this diff for consistency:

 from .base import BaseTranscriptionProvider
-from advanced_omi_backend.services.transcription.deepgram import (
+from .deepgram import (
     DeepgramProvider,
     DeepgramStreamingProvider,
     DeepgramStreamConsumer,
 )
-from advanced_omi_backend.services.transcription.parakeet import (
+from .parakeet import (
     ParakeetProvider,
     ParakeetStreamingProvider,
 )
backends/advanced/src/advanced_omi_backend/utils/conversation_utils.py (1)

103-111: New min_duration gate is correct and consistent with thresholds

The new duration-based check after computing speech_duration is sound:

  • Uses the same settings dict as the word/confidence filters.
  • Returns a clear, structured reason when speech is too short.
  • Preserves word_count and duration in the negative case for downstream debugging/metrics.

Once get_speech_detection_settings() exposes min_duration, this will be fully config-driven rather than relying on the local 10.0 fallback.

If you want to tighten coupling to config after that change, you can simplify to min_duration = settings["min_duration"] and drop the hardcoded default.
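The gate described above can be sketched roughly as below. The function name, the settings dict shape, and the structured-reason keys are assumptions for illustration, not the actual conversation_utils implementation; the 10.0 fallback mirrors the hardcoded default mentioned in the review.

```python
def check_min_duration(speech_duration: float, word_count: int, settings: dict) -> dict:
    """Reject speech segments shorter than the configured minimum duration.

    Returns a structured result that preserves word_count and duration
    in the negative case for downstream debugging/metrics.
    """
    # Local fallback until get_speech_detection_settings() exposes min_duration
    min_duration = settings.get("min_duration", 10.0)
    if speech_duration < min_duration:
        return {
            "has_speech": False,
            "reason": f"speech too short: {speech_duration:.1f}s < {min_duration}s",
            "word_count": word_count,
            "duration": speech_duration,
        }
    return {"has_speech": True, "word_count": word_count, "duration": speech_duration}
```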

.gitmodules (1)

1-3: Consider pinning the submodule to a specific branch or commit.

Without a branch directive, the submodule tracks the remote's default branch. For reproducible builds and avoiding unexpected breaking changes, consider specifying a branch or documenting which commit/tag to use.

 [submodule "extras/mycelia"]
 	path = extras/mycelia
 	url = https://github.com/mycelia-tech/mycelia
+	branch = main
backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py (3)

79-82: Resource leak: MongoClient connections are not closed.

MongoClient is instantiated in multiple methods (_create_mycelia_api_key, sync_user_to_mycelia, sync_admin_user) but never closed. Consider:

  1. Creating a shared client in __init__ and closing it in a cleanup method
  2. Using context managers (with MongoClient(...) as client:)
 def __init__(self):
     """Initialize the sync service."""
     # MongoDB configuration
     self.mongo_url = os.getenv("MONGODB_URI", os.getenv("MONGO_URL", "mongodb://localhost:27017"))
     # ... existing code ...
+    self._client = None
+
+def _get_client(self) -> MongoClient:
+    """Get or create MongoDB client."""
+    if self._client is None:
+        self._client = MongoClient(self.mongo_url)
+    return self._client
+
+def close(self):
+    """Close MongoDB connection."""
+    if self._client:
+        self._client.close()
+        self._client = None

165-165: Replace ambiguous Unicode character.

The (INFORMATION SOURCE) character may render inconsistently. Consider using a standard emoji or ASCII.

-                logger.info(f"ℹ️  {user_email} already synced with Mycelia")
+                logger.info(f"ℹ️ {user_email} already synced with Mycelia")

Or use a different indicator like [INFO] or 📋.


103-109: Consider documenting the full-access policy.

The wildcard policy grants complete access to all resources. This is likely intentional for admin users, but consider adding a comment explaining the scope and any plans for more granular policies.

             "policies": [
                 {
-                    "resource": "**",
-                    "action": "*",
-                    "effect": "allow"
+                    "resource": "**",  # Full access to all resources
+                    "action": "*",      # All actions permitted
+                    "effect": "allow"   # TODO: Consider granular policies for non-admin users
                 }
             ],
backends/advanced/src/advanced_omi_backend/services/memory/providers/__init__.py (1)

1-27: Well-structured package initialization.

The module correctly centralizes public API exports for memory providers. Imports are properly placed at the top of the file, adhering to the coding guidelines.

Consider sorting __all__ alphabetically for consistency with isort conventions:

 __all__ = [
     "FriendLiteMemoryService",
+    "MCPClient",
+    "MCPError",
+    "MyceliaMemoryService",
     "OpenMemoryMCPService",
-    "MyceliaMemoryService",
     "OpenAIProvider",
     "QdrantVectorStore",
-    "MCPClient",
-    "MCPError",
 ]
backends/advanced/src/advanced_omi_backend/services/memory/base.py (1)

249-268: Consider moving __init__ to the top of the class.

The __init__ method is conventionally placed at the beginning of a class definition. While Python allows this placement, it reduces readability as developers typically expect constructors near the top.

 class MemoryServiceBase(ABC):
     """Abstract base class defining the core memory service interface.
     ...
     """
     
+    def __init__(self):
+        """Initialize base memory service state.
+        
+        Subclasses should call super().__init__() in their constructors.
+        """
+        self._initialized = False
+    
+    async def _ensure_initialized(self) -> None:
+        """Ensure the memory service is initialized before use.
+        ...
+        """
+        if not self._initialized:
+            await self.initialize()
+
     @abstractmethod
     async def initialize(self) -> None:
         ...
backends/advanced/src/advanced_omi_backend/services/memory/providers/openmemory_mcp.py (1)

321-364: Consider using logging.exception for better debugging.

When catching exceptions, logging.exception automatically includes the stack trace, which is helpful for debugging production issues.

         except Exception as e:
-            memory_logger.error(f"Failed to update memory: {e}")
+            memory_logger.exception(f"Failed to update memory: {e}")
             return False
backends/advanced/scripts/create_mycelia_api_key.py (1)

55-68: Add error handling for MongoDB operations.

Per coding guidelines, errors should be raised explicitly rather than allowing silent failures. MongoDB connection and operations could fail, crashing the script ungracefully.

     # Connect to MongoDB
-    client = MongoClient(MONGO_URL)
-    db = client[MYCELIA_DB]
-    api_keys = db["api_keys"]
+    try:
+        client = MongoClient(MONGO_URL, serverSelectionTimeoutMS=5000)
+        # Test connection
+        client.admin.command('ping')
+        db = client[MYCELIA_DB]
+        api_keys = db["api_keys"]
+    except Exception as e:
+        print(f"❌ Failed to connect to MongoDB at {MONGO_URL}: {e}")
+        return 1
backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py (12)

163-164: Consider using dict.get() for cleaner code.

-        if "icon" in obj and obj["icon"]:
-            metadata["icon"] = obj["icon"]
+        if icon := obj.get("icon"):
+            metadata["icon"] = icon

354-357: Use logging.exception for better debugging.

When logging errors, use logging.exception to capture stack traces automatically:

         except Exception as e:
-            memory_logger.error(f"Failed to extract temporal data via OpenAI: {e}")
+            memory_logger.exception("Failed to extract temporal data via OpenAI")
             # Don't fail the entire memory creation if temporal extraction fails
             return None

425-425: Use unpacking for list concatenation.

-                        "aliases": [source_id, client_id] + temporal_entity.entities,
+                        "aliases": [source_id, client_id, *temporal_entity.entities],

527-529: Use logging.exception for error logging.

         except Exception as e:
-            memory_logger.error(f"Failed to search memories via Mycelia: {e}")
+            memory_logger.exception("Failed to search memories via Mycelia")
             return []

565-567: Use logging.exception for error logging.

         except Exception as e:
-            memory_logger.error(f"Failed to get memories via Mycelia: {e}")
+            memory_logger.exception("Failed to get memories via Mycelia")
             return []

601-603: Use logging.exception for error logging.

         except Exception as e:
-            memory_logger.error(f"Failed to count memories via Mycelia: {e}")
+            memory_logger.exception("Failed to count memories via Mycelia")
             return None

640-642: Use logging.exception for error logging.

         except Exception as e:
-            memory_logger.error(f"Failed to get memory via Mycelia: {e}")
+            memory_logger.exception("Failed to get memory via Mycelia")
             return None

721-723: Use logging.exception for error logging.

         except Exception as e:
-            memory_logger.error(f"Failed to update memory via Mycelia: {e}")
+            memory_logger.exception("Failed to update memory via Mycelia")
             return False

760-762: Use logging.exception for error logging.

         except Exception as e:
-            memory_logger.error(f"Failed to delete memory via Mycelia: {e}")
+            memory_logger.exception("Failed to delete memory via Mycelia")
             return False

795-797: Use logging.exception for error logging.

         except Exception as e:
-            memory_logger.error(f"Failed to delete user memories via Mycelia: {e}")
+            memory_logger.exception("Failed to delete user memories via Mycelia")
             return 0

816-818: Use logging.exception for error logging.

         except Exception as e:
-            memory_logger.error(f"Mycelia connection test failed: {e}")
+            memory_logger.exception("Mycelia connection test failed")
             return False

820-826: Consider implementing async cleanup.

The shutdown method logs a warning about needing async context for client cleanup but doesn't provide an async alternative. Consider adding an async ashutdown method for proper resource cleanup.

async def ashutdown(self) -> None:
    """Async shutdown of Mycelia client and cleanup resources."""
    memory_logger.info("Shutting down Mycelia memory service")
    if self._client:
        await self._client.aclose()
    self._initialized = False
backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py (1)

183-187: Avoid defensive hasattr() checks.

As per coding guidelines: "Understand data structures and research input/response or class structure instead of adding defensive hasattr() checks in Python."

The memory_entry is documented as a MemoryEntry object from the base module. Use the type properly:

                         memory_entry = await memory_service.get_memory(memory_id, user_id)
                         if memory_entry:
-                            # memory_entry is a MemoryEntry object, not a dict
-                            memory_text = memory_entry.content if hasattr(memory_entry, 'content') else str(memory_entry)
+                            memory_text = memory_entry.content
                             memory_details.append({
                                 "memory_id": memory_id,
                                 "text": memory_text[:200]  # First 200 chars
                             })
backends/advanced/src/advanced_omi_backend/app_factory.py (1)

76-82: Improve error logging pattern.

Use logging.exception to capture full stack traces for debugging:

     # Sync admin user with Mycelia OAuth (if using Mycelia memory provider)
     try:
         from advanced_omi_backend.services.mycelia_sync import sync_admin_on_startup
         await sync_admin_on_startup()
     except Exception as e:
-        application_logger.error(f"Failed to sync admin with Mycelia OAuth: {e}")
+        application_logger.exception("Failed to sync admin with Mycelia OAuth")
         # Don't raise here as this is not critical for startup
backends/advanced/src/advanced_omi_backend/services/memory/service_factory.py (2)

56-67: LGTM with minor improvement: Chain exceptions for better debugging.

The Mycelia provider branch is correctly implemented and follows the established pattern. The design to pass the full config (to access llm_config) is appropriate per the service requirements.

However, consider chaining the exception to preserve the original traceback for easier debugging.

         try:
             from .providers.mycelia import MyceliaMemoryService
         except ImportError as e:
-            raise RuntimeError(f"Mycelia memory service not available: {e}")
+            raise RuntimeError(f"Mycelia memory service not available: {e}") from e

The same improvement could be applied to line 49 for OpenMemory MCP. Based on static analysis hint B904.


157-161: Consider adding Mycelia to provider detection in get_service_info().

The get_service_info() function detects providers by class name but doesn't include a check for Mycelia. This would cause memory_provider to remain None when using the Mycelia provider.

         # Try to determine provider from service type
         if "OpenMemoryMCP" in info["service_type"]:
             info["memory_provider"] = "openmemory_mcp"
         elif "FriendLite" in info["service_type"] or "MemoryService" in info["service_type"]:
             info["memory_provider"] = "friend_lite"
+        elif "Mycelia" in info["service_type"]:
+            info["memory_provider"] = "mycelia"
backends/advanced/webui/src/pages/System.tsx (1)

144-166: Consider clearing timeout on subsequent saves or component unmount.

The setTimeout in saveMemoryProvider (line 147) is not cleared if the user quickly saves again or if the component unmounts. This could cause state updates on unmounted components.

Consider using a useRef to store the timeout ID and clear it in cleanup:

const providerMessageTimeoutRef = useRef<number | null>(null)

const saveMemoryProvider = async () => {
  if (providerMessageTimeoutRef.current) {
    clearTimeout(providerMessageTimeoutRef.current)
  }
  // ... rest of logic
  providerMessageTimeoutRef.current = setTimeout(() => setProviderMessage(''), 3000)
}

useEffect(() => {
  return () => {
    if (providerMessageTimeoutRef.current) {
      clearTimeout(providerMessageTimeoutRef.current)
    }
  }
}, [])
backends/advanced/src/advanced_omi_backend/routers/modules/memory_routes.py (1)

10-11: Remove unused import Body.

Body is imported but not used in this file. The AddMemoryRequest model handles the request body parsing via Pydantic.

-from fastapi import APIRouter, Depends, Query, Body
+from fastapi import APIRouter, Depends, Query
backends/advanced/src/advanced_omi_backend/routers/modules/system_routes.py (1)

139-145: Clarify docstring - provider change doesn't automatically restart services.

The docstring says "Set memory provider and restart backend services" but the controller implementation only updates the .env file and returns requires_restart: true. The actual restart must be done manually.

The static analysis hints about unused current_user are false positives - the parameter is required to trigger the Depends(current_superuser) authentication check, which is a standard FastAPI pattern.

 @router.post("/admin/memory/provider")
 async def set_memory_provider(
     provider: str = Body(..., embed=True),
     current_user: User = Depends(current_superuser)
 ):
-    """Set memory provider and restart backend services. Admin only."""
+    """Set memory provider configuration. Requires manual service restart. Admin only."""
     return await system_controller.set_memory_provider(provider)
backends/advanced/src/advanced_omi_backend/services/memory/providers/friend_lite.py (2)

293-312: Unused user_id and user_email parameters should be acknowledged.

The delete_memory method signature was extended for Mycelia provider compatibility, but these parameters are unused in this implementation. While this is intentional for interface consistency (as seen in openmemory_mcp.py which also ignores them), consider adding a brief comment or using a leading underscore to indicate intentional non-use, which also silences the linter.

-    async def delete_memory(self, memory_id: str, user_id: Optional[str] = None, user_email: Optional[str] = None) -> bool:
+    async def delete_memory(self, memory_id: str, user_id: Optional[str] = None, user_email: Optional[str] = None) -> bool:  # noqa: ARG002
         """Delete a specific memory by ID.
         
         Args:
             memory_id: Unique identifier of the memory to delete
+            user_id: Unused in this provider (required for Mycelia)
+            user_email: Unused in this provider (required for Mycelia)
             
         Returns:
             True if successfully deleted, False otherwise
         """

225-227: Inconsistent initialization pattern across methods.

The add_memory method (line 132) uses await self._ensure_initialized(), while search_memories, get_all_memories, count_memories, delete_memory, and delete_all_user_memories still use the direct if not self._initialized: await self.initialize() pattern. Consider using _ensure_initialized() consistently across all methods for maintainability.

     async def search_memories(self, query: str, user_id: str, limit: int = 10, score_threshold: float = 0.0) -> List[MemoryEntry]:
-        if not self._initialized:
-            await self.initialize()
+        await self._ensure_initialized()

Similar changes would apply to get_all_memories, count_memories, delete_memory, and delete_all_user_memories.

backends/advanced/webui/src/pages/ReactGanttTimeline.tsx (2)

179-183: Missing dependency in useEffect may cause stale closure.

The useEffect hook references fetchMemoriesWithTimeRanges but doesn't include it in the dependency array. This can lead to stale closure issues where the function captures outdated values. Consider wrapping fetchMemoriesWithTimeRanges in useCallback or moving the fetch logic inside the effect.

Option 1 - Move fetch inside useEffect:

   useEffect(() => {
+    const fetchMemoriesWithTimeRanges = async () => {
+      setLoading(true)
+      setError(null)
+      try {
+        const response = await memoriesApi.getAll()
+        const memoriesData = response.data.memories || response.data || []
+        const memoriesWithTimeRanges = memoriesData.filter(
+          (memory: MemoryWithTimeRange) =>
+            memory.metadata?.timeRanges &&
+            memory.metadata.timeRanges.length > 0
+        )
+        if (memoriesWithTimeRanges.length === 0) {
+          setUseDemoData(true)
+          setMemories(getDemoMemories())
+          setError('No memories with time ranges found. Showing demo data.')
+        } else {
+          setMemories(memoriesWithTimeRanges)
+          setUseDemoData(false)
+        }
+      } catch (err) {
+        console.error('Failed to fetch memories:', err)
+        setError('Failed to load memories. Showing demo data.')
+        setUseDemoData(true)
+        setMemories(getDemoMemories())
+      } finally {
+        setLoading(false)
+      }
+    }
+
     if (user) {
       fetchMemoriesWithTimeRanges()
     }
   }, [user])

Option 2 - Use useCallback for fetchMemoriesWithTimeRanges and add it to dependencies.


227-235: Redundant data transformation.

The data variable on lines 229-235 maps tasks to objects with identical properties (id, name, start, end, color). Since convertToReactGanttFormat already returns objects with these exact properties, this transformation is unnecessary.

   const tasks = convertToReactGanttFormat(memories)

-  const data = tasks.map((task) => ({
-    id: task.id,
-    name: task.name,
-    start: task.start,
-    end: task.end,
-    color: task.color
-  }))
-
   return (
     ...
             <Timeline
-              data={data}
+              data={tasks}
             />
backends/advanced/src/advanced_omi_backend/controllers/system_controller.py (1)

470-488: Approve with note on provider list duplication.

The function correctly returns the current provider configuration. However, the available_providers list is duplicated in set_memory_provider (line 496). Consider extracting this to a constant to avoid inconsistency.

+# At module level or in config
+VALID_MEMORY_PROVIDERS = ["friend_lite", "openmemory_mcp", "mycelia"]
+
 async def get_memory_provider():
     """Get current memory provider configuration."""
     try:
         current_provider = os.getenv("MEMORY_PROVIDER", "friend_lite").lower()
-        available_providers = ["friend_lite", "openmemory_mcp", "mycelia"]
         return {
             "current_provider": current_provider,
-            "available_providers": available_providers,
+            "available_providers": VALID_MEMORY_PROVIDERS,
             "status": "success"
         }
backends/advanced/webui/src/pages/Memories.tsx (1)

8-19: Consider extracting the Memory interface to a shared types file.

The Memory interface is duplicated in both Memories.tsx and MemoryDetail.tsx, with MemoryDetail.tsx having more detailed metadata typing. Consolidating this into a shared types file (e.g., src/types/memory.ts) would prevent drift and improve maintainability.

backends/advanced/webui/src/pages/MemoryDetail.tsx (1)

89-91: Missing loadMemory in useEffect dependency array.

The loadMemory function is called in the effect but not listed in the dependency array. While this works because loadMemory captures id and user?.id from the closure, it could cause stale closure issues if the function logic changes.

Wrap loadMemory in useCallback and add it to the dependency array:

+import { useState, useEffect, useCallback } from 'react'
 ...
-  const loadMemory = async () => {
+  const loadMemory = useCallback(async () => {
     if (!user?.id || !id) {
       ...
     }
     ...
-  }
+  }, [id, user?.id])

   useEffect(() => {
     loadMemory()
-  }, [id, user?.id])
+  }, [loadMemory])
backends/advanced/webui/src/pages/MyceliaTimeline.tsx (1)

142-148: Consider adding loadMemories to useEffect dependencies or using useCallback.

The loadMemories function is called inside the effect but isn't in the dependency array. While this works because user?.id is in the deps, React ESLint would flag this. Consider wrapping loadMemories in useCallback with [user?.id] dependencies for clarity.

backends/advanced/src/advanced_omi_backend/controllers/memory_controller.py (1)

201-205: Use explicit conversion flag for f-string (per static analysis).

The static analysis tool (Ruff RUF010) suggests using an explicit conversion flag:

-        return JSONResponse(
-            status_code=500, content={"success": False, "message": f"Error adding memory: {str(e)}"}
-        )
+        return JSONResponse(
+            status_code=500, content={"success": False, "message": f"Error adding memory: {e!s}"}
+        )
backends/advanced/src/advanced_omi_backend/services/memory/providers/mcp_client.py (1)

436-459: Update docstring to document new parameters.

The user_id and user_email parameters were added for interface consistency with other providers (e.g., OpenMemoryMCPProvider, QdrantVectorStore), but they're not documented in the docstring.

     async def delete_memory(self, memory_id: str, user_id: Optional[str] = None, user_email: Optional[str] = None) -> bool:
         """Delete a specific memory by ID.
         
         Args:
             memory_id: ID of the memory to delete
-            
+            user_id: Optional user identifier (unused, for interface compatibility)
+            user_email: Optional user email (unused, for interface compatibility)
+
         Returns:
             True if deletion succeeded, False otherwise
         """
backends/advanced/webui/src/pages/FrappeGanttTimeline.tsx (3)

214-220: Add loadMemories to useEffect dependency array or use useCallback.

The loadMemories function is called inside this effect but not listed in the dependency array. ESLint's exhaustive-deps rule would flag this, and because loadMemories is not a dependency, the effect will not re-run if the function is ever redefined to capture new values.

Wrap loadMemories with useCallback:

+import { useState, useEffect, useRef, useCallback } from 'react'
...
-  const loadMemories = async () => {
+  const loadMemories = useCallback(async () => {
     if (!user?.id) return
     ...
-  }
+  }, [user?.id])

   useEffect(() => {
     if (!useDemoData) {
       loadMemories()
     } else {
       setMemories(getDemoMemories())
     }
-  }, [user?.id, useDemoData])
+  }, [loadMemories, useDemoData])

174-176: Consider using unknown instead of any for caught errors.

Using unknown is safer and forces explicit type checking before accessing properties.

-    } catch (err: any) {
+    } catch (err: unknown) {
       console.error('❌ Timeline loading error:', err)
-      setError(err.message || 'Failed to load timeline data')
+      setError(err instanceof Error ? err.message : 'Failed to load timeline data')

439-477: Scroll position preservation relies on setTimeout(0) timing.

The scroll position restoration uses setTimeout(0) to defer until after React's state update. This works but is timing-dependent. Consider using a useEffect that reacts to zoomScale changes for more reliable behavior.

backends/advanced/scripts/sync_friendlite_mycelia.py (4)

51-56: MongoDB client connection is never closed.

The MongoClient is created in __init__ but never explicitly closed. While Python's garbage collector will eventually clean it up, it's better practice to close connections explicitly, especially for scripts that may be run repeatedly.

Add a close method and use it in main():

+    def close(self):
+        """Close MongoDB connection."""
+        if self.client:
+            self.client.close()
+
     def _hash_api_key_with_salt(self, api_key: str, salt: bytes) -> str:

Then in main():

     # Create sync service
     sync = FriendLiteMyceliaSync(mongo_url, mycelia_db, friendlite_db)
+    try:
         # Execute requested action
         if args.status:
             sync.display_sync_status()
         ...
+    finally:
+        sync.close()

102-103: datetime.utcnow() is deprecated in Python 3.12+.

Use datetime.now(timezone.utc) for timezone-aware UTC timestamps.

Add to imports:

-from datetime import datetime
+from datetime import datetime, timezone

Then update the usages:

-            "createdAt": datetime.utcnow(),
+            "createdAt": datetime.now(timezone.utc),

And similarly at line 122:

-                        "created_at": datetime.utcnow(),
+                        "created_at": datetime.now(timezone.utc),

53-56: Remove extraneous f prefix from strings without placeholders.

Several f-strings don't contain any placeholders (lines 53, 163, 190, 256, 297).

Example fix for line 53:

-        print(f"📊 Connected to MongoDB:")
+        print("📊 Connected to MongoDB:")

Apply similar fixes to lines 163, 190, 256, and 297.


58-63: Hash function is duplicated from mycelia_sync.py.

This implementation matches _hash_api_key_with_salt in backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py. Consider importing from a shared utility to avoid drift.

For a standalone script, the duplication is acceptable. However, if both locations need to stay in sync, consider extracting to a shared module.
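
A minimal sketch of such a shared helper (the module path and name are hypothetical), which both the script and mycelia_sync.py could then import:

```python
# e.g. advanced_omi_backend/utils/mycelia_hash.py (hypothetical module path)
import base64
import hashlib


def hash_api_key_with_salt(api_key: str, salt: bytes) -> str:
    """SHA256(salt + api_key), base64-encoded, matching Mycelia's hashApiKey."""
    h = hashlib.sha256()
    h.update(salt)
    h.update(api_key.encode("utf-8"))
    return base64.b64encode(h.digest()).decode("utf-8")
```

With a single definition, a change to the hashing scheme cannot drift between the two call sites.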

backends/advanced/src/advanced_omi_backend/workers/conversation_jobs.py (1)

283-287: Move check_job_alive import out of the hot loop and to module top

The zombie check itself is good, but importing check_job_alive on every iteration is unnecessary and violates the “imports at top of file” guideline. You can import once at module scope and just call it here.

@@
-    while True:
-        # Check if job still exists in Redis (detect zombie state)
-        from advanced_omi_backend.utils.job_utils import check_job_alive
-        if not await check_job_alive(redis_client, current_job):
-            break
+    while True:
+        # Check if job still exists in Redis (detect zombie state)
+        if not await check_job_alive(redis_client, current_job):
+            break

And near the existing imports at the top of this file:

-from advanced_omi_backend.controllers.queue_controller import start_post_conversation_jobs
+from advanced_omi_backend.controllers.queue_controller import start_post_conversation_jobs
+from advanced_omi_backend.utils.job_utils import check_job_alive

As per coding guidelines, imports should live at the top of the module.

backends/advanced/src/advanced_omi_backend/workers/audio_jobs.py (1)

263-274: Use module-level imports for get_current_job/check_job_alive instead of in-function

The liveness guard is correct, but importing get_current_job and check_job_alive inside the function is unnecessary. Moving these to the top of the module keeps imports centralized and avoids repeating this pattern elsewhere.

@@
-    # Get current job for zombie detection
-    from rq import get_current_job
-    from advanced_omi_backend.utils.job_utils import check_job_alive
-    current_job = get_current_job()
+    # Get current job for zombie detection
+    current_job = get_current_job()

And at the top of the file, alongside existing imports:

-from advanced_omi_backend.models.job import JobPriority, async_job
+from advanced_omi_backend.models.job import JobPriority, async_job
+from rq import get_current_job
+from advanced_omi_backend.utils.job_utils import check_job_alive

You can also reuse the module-level get_current_job in process_cropping_job to remove its local import. As per coding guidelines, keep imports at the top of the file.

backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py (2)

196-259: Early no‑speech guard is solid; refine imports and exception handling in cancellation block

The early analyze_speech check and mark_conversation_deleted call are a good way to short‑circuit empty/noise conversations and avoid wasting work in downstream jobs. The dependent‑job cancellation also matches the crop_*/speaker_*/memory_*/title_summary_* naming pattern.

A few refinements would make this more maintainable and lint‑friendly:

  1. Move imports to module level and remove duplicates

    • analyze_speech / mark_conversation_deleted
    • get_current_job / Job
    • The inner from advanced_omi_backend.controllers.queue_controller import redis_conn is redundant, since redis_conn is already imported at the top of the file.

    Centralizing these at the top avoids repeated imports and aligns with the “imports at top of file” guideline.

  2. Tighten the broad except Exception blocks

    • Inside the for job_id in job_patterns loop you catch any Exception when fetching/cancelling jobs.
    • The outer try/except Exception as cancel_error also catches everything.

    Narrowing these to the specific, expected failure modes (e.g., “job not found” vs. genuine Redis or RQ failures) will satisfy tools like Ruff (BLE001) and make real errors more visible. You already use a more specific pattern with NoSuchJobError in stream_speech_detection_job; mirroring that approach here would help.

  3. current_job gate is not strictly needed

    The cancellation logic only uses static job IDs plus redis_conn; it doesn’t reference current_job. You can either drop the if current_job: guard or make use of current_job (e.g., to log which job initiated the cancellation).


501-506: Avoid per‑iteration import of check_job_alive inside stream_speech_detection_job loop

The zombie‑detection guard in the speech‑detection loop is good, but importing check_job_alive on every iteration is unnecessary and breaks the “imports at top of file” rule.

You can import once at module scope and just call it in the loop:

@@
-    # Main loop: Listen for speech
-    while True:
-        # Check if job still exists in Redis (detect zombie state)
-        from advanced_omi_backend.utils.job_utils import check_job_alive
-        if not await check_job_alive(redis_client, current_job):
-            break
+    # Main loop: Listen for speech
+    while True:
+        # Check if job still exists in Redis (detect zombie state)
+        if not await check_job_alive(redis_client, current_job):
+            break

And at the top of this module:

-from advanced_omi_backend.models.job import JobPriority, BaseRQJob, async_job
+from advanced_omi_backend.models.job import JobPriority, BaseRQJob, async_job
+from advanced_omi_backend.utils.job_utils import check_job_alive

This keeps the hot loop lean and consistent with the shared job‑utils usage. As per coding guidelines, imports should live at the top of the file.

Comment on lines +20 to +27
def hash_api_key_with_salt(api_key: str, salt: bytes) -> str:
    """Hash API key with salt (matches Mycelia's hashApiKey function)."""
    # SHA256(salt + apiKey) in base64
    import base64
    h = hashlib.sha256()
    h.update(salt)
    h.update(api_key.encode('utf-8'))
    return base64.b64encode(h.digest()).decode('utf-8')  # Use base64 like Mycelia

🛠️ Refactor suggestion | 🟠 Major

Move base64 import to the top of the file.

Per coding guidelines, all imports must be at the top of the file. The base64 import on line 23 violates this requirement.

 import os
 import sys
 import secrets
 import hashlib
+import base64
 from pymongo import MongoClient
 from bson import ObjectId
 from datetime import datetime
 ...
 
 def hash_api_key_with_salt(api_key: str, salt: bytes) -> str:
     """Hash API key with salt (matches Mycelia's hashApiKey function)."""
     # SHA256(salt + apiKey) in base64
-    import base64
     h = hashlib.sha256()

As per coding guidelines, imports must never be in the middle of functions.


Comment on lines +70 to +90
    # Create API key document (matches Mycelia's format)
    import base64
    api_key_doc = {
        "hashedKey": hashed_key,  # Note: hashedKey, not hash!
        "salt": base64.b64encode(salt).decode('utf-8'),  # Store as base64 like Mycelia
        "owner": USER_ID,
        "name": "Friend-Lite Integration",
        "policies": [
            {
                "resource": "**",
                "action": "*",
                "effect": "allow"
            }
        ],
        "openPrefix": open_prefix,
        "createdAt": datetime.now(),
        "isActive": True,
    }

    # Insert into database
    result = api_keys.insert_one(api_key_doc)

🛠️ Refactor suggestion | 🟠 Major

Move base64 import to the top (duplicate import).

This is a duplicate import of base64 that should also be consolidated at the top of the file.

     # Create API key document (matches Mycelia's format)
-    import base64
     api_key_doc = {

As per coding guidelines, imports must never be in the middle of functions.


Comment on lines +101 to +133
def generate_jwt_for_user(user_id: str, user_email: str) -> str:
    """Generate a JWT token for a user to authenticate with external services.

    This function creates a JWT token that can be used to authenticate with
    services that share the same AUTH_SECRET_KEY, such as Mycelia.

    Args:
        user_id: User's unique identifier (MongoDB ObjectId as string)
        user_email: User's email address

    Returns:
        JWT token string valid for 24 hours

    Example:
        >>> token = generate_jwt_for_user("507f1f77bcf86cd799439011", "user@example.com")
        >>> # Use token to call Mycelia API
    """
    from datetime import datetime, timedelta
    import jwt

    # Create JWT payload matching Friend-Lite's standard format
    payload = {
        "sub": user_id,  # Subject = user ID
        "email": user_email,
        "iss": "friend-lite",  # Issuer
        "aud": "friend-lite",  # Audience
        "exp": datetime.utcnow() + timedelta(hours=24),  # 24 hour expiration
        "iat": datetime.utcnow(),  # Issued at
    }

    # Sign the token with the same secret key
    token = jwt.encode(payload, SECRET_KEY, algorithm="HS256")
    return token

🛠️ Refactor suggestion | 🟠 Major

Move inline imports to the top of the module and keep JWT config DRY

This helper looks good overall, but there are a couple of fixes / cleanups worth doing:

  • Inline imports at lines 118–119 violate the project guideline that all imports live at the top of the file. Please move datetime, timedelta, and jwt into the module import section (grouped stdlib vs third‑party) and drop the in‑function imports.

  • To keep JWT configuration consistent, consider extracting a single lifetime constant (e.g. JWT_LIFETIME_SECONDS = 86400) and using it both in get_jwt_strategy() and here (timedelta(seconds=JWT_LIFETIME_SECONDS)), so changes to token lifetime can’t drift between paths.

  • It would also be useful to add a small unit test that decodes the generated token and asserts sub, email, iss, aud, and an ~24h exp relative to iat, to lock in the contract for Mycelia/external services.

Suggested change inside this function (after moving imports to the top of the file):

 def generate_jwt_for_user(user_id: str, user_email: str) -> str:
@@
-    from datetime import datetime, timedelta
-    import jwt
-
     # Create JWT payload matching Friend-Lite's standard format
     payload = {
         "sub": user_id,  # Subject = user ID
         "email": user_email,
         "iss": "friend-lite",  # Issuer
         "aud": "friend-lite",  # Audience
         "exp": datetime.utcnow() + timedelta(hours=24),  # 24 hour expiration
         "iat": datetime.utcnow(),  # Issued at
     }

And at the top of the file, ensure imports follow the standard/third‑party/local grouping, e.g.:

import logging
import os
import re
from datetime import datetime, timedelta
from typing import Literal, Optional, overload

from beanie import PydanticObjectId
from dotenv import load_dotenv
from fastapi import Depends, Request
from fastapi_users import BaseUserManager, FastAPIUsers
from fastapi_users.authentication import (
    AuthenticationBackend,
    BearerTransport,
    CookieTransport,
    JWTStrategy,
)
import jwt

from advanced_omi_backend.users import User, UserCreate, get_user_db

Comment on lines 31 to 36
 # Default speech detection settings
 DEFAULT_SPEECH_DETECTION_SETTINGS = {
-    "min_words": 5,  # Minimum words to create conversation
-    "min_confidence": 0.5,  # Word confidence threshold (unified)
-    "min_duration": 2.0,  # Minimum speech duration (seconds)
+    "min_words": 10,  # Minimum words to create conversation (increased from 5)
+    "min_confidence": 0.7,  # Word confidence threshold (increased from 0.5)
+    "min_duration": 10.0,  # Minimum speech duration in seconds (increased from 2.0)
 }
Contributor

⚠️ Potential issue | 🟠 Major

min_duration env var is not wired into get_speech_detection_settings

You’ve added min_duration to DEFAULT_SPEECH_DETECTION_SETTINGS and documented SPEECH_DETECTION_MIN_DURATION in the speech analysis docstring, but get_speech_detection_settings() still only returns min_words and min_confidence. As a result:

  • SPEECH_DETECTION_MIN_DURATION cannot be tuned via environment.
  • analyze_speech() falls back to its hardcoded 10.0 default instead of the central config value.

To keep a single source of truth and make the new knob effective, get_speech_detection_settings() should expose min_duration and read the env var.

 def get_speech_detection_settings():
     """Get speech detection settings from environment or defaults."""
 
-    return {
-        "min_words": int(os.getenv("SPEECH_DETECTION_MIN_WORDS", DEFAULT_SPEECH_DETECTION_SETTINGS["min_words"])),
-        "min_confidence": float(os.getenv("SPEECH_DETECTION_MIN_CONFIDENCE", DEFAULT_SPEECH_DETECTION_SETTINGS["min_confidence"])),
-    }
+    return {
+        "min_words": int(
+            os.getenv(
+                "SPEECH_DETECTION_MIN_WORDS",
+                DEFAULT_SPEECH_DETECTION_SETTINGS["min_words"],
+            )
+        ),
+        "min_confidence": float(
+            os.getenv(
+                "SPEECH_DETECTION_MIN_CONFIDENCE",
+                DEFAULT_SPEECH_DETECTION_SETTINGS["min_confidence"],
+            )
+        ),
+        "min_duration": float(
+            os.getenv(
+                "SPEECH_DETECTION_MIN_DURATION",
+                DEFAULT_SPEECH_DETECTION_SETTINGS["min_duration"],
+            )
+        ),
+    }

(After this, the settings.get("min_duration", 10.0) call in analyze_speech can safely rely on the config-driven value.)

Also applies to: 134-140
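Under that suggestion, the settings helper reduces to one env-var read per knob, each falling back to the central default. This is a minimal self-contained sketch; the surrounding module is simplified away and only the env var names follow the review.

```python
import os

# Central defaults: the single source of truth for speech detection tuning.
DEFAULT_SPEECH_DETECTION_SETTINGS = {
    "min_words": 10,
    "min_confidence": 0.7,
    "min_duration": 10.0,
}


def get_speech_detection_settings() -> dict:
    """Read each knob from the environment, falling back to the defaults."""
    return {
        "min_words": int(
            os.getenv("SPEECH_DETECTION_MIN_WORDS", DEFAULT_SPEECH_DETECTION_SETTINGS["min_words"])
        ),
        "min_confidence": float(
            os.getenv("SPEECH_DETECTION_MIN_CONFIDENCE", DEFAULT_SPEECH_DETECTION_SETTINGS["min_confidence"])
        ),
        "min_duration": float(
            os.getenv("SPEECH_DETECTION_MIN_DURATION", DEFAULT_SPEECH_DETECTION_SETTINGS["min_duration"])
        ),
    }


# With the env var set, the new knob is actually honored:
os.environ["SPEECH_DETECTION_MIN_DURATION"] = "4.5"
settings = get_speech_detection_settings()
assert settings["min_duration"] == 4.5
```

With this in place, `analyze_speech()` can drop its hardcoded `10.0` fallback and trust the config-driven value.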

Comment on lines +34 to +35
        # Convert MemoryEntry objects to dicts for JSON serialization
        memories_dicts = [mem.to_dict() if hasattr(mem, 'to_dict') else mem for mem in memories]
Contributor

🛠️ Refactor suggestion | 🟠 Major

Avoid defensive hasattr() checks per coding guidelines.

The coding guidelines state to "understand data structures and research input/response or class structure instead of adding defensive hasattr() checks." Since MemoryEntry is a known dataclass from services/memory/base.py that has to_dict(), you can call it directly.

-        # Convert MemoryEntry objects to dicts for JSON serialization
-        memories_dicts = [mem.to_dict() if hasattr(mem, 'to_dict') else mem for mem in memories]
+        # Convert MemoryEntry objects to dicts for JSON serialization
+        memories_dicts = [mem.to_dict() for mem in memories]

As per coding guidelines, defensive hasattr() checks should be avoided.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
         # Convert MemoryEntry objects to dicts for JSON serialization
-        memories_dicts = [mem.to_dict() if hasattr(mem, 'to_dict') else mem for mem in memories]
+        memories_dicts = [mem.to_dict() for mem in memories]
🤖 Prompt for AI Agents
In backends/advanced/src/advanced_omi_backend/controllers/memory_controller.py
around lines 34-35, remove the defensive hasattr() check and call
MemoryEntry.to_dict() directly since MemoryEntry is a known dataclass; replace
the comprehension with one that calls mem.to_dict() for each memory (and adjust
any imports or type hints if needed to reflect that memories are MemoryEntry
instances).
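The shape being relied on is simply a dataclass whose `to_dict()` always exists. A simplified stand-in (not the real `services/memory/base.py` definition) shows why the defensive branch is dead code:

```python
from dataclasses import asdict, dataclass, field


@dataclass
class MemoryEntry:
    """Simplified stand-in for the MemoryEntry dataclass in services/memory/base.py."""
    id: str
    text: str
    metadata: dict = field(default_factory=dict)

    def to_dict(self) -> dict:
        # dataclasses.asdict recursively converts nested dataclasses too.
        return asdict(self)


memories = [MemoryEntry(id="m1", text="hello"), MemoryEntry(id="m2", text="world")]

# Every element is a MemoryEntry, so the hasattr() guard never fires:
memories_dicts = [mem.to_dict() for mem in memories]
assert memories_dicts[0] == {"id": "m1", "text": "hello", "metadata": {}}
```

If the list could ever mix raw dicts with entries, that is a typing bug upstream worth fixing at the source, not papering over with `hasattr()`.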

Comment on lines +203 to +211
                    logger.info("=" * 70)
                    logger.info("🔑 MYCELIA OAUTH CREDENTIALS (Save these!)")
                    logger.info("=" * 70)
                    logger.info(f"User:          {admin_email}")
                    logger.info(f"Client ID:     {client_id}")
                    logger.info(f"Client Secret: {api_key}")
                    logger.info("=" * 70)
                    logger.info("Configure Mycelia frontend at http://localhost:3002/settings")
                    logger.info("=" * 70)
@coderabbitai coderabbitai bot (Contributor), Dec 8, 2025

⚠️ Potential issue | 🟠 Major

Security: Avoid logging the API key/secret.

Logging credentials (even during initial setup) poses a risk as logs may be persisted, aggregated, or accessed by unauthorized parties. Consider:

  1. Printing to stderr for ephemeral display only
  2. Writing to a secure file with restricted permissions
  3. Using environment variable injection instead
                 if api_key:
-                    logger.info("="*70)
-                    logger.info("🔑 MYCELIA OAUTH CREDENTIALS (Save these!)")
-                    logger.info("="*70)
-                    logger.info(f"User:          {admin_email}")
-                    logger.info(f"Client ID:     {client_id}")
-                    logger.info(f"Client Secret: {api_key}")
-                    logger.info("="*70)
-                    logger.info("Configure Mycelia frontend at http://localhost:3002/settings")
-                    logger.info("="*70)
+                    import sys
+                    print("="*70, file=sys.stderr)
+                    print("🔑 MYCELIA OAUTH CREDENTIALS (Save these!)", file=sys.stderr)
+                    print("="*70, file=sys.stderr)
+                    print(f"User:          {admin_email}", file=sys.stderr)
+                    print(f"Client ID:     {client_id}", file=sys.stderr)
+                    print(f"Client Secret: {api_key}", file=sys.stderr)
+                    print("="*70, file=sys.stderr)
+                    print("Configure Mycelia frontend at http://localhost:3002/settings", file=sys.stderr)
+                    print("="*70, file=sys.stderr)

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py around
lines 203-211, do not log the client secret/api key; remove or replace any
logger.info calls that print the raw api_key or secret. Instead either (a) write
the secret to an ephemeral stderr-only output (e.g. sys.stderr.write) so it
isn’t captured by application logs, or (b) save it to a secure file created with
restrictive permissions (owner-only, mode 0o600) and log only the file path, or
(c) avoid emitting the secret altogether and log a message instructing the
operator how to retrieve it from the secure store or environment variable;
ensure no plaintext secret appears in any logger calls and, if you must display
a value, show only a masked version (e.g. first/last chars with stars).
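If the masked-display option is chosen, a small helper along these lines would do; the name and truncation policy are illustrative, not an existing project API:

```python
def mask_secret(value: str, show: int = 4) -> str:
    """Return a display-safe form of a secret: first/last `show` chars, stars between.

    Secrets shorter than 2 * show are fully masked so nothing recoverable leaks.
    """
    if len(value) <= show * 2:
        return "*" * len(value)
    return f"{value[:show]}{'*' * (len(value) - show * 2)}{value[-show:]}"


# A 19-char key keeps 4 chars on each end and masks the middle 11:
assert mask_secret("sk-1234567890abcdef") == "sk-1***********cdef"
# Short values are masked entirely:
assert mask_secret("short") == "*****"
```

The log lines above would then emit `mask_secret(api_key)` instead of the raw value, keeping logs useful for correlation without persisting the credential.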

Collaborator Author

@thestumonkey Let's mask this too

Contributor

Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!

                 memory_provider = conversation_model.MemoryProvider.FRIEND_LITE  # Default
                 try:
-                    from advanced_omi_backend.memory import get_memory_service
+                    from advanced_omi_backend.services.memory import get_memory_service
Contributor

🛠️ Refactor suggestion | 🟠 Major

Remove duplicate import.

This import duplicates line 49 and violates coding guidelines by importing inside a function.

Based on coding guidelines, remove this duplicate import:

                 # Determine memory provider from memory service
                 memory_provider = conversation_model.MemoryProvider.FRIEND_LITE  # Default
                 try:
-                    from advanced_omi_backend.services.memory import get_memory_service
                     memory_service_obj = get_memory_service()
                     provider_name = memory_service_obj.__class__.__name__
🤖 Prompt for AI Agents
In backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py around line
145, there is a duplicate import "from advanced_omi_backend.services.memory
import get_memory_service" inside a function which duplicates the top-level
import at line 49; remove the import on line 145 so the module uses the existing
top-level import and ensure no other local imports remain in the function.

Comment on lines +57 to +75
  useEffect(() => {
    if (!svgRef.current) return

    const svg = d3.select(svgRef.current)
    const node = svg.node()

    if (node) {
      node.style.touchAction = 'none'
      node.style.webkitUserSelect = 'none'
      node.style.userSelect = 'none'
    }

    svg.call(zoomBehavior as any)
    svg.property('__zoom', transform)

    return () => {
      svg.on('.zoom', null)
    }
  }, [zoomBehavior, transform])
Contributor

⚠️ Potential issue | 🟠 Major

Including transform in dependencies causes effect to re-run on every zoom.

Every zoom interaction updates transform via setTransform, which re-runs this effect. Line 70 then resets __zoom to the (now stale) previous transform, potentially causing jank or fighting with the ongoing zoom gesture.

Remove transform from the dependency array. The initial zoom state is already set by D3 internally:

   useEffect(() => {
     if (!svgRef.current) return

     const svg = d3.select(svgRef.current)
     const node = svg.node()

     if (node) {
       node.style.touchAction = 'none'
       node.style.webkitUserSelect = 'none'
       node.style.userSelect = 'none'
     }

     svg.call(zoomBehavior as any)
-    svg.property('__zoom', transform)

     return () => {
       svg.on('.zoom', null)
     }
-  }, [zoomBehavior, transform])
+  }, [zoomBehavior])

If you need to programmatically set an initial transform, do so once (e.g., via a separate useEffect with an empty dependency array or a ref flag).

🤖 Prompt for AI Agents
In backends/advanced/webui/src/hooks/useD3Zoom.ts around lines 57 to 75, the
effect depends on `transform` which causes it to re-run on every zoom and reset
`__zoom`, fighting the active gesture; remove `transform` from the dependency
array so the bind/cleanup runs only when `zoomBehavior` changes, and if you need
to set an initial programmatic transform do it in a separate effect that runs
once (empty deps) or guarded by a ref flag so you don't overwrite D3's live zoom
state.

Comment on lines +43 to +72
  const loadMemory = async () => {
    if (!user?.id || !id) {
      console.log('⏭️ MemoryDetail: Missing user or id', { userId: user?.id, memoryId: id })
      return
    }

    try {
      console.log('🔍 MemoryDetail: Loading memory', id)
      setLoading(true)
      setError(null)
      const response = await memoriesApi.getAll(user.id)
      const memoriesData = response.data.memories || response.data || []
      console.log('📦 MemoryDetail: Loaded memories', memoriesData.length)

      // Find the specific memory by ID
      const foundMemory = memoriesData.find((m: Memory) => m.id === id)
      console.log('🎯 MemoryDetail: Found memory?', !!foundMemory, foundMemory?.id)

      if (foundMemory) {
        setMemory(foundMemory)
      } else {
        setError('Memory not found')
      }
    } catch (err: any) {
      console.error('❌ Failed to load memory:', err)
      setError(err.message || 'Failed to load memory')
    } finally {
      setLoading(false)
    }
  }
@coderabbitai coderabbitai bot (Contributor), Dec 8, 2025

⚠️ Potential issue | 🟠 Major

Inefficient data fetching pattern - fetches all memories to find one.

The loadMemory function fetches all memories via memoriesApi.getAll() and then filters client-side to find the specific memory. This is inefficient, especially as the memory count grows.

Consider adding a dedicated API endpoint like memoriesApi.getById(id) to fetch a single memory directly:

-      const response = await memoriesApi.getAll(user.id)
-      const memoriesData = response.data.memories || response.data || []
-      console.log('📦 MemoryDetail: Loaded memories', memoriesData.length)
-
-      // Find the specific memory by ID
-      const foundMemory = memoriesData.find((m: Memory) => m.id === id)
+      const response = await memoriesApi.getById(id, user.id)
+      const foundMemory = response.data

This would require adding a backend endpoint and updating api.ts:

getById: (id: string, userId?: string) => api.get(`/api/memories/${id}`, { params: userId ? { user_id: userId } : {} }),
🤖 Prompt for AI Agents
In backends/advanced/webui/src/pages/MemoryDetail.tsx around lines 43 to 72, the
loadMemory function inefficiently calls memoriesApi.getAll() and filters
client-side; change it to call a dedicated endpoint like memoriesApi.getById(id,
user.id) so only the single memory is fetched. Add the backend route
/api/memories/:id and update api.ts to expose getById(id, userId?) that performs
GET /api/memories/{id} with optional user_id query param, then update loadMemory
to call that, handle the returned memory shape (setMemory when present, setError
when not), and keep the existing loading/error state handling.
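Server-side, the handler behind such an endpoint reduces to a keyed lookup plus an optional ownership check. Sketched here against a plain dict rather than the real MongoDB-backed store; the store contents, IDs, and helper name are hypothetical.

```python
from typing import Optional

# Hypothetical in-memory store standing in for the MongoDB-backed memory service.
MEMORY_STORE = {
    "mem-1": {"id": "mem-1", "user_id": "u1", "text": "first memory"},
    "mem-2": {"id": "mem-2", "user_id": "u2", "text": "second memory"},
}


def get_memory_by_id(memory_id: str, user_id: Optional[str] = None) -> Optional[dict]:
    """Fetch one memory directly, mirroring GET /api/memories/{id}?user_id=..."""
    memory = MEMORY_STORE.get(memory_id)
    if memory is None:
        return None  # maps to a 404 in the real route
    if user_id is not None and memory["user_id"] != user_id:
        return None  # the optional user_id query param enforces ownership
    return memory


assert get_memory_by_id("mem-1", "u1") is not None
assert get_memory_by_id("mem-1", "u2") is None  # wrong owner
assert get_memory_by_id("missing") is None
```

Fetching by key this way stays O(1) regardless of how many memories a user accumulates, versus the O(n) transfer-and-scan of `getAll()` plus a client-side `find()`.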

Collaborator Author

@thestumonkey This also looks legit

Contributor

Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!

@@ -0,0 +1 @@
Subproject commit 47ea1966dd8a8c10662c91c7a3f907798f6a7dbc
Contributor

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify submodule configuration and documentation

echo "=== .gitmodules configuration ==="
if [ -f .gitmodules ]; then
  cat .gitmodules
else
  echo ".gitmodules not found"
fi

echo ""
echo "=== Submodule reference file ==="
if [ -f extras/mycelia ]; then
  cat extras/mycelia
else
  echo "extras/mycelia file not found"
fi

echo ""
echo "=== Check for submodule-related documentation ==="
find . -maxdepth 3 -type f \( -name "README*" -o -name "SETUP*" -o -name "*.md" \) 2>/dev/null | head -20

Repository: chronicler-ai/chronicle

Length of output: 868


🏁 Script executed:

#!/bin/bash
# Search for submodule setup instructions in documentation

echo "=== Searching README.md for submodule instructions ==="
grep -A 5 -B 5 -i "submodule\|extras/mycelia\|mycelia\|clone.*recursive" README.md | head -50

echo ""
echo "=== Checking CLAUDE.md for submodule info ==="
grep -A 5 -B 5 -i "submodule\|extras" CLAUDE.md | head -30

echo ""
echo "=== Checking getting-started documentation ==="
grep -A 5 -B 5 -i "submodule\|clone\|setup" Docs/getting-started.md | head -50

echo ""
echo "=== Checking for any git-related setup files ==="
find . -maxdepth 3 -type f \( -name "*setup*" -o -name "*install*" \) ! -path "./node_modules/*" ! -path "./.git/*" 2>/dev/null

Repository: chronicler-ai/chronicle

Length of output: 3134


🏁 Script executed:

#!/bin/bash
# Check setup and installation docs more thoroughly

echo "=== Checking README.md for clone instructions ==="
head -100 README.md | grep -A 10 -i "clone\|install\|setup"

echo ""
echo "=== Checking setup-requirements.txt ==="
cat setup-requirements.txt 2>/dev/null || echo "Not found or empty"

echo ""
echo "=== Checking if init.sh mentions submodules ==="
if [ -f backends/advanced/init.sh ]; then
  grep -i "submodule\|recursive\|mycelia" backends/advanced/init.sh || echo "No submodule references found in init.sh"
fi

echo ""
echo "=== Full check of README first 200 lines for setup ==="
head -200 README.md

Repository: chronicler-ai/chronicle

Length of output: 3259


🏁 Script executed:

#!/bin/bash
# Check quickstart.md for clone and setup instructions

echo "=== quickstart.md - Complete setup guide ==="
cat quickstart.md 2>/dev/null | head -150

Repository: chronicler-ai/chronicle

Length of output: 5250


Add submodule initialization instructions to setup documentation.

The .gitmodules configuration is correct, but developers following the quickstart will not initialize the mycelia submodule. Update the clone instructions in quickstart.md to either:

  1. Use git clone --recursive https://github.com/AnkushMalaker/friend-lite.git, or
  2. Add explicit post-clone initialization: git submodule update --init --recursive

Additionally, document the mycelia submodule's purpose and dependencies in the setup wizard or quickstart guide so developers understand what it provides.

🤖 Prompt for AI Agents
In extras/mycelia around line 1, the repository contains a submodule but the
quickstart docs don't instruct users to initialize it; update quickstart.md to
show cloning with submodules (either git clone --recursive
https://github.com/AnkushMalaker/friend-lite.git) or add a post-clone step (git
submodule update --init --recursive), and add a short note in the
quickstart/setup wizard describing the mycelia submodule's purpose and
runtime/build dependencies so developers know what it provides and any extra
setup required.

@coderabbitai coderabbitai bot mentioned this pull request Dec 9, 2025
@thestumonkey thestumonkey closed this pull request by merging all changes into main in fb506dd Dec 10, 2025
@coderabbitai coderabbitai bot mentioned this pull request Dec 11, 2025
@AnkushMalaker AnkushMalaker deleted the merge-mycelia-memories branch December 18, 2025 02:29
@coderabbitai coderabbitai bot mentioned this pull request Jan 3, 2026