Merge mycelia memories #179
Conversation
Tweaks to get all memory providers working
@RabbitAI review
Walkthrough

This PR introduces Mycelia as a new pluggable memory backend provider, alongside a comprehensive refactoring of the memory service architecture. It includes a Git submodule for Mycelia, Makefile sync targets, environment configurations, new Python utilities for API key generation and user synchronization, and a reorganized memory service layer with a new …

Changes
Sequence Diagrams

```mermaid
sequenceDiagram
    participant Frontend as Frontend App
    participant Auth as Auth Service
    participant Backend as Friend-Lite Backend
    participant Mycelia as Mycelia API
    participant DB as MongoDB
    rect rgb(200, 220, 255)
    note over Frontend,DB: Mycelia Memory Sync on Admin Startup
    Backend->>Backend: sync_admin_on_startup()
    Backend->>DB: fetch admin user by ADMIN_EMAIL
    Backend->>DB: check existing Mycelia API key
    alt No Existing Key
        Backend->>Backend: generate new API key + salt
        Backend->>Backend: compute hashedKey (SHA256)
        Backend->>DB: insert Mycelia API key doc
        Backend->>DB: update Friend-Lite user.mycelia_oauth
    else Key Exists
        Backend->>Backend: reuse existing client_id
    end
    Backend->>Backend: log credentials for admin
    end
```

```mermaid
sequenceDiagram
    participant User as User/Browser
    participant Frontend as Frontend
    participant Backend as Friend-Lite Backend
    participant Mycelia as Mycelia API
    participant LLM as LLM (OpenAI)
    rect rgb(220, 255, 220)
    note over User,LLM: Mycelia Memory Creation & Temporal Extraction
    User->>Frontend: Interact (audio/transcript)
    Frontend->>Backend: POST /memories {content}
    Backend->>Backend: add_memory() → MyceliaMemoryService
    Backend->>LLM: extract_memories_via_llm(transcript)
    LLM-->>Backend: [fact1, fact2, ...]
    Backend->>LLM: extract_temporal_entity_via_llm(fact)
    LLM-->>Backend: {timeRanges, isEvent, isPerson, ...}
    Backend->>Backend: _get_user_jwt(user_id)
    Backend->>Mycelia: POST /resources {memory_obj, jwt}
    Mycelia-->>Backend: {id, createdAt, ...}
    Backend-->>Frontend: {success, memory_ids}
    Frontend-->>User: Memory saved
    end
```

```mermaid
sequenceDiagram
    participant Frontend as Frontend Timeline
    participant Auth as AuthContext
    participant Backend as Memory API
    participant Mycelia as Mycelia API
    participant D3 as D3.js Renderer
    rect rgb(255, 240, 200)
    note over Frontend,D3: Mycelia Timeline Visualization
    Frontend->>Auth: useAuth() → get JWT
    Auth-->>Frontend: mycelia_jwt_token
    Frontend->>Backend: GET /memories (current user)
    Backend->>Mycelia: list memories (with JWT)
    Mycelia-->>Backend: [{id, timeRanges, metadata, ...}]
    Backend-->>Frontend: MemoryWithTimeRange[]
    Frontend->>D3: convert to TimelineTask[]
    D3->>D3: render timeline with timeRanges
    Frontend->>D3: handle click on bar
    D3-->>Frontend: memory_id
    Frontend->>Frontend: navigate to /memories/:id
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~55 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Actionable comments posted: 20
Note: Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
backends/advanced/src/advanced_omi_backend/services/memory/providers/mcp_client.py (3)
47-52: Move `import os` to the top of the file.

Per coding guidelines, all imports must be at the top of the file after the docstring. The `os` module is imported inside `__init__`, which violates this rule.

Add to the imports at the top of the file:

```diff
 import logging
 import uuid
 from typing import List, Dict, Any, Optional

 import httpx
+import os
```

Then update the method:

```diff
         # Use custom CA certificate if available
-        import os
         ca_bundle = os.getenv('REQUESTS_CA_BUNDLE')
```
334-337: Move `import re` to the top of the file.

The `re` module is imported inside the method, violating the coding guideline that all imports must be at the top of the file.

Add to imports at the top:

```diff
 import logging
+import re
 import uuid
```

Then remove the inline import:

```diff
         if "message" in result:
             # Parse message like "Successfully deleted 5 memories"
-            import re
             match = re.search(r'(\d+)', result["message"])
```
436-459: Unused `user_id` and `user_email` parameters – likely a bug.

The signature accepts `user_id` and `user_email` parameters, but the method body uses `self.user_id` (line 451) instead of the passed `user_id`. The controller (memory_controller.py) explicitly passes these values expecting them to be used for authentication context.

Either use the passed parameters or remove them from the signature. If the passed `user_id` should be used:

```diff
     async def delete_memory(self, memory_id: str, user_id: Optional[str] = None, user_email: Optional[str] = None) -> bool:
         """Delete a specific memory by ID.

         Args:
             memory_id: ID of the memory to delete
+            user_id: Optional user ID override for the request
+            user_email: Optional user email (currently unused, for future auth)

         Returns:
             True if deletion succeeded, False otherwise
         """
         try:
+            effective_user_id = user_id if user_id is not None else self.user_id
             response = await self.client.request(
                 "DELETE",
                 f"{self.server_url}/api/v1/memories/",
                 json={
                     "memory_ids": [memory_id],
-                    "user_id": self.user_id
+                    "user_id": effective_user_id
                 }
             )
```

backends/advanced/src/advanced_omi_backend/services/memory/providers/openmemory_mcp.py (1)
366-385: Unused parameters and missing user context handling.

The `user_id` and `user_email` parameters are declared but not used, unlike other methods that set/restore `mcp_client.user_id`. This inconsistency could cause the delete operation to use the wrong user context.

Consider applying the same user context pattern as other methods:

```diff
     async def delete_memory(self, memory_id: str, user_id: Optional[str] = None, user_email: Optional[str] = None) -> bool:
         # ...
         if not self._initialized:
             await self.initialize()

+        # Update MCP client user context for this operation
+        original_user_id = self.mcp_client.user_id
+        if user_id:
+            self.mcp_client.user_id = user_id
+
         try:
             success = await self.mcp_client.delete_memory(memory_id)
             if success:
                 memory_logger.info(f"🗑️ Deleted memory {memory_id} via MCP")
             return success
         except Exception as e:
-            memory_logger.error(f"Delete memory failed: {e}")
+            memory_logger.exception(f"Delete memory failed: {e}")
             return False
+        finally:
+            self.mcp_client.user_id = original_user_id
```
♻️ Duplicate comments (1)
backends/advanced/scripts/sync_friendlite_mycelia.py (1)
58-63: Duplicated hash function - consider a shared utility.

This hash function is duplicated in `mycelia_sync.py` and `create_mycelia_api_key.py`. Consider extracting it to a shared utility module (the same recommendation made for `mycelia_sync.py`).
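A shared helper for both scripts could look something like this (a sketch only; the module name and exact salt handling are assumptions, since the PR's hash function isn't reproduced in full here — only the SHA-256 hashing and base64-encoded salt are implied by the review):

```python
"""Hypothetical shared module (e.g. scripts/api_key_utils.py) for both sync scripts."""
import base64
import hashlib
import os


def generate_salt(num_bytes: int = 16) -> bytes:
    """Generate a random per-key salt."""
    return os.urandom(num_bytes)


def hash_api_key(api_key: str, salt: bytes) -> str:
    """Hash an API key with its salt using SHA-256 (hex digest)."""
    return hashlib.sha256(salt + api_key.encode("utf-8")).hexdigest()


def encode_salt(salt: bytes) -> str:
    """Base64-encode a salt for storage in MongoDB."""
    return base64.b64encode(salt).decode("utf-8")
```

Both `mycelia_sync.py` and `create_mycelia_api_key.py` would then import these helpers instead of redefining them.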
🟡 Minor comments (6)
backends/advanced/webui/src/hooks/useD3Zoom.ts (1)

21-41: Consider safer type handling and potential performance optimization.

Two observations:

1. Type safety on line 33: The expression `event.sourceEvent?.target as Element` could evaluate to `undefined` if `sourceEvent` is missing, and `node.contains(undefined)` may not behave as expected since `contains()` expects `Node | null`.
2. Performance on line 28: `d3.selectAll('.zoomable')` runs on every zoom event, which could become expensive if there are many `.zoomable` elements or frequent zoom interactions.

For type safety, consider:

```diff
-    if (!node || node.contains(event.sourceEvent?.target as Element)) {
+    if (!node || (event.sourceEvent?.target && node.contains(event.sourceEvent.target as Node))) {
       return
     }
```

For performance, consider caching the `.zoomable` selection or debouncing the synchronization if this becomes a bottleneck.

backends/advanced/scripts/create_mycelia_api_key.py (1)
10-11: Remove the unused `bson.ObjectId` import.

`ObjectId` is imported on line 9 but never used in the script. Additionally, the `base64` module is imported inline within functions (lines 23 and 71) instead of at the top of the file with other imports; move all imports to the top following the module docstring.

backends/advanced/webui/src/pages/System.tsx (1)
360-366: Empty option text when provider doesn't match any condition.

The current pattern renders an empty `<option>` element for each provider that doesn't match the condition. Each line like `{provider === 'friend_lite' && 'Friend-Lite (Sophisticated)'}` evaluates to `false` (rendered as nothing) when the condition fails, leaving an empty option.

Use a mapping object or a single expression:

```diff
 {availableProviders.map((provider) => (
   <option key={provider} value={provider}>
-    {provider === 'friend_lite' && 'Friend-Lite (Sophisticated)'}
-    {provider === 'openmemory_mcp' && 'OpenMemory MCP (Cross-client)'}
-    {provider === 'mycelia' && 'Mycelia (Advanced)'}
+    {{
+      'friend_lite': 'Friend-Lite (Sophisticated)',
+      'openmemory_mcp': 'OpenMemory MCP (Cross-client)',
+      'mycelia': 'Mycelia (Advanced)'
+    }[provider] || provider}
   </option>
 ))}
```

backends/advanced/docker-compose-test.yml (1)
173-174: Add documentation about the `extras/mycelia` submodule prerequisite for the mycelia profile.

The build contexts `../../extras/mycelia/backend` and `../../extras/mycelia/frontend` depend on the `extras/mycelia` submodule being initialized. Developers using `--profile mycelia` without initializing this submodule will encounter build failures. Add a comment in `docker-compose-test.yml` explaining this requirement, or document it in the advanced backend setup guide (backends/advanced/README.md).

backends/advanced/src/advanced_omi_backend/controllers/memory_controller.py (1)
174-176: Avoid deprecated `asyncio.get_event_loop()` for timestamp generation.

`asyncio.get_event_loop()` is deprecated in Python 3.10+ when called from a coroutine. Use `time.time()` instead, which is simpler and doesn't have the deprecation issue.

```diff
-    memory_source_id = source_id or f"manual_{user.user_id}_{int(asyncio.get_event_loop().time())}"
+    import time
+    memory_source_id = source_id or f"manual_{user.user_id}_{int(time.time())}"
```

Note: Move the `time` import to the top of the file per coding guidelines.

Committable suggestion skipped: line range outside the PR's diff.
backends/advanced/src/advanced_omi_backend/controllers/system_controller.py (1)

504-511: Using `os.getcwd()` for the `.env` path is fragile.

The working directory may differ depending on how the service is started (e.g., from different directories, Docker, systemd). Consider using a path relative to the module or a configurable environment variable.

```diff
-    # Path to .env file (assuming we're running from backends/advanced/)
-    env_path = os.path.join(os.getcwd(), ".env")
+    # Path to .env file - prefer explicit path from environment or relative to this module
+    env_path = os.getenv("ENV_FILE_PATH") or os.path.join(
+        os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(__file__)))),
+        ".env"
+    )
```
🧹 Nitpick comments (51)
backends/advanced/webui/src/hooks/useD3Zoom.ts (2)
50-54: Optional: Clarify button check logic.

The button check on line 52 works correctly but relies on subtle short-circuit evaluation. Consider making the intent more explicit:

```diff
 .filter((event) => {
   if (event.type === 'dblclick') return false
-  if (event.button && event.button !== 0) return false
+  if (event.button !== undefined && event.button !== 0) return false
   return true
 }),
```

This makes it clearer that we're blocking non-left-button clicks while allowing the left button (0) and events without a button property.
80-80: Optional: Investigate type assertion.

The `as any` type assertion suggests a TypeScript type compatibility issue between D3 and React types. While this works, it bypasses type safety. Consider investigating whether there's a more type-safe approach, or add a comment explaining why the assertion is necessary. If this is a known D3/React TypeScript compatibility issue, documenting it would help future maintainers.
backends/advanced/webui/src/types/react-gantt-timeline.d.ts (2)
2-3: Prefer explicit React type imports over relying on the global `React` namespace.

`ComponentType` is imported explicitly, but `React.CSSProperties` relies on the global `React` namespace. This usually works if `@types/react` is globally included, but it's a bit implicit and can break with stricter `types`/`typeRoots` configs.

You could make the dependency explicit and avoid the global:

```diff
-import { ComponentType } from 'react'
+import type { ComponentType, CSSProperties } from 'react'
@@
-  top?: {
-    style?: React.CSSProperties
-  }
-  middle?: {
-    style?: React.CSSProperties
-  }
-  bottom?: {
-    style?: React.CSSProperties
-  }
+  top?: {
+    style?: CSSProperties
+  }
+  middle?: {
+    style?: CSSProperties
+  }
+  bottom?: {
+    style?: CSSProperties
+  }
```

Also applies to: 13-23
4-41: Declaration models only a small subset of the library's props.

The `TimelineTask`, `TimelineConfig`, and `TimelineProps` interfaces match your current usage (passing only a `data` array to the Timeline component), but the actual `react-gantt-timeline` library exposes a much broader API including `links`, event handlers (`onCreateLink`, `onUpdateTask`, `onSelectItem`), bounds (`leftBound`, `rightBound`), date formatting options, and style customization. Additionally, `TimelineConfig` is not currently used.

To avoid type gaps as the feature evolves:

- Extend this stub whenever you adopt new props (e.g., links for task dependencies, event handlers).
- Periodically verify the declarations align with the upstream library by checking the react-timeline-gantt repository or its type definitions.
backends/advanced/src/advanced_omi_backend/controllers/user_controller.py (1)
63-64: Move `UserRead` imports to the module top to follow project guidelines.

`UserRead` is imported inside functions in both `create_user` and `update_user`. Per the repo's Python guidelines, all imports should live at the top of the file (after the docstring), unless there is a hard circular-dependency reason. Consider moving these imports to the module-level import block.

Also applies to: 116-117
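If the inline imports exist only to dodge a circular import and `UserRead` is needed purely for annotations, the standard alternative is a `TYPE_CHECKING` guard (a sketch; the import path below is hypothetical, and this only works when the name isn't needed at runtime):

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only evaluated by type checkers, so it cannot create a runtime import cycle.
    from advanced_omi_backend.users import UserRead  # hypothetical path


def describe_user(user: UserRead) -> str:
    # With `from __future__ import annotations`, the annotation is a string at
    # runtime, so UserRead never has to be imported when the module executes.
    return f"user object of type {type(user).__name__}"
```

If `UserRead` is needed at runtime (e.g., as a FastAPI response model), the import genuinely belongs at module top and the cycle should be broken structurally instead.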
backends/advanced/src/advanced_omi_backend/utils/conversation_utils.py (1)
81-112: Double-check the interaction between `min_duration` and `track_speech_activity`.

With the new duration check, `analyze_speech` can return `has_speech=False` while still reporting a non-zero `word_count` (enough valid words but duration below `min_duration`). `track_speech_activity` only looks at `word_count`, so these short bursts may still update `last_meaningful_speech_time` and delay inactivity timeouts.

If the intent is that sub-`min_duration` speech should not reset inactivity, consider updating `track_speech_activity` to also consider `has_speech`, or to respect a minimum duration when deciding whether to bump the timestamp.

Also applies to: 422-457
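A sketch of the suggested fix, with hypothetical names mirroring the description (the real logic lives in `conversation_utils.py` and is not reproduced here):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SpeechTracker:
    """Tracks the last time meaningful speech was observed."""
    min_duration: float = 1.0
    last_meaningful_speech_time: Optional[float] = None

    def track_speech_activity(self, word_count: int, duration: float, now: float) -> bool:
        # Require both enough words AND enough duration before resetting the
        # inactivity timer, so sub-min_duration bursts no longer delay timeouts.
        has_speech = word_count > 0 and duration >= self.min_duration
        if has_speech:
            self.last_meaningful_speech_time = now
        return has_speech
```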
backends/advanced/webui/public/frappe-gantt.css (1)
1-1: Correct invalid `brightness()` usage in the `background-color` hover rule.

The rule:

```css
.gantt-container .popup-wrapper .action-btn:hover{ background-color:brightness(97%); }
```

uses `brightness()` where a color is expected, so the declaration will be ignored. Consider either:

- using a filter:

```diff
-.gantt-container .popup-wrapper .action-btn:hover{background-color:brightness(97%)}
+.gantt-container .popup-wrapper .action-btn:hover{filter:brightness(0.97)}
```

- or setting a concrete hover color derived from `var(--g-popup-actions)`.

backends/advanced/src/advanced_omi_backend/services/memory/providers/vector_stores.py (1)
12-13: Align the `delete_memory` implementation with typing, imports, and lint expectations.

The new `delete_memory` signature and Qdrant call look fine functionally, but a few small cleanups will improve consistency:

1. Remove the inner `import uuid` and use the top-level import. You already import `uuid` at the top of the file; re-importing inside the method both violates the project guideline ("no mid-file imports") and is redundant.

```diff
     async def delete_memory(self, memory_id: str, user_id: Optional[str] = None, user_email: Optional[str] = None) -> bool:
         """Delete a specific memory from Qdrant."""
         try:
-            # Convert memory_id to proper format for Qdrant
-            import uuid
+            # Convert memory_id to proper format for Qdrant
             try:
                 # Try to parse as UUID first
                 uuid.UUID(memory_id)
```

2. Address the unused `user_id`/`user_email` parameters. If Qdrant deletion is intentionally ID-only (with ownership handled elsewhere), consider renaming to underscore-prefixed parameters to make this explicit and satisfy Ruff:

```diff
-    async def delete_memory(self, memory_id: str, user_id: Optional[str] = None, user_email: Optional[str] = None) -> bool:
+    async def delete_memory(self, memory_id: str, _user_id: Optional[str] = None, _user_email: Optional[str] = None) -> bool:
```

3. Optionally factor out ID parsing. The UUID/int fallback parsing logic is duplicated here and in `update_memory`; a small private helper (e.g., `_parse_point_id(memory_id: str)`) would DRY this up.

Also applies to: 243-268
backends/advanced/src/advanced_omi_backend/services/memory/providers/friend_lite.py (1)
203-208: Preserve exception context when re-raising.

When re-raising the exception that was just caught, a bare `raise` is the idiomatic form and preserves the full original traceback; `raise e from e` merely chains the exception to itself and adds nothing. (Chaining with `raise ... from e` is the right tool when raising a *different* exception.)

```diff
-    except asyncio.TimeoutError as e:
+    except asyncio.TimeoutError:
         memory_logger.error(f"⏰ Memory processing timed out for {source_id}")
-        raise e
+        raise
     except Exception as e:
         memory_logger.error(f"❌ Add memory failed for {source_id}: {e}")
-        raise e
+        raise
```

backends/advanced/src/advanced_omi_backend/services/memory/config.py (1)
127-141: Use an explicit `Optional[str]` type hint for `api_key`.

PEP 484 prohibits implicit `Optional`. The parameter should use an explicit union type.

```diff
+from typing import Any, Dict, Optional
+
 def create_mycelia_config(
     api_url: str = "http://localhost:8080",
-    api_key: str = None,
+    api_key: Optional[str] = None,
     timeout: int = 30,
     **kwargs
 ) -> Dict[str, Any]:
```

Note: `Optional` may already be imported at the module level; if not, add it to the existing `typing` import.

backends/advanced/scripts/create_mycelia_api_key.py (1)
55-89: Add error handling and ensure MongoDB connection cleanup.

The script lacks error handling for MongoDB operations, which could leave connections open or produce unhelpful errors. Consider using a context manager or try-finally pattern:

```diff
     # Connect to MongoDB
     client = MongoClient(MONGO_URL)
-    db = client[MYCELIA_DB]
-    api_keys = db["api_keys"]
-
-    # Check for existing active keys for this user
-    existing = api_keys.find_one({"owner": USER_ID, "isActive": True})
-    if existing:
-        print(f"ℹ️ Existing active API key found: {existing['_id']}")
-        print(f"   Deactivating old key...\n")
-        api_keys.update_one(
-            {"_id": existing["_id"]},
-            {"$set": {"isActive": False}}
-        )
-
-    # Create API key document (matches Mycelia's format)
-    api_key_doc = {
-        ...
-    }
-
-    # Insert into database
-    result = api_keys.insert_one(api_key_doc)
-    client_id = str(result.inserted_id)
+    try:
+        db = client[MYCELIA_DB]
+        api_keys = db["api_keys"]
+
+        # Check for existing active keys for this user
+        existing = api_keys.find_one({"owner": USER_ID, "isActive": True})
+        if existing:
+            print(f"ℹ️ Existing active API key found: {existing['_id']}")
+            print("   Deactivating old key...\n")
+            api_keys.update_one(
+                {"_id": existing["_id"]},
+                {"$set": {"isActive": False}}
+            )
+
+        # Create API key document (matches Mycelia's format)
+        api_key_doc = {
+            ...
+        }
+
+        # Insert into database
+        result = api_keys.insert_one(api_key_doc)
+        client_id = str(result.inserted_id)
+    finally:
+        client.close()
```

backends/advanced/webui/src/pages/MyceliaTimeline.tsx (2)
27-34: Consider reusing the existing `TimelineTask` type.

A `TimelineTask` interface already exists in `backends/advanced/webui/src/types/react-gantt-timeline.d.ts`. Consider importing and extending it rather than redefining, to maintain consistency and reduce duplication.
142-148: Missing `loadMemories` in the useEffect dependency array.

The `loadMemories` function is referenced but not included in the dependency array. This may cause stale closure issues or ESLint warnings. Consider wrapping `loadMemories` in `useCallback` and adding it to the dependency array, or inline the fetch logic within the effect:

```diff
+  const loadMemories = useCallback(async () => {
+    if (!user?.id) return
+    // ... implementation
+  }, [user?.id])
+
   useEffect(() => {
     if (!useDemoData) {
       loadMemories()
     } else {
       setMemories(getDemoMemories())
     }
-  }, [user?.id, useDemoData])
+  }, [user?.id, useDemoData, loadMemories])
```

backends/advanced/src/advanced_omi_backend/services/memory/base.py (1)
249-268: Consider moving `__init__` before the abstract methods for conventional ordering.

Python convention typically places `__init__` near the top of a class, after the class docstring and before other methods. While valid, placing it after abstract methods may reduce readability. The lazy initialization pattern via `_ensure_initialized()` is well-designed for RQ worker compatibility, where service instances may be created in one process and used in another.

backends/advanced/src/advanced_omi_backend/services/memory/providers/mcp_client.py (3)
380-387: Use `logging.exception()` for full stack traces in except blocks.

Prefer `logging.exception()` inside except blocks to automatically log the full stack trace, improving debuggability.

```diff
     except httpx.HTTPStatusError as e:
         if e.response.status_code == 404:
             return None
-        memory_logger.error(f"HTTP error getting memory: {e}")
+        memory_logger.exception(f"HTTP error getting memory: {e}")
         return None
     except Exception as e:
-        memory_logger.error(f"Error getting memory: {e}")
+        memory_logger.exception(f"Error getting memory: {e}")
         return None
```
429-434: Use `logging.exception()` for full stack traces.

Same pattern as noted above - prefer `logging.exception()` in except blocks for automatic stack trace logging.

```diff
     except httpx.HTTPStatusError as e:
-        memory_logger.error(f"HTTP error updating memory: {e.response.status_code}")
+        memory_logger.exception(f"HTTP error updating memory: {e.response.status_code}")
         return False
     except Exception as e:
-        memory_logger.error(f"Error updating memory: {e}")
+        memory_logger.exception(f"Error updating memory: {e}")
         return False
```
478-480: Avoid the bare `except:` clause.

A bare `except:` catches all exceptions, including `KeyboardInterrupt` and `SystemExit`, which can mask critical issues. Specify `Exception` at minimum.

```diff
-        except:
+        except Exception:
             continue
```

backends/advanced/src/advanced_omi_backend/utils/job_utils.py (1)
13-44: The async zombie-job check looks solid; just minor nits.

The control flow and Redis key check look correct for detecting deleted RQ jobs and short-circuiting long-running work. If you want to polish further later, you could (a) add concrete type hints for `redis_client` and `current_job` and (b) drop the unused `Optional` import at the top, but those are non-blocking.

backends/advanced/webui/src/pages/TimelineRouter.tsx (1)
7-83: The tabbed router implementation is clean; consider a tiny a11y upgrade.

The state handling and conditional rendering for the three timelines look good and are easy to follow. If you want to refine accessibility later, you could convert this into a proper tab pattern (e.g., `role="tablist"` on the `<nav>`, `role="tab"` and `aria-selected` on the buttons), but that's optional polish.

backends/advanced/webui/src/services/api.ts (2)
140-155: The new memory provider admin endpoints are straightforward; consider typing the provider.

The `getMemoryProvider` and `setMemoryProvider` wrappers cleanly mirror the new admin endpoints. To catch typos at compile time, you might later tighten the `provider` argument to a union of known values (e.g. `'friend_lite' | 'openmemory_mcp' | 'mycelia'`) instead of plain `string`, ideally reusing the same type wherever you surface provider options in the UI.

backends/advanced/src/advanced_omi_backend/services/memory/service_factory.py (1)
39-68: The Mycelia provider branch is wired correctly; refine exception chaining and service info.

The new `MYCELIA` branch mirrors the existing providers nicely: it validates `config.mycelia_config` and passes the full `MemoryConfig` into `MyceliaMemoryService`, which is what that implementation expects for LLM config access.

Two improvements worth making here and in the OpenMemory branch:

1. Chain ImportErrors to preserve tracebacks. Currently:

```python
try:
    from .providers.openmemory_mcp import OpenMemoryMCPService
except ImportError as e:
    raise RuntimeError(f"OpenMemory MCP service not available: {e}")

# ...

try:
    from .providers.mycelia import MyceliaMemoryService
except ImportError as e:
    raise RuntimeError(f"Mycelia memory service not available: {e}")
```

This drops the original ImportError traceback. You can keep the helpful message and preserve context with `from e`, which also satisfies Ruff B904:

```diff
     try:
         from .providers.openmemory_mcp import OpenMemoryMCPService
     except ImportError as e:
-        raise RuntimeError(f"OpenMemory MCP service not available: {e}")
+        raise RuntimeError("OpenMemory MCP service not available") from e
@@
     elif config.memory_provider == MemoryProvider.MYCELIA:
         # Use Mycelia implementation
         try:
             from .providers.mycelia import MyceliaMemoryService
         except ImportError as e:
-            raise RuntimeError(f"Mycelia memory service not available: {e}")
+            raise RuntimeError("Mycelia memory service not available") from e
```

2. Extend service introspection to include Mycelia. `get_service_info()` currently only infers `"openmemory_mcp"` and `"friend_lite"` from the service type name. For completeness, you may want to recognize the Mycelia implementation as well. Note that the Mycelia check must come before the generic `MemoryService` fallback, since a name like `MyceliaMemoryService` also contains that substring:

```python
# inside get_service_info, after existing checks
if "OpenMemoryMCP" in info["service_type"]:
    info["memory_provider"] = "openmemory_mcp"
elif "Mycelia" in info["service_type"]:
    info["memory_provider"] = "mycelia"
elif "FriendLite" in info["service_type"] or "MemoryService" in info["service_type"]:
    info["memory_provider"] = "friend_lite"
```

(Adjust the string check to match the actual Mycelia service class name.)

Optionally, you could also update the module docstring at the top to explicitly mention Mycelia alongside Friend-Lite and OpenMemory so the documentation stays in sync.
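As a standalone illustration (a sketch, not code from the PR), chaining with `from e` keeps the original `ImportError` reachable on `__cause__`, so the root cause survives into logs and tracebacks:

```python
def load_provider() -> None:
    try:
        # Stand-in for the optional provider import; this module does not exist.
        import no_such_provider_module  # noqa: F401
    except ImportError as e:
        raise RuntimeError("Mycelia memory service not available") from e


try:
    load_provider()
except RuntimeError as err:
    # The original ImportError is preserved on the chained exception
    cause = err.__cause__
```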
Based on learnings, preserving exception context with `raise ... from e` will make debugging provider import issues much easier.

backends/advanced/src/advanced_omi_backend/routers/modules/system_routes.py (1)
139-145: Docstring says "restart" but the controller only updates the file and returns a restart notice.

Looking at `system_controller.set_memory_provider()` (lines 490-558 in the relevant snippets), the function only updates the `.env` file and sets `os.environ["MEMORY_PROVIDER"]` for the current process. It doesn't actually restart backend services; it returns `"requires_restart": True` to inform the caller. The docstring "Set memory provider and restart backend services" is misleading.

```diff
 @router.post("/admin/memory/provider")
 async def set_memory_provider(
     provider: str = Body(..., embed=True),
     current_user: User = Depends(current_superuser)
 ):
-    """Set memory provider and restart backend services. Admin only."""
+    """Set memory provider configuration. Admin only. Requires service restart to take effect."""
     return await system_controller.set_memory_provider(provider)
```

backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py (1)
241-242: Use `logging.exception()` or `logging.warning()` instead of `debug` for exception handling.

Per project learnings, use `logging.exception()` inside except blocks to automatically log the full stack trace. Using the `debug` level here loses important context when troubleshooting job cancellation failures.

```diff
     except Exception as e:
-        logger.debug(f"Job {job_id} not found or already completed: {e}")
+        logger.warning(f"Job {job_id} not found or already completed: {e}")
```

backends/advanced/webui/src/pages/System.tsx (1)
144-149: Consider clearing the timeout if the component unmounts.

The `setTimeout` at line 147 could fire after the component unmounts, causing a React warning about setting state on an unmounted component. This is a minor issue since the consequence is just a console warning. For a more robust solution, you could extract this to a custom hook or use `useEffect` with cleanup, but given this is a low-impact admin page, this is optional.

backends/advanced/src/advanced_omi_backend/routers/modules/health_routes.py (1)
304-311: Consider using `logging.exception()` for better debuggability.

Per project learnings, prefer `logging.exception()` inside except blocks to automatically log the full stack trace. While this health check gracefully handles the error, logging the stack trace would help diagnose intermittent Mycelia connection issues.

```diff
     except Exception as e:
+        logger.exception("Mycelia memory health check failed")
         health_status["services"]["memory_service"] = {
             "status": f"⚠️ Mycelia Memory Failed: {str(e)}",
             "healthy": False,
             "provider": "mycelia",
             "critical": False,
         }
         overall_healthy = False
```

backends/advanced/src/advanced_omi_backend/controllers/system_controller.py (2)
484-488: Use `logging.exception()` instead of `logging.error()` to capture the stack trace.

Based on learnings, prefer `logging.exception()` inside except blocks to automatically log the full stack trace, which improves debuggability.

```diff
     except Exception as e:
-        logger.error(f"Error getting memory provider: {e}")
+        logger.exception("Error getting memory provider")
         return JSONResponse(
-            status_code=500, content={"error": f"Failed to get memory provider: {str(e)}"}
+            status_code=500, content={"error": f"Failed to get memory provider: {e!s}"}
         )
```
555-559: Use `logging.exception()` for better stack traces.

Same as above - use `logging.exception()` in except blocks per project conventions.

```diff
     except Exception as e:
-        logger.error(f"Error setting memory provider: {e}")
+        logger.exception("Error setting memory provider")
         return JSONResponse(
-            status_code=500, content={"error": f"Failed to set memory provider: {str(e)}"}
+            status_code=500, content={"error": f"Failed to set memory provider: {e!s}"}
         )
```

backends/advanced/webui/src/pages/Memories.tsx (1)
522-533: Consider guarding against large metadata objects.

If `memory.metadata` contains very large objects or circular references, `JSON.stringify` could cause performance issues or errors. Consider adding error handling or truncation:

```diff
 <pre className="mt-2 p-2 bg-gray-100 dark:bg-gray-800 rounded text-xs overflow-x-auto">
-  {JSON.stringify(memory.metadata, null, 2)}
+  {(() => {
+    try {
+      const str = JSON.stringify(memory.metadata, null, 2)
+      return str.length > 5000 ? str.substring(0, 5000) + '\n...(truncated)' : str
+    } catch {
+      return '[Unable to display metadata]'
+    }
+  })()}
 </pre>
```

backends/advanced/webui/src/pages/ReactGanttTimeline.tsx (2)
179-183: Add `fetchMemoriesWithTimeRanges` to the useEffect dependencies or wrap it in `useCallback`.

The function `fetchMemoriesWithTimeRanges` is called inside the effect but isn't in the dependency array. While it works because the function is recreated on every render, React's exhaustive-deps rule would flag this.

```diff
+  const fetchMemoriesWithTimeRanges = useCallback(async () => {
+    if (!user?.id) return
+    // ... rest of function
+  }, [user?.id])
+
   useEffect(() => {
-    if (user) {
-      fetchMemoriesWithTimeRanges()
-    }
+    fetchMemoriesWithTimeRanges()
   }, [user])
```
319-332: Scaling via CSS transform may cause blurry text at non-integer zoom levels.

CSS `transform: scale()` can cause rendering artifacts, especially for text. Consider using the timeline library's native zoom/scale options if available, or the `zoom` CSS property as an alternative.

backends/advanced/webui/src/pages/MemoryDetail.tsx (2)
89-91: Missing `loadMemory` in the useEffect dependencies.

The `loadMemory` function is called in the effect but not included in the dependency array. While this works due to how closures capture values, it's a common source of subtle bugs and would trigger ESLint's exhaustive-deps warning. Consider wrapping `loadMemory` with `useCallback`:

```diff
+  const loadMemory = useCallback(async () => {
+    if (!user?.id || !id) {
+      console.log('⏭️ MemoryDetail: Missing user or id', { userId: user?.id, memoryId: id })
+      return
+    }
+    // ... rest of function
+  }, [id, user?.id])
+
   useEffect(() => {
     loadMemory()
-  }, [id, user?.id])
+  }, [loadMemory])
```
74-87: Consider using a toast notification instead of `alert()` for delete errors.

Using `window.alert()` blocks the UI and provides a poor user experience. If the codebase has a toast/notification system, consider using that instead.

backends/advanced/src/advanced_omi_backend/controllers/memory_controller.py (2)
202-206: Use `logging.exception()` and an explicit conversion flag.

Per project learnings, use `logging.exception()` inside except blocks to capture the full stack trace.

```diff
 except Exception as e:
-    audio_logger.error(f"Error adding memory: {e}", exc_info=True)
+    audio_logger.exception("Error adding memory")
     return JSONResponse(
-        status_code=500, content={"success": False, "message": f"Error adding memory: {str(e)}"}
+        status_code=500, content={"success": False, "message": f"Error adding memory: {e!s}"}
     )
```
278-282: Use `logging.exception()` and an explicit conversion flag.

Same pattern as above - prefer `logging.exception()` for automatic stack trace capture.

```diff
 except Exception as e:
-    audio_logger.error(f"Error fetching memory {memory_id}: {e}", exc_info=True)
+    audio_logger.exception(f"Error fetching memory {memory_id}")
     return JSONResponse(
-        status_code=500, content={"message": f"Error fetching memory: {str(e)}"}
+        status_code=500, content={"message": f"Error fetching memory: {e!s}"}
     )
```

backends/advanced/src/advanced_omi_backend/routers/modules/memory_routes.py (1)
10-10: Remove unused `Body` import.

The `Body` import is added but never used in the file. Only `BaseModel` is used for `AddMemoryRequest`.

```diff
-from fastapi import APIRouter, Depends, Query, Body
+from fastapi import APIRouter, Depends, Query
```

backends/advanced/src/advanced_omi_backend/services/memory/providers/__init__.py (1)
19-27: Sort `__all__` alphabetically for consistency.

Per static analysis (RUF022), `__all__` should be sorted in isort-style.

```diff
 __all__ = [
     "FriendLiteMemoryService",
+    "MCPClient",
+    "MCPError",
+    "MyceliaMemoryService",
     "OpenMemoryMCPService",
-    "MyceliaMemoryService",
     "OpenAIProvider",
     "QdrantVectorStore",
-    "MCPClient",
-    "MCPError",
 ]
```

backends/advanced/webui/src/pages/ConversationsTimeline.tsx (1)
208-209: Consider using a more specific error type.

Using `any` for the error type loses type safety. Consider using `unknown` or a more specific error interface.

```diff
-} catch (err: any) {
-  setError(err.message || 'Failed to load conversations')
+} catch (err: unknown) {
+  const message = err instanceof Error ? err.message : 'Failed to load conversations'
+  setError(message)
```

backends/advanced/webui/src/pages/FrappeGanttTimeline.tsx (3)
167-171: Remove console.log statements for production.

Debug logging statements should be removed or replaced with a proper logging mechanism that can be disabled in production.

```diff
-console.log('📅 Timeline: Total memories:', memoriesData.length)
-console.log('📅 Timeline: Memories with timeRanges:', memoriesWithTime.length)
-if (memoriesWithTime.length > 0) {
-  console.log('📅 Timeline: First memory with timeRange:', memoriesWithTime[0])
-}
```

Similar console statements at lines 175, 236, 279, 323, 327 should also be removed.
214-220: Missing dependency in useEffect could cause stale closure.

`getDemoMemories` is called inside the effect but not listed in the dependency array. Since `getDemoMemories` is defined inside the component and recreated on each render, this could cause issues.

Consider moving `getDemoMemories` outside the component (it has no dependencies on props/state) or wrapping it in `useCallback`.

```diff
+// Move outside component since it has no dependencies
+const getDemoMemories = (): MemoryWithTimeRange[] => {
+  return [
+    // ... demo data
+  ]
+}
+
 export default function FrappeGanttTimeline() {
-  // ... inside component
-  const getDemoMemories = (): MemoryWithTimeRange[] => {
-    // ...
-  }
```
606-606: Duplicate `focus:ring-2` in className.

Minor typo - `focus:ring-2` appears twice in the className string.

```diff
-className="px-3 py-2 border border-gray-300 dark:border-gray-600 rounded-lg bg-white dark:bg-gray-700 text-gray-900 dark:text-gray-100 text-sm focus:ring-2 focus:ring-2 focus:ring-blue-500 focus:border-blue-500"
+className="px-3 py-2 border border-gray-300 dark:border-gray-600 rounded-lg bg-white dark:bg-gray-700 text-gray-900 dark:text-gray-100 text-sm focus:ring-2 focus:ring-blue-500 focus:border-blue-500"
```

backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py (3)
165-165: Ambiguous unicode character in log message.

The `ℹ` (INFORMATION SOURCE) character may cause rendering issues in some terminals or log aggregators.

```diff
-logger.info(f"ℹ️ {user_email} already synced with Mycelia")
+logger.info(f"[INFO] {user_email} already synced with Mycelia")
```
44-49: Duplicated hash function across multiple files.

The `_hash_api_key_with_salt` method is duplicated in:

- `backends/advanced/scripts/sync_friendlite_mycelia.py` (lines 57-62)
- `backends/advanced/scripts/create_mycelia_api_key.py`
- This file

Consider extracting it to a shared utility module to follow the DRY principle.

```python
# backends/advanced/src/advanced_omi_backend/utils/crypto.py
import hashlib
import base64

def hash_api_key_with_salt(api_key: str, salt: bytes) -> str:
    """Hash API key with salt (matches Mycelia's implementation)."""
    h = hashlib.sha256()
    h.update(salt)
    h.update(api_key.encode('utf-8'))
    return base64.b64encode(h.digest()).decode('utf-8')
```
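As a quick sanity check of the proposed shared helper, the following standalone sketch repeats the same function and asserts a few properties (the salt and key values here are illustrative, not real credentials):

```python
import base64
import hashlib


def hash_api_key_with_salt(api_key: str, salt: bytes) -> str:
    """SHA-256 over salt || key, base64-encoded (same shape as the helper above)."""
    h = hashlib.sha256()
    h.update(salt)
    h.update(api_key.encode("utf-8"))
    return base64.b64encode(h.digest()).decode("utf-8")


salt = b"\x00" * 16  # illustrative fixed salt
digest = hash_api_key_with_salt("my-key", salt)

# Deterministic: same inputs always produce the same hash
assert digest == hash_api_key_with_salt("my-key", salt)
# Salt changes the output even for the same key
assert digest != hash_api_key_with_salt("my-key", b"\x01" * 16)
# Base64 of a 32-byte SHA-256 digest is always 44 characters
assert len(digest) == 44
```

Centralizing this also guarantees all three call sites stay byte-for-byte compatible with Mycelia's stored `hashedKey` values.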
200-217: Success return should be in an else block.

The `return result` statement at line 217 would be better placed in an `else` block to clarify control flow. Additionally, the log statements reference credentials but don't return a meaningful result when `api_key` is `None` (existing user case).

```diff
 if result:
     client_id, api_key = result
     if api_key:
         # Credentials created successfully - don't log them
         logger.info("="*70)
         # ... logging ...
         logger.info("="*70)
+        return result
+    else:
+        return None
-    return result
 except Exception as e:
     logger.error(f"Failed to sync admin user: {e}", exc_info=True)
     return None
```

backends/advanced/src/advanced_omi_backend/services/memory/providers/openmemory_mcp.py (2)
314-316: Use `logging.exception()` for better stack trace visibility.

Per project learnings, prefer `logging.exception()` inside except blocks to automatically log the full stack trace.

```diff
 except Exception as e:
-    memory_logger.error(f"Failed to get memory: {e}")
+    memory_logger.exception(f"Failed to get memory: {e}")
     return None
```

Based on learnings from PR #178.
359-361: Use `logging.exception()` for better stack trace visibility.

Per project learnings, prefer `logging.exception()` inside except blocks to automatically log the full stack trace.

```diff
 except Exception as e:
-    memory_logger.error(f"Failed to update memory: {e}")
+    memory_logger.exception(f"Failed to update memory: {e}")
     return False
```

Based on learnings from PR #178.
backends/advanced/scripts/sync_friendlite_mycelia.py (4)
47-56: MongoClient should be closed when done.

The `MongoClient` is stored as an instance attribute but never explicitly closed. For a CLI script, consider adding a cleanup method or using a context manager pattern.

```diff
 def __init__(self, mongo_url: str, mycelia_db: str, friendlite_db: str):
     self.mongo_url = mongo_url
     self.mycelia_db = mycelia_db
     self.friendlite_db = friendlite_db
     self.client = MongoClient(mongo_url)
-    print(f"📊 Connected to MongoDB:")
+    print("📊 Connected to MongoDB:")
     print(f"   URL: {mongo_url}")
     print(f"   Friend-Lite DB: {friendlite_db}")
     print(f"   Mycelia DB: {mycelia_db}\n")
+
+def close(self):
+    """Close MongoDB connection."""
+    if self.client:
+        self.client.close()
```

Also add cleanup in `main()`:

```python
sync = FriendLiteMyceliaSync(mongo_url, mycelia_db, friendlite_db)
try:
    # ... execute actions ...
finally:
    sync.close()
```
154-156: Use logging or print with traceback for better debugging.

The bare `except Exception as e` catches all exceptions but only prints a brief message, losing valuable stack trace information.

```diff
 except Exception as e:
-    print(f"✗ {user_email:40} Failed: {e}")
+    import traceback
+    print(f"✗ {user_email:40} Failed: {e}")
+    traceback.print_exc()
     return False
```
162-164: Remove extraneous f-string prefixes.

These strings have no placeholders, so the `f` prefix is unnecessary.

```diff
-print(f"{'='*80}")
-print(f"SYNC ALL USERS")
-print(f"{'='*80}")
+print("=" * 80)
+print("SYNC ALL USERS")
+print("=" * 80)
```

Similar changes apply to lines 190, 256, and 297.
1-1: Add executable permission or remove the shebang.

The shebang `#!/usr/bin/env python3` is present but the file is not executable. Either:

- Add executable permission: `chmod +x scripts/sync_friendlite_mycelia.py`
- Or remove the shebang, since the Makefile invokes it via `uv run python`

backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py (2)
366-367: Unused parameters should be documented or removed.

The `allow_update` and `db_helper` parameters are declared but never used. If these are intentionally unused for API compatibility, add a comment explaining why, or implement the functionality.

```diff
 async def add_memory(
     self,
     transcript: str,
     client_id: str,
     source_id: str,
     user_id: str,
     user_email: str,
-    allow_update: bool = False,
-    db_helper: Any = None,
+    allow_update: bool = False,  # noqa: ARG002 - Reserved for future use
+    db_helper: Any = None,  # noqa: ARG002 - Reserved for future use
 ) -> Tuple[bool, List[str]]:
```

Or consider implementing database relationship tracking similar to `openmemory_mcp.py`.
764-797: Inefficient N+1 delete pattern.

`delete_all_user_memories` fetches all memories then deletes them one by one. This creates N+1 API calls. Consider adding a bulk delete action to Mycelia or using a batch approach.

```diff
 async def delete_all_user_memories(self, user_id: str) -> int:
     # ...
     try:
         jwt_token = await self._get_user_jwt(user_id)
-        # First, get all memory IDs for this user
-        result = await self._call_resource(
-            action="list",
-            jwt_token=jwt_token,
-            filters={},
-            options={"limit": 10000}
-        )
-
-        # Delete each memory individually
-        deleted_count = 0
-        for obj in result:
-            memory_id = self._extract_bson_id(obj.get("_id", ""))
-            if await self.delete_memory(memory_id, user_id):
-                deleted_count += 1
+        # Try bulk delete if Mycelia supports it
+        result = await self._call_resource(
+            action="deleteMany",
+            jwt_token=jwt_token,
+            filters={}  # Auto-scoped by userId
+        )
+        deleted_count = result.get("deletedCount", 0)
```

If Mycelia doesn't support bulk delete, document this as a known limitation.
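The cost difference is easy to see with a toy client that counts round-trips (illustrative only; `list_ids`, `delete_one`, and `delete_many` are stand-ins, not Mycelia's actual API):

```python
class CountingClient:
    """Toy API client that counts network round-trips."""

    def __init__(self, ids):
        self.ids = set(ids)
        self.calls = 0

    def list_ids(self):
        self.calls += 1
        return list(self.ids)

    def delete_one(self, mem_id):
        self.calls += 1
        self.ids.discard(mem_id)

    def delete_many(self):
        self.calls += 1
        n = len(self.ids)
        self.ids.clear()
        return n


# Per-item pattern: 1 list call + N deletes = N + 1 round-trips
client = CountingClient(range(100))
for mem_id in client.list_ids():
    client.delete_one(mem_id)
assert client.calls == 101

# Bulk pattern: a single round-trip
client = CountingClient(range(100))
assert client.delete_many() == 100
assert client.calls == 1
```

With per-request latency dominating, the bulk path is effectively O(1) round-trips regardless of how many memories the user has.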
```python
# Sync admin user with Mycelia OAuth (if using Mycelia memory provider)
try:
    from advanced_omi_backend.services.mycelia_sync import sync_admin_on_startup

    await sync_admin_on_startup()
except Exception as e:
    application_logger.error(f"Failed to sync admin with Mycelia OAuth: {e}")
    # Don't raise here as this is not critical for startup
```
🛠️ Refactor suggestion | 🟠 Major
Improve exception handling with logging.exception().
The startup routine correctly wraps Mycelia sync in a try/except to ensure non-fatal error handling. However, the exception handling can be improved for better debuggability.
Apply this diff:
```diff
 # Sync admin user with Mycelia OAuth (if using Mycelia memory provider)
 try:
     from advanced_omi_backend.services.mycelia_sync import sync_admin_on_startup

     await sync_admin_on_startup()
 except Exception as e:
-    application_logger.error(f"Failed to sync admin with Mycelia OAuth: {e}")
+    application_logger.exception(f"Failed to sync admin with Mycelia OAuth: {e}")
     # Don't raise here as this is not critical for startup
```

Based on learnings, prefer `logging.exception()` inside except blocks to automatically log the full stack trace, which improves debuggability across Python files.
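A minimal standalone illustration of the difference (assumes nothing about the project's logger setup; output is routed to a buffer just so it can be inspected):

```python
import io
import logging

# Route log output to a string buffer so we can inspect it
buffer = io.StringIO()
logging.basicConfig(stream=buffer, level=logging.INFO, force=True)
logger = logging.getLogger("demo")

try:
    1 / 0
except ZeroDivisionError:
    logger.error("error(): no traceback")        # message only
    logger.exception("exception(): traceback")   # message + full stack trace

output = buffer.getvalue()
assert "error(): no traceback" in output
assert "exception(): traceback" in output
assert "Traceback (most recent call last)" in output  # only exception() adds this
assert "ZeroDivisionError" in output
```

`logger.exception(...)` is equivalent to `logger.error(..., exc_info=True)` and is only meaningful inside an except block, which is exactly the pattern the review asks for.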
🧰 Tools
🪛 Ruff (0.14.8)
80-80: Do not catch blind exception: Exception
(BLE001)
81-81: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
🤖 Prompt for AI Agents
In backends/advanced/src/advanced_omi_backend/app_factory.py around lines 76 to
82, the except block currently logs the error message with
application_logger.error(f"Failed to sync admin with Mycelia OAuth: {e}") which
omits the stack trace; change this to use application_logger.exception("Failed
to sync admin with Mycelia OAuth") (or error(..., exc_info=True)) inside the
except so the full traceback is recorded for better debuggability while keeping
the non-fatal behavior.
Resolved review comments:

- backends/advanced/src/advanced_omi_backend/routers/modules/memory_routes.py (outdated)
- backends/advanced/src/advanced_omi_backend/services/memory/prompts.py (outdated)
- backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py
@@ -0,0 +1 @@

backends/advanced/webui/public/frappe-gantt.css — new file: the entire frappe-gantt stylesheet minified onto a single line (theme CSS variables, `.gantt-container` layout/popup/header rules, and `.gantt` bar/handle rules). The `.holiday-label` rule from this line is quoted in the comment that follows.
Fix CSS variable usage for weekend label background
.holiday-label uses a CSS custom property without var(), which will not work as intended:
```css
.gantt-container .holiday-label{
  /* current */
  background:--g-weekend-label-color;
}
```

Update to use `var()`:

```diff
-.gantt-container .holiday-label{position:absolute;top:0;left:0;opacity:0;z-index:1000;background:--g-weekend-label-color;border-radius:5px;padding:2px 5px}
+.gantt-container .holiday-label{position:absolute;top:0;left:0;opacity:0;z-index:1000;background:var(--g-weekend-label-color);border-radius:5px;padding:2px 5px}
```

🧰 Tools
🪛 Biome (2.1.2)
[error] 1-1: CSS variables '--g-weekend-label-color' is used without the 'var()' function
CSS variables should be used with the 'var()' function to ensure proper fallback behavior and browser compatibility.
(lint/correctness/noMissingVarFunction)
🤖 Prompt for AI Agents
In backends/advanced/webui/public/frappe-gantt.css around lines 1 to 1, the
.holiday-label rule uses the custom property name directly (background:
--g-weekend-label-color) which is invalid; update the declaration to use the
var() syntax (background: var(--g-weekend-label-color)) so the CSS variable is
resolved correctly.
@RabbitAI review
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backends/advanced/src/advanced_omi_backend/services/memory/prompts.py (1)
180-180: Stale date issue in FACT_RETRIEVAL_PROMPT.

Line 180 uses `datetime.now()` inside a module-level f-string, which means the date is captured once at import time and becomes stale if the server runs continuously. This causes the LLM to operate with an incorrect "today's date" context after the first day.

Consider refactoring to match the pattern used for the temporal extraction prompt. Apply this approach:

```python
def get_fact_retrieval_prompt() -> str:
    """Get the fact retrieval prompt with current date context."""
    return f"""You are a Personal Information Organizer...
Remember the following:
- Today's date is {datetime.now().strftime("%Y-%m-%d")}.
...
"""
```

Then update all call sites to use `get_fact_retrieval_prompt()` instead of the `FACT_RETRIEVAL_PROMPT` constant.
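The import-time freeze is easy to reproduce; in this sketch a counter stands in for the clock (illustrative, not the project's prompt code):

```python
import itertools

counter = itertools.count()

# Module-level f-string: evaluated exactly once, when this line runs
FROZEN_PROMPT = f"Today's value is {next(counter)}."

def fresh_prompt() -> str:
    # Function body: re-evaluated on every call
    return f"Today's value is {next(counter)}."

assert FROZEN_PROMPT == "Today's value is 0."
assert fresh_prompt() == "Today's value is 1."
assert fresh_prompt() == "Today's value is 2."
assert FROZEN_PROMPT == "Today's value is 0."  # still the stale value
```

Swap the counter for `datetime.now()` and the same behavior applies: the constant keeps the date from import time, while the function form always reflects the current date.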
♻️ Duplicate comments (4)
backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py (1)
82-84: MongoClient connections are never closed - resource leak.

Multiple `MongoClient` instances are created but never closed in `_create_mycelia_api_key` (line 82), `sync_user_to_mycelia` (line 134), and `sync_admin_user` (line 175). This causes resource leaks, especially when these methods are called repeatedly.

Consider using a context manager or maintaining a single shared client instance:

```diff
+def _get_client(self) -> MongoClient:
+    """Get or create MongoDB client."""
+    if not hasattr(self, '_client') or self._client is None:
+        self._client = MongoClient(self.mongo_url)
+    return self._client
+
 def _create_mycelia_api_key(
     self, user_id: str, user_email: str
 ) -> Tuple[str, str]:
     # ...
-    client = MongoClient(self.mongo_url)
+    client = self._get_client()
     db = client[self.mycelia_db]
```

Alternatively, use context managers for explicit cleanup:

```python
with MongoClient(self.mongo_url) as client:
    db = client[self.mycelia_db]
    # ... operations
```

Also applies to: 134-136, 175-177
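The guarantee the `with` form provides can be shown without pymongo, using a stand-in client class (`DummyClient` is illustrative; pymongo's `MongoClient` implements the same protocol):

```python
class DummyClient:
    """Stand-in for a connection object that must be closed."""

    def __init__(self):
        self.closed = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()
        return False  # don't swallow exceptions

    def close(self):
        self.closed = True


# Closed on normal exit
client = DummyClient()
with client:
    pass
assert client.closed

# Closed even when the body raises
client = DummyClient()
try:
    with client:
        raise RuntimeError("boom")
except RuntimeError:
    pass
assert client.closed
```

This is the main argument for the context-manager variant over a manual `close()` call: cleanup happens on every exit path, including exceptions.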
backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py (2)
96-104: Chain exceptions with `raise ... from e` in initialize().

Per project learnings, when re-raising exceptions, always chain with `raise ... from e` to preserve the original context for better debuggability.

```diff
 except httpx.HTTPError as e:
-    raise RuntimeError(f"Failed to connect to Mycelia service: {e}")
+    raise RuntimeError(f"Failed to connect to Mycelia service: {e}") from e

 self._initialized = True
 memory_logger.info("✅ Mycelia memory service initialized successfully")
 except Exception as e:
-    memory_logger.error(f"❌ Failed to initialize Mycelia service: {e}")
-    raise RuntimeError(f"Mycelia initialization failed: {e}")
+    memory_logger.exception("❌ Failed to initialize Mycelia service")
+    raise RuntimeError(f"Mycelia initialization failed: {e}") from e
```

Based on learnings from PR #178.
286-288: Chain exception and use `logging.exception()` in `_extract_memories_via_llm`.

Per project learnings, prefer `logging.exception()` in except blocks and chain exceptions with `raise ... from e`.

```diff
 except Exception as e:
-    memory_logger.error(f"Failed to extract memories via OpenAI: {e}")
-    raise RuntimeError(f"OpenAI memory extraction failed: {e}")
+    memory_logger.exception("Failed to extract memories via OpenAI")
+    raise RuntimeError(f"OpenAI memory extraction failed: {e}") from e
```

Based on learnings from PR #178.
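For reference, `from e` sets `__cause__` on the new exception, which the traceback machinery renders as "The above exception was the direct cause of the following exception". A standalone check of both variants:

```python
def connect():
    raise ConnectionError("network down")

# Re-raise with explicit chaining: "from e" sets __cause__
try:
    try:
        connect()
    except ConnectionError as e:
        raise RuntimeError("initialization failed") from e
except RuntimeError as err:
    chained_cause = err.__cause__

# Without "from e": __cause__ stays None; only the implicit __context__ is set
try:
    try:
        connect()
    except ConnectionError:
        raise RuntimeError("initialization failed")
except RuntimeError as err:
    unchained_cause = err.__cause__
    unchained_context = err.__context__

assert isinstance(chained_cause, ConnectionError)
assert str(chained_cause) == "network down"
assert unchained_cause is None
assert isinstance(unchained_context, ConnectionError)
```

Either way the original error is recoverable, but explicit chaining documents intent and survives `raise ... from None` style suppression elsewhere in the stack.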
backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py (1)
222-223: Move RQ / job-utils / Redis imports to the top of the module (still inline in several places)

There are still in-function imports that conflict with the project guideline that all imports live at the top of the file:

- Lines 222–223: `from rq import get_current_job`, `from rq.job import Job` inside the no‑speech branch.
- Line 228: `from advanced_omi_backend.controllers.queue_controller import redis_conn` (already imported at top).
- Line 431: `from rq import get_current_job` again inside `transcribe_full_audio_job`.
- Line 530: `from advanced_omi_backend.utils.job_utils import check_job_alive` inside the `while True` loop of `stream_speech_detection_job` (also causing repeated imports every iteration).

Per earlier review feedback, these should be hoisted to the module import section, reusing the existing `redis_conn` import, e.g.:

```diff
-from advanced_omi_backend.controllers.queue_controller import (
-    transcription_queue,
-    redis_conn,
-    JOB_RESULT_TTL,
-    REDIS_URL,
-)
+from advanced_omi_backend.controllers.queue_controller import (
+    transcription_queue,
+    redis_conn,
+    JOB_RESULT_TTL,
+    REDIS_URL,
+)
+from rq import get_current_job
+from rq.job import Job
+from advanced_omi_backend.utils.job_utils import check_job_alive
```

Then remove the in-function imports at lines 222–223, 228, 431, and 530.
This both follows the repo’s import policy and avoids repeatedly importing inside a hot loop.
Also applies to: 228-228, 431-432, 530-531
🧹 Nitpick comments (9)
backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py (1)
40-44: Fragile URL parsing for database name extraction.

The logic to extract the database name from `MONGODB_URI` using string splitting is fragile and may fail with connection strings containing authentication credentials, query parameters, or replica set configurations.

Consider using `urllib.parse` for more robust parsing:

```diff
+from urllib.parse import urlparse
+
 # Friend-Lite database - extract from MONGODB_URI or use default
-if "/" in self.mongo_url and self.mongo_url.count("/") >= 3:
-    # Extract database name from mongodb://host:port/database
-    self.friendlite_db = self.mongo_url.split("/")[-1].split("?")[0] or "friend-lite"
-else:
-    self.friendlite_db = "friend-lite"
+try:
+    parsed = urlparse(self.mongo_url)
+    self.friendlite_db = parsed.path.lstrip("/").split("?")[0] or "friend-lite"
+except Exception:
+    self.friendlite_db = "friend-lite"
```

backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py (2)
379-380: Unused method arguments `allow_update` and `db_helper`.

These parameters are declared but never used in the method body. If they're part of the interface contract, consider adding a comment explaining why they're not applicable for Mycelia, or implement the functionality.

```diff
 async def add_memory(
     self,
     transcript: str,
     client_id: str,
     source_id: str,
     user_id: str,
     user_email: str,
-    allow_update: bool = False,
-    db_helper: Any = None,
+    allow_update: bool = False,  # Not used: Mycelia handles deduplication internally
+    db_helper: Any = None,  # Not used: Mycelia manages its own database
 ) -> Tuple[bool, List[str]]:
```

Alternatively, prefix with underscore to indicate intentionally unused: `_allow_update`, `_db_helper`.
781-794: Inefficient delete pattern - consider bulk delete if Mycelia supports it.

The current implementation fetches up to 10,000 memories and deletes them one by one, which is O(n) API calls and could be slow or time out for users with many memories.

If Mycelia supports a bulk delete or delete-by-query action, consider using it instead:

```python
# If Mycelia supports deleteMany or bulk delete:
result = await self._call_resource(
    action="deleteMany",  # or "bulkDelete"
    jwt_token=jwt_token,
    filters={},  # Auto-scoped by userId
)
return result.get("deletedCount", 0)
```

If not, consider adding progress logging for visibility during long operations.
backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py (1)
148-155: Silent exception swallowing in memory provider detection.

The bare `except Exception: pass` silently ignores all errors when determining the memory provider. While the fallback behavior is acceptable, consider logging the error for debugging purposes.

```diff
 try:
     memory_service_obj = get_memory_service()
     provider_name = memory_service_obj.__class__.__name__
     if "OpenMemory" in provider_name:
         memory_provider = conversation_model.MemoryProvider.OPENMEMORY_MCP
+    elif "Mycelia" in provider_name:
+        memory_provider = conversation_model.MemoryProvider.MYCELIA
 except Exception:
-    pass
+    logger.debug("Could not determine memory provider, using default")
```

Also, consider adding detection for the new Mycelia provider.
backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py (5)
32-32: Tighten `conversation_id` typing and improve exception logging in `apply_speaker_recognition`

- Line 32: `conversation_id: str = None` triggers RUF013; prefer an explicit optional type, e.g.:

```diff
-    user_id: str,
-    conversation_id: str = None,
+    user_id: str,
+    conversation_id: str | None = None,
```

- Lines 114–120: catching `Exception` and manually importing `traceback` is noisy. Given speaker recognition is optional, keeping the fallback is fine, but it would be cleaner and more debuggable to rely on `logger.exception` instead of manual stack formatting:

```diff
-    except Exception as speaker_error:
-        logger.warning(f"⚠️ Speaker recognition failed: {speaker_error}")
-        logger.warning(f"Continuing with original transcription speaker labels")
-        import traceback
-
-        logger.debug(traceback.format_exc())
-        return segments
+    except Exception:
+        logger.exception("⚠️ Speaker recognition failed")
+        logger.warning("Continuing with original transcription speaker labels")
+        return segments
```

This aligns with the project preference for `logging.exception` in except blocks and avoids extra traceback plumbing. Based on learnings, ...

Also applies to: 71-73, 114-120
71-73: Drop unnecessary `f` prefixes on constant log messages (Ruff F541)

Several log lines are f-strings without any interpolation:

- Line 72: `"🎤 Speaker recognition returned no segments, keeping original transcription segments"`
- Line 329: `"📊 Created single segment from transcript text (no segments returned by provider)"`
- Line 615: `"⚠️ Speaker check job disappeared from Redis (likely completed quickly), assuming not enrolled"`
- Line 678: `"ℹ️ No enrolled speakers found, but proceeding with conversation anyway"`

These can be plain strings to satisfy F541:

```diff
-logger.info(f"🎤 Speaker recognition returned no segments, keeping original transcription segments")
+logger.info("🎤 Speaker recognition returned no segments, keeping original transcription segments")

-logger.info(
-    f"📊 Created single segment from transcript text (no segments returned by provider)"
-)
+logger.info("📊 Created single segment from transcript text (no segments returned by provider)")

-logger.warning(
-    f"⚠️ Speaker check job disappeared from Redis (likely completed quickly), assuming not enrolled"
-)
+logger.warning(
+    "⚠️ Speaker check job disappeared from Redis (likely completed quickly), assuming not enrolled"
+)

-logger.info(
-    f"ℹ️ No enrolled speakers found, but proceeding with conversation anyway"
-)
+logger.info(
+    "ℹ️ No enrolled speakers found, but proceeding with conversation anyway"
+)
```

(Ruff also flags the ℹ character; keep it if you like the iconography, but this change alone will clear F541.)

Also applies to: 329-330, 615-616, 678-679
131-131: Mark `redis_client` as intentionally unused in `transcribe_full_audio_job`

The injected `redis_client` argument isn't used in this function (ARG001). Since the decorator requires it, a simple way to satisfy linters is to rename it to an underscore-prefixed name:

```diff
-async def transcribe_full_audio_job(
+async def transcribe_full_audio_job(
     conversation_id: str,
     audio_uuid: str,
     audio_path: str,
     version_id: str,
     trigger: str = "reprocess",
     *,
-    redis_client=None,
+    redis_client=None,  # kept for async_job(redis=True) signature
 ) -> Dict[str, Any]:
```

or:

```diff
-    *, redis_client=None,
+    *, redis_client=None,
```

followed by:

```python
_ = redis_client  # explicitly mark as intentionally unused
```

Pick whichever pattern matches existing conventions in this codebase.
252-260: Strengthen exception handling when cancelling dependent jobs

The job-cancellation loop currently does:

```python
for job_id in job_patterns:
    try:
        dependent_job = Job.fetch(job_id, connection=redis_conn)
        ...
    except Exception as e:
        logger.debug(f"Job {job_id} not found or already completed: {e}")
...
except Exception as cancel_error:
    logger.warning(f"Failed to cancel some dependent jobs: {cancel_error}")
```

Catching bare `Exception` here makes it easy to hide unexpected failures (network, Redis, serialization issues). Suggested tweaks:

- Narrow the inner `except` to the specific RQ/Redis exception(s) expected when a job is missing (e.g., `NoSuchJobError`), and let other exceptions propagate or be logged with a full stack trace.
- Use `logger.exception("Failed to cancel some dependent jobs")` in the outer handler so you keep the stack trace if something truly goes wrong.

This keeps the "best-effort" behavior while preserving debuggability. Based on learnings, ...
377-387: Use `logger.exception` for LLM failures and simplify the fallback path

The LLM title/summary generation block gracefully falls back on truncation, which is good. The exception handling can be made more informative:

```diff
-        except Exception as llm_error:
-            logger.warning(f"⚠️ LLM title/summary generation failed: {llm_error}")
+        except Exception:
+            logger.exception("⚠️ LLM title/summary generation failed")
             # Fallback to simple truncation
             first_sentence = transcript_text.split(".")[0].strip()
             conversation.title = (
                 first_sentence[:50] + "..." if len(first_sentence) > 50 else first_sentence
             )
             conversation.summary = (
                 transcript_text[:150] + "..." if len(transcript_text) > 150 else transcript_text
             )
```

This keeps the behavior identical while logging a full stack trace, matching the project's exception-logging guideline. Based on learnings, ...
Also applies to: 392-403, 409-418
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- backends/advanced/src/advanced_omi_backend/auth.py (4 hunks)
- backends/advanced/src/advanced_omi_backend/routers/modules/memory_routes.py (4 hunks)
- backends/advanced/src/advanced_omi_backend/services/memory/prompts.py (2 hunks)
- backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py (1 hunk)
- backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py (1 hunk)
- backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py (11 hunks)
- backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py (23 hunks)
- backends/advanced/webui/src/pages/FrappeGanttTimeline.tsx (1 hunk)
- backends/advanced/webui/src/pages/MyceliaTimeline.tsx (1 hunk)
- backends/advanced/webui/src/pages/ReactGanttTimeline.tsx (1 hunk)
🚧 Files skipped from review as they are similar to previous changes (1)
- backends/advanced/src/advanced_omi_backend/auth.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
`**/*.py`:

- Use `uv run python` command instead of direct `python` or `python3` for executing Python scripts
- Use Black formatter with 100-character line length for Python code style
- Use isort for Python import organization
- NEVER import modules in the middle of functions or files - ALL imports must be at the top of the file after the docstring
- Group imports in order: standard library, third-party, local imports
- Use lazy imports sparingly and only when absolutely necessary for circular import issues in Python
- Always raise errors explicitly in Python code, never silently ignore errors or failures
- Use explicit error handling with proper exceptions in Python rather than silent failures
- Understand data structures and research input/response or class structure instead of adding defensive `hasattr()` checks in Python
Files:
- backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py
- backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py
- backends/advanced/src/advanced_omi_backend/services/memory/prompts.py
- backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py
- backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py
- backends/advanced/src/advanced_omi_backend/routers/modules/memory_routes.py
🧠 Learnings (4)
📚 Learning: 2025-12-08T23:52:34.959Z
Learnt from: AnkushMalaker
Repo: chronicler-ai/chronicle PR: 178
File: backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py:218-223
Timestamp: 2025-12-08T23:52:34.959Z
Learning: In Python code (chronicle project), prefer logging.exception() inside except blocks to automatically log the full stack trace. When re-raising exceptions, always chain with 'raise ... from e' to preserve the original context; use 'raise ... from None' only if you explicitly want to suppress the context. This improves debuggability across Python files.
Applied to files:
- backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py
- backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py
- backends/advanced/src/advanced_omi_backend/services/memory/prompts.py
- backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py
- backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py
- backends/advanced/src/advanced_omi_backend/routers/modules/memory_routes.py
📚 Learning: 2025-12-07T18:52:48.868Z
Learnt from: CR
Repo: chronicler-ai/chronicle PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-07T18:52:48.868Z
Learning: Applies to **/*.py : Understand data structures and research input/response or class structure instead of adding defensive `hasattr()` checks in Python
Applied to files:
backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py
📚 Learning: 2025-12-07T18:52:48.868Z
Learnt from: CR
Repo: chronicler-ai/chronicle PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-07T18:52:48.868Z
Learning: Applies to **/*.py : NEVER import modules in the middle of functions or files - ALL imports must be at the top of the file after the docstring
Applied to files:
- backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py
- backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py
📚 Learning: 2025-12-07T18:52:48.868Z
Learnt from: CR
Repo: chronicler-ai/chronicle PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-07T18:52:48.868Z
Learning: Applies to **/*.py : Use explicit error handling with proper exceptions in Python rather than silent failures
Applied to files:
backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py
🧬 Code graph analysis (5)
backends/advanced/webui/src/pages/MyceliaTimeline.tsx (4)
- backends/advanced/src/advanced_omi_backend/services/memory/prompts.py (1): `TimeRange` (393-397)
- backends/advanced/webui/src/types/react-gantt-timeline.d.ts (1): `TimelineTask` (4-10)
- backends/advanced/webui/src/contexts/AuthContext.tsx (1): `useAuth` (114-120)
- backends/advanced/webui/src/services/api.ts (1): `memoriesApi` (106-121)
backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py (2)
- backends/advanced/scripts/sync_friendlite_mycelia.py (1): `_hash_api_key_with_salt` (58-63)
- backends/advanced/src/advanced_omi_backend/models/user.py (1): `user_id` (71-73)
backends/advanced/webui/src/pages/FrappeGanttTimeline.tsx (2)
- backends/advanced/webui/src/contexts/AuthContext.tsx (1): `useAuth` (114-120)
- backends/advanced/webui/src/services/api.ts (1): `memoriesApi` (106-121)
backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py (2)
- backends/advanced/src/advanced_omi_backend/utils/conversation_utils.py (2): `analyze_speech` (50-143), `mark_conversation_deleted` (558-580)
- backends/advanced/src/advanced_omi_backend/utils/job_utils.py (1): `check_job_alive` (13-44)
backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py (7)
- backends/advanced/src/advanced_omi_backend/auth.py (1): `generate_jwt_for_user` (106-135)
- backends/advanced/src/advanced_omi_backend/services/memory/base.py (17): `MemoryEntry` (26-63), `initialize` (75-84, 350-356), `search_memories` (114-132, 371-389), `get_all_memories` (135-149), `count_memories` (151-163, 404-416), `get_memory` (165-179), `update_memory` (181-204, 419-437), `delete_memory` (207-218, 440-449), `delete_all_user_memories` (221-230), `test_connection` (233-239, 331-337, 464-470)
- backends/advanced/src/advanced_omi_backend/services/memory/config.py (1): `MemoryConfig` (43-55)
- backends/advanced/src/advanced_omi_backend/services/memory/prompts.py (2): `TemporalEntity` (400-409), `get_temporal_entity_extraction_prompt` (554-556)
- backends/advanced/src/advanced_omi_backend/services/memory/providers/llm_providers.py (2): `_get_openai_client` (45-69), `test_connection` (269-296)
- backends/advanced/src/advanced_omi_backend/services/memory/providers/mcp_client.py (4): `get_memory` (346-387), `update_memory` (389-434), `delete_memory` (436-459), `test_connection` (461-486)
- backends/advanced/src/advanced_omi_backend/controllers/system_controller.py (1): `delete_all_user_memories` (442-465)
🪛 Ruff (0.14.8)
backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py
154-154: String contains ambiguous ℹ (INFORMATION SOURCE). Did you mean i (LATIN SMALL LETTER I)?
(RUF001)
208-208: Consider moving this statement to an else block
(TRY300)
backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py
24-24: Unused function argument: redis_client
(ARG001)
203-205: Abstract raise to an inner function
(TRY301)
203-205: Avoid specifying long messages outside the exception class
(TRY003)
214-216: Abstract raise to an inner function
(TRY301)
214-216: Avoid specifying long messages outside the exception class
(TRY003)
222-222: Do not catch blind exception: Exception
(BLE001)
backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py
32-32: PEP 484 prohibits implicit Optional
Convert to T | None
(RUF013)
72-72: f-string without any placeholders
Remove extraneous f prefix
(F541)
112-112: Consider moving this statement to an else block
(TRY300)
114-114: Do not catch blind exception: Exception
(BLE001)
116-116: f-string without any placeholders
Remove extraneous f prefix
(F541)
131-131: Unused function argument: redis_client
(ARG001)
252-252: Do not catch blind exception: Exception
(BLE001)
259-259: Do not catch blind exception: Exception
(BLE001)
329-329: f-string without any placeholders
Remove extraneous f prefix
(F541)
409-409: Do not catch blind exception: Exception
(BLE001)
615-615: f-string without any placeholders
Remove extraneous f prefix
(F541)
678-678: f-string without any placeholders
Remove extraneous f prefix
(F541)
678-678: String contains ambiguous ℹ (INFORMATION SOURCE). Did you mean i (LATIN SMALL LETTER I)?
(RUF001)
backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py
95-95: Avoid specifying long messages outside the exception class
(TRY003)
97-97: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
97-97: Avoid specifying long messages outside the exception class
(TRY003)
102-102: Do not catch blind exception: Exception
(BLE001)
103-103: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
104-104: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
104-104: Avoid specifying long messages outside the exception class
(TRY003)
123-123: Avoid specifying long messages outside the exception class
(TRY003)
170-170: Unnecessary key check before dictionary access
Replace with dict.get
(RUF019)
174-174: Unnecessary key check before dictionary access
Replace with dict.get
(RUF019)
209-209: Avoid specifying long messages outside the exception class
(TRY003)
224-224: Avoid specifying long messages outside the exception class
(TRY003)
226-226: Redundant exception object included in logging.exception call
(TRY401)
227-227: Avoid specifying long messages outside the exception class
(TRY003)
280-280: Consider moving this statement to an else block
(TRY300)
282-282: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
283-283: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
287-287: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
288-288: Avoid specifying long messages outside the exception class
(TRY003)
356-356: Consider moving this statement to an else block
(TRY300)
359-359: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
360-360: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
362-362: Do not catch blind exception: Exception
(BLE001)
363-363: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
364-364: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
367-367: Do not catch blind exception: Exception
(BLE001)
368-368: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
379-379: Unused method argument: allow_update
(ARG002)
380-380: Unused method argument: db_helper
(ARG002)
404-404: f-string without any placeholders
Remove extraneous f prefix
(F541)
440-441: Consider [source_id, client_id, *temporal_entity.entities] instead of concatenation
Replace with [source_id, client_id, *temporal_entity.entities]
(RUF005)
495-495: Do not catch blind exception: Exception
(BLE001)
496-496: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
545-545: Consider moving this statement to an else block
(TRY300)
547-547: Do not catch blind exception: Exception
(BLE001)
548-548: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
578-578: Consider moving this statement to an else block
(TRY300)
580-580: Do not catch blind exception: Exception
(BLE001)
581-581: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
602-602: Abstract raise to an inner function
(TRY301)
602-602: Avoid specifying long messages outside the exception class
(TRY003)
612-612: Do not catch blind exception: Exception
(BLE001)
613-613: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
649-649: Do not catch blind exception: Exception
(BLE001)
650-650: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
727-727: Do not catch blind exception: Exception
(BLE001)
728-728: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
764-764: Do not catch blind exception: Exception
(BLE001)
765-765: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
797-797: Consider moving this statement to an else block
(TRY300)
799-799: Do not catch blind exception: Exception
(BLE001)
800-800: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
818-818: Consider moving this statement to an else block
(TRY300)
820-820: Do not catch blind exception: Exception
(BLE001)
821-821: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
831-831: Do not catch blind exception: Exception
(BLE001)
832-832: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
858-858: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
863-863: Do not catch blind exception: Exception
(BLE001)
864-864: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
865-865: Do not catch blind exception: Exception
(BLE001)
866-866: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
backends/advanced/src/advanced_omi_backend/routers/modules/memory_routes.py
63-63: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
94-94: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
🔇 Additional comments (13)
backends/advanced/src/advanced_omi_backend/services/memory/prompts.py (4)
11-14: LGTM: Imports are properly organized. The imports follow coding guidelines with all imports at the top of the file, properly grouped (standard library followed by third-party packages).
393-409: LGTM: Well-structured Pydantic models. The `TimeRange` and `TemporalEntity` models are properly defined with:

- Appropriate use of `default_factory=list` for mutable defaults
- Clear, comprehensive field descriptions
- Correct typing with `Optional` and `List` annotations
412-551: LGTM: Comprehensive temporal extraction prompt. The function is well-designed with:

- A `current_date` parameter for testability and flexibility
- Dynamic timestamp generation in examples (lines 472-473, 491-492, 536-537) that ensures examples are always relative to the current date
- Comprehensive guidelines covering time resolution, entity types, and edge cases
- Clear structure with multiple realistic examples
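For illustration, the `current_date`-parameter pattern praised here can be sketched as follows (the prompt text and function bodies are hypothetical, not the project's):

```python
from datetime import datetime


def build_temporal_extraction_prompt(current_date: datetime) -> str:
    # Taking the date as a parameter keeps the builder pure: tests can pin
    # a fixed date, while production passes datetime.now() at call time.
    return f"Today is {current_date:%Y-%m-%d}. Extract time ranges relative to today."


def get_temporal_entity_extraction_prompt() -> str:
    # Re-evaluated on every call, so a long-running server never serves a
    # prompt built from a stale module-import-time date.
    return build_temporal_extraction_prompt(datetime.now())
```

Pinning the date in tests while deferring `datetime.now()` to the call site is what keeps the examples in the real prompt from drifting.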
554-556: LGTM: Properly addresses staleness concern from past review. This function correctly implements the pattern suggested in the previous review by calling `build_temporal_extraction_prompt(datetime.now())` at runtime. This ensures the prompt always has current date/time context and won't become stale during long-running server processes.

backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py (1)
190-216: Improved type handling for memory entries. The explicit `isinstance` checks are a significant improvement over the previous `hasattr()` approach. The code now properly handles `MemoryEntry`, `dict`, and `str` types with appropriate error handling for unexpected cases.

backends/advanced/src/advanced_omi_backend/routers/modules/memory_routes.py (2)
91-98: Route ordering is now correct. The dynamic route `/{memory_id}` is correctly placed after the static routes `/unfiltered` and `/admin`, preventing path conflicts. This addresses the previously flagged issue.
60-66: New add_memory endpoint looks good. The endpoint properly uses a Pydantic model for request validation and delegates to the controller. The authentication via `Depends(current_active_user)` is correct.

One minor note: the static analysis warning about `Depends` in argument defaults (B008) is a false positive - this is the standard FastAPI pattern and is safe because `Depends` returns a sentinel object that FastAPI processes at runtime.

backends/advanced/webui/src/pages/ReactGanttTimeline.tsx (1)
146-183: Per-user fetch and demo fallback look solid. The auth guard plus `memoriesApi.getAll(user.id)` resolve the prior "missing userId" concern, and the automatic fallback to demo data on empty results or errors gives a good UX without leaking cross-user data.

backends/advanced/webui/src/pages/MyceliaTimeline.tsx (1)
216-300: Good hardening of tooltip rendering and ID parsing. Building the tooltip via DOM/text APIs (instead of interpolated HTML) plus using `lastIndexOf('-')` to recover the `memoryId` makes this D3 interaction safe against XSS and robust for dashed IDs like UUIDs.

backends/advanced/webui/src/pages/FrappeGanttTimeline.tsx (1)
50-58: XSS mitigation in popup is correctly applied. Defining `escapeHtml` and applying it to both `task.name` and `memory.content` before interpolating into `custom_popup_html` closes the earlier XSS hole while keeping the popup implementation simple.

Also applies to: 263-285
backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py (3)
21-21: Top-level import of conversation utilities looks good. Bringing `analyze_speech` and `mark_conversation_deleted` into the module import block aligns with the "no in-function imports" guideline and fixes the earlier inline-import issue.
508-525: Speech-detection job metadata and zombie check look solid

The additional behavior in `stream_speech_detection_job` is well thought out:

- Job meta is updated with `status`, `session_id`, `client_id`, and a `session_level` flag, then later cleaned up and replaced with conversation/speaker job IDs and timestamps.
- The `while True` loop now checks `check_job_alive(redis_client, current_job)` and breaks for zombie jobs, preventing orphaned long-running workers.
- Redis events (`last_event`, `speech_detected_at`, `speaker_check_status`, etc.) and the added `runtime_seconds` field on returns will be very helpful for observability and UI.

Once the `check_job_alive` import is moved to the top-level (see other comment), this section looks good to me.

Also applies to: 528-533, 570-574, 581-586, 671-680, 712-725, 736-747
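The zombie-guard loop shape described here can be sketched with a stand-in liveness check (everything below is illustrative and synchronous for brevity; the real `check_job_alive` is Redis-backed and async):

```python
# Simulated liveness flags: the job looks alive for two polls, then its
# key "expires", emulating a zombie job whose Redis record disappeared.
_alive_polls = iter([True, True, False])


def check_job_alive(job_id: str) -> bool:
    """Stand-in for the project's Redis-backed check_job_alive."""
    return next(_alive_polls, False)


def poll_for_speech(job_id: str, max_polls: int = 10) -> int:
    """Poll until the job disappears (zombie) or max_polls is reached."""
    polls = 0
    for _ in range(max_polls):
        if not check_job_alive(job_id):
            # Job vanished from the store: break instead of looping forever.
            break
        polls += 1  # a real worker would process audio events here
    return polls
```

The key property is that the loop's continuation is tied to the job record's existence, so an orphaned worker exits on its own.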
Force-pushed from f8263d5 to 8e45aae
Actionable comments posted: 5
♻️ Duplicate comments (3)
backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py (1)
96-104: Chain exceptions and use `logging.exception()` in initialize(). Per project learnings, use `logging.exception()` in except blocks and chain exceptions with `raise ... from e`.

```diff
         except httpx.HTTPError as e:
-            raise RuntimeError(f"Failed to connect to Mycelia service: {e}")
+            raise RuntimeError(f"Failed to connect to Mycelia service: {e}") from e

         self._initialized = True
         memory_logger.info("✅ Mycelia memory service initialized successfully")
     except Exception as e:
-        memory_logger.error(f"❌ Failed to initialize Mycelia service: {e}")
-        raise RuntimeError(f"Mycelia initialization failed: {e}")
+        memory_logger.exception("❌ Failed to initialize Mycelia service")
+        raise RuntimeError(f"Mycelia initialization failed: {e}") from e
```

Based on learnings from PR #178.

backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py (1)
backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py (1)
81-115: MongoClient resource leak persists (previously flagged).A
MongoClientis created at line 82 but never closed, leading to connection leaks on repeated calls. This issue was flagged in a previous review but remains unresolved.As suggested previously, either:
- Use a context manager:
with MongoClient(self.mongo_url) as client:- Create a reusable instance attribute with a
_get_client()helper methodApply this pattern to all three
MongoClientinstantiations in this file (lines 82, 134, and 175).backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py (1)
221-228: Move RQ/check‑job imports to top‑level (repeat of earlier guidance)These new/changed in‑function imports still violate the project rule “ALL imports at the top of the file” and add overhead in the hot loop:
- Lines 221–223:
from rq import get_current_job,from rq.job import Job- Line 228: re‑imports
redis_connalready imported at module level- Lines 529–530:
from advanced_omi_backend.utils.job_utils import check_job_aliveinsidewhile TrueSuggested direction:
- Add these to the top import block (after existing queue imports):
from rq import get_current_job from rq.job import Job from advanced_omi_backend.utils.job_utils import check_job_alive
Remove the in‑function/inside‑loop imports above and rely on the top‑level imports.
Also clean up any other
get_current_jobimports in this file (e.g., near the job‑meta update and at the top ofstream_speech_detection_job) so there’s a single shared import.This keeps imports centralized and avoids repeated work in the tight polling loop. Based on learnings, imports should not live inside functions or loops.
Also applies to: 529-533
🧹 Nitpick comments (19)
backends/advanced/webui/package.json (1)
15-21: Verify D3 integration patterns with React 18 to avoid DOM conflicts. The new dependencies (`d3`, `frappe-gantt`, `react-vertical-timeline-component`) are legitimate, well-maintained packages compatible with React 18. However, D3 requires careful integration with React 18: use refs for DOM containers, place D3 mutations in `useLayoutEffect`, ensure cleanup functions to prevent listener leaks, and make effects idempotent for StrictMode. Consider documenting the D3 integration approach to avoid common pitfalls with React 18's concurrent rendering and double-mount behavior in development.

backends/advanced/webui/src/pages/MyceliaTimeline.tsx (2)
8-34: Consider extracting shared interfaces to avoid duplication. The `TimeRange` and `MemoryWithTimeRange` interfaces are duplicated across `MyceliaTimeline.tsx` and `FrappeGanttTimeline.tsx`. Similarly, the `TimelineTask` interface here has a slightly different shape (with `type` and required `color`) than the one in `react-gantt-timeline.d.ts` (with optional `color` and no `type`). Consider extracting these to a shared types file to ensure consistency and reduce maintenance burden.
142-148: Missing dependency in useEffect may cause stale closure. The `loadMemories` function is called inside this effect but is not listed in the dependency array. Since `loadMemories` references `user?.id`, this could lead to stale closures if the component logic changes. Either add `loadMemories` to the dependencies (and wrap it with `useCallback`), or inline the condition check.

```diff
+import { useState, useEffect, useRef, useCallback } from 'react'
 ...
-  const loadMemories = async () => {
+  const loadMemories = useCallback(async () => {
     if (!user?.id) return
     // ... rest of function
-  }
+  }, [user?.id])

   useEffect(() => {
     if (!useDemoData) {
       loadMemories()
     } else {
       setMemories(getDemoMemories())
     }
-  }, [user?.id, useDemoData])
+  }, [loadMemories, useDemoData])
```

backends/advanced/webui/src/pages/FrappeGanttTimeline.tsx (4)
177-181: Consider removing or gating debug logs for production. Multiple `console.log` statements (lines 177-181, 246, 295, 343, etc.) will produce noise in production browser consoles. Consider removing them or wrapping them in a development-only check.
224-230: Missing dependency: `loadMemories` not in useEffect deps. Same issue as in `MyceliaTimeline.tsx` - the `loadMemories` function references `user?.id` but isn't in the dependency array. Consider wrapping it with `useCallback` and adding it to the deps for correctness.
298-341: Fragile timing with setTimeout for DOM manipulation. Using `setTimeout` with a magic number (150ms) to wait for DOM rendering is fragile and may not work reliably on slower devices or under load. Consider using a `MutationObserver` to detect when the Gantt DOM is ready, or accept that year labels may occasionally not appear. The try-catch and `console.warn` provide graceful degradation, which is good.
455-493: Consider consolidating duplicate zoom logic. `zoomIn` and `zoomOut` contain nearly identical scroll-preservation logic. Consider extracting a shared helper:

```diff
+  const adjustZoom = (delta: number, min: number, max: number) => {
+    setZoomScale(prev => {
+      const newScale = Math.min(Math.max(prev + delta, min), max)
+      if (scrollContainerRef.current) {
+        const container = scrollContainerRef.current
+        const scrollRatio = (container.scrollLeft + container.clientWidth / 2) / container.scrollWidth
+        setTimeout(() => {
+          if (scrollContainerRef.current) {
+            const newScrollLeft = scrollRatio * scrollContainerRef.current.scrollWidth - container.clientWidth / 2
+            scrollContainerRef.current.scrollLeft = newScrollLeft
+          }
+        }, 0)
+      }
+      return newScale
+    })
+  }
+
+  const zoomIn = () => adjustZoom(0.25, 0.5, 3)
+  const zoomOut = () => adjustZoom(-0.25, 0.5, 3)
```

backends/advanced/src/advanced_omi_backend/auth.py (1)
129-130: `datetime.utcnow()` is deprecated in Python 3.12+. Consider using `datetime.now(UTC)` instead for forward compatibility. The `UTC` timezone is already imported from the `datetime` module elsewhere in the codebase.

```diff
-from datetime import datetime, timedelta
+from datetime import UTC, datetime, timedelta
 ...
-        "exp": datetime.utcnow() + timedelta(seconds=JWT_LIFETIME_SECONDS),
-        "iat": datetime.utcnow(),  # Issued at
+        "exp": datetime.now(UTC) + timedelta(seconds=JWT_LIFETIME_SECONDS),
+        "iat": datetime.now(UTC),  # Issued at
```

backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py (3)
286-288: Use `logging.exception()` for automatic stack trace logging. Per project learnings, prefer `logging.exception()` inside except blocks to automatically log the full stack trace.

```diff
         except Exception as e:
-            memory_logger.error(f"Failed to extract memories via OpenAI: {e}")
+            memory_logger.exception("Failed to extract memories via OpenAI")
             raise RuntimeError(f"OpenAI memory extraction failed: {e}") from e
```
404-404: Remove extraneous f-string prefix. This f-string has no placeholders.

```diff
-    memory_logger.info(f"Extracting memories from transcript via OpenAI...")
+    memory_logger.info("Extracting memories from transcript via OpenAI...")
```
853-858: Store a reference to the scheduled task to prevent garbage collection. The task returned by `asyncio.ensure_future()` should be stored to prevent it from being garbage collected before completion.

```diff
         try:
             # Schedule the coroutine to run on the existing loop
-            asyncio.ensure_future(self.aclose(), loop=loop)
+            task = asyncio.ensure_future(self.aclose(), loop=loop)
+            # Store reference to prevent garbage collection
+            self._cleanup_task = task
             memory_logger.info("✅ Close operation scheduled on running event loop")
         except Exception as e:
-            memory_logger.error(f"Error scheduling close on running loop: {e}")
+            memory_logger.exception("Error scheduling close on running loop")
```

You'll also need to add `self._cleanup_task: Optional[asyncio.Task] = None` to `__init__`.

backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py (1)
222-223: Consider logging the exception type for better debugging. The generic exception catch is acceptable here since memory detail fetching is non-critical. However, including the exception type in the warning would aid debugging.

```diff
         except Exception as e:
-            logger.warning(f"Failed to fetch memory details for UI: {e}")
+            logger.warning(f"Failed to fetch memory details for UI: {type(e).__name__}: {e}")
```

backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py (2)
40-44: Consider using `urllib.parse` for more robust URL parsing. The manual string splitting works but is fragile. Using Python's `urllib.parse.urlparse` would be more robust and handle edge cases better. Example refactor:

```python
from urllib.parse import urlparse

# ...
parsed = urlparse(self.mongo_url)
self.friendlite_db = parsed.path.lstrip('/').split('?')[0] or "friend-lite"
```
208-208: Consider moving the return statement to an `else` block. For cleaner exception handling structure, move the return statement out of the `try` body and into an `else` block after the `except` clause:

```diff
-            return result
         except Exception as e:
             logger.error(f"Failed to sync admin user: {e}", exc_info=True)
             return None
+        else:
+            return result
```

This follows the TRY300 style recommendation from Ruff.
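For illustration, a stdlib-only sketch of the `try`/`except`/`else` shape TRY300 prefers (`parse_port` is a hypothetical helper, not code from this PR):

```python
import logging

logger = logging.getLogger("try300-demo")


def parse_port(raw: str):
    """Return the port as an int, or None when parsing fails."""
    try:
        port = int(raw)
    except ValueError:
        # The failure path stays inside except, as in the sync code above.
        logger.error("Invalid port %r", raw, exc_info=True)
        return None
    else:
        # The success return lives in else, making it explicit that this
        # line cannot itself raise the ValueError being handled.
        return port
```

Keeping the success `return` out of the `try` body documents exactly which statement the handler protects.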
backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py (5)
26-33: Clarify the Optional annotation for `conversation_id`

`conversation_id: str = None` trips Ruff's implicit-Optional rule (RUF013). To make the type explicit and future-proof, update the annotation:

```diff
     user_id: str,
-    conversation_id: str = None,
+    conversation_id: str | None = None,
```

This keeps behavior the same while matching typing best practices.
123-132: Suppress or justify the unused `redis_client` in `transcribe_full_audio_job`

`redis_client` is injected by `@async_job` but unused in the function body, which triggers ARG001 from Ruff. If it's intentionally unused here, consider marking that explicitly:

```diff
 async def transcribe_full_audio_job(
 @@
     *,
-    redis_client=None,
+    redis_client=None,  # noqa: ARG001 - injected by async_job but unused in this job
 ) -> Dict[str, Any]:
```

That documents the intent and keeps the signature compatible with the decorator.
71-73: Drop redundant `f` prefixes and consider plain ASCII for the ℹ log

Several log messages are declared as f-strings but don't interpolate anything (Ruff F541), e.g.:

- Line 71: `"🎤 Speaker recognition returned no segments..."`
- Line 116: `"Continuing with original transcription speaker labels"`
- Line 329: `"📊 Created single segment from transcript text..."`
- Line 615: `"⚠️ Speaker check job disappeared from Redis..."`
- Line 678: `"ℹ️ No enrolled speakers found, but proceeding with conversation anyway"`

You can safely remove the `f` prefix on these:

```diff
-    logger.info(
-        f"🎤 Speaker recognition returned no segments, keeping original transcription segments"
-    )
+    logger.info(
+        "🎤 Speaker recognition returned no segments, keeping original transcription segments"
+    )
```

Optionally, swap the ℹ️ symbol for plain text ("Info:") to avoid the RUF001 ambiguous-character warning.

Also applies to: 116-116, 329-330, 615-617, 678-679
114-120: Tighten `except Exception` blocks and use `logger.exception`

There are a few broad `except Exception` handlers (speaker recognition, dependent-job cancellation, LLM title/summary generation). They currently:

- Swallow all exceptions
- Log with `warning`/`debug(traceback)` instead of `logger.exception`

Given these are worker jobs where you do want to degrade gracefully, keeping the catch-alls is reasonable, but debuggability can improve by:

- Narrowing where feasible (e.g., RQ-specific errors vs truly unknown errors)
- Using `logger.exception("…")` inside the `except` block so the full stack trace is logged without manual `traceback.format_exc()`
- If you ever re-raise, chaining with `raise ... from e` to preserve context

This keeps the resilience while making operational debugging much easier. Based on learnings, `logger.exception` plus proper chaining is preferred in this codebase.

Also applies to: 252-260, 409-418
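A minimal sketch of the recommended pattern (hypothetical worker step, not the PR's code): `logger.exception` captures the traceback automatically, and `raise ... from e` preserves the original cause.

```python
import logging

logger = logging.getLogger("worker")


def parse_config(raw: str) -> int:
    """Hypothetical worker step showing the preferred error-handling pattern."""
    try:
        return int(raw)
    except ValueError as e:
        # logger.exception logs the full stack trace automatically --
        # no manual traceback.format_exc() needed.
        logger.exception("Failed to parse config value %r", raw)
        # Chain with `from e` so the original context is preserved.
        raise RuntimeError(f"Invalid config value: {raw!r}") from e
```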
529-537: Double-check `check_job_alive` behavior with `current_job` possibly `None`

The zombie-job guard is a good addition, but note that `current_job` comes from `get_current_job()` once before the loop and may be `None` (e.g., if this code is ever reused outside an RQ worker).

Please confirm `check_job_alive(redis_client, current_job)` safely handles a `None` job (e.g., returns `False` early rather than raising). If not, add a cheap guard:

```python
if not current_job or not await check_job_alive(redis_client, current_job):
    break
```

This keeps the loop robust even if the job context is absent.
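The guard can be sketched as a small None-safe wrapper. Note `check_job_alive` below is a stub standing in for the real `job_utils.check_job_alive`; its signature and the job shape are assumptions for illustration only.

```python
import asyncio


async def check_job_alive(redis_client, job) -> bool:
    """Stub standing in for the real job_utils.check_job_alive (assumed API)."""
    return job.get("alive", False)


async def job_loop_should_continue(redis_client, current_job) -> bool:
    # Guard first: get_current_job() may return None outside an RQ worker,
    # so bail out cleanly instead of passing None downstream.
    if current_job is None:
        return False
    return await check_job_alive(redis_client, current_job)
```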
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`backends/advanced/webui/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (15)
- backends/advanced/src/advanced_omi_backend/auth.py (4 hunks)
- backends/advanced/src/advanced_omi_backend/routers/modules/memory_routes.py (4 hunks)
- backends/advanced/src/advanced_omi_backend/services/memory/prompts.py (2 hunks)
- backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py (1 hunk)
- backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py (1 hunk)
- backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py (11 hunks)
- backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py (23 hunks)
- backends/advanced/webui/package.json (2 hunks)
- backends/advanced/webui/src/hooks/useD3Zoom.ts (1 hunk)
- backends/advanced/webui/src/pages/FrappeGanttTimeline.tsx (1 hunk)
- backends/advanced/webui/src/pages/MyceliaTimeline.tsx (1 hunk)
- backends/advanced/webui/src/pages/TimelineRouter.tsx (1 hunk)
- tests/endpoints/client_queue_tests.robot (5 hunks)
- tests/infrastructure/infra_tests.robot (1 hunk)
- tests/integration/websocket_streaming_tests.robot (1 hunk)
🚧 Files skipped from review as they are similar to previous changes (1)
- backends/advanced/webui/src/hooks/useD3Zoom.ts
🧰 Additional context used
📓 Path-based instructions (2)
**/*.robot
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.robot: Follow the Arrange-Act-Assert pattern with inline verifications for Robot Framework tests
Check existing resource files before writing new Robot Framework keywords - NEVER duplicate existing keywords
Only use 11 approved tags for Robot Framework tests - check @tests/tags.md for the complete list
Use tab-separated tags in Robot Framework tests (e.g., `[Tags]    infra    audio-streaming`), never space-separated
Write Robot Framework assertions directly in tests, not in resource keywords
Only create Robot Framework keywords for reusable setup and action operations AFTER confirming no existing keyword exists
Use descriptive Robot Framework test names that explain business purpose, not technical implementation
Files:
- tests/infrastructure/infra_tests.robot
- tests/endpoints/client_queue_tests.robot
- tests/integration/websocket_streaming_tests.robot
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: Use `uv run python` instead of direct `python` or `python3` for executing Python scripts
Use Black formatter with 100-character line length for Python code style
Use isort for Python import organization
NEVER import modules in the middle of functions or files - ALL imports must be at the top of the file after the docstring
Group imports in order: standard library, third-party, local imports
Use lazy imports sparingly and only when absolutely necessary for circular import issues in Python
Always raise errors explicitly in Python code, never silently ignore errors or failures
Use explicit error handling with proper exceptions in Python rather than silent failures
Understand data structures and research input/response or class structure instead of adding defensive `hasattr()` checks in Python
Files:
- backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py
- backends/advanced/src/advanced_omi_backend/services/memory/prompts.py
- backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py
- backends/advanced/src/advanced_omi_backend/routers/modules/memory_routes.py
- backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py
- backends/advanced/src/advanced_omi_backend/auth.py
- backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py
🧠 Learnings (6)
📚 Learning: 2025-12-07T18:52:48.868Z
Learnt from: CR
Repo: chronicler-ai/chronicle PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-07T18:52:48.868Z
Learning: Scan existing resource files for keywords before writing Robot Framework test code - check `websocket_keywords.robot`, `queue_keywords.robot`, `conversation_keywords.robot`, etc.
Applied to files:
tests/endpoints/client_queue_tests.robot
📚 Learning: 2025-12-07T18:52:48.868Z
Learnt from: CR
Repo: chronicler-ai/chronicle PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-07T18:52:48.868Z
Learning: Applies to **/*.robot : Write Robot Framework assertions directly in tests, not in resource keywords
Applied to files:
tests/endpoints/client_queue_tests.robot
📚 Learning: 2025-12-07T18:52:48.868Z
Learnt from: CR
Repo: chronicler-ai/chronicle PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-07T18:52:48.868Z
Learning: Applies to **/*.py : Understand data structures and research input/response or class structure instead of adding defensive `hasattr()` checks in Python
Applied to files:
backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py
📚 Learning: 2025-12-08T23:52:34.959Z
Learnt from: AnkushMalaker
Repo: chronicler-ai/chronicle PR: 178
File: backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py:218-223
Timestamp: 2025-12-08T23:52:34.959Z
Learning: In Python code (chronicle project), prefer logging.exception() inside except blocks to automatically log the full stack trace. When re-raising exceptions, always chain with 'raise ... from e' to preserve the original context; use 'raise ... from None' only if you explicitly want to suppress the context. This improves debuggability across Python files.
Applied to files:
- backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py
- backends/advanced/src/advanced_omi_backend/services/memory/prompts.py
- backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py
- backends/advanced/src/advanced_omi_backend/routers/modules/memory_routes.py
- backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py
- backends/advanced/src/advanced_omi_backend/auth.py
- backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py
📚 Learning: 2025-12-07T18:52:48.868Z
Learnt from: CR
Repo: chronicler-ai/chronicle PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-07T18:52:48.868Z
Learning: Applies to **/*.py : NEVER import modules in the middle of functions or files - ALL imports must be at the top of the file after the docstring
Applied to files:
- backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py
- backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py
📚 Learning: 2025-12-07T18:52:48.868Z
Learnt from: CR
Repo: chronicler-ai/chronicle PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-07T18:52:48.868Z
Learning: Applies to **/*.py : Use explicit error handling with proper exceptions in Python rather than silent failures
Applied to files:
backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py
🧬 Code graph analysis (8)
backends/advanced/webui/src/pages/TimelineRouter.tsx (2)

- backends/advanced/webui/src/pages/FrappeGanttTimeline.tsx (1)
  - FrappeGanttTimeline (35-723)
- backends/advanced/webui/src/pages/MyceliaTimeline.tsx (1)
  - MyceliaTimeline (36-467)

backends/advanced/webui/src/pages/MyceliaTimeline.tsx (4)

- backends/advanced/src/advanced_omi_backend/services/memory/prompts.py (1)
  - TimeRange (393-397)
- backends/advanced/webui/src/types/react-gantt-timeline.d.ts (1)
  - TimelineTask (4-10)
- backends/advanced/webui/src/contexts/AuthContext.tsx (1)
  - useAuth (114-120)
- backends/advanced/webui/src/services/api.ts (1)
  - memoriesApi (106-121)

backends/advanced/webui/src/pages/FrappeGanttTimeline.tsx (3)

- backends/advanced/src/advanced_omi_backend/services/memory/prompts.py (1)
  - TimeRange (393-397)
- backends/advanced/webui/src/contexts/AuthContext.tsx (1)
  - useAuth (114-120)
- backends/advanced/webui/src/services/api.ts (1)
  - memoriesApi (106-121)

backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py (1)

- backends/advanced/scripts/sync_friendlite_mycelia.py (1)
  - _hash_api_key_with_salt (58-63)

backends/advanced/src/advanced_omi_backend/routers/modules/memory_routes.py (3)

- backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py (1)
  - add_memory (372-497)
- backends/advanced/src/advanced_omi_backend/controllers/memory_controller.py (2)
  - add_memory (169-206)
  - get_memory_by_id (258-282)
- backends/advanced/src/advanced_omi_backend/services/memory/base.py (1)
  - add_memory (87-111)

backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py (3)

- backends/advanced/src/advanced_omi_backend/utils/conversation_utils.py (2)
  - analyze_speech (50-143)
  - mark_conversation_deleted (558-580)
- backends/advanced/src/advanced_omi_backend/models/conversation.py (6)
  - segments (196-198)
  - Conversation (17-313)
  - SpeakerSegment (51-57)
  - add_transcript_version (230-258)
  - transcript (190-192)
  - TranscriptProvider (21-27)
- backends/advanced/src/advanced_omi_backend/utils/job_utils.py (1)
  - check_job_alive (13-44)

backends/advanced/src/advanced_omi_backend/auth.py (1)

- backends/advanced/src/advanced_omi_backend/models/user.py (1)
  - user_id (71-73)

backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py (5)

- backends/advanced/src/advanced_omi_backend/auth.py (1)
  - generate_jwt_for_user (106-135)
- backends/advanced/src/advanced_omi_backend/services/memory/base.py (21)
  - MemoryEntry (26-63)
  - MemoryServiceBase (66-268)
  - initialize (75-84), initialize (350-356)
  - add_memory (87-111)
  - _ensure_initialized (256-268)
  - search_memories (114-132), search_memories (371-389)
  - get_all_memories (135-149)
  - count_memories (151-163), count_memories (404-416)
  - get_memory (165-179)
  - update_memory (181-204), update_memory (419-437)
  - delete_memory (207-218), delete_memory (440-449)
  - delete_all_user_memories (221-230)
  - test_connection (233-239), test_connection (331-337), test_connection (464-470)
  - shutdown (241-247)
- backends/advanced/src/advanced_omi_backend/services/memory/config.py (1)
  - MemoryConfig (43-55)
- backends/advanced/src/advanced_omi_backend/services/memory/prompts.py (2)
  - TemporalEntity (400-409)
  - get_temporal_entity_extraction_prompt (554-556)
- backends/advanced/src/advanced_omi_backend/services/memory/providers/llm_providers.py (2)
  - _get_openai_client (45-69)
  - test_connection (269-296)
🪛 Ruff (0.14.8)
backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py
24-24: Unused function argument: redis_client
(ARG001)
203-205: Abstract raise to an inner function
(TRY301)
203-205: Avoid specifying long messages outside the exception class
(TRY003)
214-216: Abstract raise to an inner function
(TRY301)
214-216: Avoid specifying long messages outside the exception class
(TRY003)
222-222: Do not catch blind exception: Exception
(BLE001)
backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py
154-154: String contains ambiguous ℹ (INFORMATION SOURCE). Did you mean i (LATIN SMALL LETTER I)?
(RUF001)
208-208: Consider moving this statement to an else block
(TRY300)
backends/advanced/src/advanced_omi_backend/routers/modules/memory_routes.py
63-63: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
94-94: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py
32-32: PEP 484 prohibits implicit Optional
Convert to T | None
(RUF013)
72-72: f-string without any placeholders
Remove extraneous f prefix
(F541)
112-112: Consider moving this statement to an else block
(TRY300)
114-114: Do not catch blind exception: Exception
(BLE001)
116-116: f-string without any placeholders
Remove extraneous f prefix
(F541)
131-131: Unused function argument: redis_client
(ARG001)
252-252: Do not catch blind exception: Exception
(BLE001)
259-259: Do not catch blind exception: Exception
(BLE001)
329-329: f-string without any placeholders
Remove extraneous f prefix
(F541)
409-409: Do not catch blind exception: Exception
(BLE001)
615-615: f-string without any placeholders
Remove extraneous f prefix
(F541)
678-678: f-string without any placeholders
Remove extraneous f prefix
(F541)
678-678: String contains ambiguous ℹ (INFORMATION SOURCE). Did you mean i (LATIN SMALL LETTER I)?
(RUF001)
backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py
95-95: Avoid specifying long messages outside the exception class
(TRY003)
97-97: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
97-97: Avoid specifying long messages outside the exception class
(TRY003)
102-102: Do not catch blind exception: Exception
(BLE001)
103-103: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
104-104: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
104-104: Avoid specifying long messages outside the exception class
(TRY003)
123-123: Avoid specifying long messages outside the exception class
(TRY003)
170-170: Unnecessary key check before dictionary access
Replace with dict.get
(RUF019)
174-174: Unnecessary key check before dictionary access
Replace with dict.get
(RUF019)
209-209: Avoid specifying long messages outside the exception class
(TRY003)
224-224: Avoid specifying long messages outside the exception class
(TRY003)
226-226: Redundant exception object included in logging.exception call
(TRY401)
227-227: Avoid specifying long messages outside the exception class
(TRY003)
280-280: Consider moving this statement to an else block
(TRY300)
282-282: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
283-283: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
287-287: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
288-288: Avoid specifying long messages outside the exception class
(TRY003)
356-356: Consider moving this statement to an else block
(TRY300)
359-359: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
360-360: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
362-362: Do not catch blind exception: Exception
(BLE001)
363-363: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
364-364: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
367-367: Do not catch blind exception: Exception
(BLE001)
368-368: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
379-379: Unused method argument: allow_update
(ARG002)
380-380: Unused method argument: db_helper
(ARG002)
404-404: f-string without any placeholders
Remove extraneous f prefix
(F541)
440-441: Consider [source_id, client_id, *temporal_entity.entities] instead of concatenation
Replace with [source_id, client_id, *temporal_entity.entities]
(RUF005)
495-495: Do not catch blind exception: Exception
(BLE001)
496-496: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
545-545: Consider moving this statement to an else block
(TRY300)
547-547: Do not catch blind exception: Exception
(BLE001)
548-548: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
578-578: Consider moving this statement to an else block
(TRY300)
580-580: Do not catch blind exception: Exception
(BLE001)
581-581: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
602-602: Abstract raise to an inner function
(TRY301)
602-602: Avoid specifying long messages outside the exception class
(TRY003)
612-612: Do not catch blind exception: Exception
(BLE001)
613-613: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
649-649: Do not catch blind exception: Exception
(BLE001)
650-650: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
727-727: Do not catch blind exception: Exception
(BLE001)
728-728: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
764-764: Do not catch blind exception: Exception
(BLE001)
765-765: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
797-797: Consider moving this statement to an else block
(TRY300)
799-799: Do not catch blind exception: Exception
(BLE001)
800-800: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
818-818: Consider moving this statement to an else block
(TRY300)
820-820: Do not catch blind exception: Exception
(BLE001)
821-821: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
831-831: Do not catch blind exception: Exception
(BLE001)
832-832: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
855-855: Store a reference to the return value of asyncio.ensure_future
(RUF006)
857-857: Do not catch blind exception: Exception
(BLE001)
858-858: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
863-863: Do not catch blind exception: Exception
(BLE001)
864-864: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
865-865: Do not catch blind exception: Exception
(BLE001)
866-866: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
🔇 Additional comments (32)
backends/advanced/webui/package.json (2)
37-37: Verify `sass-embedded` version and build compatibility.

The new devDependency `sass-embedded@^1.83.0` is added for styling support. Confirm this version is compatible with the project's Vite setup and Tailwind CSS configuration (line 38).
24-28: This review comment references code that does not exist in the current package.json file.

The type definitions mentioned (`@types/d3@^7.4.3`, `@types/frappe-gantt@^0.9.0`, `@types/react-vertical-timeline-component@^3.3.6`) are not present at lines 24-28 or anywhere in `backends/advanced/webui/package.json`. Lines 24-28 contain `@typescript-eslint/parser`, `@vitejs/plugin-react`, `autoprefixer`, `eslint`, and `eslint-plugin-react-hooks`. Only `@types/react` and `@types/react-dom` are present in the file.

Likely an incorrect or invalid review comment.
backends/advanced/webui/src/pages/MyceliaTimeline.tsx (5)
110-140: LGTM! The task conversion logic correctly handles the priority of type flags and provides sensible defaults.
247-280: Good fix for XSS prevention.

The tooltip is now built using D3's DOM manipulation methods with `.text()` for user-controlled content (`d.name`), which safely escapes HTML. This properly addresses the previous XSS concern.
290-300: Good fix for UUID handling.

Using `lastIndexOf('-')` correctly handles memory IDs that contain dashes (like UUIDs), extracting the full memory ID before the range index suffix.
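The same last-separator idea, sketched in Python for clarity (hypothetical helper; the real code is TypeScript using `lastIndexOf('-')`): splitting on the final dash keeps dash-containing memory IDs intact.

```python
def split_task_id(task_id: str) -> tuple[str, int]:
    """Split '<memory_id>-<range_index>' where memory_id may contain dashes.

    rpartition splits on the LAST '-', so a UUID-style id survives intact --
    the Python analogue of lastIndexOf('-') in the reviewed TypeScript.
    """
    memory_id, _, index = task_id.rpartition("-")
    return memory_id, int(index)
```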
302-315: LGTM!

Y-axis labels safely use `.text()` for rendering task names, preventing any XSS from user-controlled content.
369-466: LGTM! The component render logic is well-structured with appropriate loading, error, and empty states. The tooltip positioning and legend are clear and user-friendly.
backends/advanced/webui/src/pages/TimelineRouter.tsx (1)
1-69: LGTM! The TimelineRouter is a clean, simple component that provides accessible tab navigation between timeline implementations. The `aria-label` on the nav element is a good accessibility practice.

backends/advanced/webui/src/pages/FrappeGanttTimeline.tsx (3)
50-58: Good XSS mitigation with escapeHtml.

The `escapeHtml` function properly escapes the five critical HTML characters to prevent XSS attacks in the custom popup HTML.
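For reference, Python's standard library covers the same five characters the review mentions; this sketch mirrors the TypeScript `escapeHtml` helper (the real implementation is in the reviewed .tsx file, not shown here):

```python
import html


def escape_html(value: str) -> str:
    # html.escape with quote=True escapes & < > " and ' -- the same five
    # critical characters the reviewed escapeHtml function handles.
    return html.escape(value, quote=True)
```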
263-292: Good fixes for both XSS and UUID handling.

The `custom_popup_html` now properly:

- Uses `lastIndexOf('-')` to extract memoryId, handling UUIDs correctly
- Uses `escapeHtml()` on both `task.name` (line 282) and `memory.content` (line 289) to prevent XSS

Both issues from previous reviews are properly addressed.
684-720: LGTM! The inline styles provide good dark mode support for the Gantt chart elements and color classes for different memory types.
backends/advanced/src/advanced_omi_backend/auth.py (1)
27-28: Good centralization of JWT lifetime constant.

The `JWT_LIFETIME_SECONDS` constant is now properly reused across `cookie_transport`, `get_jwt_strategy`, and `generate_jwt_for_user`, preventing drift between these configurations.

backends/advanced/src/advanced_omi_backend/services/memory/prompts.py (2)
554-556: Past review issue addressed: prompt now generated at runtime.

The stale date issue has been correctly resolved. `get_temporal_entity_extraction_prompt()` now calls `build_temporal_extraction_prompt(datetime.now())` at runtime, ensuring the date context is always current.
393-409: Well-structured Pydantic models for temporal extraction.

The `TimeRange` and `TemporalEntity` models provide clear schema definitions with descriptive Field annotations. The camelCase naming (isEvent, isPerson, etc.) appropriately matches the expected LLM JSON output format.

backends/advanced/src/advanced_omi_backend/services/memory/providers/mycelia.py (1)
51-79: Well-structured Mycelia memory service implementation.

The class properly implements `MemoryServiceBase` with:

- Centralized API calls via `_call_resource()`
- BSON data helpers for ID and date extraction
- Proper JWT-based authentication
- LLM-assisted memory and temporal extraction
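A stdlib-only sketch of the temporal schema shape described above (dataclasses are used here as a stand-in for the PR's Pydantic models; exact field types and defaults are assumptions, though the camelCase names mirror the review's description):

```python
from dataclasses import dataclass, field


@dataclass
class TimeRange:
    """Stand-in for the Pydantic TimeRange model; field names are assumed."""
    start: str  # ISO-8601 timestamp
    end: str


@dataclass
class TemporalEntity:
    # camelCase mirrors the LLM's expected JSON keys, as the review notes.
    timeRanges: list[TimeRange] = field(default_factory=list)
    isEvent: bool = False
    isPerson: bool = False
    entities: list[str] = field(default_factory=list)
```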
backends/advanced/src/advanced_omi_backend/routers/modules/memory_routes.py (3)
22-25: Clean request model for memory creation.

The `AddMemoryRequest` model appropriately defines the required `content` field and optional `source_id` for the new POST endpoint.
60-66: New POST endpoint for adding memories.

The endpoint properly delegates to `memory_controller.add_memory` and uses the `AddMemoryRequest` model for validation.
91-98: Route ordering is correct.

The dynamic route `/{memory_id}` is now properly placed after all static routes (`/unfiltered`, `/admin`), preventing path conflicts. This addresses the previous review concern.

backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py (2)
190-221: Good explicit type handling for memory entries.

The code now properly uses `isinstance()` checks for `MemoryEntry`, `dict`, and `str` types instead of defensive `hasattr()` checks. This follows the coding guidelines about understanding data structures.
46-46: Import path updated for memory service.

The import now correctly uses `advanced_omi_backend.services.memory` to access `get_memory_service`, aligning with the new memory service architecture.

backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py (4)
9-18: LGTM! Imports are properly organized at the top of the file following the coding guidelines: standard library imports first, then third-party imports, with proper grouping.
50-55: LGTM! Hash implementation is correct and consistent with the approach used in `sync_friendlite_mycelia.py`.
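A generic salt-then-hash sketch of the kind of scheme `_hash_api_key_with_salt` implements. The concatenation order and encoding below are illustrative assumptions, not the project's actual wire format:

```python
import hashlib
import secrets


def hash_api_key_with_salt(api_key: str, salt: str) -> str:
    """Illustrative salted SHA-256; the real ordering may differ in the PR."""
    return hashlib.sha256((salt + api_key).encode("utf-8")).hexdigest()


# Typical usage: generate a random key and salt, then persist only the hash.
api_key = secrets.token_urlsafe(32)
salt = secrets.token_hex(16)
hashed = hash_api_key_with_salt(api_key, salt)
```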
157-159: Good exception handling pattern.

The use of `exc_info=True` properly logs the full stack trace, which aids debugging. This follows the project's learnings about exception handling in Python. Based on learnings, this matches the preferred pattern of logging exceptions with full context.
219-243: LGTM! The singleton pattern and async startup function are well-implemented. The guard clause properly ensures synchronization only runs when the Mycelia provider is configured.
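The singleton-plus-guard shape can be sketched as below. The env var name `MEMORY_PROVIDER` and function names are hypothetical stand-ins for the PR's actual configuration mechanism:

```python
import asyncio
import os

_sync_service = None  # module-level singleton


def get_sync_service():
    """Return a single shared sync-service instance (hypothetical sketch)."""
    global _sync_service
    if _sync_service is None:
        _sync_service = object()  # stand-in for the real service object
    return _sync_service


async def sync_admin_on_startup() -> bool:
    # Guard clause: only run when the Mycelia provider is configured
    # (env var name assumed for illustration).
    if os.environ.get("MEMORY_PROVIDER") != "mycelia":
        return False
    get_sync_service()  # the real function would perform the sync here
    return True
```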
tests/endpoints/client_queue_tests.robot (4)
5-8: LGTM: Resource import restructuring improves organization.

The updated imports follow a clearer structure by separating setup/teardown from user/session operations. This aligns with the coding guideline to check existing resource files.
Based on learnings, this organization makes it easier to find and reuse existing keywords.
119-123: LGTM: Test user isolation improves reliability.

Creating a unique test user with a random ID ensures test isolation and prevents conflicts when tests run in parallel or with residual test data.
166-167: LGTM: Explicit session variable assignment improves clarity.

Storing the anonymous session in a variable before use makes the test more explicit and the session reference reusable.
176-177: LGTM: Consistent session handling pattern.

This follows the same improved pattern as the previous test, maintaining consistency across security tests.
tests/infrastructure/infra_tests.robot (1)
262-262: The review comment references a change from 100 to 200 chunks on line 262, but the code shows `num_chunks=100`. The value 200 appears in other test files (integration tests), not in this file. This comment addresses a change that does not exist in the code being reviewed.

Likely an incorrect or invalid review comment.
tests/integration/websocket_streaming_tests.robot (1)
214-214: Keyword signature verified; parameter usage is correct.

The keyword `Verify Segments Match Expected Timestamps` in `transcript_verification.robot` accepts the `expected_segments` parameter as defined in its signature: `[Arguments]    ${segments}    ${expected_segments}=${None}    ${tolerance}=${None}`. The explicit named argument at line 214 is valid and improves readability without breaking any existing behavior.

backends/advanced/src/advanced_omi_backend/workers/transcription_jobs.py (2)
205-276: Early "no meaningful speech" handling looks solid.

The new `analyze_speech` gate, soft-delete via `mark_conversation_deleted`, and early return payload (including `reason`, `word_count`, `duration`, `deleted`) form a coherent failure path and avoid doing unnecessary downstream work. The follow-up log for the positive path is also clear.

No functional issues from this block; just a note that this changes behavior by skipping transcript version creation and LLM title/summary when no meaningful speech is found, which seems intentional.
430-460: Job metadata and return payload extensions look consistent.

The additional job meta fields (`conversation_id`, `title`, `summary`, `transcript_length`, `word_count`, `processing_time`, and later `conversation_job_id`, `speaker_check_job_id`, `detected_speakers`, `runtime_seconds`) seem internally consistent with the rest of the logic and should be very helpful for UI/ops.

No issues from a correctness standpoint; just ensure any consumers of these fields tolerate them being absent in older jobs or failure paths (e.g., the `success: False` early-return case).

Also applies to: 708-737
```python
logger.info("")
logger.info("🔐 To retrieve credentials for Mycelia configuration:")
logger.info("   cd backends/advanced/scripts")
logger.info("   python create_mycelia_api_key.py")
```
🛠️ Refactor suggestion | 🟠 Major
Update command to use uv run python per coding guidelines.
The instruction uses python create_mycelia_api_key.py, but coding guidelines specify using uv run python instead of direct python or python3.
Apply this diff:
```diff
-    logger.info("   python create_mycelia_api_key.py")
+    logger.info("   uv run python create_mycelia_api_key.py")
```

As per coding guidelines, use the `uv run python` command for executing Python scripts.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-    logger.info("   python create_mycelia_api_key.py")
+    logger.info("   uv run python create_mycelia_api_key.py")
```
🤖 Prompt for AI Agents
In backends/advanced/src/advanced_omi_backend/services/mycelia_sync.py around
line 201, the logged shell instruction uses "python create_mycelia_api_key.py";
update the message to use "uv run python create_mycelia_api_key.py" to follow
the coding guideline requiring "uv run python" instead of direct "python" or
"python3". Ensure the logger.info call's string is replaced accordingly.
```tsx
<select
  onChange={(e) => changeViewMode(e.target.value)}
  value={currentViewMode}
  className="px-3 py-2 border border-gray-300 dark:border-gray-600 rounded-lg bg-white dark:bg-gray-700 text-gray-900 dark:text-gray-100 text-sm focus:ring-2 focus:ring-2 focus:ring-blue-500 focus:border-blue-500"
>
  {viewModes.map(mode => (
    <option key={mode.value} value={mode.value}>{mode.label}</option>
  ))}
</select>
```
Duplicate CSS class: `focus:ring-2` appears twice.

Line 622 has `focus:ring-2 focus:ring-2`; the class is duplicated.

```diff
- className="px-3 py-2 border border-gray-300 dark:border-gray-600 rounded-lg bg-white dark:bg-gray-700 text-gray-900 dark:text-gray-100 text-sm focus:ring-2 focus:ring-2 focus:ring-blue-500 focus:border-blue-500"
+ className="px-3 py-2 border border-gray-300 dark:border-gray-600 rounded-lg bg-white dark:bg-gray-700 text-gray-900 dark:text-gray-100 text-sm focus:ring-2 focus:ring-blue-500 focus:border-blue-500"
```

📝 Committable suggestion
+ className="px-3 py-2 border border-gray-300 dark:border-gray-600 rounded-lg bg-white dark:bg-gray-700 text-gray-900 dark:text-gray-100 text-sm focus:ring-2 focus:ring-blue-500 focus:border-blue-500"📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```tsx
<select
  onChange={(e) => changeViewMode(e.target.value)}
  value={currentViewMode}
  className="px-3 py-2 border border-gray-300 dark:border-gray-600 rounded-lg bg-white dark:bg-gray-700 text-gray-900 dark:text-gray-100 text-sm focus:ring-2 focus:ring-blue-500 focus:border-blue-500"
>
  {viewModes.map(mode => (
    <option key={mode.value} value={mode.value}>{mode.label}</option>
  ))}
</select>
```
🤖 Prompt for AI Agents
In backends/advanced/webui/src/pages/FrappeGanttTimeline.tsx around lines 619 to
627, the select element's className contains a duplicated CSS class
`focus:ring-2` (appears twice). Remove the duplicate so the class list contains
`focus:ring-2` only once, keeping the remaining classes unchanged to preserve
styling and focus behavior.
```robot
Dictionary Should Contain Key    ${stats}    queued_jobs
Dictionary Should Contain Key    ${stats}    processing_jobs
Dictionary Should Contain Key    ${stats}    completed_jobs
Dictionary Should Contain Key    ${stats}    failed_jobs
```
🧩 Analysis chain
🏁 Script executed:
```shell
# Search for the statistics endpoint implementation and the new key names
rg -nP -C 5 'queued_jobs|processing_jobs|completed_jobs|failed_jobs' --type py
```

Repository: chronicler-ai/chronicle
Length of output: 14021
🏁 Script executed:
```shell
# Also search for the old key names to see if they still exist
rg -nP 'queued|processing|completed|failed' --type py -C 3 | head -100
```

Repository: chronicler-ai/chronicle
Length of output: 5232
🏁 Script executed:
```shell
# Find the API endpoint implementation
rg -nP '/api/queue/stats|queue.*stats|stats.*endpoint' --type py -C 5
```

Repository: chronicler-ai/chronicle
Length of output: 7775
🏁 Script executed:
```shell
# Check the test file structure and context
head -100 tests/endpoints/client_queue_tests.robot
tail -20 tests/endpoints/client_queue_tests.robot
```

Repository: chronicler-ai/chronicle
Length of output: 4818
🏁 Script executed:
# Search for any API schema or contract definitions
fd -e json -e yaml -e md | xargs grep -l 'queued_jobs\|queue.*stats' 2>/dev/null || echo "No schema files found with those keys"
Repository: chronicler-ai/chronicle
Length of output: 95
🏁 Script executed:
cat -n tests/endpoints/client_queue_tests.robot | sed -n '60,100p'
Repository: chronicler-ai/chronicle
Length of output: 2050
🏁 Script executed:
# Count total lines in the test file to understand scope
wc -l tests/endpoints/client_queue_tests.robot
Repository: chronicler-ai/chronicle
Length of output: 111
🏁 Script executed:
# Look for any other references to these statistics keys in the test file
rg -n 'queued|processing|completed|failed' tests/endpoints/client_queue_tests.robot
Repository: chronicler-ai/chronicle
Length of output: 504
Backend API mismatch: the test will fail against the current code.
The backend /api/queue/stats endpoint returns queued_jobs, processing_jobs, completed_jobs, and failed_jobs (verified in queue_controller.py lines 86-89), but the test currently checks for the old key names queued, processing, completed, and failed (lines 89-92). The test assertions at lines 89-92 and 95-98 must be updated to use the new key names with the _jobs suffix to match the backend API contract.
🤖 Prompt for AI Agents
In tests/endpoints/client_queue_tests.robot around lines 90 to 93 (and also
update the adjacent assertions at 89-92 and 95-98), the test is asserting old
key names; replace occurrences of queued, processing, completed, and failed with
queued_jobs, processing_jobs, completed_jobs, and failed_jobs respectively so
the Robot assertions match the backend /api/queue/stats response; update both
the "Dictionary Should Contain Key" checks and any subsequent references to
those keys.
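The renamed contract can also be pinned down outside Robot. A minimal Python sketch of the check, assuming the `*_jobs` key names the review verified in queue_controller.py (`validate_queue_stats` is a hypothetical helper, not backend code):

```python
EXPECTED_STATS_KEYS = {"queued_jobs", "processing_jobs", "completed_jobs", "failed_jobs"}

def validate_queue_stats(payload: dict) -> None:
    """Raise AssertionError if a /api/queue/stats payload lacks any renamed key."""
    missing = EXPECTED_STATS_KEYS - payload.keys()
    if missing:
        raise AssertionError(f"/api/queue/stats missing keys: {sorted(missing)}")

# A new-style payload passes; an old-style one ({"queued": 0, ...}) would raise.
validate_queue_stats({"queued_jobs": 0, "processing_jobs": 1,
                      "completed_jobs": 2, "failed_jobs": 0})
```

A check like this, shared between the backend tests and the Robot suite's fixtures, would have caught the rename at the contract boundary instead of in a failing end-to-end run.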
${response}=    GET On Session    admin_session    /api/queue/worker-details
Should Be Equal As Integers    ${response.status_code}    200

${health}=    Set Variable    ${response.json()}
Dictionary Should Contain Key    ${health}    status
Dictionary Should Contain Key    ${health}    worker_running
Dictionary Should Contain Key    ${health}    message
Dictionary Should Contain Key    ${health}    redis_connection
Dictionary Should Contain Key    ${health}    workers
Dictionary Should Contain Key    ${health}    queues

# Status should be one of expected values
Should Be True    '${health}[status]' in ['healthy', 'stopped', 'unhealthy']
Should Be True    '${health}[redis_connection]' in ['healthy', 'stopped', 'unhealthy']
🧩 Analysis chain
🏁 Script executed:
# Search for the worker-details endpoint implementation
rg -n 'worker-details|worker_details' --type=py -i
# Also search for queue health endpoint
rg -n '/api/queue/health|queue/health' --type=py -i
# Search for the response schema definition
rg -n 'redis_connection|workers|queues' --type=py -A3 -B3 | head -100
Repository: chronicler-ai/chronicle
Length of output: 10466
🏁 Script executed:
# Read the full worker-details endpoint implementation
sed -n '351,400p' backends/advanced/src/advanced_omi_backend/routers/modules/queue_routes.py
# Search for get_queue_health function
rg -n 'def get_queue_health' --type=py -A30
# Check for old /api/queue/health endpoint
rg -n '@router.get.*health|/api/queue/health' backends/advanced/src/advanced_omi_backend/routers/modules/queue_routes.py
Repository: chronicler-ai/chronicle
Length of output: 5295
🏁 Script executed:
# Check complete get_queue_health implementation to see all possible return values for redis_connection
sed -n '481,550p' backends/advanced/src/advanced_omi_backend/controllers/queue_controller.py
Repository: chronicler-ai/chronicle
Length of output: 2203
Fix the redis_connection status assertion: the test will fail once the backend returns real error messages.
The endpoint and response structure are correctly implemented, but the assertion at line 111 is incompatible with the actual backend response. The get_queue_health() function returns:
- "healthy" when the Redis connection succeeds
- "unhealthy: {exception_message}" when the Redis connection fails (e.g., "unhealthy: ConnectionError: ...")
The current test assertion '${health}[redis_connection]' in ['healthy', 'stopped', 'unhealthy'] will fail because the backend returns "unhealthy: ..." with an error message suffix, not the bare string "unhealthy". Additionally, "stopped" is never returned by the backend.
Update the assertion to handle the actual response format, either by checking the beginning of the string or accepting the full error message pattern.
🤖 Prompt for AI Agents
In tests/endpoints/client_queue_tests.robot around lines 102 to 111, the
assertion for redis_connection is too strict and will fail because the backend
returns "unhealthy: {error}" (with the error message appended) and never returns
"stopped"; update the test to accept the actual formats by asserting the
redis_connection value is either exactly "healthy" or starts with "unhealthy"
(remove "stopped" from accepted values) so the test passes when the backend
returns "unhealthy: <error message>".
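The relaxed check the prompt describes can be expressed as a simple predicate. A Python sketch, assuming the two response shapes quoted above from get_queue_health() (`redis_connection_ok` is a hypothetical helper, not backend code):

```python
def redis_connection_ok(value: str) -> bool:
    """Accept exactly the shapes the backend emits:
    "healthy" on success, or "unhealthy: <error message>" on failure."""
    return value == "healthy" or value.startswith("unhealthy")

assert redis_connection_ok("healthy")
assert redis_connection_ok("unhealthy: ConnectionError: connection refused")
assert not redis_connection_ok("stopped")  # "stopped" is never returned
```

The equivalent Robot assertion would test equality with 'healthy' or a prefix match on 'unhealthy' rather than membership in a fixed list.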
# # Cleanup
# Delete Test User    ${test_user}[user_id]
Uncomment cleanup to prevent test data pollution.
The commented out cleanup leaves test users in the database after each test run. This will accumulate orphaned test data over time, potentially causing:
- Database bloat
- Performance degradation
- Test failures due to unexpected data
Apply this diff to restore cleanup:
- # # Cleanup
- # Delete Test User ${test_user}[user_id]
+ [Teardown]    Delete Test User    ${test_user}[user_id]
Alternatively, if cleanup must be disabled temporarily, add a comment explaining why and create a tracking issue.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In tests/endpoints/client_queue_tests.robot around lines 138 to 139, the cleanup
step that deletes the test user is commented out causing test data to
accumulate; restore the cleanup by uncommenting the "Delete Test User
${test_user}[user_id]" line so the test user is removed after each run, or if
you must keep it disabled temporarily, add a clear inline comment explaining why
and create/link a tracking issue for re-enabling cleanup.
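Moving the deletion into [Teardown] matters because Robot runs teardown even when the test body fails, so users are never orphaned by a failing assertion. A hypothetical Python analogue of that guarantee, with stub create/delete functions standing in for the real keywords:

```python
def run_with_cleanup(create_user, delete_user, body):
    """Mimic Robot's [Teardown]: delete_user runs even if body raises."""
    user = create_user()
    try:
        body(user)
    finally:
        delete_user(user)

def failing_body(user):
    raise RuntimeError("simulated test failure")

deleted = []
try:
    run_with_cleanup(lambda: "test-user-1", deleted.append, failing_body)
except RuntimeError:
    pass
print(deleted)  # → ['test-user-1']
```

A trailing cleanup step in the test body, by contrast, is skipped as soon as any earlier keyword fails, which is exactly how the orphaned test users accumulate.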
Summary by CodeRabbit
Release Notes
New Features
Infrastructure