Conversation
@coderabbitai review
Important
Review skipped: auto reviews are disabled on this repository. Please check the settings in the CodeRabbit UI.

Walkthrough
This PR enhances setup workflows across multiple services by introducing helper methods for managing existing environment values and API key masking, replacing Tailscale-specific IP handling with generic server address support, and adding CUDA version detection for speaker recognition. New startup scripts initialize the services through a unified entry point.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant Wizard
    participant Setup
    participant EnvFile as .env File
    User->>Wizard: Run setup
    Wizard->>Setup: Initialize setup class
    Setup->>Setup: read_existing_env_value(key)
    alt Existing value found & not placeholder
        Setup->>Setup: mask_api_key(value)
        Setup->>User: Prompt to reuse masked value
        alt User accepts
            User-->>Setup: Confirm reuse
            Setup->>EnvFile: Store existing key
        else User enters new
            User-->>Setup: Provide new key
            Setup->>EnvFile: Store new key
        end
    else No existing value
        Setup->>User: Prompt for new key
        User-->>Setup: Provide key
        Setup->>EnvFile: Store key
    end
    Setup-->>Wizard: Configuration complete
```
Estimated code review effort
🎯 3 (Moderate) | ⏱️ ~25 minutes

The changes span multiple files with heterogeneous modifications: setup helper methods with logic for environment value reuse, SSL script refactoring with conditional branching, Docker variable renaming, and startup script additions. While individual cohorts follow consistent patterns, each requires separate reasoning for correctness (environment handling, SSL certificate generation logic, CUDA detection).

Possibly related PRs
Suggested reviewers
Poem
Pre-merge checks and finishing touches
❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
✅ Actions performed
Review triggered.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
wizard.py (1)
153-160: Avoid passing API keys via CLI; use env to prevent leakage.

Secrets in process args can be exposed via ps/history. Pass them via environment to the subprocess instead.

```diff
-            cmd.extend(['--deepgram-api-key', deepgram_key])
-            console.print("[blue][INFO][/blue] Found existing DEEPGRAM_API_KEY from backend config, reusing")
+            console.print("[blue][INFO][/blue] Found existing DEEPGRAM_API_KEY from backend config, reusing")
+            # Will inject via env when launching
@@
-            cmd.extend(['--openai-api-key', openai_key])
-            console.print("[blue][INFO][/blue] Found existing OPENAI_API_KEY from backend config, reusing")
+            console.print("[blue][INFO][/blue] Found existing OPENAI_API_KEY from backend config, reusing")
```

And when calling subprocess.run:

```diff
-        result = subprocess.run(
-            cmd,
-            cwd=service['path'],
-            check=True,
-            timeout=300
-        )
+        env = os.environ.copy()
+        if service_name == 'speaker-recognition' and deepgram_key:
+            env['DEEPGRAM_API_KEY'] = deepgram_key
+        if service_name == 'openmemory-mcp' and openai_key:
+            env['OPENAI_API_KEY'] = openai_key
+        result = subprocess.run(cmd, cwd=service['path'], check=True, timeout=300, env=env)
```

Also applies to: 175-181
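A self-contained illustration of the env-injection pattern the comment describes (the key value and variable name here are dummies, and the child process is a stand-in for the real service setup):

```python
import os
import subprocess
import sys

# Hypothetical sketch: hand a secret to a child process through its environment
# rather than argv, so it never appears in `ps` output or shell history.
secret = "dg-example-not-a-real-key"

env = os.environ.copy()
env["DEEPGRAM_API_KEY"] = secret

# The child reads the key from its environment instead of a CLI flag.
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DEEPGRAM_API_KEY'])"],
    capture_output=True,
    text=True,
    env=env,
    check=True,
)
print(result.stdout.strip())  # → dg-example-not-a-real-key
```

Because `cmd` no longer contains the secret, anything that logs or inspects the argument list sees only the flagless command.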
🧹 Nitpick comments (9)
extras/speaker-recognition/ssl/generate-ssl.sh (1)
16-23: IP address regex lacks octet range validation.

The regex pattern `^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$` will match invalid IP addresses like `999.999.999.999` because it doesn't enforce that each octet is in the range 0–255. While this is unlikely to cause operational issues (invalid IPs will fail at certificate generation time), the detection logic could be more robust.

Consider updating the regex to validate octet ranges:

```diff
- if echo "$SERVER_ADDRESS" | grep -E '^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$' > /dev/null; then
+ if echo "$SERVER_ADDRESS" | grep -E '^([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$' > /dev/null; then
```

Alternatively, you could use a simpler check if false positives are acceptable:

```shell
if [[ "$SERVER_ADDRESS" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
```

backends/advanced/ssl/generate-ssl.sh (3)
16-23: IP address regex lacks octet range validation (same issue as extras/speaker-recognition version).

The regex pattern allows invalid IP addresses like `999.999.999.999`. Both SSL generation scripts should use consistent and valid IP validation logic. See the suggested refactor in the extras file review for options.
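One way to share a single, octet-aware check between the two scripts is a small bash function (a sketch; `is_ipv4` is a hypothetical helper name, not code from the PR):

```shell
# Validate a dotted-quad IPv4: right shape, and every octet in 0-255.
is_ipv4() {
  local ip=$1 octet
  [[ $ip =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]] || return 1
  local IFS='.'
  for octet in $ip; do
    # 10#$octet forces base 10 so leading zeros (e.g. "08") are not read as octal
    (( 10#$octet <= 255 )) || return 1
  done
}

is_ipv4 "192.168.1.10" && echo "treat as IP (IP SAN entry)"
is_ipv4 "999.999.999.999" || echo "reject: octet out of range"
is_ipv4 "myserver.tailxxxxx.ts.net" || echo "treat as domain (DNS SAN entry)"
```

Sourcing one helper from both `generate-ssl.sh` scripts would also address the consistency concern directly.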
19-19: Emoji decorators inconsistent across similar scripts.

This file includes emoji (🔐 and ✅) in status messages, while the analogous file at `extras/speaker-recognition/ssl/generate-ssl.sh` does not. If emoji styling is intentional per module, consider documenting this convention. Otherwise, standardize for consistency.

Also applies to: 22-22, 83-83
12-12: Domain example uses clearer placeholder pattern.

The example `$0 myserver.tailxxxxx.ts.net` is more instructive than the specific address in the extras version. This style could be adopted in `extras/speaker-recognition/ssl/generate-ssl.sh` for consistency and clarity.

backends/advanced/init.py (1)
103-112: Return type should be Optional[str] (can return None).

`read_existing_env_value` returns None when key/file missing but is annotated as `str`. Update the signature for correctness.
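For illustration, a standalone sketch of such a reader with the corrected annotation (a simplified, hypothetical version of the class method, written as a free function):

```python
import os
import tempfile
from pathlib import Path
from typing import Optional

def read_existing_env_value(env_file: Path, key: str) -> Optional[str]:
    """Return the value of `key` from a .env file, or None when file or key is missing."""
    if not env_file.exists():
        return None  # the Optional annotation makes this path explicit
    for line in env_file.read_text().splitlines():
        stripped = line.strip()
        if stripped.startswith(f"{key}="):
            return stripped.split("=", 1)[1].strip("'\"")
    return None

# Quick demonstration against a throwaway .env file
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write("API_KEY='abc123'\nOTHER=1\n")
path = Path(fh.name)
found = read_existing_env_value(path, "API_KEY")
missing = read_existing_env_value(path, "MISSING")
print(found, missing)  # → abc123 None
os.unlink(fh.name)
```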
```diff
-from typing import Any, Dict
+from typing import Any, Dict, Optional
@@
-    def read_existing_env_value(self, key: str) -> str:
+    def read_existing_env_value(self, key: str) -> Optional[str]:
```

wizard.sh (1)
1-1: Add shebang and safe shell options; fix ShellCheck SC2148.

Without a shebang, the shell is unknown; also adopt strict mode and path safety.

```diff
+#!/usr/bin/env bash
+set -euo pipefail
+cd "$(dirname "${BASH_SOURCE[0]}")"
 uv run --with-requirements setup-requirements.txt wizard.py
```

start.sh (1)
1-1: Add shebang and safe shell options; fix ShellCheck SC2148.

Consistent with wizard.sh, declare shell and enable strict mode.

```diff
+#!/usr/bin/env bash
+set -euo pipefail
+cd "$(dirname "${BASH_SOURCE[0]}")"
 uv run --with-requirements setup-requirements.txt python services.py start --all --build
```

extras/speaker-recognition/init.py (2)
79-88: Same short-key masking issue as backend; apply consistent fix.

Mirror the hardened masking logic to avoid revealing short tokens.

```diff
-    def mask_api_key(self, key: str, show_chars: int = 5) -> str:
+    def mask_api_key(self, key: str, show_chars: int = 5) -> str:
         """Mask API key showing only first and last few characters"""
-        if not key or len(key) <= show_chars * 2:
-            return key
+        if not key:
+            return ""
@@
-        return f"{key_clean[:show_chars]}{'*' * min(15, len(key_clean) - show_chars * 2)}{key_clean[-show_chars:]}"
+        if len(key_clean) <= show_chars * 2:
+            head = key_clean[:1]
+            tail = key_clean[-1:] if len(key_clean) > 1 else ""
+            return f"{head}{'*' * max(4, len(key_clean) - len(head) - len(tail))}{tail}"
+        return f"{key_clean[:show_chars]}{'*' * min(15, len(key_clean) - show_chars * 2)}{key_clean[-show_chars:]}"
```
205-208: Compute-mode flow looks good; minor UX nit.

Nice defaulting and detection. Consider surfacing a brief note that the selected tag must match the chosen base image in Docker (cpu vs cuXXX) to avoid mismatched wheels.
Also applies to: 210-241, 242-242
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- backends/advanced/init.py (5 hunks)
- backends/advanced/ssl/generate-ssl.sh (3 hunks)
- extras/speaker-recognition/.env.template (1 hunks)
- extras/speaker-recognition/Dockerfile (3 hunks)
- extras/speaker-recognition/docker-compose.yml (3 hunks)
- extras/speaker-recognition/init.py (3 hunks)
- extras/speaker-recognition/ssl/generate-ssl.sh (3 hunks)
- start.sh (1 hunks)
- wizard.py (1 hunks)
- wizard.sh (1 hunks)
🧰 Additional context used
🪛 Ruff (0.14.1)
extras/speaker-recognition/init.py
153-153: Starting a process with a partial executable path
(S607)
161-161: Starting a process with a partial executable path
(S607)
wizard.py
164-164: Possible hardcoded password assigned to: "hf_token"
(S105)
🪛 Shellcheck (0.11.0)
start.sh
[error] 1-1: Tips depend on target shell and yours is unknown. Add a shebang or a 'shell' directive.
(SC2148)
wizard.sh
[error] 1-1: Tips depend on target shell and yours is unknown. Add a shebang or a 'shell' directive.
(SC2148)
🔇 Additional comments (6)
extras/speaker-recognition/ssl/generate-ssl.sh (2)
60-65: Conditional SAN entry logic is sound.

The branching approach to add either `IP.2` or `DNS.3` based on the detected address type is correct and aligns with the hardcoded localhost entries (DNS.1, DNS.2, IP.1).
7-14: Usage documentation is clear and examples are helpful.

The updated usage message now correctly reflects that either an IP or domain name is accepted. The example and error handling are appropriate.

backends/advanced/ssl/generate-ssl.sh (2)
backends/advanced/ssl/generate-ssl.sh (2)
60-65: Conditional SAN entry logic is correct.

The branching approach to add either `IP.2` or `DNS.3` based on address type is sound and consistent with the logic in the extras version.
7-14: Usage documentation is clear; examples and error handling are appropriate.

The refactored usage message correctly indicates that either IP or domain is accepted.

extras/speaker-recognition/.env.template (1)
extras/speaker-recognition/.env.template (1)
16-20: CUDA tag variable is clear and documented.

PYTORCH_CUDA_VERSION with cu121/cu126/cu128 guidance looks good and aligns with Docker/build changes.

extras/speaker-recognition/Dockerfile (1)
extras/speaker-recognition/Dockerfile (1)
3-7: Rename to PYTORCH_CUDA_VERSION is consistent; build/run flags align.

ARG/ENV and uv extras usage look correct for cpu/cu121+ variants.
Also applies to: 30-32, 45-45
```python
    def mask_api_key(self, key: str, show_chars: int = 5) -> str:
        """Mask API key showing only first and last few characters"""
        if not key or len(key) <= show_chars * 2:
            return key

        # Remove quotes if present
        key_clean = key.strip("'\"")

        return f"{key_clean[:show_chars]}{'*' * min(15, len(key_clean) - show_chars * 2)}{key_clean[-show_chars:]}"
```
Masking leaks short keys; always mask at least partially.
Current logic returns the raw key if length <= 2*show_chars, which can expose secrets in prompts/logs.
```diff
-    def mask_api_key(self, key: str, show_chars: int = 5) -> str:
+    def mask_api_key(self, key: str, show_chars: int = 5) -> str:
         """Mask API key showing only first and last few characters"""
-        if not key or len(key) <= show_chars * 2:
-            return key
+        if not key:
+            return ""
@@
-        return f"{key_clean[:show_chars]}{'*' * min(15, len(key_clean) - show_chars * 2)}{key_clean[-show_chars:]}"
+        if len(key_clean) <= show_chars * 2:
+            # For short keys, reveal first/last 1 char max.
+            head = key_clean[:1]
+            tail = key_clean[-1:] if len(key_clean) > 1 else ""
+            return f"{head}{'*' * max(4, len(key_clean) - len(head) - len(tail))}{tail}"
+        return f"{key_clean[:show_chars]}{'*' * min(15, len(key_clean) - show_chars * 2)}{key_clean[-show_chars:]}"
```

🤖 Prompt for AI Agents
In backends/advanced/init.py around lines 113-122, the current mask_api_key
returns the raw key when len <= 2*show_chars; change it to never return the raw
key. Strip surrounding quotes as before, then if key_clean length is small (<=
2*show_chars) build a masked version that always hides at least one character by
showing only the first and last character (if length>1) and replacing the middle
with '*' (use max(1, len(key_clean)-2) stars); otherwise keep the existing
behavior of showing show_chars at both ends and masking the middle with up to 15
asterisks. Ensure the function always returns a masked string and never the
original unmasked key.
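As a concrete check on the hardened behavior, here is a runnable standalone sketch of the suggested masking (written as a free function rather than the class method; the key values are dummies):

```python
def mask_api_key(key: str, show_chars: int = 5) -> str:
    """Mask an API key, never returning the raw value."""
    if not key:
        return ""
    key_clean = key.strip("'\"")
    if len(key_clean) <= show_chars * 2:
        # Short keys: reveal at most the first and last character.
        head = key_clean[:1]
        tail = key_clean[-1:] if len(key_clean) > 1 else ""
        return f"{head}{'*' * max(4, len(key_clean) - len(head) - len(tail))}{tail}"
    stars = "*" * min(15, len(key_clean) - show_chars * 2)
    return f"{key_clean[:show_chars]}{stars}{key_clean[-show_chars:]}"

print(mask_api_key("sk-1234567890abcdef"))  # → sk-12*********bcdef
print(mask_api_key("short"))                # → s****t
```

Note that unlike the original, a short key like `short` now comes back mostly masked instead of verbatim.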
```python
        # Check for existing API key
        existing_key = self.read_existing_env_value("DEEPGRAM_API_KEY")
        if existing_key and existing_key not in ['your_deepgram_api_key_here', 'your-deepgram-key-here']:
            masked_key = self.mask_api_key(existing_key)
            prompt_text = f"Deepgram API key ({masked_key}) [press Enter to reuse, or enter new]"
            api_key_input = self.prompt_value(prompt_text, "")
            api_key = api_key_input if api_key_input else existing_key
        else:
            api_key = self.prompt_value("Deepgram API key (leave empty to skip)", "")
```
Collect new API keys via hidden input, not echoed prompts.
Deepgram/Mistral/OpenAI new-key paths use Prompt.ask, which displays secrets on screen and persists in terminal history. Use getpass-based prompt_password for new entries; keep masked reuse flow.
```diff
-            api_key = self.prompt_value("Deepgram API key (leave empty to skip)", "")
+            api_key = self.prompt_password("Deepgram API key (leave empty to skip)") or ""
@@
-            api_key = self.prompt_value("Mistral API key (leave empty to skip)", "")
+            api_key = self.prompt_password("Mistral API key (leave empty to skip)") or ""
@@
-            api_key = self.prompt_value("OpenAI API key (leave empty to skip)", "")
+            api_key = self.prompt_password("OpenAI API key (leave empty to skip)") or ""
```

Also applies to: 174-183, 220-229
🤖 Prompt for AI Agents
In backends/advanced/init.py around lines 152-161 (and similarly at 174-183,
220-229), the code uses visible prompts for entering new API keys which echo
secrets to the terminal; change the logic so that when prompting for a new key
you use a getpass-based prompt (e.g., prompt_password or getpass.getpass) for
hidden input, while keeping the existing masked-reuse flow (display masked_key
and allow empty Enter to reuse existing key); ensure validation and assignment
remain the same but replace prompt_value for new entries with the hidden prompt.
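A minimal sketch of the hidden-input helper the prompt describes (`prompt_password` is the review's suggested name, not an existing API; `getpass.getpass` is the stdlib primitive that reads without echoing):

```python
import getpass

def prompt_password(prompt: str) -> str:
    """Collect a secret without echoing it to the terminal."""
    return getpass.getpass(f"{prompt}: ")
```

If the setup classes prefer to stay within Rich, `Prompt.ask(..., password=True)` provides the same non-echoing behavior.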
```yaml
  default:
    name: friend-network
    external: true
```
🧩 Analysis chain
external: true requires pre-created network; fresh runs may fail.
Declaring networks.default as external means friend-network must already exist. Starting this compose in isolation will error.
Options:
- Remove external: true to let Compose create it automatically; or
- Keep external and document a pre-step: docker network create friend-network; or
- Gate externalization behind an env var (EXTERNAL_NETWORK=true).
```diff
 networks:
   default:
     name: friend-network
-    external: true
+    # Set to true only if the network is managed externally
+    external: ${EXTERNAL_NETWORK:-false}
```

Run to verify failure/success scenarios:
🏁 Script executed:
```shell
#!/usr/bin/env bash
set -euo pipefail
# 1) Expect failure when network absent and external=true
docker network rm friend-network >/dev/null 2>&1 || true
echo "Expect failure:"; docker compose -f extras/speaker-recognition/docker-compose.yml config >/dev/null && echo "OK (unexpected)" || echo "Failed as expected"
# 2) Create network and expect success
docker network create friend-network >/dev/null 2>&1 || true
echo "Expect success:"; docker compose -f extras/speaker-recognition/docker-compose.yml config >/dev/null && echo "OK (expected)"
```

Length of output: 140
I need to examine the actual docker-compose.yml file since Docker isn't available in the sandbox to run the verification script. Let me inspect the current state:
---
🏁 Script executed:

```shell
#!/bin/bash
# Check the current state of the docker-compose.yml file at lines 109-111
cat extras/speaker-recognition/docker-compose.yml | sed -n '100,115p'
```

Length of output: 283

🏁 Script executed:

```shell
#!/bin/bash
# Get full networks section to understand complete context
grep -n "networks:" extras/speaker-recognition/docker-compose.yml -A 10
```

Length of output: 349

🏁 Script executed:

```shell
#!/bin/bash
# Check if external: true exists anywhere in the file
grep -n "external:" extras/speaker-recognition/docker-compose.yml
```

Length of output: 22
Let me check for setup documentation and override files that might handle the network creation:
---
🏁 Script executed:

```shell
#!/bin/bash
# Look for docker-compose override files
find extras/speaker-recognition -name "docker-compose*.yml" -o -name "docker-compose*.yaml"
```

Length of output: 96

🏁 Script executed:

```shell
#!/bin/bash
# Search for "friend-network" references in the project
rg "friend-network" extras/speaker-recognition/ -n
```

Length of output: 74

🏁 Script executed:

```shell
#!/bin/bash
# Check for setup/initialization scripts in the speaker-recognition directory
find extras/speaker-recognition -type f \( -name "*.sh" -o -name "Makefile" -o -name "*.md" \) | head -20
```

Length of output: 422

🏁 Script executed:

```shell
#!/bin/bash
# Look for README or setup documentation
find extras/speaker-recognition -maxdepth 2 -name "README*" -o -name "SETUP*" -o -name "*.md"
```

Length of output: 217
Let me check the alternative compose file and setup scripts to understand the intended network management:
---
🏁 Script executed:

```shell
#!/bin/bash
# Check docker-compose-test.yml network configuration
grep -n "networks:" extras/speaker-recognition/docker-compose-test.yml -A 5
```

Length of output: 0

🏁 Script executed:

```shell
#!/bin/bash
# Search setup.sh for network creation logic
grep -n "network\|friend-network" extras/speaker-recognition/setup.sh
```

Length of output: 0

🏁 Script executed:

```shell
#!/bin/bash
# Search init.sh for network creation logic
grep -n "network\|friend-network" extras/speaker-recognition/init.sh
```

Length of output: 0

🏁 Script executed:

```shell
#!/bin/bash
# Search README for network setup instructions
grep -n "network\|friend-network\|docker network" extras/speaker-recognition/README.md
```

Length of output: 271

🏁 Script executed:

```shell
#!/bin/bash
# Check quickstart.md for network setup instructions
grep -n "network\|friend-network\|docker network" extras/speaker-recognition/quickstart.md
```

Length of output: 47
Let me verify the actual contents of these setup scripts and the test compose file:
---
🏁 Script executed:

```shell
#!/bin/bash
# Check if docker-compose-test.yml has any content and look for networks
cat extras/speaker-recognition/docker-compose-test.yml | tail -20
```

Length of output: 621

🏁 Script executed:

```shell
#!/bin/bash
# View setup.sh to see what it does
head -30 extras/speaker-recognition/setup.sh
```

Length of output: 586

🏁 Script executed:

```shell
#!/bin/bash
# View init.sh to see what it does
head -30 extras/speaker-recognition/init.sh
```

Length of output: 802

🏁 Script executed:

```shell
#!/bin/bash
# Check if there are any docker network create commands anywhere in the project
rg "docker network create" extras/speaker-recognition/
```

Length of output: 0

🏁 Script executed:

```shell
#!/bin/bash
# Check run-test.sh for network setup
grep -n "network\|friend-network" extras/speaker-recognition/run-test.sh
```

Length of output: 0
Network marked external but no creation mechanism or documentation.
The concern is valid. external: true at line 111 requires friend-network to pre-exist, but:
- No setup script creates it (setup.sh and init.sh verified—neither contains network creation)
- No documentation instructs users to pre-create it
- No `docker network create` found anywhere in the project
Fresh deployments will fail. The proposed env var solution is sound—it allows external management when needed while defaulting to auto-creation.
```diff
 networks:
   default:
     name: friend-network
-    external: true
+    external: ${EXTERNAL_NETWORK:-false}
```

🤖 Prompt for AI Agents
In extras/speaker-recognition/docker-compose.yml around lines 109–111, the
network is marked external: true which requires a pre-existing friend-network
but there is no creation step or docs; change the compose file to make the
network external conditional via environment variables (e.g.
FRIEND_NETWORK_EXTERNAL default false and FRIEND_NETWORK_NAME default
friend-network) so when FRIEND_NETWORK_EXTERNAL=true compose uses external: true
name: ${FRIEND_NETWORK_NAME} and otherwise lets Docker Compose create the
network (omit external). Also add a short note to README (or setup.sh)
describing the FRIEND_NETWORK_EXTERNAL/FRIEND_NETWORK_NAME variables and, if
desired, add an optional setup step or script that runs docker network create
${FRIEND_NETWORK_NAME} when users opt into external networks.
```python
    def detect_cuda_version(self) -> str:
        """Detect system CUDA version from nvidia-smi"""
        try:
            result = subprocess.run(
                ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
                capture_output=True,
                text=True,
                timeout=5
            )
            if result.returncode == 0:
                # Try to get CUDA version from nvidia-smi
                result = subprocess.run(
                    ["nvidia-smi"],
                    capture_output=True,
                    text=True,
                    timeout=5
                )
                if result.returncode == 0:
                    output = result.stdout
                    # Parse CUDA Version from nvidia-smi output
                    # Format: "CUDA Version: 12.6"
                    import re
                    match = re.search(r'CUDA Version:\s*(\d+)\.(\d+)', output)
                    if match:
                        major, minor = match.groups()
                        cuda_ver = f"{major}.{minor}"

                        # Map to available PyTorch CUDA versions
                        if cuda_ver >= "12.8":
                            return "cu128"
                        elif cuda_ver >= "12.6":
                            return "cu126"
                        elif cuda_ver >= "12.1":
                            return "cu121"
        except (subprocess.SubprocessError, FileNotFoundError):
            pass
        return "cu121"  # Default fallback
```
CUDA detection compares version strings; use numeric parsing.
String comparisons like "12.10" >= "12.8" are wrong lexicographically. Parse major/minor as ints and map reliably; also use shutil.which to skip when nvidia-smi isn’t present (addresses Ruff S607 intent).
```diff
-    def detect_cuda_version(self) -> str:
+    def detect_cuda_version(self) -> str:
         """Detect system CUDA version from nvidia-smi"""
-        try:
-            result = subprocess.run(
-                ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
-                capture_output=True,
-                text=True,
-                timeout=5
-            )
-            if result.returncode == 0:
-                # Try to get CUDA version from nvidia-smi
-                result = subprocess.run(
-                    ["nvidia-smi"],
-                    capture_output=True,
-                    text=True,
-                    timeout=5
-                )
-                if result.returncode == 0:
-                    output = result.stdout
-                    # Parse CUDA Version from nvidia-smi output
-                    # Format: "CUDA Version: 12.6"
-                    import re
-                    match = re.search(r'CUDA Version:\s*(\d+)\.(\d+)', output)
-                    if match:
-                        major, minor = match.groups()
-                        cuda_ver = f"{major}.{minor}"
-
-                        # Map to available PyTorch CUDA versions
-                        if cuda_ver >= "12.8":
-                            return "cu128"
-                        elif cuda_ver >= "12.6":
-                            return "cu126"
-                        elif cuda_ver >= "12.1":
-                            return "cu121"
-        except (subprocess.SubprocessError, FileNotFoundError):
-            pass
+        try:
+            import shutil, re
+            if not shutil.which("nvidia-smi"):
+                return "cu121"
+            result = subprocess.run(["nvidia-smi"], capture_output=True, text=True, timeout=5)
+            if result.returncode != 0:
+                return "cu121"
+            match = re.search(r'CUDA Version:\s*(\d+)\.(\d+)', result.stdout)
+            if not match:
+                return "cu121"
+            major, minor = (int(match.group(1)), int(match.group(2)))
+            # Map to available PyTorch CUDA versions
+            if (major, minor) >= (12, 8):
+                return "cu128"
+            if (major, minor) >= (12, 6):
+                return "cu126"
+            if (major, minor) >= (12, 1):
+                return "cu121"
+        except (subprocess.SubprocessError, FileNotFoundError, ValueError):
+            return "cu121"
         return "cu121"  # Default fallback
```

🧰 Tools
🪛 Ruff (0.14.1)
153-153: Starting a process with a partial executable path
(S607)
161-161: Starting a process with a partial executable path
(S607)
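To see why the tuple comparison matters, here is a minimal sketch of the numeric mapping in isolation (`map_cuda_tag` is a hypothetical helper name, independent of nvidia-smi):

```python
def map_cuda_tag(version: str) -> str:
    """Map a 'major.minor' CUDA version string to a PyTorch wheel tag numerically."""
    major, minor = (int(part) for part in version.split(".")[:2])
    if (major, minor) >= (12, 8):
        return "cu128"
    if (major, minor) >= (12, 6):
        return "cu126"
    return "cu121"

# Lexicographically "12.10" < "12.8", but numerically (12, 10) >= (12, 8):
print("12.10" >= "12.8")      # → False (the string-comparison bug)
print(map_cuda_tag("12.10"))  # → cu128
print(map_cuda_tag("12.6"))   # → cu126
print(map_cuda_tag("11.8"))   # → cu121
```

A hypothetical future driver reporting CUDA 12.10 is exactly the case the string comparison mishandles.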
```python
        openai_key = read_env_value(backend_env_path, 'OPENAI_API_KEY')
        if openai_key and openai_key != 'your-openai-key-here':
            cmd.extend(['--openai-api-key', openai_key])
            console.print("[blue][INFO][/blue] Found existing OPENAI_API_KEY from backend config, reusing")
```
🧩 Analysis chain
Placeholder mismatch may reuse placeholders as real keys.
Checks use 'your-openai-key-here' but other files/templates use 'your_openai_api_key_here' as well. Normalize to handle both.
```diff
-        if openai_key and openai_key != 'your-openai-key-here':
+        placeholders = {'your-openai-key-here', 'your_openai_api_key_here'}
+        if openai_key and openai_key not in placeholders:
             cmd.extend(['--openai-api-key', openai_key])
             console.print("[blue][INFO][/blue] Found existing OPENAI_API_KEY from backend config, reusing")
```

Similarly, align DEEPGRAM placeholder checks with both hyphen/underscore forms.
🏁 Script executed:

```shell
#!/usr/bin/env bash
# Scan repo for placeholder variants to unify checks
rg -nP "(your[-_][a-z]+[_-]api[_-]key[_-]here)" -C2
```

Length of output: 6160

🏁 Script executed:

```shell
#!/bin/bash
# Find all placeholder checks in wizard.py
rg -n "!=" wizard.py | grep -E "(your|placeholder|key)" | head -20
```

Length of output: 49
---
🏁 Script executed:

```shell
#!/bin/bash
# Find wizard.py location
fd -t f "wizard.py"
```

Length of output: 9
---
🏁 Script executed:

```shell
#!/bin/bash
# Find all placeholder comparisons in wizard.py
rg -n -B2 -A2 "(your[-_][a-z_-]*api[-_]key[-_]here|!=\s*['\"]your)" wizard.py
```

Length of output: 1174
Normalize all API key placeholder checks to handle both hyphen and underscore variants.
Verification confirms inconsistent placeholder formats across wizard.py:
- Line 157 (DEEPGRAM): only checks underscore variant `'your_deepgram_api_key_here'`
- Line 164 (HF_TOKEN): only checks underscore variant `'your_huggingface_token_here'`
- Line 178 (OPENAI): only checks hyphen variant `'your-openai-key-here'`
Templates use mixed formats (both your_key_here and your-key-here). The backends/advanced/init.py file correctly handles both variants for these keys. Apply the suggested normalization to all three keys:
```diff
     deepgram_key = read_env_value(backend_env_path, 'DEEPGRAM_API_KEY')
-    if deepgram_key and deepgram_key != 'your_deepgram_api_key_here':
+    deepgram_placeholders = {'your_deepgram_api_key_here', 'your-deepgram-api-key-here'}
+    if deepgram_key and deepgram_key not in deepgram_placeholders:
         cmd.extend(['--deepgram-api-key', deepgram_key])

     hf_token = read_env_value(speaker_env_path, 'HF_TOKEN')
-    if hf_token and hf_token != 'your_huggingface_token_here':
+    hf_placeholders = {'your_huggingface_token_here', 'your-huggingface-token-here'}
+    if hf_token and hf_token not in hf_placeholders:
         cmd.extend(['--hf-token', hf_token])

     openai_key = read_env_value(backend_env_path, 'OPENAI_API_KEY')
-    if openai_key and openai_key != 'your-openai-key-here':
+    openai_placeholders = {'your_openai_api_key_here', 'your-openai-key-here'}
+    if openai_key and openai_key not in openai_placeholders:
         cmd.extend(['--openai-api-key', openai_key])
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In wizard.py around lines 157, 164 and 177-181, the placeholder checks for
DEEPGRAM, HF_TOKEN and OPENAI API keys only test one variant (underscore or
hyphen) while templates use both; normalize by comparing a unified form:
transform the read key by replacing '-' with '_' (or vice versa) and then check
against the canonical placeholder like 'your_deepgram_api_key_here',
'your_huggingface_token_here' and 'your_openai_key_here' (or alternatively test
both the hyphen and underscore literal placeholders) so the code treats
'your-key-here' and 'your_key_here' as equivalent and will correctly skip
reusing placeholders.