docs: update competitive analysis for v3.2.0 and March 2026 landscape #559
carlos-alm merged 25 commits into main
Conversation
Re-rank codegraph from #8 (4.0) to #4 (4.5), reflecting v3.2.0 features: 41 CLI commands, 32 MCP tools, dataflow across all 11 languages, CFG, sequence diagrams, architecture boundaries, unified graph model. Add new competitors: GitNexus (#1, 18k stars) and DeusData/codebase-memory-mcp (#5, 793 stars in 25 days). Update star counts and feature status across all 85+ ranked projects. Mark 8 roadmap items as DONE. Flag stagnant projects. Update joern.md (3,021 stars, 75 contributors, 4 community MCP wrappers) and narsil-mcp.md (129 stars, SPA frontend, +36 security rules, development paused since Feb 25).
Claude finished @carlos-alm's task: Reviewing PR #559 (Competitive Analysis Update)
Greptile Summary: This documentation PR updates the competitive analysis to reflect codegraph v3.2.0 capabilities and the March 2026 landscape, adding two new entries (GitNexus at #1 and codebase-memory-mcp at #5), re-ranking codegraph from #8 to #4, updating star counts across 85+ projects, and marking 8 roadmap features as DONE. After a thorough review cycle that fixed ~15 distinct scoring/ranking issues (arithmetic sub-score mismatches, rank inversions, duplicate numbering, stale section headers), two issues remain in COMPETITIVE_ANALYSIS.md (see the flowchart below).
Confidence Score: 3/5
Important Files Changed
Flowchart
%%{init: {'theme': 'neutral'}}%%
flowchart TD
subgraph Tier1["Tier 1 — Top 8 (score ≥ 4.0)"]
G1["#1 GitNexus 4.5 ⭐NEW"]
G2["#2 joern 4.5 ↑ 3,021⭐"]
G3["#3 narsil-mcp 4.5 ↑ 129⭐"]
G4["#4 codegraph 4.5 ↑ from #8"]
G5["#5 codebase-memory-mcp 4.3 ⭐NEW"]
G6["#6 code-graph-rag 4.2"]
G7["#7 cpg 4.2"]
G8["#8 arbor 4.2 ↑ from #13"]
end
subgraph Changes["Key Changes in This PR"]
A["Add GitNexus (#1, 18k⭐)"]
B["Add codebase-memory-mcp (#5, 793⭐)"]
C["Re-rank codegraph #8→#4"]
D["Score-fix: arbor 3.7→4.2, moves #13→#8"]
E["8 roadmap items → DONE"]
F["Flag stagnant: glimpse, autodev-codebase,\nentrepeneur4lyf/code-graph-mcp"]
end
subgraph Issues["Remaining Issues"]
I1["⚠️ Duplicate 'vs glimpse' section\n(lines 248–251, rank shows #10 not #11)"]
I2["⚠️ Wrong role names in vs arbor section\n('bridge' → 'adapter', missing 'entry')"]
end
A & B & C & D & E & F --> Tier1
I1 & I2 --> COMPETITIVE_ANALYSIS["COMPETITIVE_ANALYSIS.md"]
Reviews (14): Last reviewed commit: "fix: add missing "vs arbor" comparison s…"
| | 10 | 3.8 | [anrgct/autodev-codebase](https://github.com/anrgct/autodev-codebase) | 111 | TypeScript | None | 40+ languages, 7 embedding providers, Cytoscape.js visualization, LLM reranking | | ||
| | 1 | 4.7 | [abhigyanpatwari/GitNexus](https://github.com/abhigyanpatwari/GitNexus) | 18,453 | TS/JS | PolyForm NC | Zero-server knowledge graph engine with Graph RAG Agent, CLI + MCP + Web UI, tree-sitter native + WASM, LadybugDB (custom graph DB), multi-editor support (Claude Code hooks, Cursor, Codex, Windsurf, OpenCode), auto-generated AGENTS.md/CLAUDE.md. **Non-commercial license. Viral growth (18k stars in ~8 months)** | | ||
| | 2 | 4.5 | [joernio/joern](https://github.com/joernio/joern) | 3,021 | Scala | Apache-2.0 | Full CPG analysis platform for vulnerability discovery, Scala query DSL, multi-language, daily releases (v4.0.508), 75 contributors | | ||
| | 3 | 4.5 | [postrv/narsil-mcp](https://github.com/postrv/narsil-mcp) | 129 | Rust | Apache-2.0 | 90 MCP tools, 32 languages, taint analysis, SBOM, dead code, neural semantic search, single ~30MB binary, SPA web frontend (v1.6.1) | |
SPA version attribution mismatch
The ranking table summary says SPA web frontend (v1.6.1), implying the SPA was delivered in v1.6.1. However, the detailed "vs narsil-mcp" section further down in the same file (lines 222–223) correctly attributes it to v1.6.0:
"SPA web frontend: Full web UI with file tree sidebar, syntax-highlighted code viewer, dashboard, per-repo overview, CFG visualization (added v1.6.0)"
narsil-mcp.md is also consistent — the SPA appears as "(v1.6.0)" in both the Visualization comparison table (line 163) and the Ecosystem table (line 273). The current narsil release is v1.6.1, but the SPA was introduced in v1.6.0.
This creates a verifiable factual inconsistency within COMPETITIVE_ANALYSIS.md itself and across the two sibling documents.
| | 3 | 4.5 | [postrv/narsil-mcp](https://github.com/postrv/narsil-mcp) | 129 | Rust | Apache-2.0 | 90 MCP tools, 32 languages, taint analysis, SBOM, dead code, neural semantic search, single ~30MB binary, SPA web frontend (v1.6.1) | | |
| | 3 | 4.5 | [postrv/narsil-mcp](https://github.com/postrv/narsil-mcp) | 129 | Rust | Apache-2.0 | 90 MCP tools, 32 languages, taint analysis, SBOM, dead code, neural semantic search, single ~30MB binary, SPA web frontend (added v1.6.0, current v1.6.1) | |
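Cross-document drift like this is easy to lint for. Below is a minimal sketch, assuming the file paths named in this thread and a guessed regex for the docs' phrasing (none of this is the repo's actual tooling): it collects every version string attached to the SPA-frontend mention in each file and flags disagreement.

```python
# Sketch: flag docs that attribute the SPA frontend to different versions.
# Paths are the ones named in this thread; the regex is a guess at phrasing.
import re
from pathlib import Path

VERSION = re.compile(r"SPA web frontend[^|]*?v(\d+\.\d+\.\d+)")

def spa_versions(paths):
    """Map each doc to the set of versions attached to its SPA mention."""
    return {p: set(VERSION.findall(Path(p).read_text(encoding="utf-8")))
            for p in paths}

versions = spa_versions([
    "generated/competitive/COMPETITIVE_ANALYSIS.md",
    "generated/competitive/narsil-mcp.md",
])
# Pre-fix this finds {'1.6.1'} in one file and {'1.6.0'} in the other.
if len({frozenset(v) for v in versions.values()}) > 1:
    print("SPA version attribution differs across docs:", versions)
```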
Fixed in commit d3f65d2. Line 18 now reads "SPA web frontend (added v1.6.0, current v1.6.1)" — distinguishing the version when the feature was introduced (v1.6.0) from the current release (v1.6.1), consistent with the detailed narsil-mcp.md entry.
generated/competitive/joern.md (Outdated)
| | Language support | 11 | 16 (incl. binary/bytecode) | Joern | | ||
| | Query expressiveness | Fixed commands | Arbitrary graph traversals | Joern | | ||
| | Community & maturity | New | 7 years, IEEE award, 2,968 stars | Joern | | ||
| | Community & maturity | 32 stars, growing | 7 years, IEEE award, 3,021 stars, 75 contributors | Joern | |
Stale codegraph star count not updated
The Key Metrics Summary hardcodes codegraph's star count as 32 stars, growing, but the ecosystem comparison table in the same document (line 241) uses Growing without a specific number — the pattern used everywhere else in this PR's updates. The value 32 appears to be leftover from an earlier version and was not refreshed as part of this PR's stated goal of updating star counts.
This is also internally inconsistent within joern.md: the table at line 241 avoids pinning a number, but the summary row below pins it at 32.
| | Community & maturity | 32 stars, growing | 7 years, IEEE award, 3,021 stars, 75 contributors | Joern | | |
| | Community & maturity | Growing | 7 years, IEEE award, 3,021 stars, 75 contributors | Joern | |
Fixed in commit 63abc82. The "32 stars, growing" value hardcoded a stale star count that would go out of date immediately. Updated to "Growing" to match the consistent style used in other comparison tables throughout the competitive analysis.
…view: Line 18 incorrectly stated "v1.6.1" as the version when the SPA feature was introduced. The SPA frontend was added in v1.6.0; v1.6.1 is the current release. Updated to "added v1.6.0, current v1.6.1" to match the detailed narsil-mcp.md entry.
The "32 stars, growing" value in the Community & maturity row hardcodes a stale star count. Other comparison tables use "Growing" consistently for codegraph's community status. Updated to match.
P1 fixes applied: Both Greptile P1 factual inconsistencies have been resolved:
1. SPA version attribution (fixed in commit d3f65d2)
2. Stale star count in joern comparison table (fixed in commit 63abc82)
| | **8** | **4.0** | **[@optave/codegraph](https://github.com/optave/codegraph)** | — | **JS/Rust** | **Apache-2.0** | **Sub-second incremental rebuilds, dual engine (native Rust + WASM), 11 languages, 18-tool MCP, qualified call resolution, `context`/`explain`/`where` AI-optimized commands, structure/hotspot analysis, node role classification (entry/core/utility/adapter/dead/leaf), dead code detection, zero-cost core + optional LLM enhancement** | | ||
| | 9 | 3.9 | [harshkedia177/axon](https://github.com/harshkedia177/axon) | 421 | Python | MIT | 11-phase pipeline, KuzuDB, Leiden community detection, dead code, change coupling, 7 MCP tools | | ||
| | 10 | 3.8 | [anrgct/autodev-codebase](https://github.com/anrgct/autodev-codebase) | 111 | TypeScript | None | 40+ languages, 7 embedding providers, Cytoscape.js visualization, LLM reranking | | ||
| | 1 | 4.7 | [abhigyanpatwari/GitNexus](https://github.com/abhigyanpatwari/GitNexus) | 18,453 | TS/JS | PolyForm NC | Zero-server knowledge graph engine with Graph RAG Agent, CLI + MCP + Web UI, tree-sitter native + WASM, LadybugDB (custom graph DB), multi-editor support (Claude Code hooks, Cursor, Codex, Windsurf, OpenCode), auto-generated AGENTS.md/CLAUDE.md. **Non-commercial license. Viral growth (18k stars in ~8 months)** | |
GitNexus overall score doesn't match breakdown sub-scores
The ranking table lists GitNexus with an overall score of 4.7, but the scoring breakdown table (line 135) shows:
| Features | Analysis Depth | Deploy Simplicity | Lang Support | Code Quality | Community |
|---|---|---|---|---|---|
| 5 | 5 | 4 | 4 | 4 | 5 |
Simple average: (5+5+4+4+4+5) / 6 = 4.5, not 4.7. Every other entry in the table produces a score consistent with the arithmetic mean of its six sub-scores (e.g., joern = 27/6 = 4.5, narsil = 27/6 = 4.5, codegraph = 27/6 = 4.5). GitNexus is the only new entry, making this a discrepancy introduced in this PR.
If the intent is to keep GitNexus ranked #1, the overall score should either be corrected to 4.5, or at least one sub-score should be raised to 5 and the methodology for the weighted average documented — since 4.7 is not achievable with integer 1–5 scores in a simple 6-dimension average.
| | 1 | 4.7 | [abhigyanpatwari/GitNexus](https://github.com/abhigyanpatwari/GitNexus) | 18,453 | TS/JS | PolyForm NC | Zero-server knowledge graph engine with Graph RAG Agent, CLI + MCP + Web UI, tree-sitter native + WASM, LadybugDB (custom graph DB), multi-editor support (Claude Code hooks, Cursor, Codex, Windsurf, OpenCode), auto-generated AGENTS.md/CLAUDE.md. **Non-commercial license. Viral growth (18k stars in ~8 months)** | | |
| | 1 | 4.5 | [abhigyanpatwari/GitNexus](https://github.com/abhigyanpatwari/GitNexus) | 18,453 | TS/JS | PolyForm NC | Zero-server knowledge graph engine with Graph RAG Agent, CLI + MCP + Web UI, tree-sitter native + WASM, LadybugDB (custom graph DB), multi-editor support (Claude Code hooks, Cursor, Codex, Windsurf, OpenCode), auto-generated AGENTS.md/CLAUDE.md. **Non-commercial license. Viral growth (18k stars in ~8 months)** | |
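For reference, the check being applied here is mechanical. A minimal sketch, assuming the simple six-dimension integer mean the table otherwise follows (the entry data is copied from the row quoted above; nothing here is repo tooling):

```python
# Sketch: flag entries whose listed overall score is not the rounded
# arithmetic mean of their six 1-5 sub-scores.

def mean_score(subscores):
    """Mean of the six dimension scores, rounded to one decimal."""
    assert len(subscores) == 6 and all(1 <= s <= 5 for s in subscores)
    return round(sum(subscores) / 6, 1)

# (sub-scores, listed overall) -- GitNexus row as quoted in this thread
entries = {"GitNexus": ([5, 5, 4, 4, 4, 5], 4.7)}

for name, (subs, listed) in entries.items():
    mean = mean_score(subs)
    if mean != listed:
        print(f"{name}: listed {listed}, sub-score mean is {mean}")
# -> GitNexus: listed 4.7, sub-score mean is 4.5
```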
Fixed in commit a2a2a32. The overall score for GitNexus has been corrected from 4.7 to 4.5, which is the correct arithmetic mean of its six sub-scores: (5+5+4+4+4+5)/6 = 4.5, consistent with all other entries in the table.
Fixed — GitNexus overall score was already corrected to 4.5 in the previous round (matches sub-score average of 27/6 = 4.5). Verified it remains consistent.
| |---|-------|---------|-------|------|---------|---------| | ||
| | 37 | 2.9 | [rahulvgmail/CodeInteliMCP](https://github.com/rahulvgmail/CodeInteliMCP) | 8 | Python | None | DuckDB + ChromaDB (zero Docker), multi-repo, lightweight embedded DBs | |
Duplicate rank number at tier boundary
The Tier 1 table ends at rank #37 (cloud-property-graph, score 3.0) and the Tier 2 table immediately begins at rank #37 (CodeInteliMCP, score 2.9) — the same number is used for two different projects. With the addition of GitNexus (#1) and codebase-memory-mcp (#6) in this PR, Tier 1 now contains 37 entries, but the Tier 2 numbering was not incremented to start at 38.
The first Tier 2 entry should be renumbered to #38, and all subsequent Tier 2 entries should be incremented by one accordingly.
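Duplicate numbering at tier boundaries is a symptom of hand-maintained ranks; renumbering mechanically avoids the whole class of error. A rough sketch, assuming rows shaped like `| 37 | 2.9 | [name](url) | … |` (the regex is an assumption about the markdown, not the repo's tooling):

```python
# Sketch: rewrite the leading rank cell of each table row with a running
# counter, so Tier 2 continues wherever Tier 1 ends.
import re

RANK_ROW = re.compile(r"^\|\s*(\d+)\s*\|")

def renumber(lines, start=1):
    rank = start
    out = []
    for line in lines:
        if RANK_ROW.match(line):
            line = RANK_ROW.sub(f"| {rank} |", line, count=1)
            rank += 1
        out.append(line)
    return out

tier2 = ["| 37 | 2.9 | [rahulvgmail/CodeInteliMCP](...) | ... |"]
print(renumber(tier2, start=38)[0])  # -> "| 38 | 2.9 | ..."
```

Feeding both tier tables through one counter (Tier 1 with `start=1`, Tier 2 picking up the next value) keeps the boundary consistent by construction.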
| - **Architecture boundary enforcement**: *(Gap closed: we now have `boundaries` command with onion/hexagonal/layered/clean presets)* | ||
| - **Tiered indexing**: fast/balanced/full modes for different use cases — we have one mode | ||
| ### vs jelly (#16, 417 stars) |
Star count in section header diverges from ranking table
The ranking table (line 31) was updated to 423 stars for cs-au-dk/jelly, but the section header here still reads 417 stars. The PR description states "Update star counts and feature status across all 85+ ranked projects", so this looks like the ranking table update was not mirrored in the "Where Codegraph Loses" header.
| ### vs jelly (#16, 417 stars) | |
| ### vs jelly (#16, 423 stars) |
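Since several `### vs …` headers drifted from the ranking table in this PR, a lint-style check may be worth keeping next to the docs. A sketch under assumptions (the header regex is a guess at the markdown shape; `table` would be parsed from the ranking table):

```python
# Sketch: verify each "### vs <name> (#rank, N stars ...)" header against
# the (rank, stars) pairs taken from the ranking table.
import re

HEADER = re.compile(
    r"^### vs (?P<name>[\w./-]+) \(#(?P<rank>\d+), (?P<stars>[\d,]+) stars",
    re.MULTILINE,
)

def check_headers(doc, table):
    issues = []
    for m in HEADER.finditer(doc):
        name = m.group("name")
        rank = int(m.group("rank"))
        stars = int(m.group("stars").replace(",", ""))
        if name in table and (rank, stars) != table[name]:
            issues.append(f"{name}: header (#{rank}, {stars}) vs table {table[name]}")
    return issues

print(check_headers("### vs jelly (#16, 417 stars)\n", {"jelly": (16, 423)}))
# -> ['jelly: header (#16, 417) vs table (16, 423)']
```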
Fixed — jelly section header was already updated to 423 stars in the previous round, matching the ranking table. Verified consistency.
- GitNexus overall score corrected from 4.7 to 4.5 to match the arithmetic mean of its six sub-scores: (5+5+4+4+4+5)/6 = 4.5
- Tier 2 renumbered starting at #38 (was duplicating #37 with Tier 1); also resolves the pre-existing duplicate #43 (Bikach/ChrisRoyse now #44/#45), with all subsequent entries incremented accordingly
- jelly section header updated from 417 to 423 stars to match the ranking table
…' into docs/competitive-analysis-update
| - **Vulnerability exposure analysis**: library usage pattern matching specific to the JS/TS ecosystem | ||
| ### vs aider (#38, 41,664 stars) | ||
| ### vs aider (#38, 42,198 stars — now Aider-AI/aider) |
Stale rank number in "vs aider" section header
The section header on line 278 says #38 for aider, but after the Tier 2 renumbering fix (commit a2a2a32, which made Tier 2 start at #38 and pushed all subsequent entries up by one), aider is now at #39 in the ranking table (line 59):
| 38 | 2.9 | [rahulvgmail/CodeInteliMCP] | ← slot #38 is CodeInteliMCP
| 39 | 2.8 | [Aider-AI/aider] | ← aider is at #39
The vs joern, vs narsil-mcp, and other section headers were all correctly updated to reflect their new ranks. Aider's header was missed.
| ### vs aider (#38, 42,198 stars — now Aider-AI/aider) | |
| ### vs aider (#39, 42,198 stars — now Aider-AI/aider) |
generated/competitive/joern.md (Outdated)
| |---------|-----------|-------|---------------| | ||
| | **GitHub stars** | New project (growing) | ~2,968 | **Joern** | | ||
| | **Contributors** | Small team | 64 | **Joern** | | ||
| | **GitHub stars** | 32 (growing) | ~3,021 | **Joern** | |
Stale star count in Ecosystem feature-comparison table
This row in the Ecosystem & Community feature-by-feature table still pins codegraph's star count at 32 (growing). The previous review flagged the same issue in the Key Metrics Summary, which was fixed in commit 63abc82 to use Growing. However, this table entry was not updated as part of that fix, leaving the two tables internally inconsistent within joern.md:
- Feature-by-Feature (line 241): `32 (growing)` ← stale pinned number
- Key Metrics Summary (line 337): `Growing` ← correctly unpinned
| | **GitHub stars** | 32 (growing) | ~3,021 | **Joern** | | |
| | **GitHub stars** | Growing | ~3,021 | **Joern** | |
Fixed — changed codegraph star count from '32 (growing)' to 'Growing' in the Ecosystem feature-comparison table, matching the Key Metrics Summary table format on line 337.
| | 7 | cpg | 5 | 5 | 2 | 5 | 5 | 3 | | ||
| | 8 | CKB | 5 | 5 | 4 | 3 | 4 | 3 | | ||
| | 9 | axon | 5 | 5 | 4 | 2 | 4 | 3 | | ||
| | 10 | glimpse | 4 | 4 | 5 | 3 | 5 | 4 | |
glimpse sub-scores don't match updated overall ranking score
This PR downgrades glimpse from 4.2 → 3.8 in the ranking table (line 25) to reflect "Stagnant since Jan 2026", but the scoring breakdown row was not updated. The current sub-scores still average to the old 4.2 value:
| Features | Analysis Depth | Deploy Simplicity | Lang Support | Code Quality | Community |
|---|---|---|---|---|---|
| 4 | 4 | 5 | 3 | 5 | 4 |
Sum = 25 / 6 = 4.17, which is inconsistent with the new overall score of 3.8 — the breakdown still describes a 4.2-tier tool, not a 3.8-tier one.
Every other entry in this table produces an overall score consistent with its arithmetic mean (e.g. the GitNexus fix from earlier on this PR corrected exactly this pattern). The stagnation signal should be reflected in the Community sub-score at minimum (currently 4 — arguably should drop to 2 or 1 for an inactive project), and/or Code Quality to bring the average in line with 3.8.
The same pattern affects autodev-codebase (line 157), which the PR also explicitly marks as stagnant. Its breakdown scores (5+3+3+5+3+3 = 22/6 = 3.67) don't match its overall ranking of 3.4.
Fixed — adjusted sub-scores to reflect stagnation in Community dimension: glimpse Community 4→2 (avg 3.83≈3.8, matching ranking), autodev-codebase Community 3→1 (avg 3.33, ranking updated 3.4→3.3). Both now arithmetically consistent.
…gnant projects: glimpse Community 4→2 (stagnant since Jan 2026), avg now 3.83 ≈ 3.8, matching ranking; autodev-codebase Community 3→1 (stagnant since Jan 2026), avg now 3.33, ranking updated 3.4→3.3.
| | 20 | 3.5 | [colbymchenry/codegraph](https://github.com/colbymchenry/codegraph) | 165 | TypeScript | MIT | tree-sitter + SQLite + MCP, Claude Code token reduction benchmarks, npx installer | | ||
| | 15 | 3.7 | [entrepeneur4lyf/code-graph-mcp](https://github.com/entrepeneur4lyf/code-graph-mcp) | 83 | Python | MIT | ast-grep for 25+ languages, complexity metrics, code smells, circular dependency detection. **Stagnant since Jul 2025** | | ||
| | 16 | 3.7 | [cs-au-dk/jelly](https://github.com/cs-au-dk/jelly) | 423 | TypeScript | BSD-3 | Academic-grade JS/TS points-to analysis, call graphs, vulnerability exposure, 5 published papers | | ||
| | 17 | 3.6 | [colbymchenry/codegraph](https://github.com/colbymchenry/codegraph) | 308 | TypeScript | MIT | tree-sitter + SQLite + MCP, Claude Code token reduction benchmarks, npx installer. **Nearly doubled since Feb — naming competitor** | |
Sub-score mismatch on two updated entries
This PR explicitly updated Community sub-scores for colbymchenry/codegraph (3→4) and axon (2→3) to reflect their growth, but the corresponding overall ranking scores were not updated to match the new averages. This is the same pattern as the glimpse/autodev-codebase fix applied earlier in this PR.
colbymchenry/codegraph (scoring breakdown line 151):
- Sub-scores: 4 + 3 + 5 + 3 + 3 + 4 = 22 → 22/6 = 3.67 ≈ 3.7
- Ranking table (line 32) still shows: 3.6
axon (scoring breakdown line 143):
- Sub-scores: 5 + 5 + 4 + 2 + 4 + 3 = 23 → 23/6 = 3.83 ≈ 3.8
- Ranking table (line 24) still shows: 4.0
Every other entry whose sub-scores were touched in this PR had its overall score corrected to the arithmetic mean (glimpse: 23/6 = 3.83 → 3.8, autodev-codebase: 20/6 = 3.33 → 3.3). These two entries were missed.
Fixed — updated colbymchenry/codegraph overall from 3.6→3.7 and axon from 4.0→3.8 to match their sub-score arithmetic means.
| | 23 | 3.3 | [anrgct/autodev-codebase](https://github.com/anrgct/autodev-codebase) | 111 | TypeScript | None | 40+ languages, 7 embedding providers, Cytoscape.js visualization, LLM reranking. **Stagnant since Jan 2026** | | ||
| | 24 | 3.4 | [Durafen/Claude-code-memory](https://github.com/Durafen/Claude-code-memory) | 72 | Python | None | Memory Guard quality gate, persistent codebase memory, Voyage AI + Qdrant | | ||
| | 25 | 3.3 | [NeuralRays/codexray](https://github.com/NeuralRays/codexray) | 2 | TypeScript | MIT | 16 MCP tools, TF-IDF semantic search (~50MB), dead code, complexity, path finding | |
Ranking inversion: rank #23 (3.3) scored lower than rank #24 (3.4)
The table is ordered by descending score, but after the stagnation-driven downgrade of autodev-codebase from 3.8 → 3.3, its position was not adjusted. autodev-codebase now sits at rank #23 with score 3.3, while Claude-code-memory at rank #24 has a higher score of 3.4. A project with score 3.4 must be ranked above one with score 3.3.
Current (broken):
| 23 | 3.3 | anrgct/autodev-codebase |
| 24 | 3.4 | Durafen/Claude-code-memory |
Expected (fixed):
| 23 | 3.4 | Durafen/Claude-code-memory |
| 24 | 3.3 | anrgct/autodev-codebase |
| 25 | 3.3 | NeuralRays/codexray |
The scoring breakdown table at rows 23–24 should be reordered to match. This inversion was introduced when autodev-codebase's score was correctly reduced for stagnation but its table position was not updated.
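Inversions of this kind are mechanically detectable: in a table that claims descending-score order, no row may outscore the row above it. A small sketch using the three rows above as sample data:

```python
# Sketch: report any row whose score exceeds the row ranked above it.

def find_inversions(rows):
    """rows: (rank, score, project) tuples in table order."""
    problems = []
    for (r1, s1, p1), (r2, s2, p2) in zip(rows, rows[1:]):
        if s2 > s1:
            problems.append(f"#{r2} {p2} ({s2}) outscores #{r1} {p1} ({s1})")
    return problems

rows = [
    (23, 3.3, "anrgct/autodev-codebase"),
    (24, 3.4, "Durafen/Claude-code-memory"),
    (25, 3.3, "NeuralRays/codexray"),
]
print(find_inversions(rows))
# -> ['#24 Durafen/Claude-code-memory (3.4) outscores #23 anrgct/autodev-codebase (3.3)']
```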
| # Competitive Analysis — Code Graph / Code Intelligence Tools | ||
| **Date:** 2026-02-25 | ||
| **Scope:** 137+ code analysis tools evaluated, 82+ ranked against `@optave/codegraph` | ||
| **Date:** 2026-03-21 (updated from 2026-02-25) |
PR description says "7 roadmap items" but 8 are marked DONE
The PR description states "Mark 7 roadmap items as DONE: path, complexity, visualization, co-change, communities, flow, dataflow, boundaries" — but counting the items in that list yields 8: path, complexity, visualization, co-change, communities, flow, dataflow, boundaries. The diff confirms 8 roadmap items were changed from TODO → DONE in this PR (the 6 Tier 3 items plus path in Tier 1 and complexity / formal code health metrics in Tier 2). The PR description count is off by one; consider updating it to "8 roadmap items" to keep the summary accurate.
Fixed — updated PR description from '7 roadmap items' to '8 roadmap items' to match the actual count: path, complexity, visualization, co-change, communities, flow, dataflow, boundaries.
| | 1 | 4.5 | [abhigyanpatwari/GitNexus](https://github.com/abhigyanpatwari/GitNexus) | 18,453 | TS/JS | PolyForm NC | Zero-server knowledge graph engine with Graph RAG Agent, CLI + MCP + Web UI, tree-sitter native + WASM, LadybugDB (custom graph DB), multi-editor support (Claude Code hooks, Cursor, Codex, Windsurf, OpenCode), auto-generated AGENTS.md/CLAUDE.md. **Non-commercial license. Viral growth (18k stars in ~8 months)** | | ||
| | 2 | 4.5 | [joernio/joern](https://github.com/joernio/joern) | 3,021 | Scala | Apache-2.0 | Full CPG analysis platform for vulnerability discovery, Scala query DSL, multi-language, daily releases (v4.0.508), 75 contributors | | ||
| | 3 | 4.5 | [postrv/narsil-mcp](https://github.com/postrv/narsil-mcp) | 129 | Rust | Apache-2.0 | 90 MCP tools, 32 languages, taint analysis, SBOM, dead code, neural semantic search, single ~30MB binary, SPA web frontend (added v1.6.0, current v1.6.1) | | ||
| | 4 | 4.5 | [vitali87/code-graph-rag](https://github.com/vitali87/code-graph-rag) | 2,168 | Python | MIT | Graph RAG with Memgraph, multi-provider AI, code editing, semantic search, MCP server (added 2026) | |
Pre-existing sub-score/overall-score mismatches on two untouched entries
The pattern of reconciling sub-score averages with overall ranking scores (applied this PR to GitNexus, glimpse, autodev-codebase, axon, colbymchenry/codegraph) leaves two Tier 1 entries still inconsistent. Both were not changed by this PR, but since the PR is actively editing the scoring tables it's worth catching them here:
code-graph-rag (#4, ranking line 19, breakdown line 138):
| Features | Analysis Depth | Deploy Simplicity | Lang Support | Code Quality | Community |
|---|---|---|---|---|---|
| 5 | 4 | 3 | 4 | 4 | 5 |
Sum = 25 → 25/6 = 4.17 ≈ 4.2, but the ranking table shows 4.5 (off by 0.33).
arbor (#13, ranking line 28, breakdown line 147):
| Features | Analysis Depth | Deploy Simplicity | Lang Support | Code Quality | Community |
|---|---|---|---|---|---|
| 4 | 4 | 5 | 4 | 5 | 3 |
Sum = 25 → 25/6 = 4.17 ≈ 4.2, but the ranking table shows 3.7 (off by 0.47 — the largest gap in the table).
Every other entry in the scoring breakdown now produces an overall score consistent with its arithmetic mean (all the fixes from this review round confirm that is the intended methodology). For arbor specifically the sub-scores describe a ~4.2-tier tool while the ranking places it at 3.7 — a half-point discrepancy that understates it relative to neighbours at 3.7 with sub-score averages of 3.67.
Fixed — corrected both scores to match their sub-score arithmetic means: code-graph-rag from 4.5 to 4.2, arbor from 3.7 to 4.2. Note: full re-ranking/reordering of the table is separate work since it would affect position numbering throughout the document.
…' into fix/review-559
| | 2 | 4.5 | [joernio/joern](https://github.com/joernio/joern) | 3,021 | Scala | Apache-2.0 | Full CPG analysis platform for vulnerability discovery, Scala query DSL, multi-language, daily releases (v4.0.508), 75 contributors | | ||
| | 3 | 4.5 | [postrv/narsil-mcp](https://github.com/postrv/narsil-mcp) | 129 | Rust | Apache-2.0 | 90 MCP tools, 32 languages, taint analysis, SBOM, dead code, neural semantic search, single ~30MB binary, SPA web frontend (added v1.6.0, current v1.6.1) | | ||
| | 4 | 4.2 | [vitali87/code-graph-rag](https://github.com/vitali87/code-graph-rag) | 2,168 | Python | MIT | Graph RAG with Memgraph, multi-provider AI, code editing, semantic search, MCP server (added 2026) | | ||
| | **5** | **4.5** | **[@optave/codegraph](https://github.com/optave/codegraph)** | **32** | **JS/Rust** | **Apache-2.0** | **Sub-second incremental rebuilds (3-tier change detection), dual engine (native Rust + WASM), 11 languages, 32-tool MCP, 41 CLI commands, qualified call resolution with receiver type tracking, `context`/`audit`/`where` AI-optimized commands, dataflow + CFG + stored AST across all languages, sequence diagrams, structure/hotspot analysis, node role classification, dead code/export detection, architecture boundary enforcement, unified graph model with qualified names/scope/visibility, zero-cost core + optional LLM enhancement** | |
Ranking inversion: codegraph (#5, 4.5) placed below code-graph-rag (#4, 4.2)
The table header states entries are "Ranked by weighted score across 6 dimensions," but codegraph at rank #5 (score 4.5) is placed below vitali87/code-graph-rag at rank #4 (score 4.2). A project with score 4.5 must rank above a project with score 4.2.
| # | Score | Project |
|---|---|---|
| 4 | 4.2 | vitali87/code-graph-rag |
| 5 | 4.5 | @optave/codegraph ← should be ranked higher |
The PR description says "re-rank codegraph from #8 (4.0) to #5 (4.5)," but with the updated score of 4.5, codegraph should be ranked at or above code-graph-rag — which means at least #4, if not tied with the other three 4.5-scoring entries at #1–3 (GitNexus, joern, narsil-mcp). Either codegraph's rank should move up to reflect its score, or the score should be adjusted to justify the #5 slot.
| | 23 | autodev-codebase | 5 | 3 | 3 | 5 | 3 | 1 | | ||
| | 24 | Claude-code-memory | 4 | 3 | 3 | 3 | 4 | 3 | |
Scoring breakdown row numbers out of sync with ranking table after #23/#24 swap
The previous round fixed the ranking table to swap Claude-code-memory and autodev-codebase (Claude-code-memory → #23, autodev-codebase → #24) to correct a ranking inversion. However, the scoring breakdown table was not updated — it still lists them in the pre-swap order:
| Row (breakdown) | Project | Score |
|---|---|---|
| #23 | autodev-codebase | 3.3 |
| #24 | Claude-code-memory | 3.4 |

But the ranking table now shows:

| Row (ranking) | Project | Score |
|---|---|---|
| #23 | Claude-code-memory | 3.4 |
| #24 | autodev-codebase | 3.3 |
The row # numbers in the two tables no longer refer to the same project. The breakdown table rows #23 and #24 need to be swapped to match the ranking table's corrected order.
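This, too, is checkable: the same rank number must name the same project in both tables. A sketch with the two rows above hand-copied as sample data:

```python
# Sketch: flag ranks whose project differs between the two tables.

ranking   = {23: "Claude-code-memory", 24: "autodev-codebase"}
breakdown = {23: "autodev-codebase",   24: "Claude-code-memory"}

for rank in sorted(ranking):
    if breakdown.get(rank) != ranking[rank]:
        print(f"rank #{rank}: ranking table has {ranking[rank]!r}, "
              f"breakdown table has {breakdown.get(rank)!r}")
```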
| | 3 | 4.5 | [postrv/narsil-mcp](https://github.com/postrv/narsil-mcp) | 129 | Rust | Apache-2.0 | 90 MCP tools, 32 languages, taint analysis, SBOM, dead code, neural semantic search, single ~30MB binary, SPA web frontend (added v1.6.0, current v1.6.1) | | ||
| | **4** | **4.5** | **[@optave/codegraph](https://github.com/optave/codegraph)** | **32** | **JS/Rust** | **Apache-2.0** | **Sub-second incremental rebuilds (3-tier change detection), dual engine (native Rust + WASM), 11 languages, 32-tool MCP, 41 CLI commands, qualified call resolution with receiver type tracking, `context`/`audit`/`where` AI-optimized commands, dataflow + CFG + stored AST across all languages, sequence diagrams, structure/hotspot analysis, node role classification, dead code/export detection, architecture boundary enforcement, unified graph model with qualified names/scope/visibility, zero-cost core + optional LLM enhancement** | | ||
| | 5 | 4.2 | [vitali87/code-graph-rag](https://github.com/vitali87/code-graph-rag) | 2,168 | Python | MIT | Graph RAG with Memgraph, multi-provider AI, code editing, semantic search, MCP server (added 2026) | | ||
| | 6 | 4.3 | [DeusData/codebase-memory-mcp](https://github.com/DeusData/codebase-memory-mcp) | 793 | C | MIT | Single static C binary, 64 languages (tree-sitter), 14 MCP tools, Cypher-like query language, persistent SQLite knowledge graph, 10-agent auto-installer, 3D graph visualization, HTTP route analysis. **25 days old — fastest-growing new entrant** | |
Ranking inversion: codebase-memory-mcp (4.3) placed below code-graph-rag (4.2)
codebase-memory-mcp is a new entry added at #6 with score 4.3, but code-graph-rag at #5 has score 4.2. Since 4.3 > 4.2, codebase-memory-mcp should rank above code-graph-rag — this breaks the stated ordering principle of "ranked by weighted score."
Verify the breakdown arithmetic to confirm:
- codebase-memory-mcp (#6 row in breakdown, line 140): 4+4+5+5+4+4 = 26 → 26/6 = 4.33 ≈ 4.3
- code-graph-rag (#5 row in breakdown, line 139): 5+4+3+4+4+5 = 25 → 25/6 = 4.17 ≈ 4.2
The "Where Codegraph Loses" section header at line 234 (### vs codebase-memory-mcp (#6, 793 stars — NEW)) will also need its rank updated once the position is corrected.
| | 11 | 3.8 | [ShiftLeftSecurity/codepropertygraph](https://github.com/ShiftLeftSecurity/codepropertygraph) | 564 | Scala | Apache-2.0 | CPG specification + Tinkergraph library, Scala query DSL, protobuf serialization (Joern foundation) | | ||
| | 12 | 3.8 | [Jakedismo/codegraph-rust](https://github.com/Jakedismo/codegraph-rust) | 142 | Rust | None | 100% Rust GraphRAG, SurrealDB, LSP-powered dataflow analysis, architecture boundary enforcement | | ||
| | 13 | 3.7 | [Anandb71/arbor](https://github.com/Anandb71/arbor) | 85 | Rust | MIT | Native GUI, confidence scoring, architectural role classification, fuzzy search, MCP | | ||
| | 13 | 4.2 | [Anandb71/arbor](https://github.com/Anandb71/arbor) | 85 | Rust | MIT | Native GUI, confidence scoring, architectural role classification, fuzzy search, MCP | |
Ranking inversion: arbor (4.2) placed below 3.8-scoring entries (#9–#12)
A previous round of fixes (per the thread) corrected arbor's overall score from 3.7 → 4.2 to match its sub-score average (4+4+5+4+5+3 = 25 → 25/6 = 4.17 ≈ 4.2). However, arbor's position was not updated — it remains at #13, sitting below four entries that all score 3.8 (axon, glimpse, codepropertygraph, codegraph-rust at #9–#12).
With a score of 4.2, arbor should rank alongside the other 4.2-scoring entries: code-graph-rag (#5) and cpg (#7). The current table has entries ranked:
#9 3.8 axon
#10 3.8 glimpse
#11 3.8 codepropertygraph
#12 3.8 codegraph-rust
#13 4.2 arbor ← should be near #5–#7
The scoring breakdown table row #13 and the "Where Codegraph Loses" section do not need rank-header updates yet, but the ranking table position itself requires correction (with downstream re-numbering).
| ### vs code-graph-rag (#3, 1,916 stars) | ||
| - **Graph query expressiveness**: Memgraph + Cypher enables arbitrary graph traversals; our SQL queries are more rigid | ||
| ### vs code-graph-rag (#4, 2,168 stars) |
Section header rank stale after codegraph/code-graph-rag swap
This section header says ### vs code-graph-rag (#4, 2,168 stars), but after the codegraph/code-graph-rag position swap (codegraph moved to #4, code-graph-rag moved to #5), code-graph-rag is now #5 in the ranking table — not #4. The header should read ### vs code-graph-rag (#5, 2,168 stars).
| ### vs code-graph-rag (#4, 2,168 stars) | |
| ### vs code-graph-rag (#5, 2,168 stars) |
Fixed — section header updated to '### vs code-graph-rag (#6, 2,168 stars)' to reflect its new position after the codebase-memory-mcp/arbor reordering.
generated/competitive/joern.md (Outdated)
| **Context:** Both are Apache-2.0-licensed code analysis tools with CLI interfaces. Joern is ranked #1 in our [competitive analysis](./COMPETITIVE_ANALYSIS.md) with a score of 4.5 vs codegraph's 4.0 at #8. | ||
| **Date:** 2026-03-21 | ||
| **Competitors:** `@optave/codegraph` v3.2.0 (Apache-2.0) vs `joernio/joern` v4.x (Apache-2.0) | ||
| **Context:** Both are Apache-2.0-licensed code analysis tools with CLI interfaces. Joern is ranked #2 in our [competitive analysis](./COMPETITIVE_ANALYSIS.md) with a score of 4.5 vs codegraph's 4.5 at #5. |
Stale codegraph rank reference in cross-document headers
Both deep-dive files still describe codegraph as ranked #5, but after the codegraph/code-graph-rag position swap (fixed per a previous round of feedback), codegraph is now #4 in the ranking table.
- joern.md line 5: "…vs codegraph's 4.5 at #5" → should be #4
- narsil-mcp.md line 5: "…vs codegraph's 4.5 at #5" → should be #4
| **Context:** Both are Apache-2.0-licensed code analysis tools with CLI interfaces. Joern is ranked #2 in our [competitive analysis](./COMPETITIVE_ANALYSIS.md) with a score of 4.5 vs codegraph's 4.5 at #5. | |
| **Context:** Both are Apache-2.0-licensed code analysis tools with CLI interfaces. Joern is ranked #2 in our [competitive analysis](./COMPETITIVE_ANALYSIS.md) with a score of 4.5 vs codegraph's 4.5 at #4. |
The same fix applies to narsil-mcp.md line 5:
**Context:** Both are Apache-2.0-licensed code analysis tools with MCP interfaces. Narsil-MCP is ranked #3 in our [competitive analysis](./COMPETITIVE_ANALYSIS.md) with a score of 4.5 vs codegraph's 4.5 at #4.
Fixed — both joern.md line 5 and narsil-mcp.md line 5 now reference codegraph as #4, matching the ranking table position.
| | 25 | codexray | 5 | 4 | 4 | 4 | 3 | 1 | | ||
| | 26 | CodeVisualizer | 4 | 3 | 5 | 3 | 3 | 2 | | ||
| | 27 | code-health-meter | 3 | 5 | 5 | 1 | 4 | 2 | | ||
| | 28 | code-graph-analysis-pipeline | 5 | 5 | 1 | 2 | 5 | 2 | | ||
| | 29 | codebadger | 4 | 4 | 3 | 5 | 3 | 1 | | ||
| | 30 | codegraph-cli | 5 | 3 | 3 | 2 | 3 | 2 | | ||
| | 31 | claude-context-local | 4 | 3 | 3 | 4 | 4 | 1 | | ||
| | 32 | loregrep | 3 | 3 | 4 | 3 | 5 | 2 | | ||
| | 33 | xnuinside/codegraph | 3 | 2 | 5 | 1 | 3 | 4 | | ||
| | 34 | java-all-call-graph | 4 | 4 | 3 | 1 | 3 | 3 | | ||
| | 35 | pyan | 3 | 3 | 5 | 1 | 4 | 2 | | ||
| | 36 | MATE | 3 | 5 | 1 | 1 | 3 | 2 | |
Three more sub-score/overall-score mismatches remaining
This PR has already fixed this exact pattern for GitNexus, glimpse, autodev-codebase, axon, colbymchenry/codegraph, code-graph-rag, and arbor — but three more entries in the scoring breakdown table still don't match their ranking table scores:
Row #25 – codexray (breakdown line 159):
- Sub-scores: 5 + 4 + 4 + 4 + 3 + 1 = 21 → 21/6 = 3.5
- Ranking table shows: 3.3 (off by 0.2)
- Consequence: codexray (actual avg 3.5) sits below the five 3.5-scoring entries at #18–22, creating a ranking inversion
Row #32 – loregrep (breakdown line 166):
- Sub-scores: 3 + 3 + 4 + 3 + 5 + 2 = 20 → 20/6 = 3.33 ≈ 3.3
- Ranking table shows: 3.0 (off by 0.3)
- Consequence: loregrep (actual avg 3.3) is deflated below its correct tier
Row #36 – MATE (breakdown line 170):
- Sub-scores: 3 + 5 + 1 + 1 + 3 + 2 = 15 → 15/6 = 2.5
- Ranking table shows: 3.0 (off by 0.5)
- This is the most critical of the three: MATE's actual sub-score average (2.5) falls below the Tier 1 threshold of 3.0, so MATE is misclassified as Tier 1. Based on sub-scores it belongs in Tier 2
Every other entry touched in this PR was corrected to the arithmetic mean of its six sub-scores. These three follow the same rule.
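The MATE case also shows why tier membership should be derived from the sub-score mean rather than the listed overall. A sketch using the Tier 1 threshold of 3.0 named above (the helper name is mine, not the repo's):

```python
# Sketch: re-derive tier membership from the sub-score mean.

TIER1_MIN = 3.0  # Tier 1 threshold cited in this thread

def tier_for(subscores):
    mean = sum(subscores) / 6
    return 1 if mean >= TIER1_MIN else 2

mate = [3, 5, 1, 1, 3, 2]  # sums to 15, mean 2.5
print(tier_for(mate))      # -> 2, not the Tier 1 slot it currently occupies
```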
| | 34 | 3.0 | [Technologicat/pyan](https://github.com/Technologicat/pyan) | 395 | Python | GPL-2.0 | Python 3 call graph generator, module import analysis, cycle detection, interactive HTML | | ||
| | 35 | 3.0 | [GaloisInc/MATE](https://github.com/GaloisInc/MATE) | 194 | Python | BSD-3 | DARPA-funded interactive CPG-based bug hunting for C/C++ via LLVM | | ||
| | 23 | 3.5 | [NeuralRays/codexray](https://github.com/NeuralRays/codexray) | 2 | TypeScript | MIT | 16 MCP tools, TF-IDF semantic search (~50MB), dead code, complexity, path finding | | ||
| | 24 | 3.4 | [Durafen/Claude-code-memory](https://github.com/Durafen/Claude-code-memory) | 72 | Python | None | Memory Guard quality gate, persistent codebase memory, Voyage AI + Qdrant | |
Three more sub-score/overall-score mismatches remaining
The same scoring reconciliation pattern applied to GitNexus, glimpse, autodev-codebase, axon, colbymchenry/codegraph, code-graph-rag, arbor, codexray, loregrep, and MATE still has three outstanding entries:
Claude-code-memory (#24, ranking table line 39)
- Breakdown (line 158): 4 + 3 + 3 + 3 + 4 + 3 = 20 → 20/6 = 3.33 ≈ 3.3
- Ranking shows: 3.4 (off by 0.1)
codegraph-cli (#31, ranking table line 46)
- Breakdown (line 165): 5 + 3 + 3 + 2 + 3 + 2 = 18 → 18/6 = 3.0
- Ranking shows: 3.2 (off by 0.2)
claude-context-local (#32, ranking table line 47)
- Breakdown (line 166): 4 + 3 + 3 + 4 + 4 + 1 = 19 → 19/6 = 3.17 ≈ 3.2
- Ranking shows: 3.1 (off by ~0.1)
The last two also create a ranking inversion: claude-context-local has an actual average of 3.2 while codegraph-cli averages 3.0, but codegraph-cli is ranked above it at #31 vs #32. With corrected scores, claude-context-local (3.2) should be ranked at #31 and codegraph-cli (3.0) should drop to #32 (alongside the other 3.0-scoring entries at #33–36).
| - **Vulnerability exposure analysis**: library usage pattern matching specific to the JS/TS ecosystem | ||
| ### vs aider (#38, 41,664 stars) | ||
| ### vs aider (#39, 42,198 stars — now Aider-AI/aider) |
Stale rank number in "vs aider" section header
The section header reads #39 for aider, but the Tier 2 ranking table (line 58) shows aider at #38:
| 37 | 2.9 | rahulvgmail/CodeInteliMCP | ← slot #37
| 38 | 2.8 | Aider-AI/aider | ← aider is at #38
A previous round of fixes (thread on this PR) corrected the header from #38 → #39 when Tier 2 was starting at #38. However, the subsequent fix that moved MATE from Tier 1 to Tier 2 reduced Tier 1 from 37 to 36 entries (Tier 1 now ends at #36, cloud-property-graph), shifting Tier 2 back to starting at #37 — making aider #38 again. The section header was not updated to reflect this.
| ### vs aider (#39, 42,198 stars — now Aider-AI/aider) | |
| ### vs aider (#38, 42,198 stars — now Aider-AI/aider) |
Verified — the header now reads '#38' matching the ranking table position.
generated/competitive/narsil-mcp.md (Outdated)
| | Standalone CLI | Full CLI experience | Config/tools management only | Codegraph | | ||
| | Standalone CLI | 41 commands | Config/tools management only | Codegraph | | ||
| | Programmatic API | Full JS API | None | Codegraph | | ||
| | Community & maturity | New | Newer (Dec 2025), growing fast | Tie | |
Key Metrics "growing fast" contradicts stagnation note
The PR explicitly sets out to flag narsil-mcp's development pause ("development paused since Feb 25") in narsil-mcp.md, and the Ecosystem & Community table (line 267) was correctly updated to say "no activity since Feb 25 (24+ day gap)". However, the Key Metrics Summary row still reads:
| Community & maturity | New | Newer (Dec 2025), growing fast | Tie |
"Growing fast" is now internally contradicted by the doc's own stagnation note. A reader consulting only the summary table gets a misleading signal. The row should be updated to reflect the pause, e.g.:
| | Community & maturity | New | Newer (Dec 2025), growing fast | Tie | | |
| | Community & maturity | New | Newer (Dec 2025); no activity since Feb 25 | Codegraph | |
The winner could arguably flip to Codegraph if narsil's development has stalled while codegraph is still actively shipping.
Fixed — updated Key Metrics Summary row from "growing fast" to "no activity since Feb 25" and changed the winner from Tie to Codegraph, consistent with the Ecosystem table (line 267) and the stagnation summary (line 277).
Verified — 'growing fast' text has already been removed from narsil-mcp.md. The Key Metrics Summary now correctly reflects the development pause.
| # Competitive Analysis — Code Graph / Code Intelligence Tools | ||
| **Date:** 2026-02-25 | ||
| **Scope:** 137+ code analysis tools evaluated, 82+ ranked against `@optave/codegraph` | ||
| **Date:** 2026-03-21 (updated from 2026-02-25) | ||
| **Scope:** 140+ code analysis tools evaluated, 85+ ranked against `@optave/codegraph` |
PR description rank numbers stale after position fixes
The PR description (which is part of this documentation PR) states:
"Re-rank codegraph from #8 (4.0) to #5 (4.5)"
"Add DeusData/codebase-memory-mcp as #6"
But the current document reflects a different final state — after all the scoring/ranking corrections applied during this review cycle:
| Entry | PR description says | Document shows |
|---|---|---|
| @optave/codegraph | #5 | #4 |
| codebase-memory-mcp | #6 | #5 |
The PR description was never updated to reflect the final positions after the codegraph ↔ code-graph-rag swap and the codebase-memory-mcp ↔ code-graph-rag reordering. Since this is a documentation PR, the PR summary itself should accurately describe the final state of the documents.
Force-push: 4443b33 → 71e910f
| ### vs cpg (#7, 424 stars) | ||
| - **Formal CPG specification**: academic-grade graph representation (AST + CFG + PDG + DFG) with published specs | ||
| - **MCP module**: built-in MCP support now, matching our integration | ||
| - **LLVM IR support**: extends language coverage to any LLVM-compiled language (Rust, Swift, etc.) | ||
| - **Type inference**: can analyze incomplete/partial code — our tree-sitter requires syntactically valid input |
arbor is now ranked #8 (score 4.2), sitting directly between cpg (#7) and CKB (#9) — both of which have dedicated comparison sections in "Where Codegraph Loses." This makes arbor the highest-ranked entry in the top 10 without a "where we lose" analysis.
This gap was introduced by the arithmetic score correction applied during this review cycle (arbor's overall score was corrected from 3.7 → 4.2, which elevated its position from #13 to #8). As a result, the "Where Codegraph Loses" section now skips from ### vs cpg (#7…) directly to ### vs CKB (#9…), with no discussion of arbor's actual competitive advantages over codegraph (native GUI, fuzzy search, MCP, confidence scoring, architectural role classification).
Consider adding a ### vs arbor (#8, 85 stars) section covering where arbor wins — for example its native Rust GUI, fuzzy symbol search with Levenshtein scoring, and built-in confidence scoring on graph edges.
Fixed — added 'vs arbor (#8, 85 stars)' section covering its native Rust GUI, fuzzy symbol search, confidence scoring, and role classification advantages.
The vs-arbor section exists at line 253 with the role names now corrected.
| ### vs glimpse (#10, 349 stars — stagnant) | ||
| - **LLM workflow optimization**: clipboard-first output + token counting + XML output mode — purpose-built for "code → LLM context" | ||
| - **LSP-based call resolution**: compiler-grade accuracy vs our tree-sitter heuristic approach | ||
| - **Web content processing**: can fetch URLs and convert HTML to markdown for context |
Duplicate "vs glimpse" section with stale rank
This section is a duplicate of the correct ### vs glimpse (#11, 349 stars — stagnant) section at line 272. It was not removed when the vs arbor (#8) section was inserted here as part of the fixes in this review cycle, and its rank is also wrong — glimpse is at #11 in the ranking table, not #10 (axon holds #10).
The content of both sections is identical (LLM workflow optimization, LSP-based call resolution, web content processing). This duplicate should be removed, leaving only the correct ### vs glimpse (#11, ...) at line 272.
| ### vs glimpse (#10, 349 stars — stagnant) | |
| - **LLM workflow optimization**: clipboard-first output + token counting + XML output mode — purpose-built for "code → LLM context" | |
| - **LSP-based call resolution**: compiler-grade accuracy vs our tree-sitter heuristic approach | |
| - **Web content processing**: can fetch URLs and convert HTML to markdown for context |
(remove lines 248–251 entirely)
| - **Native Rust GUI**: Built-in desktop interface for interactive graph exploration — we have HTML viewer but no native GUI | ||
| - **Fuzzy symbol search**: Levenshtein-scored symbol matching tolerates typos and partial names — our search requires exact or substring matches | ||
| - **Built-in confidence scoring**: Graph edges carry confidence weights out of the box — we have confidence scoring on import resolution but not surfaced on all edge types | ||
| - **Architectural role classification**: Automatic labeling of nodes by architectural role (controller, service, repository, etc.) — *(Gap closed: our `roles` command now classifies nodes as core, bridge, leaf, dead, utility)* |
Incorrect role names in new "vs arbor" section
This line was authored in the newly added vs arbor section (added as a fix in this PR) but uses the wrong role vocabulary. The text reads:
"our
rolescommand now classifies nodes as core, bridge, leaf, dead, utility"
However, the actual codegraph role taxonomy — confirmed in line 320 of this same file and in CLAUDE.md — is:
entry/core/utility/adapter/dead/leaf
Two problems:
- `bridge` is not a valid role — the correct name is `adapter`
- `entry` is missing from the list
| - **Architectural role classification**: Automatic labeling of nodes by architectural role (controller, service, repository, etc.) — *(Gap closed: our `roles` command now classifies nodes as core, bridge, leaf, dead, utility)* | |
| - **Architectural role classification**: Automatic labeling of nodes by architectural role (controller, service, repository, etc.) — *(Gap closed: our `roles` command now classifies nodes as `entry`/`core`/`utility`/`adapter`/`dead`/`leaf`)* |
Fixed — corrected the role names from 'core, bridge, leaf, dead, utility' to 'entry, core, utility, adapter, dead, leaf' matching the actual codegraph taxonomy.

Summary
Test plan