Merged
73 changes: 65 additions & 8 deletions .claude/skills/titan-close/SKILL.md
@@ -216,7 +216,7 @@ Compare final metrics against `titan-state.json` baseline:

---

## Step 5 — Compile the issue tracker
## Step 5 — Compile the issue tracker and open GitHub issues

Read `.codegraph/titan/issues.ndjson`. Each line is a JSON object:

@@ -230,6 +230,47 @@ Group issues by category and severity. Summarize:
- **Process notes:** suggestions for improving the Titan workflow
- **Codebase observations:** structural concerns beyond what the audit covered

### 5b. Open GitHub issues

**Pre-check:** Verify `gh` is available and authenticated before attempting issue creation:
```bash
gh auth status 2>&1 || echo "GH_UNAVAILABLE"
```
If `GH_UNAVAILABLE`, skip issue creation entirely and note in the report: "GitHub issues were not created — `gh` CLI is not available or not authenticated. Create them manually from the Issues section below."

For each issue with severity `bug` or `limitation`, create a GitHub issue using `gh`:

```bash
BODY=$(mktemp)
cat > "$BODY" <<'ISSUE_BODY'
## Context
Discovered during Titan audit (phase: <phase>, date: <timestamp>).

## Description
<description>

## Additional Context
<context field, if present>

## Source
- **Titan phase:** <phase>
- **Severity:** <severity>
- **Category:** <category>
ISSUE_BODY
gh issue create --title "<category>: <short description>" --body-file "$BODY" --label "titan-audit"
rm -f "$BODY"
```

Using `--body-file` with a temp file avoids quoting/expansion issues that can arise when issue descriptions contain backticks, `$()` sequences, or literal `EOF` strings.

**Rules for issue creation:**
- **Only open issues for `bug` and `limitation` severity.** Suggestions and observations go in the report only — they are not actionable enough for standalone issues.
- **Check for duplicates first:** Run `gh issue list --search "<short description>" --state open --limit 5` before creating. If a matching open issue exists, skip it and note "existing issue #N" in the report.
- **Label:** Use `titan-audit` label. If the label doesn't exist, create it: `gh label create titan-audit --description "Issues discovered during Titan audit" --color "d4c5f9" 2>/dev/null || true`
- **Record each created issue number** for inclusion in the report's Issues section.

For `suggestion` severity entries and entries with `category: "codebase"`, include them in the report's Issues section but do NOT create GitHub issues.
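The filtering rules above can be sketched as a small classifier (a sketch, assuming the `severity` and `category` field names shown earlier; adapt it to the actual NDJSON schema):

```javascript
// Classify parsed issues.ndjson entries per the rules above:
// only `bug` and `limitation` become GitHub issues; suggestions
// and codebase observations stay report-only.
function classifyIssues(ndjsonText) {
  const entries = ndjsonText
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line));
  const toCreate = [];
  const reportOnly = [];
  for (const e of entries) {
    const actionable =
      (e.severity === "bug" || e.severity === "limitation") &&
      e.category !== "codebase";
    (actionable ? toCreate : reportOnly).push(e);
  }
  return { toCreate, reportOnly };
}
```

Entries in `toCreate` feed the `gh issue create` loop; everything in `reportOnly` goes straight into the report's Issues section.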

---

## Step 6 — Compile the gate log
@@ -244,6 +285,14 @@ Read `.codegraph/titan/gate-log.ndjson`. Summarize:

## Step 7 — Generate the report

### Record CLOSE completion timestamp

Before writing the report, record `phaseTimestamps.close.completedAt` so the Pipeline Timeline has accurate data for the CLOSE row. (titan-run also records this after titan-close returns as a safety backstop, but by then the report is already written.)

```bash
node -e "const fs=require('fs');const s=JSON.parse(fs.readFileSync('.codegraph/titan/titan-state.json','utf8'));s.phaseTimestamps=s.phaseTimestamps||{};s.phaseTimestamps['close']=s.phaseTimestamps['close']||{};s.phaseTimestamps['close'].completedAt=new Date().toISOString();fs.writeFileSync('.codegraph/titan/titan-state.json',JSON.stringify(s,null,2));"
```

### Report path

```
```

@@ -283,13 +332,21 @@ Write the report as Markdown:

## Pipeline Timeline

| Phase | Started | Completed | Duration |
|-------|---------|-----------|----------|
| RECON | <from state> | <from state> | — |
| GAUNTLET | — | — | — |
| SYNC | — | — | — |
| GATE (runs) | — | — | — |
| CLOSE | <now> | <now> | — |
Read `titan-state.json → phaseTimestamps` for real wall-clock data. If `phaseTimestamps` exists, use the recorded ISO 8601 timestamps to compute durations. If it does not exist (older pipeline run), derive timing from git commit timestamps as a fallback — **never invent or guess timestamps.**

**Duration computation:** For each phase with `startedAt` and `completedAt`, compute duration as the difference in minutes/hours. For forge, also note the first and last commit timestamps from `git log`.
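The duration rule can be sketched as follows (a sketch, assuming the `phaseTimestamps` shape recorded by titan-run; a phase missing either timestamp yields no computed duration rather than a guessed one):

```javascript
// Compute human-readable durations from phaseTimestamps.
// Phases missing startedAt or completedAt yield null — never guess.
function phaseDurations(phaseTimestamps) {
  const out = {};
  for (const [phase, t] of Object.entries(phaseTimestamps || {})) {
    if (!t.startedAt || !t.completedAt) {
      out[phase] = null;
      continue;
    }
    const mins = (new Date(t.completedAt) - new Date(t.startedAt)) / 60000;
    out[phase] = mins >= 60 ? `${(mins / 60).toFixed(1)} h` : `${mins.toFixed(1)} min`;
  }
  return out;
}
```

A `null` duration means the row must fall back to git-log bounds, never a fabricated value.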

| Phase | Duration | Notes |
|-------|----------|-------|
| RECON | <computed from phaseTimestamps.recon> | — |
| GAUNTLET | <computed from phaseTimestamps.gauntlet> | <iterations count if resuming> |
| SYNC | <computed from phaseTimestamps.sync> | — |
| FORGE | <computed from phaseTimestamps.forge> | <commit count>, first at <time>, last at <time> |
| GATE | across forge | <total runs> inline with forge commits |
| CLOSE | <computed from phaseTimestamps.close> | — |
| **Total** | <sum of all phases> | — |

**Review comment (Contributor):** P1: `close.completedAt` is never available when the report is generated

`phaseTimestamps.close.completedAt` is written by titan-run after titan-close returns (see titan-run/SKILL.md Step 5c). When titan-close is building the Pipeline Timeline table, `completedAt` for the CLOSE phase does not yet exist in `titan-state.json` — only `startedAt` is present.

The CLOSE row template says `<computed from phaseTimestamps.close>`, but with a missing `completedAt` an AI agent following these instructions has no real value to compute from. This is the same "fabricated timestamps" failure mode the PR was written to fix: an agent asked to compute a duration it cannot derive will either guess or use a placeholder.

The simplest fix is to have titan-close record its own `completedAt` immediately before writing the final report, then use that value in the table:

```bash
# Just before writing the final report file:
node -e "const fs=require('fs');const s=JSON.parse(fs.readFileSync('.codegraph/titan/titan-state.json','utf8'));s.phaseTimestamps=s.phaseTimestamps||{};s.phaseTimestamps['close']=s.phaseTimestamps['close']||{};s.phaseTimestamps['close'].completedAt=new Date().toISOString();fs.writeFileSync('.codegraph/titan/titan-state.json',JSON.stringify(s,null,2));"
```

Then titan-run's existing "Record phaseTimestamps.close.completedAt" step (Step 5c) can be kept as a safety backstop or removed to avoid overwriting the more-accurate in-close value.

**Reply (Author):** Fixed: added a "Record CLOSE completion timestamp" step in titan-close Step 7, immediately before writing the report. titan-close now records its own `phaseTimestamps.close.completedAt` so the Pipeline Timeline CLOSE row has accurate duration data. The existing titan-run Step 5c recording is preserved as a safety backstop, but the report no longer depends on it.

**If `phaseTimestamps` is missing:** Fall back to git log timestamps. Use the earliest and latest commit timestamps from `git log main..HEAD --format="%ai"` to bound the forge phase. For analysis phases (recon, gauntlet, sync), use `titan-state.json → initialized` and `lastUpdated` as rough bounds. Mark the durations as "~approximate" in the table.
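The fallback bounding can be sketched as follows (a sketch, assuming the `git log` output has already been captured separately, e.g. with `--format="%aI"` for strict ISO timestamps; the parsing itself needs no repository):

```javascript
// Bound a phase using commit timestamps captured from, e.g.:
//   git log main..HEAD --format="%aI"
// Returns earliest/latest timestamps, or null when there are no commits.
function boundsFromGitLog(gitLogOutput) {
  const times = gitLogOutput
    .split("\n")
    .map((l) => l.trim())
    .filter(Boolean)
    .map((l) => new Date(l))
    .filter((d) => !Number.isNaN(d.getTime()));
  if (times.length === 0) return null;
  times.sort((a, b) => a - b);
  return {
    earliest: times[0].toISOString(),
    latest: times[times.length - 1].toISOString(),
    approximate: true, // render as "~approximate" in the table
  };
}
```

A `null` result means the phase has no commit evidence at all, so its row stays blank rather than invented.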

---

12 changes: 11 additions & 1 deletion .claude/skills/titan-recon/SKILL.md
@@ -188,7 +188,8 @@ Write `.codegraph/titan/GLOBAL_ARCH.md`:

## Step 10 — Propose work batches

Decompose the priority queue into **work batches** of ~5-15 files each:
Decompose the priority queue into **work batches** of **at most 5 files each**:
- **Hard limit: 5 files per batch.** If a domain has more than 5 files, split it into multiple batches (e.g., "domain-parser-1", "domain-parser-2"). This keeps each gauntlet iteration focused and prevents context overload in sub-agents.
- Stay within a single domain where possible
- Group tightly-coupled files together (from communities)
- Order by priority: highest-risk domains first
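The splitting rule above can be sketched as follows (a sketch, assuming domain file lists are already grouped and priority-sorted; the `<domain>-<n>` batch ids follow the convention in the hard-limit bullet):

```javascript
// Split one domain's file list into batches of at most 5 files,
// preserving the given (priority-sorted) order.
const MAX_BATCH_FILES = 5;

function splitDomainIntoBatches(domain, files) {
  const batches = [];
  for (let i = 0; i < files.length; i += MAX_BATCH_FILES) {
    batches.push({
      id: `${domain}-${batches.length + 1}`,
      files: files.slice(i, i + MAX_BATCH_FILES),
    });
  }
  return batches;
}
```

Every batch this produces satisfies the hard limit, so the gauntlet post-validation batch-size check passes by construction.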
@@ -214,6 +215,14 @@ Create `.codegraph/titan/titan-state.json` — the single source of truth for th
mkdir -p .codegraph/titan
```

**Important:** Before writing, check if `titan-state.json` already exists (the orchestrator may have written `phaseTimestamps.recon.startedAt` before dispatching this sub-agent). If it does, read the existing `phaseTimestamps` and merge them into the new state object so the start timestamp is preserved:

```bash
node -e "const fs=require('fs');const p='.codegraph/titan/titan-state.json';let existing={};try{existing=JSON.parse(fs.readFileSync(p,'utf8'));}catch{}console.log(JSON.stringify(existing.phaseTimestamps||{}));"
```

Include the preserved `phaseTimestamps` in the state file below.

```json
{
"version": 1,
@@ -281,6 +290,7 @@ mkdir -p .codegraph/titan
},
"hotFiles": ["<top 30>"],
"tangledDirs": ["<cohesion < 0.3>"],
"phaseTimestamps": "<merged from existing file — see above>",
"fileAudits": {},
"progress": {
"totalFiles": 0,
50 changes: 50 additions & 0 deletions .claude/skills/titan-run/SKILL.md
@@ -40,6 +40,24 @@ You are the **orchestrator** for the full Titan Paradigm pipeline. Your job is t
- If state exists and `--start-from` not specified, ask user: "Existing Titan state found (phase: `<currentPhase>`). Resume from current state, or start fresh with `/titan-reset` first?"
- If `--yes` is set, resume automatically.

**Initialize the phase timestamps helper.** Throughout the pipeline, you will record wall-clock timestamps for each phase. Use this helper to write them into `titan-state.json`:

```bash
# Record phase start (safe for resume — only sets startedAt if not already present):
node -e "const fs=require('fs');const s=JSON.parse(fs.readFileSync('.codegraph/titan/titan-state.json','utf8'));s.phaseTimestamps=s.phaseTimestamps||{};s.phaseTimestamps['<PHASE>']=s.phaseTimestamps['<PHASE>']||{};if(!s.phaseTimestamps['<PHASE>'].startedAt){s.phaseTimestamps['<PHASE>'].startedAt=new Date().toISOString();fs.writeFileSync('.codegraph/titan/titan-state.json',JSON.stringify(s,null,2));}"

# Record phase completion:
node -e "const fs=require('fs');const s=JSON.parse(fs.readFileSync('.codegraph/titan/titan-state.json','utf8'));s.phaseTimestamps=s.phaseTimestamps||{};s.phaseTimestamps['<PHASE>']=s.phaseTimestamps['<PHASE>']||{};s.phaseTimestamps['<PHASE>'].completedAt=new Date().toISOString();fs.writeFileSync('.codegraph/titan/titan-state.json',JSON.stringify(s,null,2));"
```

Replace `<PHASE>` with `recon`, `gauntlet`, `sync`, `forge`, or `close`. **Run the start command immediately before dispatching each phase's first sub-agent, and the completion command immediately after post-phase validation passes.** If resuming a phase (e.g., gauntlet loop iteration 2+), do NOT overwrite `startedAt` — only set it if it doesn't already exist.

**Timestamp validation:** After recording `completedAt` for any phase, verify `startedAt < completedAt`:
```bash
node -e "const s=JSON.parse(require('fs').readFileSync('.codegraph/titan/titan-state.json','utf8'));const p=s.phaseTimestamps?.['<PHASE>'];if(p?.startedAt&&p?.completedAt){const start=new Date(p.startedAt),end=new Date(p.completedAt);if(end<=start){console.log('WARNING: <PHASE> completedAt ('+p.completedAt+') is not after startedAt ('+p.startedAt+')');process.exit(0);}console.log('<PHASE> duration: '+((end-start)/60000).toFixed(1)+' min');}else{console.log('WARNING: <PHASE> missing startedAt or completedAt');}"
```
If the check fails, log a warning but do not stop the pipeline — clock skew or immediate completion of short phases can cause this.

4. **Sync with main** (once, before any sub-agent runs):
```bash
git fetch origin main && git merge origin/main --no-edit
```

@@ -128,6 +146,17 @@ WARN-level V-checks from skipped phases are surfaced as prefixed warnings: "[ski

### 1a. Run Pre-Agent Gate (G1-G4)

### 1a.1. Record phase start timestamp
Record `phaseTimestamps.recon.startedAt` (only if not already set — it may exist from a prior crashed run).
**Review comment (Contributor, on lines +149 to +150):** P1: `recon.startedAt` always fails on fresh runs (ENOENT)

Step 1a.1 instructs the orchestrator to record `phaseTimestamps.recon.startedAt` by running the helper node one-liner, but on a fresh run `titan-state.json` does not yet exist at this point. The file is only created at the very end of the recon sub-agent, in titan-recon Step 12. The node command will throw:

```
Error: ENOENT: no such file or directory, open '.codegraph/titan/titan-state.json'
```

Furthermore, even if the orchestrator pre-created a stub file, titan-recon Step 12 overwrites it wholesale with a freshly built JSON object that has no `phaseTimestamps` key, so any pre-written `startedAt` would be lost regardless.

The end result: on every fresh run `phaseTimestamps.recon.startedAt` is never written (only `completedAt` is recorded after validation), and the Pipeline Timeline falls back to approximate git-log timestamps for the RECON row — exactly the fabricated-timestamp problem this PR was written to fix.

Recommended fix: add `phaseTimestamps` to the initial state template in titan-recon Step 12, seeding `recon.startedAt` at the moment the sub-agent begins executing:

```
{
  "version": 1,
  "initialized": "<ISO 8601>",
  ...
  "phaseTimestamps": {
    "recon": {
      "startedAt": "<ISO 8601 — set at start of Step 1>"
    }
  }
}
```

Alternatively, have titan-recon record its own `startedAt` as the very first action (before any heavy processing), writing directly into a field it then embeds in the Step 12 write.

**Reply (Author):** Fixed: addressed the ENOENT issue in two places:

1. titan-run Step 1a.1: added a safe variant of the timestamp helper that catches the missing file and creates a minimal stub with `phaseTimestamps.recon.startedAt` when `titan-state.json` doesn't exist yet. It uses try/catch around the `readFileSync` and falls back to `mkdirSync` plus an empty object.
2. titan-recon Step 12: added instructions to read any existing `phaseTimestamps` from the file before overwriting it and merge them into the full state object. The JSON template now includes a `phaseTimestamps` field that must be populated from the existing file.

This ensures `recon.startedAt` is recorded on fresh runs and survives the Step 12 wholesale write.

Regarding the `titan-audit` label concern from the summary: this is already handled in titan-close Step 5b's rules section. `gh label create titan-audit ... 2>/dev/null || true` runs before the first `gh issue create`, so the label is created on-demand if missing.


**Note:** On a fresh run, `titan-state.json` does not yet exist (titan-recon creates it in Step 12). Use this safe variant that creates a minimal stub if the file is missing:

```bash
node -e "const fs=require('fs');const p='.codegraph/titan/titan-state.json';let s;try{s=JSON.parse(fs.readFileSync(p,'utf8'));}catch{fs.mkdirSync('.codegraph/titan',{recursive:true});s={};}s.phaseTimestamps=s.phaseTimestamps||{};s.phaseTimestamps['recon']=s.phaseTimestamps['recon']||{};if(!s.phaseTimestamps['recon'].startedAt){s.phaseTimestamps['recon'].startedAt=new Date().toISOString();fs.writeFileSync(p,JSON.stringify(s,null,2));}"
```

This ensures `recon.startedAt` is recorded even on first-time runs. titan-recon Step 12 merges any existing `phaseTimestamps` into the full state file it writes.

### 1b. Dispatch sub-agent

Use the **Agent tool** to spawn a sub-agent:
Expand Down Expand Up @@ -177,11 +206,14 @@ If `NO_SNAPSHOT` → **WARN** (not fatal, but note it: "No baseline snapshot —
**V4. Cross-check counts:**
- `titan-state.json → stats.totalFiles` should roughly match the number of targets across all batches (batches are subsets of files, so `sum(batch.files.length)` should be ≤ `totalFiles`)
- `priorityQueue.length` should be > 0 and ≤ `totalNodes`
- **Batch size check:** Every batch must have ≤ 5 files. If any batch exceeds 5, **WARN**: "Batch <id> has <N> files (max 5). Large batches cause context overload in gauntlet sub-agents."

If wildly inconsistent (e.g., 0 batches but 500 nodes) → **WARN** with details.

Print: `RECON validated. Domains: <count>, Batches: <count>, Priority targets: <count>, Quality score: <score>`

Record `phaseTimestamps.recon.completedAt`.

---

## Step 2 — GAUNTLET (loop)
@@ -190,6 +222,8 @@

### 2a. Pre-loop check

Record `phaseTimestamps.gauntlet.startedAt` (only if not already set — gauntlet may be resuming).

Read `.codegraph/titan/gauntlet-summary.json` if it exists:
- If `"complete": true` → run gauntlet post-validation (2d) and skip loop if it passes
- Otherwise, count completed batches from `titan-state.json` for progress tracking
@@ -298,6 +332,8 @@ If mismatched → **WARN** with details (not fatal — the NDJSON is the source

Print: `GAUNTLET validated. Audited: <N>/<M> targets. Pass: <N>, Warn: <N>, Fail: <N>, Decompose: <N>. NDJSON integrity: <valid>/<total> lines OK.`

Record `phaseTimestamps.gauntlet.completedAt`.

---

## Step 3 — SYNC
@@ -306,6 +342,9 @@

### 3a. Run Pre-Agent Gate (G1-G4)

### 3a.1. Record phase start timestamp
Record `phaseTimestamps.sync.startedAt`.

### 3b. Dispatch sub-agent

```
```

@@ -337,6 +376,8 @@ For entries with `dependencies` arrays, verify that each dependency phase number

Print: `SYNC validated. Execution phases: <N>, Total targets: <N>, Estimated commits: <N>.`

Record `phaseTimestamps.sync.completedAt`.

---

## Step 3.5 — Pre-forge: Architectural Snapshot + Human Checkpoint
@@ -453,6 +494,8 @@ Once the user confirms (or `--yes` was set), `autoConfirm` is already `true` (se

### 4a. Pre-loop check

Record `phaseTimestamps.forge.startedAt` (only if not already set — forge may be resuming).

Read `.codegraph/titan/sync.json` → count total phases in `executionOrder`.
Read `.codegraph/titan/titan-state.json` → check `execution.completedPhases` (may not exist yet if forge hasn't started).

@@ -575,6 +618,8 @@ If `.codegraph/titan/gate-log.ndjson` exists:

Print forge summary.

Record `phaseTimestamps.forge.completedAt`.

---

## Step 5 — CLOSE (report + PRs)
@@ -583,6 +628,9 @@

### 5a. Run Pre-Agent Gate (G1-G4)

### 5a.1. Record phase start timestamp
Record `phaseTimestamps.close.startedAt`.

### 5b. Dispatch sub-agent

```
```

@@ -598,6 +646,8 @@ After the agent returns, verify:

If the agent created PRs, print the PR URLs.

Record `phaseTimestamps.close.completedAt`.

---

## Error Handling