diff --git a/.claude/skills/drift/SKILL.md b/.claude/skills/drift/SKILL.md index 721a6a1..0a8dfd0 100644 --- a/.claude/skills/drift/SKILL.md +++ b/.claude/skills/drift/SKILL.md @@ -11,9 +11,15 @@ drift binds markdown docs to code and lints for staleness. When you change code without updating the docs that describe it, those docs become stale. Stale docs get loaded as context in future sessions and produce wrong code based on wrong descriptions. This compounds — each session that trusts a stale doc makes things worse. drift makes the anchor explicit and enforceable so this feedback loop breaks. -## CRITICAL: never relink without reviewing +## Relink gate -`drift link` refreshes provenance — it tells drift "I've reviewed this code and the doc is accurate." If you relink without actually updating the doc prose to match the code change, you are lying to every future session that loads that doc. Read the stale report, understand what changed, update the prose, THEN relink. +`drift link` refuses to restamp a stale anchor without explicit review. When a target's signature has drifted, `drift link` prints both sides — the doc section (spec) and the current code — then exits 1. + +This means you cannot blindly relink. You must review the doc prose and confirm it is still accurate. Use: + +```bash +drift link docs/auth.md --doc-is-still-accurate +``` ## After you change code @@ -30,10 +36,11 @@ drift check ``` If a doc is stale because of your change: -1. Read the blame info to understand what changed and why -2. Update the doc's prose to reflect what you changed -3. Refresh provenance: `drift link ` -4. Verify: `drift check` +1. Run `drift link ` — it will print the doc section and current code side by side, then refuse +2. Read both sides to understand what's out of sync +3. Update the doc's prose to reflect what you changed +4. Run `drift link --doc-is-still-accurate` — succeeds now that you've reviewed +5. Verify: `drift check` Do not skip this. 
Leaving a doc stale is worse than leaving it unwritten. @@ -100,6 +107,8 @@ Anchors can target code files, code symbols (`file#Symbol`), or doc headings (`d `drift link` writes bindings to `drift.lock` with content signatures (`sig:`). Content signatures are AST fingerprints of the target, so staleness detection works without querying VCS history. This means `drift link` works on uncommitted files — no need to commit first. +When relinking a stale anchor, `drift link` refuses and prints both sides (doc section and current code) so you can review the change. Pass `--doc-is-still-accurate` to confirm the doc doesn't need updates. + `drift lint` also checks all markdown links (`[text](path.md)`) in drift-managed docs for existence — broken links are reported as `BROKEN` without needing a lockfile entry. ## Cross-repo docs (origin) diff --git a/docs/CLI.md b/docs/CLI.md index ebb6da5..9f9b805 100644 --- a/docs/CLI.md +++ b/docs/CLI.md @@ -4,7 +4,7 @@ ## drift check / drift lint -Check all docs for staleness. The primary command. Exits 1 if any anchor is stale or any link is broken. `drift lint` is an alias. +Check all docs for staleness. The primary command. Exits 1 if any anchor is stale or any link is broken. `drift lint` is an alias. Markdown files under directories with their own `drift.lock` are skipped — they belong to a nested scope. ``` drift check [--format text|json] [--changed ] @@ -69,12 +69,12 @@ docs/payments.md (1 anchor) ## drift link -Add or refresh bindings in `drift.lock`. `drift link` computes a content signature (`sig:`) from the target file's current syntax fingerprint and writes it to the lockfile. Creates `drift.lock` if it doesn't exist. +Add or refresh bindings in `drift.lock`. `drift link` computes a content signature (`sig:`) from the target file's current syntax fingerprint and writes it to the lockfile. Creates `drift.lock` if it doesn't exist. 
The lockfile is discovered by walking up from the doc's directory, not from cwd — if a nested `drift.lock` exists closer to the doc, that lockfile is used. ``` -drift link -drift link -drift link +drift link [--doc-is-still-accurate] +drift link [--doc-is-still-accurate] +drift link [--doc-is-still-accurate] ``` **Targeted mode** — adds a single binding to `drift.lock`: @@ -99,9 +99,11 @@ relinked all anchors in docs/auth.md Each anchor gets its own content signature computed from the current file on disk. +**Relink gate** — when relinking a stale anchor (target signature changed), the relink is refused and both sides are printed (doc section and current code). This prevents blindly restamping without reviewing documentation. Pass `--doc-is-still-accurate` to confirm you've reviewed the doc and it doesn't need changes. + ## drift unlink -Remove a binding from `drift.lock`. +Remove a binding from `drift.lock`. Like `drift link`, the lockfile is discovered from the doc's directory. ``` drift unlink diff --git a/drift.lock b/drift.lock index 183b038..51bff1e 100644 --- a/drift.lock +++ b/drift.lock @@ -1,15 +1,15 @@ -.claude/skills/drift/SKILL.md -> src/main.zig sig:a0933989333423fb origin:github:fiberplane/drift +.claude/skills/drift/SKILL.md -> src/main.zig sig:647a31274655a84d origin:github:fiberplane/drift .claude/skills/drift/SKILL.md -> src/vcs.zig sig:2468937f00d5305a origin:github:fiberplane/drift CLAUDE.md -> build.zig sig:7194b38f39dbadba -CLAUDE.md -> src/main.zig sig:a0933989333423fb -docs/CLI.md -> src/commands/link.zig sig:f4ab9576ebef2981 -docs/CLI.md -> src/commands/lint.zig sig:a0237cda56055884 +CLAUDE.md -> src/main.zig sig:647a31274655a84d +docs/CLI.md -> src/commands/link.zig sig:70e52c01fb9022a8 +docs/CLI.md -> src/commands/lint.zig sig:b8ab0cd93909b888 docs/CLI.md -> src/commands/refs.zig sig:e3309a0d11c02bb0 docs/CLI.md -> src/commands/status.zig sig:ab9cee37b4b22644 -docs/CLI.md -> src/commands/unlink.zig sig:590c53a3920551d3 +docs/CLI.md -> 
src/commands/unlink.zig sig:6fc59e2a25f80fac docs/DESIGN.md -> src/context.zig sig:70678dcc0872470d -docs/DESIGN.md -> src/lockfile.zig sig:f659df3e59325b71 -docs/DESIGN.md -> src/main.zig sig:a0933989333423fb +docs/DESIGN.md -> src/lockfile.zig sig:23bc7256cff13942 +docs/DESIGN.md -> src/main.zig sig:647a31274655a84d docs/DESIGN.md -> src/symbols.zig sig:8e4a403c6f0130c3 docs/DESIGN.md -> src/vcs.zig sig:2468937f00d5305a docs/RELEASING.md -> .github/workflows/ci.yml sig:e8440b1d7ee3e4ba diff --git a/examples/broken-links/README.md b/examples/broken-links/README.md new file mode 100644 index 0000000..c791714 --- /dev/null +++ b/examples/broken-links/README.md @@ -0,0 +1,23 @@ +# Broken Links Example + +Demonstrates `drift check` detecting broken markdown links. + +## Setup + +No lockfile binding needed — broken link detection works on any markdown file +discovered by `drift check`. + +## Run + +```bash +drift check +``` + +Expected output: `doc.md` reports 3 `BROKEN` links: + +- `./stripe-guide.md` — doesn't exist +- `../docs/payment-arch.md` — doesn't exist +- `./errors.md` — doesn't exist + +The two links under "Working links" point to files in the relink-gate example +and should pass. diff --git a/examples/broken-links/doc.md b/examples/broken-links/doc.md new file mode 100644 index 0000000..1790ace --- /dev/null +++ b/examples/broken-links/doc.md @@ -0,0 +1,14 @@ +# Payment Processing + +See the [Stripe integration guide](./stripe-guide.md) for setup instructions. + +The payment flow is documented in [the architecture doc](../docs/payment-arch.md). + +For error codes, refer to [error reference](./errors.md). 
+ +## Working links + +These should pass lint: + +- [Auth example](../relink-gate/doc.md) +- [README](../relink-gate/README.md) diff --git a/examples/broken-links/drift.lock b/examples/broken-links/drift.lock new file mode 100644 index 0000000..e69de29 diff --git a/examples/relink-gate/README.md b/examples/relink-gate/README.md new file mode 100644 index 0000000..f83e0a7 --- /dev/null +++ b/examples/relink-gate/README.md @@ -0,0 +1,37 @@ +# Relink Gate Example + +Demonstrates `drift link` refusing to restamp when the doc hasn't been updated. + +## Setup + +```bash +drift link examples/relink-gate/doc.md examples/relink-gate/auth.ts#login +``` + +## Trigger the gate + +Change the code without updating the doc: + +```bash +# Add a rate-limit check to login — the doc now lies about what login does +sed -i '' 's/return createSession(username);/if (rateLimited(username)) throw new Error("rate limited");\n return createSession(username);/' examples/relink-gate/auth.ts + +# Try to relink — refused because doc.md wasn't updated +drift link examples/relink-gate/doc.md +``` + +Expected output: drift prints the doc section and current code, then refuses. 
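For reference, the post-edit `login` can be sketched in full. This is a sketch, not part of the example: the sed one-liner inserts a call to `rateLimited` without defining it (drift fingerprints syntax, so the file need not compile), so a hypothetical stub is included here to keep it self-contained:

```typescript
// Sketch of examples/relink-gate/auth.ts after the sed edit above.
// rateLimited is hypothetical -- the sed one-liner adds a call to it
// without defining it; this stub exists only for illustration.
function rateLimited(username: string): boolean {
  return false; // stub: never rate-limited in this sketch
}

function createSession(username: string): string {
  return `session_${username}_${Date.now()}`;
}

function login(username: string, password: string): string {
  if (password.length === 0) {
    throw new Error("password required");
  }
  if (username.length < 3) {
    throw new Error("username must be at least 3 chars");
  }
  if (rateLimited(username)) throw new Error("rate limited");
  return createSession(username);
}
```

The body of `login` no longer matches the signature recorded in `drift.lock`, which is exactly what trips the gate on the next `drift link`.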
+ +## Fix it + +Either update `doc.md` to mention rate limiting, then relink: + +```bash +drift link examples/relink-gate/doc.md +``` + +Or confirm the doc is still accurate: + +```bash +drift link examples/relink-gate/doc.md --doc-is-still-accurate +``` diff --git a/examples/relink-gate/auth.ts b/examples/relink-gate/auth.ts new file mode 100644 index 0000000..049ffe1 --- /dev/null +++ b/examples/relink-gate/auth.ts @@ -0,0 +1,13 @@ +export function login(username: string, password: string): string { + if (password.length === 0) { + throw new Error("password required"); + } + if (username.length < 3) { + throw new Error("username must be at least 3 chars"); + } + return createSession(username); +} + +function createSession(username: string): string { + return `session_${username}_${Date.now()}`; +} diff --git a/examples/relink-gate/doc.md b/examples/relink-gate/doc.md new file mode 100644 index 0000000..48df00f --- /dev/null +++ b/examples/relink-gate/doc.md @@ -0,0 +1,10 @@ +# Authentication + +The `login` function validates a username/password pair against the +database and returns a session token on success. + +``` +login(username, password) -> session_token +``` + +It rejects empty passwords and usernames shorter than 3 characters. diff --git a/examples/relink-gate/drift.lock b/examples/relink-gate/drift.lock new file mode 100644 index 0000000..ff70e7e --- /dev/null +++ b/examples/relink-gate/drift.lock @@ -0,0 +1 @@ +doc.md -> auth.ts#login sig:fde13635c2922d43 diff --git a/examples/symbol-anchor/README.md b/examples/symbol-anchor/README.md new file mode 100644 index 0000000..ba0295f --- /dev/null +++ b/examples/symbol-anchor/README.md @@ -0,0 +1,26 @@ +# Symbol Anchor Example + +Demonstrates symbol-level anchors where drift tracks a specific named symbol +rather than the whole file. The relink gate shows the symbol body when refusing.
+ +## Setup + +```bash +drift link examples/symbol-anchor/doc.md examples/symbol-anchor/config.ts#DatabaseConfig +drift link examples/symbol-anchor/doc.md examples/symbol-anchor/config.ts#createPool +``` + +## Trigger the gate + +Change the `DatabaseConfig` interface without updating the doc: + +```bash +# Add a new field — the doc now omits it +sed -i '' 's/maxConnections: number;/maxConnections: number;\n ssl: boolean;/' examples/symbol-anchor/config.ts + +# Relink refused — prints the doc section AND the current DatabaseConfig body +drift link examples/symbol-anchor/doc.md +``` + +The output shows both sides so you can see what's out of sync: the doc lists +4 fields but the code now has 5. diff --git a/examples/symbol-anchor/config.ts b/examples/symbol-anchor/config.ts new file mode 100644 index 0000000..1e0c8f6 --- /dev/null +++ b/examples/symbol-anchor/config.ts @@ -0,0 +1,15 @@ +export interface DatabaseConfig { + host: string; + port: number; + database: string; + maxConnections: number; +} + +export interface CacheConfig { + ttlSeconds: number; + maxEntries: number; +} + +export function createPool(config: DatabaseConfig): void { + console.log(`Connecting to ${config.host}:${config.port}/${config.database}`); +} diff --git a/examples/symbol-anchor/doc.md b/examples/symbol-anchor/doc.md new file mode 100644 index 0000000..b470729 --- /dev/null +++ b/examples/symbol-anchor/doc.md @@ -0,0 +1,9 @@ +# Database Configuration + +`DatabaseConfig` defines the connection parameters: + +- `host` / `port` — database server address +- `database` — target database name +- `maxConnections` — connection pool ceiling + +Use `createPool` to initialize the pool from a config object. 
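The section above can be exercised directly. A minimal sketch, with `DatabaseConfig` and `createPool` copied from the example's `config.ts` and connection values invented purely for illustration:

```typescript
// DatabaseConfig and createPool, as in examples/symbol-anchor/config.ts.
interface DatabaseConfig {
  host: string;
  port: number;
  database: string;
  maxConnections: number;
}

function createPool(config: DatabaseConfig): void {
  console.log(`Connecting to ${config.host}:${config.port}/${config.database}`);
}

// Hypothetical connection values for illustration.
const cfg: DatabaseConfig = {
  host: "localhost",
  port: 5432,
  database: "app",
  maxConnections: 10,
};

createPool(cfg); // prints: Connecting to localhost:5432/app
```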
diff --git a/examples/symbol-anchor/drift.lock b/examples/symbol-anchor/drift.lock new file mode 100644 index 0000000..9cf782f --- /dev/null +++ b/examples/symbol-anchor/drift.lock @@ -0,0 +1,2 @@ +doc.md -> config.ts#DatabaseConfig sig:a643fafff54d00ed +doc.md -> config.ts#createPool sig:3294e048b6c144ae diff --git a/src/commands/link.zig b/src/commands/link.zig index 796df8e..e9ceb7e 100644 --- a/src/commands/link.zig +++ b/src/commands/link.zig @@ -5,7 +5,9 @@ const markdown = @import("../markdown.zig"); const symbols = @import("../symbols.zig"); const target = @import("../target.zig"); -pub const RunError = error{ DocReadFailed, NoBindingsForDoc, CannotComputeFingerprint, TargetNotFound, HeadingNotFound }; +const stale_context_line_cap = 10; + +pub const RunError = error{ DocReadFailed, NoBindingsForDoc, CannotComputeFingerprint, TargetNotFound, HeadingNotFound, DocUnchanged }; pub fn run( ctx: CommandContext, @@ -13,17 +15,19 @@ pub fn run( stderr_w: *std.io.Writer, doc_path: []const u8, optional_anchor: ?[]const u8, + doc_is_still_accurate: bool, ) !void { const cwd_path = try std.fs.cwd().realpathAlloc(ctx.run_arena, "."); - var lf = try lockfile.discover(ctx.run_arena, ctx.scratch(), cwd_path); + const abs_doc_path = try std.fs.path.resolve(ctx.run_arena, &.{ cwd_path, doc_path }); + const doc_dir = std.fs.path.dirname(abs_doc_path) orelse cwd_path; + var lf = try lockfile.discover(ctx.run_arena, ctx.scratch(), doc_dir); ctx.resetScratch(); - _ = std.fs.cwd().readFileAlloc(ctx.scratch(), doc_path, 1024 * 1024) catch |err| { + const doc_content = std.fs.cwd().readFileAlloc(ctx.run_arena, doc_path, 1024 * 1024) catch |err| { stderr_w.print("error: cannot read '{s}': {s}\n", .{ doc_path, @errorName(err) }) catch {}; return error.DocReadFailed; }; - ctx.resetScratch(); const normalized_doc_path = try normalizeDocPath(ctx, lf.root_path, cwd_path, doc_path); ctx.resetScratch(); @@ -42,10 +46,20 @@ pub fn run( }; ctx.resetScratch(); + const existing_binding = 
findBinding(lf.bindings.items, normalized_doc_path, normalized_target); + const old_sig = if (existing_binding) |b| b.fieldValue("sig") else null; + try upsertBinding(ctx, &lf, cwd_path, normalized_doc_path, normalized_target); - try lockfile.writeFile(&lf, ctx.scratch()); const binding = findBinding(lf.bindings.items, normalized_doc_path, normalized_target).?; + if (isDocGateBlocked(binding, old_sig, doc_is_still_accurate)) { + printStaleContext(ctx, stderr_w, lf.root_path, cwd_path, doc_path, doc_content, binding.target); + if (!promptDocAccurate(stderr_w)) return error.DocUnchanged; + } + binding.removeField("doc"); + + try lockfile.writeFile(&lf, ctx.scratch()); + stdout_w.print("added {s} -> {s}", .{ normalized_doc_path, binding.target }) catch {}; if (binding.fieldValue("sig")) |sig| { stdout_w.print(" sig:{s}", .{sig}) catch {}; @@ -55,9 +69,17 @@ pub fn run( } var relinked_any = false; + var refused_count: usize = 0; for (lf.bindings.items) |*binding| { if (!std.mem.eql(u8, binding.doc_path, normalized_doc_path)) continue; + const old_sig = binding.fieldValue("sig"); try refreshBindingSig(ctx, cwd_path, lf.root_path, binding); + + if (isDocGateBlocked(binding, old_sig, doc_is_still_accurate)) { + refused_count += 1; + } else { + binding.removeField("doc"); + } relinked_any = true; } @@ -66,10 +88,87 @@ pub fn run( return error.NoBindingsForDoc; } + if (refused_count > 0) { + printBlanketRefusal(ctx, stderr_w, lf.root_path, cwd_path, doc_path, doc_content, lf.bindings.items, normalized_doc_path, refused_count); + if (!promptDocAccurate(stderr_w)) return error.DocUnchanged; + } + try lockfile.writeFile(&lf, ctx.scratch()); stdout_w.print("relinked all anchors in {s}\n", .{normalized_doc_path}) catch {}; } +/// In TTY mode, prompt the user to confirm the doc is still accurate. +/// In non-TTY mode, print the refusal message and return false. 
+fn promptDocAccurate(stderr_w: *std.io.Writer) bool { + const stdin = std.fs.File.stdin(); + if (!stdin.isTty()) { + stderr_w.print("refused: target changed since last link.\nReview the doc, then relink with --doc-is-still-accurate.\n", .{}) catch {}; + return false; + } + stderr_w.print("Doc is still accurate? [y/N] ", .{}) catch {}; + stderr_w.flush() catch {}; + var buf: [16]u8 = undefined; + const n = stdin.read(&buf) catch return false; + if (n == 0) return false; + const answer = std.mem.trimRight(u8, buf[0..n], "\r\n \t"); + return answer.len > 0 and (answer[0] == 'y' or answer[0] == 'Y'); +} + +/// Returns true when a relink should be refused: target changed without review. +fn isDocGateBlocked( + binding: *lockfile.Binding, + old_sig: ?[]const u8, + doc_is_still_accurate: bool, +) bool { + if (doc_is_still_accurate) return false; + const os = old_sig orelse return false; + const ns = binding.fieldValue("sig") orelse return true; + return !std.mem.eql(u8, os, ns); +} + +/// Print a consolidated refusal for blanket relink: doc section once, then +/// each refused binding with its target context. 
+fn printBlanketRefusal( + ctx: CommandContext, + stderr_w: *std.io.Writer, + root_path: []const u8, + cwd_path: []const u8, + doc_path: []const u8, + doc_content: []const u8, + bindings: []lockfile.Binding, + normalized_doc_path: []const u8, + refused_count: usize, +) void { + // Print the doc section once using the first binding for this doc + const parsed_first = blk: { + for (bindings) |*b| { + if (std.mem.eql(u8, b.doc_path, normalized_doc_path)) break :blk target.parse(b.target); + } + return; + }; + printDocSection(stderr_w, doc_path, doc_content, parsed_first.identity, parsed_first); + + // Print each binding's target context + for (bindings) |*b| { + if (!std.mem.eql(u8, b.doc_path, normalized_doc_path)) continue; + + const parsed = target.parse(b.target); + if (parsed.isHeading()) { + printHeadingTarget(ctx, stderr_w, root_path, cwd_path, parsed); + } else if (parsed.symbol_name != null) { + printSymbolTarget(ctx, stderr_w, root_path, cwd_path, parsed); + } + + stderr_w.print(" STALE {s}\n", .{b.target}) catch {}; + } + + stderr_w.print("\n{d} stale anchor{s} in {s}\n", .{ + refused_count, + if (refused_count == 1) "" else "s", + doc_path, + }) catch {}; +} + fn upsertBinding( ctx: CommandContext, lf: *lockfile.Lockfile, @@ -87,8 +186,6 @@ fn upsertBinding( .target = try ctx.run_arena.dupe(u8, normalized_target), .metadata = .{}, }; - errdefer binding.metadata.deinit(ctx.run_arena); - try refreshBindingSig(ctx, cwd_path, lf.root_path, &binding); try lf.bindings.append(ctx.run_arena, binding); } @@ -208,3 +305,269 @@ fn findBinding(bindings: []lockfile.Binding, doc_path: []const u8, normalized_ta } return null; } + +/// Print context for a stale relink refusal: the doc section and the current +/// code/heading/commits depending on anchor type. Best-effort — failures are +/// silently ignored so the refusal message always follows. 
+fn printStaleContext( + ctx: CommandContext, + stderr_w: *std.io.Writer, + root_path: []const u8, + cwd_path: []const u8, + doc_path: []const u8, + doc_content: []const u8, + binding_target: []const u8, +) void { + const parsed = target.parse(binding_target); + + // Header line: doc_path -> binding_target STALE + stderr_w.print("\n{s} -> {s} STALE\n", .{ doc_path, binding_target }) catch {}; + + // --- doc section --- + printDocSection(stderr_w, doc_path, doc_content, binding_target, parsed); + + // --- target section (symbol or heading anchors only) --- + if (parsed.isHeading()) { + printHeadingTarget(ctx, stderr_w, root_path, cwd_path, parsed); + } else if (parsed.symbol_name != null) { + printSymbolTarget(ctx, stderr_w, root_path, cwd_path, parsed); + } + + stderr_w.print("\n", .{}) catch {}; +} + +fn printDocSection( + stderr_w: *std.io.Writer, + doc_path: []const u8, + doc_content: []const u8, + binding_target: []const u8, + parsed: target.ParsedTarget, +) void { + const section_text = findDocSectionForTarget(doc_content, binding_target, parsed); + const heading_label = findNearestHeadingAbove(doc_content, section_text); + + if (heading_label) |label| { + stderr_w.print("\n── doc ── {s} ##{s} ──\n", .{ doc_path, label }) catch {}; + } else { + stderr_w.print("\n── doc ── {s} ──\n", .{doc_path}) catch {}; + } + + printCappedLines(stderr_w, section_text); +} + +/// Search doc_content for a line referencing the target. Extract the enclosing +/// heading section. Falls back to the first stale_context_line_cap lines. 
+fn findDocSectionForTarget( + doc_content: []const u8, + binding_target: []const u8, + parsed: target.ParsedTarget, +) []const u8 { + // Try to find a line referencing the target file path or symbol name + const search_terms = [_][]const u8{ + binding_target, + parsed.file_path, + if (parsed.symbol_name) |s| s else "", + }; + + var lines = std.mem.splitScalar(u8, doc_content, '\n'); + var line_start: usize = 0; + var match_offset: ?usize = null; + + while (lines.next()) |line| { + for (search_terms) |term| { + if (term.len > 0 and std.mem.indexOf(u8, line, term) != null) { + match_offset = line_start; + break; + } + } + if (match_offset != null) break; + line_start += line.len + 1; + } + + if (match_offset) |offset| { + return extractSectionAroundOffset(doc_content, offset); + } + + // Fallback: return the beginning of the doc + return firstNLines(doc_content, stale_context_line_cap); +} + +/// Walk backwards from offset to find the nearest heading, then forward to the +/// next heading of equal or higher level (or EOF). 
+fn extractSectionAroundOffset(content: []const u8, offset: usize) []const u8 { + // Find nearest heading above offset + var heading_start: usize = 0; + var heading_level: usize = 0; + var pos: usize = 0; + var line_iter = std.mem.splitScalar(u8, content, '\n'); + + while (line_iter.next()) |line| { + const trimmed = std.mem.trimLeft(u8, line, " \t"); + if (trimmed.len > 0 and trimmed[0] == '#') { + const level = countLeadingChar(trimmed, '#'); + if (level > 0 and level <= 6 and pos <= offset) { + heading_start = pos; + heading_level = level; + } + } + if (pos > offset and heading_level > 0) { + // We've passed the match; now find the end of the section + break; + } + pos += line.len + 1; + } + + if (heading_level == 0) { + return firstNLines(content, stale_context_line_cap); + } + + // Find the end: next heading of equal or higher level + var section_end: usize = content.len; + pos = heading_start; + var past_heading = false; + var iter2 = std.mem.splitScalar(u8, content[heading_start..], '\n'); + + while (iter2.next()) |line| { + if (past_heading) { + const trimmed = std.mem.trimLeft(u8, line, " \t"); + if (trimmed.len > 0 and trimmed[0] == '#') { + const level = countLeadingChar(trimmed, '#'); + if (level > 0 and level <= heading_level) { + section_end = pos; + break; + } + } + } else { + past_heading = true; + } + pos += line.len + 1; + } + + const section = content[heading_start..@min(section_end, content.len)]; + return std.mem.trimRight(u8, section, "\n\r "); +} + +fn countLeadingChar(s: []const u8, c: u8) usize { + var count: usize = 0; + for (s) |ch| { + if (ch == c) count += 1 else break; + } + return count; +} + +/// Find the heading text for the section containing `section_text` within `doc_content`. 
+fn findNearestHeadingAbove(doc_content: []const u8, section_text: []const u8) ?[]const u8 { + // section_text is a slice of doc_content, so we can compute the offset + if (@intFromPtr(section_text.ptr) < @intFromPtr(doc_content.ptr)) return null; + const offset = @intFromPtr(section_text.ptr) - @intFromPtr(doc_content.ptr); + if (offset > doc_content.len) return null; + + const prefix = doc_content[0..offset]; + // Find the last heading line in prefix + var last_heading: ?[]const u8 = null; + var lines = std.mem.splitScalar(u8, prefix, '\n'); + while (lines.next()) |line| { + const trimmed = std.mem.trimLeft(u8, line, " \t"); + if (trimmed.len > 0 and trimmed[0] == '#') { + const hashes = countLeadingChar(trimmed, '#'); + if (hashes > 0 and hashes <= 6 and trimmed.len > hashes) { + last_heading = std.mem.trim(u8, trimmed[hashes..], " \t"); + } + } + } + + // Also check if section_text itself starts with a heading + const first_line_end = std.mem.indexOfScalar(u8, section_text, '\n') orelse section_text.len; + const first_line = std.mem.trimLeft(u8, section_text[0..first_line_end], " \t"); + if (first_line.len > 0 and first_line[0] == '#') { + const hashes = countLeadingChar(first_line, '#'); + if (hashes > 0 and hashes <= 6 and first_line.len > hashes) { + return std.mem.trim(u8, first_line[hashes..], " \t"); + } + } + + return last_heading; +} + +fn printHeadingTarget( + ctx: CommandContext, + stderr_w: *std.io.Writer, + root_path: []const u8, + cwd_path: []const u8, + parsed: target.ParsedTarget, +) void { + const absolute_path = resolveInputPath(ctx, root_path, cwd_path, parsed.file_path) catch return; + const content = readResolvedFile(ctx, absolute_path) catch return; + defer ctx.resetScratch(); + + const symbol = parsed.symbol_name orelse return; + const range = markdown.extractHeadingSectionContent(content, symbol) orelse return; + const section = content[range[0]..range[1]]; + const line_count = countLines(section); + + stderr_w.print("\n── target ── {s} 
({d} lines) ──\n", .{ parsed.identity, line_count }) catch {}; + printCappedLines(stderr_w, section); +} + +fn printSymbolTarget( + ctx: CommandContext, + stderr_w: *std.io.Writer, + root_path: []const u8, + cwd_path: []const u8, + parsed: target.ParsedTarget, +) void { + const absolute_path = resolveInputPath(ctx, root_path, cwd_path, parsed.file_path) catch return; + const content = readResolvedFile(ctx, absolute_path) catch return; + defer ctx.resetScratch(); + + const symbol = parsed.symbol_name orelse return; + const ext = std.fs.path.extension(parsed.file_path); + const lang_query = symbols.languageForExtension(ext) orelse return; + const range = symbols.extractSymbolContent(content, lang_query, symbol) orelse return; + const source = content[range[0]..range[1]]; + const line_count = countLines(source); + + stderr_w.print("\n── code ── {s} ({d} lines) ──\n", .{ parsed.identity, line_count }) catch {}; + printCappedLines(stderr_w, source); +} + +fn printCappedLines(stderr_w: *std.io.Writer, text: []const u8) void { + var lines = std.mem.splitScalar(u8, text, '\n'); + var printed: usize = 0; + + while (lines.next()) |line| { + if (printed >= stale_context_line_cap) { + stderr_w.print(" ... ({d} more lines)\n", .{countLines(lines.rest()) + 1}) catch {}; + return; + } + stderr_w.print("{s}\n", .{line}) catch {}; + printed += 1; + } +} + +fn countLines(text: []const u8) usize { + if (text.len == 0) return 0; + var count: usize = 1; + for (text) |c| { + if (c == '\n') count += 1; + } + // Don't count a trailing newline as an extra line + if (text[text.len - 1] == '\n') count -= 1; + return count; +} + +fn firstNLines(content: []const u8, n: usize) []const u8 { + var end: usize = 0; + var line_count: usize = 0; + for (content, 0..) 
|c, i| { + if (c == '\n') { + line_count += 1; + if (line_count >= n) { + end = i; + break; + } + } + end = i + 1; + } + return content[0..end]; +} diff --git a/src/commands/lint.zig b/src/commands/lint.zig index 482d618..7d80076 100644 --- a/src/commands/lint.zig +++ b/src/commands/lint.zig @@ -317,6 +317,7 @@ fn discoverDocGroups( offset += rel_end + 1; if (!std.mem.endsWith(u8, line, ".md")) continue; + if (hasNestedLockfile(root_path, line, allocator)) continue; _ = try ensureDocGroup(allocator, &docs, line); } @@ -342,6 +343,19 @@ fn discoverDocGroups( return docs; } +/// Check if a relative path has a closer drift.lock than root_path. +/// Returns true if there's an intermediate drift.lock (the file belongs to a nested scope). +fn hasNestedLockfile(root_path: []const u8, rel_path: []const u8, allocator: std.mem.Allocator) bool { + var dir: []const u8 = std.fs.path.dirname(rel_path) orelse return false; + + while (dir.len > 0) { + const candidate = std.fs.path.join(allocator, &.{ root_path, dir, "drift.lock" }) catch return false; + if (pathExists(candidate)) return true; + dir = std.fs.path.dirname(dir) orelse break; + } + return false; +} + fn ensureDocGroup( allocator: std.mem.Allocator, docs: *std.ArrayList(DocGroup), diff --git a/src/commands/unlink.zig b/src/commands/unlink.zig index 612ad54..6481a61 100644 --- a/src/commands/unlink.zig +++ b/src/commands/unlink.zig @@ -8,7 +8,9 @@ pub fn run(ctx: CommandContext, stdout_w: *std.io.Writer, stderr_w: *std.io.Writ const cwd_path = try std.fs.cwd().realpathAlloc(ctx.run_arena, "."); - var lf = try lockfile.discover(ctx.run_arena, ctx.scratch(), cwd_path); + const abs_doc_path = try std.fs.path.resolve(ctx.run_arena, &.{ cwd_path, doc_path }); + const doc_dir = std.fs.path.dirname(abs_doc_path) orelse cwd_path; + var lf = try lockfile.discover(ctx.run_arena, ctx.scratch(), doc_dir); ctx.resetScratch(); if (!lf.exists) return; diff --git a/src/lockfile.zig b/src/lockfile.zig index c6766f3..860b4c4 100644 --- 
diff --git a/src/lockfile.zig b/src/lockfile.zig
--- a/src/lockfile.zig
+++ b/src/lockfile.zig
@@ -17,6 +17,18 @@ pub const Binding = struct {
         return null;
     }
 
+    /// Removes a metadata field by key, if present.
+    pub fn removeField(self: *Binding, key: []const u8) void {
+        var i: usize = 0;
+        while (i < self.metadata.items.len) {
+            if (std.mem.eql(u8, self.metadata.items[i].key, key)) {
+                _ = self.metadata.orderedRemove(i);
+                return;
+            }
+            i += 1;
+        }
+    }
+
     /// Updates or appends a metadata field. On replace, frees the old value with `allocator` before allocating the new slice.
     pub fn setField(self: *Binding, allocator: std.mem.Allocator, key: []const u8, value: []const u8) !void {
         for (self.metadata.items) |*field| {
diff --git a/src/main.zig b/src/main.zig
index d9d000b..6f259f9 100644
--- a/src/main.zig
+++ b/src/main.zig
@@ -51,6 +51,7 @@ const format_params = clap.parseParamsComptime(
 
 const check_params = clap.parseParamsComptime(
     \\--format 
     \\--changed 
+    \\--silent
     \\
 );
@@ -62,6 +63,7 @@ fn parseFormat(maybe_value: ?[]const u8, stderr_w: *std.Io.Writer) lint.Format {
 }
 
 const link_params = clap.parseParamsComptime(
+    \\--doc-is-still-accurate
     \\
     \\
 );
@@ -159,15 +161,22 @@ pub fn main() !void {
             var sub = parseExOrReport(&check_params, clap.parsers.default, allocator, &diag, &stderr_w.interface, &iter, clap_parse_all);
             defer sub.deinit();
             if (iter.next()) |_| {
-                fatal(&stderr_w.interface, "usage: drift check [--format text|json] [--changed ]\n", .{});
+                fatal(&stderr_w.interface, "usage: drift check [--format text|json] [--changed ] [--silent]\n", .{});
             }
             const format = parseFormat(sub.args.format, &stderr_w.interface);
+            const silent = sub.args.silent != 0;
+            var null_buf: [1]u8 = undefined;
+            var null_file = std.fs.openFileAbsolute("/dev/null", .{ .mode = .write_only }) catch
+                fatal(&stderr_w.interface, "error: cannot open /dev/null\n", .{});
+            defer null_file.close();
+            var null_w = null_file.writer(&null_buf);
             var run_arena = std.heap.ArenaAllocator.init(allocator);
             defer run_arena.deinit();
             var scratch_arena = std.heap.ArenaAllocator.init(allocator);
             defer scratch_arena.deinit();
             const ctx = CommandContext{ .run_arena = run_arena.allocator(), .scratch_arena = &scratch_arena };
-            const run_status = lint.run(ctx, &stdout_w.interface, &stderr_w.interface, format, sub.args.changed) catch |err| switch (err) {
+            const out_w = if (silent) &null_w.interface else &stdout_w.interface;
+            const run_status = lint.run(ctx, out_w, &stderr_w.interface, format, sub.args.changed) catch |err| switch (err) {
                 error.LintCheckFailed => {
                     stdout_w.interface.flush() catch {};
                     stderr_w.interface.flush() catch {};
@@ -203,19 +212,31 @@ pub fn main() !void {
         .link => {
            var sub = parseExOrReport(&link_params, link_parsers, allocator, &diag, &stderr_w.interface, &iter, 0);
             defer sub.deinit();
+            var doc_is_still_accurate = sub.args.@"doc-is-still-accurate" != 0;
             const doc_path = sub.positionals[0] orelse {
-                fatal(&stderr_w.interface, "usage: drift link [anchor]\n", .{});
+                fatal(&stderr_w.interface, "usage: drift link [anchor] [--doc-is-still-accurate]\n", .{});
             };
-            const optional_anchor = iter.next();
-            if (iter.next()) |_| {
-                fatal(&stderr_w.interface, "usage: drift link [anchor]\n", .{});
+            // Remaining args after the first positional: optional anchor and/or --doc-is-still-accurate
+            var optional_anchor: ?[]const u8 = null;
+            var has_extra_args = false;
+            while (iter.next()) |arg| {
+                if (std.mem.eql(u8, arg, "--doc-is-still-accurate")) {
+                    doc_is_still_accurate = true;
+                } else if (optional_anchor == null) {
+                    optional_anchor = arg;
+                } else {
+                    has_extra_args = true;
+                }
+            }
+            if (has_extra_args) {
+                fatal(&stderr_w.interface, "usage: drift link [anchor] [--doc-is-still-accurate]\n", .{});
             }
             var run_arena = std.heap.ArenaAllocator.init(allocator);
             defer run_arena.deinit();
             var scratch_arena = std.heap.ArenaAllocator.init(allocator);
             defer scratch_arena.deinit();
             const ctx = CommandContext{ .run_arena = run_arena.allocator(), .scratch_arena = &scratch_arena };
-            link.run(ctx, &stdout_w.interface, &stderr_w.interface, doc_path, optional_anchor) catch |err| switch (err) {
+            link.run(ctx, &stdout_w.interface, &stderr_w.interface, doc_path, optional_anchor, doc_is_still_accurate) catch |err| switch (err) {
                 error.DocReadFailed, error.NoBindingsForDoc => {
                     fatal(&stderr_w.interface, "", .{});
                 },
@@ -225,6 +246,9 @@ pub fn main() !void {
                 error.CannotComputeFingerprint => {
                     fatal(&stderr_w.interface, "error: cannot compute fingerprint for anchor in '{s}'\n", .{doc_path});
                 },
+                error.DocUnchanged => {
+                    fatal(&stderr_w.interface, "", .{});
+                },
                 else => exitWithError(&stderr_w.interface, err),
             };
         },
@@ -276,9 +300,9 @@ fn printUsage(w: *std.io.Writer) void {
     \\Usage: drift [options]
     \\
     \\Commands:
-    \\  check    Check all docs for staleness [--format text|json] [--changed ]
+    \\  check    Check all docs for staleness [--format text|json] [--changed ] [--silent]
     \\  status   Show all docs and their anchors [--format text|json]
-    \\  link     Add anchors to a doc
+    \\  link     Add anchors to a doc [--doc-is-still-accurate]
     \\  unlink   Remove anchors from a doc
     \\  refs     Show which docs reference a target
     \\
diff --git a/src/markdown.zig b/src/markdown.zig
index b56efba..82e3717 100644
--- a/src/markdown.zig
+++ b/src/markdown.zig
@@ -91,6 +91,16 @@ pub fn headingExists(source: []const u8, heading_fragment: []const u8) bool {
     return findHeadingSection(block_tree.rootNode(), source, heading_fragment) != null;
 }
 
+/// Extract the byte range [start, end) of a heading section matching the given fragment.
+/// Returns null if the heading is not found or parsing fails.
+pub fn extractHeadingSectionContent(source: []const u8, heading_fragment: []const u8) ?[2]u32 {
+    const block_tree = parseBlockTree(source) orelse return null;
+    defer block_tree.destroy();
+
+    const section = findHeadingSection(block_tree.rootNode(), source, heading_fragment) orelse return null;
+    return .{ section.startByte(), section.endByte() };
+}
+
 fn parseBlockTree(source: []const u8) ?*ts.Tree {
     const parser = ts.Parser.create();
     defer parser.destroy();
diff --git a/test/helpers.zig b/test/helpers.zig
index 0c48a52..d780b6f 100644
--- a/test/helpers.zig
+++ b/test/helpers.zig
@@ -116,6 +116,23 @@ pub const TempRepo = struct {
         return runProcess(self.allocator, argv, self.abs_path);
     }
 
+    /// Run the drift binary with given arguments, cwd set to a subdirectory of the temp repo.
+    pub fn runDriftFromSubdir(self: *TempRepo, subdir: []const u8, args: []const []const u8) !ExecResult {
+        const drift_bin = build_options.drift_bin;
+
+        var argv_buf: [17][]const u8 = undefined;
+        argv_buf[0] = drift_bin;
+        for (args, 0..) |arg, i| {
+            argv_buf[i + 1] = arg;
+        }
+        const argv = argv_buf[0 .. args.len + 1];
+
+        const sub_path = try std.fs.path.join(self.allocator, &.{ self.abs_path, subdir });
+        defer self.allocator.free(sub_path);
+
+        return runProcess(self.allocator, argv, sub_path);
+    }
+
     /// Get the short commit hash of HEAD. Caller owns returned memory.
     pub fn getHeadRevision(self: *TempRepo, allocator: std.mem.Allocator) ![]const u8 {
         const result = try runProcess(allocator, &.{ "git", "rev-parse", "--short", "HEAD" }, self.abs_path);
diff --git a/test/integration/link_test.zig b/test/integration/link_test.zig
index 324d230..f740e48 100644
--- a/test/integration/link_test.zig
+++ b/test/integration/link_test.zig
@@ -10,7 +10,7 @@ test "link exits non-zero when required arguments are missing" {
     defer result.deinit(allocator);
 
     try helpers.expectExitCode(result.term, 1);
-    try helpers.expectContains(result.stderr, "usage: drift link [anchor]");
+    try helpers.expectContains(result.stderr, "usage: drift link ");
 }
 
 test "link adds new file binding to drift.lock" {
@@ -31,6 +31,7 @@ test "link adds new file binding to drift.lock" {
     const lock_content = try repo.readFile("drift.lock");
     defer allocator.free(lock_content);
     try helpers.expectContains(lock_content, "docs/doc.md -> src/new.ts sig:");
+    try helpers.expectNotContains(lock_content, "doc:");
 
     const doc_content = try repo.readFile("docs/doc.md");
     defer allocator.free(doc_content);
@@ -76,7 +77,6 @@ test "link stores markdown heading bindings using slug fragments" {
     try helpers.expectContains(lock_content, "docs/overview.md -> docs/auth.md#token-validation sig:");
 }
 
-
 test "link rejects missing markdown heading target" {
     const allocator = std.testing.allocator;
     var repo = try helpers.TempRepo.init(allocator);
     defer repo.cleanup();
@@ -113,7 +113,54 @@ test "link round-trips slugged markdown heading bindings through lint" {
     try helpers.expectContains(lint_result.stdout, "ok");
 }
 
-test "link blanket mode refreshes sigs for existing bindings" {
+test "link blanket mode refuses relink when doc unchanged" {
+    const allocator = std.testing.allocator;
+    var repo = try helpers.TempRepo.init(allocator);
+    defer repo.cleanup();
+
+    try repo.writeFile("docs/doc.md", "# Doc\n");
+    try repo.writeFile("src/main.ts", "export const value = 1;\n");
+    try repo.commit("add doc and source");
+
+    const first_link = try repo.runDrift(&.{ "link", "docs/doc.md", "src/main.ts" });
+    defer first_link.deinit(allocator);
+    try helpers.expectExitCode(first_link.term, 0);
+    try repo.commit("create lockfile binding");
+
+    try repo.writeFile("src/main.ts", "export const value = 2;\n");
+
+    const result = try repo.runDrift(&.{ "link", "docs/doc.md" });
+    defer result.deinit(allocator);
+    try helpers.expectExitCode(result.term, 1);
+    try helpers.expectContains(result.stderr, "refused:");
+    try helpers.expectContains(result.stderr, "--doc-is-still-accurate");
+}
+
+test "link blanket mode refuses relink even when doc changed" {
+    const allocator = std.testing.allocator;
+    var repo = try helpers.TempRepo.init(allocator);
+    defer repo.cleanup();
+
+    try repo.writeFile("docs/doc.md", "# Doc\n");
+    try repo.writeFile("src/main.ts", "export const value = 1;\n");
+    try repo.commit("add doc and source");
+
+    const first_link = try repo.runDrift(&.{ "link", "docs/doc.md", "src/main.ts" });
+    defer first_link.deinit(allocator);
+    try helpers.expectExitCode(first_link.term, 0);
+    try repo.commit("create lockfile binding");
+
+    try repo.writeFile("src/main.ts", "export const value = 2;\n");
+    try repo.writeFile("docs/doc.md", "# Doc\nUpdated content.\n");
+
+    const result = try repo.runDrift(&.{ "link", "docs/doc.md" });
+    defer result.deinit(allocator);
+    try helpers.expectExitCode(result.term, 1);
+    try helpers.expectContains(result.stderr, "refused:");
+    try helpers.expectContains(result.stderr, "--doc-is-still-accurate");
+}
+
+test "link blanket mode relinks with --doc-is-still-accurate override" {
     const allocator = std.testing.allocator;
     var repo = try helpers.TempRepo.init(allocator);
     defer repo.cleanup();
@@ -132,7 +179,7 @@ test "link blanket mode refreshes sigs for existing bindings" {
 
     try repo.writeFile("src/main.ts", "export const value = 2;\n");
 
-    const result = try repo.runDrift(&.{ "link", "docs/doc.md" });
+    const result = try repo.runDrift(&.{ "link", "docs/doc.md", "--doc-is-still-accurate" });
     defer result.deinit(allocator);
     try helpers.expectExitCode(result.term, 0);
     try helpers.expectContains(result.stdout, "relinked all anchors in docs/doc.md");
@@ -158,3 +205,55 @@ test "link no longer migrates legacy frontmatter anchors" {
     try helpers.expectExitCode(result.term, 1);
     try helpers.expectContains(result.stderr, "no bindings found for docs/doc.md");
 }
+
+test "link uses nested drift.lock when doc is in nested scope" {
+    const allocator = std.testing.allocator;
+    var repo = try helpers.TempRepo.init(allocator);
+    defer repo.cleanup();
+
+    try repo.writeFile("drift.lock", "");
+    try repo.writeFile("nested/drift.lock", "");
+    try repo.writeFile("nested/doc.md", "# Nested\n");
+    try repo.writeFile("nested/code.ts", "export const value = 1;\n");
+    try repo.commit("add root and nested scope");
+
+    // Run link from root, but doc is in nested/ — should write to nested/drift.lock
+    const result = try repo.runDrift(&.{ "link", "nested/doc.md", "nested/code.ts" });
+    defer result.deinit(allocator);
+
+    try helpers.expectExitCode(result.term, 0);
+    try helpers.expectContains(result.stdout, "added doc.md -> code.ts sig:");
+
+    // Verify binding is in nested/drift.lock, NOT root drift.lock
+    const nested_lock = try repo.readFile("nested/drift.lock");
+    defer allocator.free(nested_lock);
+    try helpers.expectContains(nested_lock, "doc.md -> code.ts sig:");
+
+    const root_lock = try repo.readFile("drift.lock");
+    defer allocator.free(root_lock);
+    try std.testing.expectEqualStrings("", root_lock);
+}
+
+test "unlink uses nested drift.lock when doc is in nested scope" {
+    const allocator = std.testing.allocator;
+    var repo = try helpers.TempRepo.init(allocator);
+    defer repo.cleanup();
+
+    try repo.writeFile("drift.lock", "");
+    // Binding paths are relative to the lockfile root: nested/drift.lock stores doc.md -> code.ts
+    try repo.writeFile("nested/drift.lock", "doc.md -> code.ts sig:deadbeefdeadbeef\n");
+    try repo.writeFile("nested/doc.md", "# Nested\n");
+    try repo.writeFile("nested/code.ts", "export const value = 1;\n");
+    try repo.commit("add root and nested scope with binding");
+
+    // Run unlink from root, but doc is in nested/ — should use nested/drift.lock
+    const result = try repo.runDrift(&.{ "unlink", "nested/doc.md", "nested/code.ts" });
+    defer result.deinit(allocator);
+
+    try helpers.expectExitCode(result.term, 0);
+    try helpers.expectContains(result.stdout, "removed doc.md -> code.ts from drift.lock");
+
+    const nested_lock = try repo.readFile("nested/drift.lock");
+    defer allocator.free(nested_lock);
+    try helpers.expectNotContains(nested_lock, "code.ts");
+}
diff --git a/test/integration/lint_test.zig b/test/integration/lint_test.zig
index 614540b..73c7cd9 100644
--- a/test/integration/lint_test.zig
+++ b/test/integration/lint_test.zig
@@ -798,3 +798,46 @@ test "lint --format json works as alias" {
     try helpers.validateDriftCheckJson(allocator, result.stdout);
     try helpers.expectContains(result.stdout, "drift.check.v1");
 }
+
+test "check from root skips docs in nested drift.lock scope" {
+    const allocator = std.testing.allocator;
+    var repo = try helpers.TempRepo.init(allocator);
+    defer repo.cleanup();
+
+    // Create root lockfile and a nested lockfile in nested/
+    try repo.writeFile("drift.lock", "");
+    try repo.writeFile("docs/root.md", "# Root\n");
+    try repo.writeFile("nested/drift.lock", "");
+    try repo.writeFile("nested/doc.md", "# Nested\n\nSee [missing](missing.md).\n");
+    try repo.commit("add root and nested scope");
+
+    // From root: should NOT report the broken link in nested/doc.md
+    const result = try repo.runDrift(&.{"check"});
+    defer result.deinit(allocator);
+
+    try helpers.expectExitCode(result.term, 0);
+    try helpers.expectNotContains(result.stdout, "nested/doc.md");
+    try helpers.expectNotContains(result.stdout, "BROKEN");
+}
+
+test "check from nested subdir with its own drift.lock only checks that scope" {
+    const allocator = std.testing.allocator;
+    var repo = try helpers.TempRepo.init(allocator);
+    defer repo.cleanup();
+
+    try repo.writeFile("drift.lock", "");
+    try repo.writeFile("docs/root.md", "# Root\n\nSee [missing](missing.md).\n");
+    try repo.writeFile("nested/drift.lock", "");
+    try repo.writeFile("nested/doc.md", "# Nested\n\nSee [also-missing](also-missing.md).\n");
+    try repo.commit("add root and nested scope");
+
+    // From nested/: should report the broken link in nested/doc.md
+    const result = try repo.runDriftFromSubdir("nested", &.{"check"});
+    defer result.deinit(allocator);
+
+    try helpers.expectExitCode(result.term, 1);
+    try helpers.expectContains(result.stdout, "doc.md");
+    try helpers.expectContains(result.stdout, "BROKEN");
+    // Should NOT contain docs from root scope
+    try helpers.expectNotContains(result.stdout, "docs/root.md");
+}