$$$$$$$\ $$\ $$\ $$$$$$\ $$\
$$ __$$\ $$ | $$ | $$ __$$\ $$ |
$$ / \__|$$\ $$\ $$$$$$$\ $$$$$$\ $$$$$$$\ $$ / \__| $$$$$$\ $$$$$$$ | $$$$$$\
\$$$$$$\ $$ | $$ |$$ __$$\\_$$ _| $$ __$$\ $$ | $$ __$$\ $$ __$$ |$$ __$$\
\____$$\ $$ | $$ |$$ | $$ | $$ | $$ | $$ |$$ | $$ / $$ |$$ / $$ |$$$$$$$$ |
$$\ $$ |$$ | $$ |$$ | $$ | $$ |$$\ $$ | $$ |$$ | $$\ $$ | $$ |$$ | $$ |$$ ____|
\$$$$$$ |\$$$$$$$ |$$ | $$ | \$$$$ |$$ | $$ |\$$$$$$ |\$$$$$$ |\$$$$$$$ |\$$$$$$$\
\______/ \____$$ |\__| \__| \____/ \__| \__| \______/ \______/ \_______| \_______|
$$\ $$ |
\$$$$$$ |
\______/
A neurosymbolic coding platform. Six symbolic gates verify every LLM output in ~3-8ms.
The terminal is the correct interface.
```sh
# TUI -- the full experience
npx @avasis-ai/synthcode-tui@latest

# Framework -- programmatic API
bun add @avasis-ai/synthcode
```

| Gate | Verifies | Latency |
|---|---|---|
| Structure | AST well-formedness, syntax validity | ~1ms |
| Scope | Variable bindings, lexical resolution | ~2ms |
| Type | Type consistency, inference chains | ~3ms |
| Safety | Side-effect boundaries, mutation control | ~2ms |
| Control Flow | Reachability, termination guarantees | ~3ms |
| Semantic | Logical coherence, intent alignment | ~5ms |
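Conceptually, the gates form a short-circuiting pipeline: each gate is a predicate over the candidate output, and the first failure rejects it. The sketch below illustrates that control flow only; the gate names mirror the table, but the checks are toy heuristics, not SynthCode's real verifiers.

```typescript
// Illustrative stand-in for the six-gate pipeline: each gate is a predicate
// over a code string, run in order; the first failing gate rejects the output.
// The checks here are toy heuristics, NOT SynthCode's actual verifiers.
type Gate = { name: string; check: (code: string) => boolean };

function balancedBraces(code: string): boolean {
  let depth = 0;
  for (const ch of code) {
    if (ch === "{") depth++;
    else if (ch === "}") depth--;
    if (depth < 0) return false; // closing brace before any opener
  }
  return depth === 0;
}

const gates: Gate[] = [
  { name: "Structure", check: balancedBraces },
  { name: "Scope", check: () => true },                        // placeholder
  { name: "Type", check: () => true },                         // placeholder
  { name: "Safety", check: (c) => !/\beval\s*\(/.test(c) },    // toy side-effect check
  { name: "ControlFlow", check: () => true },                  // placeholder
  { name: "Semantic", check: () => true },                     // placeholder
];

function runGates(code: string): { verdict: "ACCEPT" | "REJECT"; failedAt?: string } {
  for (const gate of gates) {
    if (!gate.check(code)) return { verdict: "REJECT", failedAt: gate.name };
  }
  return { verdict: "ACCEPT" };
}
```

Because the pipeline short-circuits, a rejection reports which gate failed, which is what powers the inline gate feedback in the TUI.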
- Dual-path verification -- neural output passes six symbolic gates before touching your codebase
- Zero-dependency framework -- 10KB gzipped, no runtime deps
- Agentic chat -- up to 15 autonomous rounds with inline gate feedback
- Six screen modes -- chat, gates, code view, world model, trust boundary, playground
- Provider-agnostic -- Gemini, Groq, OpenRouter, OpenAI, Ollama
- 269 tests -- 300+ consecutive CI runs, zero failures
- OpenTUI engine -- built on Zig+TS for native terminal performance
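The provider-agnostic design boils down to programming against one interface so the agent never branches on the vendor. A minimal sketch of that idea, with stub providers standing in for real Gemini/Groq/OpenAI/Ollama clients (the interface and method names here are hypothetical, not SynthCode's actual API):

```typescript
// Hypothetical provider interface: any backend that can complete a prompt
// is interchangeable from the agent's point of view.
interface LLMProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

// Stub factory standing in for real provider clients.
const makeStub = (name: string): LLMProvider => ({
  name,
  complete: async (prompt) => `[${name}] echo: ${prompt}`,
});

// Agent-side code depends only on the interface, so swapping
// providers is a one-line change at construction time.
async function ask(provider: LLMProvider, prompt: string): Promise<string> {
  return provider.complete(prompt);
}
```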
TUI:

```sh
npx @avasis-ai/synthcode-tui@latest
```

Framework:

```ts
import { Agent, BashTool, DualPathVerifier } from "@avasis-ai/synthcode";
import { OllamaProvider } from "@avasis-ai/synthcode/llm";

const agent = new Agent({
  model: new OllamaProvider({ model: "qwen3:32b" }),
  tools: [BashTool],
  dualPathVerifier: new DualPathVerifier(),
});

for await (const event of agent.run("List all TypeScript files in src/")) {
  if (event.type === "text") process.stdout.write(event.text);
}
```

```
 LLM Output
     |
     v
+----------+   +-------+   +------+   +--------+   +------------+   +----------+
| Structure|-->| Scope |-->| Type |-->| Safety |-->| ControlFlow|-->| Semantic |
+----------+   +-------+   +------+   +--------+   +------------+   +----------+
     |                                                                   |
     v                                                                   v
  REJECT                                                              ACCEPT
```
```ts
import {
  Agent, BashTool, FileReadTool, FileWriteTool,
  DualPathVerifier, WorldModel, CostTracker, CircuitBreaker,
  AnthropicProvider, OpenAIProvider, OllamaProvider,
} from "@avasis-ai/synthcode";

const agent = new Agent({
  model: new AnthropicProvider({ model: "claude-sonnet-4-20250514" }),
  tools: [BashTool, FileReadTool, FileWriteTool],
  dualPathVerifier: new DualPathVerifier(),
  costTracker: new CostTracker(),
});
```

```sh
npx @avasis-ai/synthcode "Explain this codebase"              # auto-detect
npx @avasis-ai/synthcode "Refactor this" --ollama qwen3:32b   # local
npx @avasis-ai/synthcode adapt catalog                        # 30+ models
```

SynthCode is designed for autonomous agents that run 24/7. Here are common patterns:
Autonomous agents must handle failures gracefully:
```ts
import { createResilientTool, ErrorLogger, CircuitBreaker, BashTool } from "@avasis-ai/synthcode";

// Wrap tools with retry logic and circuit breaking
const bashTool = createResilientTool(
  new BashTool(),
  new ErrorLogger(),
  new CircuitBreaker(),
  { maxRetries: 3 },
);

// The agent continues even when tools fail temporarily
```

See examples/error-handling.ts for a complete implementation with:
- Automatic retry with exponential backoff
- Circuit breaker pattern for failing tools
- Structured error logging for debugging
- Graceful degradation when tools are unavailable
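The retry and circuit-breaker bullets above can be sketched generically. This is an illustrative version of the two patterns, not SynthCode's internal implementation:

```typescript
// Retry with exponential backoff: double the wait after each failure,
// give up after maxRetries attempts.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 100,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err;      // out of retries: surface the error
      const delay = baseDelayMs * 2 ** attempt;  // 100ms, 200ms, 400ms, ...
      await new Promise((r) => setTimeout(r, delay));
    }
  }
}

// Tiny circuit breaker: after `threshold` consecutive failures, stop calling
// the underlying tool entirely until a success resets the count.
class Breaker {
  private failures = 0;
  constructor(private threshold = 5) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.failures >= this.threshold) throw new Error("circuit open");
    try {
      const out = await fn();
      this.failures = 0; // success closes the circuit again
      return out;
    } catch (err) {
      this.failures++;
      throw err;
    }
  }
}
```

Combining the two (backoff inside, breaker outside) gives a tool wrapper that tolerates transient failures without hammering a tool that is persistently down.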
Track agent performance over time:
```ts
import { Agent, BashTool, FileReadTool, CostTracker, AnthropicProvider } from "@avasis-ai/synthcode";

const costTracker = new CostTracker();
const agent = new Agent({
  model: new AnthropicProvider({ model: "claude-3-5-sonnet-20241022" }),
  tools: [BashTool, FileReadTool],
  costTracker, // Tracks tokens, cost, and usage metrics
});

// Get stats anytime
const stats = costTracker.getStats();
console.log(`Total cost: $${stats.totalCost}`);
```

Persistent memory for long-running agents:
```ts
import { Agent, BashTool, AnthropicProvider } from "@avasis-ai/synthcode";
import { SQLiteStore } from "@avasis-ai/synthcode/memory";

const memory = new SQLiteStore({ path: "./agent-memory.db" });

// Agent remembers conversations across restarts
const agent = new Agent({
  model: new AnthropicProvider({ model: "claude-3-5-sonnet-20241022" }),
  tools: [BashTool],
  memory, // Persist context to disk
});
```

Run agents continuously:
```ts
import { agentLoop } from "@avasis-ai/synthcode";

for await (const result of agentLoop(agent, {
  maxTurns: 15,
  timeout: 5 * 60 * 1000, // 5 minute timeout
  onTurn: async (turn, events) => {
    // Hook into each turn for monitoring
    console.log(`Turn ${turn}:`, events.length, "events");
  },
})) {
  if (result.done) break;
  // Continue loop
}
```

- examples/autonomous-agent.ts - CI/CD automation and issue fixing
- examples/error-handling.ts - Resilient error handling patterns
- examples/coding-agent.ts - Codebase refactoring
- examples/basic.ts - Simple tool usage
- examples/demo.ts - Full feature demonstration
| | SynthCode | Claude Code | Cursor | Aider |
|---|---|---|---|---|
| Symbolic verification | Yes | No | No | No |
| Dual-path gates | Yes | No | No | No |
| Zero dependencies | 10KB | No | No | No |
| Terminal-native TUI | Yes | Yes | No | Yes |
| Provider-agnostic | 5+ | No | Partial | Partial |
| Open source | MIT | No | No | Apache |
avasis-ai/synthcode -- MIT License -- Built by Avasis AI