Configure agents. Chain models. Ship results.
AgentForge is a multi-LLM orchestrator. You set up agents with different roles — each powered by a different AI model. Then you pick a pipeline mode and let them work together.
Most AI tools give you one model, one prompt, one output. AgentForge gives you a team.
- Chef decomposes the task → Dev executes → Reviewer validates
- Or: Chef plans → Workers execute in parallel → Chef approves
- Or: linear chain A → B → C with manual gates between steps
Each agent has its own LLM (Claude, Gemini, or GPT), its own role, its own system prompt. You control the pipeline.
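An agent boils down to a small record: a name, a provider/model pair, and a system prompt. A minimal sketch (field names here are this sketch's assumptions, not AgentForge's actual internal schema):

```javascript
// Illustrative agent definitions; field names are assumptions,
// not AgentForge's real schema (see src/lib/utils.js for the defaults).
const agents = [
  { name: 'Chef',     provider: 'claude', model: 'claude-sonnet-4',
    systemPrompt: 'Decompose the task into subtasks; validate the final result.' },
  { name: 'Dev',      provider: 'gemini', model: 'gemini-2.0-flash',
    systemPrompt: 'Implement each subtask as working code.' },
  { name: 'Reviewer', provider: 'gpt',    model: 'gpt-4o',
    systemPrompt: 'Audit the code for bugs and security issues.' },
];

// A pipeline is then just an ordered walk over this list.
const pipeline = agents.map((a) => a.name).join(' → ');
console.log(pipeline); // Chef → Dev → Reviewer
```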
Say you want to build a REST API with auth. You type a single task, and AgentForge:
- Chef (Claude) breaks it into: schema design, route implementation, middleware, tests
- Dev (Gemini) writes the code for each
- Reviewer (GPT-4) checks for security issues
- Chef validates — or sends back for corrections
Done in under 2 minutes.
| Feature | Description |
|---|---|
| Multi-agent | Add N agents, each with their own LLM and role |
| 4 pipeline modes | Linear, Full Auto, Manual, Chef-Worker |
| Live visualization | See which agent is active, watch data flow |
| Real API calls | Claude, Gemini, and GPT all work through a backend proxy |
| Manual gates | Approve/reject between pipeline steps |
| Auto loop | Chef/worker loops until chef says "VALIDATED" |
| Task decomposition | Chef breaks task into subtasks, workers execute |
| Code rendering | Messages render code blocks with syntax styling |
```bash
git clone https://github.com/Vitalcheffe/agentforge.git
cd agentforge
npm install
cp .env.example .env
# Add your API keys to .env
npm start
```

Linear:

```
Task → [Chef] → [Dev] → [Reviewer] → Result
```
Full Auto:

```
Task → [Chef: plan] → [Dev: execute] → [Chef: validate]
           ↑________ re-loop if not approved ________↓
```
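The Full Auto re-loop amounts to re-prompting until the Chef's verdict contains "VALIDATED". A sketch, where `callAgent` is a placeholder for the real per-agent LLM call and `maxRounds` is a safety cap this sketch adds:

```javascript
// Hypothetical auto loop; `callAgent(agentName, prompt)` stands in
// for AgentForge's real LLM call through the backend proxy.
async function autoLoop(task, callAgent, maxRounds = 5) {
  let plan = await callAgent('Chef', `Plan: ${task}`);
  for (let round = 0; round < maxRounds; round++) {
    const result = await callAgent('Dev', plan);
    const verdict = await callAgent('Chef', `Validate: ${result}`);
    if (verdict.includes('VALIDATED')) return result; // Chef approved
    plan = verdict; // Chef's feedback seeds the next round
  }
  throw new Error('Chef never validated within the round limit');
}
```

The round cap matters: without it, a Chef that never says "VALIDATED" burns API calls forever.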
Manual: same as Linear, but you click "Approve" between each step.
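A gate is just an awaited confirmation between steps. A sketch, where `callAgent` and `askApproval` are placeholders (not AgentForge APIs); in the real UI, `askApproval` resolves when you click Approve or Reject:

```javascript
// Hypothetical gated linear pipeline; both callbacks are placeholders.
async function runGated(agents, task, callAgent, askApproval) {
  let payload = task;
  for (const agent of agents) {
    payload = await callAgent(agent, payload);
    // Execution blocks here until the user decides.
    if (!(await askApproval(agent, payload))) {
      throw new Error(`Rejected at ${agent}`);
    }
  }
  return payload;
}
```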
Chef-Worker:

```
Task → [Chef: decompose]
            ↓         ↓
      [Worker 1]  [Worker 2]
            ↓         ↓
       [Chef: validate]
```
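The fan-out step maps naturally onto `Promise.all`. A sketch, where `callWorker` is a placeholder for the per-worker LLM call and subtasks are distributed round-robin (the real scheduling strategy may differ):

```javascript
// Hypothetical parallel fan-out: each subtask is assigned to a worker
// round-robin; `callWorker(worker, subtask)` is a placeholder.
async function fanOut(subtasks, workers, callWorker) {
  const jobs = subtasks.map((sub, i) =>
    callWorker(workers[i % workers.length], sub));
  return Promise.all(jobs); // resolves in subtask order
}
```

Because the calls run concurrently, total latency is roughly the slowest worker's, not the sum of all of them.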
agentforge/
├── src/
│ ├── components/
│ │ └── AgentForge.jsx # Main orchestrator UI
│ ├── lib/
│ │ ├── api.js # LLM proxy client
│ │ ├── constants.js # Providers, modes, roles
│ │ └── utils.js # Helpers, default agents
│ ├── styles/
│ │ └── forge.css # Dark theme CSS
│ └── main.jsx # Entry point
├── server/
│ └── index.js # Express API proxy
├── package.json
├── vercel.json # Vercel deployment config
└── vite.config.js
| Provider | Models | Status |
|---|---|---|
| Claude | Sonnet 4, Opus 4.5, Haiku 4.5 | ✅ Live |
| Gemini | 2.0 Flash, 1.5 Pro, 1.5 Flash | ✅ Live |
| GPT | GPT-4o, GPT-4 Turbo, GPT-3.5 | ✅ Live |
| Ollama | Any local model | 🔧 Add yourself |
Want Ollama or a local model? Add a case to the provider switch in `server/index.js`. The `stream: false` flag and `message.content` field below follow Ollama's `/api/chat` API; forward the reply in whatever response shape the existing provider cases use:

```javascript
case 'ollama': {
  const ollamaRes = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: model || 'llama3',
      messages: [{ role: 'system', content: systemPrompt }, ...messages],
      stream: false, // Ollama streams by default; disable for one JSON reply
    }),
  });
  const ollamaData = await ollamaRes.json();
  // ollamaData.message.content now holds the model's reply.
  break;
}
```

```bash
npm run dev      # Frontend only (Vite)
npm run server   # Backend only (Express)
npm start        # Both (concurrently)
```

MIT