
AgentForge

Configure agents. Chain models. Ship results.



[Screenshot: AgentForge UI]

AgentForge is a multi-LLM orchestrator. You set up agents with different roles — each powered by a different AI model. Then you pick a pipeline mode and let them work together.

Why

Most AI tools give you one model, one prompt, one output. AgentForge gives you a team.

  • Chef decomposes the task → Dev executes → Reviewer validates
  • Or: Chef plans → Workers execute in parallel → Chef approves
  • Or: linear chain A → B → C with manual gates between steps

Each agent has its own LLM (Claude, Gemini, or GPT), its own role, its own system prompt. You control the pipeline.
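
An agent boils down to a small config object. A sketch of that idea (field names here are illustrative, not AgentForge's actual schema — the real defaults live in src/lib/utils.js and src/lib/constants.js):

```javascript
// Hypothetical agent definitions -- field names are illustrative,
// not the exact shape AgentForge uses internally.
const agents = [
  { id: 'chef',     provider: 'claude', model: 'claude-sonnet-4',
    systemPrompt: 'Decompose the task into concrete subtasks.' },
  { id: 'dev',      provider: 'gemini', model: 'gemini-2.0-flash',
    systemPrompt: 'Implement each subtask as working code.' },
  { id: 'reviewer', provider: 'gpt',    model: 'gpt-4o',
    systemPrompt: 'Check the code for security issues.' },
];

// A pipeline step feeds the previous agent's output to the next one
// as a fresh user message.
function stepInput(previousOutput) {
  return [{ role: 'user', content: previousOutput }];
}
```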

Real-world example

You want to build a REST API with auth. You type one task. AgentForge:

  1. Chef (Claude) breaks it into: schema design, route implementation, middleware, tests
  2. Dev (Gemini) writes the code for each
  3. Reviewer (GPT-4) checks for security issues
  4. Chef validates — or sends back for corrections

Done in under 2 minutes.

Features

| Feature | Description |
| --- | --- |
| Multi-agent | Add N agents, each with their own LLM and role |
| 4 pipeline modes | Linear, Full Auto, Manual, Chef-Worker |
| Live visualization | See which agent is active, watch data flow |
| Real API calls | Claude, Gemini, and GPT all work through the backend proxy |
| Manual gates | Approve/reject between pipeline steps |
| Auto loop | Chef/worker loop runs until the chef says "VALIDATED" |
| Task decomposition | Chef breaks the task into subtasks, workers execute them |
| Code rendering | Messages render code blocks with syntax styling |

Quick start

git clone https://github.com/Vitalcheffe/agentforge.git
cd agentforge
npm install
cp .env.example .env
# Add your API keys to .env
npm start

Open http://localhost:5173

One-click deploy

Deploy with Vercel

Pipeline modes

Linear

Task → [Chef] → [Dev] → [Reviewer] → Result
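
A Linear run is just a sequential fold over the agent list. A minimal sketch, where `callAgent` stands in for the real proxy call:

```javascript
// Linear mode sketch: each agent consumes the previous agent's output.
// callAgent(agent, input) is a stand-in for the real LLM proxy call.
async function runLinear(task, agents, callAgent) {
  let output = task;
  for (const agent of agents) {
    output = await callAgent(agent, output);
  }
  return output; // the last agent's reply is the pipeline result
}
```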

Full Auto

Task → [Chef: plan] → [Dev: execute] → [Chef: validate]
                    ↑___re-loop if not approved___↓
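
The re-loop above can be sketched as a bounded retry: the chef plans, the dev executes, and the chef re-reviews until its reply contains "VALIDATED" (function names are illustrative; `callAgent` stands in for the real proxy call):

```javascript
// Full Auto sketch: loop dev -> chef until the chef's verdict
// contains "VALIDATED", with a round cap so it cannot spin forever.
async function runFullAuto(task, callAgent, maxRounds = 5) {
  let plan = await callAgent('chef', `Plan this task: ${task}`);
  let result = '';
  for (let round = 0; round < maxRounds; round++) {
    result = await callAgent('dev', `Execute this plan:\n${plan}`);
    const verdict = await callAgent('chef', `Validate this result:\n${result}`);
    if (verdict.includes('VALIDATED')) return result;
    plan = verdict; // chef's feedback becomes next round's instructions
  }
  return result; // give up after maxRounds, return the latest attempt
}
```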

Manual

Same as Linear, but you click "Approve" between each step.

Chef-Worker

Task → [Chef: decompose]
           ↓         ↓
      [Worker 1] [Worker 2]
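
The fan-out above amounts to decompose, run workers in parallel, merge. A sketch under those assumptions (`decompose` and `callWorker` stand in for the real chef and worker calls):

```javascript
// Chef-Worker sketch: the chef's decomposition becomes a list of
// subtasks, dispatched round-robin across workers via Promise.all.
async function runChefWorker(task, decompose, callWorker, workerCount = 2) {
  const subtasks = await decompose(task);            // chef: decompose
  const results = await Promise.all(                 // workers: in parallel
    subtasks.map((sub, i) => callWorker(i % workerCount, sub))
  );
  return results.join('\n');                         // merge for chef review
}
```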

Architecture

agentforge/
├── src/
│   ├── components/
│   │   └── AgentForge.jsx    # Main orchestrator UI
│   ├── lib/
│   │   ├── api.js            # LLM proxy client
│   │   ├── constants.js      # Providers, modes, roles
│   │   └── utils.js          # Helpers, default agents
│   ├── styles/
│   │   └── forge.css         # Dark theme CSS
│   └── main.jsx              # Entry point
├── server/
│   └── index.js              # Express API proxy
├── package.json
├── vercel.json               # Vercel deployment config
└── vite.config.js

Supported LLMs

| Provider | Models | Status |
| --- | --- | --- |
| Claude | Sonnet 4, Opus 4.5, Haiku 4.5 | ✅ Live |
| Gemini | 2.0 Flash, 1.5 Pro, 1.5 Flash | ✅ Live |
| GPT | GPT-4o, GPT-4 Turbo, GPT-3.5 | ✅ Live |
| Ollama | Any local model | 🔧 Add yourself |

Self-hosted LLM

Want Ollama or another local model? Add a case in server/index.js. The braces keep the const scoped to the case, and stream: false is needed because Ollama streams by default; return the reply text the same way the existing provider cases do:

case 'ollama': {
  const ollamaRes = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: model || 'llama3',
      stream: false, // get a single JSON reply instead of a stream
      messages: [{ role: 'system', content: systemPrompt }, ...messages],
    }),
  });
  const data = await ollamaRes.json();
  // Ollama's /api/chat puts the reply text in data.message.content
  return data.message.content;
}

Development

npm run dev      # Frontend only (Vite)
npm run server   # Backend only (Express)
npm start        # Both (concurrently)

License

MIT
