
Praison AI


PraisonAI is a production-ready Multi-AI Agents framework with self-reflection, designed to create AI agents that automate and solve problems ranging from simple tasks to complex challenges. By integrating PraisonAI Agents, AG2 (formerly AutoGen), and CrewAI into a low-code solution, it streamlines the building and management of multi-agent LLM systems, emphasising simplicity, customisation, and effective human-agent collaboration.

Key Features

Feature Code Docs
πŸš€ Single Agent Example πŸ“–
🀝 Multi Agents Example πŸ“–
πŸ€– Auto Agents Example πŸ“–
πŸ”„ Self Reflection AI Agents Example πŸ“–
🧠 Reasoning AI Agents Example πŸ“–
πŸ‘οΈ Multi Modal AI Agents Example πŸ“–
🎭 AI Agent Workflow Example πŸ“–
πŸ“š Add Custom Knowledge Example πŸ“–
🧠 Memory (Short & Long Term) Example πŸ“–
πŸ“„ Chat with PDF Agents Example πŸ“–
πŸ’» Code Interpreter Agents Example πŸ“–
πŸ“š RAG Agents Example πŸ“–
πŸ€” Async & Parallel Processing Example πŸ“–
πŸ”’ Math Agents Example πŸ“–
🎯 Structured Output Agents Example πŸ“–
πŸ”— LangChain Integrated Agents Example πŸ“–
πŸ“ž Callback Agents Example πŸ“–
πŸ› οΈ 100+ Custom Tools Example πŸ“–
πŸ“„ YAML Configuration Example πŸ“–
πŸ’― 100+ LLM Support Example πŸ“–
πŸ”¬ Deep Research Agents Example πŸ“–
πŸ”„ Query Rewriter Agent Example πŸ“–
🌐 Native Web Search Example πŸ“–
πŸ“₯ Web Fetch (Anthropic) Example πŸ“–
πŸ’Ύ Prompt Caching Example πŸ“–
🧠 Claude Memory Tool Example πŸ“–
πŸ’Ύ File-Based Memory Example πŸ“–
πŸ” Built-in Search Tools Example πŸ“–
πŸ“‹ Planning Mode Example πŸ“–
πŸ”§ Planning Tools Example πŸ“–
🧠 Planning Reasoning Example πŸ“–
πŸ”Œ MCP Transports Example πŸ“–
🌐 WebSocket MCP Example πŸ“–
πŸ” MCP Security Example πŸ“–
πŸ”„ MCP Resumability Example πŸ“–
⚑ Fast Context Example πŸ“–
πŸ–ΌοΈ Image Generation Agent Example πŸ“–
πŸ“· Image to Text Agent Example πŸ“–
🎬 Video Agent Example πŸ“–
πŸ“Š Data Analyst Agent Example πŸ“–
πŸ’° Finance Agent Example πŸ“–
πŸ›’ Shopping Agent Example πŸ“–
⭐ Recommendation Agent Example πŸ“–
πŸ“– Wikipedia Agent Example πŸ“–
πŸ’» Programming Agent Example πŸ“–
πŸ“ Markdown Agent Example πŸ“–
πŸ“ Prompt Expander Agent Example πŸ“–
πŸ”€ Router Agent Example πŸ“–
⛓️ Prompt Chaining Example πŸ“–
πŸ” Evaluator Optimiser Example πŸ“–
πŸ‘· Orchestrator Workers Example πŸ“–
⚑ Parallelisation Example πŸ“–
πŸ” Repetitive Agents Example πŸ“–
🀝 Agent Handoffs Example πŸ“–
πŸ›‘οΈ Guardrails Example πŸ“–
πŸ’¬ Sessions Management Example πŸ“–
βœ… Human Approval Example πŸ“–
πŸ”„ Stateful Agents Example πŸ“–
πŸ€– Autonomous Workflow Example πŸ“–
πŸ“œ Rules & Instructions Example πŸ“–
πŸͺ Hooks Example πŸ“–
πŸ“ˆ Telemetry Example πŸ“–
πŸ“Ή Camera Integration Example πŸ“–

Supported Providers

Provider Example
OpenAI Example
Anthropic Example
Google Gemini Example
Ollama Example
Groq Example
DeepSeek Example
xAI Grok Example
Mistral Example
Cohere Example
Perplexity Example
Fireworks Example
Together AI Example
OpenRouter Example
HuggingFace Example
Azure OpenAI Example
AWS Bedrock Example
Google Vertex Example
Databricks Example
Cloudflare Example
AI21 Example
Replicate Example
SageMaker Example
Moonshot Example
vLLM Example

Using Python Code

A lightweight package dedicated to coding:

pip install praisonaiagents
export OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxx

1. Single Agent

Create an app.py file and add the code below:

from praisonaiagents import Agent
agent = Agent(instructions="You are a helpful AI assistant")
agent.start("Write a movie script about a robot on Mars")

Run:

python app.py

2. Multi Agents

Create an app.py file and add the code below:

from praisonaiagents import Agent, PraisonAIAgents

research_agent = Agent(instructions="Research about AI")
summarise_agent = Agent(instructions="Summarise research agent's findings")
agents = PraisonAIAgents(agents=[research_agent, summarise_agent])
agents.start()

Run:

python app.py

3. Agent with Planning Mode

Enable planning for any agent, which then creates a plan and executes it step by step:

from praisonaiagents import Agent

def search_web(query: str) -> str:
    return f"Search results for: {query}"

agent = Agent(
    name="AI Assistant",
    instructions="Research and write about topics",
    planning=True,              # Enable planning mode
    planning_tools=[search_web], # Tools for planning research
    planning_reasoning=True      # Chain-of-thought reasoning
)

result = agent.start("Research AI trends in 2025 and write a summary")

What happens:

  1. πŸ“‹ Agent creates a multi-step plan
  2. πŸš€ Executes each step sequentially
  3. πŸ“Š Shows progress with context passing
  4. βœ… Returns final result
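
Conceptually, planning mode is a plan-then-execute loop. The sketch below is a library-free illustration of those four steps; `make_plan` and `run_step` are hypothetical stand-ins for the LLM planning call and the per-step execution that the real Agent performs internally:

```python
def make_plan(goal: str) -> list[str]:
    # Stand-in for the LLM planning call: break the goal into steps.
    return [f"Research: {goal}", f"Outline findings: {goal}", f"Write summary: {goal}"]

def run_step(step: str, context: str) -> str:
    # Stand-in for executing one step with the accumulated context.
    return f"[done] {step} (context so far: {len(context)} chars)"

def plan_and_execute(goal: str) -> str:
    context = ""
    for step in make_plan(goal):          # 1. create a multi-step plan
        result = run_step(step, context)  # 2. execute each step sequentially
        context += result + "\n"          # 3. pass context forward
    return context                        # 4. return the final result

print(plan_and_execute("AI trends in 2025"))
```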

4. Deep Research Agent

Automated research with real-time streaming, web search, and citations using OpenAI or Gemini Deep Research APIs.

from praisonaiagents import DeepResearchAgent

# OpenAI Deep Research
agent = DeepResearchAgent(
    model="o4-mini-deep-research",  # or "o3-deep-research"
    verbose=True
)

result = agent.research("What are the latest AI trends in 2025?")
print(result.report)
print(f"Citations: {len(result.citations)}")

# Gemini Deep Research
from praisonaiagents import DeepResearchAgent

agent = DeepResearchAgent(
    model="deep-research-pro",  # Auto-detected as Gemini
    verbose=True
)

result = agent.research("Research quantum computing advances")
print(result.report)

Features:

  • πŸ” Multi-provider support (OpenAI, Gemini, LiteLLM)
  • πŸ“‘ Real-time streaming with reasoning summaries
  • πŸ“š Structured citations with URLs
  • πŸ› οΈ Built-in tools: web search, code interpreter, MCP, file search
  • πŸ”„ Automatic provider detection from model name

5. Query Rewriter Agent

Transform user queries to improve RAG retrieval quality using multiple strategies.

from praisonaiagents import QueryRewriterAgent, RewriteStrategy

agent = QueryRewriterAgent(model="gpt-4o-mini")

# Basic - expands abbreviations, adds context
result = agent.rewrite("AI trends")
print(result.primary_query)  # "What are the current trends in Artificial Intelligence?"

# HyDE - generates hypothetical document for semantic matching
result = agent.rewrite("What is quantum computing?", strategy=RewriteStrategy.HYDE)

# Step-back - generates broader context question
result = agent.rewrite("GPT-4 vs Claude 3?", strategy=RewriteStrategy.STEP_BACK)

# Sub-queries - decomposes complex questions
result = agent.rewrite("RAG setup and best embedding models?", strategy=RewriteStrategy.SUB_QUERIES)

# Contextual - resolves references using chat history
result = agent.rewrite("What about cost?", chat_history=[...])

Strategies:

  • BASIC: Expand abbreviations, fix typos, add context
  • HYDE: Generate hypothetical document for semantic matching
  • STEP_BACK: Generate higher-level concept questions
  • SUB_QUERIES: Decompose multi-part questions
  • MULTI_QUERY: Generate multiple paraphrased versions
  • CONTEXTUAL: Resolve references using conversation history
  • AUTO: Automatically detect best strategy

6. Agent Memory (Zero Dependencies)

Enable persistent memory for agents - works out of the box without any extra packages.

from praisonaiagents import Agent
from praisonaiagents.memory import FileMemory

# Enable memory with a single parameter
agent = Agent(
    name="Personal Assistant",
    instructions="You are a helpful assistant that remembers user preferences.",
    memory=True,  # Enables file-based memory (no extra deps!)
    user_id="user123"  # Isolate memory per user
)

# Memory is automatically injected into conversations
result = agent.start("My name is John and I prefer Python")
# Agent will remember this for future conversations

Memory Types:

  • Short-term: Rolling buffer of recent context (auto-expires)
  • Long-term: Persistent important facts (sorted by importance)
  • Entity: People, places, organizations with attributes
  • Episodic: Date-based interaction history

Advanced Features:

from praisonaiagents.memory import FileMemory

memory = FileMemory(user_id="user123")

# Session Save/Resume
memory.save_session("project_session", conversation_history=[...])
memory.resume_session("project_session")

# Context Compression
memory.compress(llm_func=lambda p: agent.chat(p), max_items=10)

# Checkpointing
memory.create_checkpoint("before_refactor", include_files=["main.py"])
memory.restore_checkpoint("before_refactor", restore_files=True)

# Slash Commands
memory.handle_command("/memory show")
memory.handle_command("/memory save my_session")

Storage Options:

Option Dependencies Description
memory=True None File-based JSON storage (default)
memory="file" None Explicit file-based storage
memory="sqlite" None (built-in sqlite3) SQLite with indexing
memory="chromadb" chromadb Vector/semantic search

7. Rules & Instructions

PraisonAI auto-discovers instruction files from your project root and git root:

File Description Priority
PRAISON.md PraisonAI native instructions High
PRAISON.local.md Local overrides (gitignored) Higher
CLAUDE.md Claude Code memory file High
CLAUDE.local.md Local overrides (gitignored) Higher
AGENTS.md OpenAI Codex CLI instructions High
GEMINI.md Gemini CLI memory file High
.cursorrules Cursor IDE rules High
.windsurfrules Windsurf IDE rules High
.claude/rules/*.md Claude Code modular rules Medium
.windsurf/rules/*.md Windsurf modular rules Medium
.cursor/rules/*.mdc Cursor modular rules Medium
.praison/rules/*.md Workspace rules Medium
~/.praison/rules/*.md Global rules Low

from praisonaiagents import Agent

# Agent auto-discovers CLAUDE.md, AGENTS.md, GEMINI.md, etc.
agent = Agent(name="Assistant", instructions="You are helpful.")
# Rules are injected into system prompt automatically

@Import Syntax:

# CLAUDE.md
See @README for project overview
See @docs/architecture.md for system design
@~/.praison/my-preferences.md
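
The @import mechanism is essentially a line-level include. A rough, simplified sketch of the expansion step (the real loader also resolves `~` paths and guards against recursion; this is not the actual implementation):

```python
def expand_imports(text: str, files: dict[str, str]) -> str:
    """Replace lines like '@docs/arch.md' with that file's contents."""
    out = []
    for line in text.splitlines():
        ref = line.strip()
        if ref.startswith("@") and ref[1:] in files:
            out.append(files[ref[1:]])  # inline the referenced file
        else:
            out.append(line)
    return "\n".join(out)

memory = "Project rules:\n@docs/style.md"
files = {"docs/style.md": "- Use type hints\n- Follow PEP 8"}
print(expand_imports(memory, files))
```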

Rule File Format (with YAML frontmatter):

---
description: Python coding guidelines
globs: ["**/*.py"]
activation: always  # always, glob, manual, ai_decision
---

# Guidelines
- Use type hints
- Follow PEP 8
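
Frontmatter is just a `---`-delimited block before the markdown body. A minimal stdlib-only splitter that parses the simple `key: value` fields naively (a real loader would use a YAML library):

```python
def split_frontmatter(text: str) -> tuple[dict, str]:
    """Split '---'-delimited frontmatter from the markdown body."""
    if not text.startswith("---"):
        return {}, text
    _, raw, body = text.split("---", 2)
    meta = {}
    for line in raw.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.split("#")[0].strip()  # drop inline comments
    return meta, body.strip()

rule = """---
description: Python coding guidelines
activation: always  # always, glob, manual, ai_decision
---

# Guidelines
- Use type hints
"""
meta, body = split_frontmatter(rule)
print(meta["activation"], "|", body.splitlines()[0])
```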

8. Auto-Generated Memories

from praisonaiagents.memory import FileMemory, AutoMemory

memory = FileMemory(user_id="user123")
auto = AutoMemory(memory, enabled=True)

# Automatically extracts and stores memories from conversations
memories = auto.process_interaction(
    "My name is John and I prefer Python for backend work"
)
# Extracts: name="John", preference="Python for backend"

9. Workflows

Create reusable multi-step workflows in .praison/workflows/:

from praisonaiagents.memory import WorkflowManager

manager = WorkflowManager()

# Execute a workflow
result = manager.execute(
    "deploy",
    executor=lambda prompt: agent.chat(prompt),
    variables={"environment": "production"}
)

10. Hooks

Configure in .praison/hooks.json:

from praisonaiagents.memory import HooksManager

hooks = HooksManager()

# Register Python hooks
hooks.register("pre_write_code", lambda ctx: print(f"Writing {ctx['file']}"))

# Execute hooks
result = hooks.execute("pre_write_code", {"file": "main.py"})

Using No Code

Auto Mode:

pip install praisonai
export OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxx
praisonai --auto create a movie script about robots on Mars

Query Rewriting (works with any command):

# Rewrite query for better results (uses QueryRewriterAgent)
praisonai "AI trends" --query-rewrite

# Rewrite with search tools (agent decides when to search)
praisonai "latest developments" --query-rewrite --rewrite-tools "internet_search"

# Works with any prompt
praisonai "explain quantum computing" --query-rewrite -v

Deep Research CLI:

# Default: OpenAI (o4-mini-deep-research)
praisonai research "What are the latest AI trends in 2025?"

# Use Gemini
praisonai research --model deep-research-pro "Your research query"

# Rewrite query before research
praisonai research --query-rewrite "AI trends"

# Rewrite with search tools
praisonai research --query-rewrite --rewrite-tools "internet_search" "AI trends"

# Use custom tools from file (gathers context before deep research)
praisonai research --tools tools.py "Your research query"
praisonai research -t my_tools.py "Your research query"

# Use built-in tools by name (comma-separated)
praisonai research --tools "internet_search,wiki_search" "Your query"
praisonai research -t "yfinance,calculator_tools" "Stock analysis query"

# Save output to file (output/research/{query}.md)
praisonai research --save "Your research query"
praisonai research -s "Your research query"

# Combine options
praisonai research --query-rewrite --tools tools.py --save "Your research query"

# Verbose mode (show debug logs)
praisonai research -v "Your research query"

Planning Mode CLI:

# Enable planning mode - agent creates a plan before execution
praisonai "Research AI trends and write a summary" --planning

# Planning with tools for research
praisonai "Analyze market trends" --planning --planning-tools tools.py

# Planning with chain-of-thought reasoning
praisonai "Complex analysis task" --planning --planning-reasoning

# Auto-approve plans without confirmation
praisonai "Task" --planning --auto-approve-plan

Memory CLI:

# Enable memory for agent (persists across sessions)
praisonai "My name is John" --memory

# Memory with user isolation
praisonai "Remember my preferences" --memory --user-id user123

# Memory management commands
praisonai memory show                      # Show memory statistics
praisonai memory add "User prefers Python" # Add to long-term memory
praisonai memory search "Python"           # Search memories
praisonai memory clear                     # Clear short-term memory
praisonai memory clear all                 # Clear all memory
praisonai memory save my_session           # Save session
praisonai memory resume my_session         # Resume session
praisonai memory sessions                  # List saved sessions
praisonai memory checkpoint                # Create checkpoint
praisonai memory restore <checkpoint_id>   # Restore checkpoint
praisonai memory checkpoints               # List checkpoints
praisonai memory help                      # Show all commands

Rules CLI:

# List all loaded rules (from PRAISON.md, CLAUDE.md, etc.)
praisonai rules list

# Show specific rule details
praisonai rules show <rule_name>

# Create a new rule
praisonai rules create my_rule "Always use type hints"

# Delete a rule
praisonai rules delete my_rule

# Show rules statistics
praisonai rules stats

# Include manual rules with prompts
praisonai "Task" --include-rules security,testing

Workflow CLI:

# List available workflows
praisonai workflow list

# Execute a workflow
praisonai workflow run deploy

# Execute with variables
praisonai workflow run deploy --workflow-var environment=staging --workflow-var branch=main

# Show workflow details
praisonai workflow show deploy

# Create a new workflow template
praisonai workflow create my_workflow

Hooks CLI:

# List configured hooks
praisonai hooks list

# Show hooks statistics
praisonai hooks stats

# Create hooks.json template
praisonai hooks init

Claude Memory Tool CLI:

# Enable Claude Memory Tool (Anthropic models only)
praisonai "Research and remember findings" --claude-memory --llm anthropic/claude-sonnet-4-20250514

Guardrail CLI:

# Validate output with LLM guardrail
praisonai "Write code" --guardrail "Ensure code is secure and follows best practices"

# Combine with other flags
praisonai "Generate SQL query" --guardrail "No DROP or DELETE statements" --save

Metrics CLI:

# Display token usage and cost metrics
praisonai "Analyze this data" --metrics

# Combine with other features
praisonai "Complex task" --metrics --planning

Image Processing CLI:

# Process images with vision-based tasks
praisonai "Describe this image" --image path/to/image.png

# Analyze image content
praisonai "What objects are in this photo?" --image photo.jpg --llm openai/gpt-4o

Telemetry CLI:

# Enable usage monitoring and analytics
praisonai "Task" --telemetry

# Combine with metrics for full observability
praisonai "Complex analysis" --telemetry --metrics

MCP (Model Context Protocol) CLI:

# Use MCP server tools
praisonai "Search files" --mcp "npx -y @modelcontextprotocol/server-filesystem ."

# MCP with environment variables
praisonai "Search web" --mcp "npx -y @modelcontextprotocol/server-brave-search" --mcp-env "BRAVE_API_KEY=your_key"

# Multiple MCP options
praisonai "Task" --mcp "npx server" --mcp-env "KEY1=value1,KEY2=value2"

Fast Context CLI:

# Search codebase for relevant context
praisonai "Find authentication code" --fast-context ./src

# Add code context to any task
praisonai "Explain this function" --fast-context /path/to/project

Knowledge CLI:

# Add documents to knowledge base
praisonai knowledge add document.pdf
praisonai knowledge add ./docs/

# Search knowledge base
praisonai knowledge search "API authentication"

# List indexed documents
praisonai knowledge list

# Clear knowledge base
praisonai knowledge clear

# Show knowledge base info
praisonai knowledge info

# Show all commands
praisonai knowledge help

Session CLI:

# Start a new session
praisonai session start my-project

# List all sessions
praisonai session list

# Resume a session
praisonai session resume my-project

# Show session details
praisonai session show my-project

# Delete a session
praisonai session delete my-project

# Show all commands
praisonai session help

Tools CLI:

# List all available tools
praisonai tools list

# Get info about a specific tool
praisonai tools info internet_search

# Search for tools
praisonai tools search "web"

# Show all commands
praisonai tools help

Handoff CLI:

# Enable agent-to-agent task delegation
praisonai "Research and write article" --handoff "researcher,writer,editor"

# Complex multi-agent workflow
praisonai "Analyze data and create report" --handoff "analyst,visualizer,writer"

Auto Memory CLI:

# Enable automatic memory extraction
praisonai "Learn about user preferences" --auto-memory

# Combine with user isolation
praisonai "Remember my settings" --auto-memory --user-id user123

Todo CLI:

# Generate todo list from task
praisonai "Plan the project" --todo

# Add a todo item
praisonai todo add "Implement feature X"

# List all todos
praisonai todo list

# Complete a todo
praisonai todo complete 1

# Delete a todo
praisonai todo delete 1

# Clear all todos
praisonai todo clear

# Show all commands
praisonai todo help

Router CLI:

# Auto-select best model based on task complexity
praisonai "Simple question" --router

# Specify preferred provider
praisonai "Complex analysis" --router --router-provider anthropic

# Router automatically selects:
# - Simple tasks β†’ gpt-4o-mini, claude-3-haiku
# - Complex tasks β†’ gpt-4-turbo, claude-3-opus
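
The routing idea is a cheap complexity estimate that gates which model tier handles the prompt. The heuristic below is hypothetical; the actual router's signals and model table may differ:

```python
SIMPLE_MODELS = {"openai": "gpt-4o-mini", "anthropic": "claude-3-haiku"}
COMPLEX_MODELS = {"openai": "gpt-4-turbo", "anthropic": "claude-3-opus"}

def route(prompt: str, provider: str = "openai") -> str:
    """Pick a model tier from a rough complexity estimate."""
    hard_words = {"analyze", "analyse", "prove", "design", "complex", "multi-step"}
    is_complex = len(prompt.split()) > 30 or any(w in prompt.lower() for w in hard_words)
    table = COMPLEX_MODELS if is_complex else SIMPLE_MODELS
    return table[provider]

print(route("Simple question"))
print(route("Complex analysis", provider="anthropic"))
```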

Flow Display CLI:

# Enable visual workflow tracking
praisonai agents.yaml --flow-display

# Combine with other features
praisonai "Multi-step task" --planning --flow-display

Prompt Expansion CLI:

# Expand a short prompt into detailed prompt
praisonai "write a movie script in 3 lines" --expand-prompt

# With verbose output
praisonai "blog about AI" --expand-prompt -v

# With tools for context gathering
praisonai "latest AI trends" --expand-prompt --expand-tools tools.py

# Combine with query rewrite
praisonai "AI news" --query-rewrite --expand-prompt

Using JavaScript Code

npm install praisonai
export OPENAI_API_KEY=xxxxxxxxxxxxxxxxxxxxxx

const { Agent } = require('praisonai');
const agent = new Agent({ instructions: 'You are a helpful AI assistant' });
agent.start('Write a movie script about a robot on Mars');

PraisonAI CLI Demo

Star History

Star History Chart

AI Agents Flow

graph LR
    %% Define the main flow
    Start([β–Ά Start]) --> Agent1
    Agent1 --> Process[βš™ Process]
    Process --> Agent2
    Agent2 --> Output([βœ“ Output])
    Process -.-> Agent1
    
    %% Define subgraphs for agents and their tasks
    subgraph Agent1[ ]
        Task1[πŸ“‹ Task]
        AgentIcon1[πŸ€– AI Agent]
        Tools1[πŸ”§ Tools]
        
        Task1 --- AgentIcon1
        AgentIcon1 --- Tools1
    end
    
    subgraph Agent2[ ]
        Task2[πŸ“‹ Task]
        AgentIcon2[πŸ€– AI Agent]
        Tools2[πŸ”§ Tools]
        
        Task2 --- AgentIcon2
        AgentIcon2 --- Tools2
    end

    classDef input fill:#8B0000,stroke:#7C90A0,color:#fff
    classDef process fill:#189AB4,stroke:#7C90A0,color:#fff
    classDef tools fill:#2E8B57,stroke:#7C90A0,color:#fff
    classDef transparent fill:none,stroke:none

    class Start,Output,Task1,Task2 input
    class Process,AgentIcon1,AgentIcon2 process
    class Tools1,Tools2 tools
    class Agent1,Agent2 transparent

AI Agents with Tools

Create AI agents that can use tools to interact with external systems and perform actions.

flowchart TB
    subgraph Tools
        direction TB
        T3[Internet Search]
        T1[Code Execution]
        T2[Formatting]
    end

    Input[Input] ---> Agents
    subgraph Agents
        direction LR
        A1[Agent 1]
        A2[Agent 2]
        A3[Agent 3]
    end
    Agents ---> Output[Output]

    T3 --> A1
    T1 --> A2
    T2 --> A3

    style Tools fill:#189AB4,color:#fff
    style Agents fill:#8B0000,color:#fff
    style Input fill:#8B0000,color:#fff
    style Output fill:#8B0000,color:#fff

AI Agents with Memory

Create AI agents with memory capabilities for maintaining context and information across tasks.

flowchart TB
    subgraph Memory
        direction TB
        STM[Short Term]
        LTM[Long Term]
    end

    subgraph Store
        direction TB
        DB[(Vector DB)]
    end

    Input[Input] ---> Agents
    subgraph Agents
        direction LR
        A1[Agent 1]
        A2[Agent 2]
        A3[Agent 3]
    end
    Agents ---> Output[Output]

    Memory <--> Store
    Store <--> A1
    Store <--> A2
    Store <--> A3

    style Memory fill:#189AB4,color:#fff
    style Store fill:#2E8B57,color:#fff
    style Agents fill:#8B0000,color:#fff
    style Input fill:#8B0000,color:#fff
    style Output fill:#8B0000,color:#fff

AI Agents with Different Processes

Sequential Process

The simplest form of task execution where tasks are performed one after another.

graph LR
    Input[Input] --> A1
    subgraph Agents
        direction LR
        A1[Agent 1] --> A2[Agent 2] --> A3[Agent 3]
    end
    A3 --> Output[Output]

    classDef input fill:#8B0000,stroke:#7C90A0,color:#fff
    classDef process fill:#189AB4,stroke:#7C90A0,color:#fff
    classDef transparent fill:none,stroke:none

    class Input,Output input
    class A1,A2,A3 process
    class Agents transparent

Hierarchical Process

Uses a manager agent to coordinate task execution and agent assignments.

graph TB
    Input[Input] --> Manager
    
    subgraph Agents
        Manager[Manager Agent]
        
        subgraph Workers
            direction LR
            W1[Worker 1]
            W2[Worker 2]
            W3[Worker 3]
        end
        
        Manager --> W1
        Manager --> W2
        Manager --> W3
    end
    
    W1 --> Manager
    W2 --> Manager
    W3 --> Manager
    Manager --> Output[Output]

    classDef input fill:#8B0000,stroke:#7C90A0,color:#fff
    classDef process fill:#189AB4,stroke:#7C90A0,color:#fff
    classDef transparent fill:none,stroke:none

    class Input,Output input
    class Manager,W1,W2,W3 process
    class Agents,Workers transparent

Workflow Process

Advanced process type supporting complex task relationships and conditional execution.

graph LR
    Input[Input] --> Start
    
    subgraph Workflow
        direction LR
        Start[Start] --> C1{Condition}
        C1 --> |Yes| A1[Agent 1]
        C1 --> |No| A2[Agent 2]
        A1 --> Join
        A2 --> Join
        Join --> A3[Agent 3]
    end
    
    A3 --> Output[Output]

    classDef input fill:#8B0000,stroke:#7C90A0,color:#fff
    classDef process fill:#189AB4,stroke:#7C90A0,color:#fff
    classDef decision fill:#2E8B57,stroke:#7C90A0,color:#fff
    classDef transparent fill:none,stroke:none

    class Input,Output input
    class Start,A1,A2,A3,Join process
    class C1 decision
    class Workflow transparent

Agentic Routing Workflow

Create AI agents that can dynamically route tasks to specialized LLM instances.

flowchart LR
    In[In] --> Router[LLM Call Router]
    Router --> LLM1[LLM Call 1]
    Router --> LLM2[LLM Call 2]
    Router --> LLM3[LLM Call 3]
    LLM1 --> Out[Out]
    LLM2 --> Out
    LLM3 --> Out
    
    style In fill:#8B0000,color:#fff
    style Router fill:#2E8B57,color:#fff
    style LLM1 fill:#2E8B57,color:#fff
    style LLM2 fill:#2E8B57,color:#fff
    style LLM3 fill:#2E8B57,color:#fff
    style Out fill:#8B0000,color:#fff

Agentic Orchestrator Worker

Create AI agents that orchestrate and distribute tasks among specialized workers.

flowchart LR
    In[In] --> Router[LLM Call Router]
    Router --> LLM1[LLM Call 1]
    Router --> LLM2[LLM Call 2]
    Router --> LLM3[LLM Call 3]
    LLM1 --> Synthesizer[Synthesizer]
    LLM2 --> Synthesizer
    LLM3 --> Synthesizer
    Synthesizer --> Out[Out]
    
    style In fill:#8B0000,color:#fff
    style Router fill:#2E8B57,color:#fff
    style LLM1 fill:#2E8B57,color:#fff
    style LLM2 fill:#2E8B57,color:#fff
    style LLM3 fill:#2E8B57,color:#fff
    style Synthesizer fill:#2E8B57,color:#fff
    style Out fill:#8B0000,color:#fff

Agentic Autonomous Workflow

Create AI agents that can autonomously monitor, act, and adapt based on environment feedback.

flowchart LR
    Human[Human] <--> LLM[LLM Call]
    LLM -->|ACTION| Environment[Environment]
    Environment -->|FEEDBACK| LLM
    LLM --> Stop[Stop]
    
    style Human fill:#8B0000,color:#fff
    style LLM fill:#2E8B57,color:#fff
    style Environment fill:#8B0000,color:#fff
    style Stop fill:#333,color:#fff

Agentic Parallelization

Create AI agents that can execute tasks in parallel for improved performance.

flowchart LR
    In[In] --> LLM2[LLM Call 2]
    In --> LLM1[LLM Call 1]
    In --> LLM3[LLM Call 3]
    LLM1 --> Aggregator[Aggregator]
    LLM2 --> Aggregator
    LLM3 --> Aggregator
    Aggregator --> Out[Out]
    
    style In fill:#8B0000,color:#fff
    style LLM1 fill:#2E8B57,color:#fff
    style LLM2 fill:#2E8B57,color:#fff
    style LLM3 fill:#2E8B57,color:#fff
    style Aggregator fill:#fff,color:#000
    style Out fill:#8B0000,color:#fff

Agentic Prompt Chaining

Create AI agents with sequential prompt chaining for complex workflows.

flowchart LR
    In[In] --> LLM1[LLM Call 1] --> Gate{Gate}
    Gate -->|Pass| LLM2[LLM Call 2] -->|Output 2| LLM3[LLM Call 3] --> Out[Out]
    Gate -->|Fail| Exit[Exit]
    
    style In fill:#8B0000,color:#fff
    style LLM1 fill:#2E8B57,color:#fff
    style LLM2 fill:#2E8B57,color:#fff
    style LLM3 fill:#2E8B57,color:#fff
    style Out fill:#8B0000,color:#fff
    style Exit fill:#8B0000,color:#fff

Agentic Evaluator Optimizer

Create AI agents that can generate and optimize solutions through iterative feedback.

flowchart LR
    In[In] --> Generator[LLM Call Generator] 
    Generator -->|SOLUTION| Evaluator[LLM Call Evaluator] -->|ACCEPTED| Out[Out]
    Evaluator -->|REJECTED + FEEDBACK| Generator
    
    style In fill:#8B0000,color:#fff
    style Generator fill:#2E8B57,color:#fff
    style Evaluator fill:#2E8B57,color:#fff
    style Out fill:#8B0000,color:#fff

Repetitive Agents

Create AI agents that can efficiently handle repetitive tasks through automated loops.

flowchart LR
    In[Input] --> LoopAgent[("Looping Agent")]
    LoopAgent --> Task[Task]
    Task --> |Next iteration| LoopAgent
    Task --> |Done| Out[Output]
    
    style In fill:#8B0000,color:#fff
    style LoopAgent fill:#2E8B57,color:#fff,shape:circle
    style Task fill:#2E8B57,color:#fff
    style Out fill:#8B0000,color:#fff

Adding Models

Ollama Integration

export OPENAI_BASE_URL=http://localhost:11434/v1

Groq Integration

Replace xxxxxxxxxxx with your Groq API key:

export OPENAI_API_KEY=xxxxxxxxxxx
export OPENAI_BASE_URL=https://api.groq.com/openai/v1

No Code Options

Agents Playbook

Simple Playbook Example

Create an agents.yaml file and add the code below:

framework: praisonai
topic: Artificial Intelligence
roles:
  screenwriter:
    backstory: "Skilled in crafting scripts with engaging dialogue about {topic}."
    goal: Create scripts from concepts.
    role: Screenwriter
    tasks:
      scriptwriting_task:
        description: "Develop scripts with compelling characters and dialogue about {topic}."
        expected_output: "Complete script ready for production."

To run the playbook:

praisonai agents.yaml

Use 100+ Models

Custom Tools

Using @tool Decorator

from praisonaiagents import Agent, tool

@tool
def search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

@tool
def calculate(expression: str) -> float:
    """Evaluate a math expression."""
    # Note: eval is unsafe on untrusted input; prefer ast.literal_eval
    # or a dedicated math parser in production.
    return eval(expression)

agent = Agent(
    instructions="You are a helpful assistant",
    tools=[search, calculate]
)
agent.start("Search for AI news and calculate 15*4")

Using BaseTool Class

from praisonaiagents import Agent, BaseTool

class WeatherTool(BaseTool):
    name = "weather"
    description = "Get current weather for a location"
    
    def run(self, location: str) -> str:
        return f"Weather in {location}: 72Β°F, Sunny"

agent = Agent(
    instructions="You are a weather assistant",
    tools=[WeatherTool()]
)
agent.start("What's the weather in Paris?")

Creating a Tool Package (pip installable)

# pyproject.toml
[project]
name = "my-praisonai-tools"
version = "1.0.0"
dependencies = ["praisonaiagents"]

[project.entry-points."praisonaiagents.tools"]
my_tool = "my_package:MyTool"

# my_package/__init__.py
from praisonaiagents import BaseTool

class MyTool(BaseTool):
    name = "my_tool"
    description = "My custom tool"
    
    def run(self, param: str) -> str:
        return f"Result: {param}"

After pip install, tools are auto-discovered:

agent = Agent(tools=["my_tool"])  # Works automatically!

Prompt Expansion

Expand short prompts into detailed, actionable prompts:

CLI Usage

# Expand a short prompt into detailed prompt
praisonai "write a movie script in 3 lines" --expand-prompt

# With verbose output
praisonai "blog about AI" --expand-prompt -v

# With tools for context gathering
praisonai "latest AI trends" --expand-prompt --expand-tools tools.py

# Combine with query rewrite
praisonai "AI news" --query-rewrite --expand-prompt

Programmatic Usage

from praisonaiagents import PromptExpanderAgent, ExpandStrategy

# Basic usage
agent = PromptExpanderAgent()
result = agent.expand("write a movie script in 3 lines")
print(result.expanded_prompt)

# With specific strategy
result = agent.expand("blog about AI", strategy=ExpandStrategy.DETAILED)

# Available strategies: BASIC, DETAILED, STRUCTURED, CREATIVE, AUTO

Key Difference:

  • --query-rewrite: Optimizes queries for search/retrieval (RAG)
  • --expand-prompt: Expands prompts for detailed task execution

Web Search, Web Fetch & Prompt Caching

CLI Usage

# Web Search - Get real-time information
praisonai "What are the latest AI news today?" --web-search --llm openai/gpt-4o-search-preview

# Web Fetch - Retrieve and analyze URL content (Anthropic only)
praisonai "Summarize https://docs.praison.ai" --web-fetch --llm anthropic/claude-sonnet-4-20250514

# Prompt Caching - Reduce costs for repeated prompts
praisonai "Analyze this document..." --prompt-caching --llm anthropic/claude-sonnet-4-20250514

Programmatic Usage

from praisonaiagents import Agent

# Web Search
agent = Agent(
    instructions="You are a research assistant",
    llm="openai/gpt-4o-search-preview",
    web_search=True
)

# Web Fetch (Anthropic only)
agent = Agent(
    instructions="You are a content analyzer",
    llm="anthropic/claude-sonnet-4-20250514",
    web_fetch=True
)

# Prompt Caching
agent = Agent(
    instructions="You are an AI assistant..." * 50,  # Long system prompt
    llm="anthropic/claude-sonnet-4-20250514",
    prompt_caching=True
)

Supported Providers:

Feature Providers
Web Search OpenAI, Gemini, Anthropic, xAI, Perplexity
Web Fetch Anthropic
Prompt Caching OpenAI (auto), Anthropic, Bedrock, Deepseek
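Provider-side prompt caching stores the long, repeated prefix of your prompt on the provider's servers so later requests skip reprocessing it. A loose client-side analogue (illustrative only; `cached_completion` is a hypothetical helper, not a PraisonAI API) is memoising responses for identical model/prompt pairs:

```python
# Client-side memoisation sketch: skip repeat calls for an identical
# (model, prompt) pair. Real prompt caching happens provider-side and
# caches the prompt prefix, not the response.
import hashlib

_cache: dict[str, str] = {}

def cached_completion(model: str, prompt: str, call) -> str:
    """Return a cached response if we've seen this exact request before."""
    key = hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call(model, prompt)
    return _cache[key]
```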

MCP (Model Context Protocol)

PraisonAI supports MCP Protocol Revision 2025-11-25 with multiple transports.

MCP Client (Consume MCP Servers)

from praisonaiagents import Agent, MCP

# stdio - Local NPX/Python servers
agent = Agent(tools=MCP("npx @modelcontextprotocol/server-memory"))

# Streamable HTTP - Production servers
agent = Agent(tools=MCP("https://api.example.com/mcp"))

# WebSocket - Real-time bidirectional
agent = Agent(tools=MCP("wss://api.example.com/mcp", auth_token="token"))

# SSE (Legacy) - Backward compatibility
agent = Agent(tools=MCP("http://localhost:8080/sse"))

# With environment variables
agent = Agent(
    tools=MCP(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-brave-search"],
        env={"BRAVE_API_KEY": "your-key"}
    )
)
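Notice that `MCP(...)` accepts a command line, an HTTP URL, a WebSocket URL, or an SSE endpoint and picks the transport accordingly. A minimal sketch of that dispatch logic (hypothetical; the real class may decide differently) could look like:

```python
# Hypothetical transport inference from the string passed to MCP(...).
def infer_transport(target: str) -> str:
    if target.startswith(("ws://", "wss://")):
        return "websocket"
    if target.endswith("/sse"):          # check before the generic http case
        return "sse"
    if target.startswith(("http://", "https://")):
        return "streamable-http"
    return "stdio"                       # anything else: a local command line
```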

MCP Server (Expose Tools as MCP Server)

Expose your Python functions as MCP tools for Claude Desktop, Cursor, and other MCP clients:

from praisonaiagents.mcp import ToolsMCPServer

def search_web(query: str, max_results: int = 5) -> dict:
    """Search the web for information."""
    return {"results": [f"Result for {query}"]}

def calculate(expression: str) -> dict:
    """Evaluate a mathematical expression."""
    # Caution: eval runs arbitrary code; validate or sandbox input in production.
    return {"result": eval(expression)}

# Create and run MCP server
server = ToolsMCPServer(name="my-tools")
server.register_tools([search_web, calculate])
server.run()  # stdio for Claude Desktop
# server.run_sse(host="0.0.0.0", port=8080)  # SSE for web clients
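To advertise plain Python functions as tools, an MCP server has to turn each function's signature and docstring into a machine-readable description. The sketch below (illustrative; `tool_schema` is a hypothetical helper, not the `ToolsMCPServer` internals) shows the kind of introspection involved:

```python
# Hypothetical sketch: derive a minimal tool description from a
# function's signature, as an MCP server must do to list its tools.
import inspect

def tool_schema(fn) -> dict:
    """Build a tool description from a function's signature and docstring."""
    params = {}
    for name, p in inspect.signature(fn).parameters.items():
        ann = p.annotation
        params[name] = {
            "type": ann.__name__ if ann is not inspect.Parameter.empty else "any",
            "required": p.default is inspect.Parameter.empty,
        }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": params,
    }
```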

MCP Features

Feature Description
Session Management Automatic Mcp-Session-Id handling
Protocol Versioning Mcp-Protocol-Version header
Resumability SSE stream recovery via Last-Event-ID
Security Origin validation, DNS rebinding prevention
WebSocket Auto-reconnect with exponential backoff
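The WebSocket transport's auto-reconnect uses exponential backoff: each failed attempt roughly doubles the wait, capped at a maximum, with jitter to avoid reconnection stampedes. A minimal sketch of the timing (illustrative only; the names and defaults are assumptions, not PraisonAI's actual values):

```python
# Illustrative reconnect timing: capped exponential backoff with full jitter.
import random

def backoff_delays(attempts: int, base: float = 1.0, cap: float = 30.0):
    """Yield one randomised delay (in seconds) per reconnect attempt."""
    for attempt in range(attempts):
        yield random.uniform(0, min(cap, base * 2 ** attempt))
```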

Development:

The commands below are for development only.

Using uv

# Install uv if you haven't already
pip install uv

# Install from requirements
uv pip install -r pyproject.toml

# Install with extras
uv pip install -r pyproject.toml --extra code
uv pip install -r pyproject.toml --extra "crewai,autogen"

Bump and Release

# From project root - bumps version and releases in one command
python src/praisonai/scripts/bump_and_release.py 2.2.99

# With praisonaiagents dependency
python src/praisonai/scripts/bump_and_release.py 2.2.99 --agents 0.0.169

# Then publish
cd src/praisonai && uv publish

Contributing

  • Fork on GitHub: Use the "Fork" button on the repository page.
  • Clone your fork: git clone https://github.com/yourusername/PraisonAI.git
  • Create a branch: git checkout -b new-feature
  • Make changes and commit: git commit -am "Add some feature"
  • Push to your fork: git push origin new-feature
  • Submit a pull request via GitHub's web interface.
  • Await feedback from project maintainers.

Advanced Features

Research & Intelligence:

  • πŸ”¬ Deep Research Agents (OpenAI & Gemini)
  • πŸ”„ Query Rewriter Agent (HyDE, Step-back, Multi-query)
  • 🌐 Native Web Search (OpenAI, Gemini, Anthropic, xAI, Perplexity)
  • πŸ“₯ Web Fetch (Retrieve full content from URLs - Anthropic)
  • πŸ“ Prompt Expander Agent (Expand short prompts into detailed instructions)

Memory & Caching:

  • πŸ’Ύ Prompt Caching (Reduce costs & latency - OpenAI, Anthropic, Bedrock, Deepseek)
  • 🧠 Claude Memory Tool (Persistent cross-conversation memory - Anthropic Beta)
  • πŸ’Ύ File-Based Memory (Zero-dependency persistent memory for all agents)
  • πŸ” Built-in Search Tools (Tavily, You.com, Exa - web search, news, content extraction)

Planning & Workflows:

  • πŸ“‹ Planning Mode (Plan before execution - Agent & Multi-Agent)
  • πŸ”§ Planning Tools (Research with tools during planning)
  • 🧠 Planning Reasoning (Chain-of-thought planning)
  • ⛓️ Prompt Chaining (Sequential prompt workflows with gates)
  • πŸ” Evaluator Optimiser (Generate and optimize through iterative feedback)
  • πŸ‘· Orchestrator Workers (Distribute tasks among specialized workers)
  • ⚑ Parallelisation (Execute tasks in parallel for improved performance)
  • πŸ” Repetitive Agents (Handle repetitive tasks through automated loops)
  • πŸ€– Autonomous Workflow (Monitor, act, adapt based on environment feedback)

Agent Types:

  • πŸ–ΌοΈ Image Generation Agent (Create images from text descriptions)
  • πŸ“· Image to Text Agent (Extract text and descriptions from images)
  • 🎬 Video Agent (Analyze and process video content)
  • πŸ“Š Data Analyst Agent (Analyze data and generate insights)
  • πŸ’° Finance Agent (Financial analysis and recommendations)
  • πŸ›’ Shopping Agent (Price comparison and shopping assistance)
  • ⭐ Recommendation Agent (Personalized recommendations)
  • πŸ“– Wikipedia Agent (Search and extract Wikipedia information)
  • πŸ’» Programming Agent (Code development and analysis)
  • πŸ“ Markdown Agent (Generate and format Markdown content)
  • πŸ”€ Router Agent (Dynamic task routing with cost optimization)

MCP Protocol:

  • πŸ”Œ MCP Transports (stdio, Streamable HTTP, WebSocket, SSE - Protocol 2025-11-25)
  • 🌐 WebSocket MCP (Real-time bidirectional connections with auto-reconnect)
  • πŸ” MCP Security (Origin validation, DNS rebinding prevention, secure sessions)
  • πŸ”„ MCP Resumability (SSE stream recovery via Last-Event-ID)

Safety & Control:

  • 🀝 Agent Handoffs (Transfer context between specialized agents)
  • πŸ›‘οΈ Guardrails (Input/output validation and safety checks)
  • βœ… Human Approval (Require human confirmation for critical actions)
  • πŸ’¬ Sessions Management (Isolated conversation contexts)
  • πŸ”„ Stateful Agents (Maintain state across interactions)

Developer Tools:

  • ⚑ Fast Context (Rapid parallel code search - 10-20x faster than traditional methods)
  • πŸ“œ Rules & Instructions (Auto-discover CLAUDE.md, AGENTS.md, GEMINI.md)
  • πŸͺ Hooks (Pre/post operation hooks for custom logic)
  • πŸ“ˆ Telemetry (Track agent performance and usage)
  • πŸ“Ή Camera Integration (Capture and analyze camera input)

Other Features

  • πŸ”„ Use CrewAI or AG2 (Formerly AutoGen) Framework
  • πŸ’» Chat with ENTIRE Codebase
  • 🎨 Interactive UIs
  • πŸ“„ YAML-based Configuration
  • πŸ› οΈ Custom Tool Integration
  • πŸ” Internet Search Capability (Tavily, You.com, Exa, DuckDuckGo, Crawl4AI)
  • πŸ–ΌοΈ Vision Language Model (VLM) Support
  • πŸŽ™οΈ Real-time Voice Interaction

Video Tutorials

Topic Video
AI Agents with Self Reflection Self Reflection
Reasoning Data Generating Agent Reasoning Data
AI Agents with Reasoning Reasoning
Multimodal AI Agents Multimodal
AI Agents Workflow Workflow
Async AI Agents Async
Mini AI Agents Mini
AI Agents with Memory Memory
Repetitive Agents Repetitive
Introduction Introduction
Tools Overview Tools Overview
Custom Tools Custom Tools
Firecrawl Integration Firecrawl
User Interface UI
Crawl4AI Integration Crawl4AI
Chat Interface Chat
Code Interface Code
Mem0 Integration Mem0
Training Training
Realtime Voice Interface Realtime
Call Interface Call
Reasoning Extract Agents Reasoning Extract
