An experimental autonomous agent framework with MCP (Model Context Protocol) support for learning and experimentation. This framework enables LLM-powered agents to autonomously execute complex tasks using dynamically discovered tools from MCP servers.
An autonomous AI agent that:
- Makes its own decisions - the LLM autonomously chooses which tools to use and when
- Connects to MCP servers - discovers and uses tools from any MCP-compatible server
- Executes multi-step workflows - handles complex tasks requiring multiple tool calls
- Logs everything - separate console and file logging for debugging and auditing
- LLM-Driven Decision Making - uses LangGraph for autonomous task execution
- MCP Support - connects to MCP servers (FastMCP, standard MCP)
- Dynamic Tool Discovery - automatically finds and uses available tools
- Flexible Configuration - CLI > Config File > Env Variables > Defaults
- Multi-Step Reasoning - handles complex workflows autonomously
- Structured Logging - separate console and file logging with configurable levels
# 1. Clone and install
git clone https://github.com/AutomateIP/autonomousagent.git
cd autonomous_agent
uv sync
# 2. Configure API key
cp .env.example .env
# Edit .env and add: ANTHROPIC_API_KEY=your-key-here
# 3. Configure MCP servers (optional)
cp examples/mcp_config.json.example mcp_config.json
# Edit mcp_config.json with your MCP server paths
# 4. Run a simple task
uv run agent --agent-file tests/prompts/test_time.prompt

See docs/QUICKSTART.md for a detailed setup guide.
# Install dependencies
uv sync
# Set up configuration files
cp .env.example .env
cp examples/mcp_config.json.example mcp_config.json
# Edit .env with your ANTHROPIC_API_KEY
# Edit mcp_config.json with your MCP server paths (use absolute paths)

See docs/QUICKSTART.md for step-by-step installation.
# Run with default configuration (agent.conf and mcp_config.json)
uv run agent --agent-file <your-prompt-file>

# Enable debug output to console and detailed file logging
uv run agent --agent-file task.prompt --debug

# Use a different model
uv run agent --agent-file task.prompt --llm-model claude-3-haiku-20240307
# Custom configuration file
uv run agent --config my.conf --agent-file task.prompt
# List available MCP servers
uv run agent --show-mcps
# List all available tools
uv run agent --list-tools

- agent.conf - Agent settings (LLM, behavior, MCP)
- mcp_config.json - MCP server definitions
- .env - API keys and credentials
CLI Arguments > agent.conf > Environment Variables > Defaults
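The precedence chain can be sketched as a layered lookup. This is an illustrative sketch only, assuming the four sources listed above; the function name, argument shapes, and `AGENT_` environment prefix are hypothetical and not the framework's actual config.py API:

```python
# Illustrative sketch of the precedence chain: CLI > agent.conf > env > defaults.
# Names here are hypothetical, not the framework's actual config.py API.
import os

def resolve_setting(key, cli_args, conf, defaults, env_prefix="AGENT_"):
    """Return the first value found, searching highest-precedence source first."""
    if cli_args.get(key) is not None:          # 1. CLI arguments
        return cli_args[key]
    if key in conf:                            # 2. agent.conf
        return conf[key]
    env_val = os.environ.get(env_prefix + key.upper())
    if env_val is not None:                    # 3. environment variables
        return env_val
    return defaults.get(key)                   # 4. built-in defaults

# Example: a CLI flag overrides the same setting in agent.conf.
value = resolve_setting(
    "model",
    cli_args={"model": "claude-3-haiku-20240307"},
    conf={"model": "claude-sonnet-4-5-20250929"},
    defaults={"model": "default-model"},
)
```

The key design choice this illustrates: lower-precedence sources are only consulted when every higher-precedence source is silent, so a value in agent.conf never shadows a CLI flag.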
[agent]
system_prompt = ./prompts/agent_system.prompt
# Console logging level (what you see in terminal)
console_log_level = ERROR
# File logging level (what gets written to logs/)
file_log_level = INFO
max_iterations = 15
[llm]
provider = anthropic
model = claude-sonnet-4-5-20250929
temperature = 0.7
max_tokens = 4096
[mcp]
config_file = ./mcp_config.json
itential_mcp_conf = ./itential-mcp.conf
timeout = 120

See examples/agent.conf.example for the full set of options.
The agent supports separate console and file logging levels:

- console_log_level - controls what appears in your terminal (DEBUG, INFO, WARNING, ERROR, CRITICAL)
- file_log_level - controls what gets written to logs/agent_TIMESTAMP.log

Recommended setup:

- Console: ERROR (clean terminal output)
- File: INFO (detailed logs for debugging)

Log files are automatically created with timestamps in the logs/ directory.
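Split console/file levels are a standard pattern with Python's stdlib logging module: one logger, two handlers, each with its own threshold. The sketch below is illustrative, not the framework's actual logging setup; the logger name, function name, and log path are assumptions:

```python
# Illustrative sketch of split console/file logging, matching the agent's
# console_log_level / file_log_level settings. Not the framework's actual
# code; the logger name and log path format are assumptions.
import logging
from datetime import datetime
from pathlib import Path

def setup_logging(console_level="ERROR", file_level="INFO", log_dir="logs"):
    logger = logging.getLogger("agent")
    logger.setLevel(logging.DEBUG)  # pass everything; let handlers filter

    # Console handler: only messages at console_level and above hit the terminal.
    console = logging.StreamHandler()
    console.setLevel(console_level)
    logger.addHandler(console)

    # File handler: a timestamped file under logs/ captures more detail.
    Path(log_dir).mkdir(exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    file_handler = logging.FileHandler(Path(log_dir) / f"agent_{stamp}.log")
    file_handler.setLevel(file_level)
    logger.addHandler(file_handler)
    return logger

logger = setup_logging()
logger.info("written to the log file, hidden from the console")
logger.error("shown in both places")
```

Setting the logger itself to DEBUG and filtering at the handler level is what makes the two thresholds independent.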
Prompts are simple text files with instructions:
Simple Prompt:
What are the key principles of autonomous agent design?
Multi-Step Prompt:
1. Analyze the available tools
2. Recommend which tools to use for system monitoring
3. Explain your reasoning
Tool-Specific Prompt:
Check system health using available monitoring tools.
Report key metrics and any issues found.
Best Practices:
- Be specific about desired outcome
- Let agent decide HOW to accomplish task
- Mention specific tools if needed
- Request specific output formats (table, JSON, etc.)
See docs/QUICKSTART.md for the prompt creation guide.
autonomous_agent/
├── src/                      # Source code
│   ├── agent.py              # Main entry point
│   ├── agent_core.py         # LangGraph state machine
│   ├── config.py             # Configuration management
│   ├── llm_provider.py       # LLM provider interface
│   └── mcp_client.py         # MCP server client
├── prompts/                  # System prompts
├── tests/prompts/            # Test prompt files
├── examples/                 # Example configurations
│   ├── agent.conf.example
│   ├── mcp_config.json.example
│   └── itential-mcp.conf.example
├── logs/                     # Generated log files (timestamped)
├── files/                    # Agent workspace
│   └── templates/            # Report templates
├── docs/                     # Documentation
├── agent.conf                # Main configuration (not tracked in git)
├── mcp_config.json           # MCP server config (not tracked in git)
└── .env                      # API keys (not tracked in git)
Note: Configuration files with personal settings are not tracked in git. Use the example files in examples/ as templates.
The agent can connect to any MCP server. Included servers:
- itential-mcp - Itential Platform integration (21 tools)
- time - Time and timezone operations
- filesystem - File system operations
Adding a new server:
- Add it to mcp_config.json - its tools are discovered automatically
- Use it in prompts - the agent will find and call the tools
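A sketch of what an mcp_config.json entry might look like, assuming the common `mcpServers` layout used by most MCP clients. The server names, commands, and paths below are placeholders; see examples/mcp_config.json.example for the exact format this project expects:

```json
{
  "mcpServers": {
    "time": {
      "command": "uvx",
      "args": ["mcp-server-time"]
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/absolute/path/to/files"]
    }
  }
}
```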
For Itential Platform: See docs/ITENTIAL_INTEGRATION.md
# Required
--agent-file PATH # Path to prompt file
# Optional
--config PATH # agent.conf path (default: ./agent.conf)
--mcp-config PATH # MCP config override
--system-prompt PATH # System prompt override
--llm-provider NAME # anthropic or openai
--llm-model NAME # Model identifier
--debug                # Enable debug logging

User Prompt → Agent Configuration → MCP Connection → Tool Discovery
                                                          ↓
LLM Reasoning (decide action) → Execute Tools (if needed) → Generate Response
        ↑                                                        │
        └────────────────────────────────────────────────────────┘
                      (Loop until task complete)
- LangGraph State Machine - Manages agent flow
- FastMCP Client - Connects to MCP servers
- LLM Provider - Anthropic Claude integration
- Configuration System - Multi-level precedence
Design Principle: 100% LLM-driven decisions, zero hardcoded logic
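The reason/act loop described above can be sketched in plain Python. This is an illustrative sketch only: the real framework builds this flow as a LangGraph state machine, and `run_agent`, `llm_decide`, and `execute_tool` are hypothetical stand-ins, not the project's actual functions:

```python
# Minimal sketch of the agent's reason/act loop. Illustrative only: the
# framework implements this as a LangGraph state machine, and the function
# names here are hypothetical stand-ins.
def run_agent(prompt, tools, llm_decide, execute_tool, max_iterations=15):
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_iterations):
        decision = llm_decide(messages, tools)   # LLM reasoning: choose an action
        if decision["type"] == "final":          # task complete: return the answer
            return decision["content"]
        result = execute_tool(decision["tool"], decision["args"])   # act
        messages.append({"role": "tool", "content": str(result)})  # observe
    return "Stopped: max_iterations reached"

# Toy example: a fake LLM that calls one tool, then gives a final answer.
def fake_llm(messages, tools):
    if any(m["role"] == "tool" for m in messages):
        return {"type": "final", "content": "done"}
    return {"type": "tool_call", "tool": "echo", "args": {"x": 1}}

answer = run_agent("hi", ["echo"], fake_llm, lambda name, args: args)  # → "done"
```

Note how all control flow lives in the LLM's decision: the loop itself contains no task-specific branching, matching the "zero hardcoded logic" principle.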
Check tests/prompts/ for working examples:
- test_no_tools.prompt - Agent reasoning without tools
- test_itential_health.prompt - Platform health check (with itential-mcp)
- test_get_devices.prompt - Device inventory retrieval
- test_filesystem.prompt - File operations
- test_time.prompt - Time queries
- test_multi_step.prompt - Complex workflows
- docs/QUICKSTART.md - Get started in 5 minutes
- docs/ARCHITECTURE.md - System architecture and design
- docs/ITENTIAL_INTEGRATION.md - Itential Platform integration
- docs/TROUBLESHOOTING.md - Common issues and solutions
- docs/ENHANCEMENTS.md - Future roadmap
- CONTRIBUTING.md - Contribution guidelines
- Import errors: run uv sync
- API key errors: check the .env file
- MCP connection fails: verify paths in mcp_config.json
- No output: enable --debug to see what's happening
Full troubleshooting: See docs/QUICKSTART.md
Anthropic (Recommended):
- claude-sonnet-4-20250514 (default - most capable)
- claude-3-5-sonnet-20241022 (excellent reasoning)
- claude-3-haiku-20240307 (fast & economical)
OpenAI (Supported):
- gpt-4, gpt-4-turbo, gpt-3.5-turbo
Configure in agent.conf or use --llm-model flag.
- Startup: 1-2 seconds
- Tool Discovery: 2-3 seconds
- Simple Tasks: 5-10 seconds
- Complex Workflows: 20-60 seconds
This is an experimental project for learning purposes. Feel free to fork and experiment! See CONTRIBUTING.md for development workflow.
- LangGraph - Agent framework
- FastMCP - MCP client library
- Anthropic - Claude LLM
- Model Context Protocol - Universal tool protocol
See LICENSE file for details.
Purpose: Learning and experimentation with autonomous agents and MCP
Last Updated: 2025-12-31
Get Started: docs/QUICKSTART.md