feat: integrate Gemini CLI with Docker MCP for natural language observability #24134
---
description: Integrate Gemini CLI with Grafana via Docker MCP Toolkit for natural language observability.
keywords: mcp, grafana, docker, gemini, devops
title: Connect Gemini to Grafana via MCP
summary: |
  Learn how to leverage the Model Context Protocol (MCP) to interact with Grafana dashboards and datasources directly from your terminal.
levels: [intermediate]
subjects: [devops]
aliases:
  - /guides/use-case/devops/
params:
  time: 15 minutes
---
# Integrating Gemini CLI with Grafana via Docker MCP Toolkit
This guide shows how to connect Gemini CLI to a Grafana instance using the **Docker MCP Toolkit**, so you can query dashboards, datasources, and logs in natural language from your terminal.
## Prerequisites
* **Gemini CLI** installed and authenticated.
* **Docker Desktop** with the **MCP Toolkit** extension enabled.
* An active **Grafana** instance.
## 1. Provisioning Grafana Access
The MCP server requires a **Service Account Token** to interact with the Grafana API. Service Account Tokens are preferred over personal API keys because they can be revoked independently without affecting user access, and their permissions can be scoped more narrowly.
1. Navigate to **Administration > Users and access > Service accounts** in your Grafana dashboard.
2. Create a new Service Account (e.g., `gemini-mcp-connector`).
3. Assign the **Viewer** role (or **Editor** if you require alert management capabilities).
4. Generate a new token. Copy the token immediately; you won't be able to view it again.
|  | ||
## 2. MCP Server Configuration
The Docker MCP Toolkit provides a pre-configured Grafana catalog item that connects the LLM to the Grafana API.
1. Open the **MCP Toolkit** in Docker Desktop.
2. Locate **Grafana** in the Catalog and add it to your active servers.
3. In the **Configuration** view, define the following:
   * **Grafana URL:** The URL of your instance.
   * **Service Account Token:** The token generated in the previous step.
|  | ||
## 3. Gemini CLI Integration
To register the Docker MCP gateway with Gemini, update your global configuration file located at `~/.gemini/settings.json`.
Ensure the `mcpServers` object includes the following entry:
```json
{
  "mcpServers": {
    "MCP_DOCKER": {
      "command": "docker",
      "args": [
        "mcp",
        "gateway",
        "run"
      ]
    }
  }
}
```
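If `settings.json` already contains other configuration, merge the entry rather than overwriting the file. A minimal sketch, assuming `python3` is on your `PATH` (the `SETTINGS` variable is introduced here only so the path is easy to override):

```shell
# Merge the MCP_DOCKER entry into settings.json, preserving existing keys.
SETTINGS="${SETTINGS:-$HOME/.gemini/settings.json}"
mkdir -p "$(dirname "$SETTINGS")"
# Start from an empty JSON object if the file does not exist yet.
[ -f "$SETTINGS" ] || echo '{}' > "$SETTINGS"

python3 - "$SETTINGS" <<'EOF'
import json, sys

path = sys.argv[1]
with open(path) as f:
    cfg = json.load(f)

# Add (or replace) only the MCP_DOCKER server entry.
cfg.setdefault("mcpServers", {})["MCP_DOCKER"] = {
    "command": "docker",
    "args": ["mcp", "gateway", "run"],
}

with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
print("updated", path)
EOF
```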
## 4. Operational Validation
Restart your Gemini CLI session to load the new configuration. Verify the status of the MCP tools by running:
```bash
> /mcp list
```
A successful connection shows `MCP_DOCKER` as **Ready**, exposing more than 60 tools for data fetching, dashboard searching, and alert inspection.
## Use Cases
### Datasource Discovery
_List all Prometheus and Loki datasources._
|  | ||
|  | ||
### Logs Inspection
Gemini performs intent parsing and translates the request into a LogQL query: `{device_name="edge-device-01"} |= "nginx"`. This query targets specific logs, extracting raw OpenTelemetry (OTel) data that includes container metadata and system labels, which Gemini then uses to identify the source of the issue.
|  | ||
Once the system identifies Loki as the active datasource, it translates the human intent into a precise technical command, autonomously constructing the LogQL query shown above. This query targets the specific Kubernetes pod logs, extracting raw OpenTelemetry (OTel) data that includes pod UIDs and container metadata. Instead of writing complex syntax, the user's prompt acts as the bridge to pull structured data from the containerized environment.
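For comparison, this is roughly the request Gemini issues on your behalf. The same LogQL query can be run by hand against Loki's `query_range` HTTP endpoint (a sketch; the Loki URL below is a placeholder for your own environment):

```shell
# Placeholder: replace with your Loki endpoint.
LOKI_URL="http://localhost:3100"

# --data-urlencode handles the braces and quotes in the LogQL expression.
curl -sS -G "$LOKI_URL/loki/api/v1/query_range" \
  --data-urlencode 'query={device_name="edge-device-01"} |= "nginx"' \
  --data-urlencode 'limit=100'
```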
|  | ||
In the final step, Gemini performs reasoning over the raw telemetry. After filtering through hundreds of lines to confirm the presence of Nginx logs, Gemini extracts a specific `node_filesystem_device_error` buried within the stream. By surfacing this critical event, it alerts the DevOps engineer to a volume mounting issue on the edge node, transforming raw data into an actionable incident report.
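You could cross-check the finding yourself against Prometheus. This is a sketch: the Prometheus URL is a placeholder, and it assumes you scrape node_exporter, which exposes `node_filesystem_device_error` (set to 1 when filesystem stats cannot be read):

```shell
# Placeholder: replace with your Prometheus endpoint.
PROM_URL="http://localhost:9090"

# Instant query: any returned series indicates a filesystem with a device error.
curl -sS -G "$PROM_URL/api/v1/query" \
  --data-urlencode 'query=node_filesystem_device_error == 1'
```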
|  | ||
### Dashboard Navigation
_How many dashboards do we have?_
|  | ||
_Tell me the summary of the X dashboard._
|  | ||
### Other scenarios
Imagine you get a page that an application is slow. You could:
1. Use `list_alert_rules` to see which alert is firing.
2. Use `search_dashboards` to find the relevant application dashboard.
3. Use `get_panel_image` on a key panel to see the performance spike visually.
4. Use `query_loki_logs` to search for "error" or "timeout" messages during the time of the spike.
5. If you find the root cause, use `create_incident` to start the formal response and `add_activity_to_incident` to log your findings.
## Next steps
- Learn about [Advanced LogQL queries](https://grafana.com/docs/loki/latest/query/log_queries/)
- Set up [Team-wide MCP configurations](https://modelcontextprotocol.io/docs/develop/connect-local-servers)
- Explore [Grafana alerting with MCP](https://github.com/grafana/mcp-grafana)
- Get help in the [Docker Community Forums](https://forums.docker.com)
Need help setting up your Docker MCP environment or customizing your Gemini prompts? Visit the [Docker Community Forums](https://forums.docker.com) or see the [MCP Troubleshooting Guide](https://docs.docker.com/guides/grafana-mcp-server-gemini).