Diagnose with External Observability
- 15-30 minutes of manual data stitching across platforms → minutes in a single conversation
- Zero blind spots — infrastructure, application, and business metrics correlated in one investigation
- Any observability platform with an MCP server connects the same way — no custom integrations needed
- New platforms added without code changes — your agent discovers their tools automatically
The problem: observability data scattered across platforms
Your applications run on Azure, but your observability stack spans multiple platforms — Dynatrace for traces, Azure Monitor for infrastructure, Splunk for logs, Kusto for business metrics. During an incident, you're manually bridging these silos: copying operation IDs between tabs, correlating timestamps across query languages (DQL, KQL, SPL), and spending 15–30 minutes stitching data together before you even start diagnosing.
How your agent solves this
Connect your observability tools through MCP (Model Context Protocol), and your agent queries all of them — Azure and external — during every investigation:
- Queries Azure services — Application Insights, Log Analytics, Azure Monitor, Resource Graph (built-in, no setup needed)
- Queries your external tools — Dynatrace logs via DQL, Datadog metrics, Splunk events (via MCP connectors)
- Correlates signals across platforms — connects error spikes in Dynatrace with deployment history in Azure, matches timestamps automatically
- Reports a unified picture — one investigation thread with evidence from every connected system
The key mechanism: your agent registers tools from every connected MCP server alongside its built-in Azure tools. During an investigation, it selects the right tools based on what it's investigating — not based on which platform they come from. Learn more about tool selection.
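To make the mechanism concrete, here is a minimal sketch of tool discovery using the public MCP TypeScript SDK (`@modelcontextprotocol/sdk`). The server package name below is a hypothetical placeholder, and the exact launch command comes from your platform's documentation; the `tools/list` exchange itself is standard MCP.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch whichever MCP server your platform publishes.
// The package name below is a hypothetical placeholder.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "example-observability-mcp-server"],
});

const client = new Client({ name: "diagnostics-agent", version: "1.0.0" });
await client.connect(transport);

// tools/list: the server advertises its tools at connect time, so new
// server capabilities show up here with no client-side code changes.
const { tools } = await client.listTools();
for (const tool of tools) {
  console.log(`${tool.name}: ${tool.description ?? "(no description)"}`);
}
```

The same discovery flow works for every server, which is why adding a platform never requires custom integration code.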
What makes this different
Unlike separate dashboards, your agent queries all your observability platforms in one investigation. You don't switch tabs or translate between query languages — your agent handles DQL for Dynatrace, KQL for Azure, and whatever your other tools expose.
Unlike manual correlation, your agent connects signals across platforms automatically. When Dynatrace shows a spike in 5xx errors and Azure shows a recent Container Apps deployment, your agent correlates those findings into a single root cause analysis.
Unlike point-to-point integrations, MCP is an open protocol. Dynatrace, Datadog, New Relic, Splunk — each publishes an MCP server that your agent connects to the same way. When a platform adds new capabilities to its MCP server, your agent discovers them automatically.
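For example, when an investigation needs Dynatrace data, the agent issues a standard MCP `tools/call` request, and the DQL stays inside the tool invocation, so you never write it yourself. A sketch continuing the client above; the tool name `execute_dql` and its argument shape are hypothetical, since the real names come from each server's `tools/list` response:

```typescript
// Hypothetical tool name and arguments; discover the real ones via listTools().
const result = await client.callTool({
  name: "execute_dql",
  arguments: {
    query: 'fetch logs | filter status == "ERROR" | summarize count(), by: { dt.entity.service }',
  },
});

// Results come back as MCP content blocks the agent can reason over.
console.log(result.content);
```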
See how MCP connectors work, how custom agents specialize by platform, and how your knowledge base provides context for custom telemetry.
Before and after
| | Before | After |
|---|---|---|
| Investigation workflow | Open Azure Monitor, Dynatrace, and Splunk separately — query each one manually | Ask your agent once — it queries all connected platforms |
| Signal correlation | Copy error IDs between tools, match timestamps manually across platforms | Your agent follows the thread across platforms and correlates automatically |
| Context switching | 3-5 dashboards, different query languages (KQL, DQL, SPL) | One conversation — your agent handles the queries |
| Time to first insight | 15-30 minutes stitching data across tools | Minutes — your agent queries in parallel |
| Blind spots | Each tool sees its own slice — infrastructure vs. application vs. business metrics | Your agent sees the whole picture across all connected systems |
Investigation example: Cross-platform correlation
Symptom: "Orders are failing but Azure metrics look fine"
Your agent investigates across platforms:
1. Checks Azure infrastructure (built-in tools)
   - App Service: healthy, low CPU
   - Azure SQL: healthy, low DTU
   - Application Insights: no exceptions in the app layer
2. Queries Dynatrace (via MCP)
   - Queries for 5xx errors across services using Dynatrace's DQL tools
   - Payment service p99 latency: 12 seconds (normal: 200ms)
   - Error volume isolated to the latest deployment revision
3. Queries your Kusto cluster (via the Kusto connector)

   ```kusto
   OrderEvents
   | where Status == "Failed"
   | summarize count() by FailureReason
   ```

   Result: 847 failures with "PaymentGatewayTimeout"
4. Correlates findings: "Azure infrastructure is healthy. The 5xx error spike visible in Dynatrace correlates with the deployment of revision 0000039. The 847 PaymentGatewayTimeout failures in your Kusto order data confirm the impact. Root cause: bad deployment."
Without external observability: The investigation would stop at step 1 — "Azure is healthy, case closed." With MCP connectors, your agent found the actual root cause across three platforms.
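How might that cross-platform fan-out look? A schematic sketch only, not the product's actual implementation: given one connected MCP client per platform (see the discovery sketch earlier), the agent can issue its queries concurrently and correlate afterward. All tool names, argument shapes, and query strings here are hypothetical illustrations.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Schematic: fan out one query per platform, then correlate the evidence.
// Tool names ("query_log_analytics", "execute_dql", "run_kql") are hypothetical.
async function investigate(azure: Client, dynatrace: Client, kusto: Client) {
  const [infra, errors, orders] = await Promise.all([
    azure.callTool({
      name: "query_log_analytics",
      arguments: { query: 'AppRequests | where Success == false | summarize count() by AppRoleName' },
    }),
    dynatrace.callTool({
      name: "execute_dql",
      arguments: { query: 'fetch spans | filter http.response.status_code >= 500 | summarize count()' },
    }),
    kusto.callTool({
      name: "run_kql",
      arguments: { query: 'OrderEvents | where Status == "Failed" | summarize count() by FailureReason' },
    }),
  ]);
  // Each platform reports in its own schema; correlation (timestamps, deployment
  // markers, failure counts) happens over the combined results.
  return { infra, errors, orders };
}
```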
What you can connect
| Data source | Connector | What your agent can do |
|---|---|---|
| Azure Data Explorer (Kusto) | Kusto connector | Query business metrics and custom telemetry |
| Dynatrace | MCP server | Query logs and metrics via DQL, identify error patterns |
| Datadog | MCP server | Query metrics, APM traces, logs, and monitors |
| Splunk | MCP server | Search logs, run saved searches, query events |
| New Relic | MCP server | Query metrics, traces, and application performance data |
| Elasticsearch | MCP server | Search and query Elasticsearch indices |
| Any tool with MCP | MCP server | Whatever tools the platform's MCP server exposes |
Get started
| What you want to connect | Connector | Setup guide |
|---|---|---|
| Dynatrace, Datadog, Splunk, custom tools | MCP server | MCP connector tutorial |
| Azure Data Explorer (Kusto) | Kusto connector | Kusto connector tutorial |
| Reusable KQL queries | Kusto tools | Create Kusto tools |
When to use which approach
| Your observability stack | Recommended approach |
|---|---|
| All telemetry in Azure (App Insights, Log Analytics) | Azure Observability — works out of the box |
| Azure + external APM (Dynatrace, Datadog, New Relic) | Azure Observability (built-in) + MCP connectors for each platform |
| Azure + custom business metrics in Kusto | Azure Observability + Kusto connector |
| Multi-platform (Azure + Dynatrace + Splunk + Kusto) | All of the above — your agent queries everything in one investigation |
Related capabilities
| Capability | What it adds |
|---|---|
| Azure Observability → | Built-in Azure diagnostic tools — App Insights, Log Analytics, Azure Monitor |
| Kusto Tools → | Create reusable KQL queries for business telemetry |
| Root Cause Analysis → | Hypothesis-driven investigation using evidence from all connected platforms |
| Connectors → | Full reference for connector types, health monitoring, and custom agent assignment |