Blog

Ask your incidents anything. Ask your tickets too. Introducing Xurrent MCP.

May 12, 2026

Most enterprise software in 2026 is shipping AI features, and most of them follow the same pattern: they're bolted on. A chatbot lives next to your incident tool, a summary widget sits next to your tickets, and the AI works on whatever surface it's stuck onto, with no real access to the work happening underneath.

We don't believe in that approach.

That's why we built two MCP servers, one for Xurrent IMR and one for Xurrent ITSM. Both let you ask questions of your live operational data in plain English from inside Claude or any MCP-compatible client.

This blog is about why we shipped these the way we did, what they do, and what changes for you as a Xurrent user opening a Tuesday morning queue with 200 unread tickets.

The AI conversation has been stuck on the wrong thing

Every enterprise software vendor in 2026 is shipping AI. ServiceNow Now Assist. Atlassian Rovo. PagerDuty AIOps. Each one pitches their AI as the smartest in the room.

None of them are wrong about the AI. All of them are wrong about what makes it useful.

McKinsey put it cleanly earlier this year: "Today, AI is bolted on. But to deliver real impact, it must be integrated into core processes, becoming a catalyst for business transformation rather than a sidecar tool." Praveen Akkiraju said almost the same thing on CXOTalk two months later: "If you're just a bolt-on agent on top of software without fundamentally changing the way software interacts with data, with users, and being able to respond dynamically, then you clearly are going to lose that battle." Two senior voices, same conclusion.

There's a number that backs them up. MIT research found that 95% of AI projects never reach production. The bottleneck isn't model quality. The bottleneck is context. The data, the connections, the operational links the AI needs to actually be useful.

Here's the position we're taking. The AI in your operations is only as smart as the data it can reach. The vendors winning in 2026 won't be the ones with the smartest models. They'll be the ones whose AI has access to the work that's actually happening.

This is what we mean when we say AI built-in beats AI bolted-on. Built-in means the AI has real context: your incidents, your tickets, your on-call rotations, your service catalog, your knowledge base, your CIs. Bolted-on means a chatbot that can answer questions about a vendor's documentation but goes silent when you ask it about your environment.

Two MCP servers. Both live. Both built this way on purpose.

This is the part where we show our work.

We shipped two MCP servers in the last six weeks. Both live in production today. Both read-only by design. Both built on Anthropic's open MCP standard, so you're not locked into Claude or us. Connect any MCP-compatible client and you're querying live in under five minutes.
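If you're curious what "open standard" means in practice: MCP rides on JSON-RPC 2.0, and every compatible client speaks the same core methods. Here's a minimal sketch of those message shapes in Python, using the method names from the MCP specification. The tool name and its arguments are hypothetical, for illustration only, not the actual Xurrent schema.

```python
import json

def jsonrpc(method, params, msg_id):
    """Build a JSON-RPC 2.0 request, the wire format MCP is built on."""
    return {"jsonrpc": "2.0", "id": msg_id, "method": method, "params": params}

# 1. The client opens the session and advertises itself.
initialize = jsonrpc("initialize", {
    "protocolVersion": "2025-03-26",   # an MCP spec revision; negotiated with the server
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1.0"},
}, msg_id=1)

# 2. The client asks the server which tools it exposes.
list_tools = jsonrpc("tools/list", {}, msg_id=2)

# 3. The model driving the client invokes one tool by name.
#    "search_incidents" is a hypothetical name, not the real schema.
call_tool = jsonrpc("tools/call", {
    "name": "search_incidents",
    "arguments": {"query": "Grafana alerts", "since": "7d"},
}, msg_id=3)

for msg in (initialize, list_tools, call_tool):
    print(json.dumps(msg))
```

Everything a client does, from Claude Desktop to your own script, reduces to exchanges like these; that's why the standard, not the client, is the lock-in-free part.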

Xurrent IMR MCP

The IMR MCP server gives Claude access to your live incident data. Open incidents, on-call schedules, escalation policies, timelines, alerts, all queryable in plain English. No SQL. No dashboards. No copy-paste from Slack threads.

What an SRE actually asks during a live incident:

  • "Walk me through the timeline of incident #75."
  • "Who's on call for SRE right now and what's their handoff?"
  • "Find any incidents related to Grafana alerts in the last week."
  • "Pull on-call load by team member, I have a QBR in 30 minutes."

Queries that took beta users five tabs and twenty minutes now take one prompt and ten seconds.
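Behind a prompt like "walk me through the timeline of incident #75," the client issues a single structured tool call and gets structured content back, which the model narrates. A sketch with a mocked response: per the MCP spec, tool results carry a content list of text blocks, but the timeline fields inside are hypothetical, not the actual IMR schema.

```python
import json

# Mocked MCP tools/call result. The "content" list of text blocks is the
# MCP result shape; the timeline payload inside is illustrative only.
result = {
    "content": [{
        "type": "text",
        "text": json.dumps({
            "incident": 75,
            "timeline": [
                {"at": "08:01", "event": "Grafana alert fired: p99 latency"},
                {"at": "08:03", "event": "Incident opened, SRE paged"},
                {"at": "08:12", "event": "Rollback of deploy started"},
            ],
        }),
    }]
}

# Flatten the text block, parse it, and render the timeline as prose lines,
# which is roughly what the model does before answering you.
payload = json.loads(result["content"][0]["text"])
lines = [f'{e["at"]} - {e["event"]}' for e in payload["timeline"]]
print("\n".join(lines))
```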

Xurrent ITSM MCP

The ITSM MCP server does the same for service operations. The request queue, the knowledge base, the service catalog, the CI relationships, all readable through Claude. The Service Desk Manager, the ITSM admin, and the L1/L2 specialist each get the same plain-English access to their data.

What a Service Desk Manager actually asks on a Tuesday morning:

  • "Show me all open requests for Marketing, sorted by SLA risk."
  • "Find knowledge articles related to VPN access issues."
  • "What CIs are linked to the Salesforce service?"
  • "Which requests are aging past 48 hours without an update?"
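That last question is just a filter over structured data once a read tool returns it. A sketch of the aging logic against mocked records; the field names are illustrative, not the actual ITSM schema.

```python
from datetime import datetime, timedelta, timezone

# Mocked request records, shaped like what a read-only ITSM tool might
# return. Field names are hypothetical, for illustration only.
now = datetime(2026, 5, 12, 9, 0, tzinfo=timezone.utc)
requests = [
    {"id": 501, "team": "Marketing", "last_update": now - timedelta(hours=60)},
    {"id": 502, "team": "Marketing", "last_update": now - timedelta(hours=3)},
    {"id": 503, "team": "Sales",     "last_update": now - timedelta(hours=49)},
]

def aging(reqs, cutoff_hours=48):
    """Requests with no update for longer than the cutoff, oldest first."""
    stale = [r for r in reqs
             if now - r["last_update"] > timedelta(hours=cutoff_hours)]
    return sorted(stale, key=lambda r: r["last_update"])

print([r["id"] for r in aging(requests)])
```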

Both servers expose a focused set of read-only tools mapped to real data in your Xurrent account. The data stays in your Xurrent account. Claude just asks the questions on your behalf.

Interactive demo: ask plain-English questions of incidents or tickets, see what real responses look like

A day in the life of a Xurrent MCP user

  • 8:42 a.m. · Sarah, SRE: "Walk me through the open Payments incident." Skips the dashboard scroll. Goes straight to the runbook.
  • 9:15 a.m. · Marcus, ITSM: "Pull all open tickets mentioning checkout." Routes 18 tickets as a linked group in 30 seconds.
  • 11:00 a.m. · Sarah, SRE: "Compare incident #87 to incident #75." Spots an upstream pattern. Flags a problem record.
  • 2:30 p.m. · Marcus, ITSM: "Ticket volume by service, last 30 days." Slide-ready answer in seconds. No JQL.
  • 4:00 p.m. · Specialist, ITSM: "Knowledge for Cisco AnyConnect on Sonoma." Resolves the request in 8 min, no escalation.

Context is the differentiator. MCP is how we ship it.

If you read the day-in-the-life carefully, you noticed something. None of those moments required a "smart" AI. Claude is a general-purpose model. What made it useful was the context it could reach.

This is the actual job AI does well in operations right now. Heinrich Hartmann, who has been writing about AI in SRE longer than most, put it cleanest. AI's most valuable role isn't autonomous remediation. It's giving the engineer the context they need to fix things fast.

Fred Hebert, a few weeks earlier in SRE Weekly, noticed the related framing problem. AI coding tools are sold as partners that augment engineers. AI SRE and ITSM tools are sold as replacements for low-value work. The marketing language is the tell. It reveals how decision-makers see the role.

We don't see incident response or service operations as low-value work. We see them as context-heavy work. The job isn't routine. The job is figuring out, in the first 30 seconds, what's actually happening, where, who's affected, what changed recently, what to try first. By the time the engineer has the context, the actual fix is often the easy part.

Every minute spent gathering context across five tools is a minute the incident continues, the customer waits, or the SLA ticks closer to breach. AI that helps gather context is high-impact. AI that tries to take over the resolution layer creates the AI babysitting toil the Runframe 2026 report named, where 42% of enterprises with AI in incident management report higher human oversight costs than before adoption.

That's why our MCP servers ship read-only by design. Not because the technology can't do more. Read-only is where AI adds value safely. Write actions need a different trust model, and we're building toward that, not skipping over it.
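Read-only can also be verified from the client side rather than taken on trust. A sketch of a simple guard that allow-lists tools by naming convention; the tool names here are hypothetical, and a real check would inspect each tool's declared schema, not just its name.

```python
# Heuristic guard: only pass through tools whose names signal a read.
# Tool names below are hypothetical, for illustration only.
READ_VERBS = ("get_", "list_", "search_", "find_")

def is_read_only(tool_name: str) -> bool:
    """True if the tool name starts with a read-style verb prefix."""
    return tool_name.startswith(READ_VERBS)

advertised = ["list_incidents", "get_on_call", "search_requests", "create_request"]
allowed = [t for t in advertised if is_read_only(t)]
print(allowed)
```

A write-capable tool like the hypothetical `create_request` never reaches the model, which is the same trust boundary our servers enforce on their side.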

The metaphor we like is Informatica's. MCP is "USB-C for AI." A standard plug that any AI client can use to connect to any data source. The connector is universal. The data on the other end is yours. The intelligence happens on top of both.

That's our bet. The data layer is the differentiator. The model is the part that gets cheaper every quarter.

What else we shipped this quarter

We talk a lot about MCP because it's the most visible expression of the philosophy, but the philosophy applies to everything we shipped this quarter. Every release was built around the same idea. Here's a selection:

  • Noise Reduction: Group redundant alerts intelligently using time or content correlation to drastically reduce on-call engineer burnout and page fatigue.
  • ServiceNow Integration 2.0: Scale your workflows effortlessly with new auto-mapping, bulk setup capabilities, and zero Configuration Item dependencies.
  • Virtual Agent File Attachments: Accelerate issue resolution by allowing users to drop files and images directly into AI chat to provide rich, immediate context.
  • Request Classifier Intent Detection: Ensure tickets always land in the correct queue by using AI to accurately detect a request's true intent before category assignment.
  • Workflow Actions for Jira and Slack: Keep stakeholders perfectly aligned by automatically spinning up Jira tickets and Slack war-room messages the second an incident triggers.
  • AI Usage Reporting: Easily track platform adoption and measure the concrete ROI of your team's AI feature usage directly within your standard analytics dashboard.
  • Language-Aware AI Assist: Eliminate mixed-language errors and translation confusion with an AI Note Assist that automatically detects and generates content in the author's native language.

The pattern across everything we shipped this quarter is that nothing requires a separate AI license. Nothing requires a separate add-on. Nothing requires a separate vendor contract. The AI is part of the platform you're already paying for. The MCP servers are part of the platform you're already running. We're not building an AI product on the side. We're building one product that works the way modern engineering teams already work.

What's coming next: we're investing heavily in the agentic ecosystem, and the next phase will move beyond read-only. Narrow, well-bounded agentic actions that respect the same audit trails and approval gates as any other change. Targeted, structured, traceable. Not "the AI decides what to do." More like "the AI does this one specific thing in this one specific situation, with a clearly bounded blast radius and full rollback."

What good AI in your operations actually looks like

The MCP servers are live. The day-in-the-life is happening in customer environments today, not someday. The philosophy is shipping in code, not slides.

If you're picking AI tools for your service operations or your incident response in 2026, here's the question worth asking. Not "whose AI is smarter." Not "whose model has the most parameters." Ask: whose AI gets access to my real data, on my terms, with the audit trail my compliance team needs?

The vendor whose answer to that question is a working MCP server, two of them in our case, is the vendor whose AI will actually be useful next year. The vendor whose answer is a proprietary chatbot you can't extend, can't audit, can't connect to your other tools, is the vendor whose AI will look impressive in a demo and quiet in a real outage.

We want you to spend less time switching tabs and more time fixing what's actually broken. Less time hunting for context, more time using it. Less time being sold somebody else's AI, more time using yours.

Connect your Xurrent IMR or Xurrent ITSM account to Claude in under five minutes. Ask it something real. See what your data looks like coming back.

Rohan Taneja is the IMR PMM at Xurrent.