
We built the Mixpanel MCP server. Here’s how we’re actually using it.

Among the many things Mixpanel has shipped in recent months, one capability stands out for how fundamentally it changes the day-to-day of product dev work: the Mixpanel MCP server.
MCP—Model Context Protocol—is the open standard that lets AI clients connect to external tools and data sources. Once you connect Mixpanel's MCP server to the AI tools your team already uses—Claude, ChatGPT, Cursor, Windsurf, and more—you can ask questions in plain language and get answers wherever your teams already work and make decisions.
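Exact setup varies by client, but most MCP-capable clients register servers in a JSON config file. As one illustration, a Claude Desktop-style entry might look like the sketch below — the server URL is a placeholder, not Mixpanel's real endpoint, so check the Mixpanel docs for the actual connection details:

```json
{
  "mcpServers": {
    "mixpanel": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://your-mixpanel-mcp-endpoint/sse"]
    }
  }
}
```

Here `mcp-remote` is a common bridge for connecting a desktop client to a remote MCP server; clients with native remote-server support can point at the URL directly.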
That's not just a convenience. It's a different way of working. The gap between a question and an answer used to involve knowing where to look, knowing how to build the report, and carving out time to do it. The MCP closes that gap to a prompt.
To understand what that looks like in practice, we chatted with engineers Ken Sherman and Gonzalo Lopez Bascur and designer Jess Chaidez—the team building the Mixpanel MCP server, and the team dogfooding it constantly. Here's how they're putting it to work.
Get to answers faster
This is the most immediate unlock. With the Mixpanel MCP server connected to your AI client, you describe what you want in plain language. The client reads your Mixpanel project—event schema, properties, the whole picture—and returns answers directly, as reports and full Boards.
When our teams launched the closed beta for a new feature for setting roles and permissions in Mixpanel, Jess prompted a Board to be built for tracking adoption of the new events—no manual chart-building required. "Just being able to describe what you're looking for is lovely," she said. "You get a link to a Mixpanel Board or report that usually has most of what I’m looking for, or, at the very least, it’s a really good starting point for me to work from."
Ken says having an AI coding agent connected to Mixpanel MCP gets him to insights with even less work on the front end. "You can have Claude Code read your code and create dashboards for you without telling it what the event names are. It knows from the context of your codebase what you actually care about and creates all that for you." A prompt as simple as "what should we be tracking to monitor errors and performance?" gets you a recommended instrumentation plan and a live dashboard in one motion.
The bottom line: The time between a question and a trusted, actionable answer has gotten a lot shorter.
Ask your AI client to include contextual explanations alongside the reports it builds. Mixpanel Boards support written descriptions, which make dashboards more self-serve and easier for all stakeholders to understand, and most AI clients include them when you ask. “If I ask for descriptions, it adds a really good explanation,” Gonzalo said. “If I say nothing, it’s just a title and maybe a line. It obviously saves me a lot of time if I don’t have to write these myself.”
Find, diagnose, and fix bugs all in one workflow
When an AI coding agent has access to both your codebase and your Mixpanel data at the same time, the loop closes in both directions: not just "read the code and tell me what to track," but "read what Mixpanel is showing and fix what's wrong in the code."
Ken walked through a real example. He had a dashboard tracking back-end performance—endpoint traffic, success rates, and latency—and started noticing timeouts concentrated on one endpoint. Ken prompted Claude Code to explore the codebase and identify possible causes:
We're noticing timeouts occurring against <endpoint>. They seem to happen in concentrated spikes at irregular intervals. Our Mixpanel dashboard with this endpoint data is <dashboard_name> in project <project>. Consult the data, the codebase, and our infrastructure and report back with any action items that would address this.
It flagged a caching opportunity. He reviewed it and challenged it by asking things like “How much would our Redis storage increase by?” and “Show me the backend code that supports your theory.” After confirming everything made sense, he had the agent implement it.
But here's what makes this loop different: After the coding agent applies a fix, it calls the MCP to validate that the changes are reflected in the data in Mixpanel—no human required to manually check. Gonzalo described it: "You don't even need a human checking the dashboard manually. It asks for the data and checks that now the property is there or that the fix is in Mixpanel the right way."
Instrument, observe, fix, verify—all run by the agent.
The bottom line: A performance issue that would have taken hours to trace, fix, and confirm can now be resolved in a single agent session.
Don’t just prompt your AI client to deliver answers or actions; have a conversation with it. Ken’s advice: “It’s always wise to challenge AI. It’s always a larger conversation than just accepting what it first spits out.” Refine the scope, suggest additions, then put it to work.
Automate data governance at scale
Even though our teams are deeply experienced with Mixpanel implementation and data governance, they've still found the MCP server to be a big help in those areas.
Gonzalo recently caught a small but widespread property naming issue. While he was reviewing a dashboard, he noticed a property showing up as tool instead of tool_name—a small typo with real downstream consequences for anyone filtering or aggregating on that field. He told Claude Code what was wrong and asked it to fix it. "It knows the event name, it knows where it's being tracked, it knows everything," he said. "It solved the issue without me having to think about what was wrong or going through to manually correct."
Because the coding agent has visibility into both the codebase and Mixpanel, it can trace a data quality problem from symptom to source and patch it in one motion. No context-switching, no hunting through files for where a given event fires.
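A fix like Gonzalo's usually comes down to a small rename applied consistently wherever the event fires. As a rough sketch of what the agent is doing (event and property names here are illustrative, not Mixpanel's API), assuming events are dicts with a `properties` payload:

```python
def fix_property_name(event: dict, old: str = "tool", new: str = "tool_name") -> dict:
    """Rename a drifted property key on an event payload.

    The names "tool" and "tool_name" mirror the typo from the story;
    substitute your own schema's names.
    """
    props = dict(event.get("properties", {}))
    if old in props and new not in props:
        props[new] = props.pop(old)  # migrate the value under the agreed key
    return {**event, "properties": props}


# Example: the mis-named property is moved to the conventional key
event = {"event": "AI Query Completed", "properties": {"tool": "claude_code"}}
fixed = fix_property_name(event)
# fixed["properties"] == {"tool_name": "claude_code"}
```

The point of the workflow is that the agent finds every call site for you; a helper like this (or a direct edit to each tracking call) is what it applies once the drift is identified.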
But data governance isn't only a code problem. Jess uses the MCP for a different layer entirely: Lexicon maintenance. Keeping event and property descriptions current is notoriously manual work—you'd open Lexicon and update entries one by one. With the MCP, an AI client can infer and apply descriptions across every event missing one, at scale, directly in Lexicon. "It saves hours of time," she said. "And that is very nice."
The bottom line: Whether it's a tracking bug buried in code or metadata gaps across hundreds of events, the MCP turns data governance from manual upkeep into something you can run in a prompt.
When working with your AI client to generate event names and descriptions, provide naming conventions and description language that is already familiar to your teams. And before committing to changes at scale, ask for a preview of proposed updates across your Lexicon first.
Prompt answers from Mixpanel and your other data tools at the same time
The MCP's reach extends beyond Mixpanel itself. Because it connects inside the same AI client as your other integrations—like a research repository, internal docs, Slack, etc.—your AI client can pull from all of them in a single conversation or query.
Jess uses this for user research synthesis. Her team stores qualitative product feedback separately from the behavioral data that lives in Mixpanel. Before the MCP, the workflow was checking one source, then the other, then manually connecting the dots. Now she can ask her AI client to look at both at once—surfacing where users reported friction alongside the event data showing where they dropped off. "Being able to look at completely different sources and types of data easily, which before you'd have to look at individually—being able to compile that data is really nice," she said.
The same logic applies within Mixpanel itself. Enterprise teams (ours included) often run separate projects for each product line, business unit, or platform. With the MCP, a single prompt gets you a compiled answer across all of them. "That's been really nice for saving the team time," Jess said.
The bottom line: Product decisions rarely live in one data source. The MCP brings Mixpanel into the same AI session as your research tools and other data—so synthesizing across all of them takes a prompt, not a manual effort across separate tabs.
Even more MCP integration to come
What Ken, Gonzalo, and Jess described is already in production—but the roadmap goes further.
The team is building toward embedded Mixpanel UX inside AI clients like Claude and ChatGPT: Think interactive report previews surfacing right in the chat, not just links back to Mixpanel. Gonzalo's take: "If instead of saying the result in plain text, it shares a dashboard in Mixpanel which has the data that you trust—that's a lot more powerful. I personally need the Mixpanel confirmation."
Also on the horizon: Claude Skills to automate full workflows and instrument Mixpanel automatically, and deeper API investments that unlock more AI capabilities across the board. Together, they point toward a future where the AI doesn't just retrieve your data—it actively helps you act on it.
For the team building it, the impact is already clear. As Ken put it: "It makes our jobs a lot easier."
That's the idea.
The Mixpanel MCP server is available now. Learn more about getting started.


