
7 analysis skills to teach your AI client (and the prompts behind them)

One of the most common questions we hear after product teams connect Mixpanel to their AI client is: "What should I actually ask?"
It's the right question, but the best answer isn't just a list of prompts. That's because the teams getting the most out of Mixpanel’s MCP server are focusing on building analytical skills: persistent, reusable instructions that make every future analysis faster and more consistent.
Here's a useful way to think about the difference:
- A prompt is a one-time question. You ask it, get an answer, and move on.
- A skill is a standing instruction you give your AI client once. It shapes how every future analysis runs—what dimensions to include, how to segment results, what signals to prioritize—without you having to repeat yourself.
Prompts are how most people get started. Skills are how teams scale. This guide covers both.
The examples here focus on Claude and Claude Skills, which let you define, through simple prompting, reusable instructions for Claude to follow across sessions. Other AI clients have similar capabilities—ChatGPT's Custom GPTs work on a similar principle—so the underlying logic applies broadly, even if the setup differs.
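To make that concrete, here's a rough sketch of what a saved skill can look like under the hood. This assumes the SKILL.md format Anthropic uses for Claude's Agent Skills; the skill name and wording below are illustrative, and you can create the same thing by simply describing the skill to Claude in conversation.

```
---
name: mixpanel-analysis-defaults
description: Standing instructions for how to run Mixpanel analyses for our team
---

When running any Mixpanel analysis:
- Only reference events and properties verified by the data team; flag anything uncertain instead of using it.
- Break funnel results down by platform, acquisition channel, and user cohort by default.
- Rank acquisition channels by 30- and 90-day retention, not volume.
```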
A framework for prompts that actually work
Mixpanel’s MCP server reads your entire project—events, properties, the full schema—and translates plain-language questions into analysis. A prompt gets sharper the more context it contains. Four elements help:
- Behavior: What action are you measuring?
- Population: Which users? (new, paid, iOS, etc.)
- Timeframe: When? (last 7 / 30 / 90 days, or specific dates)
- Output: How should the answer look? (table, breakdown, summary)
You don't need all four every time—but they're the first place to look when something comes back too broad.
Here's the difference in practice:
❌ "Show me funnel drop-off."
✅ "Where are new iOS users [population] dropping off in our checkout funnel [behavior] in the last 7 days [timeframe]? Break it down by acquisition channel [output]."
The second prompt doesn't require any more knowledge—it just applies the framework. Once you're in the habit, it takes seconds.
If a result comes back too broad, the missing ingredient is almost always population (which users?) or timeframe (when?), and adding just one usually makes a meaningful difference.
Quick reference
Prompt vs. skill: what's the difference?
Prompts are how you get started. Skills are how your team scales.
|  | Prompt | Skill |
|---|---|---|
| What it is | A one-time question you ask in the moment | A standing instruction that shapes every future analysis automatically |
| Best for | New questions, exploration, one-off investigations | Recurring analysis patterns your team runs regularly |
| How you use it | Type it fresh each time | Save it once in Claude; it applies automatically going forward |
| What it produces | One answer | Consistently structured answers, every time |
| Analogy | Asking a colleague a question | Briefing a colleague on how to work with your team |
Skill 1: Understand your data model
Before you can ask good questions, you need to know what you're working with—what events exist, what properties they carry, and what might be stale or missing. MCP makes this orientation fast.
Prompt:
List all events and properties in this project, grouped by category. Flag any that haven't fired in the last 90 days.
Start here if you're new to a project or doing a pre-analysis audit. Anything that hasn't fired in 90 days is usually either abandoned tracking or a renamed event that left the old name behind.
To turn this into a skill:
For all my analyses, only reference events and properties that are standardized—used in at least two other reports and verified by the data team. If you’re unsure whether an event meets that bar, flag it rather than using it.
This means you never have to remind Claude to stay within your clean data layer. It just does.
Run the data model orientation prompt at the start of any analysis sprint—before you get into funnels or retention. Knowing which events are stale or unverified upfront prevents an entire analysis from being built on unreliable signals.
Skill 2: Diagnose conversion and drop-off
A good funnel analysis doesn't just show where users drop off; it segments the answer so it points toward why. Specifying the flow, the user group, and the breakdown dimension is where the real work happens.
Prompt:
Where are new mobile users dropping off in our onboarding funnel in the last 7 days? Show me a step-by-step breakdown.
Drop-off on mobile looks different from drop-off on desktop—combining them obscures both. A well-framed prompt starts you in the right place.
To turn this into a skill:
For every funnel analysis, automatically break results down by platform, acquisition channel, and user cohort. Don’t wait for me to ask—include these dimensions by default.
Now every time you ask about a funnel, you get the full multi-dimensional view. No follow-up needed.
Skill 3: Find what's driving growth
Most teams default to volume metrics. The more useful question is always: Which inputs are producing durable growth?
Prompt:
Which acquisition channels drove the most activated users in the last 90 days? Rank them in a table.
This follows the framework and gets you a clean, comparable view across channels.
To turn this into a skill:
When analyzing acquisition channels, always rank by 30- and 90-day retention, not volume. Channels that drive high traffic but low retention should be flagged, not celebrated.
The "rank by retention, not volume" instruction is doing real work here. It shifts the output from a traffic report to a quality-of-growth view. You can see that channels driving high volume with low retention are just filling a leaky bucket. Once this is a skill, you never have to remember to ask for it.
For more on how product teams are using MCP in practice, see how Mixpanel uses its own MCP server and what makes a good analytics prompt.
Skill 4: Connect behavior to retention
Retention analysis is where specifying the right group of users matters most. Free users and paid users retain at different rates and for different reasons. Asking without that context returns an aggregate that masks exactly what you're trying to find.
Prompt:
Which in-app behaviors are most strongly correlated with 30-day retention for users who signed up in the last 60 days?
This gets you the behavioral signals that predict long-term retention, which is often more actionable than the retention curve itself.
To turn this into a skill:
Always separate retention analysis between free and paid users and run them individually. Never combine them into a single aggregate unless I explicitly ask you to.
Running the analysis separately for free and paid users is cleaner, and it often reveals completely different behavioral drivers. Making it a standing instruction means you always get the right split.
Skill 5: Trace individual user journeys
Aggregate analysis tells you what's happening at scale. User-level investigation tells you why, especially when you combine event data with session replays. One well-investigated user journey can surface an edge case no funnel chart would catch.
Prompt:
Why did user 12345 submit negative feedback on April 23, 2026? Summarize what you find.
This is simple to ask, and MCP brings the whole investigation into a single conversation.
To turn this into a skill:
When investigating individual users, always pull from all available signals: event activity, session replays, feature flags, support tickets, and any frustration indicators. Summarize findings in order of most likely relevance to the issue.
Use this as a complement to aggregate analysis, not a replacement: Find the drop-off in the funnel first, then pull individual journeys to understand the pattern underneath.
Skill 6: Govern your data at scale
Data governance—cleaning Lexicon, updating descriptions, tagging events—is important but tedious. MCP changes the economics. What used to be a team-level backlog item becomes a prompt.
Prompt:
We just added the high-value purchase event. Can you write a description for it, using our existing documented events as a guide?
MCP can handle this work by inferring intent from the naming patterns already in your project.
To turn this into a skill:
When writing or updating event descriptions, use existing documented events to infer naming conventions and intent. Highlight any events that may be duplicates of existing ones. Always show me a preview of proposed changes before applying them.
The teams building the MCP server at Mixpanel use this workflow to keep their Lexicon clean without treating it as a dedicated project. The preview step is important because governance changes at scale are hard to undo, so always review before applying.
When using MCP for governance at scale, share your naming conventions up front—paste a few examples of well-documented events at the start of the session. This lets MCP infer your patterns rather than guess from scratch, and the descriptions it writes will be far more consistent.
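For example, a context-setting message at the top of the session might look something like this (the event names and conventions here are invented for illustration):
"Our events follow object_action naming: checkout_completed, plan_upgraded, report_exported. Descriptions say what the event means, when it fires, and which properties matter most. Use these as the pattern for anything new you document."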
Skill 7: Build reporting that runs itself
The highest-leverage use of MCP is turning a repeatable question into a workflow that produces output on its own. Product performance summaries, executive updates, weekly metric digests: anything your team generates on a schedule is a candidate.
Prompt:
Generate a product performance report for all active users covering the last 7 days. Format it as a Slack update with key metrics, trends, and one recommended action.
To turn this into a skill:
For my weekly product report, always include: top-line activation and retention metrics, the biggest week-over-week movement (up or down), one segment that’s behaving differently from the average, and a single recommended action. Keep it to five bullets or fewer.
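For a sense of the shape that instruction produces, here's a rough template; the bracketed values are placeholders, not real output:
- Activation: [X]% ([change] week over week); 30-day retention: [Y]% ([change])
- Biggest mover: [metric], [up/down Z%] vs. last week
- Outlier segment: [segment] is [behaving how] relative to the average
- Recommended action: [one sentence]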
Teams that build these skills shift analyst time away from standing requests and toward questions that require real judgment. The weekly Slack summary, the executive update, the monthly performance review—all candidates for a skill.
The real unlock: Chaining prompts into workflows
None of these skills delivers its full value in isolation. What separates teams getting real value from MCP is knowing how to chain prompts into an investigation: asking one question, seeing what surfaces, and following it somewhere specific.
Here's an illustrative chain, using the growth and retention skills above:
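1. "Which acquisition channels drove the most activated users in the last 90 days? Rank them in a table."
2. "For the top three channels, compare 30-day retention. Break it down by platform."
3. "For the channel with the weakest retention, which behaviors separate users who churn in the first 30 days from users who stay?"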
Each prompt narrows the lens. By the third question, you've moved from a broad acquisition view to a targeted churn investigation—without leaving the conversation.
Summary reference
7 skills at a glance
Each skill starts with a prompt. The instruction turns it into something that scales.
| Skill | Starter prompt | Skill instruction |
|---|---|---|
| Understand your data model | List all events and properties, grouped by category. Flag any that haven't fired in 90 days. | Only reference events used in at least two reports and verified by the data team. Flag unknowns rather than using them. |
| Diagnose conversion and drop-off | Where are new mobile users dropping off in our onboarding funnel in the last 7 days? Show me a step-by-step breakdown. | For every funnel analysis, automatically break results down by platform, acquisition channel, and user cohort. |
| Find what's driving growth | Which acquisition channels drove the most activated users in the last 90 days? Rank them in a table. | Always rank acquisition channels by 30-day retention, not volume. Flag channels with high traffic but low retention. |
| Connect behavior to retention | Which in-app behaviors are most strongly correlated with 30-day retention for users who signed up in the last 60 days? | Always separate retention analysis between free and paid users. Never combine them into a single aggregate. |
| Trace individual user journeys | Why did user [ID] submit negative feedback on [date]? Summarize what you find. | Always pull from all available signals: event activity, session replays, feature flags, support tickets, and frustration indicators. |
| Govern your data at scale | We just added the [XYZ] event. Write a description for it using our existing documented events as a guide. | Use existing events to infer naming conventions. Flag potential duplicates. Always show a preview before applying changes. |
| Build reporting that runs itself | Generate a product performance report for all active users covering the last 7 days. Format it as a Slack update. | Always include top-line activation and retention, the biggest week-over-week movement, one outlier segment, and one recommended action. Five bullets max. |
The same chaining logic applies across any combination of the seven skills:
- Start with a data model audit, then follow undocumented events into a governance cleanup
- Start with a funnel diagnosis, then pull individual user journeys for the drop-off segment
- Start with a retention correlation, then build a Board tracking the top-correlated behavior
Prompts get you started. Skills make it repeatable. Chains are where the real analysis lives.
Mixpanel's MCP server is available now. Start with a prompt. Build toward a skill.


