Why product analytics should be part of every feature sprint

Last edited: Aug 22, 2022 Published: Jul 26, 2022
Joseph Pacheco App Developer and Product Creator

Product analytics is an ongoing effort. As new features are added to your product, new analytics events should be added to your codebase.

But different companies have different strategies for how and when they update their analytics implementations. Some make changes as they go and treat analytics as a fundamental part of considering a feature complete. Others wait until a later sprint and implement analytics all at once.

I’m here to tell you the former approach is decidedly better than the latter.

Here are six reasons you should always implement your analytics in the same sprint as your feature development, not at a separate time after development is complete.

Implementation is better with less context-switching

When engineers implement features, they write code. When they add analytics events to those features, they add analytics code alongside the new feature code.

As such, the best time for an engineer to make changes to some piece of code is when their attention is fully focused on that piece of code, not weeks after.

In other words, when an engineer is implementing a feature for the first time, their mind is fully aware of the problems the code for the feature aims to solve. They’ve freshly thought through the requirements, the expected behavior, and any edge cases that may impact the final results. This is when they should make changes to the code like implementing analytics because this is when their understanding of the code is at its peak. The higher the understanding, the more accurate and efficient the analytics implementation will be.
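As a minimal sketch of what this looks like in practice (the `Tracker` class, the feature function, and the event name are all hypothetical; a real implementation would delegate to an analytics SDK such as Mixpanel's), the analytics call lives right next to the feature code it measures, written while the feature's edge cases are still fresh in mind:

```python
# Minimal sketch: analytics added in the same commit as the feature.
# Tracker is a stand-in; a real send() would call an analytics SDK.

class Tracker:
    """Collects analytics events; a real implementation would send them over HTTP."""
    def __init__(self):
        self.events = []

    def track(self, name, properties=None):
        self.events.append({"event": name, "properties": properties or {}})

tracker = Tracker()

def export_report(user_id, fmt="csv"):
    """The new feature: export a report in the requested format."""
    report = f"report-for-{user_id}.{fmt}"
    # Analytics implemented alongside the feature, while the engineer still
    # remembers which properties (like format) matter for measurement.
    tracker.track("Report Exported", {"user_id": user_id, "format": fmt})
    return report

export_report("u-42", fmt="pdf")
```

Because the event is written in the same change set as the feature, the properties it captures naturally mirror the feature's actual inputs and edge cases.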

However, let’s say an engineer waits even a week between finishing a feature and implementing its analytics code. They will likely have context-switched to a variety of other areas of the codebase in the meantime. Not only does this increase the likelihood they will overlook something important, but they will also need additional time to get back into the headspace of the feature before they can even begin. This is a clear recipe for both mistakes and wasted time!

Analytics should be tested alongside features

The exact same argument can be made for your QA team.

In order to effectively test whether a feature has been implemented correctly, a QA engineer needs to understand the feature inside and out. They need to be actively aware of all of the requirements, why those requirements exist, and the nuanced value the feature intends to deliver to the user. Otherwise, they risk overlooking some critical behavior.

This is a massive mental burden, and every time they switch from testing one feature to testing another, their focus needs to shift from one highly complex context to another. And as you might imagine, too much switching can get confusing. Requirements blend into one another, and now you have QA team members spending time second-guessing themselves, or worse, allowing mistakes to slip through the cracks.

Implementing analytics in a sprint other than the one in which the feature is developed typically guarantees this kind of context-switching as a fundamental reality of your process. The QA engineer will spend time testing the feature itself, then all the other features, and then wait to test the analytics for the feature after having already cycled through all other features pending release. This means that when they finally start testing analytics, they need to do additional work to get themselves back in the headspace of the feature’s requirements and how it works, just like the engineers implementing the feature in the first place. This is a multi-team waste fest!
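To sketch what testing the two together can look like (the feature, tracker, and event name here are hypothetical; real QA would exercise the shipped SDK in a test environment), a single test can verify the feature's behavior and its analytics event in one pass, while the tester holds one context instead of two:

```python
# Hypothetical QA-style check: feature behavior and its analytics event
# verified together, in the same test, in the same sprint.

class RecordingTracker:
    """Test double that records events instead of sending them."""
    def __init__(self):
        self.events = []

    def track(self, name, properties=None):
        self.events.append((name, properties or {}))

def archive_project(project_id, tracker):
    """The feature under test: archive a project and emit its event."""
    tracker.track("Project Archived", {"project_id": project_id})
    return {"id": project_id, "archived": True}

def test_archive_emits_event():
    tracker = RecordingTracker()
    result = archive_project("p-7", tracker)
    # One context: the feature's behavior and its event are asserted together.
    assert result["archived"] is True
    assert tracker.events == [("Project Archived", {"project_id": "p-7"})]

test_archive_emits_event()
```

If the analytics assertions were deferred to a later sprint, this same test would have to be revisited and the feature's requirements re-learned before it could be written.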

Treating analytics as an afterthought erodes data-informed culture

Aside from reducing accuracy and efficiency through context switching, implementing your analytics in a later sprint creates a distinctly adversarial relationship between your team and product analytics as a whole.

That’s because your team starts to think of implementing analytics as a kind of debt rather than a centerpiece of the project.

Imagine spending several sprints working hard on a bunch of features. You work like heck to get everything implemented and tested on time and get a brief moment of celebration…then surprise! You’re not actually done! You get to revisit every single feature you just completed and slog through adding analytics events for two weeks or more!!

I can tell you from experience that teams hate this, even if they consciously believe implementing analytics later has its benefits. In practice, it makes analytics feel like a chore to engineers: an afterthought, a burden that requires pausing any sense of progress for an awkward amount of time. It’s very hard for people to be enthused about analytics on its own. But when it’s just one additional step in the implementation process, analytics becomes part of the value-creation experience and culture of feature-building, as well as associated with the joy of crossing the finish line. Psychologically, there’s no contest.

Ongoing implementation forces product rigor

One of the greatest general benefits of adding analytics to your products is that doing so forces rigor in the product development process itself. And if analytics is considered a fundamental step in considering a feature complete, it forces product teams to be ready with event definitions prior to the start of implementation, which itself guarantees metrics have also been thoughtfully considered prior to a single line of code being written.

In other words, if you wait to implement analytics in a later sprint, the product team can defer thinking about event definitions and metrics until after a feature has been defined and assigned to engineering. But if you make it an ongoing requirement, then product teams can’t hand off work to engineering until they’ve applied enough rigor to at least think through how measurement will work. This means that flaws in design thinking will be noticed sooner rather than later, and changes to designs or feature philosophy will be forced to the surface before a cent is spent on expensive implementation.
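One way to make that rigor concrete (the event names, properties, and metrics below are all hypothetical) is for the product team to hand engineering a small event-definition spec before any feature code is written:

```python
# Hypothetical event-definition handoff, authored before implementation starts.
# Each entry names the event, the properties to capture, and the metric the
# product team intends to build from it.

EVENT_DEFINITIONS = [
    {
        "event": "Report Exported",
        "properties": ["user_id", "format"],
        "metric": "weekly exports per active user",
    },
    {
        "event": "Export Failed",
        "properties": ["user_id", "format", "error_code"],
        "metric": "export failure rate by format",
    },
]

# Engineering can sanity-check the spec at handoff time: every event must
# name at least one property and the metric it is meant to support.
for definition in EVENT_DEFINITIONS:
    assert definition["event"] and definition["properties"] and definition["metric"]
```

Writing this spec forces the "how will we measure it?" conversation before implementation, which is exactly where design flaws tend to surface.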

Analytics discussions increase engineer understanding

In addition to enforcing rigor for product teams, analytics is also a powerful way to increase an engineer’s understanding of the features they are implementing.

As an engineer, it’s extremely difficult to implement a feature correctly without a nuanced understanding of its value and role in the product overall. But since engineers and product folks tend to have such different values and perspectives, nuance is often lost in translation as the product team attempts to define stories and epics for engineering.

Analytics, however, can help with this.

Engineers are technical. They tend to process technical information better than less formal information, and that’s where analytics comes in. When product owners define events and metrics for a feature, they are really translating product nuance into formal requirements and measurements. When engineers see those event definitions and metrics, they gain a technical way to understand what the point of the feature is in the first place. There’s less reliance on language that resonates with product-oriented folks and greater access to the more technical frames that engineers prefer to parse.

As such, engineers gain better insight into the purpose of features early, and their feature implementations are better because of it.

Delayed implementation means delayed insight

Product analytics is so important because it allows you to test your assumptions as a product owner. That is, you may design a feature with a set of assumptions as to how it benefits your users, only to discover that this feature is providing a different kind of value—or even having a detrimental effect on user engagement.

When analytics implementation is pushed to a later sprint, features may even be released with little or no visibility into how they affect your users once they are out the door.

That means you may need to wait weeks after release before you notice a strong negative trend in user behavior, or before you discover some powerful insight that could have allowed you to pivot into even greater engagement much earlier.

The point is, it’s always better to have visibility right away, and delaying implementation delays that visibility, either by releasing products without analytics or postponing the release of feature sets entirely.

About Joseph Pacheco

Joseph is the founder of App Boss, a knowledge source for idea people (with little or no tech background) to turn their apps into viable businesses. He’s been developing apps for almost as long as the App Store has existed—wearing every hat from full-time engineer to product manager, UX designer, founder, content creator, and technical co-founder. He’s also given technical interviews to 1,400 software engineers who have gone on to accept roles at Apple, Dropbox, Yelp, and other major Bay Area firms.

Gain insights into how best to convert, engage, and retain your users with Mixpanel’s powerful product analytics. Try it free.
