
Product analytics implementation hiccups that are easy to spot

Last edited: Aug 22, 2022 | Published: Oct 5, 2021
Joseph Pacheco, App Developer and Product Creator

You can spend all the time in the world devising a thoughtful, nuanced strategy for your product analytics, but if your event tracking isn’t set up behind the scenes just how you need it, some (or all) of your data might end up far less helpful than you’d like.

As they say—garbage in, garbage out.

The good news is that many technical quirks tend to follow certain patterns that can be easily spotted—even by non-engineers.

Here are four technical hiccups associated with corrupt event tracking data, how to spot them, and what to do about them.

Successive event repeats

Let’s start with an easy one: a successive event repeat. This problem occurs when an event is inappropriately tracked two or more times in a row from the same device. I say this hiccup is an easy one to catch because it’ll look a little fishy even to most novice product analytics practitioners.

For example, let’s say you have a music app wherein you track an event called Song Play with an event property called Song. This event should fire every time a user plays a song and include the name of the song played in the aforementioned property.
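To make that concrete, here’s roughly what such a call might look like using Mixpanel’s Swift SDK. (This is a minimal sketch: songDidStartPlaying is a hypothetical hook in your player code, and it assumes the SDK was initialized at app launch.)

```swift
import Mixpanel

// Hypothetical hook invoked by the app's player whenever a song starts.
// Assumes Mixpanel.initialize(token:) was called at app launch.
func songDidStartPlaying(_ songTitle: String) {
    // Record one Song Play event, attaching the song's name as a property.
    Mixpanel.mainInstance().track(event: "Song Play",
                                  properties: ["Song": songTitle])
}
```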

Now let’s say you open Mixpanel and see that Song Play has been tracked three times in a row from the same device, and that the song for all three plays was My Heart Will Go On.

It’s perfectly reasonable to assume that this user simply loves Celine Dion and intentionally played that song on repeat three times in a row.

But this could also be a result of a problem in the event tracking code.

In other words, imagine the user taps the song My Heart Will Go On only once and, in turn, listens to it only once. Even so, the code that tracks the Song Play event fired three times when it should have fired only once.

If you don’t write code yourself, it can be bewildering that something like this is even possible. If the user only tapped the song once and heard it once, and the engineer only intended to call the code once, what kind of technical chaos would lead it to be tracked more than once?!

Turns out, there are a lot of technical reasons why this might happen, and they have to do with where in the code the event was tracked. And sometimes, what seems like an obvious place to track an event actually leads to a completely unexpected result from the perspective of your engineers.

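To give just one hedged illustration of how this can happen in an iOS app (the view controller and notification name here are hypothetical): if the tracking code lives in an observer that gets registered every time a screen appears, a single real song play can fire the event once per registration.

```swift
import UIKit
import Mixpanel

class PlayerViewController: UIViewController {
    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        // BUG: a new observer is added every time this screen appears, and the
        // returned token is discarded, so none of them is ever removed. Visit
        // the screen three times, and one playback notification fires the
        // track call three times in a row.
        _ = NotificationCenter.default.addObserver(forName: .songDidPlay,
                                                   object: nil,
                                                   queue: .main) { note in
            let song = note.userInfo?["title"] as? String ?? "Unknown"
            Mixpanel.mainInstance().track(event: "Song Play",
                                          properties: ["Song": song])
        }
    }
}

extension Notification.Name {
    // Hypothetical notification posted by the app's audio player.
    static let songDidPlay = Notification.Name("songDidPlay")
}
```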

So how can you tell, as a non-technical individual, whether this data is real or a successive repeat? For one, if you see multiple songs by multiple users being played multiple times in a row (beyond a reasonable threshold), it’s more likely the result of a bug than a sudden shift in user behavior. Likewise, if you look at the timestamps of all three events and they fall within the same second of one another, that’s a sign the events were tracked all at once by code rather than being spaced out by the amount of time it would realistically take to play a song three times.

So if you encounter repeat events in a row that just don’t feel right, you might want to talk to your engineering team about running automated tests that check for anomalies of this sort, or at least about giving the relevant tracking code a once-over.
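What might such a check look like? Here’s a rough sketch of the idea in Swift (TrackedEvent is a simplified stand-in for however your team represents exported event data):

```swift
import Foundation

// Simplified stand-in for one exported analytics event.
struct TrackedEvent {
    let name: String
    let song: String
    let timestamp: Date
}

// Flags back-to-back identical events whose timestamps are implausibly close,
// which usually signals duplicate track calls rather than a user replaying a
// song. Assumes `events` is sorted by timestamp.
func suspiciousRepeats(in events: [TrackedEvent],
                       threshold: TimeInterval = 1.0) -> [TrackedEvent] {
    var flagged: [TrackedEvent] = []
    for (previous, current) in zip(events, events.dropFirst()) {
        let sameEvent = current.name == previous.name && current.song == previous.song
        let tooClose = current.timestamp.timeIntervalSince(previous.timestamp) < threshold
        if sameEvent && tooClose {
            flagged.append(current)
        }
    }
    return flagged
}
```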

Disjoint event duplicates

Unfortunately, inappropriately duplicated events come in all forms, not just consecutive repeats. That is, an event might appear duplicated in your dashboard, but not one right after the other, and therefore be more difficult to spot.

In other words, the incorrectly duplicated firings of an event will be mixed in with a bunch of legitimately fired events, which could lead you to assume (incorrectly) that the duplicates are legitimate as well—if you even notice the duplicates to begin with.

Let’s continue with the Song Play example above, but this time you see that My Heart Will Go On was tracked once, followed by five different events, and then it was tracked again. Basically, you’ve got a duplicate event sandwich.

I want to re-emphasize that there’s nothing inherently wrong with this pattern; it could easily reflect legitimate data. You just have to dig a little deeper before you can be sure.

That is, sometimes it’s reasonable for multiple events to be tracked within milliseconds of one another. They might indeed reflect a series of distinct events that happened to occur simultaneously and were therefore tracked together and given an arbitrary order.

However, as we realized above, it’s not realistic that two Song Play events could occur in this way by virtue of what a song play actually is. So the fact that the first Song Play in the sandwich occurred within milliseconds of the last Song Play in the sandwich means something is amiss.

So why could that happen from a technical standpoint?

Again, lots of reasons. The simplest is that the event was tracked, mistakenly, in two different places in your code, perhaps by two different engineers. Alternatively, it could have been tracked once in the app code that lives on the device and then again on the server when your app requested the song. In that case, your back-end and front-end engineers should have a chat about why both ended up tracking the same exact event.
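As a hedged sketch of that simplest case (fetchAudio and startPlayback are hypothetical stand-ins for your app’s networking and playback layers):

```swift
import Mixpanel

struct Audio { /* decoded audio data */ }

// Hypothetical stand-ins for the app's real networking and playback layers.
func fetchAudio(for title: String, completion: @escaping (Audio) -> Void) {
    completion(Audio())
}
func startPlayback(_ audio: Audio) { /* begins playing the audio */ }

func playSong(titled songTitle: String) {
    // Track call #1: one engineer logs the event when the user requests a song.
    Mixpanel.mainInstance().track(event: "Song Play",
                                  properties: ["Song": songTitle])

    fetchAudio(for: songTitle) { audio in
        startPlayback(audio)
        // Track call #2: another engineer, unaware of the call above, logs the
        // same event once playback begins. Every play is now counted twice,
        // often with other events landing between the two copies.
        Mixpanel.mainInstance().track(event: "Song Play",
                                      properties: ["Song": songTitle])
    }
}
```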

And these two possibilities only scratch the surface, so it’s not realistic to expect a non-technical team member to catch every anomaly of this sort. However, if user behavior seems too good to be true (like inexplicably doubling or quadrupling overnight), or something just doesn’t feel right about the patterns you’re observing, you may want to consider whether something like this is at play and get an engineer to help you dig for clues.

Event skipping

So far, we’ve talked a lot about how events can be incorrectly double and triple tracked, but they can also be skipped in non-obvious circumstances.

Let’s say you’ve taken it upon yourself to test that the Song Play event was implemented properly. You have Mixpanel open on your laptop while you tap a bunch of different songs and allow them to play. One after the other, the Song Play event appears with the name of each song, as expected.

But then you tap on Over the Rainbow and wait. Nothing happens. No Song Play event is tracked. Why in the world would that be happening?

Is this a network issue? Maybe. But Mixpanel is designed such that the event will be tracked as soon as the network returns. (Never mind that you notice the network is working just fine.)

So you tap it again, the song plays again, and the event is tracked. So you think, “Must have been a one-off glitch,” and you continue to test.

Except the glitch probably wasn’t a one-off.

In this example, it’s entirely possible the engineer (for good reason or by mistake) implemented event tracking logic that requires a song to have already been downloaded at the time of tapping in order for the event to fire. Since Over the Rainbow hadn’t been downloaded when you tapped the first time, the event simply wasn’t tracked even though the song was played.

But when you tapped the second time, the song was already downloaded by virtue of tapping the first time, so the event was tracked as expected.
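In code, the hiccup might look something like this (a hypothetical sketch; isDownloaded, download, and play stand in for the app’s real caching and playback logic):

```swift
import Mixpanel

// Hypothetical stand-ins for the app's caching and playback logic.
func isDownloaded(_ title: String) -> Bool { true /* checks the local cache */ }
func download(_ title: String, completion: @escaping () -> Void) { completion() }
func play(_ title: String) { /* begins playback */ }

func userTapped(songTitle: String) {
    if isDownloaded(songTitle) {
        // Only songs already in the local cache ever reach this track call.
        Mixpanel.mainInstance().track(event: "Song Play",
                                      properties: ["Song": songTitle])
        play(songTitle)
    } else {
        // BUG: the first play of any new song downloads and plays it but never
        // tracks the event. Only a *second* tap takes the branch above.
        download(songTitle) {
            play(songTitle)
        }
    }
}
```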

As such, this seemingly one-off glitch would occur for every song that’s not already downloaded to the device at the time the user taps it, at least until the engineer tweaks the tracking logic to accommodate this case more appropriately.

Again, these kinds of “edge cases” should be tested and considered by the engineering team, but it’s valuable for them to be on your radar so you’re not completely blindsided if the data doesn’t match your expectations. Code is complex, and mistakes like this slip through the cracks more often than we’d like to admit.

Semantic mismatch

So far, the technical quirks we’ve discussed have to do with events being tracked more or fewer times than they should be, as a result of peripheral logic errors.

But there’s an even harder-to-spot class of quirks that result from a fundamental disconnect about the very meaning of the event itself.

That is, it’s not necessarily obvious what should constitute a Song Play in the first place. Does it mean the song was played start-to-finish or merely started? Does it count as a song play if the user skipped to another song halfway through? How about 90% of the way through?

Events need to be defined with laser precision, or product folks risk having one set of intentions while engineers implement the events with an incorrect interpretation of those intentions.

Yet even more insidious is when a PM and engineer are on the same page about what an event means, but the nuances of the implementation don’t always capture the event correctly.

For example, let’s say we precisely defined a Song Play as firing any time a song is started by a user, regardless of how long they listened to the song.

As an engineer, I have lots of options for where to track this event.

I could track it at the instant the user taps the song, but that doesn’t guarantee the song will ever actually play. That is, if the song isn’t downloaded and there’s no network connection, the song may never actually play.

Alternatively, I could track the event at the instant the song starts playing (i.e., sound is coming from the device speakers, unless muted). And that would indeed seem to align with the definition of the event more truthfully, since it guarantees something actually played.

However, product folks also need to have a discussion with engineers about the why behind each event in addition to simply defining its boundaries. Because if you think about it, whether the song actually played in real life might not be that relevant to why this metric is being tracked. Rather, we might really be interested in whether the user intended to play the song, which would actually make our former approach more aligned with the goal of the event despite a slight divergence from its official definition.
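Concretely, the two placements might look like this (a sketch only; the surrounding function names are hypothetical, and a real app would pick exactly one):

```swift
import Mixpanel

// Option A: track intent. Fires the moment the user taps, even if the download
// or network later fails. Matches "the user meant to play this song."
func userTapped(songTitle: String) {
    Mixpanel.mainInstance().track(event: "Song Play",
                                  properties: ["Song": songTitle])
    // ...kick off download/playback here...
}

// Option B: track actual playback. Fires only once audio is really playing.
// Matches "a song actually played."
func playbackDidStart(songTitle: String) {
    Mixpanel.mainInstance().track(event: "Song Play",
                                  properties: ["Song": songTitle])
}
```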

Either way, if product and engineering are not on the same page with depth and nuance, the resulting data tells the wrong story. And this can happen more often than we’d like, because tracking events appropriately requires technical specificity from both the product and engineering perspectives. Misalignment is inevitable, but the more aware the product team is of potential technical snags, the more likely misalignments are to be caught before they fall through the cracks, making your product analytics all the more a source of meaningful insight.

About Joseph Pacheco

Joseph is the founder of App Boss, a knowledge source for idea people (with little or no tech background) to turn their apps into viable businesses. He’s been developing apps for almost as long as the App Store has existed—wearing every hat from full-time engineer to product manager, UX designer, founder, content creator, and technical co-founder. He’s also given technical interviews to 1,400 software engineers who have gone on to accept roles at Apple, Dropbox, Yelp, and other major Bay Area firms.
