How Segment avoids user onboarding drag for its technical product
Let’s get right into it: What are some of the challenges Segment faces with onboarding?
There are so many challenges. Segment enables at least 30 different use cases, so people tend to come to us with this grand vision of digital transformation and say, “Let’s do all these things.” That’s great, we want you to get there. But we also know that it’s really hard to do all of that at one time.
So what we have to do is to collect the right information to enable the specific use case—or use cases—each customer is anticipating. On the enterprise side, they usually have very complex and boutique use cases.
When you’re approaching onboarding for a powerful product like Segment, there’s this urge to get your new users up to speed on everything it can do—all the use cases. But we’ve found that’s basically like trying to boil the ocean. We’ve begun seeing success by starting each customer with use case-driven onboarding.
What does use case-driven onboarding look like?
We used to focus more on pointing customers toward information so they could teach themselves how to use the product. We still have that info available, and it’s still important, but we don’t recommend people start there. Instead, we give them a quick survey that asks them why they signed up for our product.
So rather than just saying, “Hey, you’re here to see a return on ad spend, and here’s the playbook for that,” we use the survey information to narrow down the particular use case that’s most relevant to them.
Then, we measure the value of that use case so they can gather buy-in to do more. Many companies now have fewer engineering resources, making it harder to get engineer buy-in. That makes it doubly important to explain why they should prioritize those resources for a given purpose.
We start very slim, narrow, and tight on that particular use case and drive it to completion. Then we measure the value of it. It’s a lot easier to get buy-in from engineers to do more work if you can prove the value of the first use case. Once you’ve done that, you can continue to add more use cases.
How does Segment measure success here?
We have built several different adoption scores based on different kinds of features. These are all machine learning-based models that we have built with help from our data science team, and the models are trained on retention data. We know that if we’re getting to certain score thresholds, it’s going to drive good gross revenue retention (GRR) outcomes.
We have a score for our core product, which is based on various factors including company size, how many different kinds of features they need to adopt, and the timeline by which they need to adopt them. Then we snapshot that info in Mixpanel weekly, along with all the underlying components of the score. So when we’re onboarding someone, we’re looking at that score weekly to measure impact. Mixpanel is easy for our team to use, so we can find that info on our own rather than waiting for the analytics team to do it for us. That’s key in allowing us to work efficiently.
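Segment’s real adoption scores are machine-learning models trained on retention data, so the sketch below is only an illustration of the general shape: a score built from weighted component signals, checked against a health threshold. The component names, weights, and threshold are all made up for illustration.

```python
# Hypothetical sketch only: illustrates a weighted adoption score with a
# threshold check. Component names, weights, and the cut-off are invented;
# Segment's actual scores come from ML models trained on retention data.

CORE_WEIGHTS = {                 # assumed adoption components
    "sources_connected": 0.35,
    "destinations_active": 0.35,
    "tracking_plan_in_use": 0.20,
    "weekly_active_seats": 0.10,
}
HEALTHY_THRESHOLD = 0.7          # illustrative cut-off, not a real GRR threshold

def adoption_score(components: dict) -> float:
    """Weighted sum of component signals, each clamped to the 0-1 range."""
    return sum(CORE_WEIGHTS[k] * min(max(v, 0.0), 1.0)
               for k, v in components.items() if k in CORE_WEIGHTS)

def is_healthy(components: dict) -> bool:
    """True when the score clears the (assumed) healthy threshold."""
    return adoption_score(components) >= HEALTHY_THRESHOLD

# A weekly snapshot of one workspace's component signals (invented values).
snapshot = {"sources_connected": 1.0, "destinations_active": 0.5,
            "tracking_plan_in_use": 1.0, "weekly_active_seats": 0.4}
print(adoption_score(snapshot), is_healthy(snapshot))
```

Snapshotting a score like this weekly, component by component, is what makes it possible to see which specific signal is dragging a customer down rather than just watching a single number move.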
What are the challenges of getting these measurements?
Measuring use case adoption can be hard. The most basic Segment use case is to collect data, then send data, and clean it along the way. So we think about all the different adoption components almost like a box of Legos, which we then attempt to build into a use case. If we can do that, then we can be reasonably sure that a use case is in play even if we can’t see the end outcome. In addition to analyzing quantitative data and forming assumptions based on that, we look at the results of surveys given to customers during the onboarding process. If that qualitative data aligns with our assumptions, we’re better able to measure the end outcome. All of these components relate to how deeply customers are using our product, how many features they’re taking advantage of, and how successful they are with it.
We calculate behavior for every customer, and we’re able to see how they’re progressing on a daily or weekly basis. We have similar scores for our various products, which give us a pretty accurate understanding of how much a customer has adopted a use case.
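The “box of Legos” idea can be sketched as a coverage check: a use case is inferred to be in play when all of its required building blocks show up in observed behavior. The component and use-case names below are illustrative, not Segment’s actual taxonomy.

```python
# Hypothetical sketch of inferring use cases from adoption components.
# Use-case and component names are invented for illustration.

USE_CASE_BLOCKS = {
    "return_on_ad_spend": {"collect_web_events", "send_to_ad_platform"},
    "data_warehouse_sync": {"collect_web_events", "send_to_warehouse",
                            "schema_controls"},
}

def likely_use_cases(observed: set) -> list:
    """Use cases whose required blocks all appear in observed behavior."""
    return sorted(name for name, blocks in USE_CASE_BLOCKS.items()
                  if blocks <= observed)   # subset test: every block present

observed = {"collect_web_events", "send_to_ad_platform", "schema_controls"}
print(likely_use_cases(observed))  # -> ['return_on_ad_spend']
```

The point of the subset test is that it gives a conservative signal: it only claims a use case is being actioned when every building block is present, which matches the interview’s caveat that the end outcome itself often isn’t directly visible.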
"It's a lot easier to get buy-in from engineering to do more work if you can prove the value of the first use case. Once you've done that, then you can continue to add more use cases."
What’s been the biggest drop-off area you’ve seen for your onboarding?
There are two. Some specific data destinations can be especially tricky to set up, and if people get stuck on them it can derail the rest of the implementation. I won’t get into which destinations tend to have the most friction because it can fluctuate according to implementation, but we’ve tested adding guidance for specific ones in our onboarding nurture to help.
And then identity resolution can be sticky, as well. Often, implementing Segment is the first time a company needs to think about resolving identities across lots of different use cases, so we do see customers stumble here. This is why we layer in lots of best-practice content around this topic.
What check-in points can you have to make sure users are onboarding at a good pace?
We look at whether people have been added to the workspace—that needs to happen in the first three to seven days. During that time, they also need to pick their use case. We also look at what they’re doing in the workspace. Ideally, they’re looking at our catalog and docs or maybe they’re taking some of the courses.
If someone just signs up for a workspace and does nothing for a week, then they’re probably not that interested and they’re not going to move forward. If someone is on a paid plan, those timelines are pushed out a bit longer. But regardless, those actions need to be kicked off by the end of the first month or sooner.
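The milestones above can be expressed as simple pacing checks. The three-to-seven-day and end-of-first-month windows come from the interview; the field names and the exact way paid plans get a longer grace period are assumptions for the sketch.

```python
# Hypothetical sketch of onboarding pacing checks. Day thresholds follow
# the interview (teammates and use case within the first week, activity by
# the end of the first month); the paid-plan grace multiplier is assumed.

from dataclasses import dataclass

@dataclass
class Workspace:
    days_since_signup: int
    teammates_added: bool
    use_case_picked: bool
    any_activity: bool      # catalog/docs views, courses taken, etc.
    paid_plan: bool

def pacing_flags(ws: Workspace) -> list:
    """Return the onboarding milestones this workspace has missed so far."""
    grace = 2 if ws.paid_plan else 1     # assumed: paid plans get more time
    flags = []
    if ws.days_since_signup >= 7 * grace and not ws.teammates_added:
        flags.append("no_teammates")
    if ws.days_since_signup >= 7 * grace and not ws.use_case_picked:
        flags.append("no_use_case")
    if ws.days_since_signup >= 30 and not ws.any_activity:
        flags.append("inactive_first_month")
    return flags

ws = Workspace(days_since_signup=10, teammates_added=False,
               use_case_picked=True, any_activity=True, paid_plan=False)
print(pacing_flags(ws))  # -> ['no_teammates']
```

A check like this only flags what has been missed by a given day, which is what lets the same workspace be evaluated again each week as the windows close.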
What do you do when a recently onboarded customer isn’t as active as you’d like them to be?
We have a whole suite of different strategies. First, we have an email and in-app onboarding program that helps users take critical implementation actions and prioritize use cases. The goal of this program is to keep users on the happy path. Of course, we do inevitably end up with users who fall off the happy path, and for those, we begin a series of automated and manual interventions.
One of our more effective nurture programs is a campaign that pulls all of a customer’s important metrics into a personalized email and sends it to the main contact on that account monthly, starting in the first month. This comes from a pooled CSM—a model in which multiple customer success managers (CSMs) share a pool of accounts. If you click a link in the email, it takes you to a queue in Chili Piper—an automated scheduling tool we use—to book time with that CSM if you’re getting stuck.
Later on in the onboarding process, about three to six months out, we do a lot of triggered outreach. If a customer has taken little or no action, we offer a workspace audit which comes from an onboarding CSM specialist.
We also use Chameleon—an in-product user messaging platform—to give onboarding customers helpful nudges along the way if they’re starting to show disengagement signs. If they haven’t logged in for two weeks and we consider them “red,” which means not active, we start doing more targeted outreach via email. All of that is automated.
And if all of that still doesn’t work?
Eventually, if someone’s consistently inactive, they go on a “hyper red” list. We have an internal escalations program that we call “Red Accounts,” which is reviewed regularly by a council of people across functions, including account executives, customer success, and so on. That’s where we can apply lots of different resources to try to get them engaged. It’s the final backstop to that experience.
If none of that works, we cut our losses and move on to customers who are more willing to be saved. We want to keep people on the happy path and put a lot of effort into that, but you can waste a lot of money trying to save a customer who has no interest in being saved. At some point, you have to just move on and reinvest that effort into keeping more customers on the happy path.
How do you design an onboarding experience that works for more than one persona?
We’ve learned—and are always learning—that it’s about really getting into the heads of each person. At the start of onboarding, we ask people to identify as either “technical” or “non-technical.” We also look at job titles, but because there are so many unique and novel titles nowadays, we have built models to extrapolate information from LinkedIn to help categorize people into usable titles.
If someone self-identifies as technical, we use a more “show not tell” approach. Generally speaking, engineers don’t want to be told what to do. They want to be shown what’s possible and given the freedom to build.
What does “show not tell” look like for the technical persona?
For that group, we built a lean onboarding experience. We don’t send the welcome email right away; we let them play around first. When they do get the email, it says, “Hey, I want to quickly show you how we built this welcome email in Segment, targeting you as an engineer.” We show them, step by step, how we used insights about them to decide what to include, and then give them some suggestions of what to do next.
We’ve gotten so many emails responding to that method saying, “This is really cool.” And that’s always really gratifying because no one responds to lifecycle emails.
How do you decide when and how to offer help to the technical persona?
It’s very experimental. The email sequence I mentioned is adaptive based on how quickly the customer is progressing. If they’re moving along, we don’t bother them. But if they haven’t progressed further in a week, then we send them a nudge.
Qualitative assessment, like actually talking to people, helps as well. At the bottom of every email we send, we invite people to email us if they want to talk to a real person. We lean on our user research team, too.
Then, we continue to experiment. Maybe you want to lengthen or shorten the time window of a nudge period and see what happens. You have to be patient though. It’s tempting to just decide after a week, but you need to let enough users come through to see whether that change is impactful. Oftentimes, it’s months before we get to a solid understanding of whether we made a good call or not, but it’s all learning.
With B2B, it’s often hard to get statistically significant data, which is why we also work closely with our user research team to bring in qualitative data and then make the best decisions that we can. It’s also why we have multiple different tests running in different areas of the customer journey so that we always have something to work on rather than twiddling our thumbs and waiting for one experiment to finish.
And for the non-technical persona?
We built a different flow for people in roles like business or marketing that’s more like, “Hey, you need engineers to help you, so here’s how to go and get buy-in from them.” It’s a tougher situation—a technical signup is twice as successful in onboarding as a non-technical signup. So we still haven’t fully cracked that, but we have created a flow that can give anyone the information and tools to get the help they need.
“Making each step [of onboarding] more fun or interactive can be worthwhile, but creating a game that, say, offers people five points for getting to the next level often isn’t.”
Can a non-technical person follow along with a technical teammate? Should they even try?
Sometimes, a person in a non-technical role is the decision-maker. So they understandably want to be in the loop about where their investment is going. In those cases, it’s critical to have buy-in from them. Overall, onboarding is done best if you have a cross-functional group of people who are thinking about the problem. They all have different perspectives.
What are your thoughts on the gamification trend in user onboarding?
We tried that at one point, with some success, but it didn’t show huge results. Ultimately, buying software has to drive value, and gamifying it doesn’t necessarily equate to value. Making each step more fun or interactive can be worthwhile, but creating a game that, say, offers people five points for getting to the next level often isn’t.
I tell my team all the time that even though we’re not a product team, I want us to think like we’re product managers. Ultimately, we have to help customers get their jobs done. Otherwise, they’re not going to see value. That is key.
So get right into what drives value and lead with content that’s most relevant to that customer. Our customers don’t need to know about all of our settings, so we focus on what they can use in their jobs. We slip in best practices tips along the way but focus on the actionable use case rather than telling them all about the Segment platform and waxing poetic about how awesome we are.
How can you make the most of your internal resources?
We leverage our CSMs to lead workshops and webinars for customers. That’s worked well. These are people who’ve typically focused on one-to-one meetings and deeper relationships. They’ve talked to lots of customers, and they know exactly where the pain points are. Giving them a platform to talk to hundreds of people allows them to efficiently share their expertise. We’ve found a sweet spot in taking some lifecycle marketing best practices and applying them to the onboarding process with a CSM lens.
This interview was lightly edited for clarity and brevity.