Customer Activation

How to Increase Your SaaS Trial Conversions Without Developer Involvement


A product-led growth approach requires you to take an experimental, iterative approach to improving your customer journey. Without a way to implement experiments rapidly, your PLG strategy will be stuck waiting on the developer bottleneck. This approach liberates your team by empowering non-technical team members to experiment and iterate without developer involvement.

In this talk, you'll understand:

The 3 problems holding you back from achieving experimentation nirvana
The Customer Journey Optimization methodology and its key components
The key phases of a SaaS Trial and the 3 types of events you need to track and implement
How the Customer Journey Optimization process helps you improve the customer journey without developer involvement

Audrey Melnik:
Hi there. My name is Audrey Melnik, and I'm here to talk to you today about how to increase your SaaS trial conversions without developer involvement. There's a process to increasing your conversions in a SaaS trial. It's not a matter of finding that one big thing that will have a massive impact on your conversion rate. It's an iterative and incremental process where each change will potentially have a small improvement on your conversion rate.

Audrey Melnik:
If you're familiar with the product-led growth approach, being able to experiment is key to getting our users to the aha moment. We all start out with a hypothesis of what the journey will look like to get our newly signed up trial users to the point that they're willing to enter their payment details and subscribe to our SaaS, but a hypothesis is just that. You can spend all sorts of time on UX and customer interviews, but the real data comes when they start using the product. That's the point where the work is just beginning.

Audrey Melnik:
You may already have your own experimentation process or framework. This is one that I've devised that makes sense to me. This experiment process has these key steps:

1. Track your customer journey in trial.
2. Establish baseline metrics and charts.
3. Identify friction points.
4. Hypothesize reasons for friction.
5. Design the experiment.
6. Implement the experiment.
7. View the test results of the experiment.
8. Choose the more successful version, and rinse and repeat.

But before we can move forward with our experiments, we need to get real on where we're at and what might be holding us back from this experimentation nirvana.

Audrey Melnik:
There's one role in a startup that is constantly a bottleneck for forward movement, and that's the role of the software developer. It's not because they're bad at their job, but because they have way too much to do. There's always a huge backlog of work for the developer, so any new task that comes up needs to be reviewed in light of this backlog and prioritized accordingly. Step number six, the one that says, "Implement the experiment," is often the key bottleneck. This made me realize an opportunity. How can we reduce the tasks that fall into the developer's domain? What are the types of tasks the developer is doing that perhaps should not really be in their purview? Let's get back to that in a bit.

Audrey Melnik:
Back when I was designing and building software for corporates who had plenty of time and money to throw at this, we spent time identifying all the business rules that applied to the software we were building in advance of starting to code. Then, we might even find ways to set those business rules up. So they were editable by the business people perhaps through a business rule management system or by setting up a configuration module to manage a set of values and rules so that the business could change these rules at will.

Audrey Melnik:
Fast forward to today: in our efforts as startups to move fast, we've reverted to engulfing our business logic in code. The irony is that in doing so, we're actually slowing down the velocity of our startup because once again, we're reliant on the developer to make any changes to our business rules. So how can we put this business logic back in the hands of our business people so they can make changes to it using modern methods?

Audrey Melnik:
The first two steps in the experiment process involve analytics. Many teams think more is better, so they track everything. Clicks and page views. Everything. The problem is that when people look at their analytics, they can't see the forest for the trees. They can't separate the signal from the noise. In analytics, yes, we want to track a lot, but we want to do it at the right granularity, and we want to capture the full customer journey, not just the parts the developer has dominion over. We want the right information to be available at the right time in case we may want to segment on a specific attribute.

Audrey Melnik:
At the other end of the spectrum, I'm often amazed when I work with clients just how little they're tracking. They may have done some broad strokes like signups, but then it's a vacuum of information when it comes to what people are and are not doing in their SaaS. Both of these scenarios can amount to the same problem: paralysis. Paralysis from too much data or paralysis from not enough data. What's more, most companies don't activate the data that they have. They don't leverage that data to give them insights into their customers or to tailor a personalized action to move their customer along the journey. Nowadays, with the ever-expanding technology landscape, the complexity of capturing this journey and knitting it together across devices, sessions, and tools is really non-trivial. Could an engineer handle it? Probably, but remember, they have too many other things to do.

Audrey Melnik:
So we all know the saying, "The definition of insanity is doing the same thing and expecting a different result." Yet, what you're doing isn't working. If it were, you wouldn't be here. So we need to break out of our current patterns. Those three problems are some of the core issues that will stop your experimentation process from being a success within the timeframe you need it to be. If you can't experiment or even implement the changes you think will move the needle, then you're likely to stay in this insanity loop.

Audrey Melnik:
So how can we use the experiment process without falling into these same problems? How can we break these patterns so we can deploy this experimentation process? Well, that's where customer journey optimization comes in. Customer journey optimization reduces the cost to acquire and retain a customer and increases their lifetime value, but without developer involvement. The one underlying principle of customer journey optimization is empowering your non-technical team members to move fast and iterate without developer involvement. In doing this, your team members feel unencumbered when experimenting and iterating on their initiatives, and that's really, really powerful.

Audrey Melnik:
Startup problem number one, the bottleneck. It's one issue that everyone I talk with from startups to corporates connects with. I've never met anyone that tells me that their developer is sitting around, trying to figure out what work to do because they've got nothing on their plate. How many times have you come up with an idea you want to implement, and then the second thought you have is that you don't have the developer resources to implement it? I'm guessing you've lost count because I know that I have, and that's where customer journey optimization comes in. It removes business logic from code and puts it back in the hands of the people that make decisions about it. So they can make changes to it without writing code, without deploying a software release, and therefore solving startup problem number two, the business logic engulfment. All it requires is access to the right tools in the customer journey optimization toolkit.

Audrey Melnik:
The customer journey optimization methodology comprises these five key components: the customer funnel, fit-for-purpose tools, the complete view of the customer, playbooks, and principles. Let's start with the principles. What we want to do is connect our fit-for-purpose tools together and integrate them via Segment. Segment is an integration hub, also known as a customer data platform. Basically, when someone is using your app, we want to send data into Segment about who they are and what they're doing. Then, Segment will be able to publish that information out to all the other tools in your ecosystem.
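To make the hub-and-spoke idea concrete, here's a minimal sketch in Python. This is not Segment's actual API (their client libraries expose calls along the lines of `analytics.track`); it's a toy `EventHub` showing how one event sent by the app fans out to every connected destination, so destinations can be added or swapped without touching app code.

```python
# A minimal, hypothetical sketch of the hub-and-spoke pattern a customer
# data platform like Segment provides: the app sends one event to the hub,
# and the hub fans it out to every connected destination.

class EventHub:
    """Stand-in for an integration hub; all names here are illustrative."""

    def __init__(self):
        self.destinations = {}

    def connect(self, name, handler):
        # Destinations are replaceable without touching app code:
        # swapping Help Scout for Zendesk is one connect() call.
        self.destinations[name] = handler

    def track(self, user_id, event, properties=None):
        payload = {"userId": user_id, "event": event,
                   "properties": properties or {}}
        for handler in self.destinations.values():
            handler(payload)
        return payload


received = []
hub = EventHub()
hub.connect("analytics", received.append)
hub.connect("marketing_automation", received.append)

# One call from the app; both destinations receive the same payload.
hub.track("user_42", "User Signed Up", {"plan": "trial"})
```

The app never knows which tools are listening, which is what makes principle number three, replaceable components, cheap in practice.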

Audrey Melnik:
So I've covered off principle number two. Now, number three is all about making these components replaceable. So let's say we're using Help Scout as our help desk, and we want to replace that with Zendesk. We want to be able to do that without having to get a whole lot of effort from the developer to make this change. With this architecture where things are hub-and-spoke, you're able to do that pretty easily without having to get the developer involved.

Audrey Melnik:
Number four: business rules should be changeable by the business and not live in code. This requires a major mindset shift, especially from your development team. Next time you want to implement something, first think about whether this is a business rule that could live outside of code and in some of your other tools. That will give you domain over making changes to it, which is very powerful.

Audrey Melnik:
Number five: we want to be able to experiment and iterate with guiding content. By that, I mean those in-app walkthroughs, pop-ups, tooltips, and all of those kinds of things. You're never going to get that right on day one. If you use custom-built software to implement those, then once again you're tying yourself way too closely to the developer and their bottleneck. So we want to do that using off-the-shelf products so that we can own the content and experiment on it.

Audrey Melnik:
Number six: don't distract the developer. I think you've gotten the gist by now, but really, the developers should be focusing on building the core product, not on converting the customer. Lastly, the last principle is all about being able to experiment with pricing plans, because your pricing is never going to be right on day one, and it's going to evolve as your product evolves. So you want a framework that enables you to make changes and experiment on that pricing.

Audrey Melnik:
So what do we mean by complete view of the customer? Every interaction we have with the customer, every interaction the systems we use have with the customer, and all the relevant attributes of a customer make up this complete view. These interactions and attributes have a number of sources. Your application is one source. Another source is your marketing automation tools; they'll send email events like sent, opened, and clicked. Your in-product adoption tools will send you which onboarding flows the customer is presented with and interacts with, and many, many other tools will send various events. Our integration hub, Segment, will enable this data to be sent to all the other tools that are receiving customer data and events. Each of these tools knows how to consume that information and bring it into their view of the customer or user.

Audrey Melnik:
Our customer funnel is an eight-stage funnel that applies to any SaaS business. Its power is that it provides a framework and a common vocabulary for discussing what needs to be done. Each of these stages has a goal and a set of tools and playbooks we can apply to them. So for discovery, it's getting the user to discover your product. For familiarity, it's getting the user to sign up for a trial or purchase. For trial, it's getting the user to convert to a paying customer. For support, it's educating the user and responding to questions and issues, but in a scalable manner. For purchase, it's supporting subscription and payment. Upsell is encouraging greater spend. Retain is retaining customers longer, and refer is getting new customers from current customers and partners. For this talk, we're focusing on the trial phase and getting the user to convert to a paying customer.

Audrey Melnik:
The final piece in our puzzle is playbooks. Playbooks bring together all of the other pieces of our customer journey optimization: the tools, the principles, the complete view of the customer, and the customer funnel. A playbook maps out the interactions and events, and how these pieces of information flow between tools and trigger various actions. For the trial phase, we have these playbooks: the trial signup playbook, the initial value playbook, the trial extension and expiry playbook, the activation playbook, and the land and expand playbook.

Audrey Melnik:
Most of these playbooks, in ordinary circumstances and in the way you probably do things currently, are usually implemented using one technique: software development. But that was when you only had one tool in your toolbox. The customer journey optimization methodology is about giving you more tools in your toolbox than what you have right now. The tool you currently have is your developer, but we need to reframe our thinking. Next time you want to implement something, ask: is code the best and only way to implement it? To get to a place where you can move faster on your initiatives, we need to use other tools, so we don't have to wait on the developer backlog.

Audrey Melnik:
Let's look at one example of a playbook that applies to the trial stage of the customer funnel: the trial signup playbook. Most likely you've got a similar journey in your app, and most likely it's implemented entirely in code. The purpose of this playbook is to get the user to verify their email address and add them to an onboarding email drip campaign. When this is executed entirely by a software developer, it amounts to a verification email being sent through a tool like SendGrid. The user clicks on that email, and then maybe a single email is sent through SendGrid again to welcome them to your app.

Audrey Melnik:
If a company is a bit more aware of what can be done with marketing automation tools, there might be some data sent to a tool like Intercom. From there, you can own and manage the drip campaign. In that case, you might be wondering how different this playbook really is from your current setup. So think about this scenario: if someone signs up to your app and never opens your verification email, and they can't move forward in your app without verification, then you've just lost those users that don't open or click your email. I can pretty much guarantee that your developer hasn't set things up so that a second email is sent if the first email isn't clicked.

Audrey Melnik:
But with this playbook, by sending the user signup track event through Segment and into your marketing automation tool, you can add them to a multi-email drip campaign that continues to send the verification email and variations on that email until they do actually click on the link and verify their email address. You can test out different subject lines and email treatments if you choose as well.
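The drip logic above can be sketched in a few lines. This is an illustrative Python model, not any marketing automation tool's API: the subject lines and the `next_verification_email` function are made up, and they show the decision of which verification email, if any, to send next based on the user's email event history.

```python
# Hypothetical sketch of the multi-email verification drip: keep sending
# variations until the user clicks the link, then stop. In practice this
# lives inside a marketing automation tool, not in your codebase.

VARIATIONS = [
    "Verify your email to get started",          # first attempt
    "Still there? One click to unlock your trial",  # first retry
    "Last reminder: confirm your email address",    # final retry
]

def next_verification_email(events):
    """events: the user's email event history, e.g. ['sent', 'clicked'].
    Returns the next subject line to send, or None when no send is due."""
    if "clicked" in events:
        return None  # verified; drop them from the drip campaign
    sends = events.count("sent")
    if sends >= len(VARIATIONS):
        return None  # campaign exhausted
    return VARIATIONS[sends]
```

The same structure makes subject-line experiments trivial: swap out an entry in `VARIATIONS` and compare click rates, with no deploy.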

Audrey Melnik:
Also, because this playbook sends these two key track events to your analytics tool, you're able to understand just how many people are dropping off at that point by putting those events, user signed up and email validated, into a funnel analysis chart. So that's what a playbook looks like. It looks relatively simple from a diagram perspective, but its power lies in the ability to glean the right information about what's happening in your app, and to own the email communications that are going out, and experiment and iterate on them.
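As a rough illustration of what that funnel analysis chart computes, here's a small Python sketch. The `funnel` function is hypothetical (your analytics tool does this for you); the event names follow the playbook above.

```python
# Sketch of a two-step funnel: count how many users who signed up
# went on to validate their email, then derive the drop-off rate.

def funnel(events, steps):
    """events: list of (user_id, event_name) pairs.
    steps: ordered list of event names defining the funnel.
    Returns the number of users reaching each step, in order."""
    counts = []
    remaining = None
    for step in steps:
        reached = {user for user, event in events if event == step}
        # A user only counts for a step if they reached every prior step.
        remaining = reached if remaining is None else remaining & reached
        counts.append(len(remaining))
    return counts


events = [
    ("a", "User Signed Up"), ("b", "User Signed Up"), ("c", "User Signed Up"),
    ("a", "Email Validated"), ("c", "Email Validated"),
]
counts = funnel(events, ["User Signed Up", "Email Validated"])
# counts == [3, 2]: one of three users dropped off before validating.
drop_off = 1 - counts[1] / counts[0]
```

Here a third of signups never validate, which is exactly the kind of friction point the experiment process targets.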

Audrey Melnik:
A key trial concept I want to discuss is the concept of initial value. Think back to a time when you signed up for a new SaaS product. You had in your mind a preconception about what this tool would help you do, either from the marketing messages, the messages on the homepage, or possibly even from a colleague or friend. So when you signed up for this tool, you really wanted to have this preconception validated. Can it really do what it promises to do, and can it do it in an effortless manner? When you get to that point where you experience the value that relates to your preconceived notion, that's what's called initial value.

Audrey Melnik:
Let's see how different companies define initial value. For Asana, initial value can mean creating a new project with tasks successfully assigned to a team member. For Zoom, it can mean signing up, organizing, and holding the first Zoom conference. For Expensify, it could be creating the first expense report that is approved for payment. For Amplitude, it's creating the first chart.

Audrey Melnik:
Getting to initial value requires being on point with three key factors. When a user first signs up to your product, they have curiosity. They must have enough curiosity to see them through the onboarding process to get to initial value. They come to your product with a preconception about the value they think your product provides based on the marketing messages they've seen, and they hopefully have a relevant problem that needs solving, one that your tool can help solve. You must get all three right to get to initial value and ultimately activation.

Audrey Melnik:
If they have curiosity and a preconception, but no problem to solve, they'll walk away from a tool thinking it's cool, but not needing to add their payment details. If they have a problem and a preconception, but their curiosity runs out before they get to initial value, they won't be a converting customer. If they have a problem and the requisite amount of curiosity, but your solution fails to match their preconception of what your product does, you'll lose their trust and their business. When you're crafting your onboarding process, you need to keep all of these factors in mind. It will help you craft the language you present to your user to keep them on track and to help them get to that point of initial value.

Audrey Melnik:
Let's talk about what it takes to go from curiosity to initial value. Your new user needs to date before they get married. Your user isn't going to see the value of your product without going through some key steps first. What those key steps are will differ for each product. You most likely won't know right now what those steps comprise definitively, but you will have a hypothesis about what they are. That set of steps to go from curiosity to initial value is called the initial value ladder.

Audrey Melnik:
It's important to note that not all onboarding journeys are created equal. The path from signing up for a trial to getting to initial value has two key components: the curiosity component and the setup component. In the curiosity steps, your user will be clicking around and looking to validate their preconception about whether your tool can solve their problem. They haven't yet decided to commit to setting up your tool for their needs.

Audrey Melnik:
In a B2B environment, we often have multi-user SaaS products. The first user from the account is called first user, and then other users from the same account are called subsequent users or invited users. Their onboarding journeys are often different, and that usually means that the subsequent users don't have a setup component to their trial experience. One thing you should consider is that because the subsequent user often doesn't view the landing page marketing copy, you need to introduce them to the product benefits from within the product and in your automation messages. When you set up your onboarding experience, you absolutely need to tailor the experience differently between these two roles.

Audrey Melnik:
Also, what you'll notice from this diagram is that different SaaS tools can take different approaches to getting the user to their aha moment. If it's the kind of tool that requires a lot of setup, then in order to extend the user's curiosity period, you may need to leverage demo data to illustrate what they can achieve with your tool. Amplitude, an analytics tool, does a great job of this by providing you access to see a lot of the charts they've set up using their music streaming demo Amplitunes.

Audrey Melnik:
Because the setup is non-trivial and it takes a considerable amount of time to get a volume of data established to create good charts, they use a demo to satisfy this curiosity factor. So a user considering whether Amplitude is right for them will have satisfied their curiosity about what is possible. They will still need to figure out whether they are able to use the tool easily, and that can be done with the demo data as well, but it would also come after the setup phase to see if it can solve their particular problem with real data.

Audrey Melnik:
Other, simpler tools don't require any demo data and can swing straight into setup mode. So think about which path your tool takes and whether you can get your users to initial value before their curiosity runs out. If your tool takes a lot of setup, then you may want to switch to populating demo data on signup so you can extend their curiosity runway. The last step of each of these paths is engagement, which leads to activation, where the user decides to turn into a paying customer.

Audrey Melnik:
I think it's helpful here to walk you through a specific example as I've discussed some key concepts, and I'm going to use a product that I've created for generating test users and simulated customer journey events into Segment, which is really useful for populating your testing environments. For context, the way it works is you define events in your app and how they relate to each other. Every day, the tool pushes new events into your Segment source and out to your destinations. So you can populate your test environments in your marketing automation and analytics tools, for example.

Audrey Melnik:
This tool is called Trackbot. As part of the signup process, it asks you what your business use case is. B2B SaaS, B2C SaaS, or e-commerce, and then it populates your account with event types that suit your business type. This gives you a great starting point as well as personalizing the setup to your needs. So it leverages demo data, but really quickly gets them to being able to customize the setup to suit their own business.

Audrey Melnik:
A value gap represents the chasm between perceived value and experienced value. A value gap can happen for many reasons, but here are some of the big ones. Number one, the product fails to provide adequate value. Number two, the prospect is the wrong fit for the product. Number three, the prospect doesn't understand the product's capabilities or how to use it. Number four, the prospect experiences something jarring or painful like confusion, dissatisfaction, et cetera that changes their perception while using the product.

Audrey Melnik:
So how do we bridge this value gap? One way is to make changes to the product, but that's usually a rather long life cycle, as we know. The other way is to use what we call bump events. Wes Bush, the author of the Product-Led Growth book, came up with this concept, the Bowling Alley framework. To get the user from the current state to the desired outcome, we install bumper bars, like we would in bumper bowling, that push the prospect back on course. The two types of bumpers are conversational bumpers and product bumpers. Any event that's triggered from these bumpers is what I call a bump event. The best part of these bump events is that you don't need to get a developer to implement them, as long as you follow my framework.

Audrey Melnik:
So let's look at Trackbot to illustrate what I'm talking about. These are the key events we've defined within the Trackbot product that will get the user to initial value. They sign up and update at least one event type that's been populated for them by default. They click on the "Start Generating" button, which performs validation on their configuration. If all is okay, they can then trigger the generation to be initiated, and that starts generating events for them.

Audrey Melnik:
We have some additional bump events. One is the checklist tutorial being started. We might be using Userpilot, so that event comes through as Userpilot Checklist Started. Another is a nurturing email link being clicked; in that modal, a guide was displayed to bump the user to a key step like the generation-initiated step, so that shows up here as Userpilot Experience Started. They may also have opened an email notification about generated events, which is what I call a reinforcement email. These are the bump events we've designed to push the prospect back on course to achieve initial value, and the best thing about these bump events is that they can help with this goal of converting your customers during trial, but they don't require developer involvement. They use tools that can be managed by your non-technical resources.

Audrey Melnik:
Some of the events that we define for our customer journey can be classified as signal events. These are events that happen often and can give us signals into what's happening for the user. The three types of signal events are promising, concerning, and activation signals. We use promising signals to identify the user's potential in reaching initial value. We use concerning signals to identify when the user may be struggling to reach initial value, and we use activation signals to identify the user's potential in reaching activation.

Audrey Melnik:
But signals aren't things that we act on after one signal event happens. Instead, when we detect a series of these events, we can adjust the customer's journey or even flag activation as having been achieved. You may remember Facebook declaring that activation was reached for their new users after they added seven friends. At some point, you'll come across or come up with a similar hypothesis for your own product.
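A threshold rule like Facebook's "seven friends" takes only a few lines to express. This is a hypothetical sketch: the event name and threshold are assumptions you'd replace with your own activation hypothesis.

```python
# Sketch of an activation-signal rule: one signal event means little,
# but crossing a count threshold lets us flag activation.

ACTIVATION_EVENT = "Events Generated"   # assumed signal event name
ACTIVATION_THRESHOLD = 7                # assumed, like Facebook's "seven friends"

def is_activated(user_events):
    """user_events: list of event names recorded for a single user."""
    return user_events.count(ACTIVATION_EVENT) >= ACTIVATION_THRESHOLD
```

Once a rule like this lives in a tool the business owns, the threshold itself becomes something you can experiment on without a deploy.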

Audrey Melnik:
Let's go back to Trackbot to identify these types of signal events. When someone clicks on the "Generate" button, it triggers the configuration validation performed event. This can result in one of two things, a success or a validation failure. So we have a property on this event called validation successful, and it's set to yes if validation passed and no if it didn't. So our concerning signal is when this event is performed and the validation successful property is no. If this happens a whole lot of times within a short period of time, we probably want to reach out and see if we can help them to get over this hurdle. It's probably a good signal for a product manager to focus on helping to improve the process because the user is clearly not getting the picture of what they need to do.
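The concerning signal just described, many validation failures within a short period, is essentially a sliding-window count. Here's an illustrative Python sketch; the window size and threshold are invented numbers, not anything Trackbot prescribes.

```python
# Sketch of a concerning-signal detector: flag a user when `threshold`
# validation failures land inside any `window_secs`-wide time window.

def concerning(failure_times, window_secs=600, threshold=3):
    """failure_times: sorted timestamps (in seconds) of events where the
    validation_successful property was 'no'. Returns True if the user
    failed `threshold` times within any `window_secs` window."""
    for i in range(len(failure_times) - threshold + 1):
        if failure_times[i + threshold - 1] - failure_times[i] <= window_secs:
            return True
    return False
```

When this fires, the playbook can trigger an outreach message or a support prompt, again without a developer in the loop.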

Audrey Melnik:
I saw this happen with an early adopter of Trackbot, and I updated the validation errors to identify which event types were in error so they understood what they needed to fix. The generation initiated event happens as a result of the previous step passing, so that is naturally a promising step. Someone who is really seeing the value in Trackbot is likely to have a few iterations of updating events and properties, and initiating generations. So when you see this event happening a few times like that, it can be a promising signal towards activation. Finally, the events generated event happens each time Trackbot pushes out new events for the customer's test users, and so this can be a good way to determine activation.

Audrey Melnik:
There is one final way I plan to use the events generated signal event. This event directly correlates to my cost base: the more events generated for a customer, the higher my cost. So I can actually use this information to determine if this customer represents a high cost to me or a low cost. I have a paying customer of Trackbot that chooses to turn on generation sporadically because they really just want to use it to populate their customers' demo accounts. So this kind of customer represents a low cost base to me. So maybe I can use that to my advantage by offering them an upgrade to an annual plan for a lower price.
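An illustrative sketch of that cost-based experiment: estimate cost to serve from the events generated count, then choose an offer. The per-event cost, the cutoff, and the offer names are invented numbers for illustration, not Trackbot's actual pricing.

```python
# Sketch of cost-based segmentation: low-cost customers can be offered
# a discounted annual plan; everyone else gets the standard offer.

COST_PER_EVENT = 0.0001  # assumed unit cost per generated event

def offer_for(events_generated_count, low_cost_cutoff=10_000):
    """Returns (estimated_cost, offer_name) for a customer, based on
    how many events we've generated for them."""
    cost = events_generated_count * COST_PER_EVENT
    if events_generated_count < low_cost_cutoff:
        return cost, "discounted_annual"
    return cost, "standard"
```

Because the event counts already flow into your marketing automation tool via the hub, a rule like this can be configured there rather than coded.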

Audrey Melnik:
Once we have this kind of data available, there are all sorts of experiments we can try like this. But the key to note here is, once again, this data is made available in tools that you don't have to rely on a developer to access. So once again, you can implement these kinds of playbooks where you're prompting the user in different ways to convert them to a paying customer.

Audrey Melnik:
So while initial value gets them to that first step where they see a tangible benefit, activation is the point where they see enough value to commit. They've been dating up to now, and activation is when they accept your proposal of marriage. So make sure your promises match your delivery. If the promise of your product was productivity, your activation point should relate to them being able to maximize achievement. If the promise of your product was simplicity, your activation point should relate to how soon they were able to achieve their goal. If the promise of your product was collaboration, your activation point should relate to them realizing a clear path to connection.

Audrey Melnik:
Also, keep in mind that while the path to initial value is often a sequence of specific events that we can define in initial value ladder, the path to activation is often more nebulous, and it's usually a result of a frequent occurrence of one or more specific events. So you'll probably get to a true understanding of activation after you've converted a decent amount of customers and have been able to analyze what these customers have in common.

Audrey Melnik:
I want to take a few moments to walk through the optimal way to implement your tracking. In the plan phase, we define the identify calls and track events we're going to send for each scenario. We'll map out that data and think through how that impacts on our playbook, how that data shows up in our destination tools, and what we want to do with that data in those destination tools. I think this is a super important step because we're not just tracking for the sake of tracking. We're tracking to either understand through analytics or activate through some automation of the customer's journey.

Audrey Melnik:
Think about the events we've defined as bump events and signal events. We build automation journeys based on those events happening either straight away or based on the counts of those events happening. Sometimes we use the information from those events in messaging we present to the user. When I do this design step, I'm not just thinking about what I need right now, but also, what I might want in the future.

Audrey Melnik:
We want to be really discerning about how much we do here, but there are certain attributes that might be great for some future analytics. For example, adding an attribute on an activation event to tell us how many days into trial they are or how many days they are from their trial ending. The simulate phase is where we set up the tools with our playbooks and start simulating the customer journey to flesh out our destination tools. So we can see if there are any gaps in the tracking plan we set up in the plan phase and make sure our automations are set up right.
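The derived attributes just mentioned, days into trial and days until the trial ends, are easy to compute when the event payload is assembled. A hypothetical sketch, assuming a 14-day trial and made-up event and property names:

```python
# Sketch of enriching a track event with derived trial attributes so
# future analytics can segment on them without re-instrumenting.

from datetime import date

TRIAL_DAYS = 14  # assumed trial length

def activation_event(user_id, trial_start, today):
    """Builds a track-event payload enriched with trial-timing attributes."""
    days_in = (today - trial_start).days
    return {
        "userId": user_id,
        "event": "Activation Reached",
        "properties": {
            "days_into_trial": days_in,
            "days_until_trial_ends": TRIAL_DAYS - days_in,
        },
    }
```

Capturing these at event time costs nothing now and saves going back to the developer later when you want to chart activation timing.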

Audrey Melnik:
The benefit of this is that before we even go to our developer with a tracking plan to get them to implement it in the code base, we do our own testing, which will expose more requirements for what we want to capture about the customer journey. So when we do finally go to our developer to implement the tracking plan, we can be more assured that we won't have to go back to them a bunch more times to make further changes. The final step is the implement and launch phase, which is where we get the developer to implement the code and flip everything over to the production environment rather than it all being in the staging or testing environment.

Audrey Melnik:
So you may be wondering, from the previous slide, how we simulate the customer journey without getting the developer involved. In your production environment, your app will send events to Segment that you'll be able to act upon in your destination tools. But for simulation, we haven't yet gone to the development team to implement the tracking. So instead, we can use our tool called Trackbot, which allows you to set up your tracking events and how they relate to each other; Trackbot then creates test users and simulates a customer journey for you. That's a little trick we use in customer journey optimization so we can move faster without developer involvement.

Audrey Melnik:
So in summary, I've gone through the experiment process. We've talked about the three key problems that are holding you back from being able to get to that experimentation nirvana. We've talked about the customer journey optimization components and the trial signup playbook, and I've introduced the concepts of initial value, bump events, and signal events. Then, we've talked about the customer journey optimization process.

Audrey Melnik:
So this content is part of a larger coaching program I have where we work together for 90 days to implement this approach. If you're interested in learning more about this program and getting access to my trial playbooks, go to rocketship.funnelventures.com to book a consultation with me. If you're curious about Trackbot and how it can help you with your test simulation, you can check it out at gettrackbot.com. Thanks, everyone. You can contact me here.

 

Audrey Melnik
CEO of Funnel Ventures
Entrepreneur, growth consultant, and Founder at Trackbot.