At Trak.io, we’re extremely passionate about and confident in our purpose as a startup. We know what we want to do in this industry: we want to make other SaaS companies more successful.
But the exact way that we execute that purpose has been a very fluid process since we started work on our very first prototype in May 2013. Like many Lean Startups, our product needs to be able to adapt and evolve as we search for our own product-market fit.
And that means getting constant feedback from customers and potential customers.
“Would you sign up for this? Would you use that every day? Would you pay for this?”
You can ask potential customers these sorts of questions in interviews all day long, but the ultimate validator comes when they’re actually faced with that credit card form.
As the saying goes, “Pay up or shut up”.
We’d already validated our original purpose with a $29 Beta queue jump and $499 “Launch Partner” deal. But our product had changed considerably in the last few months, both in target customer and price point, and in some cases we were asking users to pay upwards of $299 per month.
Now here’s the honest truth: since we launched our public Beta, users were getting to the end of their free trial (after installing our tracking code) and then… nothing. They weren’t paying. Our conversion rate sucked!
Something had gone wrong. Had we misunderstood our customer conversations? Had we been arrogant to assume these companies would actually ditch their existing customer analytics and put their faith in us? All of our feedback in emails and conversations had been great, everyone had told us they loved the product!
We were spinning and didn’t know what was wrong.
We needed to get very specific feedback. We already knew people were using the product from our own data tracking. But asking “Would you pay for this?” hadn’t translated into conversions. We needed to ask a different question to a different segment of our users.
“Why did you not pay, after using the product successfully during your free trial?”
This post is an explanation of how we segmented our onboarding users, and then how we implemented a super simple, very scalable tactic to filter out our most engaged users and find out why they weren’t ready to start a paid subscription.
By the end of this process, we were receiving 27% more feedback, and increased our paid conversion rate by 33%!
The Different Segments Of Users In Our Free Trial
Once a user signs up for a free trial at Trak.io, they end up in one of these 6 basic segments:
1. Not interested at all. They churn within the first few minutes without even integrating our tracking code, and they never come back
2. Not interested in integrating our tracking code right now, but might come back to test when they have an appropriate project
3. They tested the product, but they hated it! There’s too much wrong, and they’ll never come back to try it again
4. They tested the product, and they loved it! But they just don’t need it right now
5. They tested the product, they loved it, and they paid without problem at the end of the free trial
6. They tested the product, and they liked it. They do have a need right now, but there’s at least 1 problem stopping them from subscribing
Ideally, we’d only have people in segments (4) and (5). Our goal was to find out what went wrong in all the other situations so that we could correct it.
However, each of these segments has a different responsiveness to feedback requests and sits at a different stage of our AARRR funnel.
We decided to look deeper into each segment, and see where we could get the biggest wins.
1. Users Who Were Not Interested At All
For these users, there isn’t usually much we can do. While our marketing may have been effective at piquing their interest, at some point there was a disconnect between the product and the marketing.
They couldn’t even be bothered to test the product – perhaps because they assumed it wasn’t for their needs or because the effort was not worth the reward.
We’ve had really poor success with engaging these users in conversations – they don’t respond to exit interview emails.
How To Identify: They didn’t activate (install our tracking code). They never log back in after first signing up and they don’t open any of the onboarding emails
Action: Nothing right now. Just keep sending automated exit interview emails and come back to optimise later
2. Users Who Weren’t Interested In Testing The Product Right Now, But Might Come Back Later
We see these users opening our content marketing emails (linking to posts such as this one) and although their Trak.io account remains inactive, we see a passive interest in our brand. We need to wait until they decide to come back and take more action so that we can engage them in a conversation about the product.
Right now, we’re happy for these users to stay on our marketing list.
How To Identify: They didn’t activate. They haven’t logged back into their account. They do open our content marketing emails
Action: Nothing right now. Just keep sending these users content marketing and see if they come back later
3. Users Who Tested The Product, But Hated It
These are the most painful users as a founder. Getting users through Activation for a product like Trak.io is extremely hard. Most developer tools have activation rates around 10% – 20%, so losing a user who has gone through all that effort is particularly painful.
The only thing we can do with these users is keep trying to get them in an exit interview so that we can find out where we went wrong and how to correct it for future users.
We can learn from these users, but they’re very unlikely to become paying customers.
How To Identify: They activated (installed our tracking code). They seem to have looked around, but never came back. They don’t open our content marketing emails.
Action: Nothing right now. Just keep sending automated exit interview emails, which we can optimise later.
4. Users Who Tested The Product And Loved It, But Don’t Need It Right Now
These users sting, but there’s not much you can do. A sale requires both interest and requirement.
Because we’re seen as a “cool” product, we get a lot of users testing us out to compare against existing solutions or out of curiosity. But when it comes to paying, they don’t: they’re on an open source project (and won’t see a financial ROI), they’re running a zero-revenue startup (with no funding), or their employer won’t purchase this.
While gaining some viral promotion via open source projects is cool, we were determined to optimise the product for revenue, and so we didn’t see a whole lot we could do with these users other than be patient.
How To Identify: They activated (installed our tracking code). They used the product a lot, but it looks like they have development/staging data. They churn as soon as their free trial expires
Action: Offer open-source projects a free account. Keep sending users content marketing.
5. Users Who Tested The Product, Loved It and Paid
We didn’t have enough of these users – we wanted more! We’re trying to focus on a low-touch product, so ideally we’d like users to go through the onboarding and purchasing process with very little Sales Rep involvement.
How To Identify: They activated, use daily, invite colleagues and enter a credit card without fuss.
6. Users Who Tested The Product And Liked It, But Have A Problem Preventing Them From Purchasing
These are the users we identified as the “low hanging fruit” in our free trials. They’d gone to the effort of installing our script and they’d continued to use the product; however, something was stopping them from paying.
They had activated, they had become sticky, but they weren’t paying. We knew we needed a tactic that was more than just sending an automated email. Manually reaching out wasn’t scaling (we’re a team of 3!) and so we needed to try something.
How To Identify: They activated (installed our tracking code). They were coming back each day to view customer profiles. They were opening content marketing emails. They hadn’t entered a credit card and had churned on the day that their free trial expired.
Action: Try offering a free trial extension in exchange for feedback, act on that feedback case by case, and treat each of these users as a highly engaged lead
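The “How To Identify” rules above boil down to a handful of behavioural signals. Here’s a minimal sketch of how the segmentation could be expressed in code – the field names are hypothetical, not how our actual tracking is implemented:

```python
# Hypothetical sketch of the segment rules above; field names are illustrative.

def classify_trial_user(user):
    """Map a trial user's behavioural signals to one of the six segments."""
    if not user["activated"]:                     # never installed the tracking code
        if user["opens_marketing_emails"]:
            return 2  # inactive, but passively interested - might come back later
        return 1      # not interested at all
    if user["paid"]:
        return 5      # activated, loved it, and subscribed without fuss
    if user["has_staging_data_only"]:
        return 4      # loved it, but no real project needing it right now
    if user["daily_active"] or user["opens_marketing_emails"]:
        return 6      # engaged but blocked - the "low hanging fruit"
    return 3          # activated once, looked around, never came back


trial_user = {
    "activated": True,
    "paid": False,
    "has_staging_data_only": False,
    "daily_active": True,
    "opens_marketing_emails": True,
}
print(classify_trial_user(trial_user))  # -> 6
```

The ordering matters: activation splits the list in half first, payment short-circuits everything else, and segment (6) is what’s left once the clear-cut outcomes are ruled out.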
Offering A Free Trial Extension
Once we’d decided to start offering free trial extensions, we knew that we didn’t want to do this via automated emails. We already have enough emails going to our users, and we’re always reliant on open rates for them to be effective.
We decided to put our extension tool directly into our product – after all, this is when the users are most engaged with our app and thinking about whether they need it.
Remember, the reason we’d started this in the first place was that the feedback from these 2 questions is very different:
“What would you pay for?” vs. “Why didn’t you want to pay?”
We knew this feedback had to be exactly at the point of paying.
What Does It Look Like In Our App?
While it did need some coding, the total task of adding the feature took around 4 hours, including building the admin screen (so that I didn’t need to access the database directly).
Here’s the modified subscription plan screen within the app:
Notice how we’re very specific about using the language “Why aren’t you ready to pay?” in the feedback form?
We want users to be really honest. We’re done asking “Do you like this?” – now we’re asking for money.
And here’s the admin screen we built to view the trial extension reasons, and also add an additional extension ourselves if needed:
Would People Still Pay If They Could Get It For Free?
One of our biggest reservations before we added this so blatantly on our payment screen was, would people just keep hitting “extend” instead of paying?
To deter this, we set the extension to 3 days. We figured this was basically “nagware”: users who were just looking for a ‘free meal ticket’ would soon get fed up, while 3 days was long enough for me to step in, discuss, and manually add a longer trial extension if needed.
As it turns out, we haven’t had a single case of a user abusing the trial extension option, and since implementing it users have been paying to subscribe despite having the option to get a free 3 day extension.
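The mechanics of the extension itself are tiny. Here’s a rough sketch of the flow, with hypothetical field names – the real version is a few lines in our Rails-style app plus the admin screen mentioned above:

```python
# Illustrative sketch of the in-app trial extension flow; names are hypothetical.
from datetime import date, timedelta

EXTENSION_DAYS = 3  # deliberately short "nagware" length; longer extensions are granted manually


def extend_trial(account, reason, today=None):
    """Extend a trial by 3 days and record why the user wasn't ready to pay."""
    today = today or date.today()
    # Extend from the later of the expiry date and today, so expired trials restart cleanly.
    account["trial_ends"] = max(account["trial_ends"], today) + timedelta(days=EXTENSION_DAYS)
    account["extension_reasons"].append(reason)  # surfaced in the admin screen as warm-lead context
    return account


account = {"trial_ends": date(2014, 3, 1), "extension_reasons": []}
extend_trial(account, "Haven't had enough time to evaluate yet", today=date(2014, 3, 1))
print(account["trial_ends"])  # -> 2014-03-04
```

Keeping every extension reason on the account, rather than just the latest one, is what turns the button from a retention hack into a feedback channel.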
What Sort Of Feedback Did We Get?
As you can see from the screenshot above, we got way more feedback than we expected from users who simply didn’t feel like they’d fully evaluated the product yet.
We’d waited 30 days to give them a “nudge”. This sucked.
We immediately dropped our trial period to 14 days.
In these cases, the very fact they’ve extended their trial is a great indicator of interest, and it provides “warm leads” for me to focus on in conversations. Instead of waiting 30 days to ask the hard question, we halved the time it takes to get to this point.
I’ve received a 100% (so far!) email reply rate when I reach out to users who extend their trials simply because they haven’t had a long enough chance to evaluate the product.
However, while uncovering these users is great, it didn’t provide us with the missing product feedback we needed. We already knew from our usage data which users weren’t logging in often enough to test the product. What we needed to know was what the users who had been actively using the product thought.
So, we were thrilled when we started seeing feedback come in such as:
- “I’m worried about how the pricing is based on # of users and not # of active users. Can you explain please?”
- “We can only pay by invoice”
- “I couldn’t understand how funnels worked”
- “I couldn’t find where to setup customer segments”
- “I heard you guys calculate a ‘health score’ like Gainsight but couldn’t find that”
- “We need to be able to manage users via the dashboard, not just API”
- “We need the ability to drill into event meta data”
- “Couldn’t see how to send emails based on events”
These were exactly the sort of golden blockers we could resolve, and use to inform our product roadmap!
- “I’m worried about how the pricing is based on # of users and not # of active users. Can you explain please?” – We updated our pricing to use ‘Active Users’. The user became a paying customer
- “We can only pay by invoice” – We added invoice payments on Enterprise plans
- “I couldn’t understand how funnels worked” – We got this one a lot. We walked the users through our funnels product. After many realised it didn’t suit their needs, we decided we needed to re-assess the funnel report and how it fitted into our overall product
- “I couldn’t find where to setup customer segments” – We’ve been working on this as part of the new CSM product. We informed the customer and set up a product demo
- “I heard you guys calculate a ‘health score’ like Gainsight but couldn’t find that” – Again, we’ve been working on this as part of the new CSM product. We informed the customer and set up a product demo
- “We need to be able to manage users via the dashboard, not just API” – We fast tracked this feature (it’s small), and the user became a customer
- “We need the ability to drill into event meta data” – We reached out and explained this was coming up in our new CSM product release, and set up a demo
- “Couldn’t see how to send emails based on events” – We pointed the user towards Customer.io and explained we’ll support Alert emails but not marketing automation. We reworded some of our marketing to make this clearer
These are just a few of the most recent pieces of feedback we’ve received. Not all of them can be converted into sales: some we can use to change our marketing or our self-serve support materials.
What was particularly interesting was all the feedback we received about using our 2 main data reports: Metrics and Funnels.
We learned just how many users really didn’t understand them, and that allowed us to dig much deeper into those users’ usage data. On the surface, they’d used the features perfectly. It wasn’t until we dug into conversations that we realised they’d only scratched the surface and hadn’t managed to see the obvious ROI.
It was a great lesson for us that product usage doesn’t always translate into product engagement.
How Many People Used The Extension Option?
For the last 30 days, of all users who have reached the end of their trial and have been presented with our payment screen inside the app:
- 9% of users entered their billing details without hesitation
- 27% of users opted to receive a free trial extension
- 64% of users just left the app
Before we had the free trial extension, the numbers for paid conversions were identical:
- 9% of users entered billing details
- 91% of users just left the app
So we were definitely getting feedback from users who would otherwise have just left!
With those free trial extension users, I can focus crazy hard on removing their obstacles, and communicating better about their objections.
And the final result: our paid conversion rate is now 12%!
(And all of the additional conversions are users who have had at least 1 free trial extension)
Even though we added 3 percentage points to the paid conversion rate for active users who reach the end of their trial, this is still lower than our targets. However, we now have a lot more feedback to work with than we were getting from exit interviews, and it’s feedback from the users who are most likely to pay. And it’s a 33% relative increase!
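For anyone checking the headline numbers, the arithmetic is simply the relative lift over the old baseline:

```python
# Relative lift in paid conversion rate, from the figures in this post.
baseline = 0.09  # paid conversions before the extension option
current = 0.12   # paid conversions after adding it

absolute_lift = current - baseline        # 3 percentage points
relative_lift = absolute_lift / baseline  # ~0.333, i.e. "a 33% increase"

print(round(relative_lift * 100))  # -> 33
```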
This was a long post to explain a simple tactic that we added to our billing and feedback process, but it’s a situation with many variables and factors at play and I wanted to be totally transparent so that other startups could see how this could apply to their product.
One of the biggest surprises for me was just how many users used the free trial extension because they had simply “run out of time” (after 30 days!) but eventually went on to become paying customers. Previously, these people would have simply let their trial expire and not bothered sticking with the product.
I know many startups will offer discretionary extensions if you ask/beg, but we’re trying to be more transparent.
We have transparent pricing. We have transparent data policies. And now we have a transparent free trial extension policy. And it works great!