Can we use our ‘magic 8 ball’ to predict Streamtime’s churn? And the million-dollar question – if we can predict it, can we reduce it!? We assembled a team to focus on these two nagging questions. Our findings were truly amazing and could mean hundreds of thousands of dollars in saved Annual Recurring Revenue (ARR).
What types of Churn do we track and how are they measured?
Churn – lost revenue or customers – is the enemy here at Streamtime, as it is at every SaaS company. It hurts to see clients leave, particularly when so much love, thought, effort and time goes into getting them on board.
We track a number of churn-related figures using Baremetrics (user churn, revenue churn, net revenue churn, contractions, downgrades), but for now we’ll focus on Revenue and User Churn. Baremetrics defines and measures these types of churn as:
Revenue Churn – the percentage of Monthly Recurring Revenue (MRR) lost in the last 30 days relative to your total MRR 30 days ago. Churn of any kind is bad, but a high revenue churn rate means that high-value customers are leaving at a higher rate than others. How is it calculated? – (MRR lost to downgrades & cancellations in the last 30 days ÷ MRR 30 days ago) x 100
User Churn – the percentage of customers who left in the previous 30-day period relative to your total customer count 30 days ago. Generally you want to aim for a churn rate in the low single digits. How is it calculated? – (Cancelled customers in the last 30 days ÷ Active customers 30 days ago) x 100
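The two formulas above are simple enough to sketch in a few lines of Python. The dollar and customer figures below are purely illustrative examples (not Streamtime’s actual numbers), chosen so the results line up with the percentages discussed in this post:

```python
def revenue_churn(mrr_lost_last_30d: float, mrr_30d_ago: float) -> float:
    """MRR lost to downgrades & cancellations in the last 30 days,
    as a percentage of total MRR 30 days ago."""
    return mrr_lost_last_30d / mrr_30d_ago * 100

def user_churn(cancelled_last_30d: int, active_30d_ago: int) -> float:
    """Customers cancelled in the last 30 days, as a percentage of
    active customers 30 days ago."""
    return cancelled_last_30d / active_30d_ago * 100

# Illustrative figures only: $3,800 lost from $100,000 MRR,
# and 13 cancellations out of 500 active customers.
print(f"Revenue churn: {revenue_churn(3_800, 100_000):.1f}%")  # → Revenue churn: 3.8%
print(f"User churn: {user_churn(13, 500):.1f}%")               # → User churn: 2.6%
```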
How Streamtime is travelling…
At the time of writing, our Revenue Churn sits bang on 3.8% and our User Churn at 2.6%. Neither is alarmingly high (in fact, by many benchmarks these figures are pretty good).
Our approach to reducing churn…
Although the emergency sirens weren’t sounding too loudly for us as a company, it was still clear that any reduction in churn would be welcome. So where did we start?
Churn team asssembbbbbbble – we did as Ron Burgundy would do: assembled a team and met to kick off the project. We decided to start with a deep dive on some recently churned clients, looking at all the data we had on them and mapping out their complete journey – from initial free plan signup to cancellation. This is what we ended up with:
This particular client had been with us for 13 months, so we had a treasure trove of support interactions to analyse. We decided to work backwards from the cancellation email, which in this case was quite detailed. A few specific points were mentioned, so it was a sensible place to start.
Slowly but surely, we found the original conversations around the points raised – each providing a new piece of the puzzle. It became clear that every one of these conversations had been with the owner of the organisation, all within the space of a month or so (indicated by the bombs in the line graph above). Something very obvious that we should be more attuned to.
Mapping the conversations on a timeline already gave us a clearer understanding of why the client had left. But the team had also agreed upfront that how the client’s usage of the product changed over time would be key. We agreed to report on 5 key actions as an indication of usage:
- Jobs created
- ToDo’s created
- Invoices created
- Quotes created
- Number of users (subscription licenses)
Our team included a developer – convenient, as they were able to pull the total number of monthly actions completed by that client, shown as the coloured lines in the graph below.
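The monthly roll-up behind those coloured lines boils down to counting each action type per client per calendar month. Here is a minimal sketch of that aggregation – the event-log shape, client ID and action names are assumptions for illustration, not Streamtime’s actual schema:

```python
from collections import Counter
from datetime import date

# Hypothetical event log: (client_id, action, date) tuples.
# Action names mirror the five key usage indicators listed above.
events = [
    ("acme", "job_created",     date(2019, 3, 4)),
    ("acme", "todo_created",    date(2019, 3, 5)),
    ("acme", "todo_created",    date(2019, 4, 1)),
    ("acme", "invoice_created", date(2019, 4, 9)),
    ("acme", "quote_created",   date(2019, 4, 22)),
]

def monthly_actions(events, client_id):
    """Count each action type per calendar month for one client."""
    counts = Counter()
    for client, action, when in events:
        if client == client_id:
            counts[(when.strftime("%Y-%m"), action)] += 1
    return counts

# Each (month, action) pair becomes one data point on its action's line.
for (month, action), n in sorted(monthly_actions(events, "acme").items()):
    print(month, action, n)
```

In practice this would be a `GROUP BY` over the production database rather than an in-memory loop, but the output is the same: one time series per action, ready to plot against the conversation timeline.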
We now had the client’s usage mapped over the key conversations mentioned in the ‘break up’ email. The ‘thumbs up’ icons indicate peak numbers of actions (usage), which helped complete the picture we were building. Job creation peaked just prior to the time bomb conversations with the business owner. ToDo’s peaked just after – which makes sense, as ToDo’s are created on jobs, so an increase would normally follow an increase in job creation.
‘X’ marks the spot – in this case the X icon indicates when the cancellation email was received. The timing makes perfect sense: after peak usage was reached, and a few months after the ‘ticking time bombs’.
‘We really could be onto something here’
With this extra information, could we have avoided this particular client churning? Absolutely! We could and should have done better with those time bombs. Resolving just a few of them sooner (they have since been resolved) would have saved our client a lot of pain. And knowing in real time that usage was declining would have allowed us to reach out and help at a critical moment.
This all seemed too good to be true – had we uncovered the ‘7 secret herbs and spices’, or just tracked one client’s path to cancellation? Good question, so we set out to validate our findings.
We gave 5 recently churned clients the same treatment – our theory held up. We then reversed it and tested 5 of our ‘happiest’ clients – again, the theory held true! ✅
‘If only we had more time’
As a team we were excited, but now we had to get our founder and MD on board. We presented our findings to them both in a bid for more time to build out our thinking. Turns out they shared our excitement and gave us the green light to crack on with it.
Where to from here?
To date, all the work we had done was manual – requiring a human to spend time reading conversations, querying databases and building a clear understanding of the client’s journey. Very insightful, but not scalable.
Which led the team to the next objective – how can we automate this or generally make it more scalable? And how can we incorporate that into our daily processes so our churn rates are the lowest they can be? Enter our pattern recognition algorithm – our focus in our next post (part 2)…