I’m not the youngest at the table, but definitely not the oldest. There are three of us in our 20s, and the rest of the room seems to be spaced between 30 and 60.
I am, however, dressed like a child. I should have gone for the safe ‘smart jeans and white collared shirt’ look. Classic entrepreneur. It quietly whispers “I ironed a shirt today, because you guys are worth it”.
But I didn’t. I’m dressed like an intern, and people have assumed I’m an intern. Or perhaps I’m Grey Fox’s assistant.
Grey Fox is the guy sitting to my left. He looks like he came straight out of an IBM consulting gig. Very senior, with a strong presence. He’s dropping analytics acronyms so fast that he’s borderline beatboxing. He’s pretty much dominated the conversation for the last 2 hours.
To my right is Vendor Vanessa. She’s casually dressed, in her early 40’s, and wearing the largest pearl earrings I’ve ever seen. If it wasn’t for the fact I know which company she works at, I’d assume she was here on sales commission. She’s made it her personal mission to name-drop a vendor into the conversation every few minutes. Never anything useful, but just enough so that we know that she knows who they are.
Sitting directly across from me is someone I’m going to call Copycat Charlie. He’s early 30s, dressed business-casual, and has made it his sole mission tonight to try and finish Grey Fox’s sentences, ending every one of Vendor Vanessa’s contributions with “Oh yeah, they’re great!”
I’m sitting in a private roundtable on “Customer Analytics For Subscription Companies”. There’s 10 of us in the room. Everyone here is an Analytics Director (or equivalent senior position) except myself and another vendor CEO. I’ve been listening quietly and intently for the last 2 hours.
And for the last 2 hours, the 3 people talking have been bullshitting.
Grey Fox has no clue about analytics strategy, he just knows the terminology. Vendor Vanessa has only memorised the names and feature sets of each tool in the analytics space, but has never actually integrated or used one of them. And Copycat Charlie’s only strategy is to align himself with Grey Fox and Vendor Vanessa in the hope of being associated with their thought leadership.
Please don’t be one of those people.
Before you attend your next customer analytics meeting or workshop, please read my Executive’s 60 Minute Survival Guide To Customer Analytics.
(p.s. despite the cute names, the above is a true story and the 3 people who stood out to me as total fraudsters were all from multi-million dollar turnover SaaS companies!)
The Executive’s 60 Minute Survival Guide Contents
- Introduction: Business Over Technology
- Why You Need To Care About Customer Analytics
- Start With The Problem Before Looking For The Answer
- Look After The Data, And The Data Will Look After You
- Achieving The Famous Customer 360 View
- Using Customer Analytics To Improve The Customer Lifecycle
- The 3 Maturity Stages Of Customer Analytics
- How Much Is Customer Analytics Going To Cost
This is a BIG post at nearly 5,000 words… I’ll repurpose it into a downloadable PDF and a digestible course in our Academy shortly. For now, here’s the full guide as a blog post…
Business Over Technology
Too often when a group of people get together to discuss customer analytics, they focus on the technology. Graphs are colorful and cool. Data visualisation is engaging and pretty. It’s easy to get distracted by shiny things. Just take a look at the home pages of most analytics companies and you’ll notice they show off how colorful their reports are before anything else!
But if you’re the type of executive who likes to walk home each year with the biggest bonus, you’ve learned to focus on the business benefits – not the technology.
In this guide, I’ll be focussing on the business benefits over the technology. While it’s important to know the technical language (and we’ll cover that), I’ll always cover it within the context of the business.
By learning about customer analytics from this angle, you won’t sound like an awkward beatboxer spitting data acronyms at your next team meeting. You’ll understand the context behind everything and the underlying strategy. And when the technology changes (and it will) you won’t be playing catch-up all over again.
Why You Need To Care About Customer Analytics
We are in an experience economy. Organisations are quickly adapting to the realisation that a focus on Customer Experience (CX) results in longer customer retention and increased growth. From Apple to Zappos, there are now multiple case studies of companies who have placed a fanatical emphasis on CX and have reaped the rewards in revenues.
One of the best ways to improve your CX is with customer analytics. If you’re able to measure each interaction a customer has with your brand (aka “touchpoint”) then you’ll be able to start learning from these analytics, and improve the experience in future.
In practice, this could mean anticipating a customer’s needs before they even know them – recommending products or services that you know they will need, based on their previous behaviour (purchase history, browsing history etc.).
Or it could mean recovering “at risk” customers. For example, your customer analytics platform could raise a flag in your CRM when it detects a customer is exhibiting a behaviour pattern often seen in customers in the week before they cancel.
In both of these cases, proactively delivering a better CX means that the customer is happier, and so are your revenues.
These kinds of complex data collections, aggregations and predictions would have been a significant expense just a few years ago, available only to giants like Amazon.com or big retailers who could afford to pay IBM or Oracle millions of dollars per year. But thanks to cheap cloud processing power and open-source software components, there is now an entire market of affordable customer analytics solutions.
This makes it the perfect time to make an organisation-wide shift towards a data-driven culture and invest in solid customer analytics.
Start With The Problem Before Looking For The Answer
Probably the first tell-tale sign that someone is trying to bluff their way through a customer analytics conversation is that they look straight to the technology and tools without even considering the purpose or strategy.
The fastest way to guarantee a customer analytics project fails, no matter how clean the data and sophisticated the machine learning, is to rush in head first without any idea about the question you were hoping to answer in the first place!
Graphs and charts look pretty, and you can definitely distract people for long enough with them, but eventually you’ll want to show the true ROI of the campaign.
- Which product decisions did we make differently because of this?
- How much more revenue did we generate thanks to this?
- How much churning revenue did we recover with this?
If you weren’t sure why you were even tracking the data or running the report, you can’t answer these questions clearly or efficiently.
Just as with all good projects in business, goals should be set SMART at the beginning of the process, and you may need to rein in over-zealous techies in the working group; otherwise you could find yourself trying to bend a particular technology or process to fit because you made a premature commitment.
Make sure the strategy is well defined, the expected outcomes are well planned, and the questions you need answered are quite literally penned out before approaching any technology or process decisions.
Look After The Data, And The Data Will Look After You
Rubbish goes in, rubbish comes out. This is a slightly nicer version of the saying I use often with other data & analytics professionals. You may have heard it before, but if not, start internalizing this.
There is no better way to guarantee the failure of a customer analytics implementation than not looking after your data. A solid data foundation will define the success or failure of a project over everything else.
A successful data strategy has 4 main components:
Build A Central Data Warehouse
Eventually, every company needs to create a central repository of all the data in their organisation. And very young companies would be wise to set up a central data warehouse from the beginning to save costs in the long run.
A data warehouse, regardless of the size or complexity, just means all of the data is available from a central resource, available to query in aggregate.
In practice, this doesn’t always mean all data is physically moved to the same location – but it’s usually easier in the long run if it is. This means data from your CRM, marketing, web/app analytics, support… is physically copied to a central database.
At the time of implementation, this will mean setting up real-time synchronisation of the data and, once that is set up, a one-time “batch import” of historical data.
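To make the two-phase approach concrete, here’s a minimal sketch of a historical batch import followed by an incremental sync. The record structures and in-memory “warehouse” are stand-ins I’ve invented for illustration; a real job would read from a CRM or event stream and write to a database, but the shape is the same: load everything once, then only copy records changed since the last run.

```python
from datetime import datetime

# Hypothetical in-memory stand-ins for a source system (e.g. a CRM)
# and a central warehouse table.
source_records = [
    {"id": 1, "updated_at": datetime(2016, 1, 10)},
    {"id": 2, "updated_at": datetime(2016, 6, 1)},
]
warehouse = {}

def batch_import():
    """One-time historical load: copy every existing record."""
    for rec in source_records:
        warehouse[rec["id"]] = rec
    return max(rec["updated_at"] for rec in source_records)

def incremental_sync(since):
    """Ongoing sync: copy only records changed after the last run."""
    high_water = since
    for rec in source_records:
        if rec["updated_at"] > since:
            warehouse[rec["id"]] = rec
            high_water = max(high_water, rec["updated_at"])
    return high_water

last_synced = batch_import()  # historical load first...
source_records.append({"id": 3, "updated_at": datetime(2016, 7, 1)})
last_synced = incremental_sync(last_synced)  # ...then keep it current
print(len(warehouse))  # 3
```

The “high water mark” timestamp is what makes the ongoing sync cheap: each run only touches records newer than the last run.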
I discuss the expected costs of different data warehousing strategies later.
Understanding All Data Tracking Within Context
Simply collecting the data together isn’t enough to get meaningful insights out of it. You need to understand the context of the raw data in order to make sense of the aggregate CX story.
This doesn’t seem like such a big problem when the team is small and the product is young – the engineers are probably aware of which marketing automation platform is used, how campaigns are run, and what the strategy behind each marketing campaign is (all from a high level). Similarly, the marketing team will likely still be familiar with each feature in the product, what high or low usage of each feature correlates to, etc.
But when an organisation grows, this context starts to become departmentalised. This means that the effectiveness of meaningful queries on the data warehouse starts to decline. The analytics team might see the marketing data structure start to change, or find unfamiliar types of data buried in the app clickstream, etc.
It’s extremely important that you don’t lose this context with data tracking. In practice, successfully solving this issue can vary depending on size.
At the very least, I recommend a spreadsheet that describes, in human language, what each tracking event means. As each new event is sent into the data warehouse, it should be added to this doc, with a description of its context. This should scale for quite a while, particularly if there is strict discipline across all departments.
After a while, you may need to split these sheets as the volume grows, and assign someone dedicated to maintaining them.
For example: a particular task I’ve seen implemented is that each unique event type detected in the data warehouse is looked up in the spreadsheet. If a row for it isn’t found, then one is added, and a “todo” task is automatically created for the data team to chase it up and figure out what it means. i.e. if a new event called “Used_feature_d” suddenly appears, but the product team haven’t recorded that event in the sheet, the data team will get in touch with the product team and ask them to explain the context of that data point (or remove it from tracking!)
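That check is simple enough to sketch in a few lines. The dictionary below stands in for the shared spreadsheet, and the event names are made up for illustration:

```python
# The event dictionary stands in for the shared spreadsheet; names
# and descriptions here are illustrative, not from any real tool.
event_dictionary = {
    "Used_feature_a": "User opened the report builder",
    "Used_feature_b": "User exported a CSV",
}

# Distinct event types observed in the warehouse this week
warehouse_event_types = ["Used_feature_a", "Used_feature_b", "Used_feature_d"]

todo_tasks = []
for event in warehouse_event_types:
    if event not in event_dictionary:
        # Add a placeholder row and raise a task for the data team
        event_dictionary[event] = "TODO: context unknown"
        todo_tasks.append(f"Ask owning team to document '{event}'")

print(todo_tasks)  # one task raised, for Used_feature_d
```

Run on a schedule (daily is usually enough), this keeps the event dictionary from silently drifting out of date as teams add tracking.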
Regularly Clean The Data
Regularly cleaning data is one of the most important, and riskiest, tasks associated with data.
As a general rule, any DELETE operation run on the data is risky – remember that data is the currency of the future. So anytime we are contemplating deleting or modifying that data, we need to be more diligent than normal to ensure what we’re doing adds value to the data set overall.
For this reason, we want to try to UPDATE the data instead of deleting whenever possible. This means we try to repair damaged or misplaced data before we simply prune it.
A common data cleaning operation is updating old customer records – from simple tasks such as updating the most recent employment email address and job title, to more complex tasks such as connecting multiple customer CX profiles that actually belong to the same person (but perhaps one was set up on an older email address).
There are automated data cleaning services, such as checking the validity of email addresses, but often it is a task for cheap labour (such as interns or outsourced virtual assistants) under the supervision of the experienced data team.
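Here’s a hedged sketch of the “UPDATE, don’t DELETE” principle applied to merging two CX profiles that belong to the same person. The profile fields are hypothetical; the point is that the surviving record keeps the full event history and the old identifier rather than discarding either:

```python
# Two profiles for the same person: one created under an old email
# address. Field names here are illustrative assumptions.
old_profile = {"email": "jane@oldjob.com", "events": ["signup", "purchase"]}
new_profile = {"email": "jane@newjob.com", "events": ["login"]}

def merge_profiles(primary, duplicate):
    """Fold the duplicate's history into the primary record
    instead of deleting it outright."""
    merged = dict(primary)
    # Keep the full behaviour history, oldest events first
    merged["events"] = duplicate["events"] + primary["events"]
    # Don't lose the old identifier – it may be needed to match
    # future records from systems still using the old email
    merged["previous_emails"] = [duplicate["email"]]
    return merged

merged = merge_profiles(new_profile, old_profile)
print(merged["events"])  # ['signup', 'purchase', 'login']
```

Only once the merged record is verified would you retire the duplicate – repair first, prune last.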
Enrich Data With 3rd-Party Sources
One of the beautiful operations I love to perform on customer data (because it never ceases to amaze people) is enriching it with 3rd-party data sources. This is amazing because it’s like connecting your data warehouse to a much larger data organisation instantly, and usually for the cost of a few cents.
The main purpose is that we want to create a more complete CX record by leveraging the data on this person held by a third-party source.
Data enrichment is sometimes associated with digital advertising networks, whose practices definitely push the line both legally and ethically. These companies have been spending money to buy complementary data on their users for a long time, and are experts at creating full CX profiles for a visitor within milliseconds of the user landing on a website running their ads. As a data guy, while I may not feel warm and fluffy inside at the thought of ad networks using data enrichment, I definitely respect them for being the best at it.
Enriching data can be something simple such as adding every social media profile for that customer, or it can be more personal demographic details such as their home address or marital status, or (popular with ad networks) it could be the sorts of apps the user likes to download.
While the laws on data enrichment and aggregation vary hugely internationally (Germany, I’m looking at you), generally speaking, if enriching your data with that particular information allows you to deliver a more valuable service to the customer, then ethically you’ll be on safe ground.
Achieving The Famous Customer 360 View
Most customer analytics conversations will discuss a concept called the “Customer 360 View”. This is a fairly commonly accepted term that describes creating a “complete picture” of all important information about that customer, including all of their demographic data, all of their behaviour history, and anything else related to the touchpoint of that customer.
For example, for a customer of a B2C iPhone application, the 360 view would include:
- Their name, age, location etc.
- Their username within the app
- All of the in-app events recorded since sign up, in chronological order
- The purchase dates and amounts of their subscription payments
- Any upgrades or downgrades of their subscription plan
- Open and click activity on marketing emails
- Details of all support tickets logged, including which agent handled the ticket and any ratings or satisfaction scores
- Any NPS ratings, and any comments given
- Any tags or segments this user belongs to
As you can see, customer 360 views are extremely comprehensive. Most implementations will provide a nicely formatted web dashboard to access this information, but for real power, you should also expect to have developer API access so that you can distribute this integrated data object to other applications that can benefit from it.
In practice, ensuring all disparate data sources can be aggregated for each real-life person requires what is known as a Unique User Identifier (UUID). This means we need a way to merge our support ticket records with our email marketing records and our app username records.
Most implementations I have seen use a combination of email addresses and/or internal database IDs (e.g. 747DU36DW824DW37643).
The advantage of an internal database ID is that it is much more robust in the long term, and it doesn’t matter if the user updates or changes their email address. The advantage of using email addresses is that they are more human-friendly when setting up, particularly if you are bringing together databases that have been operating independently already. For example, it is unlikely that you thought to push your app’s database ID back into Mailchimp when you subscribed the user to your onboarding email sequence.
In practice, a combination of both is usually used, meaning new systems can merge their data onto the central record using either email address, unique ID, or any other possible identifier (such as a Twitter username that you already know belongs to that user).
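The matching logic above can be sketched in a few lines. This is an illustrative toy, not any vendor’s actual implementation – the record shape and identifiers are invented – but it shows the “match on whichever identifier overlaps” idea:

```python
# Hypothetical central 360 records, each carrying every identifier
# we know for that person.
central_records = [
    {"uuid": "u-001", "email": "jane@example.com", "twitter": "@jane"},
    {"uuid": "u-002", "email": "bob@example.com", "twitter": None},
]

def find_central_record(incoming):
    """Match an incoming record by UUID, email address, or Twitter
    handle – whichever identifier the source system provides."""
    for rec in central_records:
        for key in ("uuid", "email", "twitter"):
            if incoming.get(key) and incoming[key] == rec.get(key):
                return rec
    return None

# An email-marketing export might only carry an email address...
match = find_central_record({"email": "jane@example.com"})
print(match["uuid"])  # u-001
```

Real implementations add fuzzier matching (normalised emails, merged duplicates), but the principle is the same: the more identifiers the central record carries, the more systems can attach their data to it.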
Using Customer Analytics To Improve The Customer Lifecycle
Customer analytics has multiple benefits and applications within the organisation, however one of the most relevant uses for subscription or IoT companies is in optimising the customer lifecycle.
When trying to improve your customer acquisition rate, customer analytics can help with providing data on each of the unique conversion points in your funnel. This means it can help to:
- Optimise the conversion rate of your landing page, improving the number of sign ups or sales contacts
- Optimise the onboarding email sequence to improve the conversion rate of free trial to paid users
- Optimise product engagement during onboarding to proactively push the user towards “Aha!” moments
- Analyse the CX during the whole onboarding process to suggest patterns and commonalities between successful and unsuccessful sign ups
Customer analytics can often show us patterns and trends that may not seem intuitive or immediately obvious.
For example, running a Behavioural Cohort report (a particular type of customer analytics report) might show you that free trial sign ups who invited 3 colleagues to collaborate on their account were 40% more likely to upgrade to a paid account. Or it could show you which email sequence in an A/B test resulted in seemingly lower login behaviour during the free trial, but ultimately generated customers who retained for longer and therefore had a higher Lifetime Value.
Retention is usually seen as the #1 growth driver in subscription companies, and unsurprisingly it is one of the main focus areas for applying customer analytics.
While the technical reports and techniques may differ, all analytics around customer retention usually falls into two categories:
- Understanding behaviour of customers who retained longer vs. those who churned
- Attempting to predict which customers are most likely to churn to allow proactive remedial action
The first of these applications is the easier of the two, provided you have good tools in place to crunch the data. We are usually looking at two defined segments of our user base and asking, “What makes them different?”. One example of an insight we could receive would be: “Users who follow over 100 users stay active for an average of 9 months, whereas users who churned in their first week followed only 10 users on average”. With this knowledge, we could change our onboarding flow to try to make sure new sign ups follow at least 100 users.
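At its core, that kind of segment comparison is just aggregating one behavioural metric across two groups. A minimal sketch, with made-up numbers standing in for real warehouse queries:

```python
# Two segments pulled from the warehouse; the "follows" counts here
# are invented for illustration.
retained = [{"follows": 150}, {"follows": 120}, {"follows": 90}]
churned = [{"follows": 12}, {"follows": 8}, {"follows": 10}]

def avg_follows(segment):
    """Average follow count across a segment of users."""
    return sum(u["follows"] for u in segment) / len(segment)

print(avg_follows(retained))  # 120.0
print(avg_follows(churned))   # 10.0
# A gap this large is the kind of signal that justifies changing
# the onboarding flow to push new sign-ups to follow more users.
```

In practice you’d run this across dozens of behavioural metrics and rank them by the size of the gap between segments, then validate the promising ones before acting on them (correlation isn’t causation).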
The second retention technique involves creating some level of “model” of a successful (and inversely, unsuccessful) customer on our product. Our platform would then constantly check a user against this model to see if their current behaviour exhibits similarities to it. Even at its simplest implementation, this might involve a long list of “rules” that our app would check against, such as “Has the user logged in in the last 3 days?” and “Has the user opened a marketing email in the last 3 months?”. In more complex data models where user patterns may not be as intuitive, or where the data volume is larger, you could implement machine learning models. These are vastly more complicated models that go beyond a “rule based” approach. Machine learning is a powerful tool for larger data sets, but the full scope of it is beyond this guide.
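The rule-based version really is as simple as it sounds. Here’s a sketch using the two example rules above; the thresholds and field names are illustrative assumptions, and a real system would pull these timestamps from the warehouse:

```python
from datetime import datetime, timedelta

# Fixed "current time" so the example is reproducible
NOW = datetime(2016, 7, 1)

def churn_risk_flags(user):
    """Return the list of risk rules this user currently trips.
    Rules and thresholds here are illustrative, not prescriptive."""
    flags = []
    if NOW - user["last_login"] > timedelta(days=3):
        flags.append("no login in last 3 days")
    if NOW - user["last_email_open"] > timedelta(days=90):
        flags.append("no email open in last 3 months")
    return flags

user = {
    "last_login": datetime(2016, 6, 20),       # 11 days ago
    "last_email_open": datetime(2016, 6, 25),  # recent
}
print(churn_risk_flags(user))  # ['no login in last 3 days']
```

An empty list means no flags; one or more flags could raise the “at risk” alert in your CRM mentioned earlier. Machine learning approaches effectively learn these rules (and subtler ones) from the data instead of having humans write them.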
The third common application of customer analytics is increasing upsells to customers. Very similar to analysing and predicting churn patterns, we can look at the opposite end of our customer base to identify our customers who are the most successful, and therefore most receptive to upgrading their plan or purchasing complementary products.
The techniques and report types are almost identical, with the main difference being that we are trying to spot users who are overusing the product, perhaps pushing features past the capacity limits you expected or making lots of WoM referrals (for example, from ‘Powered By’ links in your widget).
With these users, customer analytics can alert sales reps to potential deals or trigger automated marketing campaigns to drive new revenue or even provide leads to your partnership team to recruit new customer evangelists!
The Three Maturity Stages Of Customer Analytics
Customer analytics projects fall into 3 distinct types, with increasing difficulty, sophistication and data systems required to achieve each:
- Descriptive analytics
- Predictive analytics
- Prescriptive analytics
Generally speaking, organisations are equipped to handle analytics up to a certain level. I like to call this the Organisation’s Analytics Maturity Level, as it takes time to evolve through the stages. This evolution occurs not just in the people and technology investments that the company accrues over time, but also in the analytics and data success stories that build up within the company.
Descriptive analytics looks backwards at the past and things that have already happened. It takes raw data and makes it more easily readable to describe those events. For example, a report on the number of social media likes, comments and shares your company blog has received over the last year is descriptive analytics.
Nearly every organisation has a firm handle on descriptive analytics today.
Predictive analytics builds on descriptive analytics to take the data we have, build patterns and models, and then use these patterns to look forward into the future and predict what is likely to happen. Predictive analytics is only as accurate as the statistical modelling behind it, so we are only working with probabilities, not certainties.
Many organisations are experimenting with predictive analytics today. The technologies and techniques have been available and affordable for quite a while, but it is mostly the business cases that are only just starting to prove themselves, along with the ROI of investing in this type of analytics.
Prescriptive analytics builds even further on predictive and descriptive analytics. Its promise is to not only look forward at a probable outcome, but also to try to prescribe the right course of action in order to get there. For example, your predictive customer analytics could predict that a particular account will churn within the next 30 days. Your prescriptive analytics engine could then recommend that a proven drip email campaign, followed up by a sales rep call on day 14, is 80% likely to recover that account. Prescriptive analytics requires more technology in place; in particular, all of your remedial actions need to be tracked and a feedback loop put into place.
Very few organisations (startups or larger enterprises) have a successful prescriptive analytics deployment. Technologies are available and affordable; however, it requires a very disciplined process to ensure all online and offline actions are recorded correctly in the feedback-loop system.
How Much Is Customer Analytics Going To Cost
The costs of customer analytics fall into 3 main categories:
- Data Warehousing
- Analytics processing and visualisation software
- Analytics consulting expertise
As discussed earlier, a solid data strategy is at the core of any successful customer analytics program. If budget is restricted, it’s better to over-invest in the data warehousing, start with simpler analytics tools, and bring in external expertise on short engagements as required. Data is the currency of the future, so it’s worth investing in a solid vault!
Data warehousing has 2 main costs: People & Technology.
All data warehousing projects will need dedicated people focussed on the project’s implementation and ongoing support. However, in younger organisations taking advantage of SaaS solutions, this responsibility can be shared between internal employees and the professional services provided by the cloud vendor. In larger organisations, the size of the data team will be relative to the state of your data and how many people regularly require access to it (i.e. supporting colleagues with data tasks).
Costs for data warehousing technology are expressed in terms of gigabytes (GB) or terabytes (1TB = 1,000GB) of database storage. Data for customer analytics cannot sit “static” on simple storage disks (like photos or Word documents), as it needs to be ready to be queried at any time.
When calculating costs, think about how much new data you generate per month and how much historical data you already have. For example, if you generate 100GB of new data per month and already have 1TB archived, you need to invest in a data-warehousing solution that can handle that future growth.
In terms of ball-park costs, data warehousing estimates fall into 2 pretty distinct camps based on the type of organisation and the culture of the company.
Over the last few years, a number of cloud-based data-warehousing companies have emerged with extremely good value offerings. Following a modern SaaS pricing model with all technology, licenses and support inclusive, these vendors have simple and low “per TB” pricing. (Note: some vendors instead charge per 1M events/actions recorded, but the pricing estimates here are equivalent.)
The trade-off is that you’ll be expected to do a little bit more work in the setup (i.e. you may be responsible for following a User Guide on how to connect Marketo and Salesforce.com to your data warehouse). Also, bespoke customisation would cost a premium or in some cases, might not even be possible.
An illustration of the costs of one of these solutions:
In a fairly young 50-100 person B2B SaaS company, we can estimate data warehousing based on an inclusive fee of $20,000 /TB /yr, with all of the support costs included. (I based these figures on Trakio Hub, RJMetrics, GoodData and Teradata.)
This means a 2TB data warehouse, growing at 0.5TB per year, will have a 3-year total cost of ownership of around $180,000.
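Working the arithmetic through (assuming each year is billed on its end-of-year size, which is my reading of the estimate, not a vendor’s stated billing model):

```python
# 2TB warehouse growing 0.5TB per year, at an all-inclusive
# $20,000 /TB /yr. Billing on end-of-year size is an assumption
# made for this illustration.
price_per_tb_year = 20_000
size_tb = 2.0
total = 0.0
for year in range(3):
    size_tb += 0.5                        # grows 0.5TB during the year
    total += size_tb * price_per_tb_year  # billed on that year's size
print(total)  # 180000.0
```

That’s 2.5TB + 3TB + 3.5TB = 9 TB-years at $20k each, which is where the $180,000 figure comes from.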
In contrast, a large organisation requiring a more sophisticated solution might use a specialist data warehousing consulting firm to handle the setup. In this case for the same data size (2TB growing 0.5TB /yr) the cost of ownership over 3 years would be $2m – $2.75m. (I based these figures on anecdotal conversations with 2 data warehousing consultants, referencing multiple previous projects). In these cases, most of the costs seem to be front-loaded in the setup, with a 1TB project costing as much as $500k in setup fees.
Analytics Processing and Visualisation Software
Unfortunately, it’s very unlikely that a single analytics/visualisation tool will suit every requirement in your organisation. You might have a BI dashboard for producing comprehensive management reports from custom SQL queries. Multiple predictive analytics tools running various machine learning models. And an organisation-wide metrics dashboard allowing any employee to get quick KPI information.
Based on the complexity of the tool, how many people need access, or the value-based pricing of the proprietary technology, your costs for analytics tools will vary hugely.
- Simple yet attractive metrics dashboards and visualisations that connect to your existing data layer will cost around $10 /employee /month
- A customer analytics platform offering funnel visualisation, cohort matrices and trend comparisons could cost $100 to $2,000 /mo (based on 250k to 5M ‘actions’ analysed)
- A machine-learning platform responsible for constantly refining a predictive model of your online CX, and providing real-time product recommendations via its API to your 5 million monthly visitors, will cost around $5,000 /mo
Some organisations budget analytics per department, whereas other companies have organisation-wide budgets. In per-department budgeting, I have seen marketing teams spend 3 times as much on analytics software as they spend on marketing automation software.
Analytics Consulting Expertise
Once you move beyond the standard reports and want to start looking at your customer analytics with more personalised context, you’ll want to hire a data scientist. While most young startups don’t need dedicated internal data scientists, by the time you reach 20-30 employees you will begin to feel the pressure.
Whether you decide to hire full-time employees or bring in outside contractors, there’s good news and bad news.
The bad news is that “Data Scientist” is now one of the highest paid professions in the IT industry. In Silicon Valley, an intermediate Data Scientist can attract a salary of $170,000.
The good news is that universities and recruiters saw this trend coming, so there is a huge workforce training to be data scientists right now. I predict the market will settle within a few years.
There are also a large number of analytics vendors and consultancies who are increasing their professional services capacity with dedicated data scientists to handle the “on demand” requirements of most companies. These provide a cost-effective “data scientist as a service” solution suitable for most growing companies.
A downloadable PDF will be available for this guide once I’ve had a bit more time to edit this and add in some comments/notes from other industry experts.
Subscribe to our email list to be the first to hear about the downloadable PDF!