So you’ve launched (or are about to launch) your new digital product. Congratulations. I’m sure that was hard work; the journey from ideation through to product launch is a demanding one.
Unfortunately, the hard work is just beginning! Now that your product is in the market and users are engaging, you need to consider how you’re going to monitor its performance and how you’ll use this information to plan future improvements.
Best practice says that this process starts with clearly defining what success looks like for you and your product. This is something best done as early as possible. The sooner you’re thinking about what success means for your platform, the sooner you can start steering towards it.
In my mind, defining success should always start with the development of a ‘north star metric’. This is a global point of measurement that represents what success means to the business as a whole and, importantly, is viewable by all members of your team. This could be a financial measure, but often metrics such as efficiency or client satisfaction work better.
For example, our definition of success (our north star metric) is the grades we receive from our clients on the Clutch review platform. This data is important to us because it serves as a proxy for numerous other measures (such as how likely our clients are to become repeat customers and how likely they are to recommend our services to others) and, as public data, the reviews are open to our whole team.
North star metrics, by their very nature, vary from business to business. What works for us may not work for you or your product, so I can’t define what your metric should be here in this post. However, once you’ve taken the time to consider what north star metric will help steer your company, the next step is to build the reporting and measuring structure that sits beneath and supports this metric. These are most commonly called key performance indicators (KPIs).
Each area of your team should have their own set of KPIs, and these should be designed to help the team contribute towards the cross-functional north star metric. For example, though your marketing, sales, and product functions are inherently interlinked, I believe they still should have their own granular KPIs that can be measured independently and against one another.
Over the past decade of helping numerous clients create and launch digital products (and even incubating and launching a few of our own), I feel we’ve developed a good grab bag of transferable KPIs that are relevant for newly launched products and platforms. They roughly sort into three groups.
The most obvious route to easily measurable KPIs is through the usage data that your platform should be generating. Accessing this will require some kind of data monitoring and analytics capability within your app, digital product or website; this could be built bespoke as part of the product itself, or provided by a third party like PostHog or Amplitude for apps, or Google Analytics for websites.
Often the issue isn’t getting the data, but knowing which metrics to actually pay attention to – most analytics platforms present a tidal wave of information. We most commonly look at the following areas.
Every analytics platform will let you set goals. These are important actions within the product that you want to monitor, such as a newsletter signup, checkout or booking. Measuring the volume of goal completions is a simple KPI, though you need to have some degree of certainty that the goals you’re setting will actually contribute to your north star metric.
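In practice, a goal completion is just a named event in your analytics stream. As a minimal sketch (the event log, goal name, and figures here are hypothetical; a real analytics tool would supply this data for you), counting completions looks like this:

```python
# Hypothetical event log: (user_id, event_name) pairs, as an analytics
# export might provide them.
events = [
    ("u1", "page_view"),
    ("u1", "newsletter_signup"),   # a goal completion
    ("u2", "page_view"),
    ("u3", "newsletter_signup"),   # a goal completion
]

GOAL = "newsletter_signup"

# Volume of goal completions over the period covered by the log.
goal_completions = sum(1 for _, name in events if name == GOAL)
print(goal_completions)  # 2
```

The important step isn’t the counting, it’s choosing `GOAL` so that completions genuinely feed your north star metric.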
Monitoring when and where users drop off can be a good KPI for a platform that relies on engagement. For example, if you find a lot of users dropping off on a certain page, or at a certain point in your funnel, you can develop a sense of where adjustments need to be made within the product. Of course, you need adequate time and user volume before any decisions can be made confidently.
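Drop-off is usually read step-by-step through a funnel. A minimal sketch, using hypothetical step names and user counts, of where the biggest losses occur:

```python
# Hypothetical funnel: users reaching each step, in order.
funnel = [
    ("landing", 1000),
    ("product_page", 620),
    ("checkout", 210),
    ("payment_complete", 150),
]

# Drop-off rate (%) between each consecutive pair of steps.
drop_off = {}
for (step, n), (next_step, m) in zip(funnel, funnel[1:]):
    drop_off[f"{step} -> {next_step}"] = round(100 * (n - m) / n, 1)

print(drop_off)
# {'landing -> product_page': 38.0, 'product_page -> checkout': 66.1,
#  'checkout -> payment_complete': 28.6}
```

Here the product page to checkout step loses the most users, so that’s where you’d look first for a design problem.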
There’s also the platform conversion rate, which measures the proportion of users that complete a given action (sign up, purchase, start a free trial, whatever). We like this measure because, when it’s done well, it reflects the quality of the experience a user is having on the platform. 10% more users on your platform is nice, but if they don’t convert, it’s not worth a lot to you. A 10% increase in conversion rate will nearly always be a good thing though (assuming steady traffic figures).
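The calculation itself is simple; the visitor and conversion figures below are hypothetical:

```python
# Hypothetical figures for one reporting period.
unique_visitors = 4_200
conversions = 336          # e.g. completed sign-ups

# Conversion rate: share of visitors who completed the action.
conversion_rate = 100 * conversions / unique_visitors
print(f"{conversion_rate:.1f}%")  # 8.0%
```

Tracked period over period against steady traffic, a rise in this number is a strong signal that the experience is improving.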
Once a user subscribes, or even before this, you may want to measure stickiness. This measures product engagement by examining how many of your users return on a regular basis. When a product is sticky, users don’t just sign up and log in occasionally; the product becomes part of their daily professional life. If you’re selling a service, then the stickier your customers are, the easier it is to predict your future revenue and plan accordingly.
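One common way to put a number on stickiness (an assumption on my part, not something prescribed above) is the DAU/MAU ratio: the share of your monthly users who show up on a given day. With hypothetical figures:

```python
# Hypothetical active-user counts.
daily_active_users = 1_800
monthly_active_users = 9_000

# DAU/MAU: what fraction of monthly users return on an average day.
stickiness = daily_active_users / monthly_active_users
print(f"{stickiness:.0%}")  # 20%
```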
Retention can be measured over virtually any time period and will really depend on your product, your business, and what you’re looking to learn. Measuring it will help you assess whether your product’s user experience is keeping users engaged. One common way to measure app retention is by taking the percentage of users retained over the first three months of usage.
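The three-month measure mentioned above can be sketched as a simple cohort calculation; the user IDs and activity data here are hypothetical:

```python
# Hypothetical cohort: users who signed up in month 0, and which of
# them were still active three months later.
cohort = {"u1", "u2", "u3", "u4", "u5", "u6", "u7", "u8", "u9", "u10"}
active_in_month_3 = {"u1", "u3", "u4", "u9"}

# Retention: share of the original cohort still active at month 3.
retained = cohort & active_in_month_3
retention_rate = 100 * len(retained) / len(cohort)
print(f"{retention_rate:.0f}%")  # 40%
```

Swapping the three-month window for a week or a year is just a matter of which activity snapshot you intersect with the cohort.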
If your app or platform has numerous features or parts (think Adobe Creative Cloud, or Google Workspace), then the more features a user adopts, the more value they receive and the less likely they are to churn.
You can measure feature adoption either as a count over time or with respect to click volume (e.g. the percentage of features that generate 80% of clicks). Tracking your feature adoption like this can help you understand whether your company is building features that enhance your retention rate and adjust your product development accordingly.
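The second approach above – the percentage of features that generate 80% of clicks – can be sketched like this, with hypothetical feature names and click counts:

```python
# Hypothetical click counts per feature.
feature_clicks = {
    "editor": 5200, "export": 2100, "comments": 1400,
    "templates": 700, "themes": 350, "api_keys": 250,
}

total = sum(feature_clicks.values())
threshold = 0.8 * total

# Walk features from most- to least-clicked until 80% of all clicks
# are covered; those are your "core" features.
running = 0
core = []
for name, clicks in sorted(feature_clicks.items(), key=lambda kv: -kv[1]):
    if running >= threshold:
        break
    core.append(name)
    running += clicks

share = 100 * len(core) / len(feature_clicks)
print(core, f"{share:.0f}%")  # ['editor', 'export', 'comments'] 50%
```

If that share keeps shrinking as you ship features, you’re probably building things users don’t adopt.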
In addition to the above, it’s worth considering exposing your analytics and CRM data to a business intelligence package such as Microsoft Power BI or Google Data Studio. The main benefit is bringing multiple data sources together in one place; Microsoft Power BI, for example, will let you run queries across a selection of data sets to help you understand the possible impact of any change to your product.
The difference between usage KPIs and the analytics KPIs I’ve listed above is subtle, and the two groups often bleed together somewhat. In my view, usage KPIs should be more complex and three-dimensional than pure analytics KPIs. They are more about how parts of the platform or system link and work together, and they often bring in data from multiple sources. Here are the three we deploy most often.
Task success rate is probably the most commonly used performance KPI, reflecting how effectively platform users are able to complete certain tasks. Expressed as a simple percentage, it often has a headline measure and then sub-measures. For example, for the HMRC site, the headline measure would be the percentage of tax processes completed successfully, with sub-measures for individual tasks (self assessment, capital gains assessments, etc).
As long as your tasks have a clearly defined end point and a set of steps towards it, such as completing a registration form by filling out the individual fields, you can measure the success rate. This means that before collecting data, it’s important to define what constitutes success.
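The headline/sub-measure structure can be sketched from a log of attempts; the task names echo the HMRC example above, and the attempt data is hypothetical:

```python
from collections import defaultdict

# Hypothetical attempt log: (task, succeeded) pairs, where "succeeded"
# means the user reached the task's clearly defined end point.
attempts = [
    ("self_assessment", True), ("self_assessment", True),
    ("self_assessment", False), ("capital_gains", True),
    ("capital_gains", False), ("capital_gains", False),
]

totals, wins = defaultdict(int), defaultdict(int)
for task, ok in attempts:
    totals[task] += 1
    wins[task] += ok

# Sub-measures: success rate per individual task.
per_task = {t: round(100 * wins[t] / totals[t], 1) for t in totals}
# Headline measure: success rate across all tasks.
headline = round(100 * sum(wins.values()) / len(attempts), 1)

print(per_task, headline)
# {'self_assessment': 66.7, 'capital_gains': 33.3} 50.0
```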
Time on task (sometimes referred to as task completion time, or simply task time) is a measure expressed in minutes and seconds. You might use it when a user fills out a form, runs a search query, or extracts a report via the platform. Over time, it’s useful for monitoring how areas of the platform are performing, and whether there are any specific drop-off points, perhaps due to overly complex design.
Generally, the smaller the time-on-task metric, the better the user experience. Time-on-task data can be analysed and presented in different ways, but the most common approach is to present the average time spent on each task. Note that this measure can often span different platforms and services, as it’s not uncommon for a task (such as drawing up a report) to require inputs from multiple places. As such, this KPI can be tricky to measure, though as analytics and business intelligence platforms improve and integrate with ever more services, more complex task tracking is becoming possible.
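The common presentation – the average time per task, reported in minutes and seconds – looks like this with hypothetical timings:

```python
# Hypothetical completion times for one task, in seconds.
times_s = [95, 120, 80, 145, 110]

# Average time on task, converted to minutes and seconds for reporting.
avg_s = sum(times_s) / len(times_s)
minutes, seconds = divmod(round(avg_s), 60)
print(f"{minutes}m {seconds:02d}s")  # 1m 50s
```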
Generally, errors are a useful way of evaluating user and platform performance. Error rate can tell you how many mistakes were made, where they were made, and how different designs produce different frequencies and types of error. Errors and usability issues are so closely related that they’re sometimes treated as the same thing, which is why this metric is frequently relied on when teams make design decisions.
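As a minimal sketch of the "where mistakes were made" question, here are hypothetical per-field error rates for a form; an outlier points at a likely design problem:

```python
# Hypothetical form data: (field, submissions, submissions with errors).
fields = [("email", 500, 40), ("postcode", 500, 95), ("card", 480, 12)]

# Error rate (%) per field.
error_rates = {f: round(100 * e / n, 1) for f, n, e in fields}
print(error_rates)
# {'email': 8.0, 'postcode': 19.0, 'card': 2.5} – postcode stands out
```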
The final area we like to monitor is users’ attitudes to the product. The simplest and most common way to do this is by generating a Net Promoter Score (NPS) from your users, typically via an in-app or on-site popup, or as part of the post-checkout process.
NPS has become one of the most popular ways for organisations to gauge customer loyalty and satisfaction, and because it’s something of an industry standard, it’s easy to compare scores across different platforms or products.
NPS asks customers how likely they are to recommend a company’s product or service on a 0–10 scale, the results of which you can then connect back to customer happiness and, eventually, business growth. However, it’s worth noting that NPS doesn’t always represent an accurate measurement of customer satisfaction, so it’s best used alongside other measures, such as stickiness or churn.
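The standard NPS calculation classes 9–10 responses as promoters and 0–6 as detractors, then takes the difference in their percentages. With hypothetical survey responses:

```python
# Hypothetical survey responses on the standard 0-10 NPS scale.
responses = [10, 9, 9, 8, 7, 10, 6, 3, 9, 10]

promoters = sum(1 for r in responses if r >= 9)    # scored 9-10
detractors = sum(1 for r in responses if r <= 6)   # scored 0-6

# NPS = % promoters minus % detractors (passives, 7-8, are ignored).
nps = round(100 * (promoters - detractors) / len(responses))
print(nps)  # 40
```

The score ranges from -100 (all detractors) to +100 (all promoters).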
Establishing a north star metric is a useful practice for most new digital platforms or products. It makes it explicitly clear to your company, colleagues and stakeholders what success looks like for your organisation and will help inform decisions about product development and strategy. That metric should always be supported by sensible and measurable KPIs, which I’d suggest should be set at a team level.
The post Measuring success: finding your north star metric and the KPIs that support it appeared first on Browser London.