Measuring user experience design

How successful is my website, app, feature, or design change? It’s a simple question to ask, but more challenging to answer.

What are you trying to achieve with your design or development?

There’s little point talking about measurement if you don’t know why you built, or are building, something.

Also, who is asking the question? That matters too, because different stakeholders have different views of success.

Different stakeholders have different views of success

While success from the business and UX perspectives is likely to be aligned, the measurements the business wants to make and those a UX professional or content producer wants to make might be different.

Consider a charity that is trying to grow donations. They’ve completed a donations UX improvement project.

The CEO

The CEO will want to measure donations revenue, and will be hoping to see an increase when comparing each month after the project launched with the same month of the previous year, before the project.

That’s a good measure for the CEO who cares about outcomes and returns on investment, but doesn’t want to know in detail about conversion optimisation strategy.

The UX designer

The designer, on the other hand, might want to measure other things that they hope have improved since the UX improvement project launched.

Of course the designer is interested in the numbers of donors and the total amounts donated. If these numbers are going up it’s good for the designer because the CEO is happy.

But these totals don’t say much about users’ experiences. For that we need rates.

The usefulness of rates

Rates are useful for making comparisons.

For example, you can compare the 2019 GDP of Luxembourg ($71 billion) and India (roughly $2.9 trillion). This tells you something – that India has a larger economy than Luxembourg. No surprise there.

But looking at the GDP per person (a rate) for Luxembourg ($106k) and India ($2k) tells you something about the relative average living standard in those countries.

Rates are useful for making comparisons

Back to the example of the donations UX improvement project…

Probing UX performance

Comparing, say, a month post-launch with the same period a year ago, the designer might be hoping that the number of donors per user session has gone up.

With some caveats (see below), this measure provides an insight into changes in the effectiveness of the new UX at converting users into donors.

The designer might also hope to see an increase in the amount donated per donor. This measure says something different: it reflects the effectiveness of the new UX in inspiring more generous giving.

The designer’s conversion optimisation strategy might have hinged on a radical new donation page design. In that case it would be useful to measure, and compare, the number of donors per user session in which the donation page is viewed.

That rate would measure the relative effectiveness of the new donation page compared with the old design.

There’s a pattern here. As the rate definition is refined, measuring the rate provides an increasingly specialised probe into UX performance.
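
To make this concrete, here’s a minimal sketch in Python of those three rates, computed from session-level records. The data shape and field names are invented for illustration, not taken from any particular analytics tool. Compute the same rates for the pre-launch period and you have your comparison.

```python
# Minimal sketch: the three rates described above, computed from
# session-level records. Field names and values are invented.
sessions = [
    {"viewed_donation_page": True,  "donated": True,  "amount": 25.0},
    {"viewed_donation_page": True,  "donated": False, "amount": 0.0},
    {"viewed_donation_page": False, "donated": False, "amount": 0.0},
]

donors = [s for s in sessions if s["donated"]]
donation_page_sessions = [s for s in sessions if s["viewed_donation_page"]]

# Donors per user session: overall effectiveness at converting users.
donors_per_session = len(donors) / len(sessions)

# Amount donated per donor: how generously donors are giving.
amount_per_donor = sum(s["amount"] for s in donors) / len(donors)

# Donors per donation-page session: effectiveness of the page itself.
donors_per_page_session = len(donors) / len(donation_page_sessions)

print(donors_per_session, amount_per_donor, donors_per_page_session)
```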

Refine rate definitions to probe specific features of UX performance

Of course, in the real world there are often challenges in disentangling the impact of new UX from other factors.

Measuring in the real world

Imagine that the charity’s marketing team has been experimenting with a Google Ad Grants (i.e. free) pay-per-click campaign focused on recruiting people to participate in sponsored events. That might have substantially increased overall user sessions without attracting new donors, so the first metric above (donors per user session) takes a hit even if the numbers of donors and donation revenues have increased.

This issue could be solved by defining a user segment that excludes user sessions acquired from any pay-per-click campaigns focused on recruiting for sponsored events. That should level the playing field.
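
Here’s a minimal sketch of that segmentation, assuming each session record carries a campaign label. The campaign name and field names are hypothetical.

```python
# Sketch: exclude sessions acquired from the sponsored-events
# pay-per-click campaign before computing the rate. All labels are
# hypothetical.
all_sessions = [
    {"campaign": None, "donated": True},
    {"campaign": "ppc-sponsored-events", "donated": False},
    {"campaign": None, "donated": False},
]

EXCLUDED_CAMPAIGNS = {"ppc-sponsored-events"}

# Keep only sessions not acquired via the excluded campaigns.
segment = [
    s for s in all_sessions if s.get("campaign") not in EXCLUDED_CAMPAIGNS
]

# The rate is now computed over a like-for-like population.
donors_per_session = sum(s["donated"] for s in segment) / len(segment)
print(donors_per_session)
```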

Consider focusing on specific audience segments to enable fair comparisons

Beware bad metrics in the wrong hands

Ever keen to demonstrate superior knowledge of the organisation’s digital marketing, one of the charity’s directors has access to Google Analytics. One of the handy home page charts shows that the average bounce rate across the site was 80% last month and 70% in the same month last year.

Quelle horreur! The designer gets criticised because of the apparently declining performance of the website.

Of course, what this aggregate measure fails to capture is that, while the marketing team’s experiments with free pay-per-click advertising have generated more visitors, many of these were not well targeted and really didn’t want to be on the site in the first place.

The problem, in this case, isn’t with the performance of the site, but with the targeting of pay-per-click advertising.

Additionally, while bounce rate is a really useful success metric for individual pages that are specifically designed not to bounce – for example campaign landing pages containing calls to action – it doesn’t say anything about the success of pages that are user destinations.

When a user finds what they were looking for on the page they land on they’ve reached a valid destination. If the user exits from the page with their need met, that is a success. It is also a bounce, but a good bounce, not a bad bounce.

Average bounce rate for an entire site is almost always an unhelpful metric. Don’t use it, and don’t let others have access to it.
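
If it helps to see why, here’s a small illustration. The page paths, roles and numbers are invented.

```python
# Sketch: why a sitewide average bounce rate misleads. Bounce rate is
# only meaningful per page, judged against that page's purpose.
from collections import defaultdict

pageviews = [
    {"page": "/campaign-landing", "bounced": False},
    {"page": "/campaign-landing", "bounced": True},
    {"page": "/contact-details",  "bounced": True},  # a 'good' bounce
    {"page": "/contact-details",  "bounced": True},  # another good bounce
]

# The aggregate lumps good bounces (a destination page that answered
# the user's question) in with bad ones (a landing page that failed).
sitewide_bounce_rate = sum(v["bounced"] for v in pageviews) / len(pageviews)
print(f"Sitewide: {sitewide_bounce_rate:.0%}")  # tells you very little

# Per-page rates let you judge each page against its own purpose.
by_page = defaultdict(list)
for v in pageviews:
    by_page[v["page"]].append(v["bounced"])
for page, bounces in by_page.items():
    print(f"{page}: {sum(bounces) / len(bounces):.0%}")
```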

Analytics dashboards

Rather than allowing inexperienced users to dig around in Google Analytics, unearthing numbers that they potentially don’t know how to interpret, try providing online interactive dashboards.

Use dashboards to provide data that is relevant and consistent

By using dashboards you can control the metrics you make available to different types of stakeholder. They’ll see consistent data that is relevant to them and their role in the organisation.

Aligning success measures with objectives

Getting back to the question of the charity’s donations UX improvement project, was its objective to turn more users into donors, or to increase the average amount per donation, or perhaps both?

The primary success metric, or metrics, need to align with the project objectives.

Measuring non-transactional outcomes

The scenario I’ve been talking about is all about transactions, which are a lot easier to measure than non-transactional outcomes. So how can we measure non-transactional outcomes?

Imagine a professional services business has just launched a thought leadership blog. How do they know whether it’s a success?

In my article ‘Measuring thought leadership content’ I describe four metrics – two for a CEO, and two for a designer or writer.

Metrics for a CEO

The CEO’s metrics are totals (per month, quarter, year or whatever period the CEO is interested in).

  • The total number of views of thought leadership content scrolled to at least 75% of its length.
  • The total number of thought leadership shares.

These tell the CEO something about the aggregate of all engagements with the thought leadership output he or she is investing the company’s money in.

If the numbers are going up each month / quarter / year that’s a good thing.

A designer or writer might need to work a bit harder at selling these numbers than they would donation revenues, but if users in the target audience are willing to invest the time to scroll through articles and to share them, that says something very positive about their experiences of the content.
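
As a sketch, here’s how those two totals might be computed from content events. The event records, article slugs and the ‘max_scroll’ field are invented; in practice they would come from scroll-depth and share tracking.

```python
# Sketch: the two CEO totals, aggregated from content events.
# Everything here is illustrative.
events = [
    {"article": "article-a", "type": "view", "max_scroll": 0.9},
    {"article": "article-a", "type": "share"},
    {"article": "article-b", "type": "view", "max_scroll": 0.4},
]

# Views scrolled to at least 75% of the article's length.
total_deep_views = sum(
    1 for e in events
    if e["type"] == "view" and e.get("max_scroll", 0) >= 0.75
)

# Total shares, across all articles.
total_shares = sum(1 for e in events if e["type"] == "share")

print(f"Deep views: {total_deep_views}, shares: {total_shares}")
```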

Metrics for a designer and writer

On the other hand, those totals don’t give a designer or writer much insight.

For example, designers and writers might want to compare articles that are doing well with articles that aren’t doing so well. This is where similar, but slightly different, metrics come in useful.

  • The percentage of views of each thought leadership article scrolled to at least 75%.
  • The percentage of views of each thought leadership article with a share.

Notice that, again, these are rates. They measure the percentage of views in which something happened.

These rates are also calculated on a per-article basis rather than aggregated across all articles.

This means that one article can be compared with another. Those that are scrolled and shared can be identified, and compared with those that aren’t. This might help to expose features that make for scrollable, sharable content so that future content can be better written and designed.
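
Continuing the sketch above, here’s how those per-article rates might be computed so that articles can be compared with one another. Again, all data here is illustrative.

```python
# Sketch: the same kind of events, turned into per-article rates.
from collections import defaultdict

events = [
    {"article": "article-a", "type": "view", "max_scroll": 0.9},
    {"article": "article-a", "type": "view", "max_scroll": 0.3},
    {"article": "article-a", "type": "share"},
    {"article": "article-b", "type": "view", "max_scroll": 0.2},
]

stats = defaultdict(lambda: {"views": 0, "deep": 0, "shares": 0})
for e in events:
    s = stats[e["article"]]
    if e["type"] == "view":
        s["views"] += 1
        if e.get("max_scroll", 0) >= 0.75:
            s["deep"] += 1
    elif e["type"] == "share":
        s["shares"] += 1

# Rates per article: % of views scrolled to 75%+, and % with a share.
for article, s in stats.items():
    print(
        f"{article}: {s['deep'] / s['views']:.0%} scrolled to 75%+, "
        f"{s['shares'] / s['views']:.0%} shared"
    )
```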

Conclusions

In conclusion we can see that:

  • Measuring success depends on precisely what your objectives are
  • Success means different things to different people, and you need to work out what this means for what you measure
  • Your CEO will want ‘big picture’ numbers that reflect overall outcomes
  • Take control of your analytics data to avoid bad metrics, and unhelpful data in the wrong hands – for example by publishing dashboards rather than allowing access to Google Analytics
  • As a UX specialist, developer or content producer you are more likely to want to define rates that you can use to probe what you’ve designed or built, and to make comparisons
  • Take care to control your probing measurements as much as possible to filter out the effects of factors that will skew your results – you might need to implement segmentation
  • The success of transactional outcomes is usually easier to measure than non-transactional outcomes, but there are useful measures of non-transactional outcomes
Chris Scott
19 May 2020