The goal of all CX work is to deliver a better experience for people you interact with. That begins with designing the experience you intend to deliver.
Digital CX is fundamentally about optimising user journeys: improving the interaction between organisation and user.
Before we get into how we do this, though, we should take a moment to define what we mean when we say “user journeys”. Increasingly we see people talking about the customer journey when what they mean is the marketing funnel. For example, this image from Pointillist shows a funnel, not a journey:
A funnel-based approach to optimisation aims to improve the conversion rate for the business, not the experience for the user. Funnels say nothing about whether the user achieved what they wanted, just the performance of the touchpoints.
A journey map doesn’t look at performance, but at experience. In much the same way as looking at the interface for a horse tells you nothing about what it’s like to ride one, funnel analytics tell you only how effectively a touchpoint moves the user to the next touchpoint. They don’t tell you what the user feels, who they are, the context for their interaction that day, or what they view as a successful outcome.
Thus a journey map is the story of a single representative user from a persona type, starting with a goal, context and motivation, and ending in an outcome. Along the way we see how the journey evolves. As such, a journey map might look something like this:
So, moving on to the detail of customer journey optimisation. This work has two parts, each consisting of four elements. The first block of four covers research, and the second covers development. In order, all eight steps are as follows: goal setting; customer journey research; touchpoint analysis; monitoring setup; challenge definition; hypothesis creation; test planning; and implementation and change management.
Let’s take each of these in turn, to see how change is developed and implemented successfully. Firstly, we’ll tackle the research stage.
Goal Setting

Before anything else occurs, a strategic goal must be set. This goal will be the foundation on which everything else builds. It allows us to set success and failure criteria, to discover which user journeys are important to improving that goal, and to evaluate the performance of touchpoints along the path.
What this goal should be isn’t something that we can dictate to you. It requires looking at your organisation and finding an area you wish to improve. We like to find that point by surveying people who’ve completed goals with tangible business value, and asking them what got in their way in the process. We then pick the one with the biggest number of issues, and tackle that first.
The goal in question, though, must go beyond simply “fix the things that get in the way”. The journeys involved in reaching and completing that goal need to be considered, mapped against each other, and the goal then set as a specific improvement. That might be a higher number of completions of the goal, an improvement in the quality of those completions, or a better experience of the journey, leading to an improved perception of the process. There needs to be something specific and measurable, which can be used to judge the success or failure of the optimisation process.
For example, if we’re working with a retailer to optimise the shopping and checkout experience of their site, the metrics might be to do with session duration, or number of unique visitor sessions to purchase.
On the other hand, if we’re working to optimise the journey of user onboarding with a platform, that journey begins with a user first discovering the brand and ends with them signed up and using the platform fluently. The metrics we might be trying to influence might then be around interactions per week, or FAQs viewed.
Part of the key to this is that we aren’t measuring an individual touchpoint. Whilst touchpoint analytics are of course important (and we’ll get to that later), they can be optimised individually without delivering improved end results. In the same way that a marketing channel can produce great metrics but little tangible benefit (the torrid circus the programmatic industry has created springs to mind), it’s quite possible to improve individual parts of a journey without delivering any uplift when measuring the final goal.
This is why this initial goal setting must be yoked to direct business value, with success and failure criteria set for its movement. It’s not enough just to make a metric change; it’s got to change in a positive way, which delivers the benefits desired by the strategic framework in which this effort is operating.
Customer Journey Research
As we move on to the second phase, we start to consider the journeys that affect the goal and create the outcomes which deliver it. In DCXD, it’s these combinations of touchpoints which we work to improve upon to deliver tangible results.
There was a time when we could have considered our work as useful by focusing on developing the various touchpoints in the absence of wider context. We’d have improved each individually and patted ourselves on the back as being customer-focused, and indeed, we may well have even delivered results.
This touchpoint-focused delivery still has its benefits. For example, see the entire fields of SEO, programmatic and media advertising, generating brand awareness and affinity, or CRO, focusing on improving late-stage funnel results. However, the days when these could be considered in isolation and still both out-perform the competition and achieve enough visibility with consumers have long since passed. Instead, optimising touchpoints, the individual pieces of the overall journey, is now just one part of a larger overall picture.
Thus today’s leading organisations are focused on adapting how they work on a more fundamental level to improve the experience delivered. So whilst a company could, continuing with our retailer example, focus on tweaking website UI to deliver a better experience and so improve its conversion rate, this style of work is now often akin to rearranging the deckchairs on a sinking ship.
It’s not enough to simply view the components of digital interactivity as a stand-alone part of the user journey and experience. Instead, if a retailer is going to deliver on digital experiences, they’ll find customers turning up and asking more of them than ever before.
Companies who excel at this are those like Schuh, who tie their stock, logistics and website together so they can deliver an in-store pickup experience, even at stores which don’t have a product in stock. They manage this by running overnight logistics on digital orders, allowing collection from stores which didn’t have the product the day before. More than that, by having the data on store location and warehousing, they can use predictive logistics to show a product as being in store for pick-up even when it isn’t yet, because by the time the customer arrives to pick the item up, it will have been transferred to the store for them.
Where another retailer might have worked on messaging to inform the user that they couldn’t collect at a local store, which would reduce user frustration, Schuh have focused on delivering what the user desires and producing a better experience at the same time. It’s not the cheaper option; instead it’s the one focused on long-term customer acquisition, retention and development.
Touchpoint Analysis

Once the relevant customer journeys have been identified based on the goal set, they are next broken down to their constituent touchpoints. We recommend listing these along with the following:
- User aims and success/failure criteria
- User mindset, device and location
- Business priorities (strategy, process, governance)
- Data and application requirements
- Technologies interacted with
- Specific goal metric
The end result of this is a matrix which can be analysed and pivoted, in terms of touchpoint types, to allow for the prioritisation of work and batching of workflow, when these points are worked on.
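To make the matrix concrete, here’s a minimal Python sketch of how the touchpoint records above might be captured and pivoted by type; the touchpoint names, fields and values are all invented for illustration, not prescribed.

```python
from collections import defaultdict

# Hypothetical touchpoint records, following the fields listed above.
touchpoints = [
    {"name": "landing page", "type": "web", "user_aim": "find product",
     "device": "mobile", "goal_metric": "bounce rate"},
    {"name": "checkout form", "type": "web", "user_aim": "complete purchase",
     "device": "mobile", "goal_metric": "form completion rate"},
    {"name": "order confirmation email", "type": "email", "user_aim": "confirm order",
     "device": "any", "goal_metric": "open rate"},
]

# Pivot by touchpoint type, so work on similar touchpoints can be batched.
by_type = defaultdict(list)
for tp in touchpoints:
    by_type[tp["type"]].append(tp["name"])

for tp_type, names in sorted(by_type.items()):
    print(f"{tp_type}: {', '.join(names)}")
```

In practice this matrix would more likely live in a spreadsheet or a pandas DataFrame, where pivoting on any of the other columns (device, user aim, goal metric) is equally straightforward.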
Of particular importance here is the specific goal metric. This is the way we will measure the importance of the amends to the touchpoint itself, and monitor against the metrics for the overall strategic goal. This allows us to understand, as different touchpoints fluctuate and are amended and improved, how important each is to the overall performance.
Monitoring Setup

This final part of the first stage looks at how the changes created will be monitored: the analytics, reporting structures, people involved, and governance for when changes will be approved and locked in, or backed out. This allows for transparency in the project, and for planning the management and internal reporting structure, creating a framework for running the project well.
It’s worth remembering that the advantage we have as digital CX practitioners is that, due to the nature of the medium, there’s a wealth of ways to extract both qualitative and quantitative measurement for our hypotheses and tests. From social media marketing to programmatic, SEO to digital PR, almost everything we might want to work on can be tracked and measured. This data cannot necessarily all be joined together on a per user basis, with cross-channel and cross-device movement breaking individual journeys and GDPR and similar legislation often leading to broken analysis, but that need not be too much of a problem.
Instead of aiming for perfect tracking between platforms (an ultimately impossible goal), organisations should instead focus on the strategic and tactical goals for each individual channel, how each touchpoint performs, analysed by user intent rather than visit source. After all, the end goal is to improve the fit of each touchpoint to the user journey, not to optimise it in isolation.
This final part is the key, especially in systems with long lead times or lifecycles, where an optimisation around brand awareness or affinity might take months to show in the key metric, due to the lag between users becoming aware of the brand or increasing desirability, and actual pay-off in revenue.
Challenge Definition

The first step to addressing the issues in our customer journey is to define them. Now we know what the various journeys look like, we can create hypotheses around what doesn’t work in the current model.
This step isn’t about finding ways to address them - we’ll be doing that next. Instead, we want to list out possible reasons users might not complete the goals the business has. This needs to be informed by data - user surveys, funnel analytics, click mapping, form analytics and method marketing are all great ways to get the information you need to create strong, testable hypotheses. If you want information on how to set these up, and to use them for testing, don’t worry, we’ll be covering that later.
It’s okay for this step to take time. It’s better to spend the time up-front, making sure that the scale and scope of the challenge at hand are understood, and to have solid groundings for your experiments, rather than rushing it and trying to get to the testing without understanding the nature of the problem. Otherwise, you risk spending twice the time testing blindly, hoping to stumble on the right solutions, and not understanding why they work when they do.
Hypothesis Creation

A good hypothesis has three elements. It should be:
- Based on research, specific to a single topic
- Simple, specific, testable and falsifiable
- Built around an independent and a dependent variable
The first of these we’ve covered in the touchpoint analysis section. The other two bear looking at before we proceed though, as they’re vital to the next section.
The second part we defined as simple, specific, testable and falsifiable. Testable is obvious - if you can’t test your hypothesis by changing something and observing an outcome, then it’s fundamentally useless. However, it’s not enough simply to have something which can be tested. You’ll come up with various hypotheses for why users are interacting the way they are, and you likely won’t be able to test them all. As a result, we can also kick out any hypothesis that doesn’t have something specific which would disprove it. Unless there’s a way it can be shown to be false, the test can be claimed to be a success, whatever the observed result. This is why measures like bounce rate tend to lead to poor tests; whether it goes up or down, the result can be spun as good. Any hypothesis created must have success and failure criteria attached, so that it can be declared false if those criteria aren’t met. Equally, if two hypotheses are equally probable, remember Occam’s Razor and pick the simplest one, the one with the fewest independent variables to test.
It’s also worth noting that the hypothesis you create can never be proven to be true. Instead, a test can only show it to be likely, within a degree of confidence; the probability of seeing a result at least as extreme by chance alone is the test’s p-value. If this isn’t something you know about, we’d suggest reading up on statistics before you start testing, to ensure you understand what you’re doing.
For the final element, in case you haven’t come across these terms before, we have dependent and independent variables. The first is the metric or metrics being measured. If we took the equation a + 2 = b, b is a dependent variable; its value depends on the value of a.
The independent variable is therefore a - it’s something we have control over, and changing it will affect the value of b.
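To make these pieces concrete, here’s a hedged Python sketch of a two-proportion z-test: which treatment a user sees is the independent variable, whether they convert is the dependent variable, and the function returns the p-value for the observed difference. The traffic and conversion numbers are invented.

```python
from math import sqrt, erf

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates.

    Which treatment a user saw is the independent variable; whether
    they converted is the dependent variable being measured.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)                # pooled rate under "no difference"
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_a - p_b) / se                                    # how many standard errors apart

    def normal_cdf(x):
        return 0.5 * (1 + erf(x / sqrt(2)))                 # standard normal CDF

    return 2 * (1 - normal_cdf(abs(z)))                     # two-sided p-value

# Invented numbers: 120/2400 control conversions vs 156/2400 treatment.
p = two_proportion_p_value(120, 2400, 156, 2400)
print(f"p = {p:.3f}")
```

A p-value below a threshold agreed before the test (0.05 is common) is the usual bar for calling a difference significant. In practice you’d normally reach for scipy.stats or your testing tool’s built-in reporting rather than hand-rolling this, but the moving parts are the same.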
Test Planning

Now we come to the meat of things: testing. There are a whole bunch of ways to test and measure changes to your site, both in terms of gathering qualitative and quantitative data. We’ll cover a few here, but this is by no means an exhaustive or comprehensive list.
A/B Testing

What is it? You have a control and a treatment version of the same thing, where one change is made, to see which version performs better.
Best for: getting quantitative data on touchpoints with immediate, trackable goals, to test what people do
Not for: learning what people think about, notice in, or take away from a touchpoint and its interface and information
What is it? Asking members of a general population questions about a specific interface, such as “what do you think this is about?”, “what’s the first thing you noticed?” and “what would you expect to see after interacting with this element?”
Best for: getting qualitative data on what people observe, and what their expectations are, based on the areas of the interface to be changed.
Not for: testing what people will actually do, which can be different from what they say, or getting data which would require familiarity with the industry or product in question to be useful.
User Surveys

What is it? Asking actual or potential users of your product or service for answers to specific questions, such as “what brought you here today?”, “what information were you looking for?”, and for users who completed a task, “did anything put you off whilst shopping/etc today?”
Best for: getting qualitative data on what relevant users are trying to do on your digital spaces, to inform how you can better help them, and asking the same questions after changes have been made to see if these issues have been resolved.
Not for: getting long answers to lengthy questions or sets of questions, which might distract users from taking actions you wish them to proceed with.
Five Second Tests

What is it? Showing several groups of people treatments of a single interface for five seconds, and asking them to tell you what they think it’s about.
Best for: getting qualitative data on how efficiently your interface conveys its message to users.
Not for: Anything where any level of detail or thoroughness is required, or any significant quantity of copy needs to be read.
Funnel Analysis

What is it? You have a small series of actions users need to take, during a short period of time, in some form of order, to complete a single goal.
Best for: observing how people move through a mainly linear sequence of connected actions, and improving each point to smooth the overall journey. In essence, user journey optimisation in micro.
Not for: analysing non-linear paths, or paths with long time periods from start to finish.
Form Analytics

What is it? You have a form in your site/app/other, which users need to complete, with a specific intended outcome in mind. For example, contact, registration, checkout or sign-up forms.
Best for: getting quantitative data on which fields are most commonly left uncompleted, where people drop out in filling out the data, how many people interacted with the form in any way...
Not for: getting data on anything that isn’t a form. Also, not useful for understanding why users drop out or don’t complete the form.
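As an illustration of the kind of output form analytics produces, here’s a small Python sketch computing per-field drop-off from reach and completion counts; the field names and numbers are invented.

```python
# Hypothetical counts from a checkout form: how many sessions reached
# each field, in order, and how many completed it.
fields = [
    ("email",       {"reached": 1000, "completed": 930}),
    ("address",     {"reached": 930,  "completed": 610}),
    ("card_number", {"reached": 610,  "completed": 540}),
]

# Per-field drop-off highlights where users abandon the form.
for name, counts in fields:
    dropped = counts["reached"] - counts["completed"]
    rate = 100 * dropped / counts["reached"]
    print(f"{name}: {dropped} sessions dropped ({rate:.1f}%)")
```

With these invented figures the address field stands out, losing around a third of the sessions that reach it. That’s the quantitative signal; why it loses them is exactly the question form analytics can’t answer on its own.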
Click Mapping

What is it? Software which records interactions on digital interfaces.
Best for: getting information on what people are clicking on, how they use the interface presented, and what produces redundant or frustrating interactions (like auto-rotating carousels).
Not for: evaluating pages with particularly large sets of items to be interacted with, or with elements in motion, where the click can’t be tied to a specific thing. Carousels are again an example of where this falls down.
Method Marketing

What is it? Like method acting, but for marketers. Starts with creating personas, then matching their goals, current state of knowledge and likely device and location against the touchpoint.
Best for: getting inside the mind of different types of users, to evaluate how an interface might fit with specific groups of users, rather than users in aggregate.
Not for: getting any form of data; this is purely a way to think about how you can better meet the needs of specific groups of users with a single touchpoint.
Implementation and Change Management
Once you understand what your organisation wants to change from a KPI standpoint, and the how, where and what behind the changes, we can start planning how we’re going to go about managing our testing program and the changes introduced as a result of it. In this phase we focus on test selection, management and governance. Remember: functions and processes don’t exist in a vacuum - amends made in one place almost always cascade effects into other processes and outcomes where they’re also present. CX work needs governance to ensure that changes in one place don’t negatively impact other areas at the same time.
Taking the three parts on their own, we start with how to decide what tests will be run and how to prioritise them, to ensure the greatest possible impact for the effort spent. We suggest two approaches:
- Bottom up, focused on the number of users interacting with a touchpoint in the journey
- Top down, based on predicted conversion impact per user of ameliorating issues with a single touchpoint
In the first case, we’re aiming for impact by positively effecting change across a large volume of users, delivering returns through small gains at scale. In the latter, we’re looking at smaller numbers of users, but a larger impact on each one. Of course, it’s possible to find touchpoints where optimisation efforts will both impact large numbers and deliver huge benefit, but these are rarer. Also, the trade-off of volume vs impact won’t balance equally for each test, so it’s worth spending time estimating how successful each test would need to be to outperform the others.
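One way to run that estimate is to score each candidate test by expected impact: volume multiplied by predicted per-user gain. The Python sketch below shows the comparison; the touchpoints, user counts and per-user gains (say, pounds of revenue) are all invented for illustration.

```python
# Hypothetical candidate tests: a high-volume, low-gain touchpoint
# versus low-volume, high-gain ones.
candidates = [
    {"touchpoint": "homepage banner", "users": 50000, "uplift_per_user": 0.02},
    {"touchpoint": "checkout error handling", "users": 4000, "uplift_per_user": 0.40},
    {"touchpoint": "FAQ search", "users": 12000, "uplift_per_user": 0.05},
]

# Expected impact = volume x predicted per-user gain; rank highest first.
for c in sorted(candidates, key=lambda c: c["users"] * c["uplift_per_user"], reverse=True):
    impact = c["users"] * c["uplift_per_user"]
    print(f"{c['touchpoint']}: expected impact {impact:.0f}")
```

The point of the exercise isn’t the precision of the numbers, which are guesses, but forcing the bottom-up and top-down candidates onto one comparable scale before committing testing effort.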
The management and governance of testing also presents challenges which need addressing. Test work invariably intersects with work from various stakeholders, typically marketing, development, and operations at the minimum. Software such as Unbounce or SearchPilot can get around that to an extent, but only insofar as you’re happy having fragmented content management. Ensuring that the work done dovetails into the queues of each of those departments, and creating an environment where both the strategic aims and the ongoing change requests are transparent throughout the organisation, is tricky.
However, when you make it work, it not only creates an environment where other parts of the organisation and external agencies can feed in to the work being done, but also provides avenues for messaging to be communicated out, bringing everyone on board with the aims of that work.
Ultimately this goes back to the concept that CX doesn’t just affect the customer; the organisation itself has to deliver in a way that is focused on people, which is itself often a cultural shift internally.
Selecting Journeys to Optimise
With this abstract framework in place for what work should be done, we now need to consider the thorny question of where to start: choosing what to optimise.
When asked, customers repeatedly return to four areas which significantly contribute to the experience they have with an organisation’s touchpoints: relevance, lack of friction, empathy and expectation management.
First we’ll take a moment to unpack what these mean, and then look at where organisations should look to improve these areas, both in terms of their digital interfaces with end users, and their organisational structure and operations.
Relevance

This is the single most valuable point in any experience. Many organisations consider personalisation to be the most important element in today’s marketing environment, but in reality this confuses the method for the goal. Personalisation is just one way to create relevance. On its own, however, it doesn’t necessarily deliver greater relevance than would have been the case without it.
This is another part of why simply adding CRM systems to organisations, or going mobile-first or digital-first, rarely works. It’s a new coat of paint on a way of working which is otherwise unchanged.
The goal should be to create experiences which resonate with consumers, through demonstrating an understanding of their circumstances and requirements. If 500 end users have the same need, and the same solution will meet that need, trying to personalise 500 messages is only going to deliver a micro-level improvement, compared with finding areas where the messaging to those 500 people isn’t properly aligned with their needs and journey stage.
Lack of Friction
The second pillar in the customer experience tetrad is building systems and processes which create as little friction as possible.
We increasingly see consumers looking for instant gratification, and results delivered in timespans which ten years ago would have been unthinkable. Processes consumers go through should demand as little of them as possible, and operate at pace. Speed and elegance in systems interaction is increasingly becoming not just a demand from users, but also a competitive advantage, as alternative providers which hamper their users give them an incentive to jump ship.
Empathy

The penultimate element is empathetic design and communications. It’s not enough simply to tell users you have the solution to their problem, or even to demonstrate it, although that’s always going to help.
In order to compete effectively in the customer experience economy today, organisations need to demonstrate empathy with a user, their challenges and needs, the specifics of their situation, the frustrations they feel, and the ways they’re looking to resolve those challenges.
Showing empathy has the dual benefits of both building trust, and raising expectations, due to the users’ belief that you’re able to deliver against their requirements.
Expectation Management

The final component of delivering great digital CX is setting and meeting expectations effectively. Users have expectations about how their needs will be met before you engage with them, so how you interact with them through the user journey, and how you alter, set and meet those expectations, is hugely important.
These expectations can be directly set, like Amazon’s next-day delivery, or more subtle, like John Lewis’ “Never Knowingly Undersold”. But whatever expectations you’re setting, and whatever the user comes to you with, they need to be realistic, managed, met and, where possible, exceeded.