Kashif Hasan

The joy of experiments


The idea of experimentation isn’t new. It could be argued it's hard-wired in us. Since forever, we’ve been driven by the idea of improving the quality of the things we make, and the tools we use to make them. And, as we discover better ways of doing and making things, we abandon the old. A broken flint became a cutting tool. A whittled flint edge became a knife blade. The new way becomes the bar we use to measure all else that comes after.


Our human instinct for progress is ceaseless and so the rising tide lifts all boats. Leap forward around 2.6 million years from the Stone Age to the 1970s and take a look at Sesame Street’s pioneering CTW model for combining school curriculum and pedagogy with adorable children’s entertainment. The model was meticulously tested using carefully constructed scientific experiments. The results speak for themselves. Sesame Street has, in some way, touched every parent and child alive. That’s no accident; they showed a phenomenal determination in testing their content.


Today, we’re aware that mega-brands are relentlessly testing ideas on us, usually covertly, occasionally provocatively. They’re searching for the very best ways — of doing everything they can — to win us over. Although the concept of experimentation is as old as we are, the idea of starting or upgrading our own programme might be new to us. And so here’s some advice, gained the old-fashioned way, with the wisdom of hindsight.


How much?


One of the first questions we’re certain to be asked when we start thinking about launching a programme of web experimentation is, unfortunately, ‘how much will it cost?’, followed by the auto-reflexive requisition for ‘ROI’. Mournfully, a credible ROI model for experimentation is almost impossible to produce. It’s a red herring question, just as it would be unusual to ask, ‘what’s the ROI of regularly cleaning the toilets in a chain of fast food restaurants?’, or, ‘what’s the ROI of launching the spring collection?’ Sometimes we have to consider carefully what will happen if we don’t do something. If our business is of a certain size, if we’re looking for greater advantage, and if our website (or perhaps more accurately, our ‘digital customer’) is very important to us, then to embrace an empirical experimentation culture is probably essential. Everything we consume is evolving quicker and quicker - which means everything is aging faster. A neglected website becomes more obvious, and, as consumers, our willingness to forgive is perhaps less generous than it used to be.


We know we must keep up to stay competitive, but change offers risk as well as reward. Consider the following scenario: we’ve briefed our web team to design a raft of improvements to our website. Everything looks and feels great, we feel sure our business requirements have been met, so we green-light the go-live and wait for congratulations to land in our inbox. Sure enough, everyone loves the new features - but wait, new business leads have dipped by a noticeable percentage.


Out of the dozen changes made, which of these has hurt our conversion rate? Should we roll back everything? How depressing. What a pickle.


Now imagine each of those dozen changes was part of an A/B test. Each new feature could then be turned on and off inside our experimentation software. Each could be tracked for impact on an individual basis. Not only this, but we could decide who gets to see each new feature. Fundamentally, this is the true power of experimentation: not every change deployed is going to be a change for the better, and we have no way of knowing for sure until it’s ‘in the wild’.
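
To make that concrete, here’s a minimal sketch of how those dozen changes might be expressed as independently toggleable flags, each tracked against its own metric. The names and shape here are hypothetical - every experimentation tool has its own API - but the principle is the same:

```typescript
// Hypothetical feature-flag registry: each change ships behind its own
// switch, so any one change can be disabled without rolling back the rest.
type FeatureFlag = {
  key: string;      // unique identifier for the change
  enabled: boolean; // the on/off switch
  metric: string;   // the KPI this change is tracked against
};

const flags: FeatureFlag[] = [
  { key: "new-hero-banner",     enabled: true, metric: "lead-form-submissions" },
  { key: "sticky-navigation",   enabled: true, metric: "pages-per-session" },
  { key: "simplified-checkout", enabled: true, metric: "conversion-rate" },
  // ...one entry per change, rather than one big all-or-nothing release
];

function isEnabled(key: string): boolean {
  return flags.some((f) => f.key === key && f.enabled);
}

// If leads dip after launch, we flip one flag at a time and watch its metric,
// instead of rolling back the whole release.
if (isEnabled("new-hero-banner")) {
  // render the new banner; otherwise fall back to the old design
}
```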


No plan survives first impact. The fear of getting it wrong creates onerous procrastination. If we’re not testing, we won’t know if we’ve launched a lemon. If this happens too often for too long, we could end up with a mess too tangled to fix.


Testing is really powerful stuff. It shows us the value of every change we make - for better or for worse. It allows us to minimise the risk of mistakes and amplify the upside of our winning ideas. In an ideal world, every business would embrace this way of working. It’s how all products should be developed.


The basics


A website experiment gives us the ability to run different versions of the same feature with a controlled percentage of our audience.
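
In practice, that usually means assigning each visitor to a variant deterministically, so they always see the same version, while we control how much of our traffic the experiment receives. A minimal sketch, assuming a simple hash-based scheme (illustrative, not any particular tool’s implementation):

```typescript
// Deterministic bucketing: hash the visitor ID into [0, 100) and use that
// number to decide whether the visitor is in the experiment at all and,
// if so, which variant they see. Same visitor, same bucket, every time.
function bucket(visitorId: string): number {
  let hash = 0;
  for (const ch of visitorId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return hash % 100;
}

function assignVariant(
  visitorId: string,
  trafficPercent: number // how much of the audience the experiment receives
): "control" | "variation" | "excluded" {
  const b = bucket(visitorId);
  if (b >= trafficPercent) return "excluded"; // not in the experiment
  return b < trafficPercent / 2 ? "control" : "variation"; // 50/50 split
}

// e.g. run the test on 20% of the audience:
console.log(assignVariant("visitor-42", 20));
```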



As digital Product Owners we’re forever thinking about our product’s performance, aren’t we? Comparing every detail to our closest competitors and wondering whether they know something we don’t. Web experiments allow us to test those hunches. For example:


I reckon users aren’t seeing the link to my case studies page… my competitors use much more prominent CTAs. Let’s test a more eye-catching design to see if that makes a difference to the click-through rate.

This hunch is our ‘hypothesis’. It’s a focussed statement based on an observation (i.e. not enough page views of our case studies page), it’s based on research (our competitors are doing things differently) and it’s measurable (we know the click-through rate today, or we can find it out quickly, and we can see what happens when we try something new).
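
One way to keep ourselves honest is to write the hypothesis down in a fixed shape, so every experiment carries its observation, research and metric with it. A sketch - the field names are mine, not a standard:

```typescript
// A hypothesis template: if any field is hard to fill in,
// the hypothesis probably isn't ready to test yet.
interface Hypothesis {
  observation: string; // what we noticed in the data
  research: string;    // what backs the hunch up
  change: string;      // what we'll try instead
  metric: string;      // what we'll measure
  baseline: number;    // today's value of that metric
}

const ctaHypothesis: Hypothesis = {
  observation: "Too few page views of the case studies page",
  research: "Competitors use much more prominent CTAs",
  change: "A more eye-catching CTA design",
  metric: "Click-through rate to the case studies page",
  baseline: 0.012, // 1.2% today (an invented figure)
};
```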

Having observed, done some competitor research and written our hypothesis, the next job is to brief our design team. We need them to create an alternative version (a Design Variation) of the feature we think will perform better - and plug that into the test software. Before we do plug it in (and assuming we’ve already installed our experimentation software), it’s wise to give thought to the efficiency of our experiment, and there are two ways to think about this:


  1. Is the feature we’re testing commonly used throughout the website? If so, then the economic impact of improving it becomes more obvious.

  2. Can we set up the experiment so that individual variables are adjustable? More simply put - don’t upload a static red version of the feature; create an experiment where we can input the colour’s hexadecimal code, so if we find that the first colour we test is inconclusive, we can simply type in another hex code and repeat the test right there and then - no more coding required (see the sketch after this list).
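
As a sketch of what point 2 means in practice - a hypothetical configuration, since each experimentation tool exposes variables in its own way:

```typescript
// Expose the colour as an experiment variable rather than hard-coding
// a static red version. Re-testing a new shade then means typing in a
// new hex code in the experiment UI, not redeploying code.
type CtaExperimentConfig = {
  buttonColour: string; // hex code, adjustable between test runs
  buttonLabel: string;  // the copy is worth parameterising too
};

// First run: test red. If inconclusive, change the value and go again.
const config: CtaExperimentConfig = {
  buttonColour: "#d62828",
  buttonLabel: "See our case studies",
};

function renderCta({ buttonColour, buttonLabel }: CtaExperimentConfig): string {
  return `<a class="cta" style="background:${buttonColour}">${buttonLabel}</a>`;
}

console.log(renderCta(config));
```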


How long?


There’s a fancy phrase, ‘statistical significance’ - academic language, and perhaps unnecessarily confusing. When we sit down for dinner, we don’t calculate how much food we need to feel full; yet, as long as we eat mindfully, we will know. The same is true for statistical significance: we need to monitor the results regularly - every few days, every week - until we start to see patterns. We need to run experiments until we achieve statistical significance, meaning we feel confident the results are real - not fluke, fake, or freak.
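
For anyone who wants to peek under the bonnet, the standard way to check whether the gap between two click-through rates is real rather than chance is a two-proportion z-test. Most experimentation tools compute this for us; the sketch below, with invented figures, shows the arithmetic:

```typescript
// Two-proportion z-test: given clicks and visitors for control and
// variation, how many standard errors apart are the two rates?
// |z| > 1.96 corresponds to roughly 95% confidence that the
// difference isn't down to chance.
function zScore(
  clicksA: number, visitorsA: number, // control
  clicksB: number, visitorsB: number  // variation
): number {
  const pA = clicksA / visitorsA;
  const pB = clicksB / visitorsB;
  const pooled = (clicksA + clicksB) / (visitorsA + visitorsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
  return (pB - pA) / se;
}

// e.g. control: 120 clicks from 4,800 visitors; variation: 165 from 4,750
const z = zScore(120, 4800, 165, 4750); // ≈ 2.8
console.log(Math.abs(z) > 1.96 ? "significant at ~95%" : "keep running");
```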

In our hypothetical example (about changing the CTA to get more visits to our case studies page) the test results may show, for a few weeks in a row, that our more eye-catching design variation makes practically no difference to the click-through rate. Does that mean the experiment is a failure? Certainly not. It means we’ve eliminated a variable, which is good. Was it the wrong type of eye-catching design? Perhaps. Maybe there are other factors we’ve failed to consider. We can now refocus our observations, expand on our research and brainstorm new hypotheses - preferably as a team - because more heads are better than fewer.

If our experiment shows that the eye-catching design does indeed improve the click-through rate - is that a success? Yes, congratulations! Our hunch paid off. Now the work of ‘optimisation’ truly begins as we abandon the old version of the CTA, replacing it with the new winning version. This new version becomes our new baseline (in the language of experimentation, it’s our new ‘control’).

What next? Well, we might well consider reporting our findings and re-briefing our design colleagues, requesting another, even more eye-catching design, to see if that might increase the click-through rate further still. And so it goes, until we’re satisfied we’ve achieved the optimum design of that particular feature or interactive moment.

That makes sense, doesn’t it? What we’re talking about here is the very foundation of the scientific method. Observe, research, hypothesise, test, analyse data, report conclusions - and go again.


Goal focussed


Perhaps it doesn’t need saying, but ad-hoc experiments should be avoided. We should organise ourselves in a strategic and systematic way - so that we can be more certain we’ll enjoy the fruits of our labour come harvest time. And so, it’s often useful to think in terms of frameworks (and targets) to focus our thinking.



Sometimes the team responsible for the website - and by extension our digital customer - doesn’t appear to have a commercial target. Indeed, this may well be true. Although targets can feel like a millstone, it’s better to have one than not, because if we can’t make a clear connection between our work and a commercial benefit, we’ll struggle to make the case for funding. Perhaps more pressing, our projects may be viewed as costs instead of investments, and come under greater scrutiny in tougher business climates. If no top-down financial target has been set, set one, bottom-up. On the other hand, we may be running an e-commerce team where our targets are cut and dried. If so, great. Whichever end of the spectrum we find ourselves at, we always need a quantifiable commercial goal to frame our programme of work - that’s the first goal.


Before we look to the future, we should review the recent past. We need to collect some data, ideally from the last 12 months (or the prior financial year). We want to hit our target, yes - but we also want to know our target is realistic, so we need to break it down. For the last 12 months, we need to know things like the following (a short worked sketch follows the list):


  • How many digital customers have we traded with?

  • How much total income has that generated? (Now we can work out the Average Value per Customer per Year)

  • How many of those were new digital customers?

  • How many were long standing?

  • How many have left us?
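
As a worked sketch of the arithmetic - the figures here are invented for illustration:

```typescript
// Last-12-months snapshot (invented numbers, for illustration only).
const customers = 2400;        // digital customers we traded with
const totalIncome = 1_800_000; // total income they generated
const newCustomers = 650;      // first-time digital customers
const churned = 420;           // customers who left us

// Average Value per Customer per Year
const avgValuePerCustomer = totalIncome / customers; // 750

// Long-standing customers and a rough churn rate fall out of the same data
const longStanding = customers - newCustomers; // 1,750
const churnRate = churned / customers;         // 17.5%

console.log({ avgValuePerCustomer, longStanding, churnRate });
```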


They say knowledge is power. Well, surely this kind of data is the best kind of knowledge, and it gives us the power to make a meaningful difference.


Gathering good data forces us to consider the objective realities of performance optimisation. As we dig in, we uncover nuance, we observe curiosities and we identify opportunities that deserve more attention. This activity is not just an altogether healthy business practice; it’s vital to creating and protecting competitive advantage. It informs our strategic viewpoint, whether we’re kicking off our very first experiment or our one thousandth, and in doing this work of due diligence, it will become clear where our priorities lie, be that in acquiring new customers or cultivating loyalty among our existing base.


Establishing our prior-year data points allows us to set, and adjust, the dial for each metric. Inevitably we end up making statements something like this:

To hit our target, we need:


  • The number of customers to increase by ‘x%’ on prior year

  • The annual value of a customer to increase by ‘y%’ on prior year

  • Traffic to increase by ‘a%’ on prior year

  • Conversion rate to increase by ‘b%’ on prior year

You get the idea - it’s not rocket science, it’s simple maths - but these percentage calculations are often neglected, and when they are, it’s easy to see how navigating toward our goal could feel like an act of faith.
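
Those percentages multiply together rather than add, which is why they’re worth writing down. A sketch of the simple maths, with invented figures:

```typescript
// Revenue decomposes as traffic × conversion rate × average value per
// customer, so the percentage uplifts compound rather than add.
const priorTraffic = 500_000; // sessions last year
const priorConversion = 0.02; // 2% of sessions converted
const priorAvgValue = 750;    // average value per customer per year

const priorRevenue = priorTraffic * priorConversion * priorAvgValue; // 7,500,000

// Targets: traffic +10%, conversion rate +5%, average value +4%
const targetRevenue =
  (priorTraffic * 1.10) * (priorConversion * 1.05) * (priorAvgValue * 1.04);

// 1.10 × 1.05 × 1.04 ≈ 1.201, i.e. roughly a 20% revenue increase
console.log(((targetRevenue / priorRevenue - 1) * 100).toFixed(1) + "%");
```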


There are four themes of optimisation


We’ve done the hard work of gathering data for each of the metrics we’d like to work on. We’ve set the dial for each, and we’ve given ourselves confidence that our goal is achievable. The next step is to develop a list of ideas we’d like to test to see how quickly we can move the needle, and here it’s important we give structure to our creative efforts. There are essentially four themes of web design optimisation:

  1. Discovery

  2. Confidence

  3. Distractions

  4. Call to Action


Discovery


  • How easy is it for our customers to find the information, products and services they need? How can we make things like site search, navigation and content sign-posting more intuitive? Often our web analytics data gives us the biggest clues, showing us on which pages our customers tend to pause, get stuck and exit our site.
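
A sketch of the kind of question we might ask of a raw analytics export - the data shape here is assumed, but most tools can produce something similar:

```typescript
// Given a pageview log, count the pages on which sessions most often end -
// our best clue to where customers get stuck and leave.
type Pageview = { sessionId: string; page: string; timestamp: number };

function exitPageCounts(log: Pageview[]): Map<string, number> {
  // Find the last page each session saw
  const lastPage = new Map<string, Pageview>();
  for (const pv of log) {
    const prev = lastPage.get(pv.sessionId);
    if (!prev || pv.timestamp > prev.timestamp) lastPage.set(pv.sessionId, pv);
  }
  // Count how many sessions ended on each page
  const exits = new Map<string, number>();
  for (const pv of lastPage.values()) {
    exits.set(pv.page, (exits.get(pv.page) ?? 0) + 1);
  }
  return exits;
}
```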


Confidence


  • When a customer arrives on our site, they have some kind of shopping list in mind. How well do we understand it? In what way are we being compared to our rivals? Do we know what triggers our customers’ fears, prompting them to click to another open tab? It could be argued that customers are always looking for some combination of the following six things: 1) value for money, 2) convenience, 3) product quality, 4) brand desirability, 5) credibility and 6) integrity. What matters most? That depends. Our job is to eliminate doubt and inspire every confidence needed, in apt and elegant ways - to defog the windscreen and develop the clearest possible vision of our customers’ motivations. Experimentation is crucial to our understanding.


Distractions


  • One of the biggest problems we face is information overload. It could be said that good design is just as much about taking things away as it is about anything else. However, our websites often need to be many things to many people. We have stories to tell, ideas to share, customers to care for and announcements to make. So removing distractions - reducing ‘friction’ around the things that truly matter - is a process of continuous learning, perfect for experimenting.


Call to Action


  • It’s easy to underestimate the CTA. First we must consider context. CTAs are more effective when presented in the context of what a customer has already seen and done. If we ask a prospective customer to ‘get in touch’ before they’ve viewed any of our product pages, we’re making an assumption about them that may not be true, and could be frustrating. If data capture or driving transactions are important to us, then testing variations of context is vital. Of course, it’s just as important to test the aesthetics of CTA design and the language we use to persuade customers to act. Things like on-page placement, colour, shape, copywriting and use of imagery all make a difference. We’d be wise to test every variable to be sure we’ve mastered the art of the CTA.
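
As a sketch of what testing context might look like - the rule here is hypothetical; the point is that the CTA adapts to what the visitor has already done:

```typescript
// Context-aware CTA: only ask visitors to 'get in touch' once they've
// viewed at least one product page; otherwise point them at the products.
function chooseCta(productPagesViewed: number): { label: string; href: string } {
  if (productPagesViewed === 0) {
    return { label: "Browse our products", href: "/products" };
  }
  return { label: "Get in touch", href: "/contact" };
}

// Both branches are themselves testable: placement, colour, copy and
// imagery can each be a variable within the same experiment.
console.log(chooseCta(0), chooseCta(3));
```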


Do I need a team?


The short answer is, yes - of course!


Whilst the software we use is truly amazing, it still needs expert operators. It needs people who are invested in the success of the programme. People who have an intellectual interest in digital product design and customer experience.

It goes without saying that the team we recruit will depend on a host of factors - budget, chief among them. Nevertheless, thinking about the ideal team shape (or the Target Operating Model, to borrow from the lexicon of management consulting) is useful to help us plan our roadmap of work.



In the model shown above, there are eleven separate roles indicated. That number may be surprising, and there’s every chance that some individuals may successfully wear several hats, and that some of those people will be provided by third parties - copywriting and technical development being good examples. However, whoever we cast in our team, if we want to give ourselves the best chance of working with purpose and achieving real impact, we should be mindful not to compromise on two over-arching principles:


  • The good governance provided by an experienced technical project manager - orchestrating the team with predictable cadence - can make or break the initiative.

  • Whoever we have in our team, the tasks described need to be done in some way, shape or form - and, ideally, by a close-knit team of interdisciplinary practitioners.


Talk is cheap

It is relatively easy to write essays about theory. It’s the practicality of making it happen that matters, right? And the devil really is waiting in every detail. In writing this, I’ve tried to imagine what I would have wanted to read, hear and know before starting my own adventures in web optimisation - knowing, as we all do, that talk is cheap, but hoping to glean useful nudges to jump-start the brain and inspire greater confidence to act. I hope what’s written here offers something of that, for you. Good luck!


Footnote

I've been working closely with Optimizely for many years. I'm not paid to say this, but I can vouch for their products, software and expert services - check them out for yourself.




