Practitioners using insights from behavioural sciences to improve service design is one thing, but we’re also now seeing the need for policy makers to be systematic about understanding behaviour. This has a number of dimensions. For example:
- it makes ethnographic observation more important than straightforward qualitative research. In short, you’re more likely to be able to influence behaviour if you gain insight into when and how it actually happens than from focus-group findings in which people tell you what would make them change what they do.
- it makes experimentation and prototyping vital, because we’re not going to rely (are we?) on interventions based on a belief that people are ‘rational, economic’ beings. There’s a clear link here with design thinking.
- in designing interventions, it shifts the focus from ‘why’ to ‘what’: from “why do people do this, and why might they do what we want to encourage?” to “what is it that people actually do? what triggers it, and what are the barriers to people doing what we want to encourage them to do?”
I should write about each of these, really. But not today. Today, I’m reflecting briefly on something Rory Sutherland said at an event last week that I hadn’t previously given much thought to. He suggested that we need new metrics in public services and public policy. He’s right, of course, though there’s a strong case that we’ve got used to having so many metrics that we don’t take enough notice of the ones we do have.
The specific example I remember is about mobile coverage. In the early days of mobile networks, covering 90% of the population when your competitors only reached 75% was a way of differentiating one service from another and creating competitive advantage, in turn driving other companies to increase their coverage. From a public policy point of view, the main metric in use served citizens’ (customers’) interests. But once every operator had 98% coverage, they began competing on price, and innovation focused on how to design and present specific packages of data, calls and so on. That hasn’t driven up service quality; why would it?
Rory suggests that, since then, the main practical differentiator between services (in users’ experience, though not reflected in marketing) has been the reliability – the uninterruptedness – of service outside city and town centres, especially in transit. And although we know this as users, we have no way of comparing networks on it. Competition on that measure would have driven up the standard of service – but no such metric exists. My take is that, well, we COULD invent one, if we were so inclined. And maybe Oftel could have required each network to publish its indicator and include it in all publicity.
Now, how relevant is this to my main theme – that people need to be systematic about understanding behaviour? I appreciate that some people would agree with Rory’s point and say that it’s simply about being more ‘customer focused’. Maybe so. But I’m pretty sure that those who saw ‘percentage coverage’ as a key metric weren’t trying to be anything other than customer-centric. And I’m also pretty sure that if you’re being smart about understanding behaviour, you’re more likely to generate the sort of useful, clever metric Rory suggests, because you’d be focusing on what people actually do.