Behavioural insights, better metrics and @rorysutherland

Practitioners using insights from behavioural sciences to improve service design is one thing, but we’re also now seeing the need for policy makers to be systematic about understanding behaviour. This has a number of dimensions. For example:

  • it makes ethnographic observation more important than straightforward qualitative research. In short, you’re more likely to be able to influence behaviour if you gain insight into when and how it happens, than from focus group findings of people telling you what would make them change what they do.
  • it makes experimentation and prototyping vital, because we’re not going to rely (are we?) on interventions based on a belief that people are ‘rational, economic’ beings. There’s a clear link here with design thinking.
  • in designing interventions, it shifts the focus from ‘why’ to ‘what’: from “why do people do this and why might they do what we want to encourage?” to “what is it that people actually do? what triggers it and what are the barriers to people doing what we want to encourage them to do?”

I should write about each of these, really. But not today. Today, I’m reflecting briefly on something Rory Sutherland said at an event last week that I hadn’t previously given much thought to. He suggested that we need new metrics in public services and public policy. He’s right of course, though there’s a strong case that we’ve got used to having so many metrics that we don’t take enough notice of the ones we do have.

The specific example I remember is about mobile coverage. In the early days of mobile networks, covering 90% of the population when your competitors only reached 75% was a way of differentiating one service from another and creating competitive advantage, in turn driving other companies to increase their coverage. From a public policy point of view, the main metric being used was in citizens’ (customers’) interests. But once every operator had 98% coverage, they began competing on price, and innovation focused on how to design and present specific packages of data/calls, etc. This hasn’t driven up service quality; why would it?

Rory suggests that, since then, the main differentiator between services (in practical experience, though not reflected in marketing) has been the reliability – or uninterruptedness – of service outside of city and town centres, especially in transit. And although we know this as users, we don’t have access to any way of comparing services on this. Competition on this would have driven up the standard of service – but there’s no metric for this. My take on this is that, well, we COULD invent a metric for this, if we were so inclined. And maybe Oftel could have made each network publish its indicator on this and include it in all publicity.

Now, how relevant is this to my main theme – that people need to be systematic about understanding behaviour? I appreciate that some people would agree with Rory’s point, and say that it’s simply about being more ‘customer focused’. Maybe so. But I’m pretty sure that those who saw ‘percentage coverage’ as a key metric weren’t trying to be anything other than customer centric. And I’m also pretty sure that if you’re being smart about understanding behaviour, you’re more likely to generate the sort of useful, clever metric Rory suggests because you’d be focusing on what people actually do.

Using behavioural insights: commissioners need to get sharp

I spent an afternoon last week at Information Is In The Eye Of The Beholder, an event organised by the Design Council’s Behavioural Design Lab. Maybe I’m lazy, but the main thing I took from it was a confirmation of something that has been becoming increasingly clear in my work: that practitioners who apply insights from behavioural sciences in the policy sphere and in public services (and there aren’t many of us) need to challenge our clients to be very clear about the behavioural outcomes they want.

This may seem an obvious point to make, but I don’t think it is. Why? First, there is no established market for public service commissioning of behavioural insights, so commissioners have little experience to draw on. Second, so much policy is designed on the false assumption that people are rational, economic beings that there are likely to be some false assumptions built into any default view of behavioural goals.

Let me illustrate this.

Felicity Algate from the Cabinet Office’s Behavioural Insights Team showed a prototype smartphone app that would enable people to compare their energy bill with the putative bill from competing energy companies, and to switch supplier with one click. In terms of providing feedback, applying mere exposure effect, and reducing goal dilution (a real barrier to switching), it’s great. It’s a real “if only all public services could be like this” moment.

But it does raise plenty of policy issues. For one thing, what are the carbon implications (given that the UK has self-imposed carbon budgets to meet by law)? Well, first, let’s be clear that paying less for energy does not increase energy efficiency, which is improved by reducing the energy input required for a given output, not by making it cheaper! Plenty of switching campaigns make this mistake, which in policy or behavioural terms is a significant error (sorry Calderdale, I had to pick on someone). Second, switching suppliers to pay less could increase energy usage. Third, there is an (arguably underfunded) policy drive to help people reduce household energy bills by reducing usage, through the Green Deal, hence a real risk of lack of clarity on how to manage energy bills, with damaging behavioural outcomes. (I should point out that, at the event, others raised risks around the potential behaviour of companies in a market operating in this way; I’m only thinking here about the behaviour policy-makers encourage in citizens.) In summary, there’s a case to be made that this is great innovation using behavioural insights, to serve a policy goal that isn’t very smart behaviourally.

In contrast, Felicity also illustrated the use of text messages to encourage people to pay fines (already in the public domain). This is surely a good use of behavioural insights, bringing a public service into line with current knowledge, but also incontestable in policy terms. Not only is justice being served, but ‘clients’ stand to save the £200 that would otherwise be added to their fine to pay bailiffs.

My point, in summary: taking behavioural sciences seriously means applying them to policy considerations, not merely applying them once the policy thinking is done. And for those of us working in the field, that means not always accepting a commissioner’s brief ‘as is’.