Is your work making an impact?
This seems like such an obvious question to ask but answering it is fraught with difficulty. One of the problems is that ‘impact’ can be such a slippery concept. Even setting aside for a moment the question of what counts as an ‘impact’, measuring impact means being clear about such things as:
- Impact for whom? Where? When?
- How much of an impact?
- How long did the impact last?
- Was it worth it? (i.e., did the scale and duration of the impact justify the investment and effort?)
The key to being able to successfully measure impact is to be clear at the outset what it is you are setting out to achieve. In other words, before you cry havoc and engage your cogs of awe, you need to be clear about what success looks like. Nor is it enough for you (and your team) to be clear about what success looks like – you need to write it down so you can refer to it later.
Once you know what it is you want to achieve, you can then work on your theory of change. This is simply a logical diagram that outlines how you are going to achieve the success you’ve clearly outlined.
Let’s say you work in communications and PR and you want to know if your work is making a difference. Drawing a simple logic model will take you from your work in communications to the impact that you want to create. What is useful about this kind of logic model is that it clearly distinguishes between things like activities (what you do); outputs (the things you create); outcomes (the responses those outputs generate); and impact (the success you want to achieve).
These distinctions are critical because they help us resist the understandable urge to look in the wrong places and count the wrong things. Too much communication evaluation has focused on outputs and outcomes precisely because they are easy to see and simple to measure (indeed, many analytics reports count both automatically).
But outputs and outcomes are not impacts. More critically, they may not even be reliable markers on the way to impact. Think about this example: You’ve been asked to create a campaign to get people out of their cars and onto buses. Tapping into your genius for communication, you and your team create a multi-level and multi-channel approach built around a series of catchy messages. Because the campaign’s been carefully crafted to have an attention-grabbing gonzo element, the whole thing goes viral, is covered in the mainstream media, and wins your agency an illustrious award. Time for tea and medals for everyone?
Probably not. No matter how hard and brilliantly you work (‘activities’), no matter how clever the campaign materials are (‘outputs’), and no matter how often links are clicked, stories are read, and your client is interviewed on Paul Henry’s show (‘outcomes’), all of that is for nought if people don’t actually get out of their cars and onto buses.
As Shakespeare said about something else, ‘ambition should be made of sterner stuff’. Which is why it is important to be clear about success before the awards start rolling in and Paul Henry starts calling your client. There’s nothing wrong with building a buzz, but for this campaign you can’t claim to be driving change until people change how they drive.
If you talk to your marketing colleagues about this they will nod as sagely as a tree full of owls. That’s because marketers are taught in Stage One classes that customers don’t go to hardware stores to buy drills but to buy the holes those drills make. They are also taught that there is no point talking about the features of your product (‘a drill with extended battery life’) if you don’t know the benefits the customers want. The same logic applies to communications.
In other words, we need to think carefully (and deeply) about what we’re trying to do before we reach for any kind of measurement tool. The notion of ‘evidence-based’ (or ‘evidence-led’) approaches to communication practice is an attractive one, but we first need to be clear about what we’re trying to measure. There are over 150 measurement tools in the TRASI (Tools and Resources for Assessing Impact) database, but none of them is of any use if we keep looking in the wrong place.
Or as Shakespeare put it, without understanding impact, evaluation of communication effectiveness will remain a ‘tale told by an idiot, full of sound and fury, signifying nothing’.
Research First is PRINZ’s research partner, and specialises in impact measurement, behaviour change, and evidence-based insights.