Build Your Own Time Machine!

[Image: Back to the Future DeLorean time machine]

Okay, not a real time machine. Not in the sense of one that will take you from 1985 to 2015 once you hit 88 mph. That kind of time machine is probably impossible. If you’re holding out for wormholes, you’ll be disappointed to hear that physicists at Caltech, using a partial unification of general relativity with quantum physics, think that:

Any wormhole that allows time travel would collapse as soon as it formed.

But it is possible for you to create a personal time machine in your head. Think of it as a psychological time machine. You can do this because we can all alter how we experience time by the degree of novelty we allow into our lives.

When you experience new things, the experiences seem to take longer. Or as Joshua Foer put it:

Monotony collapses time; novelty unfolds it.

This explains why time seems to speed up as you age (and seemed to stretch on forever when you were young). Or, as Scientific American put it:

When the passage of time is measured by “firsts” (first kiss, first day of school, first family vacation), the lack of new experiences in adulthood, James morosely argues, causes “the days and weeks [to] smooth themselves out…and the years grow hollow and collapse.”

This ‘monotony collapses time; novelty unfolds it’ rule also contributes to the Return Trip Effect, where journeys home from new places seem to be shorter than the ride out.

All of which has important implications for how we live our lives.

  • To make your weekends last longer, build more novelty into them (new places, new activities).
  • If time seems to be rushing by you, find ways to add novelty to your daily routine.
  • And to make meetings somewhat less miserable, introduce an element of surprise.

As others have noted, the real lesson here is that you get to choose how you want to experience time. And there is a wonderful irony here – if you want to slow down the passage of time, start doing more with it!

 


Where Is My Jetpack?

[Image: jetpack]

The recent visit of SingularityU to New Zealand and TV1’s What’s Next? show have both sparked debate about what the future holds and what it will mean for all of us. While we at Research First are excited about the opportunities it may bring, we feel honour-bound to reiterate that no one really knows what the future will hold.

Not the slick presenters at SingularityU, and certainly not the presenters on TV1 (a point made well here). As always, Bob Hoffman captures this better than most.

“I go to a lot of conferences (hey, it’s a living) and I have to listen to a lot of speakers. It’s pretty easy to know pretty quickly who the bullshit artists are. They’re the ones who are telling us what the future is going to be like and warning us that we’d better be ready for it or we’ll be left behind… If you’re a buffoon with a Powerpoint and a bag full of clichés stay away from the present. Nothing to see here. Head for the future – it’s your happy place.”

But Umberto Eco could have been describing all these futurologists when he said:

“while they seem to act as a thermometer, reporting a rise in temperature, they are actually part of the fuel that keeps the furnace going”.

We’ve talked in the past about the work of Philip Tetlock, and we wish the producers at TV1 and SingularityU would spend less time watching TED talks and more time reading Tetlock.

But even a better understanding of the past would help temper these debates about the future. People have been making predictions about what the future will hold, and mostly getting them spectacularly wrong, for hundreds of years. Similarly, every age thinks theirs is an age of disruption.

We like the way Scientific American put it when it noted:

“futurology has always bounced around between common sense, nonsense and a healthy dose of wishful thinking”.

But we prefer the point John Micklethwait and Adrian Wooldridge made in a different context, when they said:

“even if it’s not all bullshit, enough of it is to disqualify the rest”.

Keep that in mind the next time someone drops the word ‘disruption’ into a presentation.


Facts Are Stubborn Things


There really is no polite way to say this: the world is awash with bullshit. We can dress this up in all the ‘post-truth’ and ‘alternative facts’ packaging we want, but it’s much more useful not to mince our words. After all, one of the golden rules of psychology is that ‘to name it is to tame it’. Working in the world of research and policy, we confront this problem every day. We see it in ‘voodoo polls’ that take on the appearance of science without any of the substance. And we see it in ‘experts’ who clearly have no idea about how little they really know.

Facts may be stubborn things but assertions are clearly more of a pushover. As Oliver Wendell Holmes Jnr put it, “certitude is not the test of certainty”. The key is not to dismiss all research and evidence but to be clear about when you can trust it.

Back in the mid-1990s Carl Sagan compiled a ‘Baloney Detection Kit’ that remains a great resource for anyone dealing with claims made from evidence. It also outlines a number of the common rhetorical tricks that get rolled out to shift your attention away from the quality of the research. There is a version of that article on Research First’s website (here: http://www.researchfirst.co.nz/uploads/The%20Fine%20Art%20of%20Baloney%20Detection.pdf), and we have a shorter, easier-to-use checklist version you can use too (here: http://www.researchfirst.co.nz/uploads/files/users/30375/RF_Research_Ninja.pdf).

But fact-checking is only part of the way to hold back the tide of bullshit. As well as being able to check the quality of the evidence used to support an argument, we need to be able to interrogate the quality of thinking that sits behind it. This is the notion of ‘critical thinking’, which is the art of thinking about thinking. What critical thinking often shows us is that the weakest part of an argument is not the facts it ends up with but the assumptions it starts with. There is nothing hard about critical thinking, but it is a skill that needs instruction and practice. Given how often we see the need for this in the organisations we work with, we now offer a range of seminars in how to improve your critical thinking (see a list here: http://www.researchfirst.co.nz/index.php?page=seminars).

It may be unfashionable to say this but I can’t help thinking that the best way to beat back the wave of bullshit washing over the world is by encouraging more students to study the liberal arts and the humanities. These subjects let us see where our current ideas fit within a historical and philosophical context, while training graduates how to balance open-mindedness and scepticism. In this regard, these disciplines aren’t about anything in particular so much as a way to think about everything. And make no mistake, it is very much a ‘discipline’. These subjects teach how to ask difficult questions and mistrust easy answers. They also show how every solution creates new problems. As Seneca said, nobody was ever wise by chance.

If it’s true that many of us living in the West have ‘lost faith in our own future’ (or, in 2017, think we are about to) then it’s time to rethink that future. To do that, what we need are people who can call that future to a higher standard. Which means, now more than ever, we need more Arts graduates.

  • Carl Davidson, Head of Insight at Research First

Where are the BA Graduates When We Need Them?


Reflecting on the last few weeks of this year, it’s easy to see why that apocryphal Ancient Chinese proverb linked ‘living in interesting times’ with being cursed. Today we capture much the same idea when we talk about living with ‘disruption’. And it’s entirely possible that this past year will be seen as the start of some new ‘Age of Disruption’. This has been a year when many of the old certainties (along with the people who championed them) left us for good.

Alongside the major geopolitical events that blindsided almost everyone, events across the Middle East keep showing us that we are a long way from ‘the end of history’. Even the internet, which was once promoted as a tool for building communities and creating peace, seems to have become what the blog The Daily Banter calls ‘a disinformation dystopia’; a place that has fractured into ‘post-truth’ echo chambers where “lies, conspiracy theories, and general bullshit” thrive.

Add in what is happening with housing, social mobility, and economic inequality, and it’s no wonder that The Financial Times opined that “the west is losing faith in its own future”. Yogi Berra was right when he said “the future ain’t what it used to be” but the present isn’t making much sense either.

It’s precisely times like these when the liberal arts and the humanities show their true value. They do this by allowing us to engage in questions about ‘why?’ rather than simply ‘how?’. We live in a time of unprecedented technical knowledge but it’s hard not to think that people and communities are often left behind in these conversations. Martin Luther King put this much better when he said “our scientific power has outrun our spiritual power, we have guided missiles and misguided men”. His notion of ‘misguided’ fits superbly with our age of disruption. It reminds us that we need to talk about ends as well as means, and to subject both to the fierce light of critical thinking: the kind of conversation where we can separate matters of judgement from matters of fact. In the process, we can start to talk about how we, collectively, whether as a city or a country, shape change rather than being shaped by it.

This is what the liberal arts and the humanities offer. They let us see where our current ideas fit within a historical and philosophical context, and they train graduates how to balance open-mindedness and scepticism. At the heart of these disciplines sits the question ‘what does it mean to be human?’. In this regard, these disciplines aren’t about anything in particular so much as a way to think about everything. And make no mistake, it is very much a ‘discipline’. These subjects teach how to ask difficult questions and mistrust easy answers. They also show how every solution creates new problems. As Seneca said, nobody was ever wise by chance.

If it’s true that many of us living in the West have ‘lost faith in our own future’ then it’s time to rethink that future. To do that, what we need are people who can call that future to a higher standard. Which means, now more than ever, we need our Arts graduates to step up. Academics use the term ‘zeitgeist’ to capture the notion of the spirit of a particular age. The zeitgeist is the common core of assumptions and beliefs that people living at a particular time share. It’s this zeitgeist that shapes our actions, or leads to inaction on particular issues. But the important point about the zeitgeist is that it’s something most of us think with but rarely about. The exception here is in the liberal arts and humanities. It’s become somewhat unfashionable to talk about ‘deconstruction’ but this is precisely what thinking about the zeitgeist involves. By pulling the zeitgeist apart we can make the assumptions and values underneath it apparent, and in the process expose whose interests those ideas really serve. Interestingly, facing disruption, the business world has become hungry for ‘contrarian thinking’ without recognising that the liberal arts and the humanities eat it for breakfast.

– Carl Davidson is the Head of Insight at Research First Ltd


How Did the Polls Underestimate Trump?


The day after Donald Trump’s US Presidential Election victory The Dominion Post ran a headline saying ‘WTF’. It left off the question mark so as not to cause offence (and asked us to believe that it really meant ‘Why Trump Flourished’). But the question lingers regardless.

For those of us in the research industry, the question we have been asked most often since then – and the question we have asked ourselves most often – is ‘how did the polls get it so wrong?’. It’s a good question. And coming hot on the heels of the polls’ failure to predict Brexit, an important one.

People have tried to answer this question in a number of ways, and each of them tells us something a little different about the nature of polling, the research industry, and voters in general.

The first response might be called the ‘divide and conquer’ argument. This is the one that says not all the polls got the election result wrong. The USC/LA Times poll, for instance, tracked a building wave of support for Trump and predicted his victory a week out. Similarly, the team at Columbia University and Microsoft Research also predicted Trump’s victory. But what this argument doesn’t do is explain why most polls clearly got it wrong (and even a broken clock is right twice a day).

There is a variation on this argument that we might call ‘divide and conquer 2.0’. This is the argument that says people outside of the industry misunderstood what the polls actually meant. The best example here might be Nate Silver’s FiveThirtyEight.com. Before the election 538 gave Trump about a thirty percent chance of winning. To most people, that sounds like statistical shorthand for ‘no chance’. But to statisticians, it means that if we ran the election ten times, Trump would win three of them. In other words, Silver was saying all along that Trump could win; it was just more likely that Hillary would. As Nassim Nicholas Taleb might put it, the problem here is that non-specialists were ‘fooled by randomness’. So the problem isn’t with the pollsters but the pundits.
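
If you want a feel for what ‘a thirty percent chance’ means in practice, a few lines of Python make the point (a toy simulation, not Silver’s actual model; the 0.3 probability is simply the figure quoted above):

    import random

    random.seed(42)
    trials = 10_000
    # Each trial is a hypothetical election the 'underdog' wins with probability 0.3
    underdog_wins = sum(random.random() < 0.3 for _ in range(trials))

    print(f"Underdog won {underdog_wins / trials:.0%} of {trials} simulated elections")
    # Prints roughly 30%: an outcome you should expect to see regularly,
    # not shorthand for 'no chance'.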

The next argument might be called ‘duck and run’. This is the argument that says the fault lies with the voters themselves because they probably misrepresented their intentions. Pollsters typically first ask people if they intend to vote, and only then who they’re going to vote for. But, of course, there’s no guarantee the answer to either is accurate. This seems to be the explanation that David Farrar (who is one of New Zealand’s most thoughtful and conscientious pollsters) reached for when approached for comment. Given how many Americans didn’t vote in the election, expect to hear this argument often.

A variation on this ‘duck and run’ argument is that polls are at their least effective where a tight race is being run. In this election, nearly 120 million votes were cast but the difference between the two candidates was only about 200,000 (less than one fifth of one percent). It could be that no polling method is sufficiently precise to work under these conditions. If you want to try this line of argument in the office, award yourself a bonus point for referring to the ‘bias-variance dilemma’.
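
Some back-of-the-envelope arithmetic shows how far below polling’s resolution a race this tight sits (a sketch assuming simple random sampling and a typical sample of 1,000 voters; the vote totals are the ones quoted above):

    import math

    n = 1_000   # typical national poll sample size
    p = 0.5     # worst case for sampling variance
    # Standard 95% margin of error for a simple random sample
    moe = 1.96 * math.sqrt(p * (1 - p) / n)

    actual_margin = 200_000 / 120_000_000

    print(f"Poll margin of error: +/- {moe:.1%}")          # about +/- 3.1%
    print(f"Gap between candidates: {actual_margin:.2%}")  # about 0.17%
    # The sampling error alone is more than ten times the gap a poll would
    # need to detect, before any non-sampling bias is even considered.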

But I think all of these arguments are a kind of special pleading. Worse than that, much of what the industry is now saying looks like classic hindsight bias to me. This is also known as the ‘I-Knew-It-All-Along Effect’, which describes the tendency, after something has happened, to see the event as having been inevitable (despite not actually predicting it). While it’s easy to be wise after the fact, the point of polling is to provide foresight, not heroic hindsight.

And no matter how well intentioned any of these arguments might be, it’s hard not to think we’ve seen them all before. Philip Tetlock’s masterful Expert Political Judgment: How Good Is It? reports a 20-year research project tracking predictions made by a collection of experts. These predictions were spectacularly wrong but even more dazzling was the experts’ ability to explain away their failures. They did this by some combination of arguing that their predictions, while wrong, were such a ‘near miss’ they shouldn’t count as failure; that they made ‘the right mistake’; or that something ‘exceptional’ happened to spoil their lovely models (think ‘black swans’ or ‘unknown unknowns’). In other words, these are the same arguments that we’re now seeing the polling industry roll out to explain what happened with this election.

For me, all of these arguments miss the point and distract us from the real answer. The pollsters (mostly) got the election wrong because the future – despite all our clever models and data analytics – is fundamentally uncertain. Our society loves polls because we crave certainty. It’s the same reason we fall for the Cardinal Bias, the tendency to place more weight on what can be counted than on what can’t be. But certainty will always remain out of reach. What Trump’s victory really teaches us is that all of us should spend less time reading polls and more time reading Pliny the Elder. It was Pliny, after all, who told us ‘the only certainty is that nothing is certain’.


When To Accentuate The Positive?


Recently one of our researchers presented at a conference for PR and communications professionals and highlighted the importance of ‘loss aversion’ in human behaviour. This describes how all of our brains are wired to experience losses much more acutely than gains. As a result, our researcher suggested that, when they want to influence behaviour, communications professionals should talk about the costs of not doing something rather than the benefits of doing it.

In the question time following that presentation, one of the conference participants noted that this idea flies in the face of conventional communication practice, which places the emphasis on the positive message. So which is it?

Fortunately, social science has a clear answer. According to Peter Salovey, it depends on whether the new behaviour we want to promote is perceived as risky or safe. If the person we’re talking to considers the new behaviour to be safe, the key is to emphasise all the good things that will happen if they change to it.

But where they believe the new behaviour is a risk, the challenge is to overcome the status quo bias. To do this, we need to emphasise the bad things that will happen if they don’t change. This makes taking that risk more appealing, because of the threat of that loss.

So the lesson seems to be to accentuate the positive where the audience sees safety, and emphasise the negative where they fear risk.
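
For readers who like their rules of thumb explicit, Salovey’s advice boils down to a single conditional (a toy encoding; the function name and examples are ours, not Salovey’s):

    # Toy encoding of Salovey's framing rule: match the message frame to how
    # the audience perceives the new behaviour.
    def choose_frame(perceived_as_risky: bool) -> str:
        if perceived_as_risky:
            return "loss frame: stress what they lose by not changing"
        return "gain frame: stress what they gain by changing"

    print(choose_frame(perceived_as_risky=False))  # e.g. trying a new bus route
    print(choose_frame(perceived_as_risky=True))   # e.g. a health screening test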

 


Shakespeare, on Impact Measurement


Is your work making an impact?

This seems like such an obvious question to ask but answering it is fraught with difficulty. One of the problems is that ‘impact’ can be such a slippery concept. Even setting aside for a moment the question of what counts as an ‘impact’, measuring impact means being clear about such things as:

  • Impact for whom? Where? When?
  • How much of an impact?
  • How long did the impact last?
  • Was it worth it? (i.e., did the scale and duration of the impact justify the investment and effort?)

The key to being able to successfully measure impact is to be clear at the outset what it is you are setting out to achieve. In other words, before you cry havoc and engage your cogs of awe, you need to be clear about what success looks like. Nor is it enough for you (and your team) to be clear about what success looks like – you need to write it down so you can refer to it later.

Once you know what it is you want to achieve, you can then work on your theory of change. This is simply a logical diagram that outlines how you are going to achieve the success you’ve clearly outlined.

Let’s say you work in communications and PR and you want to know if your work is making a difference. Drawing a simple logic model will take you from your work in communications to the impact that you want to create. What is useful about this kind of logic model is that it clearly distinguishes between things like activities (what you do); outputs (the things you create); outcomes (the responses you generate); and impact (the success you want to achieve):

[Figure: logic model running from activities to outputs to outcomes to impact]

These distinctions are critical because they help us resist the understandable urge to look in the wrong places and count the wrong things. Too much communication evaluation has focused on outputs and outcomes precisely because they are easy to see and simple to measure (indeed, for many analytical reports both are automated).

But outputs and outcomes are not impacts. More critically, they may not even be reliable markers on the way to impact. Think about this example: You’ve been asked to create a campaign to get people out of their cars and onto buses. Tapping into your genius for communication, you and your team create a multi-level and multi-channel approach built around a series of catchy messages. Because the campaign’s been carefully crafted to have an attention-grabbing gonzo element, the whole thing goes viral, is covered in the mainstream media, and wins your agency an illustrious award. Time for tea and medals for everyone?

Probably not. No matter how hard and brilliantly you work (‘activities’), no matter how clever the campaign materials are (‘outputs’), and no matter how often links are clicked, stories are read, and your client is interviewed on Paul Henry’s show (‘outcomes’), all of that is for nought if people don’t actually get out of their cars and onto buses.
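
To make the distinction concrete, here is the bus campaign mapped onto the logic model (a toy sketch in Python; the stage labels follow the model above, and the examples are ours):

    # The bus campaign, stage by stage. Only the last entry counts as impact.
    logic_model = {
        "activities": "design and run a multi-level, multi-channel campaign",
        "outputs":    "clever campaign materials, viral content, media coverage",
        "outcomes":   "links clicked, stories read, client interviewed on TV",
        "impact":     "people actually get out of their cars and onto buses",
    }

    def campaign_succeeded(bus_trips_before: int, bus_trips_after: int) -> bool:
        """Success is judged against impact, not activities, outputs or outcomes."""
        return bus_trips_after > bus_trips_before

    for stage, example in logic_model.items():
        print(f"{stage:>10}: {example}")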

As Shakespeare said about something else, ‘ambition should be made of sterner stuff’. Which is why it is important to be clear about success before the awards start rolling in and Paul Henry starts calling your client. There’s nothing wrong with building a buzz, but for this campaign you can’t claim to be driving change until people change how they drive.

If you talk to your marketing colleagues about this they will nod as sagely as a tree full of owls. That’s because marketers are taught in Stage One classes that customers don’t go to hardware stores to buy drills but to buy the holes those drills make. They are also taught that there is no point talking about the features of your product (‘a drill with extended battery life’) if you don’t know the benefits the customers want. The same logic applies to communications.

In other words, we need to think carefully (and deeply) about what we’re trying to do before we reach for any kind of measurement tool. The notion of ‘evidence-based’ (or ‘evidence-led’) approaches to communication practice is an attractive one, but we first need to be clear about what we’re trying to measure with evidence. There are over 150 measurement tools in the TRASI (Tools and Resources for Assessing Impact) database but none of them are any use if we keep looking in the wrong place.

Or as Shakespeare put it, without understanding impact, evaluation of communication effectiveness will remain a ‘tale told by an idiot, full of sound and fury, signifying nothing’.

 

Research First is PRINZ’s research partner, and specialises in impact measurement, behaviour change, and evidence-based insights
