Social Exchange Theory and Survey Design

‘Social exchange theory’ is a central strand of sociological and social-psychological theorising about how relationships are created and maintained. Most often associated with the work of the (then) University of Washington sociologists Richard Emerson[1] and Karen Cook[2], social exchange theory holds that exchanges constitute the fundamental form of human interaction, and that interaction patterns are shaped by power relationships and the corresponding efforts to achieve balance in exchange relations. In other words, social exchange theory tells us that the actions of individuals are motivated by the returns those actions are expected to bring (and, in fact, usually do bring). People engage in an activity because of the rewards they hope to reap; all activities carry certain costs; and people try to keep those costs below the rewards they hope to receive.

Social exchange theory is important in research because, as Don Dillman has noted, interviews (and especially surveys) are a special case of social exchange. Thinking about surveys in terms of social exchange suggests three things researchers can do to maximise survey responses:

  1. Minimise the costs of responding;
  2. Maximise the rewards of responding; and
  3. Establish trust that those rewards will be delivered.

In surveying, the largest ‘reward’ is letting people know that they are part of a specially selected sample and that their opinions and responses are valued. People, after all, like to feel valued. Some other ways that surveys ‘reward’ participants are by:

  1. Showing positive regard to the respondent;
  2. Providing verbal appreciation to the respondent; and
  3. Making the questionnaire interesting.

Equally, surveys can reduce the costs of responding by:

  1. Making the task appear brief;
  2. Reducing the effort required (through such things as closed-ended questions); and
  3. Eliminating any chance of embarrassment.

A further technique suggested by social exchange theory is to include a small gift with each questionnaire sent out. This can be in addition to an incentive draw for those who complete the questionnaire, but social exchange theory predicts that a small gift with every survey helps seed the exchange researchers are looking for. We also know from other psychological research that small immediate rewards are often preferred over potentially larger long-term rewards[3]. The sketch below shows why.
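To see how that preference works, here is a minimal sketch of the quasi-hyperbolic (‘beta-delta’) discounting model that O’Donoghue and Rabin work with. The dollar amounts, draw odds, and parameter values below are illustrative assumptions, not estimates from any study:

```python
# A minimal sketch of quasi-hyperbolic ('beta-delta') discounting.
# beta < 1 applies an extra penalty to anything that is not immediate;
# delta is the ordinary per-period discount factor. All numbers are
# illustrative assumptions.

def present_value(amount, delay_weeks, beta=0.7, delta=0.99):
    """Perceived value today of a reward received after delay_weeks."""
    if delay_weeks == 0:
        return amount          # immediate rewards escape the beta penalty
    return beta * (delta ** delay_weeks) * amount

# A $5 gift enclosed with the questionnaire, enjoyed immediately:
gift_now = present_value(5, delay_weeks=0)

# A hypothetical 1-in-50 chance of winning a $200 draw, paid in 8 weeks:
expected_draw = (1 / 50) * 200                  # expected value = $4
draw_later = present_value(expected_draw, delay_weeks=8)

print(f"Gift now:   ${gift_now:.2f}")           # $5.00
print(f"Draw later: ${draw_later:.2f}")         # ~$2.58
```

Under these assumed numbers, the immediate gift feels worth roughly twice the prize draw, even though the two have comparable expected values, which is exactly the pattern the theory predicts.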

 

Notes:

[1] Emerson, Richard (1976) “Social Exchange Theory”, Annual Review of Sociology, 2, 335-362.

[2] Cook, K. (ed.) (1987) Social Exchange Theory. Sage Publications, Newbury Park, CA.

[3] This ‘paradox’ of people choosing short-term gratification over longer-term rewards is also used to explain the rise of obesity and the lack of savings for old age. See O’Donoghue, T. and Rabin, M. (2000) ‘The economics of immediate gratification’, Journal of Behavioral Decision Making, 13(2), 233-250.

 

 


Why You Won’t Remember This Blog

If you’re in the business of influencing others (to buy something, to believe something, or to act differently), then it’s critical that you understand how the human brain really works. One of the important lessons emerging from the social sciences is that our intuitions about ourselves and others are often not as accurate or as insightful as we think.

While it feels like you’re reasoning your way through your life, that’s rarely the case. Instead, our brains are wired to take shortcuts, to be influenced by how things are framed, and to be profoundly shaped by what others are doing. When we talk about knowing something, we really mean experiencing what the neurologist Robert Burton called a “feeling of knowing”.

Part of the puzzle here is that we all rely on our memories to construct our ideas of what the future might look like. And while it feels like our memories are up to the task, they’re not really designed for it. The truth is, we don’t really recall memories so much as reconstruct them from traces distributed throughout the brain. This makes memories what psychologists call ‘plastic’, meaning they can be shaped and reshaped as we replay them.

There is a long list of memory biases highlighting just how unreliable our memories really are, but the problem with this line of argument is that it starts from the assumption that our memories should be accurate. That is, the emphasis on remembering obscures the role that forgetting plays for our brains. We tend to think of forgetting as a failure to remember, but this misunderstands what our brains evolved to do. The reason you have a memory at all is to help you survive in an uncertain world. Memory exists to optimise decision-making, not to accurately capture and reproduce information.

As the psychologists Blake Richards and Paul Frankland have noted, this means forgetting is just as important to a healthy brain as remembering. This is supported by the fact that we tend to forget memories about what happened to us (known as ‘episodic memories’) more quickly than we forget memories about general knowledge (known as ‘semantic memories’). The reality is that those episodic memories might just not be very useful from a survival perspective. Which also means that not being able to remember how you know someone might be a feature of our brains, not a bug. It also explains why you’re unlikely to remember this blog, and why we won’t be too offended when you don’t[i].

 

Endnotes

[i] To learn more about the fallibility of memory, have a look at the work of UCLA’s Bjork Learning and Forgetting Lab (https://bjorklab.psych.ucla.edu/people/). Another great place to start is with Elizabeth and Robert Bjork’s (1996) Memory, Academic Press, San Diego, or Daniel Schacter’s (2001) The Seven Sins of Memory, Houghton Mifflin. The point about memory being a decision-making tool to help you survive in an uncertain world comes from Blake Richards and Paul Frankland’s 2017 paper “The Persistence and Transience of Memory” in Neuron, 94(6), 1071-1084. The point about forgetting being a feature and not a bug is from Angela Chen’s (2017) “The purpose of memory might not be to record everything”, The Verge, June 21, 2017.

 


A Short List of Essential Reading for a World Full of Fake News

Charlie Jones once said that “you are the same today as you will be in five years, except for two things: the people you meet and the books you read. Choose both carefully”. At Research First we have always believed that critical thinking skills are essential to both thrive in business and fully participate in civic life.

If anything, those skills have become even more important given the batshit crazy world we find ourselves in today. Now, seemingly more than ever, the ability to think critically about how arguments are constructed, supported, and presented is an essential antidote to a world awash with bullshit (in Harry Frankfurt’s sense of the word).

Three of our favourite books for helping develop that antidote are Darrell Huff’s How to Lie with Statistics; John Allen Paulos’s A Mathematician Reads the Newspaper; and Cynthia Crossen’s Tainted Truth.

The first of these – How to Lie with Statistics – is now called ‘a classic’, but don’t let that put you off. As the entry on Wikipedia puts it, Huff’s book is “a brief, breezy, illustrated volume outlining common errors, both intentional and unintentional, associated with the interpretation of statistics”. One reason Huff’s book is so easy to read and follow is that he was a journalist rather than a statistician. And, obviously, the book isn’t really about how to lie with statistics but about how to know when you’re being lied to with statistics. If you work with numbers and statistics (and who doesn’t?), you’ll love Huff’s book.

We’d be tempted to say ‘keep a copy handy whenever you read the newspaper’ if John Allen Paulos hadn’t beaten us to it with his book. A Mathematician Reads the Newspaper is just as ‘breezy’ as Huff’s and just as insightful. It shows how maths and numbers are central to many of the articles we read every day (Paulos takes stories that don’t seem to involve maths at all and – as Amazon puts it – ‘demonstrates how a lack of mathematical knowledge can hinder our understanding of them’). In the process, he shows how maths and statistics are often abused in support of bullshit and bluff.

Tainted Truth rounds out this collection by showing how sponsored studies have become a powerful tool of persuasion. These studies look like real science to the casual observer, but they manipulate the truth to reflect the intentions of their sponsors. Tainted Truth also shows that how an argument is presented and communicated can have a significant effect on how persuasive we find it (regardless of its real merits). One of the quotes we love in it says that “everybody gets so much information all day long that they lose their common sense”. All three books here help us retain (or reclaim?) that common sense.

And, God knows, now more than ever, that’s something we all need. Check out these three books and let’s make critical thinking great again!

 

 

 


Cambridge Analytica and the Limits of Big Data

In case you missed it, the Cambridge Analytica scandal revolved around an innocuous-looking survey posted on Facebook that deceitfully collected users’ details. The scandal caught the attention of the world because Facebook’s “privacy” settings enabled Cambridge Analytica to also harvest the details of the friends of anyone who completed the survey. This meant Cambridge Analytica collected data from over 50 million users (and maybe as many as 87 million) to help in the company’s stated aim of ‘changing audience behaviour’.

Given that most of these millions of users had not given permission for their data to be accessed, most of the coverage of this scandal has focused on the ethical and legal transgressions involved. But other commentators have noted that the scandal provides a powerful lens into what we have given up in order to enjoy the wonders of social media. Think of it as a contemporary Faustian bargain, just one where Mephistopheles is disguised in full hipster mode and Faust forgot to read the fine print. No-one has run this argument more persuasively or entertainingly than Bob Hoffman (of Ad Contrarian fame). To give you a taste, in a recent blog Bob wrote:

We used to be able to dismiss Zuckerberg and his gang as greedy, silly brats with no perspective and no ethical compass. But he is far more dangerous than that.  

As important as these debates are, there is another side to the Cambridge Analytica story that needs to be better known. Long before it started its Facebook deceit, Cambridge Analytica boasted that “data drives all we do”. The company promised to be able to form ‘psychographic’ profiles from these data, and to use those psychological insights to better influence opinions, preferences, and behaviour. In this regard, Cambridge Analytica was hitching a ride on the wave of hype created around ‘big data’ and ‘data analytics’.

Because of the association with Steve Bannon and the Trump campaign, it’s easy to think that Cambridge Analytica delivered on that promise. But the evidence clearly points the other way. The company worked on campaigns for both Ted Cruz and Ben Carson (neither of which ended well), and was fired from Trump’s campaign too. According to CBS News, Trump’s campaign ended its relationship with Cambridge Analytica because the data it was supplying were of “suspect quality and value”. In other words, even with all those stolen data, Cambridge Analytica wasn’t able to change much audience behaviour.

Worse still, investigations done by The Atlantic, and elsewhere, suggest that not even Cambridge Analytica believed the data could do what they were telling their clients it could. An undercover investigation by Channel 4 News showed staff from Cambridge Analytica promoting the virtues of blackmail and bribery over bits and bytes to change behaviour. As The Atlantic noted:

If the consulting firm’s “psychographic” modeling was really the key to winning campaigns, why would it even flirt with sketchier skullduggery?

As The Atlantic also notes, Cambridge Analytica found so many willing buyers for its psychographic claptrap because we’ve all become suckers for what Bob Hoffman calls “buffoon[s] with a Powerpoint and a bag full of clichés”. After all, it wasn’t so long ago the world was agog at how Obama’s campaign used data to drive microtargeting to influence voters.

All of which means the other important lesson in the Cambridge Analytica story is this: we should always remain skeptical about revolutionary techniques claiming to wring unique insights from data.

At Research First we love data, and we like our data big, but what matters is the quality of the science not the size of the data file. While we follow Celia Green’s counsel that “the way to do research is to attack the facts at the point of greatest astonishment”, we are also acutely conscious of Heisenberg’s warning that:

What we observe is not nature herself, but nature exposed to our method of questioning.

What the Cambridge Analytica scandal reminds us is that with big data, as with so much in research, we can be our own worst enemies. The real problem is not that the right answers are too hard but that the wrong answers are often too easy.

 

 

 

 


Paved With Good Intentions

We’ve all heard the aphorism that ‘the road to hell is paved with good intentions’. It warns us that, in trying to make something better, we often end up making it worse. In many ways this is another warning about the hubris that comes with believing we understand how the world around us truly works. In reality, science is clear that our understanding is always incomplete and open to revision. This is why George Box famously observed that while some of our models are ‘useful’, they are all ultimately ‘wrong’.

One beautiful illustration of this is provided by ‘Braess’s Paradox’. This began life as a mathematical model of traffic congestion. It shows that adding extra capacity to a network, when users choose their routes selfishly, can reduce the overall performance of the network. In other words, attempting to improve congestion by adding more roads can actually make the congestion worse. Interestingly, this also suggests that you could reduce congestion by removing roads (it’s not called a ‘paradox’ for nothing). Subsequent experience in a number of large cities across the world demonstrates that this is exactly what happens, and the New York Times has published a nice summary of those cases. The sketch below works through the standard example.
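For the curious, here is a minimal sketch of the textbook four-node network usually used to introduce the paradox. The travel-time functions and the 4,000-driver figure are the standard illustrative assumptions, not measurements from any real road network:

```python
# The textbook Braess network: 4000 drivers travel from S to E.
# Two links are congestion-sensitive (T/100 minutes when T drivers use them)
# and two have a fixed 45-minute travel time. All numbers are the standard
# illustrative assumptions.

DRIVERS = 4000
FIXED = 45                      # minutes on a fixed-time link

def congested(t):
    """Minutes on a congestion-sensitive link carrying t drivers."""
    return t / 100

# Without the shortcut, the two routes (S-A-E and S-B-E) are symmetric,
# so at equilibrium the drivers split evenly between them.
per_route = DRIVERS / 2
time_without = congested(per_route) + FIXED              # 20 + 45 = 65 minutes

# Add a zero-cost shortcut from A to B. Now S-A-B-E is never slower than
# either original route (T/100 <= 45 whenever T <= 4000), so every driver
# takes it, and both congestion-sensitive links carry all 4000 cars.
time_with = congested(DRIVERS) + 0 + congested(DRIVERS)  # 40 + 0 + 40 = 80

print(f"Equilibrium journey without shortcut: {time_without:.0f} minutes")
print(f"Equilibrium journey with shortcut:    {time_with:.0f} minutes")
```

The shortcut is individually rational (switching back to either original route would take 85 minutes once everyone else is on the shortcut) yet collectively ruinous, lifting every journey from 65 to 80 minutes. Closing the shortcut restores the faster split.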

But we said that Braess’s Paradox ‘began life’ as a mathematical model because it has subsequently become a description of all those cases where attempts to improve a situation end up making it worse. In particular, those cases where individuals’ choices leave the group worse off. In this regard, it’s a great scientific demonstration of the good old-fashioned ‘tragedy of the commons’ problem.

Braess’s Paradox (and the Tragedy of the Commons) reminds us that we rarely know as much as we think we do. Research published this month in the Harvard Business Review notes that this is a particular problem for beginners. The authors of that study, Carmen Sanchez and David Dunning, call this “the beginner’s bubble”, reflecting how confidence builds much faster than competence when learning a new task (and, just in case you are wondering, yes, that is the same David Dunning who gave his name to the Dunning-Kruger Effect).

As with most of the ways our brains play tricks on us, these biases and effects are much easier to see in others than in ourselves. You’re unlikely to spot them on your own, or if you’ve been trained to see criticism as a barometer of failure. Yet we’d all be better off if we embraced Voltaire’s truism that “doubt may be an uncomfortable position but certainty is a ridiculous one”.

 


Time to Stop Fooling Yourself

One of my favourite quotes comes from the great physicist Richard Feynman, who cautioned his colleagues:

The first principle is that you must not fool yourself, and you are the easiest person to fool.

Feynman was talking to other scientists about the practice of science, but his caution is just as relevant for communication practitioners. To understand why, consider the case of Donald Trump.

If you’re middle class and educated, you might still be struggling to understand how Trump became President. Or, like many of my own friends, you might still be enthralled by the daily updates of the fear and loathing coming out of the White House, eagerly anticipating the train wreck that is surely imminent.

But the more I think about this, the more I think about what Feynman said. And the surer I am that there is a larger lesson to be learned here.

Let’s start with the grim facts: according to the Pew Research Center, most of those people who voted for Trump in 2016 would likely vote for him again. As a result, Pew is convinced that, if the election were held again tomorrow, Trump would win again.

Before I go on, pause for a moment and consider your reaction to that.

Because here’s the important part of what I’m going to say: that reaction provides a powerful insight into how we are all wired to fool ourselves.

The good news is that, at some level, this is not really our fault. Evolution has wired all of us to be ‘cognitive misers’. That is, we all think about as little as we can get away with. This makes more sense when you realise that while our brains take up about 2% of our body weight, they burn at least 20% of the fuel that goes into our bodies. As a result, our brains take shortcuts wherever and whenever they can. In a very real way, our brains prefer processing fluency over accuracy.

This means that what we believe is ‘thinking’ is really the application of the same old well-worn ‘heuristics and biases’.  It feels like we’re thinking but we’re mostly going through the motions. This is because those biases and heuristics are things we think with but only rarely about.

I think we need to spend more time in 2018 thinking about them. Now, there are too many biases to discuss here (you can read a fuller list at: http://researchfirst.co.nz/wordpress/wp-content/uploads/2017/06/RF_2017_Annual_v15P2-WEB.pdf) but I want to briefly outline three of the big ones. In the process, I want to show you why none of us is really as smart as we think we are.

Now you might think all of this explains what’s wrong with Trump and the confederacy of dunces he’s gathered around him but I’m actually talking about you. And me. And everyone else.

The first of the biases we need to confront, and one of the most powerful shortcuts our brains use to make sense of the world, is called confirmation bias. ‘Confirmation bias’ is, in David McRaney’s words:

A filter through which you see a reality that matches your expectations.

In other words, it’s the tendency to look for confirmation for our pre-existing ideas while ignoring any evidence that might disprove those ideas.

The evidence suggests we process confirming ideas about twice as fast as dis-confirming ones. As a result, one of the most common reasons we all make mistakes is not because the right answers are too hard but because the wrong answers are too easy.

There isn’t room here to demonstrate this bias to you, but try this simple question: when did you last change your mind about anything really important?

One of the reasons it’s hard to spot ourselves using confirmation bias is because our brains tidy away our mistakes. The second of the biases I want to talk about here, ‘hindsight bias’, is a common way we do that.

Hindsight bias, also known as the ‘I-knew-it-all-along’ effect, is the inclination, after an event has occurred, to see the event as having been predictable. We do this all the time, despite having failed to predict those events beforehand. Think of this bias as a way of editing your memory to fit the current situation. It is particularly pernicious because it leads us to genuinely believe we knew the outcome all along, even though we didn’t.

Where hindsight bias is not enough to expunge error, your brain fails to record your own mistakes thanks to the third bias, known as the ‘fundamental attribution error’. This leads us to see our own failings as being caused by external factors beyond our control, while seeing the failings of others as a reflection of their character. In other words, when we make mistakes, we always have a good reason (we were tired, rushed, not really thinking), but when other people do the same thing, it’s because there is something wrong with them. Tell me again how you feel about the proposition that Trump voters would re-elect him tomorrow?

I said I’d talk about three biases but there’s a fourth that goes hand in hand with these three. If you’re like most people, then you probably don’t think you’re like most people. As a result, you might be able to appreciate these biases intellectually, and to spot them in others. But it’s unlikely that you think they apply to you.

Well, guess what? That itself is a bias, known as the ‘bias blind spot’. It occurs when we can recognise the impact of biases on the judgement of others but fail to see the impact of those same biases on our own judgement. The fact that we are often cognitively blind is no surprise, but the extent to which we can be blind to our own blindness should be.

This is why Feynman’s caution is so powerful and so important. For all of us, our ability to make sense of the world rests on what Daniel Kahneman called “an almost unlimited ability to ignore our ignorance”. Your challenge for 2018 is to work hard to notice and arrest that impulse. As the old adage puts it: fool me once, shame on you; fool me twice, shame on me.

– Carl Davidson, Research First


Free to Choose?

If you’re like most people, you probably think you’re good at making decisions and pretty much always know what you want (and why). The evidence from psychology, on the other hand, points the other way.

For instance, Barry Schwartz’s The Paradox of Choice shows that the more choices we are faced with when making a decision, the slower we are to make that decision and the more unhappy we are with our choice (if you haven’t read the book, you can watch Schwartz’s TED talk instead).

Because Schwartz’s work flies in the face of common sense and classical economics (where more choice is always a good thing), his research has attracted its fair share of critics. In addition, attempts to replicate the jam experiment at the heart of Schwartz’s argument have not been an unqualified success. However, the notion that more choices slow down decision making has been demonstrated many times. It forms the basis of Hick’s Law, which states there is a logarithmic relationship between the number of options presented to someone and their reaction time.

Hick’s Law is often used when designing control systems (‘user interfaces’) and, more recently, websites. Just like Schwartz’s argument, Hick’s Law tells us that the key when presenting options is not to remove choice but to reduce it. The sketch below shows the relationship at work.
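As a rough illustration, here is a minimal sketch of Hick’s Law in code. The coefficients a and b are illustrative assumptions; in real interface work they are fitted to observed reaction times:

```python
# A minimal sketch of Hick's Law: reaction time grows with the logarithm
# of the number of equally likely options. The coefficients a and b are
# illustrative assumptions, not fitted values.

import math

def hicks_law(n_options, a=0.2, b=0.15):
    """Predicted reaction time in seconds when choosing among n options.
    The +1 reflects the extra uncertainty of whether to respond at all."""
    return a + b * math.log2(n_options + 1)

for n in (1, 3, 7, 15, 31):
    print(f"{n:>2} options -> {hicks_law(n):.2f} s")
```

Because the relationship is logarithmic, each doubling of the options adds only a constant increment to decision time: under these assumed coefficients, trimming a menu from 31 items to 7 saves as much deciding time as trimming it from 7 to 1.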

But don’t think having fewer choices automatically means greater agency in our decision-making: one of the most useful insights from behavioural economics is that people don’t respond to choices so much as to how those choices are framed. Clever marketers know this, and so frame choices in ways that silently influence your decision-making.

The most famous of these is the so-called decoy price: the use of high-priced alternatives to reset your expectation of what ‘reasonable’ prices are. Restaurants don’t really expect to sell those $400 bottles of wine, but they use them to influence you to buy the $40 bottles (as an aside, always avoid the second-cheapest bottle on a wine list, because this is the one the owner knows you are most likely to buy, and it is often of lower quality than its price suggests).

Menus are a masterclass in the use of options to shape the choices you make. There are even menu engineers to help restaurants (and we are not making this up) “build value and increase profits through menus”. The lesson here is that the menu is trying to manipulate you, and every little detail helps.

And the greater insight here is that ‘freedom to choose’ doesn’t always mean you’re choosing freely.
