Why You Can’t Pay People to Give Blood


In a previous post we talked about the rise of ‘bullshit jobs’, but that analysis only addresses part of what is wrong with the world of work. You have undoubtedly heard the expression that ‘people quit managers, not organisations’, and there is plenty of evidence that this is the case: in one Gallup study, for instance, about 50% of the 7,200 adults surveyed had left a job to get away from their manager.

There is clearly something wrong with the way many of us are managed, but what is it? The view from the social sciences suggests this might be because so much of what passes for management ‘science’ embodies a profound misunderstanding of how people behave and what it takes to get the best from them.

Let’s just look at something that is central to the role of any manager: recognising and rewarding performance. Many managers are surprised to learn that performance-related pay doesn’t encourage people to work harder, and often has the opposite effect. This is because it fundamentally misunderstands what motivates people to do a good job at work. Way back in 1970, Richard Titmuss discovered that paying people to donate blood actually reduced the number of people willing to give it. It might seem obvious that if someone is already willing to do something, then paying them to do it would encourage them even more. But the clue is in the language we use: people ‘give’ blood; they don’t ‘sell’ it.

What Titmuss found, and most managers miss, is that extrinsic motivations displace intrinsic ones. Intrinsic motivations are what drive us to do something because we find the task interesting, enjoyable, or worthwhile. The motivation to do the task, and do it well, comes from within. Extrinsic motivation, on the other hand, is about doing something for what it will bring you. Psychologists have known for a long time that explicit incentives often backfire by undermining the value inherent in the task. It may seem bizarre, but the research is clear that if there is something you love doing for its own sake (such as playing tennis or golf), then getting paid to do it will probably make you enjoy it less. This is also why you really don’t want to turn your hobby into a job.

Rather than writing performance plans and setting targets, the secret to motivating staff seems to lie in compliments and pizza. I’m not being flippant either: a great experiment by Dan Ariely demonstrated that giving staff pizza or compliments was more effective at keeping them motivated than giving them cash rewards. People work harder when they feel appreciated, which is also why it’s a good idea to compliment them in public (and criticise them in private).

Managers are an easy target for social scientists, but you might be surprised to hear that social science has a great deal of sympathy for managers too. This is because managers struggle with the same biases as the rest of us, and are usually just as unaware of those biases. This means they are just not very good at judging other people. Like all of us, they will probably be unaware of how their impressions of others are influenced by how good looking those people are, how tall they are, and how often they talk in meetings. The research into how we judge others is clear that we see those who speak up first, or loudest, or most often, as being more charismatic than those who don’t. This can lead to a ‘tyranny of the articulate’, where the best talkers win the debate even when they have the weakest argument. But often we’re drawn to the talkers regardless of what they actually have to say. There is an easy way to see this for yourself: Google ‘how to be more charismatic’ and what you will find are endless lists about how to talk, dress, and impress, but very little about the quality of your message.

There is plenty more we could add to this criticism of management in the modern workplace, but the larger point should be clear by now. The root cause of all the problems outlined above is that managers are either taught, or come to believe, that the world is just what it seems to their senses (a perspective known as ‘naïve realism’). From the perspective of the social sciences, nothing could be further from the truth[i].


[i] The Gallup study is quoted in Benjamin Snyder’s (2015) “Half of us have quit our job because of a bad boss”, Fortune, April 2, 2015. There is a useful systematic analysis of performance-related pay and bonuses by Bernd Irlenbusch and Dirk Sliwka titled “Incentives, Decision Frames, and Motivation Crowding Out: An Experimental Investigation”, IZA Discussion Paper No. 1758 (September 2005), available at SSRN: https://ssrn.com/abstract=822866. There is also a nice short summary in Samuel Bowles (2009) “When Economic Incentives Backfire”, Harvard Business Review, March 2009 issue. The concepts of intrinsic and extrinsic motivation are covered in most undergraduate psychology textbooks, but there is a nice summary in Richard M. Ryan and Edward L. Deci’s (2000) “Intrinsic and Extrinsic Motivations: Classic Definitions and New Directions”, Contemporary Educational Psychology 25, 54–67. Richard Titmuss’s 1970 study is in his The Gift Relationship: From Human Blood to Social Policy, London: Allen & Unwin, and is well worth reading. One of the reasons why extrinsic rewards crowd out intrinsic motivations is explained by the overjustification effect. Have a look at Lepper, M. R., Greene, D., and Nisbett, R. E. (1973) “Undermining children’s intrinsic interest with extrinsic reward: A test of the ‘overjustification’ hypothesis”, Journal of Personality and Social Psychology 28(1): 129–137, to learn more. The Dan Ariely experiment is documented in his (2016) Payoff: The Hidden Logic That Shapes Our Motivations, New York: TED Books, Simon & Schuster. For a great insight into the power of peer pressure to motivate staff, see Monsalve MN, Pemmaraju SV, Thomas GW, Herman T, Segre AM, Polgreen PM (2014) “Do Peer Effects Improve Hand Hygiene Adherence among Healthcare Workers?”, Infection Control and Hospital Epidemiology 35(10): 1277–1285.
To find out more about ‘the beauty bias’, try Deborah Rhode’s eminently readable (2010) The Beauty Bias: The Injustice of Appearance in Life and Law, Oxford; New York: Oxford University Press. The ‘height premium’ is covered in Joe Pinsker’s (2015) “The Financial Perks of Being Tall”, The Atlantic, May 18, 2015. The description of “the tyranny of the articulate” comes from “Pinterest Founder Ben Silbermann’s Lessons on Decision Making, Values, and Taking Time for Yourself”, a Village Global post on Medium (https://medium.com/@villageglobal/pinterest-founder-ben-silbermanns-lessons-on-decision-making-values-and-taking-time-for-yourself-5c76c1517a38). To understand how easy it is to be fooled by charisma, have a look at Cass Sunstein and Reid Hastie’s (2015) Wiser: Getting Beyond Groupthink to Make Groups Smarter, Harvard Business Review Press, Boston. It’s also worth reading John Antonakis’s (2012) “Learning Charisma”, Harvard Business Review, June 2012.


Are You in a Bullshit Job?


Teddy Roosevelt once said that “far and away the best prize that life offers is the chance to work hard at work worth doing”. There is lots of advice about how to ‘work hard’ but what about that ‘work worth doing’ part? What if the real problem is that your entire job is meaningless make-work? David Graeber, an anthropologist at the London School of Economics, argues that this is a much more common problem than we might think. He has called this ‘The Phenomenon of Bullshit Jobs’ and his ideas are worth considering.

Graeber draws special attention to jobs in the service sector but argues that about half of the jobs in the modern economy are pointless. Count yourself in this half if your job is really about fixing problems that shouldn’t exist (what he calls ‘duct tapers’); keeping other people on task (‘task masters’); managing the performance of others (‘box tickers’); making other people feel more important (‘flunkies’); or promoting the interests of others (‘goons’). As he said, “it’s as if someone were out there making up pointless jobs just for the sake of keeping us all working”.

Graeber’s idea is important because he’s clear that he’s not talking about ‘shit jobs’ as we traditionally understand them, the kinds of jobs that come with poor pay, conditions, and low prestige. Instead, he’s talking about jobs that look perfectly respectable from the outside but are essentially hollow on the inside. He singles out corporate lawyers as a great example of what he’s talking about (but he’s also smart enough to see that, as a Professor of Anthropology, he shouldn’t be throwing stones at anyone). Graeber’s point is that it’s hard to find meaning in your work if you think your work is pointless.

To which the obvious solution seems to be to find work that has meaning and value. Except this is where ‘The Phenomenon of Bullshit Jobs’ becomes really interesting. Alongside the creation of all those pointless jobs, the modern economy seems to be filling even the good jobs with more and more pointless tasks. Graeber calls this “the creeping bullshitization of real jobs”. It describes how compliance and administration crowd out the ability to do any actual work. One of the earliest examples he used was closest to his heart: universities. He notes how academics find themselves doing less and less studying, teaching, or research, and spending more and more time measuring, assessing, or quantifying the way they should be studying, teaching, or researching. He also talks about the bizarre phenomenon in universities of “forming more committees to discuss the problems of too many committees”. You can understand why when you read that the average US employee spends less than half of each day on their real job, with the majority of their time taken up by witless emails or pointless meetings.

As a result, it should be no surprise that so many people are disengaged from their work. Gallup has an annual international survey that tracks just how disengaged most people are. The latest survey shows that while only a minority (16%) actively hate their job, the majority (51%) are just turning up for the pay cheque. Only a third of workers say they love their job and try to make the company better every day. But perhaps the best indicator of how disengaged people are from their work is the fact that heart attacks are more likely to occur on a Monday than any other day of the week [i].


[i] The Roosevelt quote comes from Teddy’s (1903) Address to the New York State Agricultural Association, Syracuse, New York, September 7th 1903. David Graeber originally outlined his idea about bullshit jobs in Strike! magazine (“On the Phenomenon of Bullshit Jobs: A Work Rant”, August 2013). He expanded that idea in his (2018) book Bullshit Jobs: A Theory, Simon and Schuster, New York, and wrote about what it meant for academics in The Chronicle of Higher Education (“Are you in a BS job? In Academe, you’re hardly alone”, May 2018). In 2018 Graeber’s argument attracted a great deal of interest, and in this blog I have also drawn on Nathan Heller’s “The Bullshit-Job Boom” (The New Yorker, June 7th 2018) and Elaine Glaser’s “Bullshit Jobs: A Theory by David Graeber” (The Guardian, May 25th 2018). The data about how the average US employee spends more than half their day on administration are from Andre Spicer’s (2017) “From inboxing to thought showers: how business bullshit took over”, The Guardian, 23 Nov 2017. The Gallup data are from Anna Robaton’s (2017) “Why so many Americans hate their jobs”, CBS News MoneyWatch, March 31, 2017 (www.cbsnews.com/news/why-so-many-americans-hate-their-jobs/). The heart attack statistic is in Anahad O’Connor’s (2006) “The Claim: Heart Attacks are More Common on Mondays”, The New York Times, March 14 2006.




Social Exchange Theory and Survey Design


‘Social exchange theory’ is a critical component of sociological and social psychological theorising about how relations are created and maintained. Most often associated with the work of the (then) University of Washington sociologists Richard Emerson[1] and Karen Cook[2], social exchange theory states that exchanges constitute the fundamental form of human interaction, and that interaction patterns are shaped by power relationships and the corresponding efforts to achieve balance in exchange relations. In other words, social exchange theory tells us that the actions of individuals are motivated by the return that these actions are expected to bring (and in fact, usually do bring). The theory argues that people engage in an activity because of the rewards they hope to reap, that all activities include certain costs, and that people attempt to keep those costs below the rewards they hope to receive.

Social exchange theory is important in research because, as Don Dillman notes, interviews (and especially surveys) are a special case of social exchange. Thinking about surveys in terms of social exchange means there are three things researchers can do to maximise survey responses:

  1. Minimise the costs of responding;
  2. Maximise the rewards of responding; and
  3. Establish trust that those rewards will be delivered.

In surveying, the largest ‘reward’ is letting people know that they are part of a specially selected sample and that their opinions and responses are valued. People, after all, like to feel valued. But some other ways that surveys ‘reward’ participants include:

  1. Showing positive regard to the respondent;
  2. Providing verbal appreciation to the respondent; and
  3. Making the questionnaire interesting.

Equally, surveys can reduce the costs of responding by:

  1. Making the task appear brief;
  2. Reducing the effort required (through such things as closed-ended questions); and
  3. Eliminating any chance of embarrassment.

A further technique suggested by social exchange theory is to include a small gift with each questionnaire sent out. This can be in addition to an incentive draw for those that complete the questionnaire but social exchange theory predicts that a small gift with every survey helps seed the exchange researchers are looking for. We also know from other psychological research that small immediate rewards are often preferred over potentially larger long-term rewards[3].
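O’Donoghue and Rabin (cited below) formalise this preference for immediate rewards with a quasi-hyperbolic (‘beta-delta’) discounting model, in which every delayed reward is scaled down by an extra present-bias factor on top of ordinary discounting. A minimal sketch (the beta, delta, and dollar values here are purely illustrative, not taken from their paper):

```python
def present_value(reward, delay_days, beta=0.5, delta=0.99):
    """Quasi-hyperbolic (beta-delta) discounting: immediate rewards count
    at face value; every delayed reward takes an extra 'beta' hit on top
    of ordinary exponential discounting."""
    if delay_days == 0:
        return reward
    return beta * (delta ** delay_days) * reward

# A present-biased respondent (beta = 0.5) prefers $5 now over $10 in a month...
assert present_value(5, 0) > present_value(10, 30)   # 5.00 vs ~3.70

# ...even though a patient exponential discounter (beta = 1) would wait for the $10.
assert present_value(5, 0, beta=1.0) < present_value(10, 30, beta=1.0)  # 5.00 vs ~7.40
```

This is one way of seeing why a small gift enclosed with every questionnaire (an immediate, certain reward) can outperform the chance of a larger prize draw later.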



[1] Emerson, Richard (1976) “Social Exchange Theory”, Annual Review of Sociology (1976): 335–362.

[2] Cook, K. (ed.) (1987) Social Exchange Theory. Sage Publications, Newbury Park, CA.

[3] This ‘paradox’ of people choosing short-term gratification over longer-term rewards is also used to explain the rise of obesity and the lack of savings for old age. See O’Donoghue T and Rabin M (2000) ‘The economics of immediate gratification’, Journal of Behavioral Decision Making 13(2): 233–250.




Why You Won’t Remember This Blog


If you’re in the business of influencing others (to buy something, to believe something, or to act differently), then it’s critical that you understand how the human brain really works. One of the important lessons emerging from the social sciences is that our intuitions about ourselves and others are often not as accurate or as insightful as we think.

While it feels like you’re reasoning your way through your life, that’s rarely the case. Instead, our brains are wired to take shortcuts, to be influenced by how things are framed, and to be profoundly shaped by what others are doing. When we talk about knowing something, we really mean experiencing what the neurologist Robert Burton called a “feeling of knowing”.

Part of the puzzle here is that we all rely on our memories to construct our ideas of what the future might look like. And while it feels like our memories are up to the task, they’re not really designed for it. The truth is, we don’t really recall memories so much as reconstruct them from traces distributed throughout the brain. This makes memories what psychologists call ‘plastic’, meaning they can be shaped and reshaped as we replay them.

There is a long list of these memory biases highlighting just how unreliable our memories really are, but the problem with this line of argument is that it starts from an assumption that our memories should be accurate. That is, the emphasis on remembering obscures the role that forgetting plays for our brains. We tend to think of forgetting as a failure to remember, but this misunderstands what our brains evolved to do. The reason you have a memory at all is to help you survive in an uncertain world. Memory exists to optimise decision-making, not to accurately capture and reproduce information.

As the psychologists Blake Richards and Paul Frankland have noted, this means forgetting is just as important to a healthy brain as remembering. This is supported by the fact that we tend to forget memories about what happened to us (known as ‘episodic memories’) more quickly than we forget memories about general knowledge (known as ‘semantic memories’). The reality is that those episodic memories might just not be very useful from a survival perspective. Which also means that not being able to remember how you know someone might be a feature of our brains and not a bug. It also explains why you’re unlikely to remember this blog, and why we won’t be too offended when you don’t[i].



[i] To learn more about the fallibility of memory, have a look at the work of UCLA’s Bjork Learning and Forgetting Lab (https://bjorklab.psych.ucla.edu/people/). Another great place to start is with Elizabeth and Robert Bjork’s (1996) Memory, Academic Press, San Diego, or Daniel Schacter’s (2001) The Seven Sins of Memory, Houghton Mifflin. The point about memory being a decision-making tool to help you survive in an uncertain world comes from Blake Richards and Paul Frankland’s 2017 paper “The Persistence and Transience of Memory”, Neuron 94(6): 1071–1084. The point about forgetting being a feature and not a bug is from Angela Chen’s (2017) “The purpose of memory might not be to record everything”, The Verge, Jun 21, 2017.



A Short List of Essential Reading for a World Full of Fake News


Charlie Jones once said that “you are the same today as you will be in five years, except for two things: the people you meet and the books you read. Choose both carefully”. At Research First we have always believed that critical thinking skills are essential to both thrive in business and fully participate in civic life.

If anything, those skills have become even more important given the batshit crazy world we find ourselves in today. Now, seemingly more than ever, the ability to think critically about how arguments are constructed, supported, and presented is an essential antidote to a world awash with bullshit (in Harry Frankfurt’s sense of the word).

Three of our favourite books for helping develop that antidote are Darrell Huff’s How to Lie with Statistics; John Allen Paulos’s A Mathematician Reads the Newspaper; and Cynthia Crossen’s Tainted Truth.

The first of these – How to Lie with Statistics – is now called ‘a classic’ but don’t let that put you off. As the entry on Wikipedia puts it, Huff’s book is “a brief, breezy, illustrated volume outlining common errors, both intentional and unintentional, associated with the interpretation of statistics”. One reason why Huff’s book is so easy to read and follow is that he was a journalist rather than a statistician. And, obviously, this book isn’t really about how to lie with statistics but about how to know when you’re being lied to with statistics. If you work with numbers and statistics (and who doesn’t?) you’ll love Huff’s book.

We’d be tempted to say ‘keep a copy handy whenever you read the newspaper’ if John Allen Paulos hadn’t beaten us to it with his book. A Mathematician Reads the Newspaper is just as ‘breezy’ as Huff’s and just as insightful. It shows how maths and numbers are central to many of the articles we read every day (Paulos takes stories that don’t seem to involve maths and – as Amazon puts it – ‘demonstrates how a lack of mathematical knowledge can hinder our understanding of them’). In the process, he demonstrates how maths and statistics are often abused in support of bullshit and bluff.

Tainted Truth rounds out this collection by showing how sponsored studies have become a powerful tool of persuasion. These studies look like real science to the casual observer but they manipulate truth to reflect the intentions of their sponsors. Tainted Truth also shows that how an argument is presented and communicated can have a significant effect on how persuasive we find it (regardless of its real merits). One of the quotes we love in it says “everybody gets so much information all day long that they lose their common sense”. All three books here help us retain (or reclaim?) that common sense.

And, God knows, now more than ever, that’s something we all need. Check out these three books and let’s make critical thinking great again!





Cambridge Analytica and the Limits of Big Data


In case you missed it, the Cambridge Analytica scandal revolved around an innocuous-looking survey posted on Facebook that deceitfully collected users’ details. The scandal caught the attention of the world because Facebook’s “privacy” settings enabled Cambridge Analytica to also harvest the details of the friends of anyone who completed the survey. This meant Cambridge Analytica collected data from over 50 million users (and maybe as many as 87 million) to help in the company’s stated aim of ‘changing audience behaviour’.

Given that most of these millions of users had not given permission to access their data, most of the coverage of this scandal has focused on the ethical and legal transgressions involved. But other commentators have noted that the scandal provides a powerful lens into what we have given up in order to have access to the wonders of social media. Think of it as a contemporary Faustian bargain, just one where Mephistopheles is disguised in full hipster mode and Faust forgot to read the fine print. No-one has run this argument more persuasively or entertainingly than Bob Hoffman (of Ad Contrarian fame). To give you a taste, in a recent blog Bob wrote:

We used to be able to dismiss Zuckerberg and his gang as greedy, silly brats with no perspective and no ethical compass. But he is far more dangerous than that.  

As important as these debates are, there is another side to the Cambridge Analytica story that needs to be better known. Long before it started its Facebook deceit, Cambridge Analytica boasted that “data drives all we do”. The company promised to be able to form ‘psychographic’ profiles from these data, and to use these psychological insights to better influence opinions, preferences, and behaviour. In this regard, Cambridge Analytica was hitching a ride on the wave of hype created around ‘big data’ and ‘data analytics’.

Because of the association with Steve Bannon and the Trump campaign, it’s easy to think that Cambridge Analytica delivered on that promise. But the evidence clearly points the other way. The company worked on campaigns for both Ted Cruz and Ben Carson (neither of which ended well), and was fired from Trump’s campaign too. According to CBS News, Trump’s campaign ended its relationship with Cambridge Analytica because the data it was supplying were of “suspect quality and value”. In other words, even with all those stolen data, Cambridge Analytica wasn’t able to change much audience behaviour.

Worse still, investigations done by The Atlantic, and elsewhere, suggest that not even Cambridge Analytica believed the data could do what they were telling their clients it could. An undercover investigation by Channel 4 News showed staff from Cambridge Analytica promoting the virtues of blackmail and bribery over bits and bytes to change behaviour. As The Atlantic noted:

If the consulting firm’s “psychographic” modeling was really the key to winning campaigns, why would it even flirt with sketchier skullduggery?

As The Atlantic also notes, Cambridge Analytica found so many willing buyers for its psychographic claptrap because we’ve all become suckers for what Bob Hoffman calls “buffoon[s] with a Powerpoint and a bag full of clichés”. After all, it wasn’t so long ago the world was agog at how Obama’s campaign used data to drive microtargeting to influence voters.

All of which means the other important lesson in the Cambridge Analytica story is this: we should always remain skeptical about revolutionary techniques claiming to wring unique insights from data.

At Research First we love data, and we like our data big, but what matters is the quality of the science not the size of the data file. While we follow Celia Green’s counsel that “the way to do research is to attack the facts at the point of greatest astonishment”, we are also acutely conscious of Heisenberg’s warning that:

What we observe is not nature herself, but nature exposed to our method of questioning.

What the Cambridge Analytica scandal reminds us is that with big data, as with so much in research, we can be our own worst enemies. The real problem is not that the right answers are too hard but that the wrong answers are often too easy.






Paved With Good Intentions


We’ve all heard the aphorism that ‘the road to hell is paved with good intentions’. It warns us that, in trying to make something better, we often end up making it worse. In many ways this is another warning about the hubris that comes with believing we understand how the world around us truly works. In reality, the rules of science are clear that our understanding is always incomplete and open to revision. This is why George Box famously observed that while some of our models are ‘useful’, they are all ultimately ‘wrong’.

One beautiful illustration of this is provided by “Braess’s Paradox”. This began life as a mathematical model of traffic congestion. It shows that adding extra capacity to a network, when its users choose their routes independently, can reduce the overall performance of the network. In other words, attempting to improve congestion by adding more roads can actually make the congestion worse. Interestingly, this also suggests that you could reduce congestion by removing roads (it’s not called a ‘paradox’ for nothing). Subsequent experience in a number of large cities across the world demonstrates that this is exactly what happens, and there’s a nice summary in The New York Times.
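The original model is easy to reproduce. The sketch below uses the standard textbook version of Braess’s network (4,000 drivers and the usual illustrative link costs, not data from any real city):

```python
DRIVERS = 4000  # commuters travelling from Start to End

# Two routes, each with one congestion-sensitive leg (cars / 100 minutes)
# and one fixed 45-minute leg.

def time_without_shortcut():
    # By symmetry, drivers split evenly at equilibrium: 2,000 per route.
    per_route = DRIVERS / 2
    return per_route / 100 + 45          # 20 + 45 = 65 minutes each

def time_with_shortcut():
    # Add a free shortcut joining the two congestion-sensitive legs.
    # Taking a congested leg is now always individually rational
    # (4000 / 100 = 40 minutes < 45 fixed minutes), so at equilibrium
    # every driver crawls along both congested legs.
    return DRIVERS / 100 + 0 + DRIVERS / 100   # 40 + 0 + 40 = 80 minutes each

print(time_without_shortcut())  # 65.0
print(time_with_shortcut())     # 80.0 -- the new road made everyone worse off
```

No driver can improve their own time by switching routes in either case, which is exactly why the worse outcome is stable: the extra road changes what is individually rational, not what is collectively best.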

But we said that Braess’s Paradox ‘began life’ as a mathematical model because it has subsequently become a description of all those cases where attempts to improve a situation end up making it worse. In particular, those cases where individuals’ choices leave the group worse off. In this regard, it’s a great scientific proof of the good old-fashioned ‘tragedy of the commons’ problem.

Braess’s Paradox (and the Tragedy of the Commons) reminds us that we rarely know as much as we think we do. Research published this month in the Harvard Business Review notes that this is particularly a problem for beginners. The authors of that study, Carmen Sanchez and David Dunning, call this “the beginner’s bubble”, reflecting how confidence builds much faster than competence when learning a new task (and just in case you are wondering, yes that is the same David Dunning who gave his name to the Dunning-Kruger Effect).

As with most of the ways our brains play tricks on us, these biases and effects are much easier to see in others than in ourselves. You’re unlikely to spot them on your own, or if you’ve been trained to see criticism as a barometer of failure. Yet we’d all be better off if we embraced Voltaire’s truism that “doubt may be an uncomfortable position but certainty is a ridiculous one”.

