Cognitive Science Conference, Philadelphia

This week, 10-13th August, I am at the Annual Cognitive Science Society Conference in Philadelphia. While there I am presenting work which uses a large data set on chess players and their games.

Previously, the phenomenon of ‘stereotype threat’ has been found in many domains: people’s performance suffers when they are made more aware of their identity as a member of a social group which is expected to perform poorly. For example, there is a stereotype that men are better at maths, and stereotype threat has been reported for female students taking maths exams when their identity as women is emphasised, even if only subtly (by asking them to declare their gender at the top of the exam paper, for example). This effect has been reported for chess, which is heavily male dominated, especially among top players. However, the reports of stereotype threat in chess, as in many other domains, often rely on laboratory experiments with small numbers of people (around 100 or fewer).

My data are more than 11 million games of chess: every tournament recorded with FIDE, the international chess authority, between 2008 and 2015. Using this data, I asked if it was possible to observe stereotype threat in this real-world setting. If the phenomenon is real, however small it is, I should be able to observe it playing out in this data – the sheer number of games I can analyse gives me a very powerful statistical lens.

The answer is no: there is no stereotype threat in international chess. To see how I determined this, and what I think it means, you can read the paper here, or see the Jupyter notebook which walks you through the key analysis. And if you’re at the Conference, come and visit the poster (as PDF, as PNG). Jeff Sonas, who compiled the data, has been kind enough to allow me to make available a 10% sample of the data (still over 1 million games), and this, along with all the analysis code for the paper, is available via the Open Science Framework.
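To give a flavour of the approach, here is a minimal sketch of the kind of check you could run on the released sample. The logic is that the Elo rating system already predicts each player’s expected score, so stereotype threat should show up as women systematically scoring below expectation specifically against male opponents. The file and column names here are illustrative assumptions, not the actual schema; the real analysis is in the notebook and OSF code linked above.

```python
import pandas as pd

def elo_expected(white_elo, black_elo):
    """Standard Elo expected score for the White player."""
    return 1 / (1 + 10 ** ((black_elo - white_elo) / 400))

# Hypothetical file and column names -- the released sample may differ
games = pd.read_csv("fide_sample.csv")

# Games where a woman plays White, split by the sex of her opponent
women_white = games[games.white_sex == "F"].copy()
women_white["expected"] = elo_expected(women_white.white_elo,
                                       women_white.black_elo)
# 'score' coded 1 = White wins, 0.5 = draw, 0 = White loses
women_white["surplus"] = women_white.score - women_white.expected

# Under stereotype threat, mean surplus should be reliably negative
# against male opponents in particular
print(women_white.groupby("black_sex")["surplus"].agg(["mean", "count"]))
```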

There’s lots more to come from this data – as well as analysing performance related effects, the data affords a fantastic opportunity to look at learning curves and try to figure out what affects how players’ performance changes over time.


We are European Scholars

I am British, but consider myself a European scholar. At the start of my time as a lecturer at the University I was lucky enough to be part of an EU funded project on learning in robots. With that project I worked with brilliant colleagues around the EU, as well as being able to do the piece of work I regard as my single most important scientific contribution. It was EU projects like this which inspired the foundation of Sheffield Robotics, a collaboration between the two Universities in Sheffield which aims to define the future of a technology vital to manufacturing in the UK. Two British PhD students I supervised during this project went on to start a business, and a third – from Cyprus – did work with me that led to a major grant from the Michael J Fox Foundation for research into Parkinson’s Disease – bringing US funding into the UK to allow me to work with colleagues in Spain and the US on a health issue that will affect 1 in 500 of us: over 120,000 people in the UK.

Since then I have had two more brilliant PhD students from the EU. One, from Greece, completed a PhD on differences in sensory processing in ADHD and has since gone to work in industry, applying her research skills for a company based in Manchester. The other, an Italian, is currently writing up, and considering job opportunities from around the world. My hope is that we’ll be able to keep him in the UK, where he’ll be able to continue to contribute to the research environment that makes British universities the best in the world.

The UK needs Universities to train our young people, to contribute to public life and to investigate the world around us and within us. And the UK’s Universities need Europe.

I am a European scholar. We are European Scholars at the University of Sheffield. Without our European links and colleagues we, and the UK, would be immeasurably impoverished.

Written in support of yesterday’s call by the University of Sheffield’s Vice-Chancellor


Why don’t we trust the experts?

During the EU referendum debate a friend of mine, who happens to be a Professor of European Law, asked in exasperation why so much of the country seems unwilling to trust experts on the EU. When we want a haircut, we trust the hairdresser. If we want a car fixed, we trust the mechanic. Now, when we need informed comment on the EU, why don’t we trust people who have spent a lifetime studying the topic?

The question rattled around in my mind, until I realised I had actually done some research which provides part of the answer. During my post-doc with Dick Eiser we did a survey of people who lived on land which may have been contaminated with industrial pollutants. We asked people with houses on two such ‘brownfield’ sites, one in the urban north, and one in the urban south, who they trusted to tell them about the possible risks.

One group we asked about was scientists. The householders answered on a scale from 1 to 5 (5 being the most trust). Here’s the distribution of answers:

[Figure: distribution of trust ratings for scientists]

As you can see, scientists are highly trusted. Compare with the ratings of property developers:

[Figure: distribution of trust ratings for property developers]

We also asked our respondents how they rated different communicators on various dimensions. One dimension was expertise about this topic. As you’d expect, scientists were rated as highly expert in the risks of possible brownfield pollution. We also asked people whether they believed the different potential communicators of risks had their interests at heart, and whether they would be open with their beliefs about risks. With this information, it is possible, statistically, to analyse not just who is trusted, but why they are trusted.

The results, published in Eiser et al. (2009), show that expertise is not the strongest determinant of who is trusted. Instead, people trust those who they believe have their best interests at heart. This is three to four times more important than perception of expertise (fig. 3 on p. 294 for those reading along with the paper in hand).
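In regression terms, the shape of that analysis is something like the sketch below. This is a toy version with assumed column names (`trust_scientists`, `expertise`, `shared_interests`), not the published model, but it shows how the relative weight of the two predictors can be compared.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical column names -- the released data may label these differently
df = pd.read_csv("brownfield_survey.csv")

# Predict trust in scientists from perceived expertise and
# perceived shared interests (all rated on 1-5 scales)
model = smf.ols("trust_scientists ~ expertise + shared_interests",
                data=df).fit()
print(model.summary())

# Standardising the variables makes the two coefficients directly
# comparable as relative weights
cols = ["trust_scientists", "expertise", "shared_interests"]
z = (df[cols] - df[cols].mean()) / df[cols].std()
print(smf.ols("trust_scientists ~ expertise + shared_interests",
              data=z).fit().params)
```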

One way of making this clear is to pick out the people who have high trust in scientists (rating of 4 or 5), and compare them to people who have low trust (rating scientists a 1 or 2 for trust). The perceptions of their expertise differ, but not by much:

[Figure: perceived expertise of scientists, split by high- and low-trust respondents]

Even those who don’t trust scientists recognise that they know about pollution risks. In other words, their actual expertise isn’t in question.

The difference lies in whether scientists are seen to have the householders’ interests at heart:

[Figure: perceived shared interests of scientists, split by high- and low-trust respondents]

So those who didn’t trust the scientists tended to believe that the scientists don’t care about them.

The difference is made clear by one group that was highly trusted to communicate risks of brownfield land – friends and family:

[Figure: distribution of trust ratings for friends and family]

Again, the same relationship between variables held. Trust in friends and family was driven more by a perception of shared interests than it was by perceptions of expertise. Remember, this isn’t a measure of generalised trust, but specifically of trust in their communications about pollution risks. Maybe your friends and family aren’t experts in pollution risks, but they surely have your best interests at heart, and that is why they are nearly as trusted on this topic as scientists, despite their lack of expertise.

So here we have a partial answer to why experts aren’t trusted. They aren’t trusted by people who feel alienated from them. My reading of this study would be that it isn’t that we live in a ‘post-fact’ political climate. Rather, it is that attempts to take facts out of their social context won’t work. For me and my friends it seems incomprehensible to ignore the facts, whether about the science of vaccination, or the law and economics of leaving the EU. But my friends and I do very well from the status quo – the Treasury, the Bar, the University work well for us. We know who these people are, we know how they work, and we trust them because we feel they are working for us, in some wider sense. People who voted Leave do suffer from a lack of trust, and my best guess is that this is a reflection of a belief that most authorities aren’t on their side, not because they necessarily reject their status as experts.

The paper is written up as: Eiser, J. R., Stafford, T., Henneberry, J., & Catney, P. (2009). “Trust me, I’m a Scientist (Not a Developer)”: Perceived Expertise and Motives as Predictors of Trust in Assessment of Risk from Contaminated Land. Risk Analysis, 29(2), 288-297. I’ve just made the data and analysis for this post available here.


New paper: Improving training for sensory augmentation using the science of expertise.

A few years ago, we started work on a device we called “the tactile helmet” (Bertram et al., 2013). This would, the plan was, help you navigate without sight, using ultrasound sensors to give humans an extended sense of touch. Virtual rat-whiskers!

As well as doing some basic testing with the device (Kerdegari et al., 2014), Craig and I also reviewed the existing literature on similar sensory augmentation devices.

What we found was that there are many such devices, with little consistency in how their effectiveness is assessed. Critically, for us, research reports neglected to consider the ease and extent of training with a device. So some devices have users who have practised with the device for thousands of hours (even decades!), while the results from others are reported from users who have had little more than a few minutes’ familiarisation.

In our new paper, Improving training for sensory augmentation using the science of expertise, we review existing sensory augmentation devices with an eye on how users can be trained to use them effectively. We make recommendations for which features of training should be reported, so fair comparisons can be made across devices. These aspects of training also provide a natural focus for how training can be optimised, because, for each of them, as cognitive scientists we know how they can be adjusted so as to enhance learning. Our features of training are:

  • The total training duration
  • Session duration and interval
  • Feedback
  • The similarity of training to end use

We discuss each of these in turn, with reference to the psychology literature on skill acquisition, as well as discussing non-training factors which affect device usability.
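For concreteness, a study’s training protocol could be recorded against these four features with something as simple as the structure below. This is a sketch of my own; the field names are shorthand, not a schema from the paper.

```python
from dataclasses import dataclass

@dataclass
class TrainingReport:
    """The four features of training we recommend reporting
    for any sensory augmentation study."""
    total_duration_hours: float    # total training duration
    session_duration_mins: float   # length of each training session
    session_interval_hours: float  # spacing between sessions
    feedback: str                  # e.g. "none", "trial-by-trial", "summary"
    similarity_to_end_use: str     # how closely training matches deployment

# A purely hypothetical study report, for illustration
example = TrainingReport(
    total_duration_hours=2.5,
    session_duration_mins=30,
    session_interval_hours=24,
    feedback="trial-by-trial",
    similarity_to_end_use="navigation task, matching end use",
)
```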

A post-print of the paper is available here.

References:

Bertram, C., & Stafford, T. (2016). Improving training for sensory augmentation using the science of expertise. Neuroscience & Biobehavioral Reviews, 68, 234-244.

Bertram, C., Evans, M. H., Javaid, M., Stafford, T., & Prescott, T. (2013). Sensory augmentation with distal touch: the tactile helmet project. In Biomimetic and Biohybrid Systems (pp. 24-35). Springer Berlin Heidelberg.

Kerdegari, H., Kim, Y., Stafford, T., & Prescott, T. J. (2014). Centralizing bias and the vibrotactile funneling illusion on the forehead. In Haptics: Neuroscience, Devices, Modeling, and Applications (pp. 55-62). Springer Berlin Heidelberg.


A hierarchy of critique

Paul Graham has a hierarchy of disagreement. He’s obviously spent his fair share of time watching debates unfold on internet forums, and has categorised the quality of points people make. At the bottom are distraction and name calling. To get to the top you need to identify and refute the central point. Obviously we should aim to produce disagreements from the top of the hierarchy if we want to have a productive debate.

The hierarchy has been expressed in this handy graphic:

[Figure: Graham’s hierarchy of disagreement]

I think some students would find it useful to have a ‘hierarchy of critique’ to identify the most valuable points to make in an essay. I’ve written before about how to criticise a psychology study. The essential idea is the same as Graham’s: not all criticisms are equal – there are more and less interesting flaws in a study which you can point out.

In brief, like the top levels of Graham’s hierarchy, the best criticisms of a study engage with the propositions that the study authors are trying to establish. Every study will have flaws, but the critical flaws are the ones which break the links between what the experiment shows and what the authors are trying to claim based on it.

So, without further ado, here is my hierarchy of critique:

[Figure: my hierarchy of critique]

The exact contents aren’t as important as the fact that there is a hierarchy, and we should always be asking ourselves how high up the hierarchy the point we’re trying to make is. If it is near the bottom, maybe there are better criticisms to spend time and words on.

For more on this, read my ‘What it means to be critical of a psychology study’ or watch this video I made saying the same thing.


Dangers and advantages in the idea of implicit bias

Last night I was on a panel discussion around the theme of “Success: is it all in the mind? A discussion on women in science.”

On that panel we discussed the idea of implicit bias: that we can behave in ways that are prejudiced even if we believe ourselves to be without prejudice (or even anti-prejudice). Relevant examples might be: interrupting women more than men in meetings, filling departmental seminar or conference keynote slots with men rather than women, rating CVs which come from women as less employable and deserving of a lower salary, and so on.

The idea of implicit bias has both benefits and dangers for how we talk about bias. On the positive side, it gives us a mechanism for thinking about discrimination which isn’t about straightforward explicit prejudice. Sure, there are people who think “Women can’t do physics” or “Women shouldn’t work”, but implicit bias lets us talk about more subtle prejudice; it helps make visible the intangible feeling that, in a thousand different ways, life is different for members of different social groups. Relatedly, implicit bias lets us recognise that the line of division cuts through every one of us. It isn’t a matter of dividing the world into the sexists vs the feminists, say. Rather, because we’re all brought up in a world which discriminates against women, we acquire certain gender-prejudiced habits of thought. Even if that only means automatically thinking of a man when asked to imagine a scientist, that can have accumulating effects for women in science. Finally, thinking about implicit bias gives a handle on what it might mean for an institution or a culture to be prejudiced. Again, without the need to identify individuals, implicit bias can help us talk about the ways in which we participate, or our organisation participates, in perpetuating discrimination. Nobody has to want people who are more likely to have childcare commitments to be excluded, but if your departmental meetings are always at 4pm then you are risking excluding them.

But the idea of implicit bias can have a negative influence as well. We live in an age which is fascinated by the individual and the psychological. Just because implicit biases can be measured in individuals’ behaviour doesn’t mean that all problems of discrimination should be addressed at the psychological level of individuals. If one thing is clear about implicit bias, it is that the best approaches to addressing it won’t be psychological. This is a collective project: there is little or no evidence that ‘retraining’ individuals’ implicit biases works, and raising awareness, whilst important, doesn’t provide a simple cure. Approaching bias at an institutional or interpersonal level is more likely to be effective – things like tracking the outcomes of hiring decisions or anonymised marking have been shown to mitigate bias, or to insulate individuals from the possibility of bias.

Secondly, the way people talk about bias evokes a metaphor of our rational versus irrational selves which owes more to religion than it does to science. Implicit biases are often described as unconscious biases, when the meaning of ‘unconscious’ is unclear, and there’s plenty of evidence that people are aware, in some ways, of their biases and/or able to intervene in their expression. By describing bias as ‘unconscious’ we risk thinking of these biases as essentially mysterious – unknowable and unalterable (and from there the natural thought is: well, there’s nothing I can do about them). My argument is that biases are not some unconscious, extraneous process polluting our thinking. Rather, they are constitutive of our thinking – you can’t think without assumptions and shortcuts. And assumptions and shortcuts, while essential, also create systematic distortions in the conclusions you come to.

The idea of implicit bias helps us see prejudice in unexpected places – including our own behaviour. It sets our expectation that there will be no magic bullet for addressing bias, and that progress will probably be slow, because cultural change is slow. These are the good things about thinking about the psychology of bias. But although the psychological mechanisms of bias are fascinating, we must recognise the limitations of only thinking about individuals and individual psychology when trying to deal with prejudice, especially when that prejudice is embedded in far wider historical, social and economic injustices. Nor should we allow the rhetoric of biases being ‘unconscious’ to trick us into thinking that bias is unknowable or unaccountable. There is no single thing to be done about discrimination, but things can be done.

Links & Endnotes:

The event was “Success: is it all in the mind? A discussion on women in science.”, organised by Cavendish Inspiring Women. The other panellists were Jessica Wade, Athene Donald and Michelle Ryan. My thanks to all the organisers and our chair, Stuart Higgins.

My thinking about bias is funded by the Leverhulme Trust, on a project led by Jules Holroyd. All my thinking about this has benefited from extensive discussion with her, and with the other members of that project (Robin Scaife, Andreas Bunge).

A previous post of mine about bias mitigation, which arose from doing training on bias with employment tribunal judges

A great book review: What Works: Gender Equality by Design, by Iris Bohnet, which says many sensible things but which risks describing bias as unconscious and therefore more mysterious and intractable than it really is

A good example of the risk of ‘psychologising’ bias: there are more police killings of blacks than whites in the US, but that may reflect other injustices in society rather than straightforward racist biases in police decisions to shoot (and even if it did, it isn’t clear that the solutions would be to target individual officers). See also ‘Implicit Bias Training for Police May Help, but It’s Not Enough‘.

A great discussion of the Williams and Ceci (2015) claim that “sexism in science is over”, and also here. See also ‘How have gender stereotypes changed in the last 30 years?’.


2015 review

Here’s a selective round-up of my academic year

Teaching: I taught my Cognitive Psychology course for the second time. It takes inspiration from MOOCs and ‘flipped classroom’ models, so I try and scaffold the lectures with a bunch of online resources and pre- and post-lecture activities. This year I added pre-lecture quizzes and personalised feedback for each student on their engagement. Based on thinking about my lecture discussions I wrote a short post on Medium, ‘Cheap tricks for starting discussions in lectures‘ (the truth is, lectures are a bad place for starting discussions, but sometimes that’s what you have to work with). I rewrote my first year course on emergent models of mind and brain. It uses interactive Jupyter notebooks, which I’m very happy with. The lectures themselves show off a simple neural network as an associative model of memory, and the interactive notebooks mean that students can train the neural network on their own photos if they want. I also held an ‘intergenerational tea party’ every Thursday afternoon of autumn semester, where I invited two students I supervise from every year of the undergraduate course (and my PG students and post-docs). If you came to one of these, thanks – I’ll be doing it again next semester.
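The associative memory in those lectures is, in spirit, a Hopfield-style network. Here is a generic sketch of the idea – not the actual course notebooks – storing binary patterns with a Hebbian outer-product rule and then recovering a stored pattern from a corrupted probe.

```python
import numpy as np

def train_hopfield(patterns):
    """Store binary (+1/-1) patterns via the Hebbian outer-product rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)            # no self-connections
    return W / len(patterns)

def recall(W, probe, steps=10):
    """Iteratively clean up a noisy probe towards a stored pattern."""
    x = probe.copy()
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1                 # break ties
    return x

# Store two random patterns, then recover one from a 10%-corrupted copy
rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(2, 100))
W = train_hopfield(patterns)
noisy = patterns[0] * rng.choice([1, -1], size=100, p=[0.9, 0.1])
print(np.mean(recall(W, noisy) == patterns[0]))  # fraction of bits recovered
```

In the course version the patterns come from students’ own photos, but the underlying memory mechanism is the same.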

Writing: I had a piece in the Guardian, The science of learning: five classic studies, as well as my regular BBC Future column, a few pieces for The Conversation, and some ad-hoc blogging as a minor player on the mindhacks.com blog. I self-published an e-book, ‘For argument’s sake: evidence that reason can change minds‘, which was very briefly the 8th most popular experimental psychology e-book on Amazon (1 place behind ’50 sexting tips for women’).

Engagement: The year began with me on a sabbatical, which I spent at Folksy. Thanks to everyone there who made it such an enjoyable experience. I learnt more observing small business life in my home city than I think I would have in another Psychology department on the other side of the world. This year I was also lucky enough to do some work with 2CV related to a Transport for London brief on applying behavioural science to passenger behaviours, with Comparethemarket.com on understanding customer decisions and with McGraw-Hill Education on analysis of student learning. Our work on decision biases in court was also kindly mentioned on the UK parliament website, but I have to say that my getting-out-of-the-university highlight of the year was appearing in the promotional video for Folksy’s drone delivery project (released 1/4/2015).

Research: We rebooted interdisciplinary Cognitive Science activities at Sheffield with a Workshop, several seminars and a mailing list for everyone to keep in touch. Kudos to Luca for help instigating these things.

Several existing grants kept me very busy:

Our Leverhulme grant on Bias & Blame continued with our investigation into the cognitive foundations and philosophical implications of implicit bias. The PI, Jules Holroyd, was awarded a prestigious Vice Chancellor’s Fellowship at Sheffield, so she’ll be a colleague in the new year as well as a collaborator (well done Jules!). As part of this project we pre-registered an experimental test of our core hypothesis, and this December Robin Scaife finished a heroic effort in data collection, so expect results on this in the new year. Pre-registration was an immensely informative process, not least because it made me finally take power analysis seriously (previously I just sought to side-step the issue). As a result of this work on decision making and implicit bias I did training for employment tribunal judges on bias in decision making, during which I probably learnt more from them than they learnt from me.

We’ve been scanning at the York Neuroimaging Centre, as part of our project on ‘Neuroimaging as a marker of Attention Deficit Hyperactivity Disorder (ADHD)’ . One of the inspirations for this project, Maria Panagiotidi, passed her PhD viva in November for her thesis titled: ‘The Role of the Superior Colliculus in Attention Deficit Hyperactivity Disorder’. Congratulations to Maria, who goes on to work as a research psychologist for Arctic Shores in Manchester.

Funded by the Michael J Fox Foundation, we’ve continued testing in Sheffield and Madrid, using typing as a measure of the strength of habitual behaviour in Parkinson’s Disease. For this grant the heroic testing efforts were performed by Mariana Leriche. For the analysis we are combining timing information (my specialty) with an information-theoretic analysis based on language structure. Colin Bannard (University of Liverpool) is leading on this part of the analysis, and working with him has been a great pleasure and immensely informative on computational linguistics.

Our students in the Sheffield Neuroeconomics network are approaching their final years. Angelo Pirrone and I have been working with James Marshall in Computer Science on perceptual decision making, and on fitting models of decision making.

That’s not all, but that is all for now. The greatest pleasure of the year has been all the people I’ve had a chance to work with; students, colleagues and collaborators. Everything I have done this year has been teamwork. So apologies if you’re not mentioned above – it is only due to lack of space, not lack of appreciation – and my best wishes for 2016.


Individualised student feedback

My Cognitive Psychology course is structured around activities which occur before and after the lectures, many of them online. This year I wrote a Python script which emailed each student an analysis and personalised graph of their engagement with the course. Here’s what it looked like:


———- Forwarded message ———-
From: me
Date: 4 December 2015 at 10:53
Subject: engagement with PSY243
To: student@sheffield.ac.uk

This is an automatically generated email, containing feedback on your engagement with PSY243 course activities. Nobody but you (not even me) has seen these results, and they DO NOT AFFECT OR REFLECT your grade for this course. They have been prepared merely as feedback on how you have engaged with activities as part of PSY243.

Here is a record of your activities:
Weeks 1-9, concept checking quizzes completed (out of 7):  4
Week 1-10, asked question via wiki or discussion group:  NO
Week 3, submitted practice answer:  NO
Week 7, submitted answer for peer review (compulsory):  YES
Week 8, number of peer reviews submitted (out of 3, compulsory):  3
Week 10, attended seminar discussion:  NO

We can combine these records to create a MODULE ACTIVITY ENGAGEMENT SCORE.

* * * Your score is 57% * * *

This puts you in the TOP half of the course. Obviously this score does not include activities for which I do not have records. This includes things like lecture attendance, asking questions in lectures, private study, etc.

If we plot the engagement scores for the whole year against the number of people who get that engagement score or lower we get a graph showing the spread of engagement across the course. This graph, and your position on it, are attached to this email. People who have done the least will be towards the left, people who have done the most will appear towards the right of the curve. You can see that there is a spread of engagement scores. Very few people have not done anything, very few have done everything.

I hope you find this feedback useful. PSY243 is designed as a course where the activities structure your private study, rather than as a course where a fixed set of knowledge is conveyed in lectures. This is why I put such emphasis on these extra activities, and provide feedback on your engagement with them. Next week you have the chance to give feedback on PSY243 as part of the course evaluation, so please do say if you can think of ways the course might be improved.

Yours,
Tom, PSY243 MO

[Figure: example engagement graph attached to each student’s email]

I designed this course to be structured around a single editable webpage, a wiki, which would provide all the information needed to understand the course from day one. My ambition was to use the lectures to focus on two things you can’t get from a textbook. The first is live exposure to a specialist explaining how they approach a problem or topic in their area; the second is an opportunity to discuss the material (a so-called ‘flipped classroom‘). This year I added pre-lecture quizzes to the range of activities available on the course (you can see these here). These were designed so students could test their understanding of the foundational material upon which each lecture drew, and are part of this wider plan to provide a clear structure for students’ engagement with the course around the lectures.
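The gist of the script was something like the sketch below. The column names and weights are illustrative stand-ins rather than the real marking scheme; the actual code is linked in the next paragraph.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Illustrative record format -- the real script used different columns/weights
records = pd.read_csv("psy243_activity.csv")   # one row per student

weights = {"quizzes_completed": 1, "asked_question": 2, "practice_answer": 2,
           "peer_review_answer": 3, "peer_reviews_done": 1,
           "attended_seminar": 2}

# Engagement as a weighted sum of activities, scaled to a percentage
max_total = sum(w * records[col].max() for col, w in weights.items())
records["engagement"] = 100 * sum(records[col] * w
                                  for col, w in weights.items()) / max_total

# Plot each score against the number of students with that score or lower --
# the curve each student saw, with their own position marked on their copy
scores = records["engagement"].sort_values().reset_index(drop=True)
plt.plot(scores.values, scores.index + 1)
plt.xlabel("Engagement score (%)")
plt.ylabel("Students with this score or lower")
plt.savefig("engagement_curve.png")
```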

If you’re the sort of person who wants to see the code, it is here. At your own risk.


Crowdsourcing analysis, an alternative approach to scientific research

Crowdsourcing analysis, an alternative approach to scientific research: Many Hands make tight work

Guest Lecture by Raphael Silberzahn, IESE Business School, University of Navarra

11:00 – 12:00, 9th of December, 2015

Lecture Theatre 6, The Diamond (32 Leavygreave Rd, Sheffield S3 7RD)

Is soccer players’ skin colour associated with how often they are shown a red card? The answer depends on how the data is analysed. With access to a dataset capturing the player-referee interactions of premiership players from the 2012-13 season in the English, German, French and Spanish leagues, we organised a crowdsourced research project involving 29 different research teams and 61 individual researchers. Teams initially exchanged analytical approaches — but not results — and incorporated feedback from other teams into their analyses. Despite this, the teams came to a broad range of conclusions. The overall group consensus (that a correlation exists) was much more tentative than would be expected from a single-team analysis. Raphael Silberzahn will provide insights from his perspective as one of the project coordinators, and Tom Stafford will speak about his experience as a participant in this project. We will discuss how smaller research projects, too, can benefit from bringing together teams of skilled researchers to work simultaneously on the same data, and thereby balance discussions and provide scientific findings with greater validity.
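For a flavour of where the analytic forks appear: one natural first-pass model is a Poisson regression of red-card counts on skin tone, with games played together as the exposure. The column names in this sketch are assumptions for illustration, and it is only one of the many defensible specifications teams chose between.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Column names assumed for illustration; the real dataset differs in detail
df = pd.read_csv("redcard.csv")     # one row per player-referee pairing

# Red-card counts, with games played together as the exposure term
model = smf.poisson("red_cards ~ skin_tone", data=df,
                    offset=np.log(df["games"])).fit()
print(model.summary())              # exp(coef) = rate ratio per unit of tone
```

Every further choice (which covariates to include, how to model the player-referee structure) moves the estimate – which is exactly the point of the project.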

Links to coverage of this research in Nature (‘Crowdsourced research: Many hands make tight work’) and on FiveThirtyEight (‘Science Isn’t Broken: It’s just a hell of a lot harder than we give it credit for’). Our group’s analysis was supported by some great data exploration and visualisation work led by Mat Evans. You can see an interactive notebook of this work here.



Bias mitigation

On Friday I gave a talk on cognitive and implicit biases to a group of employment tribunal judges. The judges were a great audience, far younger, more receptive and more diverse than my own prejudices had led me to expect, and I enjoyed the opportunity to think about the area of cognitive bias, and how some conclusions from that literature might be usefully carried over to the related area of implicit bias.

First off, let’s define cognitive bias versus implicit bias. Cognitive bias is a catch-all term for systematic flaws in thinking. The phrase is associated with the ‘Judgement and decision making’ literature which was spearheaded by Daniel Kahneman and colleagues (and for which he received the Nobel Prize in 2002). Implicit bias, for our purposes, refers to a bias in judgements of other people which is unduly influenced by social categories such as sex or ethnicity, and in which the person making the biased judgement is either unaware of, or unable to control, the undue influence.

So from the cognitive bias literature we get a menagerie of biases such as ‘the overconfidence effect‘, ‘confirmation bias‘, ‘anchoring‘, ‘base rate neglect‘, and on and on. From implicit bias we get findings such as that maths exam papers are marked higher when they carry a male name at the top, that job applicants with stereotypically black American names have to send out twice as many CVs, on average, to get an interview, or that people sit further away from someone they believe has a mental health condition such as schizophrenia. Importantly, all these behaviours are observed in individuals who insist that they are not only not sexist/racist/prejudiced but are actively anti-sexism/racism/prejudice.

My argument to the judges boiled down to four key points, which I think build on one another:

1. Implicit biases are cognitive biases

There is slippage in how we identify cognitive biases compared to how we identify implicit biases. Cognitive biases are defined against a standard of rationality – either we know the correct answer (as in the Wason selection task, for example), or we feel able to define irrelevant factors which shouldn’t affect a decision (as in the framing effect found with the ‘Asian Disease problem‘). Implicit biases use the second, contrastive, standard. Additionally, it is unclear whether the thing being violated is a standard of rationality or a standard of equity. So, for example, it is unjust to allow the sex of a student to influence their exam score, but is it irrational? (If you think there is a clear answer to this, either way, then you are more confident of the ultimate definition of rationality than a full century of scholars.)

Despite these differences, implicit biases can usefully be thought of as a kind of cognitive bias. They are a habit of thought, which produces systematic errors, and which we may be unaware we are deploying (although elsewhere I have argued that the evidence for the unconscious nature of these processes is over-egged). Once you start to think of implicit biases and cognitive biases as very similar, it buys some important insights.

Specifically:

2. Biases are integral to thinking

Cognitive biases exist for a reason. They are not rogue processes which contaminate what would otherwise be intelligent thought. They are the foundation of intelligent thought. To grasp this, you need to appreciate just how hard principled, consistent thought is. In a world of limited time, information, certainty and intellectual energy, cognitive biases arise from necessary short-cuts and assumptions which keep our intellectual show on the road. Time and time again psychologists have looked at specific cognitive biases and found that there is a good reason for people to make that mistake. Sometimes they even find that animals make the same mistake, demonstrating that the error persists even without the human traits of pride, ideological confusion and general self-consciousness – suggesting that there are good evolutionary reasons for it to exist.

For an example, take confirmation bias. Although there are risks to preferring to seek information that confirms whatever you already believe, the strategy does provide a way of dealing with complex information, and a starting point (i.e. what you already suspect) which is as good as any other starting point. It doesn’t require that you speculate endlessly about what might be true, and in many situations the world (or other people) is more than likely to put contradictory evidence in front of you without you having to expend effort in seeking it out. Confirmation bias exists because it is an efficient information seeking strategy – certainly more efficient than constantly trying to disprove every aspect of what you believe.

Implicit biases concern social judgement and socially significant behaviours, but they also seem to share a common mechanism. In cognitive terms, implicit biases arise from our tendency towards associative thought – we pick up on things which co-occur, and have the tendency to make judgements relying on these associations, even if strict logic does not justify it. How associations are created and strengthened in our minds is beyond the scope of this post.

For now, it is clear that making judgements based on circumstantial evidence is unjustified but practical. An uncontentious example: you get sick after eating at a particular noodle bar. Maybe it was bad luck, maybe you were going to get sick anyway, or maybe it was the sandwich you ate at lunch, but the odds are good you’ll avoid the noodle bar in the future. Why chance it, when there are plenty of other restaurants? It would be impractical to never make assumptions, and the assumption-laden (biased!) route offers a practical solution to the riddle of what you should conclude from your food poisoning.

3. There is no bias-free individual

Once you realise that our thinking is built on many fast, assumption-making processes which may not be perfect – indeed which have systematic tendencies that produce the errors we identify as cognitive bias – you then realise that it would be impossible to have bias-free decision processes. If you want to make good choices today rather than perfect choices in the distant future, you have to compromise and accept decisions which will have some biases in them. You cannot free yourself of bias, in this sense, and you shouldn’t expect to.

This realisation encourages some humility in the face of cognitive bias. We all have biases, and we shouldn’t pretend that we don’t or hope that we can free ourselves of them.

We can be aware of the biases we are exposed to and likely to harbour within ourselves. We can, with a collective effort, change the content of the biases we foster as a culture. We can try hard to identify situations where bias may play a larger role, or identify particular biases which are latent in our culture or thinking. We can direct our bias mitigation efforts at particularly important decisions, or decisions we think are particularly likely to be prone to bias. But bias-free thinking isn’t an option, it is part of who we are.

4. Many effective mitigation strategies will be supra-personal

If humility in the face of bias is the first practical reaction to the science of cognitive bias, I’d argue that the second is to recognise that bias isn’t something you can solve on your own at a personal psychological level. Obviously you have to start by trying your honest best to be clear-headed and reasonable, but all the evidence suggests that biases will persist, that they cannot be cut out of thinking, and that they may even thrive when we think ourselves most objective.

The solution is to embed yourself in groups, procedures and institutions which help counteract bias. Obviously, to a large extent, the institutions of law have evolved to counter personal biases. It would be an interesting exercise to review how legal cases are conducted from a psychological perspective, interpreting different features in terms of how they work with or against our cognitive tendencies (so, for example, the adversarial system doesn’t get rid of confirmation bias, but it does mean that confirmation bias is given equal and opposite opportunity to work in the minds of the two advocates).

Amongst other kinds of ‘ecological control‘ we might count proper procedure (following the letter of the law, checklists, etc), control of (admissible) information and the systematic collection of feedback (without which you may not ever come to realise that you are making systematically biased decisions).

Slides from my talk are here, as Google docs slides and as PDF. Thanks to Robin Scaife for comments on a draft of this post. Cross-posted to the blog of our Leverhulme Trust funded project on “Bias and Blame“.
