The Open Science Framework

Open science essentials in 2 minutes, part 2

The Open Science Framework (osf.io) is a website designed for the complete life-cycle of your research project – designing projects; collaborating; collecting, storing and sharing data; sharing analysis scripts, stimuli and results; and publishing.

You can read more about the rationale for the site here.

Open Science is fast becoming the new standard for science. As I see it, there are two major drivers of this:

1. Distributing your results via a slim journal article dates from the 17th century. Constraints on the timing, speed and volume of scholarly communication no longer apply. In short, now there is no reason not to share your full materials, data, and analysis scripts.

2. The replicability crisis means that how people interpret research is changing. Obviously, sharing your work doesn’t automatically make it reliable, but since sharing is a costly signal, it is a good sign that you take the reliability of your work seriously.

You could share aspects of your work in many ways, but the OSF has particular benefits:

  • The OSF is backed by serious money & institutional support, so the online side of your project will stay live for many years after you publish the link
  • It integrates with various other platforms (GitHub, Dropbox, the PsyArXiv preprint server)
  • It is totally free, run for scientists by scientists as a non-profit

All this, and the OSF also makes things like version control and pre-registration easy.

Good science is open science. And the fringe benefit is that making materials open forces you to properly document everything, which makes you a better collaborator with your number one research partner – your future self.

Notes to support a lightning talk given as part of the Open Science seminar in the Department of Psychology, University of Sheffield on 14/11/17.

Part of a series

  1. Pre-registration
  2. The Open Science Framework

Pre-registration

Open Science essentials in 2 minutes, part 1

The Problem

As a scholarly community we allowed ourselves to forget the distinction between exploratory and confirmatory research, presenting exploratory results as confirmatory and post-hoc rationales as predictions. As well as being dishonest, this makes for unreliable science.

Flexibility in how you analyse your data (“researcher degrees of freedom”) can invalidate statistical inferences.

Importantly, you can employ questionable research practices like this (“p-hacking”) without knowing you are doing it. Decide to stop collecting data because the results so far are significant? Measure 3 dependent variables and use the one that “works”? Exclude participants who don’t respond to your manipulation? All are justified in exploratory research, but they mean you are exploring a garden of forking paths in the space of possible analyses – when you arrive at a significant result, you won’t be sure whether you got there because of the data or because of your choices.
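To make the forking-paths point concrete, here is a minimal simulation sketch in Python (my own illustration, not part of the original talk). Under the null hypothesis every p-value is uniform on [0, 1], so if you measure three dependent variables and report whichever one “works”, your real false-positive rate is roughly 14%, not the nominal 5%:

```python
import random

# Under the null hypothesis, each test's p-value is uniform on [0, 1].
# If we measure 3 dependent variables and report whichever one "works",
# the chance of at least one p < .05 is 1 - 0.95**3, about 14%, not 5%.
random.seed(42)
ALPHA, N_DVS, N_SIMS = 0.05, 3, 100_000

false_positives = sum(
    any(random.random() < ALPHA for _ in range(N_DVS))
    for _ in range(N_SIMS)
)
rate = false_positives / N_SIMS
print(f"Nominal alpha: {ALPHA:.2f}, actual false-positive rate: {rate:.3f}")
```

The same logic applies to every other researcher degree of freedom – optional stopping, flexible exclusion rules – each extra fork inflates the chance of a spurious “significant” result.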

The solution

There is a solution – pre-registration. Declare in advance the details of your method and your analysis: sample size, exclusion conditions, dependent variables, directional predictions.

You can do this

Pre-registration is easy. There is no single, universally accepted way to do it.

  • You could write your data collection and analysis plan down and post it on your blog.
  • You can use the Open Science Framework to timestamp and archive a pre-registration, so you can prove you made a prediction ahead of time.
  • You can visit AsPredicted.org, which provides a form to complete, helping you structure your pre-registration (and making sure you include all relevant information).
  • “Registered Reports”: more and more journals are committing to publishing pre-registered studies. They review the method and analysis plan before data collection and agree to publish once the results are in (however they turn out).

You should do this

Why do this?

  • Credibility – other researchers (and journals) will know you predicted the results before you got them.
  • You can still do exploratory analysis; pre-registration just makes it clear which is which.
  • It forces you to think about the analysis before collecting the data (a great benefit).
  • You get more confidence in your results.

Further reading


Addendum 14/11/17

As luck would have it, I stumbled across a bunch of useful extra resources in the days after publishing this post.

Notes to support a lightning talk given as part of the Open Science seminar in the Department of Psychology, University of Sheffield on 14/11/17.

Part of a series

  1. Pre-registration
  2. The Open Science Framework

Seminar: Framing Effects in the Field: Evidence from Two Million Bets

Seminar announcement

Framing Effects in the Field: Evidence from Two Million Bets

Friday 8th of December, 1pm, The Diamond LT2

Alasdair Brown, School of Economics, UEA

Abstract: Psychologists and economists have often found that risky choices can be affected by the way that the gamble is presented or framed.  We analyse two million tennis bets over a 6 year period to analyse 1) whether frames are important in a real high-stakes environment, and 2) whether individuals pay a premium in order to avoid certain frames.  In this betting market, the same asset can be traded at two different prices at precisely the same time.  The only difference is the way that the two bets are framed.  The fact that these isomorphic bets arise naturally allows us to examine a scale of activity beyond even the most well-funded experiments.  We find that bettors make frequent mistakes, choosing the worse of the two bets in 29% of cases.  Bettors display a (costly) aversion to the framing of bets as high risk, but there is little evidence of loss aversion.  This suggests that individuals are indeed susceptible to framing manipulations in real-world situations, but not in the way predicted by prospect theory.

Part of the Psychology department seminar series. Tom Stafford is the host.

Please contact me if you’d like to meet with Alasdair.


2016 review

Research. Theme #1: Decision making: Most of the work I’ve done this year hasn’t yet seen the light of day. Our Michael J Fox Foundation funded project using typing as a measure of the strength of habitual behaviour in Parkinson’s Disease continues, and we’ll finish the data analysis next month. Likewise, we should also soon finish the analysis on our project ‘Neuroimaging as a marker of Attention Deficit Hyperactivity Disorder (ADHD)’. Angelo successfully passed his viva (thesis title: “Decision modelling insights in cognition and adaptive decision making”) and takes up a fellowship at Peking University in 2017 (well done Angelo!).

This thread of work, which is concerned with the neural and mechanistic basis of decision making, informs the ‘higher-level’ work I do on decision making, which is preoccupied with bias in decision making and how to address it. This work, done with Jules Holroyd and Robin Scaife, has focussed on the idea of ‘implicit bias’, and what might be done about it. As well as running experiments and doing conceptual analysis, we’ve been developing an intervention on cognitive and implicit bias, which summarises the current state of research and gives some practical advice on avoiding bias in decision making. I’ve done a number of these sessions with judges, which has been a humbling experience: to merely study decision making and then be confronted with a room of professionals who dedicate their time to actually making fair decisions. As with the other projects, much more on this work will hopefully see the light in 2017.

World events have made studying decision making to understand better decisions seem more and more relevant. Here’s a re-analysis of some older data which I completed following the UK’s referendum on leaving the EU in June: Why don’t we trust the experts? (and, relatedly, my thoughts on being a European scholar). Also on this topic, a piece for The Conversation: How to check if you’re in a news echo chamber – and what to do about it.

Journal publications on decision making:

Holroyd, J., Scaife, R., Stafford, T. (in press). Responsibility for Implicit Bias. Philosophy Compass.
Pirrone, A., Azab, H., Hayden, B.Y., Stafford, T. and Marshall, J.A.R. (in press). Evidence for the speed-value trade-off: human and monkey decision making is magnitude sensitive. Decision.
Panagiotidi, M., Overton, P.G., Stafford, T. (in press). Attention Deficit Hyperactivity Disorder-like traits and distractibility in the visual periphery. Perception.
Pirrone, A., Dickinson, A., Gomez, R., Stafford, T. and Milne, E. (in press). Understanding perceptual judgement in autism spectrum disorder using the drift diffusion model. Neuropsychology.
Bednark, J., Reynolds, J., Stafford, T., Redgrave, P. and Franz, E. (2016). Action experience and action discovery in medicated individuals with Parkinson’s disease. Frontiers in Human Neuroscience, 10, 427. DOI 10.3389/fnhum.2016.00427.
Lu, Y., Stafford, T., & Fox, C. (2016). Maximum saliency bias in binocular fusion. Connection Science, 28(3), 258-269.

(catch up on all publications on my scholarly publications page)


Research. Theme #2: Skill and learning

My argument is that games provide a unique data set where participants engage in profound skill acquisition AND the complete history of their skill development is easily recorded. To this end, I have several projects analysing data from games. This new paper: Stafford, T. & Haasnoot, E. (in press). Testing sleep consolidation in skill learning: a field study using an online game. Topics in Cognitive Science. (data + code) is an example of the new kinds of analysis – as well as the new results – which large data from games allow. The paper is an advance on our first work on this data (Stafford & Dewar, 2014), and is a featured project at the Centre for Data on the Mind. I gave a talk about this work at a workshop ‘Innovations in online learning environments: intrapersonal perspectives’, for which there is video (view here: Factors influencing optimal skill learning: data from a simple online game).

I have been analysing a large dataset of chess games (11 million+ games) and presented initial work on this at the Cognitive Science Conference. You can read the paper or see the code, results and commentary in an integrated Jupyter notebook (these are the future). There’s lots more exciting stuff to come out of this data!

Our overview of how the science of skill acquisition can inform the development of sensory prostheses came out: Bertram, C., & Stafford, T. (2016). Improving training for sensory augmentation using the science of expertise. Neuroscience & Biobehavioral Reviews, 68, 234-244 (Talk slides, lay summary).

Also: I wrote for The Conversation about an important review of the literature on the benefits of Brain Training, and I had a great summer student looking at the expertise acquired by Candy Crush players.


Teaching & thinking about teaching: Not as much to report as last year, since I had teaching leave for the autumn semester, as part of our Leverhulme project on bias and blame. At the beginning of the year I taught a graduate discussion class on dual-process theories in psychology and neuroscience, which was very worthwhile, but didn’t leave much digital trace. Whilst I’ve not been teaching classes, I have been thinking about teaching, publishing this in The Guardian: The way you’re revising may let you down in exams – and here’s why (my third piece in the Guardian on learning), this on NPJ ‘Science of Learning’ Community: Do students know what’s good for them? (I’m proud of this one, mainly for the quality of the outgoing links it includes), and this, for The Conversation, on an under-noted consequence of testing in education: Good tests make children fail – here’s why.

I also used some informal platforms (i.e. blogging etc) to produce some guidance for psychology students: this on what I call the Hierarchy of critique, and this on the logic of student experiment reports, and I tried to provoke some discussion around this: I don’t read students’ drafts. Should I?

I did some talks for graduate students (follow the links for slides): Adventures in research blogging and Expanding your writing portfolio.


Peer reviewing: I feel this should be recorded somewhere, since peer reviewing is a part of an academic’s job which requires the pinnacle of their expertise and experience, yet is generally unrecognised and unrewarded. This year I helped the scholarly community out by doing grant reviews for the Medical Research Council and the Biotechnology and Biological Sciences Research Council and manuscript reviews for Trends in Cognitive Sciences, Memory and Cognition, Connection Science, Canadian Journal of Philosophy, Journal of European Psychology Students, International Journal of Communication and the Annual Cognitive Science Society conference. From 1st of January I will only be reviewing papers which make their data freely available, as part of the Peer Reviewers’ Openness Initiative.


That’s mostly it, bar a few things I couldn’t fit under these four headlines. Thanks to everyone who helped with the work in 2016 – getting to talk, write and pursue ideas with sincere, intelligent, kind and interesting people is the best part of the job.

(Previously: 2015 review)


Using Candy Crush to study perceptual learning

This is a guest post by Gabriela Raleva, who did a summer project with me in between her first and second years of the undergraduate degree.

Visual learning refers to enhanced sensitivity to visually relevant stimuli. The affective value of stimuli (e.g. reward, punishment) has been proposed to enhance action selection via instrumental learning (Hickey et al., 2010; Wilbertz et al., 2014). The majority of studies of perceptual learning adopt an artificial approach, training participants in the lab for many sessions before testing them (although see Bavelier et al., 2012). We used Candy Crush game sets because the game represents an ideal platform for natural visual learning, and assessed the performance of experienced players who have willingly engaged in many hours of practice, as well as that of non-players.

In a between-subject design participants completed a visual search task, searching for a uniquely-shaped Candy Crush target among a number of nonhomogeneous Candy Crush distractors. Targets were divided into 4 conditions: neutral value, reward-value, punishment value and control condition (Figure 1). Reaction time for detection of targets was assessed and compared for each condition.

Figure 1. Target-present sets in all 4 conditions: (a) Neutral condition showing a single green candy (circled) which is a target of neutral consequences in the game, (b) Reward-associated containing a single multi-coloured bomb candy (circled) which is of positive consequences in the game, (c) Punishment-associated condition containing a blue bomb (circled) of negative consequences in the game, and (d) Control condition

The results suggest that players and non-players showed largely comparable responses in their detection of control, neutral, negative and positive stimuli (Figure 2). This indicates that the results are not due to self-selection bias. However, players were 35% faster at detecting rewarding targets than neutral targets. In contrast, non-players were on average 5% slower in detecting rewarding compared to neutral targets. Our analyses indicate that players show a consistent pattern of greater rewarding/neutral reaction time ratios than non-players, consistent with the idea that features of affective-associated stimuli facilitate their perception in visual processing. Furthermore, the only Candy Crush condition in which players showed significantly slower reaction times than non-players was the neutral condition. One possible explanation is that players have developed better visual templates with regard to the game (Bejjanki et al., 2014) and therefore exhibit a visually holistic mode of performance (Green & Bavelier, 2003). Players may have learnt to quickly recognize patterns beneficial for the game, such as the rewarding bomb. The green neutral candies compose a beneficial pattern only when they can be combined (at least 3), so it is possible that players learnt to recognize a single neutral candy as a distractor and thus suppress it more effectively. Such holistic expert performance is characterized by “chunking” – a process during which individual constituents are processed as a single perceptual or cognitive entity.

Figure 2. Mean reaction time of players and non-players in all 4 conditions as obtained by the target detection task.

Bavelier, D., Green, C. S., Pouget, A., & Schrater, P. (2012). Brain plasticity through the life span: learning to learn and action video games. Annual Reviews of Neuroscience, 35, 391–416.
Bejjanki, V. R., Zhang, R., Li, R., Pouget, A., Green, C. S., Lu, Z.-L., & Bavelier, D. (2014). Action video game play facilitates the development of better perceptual templates. Proceedings of the National Academy of Sciences, 111(47), 16961–6.
Green, C. S., & Bavelier, D. (2003). Action video game modifies visual selective attention. Nature, 423(6939), 534-537.
Hickey, C., Chelazzi, L., & Theeuwes, J. (2010). Reward guides vision when it’s your thing: Trait reward-seeking in reward-mediated visual priming. PLoS ONE, 5(11), 1–5.
Wilbertz, G., Van Slooten, J., & Sterzer, P. (2014). Reinforcement of perceptual inference: Reward and punishment alter conscious visual perception during binocular rivalry. Frontiers in Psychology, 5, 1–9.


Postscript from Tom:

One of the great difficulties of studying learning is that true expertise only comes after many, many hours of practice. Psychologists often study perceptual learning in the lab, with participants training on specific stimuli for a few hours. The results of this project demonstrate the potential of studying people who have already given themselves hundreds of hours of training with the specific stimuli, as a side effect of a game they’ve played. To illustrate how large the effect is, I plotted each participant’s Candy Crush level (0 for non-players) against the ratio of rewarding/punishing stimulus RT to neutral stimulus RT. Even with the small number of participants the effect is clear – Candy Crush players are faster for the value-relevant stimuli relative to neutral stimuli; non-players aren’t.

The black line shows where participants’ reaction times should fall if they are equally fast on the valuable Candy Crush stimuli as on the neutral Candy Crush stimuli.

[Figure: individual differences – Candy Crush level plotted against RT ratio]


internship: Public Engagement Coordinator

If you are a recent graduate of the University of Sheffield, then you can apply for this paid internship as Public Engagement Coordinator, working with me in the Department of Psychology. Here’s a bit about what we want to do:

Help the Department of Psychology engage with the public. Our vision is to arrange, promote, run and record a series of "TED"-style talks for Psychology at Sheffield. These will be our chance to reach hundreds of college-age students – both those in our majority recruitment demographic and those from under-represented backgrounds.

And here’s a bit about who we’re looking for:

The ideal candidate will be enthusiastic for what Universities can offer society, and vice versa. You will have an appreciation of the concerns of applicants to the University – especially those from “widening participation” backgrounds – and be capable of keeping track of a complex set of tasks. In this internship you will learn to organise and promote large events, to put scholarship in a wider context and see how issues in people’s everyday lives connect to the work we do in the Department of Psychology. You will practice writing in an engaging and accessible way and get to work with people across the University and the region.

It’s six months, full time, paid. Here are links for the overview and job description, but to apply you need to go to careerconnect.sheffield.ac.uk and enter reference UOS014640. To be eligible you need to have graduated from a University of Sheffield undergraduate degree in 2016. Closing date: 4th of November 2016.

Any questions, feel free to get in touch with me.


CogSci @ Sheffield

This mailing list, CogSci at Sheffield, supports the ad hoc network of researchers at the University of Sheffield who are interested in Cognitive Science. You can sign yourself up and receive notifications about events happening across the University (but mostly emanating from Psychology, Philosophy, Linguistics and Computer Science).


Cognitive Science Conference, Philadelphia

This week, 10-13th August, I am at the Annual Cognitive Science Society Conference in Philadelphia. While there I am presenting work which uses a large data set on chess players and their games.

The phenomenon of ‘stereotype threat’ has previously been found in many domains: people’s performance suffers when they are made more aware of their identity as members of a social group which is expected to perform poorly. For example, there is a stereotype that men are better at maths, and stereotype threat has been reported for female students taking maths exams when their identity as women is emphasised, even if only subtly (by asking them to declare their gender at the top of the exam paper, for example). This effect has been reported for chess, which is heavily male dominated, especially among top players. However, the reports of stereotype threat in chess, like those in many other domains, often rely on laboratory experiments with small numbers of people (around 100 or fewer).

My data are more than 11 million games of chess: every tournament game recorded with FIDE, the international chess authority, between 2008 and 2015. Using these data, I asked whether it was possible to observe stereotype threat in this real-world setting. If the phenomenon is real, however small it is, I should be able to observe it playing out in this data – the sheer number of games I can analyse gives me a very powerful statistical lens.
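As a rough aside on what that statistical lens buys you – this is a back-of-the-envelope sketch of my own, assuming a simple two-group comparison with the standard normal approximation, not the paper's actual analysis – the smallest standardized effect detectable at 80% power shrinks with the square root of the sample size:

```python
from math import sqrt
from statistics import NormalDist

def min_detectable_effect(n_per_group, alpha=0.05, power=0.80):
    """Smallest standardized mean difference (Cohen's d) a two-sample
    comparison can detect, via the normal-approximation power formula."""
    z = NormalDist()
    return (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) * sqrt(2 / n_per_group)

# A typical lab study versus games-scale data
for n in (50, 5_000, 5_000_000):
    print(f"n per group = {n:>9,}: minimum detectable d ~ {min_detectable_effect(n):.4f}")
```

With millions of games, even tiny effects should be visible if they exist, which is what makes a null result at this scale informative.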

The answer is no: there is no stereotype threat in international chess. To see how I determined this, and what I think it means, you can read the paper here, or see the Jupyter notebook which walks you through the key analysis. And if you’re at the conference, come and visit the poster (as PDF, as PNG). Jeff Sonas, who compiled the data, has been kind enough to allow me to make available a 10% sample of the data (still over 1 million games), and this, along with all the analysis code for the paper, is available via the Open Science Framework.

There’s lots more to come from this data – as well as analysing performance related effects, the data affords a fantastic opportunity to look at learning curves and try to figure out what affects how players’ performance changes over time.


We are European Scholars

I am British, but consider myself a European scholar. At the start of my time as a lecturer at the University I was lucky enough to be part of an EU funded project on learning in robots. With that project I worked with brilliant colleagues around the EU, as well as being able to do the piece of work I regard as my single most important scientific contribution. It was EU projects like this which inspired the foundation of Sheffield Robotics, a collaboration between the two Universities in Sheffield which aims to define the future of a technology vital to manufacturing in the UK. Two British PhD students I supervised during this project went on to start a business, and a third – from Cyprus – did work with me that led to a major grant from the Michael J Fox Foundation for research into Parkinson’s Disease – bringing US funding into the UK to allow me to work with colleagues in Spain and the US on a health issue that will affect 1 in 500 of us: over 120,000 people in the UK.

Since then I have had two more brilliant PhD students from the EU. One, from Greece, completed a PhD on differences in sensory processing in ADHD and has since gone to work in industry, applying her research skills for a company based in Manchester. The other, an Italian, is currently writing up, and considering job opportunities from around the world. My hope is that we’ll be able to keep him in the UK, where he’ll be able to continue to contribute to the research environment that makes British Universities the best in the world.

The UK needs Universities to train our young people, to contribute to public life and to investigate the world around us and within us. And the UK’s Universities need Europe.

I am a European scholar. We are European Scholars at the University of Sheffield. Without our European links and colleagues we, and the UK, would be immeasurably impoverished.

Written in support of yesterday’s call by the University of Sheffield’s Vice-Chancellor


Why don’t we trust the experts?

During the EU referendum debate a friend of mine, who happens to be a Professor of European Law, asked in exasperation why so much of the country seems unwilling to trust experts on the EU. When we want a haircut, we trust the hairdresser. If we want a car fixed, we trust the mechanic. Now, when we need informed comment on the EU, why don’t we trust people who have spent a lifetime studying the topic?

The question rattled around in my mind, until I realised I had actually done some research which provides part of the answer. During my post-doc with Dick Eiser we did a survey of people who lived on land which may have been contaminated with industrial pollutants. We asked people with houses on two such ‘brownfield’ sites, one in the urban north, and one in the urban south, who they trusted to tell them about the possible risks.

One group we asked about was scientists. The householders answered on a scale from 1 to 5 (5 being the most trust). Here’s the distribution of answers:

[Figure: distribution of trust ratings for scientists]

As you can see, scientists are highly trusted. Compare with the ratings of property developers:

[Figure: distribution of trust ratings for property developers]

We also asked our respondents how they rated different communicators on various dimensions. One dimension was expertise about this topic. As you’d expect, scientists were rated as highly expert in the risks of possible brownfield pollution. We also asked people whether they believed the different potential communicators of risk had their interests at heart, and whether they would be open about their beliefs about the risks. With this information it is possible, statistically, to analyse not just who is trusted, but why they are trusted.

The results, published in Eiser et al. (2009), show that expertise is not the strongest determinant of who is trusted. Instead, people trust those who they believe have their best interests at heart. This is three to four times more important than perceived expertise (fig. 3 on p.294, for those reading along with the paper in hand).

One way of making this clear is to pick out the people who have high trust in scientists (a rating of 4 or 5) and compare them to people who have low trust (a rating of 1 or 2). The perceptions of their expertise differ, but not by much:

[Figure: perceived expertise of scientists, by high- vs low-trust respondents]

Even those who don’t trust scientists recognise that they know about pollution risks. In other words, their actual expertise isn’t in question.

The difference shows up in whether scientists are seen to have the householders’ interests at heart:

[Figure: perceived shared interests of scientists, by high- vs low-trust respondents]

So those who didn’t trust the scientists tended to believe that the scientists don’t care about them.

The difference is made clear by one group that was highly trusted to communicate the risks of brownfield land – friends and family:

[Figure: distribution of trust ratings for friends and family]

Again, the same relationship between variables held. Trust in friends and family was driven more by a perception of shared interests than by perceptions of expertise. Remember, this isn’t a measure of generalised trust, but specifically of trust in their communications about pollution risks. Maybe your friends and family aren’t experts in pollution risks, but they surely have your best interests at heart, and that is why they are nearly as trusted on this topic as scientists, despite their lack of expertise.

So here we have a partial answer to why experts aren’t trusted. They aren’t trusted by people who feel alienated from them. My reading of this study would be that it isn’t that we live in a ‘post-fact’ political climate. Rather, it is that attempts to take facts out of their social context won’t work. For me and my friends it seems incomprehensible to ignore the facts, whether about the science of vaccination, or the law and economics of leaving the EU. But my friends and I do very well from the status quo – the Treasury, the Bar, the University work well for us. We know who these people are, we know how they work, and we trust them because we feel they are working for us, in some wider sense. People who voted Leave do suffer from a lack of trust, and my best guess is that this is a reflection of a belief that most authorities aren’t on their side, not because they necessarily reject their status as experts.

The paper is written up as: Eiser, J. R., Stafford, T., Henneberry, J., & Catney, P. (2009). “Trust me, I’m a Scientist (Not a Developer)”: Perceived Expertise and Motives as Predictors of Trust in Assessment of Risk from Contaminated Land. Risk Analysis, 29(2), 288-297. I’ve just made the data and analysis for this post available here.
