Replication and Reproducibility Event II: Moving Psychological Science Forward

On Friday 26th of January at the Royal Society there was a series of talks on how psychology could progress as a science, with an emphasis on replication and reproducibility. I’m going to summarise the key points from the individual talks and make a few comments. A collection of all the individual videos of the talks can be found here.

Professor Daryl O’Connor – “Psychological Science as a Trailblazer for Science?”

  • The Reproducibility Project (2015) started this movement off.
  • The many positive developments that arose from the findings of the Reproducibility Project (2015), e.g. the Centre for Open Science, the Open Science Framework, and Registered Reports with their associated format the Registered Replication Report, are very encouraging.
  • The argument can be made that the field as a whole is improving.
  • The discussion on social media over the Boston Globe article is an example of constructive disagreement, e.g. between Amy Cuddy and Brian Nosek.
  • But is there some element of researchers forming echo chambers among like-minded peers over the perennial tone debate?
  • A model of science as behaviour. Behaviour occurs as an interaction between three necessary conditions: capability (an individual’s psychological and physical capacity to engage in the activity concerned), motivation (reflective and automatic processes that increase or decrease your desire to engage in the behaviour), and opportunity (all the factors outside the individual which make the behaviour possible or prompt it). These conditions affect and are affected by behaviour.
  • Other fields have taken note of what psychology has done and are learning from us.
  • The “revolutionaries” have improved scientific practice and triggered new ways of working.
  • All levels of science need to be targeted, including methodologies and incentive structures.
  • It is a very exciting time to be a scientist, especially an Early Career Researcher (ECR), because of all the changes.


Some people have argued they first started taking notice of the problems in psychology in 2011, with the publication of Simmons, Nelson, & Simonsohn (2011) and Bem (2011). Regardless of an individual researcher’s starting point, the field has made great strides in a short space of time. Of course, the calls to action have been ringing out for years, but actual change seems to be occurring, which is highly encouraging. And I agree with O’Connor’s point that it has mainly come about because of the actions of those branded as “revolutionaries”. This isn’t to dismiss a genuine discussion about how these criticisms should be handled and that sometimes they can go too far. I think having that debate is important, as it keeps the process in check. Progress isn’t going to be painless, though this pain should be minimised. As for social media, I generally think it has been a force for good, with increased visibility and opportunities for interaction for those who typically take a back seat in discussions (though old power structures are still highly relevant and should be challenged). It is also almost universally in favour of measures to improve replicability and methodological rigour, so people can see positive examples of these measures and be rewarded via compliments. read more

Why do people leave academia?- The results

On social media, there are often discussions about why psychologists leave academia. Some argue that the new culture of criticism (where overly harsh criticism is levelled against those who make errors and accusations of malfeasance are rife) makes academics, especially younger ones, change profession. They give examples of high-profile cases where researchers have made errors or merely used previously accepted standards of practice, and the obloquy they’ve received when their results don’t hold up. Others provide counterexamples of former colleagues who grew frustrated at their inability to replicate supposedly rock-solid findings or who had a crisis of confidence about the validity of vast swathes of the literature. But one thing is lacking from this discussion. read more

I’m a non-methodologist, does it matter if my definition is slightly wrong?

A few weeks ago, Nature published an article summarising the various measures and counter-measures suggested to improve statistical inferences and science as a whole (Chawla, 2017). It detailed the initial call to lower the significance threshold from 0.05 to 0.005 (Benjamin et al., 2017) and the paper published in response (Lakens et al., 2017). It was a well-written article, with one minor mistake: an incorrect definition of a p-value.

The two best sources for the correct definition of a p-value (along with its implications and examples of how a p-value can be misinterpreted) are Wasserstein & Lazar (2016) and its supplementary paper Greenland et al. (2016). A p-value has been defined as: “a statistical summary of the compatibility between the observed data and what we would predict or expect to see if we knew the entire statistical model (all the assumptions used to compute the P value) were correct” (Greenland et al., 2016). To put it another way, it tells us the probability of finding the data you have, or more extreme data, assuming the null hypothesis (along with all the other assumptions about randomness in sampling, treatment, assignment, loss, and missingness, the study protocol, etc.) is true. The definition provided in the Chawla article is incorrect because it states “the smaller the p-value, the less likely it is that the results are due to chance”. This gets things backwards: the p-value is a probability deduced from a set of assumptions, e.g. that the null hypothesis is true, so it can’t also tell you the probability of that assumption at the same time. Joachim Vandekerckhove and Ken Rothman give further evidence as to why this definition is incorrect: read more
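The distinction above can be made concrete with a minimal simulation (my own illustration, not from the post being summarised): when the null model is exactly true, p-values are uniformly distributed, so a small p-value quantifies the data’s incompatibility with the model rather than the probability that the results “are due to chance”.

```python
import math
import random
import statistics

random.seed(1)

def p_value_two_sided(sample, mu0=0.0):
    """Two-sided one-sample z-test p-value, assuming a known SD of 1.
    This is the probability, computed under the null model, of data
    as extreme as or more extreme than what was observed."""
    n = len(sample)
    z = (statistics.mean(sample) - mu0) * math.sqrt(n)
    # P(|Z| >= |z|) for a standard normal Z, via the complementary
    # error function: 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2))
    return math.erfc(abs(z) / math.sqrt(2))

# Simulate many experiments in which the null hypothesis is exactly true
pvals = [p_value_two_sided([random.gauss(0, 1) for _ in range(30)])
         for _ in range(2000)]

# Under a true null, p-values are uniform on [0, 1]: p < 0.05 occurs at
# roughly its nominal 5% rate. The p-value is a statement about the data
# given the model, not about the probability that the model is true.
print(sum(p < 0.05 for p in pvals) / len(pvals))
```

Running this prints a proportion close to 0.05, which is exactly what the definition predicts when all the model’s assumptions hold.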

Assessing the validity of labs as teaching methods and controlling for confounds

Anyone who has taken one of the harder sciences at university, or knows someone who has, will know what “labs” are. You are given practical assignments to complete that are meant to consolidate what you’ve learnt in the lecture/seminar. They are almost ubiquitous in physics after becoming widespread by the beginning of the 20th century (Meltzer & Otero, 2015), as they are for chemistry (Layton, 1990) and biology (Brownell, Kloser, Fukami, & Shavelson, 2012). Their value is widely assumed to have been demonstrated multiple times across the hard sciences (Finkelstein et al., 2005; Blosser, 1990), but questions have occasionally been raised as to their effectiveness (Hofstein & Lunetta, 2004). A new paper by Holmes, Olsen, Thomas, & Wieman (2017) sought to test whether participating in labs actually improved physics students’ final grades. Across three American universities they tested three questions: what is the impact of labs on associated exam performance; did labs selectively impact the learning of physics concepts; and are there short-term learning benefits that are “washed out on the final exams”? read more

Why do psychologists leave academia?

Every once in a while in the psychology sphere of social media there’s a discussion about why people leave academia. This talking point often comes up in the context of “the open science movement” and whether more academics leave because of the culture of criticism or because of the lack of replicability of some findings. People who have left academia offer their reasons, and those still in academia offer anecdotes about why someone they know left. But what seems to be lacking is some actual data. So I’ve written this survey with the hope of shedding some light on the situation. It’s for people who have considered leaving, or have actually left, academia or practicing psychology (educational, clinical, etc.). But this survey will only be useful if you share it with people you know who have left. So please share the survey on social media or relevant mailing lists, but especially link it directly to people you know who have left psychology. I’m writing this blog post so those who are subscribed to the methods blog feed will see this survey, hopefully increasing the number of respondents. Thank you for your help. read more

[Guest post] How Twitter made me a better scientist

I’m a big fan of Twitter and have learned so much from the people on there, so I’m always happy to share someone singing its praises. This article was written by Jean-Jacques Orban de Xivry for the University of Leuven’s blog. He talks about how he uses it to find out about interesting papers, along with a whole host of other benefits. The article can be found here. His Twitter account is @jjodx.

Prediction markets and how to power a study

Do you think you know which studies will replicate? Do you want to help improve the replicability of science? Do you want to make some money? Then take part in this study on predicting replications!

But why are researchers gambling on whether a study will successfully replicate (defined as finding “an effect in the same direction as the original study and a p-value<0.05 in a two-sided test” for this study)? Because there is some evidence to suggest that a prediction market can be a good predictor of replicability, even better than individual psychology researchers. read more
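Since the post’s title also mentions how to power a study, here is a standard normal-approximation sample-size calculation for a two-group comparison of means (a sketch of the textbook formula, not code from the post; the effect sizes below are hypothetical). Replications are often powered for an effect smaller than the original estimate, since published effects tend to be inflated.

```python
import math

def normal_sf(x):
    """Survival function of the standard normal distribution."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def normal_isf(p, lo=-10.0, hi=10.0):
    """Inverse survival function, found by bisection (sf is decreasing)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if normal_sf(mid) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def n_per_group(d, alpha=0.05, power=0.90):
    """Per-group sample size for a two-sided, two-sample comparison of
    means with standardised effect size d (normal approximation):
    n = 2 * (z_{alpha/2} + z_{beta})^2 / d^2, rounded up."""
    z_a = normal_isf(alpha / 2)   # critical value for the two-sided test
    z_b = normal_isf(1 - power)   # z for the desired power
    return math.ceil(2 * ((z_a + z_b) / d) ** 2)

# Powering at the original (hypothetical) estimate vs. half that effect:
print(n_per_group(0.5))   # d = 0.5  -> 85 per group
print(n_per_group(0.25))  # d = 0.25 -> 337 per group
```

Note how halving the assumed effect size roughly quadruples the required sample, which is why replications powered for plausibly smaller effects need to be much larger than the originals.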

Credit where credit is due

There has been a lot of tension in the psychological community recently. Replications are becoming more prevalent, and many of them are finding much smaller effects or none at all. This raises a lot of uncomfortable questions: is the studied effect real? How was it achieved in the first place? Were less-than-honest methods used (p-hacking, etc.)? The original researchers can sometimes feel that these questions go beyond valid criticism to full-blown attacks on their integrity and/or their abilities as scientists. This has led to heated exchanges and some choice pejoratives being thrown about by both “sides”. read more

The replication crisis, context sensitivity, and the Simpson’s (Paradox)

The Reproducibility Project: Psychology:

The Reproducibility Project: Psychology (OSC, 2015) was a huge effort by many different psychologists across the world to assess whether the effects of a selection of papers could be replicated. It was a response to the growing concern about the (lack of) reproducibility of many psychological findings, with some high-profile failed replications being reported (Hagger & Chatzisarantis, 2016 for ego-depletion and Ranehill, Dreber, Johannesson, Leiberg, Sul, & Weber, 2015 for power-posing). The authors reported that of the 100 replication attempts, only ~35 were successful. This provoked a strong reaction not only in the psychological literature but also in the popular press, with many news outlets reporting on it. read more

In defence of preregistration

This post is a response to “Pre-Registration of Analysis of Experiments is Dangerous for Science” by Mel Slater (2016). Preregistration is stating what you’re going to do and how you’re going to do it before you collect data (for more detail, read this). Slater gives a few examples of hypothetical (but highly plausible) experiments and explains why preregistering the analyses of the studies (not preregistration of the studies themselves) would not have worked. I will reply to his comments and attempt to show why he is wrong. read more