Replication and Reproducibility Event II: Moving Psychological Science Forward

On Friday 26 January, there was a series of talks at the Royal Society on how psychology could progress as a science, with an emphasis on replication and reproducibility. I’m going to summarise the key points from the individual talks and make a few comments. A collection of the videos of the individual talks can be found here.

Professor Daryl O’Connor – “Psychological Science as a Trailblazer for Science?”

  • The Reproducibility Project (2015) started this movement off.
  • The number of positive developments that arose from the findings of the Reproducibility Project (2015) is very encouraging: e.g. the Centre for Open Science, the Open Science Framework, and Registered Reports with their associated format, the Registered Replication Report, among other things.
  • The argument can be made that the field as a whole is improving.
  • The discussion on social media over the Boston Globe article is an example of constructive disagreement, e.g. between Amy Cuddy and Brian Nosek.
  • But is there some element of researchers forming echo chambers among like-minded peers over the perennial tone debate?
  • Science can be viewed through a behaviour model. Behaviour occurs as an interaction between three necessary conditions: capability (an individual’s psychological and physical capacity to engage in the activity concerned), motivation (reflective and automatic processes that increase or decrease the desire to engage in the behaviour), and opportunity (all the factors outside the individual that make the behaviour possible or prompt it). These conditions affect and are affected by behaviour.
  • Other fields have taken note of what psychology has done and are learning from us.
  • The “revolutionaries” have improved scientific practice and triggered new ways of working.
  • All levels of science need to be targeted, including methodologies and incentive structures.
  • It is a very exciting time to be a scientist, especially an Early Career Researcher (ECR), because of all the changes.

Comments:

Some people have argued they first started taking notice of the problems in psychology in 2011, with the publication of Simmons, Nelson, & Simonsohn (2011) and Bem (2011). Regardless of an individual researcher’s starting point, the field has made great strides in a short space of time. Of course, the calls to action have been ringing out for years, but actual change now seems to be occurring, which is highly encouraging. And I agree with O’Connor’s point that it has mainly come about because of the actions of those branded as “revolutionaries”. This isn’t to dismiss the genuine discussion about how these criticisms should be handled, and the fact that they can sometimes go too far. I think having that debate is important, as it keeps the process in check. But progress isn’t going to be painless, though this pain should be minimised. As for social media, I generally think it has been a force for good, with increased visibility and chances of interaction for those who typically take a back seat in discussions (though old power structures are still highly relevant and should be challenged). Social media is also almost universally in favour of measures to improve replicability and methodological rigour, so people can see positive examples of these measures and be rewarded with compliments.

Why you should think of statistical power as a curve

Statistical power is defined as “the probability of correctly rejecting H0 when a true association is present”, where H0 is the null hypothesis, often an association or effect size of zero (Sham & Purcell, 2014). It is determined by the effect size you want to detect, the size of your sample (N), and the alpha level, which is typically 0.05 but can be set to whatever you want (Lakens et al., 2017). I always thought of power as a static value for your study.

But this is wrong.
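
To make the contrast concrete, here is a minimal sketch of the two views using Python’s statsmodels. All of the numbers (sample size, alpha level, effect sizes) are purely illustrative choices of mine, not values taken from any of the papers cited above:

```python
# A minimal sketch of power as a curve rather than a single number,
# using statsmodels' power calculator for an independent-samples t-test.
# All parameter values here are hypothetical, chosen for illustration.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# The "static" view: one effect size, one sample size, one alpha -> one number.
single_power = analysis.power(effect_size=0.5, nobs1=50, alpha=0.05)
print(f"Power at d = 0.5, n = 50 per group: {single_power:.2f}")

# The "curve" view: hold n and alpha fixed and let the effect size vary.
# Power is a function of the (unknown) true effect size, not a fixed
# property of the study design alone.
for d in [0.1, 0.2, 0.3, 0.5, 0.8]:
    p = analysis.power(effect_size=d, nobs1=50, alpha=0.05)
    print(f"Power at d = {d:.1f}, n = 50 per group: {p:.2f}")
```

Because the true effect size is never known in advance, a study really has a whole power function; quoting a single figure like “80% power” always implicitly conditions on one assumed effect size.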

I’m a non-methodologist, does it matter if my definition is slightly wrong?

A few weeks ago, Nature published an article summarising the various measures and counter-measures suggested to improve statistical inferences and science as a whole (Chawla, 2017). It detailed the initial call to lower the significance threshold from 0.05 to 0.005 (Benjamin et al., 2017) and the paper published in response (Lakens et al., 2017). It was a well-written article, with one minor mistake: an incorrect definition of the p-value.

The two best sources for the correct definition of a p-value (along with its implications and examples of how a p-value can be misinterpreted) are Wasserstein & Lazar (2016) and its supplementary paper, Greenland et al. (2016). A p-value has been defined as: “a statistical summary of the compatibility between the observed data and what we would predict or expect to see if we knew the entire statistical model (all the assumptions used to compute the P value) were correct” (Greenland et al., 2016). To put it another way, it tells you the probability of obtaining your data, or more extreme data, assuming the null hypothesis (along with all the other assumptions about randomness in sampling, treatment assignment, loss and missingness, the study protocol, etc.) is true. The definition provided in the Chawla article is incorrect because it states that “the smaller the p-value, the less likely it is that the results are due to chance”. This gets things backwards: the p-value is a probability deduced from a set of assumptions (e.g. that the null hypothesis is true), so it cannot also tell you the probability of those assumptions at the same time. Joachim Vandekerckhove and Ken Rothman give further evidence as to why this definition is incorrect.
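
To illustrate that definition with a sketch (the numbers below are made up, and the simulation approach is my own illustration rather than anything from the papers cited), a p-value can be approximated by generating data under the full null model and counting how often a result at least as extreme as the observed one appears:

```python
# A minimal sketch of what a p-value is: the probability, computed under the
# null hypothesis AND every other modelling assumption, of a test statistic
# at least as extreme as the one observed. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

n = 30
observed_mean = 0.4  # hypothetical observed sample mean

# The full "statistical model": H0 (true mean = 0) plus auxiliary assumptions
# (normal errors, known SD = 1, independent sampling, no missingness, etc.).
sims = rng.normal(loc=0.0, scale=1.0, size=(100_000, n))
sim_means = sims.mean(axis=1)

# Two-sided p-value: how often does data generated assuming the model is
# true produce a mean at least as extreme as the observed one?
p_value = np.mean(np.abs(sim_means) >= abs(observed_mean))
print(f"Approximate p-value: {p_value:.3f}")

# Note what this is NOT: the probability that H0 is true. H0 was assumed
# true (with probability 1) in generating every simulated dataset.
```

Every simulated dataset here was generated with the null model assumed true, which is why the resulting probability describes the data given the model, not the model given the data, and why the Chawla definition gets it backwards.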