Why do people leave academia? The results

On social media, there are often discussions about why psychologists leave academia. Some argue that the new culture of criticism (where overly harsh criticism is leveled against those who make errors, and accusations of malfeasance are rife) makes academics, especially younger ones, change profession. They give examples of high-profile cases where researchers have made errors or merely used previously accepted standards of practice, and of the obloquy they’ve received when their results don’t hold up. Others provide counter-examples of former colleagues who grew frustrated at their inability to replicate supposedly rock-solid findings or who had a crisis of confidence about the validity of vast swathes of the literature. But one thing is lacking from this discussion. read more

Notes on Paul Meehl’s “Philosophical Psychology Session” #06

These are the notes I made whilst watching the video recording of Paul Meehl’s philosophy of science lectures. This is the sixth episode (a list of all the videos can be found here). Please note that these posts are not designed to replace or be used instead of the actual videos (I highly recommend you watch them). They are to be read alongside them to help you understand what was said. I also do not include everything that he said (just the main/most complex points).

A core postulate is one that is found in every derivation chain, or one that consists only of core concepts. A core concept is one that is relied on, at least implicitly, in every derivation chain in the hard core. read more

I’m a non-methodologist, does it matter if my definition is slightly wrong?

A few weeks ago, Nature published an article summarising the various measures and counter-measures suggested to improve statistical inferences and science as a whole (Chawla, 2017). It detailed the initial call to lower the significance threshold from 0.05 to 0.005 (Benjamin et al., 2017) and the paper published in response (Lakens et al., 2017). It was a well-written article with one minor mistake: an incorrect definition of a p-value.

The two best sources for the correct definition of a p-value (along with its implications and examples of how a p-value can be misinterpreted) are Wasserstein & Lazar (2016) and its supplementary paper, Greenland et al. (2016). A p-value has been defined as: “a statistical summary of the compatibility between the observed data and what we would predict or expect to see if we knew the entire statistical model (all the assumptions used to compute the P value) were correct” (Greenland et al., 2016). To put it another way, it tells us the probability of obtaining the observed data, or more extreme data, assuming that the null hypothesis (along with all the other assumptions about randomness in sampling, treatment assignment, loss, and missingness, the study protocol, etc.) is true. The definition provided in the Chawla article is incorrect because it states that “the smaller the p-value, the less likely it is that the results are due to chance”. This gets things backwards: the p-value is a probability deduced from a set of assumptions (e.g. that the null hypothesis is true), so it cannot also tell you the probability of those assumptions at the same time. Joachim Vandekerckhove and Ken Rothman give further evidence as to why this definition is incorrect: read more
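One way to see the point concretely is a quick simulation (my own illustrative sketch, not from any of the papers cited above): generate many experiments where the null hypothesis really is true, and note that the p-values are uniformly distributed, with small p-values occurring at exactly their nominal rate. The p-value is computed *assuming* the null model, so it cannot simultaneously measure how probable that model is.

```python
# Illustrative sketch: under a true null hypothesis, p-values are uniform,
# so "p < 0.05" occurs about 5% of the time by construction. This shows the
# p-value is a statement about data given the model, not about the model.
import math
import random
import statistics

def two_sample_p(x, y):
    """Approximate two-sided p-value for a difference in means using a
    z-test (adequate for illustration with moderate sample sizes)."""
    se = math.sqrt(statistics.variance(x) / len(x) +
                   statistics.variance(y) / len(y))
    z = (statistics.mean(x) - statistics.mean(y)) / se
    # two-sided p from the standard normal CDF, via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
pvals = []
for _ in range(2000):
    # the null is true by construction: both groups drawn from N(0, 1)
    x = [random.gauss(0, 1) for _ in range(50)]
    y = [random.gauss(0, 1) for _ in range(50)]
    pvals.append(two_sample_p(x, y))

# Roughly 5% of p-values fall below 0.05, even though chance alone
# produced every single dataset
print(sum(p < 0.05 for p in pvals) / len(pvals))
```

If a small p-value meant “the results are probably not due to chance”, this simulation would be impossible: here every result is due to chance, yet small p-values still appear at their expected rate.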

Assessing the validity of labs as teaching methods and controlling for confounds

Anyone who has taken one of the harder sciences at university, or knows someone who has, will know what “labs” are. You are given practical assignments to complete that are meant to consolidate what you’ve learnt in the lecture/seminar. They are almost ubiquitous in physics, having become widespread by the beginning of the 20th century (Meltzer & Otero, 2015), as they are in chemistry (Layton, 1990) and biology (Brownell, Kloser, Fukami, & Shavelson, 2012). Their value is widely assumed to have been demonstrated multiple times across the hard sciences (Finkelstein et al., 2005; Blosser, 1990), but questions have occasionally been raised as to their effectiveness (Hofstein & Lunetta, 2004). A new paper by Holmes, Olsen, Thomas, & Wieman (2017) sought to test whether participating in labs actually improved physics students’ final grades. Across three American universities they tested three questions: what is the impact of labs on associated exam performance; do labs selectively impact the learning of physics concepts; and are there short-term learning benefits that are “washed out on the final exams”? read more

Why do psychologists leave academia?

Every once in a while in the psychology sphere of social media there’s a discussion about why people leave academia. This talking point often comes up in the context of “the open science movement” and whether more academics leave because of the culture of criticism or because of the lack of replicability of some findings. People who have left academia offer their reasons, and people who are still in it give anecdotes about why someone they know left. But what seems to be lacking is some actual data. So I’ve written this survey with the hope of shedding some light on the situation. It’s for people who have considered leaving, or have actually left, academia or practising psychology (educational, clinical, etc.). But this survey will only be useful if you share it with people you know who have left. So please share the survey on social media or relevant mailing lists, but especially link it directly to people you know who have left psychology. I’m writing this blog post so those who are subscribed to the methods blog feed will see this survey, hopefully increasing the number of respondents. Thank you for your help. read more

[Guest post] How Twitter made me a better scientist

I’m a big fan of Twitter and have learned so much from the people on there, so I’m always happy to share someone singing its praises. This article was written by Jean-Jacques Orban de Xivry for the University of Leuven’s blog. He talks about how he uses it to find out about interesting papers, along with a whole host of other benefits. The article can be found here. His Twitter account is: @jjodx.

Improving the psychological methods feed

The issue of diversity has once again been raised in relation to online discussions of psychology. Others have talked about why it may happen and the consequences of it. I have nothing to add about those areas, so I’m not going to discuss them. The purpose of this post is to analyse the diversity of my main contribution to social media discussions that I have total control over: the psychological methods blog feed. How many blogs by women are featured? How many non-white authors are there? How many early-career researchers (ECRs) are shared? read more

The best papers and articles of 2016

These are some of the best scientific papers and articles I’ve read this year. They’re in no particular order, and not all of them were written this year. I don’t necessarily agree with all of them. I’ve divided them into categories for convenience.

Current Incentives for Scientists Lead to Underpowered Studies with Erroneous Conclusions by Andrew Higginson and Marcus Munafò. How the current way of doing things in science encourages scientists to run lots of small scale studies with low evidentiary value. read more

Prediction markets and how to power a study

Do you think you know which studies will replicate? Do you want to help improve the replicability of science? Do you want to make some money? Then take part in this study on predicting replications!

But why are researchers gambling on whether a study will successfully replicate (defined, for this study, as finding “an effect in the same direction as the original study and a p-value < 0.05 in a two-sided test”)? Because there is some evidence to suggest that a prediction market can be a good predictor of replicability, even better than individual psychology researchers. read more
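Since the post’s title mentions powering a study, here is a minimal sketch (my own illustration, not taken from the replication project itself) of estimating power by simulation against that exact replication criterion: an effect in the same direction as the original plus p < 0.05 in a two-sided test. The assumed true effect (d = 0.4) and group size (n = 100) are hypothetical placeholders.

```python
# Sketch: power of a replication attempt by simulation, under the
# criterion "same direction as the original AND two-sided p < 0.05".
# d = 0.4 and n = 100 per group are assumed values for illustration.
import math
import random
import statistics

def p_and_direction(x, y):
    """Two-sided z-approximation p-value and sign of the mean difference."""
    diff = statistics.mean(x) - statistics.mean(y)
    se = math.sqrt(statistics.variance(x) / len(x) +
                   statistics.variance(y) / len(y))
    z = diff / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p, diff > 0

random.seed(7)
d, n, sims = 0.4, 100, 2000
successes = 0
for _ in range(sims):
    # true effect of size d, in the same direction as the "original"
    treat = [random.gauss(d, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    p, same_direction = p_and_direction(treat, control)
    if same_direction and p < 0.05:
        successes += 1

power = successes / sims
print(f"Estimated power: {power:.2f}")
```

With these assumed numbers the simulation lands around 0.80, the conventional target; in practice you would vary n until the estimate reaches whatever power you are aiming for, and note that replications are often powered against an effect size smaller than the original estimate.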

Credit where credit is due

There has been a lot of tension in the psychological community recently. Replications are becoming more prevalent, and many of them are finding much smaller effects or none at all. This raises a lot of uncomfortable questions: is the studied effect real? How was it achieved in the first place? Were less-than-honest methods used (p-hacking, etc.)? The original researchers can sometimes feel that these questions go beyond valid criticism to full-blown attacks on their integrity and/or their abilities as scientists. This has led to heated exchanges and some choice pejoratives being thrown about by both “sides”. read more