Notes on Paul Meehl’s “Philosophical Psychology Session” #07

These are the notes I made whilst watching the video recording of Paul Meehl’s philosophy of science lectures. This is the seventh episode (a list of all the videos can be found here). Please note that these posts are not designed to replace the actual videos (I highly recommend you watch them); they are meant to be read alongside them, to help you understand what was said. I also do not include everything he said, just the main or most complex points.

  • Example for Lykken’s crud factor:

T is literally true; there are two auxiliary theories (A1 and A2), each with a .9 probability of being true; the ceteris paribus clause has a .9 probability of being true; and the conditions have a .9 probability of being true. What’s the probability (given the above) of the predicted observation (O1 ⊃ O2)? .9^4 ≈ .66. So the chance of getting that result because the theory is true is about 2/3, even if you had perfect power. With 80% power, the probability of the observation coming from the theory is .52. read more
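To make the arithmetic concrete, here is a minimal sketch of the calculation (mine, not Meehl’s); the .9 values and the 80% power figure are taken from the example above.

```python
# A quick check of the arithmetic in Meehl's example: four auxiliary
# assumptions (A1, A2, the ceteris paribus clause, and the conditions),
# each with a .9 probability of being true.
p_auxiliaries = 0.9 ** 4
print(round(p_auxiliaries, 2))        # 0.66 -- roughly 2/3 with perfect power

# With 80% statistical power, the probability that the predicted
# observation arises because the theory is true drops further.
print(round(p_auxiliaries * 0.8, 2))  # 0.52
```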

Best reads of 2017

These are some of the best or most thought-provoking articles I’ve read this year. The categories and articles are organised alphabetically and I don’t necessarily agree with the ideas put forward.

Economics:

Labour’s Higher Education proposals will cost £8bn per year, although they would increase the deficit by more. Graduates who earn most in future would benefit most by Chris Belfield, Jack Britton, and Laura van der Erve. A strong counterargument against free tuition for all university students. read more

Why you should think of statistical power as a curve

Statistical power is defined as “the probability of correctly rejecting H0 when a true association is present”, where H0 is the null hypothesis, often an association or effect size of zero (Sham & Purcell, 2014). It is determined by the effect size you want to detect, the size of your sample (N), and the alpha level, which is typically 0.05 but can be set to whatever you want (Lakens et al., 2017). I always thought of power as a static value for your study.

But this is wrong. read more
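As an illustration of the point the post goes on to make (my own sketch, not code from the post, and assuming statsmodels is available): with N and alpha held fixed, power still changes with the effect size you want to detect, tracing out a curve rather than a single number. The sample size of 50 per group and the grid of Cohen’s d values are illustrative assumptions.

```python
# A minimal sketch of a power curve for a two-sample t-test:
# fix the sample size and alpha, then let the true effect size vary.
import numpy as np
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in np.arange(0.1, 1.01, 0.1):  # hypothetical Cohen's d values
    power = analysis.power(effect_size=d, nobs1=50, alpha=0.05)
    print(f"d = {d:.1f}: power = {power:.2f}")
```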

Why do people leave academia? – The results

On social media, there are often discussions about why psychologists leave academia. Some argue that the new culture of criticism (where overly harsh criticism is levelled against those who make errors and accusations of malfeasance are rife) makes academics, especially younger ones, change profession. They give examples of high-profile cases where researchers have made errors, or merely used previously accepted standards of practice, and the obloquy they’ve received when their results don’t hold up. Others provide counterexamples of former colleagues who grew frustrated at their inability to replicate supposedly rock-solid findings or who had a crisis of confidence about the validity of vast swathes of the literature. But one thing is lacking from this discussion. read more

Notes on Paul Meehl’s “Philosophical Psychology Session” #06

These are the notes I made whilst watching the video recording of Paul Meehl’s philosophy of science lectures. This is the sixth episode (a list of all the videos can be found here). Please note that these posts are not designed to replace the actual videos (I highly recommend you watch them); they are meant to be read alongside them, to help you understand what was said. I also do not include everything he said, just the main or most complex points.

A core postulate is one that is found in every derivation chain, or a postulate that consists of only core concepts. A core concept is one that is relied on implicitly in every derivation chain in the hard core. read more

I’m a non-methodologist, does it matter if my definition is slightly wrong?

A few weeks ago, Nature published an article summarising the various measures and counter-measures suggested to improve statistical inferences and science as a whole (Chawla, 2017). It detailed the initial call to lower the significance threshold from 0.05 to 0.005 (Benjamin et al., 2017) and the paper published in response (Lakens et al., 2017). It was a well-written article, with one minor mistake: an incorrect definition of a p-value.

The two best sources for the correct definition of a p-value (along with its implications and examples of how a p-value can be misinterpreted) are Wasserstein & Lazar (2016) and its supplementary paper, Greenland et al. (2016). A p-value has been defined as “a statistical summary of the compatibility between the observed data and what we would predict or expect to see if we knew the entire statistical model (all the assumptions used to compute the P value) were correct” (Greenland et al., 2016). To put it another way, it tells us the probability of obtaining the data you have, or more extreme data, assuming the null hypothesis (along with all the other assumptions about randomness in sampling, treatment, assignment, loss, and missingness, the study protocol, etc.) is true. The definition provided in the Chawla article is incorrect because it states that “the smaller the p-value, the less likely it is that the results are due to chance”. This gets things backwards: the p-value is a probability deduced from a set of assumptions (e.g., that the null hypothesis is true), so it can’t also tell you the probability of those assumptions at the same time. Joachim Vandekerckhove and Ken Rothman give further evidence as to why this definition is incorrect. read more
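A small simulation (my illustration, not from the article) shows the deductive direction: p-values are computed assuming the null model is true, so when the null really is true they are uniformly distributed, and roughly 5% fall below .05 by construction. The group sizes and number of simulations below are arbitrary assumptions.

```python
# Illustration: p-values are deduced *assuming* the null hypothesis,
# so they cannot simultaneously measure how likely the null is.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
p_values = []
for _ in range(10_000):
    # Both groups come from the same distribution: the null is true here.
    a = rng.normal(loc=0, scale=1, size=30)
    b = rng.normal(loc=0, scale=1, size=30)
    p_values.append(stats.ttest_ind(a, b).pvalue)

# About 5% of p-values land below .05 even though every null was true.
print(np.mean(np.array(p_values) < 0.05))
```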

Assessing the validity of labs as teaching methods and controlling for confounds

Anyone who has taken one of the harder sciences at university, or knows someone who has, will know what “labs” are. You are given practical assignments to complete that are meant to consolidate what you’ve learnt in the lecture/seminar. They are almost ubiquitous in physics, having become widespread by the beginning of the 20th century (Meltzer & Otero, 2015), as they are in chemistry (Layton, 1990) and biology (Brownell, Kloser, Fukami, & Shavelson, 2012). Their value is widely assumed to have been demonstrated multiple times across the hard sciences (Finkelstein et al., 2005; Blosser, 1990), but questions have occasionally been raised about their effectiveness (Hofstein & Lunetta, 2004). A new paper by Holmes, Olsen, Thomas, & Wieman (2017) sought to test whether participating in labs actually improved physics students’ final grades. Across three American universities they tested three questions: what is the impact of labs on associated exam performance; do labs selectively impact the learning of physics concepts; and are there short-term learning benefits that are “washed out” on the final exams? read more

Why do psychologists leave academia?

Every once in a while in the psychology sphere of social media there’s a discussion about why people leave academia. This talking point often comes up in the context of “the open science movement” and whether more academics leave because of the culture of criticism or because of the lack of replicability of some findings. People who have left academia offer their reasons, and people who are still in give anecdotes about why someone they know left. But what seems to be lacking is actual data. So I’ve written this survey in the hope of shedding some light on the situation. It’s for people who have considered leaving, or have actually left, academia or the practice of psychology (educational, clinical, etc.). But this survey will only be useful if it reaches people who have left. So please share the survey on social media or relevant mailing lists, but especially send it directly to people you know who have left psychology. I’m writing this blog post so that those subscribed to the methods blog feed will see the survey, hopefully increasing the number of respondents. Thank you for your help. read more

[Guest post] How Twitter made me a better scientist

I’m a big fan of Twitter and have learned so much from the people on there, so I’m always happy to share someone singing its praises. This article was written by Jean-Jacques Orban de Xivry for the University of Leuven’s blog. He talks about how he uses Twitter to find out about interesting papers, along with a whole host of other benefits. The article can be found here. His Twitter account is @jjodx.

Improving the psychological methods feed

The issue of diversity has once again been raised in relation to online discussions of psychology. Others have talked about why this may happen and what its consequences are. I have nothing to add on those fronts, so I’m not going to discuss them. The purpose of this post is to analyse the diversity of the main contribution to social media discussions that I have total control over: the psychological methods blog feed. How many blogs by women are featured? How many non-white authors are there? How many early-career researchers (ECRs) are shared? read more