Ability grouping of students doesn’t work

Academic achievement in England is strongly impacted by class, with those of a higher socioeconomic status (S.E.S.) more likely to achieve than those of a lower S.E.S. (Clifton & Cook, 2012). These gaps can be seen between students as early as three years old (Feinstein, 2003) and continue to widen as the children age (Feinstein, 2004). One of the historical measures to reduce these inequalities is ability grouping. Students are placed into groups based on their test scores for certain subjects so they can be taught with their peers of similar ability. ‘Streaming’ (called ‘tracking’ in the US) divides students into groups based on their test scores across all/most of their subjects, meaning they stay with the same students across those subjects. This is similar to ‘banding’. ‘Setting’ occurs when students are put into ability groups for specific subjects that are not necessarily consistent across subjects e.g. a student could be placed in top set for maths but middle set for English (Francis et al., 2017). Data on the prevalence of ability grouping is inconsistent, but the evidence suggests it is prevalent in secondary school and, to a lesser extent, primary school in the U.K. (Dracup, 2014). It is becoming more common in the U.S. after a drop in popularity during the 1990s (Steenbergen-Hu, Makel, & Olszewski-Kubilius, 2016).

Video games cause violence

For almost as long as there have been video games, there have been people arguing that they are bad for you. There also seems to be a wealth of experimental evidence behind this claim (Hasan et al., 2013, to name just one of many studies). But there have been suggestions that these negative outcomes are oversold.


Problems with the literature:

One of the strongest pieces of evidence for the negative effects of video games is a meta-analysis by Anderson et al. (2010). They found strong evidence that “exposure to violent video games is a causal risk factor for increased aggressive behaviour, aggressive cognition, and aggressive affect and for decreased empathy and prosocial behaviour”. However, there were immediate questions about the methodology in this meta-analysis. Ferguson & Kilburn (2010) commented that many of the included studies do not relate well to aggression and that the authors do not consider the impact of unstandardised aggression measures (differences between studies in how they measured aggressive behaviour), among other things. They argue that the studies analysed in Anderson et al. (2010) only show weak evidence for their conclusion. A more recent reanalysis by Hilgard, Engelhardt, and Rouder (2016) used more advanced tools to adjust for research bias and found that the short-term effects of game play on aggressive feelings and behaviour were badly overestimated because of that bias. The bias-adjusted estimates produced by Hilgard et al. (2016) were mostly substantially lower than those reported by Anderson et al. (2010), with some requiring only smaller adjustments. For some outcomes, such as aggressive affect, the estimate was adjusted to zero. This does not completely eliminate the original findings, but I feel we should adjust our estimate of the strength of the causal association downwards.

Stereotype threat

Don’t you just love being wrong? Of course you don’t, no one does. But there is a grim satisfaction in no longer believing something that there isn’t good enough evidence for. This is what I experienced after examining the phenomenon known as ‘stereotype threat’. In short, it’s the idea that groups with negative stereotypes about them feel anxiety when these stereotypes are made salient (and are therefore more likely to confirm those stereotypes) e.g. the stereotype that women are inferior to men at maths.

The benefits of single-sex schooling

Many people claim that single-sex (SS) education is better for students than co-educational (CE) schooling e.g. Jackson (2016). There have been criticisms of this idea e.g. Halpern et al. (2011), but generally it is believed to be beneficial. But what does the evidence suggest? A large-scale meta-analysis by Pahlke et al. (2014), involving 184 studies and 1,663,662 students, compared SS and CE students on a variety of variables (mathematics performance; mathematics attitudes; science performance; science attitudes; attitudes about school; gender stereotyping; self-concept; interpersonal relations; aggression; victimisation; and body-image) to see if attending a SS school benefited males, females, or both.

Learning styles

The idea of learning styles is that people have a preference for the mode in which information is presented and that they learn better when information is presented in that modality. There have been a huge number of different types, but I’m going to focus on VAK (visual, auditory and kinaesthetic) as it’s the most well-known. This idea intuitively makes sense: people will learn something better if it’s presented in the mode they are most comfortable with and find the easiest to learn from.

The seductive allure of neuroscience scans

An article by Farah & Hook (2013) examined the much discussed idea that attaching fMRI scans to an article (even if they are unrelated or totally meaningless) makes said article appear more “scientific” and that they “overwhelm critical consideration” (Uttal, 2011). This is obviously not a good thing, as it could lend undue credibility to “bad science”. The evidence to support these claims about the unstoppable power of fMRI scans comes from two articles: McCabe & Castel (2008) and Weisberg et al. (2008).

The McCabe & Castel (2008) study looked at how credible made-up studies were rated when they had either functional brain images, bar charts, topographical maps of scalp-recorded EEG (electroencephalogram), or no image attached. When comparing the credibility of studies with either the bar charts or the fMRI scans, they found the fMRI scan condition studies were rated as more credible. However, this is not a fair comparison between the two different types of information: the fMRI scans vividly displayed the location and shape of the activity within the temporal lobe, whilst the bar charts merely showed total activity levels within the temporal lobe. The “study” was based on comparing activity between brain areas to see differences, so the fact you could “see” the brain activity meant it would be more likely to be persuasive than just a description of the total brain activity. They are not “informationally equivalent”: the fMRI images give more information to the reader than the bar charts, so they are more likely to be persuasive. They also compared the fMRI scans to EEG readings, and whilst the EEG readings are more specific (as to the location of the brain activity) than the bar graph, they are still not as specific as the fMRI scans (low spatial resolution is one of the biggest problems with measuring electrical activity in the brain).

The Weisberg et al. (2008) study didn’t actually examine the impact neuroscience IMAGES have on people’s ratings of an article’s credibility, it looked at how neuroscience INFORMATION affected the perceived quality of information about psychological phenomena (and no images were used).

A large-scale replication of McCabe & Castel’s original study showed a brain image had almost no effect on how persuasive the article was (Michael et al., 2013). There is another failed replication of McCabe & Castel’s study by Hook & Farah (2013).

The idea that fMRI scans are intrinsically influential is relatively widespread, yet there appears to be a lack of evidence to support this claim, so I feel declaring neuroscience images to be overwhelming is a bit premature.


Farah, M.J. & Hook, C.J. (2013). The seductive allure of “seductive allure”. Perspectives on Psychological Science, 8 (1), 88-90.
Hook, C.J. & Farah, M.J. (2013). Look again: effects of brain images and mind-brain dualism on lay evaluations of research. Journal of Cognitive Neuroscience, 25 (9), 1397-1405.
McCabe, D.P. & Castel, A.D. (2008). Seeing is believing: the effects of brain images on judgments of scientific reasoning. Cognition, 107 (1), 343-352.
Michael, R.B.; Newman, E.J.; Vuorre, M.; Cumming, G. & Garry, M. (2013). On the (non)persuasive power of a brain image. Psychonomic Bulletin and Review, 20 (4), 720-725.
Uttal, W.R. (2011). Mind and Brain: A Critical Appraisal of Cognitive Neuroscience. London: The MIT Press.
Weisberg, D.S.; Keil, F.C.; Goodstein, J.; Rawson, E. & Gray, J.R. (2008). The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience, 20 (3), 470-477.

Internet addiction

It is quite common to hear people being described as “addicted to the internet” or to hear that we are a culture of “internet addicts”. Unfortunately, this is impossible. This classification is a category error. ‘The Internet’ is a medium for information. You cannot be addicted to a form of transferring information. You can no more be addicted to the internet than you can be addicted to radio waves. This paper explains it in detail. However, that doesn’t mean you cannot be addicted to specific activities e.g. online gambling. But it’s the thing that you are doing that you can be addicted to, not the medium through which it is presented. Another important distinction that has to be made is explained nicely by Vaughan Bell (of Mindhacks):