Do bilinguals enjoy enhanced executive functioning?

Many believe that people who speak more than one language are smarter (just Google ‘bilingualism’ and ‘smarter’ and you’ll see many articles putting this idea forward). This popular meme arose from a number of studies finding that bilinguals scored higher on executive functioning (EF) tests than monolinguals [zotpressInText item=”{5421944:ZY8QPLW8}”]. Executive functioning is a broad concept that still lacks a formal definition [zotpressInText item=”{5421944:AKZDHRJI}”]. However, it is widely understood as referring to a set of general-purpose control processes involved in all complex mental activities, e.g. regulating your thoughts and behaviours [zotpressInText item=”{5421944:6A78C6MP}”].

But what’s the problem? A large number of studies putatively show evidence for a bilingual advantage in executive functioning. Can’t we be relatively confident this is a genuine effect, given it has been replicated multiple times? As with most things, it’s what you can’t see that’s the issue.

Where have all the negative results gone?

Publication bias is the common phenomenon whereby studies with statistically significant results are more likely to be published than studies with null or inconclusive results [zotpressInText item=”{5421944:I88ZJDHA}”]. Its effects are wide-ranging, from distorting our perception of the literature through meta-analyses [zotpressInText item=”{5421944:GG65IRFI}”] to inflating effect sizes [zotpressInText item=”{5421944:6RPX5NZU}”].

The spectre of publication bias with regard to the bilingual advantage was first raised by [zotpressInText item=”{5421944:5IIQEGAJ}” format=”%a% (%d%, %p%)”], who argued there was widespread confirmation bias for positive results. The large number of conceptual replications and the lack of direct replications gave credence to this argument. Conceptual replications (where the general phenomenon is tested using different measures) are easier to publish than direct replications [zotpressInText item=”{5421944:46XXFTCS}”] and are therefore more appealing to researchers. They also allow greater measurement flexibility, since researchers can pick from a wide range of tasks. On the surface, this appears to demonstrate the generalisability of the phenomenon. In fact, it may be a significant problem, given concerns about the validity and reliability of EF tasks [zotpressInText item=”{5421944:STK6PMRR}”]: correlations between supposedly equivalent tasks are low [zotpressInText item=”{5421944:WX3RALMQ}”]. Multiple conceptual replications, with the successful studies being published and the unsuccessful ones being silently filed away, paint the picture of a robust phenomenon. The reality may be very different.

How can you measure what isn’t there?

Assessing publication bias is notoriously difficult; compensating for it is harder still. One of the simplest ways to visualise the distribution of effect sizes is a funnel plot: a scatter plot of effect size estimates[note]Typically Cohen’s d, Hedges’ g, the standardised mean difference, or an odds ratio.[/note] against their standard errors. The standard error reflects the precision of an estimate and serves as a proxy for sample size (the larger the sample, the smaller the standard error).
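To make the construction concrete, here is a minimal sketch in Python (using numpy and matplotlib, with entirely simulated data rather than real bilingualism studies) of how a funnel plot is built:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# Simulate 50 hypothetical studies of a true effect d = 0.2.
# Smaller studies have larger standard errors, so their estimates scatter more.
n_per_group = rng.integers(10, 200, size=50)
se = np.sqrt(2 / n_per_group)            # rough SE of Cohen's d for equal groups
d_hat = rng.normal(0.2, se)              # each study's observed effect size

# Funnel plot: effect size on the x-axis, standard error on the y-axis.
plt.scatter(d_hat, se)
plt.axvline(np.average(d_hat, weights=1 / se**2), color="black")  # pooled estimate
plt.gca().invert_yaxis()                 # precise studies at the top, by convention
plt.xlabel("Effect size (Cohen's d)")
plt.ylabel("Standard error")
plt.show()
```

In the absence of bias, the points scatter symmetrically around the pooled estimate, narrowing as precision increases. The figure below shows a real example from the bilingualism literature.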

Funnel plot of bilingual-advantage effect sizes. Source: de Bruin, Treccani, & Della Sala (2015)

Each dot represents one study. The solid vertical line represents the overall effect size estimate, and the dotted lines fanning out from it are the pseudo 95% confidence intervals[note]A 95% confidence interval is a statement about the likely range of the population parameter: if you repeated the procedure many times, 95% of the intervals constructed this way would contain the true value. The dotted lines converge on the estimate as sample size increases, because the range of plausible values shrinks.[/note]. If the distribution of studies is asymmetric, this suggests publication bias. But this method should not be used in isolation, nor treated as conclusive proof of publication bias [zotpressInText item=”{5421944:PZ2V827S}”]; it is most useful for visualising the distribution of effect sizes. One method built on the funnel plot to diagnose and correct for publication bias is trim-and-fill [zotpressInText item=”{5421944:5HG4RBLH}”]. But this, along with Failsafe N [zotpressInText item=”{5421944:R357GBW8}”], is sub-optimal: trim-and-fill has a false positive rate close to 100% under plausible degrees of publication bias [zotpressInText item=”{5421944:4PPNIY8A}”], whilst Failsafe N is prone to “misinterpretation and misuse” [zotpressInText item=”{5421944:GG65IRFI}”].
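One common statistical check of funnel-plot asymmetry, related to but distinct from the methods above, is Egger’s regression test: regress each study’s standardised effect on its precision and test whether the intercept differs from zero. A minimal sketch, assuming per-study effects and standard errors like the simulated ones earlier:

```python
import numpy as np
import statsmodels.api as sm

def egger_test(effects, ses):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses the standardised effect (effect / SE) on precision (1 / SE).
    An intercept far from zero signals small-study effects, one possible
    signature of publication bias.
    """
    precision = 1 / np.asarray(ses)
    z = np.asarray(effects) * precision           # standardised effects
    fit = sm.OLS(z, sm.add_constant(precision)).fit()
    return fit.params[0], fit.pvalues[0]          # intercept and its p-value

# Hypothetical usage with the simulated d_hat and se from the earlier sketch:
# intercept, p = egger_test(d_hat, se)
```

Like the funnel plot itself, a significant Egger intercept is suggestive rather than conclusive; asymmetry can have causes other than publication bias.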

[zotpressInText item=”{5421944:4PPNIY8A}” format=”%a% (%d%, %p%)”] conducted a series of simulations to test the performance of different methods for detecting and correcting publication bias. Every method performed unacceptably under some plausible set of conditions. They therefore concluded that meta-analysts should be cautious in their conclusions, and reiterated the importance of high-quality methods and preregistered replications.
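A toy simulation (a sketch of the general idea, not Carter et al.’s actual code) shows why such caution is warranted: even when the true effect is exactly zero, a literature that publishes only significant positive results looks convincingly positive.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.0        # no bilingual advantage in this simulated world
n = 30                   # participants per group in each simulated study

all_d, published_d = [], []
for _ in range(2000):
    mono = rng.normal(0.0, 1.0, n)
    bili = rng.normal(true_effect, 1.0, n)
    t, p = stats.ttest_ind(bili, mono)
    d = (bili.mean() - mono.mean()) / np.sqrt(
        (bili.var(ddof=1) + mono.var(ddof=1)) / 2)    # Cohen's d
    all_d.append(d)
    if p < 0.05 and d > 0:   # only positive, significant results get "published"
        published_d.append(d)

print(f"Mean effect across all studies:   {np.mean(all_d):.3f}")        # ~ 0.00
print(f"Mean effect in published studies: {np.mean(published_d):.3f}")  # ~ 0.6
```

A meta-analysis restricted to the ‘published’ studies here would recover a medium-sized effect that does not exist.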

Bilingual advantage vs publication bias: round 1

Whilst [zotpressInText item=”{5421944:5IIQEGAJ}” format=”%a% (%d%, %p%)”] introduced the potential problem of publication bias, they did not attempt to assess its scope. [zotpressInText item=”{5421944:ZY8QPLW8}” format=”%a% (%d%, %p%)”] were the first to systematically examine how biased the literature is. They collected conference abstracts that tested for a difference in executive functioning between monolinguals and bilinguals and divided them into four groups. There were two broad categories: ‘abstracts supporting the bilingual advantage’ and ‘abstracts challenging the bilingual advantage’, each further divided in two. Supporting abstracts either reported only positive results or reported mixed results that, on the whole, supported the hypothesis; likewise, challenging abstracts either reported only findings against the bilingual advantage or mixed results that, on the whole, challenged it.

They found a roughly equal number of conference abstracts supporting the bilingual advantage (54) versus challenging it (50). However, there was a statistically significant difference in how many went on to be published: 63% of abstracts supporting the bilingual advantage were published, compared to only 36% of those challenging it. They also conducted a meta-analysis of the published studies and found an asymmetrical funnel plot. This suggests publication bias is warping the literature. But we need more evidence.
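That difference in publication rates can be reconstructed with a simple chi-squared test. The counts below are back-calculated from the reported percentages (63% of 54 and 36% of 50), so treat them as approximate:

```python
from scipy.stats import chi2_contingency

# Rows: supports the advantage, challenges the advantage
# Columns: published, not published (approximate counts)
table = [[34, 20],
         [18, 32]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")   # p < .05: publication depends on outcome
```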

Bilingual advantage vs publication bias: round 2

[zotpressInText item=”{5421944:PE8PN38G}” format=”%a% (%d%, %p%)”] conducted a meta-analysis of published and unpublished studies testing the relationship between bilingualism and EF. Because of the wide variety of tasks used, and the poor correlations between them, the researchers also examined how results differed between tasks.

After collecting all effect sizes from the selected studies, they found a very small positive effect in favour of bilinguals: g = 0.06 [0.01, 0.10]. After correcting for publication bias[note]Though we should keep in mind the caveats identified earlier by Carter et al. (2017).[/note], the effect size was negative: g = -0.08 [-0.17, 0.01]. Studies with smaller sample sizes tended to show larger positive effects for a bilingual advantage, while larger studies generally found none, e.g. [zotpressInText item=”{5421944:9IBZTZYJ}” format=”%a% (%d%, %p%)”] and [zotpressInText item=”{5421944:82L4SKFJ}” format=”%a% (%d%, %p%)”]. Patterns like this suggest the phenomenon isn’t real: better-powered studies don’t find an effect, whereas weaker ones do. There was also evidence that any bilingual advantage differed by task type (verbal vs. non-verbal): in studies using non-verbal tasks, bilinguals scored higher across a host of cognitive domains, e.g. working memory. This demonstrates the importance of considering task type when discussing the bilingual advantage.
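For a sense of the mechanics behind pooled estimates like these, here is a minimal sketch of the classic DerSimonian-Laird random-effects estimator, a textbook method (the actual meta-analysis used more sophisticated models and bias corrections):

```python
import numpy as np

def dersimonian_laird(g, v):
    """Pool per-study effect sizes with DerSimonian-Laird random effects.

    g: per-study Hedges' g estimates; v: their sampling variances.
    Returns the pooled effect and a 95% confidence interval.
    """
    g, v = np.asarray(g, float), np.asarray(v, float)
    w = 1 / v                                    # fixed-effect weights
    g_fixed = np.sum(w * g) / np.sum(w)
    q = np.sum(w * (g - g_fixed) ** 2)           # heterogeneity statistic Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(g) - 1)) / c)      # between-study variance
    w_re = 1 / (v + tau2)                        # random-effects weights
    pooled = np.sum(w_re * g) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical usage with made-up study-level values:
# pooled, ci = dersimonian_laird([0.12, 0.03, -0.05], [0.02, 0.01, 0.03])
```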

Are bilinguals smarter?

Looking at the evidence base as a whole, the evidence for a bilingual advantage is very weak. A large-N study found no evidence of a bilingual advantage in two key aspects of executive functioning [zotpressInText item=”{5421944:TPEB4AHU}”]. Another study found no cognitive advantage for bilinguals across 12 executive tests after controlling for confounds. A randomised controlled trial found that teaching elderly participants a second language had negligible benefits for their executive functioning [zotpressInText item=”{5421944:MFJJVJCG}”]. Even before correcting for publication bias, the evidence shows at most a very small EF advantage. The evidence supporting the hypothesis of a cognitive advantage for bilinguals is thus too weak to draw positive conclusions. Until stronger evidence emerges, I think it’s reasonable to accept the null hypothesis of no bilingual advantage in EF. Of course, the true core benefit of speaking another language, being able to talk with new people, is still there. We don’t need a scientific study to tell us that meeting new people and experiencing new things (books, TV shows, etc.) is wonderful. Whether learning a new language makes you smarter or not, it enriches your life, and so is always worthwhile.

References

[zotpressInTextBib style=”apa” sort=”ASC”]
