Does everything come in twos? Problems with dual-process theories

Dual process theories are everywhere in psychology [zotpressInText item=”{5421944:A45HTJM4}”]. From decision making to emotions, the idea that complex cognitive phenomena can be categorised into two classes with specific common features is very alluring. [zotpressInText item=”{5421944:DT8HLGCV}” format=”%a% (%d%, %p%)”]

[note]Open access version here.[/note]

identified psychologists’ penchant for dichotomies (nature vs. nurture, etc.), but this dichotomisation began with the cognitive revolution. Given that this framework holds almost universal appeal, you would presume it rests on a plethora of solid empirical evidence. But, as [zotpressInText item=”{5421944:HGASSQG4}” format=”%a% (%d%, %p%)”]

[note]Open access version here.[/note]

argue, the foundation is actually much weaker than many (including myself) believe[note]Where have we heard that before?[/note].

Better cop everything two times

Dual process theories are found not only in psychology but also in widely disparate fields such as finance [zotpressInText item=”{5421944:RHZKN9ZV}”] and medicine [zotpressInText item=”{5421944:LYAFXEDG}”]. Perhaps the best-known presentation of a dual process theory comes from the very popular book ‘Thinking, Fast and Slow’ by Daniel Kahneman[note]I haven’t actually read this staple of psychology, but I’ve heard it’s pretty good (minus the chapter on priming).[/note]. The names commonly given to the two strands of dual process theories are ‘Type 1’ and ‘Type 2’ or ‘System 1’ and ‘System 2’.

Type 1 is viewed as being efficient (it doesn’t consume many mental resources), unintentional, uncontrollable, and unconscious. Type 2 is the opposite: inefficient, intentional, controllable, and conscious. One of the central premises of the dual process typology is that these four dichotomous features co-occur. Thus, you won’t find a process that is efficient, unintentional, controllable, and conscious. This means that out of the 16 possible combinations

[note]4 attributes, each with 2 options: 2⁴ = 16.[/note]

only 2 occur. If the features varied independently, such perfect alignment would be statistically unlikely. There is also no theoretical reason to presume that the features co-occur [zotpressInText item=”{5421944:543IB5AD}”]. Therefore, strong empirical evidence is required to show that this is the case. But this hypothesis has not been thoroughly examined.
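The combinatorics are easy to check. A minimal sketch (the feature labels and the mapping to the two types are mine, purely illustrative of the typology’s claim):

```python
from itertools import product

# The four dichotomous features commonly attributed to Type 1 processing.
# Each feature is binary (present/absent), so there are 2**4 = 16 profiles.
features = ["efficient", "unintentional", "uncontrollable", "unconscious"]

# The dual process typology predicts only two of the 16 profiles occur:
# all four features present (Type 1) or all four absent (Type 2).
type_1 = (True, True, True, True)
type_2 = (False, False, False, False)

all_profiles = list(product([False, True], repeat=len(features)))
predicted = [p for p in all_profiles if p in (type_1, type_2)]

print(len(all_profiles))  # 16 possible combinations
print(len(predicted))     # only 2 predicted to exist
```

The touch-typing counterexample below would be the profile (efficient, intentional, controllable-ish, unconscious) — one of the 14 combinations the typology says shouldn’t exist.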

Where’s the proof?

There has been little testing of how likely certain features of Type 1 are to co-occur with other features of Type 1. The same goes for features within Type 2, and for Type 1 with Type 2. There is some evidence that the features align as predicted [zotpressInText item=”{5421944:49JEMZU7}”], but there are more examples where they do not [zotpressInText item=”{5421944:BHLCBJD2},{5421944:XPXFEVFI}”]. Anyone who can touch type can demonstrate a counterexample. With enough typing skill, you move your fingers across the keys unconsciously. But you aren’t randomly slapping the keyboard and hoping the correct letters appear; the process is highly intentional. Driving, especially when you’ve zoned out and are on autopilot[note]I’m obviously not encouraging you to do this as it’s pretty dangerous, but it does happen.[/note], is another example of an intentional process that is simultaneously unconscious. See [zotpressInText item=”{5421944:HGASSQG4}” format=”%a% (%d%, %p%)”] for a more thorough collection of counterexamples.

Weird groups and greater complexity

Another issue is that the typology groups seemingly disparate processes together. The thought processes of a chess grandmaster deciding their next move are efficient, uncontrollable[note]They are automatically scanning for the next option.[/note], unintentional[note]They don’t have to intentionally restrict their attention to advantageous or legal moves.[/note], and unconscious. Most would argue these are qualitatively different from flinching at a loud noise. But within the logic of dual process models, they exhibit the same features.

There are also misalignments within the processing features themselves. Each of the features described is complex and multifaceted. For example, control (as it relates to brain processes) can take different forms: you can stop a process after it has started, or you can modify its output. In a lab setting, you can’t stop yourself evaluating a stimulus when it is presented to you. But you can alter, through cognitive effort, whether the output is a positive or a negative evaluation. On a more fundamental note: why should we group complex cognitive processes by the surface features they share? Why not by something more substantial, e.g. the type of processes involved?

Got some, got some problems

Despite the lack of evidence, some might argue that the typology is still useful to employ. But there are several issues with doing so which should give researchers pause. If a researcher observes one of the features in the cognitive process under inspection, they may presume the others are present, even without testing for them [zotpressInText item=”{5421944:SMZK3T6V}”]. The approach can also bias researchers against counter-intuitive findings: phenomena that display a mixture of the two types are considered unlikely a priori, and are therefore less likely to be discovered or treated as real.

There is also an unfortunate characterisation of the two types of thinking: Type 1 as ‘bad’ and Type 2 as ‘good’. According to this, Type 1 produces error-prone judgements whereas Type 2 results in rational judgements. This concept has been pushed by [zotpressInText item=”{5421944:IMYD9D64}” format=”%a% (%d%, %p%)”]. Encouragingly, some proponents of this model have cautioned against it [zotpressInText item=”{5421944:T2BIXKCL}”]. But many still buy into this moral dimension. Setting aside the issue of assigning a moral angle to the features of a model, there are also many counterexamples. Type 2 thinking can promote self-serving rationalisations and motivated reasoning; to classify it as the system that produces only rational or “good” decisions is plainly false.

Within the sphere of self-regulation, dual-process models would predict that Type 1 reasoning leads to giving in to temptation while Type 2 helps you maintain your resolve. But a growing body of research shows that effortful cognition can be a hindrance to effective self-regulation. Not only that, but those who are more successful at self-regulating rely more on Type 1 thinking than Type 2 [zotpressInText item=”{5421944:NQRSITZ7}”]. Using Type 2 reasoning can also lead to errors when you focus too much attention on superfluous information or, in the realm of sports, can inhibit performance as players overthink their actions [zotpressInText item=”{5421944:3NLGXJDC}”].

Do they align?

[zotpressInText item=”{5421944:F2PQH52N}” format=”%a% (%d%, %p%)”]

[note]Open access version here.[/note]

argue that [zotpressInText item=”{5421944:HGASSQG4}” format=”%a% (%d%, %p%)”]’s characterisation of the literature is outdated, specifically with regards to the alignment of features between the dual processes. They highlight the ideas of ‘defining features'[note]Features used to define the two-types distinction.[/note] and ‘typical correlates'[note]Features that researchers associate with the two-types distinction.[/note]. However, Melnikoff and Bargh rebut their points by noting that very few advocates of dual process theories agree with this characterisation of feature alignment as outdated. Not only that, but it is unclear how a ‘defining feature’ that wasn’t correlated with the other features could make any predictions, which is a key function of a theory [zotpressInText item=”{5421944:KC6ZDPU7}”]. They also show the pervasiveness of correlational thinking between the features in the authors’ own writing [zotpressInText item=”{5421944:U24UZGBC}”]

[note]Open access version here.[/note]

. One of the lead authors of the original response (Gordon Pennycook) wrote a Twitter thread on their exchange which I recommend you check out.

Double or nothing

Why does the number two hold such power over our thoughts? Why do we so often reduce complex ideas to two components? [zotpressInText item=”{5421944:VMMD2WXS}” format=”%a% (%d%, %p%)”]

[note]Open access version here.[/note]

give four reasons: the limits of our cognition; the bias that explaining something simply means we fully understand it; cultural norms; and limitations in our ability to communicate complex concepts.

The computing power of our brain is a trade-off between various factors: size, speed, and energy [zotpressInText item=”{5421944:LFIE2RSG}”]. Our brains cannot be so big that they prohibit live birth or movement, they have to allow processing within finite time frames, and their energy requirements can’t exceed our metabolic capacities. These place limits on our cognitive abilities. This means we often don’t or can’t think in higher dimensions or conceptualise phenomena in more complex ways. Our natural inclination is to simplify these highly complex phenomena into more basic ideas, using heuristic-like thinking.

This is related to the common bias that a simple explanation of a process means we have fully understood it. We often overestimate how much we understand something, especially if its inner workings are opaque [zotpressInText item=”{5421944:2SL32NDR}”]. This likely stems from our need to understand our experiences and the world around us [zotpressInText item=”{5421944:8AWBY4UT}”]. Taking refuge in simple explanations helps us overcome uncertainty, even if they don’t provide a complete picture.

These individual factors work in tandem with a cultural one: over‐reliance on traditional experimental designs and analytic approaches. The prevalence of two‐way analysis of variance (ANOVA) with Null-Hypothesis Significance Testing, to the detriment of all else for the majority of students and researchers, encourages us to test phenomena in limited ways. Not only that, but it guides us towards conceptualising them in a similarly low-dimensional fashion. Most of us are not taught to think of or test phenomena in any way other than looking for mean differences between variables manipulated along a few psychological dimensions. This gives us a very poor foundation for testing and developing theories. Combined with a lack of a common language to describe these phenomena, we are left with vague and limited theories which amount to little.

Maybe there are more

Prior to looking into the subject, I had completely bought into the utility of dual process theories. They made intuitive sense and appealed to the human tendency to categorise things into twos. They also satisfied my (and our collective) need for knowledge and predictive power while cutting costs in terms of mental effort [zotpressInText item=”{5421944:6GPWLSGI}”]. But even after this relatively short exploration of the evidence base, I have serious doubts about the validity of dual process theories. There may be some instances where cognitive processes can meaningfully be divided into two and share the features predicted by the model. But I don’t think the default assumption should be that cognitive phenomena split into two mirror processes.


This blog post was inspired by episode 158 of the podcast Very Bad Wizards. I recommend listening to it, as well as their other episodes.


[zotpressInTextBib style=”apa” sort=”ASC”]
