Discussions of pseudoscience often offer a multidisciplinary analysis of the long history of popular beliefs that challenge the scientific mainstream, highlighting their harmful implications and enduring popularity. Pseudoscience is usually defined as cognitions involving radical epistemic misconduct and science mimicry (e.g., Fasce, 2020; Hansson, 2009), and scholars have held the position that the adoption of pseudoscientific beliefs is mostly due to a widespread confusion between science and claims that mimic science. I will refer to this position as the “confusion-based conception”. Though this conception is popular, I will argue in this article that it has limited explanatory power, as it does not fit the best current psychological evidence. In contrast, I will develop an alternative account of the endorsement of pseudoscience—the Explanation-Polarisation model (EX-PO). This model explains the endorsement of pseudoscientific conceptions through a combination of their perceived explanatory appeal and their capacity to foster group polarisation, drawing on a large corpus of cognitive and social psychological evidence on explanatory satisfaction and intergroup dynamics. It incorporates several psychological phenomena elicited by pseudoscience, explaining the function of science mimicry in psychological terms.
Are Pseudoscientific Beliefs Confusion-Based?
The confusion-based conception of the endorsement of pseudoscience asserts that the appeal of pseudoscience is mostly explained by the existence of a rhetorical pathway aimed at exploiting superficial scientific literacy. Hence, paradoxically, pseudoscience would thrive in environments in which science is regarded as an epistemic authority, in that pseudoscientific doctrines would conceal lay cognition and cognitive biases in the trappings of science (Allchin, 2012). In this section, I will argue that methodological shortcomings, a lack of specificity, and contradictions with experimental results undermine the specific predictions of this confusion-based conception.
Some examples of this conception can be extracted from an influential multi-author book on the topic, Pigliucci and Boudry (2013), in which several prominent philosophers of pseudoscience emphasize an exploitation of the epistemic authority of science when explaining the psychological function of science mimicry:
- “Pseudoscience can cause so much trouble in part because the public does not appreciate the difference between real science and something that masquerades as science. (…) Pseudoscience thrives because we have not fully come to grips yet with the cognitive, sociological, and epistemological roots of this phenomenon” (pp. 3–4).
- “Pseudosciences piggyback on the authority science has been endowed with in modern society. The question remains as to why it is so important for pseudosciences to seek that authority, and why they often succeed in attaining it” (p. 373).
- “Pseudoscientific beliefs are usually dressed up in scientific garb. This does not substantially alter how they interact with human cognitive systems, however. All that it may do is render pseudoscientific beliefs somewhat more attractive in the context of modern cultures that hold scientific knowledge in great regard but have limited actual understanding of it” (p. 392).
- “Pseudoscientists seek to be taken seriously for the same reason that scientists claim our attention, that the propositions of a rigorous and rational science are more worthy of belief than the common run of opinion” (p. 417).
Some research outcomes on individual differences indicate that people have difficulties distinguishing between science and pseudoscience (Gaze, 2014; Lyddy & Hughes, 2012)1. Nevertheless, the trappings of science do not explain these difficulties, as the same lack of discernment has also been found in relation to non-science-mimicking paranormal beliefs (Brewer, 2013; Garrett & Cutting, 2017) and astrology, a doctrine on the borderline between pseudoscientific and paranormal rhetoric (DeRobertis & Delaney, 2000; Sugarman, Impey, Buxner, & Antonellis, 2011). The negative association between well-oriented scientific literacy and pseudoscientific beliefs, as predicted by the confusion-based conception, is similarly not well-supported. Even though one study has found negative correlations between knowledge of scientific facts and pseudoscientific beliefs (Fasce & Picó, 2019a), again, the effect is not exclusive to pseudoscience, as it has also been found for non-mimicking paranormal beliefs (Aarnio & Lindeman, 2005; Fasce & Picó, 2019a; Vilela & Álvarez, 2004). Moreover, this association seems to be mediated by trust in science, which shows the same negative correlation with paranormal beliefs and conspiracy theories (Fasce & Picó, 2019a; Irwin, Dagnall, & Drinkwater, 2016)2.
In a recent experimental study, researchers concluded that courses directly promoting a motivational state of distrust in pseudoscience produced a reduction in those beliefs, whereas general education classes on critical thinking and research methods did not (Dyer & Hall, 2019); additional experimental pre-test/post-test studies also suggest this mediated relationship (Franz & Green, 2013; Morier & Keeports, 1994; Wilson, 2018). Accordingly, contrary to the idea of a backfire effect caused by misguided trust in science, low confidence in science and disregard for the values of scientific inquiry are good predictors of the endorsement of pseudoscience (Lewandowsky & Oberauer, 2021; Omer, Salmon, Orenstein, deHart, & Halsey, 2009).
The confusion-based account also exhibits limitations regarding experimental results on the role of scientists as judgmental shortcuts. Sloman and Rabb (2016) conducted a series of experiments to test how people behave under conditions of division of cognitive labour and epistemic dependence. Their results show that knowing that scientists understand a phenomenon gives individuals the sense that they understand it better themselves, but only when they have ostensible access to scientists' explanations and accept them. So, these individuals were not blinded by scientists' discourses and aesthetics; instead, they tended to use experts' information as an echo chamber for their subjective assessment of scientific contents.
Brewer (2013) evaluated the effect of three versions of a news story about paranormal investigators: one in terms of traditional supernaturalism, a second with a pseudoscientific rationale, and a third presenting a discrediting scientific critique. Although the pseudoscientific rationale increased the parapsychologists' perceived credibility, it had no significant effect on the endorsement of paranormal beliefs. Garrett and Cutting (2017) conducted a similar experiment, replicating the previously observed lack of differences between the three versions regarding the perceived believability of the paranormal story. Likewise, other studies support that, although science mimicry tends to increase sources' credibility, it does not promote belief change (Bromme, Scharrer, Stadtler, Hömberg, & Torspecken, 2015; Knobloch-Westerwick, Johnson, Silver, & Westerwick, 2015; Thomm & Bromme, 2012; Zaboski & Therriault, 2020), as the effect of scientific jargon is mediated by its fit with previous beliefs (Scurich & Shniderman, 2014) and is not persuasive by itself (Gruber & Dickerson, 2012; Hook & Farah, 2013; Michael, Newman, Vuorre, Cumming, & Garry, 2013).
An analysis of the controversy concerning so-called “neuromania” (Legrenzi & Umiltà, 2011) helps to better understand these results. Neuroscientific research is particularly fascinating to the general public, so Weisberg, Keil, Goodstein, Rawson, and Gray (2008) conducted an experiment on the seductive allure of explanations containing irrelevant neuroscientific information. As expected, participants preferred explanations with irrelevant neuroscientific jargon, regardless of the quality of the explanations' underlying logic. Other experiments have found the same effect (Fernandez-Duque, Evans, Christian, & Hodges, 2015; Weisberg, Taylor, & Hopkins, 2015), also using neuroimaging (McCabe & Castel, 2008)—although the neuroimaging studies have low reproducibility rates (Schweitzer, Baker, & Risko, 2013). Pseudoscientific neuro-jargon is particularly effective for psychological explanations in comparison with irrelevant social science and natural science jargon (Fernandez-Duque et al., 2015; Weisberg, Hopkins, & Taylor, 2018), perhaps due to the authority ascribed to neuroscience when explaining behaviour (Racine, Waldman, Rosenberg, & Illes, 2010).
Nevertheless, Tabacchi and Cardaci (2016) discovered that the allure of neuroscientific jargon is mediated by the wording of the question. Previous experiments had asked participants how “satisfactory” they considered the explanations on a 7-point Likert scale, which elicits an aesthetic judgment, whereas in Tabacchi and Cardaci (2016) participants had to choose the correct explanation from two alternatives using a dichotomous measure of truthfulness. In this psychometric context, the allure of explanations with vacuous pseudoscientific jargon was not observed, and, as no additional information was given to the participants, it is unlikely that pseudoscientific jargon exploited their trust in science in the earlier experiments. The allure of scientific jargon seems to depend on how individuals are asked to assess evidence in different motivational contexts—i.e., focusing on how satisfactory scientific jargon is in psychological terms or on how believable it is in epistemic terms.
In sum, current evidence consistently suggests that uncritical acceptance of pseudoscientific information is mediated by perceived explanatory satisfaction and fit with previous beliefs, not by misguided trust in science. Nevertheless, the criticism expressed in this section does not fully invalidate the confusion-based approach, as confusion could still be a relevant variable within specific groups and contexts. More research is needed to determine whether the two models are complementary rather than antithetical.
The Explanation-Polarisation Model
In this section, I will develop an explanatory framework for the endorsement of pseudoscience detached from confusion-based conceptions. EX-PO starts from the usual definition of pseudoscience, which is based on science mimicry, but it does not explain the endorsement of pseudoscience by means of a faulty distinction between science and pseudoscience, thus conferring another role on science mimicry3. The EX-PO model takes pseudoscience as a set of flawed but appealing explanations, adding a relational aspect by including psychological phenomena related to group polarisation.
Rekker (2021) has proposed a relevant distinction between psychological and ideological science rejection. On the one hand, psychological rejection of science takes place implicitly and arises from individuals' tendency to favour information that maintains their status in an affinity group. On the other hand, ideological rejection (religious, political, etc.) consists of explicit contestation of science through arguments derived from complex doctrines—for example, climate change countermovement organizations (McKie, 2019)4. EX-PO constitutes a model for psychological rejection of science, as its main unit of analysis is the interaction between the individual and the pseudoscientific doctrine, considering both the psychological predispositions and the rhetorical devices involved in that interaction. In this regard, EX-PO explains the endorsement of pseudoscience through the supply of psychologically satisfactory explanations and the demand for profitable ideas that conform to rewarding social norms. All individuals show a general tendency to favour mechanisms and categorisations, and they also tend to hold desirable, concerning, and useful beliefs. The rhetorical devices of pseudoscience adapt to this psychological framework to gain support.
EX-PO constitutes an explanatory framework for the spread of the two major forms of pseudoscience, pseudo-theory promotion and science denial (Fasce & Picó, 2019b; Hansson, 2017), although each form shows its own characteristics5. The explanatory satisfaction offered by pseudo-theory promotion, due to its greater doctrinal content, should be higher than that of science denialism, whose endorsement, in turn, would be more influenced by ideology-driven group polarisation and direct confrontation with scientific information (Lewandowsky, Pilditch, Madsen, Oreskes, & Risbey, 2019; Medimorec & Pennycook, 2015). For example, as climate change denialists cannot offer satisfactory alternative explanations, they need to simulate coherence through conspiracist discourse reinforced by group behaviour (Lewandowsky, Cook, & Lloyd, 2018).
The Explanatory Allure of Pseudoscience: Empty Labels and Illusory Mechanisms
I will start by elucidating the “explanation” aspect of EX-PO. As already mentioned, interpersonal disagreement about facts is highly influenced by the explanatory allure of pseudoscience, as well as by individual cognitive predispositions toward anti-scientific beliefs such as overreliance on intuitive thinking (Pennycook, Cheyne, Seli, Koehler, & Fugelsang, 2012), ontological confusions6 (Lindeman, Svedholm-Häkkinen, & Lipsanen, 2015), pseudo-profound bullshit receptivity (Pennycook, Cheyne, Barr, Koehler, & Fugelsang, 2015), and causal illusions (Torres, Barberia, & Rodríguez-Ferreiro, 2020).
Explanations matter because they feel intrinsically valuable for pragmatic concerns such as prediction and control (Lombrozo, 2011). Nevertheless, the criteria identified in academic publications as explanatory virtues hardly predict positive explanation assessment in naturalistic settings (Lombrozo, 2016; Zemla, Sloman, Bechlivanidis, & Lagnado, 2017). For example, although scholars often consider abstract and simple explanations preferable, people tend to favour less generalisable explanations (Bechlivanidis, Lagnado, Zemla, & Sloman, 2017; Khemlani, Sussman, & Oppenheimer, 2011) and to explain inconsistencies by positing additional causes rather than disputing premises, thus preferring explanations that involve complex causal structures (Khemlani & Johnson-Laird, 2011). In this regard, pseudoscience would exploit several sources of subjective explanatory satisfaction, such as flawed categorisations and mechanistic explanations7.
Categorical language supports particularly strong inferences, leading people to form representations in more essentialist (Gelman, Ware, & Kleinberg, 2010), categorical (Lupyan, 2012), and prototypical terms (Lupyan, 2017). People learn named categories more quickly (Lupyan, Rakison, & McClelland, 2007), are more likely to agree with categorical statements about causes and features (Ahn, Taylor, Kato, Marsh, & Bloom, 2013; Hemmatian & Sloman, 2018), and find explanations that include sharp and easily recognisable category labels significantly more satisfying in psychological terms (Giffin, Wilkenfeld, & Lombrozo, 2017).
There are numerous examples of insubstantial pseudoscientific labels, such as “cell memory”, “energetic blockage”, “vertebral subluxation”, “detoxification”, “qi deficiency”, and “meta-model”. Parapsychology is particularly interesting in this regard, as it expands the categories of folk paranormal belief: where a folk paranormal believer experiences a “ghost”, a parapsychologist may see a “poltergeist”, an “apparitional experience”, an “ectoplasm”, a “psychophony”, an “orb”, etc. Of course, scientists also use a large number of complex categories, but these conceptual networks are typically guided by evidence (i.e., scientists tend to reject unfounded categories) and parsimony (i.e., scientists tend to reject unnecessary categories). So, categorisations are not problematic per se; the point is rather that both scientific and unscientific categorisations appeal through their easy applicability at the level of the individual recipient.
As outlined above, mechanistic explanations also have a relevant role within EX-PO. People have a strong preference for explanations that invoke causal mechanisms, perhaps driven by a desire to identify as many causes of an effect as possible (Mills, Sands, Rowles, & Campbell, 2019; Zemla et al., 2017). As with categories, mechanistic explanations are also widely used among scientists, although the mechanisms invoked in pseudoscience have already been refuted or are construed in a way that makes them untestable. Pseudoscientific doctrines are replete with flawed mechanistic explanations and processes, especially in comparison with other forms of unwarranted belief, such as paranormal beliefs and conspiracy theories. Examples include the five “biological laws” that rule the emotional aetiology of disease in German new medicine, kinesiology's viscerosomatic relationship, improved blood flow and oxygenation in tissues by magnetic stimulation, memories and experiences passing down through generations by DNA or morphic resonance, and homeopathic dynamization by dilution and succussion.
Research supports the claim that the satisfying effect of neuroscientific explanations, discussed in the previous section, is due to their perceived mechanistic character. Rhodes, Rodríguez, and Shah (2014) found that neuroscientific information boosts self-assessed understanding by providing perceived insight into the causal chains and categorisations underlying psychological phenomena. Hopkins, Weisberg, and Taylor (2016) confirmed these findings, extending the allure of mechanistic information to other fields, such as physics, chemistry, and biology. Flawed mechanistic explanations constitute a confirmed cause of the illusion of explanatory depth (IOED; Rozenblit & Keil, 2002), which occurs when people believe they understand a process more deeply than they actually do. This overconfidence effect has been empirically associated with other kinds of unwarranted beliefs, such as conspiracy theories (Vitriol & Marsh, 2018), and pseudoscientific rhetoric may be particularly effective in generating it among supporters—indeed, Scharrer, Rupieper, Stadtler, and Bromme (2017) suggest that even legitimate, although oversimplified, science communication causes overconfidence effects among laypeople.
IOED has been identified only in knowledge areas that involve complex theorisation about causal networks, such as biological or physical processes (e.g., how a zipper, a toaster, tides, or the digestive system works); it does not take place for other types of knowledge, such as declarative or narrative knowledge (e.g., how to make cookies or the names of capital cities). Several factors converge to create a strong illusion of depth for mechanistic explanations. First, individuals have less experience in representing, expressing, and testing these explanations in comparison with other kinds of knowledge (Rozenblit & Keil, 2002; Wilson & Keil, 1998). Second, because people usually rely on others' understanding when assessing mechanistic explanations, they tend to overestimate their own understanding of mechanisms relative to that of others (Fisher, Goddu, & Keil, 2015). Third, explanations are layered, so there are always higher and lower levels of analysis, and people often confuse their superficial insights with a deeper understanding of how mechanisms work (Alter, Oppenheimer, & Zemla, 2010). As a result, IOED rests on a failure to construct accurate mental representations at an appropriate level of construal, thus confusing the metacognitive experience of understanding with the capacity to offer a proper explanation.
Although IOED is usually overridden by debunking information (Kowalski & Taylor, 2017) and iterative failure (Mills & Keil, 2004), the current literature describes three variables that explain why this effect can be so persistent:
- The strength of care and general concern for a given issue predicts persistent instances of IOED (Fisher & Keil, 2014).
- People tend to accept useful explanations in the short term through pragmatic assessment of the potential courses of action they entail (Vasilyeva, Wilkenfeld, & Lombrozo, 2017).
- Social desirability fosters IOED: people tend to accept categorisations and explanations shared by ingroup members through reliance on community cues and social conformity (Gaviria, Corredor, & Zuluaga-Rendón, 2017; Hemmatian & Sloman, 2018).
Accordingly, a successful pseudoscience should be concerning, useful in the short term, and socially desirable for its supporters. How pseudoscientific doctrines generate this specific motivational state is accounted for by EX-PO's “polarisation” aspect.
The Social Dimension of Pseudoscience: Ingroup Polarisation and Intergroup Clash of Cognitions
Recent research has highlighted how today's societies are fractured by partisan identities and by feedback loops of accommodating information, identity narratives, and anti-expertise (Kreiss, 2019; Lewandowsky, Ecker, & Cook, 2017; Pariser, 2011). For example, previous results have shown that the information shared on social media is more radical and partisan than that of open websites (Faris et al., 2017), thus facilitating the spread of appealing falsehoods (Vosoughi, Roy, & Aral, 2018). Due to the high prevalence of pseudoscience in the public sphere and its alarming implications, the struggle between groups of pseudoscientific believers and critical thinkers is increasingly polarised, establishing an intergroup relationship dominated by an imaginary of distrust, competition, and mutual derogation (e.g., Cano-Orón, 2019).
Polarisation outbreaks occur when the empirical matter in dispute is crucial to defining a social identity (Fasce, Adrián-Ventura, Lewandowsky, & van der Linden, 2021; Sloman & Fernbach, 2017). Under these conditions, intergroup threats foster self-uncertainty, and self-uncertainty motivates radical identification with groups that provide distinctive normative beliefs and higher identification-contingent uncertainty reduction (Hogg, Meehan, & Farquharson, 2010; Hogg & Wagoner, 2017). As a consequence, self-uncertain people with group belongingness tend to increase their endorsement of anti-scientific beliefs (van Prooijen, 2016; van Prooijen & Jostmann, 2013), and the sense of community and perceived existential threats are well-documented root factors for pseudoscientific beliefs and conspiracy theories (Fasce, Adrián-Ventura, et al., 2021; Franks, Bangerter, Bauer, Hall, & Noort, 2017; van Prooijen, 2020). Pseudoscientific beliefs have been shown to be influenced by a tendency to accept beliefs on the basis of short-term interpersonal benefits (Fasce, Adrián-Ventura, & Avendaño, 2020), which leads individuals to assess scientific evidence depending on whether or not scientists agree with their identity and attitudes (Bromme et al., 2015; Giese, Neth, Moussaïd, Betsch, & Gaissmaier, 2020; Kahan, Jenkins‐Smith, & Braman, 2011; Knobloch-Westerwick et al., 2015; Scurich & Shniderman, 2014).
Recent results highlight how social groups constructed around pseudoscientific beliefs are often exploited by populistic and authoritarian political movements exhibiting radical ideologies (Fasce et al., 2020; Mede & Schäfer, 2020)8. These forms of anti-scientific politics exploit the tension between short-term ingroup rationality and long-term truth-seeking cognition, in which evidence-recognition is crucial, making individuals exposed to partisan information more polarised than nonexposed ones (Druckman & Bolsen, 2011; Nyhan & Reifler, 2015; Nyhan, Reifler, Richey, & Freed, 2014; Palm, Lewis, & Feng, 2017). As a result, politically charged pseudoscientific doctrines such as climate change denial or the anti-vaccination movement are hardly overridden by literacy, analytical thinking, and information (Kahan, 2013; Kahan et al., 2012; Lewandowsky & Oberauer, 2016; Nyhan & Reifler, 2015; Nyhan et al., 2014).
These ideology-driven phenomena have been explained by the presence of motivated reasoning (Kahan, 2016), although recent research has questioned the validity of this interpretation by highlighting potential confounding factors and the elusive nature of the backfire effect (Druckman & McGrath, 2019; Tappin, Pennycook, & Rand, 2021; Wood & Porter, 2019). This legitimate dispute regarding the underlying mechanism of group polarisation over scientific information does not undermine the fact that group polarisation and perceived social consensus play a pivotal role in the spread of pseudoscience (Bromme et al., 2015; Giese et al., 2020; Kahan et al., 2011; Knobloch-Westerwick et al., 2015; Lewandowsky, Cook, Fay, & Gignac, 2019; Lewandowsky, Gignac, & Vaughan, 2013; Thomm & Bromme, 2012; van der Linden, Leiserowitz, & Maibach, 2018). Accordingly, EX-PO is compatible with any mechanism of group polarisation, including identity-related motivated reasoning, misperception of scientific consensus, and Bayesian belief updating (e.g., Cook & Lewandowsky, 2016).
Besides polarisation over already internalised pseudoscientific beliefs, ingroup pressure may increase the motivation to adopt beliefs related to prior ones by family resemblance. Individuals who endorse one conspiracy theory tend to endorse new ones, even if they are completely fictitious (Swami et al., 2011) and mutually contradictory (Wood, Douglas, & Sutton, 2012), and the same pattern has been observed for pseudoscience (Lewandowsky, Cook, & Lloyd, 2018). This can also be seen in the high internal consistencies of psychometric scales: α = 0.92 for paranormal beliefs (Tobacyk, 2004), α = 0.93 for conspiracy beliefs (Brotherton, French, & Pickering, 2013), and α = 0.92 for pseudoscientific beliefs (Fasce, Avendaño, & Adrián-Ventura, 2021).
As rejecting a pseudoscience is not the same as accepting the relevant science, critical thinking should not be defined as the mere absence of EX-PO dynamics. Critical thinkers also engage in social dynamics that strengthen their conceptions and have specific, albeit more moderate, sources of bias. For example, even though critical thinking is positively correlated with analytical reasoning (Svedholm-Häkkinen & Lindeman, 2013), Pennycook et al. (2012) also found that around 40% of critical thinkers can be characterised as non-analytical reasoners. Moreover, these non-analytical critical thinkers are prone to endorse the kind of ontological confusions that predispose people toward paranormal beliefs (Lindeman, Svedholm-Häkkinen, & Riekki, 2016). These results are in line with Norenzayan and Gervais (2013): critical thinking may originate not only from individual cognitive styles, but also from a lack of cultural input.
Potential Interventions
Although the confusion-based conception and EX-PO are not contradictory, their assumptions and implications are not equivalent, particularly regarding how to face the problem of pseudoscience. Current prevention strategies deployed by organisations of critical thinkers are often based on the assumptions of the confusion-based interpretation, so they focus on general dissemination aimed at improving scientific literacy. In view of the current upsurge of anti-scientific groups and politics—e.g., anti-scientific conspiracy ideation around COVID-19—this has proven insufficient. EX-PO does not emphasise direct science dissemination, suggesting instead that the problem of pseudoscience might be better addressed through social interventions intended to reduce subjective explanatory satisfaction and group polarisation.
Accordingly, efficient interventions to reduce the endorsement of pseudoscience should incorporate comprehensive motivational strategies, such as inoculation messages exposing misleading argumentation techniques (Lewandowsky & van der Linden, 2021) and worldview and values affirmation (Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012). This is important in order to boost the perception of consensus cues and decrease the false consensus effect between experts and the public. Another potential intervention should focus on echo chambers—an interesting framework for deploying mechanisms of containment against these detrimental information architectures can be found in Lewandowsky et al. (2017). Even though the degree of ideological segregation in social media usage should not be overestimated—echo chambers and filter bubbles are not larger online than offline (Barberá, Jost, Nagler, Tucker, & Bonneau, 2015; Dubois & Blank, 2018)—their influence on individuals' receptivity toward attitude-consistent misinformation is supported by evidence (Giese et al., 2020).
Concluding Remarks
To facilitate future research on EX-PO, I conclude this article by offering a summary of the model. Unwarranted beliefs increase their explanatory appeal through science mimicry, on the basis of spurious categorisations and flawed mechanistic explanations; this is the basic role of the trappings of science. The appeal of pseudoscience to isolated individuals therefore comes through two pathways: explanatory satisfaction and more general individual cognitive predispositions. Group polarisation takes place after believers aggregate and organise around pseudoscientific doctrines, and this process reinforces and promotes pseudoscientific beliefs through two pathways: reinforcement of already internalised pseudoscientific beliefs and ingroup pressure to accept new pseudoscientific beliefs resembling those already internalised. Group polarisation boosts the explanatory allure of pseudoscience by helping pseudoscientific doctrines be perceived as concerning, useful in ingroup terms, and socially desirable.