“That’s the whole point of good propaganda. You want to create a slogan that nobody’s going to be against, and everybody’s going to be for. Nobody knows what it means, because it doesn’t mean anything.”
Noam Chomsky (1991, pp. 21-22)
Chomsky noted that political communication has, to a certain extent, become sloganized and essentially meaningless. Consider the 2020 U.S. presidential election: Joe Biden believed that “America is an Idea”, while Donald Trump asserted that he would “Keep America Great”. Although these statements seem to convey some meaning, they are too broad and vague to be tested against evidence or assessed for trustworthiness. Yet, they aim to persuade voters. Although prevalent, this type of political communication has not been investigated empirically. In the present paper, we present theoretical and empirical evidence for receptivity to this type of communication, which we label “political bullshit” following previous work on pseudo-profound bullshit (Pennycook, Cheyne, Barr, Koehler, & Fugelsang, 2015), bullshit in politics (Brandenburg, 2006), and bullshit in general (Frankfurt, 1986/2005).
Frankfurt (1986/2005) was the first to outline persuasive communication constructed without regard for truth, knowledge, or evidence. He noted the prevalence of this phenomenon and named it bullshit, launching an upsurge in academic work on the topic (Hardcastle & Reisch, 2006). According to Frankfurt (1986/2005), there is a clear distinction between bullshit and lies: while liars care about the truth in the sense of hiding it, bullshitters have no regard for it. So, for example, whenever one speaks on a topic with inadequate knowledge about it, that person is engaging in bullshitting (Petrocelli, 2018). Crucially, it is the speaker’s intention (i.e., how it is produced), and not the recipient’s interpretation, that qualifies something as bullshit (Pennycook et al., 2015; Pennycook, Cheyne, Barr, Koehler, & Fugelsang, 2016).
Bullshit in Psychology
Although bullshit has been extensively discussed in philosophy, it has only recently been investigated by psychologists. Specifically, Pennycook and colleagues (2015) focused on one type of bullshit—pseudo-profound bullshit—which represents a collection of abstract words that has syntactic structure but no actual intended meaning (e.g., “Wholeness quiets infinite phenomena”). They demonstrated that endorsement of this type of bullshit is related to a less analytical and more intuitive thinking style, higher religiosity, and paranormal beliefs. These findings were later replicated and extended, showing that people susceptible to pseudo-profound bullshit are also more prone to conspiratorial and schizotypal ideation (Hart & Graether, 2018) and to unusual experiences and magical thinking (Bainbridge, Quinlan, Mar, & Smillie, 2019).
Given that pseudo-profound bullshit might be only one type of bullshit, “[a] point on what could be considered a spectrum of bullshit” (Pennycook et al., 2015, p. 550), researchers have attempted to measure other instances of it; for example, Evans and colleagues (2020) measured scientific bullshit (assembled from random words from a physics glossary), while Čavojová, Brezina, and Jurkovič (2022) attempted to measure general bullshit (i.e., without abstract buzzwords). Endorsement of both types of bullshit was highly correlated with receptivity to pseudo-profound bullshit (rs = .60 and .83, respectively). Another empirical line of research focused on bullshitting as an act (i.e., the producer) rather than on bullshit as a product (i.e., receptivity). For example, Gligorić and Vilotijević (2020a) demonstrated that people perceive pseudo-profound bullshit as more profound when it is attributed to a famous author (e.g., Einstein). Furthermore, Littrell, Risko, and Fugelsang (2021) focused on the frequency with which individuals engage in bullshitting, measuring two aspects of it: persuasive and evasive. While the former’s function is to persuade, impress, and achieve the acceptance of others, the latter aims to avoid giving direct answers and to evade thorough inquiry. However, the dominant approach remains the study of receptivity to pseudo-profound bullshit. Importantly, previous research has also investigated its relationship with political affinity.
Pseudo-Profound Bullshit and Political Affinities
Several studies have found a relationship between political-ideological stances and receptivity to pseudo-profound bullshit. In the U.S., neoliberals (i.e., supporters of free-market politics; Sterling, Jost, & Pennycook, 2016), Republican supporters, and conservatives were shown to be more susceptible to pseudo-profound bullshit (Pfattheicher & Schindler, 2016). The findings for free-market support and for economic and social conservatism were later replicated (Evans et al., 2020), though again in a U.S. sample. This relationship fits well with ideological asymmetries between conservatives and liberals in epistemic motivation, such that conservatives rely more on an intuitive thinking style, heuristics, and simple information processing (e.g., Jost, 2017).
However, the relationship between ideology and pseudo-profound bullshit has not been found consistently: for example, Hart and Graether (2018) found a positive relationship with conservatism only in their first study. Additionally, the relation with neoliberalism was somewhat weaker in a Serbian sample (Gligorić & Vilotijević, 2020b). Indeed, the relationship between ideology and pseudo-profound bullshit seems to be more complex: Nilsson, Erlandsson, and Västfjäll (2019) found a positive relationship with social conservatism in a Swedish sample but failed to replicate the one with economic ideological preference. In fact, they found that economic centrists (and even leftists) were most inclined to fall for pseudo-profound bullshit. One reason for these inconsistent findings might be that political ideology is conceptualized differently in the U.S. and Sweden. Firstly, in the U.S. (which has a two-party system vs. Sweden’s multi-party one), the ideological scale is shifted to the right. Secondly, ideological division in Sweden primarily concerns economic issues and less so social ones (Nilsson et al., 2020).
Although this is a promising line of research, pseudo-profound bullshit and political ideology are relatively distant psychological constructs. Therefore, it is reasonable to investigate this relationship using more explicitly political content, which is why we explored another type of bullshit—political.
The idea of bullshit in politics is not new. For example, Orwell (1946) noted how political speech is filled with obscure language, listing devices that foster such rhetoric (e.g., using pretentious diction and meaningless words). More recently, Frankfurt (1986/2005) saw politics (as well as advertising and public relations) as a prime realm where bullshit emerges, whereas Brandenburg (2006) emphasized that bullshit “…has become a structural property, a logical necessity of political communication” (p. 3). Political marketing, communication, and rhetoric (or a combination thereof) can give rise to political bullshit.
Even though research suggests that bullshit emerges in politics, a more important question is what political bullshit is, i.e., what it looks like. Sterling and colleagues (2016) suggested that bullshit in politics would not rely on abstract, pretentious words as pseudo-profound bullshit does because “a politician might deliberately simplify language to broaden his or her appeal, stating something that is vague and platitudinous, like ‘I believe in America!’” (p. 358). To this end, Brandenburg (2006, p. 4) illustrates political bullshit using part of a speech by a UK politician from the Labour Party, David Miliband: “… Our vision is compelling. Civic pride based on a new age of civic power, not for some of the people, but for all of the people, all of the time.”
Hammack and Pilecki’s (2012) narrative framework is useful for better understanding why political bullshit is used by politicians and what elements make people receptive to it. In this framework, political bullshit can be considered a form of narrative used to politically influence people’s thinking and actions. According to this perspective, the language that politicians use serves the purpose of achieving social coherence for political narratives. The “stories” used by politicians help members of a society engage with a political program which is often long and complex, covering many different topics. Politicians and parties try to unite members of society by relying on more abstract and vague communication, which at the same time gives people a sense of continuity and personal coherence to deal with this complexity. This can be done, for example, by including in these narratives elements that are meaningful to the audience, such as references to social categories (“America”, “all of the people”, “Britishness”). We believe that, when this happens, such rhetoric becomes an instance of political bullshit to which people are receptive because it meets the meaning in solidarity principle of narratives: it fulfills a person’s need to see oneself as part of a larger collective with similar others within a particular time and place (Hammack & Pilecki, 2012). Political bullshit also meets another principle of the narrative framework in that it can put the mind in action: it can motivate people to engage in action (i.e., voting for a political party) as it may appeal to the audience’s emotions (e.g., pride, anger) and identification with political parties or issues.
Another stream of research on bullshit in politics concerns theoretical speculation focused on the Brexit campaign (e.g., Hopkin & Rosamond, 2018) and Donald Trump’s presidency (Kristiansen & Kaussler, 2018). Similarly, the idea of political bullshit has also been used to discredit political opponents or movements (e.g., the Brexit campaign; Ball, 2017). Nevertheless, empirical research is needed to examine whether individuals at different points of the political spectrum use more political bullshit or are more susceptible to it. Overall, while research has discussed the idea of bullshit in politics, it has not clearly conceptualized or, more importantly, attempted to measure this concept.
We define political bullshit as statements of political content that intend to persuade voters but are so vague and broad that they are essentially meaningless. We agree with Pennycook and colleagues’ (2016) observation that the politicians’ goal is often to “say something without saying anything [and] appear competent and respectful without concerning themselves with the truth” (p. 125). Therefore, political bullshit represents political communication which (1) has no regard for truth (Frankfurt, 1986/2005), (2) promotes one’s goals and agenda, i.e., it is proactive in persuading people to vote (Brandenburg, 2006; Hammack & Pilecki, 2012; Reisch, 2006), and (3) is essentially unclarifiable (Cohen, 2002). To this end, we view the political communication that Chomsky described as prototypical bullshit communication.
Our definition of political bullshit also resonates well with the finding that bullshitting has two facets: persuasive and evasive (Littrell et al., 2021; see also the difference between proactive and defensive bullshit; Brandenburg, 2006). It is political communication in particular that aims to persuade (attract voters, gain support for an agenda), while politicians often keep communication abstract in order to evade directly answering questions that might decrease support (Rogers & Norton, 2011). Combining these features, politicians might try to persuade using statements too vague to repel any voter (i.e., evading meaningful proposals). In this way, political bullshit departs from demagoguery, which involves lying (bullshit is indifferent to the truth) or making impossible promises (bullshit is too vague to promise anything meaningful). Political bullshit is also different from populism, as it does not imply any division between “us” and “them”, although populists might sometimes engage in bullshitting (Hopkin & Rosamond, 2018).
The Present Research
In the present research, we constructed political statements that we deemed political bullshit and whose endorsement should serve as a measure of political bullshit receptivity. As argued previously, “it is not the understanding of the recipient of bullshit that makes something bullshit, it is the lack of concern (and perhaps even understanding) of the truth or meaning of statements by the one who utters it” (Pennycook et al., 2016, p. 125). Therefore, we constructed political statements that lack actual meaning and are indifferent to truth. The threefold aim of the present study was to: (1) theoretically outline the concept of political bullshit, (2) operationalize receptivity to political bullshit, and (3) explore the relation of political bullshit receptivity to other traits, ideology, and voting behavior.
We expected positive correlations between the endorsement of political bullshit statements, slogans, and bullshit political programs, since all three measures fulfill our criteria for political bullshit (convergent validity). We expected that each of these measures would show a one-factor structure (factorial validity). We also included factual political statements that have the same format as the political bullshit statements but should not be regarded as persuasive (see the difference between mundane and pseudo-profound statements; Pennycook et al., 2015). Next, we hypothesized that endorsement of political bullshit would be positively correlated with receptivity to pseudo-profound bullshit, because both are types of bullshit (cf. Čavojová et al., 2022). Based on information-processing ideological asymmetries (Jost, 2017) and the literature on pseudo-profound bullshit, we hypothesized that receptivity to political bullshit would positively correlate with conservatism (Pfattheicher & Schindler, 2016) and neoliberalism (Sterling et al., 2016). Finally, we expected a low correlation with populism, because these are different concepts (discriminant validity). The prediction of voting behavior was exploratory, as there are no conclusive results in the literature (Nilsson et al., 2019). All these hypotheses were pre-registered at the Open Science Framework (OSF; see Supplementary Materials).
To address these research questions, we conducted three pre-registered studies in early 2020 in three countries—the U.S., the Netherlands, and Serbia. We also performed a meta-analysis to average the effects and test for homogeneity, which would indicate whether the patterns vary across countries. Socio-politically, these countries are somewhat similar: they are all liberal democracies, rely on a market economy, and belong to Western culture. However, some large differences also exist: the U.S. is a presidential democracy with a two-party system, while Serbia and the Netherlands are parliamentary democracies with multi-party systems. Additionally, Serbia is not a WEIRD country (compared to the U.S. and the Netherlands) and has a socialist history. Furthermore, the main denomination in Serbia is Orthodox Christianity, and the country also has strong Oriental influences (due to long Ottoman rule). These cross-country differences allowed us to test the validity of the concept across different political contexts and obtain more precise estimates.
Overview of the Studies
All three studies had a correlational design, used survey methodology, and were programmed in Qualtrics. Each study was conducted in the native language (English, Serbian, and Dutch) after translation and back-translation by native speakers. Thus, participants answered the same scales, except for voting behavior, as parties/candidates differ across countries. Before the main studies, political bullshit statements, factual political statements, and slogans were piloted (see Supplementary Materials on OSF, section Pilot Study). Materials for the pilot and main studies (in all languages), datasets, analysis scripts, and outputs are available as an Online Supplement (for sample size determination, see the Bayesian explanation document). None of the studies had missing data, as each question required an answer before participants could move to the next page. Data analyses were performed in JASP (Bayesian correlations) and R (all other analyses; packages: lavaan for confirmatory factor analyses, metaBMA for meta-analyses). Appendices to which we refer in the text can be found in the Supplementary Materials, under Online supplement.
Study 1 – U.S. Sample
In total, 183 participants from Prolific completed the survey, which took around nine minutes. They were paid £1.09 for their time. To ensure high-quality data, we included two attention checks; if either was answered incorrectly, the survey terminated. As pre-registered, we excluded four participants as multivariate outliers (Mahalanobis distance, α = .01) on the continuous dependent variables (all that participants filled out), while no participants were excluded based on the pre-registered minimum completion time (180 seconds). Therefore, we analyzed the data of 179 participants (58.1% female, 36.9% male, 5.0% “other”; Mage = 29.6, SDage = 10.7). One participant reported education below high school, 44 had completed high school, 72 were students, 24 had an undergraduate degree, and 38 had a graduate degree.
After reporting their demographic characteristics, participants completed two single-item indicators of the ideological position on social and economic issues using a seven-point scale (1 = left-wing to 7 = right-wing). Participants next filled out the following scales in the order in which they are presented here (items were randomized within the scales). Descriptives and reliabilities of scales are given in Table 1.
Pseudo-profound bullshit receptivity was measured with ten items which participants rated for profundity (1 = not at all profound to 5 = very profound) (Pennycook et al., 2015). Five truly meaningful statements were included as fillers to conceal the emptiness of the pseudo-profound items.
Bullshit political program endorsement. Another political bullshit measure was the endorsement of three political programs for presidential elections in the fictitious country of Gonfel. We constructed these programs to be meaningless and empty. Participants rated (1) how much they would support the program and (2) how likely they would be to vote for the candidate (1 = not at all to 5 = very). In total, therefore, six items measured bullshit political program endorsement.
Together with the bullshit political programs, three meaningful political programs were included as fillers to conceal the emptiness of the bullshit programs. Therefore, participants read six political programs (12 items), but the meaningful programs were not used in the analyses. Next, participants rated how convincing five slogans (e.g., “I believe in Gonfel!”) were (1 = not at all convincing to 5 = very convincing). This was followed by rating how persuasive ten political bullshit statements were (e.g., “To politically lead the people means to always fight for them”), which were presented together with five factual political statements (e.g., “The president and prime minister have important political functions”).
Support for populism was measured by indicating agreement with four items (e.g., “Politicians should listen more closely to the problems the people have”) on a five-point scale (Elchardus & Spruyt, 2016). Support for Neoliberalism was measured using two items (“The free market economic system is a fair system” and “The free market economic system is an efficient system”) from the Fair Market Ideology scale (Jost, Blount, Pfeffer, & Hunyady, 2003) on which participants indicated agreement using a five-point scale.
Voting behavior. Participants answered whether they (1) voted in the last U.S. presidential election (2016). If they indicated “yes”, they were asked to (2) indicate the candidate they voted for. Next, participants indicated whether they (3) planned to vote in the next U.S. presidential election (2020). If they answered “yes”, participants were asked to (4) indicate which candidate they would vote for.
Results and Discussion
Factor Structure of Political Bullshit Measures
To check whether mean scores could be calculated for each of the political bullshit measures, we tested their factorial structure. Confirmatory factor analysis (CFA; estimator: maximum likelihood) of the ten political bullshit statements showed acceptable fit for a one-factor solution, χ2(35) = 73.45, p < .001, RMSEA = .078, SRMR = .049, CFI = .934, TLI = .915 (cut-offs: RMSEA < .08, SRMR < .08, CFI > .90, TLI > .90; Ben-Shachar, Lüdecke, & Makowski, 2020; Hopwood & Donnellan, 2010). A CFA of the five slogans showed poor fit for a one-factor solution (without residual covariances), χ2(5) = 31.12, p < .001, RMSEA = .171, SRMR = .071, CFI = .873, TLI = .747. Modification indices suggested that the residuals of items 3 and 5 (MI = 10.5) and of items 4 and 5 (MI = 15.7) should be correlated. After adding these residual correlations, the model showed a good fit, χ2(3) = 5.558, p = .135, RMSEA = .069, SRMR = .030, CFI = .988, TLI = .959. However, given that residual covariances should not be included without theoretical or methodological justification, we also investigated the factor structure using EFA. The EFA (estimation method: minimum residual; rotation: promax) showed that one factor (38% of variance explained) should be retained according to both parallel analysis and the eigenvalues > 1 rule. All slogans had factor loadings higher than .40 (Appendix A in the Supplementary Materials contains the factor loadings of all political bullshit measures in all three countries).
Each of the three bullshit political programs had two responses—likelihood to (1) vote for the candidate and (2) support the program. Besides one higher-order factor, a proper CFA model would include residual covariances between the two questions related to the same program and between the three questions sharing the same format across programs (one about voting and one about support). However, we could not include these residual covariances because the model would become saturated. Therefore, we performed an EFA (same estimation method and rotation) on the six items concerning the three programs, which indicated a one-factor solution (52% of variance explained) based on both parallel analysis and the eigenvalues > 1 rule. All items had factor loadings higher than .60 (see Appendix A, Supplementary Materials). Overall, the factor structure indicates that all three measures are best conceptualized as one-factor measures, which is why we calculated mean scores.
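The eigenvalues > 1 retention rule used in these EFAs can be illustrated on a toy correlation matrix. This is a minimal sketch, not the authors' actual R analysis; the five-item equicorrelation matrix and the value r = .40 are hypothetical:

```python
import numpy as np

# Hypothetical correlation matrix for five items sharing one common factor:
# all pairwise correlations equal .40. For such an equicorrelation matrix the
# eigenvalues are 1 + (p - 1) * r = 2.6 (once) and 1 - r = .60 (p - 1 times),
# so the eigenvalues > 1 rule retains exactly one factor.
r, p = 0.40, 5
corr = np.full((p, p), r)
np.fill_diagonal(corr, 1.0)

eigenvalues = np.linalg.eigvalsh(corr)  # returned in ascending order
n_factors = int(np.sum(eigenvalues > 1.0))
print(n_factors)  # 1
```

Parallel analysis refines this rule by retaining only eigenvalues that exceed those obtained from random data of the same dimensions.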
Inter-correlations, descriptives, and reliabilities of the scales are given in Table 1. As the table shows, there are medium to high correlations between the political and pseudo-profound bullshit measures. Endorsement of factual political statements was also correlated with all bullshit measures, suggesting a response bias; that is, some participants were susceptible to any kind of political statement. To partial out this variance and thus investigate correlations with political bullshit sensitivity, we calculated partial correlations between the bullshit measures while controlling for factual political statements (see Appendix B1, Supplementary Materials). These indicated that the correlations between the bullshit measures remained even after controlling for response bias. This, together with the factor analyses, indicates that there is a susceptibility to this kind of bullshit beyond response bias.
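The partial-correlation logic amounts to correlating two measures after removing the variance each shares with the covariate (here, factual-statement endorsement). A minimal sketch of that logic, for illustration only — the actual analyses were run in JASP/R, and the function and variable names below are hypothetical:

```python
import math

def pearson(a, b):
    # Pearson correlation between two equal-length lists
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def residuals(y, z):
    # Residuals of a simple OLS regression of y on z (with intercept)
    n = len(y)
    mz, my = sum(z) / n, sum(y) / n
    beta = (sum((zi - mz) * (yi - my) for zi, yi in zip(z, y))
            / sum((zi - mz) ** 2 for zi in z))
    alpha = my - beta * mz
    return [yi - (alpha + beta * zi) for yi, zi in zip(y, z)]

def partial_corr(x, y, z):
    # Correlation between x and y with z (e.g., factual-statement
    # endorsement) partialled out of both
    return pearson(residuals(x, z), residuals(y, z))
```

Regressing both variables on the covariate and correlating the residuals is equivalent to the textbook formula r_xy·z = (r_xy − r_xz r_yz) / √((1 − r_xz²)(1 − r_yz²)).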
Table 1. Descriptives, Reliabilities, and Inter-Correlations of the Scales (Study 1)

| Measure | M (SD) | α | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1. Political bullshit statements | 3.14 (.74) | .86 | – | | | | | | | |
| 2. Political factual statements | 1.83 (.90) | .83 | .406*** | – | | | | | | |
| 3. Slogans | 2.47 (.78) | .75 | .533*** | .275*** | – | | | | | |
| 4. Bullshit programs | 2.74 (.90) | .86 | .327*** | .241** | .303*** | – | | | | |
| 5. Pseudo-profound bullshit | 2.36 (.83) | .89 | .432*** | .361*** | .449*** | .353*** | – | | | |
| 6. Ideology social | 2.59 (1.58) | — | .068 | .120 | .004†† | .263*** | -.020†† | – | | |
| 7. Ideology economy | 3.30 (1.72) | — | .033† | .090 | .063† | .223** | -.011†† | .745*** | – | |
| 8. Neoliberalism support | 3.02 (1.07) | .77 | .144 | .118 | .198* | .333*** | .094 | .469*** | .574*** | – |
| 9. Populism | 3.55 (.61) | .51 | .121 | .012− | .186+ | .202+ | .211+ | .201+ | .101 | .026− |
Note. One-item measures do not have Cronbach’s α.
***Bayes factor in favor of the alternative positive hypothesis against the null (BF+0) > 100; **BF+0 > 15; *BF+0 > 4. ††Bayes factor in favor of the null against the alternative positive (BF0+) > 10; †BF0+ > 4. +Bayes factor in favor of the alternative against the null (BF10) > 3; −Bayes factor in favor of the null against the alternative (BF01) > 3. Unmarked correlations have Bayes factors (in favor of either the null or the alternative positive) between 1 and 3, meaning the evidence is inconclusive (more data are needed).
Interestingly, only some political bullshit measures correlated with the ideological measures (social and economic ideology, neoliberalism) and populism. Endorsement of bullshit political programs correlated with all ideological measures and populism. While slogans correlated with support for neoliberalism and populism (but not with the two ideology measures), political bullshit statements did not correlate with any of them. Finally, participants distinguished between political bullshit statements and factual ones: political bullshit statements were rated as more persuasive than the factual statements (BF+0 > 100).
Lastly, we tested whether the political bullshit measures could predict voting behavior. As Appendix C1 (Supplementary Materials) shows, most of the respondents voted in the 2016 elections (mostly for Hillary Clinton) and planned to vote in the 2020 elections. As for the 2020 elections, Sanders, Biden, and Trump accounted for most of the planned votes. We tested whether political bullshit statements, slogans, and bullshit political programs predicted the candidate voted for in the 2016 elections (binary logistic regression with outcomes “Trump” and “Clinton”; ns = 20 and 75, respectively) and the intended candidate in the 2020 elections (multinomial logistic regression with outcomes “Trump”, “Sanders”, and “Biden”; ns = 26, 58, and 59, respectively). As pre-registered, we only selected cells that contained more than 10% of the data (two candidates in the 2016 elections and three in 2020). Political bullshit could distinguish between the candidates one voted for: the probability of having voted for Trump (vs. Clinton) increased with higher scores on bullshit program endorsement (B = .737, OR = 2.09, z = 2.275, p = .02). Similarly, when predicting voting choice in the 2020 elections, a higher score on bullshit programs was associated with a higher likelihood of being a Trump voter compared to Biden, B = .823, OR = 2.28, z = 2.650, p = .008, and Sanders, B = .575, OR = 1.78, z = 1.878, p = .060 (Figure 1). Importantly, the same pattern of results remains when the factual political statements are controlled for (see the note at the end of Appendix C1 in the Supplementary Materials).
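The odds ratios above follow directly from the logistic regression coefficients: a coefficient B is a change in log-odds, so OR = e^B. A quick sketch checking the conversion against the reported values (illustrative only; the regressions themselves were fitted in R):

```python
import math

def odds_ratio(b):
    # Exponentiating a log-odds coefficient gives the multiplicative
    # change in odds per one-unit increase in the predictor
    return math.exp(b)

# Coefficients for bullshit program endorsement reported in the text
print(round(odds_ratio(0.737), 2))  # 2.09 (Trump vs. Clinton, 2016)
print(round(odds_ratio(0.823), 2))  # 2.28 (Trump vs. Biden, 2020)
print(round(odds_ratio(0.575), 2))  # 1.78 (Trump vs. Sanders, 2020)
```

For example, an OR of 2.09 means the odds of a Trump (vs. Clinton) vote roughly doubled with each one-point increase on the bullshit program endorsement scale.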
In sum, our operationalization of political bullshit showed good validity. Three measures—political bullshit statements, slogans, and programs—were moderately-to-highly correlated with each other and with pseudo-profound bullshit (convergent validity). This relationship held even after controlling for answers on factual political statements. While the factorial validity (one-factorial structure) could be improved, political bullshit measures had, as predicted, relatively low or no correlations with populism, demonstrating discriminant validity. Two political bullshit measures (slogans and bullshit programs) correlated with neoliberalism, while program bullshit also correlated with both ideological measures (concurrent validity). However, political bullshit statements showed inconclusive correlations with ideological measures, with some support for the null hypothesis regarding the relation to economic ideology. We also found some support that program bullshit predicts voting behavior—Trump’s voters were more susceptible to it in both 2016 and 2020.
Study 2 – Serbian Sample
In total, 254 participants were recruited by distributing the study link in Facebook posts of different news pages in Serbia. Sixty-eight participants were excluded for failing the attention check questions. One participant was removed as a multivariate outlier (Mahalanobis distance), while no participants were removed for speeding. This left a final sample of 185 participants (55.7% female, 44.3% male; Mage = 37.3, SDage = 11.3). Three participants reported education below high school, 47 had completed high school, 26 were students, and 109 had a university degree.
Participants completed the Serbian translation of the materials used with the U.S. sample. The only difference was in the questions about voting behavior. Participants in Serbia also first indicated whether they (1) voted in the last parliamentary elections (2016). If they answered “yes”, participants were asked to (2) indicate which party they voted for (see Supplementary Materials). Finally, participants were presented with the following question: “If you were offered the following lists for the 2020 parliamentary elections, which one would you choose regardless of the boycott?” The question was phrased in this way because, at the time of conducting the study, most of the opposition parties had declared a boycott of the 2020 elections in Serbia. Therefore, the question about the next elections could not be phrased in the same way as in the U.S. and the Netherlands. The offered parties are listed in the Supplementary Materials.
Results and Discussion
For brevity, we fully report the factor structure of the political bullshit measures for the Serbian and Dutch samples at the OSF (see Supplementary Materials, document “Factor structure of political bullshit measures in Serbian and Dutch sample” and Appendix A, document “Appendices”, under Online supplement). In Serbia, we also performed an EFA on the political bullshit statements, as the CFA did not show a good fit. A one-factor solution in Serbia explained 30% of the variance for political bullshit statements, 36% for slogans, and 58% for bullshit programs. In the Netherlands, the CFA on political bullshit statements showed a good fit, while the proportion of variance explained was 34% for slogans and 52% for bullshit programs. Overall, we found a pattern similar to the U.S. sample, indicating that each measure can be conceptualized as a one-factor measure.
All correlations between measures, descriptives, and reliabilities of the scales are given in Table 2. As in the first study, all bullshit measures—both political and pseudo-profound—correlated positively. Endorsement of factual political statements was correlated with political bullshit statements and slogans, but not with bullshit programs or pseudo-profound bullshit. This again suggested some response bias. As in Study 1, positive correlations were found between all bullshit measures when controlling for factual political statements (Appendix B2, Supplementary Materials).
Table 2. Descriptives, Reliabilities, and Inter-Correlations of the Scales (Study 2)

| Measure | M (SD) | α | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1. Political bullshit statements | 3.05 (.78) | .80 | | | | | | | | |
| 2. Political factual statements | 2.69 (.90) | .64 | .389*** | | | | | | | |
| 3. Slogans | 2.33 (.84) | .72 | .538*** | .316*** | | | | | | |
| 4. Bullshit programs | 2.75 (1.15) | .89 | .210* | .107 | .416*** | | | | | |
| 5. Pseudo-profound bullshit | 2.57 (.81) | .84 | .310*** | .128 | .284*** | .291*** | | | | |
| 6. Ideology social | 3.88 (1.80) | — | .141 | .001†† | .279*** | .409*** | .126 | | | |
| 7. Ideology economy | 3.59 (1.86) | — | .041† | .033† | .015† | .045† | .056† | .250** | | |
| 8. Neoliberalism support | 3.34 (1.16) | .81 | .146 | .079† | .219* | .040† | .211* | -.058†† | .228** | |
| 9. Populism | 3.47 (.74) | .58 | .040− | .020− − | .205+ | .333+++ | .169 | .202+ | -.018− − | .115− |
Note. One-item measures do not have Cronbach’s α. ***BF+0 > 100; **BF+0 > 20; *BF+0 > 10. ††BF0+ > 10; †BF0+ > 3. +++BF10 > 100; +BF10 > 3. − −BF01 > 10; −BF01 > 3. For unmarked correlations, the evidence is inconclusive.
In line with the first study, only some of the political bullshit measures correlated with the ideological measures and populism. Endorsement of both bullshit programs and slogans correlated with ideology on social issues and with populism, while slogans also correlated with support for neoliberalism. As in Study 1, political bullshit statements did not correlate with any of these measures. Both studies thus showed a comparable pattern: moderate-to-high intercorrelations between the bullshit measures, some positive correlations between political bullshit measures and ideological measures (ideology and neoliberalism support), and null-to-small correlations with populism. Finally, as in Study 1, political bullshit statements were rated as more persuasive than the factual statements, BF+0 > 100.
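The partial correlations reported in Appendix B2 follow from the standard first-order partial-correlation formula applied to the zero-order correlations in Table 2. A minimal sketch (the pairing of measures below is our illustrative choice, not the paper's analysis code):

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation of x and y controlling for z:
    r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Zero-order correlations taken from Table 2: slogans vs. bullshit
# programs, controlling for endorsement of political factual statements.
r_slogans_programs = 0.416
r_slogans_factual = 0.316
r_programs_factual = 0.107

print(round(partial_corr(r_slogans_programs,
                         r_slogans_factual,
                         r_programs_factual), 3))  # → 0.405
```

The controlled correlation (.405) barely differs from the zero-order one (.416), consistent with the text's conclusion that controlling for factual statements leaves the correlations positive.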
We also tested whether the political bullshit measures could predict voting behavior. Appendix C2 (Supplementary Materials) shows how participants responded to the three voting questions. While most of the respondents voted in the 2016 elections, votes were spread across many parties (only “SNS”, “DJB”, and “DS” passed the threshold, and note that cell sizes were small: ns = 16, 21, and 15, respectively). For the 2020 elections, none of the parties passed the required 10% for data analysis. Therefore, we only tested whether the political bullshit measures predicted the party voted for in 2016. A multinomial logistic regression with outcomes “SNS”, “DJB”, and “DS” showed that political bullshit did not predict which party a participant voted for, χ2(6) = 8.152, p = .227, R2McFadden = .07.
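The χ2(6) statistic above is a likelihood-ratio test comparing a multinomial logistic model containing the three political bullshit predictors against an intercept-only model, with McFadden's R² = 1 − ll_full/ll_null. The logic can be sketched from scratch on invented data (a plain gradient-ascent fit with one predictor; this is not the software or data used in the paper):

```python
import math

def chi2_sf(x, df):
    # Chi-square survival function; this closed form is valid for even df.
    assert df % 2 == 0
    term, total = 1.0, 0.0
    for i in range(df // 2):
        total += term
        term *= (x / 2) / (i + 1)
    return math.exp(-x / 2) * total

def loglik(W, X, y):
    # Multinomial (softmax) log-likelihood; class 0 is the reference.
    ll = 0.0
    for xi, yi in zip(X, y):
        scores = [0.0] + [sum(w * v for w, v in zip(row, xi)) for row in W]
        m = max(scores)
        ll += scores[yi] - m - math.log(sum(math.exp(s - m) for s in scores))
    return ll

def fit(X, y, k, lr=0.5, iters=3000):
    # Plain batch gradient ascent on the concave softmax log-likelihood.
    p = len(X[0])
    W = [[0.0] * p for _ in range(k - 1)]
    for _ in range(iters):
        g = [[0.0] * p for _ in range(k - 1)]
        for xi, yi in zip(X, y):
            scores = [0.0] + [sum(w * v for w, v in zip(row, xi)) for row in W]
            m = max(scores)
            exps = [math.exp(s - m) for s in scores]
            z = sum(exps)
            for c in range(1, k):
                err = (1.0 if yi == c else 0.0) - exps[c] / z
                for j in range(p):
                    g[c - 1][j] += err * xi[j]
        for c in range(k - 1):
            for j in range(p):
                W[c][j] += lr * g[c][j] / len(X)
    return W

# Invented data: one "receptivity" score and a three-party vote.
score = [-2 + 4 * i / 29 for i in range(30)]
vote = [0 if s < -0.7 else (1 if s < 0.7 else 2) for s in score]

X_full = [[1.0, s] for s in score]  # intercept + predictor
X_null = [[1.0] for _ in score]     # intercept-only null model

ll_full = loglik(fit(X_full, vote, 3), X_full, vote)
ll_null = loglik(fit(X_null, vote, 3), X_null, vote)

lr_stat = 2 * (ll_full - ll_null)   # likelihood-ratio chi-square
df = (3 - 1) * (2 - 1)              # (k - 1) x number of extra predictors
r2_mcfadden = 1 - ll_full / ll_null
print(round(lr_stat, 2), chi2_sf(lr_stat, df), round(r2_mcfadden, 3))
```

With three predictors and three outcome categories, as in the paper's model, the degrees of freedom are (3 − 1) × 3 = 6, matching the reported χ2(6).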
Overall, the intercorrelations showed similar patterns in this study and Study 1, indicating good convergent validity, while some differences emerged on the ideological variables, possibly due to country differences. Given that the U.S. and Serbia are socio-politically different and that the studies employed different sampling strategies, small differences in the relationships between the measures were not surprising. We now report the findings from the Netherlands—a country similar to the U.S. in some respects (e.g., a WEIRD country) and to Serbia in others (e.g., a European parliamentary democracy).
Study 3 – The Netherlands
We recruited participants in two ways: students completed the survey in return for research credits (n = 51), and we recruited additional participants through Prolific (n = 143). In total, 172 participants completed the survey without failing attention checks. One participant was removed as a multivariate outlier (Mahalanobis distance) and one for speeding, leaving a final sample of 170 participants (45.3% female, 54.7% male; Mage = 27.1, SDage = 10.1). No participants indicated education lower than high school; 23 had completed high school, 83 were students, 37 had an undergraduate degree, and 27 had a postgraduate university degree.
Participants completed Dutch translations of the materials used in the previous studies. Regarding voting behavior, participants in the Netherlands first indicated whether (1) they voted in the last parliamentary elections (2017), with options “yes”, “no”, and “I was not allowed to vote” (e.g., minors). If they indicated “yes”, they were asked to (2) indicate for which party they voted. Next, participants were asked whether they intended to vote in the next (2021) parliamentary elections (“yes” or “no”). If they answered “yes”, they were asked to indicate the party for which they would vote (see Supplementary Materials).
Results and Discussion
All correlations between measures, descriptive statistics, and scale reliabilities are given in Table 3. As in the previous studies, there were moderate-to-high correlations between all bullshit measures—political (political bullshit statements, slogans, and bullshit programs) and pseudo-profound—except for the correlation between slogans and pseudo-profound bullshit. Endorsing political factual statements correlated with all measures except slogans and populism. As in the previous studies, the correlations between bullshit measures remained positive when controlling for political factual statements (Appendix B3, Supplementary Materials).
Table 3

| Measure | M (SD) | α | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1. Political bullshit statements | 3.02 (.68) | .83 |  |  |  |  |  |  |  |  |
| 2. Political factual statements | 2.51 (1.06) | .87 | .513*** |  |  |  |  |  |  |  |
| 3. Slogans | 2.26 (.65) | .71 | .377*** | .139 |  |  |  |  |  |  |
| 4. Bullshit programs | 2.29 (.91) | .86 | .397*** | .270*** | .310*** |  |  |  |  |  |
| 5. Pseudo-profound bullshit | 2.58 (.79) | .87 | .402*** | .252** | .163 | .318*** |  |  |  |  |
| 6. Ideology social | 3.26 (1.80) | — | .239** | .303*** | .077† | .293*** | .137 |  |  |  |
| 7. Ideology economy | 3.87 (1.86) | — | .203* | .262** | -.010†† | .165 | .197* | .607*** |  |  |
| 8. Neoliberalism support | 3.04 (1.16) | .61 | .249** | .216** | .145 | .237** | .193* | .370*** | .486*** |  |
| 9. Populism | 2.91 (.74) | .69 | .141 | .102− | -.104−− | .204+ | .151 | .117− | .049− | -.017−− |

Note. One-item measures do not have Cronbach’s α. ***BF+0 > 100; **BF+0 > 10; *BF+0 > 3. ††BF0+ > 10; †BF0+ > 3. +BF10 > 3. −−BF01 > 10; −BF01 > 3. For unmarked correlations, the evidence is inconclusive.
As in the previous studies, only some of the political bullshit measures correlated with the ideological measures and populism. While political bullshit statements correlated with all ideological variables, endorsement of slogans did not correlate with any of them. Bullshit programs, on the other hand, correlated with social ideology and neoliberalism support, and constituted the only political bullshit measure that correlated with populism. Overall, the pattern of correlations—moderate-to-high intercorrelations between bullshit measures, some positive correlations between political bullshit measures and ideological measures (ideology and neoliberalism support), and low correlations with populism—was obtained in all three studies. Finally, as in the previous studies, political bullshit statements were rated as more persuasive than the factual statements, BF+0 > 100.
We also tested whether the political bullshit measures could predict voting behavior. Since only three parties (VVD, D66, GL) passed the 10% threshold for the analyses, both for the previous 2017 elections (ns = 11, 16, and 34, respectively) and the next 2021 elections (ns = 16, 17, and 38, respectively; Appendix C3, Supplementary Materials), we conducted multinomial logistic regressions to test whether the political bullshit measures predicted for which of these parties one voted/would vote. The model predicting the party a participant voted for in 2017 was marginally significant, χ2(6) = 11.200, p = .082, R2McFadden = .09. Specifically, a higher score on political bullshit statements increased the probability that one had voted for the conservative-liberal VVD party (as compared to the centrist D66), B = 1.747, OR = 5.74, SE = .89, p = .051. A higher score on bullshit programs increased the probability that one had voted for the VVD as compared to the left-wing GL party, B = .890, OR = 2.43, SE = .48, p = .063. For the 2021 elections, the political bullshit measures did not predict the parties for which participants planned to vote, χ2(6) = 4.230, p = .646, R2McFadden = .03.
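The odds ratios reported above are the exponentiated logit coefficients (OR = e^B), a standard identity for logistic models that can be checked directly against the reported values:

```python
import math

# Coefficients and odds ratios as reported for the 2017 Dutch vote models.
for b, reported_or in [(1.747, 5.74), (0.890, 2.43)]:
    print(f"B = {b}: exp(B) = {math.exp(b):.2f}, reported OR = {reported_or}")
```

Both exponentiated coefficients reproduce the reported ORs to two decimals.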
We now report meta-analyses performed on the correlational results to investigate the associations across the three countries, and explore whether the political bullshit measures could predict voting behavior with parties grouped by ideology.
Meta-Analytic Effects of the Three Studies
To average effects across the three countries, we computed meta-analytic effects for the correlations between all bullshit measures, as well as for the correlations between the political bullshit measures on the one hand, and the ideological measures and populism on the other. For the justification of priors, see “Bayesian Meta-analysis” in the Bayesian explanation document (Supplementary Materials).
The meta-analytic effects for the relationships between the bullshit measures are given in Table 4; correlation coefficients are averaged across heterogeneity (fixed and random effects). For none of the correlations was there evidence in favor of random effects (BF > 3), suggesting that the effect sizes were similar across countries. Correlations were moderate to high, supporting the convergent validity of the bullshit measures.
Table 4

| Measure | 1 | 2 | 3 |
|---|---|---|---|
| 1. Political bullshit statements | – |  |  |
| 3. Bullshit programs | .302** | .304** | – |
| 4. Pseudo-profound bullshit | .374*** | .284** | .314*** |

Note. ***BF+0 > 100; **BF+0 > 10.
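The paper averages correlations with a Bayesian model-averaging procedure; the underlying pooling logic can be illustrated with its simpler frequentist analogue, fixed-effect pooling via Fisher's z. The sketch below uses the per-country programs–pseudo-profound correlations from Tables 2 and 3 (.291 and .318); the U.S. correlation (0.30) and all three sample sizes are placeholders for illustration only:

```python
import math

def fixed_effect_meta_r(rs, ns):
    """Fixed-effect average of correlations via Fisher's z.

    Each r is transformed with z = atanh(r) and weighted by n - 3
    (the inverse of z's sampling variance), then back-transformed.
    """
    zs = [math.atanh(r) for r in rs]
    ws = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)

# .291 and .318 are from Tables 2 and 3; 0.30 and the ns are placeholders.
print(round(fixed_effect_meta_r([0.30, 0.291, 0.318],
                                [200, 180, 170]), 3))
```

The pooled estimate necessarily falls between the smallest and largest per-country correlations, which is the sense in which the meta-analytic coefficients in Table 4 "average out" effects across countries.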
We next performed the same analyses for the correlations between the political bullshit measures and the ideological measures and populism, with the exception that the alternative hypothesis for the correlation between political bullshit measures and populism was two-sided (as in the original studies). As Table 5 shows, bullshit programs showed the highest correlations of all political bullshit measures with the ideological measures and populism, while support for neoliberalism correlated with all political bullshit measures. However, the meta-analyses of the correlations between bullshit programs and neoliberalism, and between slogans and populism, showed support for random effects, suggesting that these correlations differed across countries; for these, the within-country correlations are more informative.
Table 5

| Measure | Ideology social | Ideology economic | Neoliberalism support | Populism |
|---|---|---|---|---|
| 1. Political bullshit statements | .146* | .094 | .174** | .095 |
| 3. Bullshit programs | .320** | .141* | .197*R | .241+ |

Note. **Bayes factor in favor of the one-sided alternative averaged across heterogeneity (BF+0) > 10; *BF+0 > 3. †Bayes factor in favor of the null against the one-sided alternative averaged across heterogeneity (BF0+) > 3. +Bayes factor in favor of the two-sided alternative averaged across heterogeneity (BF10) > 3. RBayes factor in favor of the random H1 against the fixed H1 > 3.
Probing Ideological Effect: Exploratory Analyses
Ideology and Political Bullshit
To better understand the relationship between ideology and political bullshit, we performed linear regressions predicting political bullshit (political bullshit statements, slogans, bullshit programs) from the three ideological measures (ideology on social and economic issues, and neoliberalism) entered simultaneously. This yielded nine linear regressions in total, three per country (one for each political bullshit measure). In the U.S., only neoliberalism was a significant predictor in all three regressions (βs > .181, ps < .05). In Serbia, social ideology was significant in all three regressions (βs > .158, ps < .05), while neoliberalism predicted political bullshit statements and slogans (βs > .163, ps < .05). In the Netherlands, neoliberalism was a significant predictor in all three regressions (βs > .177, ps < .05), while social ideology predicted bullshit programs (β = .285, p = .002). Overall, this pattern suggests that neoliberalism is the most important ideological predictor (significant in eight of nine regressions), while social ideology was significant in four analyses. Economic ideology, by contrast, was not associated with political bullshit. All regression tables are reported in Appendix E, Supplementary Materials.
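The standardized βs reported in these regressions are OLS coefficients computed on z-scored outcome and predictors. A self-contained sketch of that computation (the three "ideology" columns and the outcome below are invented for illustration; this is not the paper's data or software):

```python
import math

def zscore(v):
    # Standardize using the sample standard deviation (n - 1).
    m = sum(v) / len(v)
    sd = math.sqrt(sum((x - m) ** 2 for x in v) / (len(v) - 1))
    return [(x - m) / sd for x in v]

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small linear system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def standardized_betas(xs, y):
    # OLS on z-scored variables: betas solve (Zx'Zx) beta = Zx'zy.
    zx = [zscore(col) for col in xs]
    zy = zscore(y)
    k = len(zx)
    xtx = [[sum(a * b for a, b in zip(zx[i], zx[j])) for j in range(k)]
           for i in range(k)]
    xty = [sum(a * b for a, b in zip(zx[i], zy)) for i in range(k)]
    return solve(xtx, xty)

# Invented scores: three ideological predictors, one outcome.
ideo_social = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
ideo_econ = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]
neolib = [5, 3, 6, 2, 7, 4, 9, 6, 10, 8]
bullshit = [0.2 * s + 0.1 * e + 0.6 * n
            for s, e, n in zip(ideo_social, ideo_econ, neolib)]

betas = standardized_betas([ideo_social, ideo_econ, neolib], bullshit)
print([round(b, 3) for b in betas])
```

Because standardized βs are scale-free, they allow comparing the relative weight of predictors measured on different response scales, which is why the text can compare neoliberalism against the one-item ideology measures.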
Predicting Voting for Left, Center, and Right-Wing Parties on the Merged Dataset
Finally, we performed exploratory analyses on the combined datasets1. To increase the power for predicting voting behavior, we merged the three datasets, grouping the parties/candidates into left, center, and right categories for all three countries. Appendix D1 (Supplementary Materials) shows how the categories were formed across countries for the last elections. A multinomial regression showed that the political bullshit measures predicted the ideological category of the candidate/party a participant voted for, χ2(6) = 28.000, p < .001, R2McFadden = .055. Specifically, as Figure 2 shows, scoring higher on bullshit programs increased the probability of voting for right-wing options: left-wing voters had lower slopes than center voters (B = .508, z = 2.550, p = .011), who in turn had lower slopes than right-wing voters (B = 2.629, z = 3.316, p < .001).
We then tested whether the political bullshit measures predicted the ideological position of the candidate/party for which one would vote in the coming elections (Appendix D2, Supplementary Materials, shows the category formation). In contrast to the last elections, a multinomial regression showed that the political bullshit measures did not predict the ideology of the party one would vote for, χ2(6) = 7.660, p = .264, R2McFadden = .012. However, when political bullshit statements were the only predictor in the model, they distinguished between left and right voting options, B = .431, z = 2.11, p = .035; the same held for bullshit program endorsement, B = .327, z = 2.108, p = .035. These findings for the coming elections thus mirror the pattern found for the last elections.
General Discussion
The present research investigated receptivity to the kind of political communication we termed “political bullshit”. Admittedly, this term might be controversial, as “bullshit” has a different colloquial connotation, and something more neutral such as “political emptiness” could be more suitable. Indeed, as our studies show, endorsing such political messages correlated with actual voting behavior, indicating that voters do not perceive empty political statements as bullshit at all. However, using a different term would obscure the similarity of this concept to the well-established concept of (pseudo-profound) bullshit. We use the term “political bullshit” to label political communication in which politicians use vague but simplified statements to mobilize voters. Political bullshit meets the principles of Hammack and Pilecki’s (2012) narrative framework, as it can give people a sense of coherence in complex political programs and put “the mind in action” when mobilizing voters.
In three pre-registered studies in three countries, we developed three measures of political bullshit (statements, slogans, and programs) and found support for our operationalization of receptivity to this type of communication. In each country, as well as in the meta-analysis, we obtained moderate-to-high intercorrelations between the political bullshit measures and pseudo-profound bullshit. We also found a relationship between political bullshit endorsement and ideological orientation: right-wing and neoliberal participants were more receptive to political bullshit. Finally, political bullshit predicted voting behavior to some extent: endorsement of political bullshit (especially bullshit programs) was associated with a higher probability of voting for right-wing candidates/parties.
Political Bullshit Measures
First, we found evidence for convergent validity for political bullshit measures. Similarly to the attempts to operationalize other types of bullshit—scientific (Evans et al., 2020) and general (Čavojová et al., 2022)—political bullshit was consistently positively correlated with pseudo-profound bullshit. This builds on the idea that pseudo-profound bullshit is only one kind of bullshit among many to be operationalized (Pennycook et al., 2015; Sterling et al., 2016). Importantly, receptivity to bullshit might be generalizable across domains, meaning that one might be receptive to bullshit no matter whether it is political, pseudo-profound, or scientific (Čavojová et al., 2022; Evans et al., 2020).
Each political bullshit measure showed good reliability. The hypothesized one-factor solution for political bullshit statements showed an acceptable fit (in the Serbian sample, only after including a covariance between two similarly phrased items). This was not the case for slogans, while a proper CFA model for bullshit programs could not be tested. However, exploratory factor analyses yielded one-factor solutions, which justified the calculation of mean scores. Still, these measures require more work to achieve better psychometric properties (see also the limitations section for a discussion of the items’ content). Political bullshit statements and slogans had higher intercorrelations with each other than either had with bullshit programs. We assume this is due to the format: statements and slogans were one-sentence items, while bullshit programs were more elaborate. This also raises theoretical and empirical questions about the structure of political bullshit—whether it has a one- or two-factor structure (or even a higher-order factor). However, this was beyond the scope of the present paper, which aimed to establish the concept of political bullshit itself.
Our participants also recognized factual statements as such, rating them lower on persuasiveness in all three studies. Importantly, controlling for these did not substantially alter the correlations as all of them remained positive. This bias remains an important issue to be accounted for in bullshit receptivity research (Pennycook et al., 2015).
Political Bullshit, Ideology, and Voting
We found evidence that endorsing political bullshit is associated with higher support for the free market, even after controlling for ideological stance on social and economic issues. This is in line with the finding that free-market supporters are more susceptible to pseudo-profound bullshit (Sterling et al., 2016). Interestingly, the relationship with ideology on social and economic issues is less consistent: only bullshit programs endorsement was correlated with both social and economic ideological stance while political bullshit statements were correlated only with social ideology. It is surprising that political bullshit measures were consistently correlated with the support for the free market but not with economic ideology, given that the free market is an important feature of right-wing economics (Harvey, 2007). There might be at least two reasons for this inconsistency. Firstly, the support for the free market was measured with two items, while economic ideology was a one-item measure (higher measurement error). Secondly, right-wing economics also includes other concepts and not only the free market (less welfare state, private ownership). Therefore, one could have a free market with leftist concepts (public/state ownership) as in the case of market socialism. In this view, it is the free market, and not the right-wing economics per se, that is related to the endorsement of bullshit. This aligns with the idea that if one endorses vague concepts such as pseudo-profound bullshit or political bullshit, she would also endorse the free market which is based on obscure concepts such as the invisible hand (Sterling et al., 2016).
The explanation above might help in resolving similar inconsistencies in the literature. While Sterling and colleagues (2016) found that neoliberals in the U.S. endorse pseudo-profound bullshit more, Nilsson and colleagues (2019) did not find that economically right-wing individuals in Sweden do so. Interestingly, in terms of political bullshit, our study replicated both findings, regardless of country differences. While Nilsson and colleagues (2019) suggested that the discrepancy might have emerged from cultural differences between the U.S. and Sweden, our study renders this explanation less likely, as the same pattern emerged in all three countries. Their second explanation—that the two lines of work measured different constructs (a focus on the free market vs. a focus on redistribution/preference for equality)—is more plausible. As we discussed above, it could be specifically free-market proponents (and not economic ideologues more generally) who are susceptible to bullshit, because of their preference for simplicity and intuitiveness2.
In line with the theoretical framework of political narratives, we also found evidence that political bullshit can put the mind in action (Hammack & Pilecki, 2012). Political bullshit was associated with voting choice in the U.S. In the Netherlands, it was associated only with the party one had voted for, not with the intended vote in the coming elections. Interestingly, political bullshit was not associated with voting behavior in Serbia, probably due to the small sample size. Analyses on the merged dataset supported the notion that a higher score on political bullshit increased the probability of voting for right-wing parties. These results are in line with the finding that a favorable view of Republican candidates correlates with pseudo-profound bullshit receptivity (Pfattheicher & Schindler, 2016).
Although in line with previous research, the reason for ideological asymmetries in susceptibility to political bullshit remains unclear. One possibility lies in the above-mentioned ideological differences: right-wing individuals engage in simpler information processing (Jost, 2017). Given that individuals on the right have a more intuitive thinking style and rely more on heuristic processing (Jost & Krochik, 2014), they may fail to detect the vagueness of political bullshit. Indeed, as Sterling and colleagues (2016) found, the relationship between neoliberalism and pseudo-profound bullshit disappears when heuristic and biased processing and faith in intuition are included in the model. However, there are also reasons why conservatives and right-wing voters could be less receptive to vague political rhetoric: they have a larger need for certainty and closure (Jost, 2017), which political bullshit, being vague, cannot satisfy. The exact mechanism therefore remains to be investigated in future research.
Limitations and Future Directions
Our political bullshit measures were relatively short (between five and ten items), which might have increased measurement error. The addition of new items is warranted, and the CFA indices also showed room for improvement. The number of studies available for the meta-analysis was also small; we assume that, because of this limited power, only two correlations showed evidence for heterogeneity. It is reasonable to assume that the correlations between political bullshit and the ideological measures vary across countries due to different conceptualizations of ideology. Still, a strength of the meta-analytic approach is that it showed no evidence for heterogeneity in any of the correlations between the bullshit measures. This suggests that the intercorrelations were homogeneous across countries and speaks to the robustness of political bullshit as a concept.
The reliability of the populism measure (Elchardus & Spruyt, 2016) was relatively low (varying between .51 and .69). The low correlations with the political bullshit measures that we observed might therefore underestimate the true ones. However, the observed correlations were found consistently across studies, suggesting a genuine difference between the two constructs. Additionally, there are theoretical reasons for distinguishing between political bullshit, which is based on abstract and vague statements, and populism, which concerns the concrete notion that politicians should be similar and close to ordinary people. In any case, we believe that political bullshit and populism are distinct constructs; future research should investigate their relationship in more detail (e.g., the size of the relationship between receptivity to political bullshit and populism, or how politicians, as producers, use political bullshit and populism in real political programs).
We had a small number of observations in the voting categories, which decreased the power to detect relationships between political bullshit and voting behavior (e.g., the 2016 elections in Serbia) and precluded analyses of the 2020 elections in Serbia. However, the analyses in the U.S. and the Netherlands, and on the parties combined into left/center/right categories across all countries, support the idea that right-wing voters might be more susceptible to political bullshit. Another reason this might be the case is the different linguistic repertoires that left-wing and right-wing individuals employ (Sterling et al., 2020). For example, one of the bullshit programs mentioned the idea to “restore the soul of our country”, which arguably contains conservative rhetoric. It is therefore possible that our political bullshit items contained words more familiar to right-leaning individuals, so the relationship could be confounded by item content. However, while this limitation might explain the relationship with social ideology, it cannot explain why neoliberalism consistently predicted political bullshit even after accounting for social (and economic) ideology. Future research should investigate this question in more depth, preferably with items cleared of ideological content.
Another limitation is the relatively high correlations between the political bullshit measures and factual statements, which suggest a general tendency to endorse any type of political statement. This high correlation is most likely due to common-method variance, given that these statements were all presented together. As the exploratory factor analyses of political bullshit and factual statements, and the differences in means (participants endorsed factual statements to a lesser extent), demonstrate, participants did distinguish between the two. Future research should take into account the tendency to agree with all statements, and also attempt to reduce common-method bias.
Lastly, a natural question arises: who uses political bullshit? Our research does not answer this question; it only addresses who might be more receptive to it. One way to answer it would be to take an approach similar to that of Pennycook and colleagues (2015) with Deepak Chopra and pseudo-profound bullshit. If items actually uttered by politicians (cf. Deepak Chopra) show the same psychological reality as political (pseudo-profound) bullshit, this would be a strong argument that it is indeed political bullshit. In any case, we believe that all politicians use political bullshit to some extent, as exemplified in the introduction with Joe Biden and Donald Trump.