Original Research Reports

Recurrent Fury: Conspiratorial Discourse in the Blogosphere Triggered by Research on the Role of Conspiracist Ideation in Climate Denial

Stephan Lewandowsky*ab, John Cookbc, Klaus Oberauerdb, Scott Brophye, Elisabeth A. Lloydf, Michael Marriottg

Abstract

A growing body of evidence has implicated conspiracist ideation in the rejection of scientific propositions. Internet blogs in particular have become the staging ground for conspiracy theories that challenge the link between HIV and AIDS, the benefits of vaccinations, or the reality of climate change. A recent study involving visitors to climate blogs found that conspiracist ideation was associated with the rejection of climate science and other scientific propositions such as the link between lung cancer and smoking, and between HIV and AIDS. That article stimulated considerable discursive activity in the climate blogosphere—i.e., the numerous blogs dedicated to climate “skepticism”—that was critical of the study. The blogosphere discourse was ideally suited for analysis because its focus was clearly circumscribed, it had a well-defined onset, and it largely ceased after several months. We identify and classify the hypotheses that questioned the validity of the paper’s conclusions using well-established criteria for conspiracist ideation. In two behavioral studies involving naive participants, we show that those classifications were reproduced in a blind test. Our findings extend a growing body of literature that has examined the important, but not always constructive, role of the blogosphere in public and scientific discourse.

Keywords: rejection of science, conspiracist discourse, climate denial, Internet blogs

Journal of Social and Political Psychology, 2015, Vol. 3(1), doi:10.5964/jspp.v3i1.443

Received: 2014-11-13. Accepted: 2015-05-23. Published (VoR): 2015-07-08.

Handling Editor: Johanna Ray Vollhardt, Department of Psychology, Clark University, Worcester, MA, USA

*Corresponding author at: School of Experimental Psychology and Cabot Institute, University of Bristol, 12A Priory Road, Bristol, BS8 1TU, United Kingdom. E-mail: stephan.lewandowsky@bristol.ac.uk

This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

With all the hysteria, all the fear, all the phony science, could it be that manmade global warming is the greatest hoax ever perpetrated on the American people? I believe it is.

U.S. Senator James Inhofe, July 28, 2003

(149 Congressional Record S10012 — Science of Climate Change, p. 11)

The discovery by John Tyndall that CO2 is a greenhouse gas dates back over 150 years, and it was recognized more than a century ago that industrial CO2 emissions may alter the planet’s climate. During the last two decades, the scientific evidence that humans are interfering with the climate has become unequivocal. The vast majority of domain experts agree that the climate is changing due to human CO2 emissions (Anderegg, Prall, Harold, & Schneider, 2010; Cook et al., 2013; Doran & Zimmerman, 2009; Oreskes, 2004).

Given this broad agreement on the fundamentals of climate science, what cognitive mechanisms underlie the dissent from the consensus by a vocal minority of people (Leviston, Walker, & Morwinski, 2013)? At least two major variables have been identified. The primary variable involves people’s worldview or “ideology”; that is, a person’s basic beliefs about how society should be organized. People who embrace a laissez-faire version of the free market are less likely to accept the basics of climate change than people with an egalitarian-communitarian outlook (Dunlap & McCright, 2008; Feygina, Jost, & Goldsmith, 2010; Hamilton, 2011; Heath & Gifford, 2006; Kahan, 2010; Kahan, Jenkins-Smith, & Braman, 2011; Lewandowsky, Gignac, & Oberauer, 2013; Lewandowsky, Oberauer, & Gignac, 2013; McCright & Dunlap, 2011a, b).

A second variable, which we focus on here, is conspiracist ideation; that is, a person’s propensity to explain a significant political or social event as a secret plot by powerful individuals or organizations (Sunstein & Vermeule, 2009). The involvement of conspiracist ideation in the rejection of scientific propositions is widespread (Diethelm & McKee, 2009; Goertzel, 2010; Kalichman, 2009; McKee & Diethelm, 2010) and not altogether surprising: If a person rejects an overwhelming scientific consensus, such as the one for climate science (cf. Cook et al., 2013), then that person also needs to reject the possibility that the consensus emerged as the result of researchers converging independently on the same evidence-based view. Rejection of a scientific consensus thus calls for an alternative explanation of the very existence of that consensus, and the presumption of a conspiracy can provide that alternative (Diethelm & McKee, 2009; McKee & Diethelm, 2010; Smith & Leiserowitz, 2012). For example, the tobacco industry referred to research on the health effects of smoking in internal documents as “a vertically integrated, highly concentrated, oligopolistic cartel” (Abt, 1983, p. 127), which in combination with “public monopolies (...) manufactures alleged evidence, suggestive inferences linking smoking to various diseases, and publicity and dissemination and advertising of these so-called findings” (Abt, 1983, p. 126).

Conspiracism and Rejection of Climate Science

In the case of climate change, several qualitative analyses have shown that denial is suffused with conspiratorial themes, for example when dissenters are celebrated as “Galileos” who oppose a corrupt scientific “establishment”. Researchers have suggested that the dissenters’ creation of such a shared social reality provides a stage upon which public climate denial can unfold (McKewon, 2012b, a). Public accusations of conspiracies within the Intergovernmental Panel on Climate Change (IPCC) were aired in the opinion pages of the Wall Street Journal (WSJ) as early as 1996 (Lahsen, 1999; Oreskes & Conway, 2010), in a piece that alleged a “(...) disturbing corruption of the peer-review process.” The charges focused on a chapter of the 1995 IPCC report that was concerned with the attribution of global warming to human activities. The WSJ piece was authored by an individual who had no part in the IPCC process, and subsequent scholarly work traced the origin of the charge of conspiracy and corruption to a document produced by the Global Climate Coalition, a lobby group representing primarily the energy sector (Lahsen, 1999). In her analysis of this controversy, Lahsen (1999) concluded that conspiracy theories are “(...) rhetorical means by which to cast suspicion on scientific and political opponents” (p. 133). Accordingly, the titles of many recent popular books critical of mainstream climate science hint at a conspiracy. Table 1 shows a non-exhaustive sample of titles. Similarly, Koteyko, Jaspal, and Nerlich (2013) showed in an analysis of UK tabloid readers’ comments on climate change that contrarian constructions such as “tax scam” were a main theme of discourse that arguably could be considered conspiracist.

At a behavioral level, Smith and Leiserowitz (2012) found that among people who reject the findings from climate science, up to 40% of affective imagery invoked conspiracy theories. That is, when asked to provide the first word, thought, or image that came to mind in the climate context, statements such as “the biggest scam in the world to date” would be classified as conspiracist. Similarly, a pair of recent surveys uncovered an association between endorsement of unrelated conspiracy theories and the rejection of scientific propositions, including the findings from climate science (Lewandowsky, Gignac, & Oberauer, 2013; Lewandowsky, Oberauer, & Gignac, 2013). Because those articles form the basis of the present research, we describe them briefly.

Table 1

Recent Books That Reject Mainstream Climate Science and Contain Conspiracist Themes

Author | Title | Citation
James Inhofe | The Greatest Hoax—How the global warming conspiracy threatens your future | Inhofe (2012)
Larry Bell | Climate of Corruption—Politics and power behind the global warming hoax | Bell (2011)
Andrew Montford | The Hockey Stick Illusion—Climate and the corruption of science | Montford (2010)
Larry Solomon | The Deniers—The world-renowned scientists who stood up against global warming, political persecution and fraud | Solomon (2008)
Pat Michaels and Robert Balling | Climate of Extremes—Global warming science they don’t want you to know | Michaels & Balling (2009)
Brian Sussman | Climategate—A veteran meteorologist exposes the global warming scam | Sussman (2010)
Rael Jean Isaac | Roosters of the Apocalypse—How the junk science of global warming nearly bankrupted the Western world | Isaac (2012)

The first study sampled visitors to climate blogs (Lewandowsky, Oberauer, & Gignac, 2013), and the “in press” paper and data became available for download in July-August 2012. We thus refer to this paper as LOG12 from here on. The second study involved a nationally-representative sample of Americans and was published online, and the data made available, in October 2013 (Lewandowsky, Gignac, & Oberauer, 2013). We call this paper LGO13 from here on. In replication of much prior research (e.g., Heath & Gifford, 2006; Kahan et al., 2012), LOG12 and LGO13 identified political ideology—in particular, the strength of endorsement of laissez-faire free market economics—as being the principal driver of the rejection of climate science. Conspiracist ideation, measured by endorsement of items such as “A powerful and secretive group known as the New World Order are planning to eventually rule the world” constituted another but lesser contributing factor. Notably, notwithstanding the rather different pools of participants and differences in methodology, the size of the effect of conspiracist ideation on rejection of climate science (r ≈ .20 between latent variables) was virtually identical across both studies. The results mesh well with the findings by Smith and Leiserowitz (2012) and the body of prior research that has linked conspiracist ideation to the rejection of science (Diethelm & McKee, 2009; Goertzel, 2010; Kalichman, 2009; McKee & Diethelm, 2010) and the acceptance of pseudoscience (Lobato, Mendoza, Sims, & Chin, 2014).

Blogosphere Response to Research on Conspiracism

Although the results of LOG12 were not unexpected on the basis of prior research, the paper caused considerable discursive activity on “skeptical” Internet blogs. Some climate blogs arguably contain conspiracist themes, for example when expressing the belief that “. . . the alarmists who oversee the collection and reporting of the data simply erase the actual readings [of temperatures] and substitute their own desired readings in their place” (Taylor, 2012). Similarly, the blogosphere continues to reverberate with alleged scandals involving climate scientists. For example, whereas the public quickly lost interest in the “climategate” imbroglio involving scientists’ stolen e-mail correspondence in 2009 (Anderegg & Goldsmith, 2014), the opposite trend has been observed in the blogosphere, which has seen steadily increasing “climategate”-related content since 2009 (Lewandowsky, 2014a). The present article thus reports an in-depth analysis of the public response to LOG12 in 2012 and 2013. Because that critical public discourse was tightly focused on an identifiable and circumscribed issue and was time limited, it presented an ideal naturalistic experiment for a further examination of the role of conspiracist discourse in the rejection of climate science.

To provide full personal and historical context, we briefly summarize the history of the present article. Two of the present authors (S.L. and K.O.) also authored LOG12 and LGO13, and they therefore became aware of the initial “skeptical” blog response to LOG12. Because this public response seemed of sufficient relevance to the scholarly questions surrounding the involvement of conspiracist discourse in the rejection of science, a research project commenced that analyzed this public discourse. The relevance of this project was further underscored by repeated non-public attempts to prevent or delay the publication of LOG12 while it was in press and even after its publication. For example, when the ethical approval of the study became public through a freedom-of-information request, the editor of the journal that published LOG12 was approached with an ethical complaint and a request to retract LOG12 while it was still in press. An early summary of those events was reported elsewhere (Lewandowsky, Mann, Bauld, Hastings, & Loftus, 2013; Sleek, 2013), although attempts to effect a retraction of LOG12 have continued at least until late 2014.

The analysis of the blog-centered public response to LOG12 was published as a thematic analysis in an online open access journal (Lewandowsky, Cook, Oberauer, & Marriott, 2013) and confirmed the presence of conspiracist elements in the blog-centered discourse arising in response to LOG12. To our knowledge this article, called Recursive Fury from here on, became the most-read article in psychology ever published by that journal (approximately 65,000 page views and 10,000 downloads at the time of this writing). Recursive Fury also received some media attention, including in the New York Times (Gillis, 2013). After the journal received a barrage of complaints from a small number of individuals, the article was eventually withdrawn (in March 2014) for legal, but not academic or ethical reasons. The publisher deemed the legal risk posed by a non-anonymized thematic analysis too great. Upon withdrawal, a copy of Recursive Fury was posted by the first author’s host institution at a dedicated URL (http://uwa.edu.au/recursivefury), together with an affirmation of the University of Western Australia’s commitment to academic freedom. In connection with the posting of the article, the University of Western Australia’s General Counsel, Kimberley Heitman, stated publicly: “I’m entirely comfortable with you publishing the paper on the UWA web site. You and the University can easily be sued for hurt feelings or confected outrage, and I’d be quite comfortable processing such a phony legal action” (Lewandowsky, 2014b). The article was viewed and/or downloaded on the UWA website a further 13,000 times before it was replaced by a pointer to the present article. The university received no complaints or legal action in response to its public posting of Recursive Fury for over a year.

The withdrawal of Recursive Fury engendered a number of further events, including the very public resignation of three editors of the journal Frontiers in protest (and critical commentary by a fourth editor; Jones, 2014); an online petition by an NGO (Forecast the Facts) calling for the reinstatement of the paper that attracted nearly 2,000 signatures; and several opinion pieces in the media (including Scientific American) by one of the paper’s initial reviewers that were critical of the journal’s actions (e.g., McKewon, 2014). In addition, the Australian Psychological Society issued a statement that expressed “dismay” at the withdrawal of Recursive Fury; this position was publicly echoed by the (American) Union of Concerned Scientists. The withdrawal of the paper also stimulated at least one editorial in an academic journal, namely the Journal of the American Association of Nurse Practitioners (Pierson, 2014). At the other end of the spectrum, an anonymous team calling itself “NotStephan Lewandowsky” produced a YouTube parody (https://www.youtube.com/watch?v=bLQWRgj-k) of the events surrounding Recursive Fury by subtitling the “rant in the bunker” scene from the movie Downfall (starring Bruno Ganz as Hitler/subtitled as Lewandowsky).

The present article reports an anonymized and updated version of the thematic analysis reported by Lewandowsky, Cook, et al. (2013), together with two additional studies involving blind and naïve participants that confirm the initial conclusions advanced in Recursive Fury. The results provide an “existence proof” for the presence of conspiratorial discourse among “skeptic” climate blogs. Because two of the present authors also contributed to LOG12, several safeguards, described below, were put into place to manage the potential conflict of interest.

Study 1: Thematic Analysis

The purpose of the first study was to examine public discourse on climate “skeptic” blogs in response to the publication of LOG12 and to classify the content of this discourse into distinct themes that might potentially be considered conspiracist. For brevity, we use the term “blogosphere” from here on to characterize the collective activity on climate “skeptic” blogs. Our approach follows common narrative methods and is best classified as a “thematic analysis”. In a thematic analysis, the focus is exclusively on content—as opposed to the construction of a single “story” or analysis of linguistic structure. Emphasis in thematic analysis is on “(...) ‘what’ is said, rather than ‘how’, ‘to whom’, or ‘for what purpose’” (Riessman, 2008, pp. 53-54).

Method

Sampling of Content

Internet activity related to LOG12 was sampled using Google search. Results were limited to English-speaking sites and text. Comparative media analyses have shown that climate “skepticism” is particularly prevalent in Anglophone countries (Painter & Ashe, 2012); we therefore omitted content in other languages. An ongoing real-time web search was conducted by two of the authors (J.C. and M.M.) during the period August-October 2012. This daily search used Google Alerts to detect newly published material matching the search term “Stephan Lewandowsky” (henceforth abbreviated to S.L.). Each of those hits was then examined to establish whether it contained any recursive hypotheses, defined as any potentially conspiracist opinion that pertained to the article itself or its author, such as “Dr. Smith is a government agent,” or unsubstantiated and potentially conspiracist allegations pertaining to the article’s methodology, intended purpose, or analysis (e.g., “there were no human subjects”). The search employed a lenient criterion for inclusion in order to collect as broad a corpus for further analysis as possible, and a classification scheme based on the existing literature on conspiracist ideation (described below) was developed and refined as the search proceeded.

If new blog posts were discovered that featured links to other relevant blog posts not yet recorded, these were also included in the analysis. To ensure that the collection of material was as exhaustive as possible, Google was searched for links to the originating blog posts (i.e., first instances of a claim), thereby detecting any further references to the original content or deviations from it.

Although the search encompassed the entire (English-speaking) web, it became apparent early on that the response of the blogosphere was focused on a small number of principal sites. To formally identify those sites, we began by analyzing the 30 most-frequently read “skeptic” websites, as identified by Alexa rankings. Alexa is a private company, owned by Amazon, that collects data on web browsing behavior and publishes web traffic reports for highly trafficked sites. This enables comparison of the relative traffic of websites covering similar topics. Each of those 30 sites was then searched by Google for instances of S.L. that fell within the period 28 August – 18 October 2012. Sites that returned more than 10 hits were considered principal sites, and they are shown in Table 2.
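The principal-site selection just described amounts to a simple threshold rule. The sketch below illustrates it; the function name, site labels, and hit counts are hypothetical (only the counts for sites A, B, and C echo Table 2), and this is not the authors’ actual tooling:

```python
# Illustrative sketch of the principal-site filter: given per-site
# Google hit counts for "S.L." within 28 Aug - 18 Oct 2012, keep the
# sites with more than `threshold` hits, ordered by descending count.

def principal_sites(hit_counts, threshold=10):
    """Return site labels whose hit count exceeds `threshold`,
    sorted from most to least hits."""
    return sorted(
        (site for site, hits in hit_counts.items() if hits > threshold),
        key=lambda site: hit_counts[site],
        reverse=True,
    )

# Hypothetical counts (A, B, C mirror Table 2; K and L are made up):
counts = {"A": 747, "B": 82, "C": 40, "K": 9, "L": 4}
print(principal_sites(counts))  # ['A', 'B', 'C']
```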

Table 2

Principal Web Sites Involved in Blogosphere’s Response to the Publication of LOG12

Website | Google hits^a | Blog posts^b
A | 747 | 11
B | 82 | 8
C^d | 40 | 3
D* | 36 | 11
E | 33 | 4
F^c | 30 | 7
G*^d | 20 | 17
H | 18 | 6
I | 16 | 0
J | 13 | 2

Note. Sites identified with an asterisk were among the 5 sites contacted by LOG12 with an invitation to participate in the study.

^a Total number of hits on each site to “S.L.” that fell within the period 28 August-18 October 2012.

^b Total number of blog posts featuring recursive theories about LOG12 posted within the period 28 August-18 October 2012.

^c This blog is not among the top-30 “skeptic” sites but was a principal player in the response to LOG12 because its proprietor launched several freedom-of-information requests.

^d These blogs reposted content from other blogs but published no original content of their own.

Blog posts or comments that mentioned recursive theories—without necessarily endorsing them—were excerpted, with each excerpt representing a mention of the recursive theory (see Table 3). Unless prevented by the website, all content items were archived using www.webcitation.org. Items in the corpus of 172 recorded instances are referred to by number (e.g., DC 1) below. Credentialed scholars can obtain further information about the corpus by contacting the first author.

Table 3

Summary of Recursive—and at Least Partially Conspiracist—Hypotheses Advanced in Response to LOG12 During August-October 2012

ID | N^a | Date | Source^b | Summary of hypothesis | Criteria^c
1 | 27 | 29 Aug | A | Survey responses “scammed” by warmists | QM, PV, MbW, SS
2 | 13 | 29 Aug | A | “Skeptic” blogs not contacted | QM, OS, PV
3 | 3 | 3 Sep | B | Presentation of intermediate data | QM, OS, MbW, UCT
4 | 3 | 4 Sep | C | “Skeptic” blogs contacted after delay | QM, OS, MbW, NoA, UCT
5 | 5 | 5 Sep | D | Different versions of the survey | QM, MbW, UCT
6 | 3 | 6 Sep | D | Control data suppressed | QM, NoA
7 | 3 | 10 Sep | D | Duplicate responses from same IP number retained | OS, MbW
8 | 2 | 14 Sep | D | Blocking access to authors’ Websites | QM, PV, NoA
9 | Various | Various | | Miscellaneous hypotheses | See text
10 | 3 | 12 Sep | E | Global activism and government censorship | QM, PV, SS

^a Total number of mentions in corpus.

^b Attribution is based on where and by whom a hypothesis was first proposed in public. Note that the initial proposal of a hypothesis need not imply conspiratorial content: Hypotheses are listed only if the collective response of the blogosphere over time assumed conspiracist attributes.

^c QM = Questionable Motives; OS = Overriding Suspicion; PV = Persecuted Victim; MbW = Must be Wrong; NoA = No Accident; SS = Self-Sealing; UCT = Unreflexive Counterfactual Thinking.

During the search, a classification scheme of recursive hypotheses was developed that tentatively assigned each comment in the corpus to a hypothesis. The classification of hypotheses necessarily evolved as the search proceeded and as new hypotheses were discovered.

Conspiracist Classification Criteria

To process the corpus and to test for the presence of conspiracist discursive elements, we derived six criteria from the existing literature (see Table 3). Our criteria were exclusively psychological and hence did not hinge on the validity of the various hypotheses. This approach follows philosophical precedents that have examined the epistemology of conspiratorial theorizing irrespective of its truth value (e.g., Keeley, 1999; Sunstein & Vermeule, 2009). The approach also avoids the need to discuss or rebut the substance of any of the hypotheses.

The first criterion is that the presumed motivations behind any assumed conspiracy are invariably nefarious or at least questionable (Keeley, 1999): Conspiracist discourse never involves groups of people whose intent is to do good, as for example when planning a surprise birthday party. Instead, conspiracist discourse relies on the presumed deceptive intentions of the people or institutions responsible for the “official” account that is being questioned (Wood, Douglas, & Sutton, 2012). This criterion applies, for instance, when climate science and research on the harmful effects of DDT are interpreted as a globalist and environmentalist agenda designed to impoverish the West and push civilisation back into the stone age (Delingpole, 2011). When presenting the results, we refer to this criterion as Questionable Motives, or QM for short (see Table 3).

A corollary of the first criterion is that the person engaging in conspiracist discourse perceives and presents her- or himself as the victim of organized persecution. At least tacitly, people who hold conspiratorial views also perceive themselves as brave antagonists of the nefarious intentions of the conspiracy; that is, they are victims but also potential heroes. The theme of victimization and potential heroism features prominently in science denial, for example when isolated scientists who oppose the scientific consensus that HIV causes AIDS are presented as persecuted heroes and are likened to Galileo (Kalichman, 2009; Wagner-Egger et al., 2011). We refer to this criterion as Persecution-Victimization or PV for short.

Third, conspiracist ideation is characterized by “(...) an almost nihilistic degree of skepticism” (Keeley, 1999, p. 125) towards the “official” account. This extreme degree of suspicion prevents belief in anything that does not fit into the conspiracy theory. Thus, nothing is as it seems, and all evidence points to hidden agendas or some other underlying causal mechanism. We label this criterion Overriding Suspicion or OS.

Fourth, the overriding suspicion is often associated with the belief that nothing happens by accident (e.g., Barkun, 2003). Thus, small random events are woven into a conspiracy narrative and reinterpreted as evidence for the theory. For example, the conspiracy theory that blames the events of 9/11 on the Bush administration relies on evidence (e.g., intact windows at the Pentagon; Swami, Chamorro-Premuzic, & Furnham, 2010) that is equally consistent with randomness. We label this criterion Nothing is an Accident, or NoA for short.

Fifth, the underlying suspicion and lack of trust contribute to a cognitive pattern whereby specific hypotheses may be abandoned when they become unsustainable, but those corrections do not impinge on the overall abstraction that “something must be wrong” and the “official” account must be based on deception (Wood et al., 2012). In the case of LOG12, the “official” account is the paper’s conclusion that conspiracist ideation contributes to the rejection of science; and it is this conclusion that must be wrong according to this criterion. At that higher level of abstraction, it does not matter if any particular hypothesis is right or wrong or incoherent with earlier ones because “(...) the specifics of a conspiracy theory do not matter as much as the fact that it is a conspiracy theory at all” (Wood et al., 2012, p. 5). We label this criterion Must be Wrong (MbW).

Finally, contrary evidence is often interpreted as evidence for a conspiracy. This idea relies on the notion that, the stronger the evidence against a conspiracy, the more the conspirators must want people to believe their version of events (Bale, 2007; Keeley, 1999; Sunstein & Vermeule, 2009). This self-sealing reasoning may widen the circle of presumed conspirators because any contrary evidence merely identifies a growing number of people or institutions that are part of the conspiracy.

Concerning the rejection of climate science, a case in point is the response to events surrounding the illegal hacking of personal emails of climate scientists, mainly at the University of East Anglia, in 2009. Selected content of those emails was used to support the theory that climate scientists conspired to conceal evidence against climate change or manipulated the data (see, e.g., Montford, 2010; Sussman, 2010). After the scientists in question were exonerated by nine investigations in two countries, including various parliamentary and government committees in the U.S. and U.K., those exonerations were re-branded as a whitewash (see, e.g., U.S. Representative Rohrabacher’s speech in Congress on 8 December 2011), thereby broadening the presumed involvement of people and institutions in the alleged conspiracy. We refer to this criterion as Self-Sealing, or SS for short.

Results

Recursive Hypotheses

The hypotheses that evolved during the thematic analysis are classified into distinct clusters in Table 3. The first seven hypotheses are considered primary based on their overall prominence, whereas hypotheses 8, 9, and 10 are relatively minor because they were less frequently cited. The table also identifies the criteria, using the short labels introduced earlier, that support the classification of each hypothesis as conspiracist. We do not comment on the validity of any hypothesis other than those that can be unambiguously classified as false (namely, hypotheses 2, 6, 7, and 8; we take up the distinction between valid scholarly critique and conspiracist hypotheses later).

The unit of analysis for this study was the hypothesis, not the individual comment. Hypotheses introduced by one individual were typically picked up by others. Hence, all entries in Table 3 reflected at least partially collective behavior rather than comments of single individuals, although we sought to identify the original source of a hypothesis where possible. Our conclusions about conspiratorial attributes of those hypotheses therefore apply mainly to the collective discourse in the blogosphere, rather than to the utterances of individuals taken in isolation. We did not keep track of the identity or number of different individuals who contributed to the discourse. Creation of those hypotheses was propelled mainly by the sites shown in Table 2, with a further 10 sites making lesser contributions to the hypothesis-generation process. The ID numbers in Table 3 are cross-referenced in the section headings of our analysis below.

Survey responses “scammed” (1)

Whenever people express their opinions it cannot be ruled out that they are “faking” their responses by providing answers that are intended to please (or deceive) the experimenter. This possibility may be exacerbated with Internet surveys. In a politically charged context, such as climate change, the further risk arises that some respondents may “scam” the survey by “faking” responses to deliver a “desired” outcome.

This risk was rapidly perceived by commenters in the blogosphere, and almost immediately (on 29 August 2012) the concern was expressed that the LOG12 survey was (a) designed to link “skeptics” with “conspiracy nutters” and that therefore (b) some “alarmist” respondents might dutifully perform as expected. [DC 3].

The notion of “scamming” took center-stage in the blogosphere’s response to LOG12. On numerous blogs, it appeared to be taken for granted that the data were “faked” or “scammed.” In one blog post that repeated the words “scam” or “scammed” 21 times (the post ran to approximately 5,100 words), the author asserted that “almost certainly” some respondents of the survey were caricaturing climate “skeptics.” [DC 79].

During exploration of this hypothesis, initial focus in the blogosphere rested on responses to the LOG12 survey items that targeted conspiracist ideation, with the assertion that the few people who endorsed all (or all but one) conspiracy theories (N = 3) might not represent authentic responses.

This assertion transmuted into several additional scamming hypotheses: On 8 September, a second type of purportedly fake responses was reported involving the participants (N ≈ 120) who disagreed with one of the survey items, namely that “fossil fuels increase atmospheric temperature to some measurable degree.” It was argued that those responses represented an extremist position belonging to people who deny the thermal properties of greenhouse gases that were discovered by John Tyndall in the mid 19th century. Based solely on an “impression” [DC 79], the blogger estimated the proportion of such extremists to be no greater than 20% among “skeptics” in general. Because the observed proportion of such extremist responses was around 50% of the total number of “skeptics” in the LOG12 sample (≈ 120 out of ≈ 250), this was taken to imply that up to three quarters of those responses were “fake.” [DC 79].
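The step from these percentages to “three quarters” is not spelled out in the blog post. One reconstruction that recovers the figure (a reconstruction on our part, not the blogger’s explicit calculation) assumes that at most 20% of genuine “skeptics” would give the extremist response and that every “fake” response presents as an extremist. Writing f for the number of fake responses among the ≈120 extremist responses out of ≈250 “skeptics”:

```latex
0.2\,(250 - f) + f \approx 120
\quad\Rightarrow\quad
50 + 0.8\,f \approx 120
\quad\Rightarrow\quad
f \approx 87.5,
\qquad
\frac{f}{120} \approx 0.73,
```

which is roughly the “up to three quarters” cited.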

On 23 September, it was reported that a further 48 participants had been identified who registered zealous support for free market ideology. This zealous support was taken to imply that those responses, too, represented “scammed” data as the overall incidence of extreme support for the free market in the LOG12 data was greater than in an alternative survey conducted on a “skeptic” blog after the controversy over LOG12 erupted [DC 78]. It was concluded that the data from these apparent zealots were caricatured responses of “skeptics” that were actually provided by people who endorsed climate science [DC 159].

The pursuit of the scamming hypothesis without clear a priori statement of what response pattern would represent a faked response, and the repeated shifting of the criteria for what constitutes scamming, hints at an agenda-driven effort to invalidate the LOG12 data.ii Several of our criteria for conspiracist discourse support this possibility. For example, the blogosphere’s response appeared driven by the need to resist the official explanation of an event (i.e., the LOG12 results in this instance; criterion Must be Wrong) and propose a sinister hidden alternative (i.e., scamming in this instance; Questionable Motives). The scamming theory was also explicitly supported by the presumption that the LOG12 survey was intentionally designed to make “skeptics” look like “nutters”; this meshes with criteria Questionable Motives and Persecution-Victimization. Finally, without a priori specification of what constitutes faked responses, the scamming hypothesis is in principle unfalsifiable: there exists no response pattern that could not be considered fake by an innovative theorist. This potentially self-sealing attribute of the hypothesis (criterion Self-Sealing) may explain its longevity. Some comments related to the “scamming” notion also exhibited a possible incoherence that is characteristic of conspiracist discourse (Wood et al., 2012), for example by referring to the data from LOG12 as “badly-collected” and “totally-invented” (DC 15) in the same argument. It is not clear how data can be both invented and collected.

“Skeptic” blogs not contacted (2) [TOP]

Initial attention of the blogosphere also focused on the method reported by LOG12, which stated: “Links were posted on 8 blogs (with a pro-science stance but with a diverse audience); a further 5 ‘skeptic’ (or ‘skeptic’-leaning) blogs were approached but none posted the link.” Speculation immediately focused on the identity of the 5 “skeptic” bloggers. Within short order, 25 “skeptical” bloggers had come publicly forward [DC 2] to state that they had not been approached by the researchers. Of those 25 public declarations, five were by individuals who were invited to post links to the study by the LOG12 team in 2010. Two of these bloggers had engaged in correspondence with the research assistant for further clarification.

This apparent failure to locate the “skeptic” bloggers led to allegations in blog posts and comments of research misconduct by the LOG12 team. Those suspicions were sometimes asserted with considerable confidence, such as when a commenter asserted it was “known” that contacting the blogs was “made up.” [DC 21]. One blog comment airing the suspicion that “skeptic” bloggers had not been contacted also provided the email address to which allegations of research misconduct could be directed at the host institution of S.L. (Several such allegations ensued.) This comment was posted by an individual who had been contacted twice by the research team.

The names of the “skeptic” bloggers became publicly available on 10 September 2012 (by which time one individual had already identified himself in public), in a blog post by S.L. (http://www.shapingtomorrowsworld.org/lewandowskyGof4.html). Although this information invalidated the hypothesis, the blogosphere’s suspicion about LOG12 seemed undiminished (cf. criteria Must be Wrong, Overriding Suspicion) and attention shifted to various other hypotheses. Two aspects of the process underlying this hypothesis shift are noteworthy.

First, the hypothesis that bloggers were not contacted was abandoned gradually. For example, one blogger opined that even if the blogs had been contacted, S.L. must have known that they would refuse to post the link because the results would have been distorted to try to “harm skeptics.” [DC 99].iii This statement explicitly imputes a pervasive stance of suspicion among “skeptic” bloggers (criterion Overriding Suspicion) because the statement presumes that any blogger would assume that the survey intended to “harm skeptics.” This statement also illustrates the self-perception as a victim of persecution (Persecution-Victimization).

Similarly, it was pointed out that a research assistant, rather than one of the LOG12 authors, had emailed “skeptic” bloggers, whereas S.L. was named in emails to pro-science blogs. [DC 103]. This “inconsistent delivery” sub-hypothesis lasted for 48 hours (11–13 September) and meets criteria Must be Wrong, Nothing is an Accident, and Questionable Motives. (The assistant’s involvement had explicit ethics approval.)

Notwithstanding the abandoning of the initial “no-contact” hypothesis, the allegation that the survey was designed to be biased by excluding “skeptics” remained in the public domain. That is, the hypothesis that LOG12 sought to exclude “skeptics” from their survey persisted in the blogosphere’s inferences, even though the original basis for that inference was no longer maintained. Over a week after the “skeptic” bloggers had been revealed, one blogger argued (on 18 September) that the approached blogs did not publish a link because they did not want to be part of the “painfully obvious” [DC 148a] presumed intent of the research; meeting the criteria Persecution-Victimization and Questionable Motives.

It is notable that concerns about the representativeness of the LOG12 sample were rarely mentioned outside the context of the hypotheses just reviewed. Only two blog comments noted that because “skeptic” blogs did not post links to the survey, the LOG12 sample may have been skewed towards people who endorse the science, without also accompanying that critique with a hypothesis of nefarious intentions or malfeasance on the part of LOG12.

Once hypothesis-shifting was complete, several new hypotheses emerged in short order to counter the conclusions of LOG12. Several of those hypotheses were based on what we call unreflexive counterfactual thinking; that is, the hypothesis was built on a non-existent, counterfactual state of the world, even though knowledge about the true state of the world was demonstrably available at the time. Table 3 indicates which of the remaining hypotheses involved this reliance on counterfactuals (Unreflexive Counterfactual Thinking), which is indicative either of the absence of a collective memory for earlier events, or of the lack of a cognitive control mechanism that requires an hypothesis to be compatible with all the available evidence (which is a hallmark of scientific cognition but is known to be compromised in conspiracist ideation; Wood et al., 2012). We argue later that this unreflexive counterfactual thinking may represent a distinct aspect of conspiracist ideation that, to our knowledge, has not been previously identified in the literature.

Presentation of intermediate data (3) [TOP]

S.L. presented a talk at Monash University in Melbourne on 23 September 2010. The slides for that talk were posted on the web on 27 September 2010 and contain a single brief reference (10 words: “conspiracy factor without climate item predicts rejection of climate science”) to the LOG12 data, based on the responses received by that date (nearly the entire sample).

Because this date fell within three days of the second (unsuccessful) approach to a “skeptic” blogger to post the link to the survey (the first one had been made two weeks earlier, at which time other “skeptic” bloggers were also contacted), the suggestion arose that because “skeptic” bloggers had been contacted late, the survey responses had been “decided” before the data had been received [DC 47]. This hypothesis implies that the data would have differed at a later point. Given that none of the “skeptic” blogs posted a link, and therefore could not have affected the result at any point in time, this hypothesis rests on a counterfactual assumption about the world.

A more extreme variant of this hypothesis proposed that the survey only gave the appearance of legitimacy to a pre-ordained result. [DC 30] This hypothesis identifies the survey as a “cover-up” for pre-ordained results that, presumably, were fabricated by LOG12: It thus goes a step beyond the hypothesis that a subset of the responses were “scammed.” These comments reveal an intense degree of suspicion (criterion Overriding Suspicion), an assumption of questionable motives by the LOG12 authors (Questionable Motives), and the belief that something must be wrong (Must be Wrong).

“Skeptic” blogs contacted after delay (4) [TOP]

The “skeptic” blogs were contacted at least a week after the links to the study had already been posted on the eight other blogs that agreed to participate in the study. This delay did not go unnoticed by the blogosphere, with one blogger arguing that the delay represented conduct that fell short of “reputable” [DC 95]. The hypothesis never matured to the point of clarifying how this delay could have had any bearing on the outcome of the published LOG12 data, given that none of the “skeptic” blogs posted the link. The hypothesis therefore represents another instance of unreflexive counterfactual thinking, in addition to suspicion and the attribution of questionable motives (Questionable Motives, Overriding Suspicion, Must be Wrong). We also suggest that this hypothesis meets the criterion that “nothing is an accident” (Nothing is an Accident) because it imputes significance and intentionality into an event (i.e., a delayed email) that could equally have been accidental.

Different versions of the survey (5) [TOP]

Because question order was counterbalanced between different versions of the LOG12 survey, links to the various versions were quasi-randomly assigned to participating blogs. The existence of different versions of the survey—in particular differences between the versions sent to pro-science and “skeptic” blogs—gave rise to several hypotheses, including the claim that “inconsistent sampling” invalidated the results of LOG12 [DC 65].

This hypothesis rests on counterfactual thinking: Even if survey versions had differed on some variable other than question order (they did not), given that none of the “skeptic” blogs posted the link and hence did not contribute responses, any claim regarding the published data based on those differences among versions rests on a counterfactual state of the world.

On 7 September, S.L. published a blog post explaining the reason for the different versions of the survey. (http://www.shapingtomorrowsworld.org/lewandowskyVersionGate.html). Within a day, instances of this theory ceased.

Control data suppressed (6) [TOP]

Data collection for LOG12 also involved an attempt to recruit a “control” sample via an emailed invitation to participate in the survey among the first author’s campus community. Because this invitation returned only a small number of respondents (N < 80), only the sample of blog denizens was reported in LOG12.

When the survey invitation was discovered by a blogger, several questions emerged about those data, including whether a comparison with the blog sample had been conducted, and whether the data had then been discarded [DC 117]. Reflecting the pervasive belief that something must be wrong (Questionable Motives, Must be Wrong), those questions metamorphosed into the suggestion that the data reported by LOG12 were “cherry-picked” [DC 125].

Duplicate responses from same IP number retained (7) [TOP]

Following standard Internet research protocols (e.g., Gosling, Vazire, Srivastava, & John, 2004), LOG12 filtered the data such that whenever more than one response was submitted from the same IP address, all those responses were eliminated from consideration. This was stated in the LOG12 Method section available for download in August 2012 as “(...) duplicate responses from any IP number were eliminated.”

Some members of the blogosphere interpreted this statement to mean that LOG12 accepted multiple responses provided they differed only slightly [DC 88]. Although this statement was initially qualified by noting that it was just an “interpretation”, this qualifier was dropped from subsequent re-posts of the allegation by other bloggers. The re-posts thus presented the unqualified claim that multiple responses from the same IP address could be included in the LOG12 data. The spread of this hypothesis despite being based on an interpretation alone reveals considerable suspicion (Overriding Suspicion) and also likely the belief that something had to be wrong (Must be Wrong). This theory lasted two days and was mentioned on a mainstream media blog in Australia, albeit without the qualifier that it rested only on an interpretation [DC 107].iv

Blocking access to authors’ websites (8) [TOP]

On 14 September, the websites of the first two present authors (S.L.: www.shapingtomorrowsworld.org; J.C.: www.skepticalscience.com) were temporarily inaccessible (for at least nine hours) from parts of the world, most likely owing to Internet blockages between certain regions and the website server.

This gave rise to the claim by a blogger that both sites had specifically targeted his IP number to prevent access, based on the fact that this individual could gain access using an IP-anonymizing service. The blogger suggested that this prevention of access was unethical [DC 131]. This hypothesis is illustrative of the tendency to assign intentionality to random events (Nothing is an Accident), based on a background of a presumed questionable motive (Questionable Motives) and a self-perception as a victim (Persecution-Victimization).

The claim of IP blocking, although relatively minor in scope, is nonetheless notable because it escalated into a more intricate alleged plot by LOG12 to paint their critics as paranoid. Three comments are particularly indicative of this unfolding discourse:

  1. One commenter predicted that the blogger’s IP number would be unblocked, to enable the LOG12 authors to level charges of paranoia against that blogger [DC 131].

  2. Another commenter suggested that the blocking may have been an attack that was deniable and would leave no traces, enabling the LOG12 authors to respond with “told you so” if the blogger had complained [DC 131].

  3. A final comment begrudgingly applauded the skill (of the LOG12 team) to “play the audience” because no complaint (about the IP blocking) could be made without appearing to be a conspiracy theorist [DC 131].

The complexity of this argument parallels conspiracist reasoning surrounding the events of 9/11:

After 9/11, one complex of conspiracy theories involved American Airlines Flight 77, which hijackers crashed into the Pentagon. Even those conspiracists who were persuaded that the Flight 77 conspiracy theories were wrong folded that view into a larger conspiracy theory. The problem with the theory that no plane hit the Pentagon, they said, is that the theory was too transparently false, disproved by multiple witnesses and much physical evidence. Thus the theory must have been a straw man initially planted by the government, in order to discredit other conspiracy theories and theorists by association. (Sunstein & Vermeule, 2009, p. 223, emphasis added).

The blogosphere’s apparent concern over being “baited” into “acting paranoid” is consonant with the overriding extent of suspicion identified earlier as a criterion (Overriding Suspicion) of conspiracist ideation. It also reveals the pervasive self-perception of people who deny the scientific consensus on climate change as victims (Persecution-Victimization). The hypothesis also exemplifies the conspiracist tendency to detect meaning and intentionality behind accidental events (Nothing is an Accident).

The IP blocking hypothesis persisted for a day. The originator of the claim then updated his comment and removed the “unethical” charge, albeit without noting this update on his website. (The present authors had web-archived the initial version of the comment by then; it would otherwise have been irretrievably lost.) The originator of the claim then publicly recognized that the blocking might have reflected technical glitches in the link to Australia [DC 131].

Miscellaneous hypotheses (9) [TOP]

Some commenters expressed general displeasure with LOG12, for example by referring to it as an “egregious war crime” akin to “shooting innocent men” and then using those numbers as “terrorists killed in action” (DC 28). Two miscellaneous hypotheses deserve particular mention as they provide insight into the recursive and self-reinforcing nature of the blogosphere’s discourse.

A regular contributor to the blog of the second author of the present paper (J.C.; www.skepticalscience.com) posted a public critique of LOG12. While this post was welcomed and reposted by critics of LOG12, one commenter treated it with suspicion, arguing that the critique represented a “false flag” operation designed for distraction that had failed [DC 115]. This reasoning is reminiscent of the “decoy theory” just described in the context of 9/11 and appears Self-Sealing.

A further hypothesis supposed that the real purpose of LOG12 was to provoke conspiracist ideation from people who reject climate science, asserting that the subject of the study was the blogosphere’s response to LOG12 rather than the survey itself. The commenter adduced support for this assertion from private correspondence from the second author’s blog (J.C.; www.skepticalscience.com) that had been obtained illegally through a hack [DC 172]. This hypothesis arguably also demonstrates unreflexive counterfactual thinking. If, as the hypothesis suggests, LOG12 was not a genuine study, this would imply there is no evidence linking climate denial and conspiracist ideation. In that case, it is difficult to see how the LOG12 authors could have expected a conspiracist reaction to LOG12. In reality, the reaction to LOG12 was not anticipated by the authors. The decision to analyze the blogosphere’s reaction to LOG12 was made only after a number of escalating recursive hypotheses had been proposed.

At the time of this writing, similar hypotheses have circulated regarding a Massive Open Online Course (MOOC) run by the second author (J.C.) at the University of Queensland. This MOOC is entitled “Making Sense of Climate Science Denial” (https://www.edx.org/course/making-sense-climate-science-denial-uqx-denial101x) and at least one commentator on the Internet has suggested that the sole purpose of this MOOC (enrolment around 15,000) was to solicit contrarian reactions for the purpose of further study.

Beyond recursion: Global activism and government censorship (10) [TOP]

Thus far, we considered only strictly recursive theories—that is, hypotheses that were spawned by LOG12 and pertained to the methodology and results of LOG12. We conclude with an analysis of theories that were spawned by LOG12 but expanded beyond being recursive.

The expansion commenced with the suggestion that the University of Western Australia was hosting an “activist organization.” It furthermore questioned whether UWA Executives were aware of this global “activism” [DC 110].

Another blogger promoted this theory, linking to the above post [DC 110] and commenting that the blog of the present second author was a ringleader for “conspiratorial activities” [DC 136]. Notably, this blogger explicitly referred to conspiratorial activities by, presumably, the authors of LOG12 and their associates.

A commenter sought to clarify the extent of this presumed conspiratorial activity, identifying another academic at UWA, Professor of Mathematics X, as the “real strategist” behind those activities [DC 111]. X’s apparent leadership role in this conspiracy was reinforced in a subsequent comment by a local who confirmed, with “no doubt”, X’s role as “mastermind” and typical “mad scientist” [DC 112].

A more extended variant of this hypothesis cited S.L.’s research funding available on his webpage (A$4.4 million in grants) and drew attention to A$762,000 specifically for climate research. Moreover, the commenter argued that this funding did not include A$6 million that the Australian Commonwealth Government provided S.L. and colleagues to run ‘The Conversation’ [DC 122]. ‘The Conversation’ refers to an online newspaper (https://theconversation.com/au/who-we-are) that is primarily written by academics and is funded by a consortium of major Australian universities and other scientific organizations. (S.L. has no editorial role in this initiative but has written numerous articles for The Conversation.) This hypothesis thus widened the scope of the presumed activism by LOG12 authors to include a national online media initiative.

The expanding scope of the presumed activities exhibited considerable longevity, as evidenced by a blogpost several months later that was triggered by a radio interview with S.L. on the Australian Broadcasting Corporation’s (ABC) science show. This blogpost accused the (Australian) government of suppressing dissent and labeled S.L.’s research as “punitive psychology”, akin to the Soviet Union’s practice of incarcerating dissidents in mental institutions. The blogpost argued that the problem of censorship is widespread and involved the University of Western Australia, the Australian Research Council, the Australian Broadcasting Corporation, and possibly the government.

Common to all these hypotheses is the presumption of widespread questionable motives among the authors of LOG12 and colleagues (Questionable Motives) and a potentially self-sealing propensity to broaden the scope of the presumed malfeasance (Self-Sealing). In this instance, extending the presumed malfeasance to include the Australian government of the day may amplify a self-perception of being victimized (Persecution-Victimization).

Freedom-of-Information Release [TOP]

On 10 October 2012, the host institution of S.L. released a tranche of emails and documents that had been requested by a “skeptic” blogger under Freedom-of-information (FOI) legislation. One set of emails involved all correspondence between the researchers and the blogs that were contacted to host the survey, including those that by an initial hypothesis—number 2 in Table 3—were presumed not to exist. The remaining documents and emails pertained to the institutional ethics approval for LOG12.v Because the FOI release occurred about a month after the last hypothesis spontaneously emerged in response to LOG12, it is considered separately from the other hypotheses summarized in Table 3.

The blogosphere focused on the ethics approvals underlying the study. The existence of ethics approval was met by a broadening of the scope of presumed malfeasance, from the authors of LOG12 to the ethics committee and its chair. To illustrate, one blogger cited the fact that approval was obtained by amendment, and that the speed of approval (within 24 hours) raised questions about the university, especially because the original ethics approval had ostensibly been for a fundamentally different project [DC 165]. In fact, the approval of the additional items for this particular study, drawn from previously published research (e.g., Swami et al., 2010), occurred swiftly because the study was classified as low risk under the applicable guidelines. The broadening of the scope of purported malfeasance to include additional people or institutions in light of disconfirmatory evidence is a principal attribute of conspiracist ideation (Keeley, 1999). The self-sealing response to the freedom-of-information release therefore illustrates several of our classification criteria (viz., Questionable Motives, Overriding Suspicion, Must be Wrong, and in particular Self-Sealing). The alternative hypothesis, namely that the existence of ethics approvals in conformance with applicable procedures might confirm that there were no ethical problems with the LOG12 study, was not considered by the blogosphere, suggesting a discursive emphasis on delegitimization of LOG12.

Discussion [TOP]

The thematic analysis provides an initial sketch of the blogosphere’s collective response to LOG12. The contribution of Study 1 was to classify the blogosphere’s discourse into a number of potentially conspiracist hypotheses on the basis of a number of well-established criteria. This result meshes well with the prior literature, which has repeatedly linked the denial of well-established scientific findings to conspiracist ideation (e.g., Diethelm & McKee, 2009; Goertzel, 2010; Kalichman, 2009; McKee & Diethelm, 2010).

However, Study 1 has two limitations that need to be addressed before we can explore the full implications of the results. First, any thematic analysis necessarily contains a subjective element, which opens the door to alternative interpretations. Second, this problem is compounded by the fact that two of the present authors also contributed to LOG12, thereby creating a potential conflict of interest.

We addressed these problems in a number of ways: First, the data collection for Study 1 (via Internet search) was conducted by two of the present authors who were not involved in the analysis or report of LOG12. Second, the fourth and fifth authors (S.B. and E.A.L.), who had no involvement with LOG12 whatsoever, provided an independent audit of Study 1, reviewing the data and their categorization as presented in Table 3. Specifically, S.B. first examined for semantic clarity the 7 criteria employed in characterizing statements as at least partly conspiratorial in character (questionable motives, overriding suspicion, persecuted victim, must be wrong, no accident, self-sealing, and unreflexive counterfactual thinking). He next examined the application of groups of these criteria to the 10 summary hypotheses in the study. To assess the justificatory question of whether these hypotheses exhibited these characteristics, he examined the original statements on blogs from which the summary hypotheses were drawn. On the basis of these two steps, he concluded that the summary hypotheses were entirely accurate as representations of the remarks classified under them, and that they were accurately characterized as exhibiting the combinations of the criteria for conspiratorial thinking found in them by the study. Similarly, E.A.L. independently confirmed both the logic and accuracy of the thematic analysis in Study 1. Her examination focused on sorting out the truth conditions of some cases, as well as analyzing the counterfactual nature of others, with particular attention to the careful and accurate attribution of the citations used in this study. The results of her detailed analysis confirmed the original work done by her fellow authors.

Third, and arguably most important, we now report two behavioral studies that sought independent support for the thematic analysis using naive participants in a blind test. For those studies, an anonymized and enlarged set of comments was harvested off the Internet in a further search conducted by hired research staff with no association with the present authors.

Study 2: Recreating Classifications [TOP]

In Study 2, naive subjects were presented with the material underlying the thematic analysis in Study 1 and were asked to indicate for each item whether it represented one of a broad range of potential recursive hypotheses. The thematic analysis in Study 1 would be supported if the final classifications were to recreate a variant of Table 3 from among a much larger set of potential hypotheses. The study also included content items from a further Internet search that augmented the material from the thematic analysis.

Method [TOP]

Stimulus Generation [TOP]

The material for the thematic analysis in Study 1 had been obtained using a “depth” approach (e.g., by following up on mentions of a hypothesis by searching targeted sites for recurrences or derivatives), which required some skill and discretion and thus had to be conducted by two of the authors. For the remaining studies, an additional “breadth” search of the Internet was conducted by a contracted assistant. The assistant was not academically associated with any of the authors, although he had been a PhD student at UWA a year earlier, while S.L. was at the same institution.

This Google search was extended to cover the period 8 July 2012 to 8 October 2012, using a daily search of blog content and the search phrase “Stephan Lewandowsky”. Unlike for the thematic analysis, only the first 100 comments of any applicable post were scanned and no follow-up “depth” search was performed. Content was included if it was minimally recursive (i.e., pertaining to LOG12). This search initially yielded 429 content items.

Those content items were then compared with the items from the “depth” search from Study 1, using an iterative matching algorithm as follows: Random 50-character strings were selected from each “depth” content item and compared to the “breadth” content set for matches. This process was repeated with freshly chosen 50-character strings for a given content item until no further matches were located. Strings of 50 characters were used because shorter strings produced false matches. The final search using 50-character strings matched 40 content items from the “depth” set to the “breadth” set. Each match was verified by human inspection.vi In consequence, there were 131 items in the “depth” set from Study 1 that were not returned in the “breadth” search in Study 2.
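The matching procedure can be sketched as follows. This is an illustrative reconstruction under stated assumptions: the function name, probe budget, and example data are ours, and the original implementation may have differed in its details.

```python
# Sketch of the random-substring matching procedure described above
# (illustrative reconstruction; names and structure are our own).
import random

def find_matches(depth_items, breadth_items, n_chars=50, max_tries=200):
    """Match each 'depth' content item against the 'breadth' corpus by
    probing with randomly chosen fixed-length substrings."""
    matches = {}
    for i, item in enumerate(depth_items):
        if len(item) < n_chars:
            continue  # item too short to yield a probe string
        for _ in range(max_tries):
            start = random.randrange(len(item) - n_chars + 1)
            probe = item[start:start + n_chars]
            hits = [j for j, b in enumerate(breadth_items) if probe in b]
            if hits:
                matches[i] = hits[0]  # record the first matching breadth item
                break
    return matches

depth = ["The survey data were almost certainly scammed by alarmist "
         "respondents posing as skeptics."]
breadth = ["Comment: The survey data were almost certainly scammed by "
           "alarmist respondents posing as skeptics. [reposted]",
           "An unrelated comment about something else entirely."]
print(find_matches(depth, breadth))  # → {0: 0}
```

Because the first “breadth” item contains the “depth” item verbatim, any 50-character probe matches it; as in the original procedure, each candidate match would then be verified by human inspection.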

Results from the two searches (i.e., all of the “breadth” items plus the unique items from the “depth” set) were concatenated into a corpus of content material for use in Study 2 and Study 3. Because the combined corpus was prohibitively long and some content excessively verbose, a research assistant eliminated content that was deemed to constitute duplications or was deemed too diffuse or incomprehensible to merit inclusion as stimuli. Content items were also edited to reduce their length (by replacing non-essential content with “...”), where necessary. After editing, which removed less than 10% of content, the final set comprised 508 content items, of which 382 were from the “breadth” search (inclusive of items that were also returned by the “depth” search). A further 126 content items were from the “depth” search for Study 1. The material was then anonymized by replacing the names of individuals with placeholders such as “Author 1” (for S.L.) or “Paper” (for LOG12) and so on. After editing, the longest comment contained 223 words (M = 64, SD = 39). This final corpus served as stimulus material in the remaining two studies.

The stimulus material was then presented to two research assistants who were asked to jointly produce a broad and (ideally) exhaustive set of potential recursive hypotheses for use as a scoring sheet in Study 2. The intention was to capture all hypotheses across the entire corpus at a fine level of detail, for subsequent scoring of the content material by another set of blind judges.

The final scoring sheet contained 38 candidate hypotheses, which subsumed hypotheses 1 through 7 from Table 3 but included numerous others, and which were grouped into 6 over-arching categories (e.g., Sampling, Questionnaire, Methodology). One candidate hypothesis for each of the 6 over-arching categories was labeled “other.”

Participants and Procedure

The participants were 5 psychology undergraduates in their Honours (4th) year of study at the University of Western Australia who were not known to any of the authors and were unaware of LOG12 or the purpose of the study. At the time the study was conducted (late in the academic year), Honours students would have completed their independent research project and would have submitted (or be close to submission of) their final thesis. Australian Honours students are comparable in competence and background training to junior PhD students in the U.K. and (often) the U.S. The participants were thus competent to conduct the rating task. Participants received A$15 per hour in exchange for participation. Experimental sessions took 3-5 hours (with self-paced breaks as required).

Participants were presented with the corpus of 508 content items in a spreadsheet and were informed that they “relate to a scientific paper written by several anonymized authors, identified as Author 1, etc.”. Participants classified each content item into one of the 38 candidate hypotheses provided on a printout of the score sheet, by entering the corresponding hypothesis number into a column in the spreadsheet. Participants could indicate that no hypothesis applied by placing a zero in the response column. Participants had the option to nominate another novel hypothesis by writing a brief description into the response column (no participant exercised this option). Participants had to record at least one response for each item (i.e., a 0 or their preferred hypothesis number), but they could optionally add a second and third classification in two more response columns for each content item if they thought that a comment referred to several hypotheses.

Results

We first asked what proportion of content items participants did not classify into any of the hypotheses in the scoring sheet. For the combined set, the proportion of items that were not classified (by responding with 0) was around 10% (mean N across subjects = 51.4). Those proportions differed slightly between the “depth” set (N = 15.6, 12.4%) and the “breadth” set (N = 35.8, 9.4%). Overall, the vast majority of content items appeared amenable to classification. The 5 “other” categories were used between 8.5% and 19% of the time for primary classifications (mean N across subjects = 63.0).

We next examined the consistency of those classifications (including the “other” categories) across participants. For this analysis, we considered the mandatory primary classifications and the secondary classifications where present (.28 of all cases). The small proportion of optional tertiary (.04) classifications were not considered. Because the instructions to participants did not differentiate multiple classifications by importance (i.e., participants were instructed to use “up to 3” classifications if they thought a “comment refers to several hypotheses”), primary and secondary classifications were considered interchangeably. For example, if for a given comment, 4 participants provided a primary classification into hypothesis X, and the fifth participant’s primary classification was Y but that person’s secondary classification was X, then this was counted as agreement among 5 participants.vii
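The agreement rule described above, in which primary and secondary classifications are treated interchangeably, can be expressed as a short function. The data structure (one set of hypothesis IDs per judge) and the function name are ours, chosen for illustration.

```python
from collections import Counter

def max_agreement(classifications):
    """classifications: one set of hypothesis IDs per judge for a single
    comment (primary plus any optional secondary/tertiary choices, treated
    interchangeably). Returns the largest number of judges who assigned
    any one hypothesis to that comment."""
    counts = Counter()
    for judge_hypotheses in classifications:
        for hyp in judge_hypotheses:  # each judge counted at most once per hypothesis
            counts[hyp] += 1
    return max(counts.values()) if counts else 0
```

For the example in the text (four primary classifications into X, plus a fifth judge whose secondary classification was X), this function returns 5, i.e., agreement among all five participants.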

Table 4 shows agreement among participants for the overall set of 508 comments, and broken down by search (i.e., the “depth” search for Study 1 and the new “breadth” search for Study 2). The table shows that between 62% and 68% of comments were classified identically by 3 or more judges; given that there were 38 candidate classifications (plus the 0 option), this agreement is at least satisfactory. When classification is measured at the level of the 6 over-arching categories of hypotheses, agreement increases considerably (to 87%), as shown in Table 5.

Table 4

Consistency of Participants’ Classifications of Content Into Hypotheses (From the Full Set of 38 Candidates) in Study 2

Number of Identical Classificationsa
Set of items 1 2 3 4 5 3 or more
Combined 20 (.04) 166 (.33) 147 (.29) 115 (.23) 60 (.12) 322 (.63)
“Depth” (Study 1) 2 (.02) 38 (.30) 38 (.30) 32 (.25) 16 (.13) 86 (.68)
“Breadth” (Study 2) 18 (.05) 128 (.34) 109 (.29) 83 (.22) 44 (.12) 236 (.62)

aCell entries refer to the number of comments (proportion in parentheses) that were classified identically by the number of judges in that column.

Table 5

Consistency of Participants’ Classifications of Content Into Hypotheses (From the Over-Arching Set of 6 Candidates) in Study 2

Number of identical classificationsa
Set of items 1 2 3 4 5 3 or more
Combined 2 (.003) 62 (.12) 132 (.26) 138 (.27) 174 (.34) 444 (.87)
“Depth” (Study 1) 0 (.000) 11 (.09) 37 (.29) 31 (.25) 47 (.37) 115 (.91)
“Breadth” (Study 2) 2 (.005) 51 (.13) 95 (.25) 107 (.28) 127 (.33) 329 (.86)

aCell entries refer to the number of comments (proportion in parentheses) that were classified identically by the number of judges in that column.

We next asked whether the principal hypotheses from Table 3 that arose from the thematic analysis in Study 1 were also detected by the participants in Study 2. Using the “depth” set from Study 1, we tabulated the number of comments that at least 3 participants classified identically. Table 6 shows the results. It is clear that participants broadly recovered the classification and frequency information revealed by the thematic analysis in Study 1, despite the fact that content items here were anonymized and, where necessary, edited for length.

Table 6

Frequency of Classification of Comments From Study 1 (“Depth” Search) by Participants in Study 2

IDa Nb Summary of hypothesis
4 16 Survey responses “scammed” by warmists
2 12 “Skeptic” blogs not contacted
31 6 Presentation of intermediate data
6 6 Other sampling
7 4 Different versions of the survey
20 4 Research methodology was flawed
3 3 “Skeptic” blogs contacted after delay
5 3 Duplicate responses from same IP number retained
18 3 Author 1 created Paper 1 to pathologise “skeptics”
19 3 Global activism and government censorship
21 3 Statistical analysis was flawed
37 3 Author 1 was/is biased
38 3 Other ethical flaws
1 2 The sample size was insufficient (e.g., too small)
12 2 Author 1 colluded with other scientists
29 2 Conclusions were misrepresented or misinterpreted
30 1 Control data suppressed
36 1 Author 1 faked the data

Note. Bold-faced hypotheses were identified by the thematic analysis in Study 1 (see Table 3).

aCell entries refer to the ID number of a hypothesis in the scoring sheet used in Study 2. Descriptions of hypotheses are abbreviated to permit cross-referencing with Table 3.

bCell entries are the number of comments consistently assigned to that hypothesis by 3 or more participants.

Further statistical confirmation for this independent recovery is provided by correlating the number of comments assigned to each of the principal hypotheses from the thematic analysis by the present participants (bold-faced in Table 6) and the corresponding frequency data from Study 1 that were extracted during the thematic analysis (see Table 3). The Spearman correlation between those two sets of numbers was significant, ρ = .87, p < .003 (Pearson’s r = .93, p < .0003). Note that the absolute numbers of assigned comments may differ between Tables 3 and 6 because participants in this study were provided with a much larger set of candidate hypotheses, comprising more nuanced sub-classifications of the broader categories obtained in Study 1.
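As a reminder of the statistic used here, Spearman’s ρ is the Pearson correlation of the rank-transformed frequencies, with tied values assigned their average rank. The following minimal implementation is ours, for illustration only; it does not reproduce the study’s data.

```python
def ranks(xs):
    """Average ranks: tied values share the mean of their rank positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # mean of rank positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg_rank
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the tie-adjusted ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Tie handling matters here because many hypotheses in Tables 3 and 6 share the same small frequency counts (e.g., several hypotheses with N = 3).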

A final question concerns potential differences between the two search sets (i.e., the “depth” set used in Study 1 and the “breadth” set generated for this study). Table 7 shows the frequency of classifications for the “breadth” set, presented in the same manner as before. Comparison of Tables 6 and 7 reveals that the frequency pattern of hypotheses is broadly similar between the two sets. In particular, 6 of the 7 principal hypotheses in Table 3 were also identified in the “breadth” set, although several additional hypotheses were added.

Table 7

Frequency of Classification of Comments from the “Breadth” Search by Participants in Study 2

IDa Nb Summary of Hypothesis
4 27 Survey responses “scammed” by warmists
2 23 “Skeptic” blogs not contacted
20 23 Research methodology was flawed
1 15 The sample size was insufficient (e.g., too small)
6 15 Other sampling
14 14 Peer review was biased
21 12 Statistical analysis was flawed
9 10 Questionnaire poorly constructed
7, 10 8, 5 Different versions of the survey
36 8 Author 1 faked the data
8 6 Absence of neutral response option
28, 29 6, 4 Conclusions were misrepresented or misinterpreted
37 6 Author 1 was/is biased
15 5 Paper politically motivated
33, 30 4, 5 Control data suppressed
5 4 Duplicate responses from same IP number retained
25 3 Poor operationalization
26 3 Method improperly/insufficiently reported
23 2 Literature review flawed
34 2 Author attempts to silence debate
38 2 Other ethical flaws
11 1 Other problem with questionnaire
12 1 Author 1 colluded with other scientists
16 1 There is an ulterior motive or hidden agenda to the paper
17 1 Author colluded with journalists
22 1 Research question flawed
31 1 Presentation of intermediate data

Note. Bold-faced hypotheses refer to Table 3.

aCell entries refer to the ID number of a hypothesis in the scoring sheet used in Study 2. Descriptions of hypotheses are abbreviated in this table to permit cross-referencing with Table 3. Multiple entries refer to two different variants of a hypothesis in the scoring sheet that map onto a single hypothesis in Table 3 and are used to conserve space.

bCell entries are the number of comments consistently assigned to that hypothesis by 3 or more participants.

The frequency information in Table 7 was again compared to the corresponding information in Table 3, and the Spearman correlation was significant, ρ = .87, p < .002 (Pearson’s r = .88, p < .002). Hypotheses that did not appear in Table 7 were represented by a frequency of zero.

Table 7 also permits a quantitative assessment of the prevalence of conspiracist discourse in response to LOG12: Given that the “breadth” search involved a neutral search string, this part of the corpus (382 content items) was in no way slanted towards inclusion of conspiracist content. It follows that the number of comments that participants assigned to the recursive hypotheses from Study 1 (N = 77; see all bold-faced entries in Table 7) provides an estimate of the prevalence of conspiracist discourse in response to LOG12. On that basis, 20% of the content items can be considered conspiracist. In the General Discussion, we present additional evidence from the extant literature that highlights the widespread use of conspiracist language in blogs that reject well-established scientific propositions.

Discussion

Study 2 provided independent support for the classification of hypotheses and their frequency that formed the basis of our thematic analysis in Study 1. Because Study 2 compared content material from two different search sets, the results suggest that the classifications used in the thematic analysis were not contingent on a particular search strategy.

Critics might argue, however, that the present results were biased by the fact that participants were provided with a fixed set of choices (i.e., the 38 hypotheses on the scoring sheet), and that the emergence of the hypotheses in Table 3 is therefore not altogether surprising. Several considerations weaken this potential criticism: First, participants had the option not to classify a content item if they felt that it did not fall into any available category (by placing a 0 into the response column, which occurred around 10% of the time). Second, the scoring sheet offered 5 different generic “other” options, so participants had ample opportunity to sidestep the suggested response options. Finally, participants were given the opportunity to create their own free-form categorizations for the content material, although none chose to do so. On balance, we argue that it is implausible that the results are an artifact of the scoring sheet.

A further criticism might point to the fact that Study 2 yielded several additional hypotheses (see Tables 6 and 7) that were not uncovered by the thematic analysis in Study 1. Could this be taken to imply that the thematic analysis was incomplete? Our response is threefold: First, the thematic analysis was intended to establish the likely presence of conspiracist ideation in the blogosphere, rather than provide a complete inventory of all such discourse in the blogosphere. Hence, Table 3 was intentionally limited to those hypotheses to which at least one criterion for conspiratorial ideation applied. Second, inspection of Tables 6 and 7 suggests that even some of the novel classifications have conspiratorial attributes (e.g., “colluding with journalists”). Third, and most important, we next report an experiment that sought independent confirmation of the conspiratorial attributes of the content giving rise to all hypotheses in Tables 3, 6, and 7. The final study also examined whether the hypotheses advanced by the blogosphere differ from conventional scholarly critique.

Study 3: Blind Test of Conspiracist Criteria

In Study 3, participants were presented with a set of content items from two sources: A sample from the combined corpus used in Study 2 and a comparison sample that was generated by academically-trained critics. Participants provided subjective ratings for each item on a number of dimensions that were designed to tap the criteria for conspiracist ideation in Table 3.

Method

Stimulus Generation

The stimuli for Study 3 comprised the corpus of 508 content items used in Study 2 and an additional set of 43 comparison items that were intended to represent incisive and scientifically-argued criticisms of LOG12. The comparison items were generated by 3 PhD students in Psychology at the University of Bristol (not known to any of the authors at the time; S.L. had only moved to Bristol a few months earlier and was not yet teaching or supervising students). The students were provided with a copy of LOG12 and the instructions to “(...) generate comments about this paper that could be posted on Internet blogs. Please be as critical as possible based on your scientific training to date. The comments are supposed to point to potential flaws and short-comings of the paper and methodology.” Comments had to stand on their own (i.e., could be understood without reading other comments), and were entered by the students via a blog-like web portal specifically created for this purpose to ensure anonymity. Students were remunerated at the rate of £10 per hour and worked independently (approximately 4 hours). Students generated between 12 and 17 comments each.

The web portal constrained the word length of comments by calculating a running average word length of all submitted comments up to that point. If the average word count was greater than 65 (i.e., the average word count of the corpus from Study 2), then the maximum word count allowed for a submitted comment was constrained to 265 minus the average word count. If the average word count was less than 65, then the maximum word count allowed was set to 200. The final set of 43 comments were roughly comparable in length (M = 55.8, SD = 22.1, range 23–113) to the corpus of 508 web-content items. Comments generated in this manner were used in the experiment without any further editing.
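The portal’s length rule reduces to a simple function of the word counts submitted so far. This is our reconstruction of the rule as stated above; the behavior before any comment has been submitted is our assumption.

```python
def max_allowed_words(submitted_counts):
    """Cap on the next comment's length, given the word counts of all
    comments submitted so far. If the running average exceeds 65 (the
    average word count of the Study 2 corpus), the cap is 265 minus
    that average; otherwise it is 200."""
    if not submitted_counts:
        return 200  # assumption: no cap history before the first comment
    avg = sum(submitted_counts) / len(submitted_counts)
    return 265 - avg if avg > 65 else 200
```

The effect of this rule is homeostatic: long comments drag the running average up, which tightens the cap on subsequent comments, keeping the PhD-generated set comparable in length to the web corpus.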

Participants

The participants were 25 members of campus communities: 5 were from the University of Western Australia (mainly undergraduate students) and 20 were psychology undergraduates at the University of Bristol who participated voluntarily in exchange for course credit or remuneration at the rate of A$10 or £10 per hour, respectively. The sample comprised 16 female and 9 male participants with an average age of 24.68 years (SD = 10.1). The experiment lasted approximately 1 hr (participants in Bristol then completed a further unrelated experiment). Given that “skeptic” blogs typically claim to spearhead public engagement in science, the use of undergraduates as representatives of the “public” to judge the validity of such claims seemed appropriate.

Procedure

The experiment was controlled by a Windows computer that presented all stimuli and recorded all responses with the aid of the Psychtoolbox for MATLAB (Pelli, 1997). There were 86 rating trials in total, involving the 43 comparison items generated for this experiment (called PhD from here on) and 43 web-based content items (web from here on) that were sampled at random and anew for each participant from the corpus of 508 content items. For a random half of participants, the web-based items were sampled exclusively from the set used in Study 1 (i.e., the initial “depth” search), and for the remaining half of participants the items were sampled exclusively from the set recovered in the “breadth” search for Study 2 (recall that this set subsumed some items that were also returned by the “depth” search).

The two types of items were randomly intermixed into a single sequence of 86 rating trials. On each trial, the content item was presented at the top of the screen, and participants then responded to five test questions using the numbers 0 through 9 to indicate their judgment. Each test question was presented below the content item and replaced the previous test question (if any). Table 8 shows the test questions in the order in which they were presented on every trial. The first 4 questions queried aspects of conspiracist ideation that were used for the thematic analysis in Study 1 (viz., Questionable Motives, Overriding Suspicion, Persecution-Victimization, and Must be Wrong), and the final question queried a potentially contrasting dimension, namely the extent to which a content item constituted a reasonable critique of LOG12. (Criteria Nothing is an Accident, Self-Sealing, and Unreflexive Counterfactual Thinking cannot be applied to single comments in isolation without knowledge of the context and hence were not targeted by any of the measures.)

Table 8

Test Questions Used in Study 3

Itema Short Labelb
…believe that the scientists acted with questionable motives? Questionable Motives
…express deep-seated suspicion? Suspicion
…perceive himself/herself as a victim of scientists or research? Victim
…firmly believe that there must be something wrong with the research? Something must be Wrong
…offer a reasonable and well thought out criticism of the research? Reasonable Critique

Note. Participants responded on a 10-point scale using the digits 0 through 9.

aEach test question combined the phrase “To what extent does the commenter…” with the text given in the table entry.

bShort label used in figures and to refer to a measure in the text.

Participants responded at their own pace and each test question remained visible until a response was made. Trials were separated by a 2-second period during which the screen went blank.

Results

Due to equipment failure, 3 subjects were missing responses to the final 11, 5, and 37 content items, respectively. All subjects were retained for analysis (the results do not change appreciably if participants with missing observations are omitted). Responses were averaged across trials separately for each content type to yield two data points per subject (one for PhD and one for web) on each of the 5 measures. Those scores were first entered into 5 separate between-within subjects analyses of variance to examine whether the content set (“depth” from Study 1 vs. “breadth” from Study 2) had an effect on responding. Neither any of the main effects nor any of the interactions involving content set was significant in any of the analyses, with the largest F(1, 23) = 2.78, p > .10 for the interaction between type of comment and content set for Suspicion. The remainder of the analysis thus collapsed across content sets and focused on the effect of comment type (i.e., web vs. PhD).

Five independent analyses of variance revealed significant main effects of comment type for all measures. The test statistics are shown in Table 9 and boxplots of the data are provided in Figure 1. It is clear that web comments were rated considerably higher than the PhD comments on all 4 measures that tapped conspiracist attributes, whereas the reverse was true for the single item examining the scholarly nature of a content item.viii

Table 9

Summary of Results for Study 3

Dependent variable F MSE p PhD = 9 Web = 9
Questionable motives 72.14 1.00 <.0001 .042 .266
Suspicion 101.60 0.81 <.0001 .041 .259
Victim 66.65 0.64 <.0001 .011 .096
Something must be wrong 62.39 0.56 <.0001 .050 .353
Reasonable critique 29.53 0.97 <.0001 .077 .030

Note. All Fs have df = (1,24). Entries in last two columns are proportions of total number of ratings that used the extreme top of the response scale (9).

Figure 1

Summary of overall responses for all 5 measures in Study 3. Thick horizontal lines are medians, and medians differ significantly if the notches between boxes in each panel do not overlap. See Krzywinski and Altman (2014) for interpretative guidance of notched box plots.

The last two columns in Table 9 provide further information about the distributions of responses. The columns show the proportion of extreme responses (i.e., the top of the scale; 9) across all trials and participants that were provided for each question and content type. In most circumstances, the incidence of such extreme responses in behavioral research is low as people tend to avoid the extremes, especially with a relatively fine-grained scale, such as the one used here with ten response classes. Accordingly, only about 5% of judgments for the PhD comments used this extreme category. For the web material, by contrast, the proportion of such responses was considerably greater for the questions that targeted conspiracist ideation. For the “something must be wrong” question in particular, more than a third of all responses used the most extreme value, with “questionable motives” and “suspicion” lagging not far behind. It was only for the single question examining the scholarly thrust of a comment that extreme endorsements were rare (3%) for the web comments (and comparatively greater for the PhD comments; 8%).

A visual illustration of the skew in the response distributions for web content is shown in Figure 2. The dark gray histograms represent the distribution (normalized so the area sums to unity) of responses to web content items across items and subjects. The superimposed dashed line represents a kernel density estimate of the responses to the PhD-generated content for comparison. The skew towards extreme values for web content on the conspiracist questions (albeit attenuated for “victim”) is readily apparent, as is the (attenuated) reverse skew for the question targeting legitimate scholarly critique.ix

Figure 2

Histograms (dark gray) for all responses to web content (not averaged across trials within a participant) for all 5 questions in Study 3. The dashed line that encloses the cross-hatched area represents Gaussian kernel density estimates (using 1.5 times the standard bandwidth) of the equivalent responses to PhD-generated content.

Discussion

The results of Study 3 are straightforward: Compared to material generated by junior scholars who were instructed to be as critical as possible of LOG12, the discourse in the blogosphere scored higher on the four criteria for conspiracist ideation tested in this study. One potential criticism of this result is that the junior scholars—notwithstanding the promise of anonymity and notwithstanding the fact that they were not known to any of the present authors—might have been constrained by professional courtesy and therefore might have attenuated the full extent of their critique. This criticism can be deflected on the basis of two aspects of our results: First, participants judged the PhD-generated material to offer a more thought-out criticism than the responses of the blogosphere. This significant difference is difficult to reconcile with attenuation (and could only be even greater if any possible attenuation were absent). Second, the fact that a large proportion of blogosphere material was given an extreme rating on most of the potentially conspiracist attributes is diagnostic by itself, even in the absence of any comparison with the PhD-generated material. The web-content items thus largely spoke for themselves and although the PhD-generated material provided comparative context, it was not essential to our conclusions.

Our results were obtained with anonymized material that was presented with no guiding context, to participants who knew nothing about the issue under consideration. Moreover, the blog comments included content from both Internet searches conducted for this project (i.e., the “depth” search for Study 1 and the “breadth” search for Study 2), and the two different search sets were found to make no difference to people’s ratings. The data thus again support the results of our thematic analysis in Study 1.

General Discussion

Potential Limitations

The present studies were concerned with the blogosphere’s response to a single 4,000-word article. One might therefore question the generality of our results. In response, we note that at least one other scientific report in the climate arena engendered a sustained critique that subsequent scholarly analysis identified as conspiracist (Lahsen, 1999). Likewise, accounts by climate scientists of the strategies of climate denial are replete with reports of conspiratorial accusations against individual papers (e.g., Mann, 2012).

A second criticism might cite the fact that in our thematic analysis we have considered the “blogosphere” as if it were a single entity, analyzed within the context of psychological processes and constructs that typically characterize the behavior of individuals rather than groups. Our response is twofold: First, at the level of purely descriptive narrative methods, our work fits within established precedents involving the examination of communications from heterogeneous entities such as the U.S. Government (Kuypers, Young, & Launer, 1994) or the Soviet Union (Kuypers, Young, & Launer, 2001). Second, at a psychological level, numerous psychological constructs—such as cognitive dissonance, social dominance orientation, or authoritarianism—have been extended to apply not only to individuals but also to groups or societies (e.g., Moghaddam, 2013). We therefore argue that our extension of individual-level work on conspiracist ideation to the level of amorphous groups fits within precedent in two areas of scholarly enquiry.x

A further criticism might hold that although we may have presented some evidence for the presence of conspiracist ideation, the evidence falls far short of “real” conspiracy theories involving events such as 9/11 or the moon landing. In response, we note that the hypotheses leveled against LOG12 do not differ qualitatively—that is, in terms of magnitude or scope—from others that have been identified as conspiracist in the context of another paper in the climate arena (Lahsen, 1999) or that have been observed in response to experimental manipulations (Whitson & Galinsky, 2008). We suggest that conspiracist ideation, like most other psychological constructs (e.g., extraversion), represents a continuum that finds expression to varying extents in conspiratorial theories of varying scope.

Critics might further invoke the fact that two of the present authors also authored LOG12, thereby creating a potential adverse conflict of interest. On this view, the response of the blogosphere represented legitimate criticism of LOG12, and the present article is merely an attempt to deflect the impact of that criticism. We argue against this view on multiple grounds. First, the present article fits squarely with precedents in the scholarly literature of researchers analyzing or reporting events arising from their own work (e.g., Landman & Glantz, 2009; Olivieri, 2003). For example, Landman and Glantz (2009) provide an in-depth analysis of the attacks by the tobacco industry on the research of co-author Stanton A. Glantz. Similarly, Olivieri (2003) provides a personal account of events arising from her pharmaceutical research, which was subject to legal threats and interference from a drug company. Both of those articles report events of considerable scholarly and public importance and would not have been possible without the involvement of a concerned party. We argue that this equally applies to the present article. Second, and going beyond those precedents, two of the present authors who had no involvement with LOG12 whatsoever audited, and confirmed, the thematic analysis of Study 1 independently. Finally, and most important, it is difficult to conceive of ways in which the outcome of the behavioral studies in this article—which confirmed the thematic analysis—could have been affected by a potential conflict of interest. All research staff and participants were either completely unknown to the authors or known only in the most casual and superficial manner. None had any prior academic, scholarly, or financial connection to the authors.

It must also be noted that the present article arguably goes against the interests of the LOG12 authors because it places several criticisms of LOG12 into the peer-reviewed literature that had previously been limited to Internet blogs. Given the well-known resistance of information to subsequent correction (e.g., Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012; Lewandowsky, Stritzke, Oberauer, & Morales, 2005), the exploration of this criticism without any rebuttal (except for hypotheses 2, 6, 7, and 8 in Table 3, which could be unambiguously classified as false) does not constitute a defense of LOG12—quite the contrary, it may raise questions about LOG12 in readers’ minds.

Bearing in mind those potential limitations, we now turn to the two goals stated at the outset: the implications of our work for the understanding of conspiracist ideation, and for the role of the blogosphere, and the public more generally, in the conduct of science.

Implications for Understanding Conspiracist Ideation

Our principal thesis is that some of the responses to LOG12 voiced in the blogosphere satisfy attributes of conspiracist ideation by the criteria defined at the outset. Two attributes deserve to be highlighted: First, most of the hypotheses can be unified under the immutable belief that “there must be something wrong” (Must be Wrong; see Table 3) and that the authors of LOG12 engaged in intentional malfeasance (Questionable Motives, Overriding Suspicion).

Whereas suspicion on its own is insufficient to identify conspiracist ideation, it arguably constitutes one of its core attributes. For example, the suspicion that LOG12 did not contact “skeptic” bloggers tacitly invokes several major presumptions, namely (a) that the authors of LOG12 were willing to engage in research misconduct; (b) that they would invent a claim about a non-event and publish it in a Method section when there was no incentive or reason to do so; and (c) that they should have somehow provided more “evidence” for the method they used beyond writing an accurate Method section.

Indeed, most of the hypotheses advanced about LOG12 included an accusation of intentional wrong-doing by the authors (viz. minimally hypotheses 2, 4, 5, 6, and 8), which goes beyond pointing to problems and errors as would be the norm in conventional scientific discourse (and as was empirically confirmed in Study 3). The ease with which those presumptions about misconduct and malfeasance were made and accepted provides a fertile environment in the blogosphere for the subsequent unfolding of conspiracist ideation (cf. Keeley, 1999; Wood et al., 2012).

Those underlying beliefs infused conspiracist elements even into those hypotheses that would be expected to arise during routine scholarly critique. For example, the scamming hypothesis evolved continuously without being guided by clear a priori assumptions about what would constitute a scammed response profile, thereby ultimately rendering this hypothesis self-sealing and unfalsifiable (criterion Self-Sealing). It is this psychological attribute, combined with the lack of clear a priori standards for a profile for a “fake” response, that points towards a conspiracist component rather than conventional scholarly critique.

Second, self-sealing reasoning also became apparent in the broadening of the scope of presumed malfeasance on several occasions. When ethics approvals became public in response to an FOI request, the presumption of malfeasance was broadened from the authors of LOG12 to include university executives and the university’s ethics committee. Similarly, the response of the blogosphere evolved from an initial tight focus on LOG12 into an increasingly broader scope. Ultimately, the LOG12 authors were associated with global activism, an A$6 million media initiative, and government censorship of dissent, thereby arguably connecting the response to LOG12 to the grand over-arching theory that “climate change is a hoax.”

Notably, even that grand “hoax” theory is occasionally thought to be subordinate to an even grander theory: one of the bloggers involved in the response to LOG12 considers climate change to be only the second biggest scam in history. The top-ranking scam is seen to be modern currency, dismissed as “government money” because it is not linked to the gold standard. The observed broadening of scope meshes well with previous research that has identified stable personality characteristics that predict the propensity for conspiracist ideation (cf. Douglas & Sutton, 2011; Goertzel, 1994; Swami et al., 2010). It is therefore not altogether surprising that suspicions about a single scholarly paper can rapidly mature into more encompassing hypotheses.

Possible Future Research Directions

Our research points to at least two issues that merit further investigation. The first issue concerns the prevalence of the discourse we have documented in this article.

Establishing Prevalence of Conspiracist Discourse

Our studies provide an “existence proof” for conspiracist discourse, and the “breadth” corpus also contains quantitative information suggesting that this element of discourse was quite common in this instance. Nonetheless, extrapolation from our circumscribed corpus to the overall prevalence of conspiracist discourse in the climate blogosphere is fraught with risk and may be inadvisable. There is, however, other recent evidence that establishes the prevalence of conspiracist discourse in climate denial generally (as well as in the rejection of other scientific propositions). For example, the content on “skeptic” climate blogs that is dedicated to the “climategate” pseudo-scandal has been steadily increasing since 2009 (Lewandowsky, 2014a). Given that the scientists embroiled in “climategate” have been exonerated by 9 different enquiries, and given that the public demonstrably lost interest in the affair a long time ago (Anderegg & Goldsmith, 2014), the growing fascination with climategate in the blogosphere is at least indicative of a conspiracist element.

More direct evidence for the prevalence of conspiracist themes relating to climate change on social media was provided by Jang and Hart (2015), who analyzed the full body of climate-relevant Twitter content over two years in four English-speaking countries (U.S., U.K., Canada, and Australia). Based on an analysis of around 5,700,000 tweets, Jang and Hart found that tweets accepting climate change as “real” were only twice as prevalent as tweets that referred to climate change within a “hoax” framework (4% vs. 2% in the U.S., and 2% vs. 1% in the U.K.). Although the absolute proportion of “hoax” tweets may appear low (1-2%), it is still notable compared to tweets that framed climate change in terms of “action” (around 6% across the four countries). Moreover, within the U.S., there were more “hoax” tweets in states that were predominantly Republican than in states that leaned towards the Democrats—this mirrors the known role of political orientation in the acceptance or rejection of climate change (Dunlap & McCright, 2008; Feygina et al., 2010; Hamilton, 2011; Heath & Gifford, 2006; Kahan, 2010; Kahan et al., 2011; Lewandowsky, Gignac, & Oberauer, 2013; Lewandowsky, Oberauer, & Gignac, 2013; McCright & Dunlap, 2011a, b).

Outside the Anglosphere, Bessi et al. (2015) compiled a database of what they believed to be all scientific and competing conspiracist information sources active on Facebook in Italian. The database comprised 73 pages with more than 270,000 posts on a variety of scientific issues (and their pseudo-scientific counterparts, viz. “chemtrails”). Bessi et al. identified 864,000 conspiracy users (identified by their association with web pages that the authors classified as conspiracist), compared to 333,000 users of science pages. These results suggest that conspiratorial attitudes towards science are not confined to a small number of Internet denizens. We therefore suggest that the present results are likely indicative of a notable proportion of the climate blogosphere’s discourse, although its exact prevalence remains to be established.

Unreflexive Counterfactual Thinking—A Novel Aspect of Conspiracist Reasoning?

Second, we uncovered a potentially novel aspect of conspiracist reasoning when some of the later hypotheses were found to involve a residual impact of earlier, discarded hypotheses. For example, whereas critics initially argued that the results of LOG12 were invalid because “skeptic” bloggers were not contacted (hypothesis 2 in Table 3), upon release of evidence to the contrary, the same conclusion of invalidity was reached by other means: either because of a preliminary report of the data during a colloquium (hypothesis 3); or because of the presumably faulty timing of the correspondence with “skeptic” bloggers (hypothesis 4); or because “skeptic” bloggers were emailed different versions of the survey (hypothesis 5). All of those hypotheses rely on counterfactual thinking, because no “skeptic” blogger posted links to the survey, and therefore neither the dates of correspondence nor the version of the survey (nor any other event involving those bloggers) could have affected the data as reported in LOG12.

This point requires further analysis, because counterfactual reasoning—that is, the use of contrary-to-fact premises in arguments against explanatory hypotheses—is common in legitimate scholarly critiques. This reasoning was indeed present in any assertion that if the skeptic sites had posted links to the survey, then the LOG12 results would have been different. Irrespective of the truth or falsity of the subjunctive counterfactual conditional that the data would have been significantly changed (which is an open empirical question, adjudicated only partially by the replication reported in LGO13), this use of counterfactual reasoning legitimately raises the question of whether the LOG12 data are based on a sufficiently representative sample.

By contrast, the unreflexive counterfactuals asserting that skeptic sites were not contacted or were contacted only after a delay (as reported in the context of Study 1) are striking for two reasons. First, even if the counterfactuals were true (as in the case of hypothesis 4; viz., the delay in contacting “skeptic” blogs), they nevertheless have no implications for how the survey results might have differed. Because no links to the survey were ever posted on “skeptic” sites, irrespective of any delay, the unreflexively asserted counterfactuals about delays (or the false claim that “skeptic” blogs were not contacted) can have no bearing at all on the accuracy of the survey results. In short, even if the counterfactuals are granted as premises, they do not imply the conclusion that the results are skewed in some way, because the results as reported do not depend on a non-existent state of the world.

There is, however, a way to interpret these comments that would make the counterfactuals less irrelevant. Perhaps they should be interpreted as asserting that “Had the ‘skeptic’ sites been contacted in a different manner, they would have posted a link to the survey and their readers would have responded in ways that would have changed the results (and made them more accurate).” Intriguingly, there is no evidence that these unreflexive counterfactuals were intended to support that sort of claim. Instead, they seem to be offered in support of claims that the investigators had questionable motives, that something was wrong, or that the “skeptics’ ” suspicions would lead them, as some explicitly asserted (see Study 1), to do the opposite and not post a link to the survey. We therefore suggest that this unreflexive counterfactual reasoning may be part of the toolbox of conspiracist discourse and may warrant further scholarly attention.

Implications for the Public’s Participation in Science

Could the activity in the blogosphere have constituted legitimate scholarly critique of LOG12, rather than a partially conspiracist discourse as is argued here? Several considerations speak against that possibility. First, the data from Study 3 show that unbiased observers were unable to discern much scholarly value in the blogosphere’s response to LOG12.

Second, we already noted at the outset that the public discourse in the blogosphere was accompanied by non-public events to prevent or delay the publication of LOG12. The attempts to suppress publication of a peer-reviewed paper, in conjunction with the absence of any discernible conventional scholarly activity, speak against the possibility that the blogosphere’s discourse represents legitimate public engagement with science. There are indications that seeking the retraction of inconvenient papers has become a routine practice among individuals who deny (climate) science: We already noted the circumstances surrounding Recursive Fury at the outset. Moreover, one of the bloggers who was also involved in the response to LOG12 recently called for the retraction of a peer-reviewed paper that had underscored the pervasive scientific consensus on climate change through an analysis of nearly 12,000 peer-reviewed articles (Cook et al., 2013). To date, we have become aware of 7 instances in which editors were subject to what can reasonably be classified as harassment or intimidation in order to achieve the retraction of inconvenient papers. The potentially chilling effects of those activities on academic freedom must be analyzed further.

The present findings add to the body of evidence showing that blog posts and comments sometimes have adverse effects on the rationality and civility of public discourse. For example, Anderson, Brossard, Scheufele, Xenos, and Ladwig (2014) showed that people became more polarized about an emerging technology (nanotechnology) when exposed to uncivil rather than civil online comments. Additional recent evidence has uncovered an association between online “trolling” behavior and three aspects of the “Dark Tetrad” of personality—sadism, psychopathy, and Machiavellianism (Buckels, Trapnell, & Paulhus, 2014)—with the link being strongest with sadism. It therefore appears that cyber-trolling may be the contemporary Internet manifestation of everyday sadism.

In response to those risks, at the time of this writing, several large online news services, such as Popular Science (http://www.popsci.com/; 2.8 million unique monthly visitors) and Reddit (http://www.reddit.com/; 4 million unique monthly visitors) have either banned comments altogether (Popular Science) or have selectively banned comments from climate “trolls” by insisting that arguments be supported by peer-reviewed science (Reddit), citing the harmful effects of comments that are emotive and ad hominem.

Our results therefore contribute to a growing understanding of online behavior and its implications for the conduct of science and public discourse. We offer two tentative pointers towards possible solutions. First, it is increasingly clear that there is a need to provide better architectures for online platforms to help channel public discourse into a more constructive direction. The requirement to back up claims by evidence (as on Reddit) or to consider factual status in search engines (as recently piloted by Google) point in that direction. Other alternatives include strict moderation of comments, as for example practiced by the online newspaper TheConversation.com, which employs a “Community Manager” and has entertained options such as a “community council” to provide moderation (https://theconversation.com/involving-a-community-council-in-moderation-25547).

Second, our results point to the need to educate the public about the difference between scientific and non-scientific forms of discourse. The Internet—as a platform for everyone to voice any opinion and make any claim, however unsupported by evidence—will not go away, and the positives associated with a “free for all” medium should not be under-estimated. However, we need to protect the evidence-bound sphere of scientific arguments from the largely unconstrained buzz outside that sphere. Peer review certainly has a role in defending that boundary: As far as we know, none of the individuals involved in the blogosphere’s response to LOG12 ever submitted a critique for peer review. Conversely, a peer-reviewed critique of LOG12 and LGO13 has recently appeared in print (Dixon & Jones, 2015), accompanied by a rejoinder (Lewandowsky, Gignac, & Oberauer, 2015), which exhibited none of the features of conspiratorial ideation that we report in this article and whose authors were not part of the blogosphere examined here. Crucially, such academic discourse, however critical, does not involve the attempt to silence inconvenient voices, which has become an increasingly clearly stated goal of elements of the climate “skeptic” blogosphere.

Notes

i) In current scholarly usage the term “denial” is often reserved to describe an active public denial of scientific facts by various means, such as the use of rhetoric to create the appearance of debate where there is none (Diethelm & McKee, 2009; McKee & Diethelm, 2010). The term “rejection of science,” by contrast, has been used in research aimed at identifying the factors that predispose people to be susceptible to organized denial (e.g., Lewandowsky, Gignac, & Oberauer, 2013; Lewandowsky, Oberauer, & Gignac, 2013). In the present article, we frequently use the term “denial” because the object of our study is the active and public dissemination of information.

ii) The criteria for this hypothesis may also have shifted in response to a blogpost by two of the authors of LOG12, which demonstrated the resilience of their main findings to the removal of outliers on the measure of greatest interest, viz. the endorsement of the various conspiracy theories, on 12 September 2012 (http://www.shapingtomorrowsworld.org/lewandowskyScammers1.html).

iii) This statement was made on the same day that the bloggers’ names were released and it is impossible to ascertain whether it predated or postdated the release.

iv) Following a complaint to the Australian Press Council by S.L., this post was subsequently amended (in 2013).

v) At the time of this writing, this initial FOI request had been followed by at least 4 more FOI requests for items ranging from correspondence to the publication dates of blog posts.

vi) It was necessary to deploy this fairly complex matching algorithm because the length of content items sometimes differed between the two sets; that is, the “depth” set might have contained a shorter excerpt of the same comment or blog post than the “breadth” set, or vice versa. Given the strict criteria for a match, which avoided false positives altogether, the number of matches was likely an underestimate.

vii) The results are qualitatively identical if only primary classifications are considered.

viii) Note that the PhD items included numerous content items that were generated by the same person, which might introduce a correlation among responses to those items. This is less likely for the web items which were sampled from a much larger set (although the overall corpus likely included multiple items produced by the same person). Any potential dependence among responses for each category would, however, have no statistical consequence because all responses were averaged within each content type, and hence each participant only provided a single observation for the analysis for each type. Those responses are independent across participants, as required by the analysis of variance.

ix) This skew could not be detected in the boxplots in Figure 1 because they were based on average responses across content items of each type for each participant, as is required for the analysis of variance. Note also that kernel density estimates for the PhD content are used only to reduce visual clutter; histograms would have revealed the same pattern but cannot be overlaid without impairing readability. Gaussian kernel density estimates provide a legitimate continuous “outline” of any distribution, no matter its shape; the word Gaussian refers to the smoothing technique, not the shape of the distribution being characterized.
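The smoothing idea described in note ix can be illustrated with a minimal, self-contained sketch (in Python, using a hypothetical skewed sample rather than the study data): each observation contributes a Gaussian “bump,” and summing the bumps yields a smooth, continuous outline of the distribution, whatever its shape.

```python
import math

def gaussian_kde(data, bandwidth, xs):
    """Evaluate a Gaussian kernel density estimate at each point in xs.

    Each data point contributes a Gaussian bump centered on itself;
    the sum of the bumps traces a smooth outline of the distribution.
    """
    n = len(data)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((x - d) / bandwidth) ** 2) for d in data)
            for x in xs]

# Hypothetical, heavily skewed responses (not the actual study data):
sample = [1, 1, 1, 1, 2, 2, 3, 7]
grid = [x / 10 for x in range(0, 81)]          # evaluation points 0.0 .. 8.0
density = gaussian_kde(sample, 0.5, grid)      # bandwidth 0.5 is arbitrary
peak = grid[density.index(max(density))]       # location of the estimated mode
```

Because the kernels sit on the data rather than in histogram bins, the estimate stays continuous even for skewed samples; the bandwidth controls only the degree of smoothing, not the shape of the result.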

x) It must be noted that the distribution of contributions to the blogosphere’s discourse is far from even. The search string “<firstname><lastname>” Lewandowsky (with actual names replacing the <> placeholders) yields between 5,000 and 9,000 Google hits for 4 particularly active individuals.

Funding

The first author was supported by a Discovery Outstanding Researcher Award from the Australian Research Council during part of this research, and he has been supported by a Wolfson Research Merit Award from the Royal Society since 2013. In addition, the research was supported by internal funding from the University of Bristol and the University of Western Australia. The remaining authors have no funding to report.

Competing Interests

The authors have declared that no competing interests exist.

Acknowledgments

The authors thank Charles Hanich for assistance throughout the project, and Alexandra Freund for comments on an earlier version of the manuscript. S.L. blogs at www.shapingtomorrowsworld.org, J.C. blogs at www.skepticalscience.com, S.B. blogs at www.scottbrophy.com, and M.M. formerly blogged at http://watchingthedeniers.wordpress.com/.

References

  • Abt, C. C. (1983, September). The anti-smoking industry (Philip Morris internal report). Retrieved from http://legacy.library.ucsf.edu/tid/vob81f00

  • Anderegg, W. R. L., & Goldsmith, G. R. (2014). Public interest in climate change over the past decade and the effects of the ‘climategate’ media event. Environmental Research Letters, 9, Article 054005. doi:10.1088/1748-9326/9/5/054005

  • Anderegg, W. R. L., Prall, J. W., Harold, J., & Schneider, S. H. (2010). Expert credibility in climate change. Proceedings of the National Academy of Sciences of the United States of America, 107, 12107-12109. doi:10.1073/pnas.1003187107

  • Anderson, A. A., Brossard, D., Scheufele, D. A., Xenos, M. A., & Ladwig, P. (2014). The “nasty effect”: Online incivility and risk perceptions of emerging technologies. Journal of Computer-Mediated Communication, 19, 373-387. doi:10.1111/jcc4.12009

  • Bale, J. M. (2007). Political paranoia v. political realism: On distinguishing between bogus conspiracy theories and genuine conspiratorial politics. Patterns of Prejudice, 41, 45-60. doi:10.1080/00313220601118751

  • Barkun, M. (2003). A culture of conspiracy: Apocalyptic visions in contemporary America. Berkeley, CA: University of California Press.

  • Bell, L. (2011). Climate of corruption: Politics and power behind the global warming hoax. Austin, TX: Greenleaf Book Group Press.

  • Bessi, A., Coletto, M., Davidescu, G. A., Scala, A., Caldarelli, G., & Quattrociocchi, W. (2015). Science vs conspiracy: Collective narratives in the age of misinformation. PLOS ONE, 10, Article e0118093. doi:10.1371/journal.pone.0118093

  • Buckels, E. E., Trapnell, P. D., & Paulhus, D. L. (2014). Trolls just want to have fun. Personality and Individual Differences, 67, 97-102. doi:10.1016/j.paid.2014.01.016

  • Cook, J., Nuccitelli, D., Green, S. A., Richardson, M., Winkler, B., Painting, R., Skuce, A. (2013). Quantifying the consensus on anthropogenic global warming in the scientific literature. Environmental Research Letters, 8, Article 024024. doi:10.1088/1748-9326/8/2/024024

  • Delingpole, J. (2011). Watermelons: The Green movement’s true colors. New York, NY: Publius Books.

  • Diethelm, P., & McKee, M. (2009). Denialism: What is it and how should scientists respond? European Journal of Public Health, 19, 2-4. doi:10.1093/eurpub/ckn139

  • Dixon, R. M., & Jones, J. A. (2015). Conspiracist ideation as a predictor of climate-science rejection: An alternative analysis. Psychological Science. Advance online publication. doi:10.1177/0956797614566469

  • Doran, P. T., & Zimmerman, M. K. (2009). Examining the scientific consensus on climate change. EOS, 90, 21-22. doi:10.1029/2009EO030002

  • Douglas, K. M., & Sutton, R. M. (2011). Does it take one to know one? Endorsement of conspiracy theories is influenced by personal willingness to conspire. The British Journal of Social Psychology, 50, 544-552. doi:10.1111/j.2044-8309.2010.02018.x

  • Dunlap, R. E., & McCright, A. M. (2008). A widening gap: Republican and Democratic views on climate change. Environment: Science and Policy for Sustainable Development, 50, 26-35. doi:10.3200/ENVT.50.5.26-35

  • Feygina, I., Jost, J. T., & Goldsmith, R. E. (2010). System justification, the denial of global warming, and the possibility of “system-sanctioned change”. Personality and Social Psychology Bulletin, 36, 326-338. doi:10.1177/0146167209351435

  • Gillis, J. (2013, February 21). Unlocking the conspiracy mind-set [Web log post]. Retrieved from http://green.blogs.nytimes.com/2013/02/21/unlocking-the-conspiracy-mindset/?_r=0

  • Goertzel, T. (1994). Belief in conspiracy theories. Political Psychology, 15, 731-742. doi:10.2307/3791630

  • Goertzel, T. (2010). Conspiracy theories in science. EMBO Reports, 11, 493-499. doi:10.1038/embor.2010.84

  • Gosling, S. D., Vazire, S., Srivastava, S., & John, O. P. (2004). Should we trust web-based studies? A comparative analysis of six preconceptions about Internet questionnaires. The American Psychologist, 59, 93-104. doi:10.1037/0003-066X.59.2.93

  • Hamilton, L. C. (2011). Education, politics and opinions about climate change evidence for interaction effects. Climatic Change, 104, 231-242. doi:10.1007/s10584-010-9957-8

  • Heath, Y., & Gifford, R. (2006). Free-market ideology and environmental degradation: The case of belief in global climate change. Environment and Behavior, 38, 48-71. doi:10.1177/0013916505277998

  • Inhofe, J. (2012). The greatest hoax: How the global warming conspiracy threatens your future. Washington, DC: WND Books.

  • Isaac, R. J. (2012). Roosters of the apocalypse: How the junk science of global warming nearly bankrupted the western world. Chicago, IL: The Heartland Institute.

  • Jang, S. M., & Hart, P. S. (2015). Polarized frames on “climate change” and “global warming” across countries and states: Evidence from Twitter big data. Global Environmental Change, 32, 11-17. doi:10.1016/j.gloenvcha.2015.02.010

  • Jones, R. (2014, April). Frontiers retraction controversy [Web log post]. Retrieved from https://2risk.wordpress.com/2014/04/17/frontiers-retraction-controversy/

  • Kahan, D. M. (2010). Fixing the communications failure. Nature, 463, 296-297. doi:10.1038/463296a

  • Kahan, D. M., Jenkins-Smith, H., & Braman, D. (2011). Cultural cognition of scientific consensus. Journal of Risk Research, 14, 147-174. doi:10.1080/13669877.2010.511246

  • Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2, 732-735. doi:10.1038/nclimate1547

  • Kalichman, S. C. (2009). Denying AIDS: Conspiracy theories, pseudoscience, and human tragedy. New York, NY: Springer.

  • Keeley, B. L. (1999). Of conspiracy theories. The Journal of Philosophy, 96, 109-126. doi:10.2307/2564659

  • Koteyko, N., Jaspal, R., & Nerlich, B. (2013). Climate change and ‘climategate’ in online reader comments: A mixed methods study. The Geographical Journal, 179, 74-86. doi:10.1111/j.1475-4959.2012.00479.x

  • Krzywinski, M., & Altman, N. (2014). Points of significance: Visualizing samples with box plots. Nature Methods, 11, 119-120. doi:10.1038/nmeth.2813

  • Kuypers, J. A., Young, M. J., & Launer, M. K. (1994). Of mighty mice and meek men: Contextual reconstruction of the Iranian Airbus shootdown. The Southern Communication Journal, 59, 294-306. doi:10.1080/10417949409372949

  • Kuypers, J. A., Young, M. J., & Launer, M. K. (2001). Composite narrative, authoritarian discourse, and the Soviet response to the destruction of Iran Air flight 655. Quarterly Journal of Speech, 87, 305-320. doi:10.1080/00335630109384339

  • Lahsen, M. (1999). The detection and attribution of conspiracies: The controversy over Chapter 8. In G. Marcus (Ed.), Paranoia within reason: A casebook on conspiracy as explanation (pp. 111-136). Chicago, IL: University of Chicago Press.

  • Landman, A., & Glantz, S. A. (2009). Tobacco industry efforts to undermine policy-relevant research. American Journal of Public Health, 99, 45-58. doi:10.2105/AJPH.2007.130740

  • Leviston, Z., Walker, I., & Morwinski, S. (2013). Your opinion on climate change might not be as common as you think. Nature Climate Change, 3, 334-337. doi:10.1038/nclimate1743

  • Lewandowsky, S. (2014a). Conspiratory fascination versus public interest: The case of ‘climategate’. Environmental Research Letters, 9, Article 111004. doi:10.1088/1748-9326/9/11/111004

  • Lewandowsky, S. (2014b, March). Recursive fury goes recurrent. Retrieved from http://www.shapingtomorrowsworld.org/rf1.html

  • Lewandowsky, S., Cook, J., Oberauer, K., & Marriott, M. (2013). Recursive fury: Conspiracist ideation in the blogosphere in response to research on conspiracist ideation. Frontiers in Psychology, 4, Article 73. doi:10.3389/fpsyg.2013.00073

  • Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13, 106-131. doi:10.1177/1529100612451018

  • Lewandowsky, S., Gignac, G. E., & Oberauer, K. (2013). The role of conspiracist ideation and worldviews in predicting rejection of science. PLOS ONE, 8, Article e75637. doi:10.1371/journal.pone.0075637

  • Lewandowsky, S., Gignac, G. E., & Oberauer, K. (2015). The robust relationship between conspiracism and denial of (climate) science. Psychological Science, 26, 667-670. doi:10.1177/0956797614568432

  • Lewandowsky, S., Mann, M. E., Bauld, L., Hastings, G., & Loftus, E. F. (2013, November). The subterranean war on science. APS Observer, 26(9), 9. Retrieved from http://www.psychologicalscience.org/index.php/publications/observer/2013/november-2013/the-subterranean-war-on-science.html

  • Lewandowsky, S., Oberauer, K., & Gignac, G. E. (2013). NASA faked the moon landing—Therefore (climate) science is a hoax: An anatomy of the motivated rejection of science. Psychological Science, 24, 622-633. doi:10.1177/0956797612457686

  • Lewandowsky, S., Stritzke, W. G. K., Oberauer, K., & Morales, M. (2005). Memory for fact, fiction, and misinformation: The Iraq War 2003. Psychological Science, 16, 190-195. doi:10.1111/j.0956-7976.2005.00802.x

  • Lobato, E., Mendoza, J., Sims, V., & Chin, M. (2014). Examining the relationship between conspiracy theories, paranormal beliefs, and pseudoscience acceptance among a university population. Applied Cognitive Psychology, 28, 617-625. doi:10.1002/acp.3042

  • Mann, M. E. (2012). The hockey stick and the climate wars: Dispatches from the front lines. New York, NY: Columbia University Press.

  • McCright, A. M., & Dunlap, R. E. (2011a). Cool dudes: The denial of climate change among conservative White males in the United States. Global Environmental Change, 21, 1163-1172. doi:10.1016/j.gloenvcha.2011.06.003

  • McCright, A. M., & Dunlap, R. E. (2011b). The politicization of climate change and polarization in the American public’s views of global warming, 2001–2010. The Sociological Quarterly, 52, 155-194. doi:10.1111/j.1533-8525.2011.01198.x

  • McKee, M., & Diethelm, P. (2010). Christmas 2010: Reading between the lines: How the growth of denialism undermines public health. British Medical Journal, 341, Article c6950. doi:10.1136/bmj.c6950

  • McKewon, E. (2012a). Duelling realities: Conspiracy theories vs climate science in regional newspaper coverage of Ian Plimer’s book, Heaven and Earth. Rural Society, 21, 99-115. doi:10.5172/rsj.2012.21.2.99

  • McKewon, E. (2012b). Talking points ammo: The use of neoliberal think tank fantasy themes to delegitimise scientific knowledge of climate change in Australian newspapers. Journalism Studies, 13, 277-297. doi:10.1080/1461670X.2011.646403

  • McKewon, E. (2014, April). Climate deniers intimidate journal into retracting paper that finds they believe conspiracy theories. Scientific American. Retrieved from http://www.scientificamerican.com/article/climate-deniers-intimidate-journal-into-retracting-paper-that-finds-they-believe-conspiracy-theories

  • Michaels, P. J., & Balling, R. C. (2009). Climate of extremes: Global warming science they don’t want you to know. Washington, DC: Cato Institute.

  • Moghaddam, F. M. (2013). The psychology of dictatorship. Washington, DC: American Psychological Association.

  • Montford, A. W. (2010). The hockey stick illusion: Climategate and the corruption of science. London, United Kingdom: Stacey Publishing.

  • Olivieri, N. F. (2003). Patients’ health or company profits? The commercialisation of academic research. Science and Engineering Ethics, 9, 29-41. doi:10.1007/s11948-003-0017-x

  • Oreskes, N. (2004). The scientific consensus on climate change. Science, 306, 1686. doi:10.1126/science.1103618

  • Oreskes, N., & Conway, E. M. (2010). Merchants of doubt. London, United Kingdom: Bloomsbury Publishing.

  • Painter, J., & Ashe, T. (2012). Cross-national comparison of the presence of climate scepticism in the print media in six countries, 2007-10. Environmental Research Letters, 7, Article 044005. doi:10.1088/1748-9326/7/4/044005

  • Pelli, D. G. (1997). The video toolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437-442. doi:10.1163/156856897X00366

  • Pierson, C. A. (2014). Ethical decision-making in Internet research: Another slant on the “Recursive Fury” debate. Journal of the American Association of Nurse Practitioners, 26, 353-354. doi:10.1002/2327-6924.12143

  • Riessman, C. K. (2008). Narrative methods for the human sciences. Thousand Oaks, CA: Sage.

  • Sleek, S. (2013, November). Inconvenient truth-tellers: What happens when research yields unpopular findings. APS Observer, 26(9), 9. Retrieved from http://www.psychologicalscience.org/index.php/publications/observer/2013/november-2013/inconvenient-truth-tellers.html

  • Smith, N., & Leiserowitz, A. (2012). The rise of global warming skepticism: Exploring affective image associations in the United States over time. Risk Analysis, 32, 1021-1032. doi:10.1111/j.1539-6924.2012.01801.x

  • Solomon, L. (2008). The deniers: The world renowned scientists who stood up against global warming hysteria, political persecution, and fraud and those who are too fearful to do so. Minneapolis, MN: Richard Vigilante Books.

  • Sunstein, C. R., & Vermeule, A. (2009). Conspiracy theories: Causes and cures. Journal of Political Philosophy, 17, 202-227. doi:10.1111/j.1467-9760.2008.00325.x

  • Sussman, B. (2010). Climategate: A veteran meteorologist exposes the global warming scam. Washington, DC: WND Books.

  • Swami, V., Chamorro-Premuzic, T., & Furnham, A. (2010). Unanswered questions: A preliminary investigation of personality and individual difference predictors of 9/11 conspiracist beliefs. Applied Cognitive Psychology, 24, 749-761. doi:10.1002/acp.1583

  • Taylor, J. (2012, June 13). Doctored data, not U.S. temperatures, set a record this year. Forbes. Retrieved from http://www.forbes.com/sites/jamestaylor/2012/06/13/

  • Wagner-Egger, P., Bangerter, A., Gilles, I., Green, E., Rigaud, D., Krings, F., Clémence, A. (2011). Lay perceptions of collectives at the outbreak of the H1N1 epidemic: Heroes, villains and victims. Public Understanding of Science, 20, 461-476. doi:10.1177/0963662510393605

  • Whitson, J. A., & Galinsky, A. D. (2008). Lacking control increases illusory pattern perception. Science, 322, 115-117. doi:10.1126/science.1159845

  • Wood, M. J., Douglas, K. M., & Sutton, R. M. (2012). Dead and alive: Beliefs in contradictory conspiracy theories. Social Psychological and Personality Science, 3, 767-773. doi:10.1177/1948550611434786

ISSN: 2195-3325