[Note to readers: This is the first of a few things I’m posting here from stuff I wrote at pieceofmindful and other sites, because, well, why not? This is my blog after all.]
We’ve all heard it: “That can’t possibly be true—too many people would have to be involved. Somebody would have spilled the beans by now.” In fact, that is usually the first reaction I hear from people I’ve tried to enlighten about topics such as 9/11. It’s almost like a knee-jerk reflex, and it’s apparently enough to stop them from even considering any conspiracy theory further.
The “too many people” objection got a major boost in January 2016 with the publication of a paper by physicist David Grimes entitled “On the Viability of Conspiratorial Beliefs.” Massive media coverage followed, touting his magic formula that “proved” once and for all that conspiracies were bound to fail. (To get a sense of this coverage, just type the following search terms into Google: large-scale conspiracies reveal.) “Ah, those conspiracy theorists! Can’t they see they’re hopelessly deluded? This was written by a physicist at Oxford University and published in a peer-reviewed journal. What more proof do you need?”
I’m here to show you that the paper actually proves the exact opposite of what we are told. That’s right, I’m telling you that the paper actually supports the viability of large-scale conspiracies. I also want to offer a few more words about the “too many people” response. But first, a bit about the author of the paper and the journal it was published in.
David Grimes is a post-doctoral researcher at Oxford University. So yes, he is at Oxford, but he does not have a faculty position there. He also writes opinion pieces for The Guardian and The Irish Times pushing mainstream scientific opinion and dismissing skeptics as stupid and/or paranoid. To give you a taste, he has written op-eds derisively dismissing concerns over GMO foods, the HPV vaccine (Gardasil), fluoride in drinking water, and the potential for household radiation (like cell phones, wi-fi routers) to cause any kind of illness. As usual with these guys, he plays the role of open-minded skeptic guided by scientific evidence, while anyone who is skeptical of mainstream dogma is credulous and weak-minded, easily swayed by anecdotal evidence and impervious to reason and logic. If Grimes is not a spook, then he is a paragon of useful idiocy. You know the type.
His article was published in the open-access, on-line journal PLoS ONE, considered to be the largest journal in the world by volume. PLoS stands for the Public Library of Science, which was started with a grant from the Gordon and Betty Moore Foundation (that’s Gordon Moore of Intel and “Moore’s Law” fame – there’s a deep rabbit hole there waiting to be explored).

You might be surprised to learn that this “academic” journal has a business model, whereby it charges authors about $1,500 to have their work published. In contrast, most academic journals will only ever charge a small submission fee to cover operating expenses, if that. And whereas the top journal in any field will generally have a 10% acceptance rate (and often much lower), PLoS ONE has a 70% acceptance rate. This is partly due to the lax peer review policies of PLoS, which only require review by a member of the editorial board, compared to 2-4 external reviewers at a respectable journal. The success of PLoS ONE hinges on the importance of open access in today’s on-line culture: the articles tend to be cited more frequently because they are freely available to anyone with an internet connection. This artificially drives up citation counts and the apparent influence (and hence prestige) of the journal. In 2015, the journal published a whopping 28,107 papers, meaning it generated an income of just over $42 million. Not bad for an academic journal, especially considering that they do not have any printing or distribution costs beyond maintaining the website. No wonder more and more open-access “mega journals” are sprouting up all over the place. Grimes can’t abide conspiracy theories but apparently has no qualms with the pay-to-play future of science, where academic publishing is reduced to a money-making scheme.
But enough background—let’s get to the heart of the matter. Are large-scale conspiracies viable? Before subjecting Grimes’s article to actual peer review (i.e., tearing it to shreds), I want to first offer a few of my thoughts on this issue, taking them point by point.
- The first point people usually make is that somebody would have talked by now. “If you had thousands of people involved in a major conspiracy like 9/11, somebody would have come forward. No way they could keep it secret.” My first response to this is: imagine somebody threatened to kill you and your family if you spoke out and you thought they were serious. What are the chances that you would say something? Zero, right? So why is it inevitable that somebody would speak out?
- “But,” comes the reply, “what about somebody who is near death or doesn’t have any family? They might have the courage to speak out, right?” OK, let’s say for the sake of argument that somebody does speak out. Where would that person turn? The media is controlled; it will never cover any real whistle-blowers. You’ll never hear from them. Even Facebook and Twitter censor objectionable posts and tweets. And anybody who somehow managed to get their message out would be immediately silenced and discredited, or the story would be buried under a flurry of counterclaims and misdirection.
- But the very idea that somebody would ‘speak out’ implies that the people involved in these projects are people of good conscience who are being made to do something they know is wrong, or who later decide that what they did was wrong. Why should we assume that anybody involved would want to speak out in the first place?
- It’s also possible that the majority of people involved in the conspiracy might be people of good conscience who believe they are doing it for a good cause. Soldiers involved in the Gulf of Tonkin hoax, for example, probably believed (and still do) that they were doing it to win the fight against communism. Same with 9/11 and terrorism. Yes, the American people had to be lied to, but it was for their own good. For anybody who believes that, speaking out would be an act of treason. Why would these patriotic Americans willingly commit treason? Especially to reveal a secret about something they think was good?
- Compartmentalization: People “involved” in top-secret programs usually don’t have the full picture. They just do their narrow job without ever seeing the whole operation, so they may not even have the knowledge needed to expose it.
OK, now to the meat of the article:
We are told that the paper appears to “prove” that conspiracies (especially large ones) are doomed to failure. The results of the paper show this is only true under the most absurd and counter-intuitive assumptions. His equation shows that under certain, more realistic assumptions (which he himself posits) it’s actually much more likely that large-scale conspiracies will remain a secret. Let me try to explain:
There are really three key parameters in his equation: the likelihood of ‘defection,’ the number of conspirators involved, and the rate at which the conspirators die off.
Think about it this way: let’s say that 5000 people are in on a conspiracy, and that each person has a 5-in-a-million chance per year of defecting and exposing it. (Of course this assumes that defection will necessarily lead to exposure, which is not the case if you control the media, but that’s another argument entirely.) Note that this 5-in-a-million figure is completely arbitrary, based on absolutely no empirical data. He just pulled it out of his ass. If he had chosen a lower number, the likelihood of exposure would be lower; if the likelihood of defection were 0, the chance of exposure would be 0. But if we accept these assumptions, the question then is: how many years will it take for the conspiracy to be exposed?
The answer to this question depends primarily on the death rate of the people involved in the conspiracy. If you need to bring in new people to maintain the conspiracy as original conspirators die off, then it is likely to be exposed more quickly. But if the conspirators simply die off naturally and are not replaced, then the yearly risk of exposure falls with each death, and the overall likelihood of exposure levels off. And if people die off at an unnatural rate, presumably because they are killed (or what Grimes refers to as “removed extrinsically”), then the likelihood of exposure drops off much faster (how fast depends on the rate of “extrinsic removal”).
Here is a graph from the paper (actually this one is taken from the follow-up ‘correction’ Grimes published about a month later to correct a ridiculously obvious error):
It shows the cumulative likelihood of the conspiracy failing or being exposed (Y-axis) in any given year after the start of the conspiracy (X-axis is time in years).
The graph compares the conspiracy ‘failure rate’ of 3 different scenarios. They all assume that there are 5000 conspirators and that the likelihood of someone defecting from the conspiracy is 5 in a million. The only difference between these three models is their assumptions about the death rate of co-conspirators.
The blue ‘constant conspirators’ line corresponds to the assumption that there are always 5000 co-conspirators, the orange ‘Gompertzian decay’ model assumes a natural death rate, and the yellow ‘Exponential decay’ line assumes a higher-than-normal death rate (the population is cut in half every 10 years). If co-conspirators are dying at this higher rate, then the equation shows that the maximum likelihood of discovery is just under 30%. At a normal death rate, the likelihood reaches 55%. (Note that in the non-corrected version of the paper that got all the publicity, these chances were even lower: 12% and 40%, respectively.) This means that under certain assumptions, according to his own model, conspiracies are more likely to remain a secret than to be exposed. Even under the most generous assumptions (blue line), there is still over a 20% chance that the conspiracy goes unexposed over the entire time span plotted. And if either the number of conspirators or their likelihood of defection is lower than assumed, the chances of exposure go down further.
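These curves are easy to sanity-check. Here is a minimal sketch (my own reconstruction, not Grimes’s actual code) that treats leaks as a Poisson process with rate p·N(t). Using the numbers above – 5000 conspirators and an arbitrary 5-in-a-million annual defection chance – the constant-population scenario climbs toward certainty, while a population that halves every 10 years caps the exposure probability at roughly 30%, matching the yellow curve:

```python
import math

P_DEFECT = 5e-6    # per-person, per-year defection chance (the arbitrary 5-in-a-million figure)
N0 = 5000          # number of conspirators at the start
HALF_LIFE = 10.0   # years for the population to halve in the 'exponential decay' scenario

def exposure_prob(n_of_t, years, p=P_DEFECT, dt=0.01):
    """Cumulative probability of at least one leak by `years`,
    treating leaks as a Poisson process with rate p * N(t)."""
    integral = 0.0
    steps = round(years / dt)
    for i in range(steps):
        integral += p * n_of_t(i * dt) * dt
    return 1.0 - math.exp(-integral)

def constant(t):       # blue line: conspirators are replaced as they die
    return N0

def exp_decay(t):      # yellow line: conspirators halve every 10 years
    return N0 * 2 ** (-t / HALF_LIFE)

print(exposure_prob(constant, 100))   # climbs toward certainty (~0.92 by year 100)
print(exposure_prob(exp_decay, 100))  # plateaus around 0.30 -- the ~30% ceiling
```

The point of the sketch is simply that the ceiling comes from the die-off assumption: once the conspirators are gone, the leak rate goes to zero, so the cumulative probability stops growing.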
But wait, there’s more!
To make matters worse, he comes back to this model towards the end and basically says: well, if people are being killed (“extrinsically removed”) to keep the conspiracy quiet, this would make surviving co-conspirators more likely to defect – not less! – because it would “create panic and disunity.” (I’m not making this up! That’s what he says. It’s right there in the paper on page 12.) So then he runs the numbers assuming that killing co-conspirators increases the likelihood that survivors spill the beans, and lo and behold, this means the conspiracy has a 70% chance of being exposed (see red line on graph below) rather than a 17% chance (see dotted blue line). (In the original version of the paper, this latter number was 6.5%.)
If, instead, he had made the more realistic assumption that killing co-conspirators makes the survivors less likely to defect, then the 17% number would have been much, much lower. (BTW, the reason this 17% number doesn’t match the ‘just under 30%’ number from the ‘extrinsic removal’ line in the previous graph is that in this graph he assumes half the people die off every 5 years instead of every 10. I believe he chose this quicker die-off rate here because it increases the chances of exposure under his assumption that a fast die-off raises the probability of defection.)
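That 17% baseline is easy to verify. Under a simple Poisson-style leak model (my own back-of-the-envelope reconstruction, not Grimes’s code), a conspirator population that halves every H years with a constant per-person defection chance p contributes a finite total of p·N₀·H/ln 2 expected leaks, which caps the exposure probability:

```python
import math

P_DEFECT = 5e-6  # the same arbitrary 5-in-a-million annual defection chance
N0 = 5000        # initial conspirators

def exposure_ceiling(half_life_years, p=P_DEFECT, n0=N0):
    """Limiting exposure probability when the conspirator population halves
    every `half_life_years` years and p stays constant. The leak rate
    p * N(t) integrates (over all time) to p * n0 * half_life / ln(2)."""
    total_person_years = n0 * half_life_years / math.log(2)
    return 1.0 - math.exp(-p * total_person_years)

print(exposure_ceiling(10))  # ~0.30: the 'extrinsic removal' ceiling in the earlier graph
print(exposure_ceiling(5))   # ~0.165: the dotted-blue-line 17% figure discussed here
```

Notice that the faster die-off alone cuts the ceiling roughly in half; the jump back up to 70% comes entirely from his “panic and disunity” assumption that killings make the survivors more talkative.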
He then uses some real-world examples to fill in some of his assumptions and tries to estimate the ‘actual’ likelihood of defection, based on 3 conspiracies that have been exposed (though he arguably makes some faulty assumptions even about these real world conspiracies). From this he concludes that conspiracies are bound to fail. This method suffers from a major problem of selection bias: we cannot draw firm conclusions about all conspiracies based on what we know about the exposed ones, for the simple reason that we don’t know anything about the conspiracies that haven’t been exposed. But Grimes has no problem with this, since he starts out assuming that all large conspiracies will be exposed, which means he doesn’t need to worry about any of them remaining hidden. In logic, they call that begging the question.
Beyond that, he cherry picks examples that tend to confirm his conclusions. Here is an example he could have chosen that would have shown how faulty his methodology and assumptions are:
The Gulf of Tonkin – Here all you have to do is read the first couple of paragraphs of any account to realize how easily they were able to withhold truthful information. The incident happened in August 1964, and for the next 30 years there were many inquiries into its legitimacy. At one point in 1995, former U.S. Secretary of Defense Robert McNamara met with former Vietnam People’s Army General Võ Nguyên Giáp and asked him what happened on that day in 1964. He responded, “Absolutely nothing.” Yet it wasn’t until 2005, over 40 years after the incident, that the files were declassified. They showed that on August 4th, 1964, there weren’t even any North Vietnamese ships in that area, let alone ships firing on American ones. They also showed that on August 2nd, two days earlier, an American ship fired a couple of shots at a Vietnamese ship over 10,000 yards away, on a direct order from Captain Herrick. Yet this initial action was never reported by the Johnson administration, which insisted that the Vietnamese boats fired first.
In short, Grimes makes many questionable (arguably dishonest) assumptions with his model and his examples in order to reach the conclusion that widespread conspiracies are bound to fail. He also made some major errors that later had to be corrected. (For more on that and other problems with the paper, see this wikispooks entry.) But hey, if you’re willing to fork over $1,500, you, too, can have your rubbish published and stamped with the seal of “peer review” approval.