A recent paper making the rounds on social media estimates that radioactive fallout from American nuclear weapons testing caused between 340,000 and 690,000 American deaths between 1951 and 1973.

Here is one such article in Quartz: US nuclear tests killed far more civilians than we knew

All of the articles I have found link to this PDF copy of the study by Keith Meyers at the University of Arizona. I have no idea if it has been peer reviewed. The news articles and the abstract of the paper refer to a "novel method" to find these new numbers, which - combined with a WordPress-hosted PDF rather than a listing in a journal - is raising little red flags.

Has the fallout from American nuclear tests killed hundreds of thousands of civilians as the paper suggests?

RToyo
  • The important distinction is the word "immediate". – Daniel R Hicks Dec 27 '17 at 21:39
  • @DanielRHicks Good point, I don't know how I was able to type that without even catching on to that. I've reworded the title to better reflect the claim in the articles. – RToyo Dec 27 '17 at 21:46
  • Rewording the title isn't the point. One number is the deaths "immediately following" the Japan bombings -- presumably within months or a few years. The other is the deaths resulting over lifetimes. While the numbers are significant in and of themselves they are not comparable, and hence this is not a "notable claim". – Daniel R Hicks Dec 27 '17 at 21:56
  • The [Castle Bravo H-bomb test](https://en.wikipedia.org/wiki/Castle_Bravo) in 1954 was 1000 times more powerful than the A-bombs dropped on Japan (and 2.5 times more powerful than researchers expected). While the bomb was detonated in a relatively isolated place (alas, not isolated enough), significant fallout spread round the world. It was this and similar tests by the Soviets which led to the "Test Ban Treaty", a significant step forward in nuclear weapon control. – Daniel R Hicks Dec 27 '17 at 22:04
  • Edited the title to better fit the real claim rather than the one that is debunked by its source. – Brythan Dec 27 '17 at 22:58
  • I think you should remove the comparison with Japanese immediate deaths altogether. And those numbers are not uncontroversial either, according to a 1995 WaPo article titled ["A FALLOUT OVER NUMBERS"](https://www.washingtonpost.com/archive/politics/1995/08/05/a-fallout-over-numbers/d9c5fb21-880b-4c6c-85f1-b80e16a0a0ee/). – Fizz Dec 28 '17 at 08:40
  • @DanielRHicks More powerful nuclear weapons do not mean more radioactive fallout; the opposite is usually true. – daniel Dec 28 '17 at 12:27
  • @daniel - A thousand times more powerful pretty much guarantees more fallout. And it is inserted much higher into the atmosphere, where it can drift great distances. – Daniel R Hicks Dec 28 '17 at 13:23
  • @Fizz - There is a significant disconnect between comparing "immediate" numbers and numbers that accrued over 50 years. They're simply not comparable. – Daniel R Hicks Dec 28 '17 at 13:25
  • @DanielRHicks: That's what I'm saying too. The comparison is distracting and unnecessary for the question he's asking. On the other hand, the comparison is made in the abstract of the pdf/paper cited... which probably says a lot about its objectivity. – Fizz Dec 28 '17 at 13:28
  • @DanielRHicks Neither of us is bringing anything to back up our claims, but plain fission bombs, and reactor meltdowns, create way more fallout than fusion bombs. My B-grade high school explained it by saying the fusion reaction triggered by the fission stage consumed a lot of the potential fallout. – daniel Dec 28 '17 at 13:29
  • RToyoda, it would be better IMHO if you just quoted the paper's abstract instead of engaging in your own comparison of figures. The paper is definitely notable; [Mother Jones also covered it](http://www.motherjones.com/kevin-drum/2017/12/atomic-tests-during-the-1950s-probably-killed-half-a-million-americans/). We'll see if there are scientific reactions to it. So far (probably due to the holiday season) the news media reporting has included little else but the paper's data/conclusions. – Fizz Dec 28 '17 at 13:37
  • It's actually not at all a new area of research. Since the claim is mainly about Iodine-131 contamination in the US, that has been studied at least 20 years ago: https://www.ncbi.nlm.nih.gov/pubmed/9742895/ Previous research (as recent as 2010) was inconclusive though: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3865880/ – Fizz Dec 28 '17 at 13:48
  • Maps like Meyers' are [not totally new either](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4165831/figure/Fig7/). The problem is that a causal link seems hard to establish. The paper in which this figure appears says (last 3 paras before conclusion) that a dramatic increase in thyroid cancers has also been observed in the US in recent times, despite no obvious irradiation source, except possibly Chernobyl. I'm curious how Meyers overcame this difficulty. – Fizz Dec 28 '17 at 14:19
  • At the suggestion of the comments by @Fizz and Daniel R Hicks, I've completely removed the comparison to the bombings in Japan. Shares and articles emphasized the comparison to the bombings in Japan, and that made its way into the original form of the question. – RToyo Dec 28 '17 at 15:30
  • @Fizz Did you happen to see any more technical descriptions of the alleged phenomena? The paper linked in the question is really badly formatted; it gives their **"_full model specification_"** as _Equation (1)_ on printed-page-10, but they don't really explain a lot of it, e.g. I don't see what β_i is defined as. – Nat Dec 29 '17 at 04:43
  • ...actually, I just did a `Ctrl`+`F` to search for **"_β_"**, and the PDF viewer seems to say that **"_β_"** only occurs once in the entire document, in their **"_full model specification_"** equation. Did they seriously not even define their variables? – Nat Dec 29 '17 at 04:48
  • @Nat Maybe it’s a “common knowledge” variable to the respective field? – jjack Dec 29 '17 at 10:31
  • @jjack I would guess so, the same for the similarly-placed θ in equation (2). – DevSolar Jan 02 '18 at 10:35
  • @Nat Looks like it's just a slightly rearranged panel regression analysis. That's used a lot in econometrics, so the convention is to not explain the coefficients. – rjzii Jan 05 '18 at 15:01
  • @rjzii Awesome, thanks for pointing that out! Do you know what modeling assumptions go into it? The tutorials that I'm finding online make it look like an obfuscated linear regression. – Nat Jan 05 '18 at 15:14
  • @Nat Try searching for multidimensional analysis (MDA) for more information, but if I recall correctly the technique is based upon a linear regression, but with more dimensions involved in the analysis. – rjzii Jan 05 '18 at 16:04
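To illustrate the technique rjzii names above: the sketch below runs a two-way fixed-effects panel regression on synthetic county-year data. The variable names (`fallout`, `mortality`) and the data are invented for illustration; this is a minimal example of the general method, not Meyers' actual specification.

```python
# Minimal sketch of a two-way fixed-effects panel regression,
# the general technique discussed in the comments above.
# Synthetic data; NOT Meyers' actual model or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
counties, years = 50, 20
df = pd.DataFrame(
    [(c, y) for c in range(counties) for y in range(years)],
    columns=["county", "year"],
)
# Hypothetical exposure measure and outcome, names invented here.
df["fallout"] = rng.gamma(2.0, 1.0, size=len(df))
df["mortality"] = (
    0.5 * df["fallout"]               # "true" effect the regression recovers
    + rng.normal(0, 1, size=len(df))  # noise
)

# C(county) and C(year) add county and year fixed effects -- the dummy
# coefficients conventionally left unreported, as rjzii notes.
fit = smf.ols("mortality ~ fallout + C(county) + C(year)", data=df).fit()
print(fit.params["fallout"])  # estimated effect of exposure on mortality
```

The county and year dummies absorb anything constant within a county or within a year, which is why their coefficients are usually not explained: only the exposure coefficient is of interest.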

1 Answer

Summary: This paper is emerging science, and it is too soon to know one way or the other.


"Has the fallout from American nuclear tests killed hundreds of thousands of civilians as the paper suggests?" This is emerging science and I am unqualified to critique the methods of the paper. When I am faced with a question about emerging science, I go through a checklist.

  1. Are the authors real scientists? Is the paper a real scientific paper? Is the research peer reviewed?
  2. Does the author review other scientific literature about the same question in his paper? How do the results of those papers compare?
  3. Is the research cited by other papers? Are those citations favorable?

Are the authors real scientists? Is the paper a real scientific paper? Is the research peer reviewed?

Keith Meyers is a Ph.D. candidate at the University of Arizona in the department of economics. His research focuses on "Investigating the Economic Consequences of Atmospheric Nuclear Testing." He has apparently finished 3 chapters of his dissertation, which means that substantial effort has been put into this work. According to his CV, the paper linked in the claim is currently under peer review.

None of this speaks to the accuracy of the paper, but it does mean that this guy is not some crank. He is in the process of going through the usual scientific channels.


Does the author review other scientific literature about the same question in his paper? How do the results of those papers compare?

When looking at emerging science, my first stop is to check the existing science. If the estimates of emerging science are in line with the existing, I am much less skeptical. From the literature review in Meyers' paper,

Simon and Bouville (2015) of the National Cancer Institute (NCI) note that there is great uncertainly (sic) underlying these estimates. They estimate that fallout from domestic nuclear testing caused 49,000 thyroid cancer deaths. The 95 percent confidence interval for this estimate is 11,300 and 220,000 deaths. Simon and Bouville (2015) suggest testing contributed up to 11,1000 (sic) additional of other cancer deaths. Without nuclear testing they estimated that 400,000 cases of thyroid cancer would arise naturally in the same population.

Meyers then goes on to discuss the limitations of Simon and Bouville's work: they made assumptions that limit the scope of the deaths they can attribute to fallout. Meyers also lists a number of other references about fallout from other sources in other parts of the world. His methodology differs from that of the current literature, and his estimates are higher.

Oddly, the largest mortality effects occurred in the Great Plains and Central Northwest U.S., far outside of areas studied by the current literature. Back-of-the-envelope estimates suggest that fallout from nuclear testing contributed between 340,000 to 460,000 excess deaths from 1951 to 1973.

After having read his literature review and conclusions, I am skeptical of both his work and the current literature. The existing estimate by Simon and Bouville is an order of magnitude lower than Meyers' estimate. Meyers questions their assumptions and methodology, but I imagine that his own assumptions and methodology are open to questions. I am not qualified to decide who is correct.
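For concreteness, here is the comparison in plain arithmetic, using the figures from the passages quoted above (bearing in mind the quantities are not identical: Simon and Bouville count thyroid and some other cancer deaths, while Meyers counts all excess deaths):

```python
# Figures taken from the passages quoted above.
simon_bouville = 49_000        # S&B point estimate, thyroid cancer deaths
sb_ci_upper = 220_000          # top of S&B's 95% confidence interval
meyers_low, meyers_high = 340_000, 460_000  # Meyers' back-of-envelope range

print(meyers_low / simon_bouville)  # ~6.9: roughly an order of magnitude
print(meyers_low > sb_ci_upper)     # True: Meyers' lower bound exceeds even
                                    # the upper end of S&B's interval
```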


Is the research cited by other papers? Are those citations favorable?

This research is too new to be cited by other scientists. If you are reading this in the future, this Google Scholar link can be used to find all of the papers that cite this one; just click on the button that says "cited by X." Citations in other scientific papers are the real peer review. Other qualified authors may cite this paper positively, argue with its results, or simply repeat them uncritically. If other papers on very similar topics cite this paper positively, I will trust it a lot more.
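If you would rather script that check than click through, something like the following should work, assuming the third-party `scholarly` package (an unofficial Google Scholar scraper whose API may change; the search string here is a stand-in, not the paper's exact title):

```python
# Sketch: look up a paper's "cited by" count via the unofficial
# `scholarly` package (pip install scholarly). Google Scholar has no
# official API, so this scraper may break or be rate-limited.
from scholarly import scholarly

query = "nuclear testing fallout mortality Meyers"  # stand-in search string
pub = next(scholarly.search_pubs(query))            # first search result
print(pub["num_citations"])                         # the "cited by X" count
```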

BobTheAverage
  • I think we’re currently lacking a feel for any of these numbers. – jjack Dec 29 '17 at 10:33
  • @jjack I made a small edit. I am reluctant to focus on giving people a feel for numbers that I don't trust. – BobTheAverage Dec 29 '17 at 17:12
  • +1 for the answer. Maybe you don't trust some numbers because you have no experience with what produced them? I feel like that whenever something's new to me. – jjack Dec 29 '17 at 17:16
  • @jjack There are two conflicting sets of numbers from two sets of people who know way more about the topic than I do. I expect that there are a couple dozen people in the world who are familiar enough with the topic to really know which set of numbers (if either) to trust. Until those couple dozen people do their peer review, I will go on trusting neither. – BobTheAverage Dec 29 '17 at 18:04
  • The question itself seems to be within the field of epidemiology (https://en.wikipedia.org/wiki/Epidemiology). The way Bayesian reasoning (Bayes' theorem) would treat those numbers is to somehow "average" over them. – jjack Dec 29 '17 at 18:50
  • Your quote contains the number `11,1000` (sic). Should that be `11,100` or `111,000`? – gerrit Jan 02 '18 at 15:19
  • @gerrit I don't know. The paper contains that same mistake. I will edit in the sic. – BobTheAverage Jan 02 '18 at 15:52
  • Saying that citations in other papers are the real peer review is somewhat misleading, since a citation can be good, bad, or indifferent. – rjzii Jan 05 '18 at 14:54
  • @rjzii Good point. Edits made. – BobTheAverage Jan 05 '18 at 19:37
  • @BobTheAverage What do you mean this is "an emerging science"? This is epidemiology, and that is not an "emerging" science. Neither is the understanding of what ionizing radiation does to the human body. If you by "emerging science" intend to say "someone made new claims about the health effects of the tests, claims that have not been made before", then I for one would challenge that too, because the question of what the atmospheric tests did to the "downwinders" is not exactly new; that has been a hot topic for decades. So: what makes you label this as an "emerging science"? – Michael Karnerfors Jan 09 '18 at 09:33
  • @michaelKarnerfors When I say "This is emerging science," "this" refers to this particular paper, not epidemiology as a whole. Do the methods and assumptions in this paper represent good scientific practice? I don't know. In time other scientists will examine his methods and hopefully their peer review will tell us. – BobTheAverage Jan 09 '18 at 17:16