
A recent news article on the BBC summarises work by Gerd Gigerenzer arguing (my emphasis):

> ...it's not just that doctors and dentists can't reel off the relevant stats for every treatment option. Even when the information is placed in front of them, Gigerenzer says, *they often can't make sense of it*.

In one example, based on real trials with real doctors using mammography screening statistics, he quotes the results (my emphasis):

> In one session, almost half the group of 160 gynaecologists responded that the woman's chance of having cancer was nine in 10. Only 21% said that the figure was one in 10 - which is the correct answer. *That's a worse result than if the doctors had been answering at random.*

So are medical professionals often incapable of understanding or communicating the implications of important statistics?

matt_black
  • I would tag this as *statistics* if the tag had not been deprecated. This is one question where the tag is clearly deserved on any criterion. – matt_black Jul 17 '14 at 20:55
  • Should change this question to __does anyone apart from professional statisticians understand statistics__. Picking on physicians is part of a general attack on the medical profession which does more harm than good. Or, perhaps next time you need medical advice, seek out a statistician. – HappySpoon Jul 18 '14 at 04:13
  • Yes, this is well known. There have been [multiple studies](https://www.sciencenews.org/blog/context/doctors-flunk-quiz-screening-test-math). A great [TED talk](http://www.ted.com/talks/peter_donnelly_shows_how_stats_fool_juries#t-509539), and a breast cancer screening study (cannot find the link now). You get improved results if you restate the question (i.e. instead of stating how common the disease is as a rate, give that value as a number of people). Of 100,000 women aged 50+, 1,000 will have BC. If all are screened, 90% with BC will test positive, 9% without BC will also test positive... – user1873 Jul 18 '14 at 07:07
  • **What are the odds that a random woman who tested positive actually has breast cancer?** When you state it like that, doctors (and most other people) arrive at the correct answer more often. (But yes, most everyone sucks at statistics.) – user1873 Jul 18 '14 at 07:11
  • Matt, given the source, what are your reasons for doubting the claim? This would also help to pin down what you mean by "are often incapable"... – P_S Jul 18 '14 at 14:12
  • @P_S the reason to doubt the claim is the trust normally placed in medics to provide good advice. If Gigerenzer is right, that very common trust is entirely misplaced. While I'm pretty confident he is right, it is still a big and significant claim that deserves skeptical attention given its major implications. – matt_black Jul 18 '14 at 21:58
  • I have Gigerenzer's book on the subject on my bookshelf somewhere. In it, he cites a number of surveys of doctors where they make similar mistakes. If the current answer is insufficient, I'll go hunt it down. – Oddthinking Jul 19 '14 at 18:15
  • Another interesting issue is Framing Effects: doctors are more likely to recommend an operation which is claimed to have a 90% chance of success than one which is claimed to have a 10% chance of failure. – Jörg W Mittag Jul 26 '14 at 03:59

1 Answer


The short answer is "yes".

The studies

Here is another description of (I think) the same studies from Leonard Mlodinow's "The Drunkard's Walk":

> [...] in studies in Germany and the United States, researchers asked physicians to estimate the probability that an asymptomatic woman between the ages of 40 and 50 who has a positive mammogram actually has breast cancer if 7 percent of mammograms show cancer when there is none. In addition, the doctors were told that the actual incidence was about 0.8 percent and that the false-negative rate about 10 percent. Putting that all together, one can use Bayes’s methods to determine that a positive mammogram is due to cancer in only about 9 percent of the cases. In the German group, however, one-third of the physicians concluded that the probability was about 90 percent, and the median estimate was 70 percent. In the American group, 95 out of 100 physicians estimated the probability to be around 75 percent.
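
To spell out the step the quote glosses over: with the quoted numbers (prevalence 0.8%, a 10% false-negative rate, i.e. 90% sensitivity, and a 7% false-positive rate), Bayes's theorem gives

$$
P(\text{cancer} \mid +)
= \frac{0.9 \times 0.008}{0.9 \times 0.008 + 0.07 \times 0.992}
= \frac{0.0072}{0.07664} \approx 9\%
$$

In the "natural frequencies" framing Gigerenzer advocates: out of 1,000 women, 8 have cancer and about 7 of them test positive, while about 69 of the 992 without cancer also test positive - so only 7 of the 76 positive results (roughly 9%) reflect actual cancer.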

Similar results were reported by (smaller) studies from 1978 to 2014.

The issue

The problem in these cases is conditional probability. In general, a conditional probability is the probability of one event given another event (or condition). For example, the probability of throwing two sixes with two six-sided dice is 1 in 36; however, the conditional probability of throwing two sixes given that the first die already shows a six is 1 in 6.
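
In symbols, using the standard definition, with the dice example plugged in:

$$
P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad
P(\text{two sixes} \mid \text{first die is a six}) = \frac{1/36}{1/6} = \frac{1}{6}
$$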

With any medical test, you face questions of conditional probability. Often, the number reported for a test is its accuracy; for example, an HIV test may produce the correct result 99% of the time. This means that if you have HIV, there is a 99% chance you will be correctly diagnosed as HIV positive; and if you do not, there is a 99% chance you will be correctly diagnosed as HIV negative. The fallacy is to assume this means that if you are diagnosed with HIV, there is a 99% chance you are indeed HIV positive. In reality, the probability is much lower and depends heavily on your risk group; it might well be 10%, or 1%, or even below that (a rough sketch of the calculation follows below).
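
To make that concrete, here is a minimal Python sketch, using the text's reading of "99% accurate" (99% sensitivity and 99% specificity) and some prevalence values assumed purely for illustration:

```python
# Positive predictive value: P(actually positive | diagnosed positive)
# for a test that is "99% accurate", i.e. 99% sensitivity and 99% specificity.
def ppv(prevalence, sensitivity=0.99, specificity=0.99):
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Assumed prevalences for illustration only -- not real epidemiological data.
for prevalence in (0.10, 0.01, 0.001):
    print(f"prevalence {prevalence:6.1%} -> P(positive is real) = {ppv(prevalence):.1%}")
```

With these numbers, a positive result means roughly a 92% chance of infection at 10% prevalence, a coin flip at 1%, and only about 9% at 0.1% - the same base-rate effect as in the mammogram example above.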

The mistake is to assume that the probability of being correctly diagnosed when positive and the probability of being positive when diagnosed are the same. An obvious example makes the difference clear: if you are a professional football player, you are extremely likely to be male. However, if you are male, you are still extremely unlikely to be a professional football player. The reason is that there are a great many men and very few professional football players; the same applies to the HIV example, since very few people are HIV positive in the first place.

Does this happen in practice?

Finally - a chance to use anecdotal evidence with (relative) impunity! Yes, it does happen. Mlodinow, in his aforementioned book, writes about a similar case in which he himself received a false HIV diagnosis. And I myself had a family member diagnosed with a "definitely malignant" tumor. As a side note: although I had read about the issue shortly before, it did not occur to me to doubt the doctor's judgement until after the operation showed that the tumor was benign.

Is this about doctors?

No. Doctors are subject to this fallacy, and they are of course a group we particularly expect not to err. But people in general have poor intuitions about many probabilistic tasks, this one included. Indeed, the common name for this particular fallacy is the [prosecutor's fallacy](https://en.wikipedia.org/wiki/Prosecutor%27s_fallacy), which gives a hint as to who else might be affected. The linked Wikipedia article has explanations and examples of legal cases where it played a crucial (and very destructive) role.

Can't you trust doctors now?

Well, can you trust anyone? I expect most doctors to be competent at their core tasks. However, they are just people, and most of them have not been explicitly trained in probability theory. Therefore, apply healthy skepticism to numbers describing risks and to probability calculations, whether they come from doctors, lawyers, or others - sometimes even mathematicians.

P_S
  • This is a reasonably good answer but omits one important issue. The way you communicate the numbers matters to how well they are understood. With Mlodinow's example you can get the right result with Bayes's formula but, even with the formula in front of them, most people can't apply it. Gigerenzer showed the problem is not innate by using a different way to describe the numbers and showing people then reached the correct conclusions without further calculation. – matt_black Jul 26 '14 at 11:04
  • @matt_black Sure, the way the issue is put before someone matters to how he approaches it. However, I think the question was primarily about how doctors understand and communicate statistical results as they get them... – P_S Jul 26 '14 at 11:20