
From the 1992 paper that introduced the term evidence-based medicine:

Evidence-based medicine de-emphasizes intuition, unsystematic clinical experience, and pathophysiologic rationale as sufficient grounds for clinical decision making and stresses the examination of evidence from clinical research.

Should physicians disregard their intuition, their unsystematic clinical experience and pathophysiological rationales? Is there evidence that physicians who follow this approach closely are more effective at treating their patients than doctors who don't?

This is not about whether individual treatments are supported by studies. Of course they are. It is about whether physicians who adhere to the standards of evidence-based medicine actually achieve on average better health outcomes for their patients.

Christian
  • Read your question again: `Is there EVIDENCE to back up the practice of EVIDENCE-based medicine?` I would think the answer would be obvious. :) – JasonR May 12 '11 at 11:57
  • The question is truly puzzling. –  May 12 '11 at 12:14
  • Drugs are accepted by the FDA for a purpose, and once accepted they don't need to be re-certified. Off-label prescription means using drugs for purposes not initially foreseen, not necessarily for unproven purposes. – David Thornley May 12 '11 at 12:18
  • Is this question not tautological, and thereby, vacuous? I imagine the doctors and researchers who coined the term "evidence-based medicine" would define its scope as including any medical practice that has scientifically valid evidence to support its use. I mean, isn't it _by definition_ that there is evidence to support the practice of evidence-based medicine?? – Uticensis May 12 '11 at 13:03
  • The question could be reworded, but the basic idea is, does using 'evidence based medicine' actually improve patient care? As in, are doctors who follow studies meticulously and keep up with current literature better at their jobs than those who do not? And if they are, by what metrics, and are they significantly better? Or do doctors use the term 'evidence based medicine' to justify treatment decisions without properly understanding the evidence or how the studies were performed? – mmr May 12 '11 at 13:29
  • The question isn't tautological. Chess players improve their performance by learning from experience and observing the effects of their actions. The Chess player doesn't only learn through reading literature about Chess. Reading about controlled studies of chess isn't the way you usually improve your chess skills. An orthodox practitioner of EBM won't change his actions based on his own experience. If medicine is a skill like Chess, an approach of learning from experience might outperform an approach of learning exclusively from clinical trials. – Christian May 12 '11 at 21:16
  • Is this a recursive question? – Kyralessa May 13 '11 at 05:22
  • this is nonsense – NimChimpsky May 13 '11 at 09:01
  • Thanks for adding that quote. It should help clarify things quite a bit. –  May 13 '11 at 09:31
  • Doctors don't prescribe based on clinical research. Most of them still use intuition for the most common problems. Intuition is also there prior to doing research: random tests don't occur (or occur only rarely); intuition is used to determine what is researched in the first place. – johanvdw Sep 30 '11 at 10:46
  • I suggest that to better comprehend this question, it helps to treat "Evidence Based Medicine" as a proper noun, meaning evidence in the medical field that is based upon randomized controlled trials. Perhaps then the question could be reworded: Do randomized controlled trials outperform other measures of the effectiveness of medical treatments? – Brian M. Hunt Sep 30 '11 at 13:07

2 Answers


When reliable evidence about long established treatments is gathered it often overturns the conventional wisdom; this is clear evidence that evidence based medicine is more effective than ignorance based medicine.

The trouble with the question is that it is, to some extent, self-referential. The point of evidence based medicine is to challenge what standard of evidence we should accept. The question asks us to apply the same standard to the whole approach. But the very act of comparing not using evidence to using evidence seems to accept the premises of the evidence based approach.

But there is one way we can get something like a satisfactory answer: we can look at areas where a common medical practice has been accepted for a long time but where a properly controlled study has eventually been conducted. The book Testing Treatments is a valuable source of material on the subject and much of the material below is derived from it (the full text is online). But first, a quick review of what counts as reliable evidence.

What does reliable evidence look like?

The gold standard for evidence about treatments is the double-blind randomised trial. This means that a treatment or a comparison (a placebo or an alternative treatment) is given to two groups of patients, selected randomly, without either the patients or the trial managers knowing who gets which. The blinding is necessary because of the placebo effect and the potential for investigators to be biased in how they assess the benefits if they know which group is being treated (even unconscious bias can be very significant). Randomisation is necessary to ensure that any observed differences between the groups are due to the differences in treatment, not some other characteristic of the patients.

For more detail see this section in Testing Treatments.
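As a purely illustrative aside (not from Testing Treatments), the mechanics of randomisation and blinding can be sketched in a few lines of Python: chance alone decides the allocation, and the people assessing outcomes only ever see opaque codes. All names and numbers below are made up for the sketch.

```python
import random

# Minimal sketch, illustrative only: random allocation and blinded assessment.
patients = [f"patient_{i:02d}" for i in range(20)]   # hypothetical patient identifiers
random.shuffle(patients)                             # randomisation: chance, not patient characteristics, decides the groups

treatment_group = set(patients[:10])
control_group = set(patients[10:])                   # receives the placebo or the comparison treatment

# Blinding: outcome assessors work only with opaque codes; the key linking each
# code to a group is held separately and opened only after outcomes are recorded.
codes = {p: f"subject_{i:02d}" for i, p in enumerate(patients)}
assignment_key = {codes[p]: ("treatment" if p in treatment_group else "control") for p in patients}
```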

Why does anyone resist the application of EBM?

Many people would look at the basic idea of EBM and wonder why it isn't just obvious. But many have opposed it and it is worth asking why.

One explanation which applies particularly to medicine (though it is also often seen in politicians) is what has been called The God Complex. Tim Harford described it in his TED talk (it can also be found in his book Adapt) when telling the story of Archie Cochrane (the first quote is from a transcript of the talk, the second from the book):

Archie Cochrane all his life fought against a terrible affliction, and he realized it was debilitating to individuals and it was corrosive to societies, and he had a name for it: he called it the God Complex. Now I can describe the symptoms of the God Complex very easily. The symptoms of the God Complex are: no matter how complicated the problem, you have an absolutely overwhelming belief that you are infallibly right in your solution. Now Archie was a doctor, so he hung around the doctors a lot, and the doctors suffered from the God Complex a lot...

Cochrane complained about the 'God Complex' of doctors who didn't need to carry out trials because they knew the correct course of treatment - even when other doctors were issuing contradictory advice with equal confidence.

You have to be humble to recognise the need for reliable evidence and doctors are not trained to be humble.

So what sort of practices have been overturned by evidence?

Babies should sleep on their front

From the 1950s to the 1970s the top authority in childcare, Dr Benjamin Spock, made the following recommendation about how babies should sleep:

There are two disadvantages to a baby’s sleeping on his back. If he vomits he’s more likely to choke on the vomitus. Also he tends to keep his head turned towards the same side . . . this may flatten the side of the head . . . I think it is preferable to accustom a baby to sleeping on his stomach from the start.

The argument has both authority and logic in its favour. But it is wrong.

When proper trials were done and the evidence reviewed, the following conclusion emerged:

Advice to put infants to sleep on the front for nearly a half century was contrary to evidence available from 1970 that this was likely to be harmful. Systematic review of preventable risk factors for SIDS from 1970 would have led to earlier recognition of the risks of sleeping on the front and might have prevented over 10 000 infant deaths in the UK and at least 50 000 in Europe, the USA, and Australasia.

The earlier you detect something the better the outcome

It seems intuitively obvious that detecting cancers early should improve the outcomes. The disease should be easier to treat in healthier patients in whom it has not progressed so far, thereby improving survival. While the issue applies to many cancers and progressive diseases, a solid case study is provided by our experience with neuroblastoma.

As Testing Treatments summarises:

Neuroblastoma was a tempting target for screening for four reasons: (1) children who are diagnosed before the age of one year are known to have a better outlook than those who are diagnosed later; (2) children with advanced disease fare much worse than those with early disease; (3) there is a simple and cheap screening test that can be carried out by blotting wet nappies and measuring substances in the urine; and (4) the test detects nine out of ten children with neuroblastoma.

The logic of early screening seemed so good that Japan introduced screening for 6-month-old children across the country in 1985. But:

But 20 years later there was no evidence that neuroblastoma screening had reduced the number of children dying from this cancer. How could that be?

There were several problems. One was that the trial outcome was judged by counting survival from time of diagnosis. But this is a biased metric: earlier diagnosis makes it look better even if the treatment does nothing to change the date of death or the progression of the disease (survival should have been measured from birth, which is not biased). There are also several types of neuroblastoma: some regress naturally, others progress rapidly. Screening can miss the fast-developing type but will often spot the slow and regressing type even though the disease may have disappeared naturally without intervention. And those patients who might never have suffered from the disease at all still suffer the costs of intervention, which are not negligible.
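To make the bias concrete, here is a minimal sketch with made-up numbers (these are not data from the Japanese programme or any trial): screening moves the date of diagnosis earlier but leaves the date of death untouched, yet "survival from diagnosis" appears to improve dramatically.

```python
# Minimal sketch of lead-time bias, using made-up numbers (not trial data).
# Both hypothetical children develop the same tumour and die at the same age;
# screening only changes when the diagnosis is made.
patients = [
    {"label": "diagnosed when symptoms appear", "age_at_diagnosis": 4.0, "age_at_death": 6.0},
    {"label": "diagnosed early by screening", "age_at_diagnosis": 0.5, "age_at_death": 6.0},
]

for p in patients:
    from_diagnosis = p["age_at_death"] - p["age_at_diagnosis"]  # the biased metric
    from_birth = p["age_at_death"]                              # the unbiased metric
    print(f'{p["label"]}: survival from diagnosis = {from_diagnosis} years, from birth = {from_birth} years')

# Prints 2.0 vs 5.5 years for "survival from diagnosis" but 6.0 vs 6.0 years for
# "survival from birth": measured from diagnosis, screening looks like a big win;
# measured from birth, nothing has changed.
```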

When reliable evidence was collected, the following results emerged:

By contrast, when unbiased evidence was obtained from clinical trials done in Canada and Germany, involving about three million children in all, researchers were unable to detect any benefit from screening, but there were obvious harms. These included unjustified surgery and chemotherapy, both of which can have serious unwanted effects. In the light of this evidence, infant screening for neuroblastoma in Japan was stopped in 2004.

The intuition that early detection must be good is very deep-seated and hard to overturn. The controversy affects other areas of screening such as screening for prostate cancer (see Does screening for prostate cancer save lives?) and breast cancer screening (see the sometimes heated arguments made here: Is routine screening for breast cancer for asymptomatic women worthwhile?).

Steroids will reduce brain swelling in traumatic head injury

Until about ten years ago, if you turned up in a hospital emergency department with a traumatic head injury you would probably have been given steroids. The logic is clear: steroids are great drugs for reducing swelling in most parts of the body, and brain swelling is serious; surely anything likely to help must be good for the patient?

Sadly, reality again trumps logic. The CRASH trial came to a different conclusion (this summary is from Margaret McCartney's book The Patient Paradox):

When the results came in, it was found that steroids were neither effective nor neutral. They were actively harmful to the extent that, if you were given them, you were more likely to die. It was calculated that around 10,000 patients would not have died had this research been done sooner and the use of steroids stopped in these circumstances.


One of the criticisms made in the comments to the original answer was that the examples given above involve studies that would take a long time to produce answers. Part of the answer is that, when there is no reliable evidence, we should start systematically collecting information on outcomes as early as possible, so that the results emerge as quickly as possible. We don't stop treating in the meantime, but we know that the information that will help us improve will eventually emerge. Shockingly, this isn't standard medical practice and the idea of collecting evidence in a systematic way is often opposed by the medical community.

But the benefits can be enormous. Atul Gawande tells a story about military medics in chapter 3 of his magisterial book on medicine, Better: a surgeon's notes on performance. He noted that America's military medics had an unusual habit of diligence about recording what they did and what subsequently happened with military casualties. Unusual, because record keeping is not the first thing you think of in a high-pressure, under-resourced field hospital when injured soldiers arrive. But systematic recording of injury, treatment and outcomes enabled them to halve the rate of death in less than a decade with no new technology for treatment. The simple process of observing which treatments were used and analysing their success rates led to a major improvement in outcomes. Gawande remarks towards the end of the chapter:

We do little tracking like this here at home. Ask a typical American hospital what its death and complication rates for surgery were during the last six months and it cannot tell you.

The problem isn't how much effort it takes; the problem is that some people just don't want to do it. Those who want to collect reliable evidence often face deep-seated opposition. One of Archie Cochrane's early triumphs (see Tim Harford's TED talk or book) was based on the question of whether patients recovering from a heart attack would survive better if kept in a hospital bed or if they were sent home quickly. Conventional medical opinion was so strongly in favour of keeping them in convalescent beds that its proponents opposed the idea of doing a trial as unethical and looked likely to challenge any statistics that might emerge if a trial were ever conducted. Somehow Cochrane managed to start a trial and get some early results. He then presented the results to the established doctors, showing that survival was clearly better in the group staying in bed. The established experts were so convinced by the statistics that they argued the trial should be stopped on ethical grounds. Only then did Cochrane reveal that he had swapped the results in his presentation and it was actually the patients going home who had better outcomes. The God Complex isn't open to evidence and isn't good for patients.

Moreover, evidence can sometimes be collected quickly. And sometimes even simple ideas can yield big improvements.

Checklists for the insertion of central lines in ICUs

Very ill patients are often cared for in Intensive Care Units (ICUs). They don't usually stay there for long periods of time, but the care they receive is intense. Many need central venous catheters inserted, and these carry some risk of harm. As Gigerenzer and Gray describe in Better Doctors, Better Patients, Better Decisions:

...each year, central venous catheters cause an estimated 80,000 bloodstream infections and, as a result, up to 28,000 deaths in intensive care units (ICU) in U.S. hospitals. Total cost of these infections are estimated at US$2.3 billion annually. To save lives Peter Pronovost developed a simple checklist of five steps (including hand washing and cleaning the skin with chlorhexidine) for ICU doctors to follow before inserting an IV line to prevent the introduction of bacteria. The checklist reduced the infection rate to almost zero at some one hundred ICUs in hospitals in Michigan....Yet most ICU physicians do not use them.

The original Pronovost study is reported in the NEJM here. Gawande has a popular account in the New Yorker.

The key lessons are that even simple changes in practice (in this case using a simple checklist to ensure higher compliance with known good practice) can yield big improvements and do so quickly. We don't need to wait for large randomised trials. In this trial the benefits were obvious within weeks. But this is still not standard practice in most hospitals.

Checklists to avoid complications in surgery

Surgeons are particularly prone to the God Complex. So they tend to resist interventions that imply they are not omnicompetent, such as the systematic use of checklists in operating theatres before surgery. The story of checklists is told by Gawande in his book The Checklist Manifesto.

Again, a simple intervention was tested, resulted in a large and significant improvement in outcomes for patients, and did so quickly. The clinical report of the original work is in the NEJM here. The essential idea is a 19-point checklist designed to ensure key elements of good practice are not missed by the team in the operating theatre (such as confirmation of the actual procedure, or ensuring adequate supplies of the right type of blood are on hand in case of emergencies).

As the original paper reports:

The rate of death was 1.5% before the checklist was introduced and declined to 0.8% afterward (P=0.003). Inpatient complications occurred in 11.0% of patients at baseline and in 7.0% after introduction of the checklist (P<0.001)

Those are significant improvements and didn't take that long to see. So again, EBM doesn't have to wait forever to get results, nor are the achievable improvements small.
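As a rough sanity check on what a drop from 1.5% to 0.8% means statistically, here is a minimal two-proportion z-test sketch. The group size used below is an assumption made up for illustration (the real group sizes are in the NEJM paper), so the p-value is only indicative.

```python
# Minimal sketch: two-proportion z-test for a fall in death rate from 1.5% to 0.8%.
# The group size n is a hypothetical assumption for illustration only.
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Return the z statistic and two-sided p-value for a difference in proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled proportion under the null hypothesis
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error of the difference
    z = (p1 - p2) / se
    normal_cdf = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
    return z, 2 * (1 - normal_cdf(abs(z)))

# Hypothetical: 3,500 operations before and 3,500 after the checklist was introduced.
z, p = two_proportion_z(0.015, 3500, 0.008, 3500)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")  # with groups this large, a drop of this size is unlikely to be chance
```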

Summary and conclusions

There are plenty more examples similar to the above. All demonstrate that reliable evidence often overturns accepted medical practice and persuasive logic. That is the point of evidence based medicine: we should not simply accept conventional wisdom or standard practice. The only way to know for sure what works and what doesn't is to use high-quality, fair trials. And the profession of medicine should become less of a craft and more of a science, where everyone's duty is to collect the data to measure outcomes and continually improve medical practice.

There are enough examples where such evidence has reversed accepted practice for us to trust evidence based medicine rather than the alternative, ignorance based medicine.

matt_black
  • "When reliable evidence about long established treatments is gathered it often overturns the conventional wisdom; this is clear evidence that evidence based medicine is more effective than ignorance based medicine." You argue that having a few anecdotes where EBM is better provides evidence that EBM is better on average. If a few anecdotes can in fact provide evidence, that suggests that a doctor should also be able to treat a patient based on evidence from a few anecdotes. – Christian Oct 25 '12 at 13:18
  • In a normal clinical trial you compare treatment A to treatment B for a huge group of patients with "one illness". If 40% of people do better with treatment A and 60% do better with treatment B, the study tells you that treatment B is "backed-up-by-evidence". The study doesn't show that a doctor is in principle unable to learn in years of practicing medicine which patient is more likely to profit from A and which patient improves better through B. – Christian Oct 25 '12 at 13:27
  • @Christian I think you are guilty of oversimplification and parody. My argument is not that anecdotes *prove* the validity of EBM therefore *anecdotes* are good. My argument is that we can't know whether doctors' judgements are good without collecting reliable evidence. And in many cases existing judgement kills patients. The cases I refer to illustrate that point. There may be other cases where medical experience is supported by EBM analysis, but we can't in general rely on medical judgement unless, case by case, we have reliable evidence that supports it. – matt_black Oct 25 '12 at 13:48
  • @Christian Your second example (some patients do better on drug A; others on B) is a misunderstanding of EBM. The conclusion would not normally be that "treatment B is backed by evidence"; it would be that some treatments work on some people and some on others. And, if the medic has some criteria for identifying those who will do better on A, the study could *validate* his judgement with evidence. The alternative (unfortunately practised by many) is to allocate treatments to patients without certainty they get the right treatment. – matt_black Oct 25 '12 at 13:53
  • The fact that you could in theory validate the judgement with a study is: (1) Wrong. Expert decisions in many fields are made without the expert necessarily being able to write down his criteria the way criteria have to be written down for a study. (2) Misleading. Studies don't get run on every possible way to treat a disease. Requiring gold-standard studies means that you reduce the diversity of treatment options. Patients have to be grouped in a way that the groups are big enough. – Christian Oct 25 '12 at 15:15
  • "The alternative is to allocate treatments to patients without certainty they get the right treatment." Is that parody? You never know. Medicine is a discipline where you make decisions with incomplete knowledge. You don't have certainty. – Christian Oct 25 '12 at 15:21
  • @Christian You are right that *certainty* is not possible. I would have expounded about the "best decision given uncertain knowledge" but it wouldn't have fitted in the comment. On the point of not being able to test expert judgement you are wrong: a well designed study could test whether an expert's intuition does better than some alternative (even when it isn't articulated). The trouble is that such studies repeatedly demonstrate that experts' confidence in their judgement vastly exceeds the evidence they are right. – matt_black Oct 25 '12 at 15:33
  • You can test someone's intuition. You however can't test the criteria independent of the doctor. Another doctor who reads the study can't simply read the criteria in the study and then treat his patient according to the criteria of the study. If the doctor wants to practice EBM he would need to understand the criteria by reading the study. EBM is by definition about deemphasizing intuition. Doctors who make treatment decisions based on their intuition aren't practicing EBM. – Christian Oct 25 '12 at 23:58
  • "The trouble is that such studies repeatedly demonstrate that experts' confidence in their judgement vastly exceeds the evidence they are right." If that's the case for doctors, if doctors who practice medicine by the book outperform those who rely partly on their intuition, how about adding the corresponding evidence to your answer? If you don't think that claim needs any evidence, why don't you think it needs evidence? – Christian Oct 26 '12 at 00:07
  • @Christian All the evidence I quoted demonstrated that accepted practice and intuition are sometimes wrong. And EBM isn't about deemphasising intuition, it is about emphasising *evidence*. A doctor relying on intuition could collect systematic evidence to validate his judgements. What is remarkable (and should be scandalous) is how infrequently that evidence is collected. – matt_black Oct 26 '12 at 10:10
  • (A) No, I specifically linked to the EBM paper that defines the term. EBM is a term that has a specific meaning. EBM is by definition about deemphasising intuition. Maybe the core problem is that you don't understand the term the question is about? | (B) Showing that accepted practice and intuition are sometimes wrong doesn't lead logically to the conclusion that all deviations from the treatment recommendations that come from published studies are likely wrong. – Christian Oct 26 '12 at 11:28
  • I think you have definitely given a number of examples of evidence-based medicine outperforming the intuitive approach. Does this prove that overall, it is superior? Barring a meta-study pitting the two approaches against each other, I don't think it's possible to answer. To give a bit of (humorous) food for thought, consider the famous [parody article](http://www.bmj.com/content/327/7429/1459) about the efficacy of parachutes. It certainly highlights some failings of the evidence-based approach. – Daniel B Oct 26 '12 at 12:26
  • @DanielB I fail to see how a comparison of *medical intuition* versus *evidence* would be conducted unless it involved, how do I put this, **evidence**. Intuition isn't always wrong, but the only way to *know* is to use reliable forms of evidence. The alternatives are not intuition versus evidence, they are ignorance versus evidence. If we say intuition is OK by itself, then we cannot reliably know whether medical practice is killing patients or curing them. But there is no general proof: every intuition requires specific validation. – matt_black Oct 26 '12 at 13:20
  • @matt_black I agree with the concept that a (correctly executed) evidence-based approach is required in order to have high confidence in the validity of the approach. In this sense, evidence will always outperform the alternative; but there is an opportunity cost of waiting for evidence to become available (and in some cases, it never will, like the parachutes). So, I interpret the question along the lines of "overall, is it beneficial to allow some non-evidence based treatments, or should everything always be evidence-based". – Daniel B Oct 26 '12 at 13:30
  • A couple of things: 1) can you better define what you mean by "ignorance based medicine"? The way things are currently written that is somewhat confrontational and can cast people such as Dr. Spock in a negative light even though he was working with some of the best evidence that was available in his day and it wasn't until the 1990's that issues with SIDS were uncovered. I suspect that you mean “ignorance” in the context of “lack of better understanding with the best information available” but it seems to imply a willful aspect the way it is written. – rjzii Oct 26 '12 at 13:39
  • 2) I don't feel that you really made a good case for EBM as most of your examples are based upon long term studies or studies performed after unexpected results trended for several years, so one could almost argue that it doesn't outperform other approaches as it requires too long-term an approach to obtain satisfactory results. Can you find a more direct example that applies EBM in the short term? I suspect that there might be some examples in the drug discovery field when doing larger studies of experimental drugs. – rjzii Oct 26 '12 at 13:46
  • @DanielB I wouldn't argue that the alternatives are EBM or no-treatment-at-all. But the position where we just accept conventional practice and never collect the evidence should not be accepted. So we shouldn't have an "opportunity cost of waiting". – matt_black Oct 26 '12 at 15:18
  • @RobZ I'm not sure why taking time to find the right answer is an argument against EBM. When we don't have the right evidence surely the right approach is to start gathering the evidence as soon as possible? Unfortunately, in all the examples I used the evidence was only gathered much later than it should have been, resulting in significant volumes of harm being done. I have no problem with practice continuing while evidence doesn't exist, but to fail to pursue that evidence is unforgivable. – matt_black Oct 26 '12 at 15:28
  • @matt_black - It's the devil's advocate argument against it: that the time involved with collecting evidence for EBM causes more harm than may be prevented through EBM. Thus, an example of a quick turnaround through the application of EBM would help to bolster the argument. – rjzii Oct 26 '12 at 15:42
  • @RobZ I will add some further examples. – matt_black Oct 28 '12 at 01:22
  • "But the very act of comparing not using evidence to using evidence seems to accept the premises of the evidence based approach." Yes! That puts the finger on my confusion with the question. If you don't accept the use of Evidence-Based Medicine, why would an answer providing evidence convince you? – Oddthinking Oct 28 '12 at 23:16
  • @Oddthinking The real question is why evidence is so underused in actual medical practice. And some medics seem to be so convinced of the value of their intuition that evidence won't sway them. I think the question was somehow stumbling towards this. – matt_black Oct 28 '12 at 23:29
  • @Oddthinking: I don't see any problem with holding the paradigm of EBM to its own standards. I think anyone who preaches EBM has the burden of proof to show that the core ideas of EBM are supported by evidence. That's what being an honest skeptic is about. Questioning dogma. – Christian Oct 29 '12 at 02:03
  • @Oddthinking: No-one is arguing that studies should be ignored. They are simply arguing: 1) often there are treatments that look promising, but haven't yet been verified by a double-blind trial. Waiting for evidence will often result in much greater harm than taking the risk on a new treatment 2) often studies are too broad - a doctor may have good reasons to believe that in these particular circumstances another treatment might work better. – Casebash Oct 29 '12 at 02:09
  • Evidence based medicine can be evaluated by considering the performance of doctors using strictly evidence based medicine and comparing it against those using intuition, etc. – Casebash Oct 29 '12 at 02:09
  • @matt_black: Let's take the steroids example. There is evidence that steroids reduce swelling and there is evidence that steroids in the brain is bad. That means that before 2005 any doctor who practised medicine according to the values of EBM was supposed to give them to his patients. If he made the observation that his patients don't seem to do well with them? Statistical fluke. EBM's values don't stop the doctor from killing his patients. I don't know when steroids got introduced but it's likely that for most of the time using them in this case with EBM decision making was harmful to patients. – Christian Oct 29 '12 at 02:26
  • (@Christian: Am assuming "steroids in the brain" should be "swelling in the brain".) – Oddthinking Oct 29 '12 at 03:51
  • This discussion seems to be turning into something that would be easier to resolve in chat. Come join me: http://chat.stackexchange.com/rooms/info/6268/does-ebm-itself-need-evidence – Oddthinking Oct 29 '12 at 03:57

I can only speak to the one instance of evidence-based medicine I've seen in practice.

A few years back, there was a study in mammography that claimed that only women above the age of 50 should be screened for breast cancer, and even then, only once every two years (here's the task force link).

This study caused the doctors I was working with at the time to cringe, because younger women develop cancers faster. In fact, the week the study was released, we saw two patients, one of whom was 32 and was diagnosed with stage 3 cancer right after giving birth (she thought the breast stiffness was due to preparation for lactation), and another woman who had nothing on her priors when she was 42 and had full-blown metastasized cancer 2 years later. Those are anecdotal results, but when contrasted with that overall guidance, they show the cherry-picking of the evidence done by the task force. A younger woman who doesn't get screened at least yearly could develop stage 3 or 4 cancer by the time the biennial screening comes around.

Essentially, if you back far enough away from the statistics, it could be argued that mammograms aren't useful in younger women because breast density prevents accurate screening results. However, with the move to digital, more of the dense breast is now available to mammographers (as reported in the NYTimes here and elsewhere here). And as that previous reference pointed out, younger women have more aggressive cancers, so reducing their screening means that these patients are more likely to suffer from more developed cancers and increased mortality.

There are many ways of interpreting evidence. Personally, I don't think that the task force interpreted the evidence properly, although my bias is also tainted by those cases I saw. I'm pretty sure that this kind of interpretation problem will occur in many other fields as well. So will following such task force recommendations make doctors better? I'd say that the next step, at least in mammography, is to see how the breast cancer mortality rate changes as doctors follow these new recommendations.

mmr
  • In your analysis of the task force's failings, you haven't considered the cost of (a) the tests performed on healthy women, (b) the false positives. Also you should consider the effectiveness of the treatment - there is no point detecting a cancer you can't treat. – Oddthinking Oct 01 '11 at 07:43
  • @Oddthinking: Also, the likelihood that screening would have found cancers in time for better outcomes. It is true that the recommendation should change based on changes in capabilities. – David Thornley Oct 01 '11 at 22:31
  • @oddthinking: the reality is, most of the people who die of breast cancer are those who don't get screened on a regular basis (http://www.asco.org/ascov2/Meetings/Abstracts?&vmview=abst_detail_view&confID=70&abstractID=40559)-- so healthy and non-healthy women must be screened to determine who falls in what category anyway. In practical terms, the cost of a false positive is much, much smaller than the cost of a false negative (ie, it's definitely a psychological burden, but not a deadly one). – mmr Oct 02 '11 at 16:47
  • @mmr, no-one has yet mentioned the placebo effect ( http://en.wikipedia.org/wiki/Placebo_effect#Mechanism_of_the_effect ), which foxes statistics, but may provide one mechanism for the reason people try alternative remedies like homeopathy - and yet get better after conventional medicine has failed them. Placebo effect even competes with surgery which is, after all, a pretty measurable thing: http://www.ncbi.nlm.nih.gov/pubmed/12110735 If the placebo effect is responsible for improving the health of people who have not had an effective treatment, to what extent has it been a component of conve – Hunter Nov 28 '11 at 17:29
  • @Hunter, I think your comment got cut off. I do believe that EBM takes into account the placebo effect, as much as can be done. EBM gathering methods coalesce a number of clinical trials, selected by the gatherers, for their scientific rigor, including attention to the Placebo Effect (ie, the study must have effective controls). From that group, a broader statistical determination is found. The trouble is that selection process may be biased or incomplete, or numbers from different studies may have been acquired differently; but placebo should be accounted for, ideally. – mmr Nov 28 '11 at 20:41
  • So, no explanation for the downvote, just a downvote? Please explain why you think that this answer warranted the downvote, thanks. – mmr Feb 10 '12 at 21:26