Superfluous Medical Studies Called Into Question

Discussion in 'Fibromyalgia Main Forum' started by tansy, Jan 6, 2006.

  1. tansy

    tansy New Member

    By David Brown
    Washington Post Staff Writer
    Monday, January 2, 2006

    In medical research, nobody is convinced by a single experiment.

    A finding has to be reproducible to be believable. Only if different
    scientists in different places do the same study and get the same outcomes
    can physicians have confidence the finding is actually true. Only then is it
    ready to be put into clinical practice.

    Nevertheless, one of medicine's most overlooked problems is that some
    questions keep being asked over and over. Repeated tests of the same
    diagnostic study or treatment waste time and money, and squander
    volunteers' trust and self-sacrifice. Unnecessary clinical trials may
    also cost lives.

    All this is leading some experts to ask a new question: "What part of 'yes'
    don't doctors understand?"

    Two papers dramatically illustrated this problem last year and may have helped nudge the medical establishment toward doing something about it.

    One article examined 18 years of research on aprotinin, a drug used to reduce bleeding during heart surgery. The other looked at studies on the relationship between a baby's sleeping position and sudden infant death
    syndrome. Both concluded that research on these subjects went on long after the answers were known - namely, that aprotinin worked and that babies sleeping on their backs were less likely to die of SIDS.

    The odyssey of aprotinin, which is derived from the lung tissue of cows, was
    recounted in the journal Clinical Trials.

    Dean Fergusson and his colleagues at the Ottawa Health Research Institute
    found 64 randomized, controlled trials - the most authoritative type of
    study - on the use of aprotinin in heart surgery. They were done in half a
    dozen countries over 18 years, starting in 1987.

    Two-thirds were little more than variations on each other. And nearly all
    showed the same thing: Patients who received aprotinin during surgery bled
    less. They had only one-third the chance of needing a blood transfusion
    compared with patients who did not get the drug.

    What was surprising was that this advantage was clear by June 1992,
    after the 12th of the 64 studies. If researchers after that time had
    familiarized themselves with previous studies -- and especially if they had analyzed summaries of those studies, called "meta-analyses" -- they might not have considered it necessary to run their own.

    But it appears that very few of them studied closely what had been published
    previously about aprotinin. On average each new paper listed only one-fifth
    of the previous studies in its references. Only two research teams mentioned
    the two published "overviews" of aprotinin research, one from 1994 and the
    other from 1997. Both of them demonstrated the unquestionable advantage of
    giving the drug.

    In all 64 studies, the patients were randomly assigned to get aprotinin or a
    placebo. In general, mortality did not differ between the two groups. But
    some of the patients receiving a placebo had bleeding and needed
    transfusions that they might have avoided had they been given aprotinin.

    Being given a placebo long after aprotinin's value had been proved probably
    did not cost lives. The same cannot be said of medicine's failure to pay
    attention to studies of infant sleep position.

    Last April, in the International Journal of Epidemiology, Ruth Gilbert of
    the Institute of Child Health in London examined 40 studies of SIDS and
    sleep position going back to 1965.

    Gilbert found that if researchers had pooled the results of the oldest
    studies and analyzed them, they might have gotten a big hint by 1970
    that putting babies to sleep on their stomachs raised the risk of SIDS.
    Instead, that observation did not become convincing until the late 1980s.

    So researchers now know that sleeping on the stomach raises the risk of
    SIDS sevenfold. That realization led to "Back to Sleep"
    campaigns in Britain in 1991 and in the United States in 1994.

    Between 1970 and the unveiling of that advice, 11,000 British infants - who
    might have survived if sleeping on the back had been the norm - died of
    SIDS. In the United States, Europe and Australia, "at least 50,000 excess
    deaths were attributable to harmful health advice," Gilbert and her
    colleagues wrote.

    The problem is evident even in research on the highest-profile diseases.

    In 1992, Joseph Lau, then at the Department of Veterans Affairs hospital in
    Boston and now at Tufts University, published a paper that has become a
    classic in epidemiology. He examined 33 clinical trials of streptokinase, a
    drug that dissolves clots in the coronary arteries of people having heart
    attacks.

    The trials were conducted from 1959 to 1988. Lau conducted a "cumulative
    meta-analysis" of the results. This is done by adding each trial's patients
    and their outcomes to all the preceding ones. The result was a running
    scorecard of streptokinase's performance.
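
    The mechanics of that running scorecard can be sketched in a few lines of
    Python. The trial counts below are invented for illustration, and the
    simple pooled risk ratio stands in for the weighted methods (such as
    Mantel-Haenszel) a real meta-analysis like Lau's would use, but the idea
    is the same: after each new trial, add its patients and outcomes to the
    running totals and recompute the overall result.

```python
# Simplified cumulative meta-analysis: pool each trial's raw counts with
# all preceding trials and recompute the overall risk ratio each time.
# Trial counts are hypothetical, for illustration only.

trials = [
    # (deaths_treated, n_treated, deaths_control, n_control)
    (10, 100, 15, 100),
    (8, 120, 12, 115),
    (20, 250, 30, 240),
]

def cumulative_risk_ratios(trials):
    """Return the running pooled risk ratio after each successive trial."""
    deaths_t = n_t = deaths_c = n_c = 0
    ratios = []
    for dt, nt, dc, nc in trials:
        deaths_t += dt   # accumulate treated-group deaths and patients
        n_t += nt
        deaths_c += dc   # accumulate control-group deaths and patients
        n_c += nc
        # crude pooled risk ratio: death rate on the drug vs. on placebo
        ratios.append((deaths_t / n_t) / (deaths_c / n_c))
    return ratios

for i, rr in enumerate(cumulative_risk_ratios(trials), start=1):
    print(f"After trial {i}: pooled risk ratio = {rr:.2f}")
```

    A value that settles well below 1.0 and stays there, trial after trial,
    is exactly the pattern Lau saw with streptokinase: the answer stopped
    moving long before the trials stopped.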

    Lau determined that by the end of the eighth trial in 1973, the evidence
    was clear that heart attack patients who got streptokinase had 25
    percent lower death rates than those who did not. That
    conclusion, and the percentage, did not budge while 34,542 more patients
    were enrolled in 25 more trials of streptokinase over the next 15 years.

    There are lots of reasons this kind of thing happens.

    In many of the aprotinin studies, the researchers tested the drug in
    subgroups of patients or altered variables to see if outcomes changed. The
    drug is very expensive, so they tried different doses. Sometimes they added
    it to the blood in the heart-lung machine; sometimes they injected it
    directly into the patient. Some studies examined not only aprotinin's
    effects on bleeding but also its effects on how well artery bypasses
    restored blood flow to the heart muscle.

    Additionally, surgical culture and practices differ somewhat from country to
    country, and apparently surgeons in some nations felt they needed to study
    the drug themselves before adopting its use.

    Even given these justifications, however, there was much repetition. Two
    studies of aprotinin's effects on patients taking aspirin were published in
    1994, another in 1998, and another in 2000. All showed the same thing:
    Aprotinin worked for those patients, too.

    The reason for the plethora of SIDS studies was different. The evidence that
    stomach-sleeping was hazardous arose from observational studies, which are
    inherently less authoritative than controlled trials where people are
    randomly assigned to do one thing or another. It takes more observational
    studies to persuade doctors to change something as important as advice to
    new parents.

    How many unnecessary studies are conducted remains an open question.

    Nobody requires that medical scientists review previous research to make
    sure the question they are asking has not already been answered. This
    may change, though.

    The Lancet, a British journal, announced last summer that it will
    require that authors submitting papers show they performed a
    meta-analysis of previous research or consulted an existing one.

    "In 10 years we are going to look back on this time, and we won't believe
    this wasn't done as a matter of course," said Steven N. Goodman, a physician
    and biostatistician at Johns Hopkins University who edits Clinical Trials.

    The current state of affairs, in his opinion, is indefensible.

    When a patient volunteers for a randomized clinical trial, he or she strikes
    an implicit bargain with the researcher. The patient may benefit, but even
    if he does not, others will.

    That is because the study will produce new knowledge.

    But if the question is already settled, then the patient's sacrifice and
    altruism are for naught.

    "In the ethical world, two things need to be considered -- harms and
    wrongs," Goodman said. "People in unnecessary trials are sometimes
    harmed, but I would say they are always wronged. And in the world of clinical research, wrongs are almost worse than harms."
  2. minimonkey

    minimonkey New Member

    Nancy -- I was thinking the same thing earlier today --- we, as a community, have managed to compile a huge database that seems to surpass what most of the medical doctors *begin* to understand about these illnesses. (And yet we pay them for their services, again and again....) Maybe one of us, or we as a team, will have the breakthrough idea that leads to a cure eventually... wouldn't that be a hoot!

    Part of the problem that leads to such conflicting results in these studies is that it is easy to tinker with statistics so that you get a technically "valid, methodologically sound" study with significant results that are meaningless in the real world. There is a lot of "bad science" or at the very least "meaningless" science out there....

    I was planning on becoming a researcher at one time, but when I learned enough about advanced stats to realize how much meaningless information gets published, I changed my mind. I'm a clinician (in psych) instead, and the research I do is of the case-study variety.
  3. KelB

    KelB New Member

    The trouble is, some studies and conclusions are just plain wrong and they need to be repeated and challenged.

    For example, my Rheumy published his research into FM in the British Medical Journal. He concluded that you could cure FM with graded exercise.

    This now stands as accepted proof, despite being really sloppy work e.g. he accepts anecdotal evidence from patients when it agrees with his ideas, and discounts it when it doesn't. There is no objective monitoring of ALL the patients who started the exercise programme. Anyone who dropped out was ignored for the remainder of the trial.

    At present, his conclusion is that graded exercise doesn't work for some people because they don't do as they're told and give up on the regime too easily. They're lazy and not motivated to recover. There is no evidence offered to support this, it's just an untested assumption presented as fact. It's intellectually lacking and just plain baloney.

    How will this "research" be disproved unless someone else (with half a brain and a smaller ego!) carries out properly controlled trials of his conclusion?

    Good research should be accepted. Bad research needs to be tested and challenged in a credible way and this can only be by repetition. The pity of it is, that in order to disprove the "exercise cures FM" theory, more of us will have to be injured in the research process.

    The problem is, how do we tell good research from bad, after just one study?
  4. vickiw

    vickiw Member

    maybe I'm just a pie-eyed optimist, but I think some of this is going to improve. The WWW wasn't used much before 1994 and it took a few years before it became truly organized (I work for a technical organization and was involved in creating web sites from fairly early on). Check the dates in this article.

    With the consolidation and easy access to information these days, those who fund the research and studies can now much more easily find out what has or hasn't been done and just how well it's been done. As mentioned in the article, they are becoming reluctant to throw good money after bad and will not fund redundant studies. Researchers will have to prove the need for the study in order to receive grants.

    Not that studies can't be skewed, as mentioned, but that's another subject...
  5. NyroFan

    NyroFan New Member

    It took me a while to get through it, but it was well worth it. Thank you for posting it.

    And that is good news about 'The Lancet'. Once in a while I go to the University Med Library and read the journals.
    'Lancet' is one of my favorites.