171. Bad Science

In an ideal world, the various checks and balances would ensure that information presented in scientific journals, especially those regarded as prestigious, is sound and reliable. Unfortunately, there is convincing evidence that this is not the case. In an article published in the BMJ, Trisha Greenhalgh and colleagues concluded that evidence based medicine is in crisis (1). The following reasons are cited:

  • The drug and medical devices industries increasingly set the research agenda by defining what counts as a disease. They then specify which tests and treatments will be evaluated and choose the measures that will be used to determine effectiveness.
  • The volume of evidence, especially clinical guidelines, has become unmanageable.
  • Most current research offers only marginal benefits. The large-scale trials needed to detect these small gains tend to overestimate them, while the adverse side effects are played down.
  • As people age, they are more likely to suffer from several different conditions at the same time. These are very difficult to treat, with the result that the management of one disease or risk state may cause or exacerbate another, most commonly through the perils of polypharmacy in the older patient.
  • As the examples above show, evidence based medicine has drifted in recent years from investigating and managing established disease to detecting and intervening in non-diseases. Risk assessment using “evidence based” scores and algorithms (for heart disease, diabetes, cancer, and osteoporosis, for example) now occurs on an industrial scale, with scant attention to the opportunity costs or unintended human and financial consequences (1).

John Ioannidis concluded that many published research findings are false or exaggerated, and an estimated 85% of research resources are wasted (2).
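
The arithmetic behind this conclusion is straightforward. In an earlier, widely cited paper, Ioannidis showed that the probability that a "statistically significant" finding is actually true depends on the prior odds that the hypothesis is correct, the power of the study, and the significance threshold. The short sketch below works through that calculation; the prior, power, and significance figures are illustrative assumptions, not values taken from the cited papers.

```python
# Positive predictive value (PPV) of a "significant" research finding.
# The prior, power, and alpha below are illustrative assumptions.

prior = 0.10   # assumed fraction of tested hypotheses that are actually true
power = 0.80   # assumed probability of detecting a true effect (1 - beta)
alpha = 0.05   # conventional false positive threshold

true_positives  = prior * power          # true effects correctly detected
false_positives = (1 - prior) * alpha    # null effects wrongly "significant"

ppv = true_positives / (true_positives + false_positives)
print(f"PPV = {ppv:.2f}")  # ~0.64, i.e. roughly a third of "discoveries" are false
```

Even under these fairly generous assumptions, about one in three significant findings is false; with lower power or more speculative hypotheses the proportion rises quickly.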

Richard Smith, a former editor of the BMJ, has argued that the fundamental issue is the publication in medical journals of papers reporting the results of drug trials (3). These papers are likely to be far more effective than advertisements in persuading doctors to prescribe the drugs concerned. Independent research has shown that studies funded by a company were four times more likely to produce results favourable to that company than studies funded from other sources. This is achieved by asking the right questions. Smith describes how the approach is implemented:

  • Conduct a trial of your drug against a treatment known to be inferior
  • Trial your drugs against too low a dose of a competitor drug
  • Conduct a trial of your drug against too high a dose of a competitor drug (making your drug seem less toxic)
  • Conduct trials that are too small to show differences from competitor drugs
  • Use multiple endpoints in the trial and select for publication those that give favourable results
  • Do multicentre trials and select for publication results from centres that are favourable
  • Conduct subgroup analyses and select for publication those that are favourable
  • Present results that are most likely to impress, for example a reduction in relative rather than absolute risk (illustrated in the sketch below).
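
The last tactic relies on simple arithmetic: a drug that lowers an event rate from 2% to 1% can be promoted as halving the risk, even though only one patient in a hundred actually benefits. A minimal sketch, using invented event rates:

```python
# Relative versus absolute risk reduction. Event rates are invented
# purely for illustration.

control_rate   = 0.02   # 2% of untreated patients suffer the event
treatment_rate = 0.01   # 1% of treated patients suffer the event

arr = control_rate - treatment_rate   # absolute risk reduction
rrr = arr / control_rate              # relative risk reduction
nnt = 1 / arr                         # number needed to treat

print(f"Relative risk reduction: {rrr:.0%}")   # "50% fewer events!"
print(f"Absolute risk reduction: {arr:.1%}")   # one percentage point
print(f"Number needed to treat:  {nnt:.0f}")   # 100 patients per event avoided
```

The relative figure makes the headline; the absolute figure, which is what matters to an individual patient, rarely does.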

If all this fails to produce positive results, it is rare for the negative results ever to see the light of day. In addition, the impact of favourable results is enhanced by publishing the same information in several different journals. Because drug companies usually conduct trials in several different centres, there is huge scope for publishing different results from different centres at different times in different journals, and the results from different centres can also be combined in multiple permutations. A few years ago the biotechnology firm Amgen selected 53 published reports of research, many of them related to cancer, which might offer potential for subsequent development by the company. It found that the results of only 6 (11%) could actually be replicated (4). To investigate further, where possible the original authors were contacted to discuss the discrepant findings, exchange reagents, and repeat experiments under the authors' direction, occasionally even in the laboratory where the original work had been done.

It was found that in the studies which could be reproduced, the authors had paid close attention to controls, reagents, and investigator bias, and had reported the complete data set.

By contrast, where this was not the case, data were not routinely analysed by investigators blinded to the experimental and control groups. Results were often published which were in agreement with the working hypothesis but which were not representative of the complete data set. It emerged that there are no guidelines requiring all data sets to be reported in a paper; in fact, original data sets are often removed during the editorial process. Essentially similar conclusions were reached by a team from Bayer HealthCare in Germany (5).
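
One of the safeguards that distinguished the reproducible studies, blinding the analyst to which group is which, is simple to implement. The sketch below shows one common approach: an independent party recodes the treatment arms to neutral labels before the data reach the analyst. The file and column names here are hypothetical.

```python
# Blind the analyst to treatment allocation before analysis.
# "trial_data.csv" and the "arm" column are hypothetical examples.
import random

import pandas as pd

df = pd.read_csv("trial_data.csv")

# Recode the real arms (e.g. "treatment"/"control") to neutral labels with a
# random mapping that only an independent third party retains.
arms = list(df["arm"].unique())
random.shuffle(arms)
mapping = {arm: f"group_{i}" for i, arm in enumerate(arms)}
df["arm"] = df["arm"].map(mapping)

df.to_csv("blinded_trial_data.csv", index=False)
# The analyst works on the blinded file and commits to the analysis
# before the mapping is revealed and the groups are unblinded.
```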

John Ioannidis and colleagues have identified a number of reasons why so much research is of poor quality (6). These include:

  • Poor protocols and design. Some research is done using a rudimentary protocol or possibly no protocol at all. Even when there is a formal protocol, it may not be publicly available. Although changes may have to be made during the course of an investigation, these are often poorly documented.
  • Poor utility of information. Many studies are conducted without any attempt to assess the value or usefulness of the information that will be generated.
  • Statistical power and outcome misconceptions. In order to achieve statistical power, researchers may choose outcome measures which are clinically trivial or scientifically irrelevant. This can happen in studies on heart disease where there is no difference between treatments in death rates, but significant differences appear when other symptoms of the disease, which are assessed subjectively, are included in the analysis.
  • Insufficient consideration of other evidence. Most studies are designed and conducted in isolation. In a broader context, the failure of researchers on heart disease to recognise the damage that can be caused by raised blood glucose and insulin resulted in the continued emphasis on fat/cholesterol. Ultimately this led to recommendations to alter the habitual diet by reducing fat, especially saturated fat, and increasing carbohydrates. It is now becoming clear that this has been a disaster and is one of the main reasons why obesity and type 2 diabetes (T2D) have reached epidemic levels.
  • Subjective, non-standardised definitions and vibration of effects. Subjective judgments leave room for so-called vibration of effects during the statistical analysis, which means that the results can differ depending on how the analysis is done. This creates scope for bias, especially if the investigators have a preference for a particular result, and allows a study protocol to be designed in a way that favours outcomes which will satisfy the sponsors. This could help to explain why the source of funding has such a strong influence on the results obtained (see the sketch after this list).
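
The vibration of effects mentioned in the last point is easy to demonstrate. The sketch below, using entirely synthetic data with no true treatment effect, runs the same comparison under a handful of defensible analytic choices (how aggressively to trim outliers, which enrolment period to include) and prints the resulting p-values, which differ from one specification to the next. An analyst with a preferred result need only report the specification that delivers it.

```python
# "Vibration of effects": the same null data analysed several defensible
# ways yields a spread of p-values. All data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treated = rng.normal(0.0, 1.0, 200)   # no true effect in either group
control = rng.normal(0.0, 1.0, 200)

def analyse(t, c, trim=None, subgroup=None):
    """One analytic specification: optionally trim 'outliers' and/or
    restrict to the first or second half of enrolment (a mock subgroup)."""
    if subgroup == "early":
        t, c = t[:100], c[:100]
    elif subgroup == "late":
        t, c = t[100:], c[100:]
    if trim is not None:                   # drop values beyond +/- trim
        t, c = t[abs(t) < trim], c[abs(c) < trim]
    return stats.ttest_ind(t, c).pvalue

specs = [
    ("all data, no trimming", dict()),
    ("trim at 2.0",           dict(trim=2.0)),
    ("trim at 1.5",           dict(trim=1.5)),
    ("early enrolment only",  dict(subgroup="early")),
    ("late enrolment only",   dict(subgroup="late")),
]
for name, kwargs in specs:
    print(f"{name:24s} p = {analyse(treated, control, **kwargs):.3f}")
```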

Ioannidis goes on to explain that many biomedical researchers have poor training in research design and analysis. Physicians who conduct research usually have a short introduction to biostatistics early in medical school and subsequently receive no further formal training in clinical research. The little training they do receive often focuses on data analysis and rarely includes study design, which is arguably the most crucial element in research methods.

Much flawed and irreproducible work has been published, even when only simple statistical tests are involved.

Research is often done by stakeholders with conflicts of interest that favour specific results. These stakeholders could be academic clinicians, laboratory scientists, or corporate scientists, with declared or undeclared financial or other conflicts of interest. Much clinical research is designed and done under the supervision of the industry, with little or no input from independent researchers. Clinicians might participate in this process simply through the recruitment of study participants, without making any meaningful contribution to the design, analysis, or even writing of the research reports, which might be done by company ghost writers.

CONCLUSION

This is a very sorry state of affairs. Enormous amounts of money and human resources are devoted to research in the biological sciences, yet all the indications are that in many cases the results are not worth the paper they are written on. On top of this, the refereeing system used by scientific journals fails to identify the flaws in the papers submitted for publication. The findings are often crucial in determining commercial strategies and national policies, and it is sobering that these are being constructed on a foundation of sand. It also means that it is relatively easy for data to be manipulated to suit other agendas, such as the case submitted for official approval of a drug. The same considerations apply to the development of dietary guidelines, and the poor standard of the background science is one of the reasons why it is so difficult to determine the relationship between diet and health.

There are many reasons why public health policies in many countries are failing, but the lack of a sound, reliable evidence base is fundamental, and until this is fixed progress will be extremely difficult to achieve.

REFERENCES

  1. Greenhalgh T, Howick J, Maskrey N. Evidence based medicine: a movement in crisis? BMJ 2014;348:g3725. http://www.bmj.com/content/348/bmj.g3725
  2. Ioannidis JPA. How to make more published research true. PLoS Med 2014;11(10):e1001747. http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1001747
  3. Smith R. Medical journals are an extension of the marketing arm of pharmaceutical companies. PLoS Med 2005;2(5):e138. http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020138
  4. Begley CG, Ellis LM. Raise standards for preclinical cancer research. Nature 2012;483:531–533. http://www.nature.com/nature/journal/v483/n7391/full/483531a.html
  5. Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov 2011;10:712. http://www.nature.com/nrd/journal/v10/n9/full/nrd3439-c1.html
  6. Ioannidis JPA, Greenland S, Hlatky MA, et al. Increasing value and reducing waste in research design, conduct, and analysis. Lancet 2014;383:166–175. http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(13)62227-8/fulltext

 
