240. Why is Nutrition Science in such a Mess?

Although nutrition science has been responsible for some remarkable progress in human health, there is widespread agreement that not everything in the garden is rosy. In particular, there have been some spectacular failures, costing enormous sums of money, over the past 50 years. Much of the success has come from the study of minerals and vitamins. In those cases we are dealing with a single, specific factor whose deficiency produces characteristic symptoms that become obvious quite quickly. By contrast, the relationships between diet and chronic diseases such as cancer and cardiovascular conditions are much more complex: several different factors are usually involved, and the disease can take many years to develop.

Poor quality science

A paper in the Mayo Clinic Proceedings has provided a devastating critique of the research used to devise the dietary guidelines (1). It begins by noting:

“the appalling track record of highly publicised nutrition claims derived from epidemiologic studies that consistently failed to be supported when tested using objective study designs.”

In one evaluation of over 50 nutrition studies covering a variety of dietary patterns and nutrient supplements, it was found that “100% of the observational claims failed to replicate” and that five claims were statistically significant “in the opposite direction.”

Essentially, this means that when objective methods are used to test nutrition claims, the claims do not stand up. The authors conclude that the research paradigm has been a failure and offer their own suggestions as to why this has happened.

The fundamental issue is the poor quality of data that has been obtained using memory-based dietary assessments, which consist mainly of interviews, questionnaires and surveys. The reliability of this information is questionable on the following grounds:

  • The assessments of the intake of all nutrients bear little relation to actual consumption;
  • Recall of past events does not provide an accurate picture of what has happened;
  • Respondents are required to comply with protocols and perform behaviours known to induce false recall;
  • The subjective mental phenomena used to derive the data are not subject to independent observation, quantification, falsification, or verification;
  • The failure to control for physical activity, cardiorespiratory fitness, and other obvious confounders invalidates conclusions about the relationships between diet and health.

Investigations based on data collected in this way lack plausibility because the underlying information cannot be reliably, accurately, and independently observed, quantified, and confirmed or refuted. This is not science; it is more accurately described as “pseudoscience”.

Bias

One of the big problems with nutrition research is that people, including scientists, have very strong beliefs about food. A very interesting paper has examined unscientific beliefs about scientific topics in nutrition (2). It begins by describing three different aspects of perceptions of reality, namely:

  • Myths: “beliefs held to be true despite substantial refuting evidence”;
  • Presumptions: “beliefs held to be true for which convincing evidence does not confirm or disprove their truth”;
  • Facts: “propositions backed by sufficient evidence to consider them empirically proved for practical purposes”.


White hat bias

There is now convincing evidence that the source of funding can have a very powerful influence on the results obtained and on how they are interpreted. This is well established and will not be discussed here. The aspect I wish to consider is the way in which personal beliefs that cannot be substantiated act as an important source of bias in nutrition studies. These include personal biases, political views, promotion opportunities, and allegiance to the “norm”, and together they have been referred to as “White hat bias”. They manifest in the following ways:

  • Selectively citing only the portion of results that favour a particular viewpoint (also called “unbalanced citation”);
  • Inappropriate inclusion or exclusion of data in reviews;
  • Miscommunication of research results in research conclusions, press releases, or media reports;
  • Publication bias (in this case, only publishing results that are perceived to match a preconceived beneficial effect);
  • Drawing conclusions only about results that are consistent with the hypothesis (3).

The authors comment that:

“white hat bias seems predicated on researchers’ beliefs that such distortions will improve human health. Instead, these distortions harm the health of science by impairing scientific integrity and damaging public trust.”

There is absolutely no doubt that these practices are widespread: they are not restricted to individual research projects but are endemic in the evaluations of research that are invariably used as the basis for policy formulation. This probably explains why the committees used by governments to devise dietary guidelines so often get their conclusions badly wrong.

The dietary guidelines fiasco

Robert Hoenselaar evaluated three different reports that focus on nutrition (3). Two were produced in the USA: by the Institute of Medicine (IoM) in 2005 (4) and by the Dietary Guidelines Advisory Committee (DGAC) in 2010 (5). The third was produced by the European Food Safety Authority (EFSA) (6). The following weaknesses were identified:

  • All three reports concluded that saturated fatty acids (SFA) raise low-density lipoprotein cholesterol (LDL-C) and used that as evidence for an association between SFA and cardiovascular disease (CVD). The IoM and EFSA reports noted that SFA also raises high-density lipoprotein cholesterol (HDL-C), but none of them commented on the effect this would have on the incidence of CVD. This was despite the fact that a meta-analysis published in 2003 had concluded that, in the absence of data on the direct effects of dietary fats and oils on coronary heart disease (CHD), the risks cannot be determined. The conclusions therefore rest on relationships between cholesterol levels and the risk of heart disease that have never been firmly established. Even then, double standards are clearly being applied: the rise in LDL-C (“bad cholesterol”) is treated as detrimental while the rise in HDL-C (“good cholesterol”) is ignored (see the worked example after this list).
  • The EFSA report was the only one to include data from randomised controlled trials (RCTs) on the substitution of dietary fats. However, it referred to the results of only four trials, although results from 14 were available. A systematic review based on all the available evidence showed that the effect of substituting dietary fats could be attributed to monounsaturated fat (MUFA) intake rather than to SFA intake (7).
  • None of the advisory committees systematically evaluated results from prospective studies examining the direct relation between SFA intake and CVD. All three reports excluded results from the majority of the available studies, and the two US reports misrepresented their results. The IoM report stated that most epidemiologic studies found a positive association between SFA intake and CHD, although significantly increased risks were found in only two of the nine articles it included. The DGAC report disregarded the effects shown by its own results and suggested that replacing SFA with MUFA decreases CHD risk.
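
The double standard over LDL-C and HDL-C can be made concrete with a little arithmetic. The short Python sketch below uses purely illustrative lipid values (they are not taken from any of the three reports or from the 2003 meta-analysis) to show that a rise in LDL-C that is accompanied by a rise in HDL-C can leave the ratio of total cholesterol to HDL-C, the kind of summary measure the 2003 meta-analysis focused on, essentially unchanged.

# Illustrative (hypothetical) lipid values in mmol/L. "other" stands for the
# remaining cholesterol fractions (e.g. VLDL-C). The numbers are chosen only
# to demonstrate the arithmetic, not taken from any trial or report.
baseline = {"ldl": 3.00, "hdl": 1.20, "other": 0.80}

def total_to_hdl_ratio(lipids):
    # Ratio of total cholesterol to HDL-C.
    total = lipids["ldl"] + lipids["hdl"] + lipids["other"]
    return total / lipids["hdl"]

# A hypothetical dietary change that raises LDL-C by 0.10 mmol/L
# and HDL-C by 0.05 mmol/L.
after = {"ldl": 3.10, "hdl": 1.25, "other": 0.80}

print(f"Ratio before: {total_to_hdl_ratio(baseline):.2f}")   # prints 4.17
print(f"Ratio after:  {total_to_hdl_ratio(after):.2f}")      # prints 4.12

Judged on the LDL-C figure alone this change looks harmful, yet the total-to-HDL ratio is, if anything, marginally better, which is exactly why ignoring the HDL-C response amounts to a double standard.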

Conclusions

As any good coach will tell you, to achieve success in sport it is essential to get the basics right. It is obvious that in nutrition research the basics are, in general, pretty dreadful. After all, if researchers cannot accurately determine what is actually being consumed, then it is impossible to have confidence in the results or their interpretation. There is ample evidence that the research is riddled with sloppy procedures, which are only made worse by the bias inherent in the work of many investigators. It is not in the least surprising that many of the dietary recommendations are flawed, which has meant that, far from enjoying improved public health, many individuals have suffered poor health as a consequence of complying with the official advice. It really is time the nutrition science community got its act together and took the steps needed to introduce rigour into its research.

References

  1. E Archer et al (2015). Mayo Clinic Proceedings 90 (7) pp 911-926
  2. A W Brown et al (2014). http://advances.nutrition.org/content/5/5/563.full.pdf
  3. R Hoenselaar (2012). Nutrition 28 pp 118-123
  4. https://www.nap.edu/read/10490/chapter/1
  5. http://origin.www.cnpp.usda.gov/DGAs2010-DGACReport.htm
  6. http://www.efsa.europa.eu/en/efsajournal/doc/1461.pdf
  7. R P Mensink et al (2003). American Journal of Clinical Nutrition 77 pp 1146-1155
