
Why is dietary advice constantly changing?

Posted on Sunday 1 September 2019

Charlotte Hawkins

The world of food used to be simple for earlier generations. Lack of food was always a serious concern, but as long as people had enough, the balance of their diets was given little thought, and each generation simply ate according to what was available and what they had learned from previous generations.


Around the middle of the last century, things started to change. Exotic foreign foods began to be imported, and the availability of new foods increased year on year, leading people to question and change their long-established food choices. Scientists began to look at food as a potential trigger for degenerative diseases, and talked about nutrition not in terms of foods such as meat, eggs, fish or fruit, but in terms of the nutrients present within these foods, such as saturated fat, cholesterol, omega-3 fats or fructose - an approach that has become known as "reductionism". This approach has persisted for over half a century, although specific components of food have fallen in and out of the limelight and opinions have changed as to whether each component is good or bad.


The increasing sophistication of food processing has enabled foods to be tweaked according to the particular dietary dogma of the day: in the 1980s and 90s, when the low-fat craze hit, many foods that traditionally had a high fat content could be made low-fat or fat-free. By the millennium, however, carbohydrates were the new public enemy number one, and traditionally high-carbohydrate foods such as bread and pasta could be purchased in low-carb versions. Now that food manufacturers have become fixated on "added value" (i.e. manipulating a food to be able to charge a higher price for it), there has been a shift towards an even more reductionist view of food, with products such as enhanced omega-3 eggs and sterol-containing margarines. Food companies know that boosting desirable nutrients or reducing unfavourable ones gives them an edge in a highly competitive market and enables them to charge more for a food made up of usually inexpensive ingredients. Although the advertising of these "nutraceutical" foods claims to be based on science, it has had as much influence on the public perception of the healthfulness or harmfulness of foods as government public information campaigns and media reports.


This reductionist approach to nutrition by both scientists and food manufacturers has made what was once simple hugely complicated. There is a never-ending stream of new studies showing one particular component of food to be potentially good or bad for our health. The nature of scientific enquiry is to challenge existing assumptions in the light of continually evolving evidence, but research will inevitably throw up many contradictions and experts will rarely agree.
It is natural for people to ask why scientists can't give us a straight answer, but there are many reasons why studying diet and health is fraught with complications. There are several ways this type of research is done, but none of them is perfect.


Epidemiologists look at patterns of disease in whole populations to observe which sorts of diets tend to promote health and which tend to diminish it. The difficulty with this is that it is impossible to eliminate confounding variables - other elements that could skew the data - as it involves studying real people in the real world. If an increased intake of one nutrient has been observed, it is inevitable that there will have been a reduction in another, and that does not even begin to take into account genetic or ethnic differences, lifestyle, activity levels and cultural values, all of which have an impact on health.


Studies can be done in a laboratory setting, but these do not translate well into real-life scenarios. They usually take place in vitro or on animals, neither of which can lead to any firm conclusions about the impact of particular nutrients on human health. Occasionally studies have involved keeping people in laboratories and studying their nutrient intake, but these are extremely short-term for ethical and cost reasons (participants have to be housed and paid!), and mostly researchers are looking at "markers" such as changes in blood composition, which is not the same as identifying patterns of diseases that usually take decades to develop.


Another problem with research done in a laboratory is that it always attempts to isolate the health impact of specific nutrients and rarely takes account of how they interact. A nutrient on its own may behave very differently in the body than it does in combination with others. For example, iron absorption from food is greatly enhanced by eating food that contains vitamin C at the same time, but it can be reduced if you happen to have a cup of tea with the meal, due to the tannins that tea contains. Some nutrients that are believed to be beneficial in foods have actually shown themselves to be detrimental when taken in supplement form, but it is difficult to know whether this is because the supplement is synthetically constructed, because it is taken out of the context of the foods in which it is normally found, because of the dose, because of absorption, or because of any of a long list of other possibilities.


A third type of study, often called a "longitudinal" study, observes dietary patterns and the development of disease in large numbers of people. This can be useful for identifying trends over the longer term, but the major flaw with this type of study is that it relies on the information that participants give the researchers, which is inevitably biased and often inaccurate when it depends on remembering the foods eaten over a given period.

There are further problems when data from these studies are analysed. Data analysis is far from straightforward, and scientists from different backgrounds can look at the same set of data and interpret it in entirely different ways. Furthermore, it is easy to assume that a statistical correlation is the same as cause and effect, and this is a common error made by journalists when trying to present scientific data to the public in an easy-to-understand format. This occurs partly because the two can be hard to distinguish, and partly because it is far more exciting to report that something has been proven than that a pattern has been observed.
A further problem is how scientific research is funded, as it is nearly always food or pharmaceutical manufacturers that pay for research to be undertaken. It has been shown many times that study results nearly always favour the organisation paying for the research. Results that are not seen as desirable are often minimised, or swept to one side and not reported, leading to an imbalance in the information that is disseminated to other researchers, journalists, public health bodies and thus the population as a whole. Patterns of research funding change according to the political priorities of the day, and therefore so does the scientific information that reaches the public.


The difficulty with trying to establish what we should be eating to maintain our health is that the issues involved are often highly complex and difficult to compress into a clear message that the layperson can understand. Food is a highly emotive issue for political and personal reasons. To the majority of us there seem to be too many experts with too many opinions, and it is hard to know who or what to believe. I am as confused as anybody and can offer no authoritative opinion on the pros and cons of particular nutrients, or suggest what we should be eating, but over the next few months I will be looking at the major components of our food - our sources of energy: fat in October, carbohydrate in November and protein in December - and will attempt to explain some of the issues and differing opinions around them. Watch this space!

 
