Breakout session 7

“Good science/bad science: THR research—unanswered questions”

The title of the seventh breakout session at the Global Tobacco & Nicotine Forum in London in September, “Good science/bad science: THR research—unanswered questions,” implied that it was accepted that there was something called “bad science”—that bad science wasn’t simply an oxymoron. Such acceptance might be okay if the word “bad” is taken to imply that some science is undertaken with flawed methodologies. We all make mistakes. However, it is not okay if the word “bad” is taken to imply that some science is undertaken carelessly or, especially, with the intention of producing results that are misleading.

But it gets worse. One of the panelists mentioned how research findings could go unchallenged once published. Clearly outraged, he told how a paper reporting research that had attempted to replicate, but ultimately contradicted, the results of a previous study was turned down for publication by the journal that had published the original paper, on the grounds that it was an insult to the original paper. It makes you wonder how the journal defines science—good or bad?

The problem here is that the woman and man in the street rely on scientists to challenge the work of other scientists because they cannot do it themselves. Katy Guest, writing in The Guardian at the end of September, quoted Hannah Fry (from Fry’s book Hello World: How to Be Human in the Age of the Machine) as saying something that seemed to have particular resonance within the science of tobacco harm reduction: “We have a tendency to over-trust anything we don’t understand. And if we don’t understand it, those difficult questions will be answered by those who do—pharmaceutical companies, malign governments and the like.”

And, as the session was told, the woman and man on the street are bombarded every day with science-based but often-misleading stories churned out by the big press offices of the scientific journals because the more media coverage a paper attracts, the more citations it earns and the higher its impact. And it is easy to tailor those press releases because the woman and man on the street are more likely to believe ideas that are close to what they already believe. At the same time, though, they are liable to draw a lot of wrong conclusions because they are good at fooling themselves—and others. And it seems that the most commonly recognized fallacy—A follows B, so A must have been caused by B—is also the fallacy that most often fools people.

The discussion was scary in places because just about all the links of the chain joining the science of healthcare—and, particularly, tobacco harm reduction—to its application seemed to be weak, or at least liable to weaknesses. Starting with the World Health Organization, tobacco harm reduction strategy was said to be dictated by the precautionary principle, which raised concerns because, while it was acceptable to take certain actions before scientific certainty had been achieved, there had to be at least some evidence of risk or harm. Such a strategy could not be based on the premise that something might go wrong in the future.

Delegates were told that much of the funding for research into tobacco harm reduction went to studies looking at—dare one say, looking for?—problems rather than benefits. Some research was poorly designed and/or poorly conducted, and the results were misinterpreted. Replication rates in certain types of research were poor, while statistical paradigms, which could be useful, could not be relied upon to reveal the truth.

Abstracts, on which some 90 percent of readers relied entirely for information about scientific research, sometimes included claims that were not supported by the main paper. In one example, an abstract had suggested a causal link when the main paper had specifically said that it was not possible to identify such a link from the research. Indeed, it seems that the evidence needed to draw conclusions about causal relationships is generally poorly understood.

Delegates were told that, increasingly, papers and abstracts were including policy recommendations when scientists had, as a rule, little more than a rudimentary understanding of rulemaking, policymaking and regulation. And policy makers were being influenced disproportionately by the results of observational studies—a form of scientific reasoning that moves from observation to theory and is logically suspect. Media reports—especially headlines—of scientific findings were often inaccurate, and even some doctors struggled to understand the science underpinning the information they tried to convey to patients.

It wasn’t all bad news, however. Sure, the amount of junk science being generated was increasing and being magnified by the media, but the good news was that the same mistakes were being made time and time again, and these could be fixed. At the micro level, for instance, researchers using nonhuman animals could take on board that a human being weighs about 3,000 times what a mouse weighs; and, at the macro level, research protocols that do not currently exist could be agreed upon, provided the right international forum were set up to develop them. Studies were increasingly being pre-registered, and raw data was being published to allow findings to be replicated or contradicted.

And while it was said that it would take decades of research to fully quantify the effect of lower-risk products, it was pointed out that the same applied to any product on the market. When statins first became available about 30 years ago, for instance, it was not known what would happen in the long term to the people prescribed them, but these drugs were put on the market anyway.