An Epidemic of False Claims

Competition and conflicts of interest distort too many medical findings

False positives and exaggerated results in peer-reviewed scientific studies have reached epidemic proportions in recent years. The problem is rampant in economics, the social sciences and even the natural sciences, but it is particularly egregious in biomedicine. Many studies claiming that some drug or treatment is beneficial have turned out to be wrong. We need only look to conflicting findings about beta-carotene, vitamin E, hormone treatments, Vioxx and Avandia. Even when effects are genuine, their true magnitude is often smaller than originally claimed.

The problem begins with the public’s rising expectations of science. Being human, scientists are tempted to show that they know more than they do. The number of investigators—and the number of experiments, observations and analyses they produce—has also increased exponentially in many fields, but adequate safeguards against bias are lacking. Research is fragmented, competition is fierce and emphasis is often given to single studies instead of the big picture.

Much research is conducted for reasons other than the pursuit of truth. Conflicts of interest abound, and they influence outcomes. In health care, research is often performed at the behest of companies that have a large financial stake in the results. Even for academics, success often hinges on publishing positive findings. The oligopoly of high-impact journals also has a distorting effect on funding, academic careers and market shares. Industry tailors research agendas to suit its needs, which also shapes academic priorities, journal revenue and even public funding.


The crisis should not shake confidence in the scientific method. The ability to prove something false continues to be a hallmark of science. But scientists need to improve the way they do their research and how they disseminate evidence.

First, we must routinely demand robust and extensive external validation—in the form of additional studies—for any report that claims to have found something new. Many fields pay little attention to the need for replication or do it sparingly and haphazardly. Second, scientific reports should take into account the number of analyses that have been conducted, which would help weed out false positives. Of course, that would mean some valid claims might get overlooked. Here is where large international collaborations may be indispensable. Human-genome epidemiology has recently had a good track record because several large-scale consortia rigorously validate genetic risk factors.
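To see why the number of analyses matters, consider a rough, hypothetical illustration: if every analysis uses the conventional 5 percent significance threshold, the chance of at least one spurious "discovery" climbs quickly as analyses accumulate, and a simple adjustment such as Bonferroni's (dividing the threshold by the number of tests) pulls it back down. The short sketch below, with made-up numbers of analyses, makes the arithmetic explicit.

    # Rough illustration with hypothetical numbers of analyses: how quickly
    # false positives accumulate across many tests, and how a Bonferroni-style
    # correction counteracts the inflation. Assumes independent tests with all
    # null hypotheses true, purely for the sake of the arithmetic.
    alpha = 0.05  # conventional per-analysis significance threshold

    for n_analyses in (1, 20, 100):
        # Chance of at least one spurious "positive" finding at the usual threshold
        unadjusted = 1 - (1 - alpha) ** n_analyses
        # Bonferroni correction: test each hypothesis at alpha / n_analyses instead
        corrected = 1 - (1 - alpha / n_analyses) ** n_analyses
        print(f"{n_analyses:3d} analyses: {unadjusted:.2f} chance of a false "
              f"positive unadjusted, {corrected:.2f} after correction")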

The best way to ensure that test results are verified would be for scientists to register their detailed experimental protocols before starting their research and disclose full results and data when the research is done. At the moment, results are often selectively reported, emphasizing the most exciting among them, and outsiders frequently do not have access to what they need to replicate studies. Journals and funding agencies should strongly encourage full public availability of all data and analytical methods for each published paper. It would help, too, if scientists stated up front the limitations of their data or inherent flaws in their study designs. Likewise, scientists and sponsors should be thorough in disclosing all potential conflicts of interest.

Some fields have adopted one or several of these mechanisms. Large international consortia are becoming commonplace in epidemiology; journals such as Annals of Internal Medicine and the Journal of the American Medical Association instruct authors to address study limitations; and many journals ask about conflicts of interest. Applying the measures widely won’t be easy, however.

Many scientists engaged in high-stakes research will refuse to make thorough disclosures. More important, much essential research has already been abandoned to the pharmaceutical and biomedical device industries, which may sometimes design and report studies in ways most favorable to their products. This is an embarrassment. Evidence-based clinical and population research, for instance, deserves increased investment and should be designed not by industry but by scientists free of material conflicts of interest.

Eventually, findings that bear on treatment decisions and policies should come with a disclosure of any uncertainty that surrounds them. It is fully acceptable for patients and physicians to follow a treatment based on information that has, say, only a 1 percent chance of being correct. But we must be realistic about the odds.