
Studies in Confusion

Knowing what constitutes good research can help consumers evaluate conflicting reports and claims that sound too good to be true.


One week medical researchers report beta carotene prevents cancer. Then they say it may cause cancer. Or we hear fiber is good, so we dutifully load up on oatmeal and green vegetables. Then we're told that maybe it's not so good. The list goes on and on--it's enough to give you mental whiplash.

So how can consumers, especially those with chronic or serious ailments, sort through this contradictory data in order to make informed decisions about their health care?

"It's very difficult," says Michele Rakoff, a breast cancer survivor who directs a peer support mentoring program at Long Beach Memorial Hospital in Long Beach. "Going through having breast cancer is frightening. Then, there's a news flash on TV about a new cancer cure, and desperate women start calling their doctors. When a closer look reveals it's no big deal, everyone feels let down."

It can be disheartening even for people who aren't in the midst of a medical crisis and just want to stay healthy. Part of the problem is that most Americans get their health information secondhand, through television, newspapers and magazines, according to a 2000 survey conducted by the Kaiser Family Foundation. In this era of the instant news cycle and the pithy sound bite, preliminary studies are often inflated into major breakthroughs, and the subtle nuances and caveats that put findings into context are glossed over or lost.

Correction, Los Angeles Times, Tuesday May 8, 2001, Home Edition, Part A, Page 2:
Medical consortium--An item about the Cochrane Collaboration in the April 30 Health section contained some incorrect information. The consortium, which analyzes medical evidence, is made up of groups from around the world, including Asia and Africa. The correct phone number for the group's San Francisco center is (415) 502-8227.

No wonder there's so much confusion. Unfortunately, this constant waffling "breeds distrust of the traditional channels of communication, making some people vulnerable to quack cures, or 'X-Files'-style theories about diseases," says David W. Murray, director of the Statistical Assessment Service, a nonprofit medical science think tank in Washington, D.C. "The public's interpretation of what went wrong is how these medical conspiracy stories get started: 'There really is a cure for cancer, but people have been bought off,' or 'We know something, but they won't let us tell you.' "

With a little effort, though, consumers can separate the hope from the hype. The key is to understand how medical research works and the different levels of evidence, so you can gauge how much weight a given study deserves.

"The public is looking for magic bullets and believes that when a study is done it means we have the answer once and for all," says Dr. Linda Rosenstock, dean of the UCLA School of Public Health. "But the reality is that science proceeds in a series of steps that don't always go in the same direction. And results can be overstated or oversimplified."

Learning to Ask the Right Questions

The gold standard in medical research is the randomized double-blind, placebo-controlled clinical trial. What that means in plain English is that half the people in the study are selected at random to get the new drug or treatment, and the other half get a dummy pill, or placebo. The study is blinded so no one--not even the researchers--knows who is getting the real McCoy.

Such studies avoid biases that might skew the results. Otherwise, researchers may subconsciously treat subjects differently, or participants might have such a strong belief that a treatment works that they'll feel better even if it's not effective.

Ideally, when the study is finished, researchers can see whether the people getting the therapy fared better than those in the control group. This is how virtually all new drugs are tested by pharmaceutical companies seeking Food and Drug Administration clearance to market them in the United States.

Still, even seemingly well-executed research can have inherent flaws--and a host of questions needs to be answered before accepting results as gospel. How many people were in the study? How long did it last? If there were only 50 people over a six-month period, that's not enough time or study subjects to prove effectiveness. Who conducted the research? Research from scientists affiliated with reputable academic institutions is usually more reliable. Where was the study published? Articles in major journals tend to be more rigorously scrutinized.
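The point about sample size can be made concrete with a small simulation. The sketch below (with hypothetical numbers: a 30% baseline improvement rate and 25 subjects per arm, roughly the 50-person study mentioned above) repeatedly runs a placebo-controlled trial of a treatment that has no real effect, and counts how often the treated group still looks noticeably better purely by chance:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def run_trial(n_per_arm, effect=0.0, base_rate=0.30):
    """Simulate one placebo-controlled trial: each subject
    'improves' at the base rate plus any true treatment effect."""
    treated = sum(random.random() < base_rate + effect for _ in range(n_per_arm))
    placebo = sum(random.random() < base_rate for _ in range(n_per_arm))
    return treated, placebo

# Run 1,000 tiny trials of a treatment with ZERO real effect and
# count how often the treated arm looks at least 20% better than
# the placebo arm anyway, just from random variation.
trials = 1000
lucky = 0
for _ in range(trials):
    treated, placebo = run_trial(25)
    if placebo > 0 and treated >= placebo * 1.2:
        lucky += 1

print(f"{lucky} of {trials} no-effect trials looked 'promising' by chance")
```

With arms this small, a substantial fraction of do-nothing treatments come out looking promising; larger samples and longer follow-up shrink that fraction, which is why the questions above matter.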

There's such a thing as publication bias, too. "The major medical journals, drug companies and scientists themselves tend not to publish negative results," says Kay Dickersin, an associate professor in the Brown University School of Medicine in Providence, R.I. So we often only hear about the exciting "breakthroughs," and not the follow-up studies where the treatments turned out to be duds.
