CAMPAIGN '08: RACE FOR THE WHITE HOUSE : Q&A

The art and science of taking polls

Differing results have to do with polling's inherent nature, and with factors unique to this campaign.

October 31, 2008|Mark Z. Barabak | Barabak is a Times staff writer.

Every day dozens of polls on the presidential race are published, reporting voter sentiment nationally and in key states. Depending on the numbers, either Barack Obama is headed for an electoral vote landslide Tuesday, or John McCain has a shot at yet another come-from-behind victory.

Obviously, both can't happen, which suggests that at least some polls are askew. Take, for instance, Nevada, a state that Obama hopes to win as part of a Democratic incursion into the conservative-leaning Rocky Mountain West. One poll this week put the Illinois senator's lead there at 12 percentage points. Another gave Obama a 10-point advantage and still another a 7-point lead. One said Obama's lead was 5 points and two others said 4, meaning Sen. McCain of Arizona could actually be ahead slightly, given the margin of sampling error.

Why such a big difference in polls conducted in the same state over roughly the same period of time?

There are several reasons, some having to do with the inherent nature of polling, others with factors unique to this highly unusual presidential campaign, which has given fits to even the most experienced pollsters.

Opinion surveys are based on statistical probabilities. The idea is that by interviewing a representative sample of voters, pollsters will achieve the same result as if they had interviewed every voter in a given area.

Though some are skeptical of that fundamental premise, the pioneering George Gallup had a ready retort: "An accurate blood test requires only a few drops of blood." In other words, a pollster can attain a reasonably accurate gauge of how 100 million or more Americans will behave on election day by conducting a scientific sampling of about 1,200 or so voters.
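The arithmetic behind that claim can be checked with the standard formula for sampling error at 95% confidence, roughly 1.96 × √(p(1−p)/n). The formula and the worst-case 50/50 split are textbook statistics, not something the pollsters quoted here spelled out; a minimal sketch:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of sampling error for a proportion p
    estimated from a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of about 1,200 voters yields roughly a
# +/- 2.8-percentage-point margin, regardless of whether the
# full electorate is 100 million people or 10,000.
print(round(100 * margin_of_error(1200), 1))  # → 2.8
```

Note that the population size never appears in the formula: like Gallup's blood test, the precision depends almost entirely on the size of the sample, not the size of the electorate.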

But there are any number of reasons that polls come up with varying results. Sometimes questions are worded differently, or posed in a different order. There are also different ways of choosing whom to sample. Some polls, such as the Los Angeles Times Poll, will talk to individuals at random. Others work off lists of registered voters.

The age, gender or ethnicity of the person asking the questions can affect the response. For that reason, some pollsters employ interactive technology, using a recorded voice or the Web. Others, however, frown on the practice because there is no way to know whether the respondent is a voter or their 6-year-old child. Any and all of those factors can cause results to differ.

So how do pollsters know they are interviewing a representative sample of voters?

That's where art and science come together. A pollster will attempt to determine who among those interviewed are the most likely to vote in the election. This year it's especially tough to define a "likely voter," given Obama's particular appeal to black voters and young people, two groups that typically fail to vote in numbers commensurate with their share of the population. Moreover, minorities and young people are especially hard to reach, given their mobility, their tendency to work odd hours and their preference for cellphones.

Different pollsters have different ways of determining whom they consider a "likely voter." That accounts for the biggest variation among samples. For instance, one recent national survey that showed the race neck-and-neck included a large number of evangelical Christians -- too many, in the judgment of some pollsters -- which improved McCain's performance and narrowed the gap with Obama. On the other hand, McCain's camp says many polls are overstating the projected turnout of black and younger voters, to the detriment of the GOP nominee.

What else is important in assessing polls?

Timing is crucial. Polls taken before or after a significant event can vary considerably. A survey on voters' concerns about terrorism would have undoubtedly yielded very different results depending on whether it was taken in the days leading up to or just after Sept. 11, 2001.

Although no event of that magnitude has occurred this year, there have been several developments -- such as the selection of the candidates' running mates, the two major-party conventions, the presidential debates and the crisis on Wall Street -- that affected public opinion, especially in the short term. When looking at polls, it's important to compare surveys conducted over roughly the same time frame.

What do pollsters mean when they talk about "a margin of error"?

Because they are not talking to every single voter, pollsters recognize there is a certain squishiness in their numbers. This "sampling error" is measurable, based on a standard statistical calculation. (Rule of thumb: The bigger the sample size, the smaller the margin of error.)
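That rule of thumb follows from the same worst-case formula, in which the margin shrinks with the square root of the sample size. A short illustration (the sample sizes here are chosen for the example, not drawn from any poll in the article):

```python
import math

def moe(n, p=0.5, z=1.96):
    # Worst-case (p = 0.5) sampling error at 95% confidence.
    return z * math.sqrt(p * (1 - p) / n)

for n in (300, 1200, 4800):
    print(n, round(100 * moe(n), 1))
# 300  → ~5.7 points, 1200 → ~2.8, 4800 → ~1.4:
# quadrupling the sample only halves the margin of error.
```

The square-root relationship is why pollsters rarely go far beyond samples of a thousand or so: cutting the margin in half costs four times as many interviews.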
