The Cellar  

The Cellar > Main > Politics

Politics: Where we learn not to think less of others who don't share our views

 
Old 02-14-2009, 10:31 AM   #151
TGRR
Horrible Bastard
 
Join Date: Feb 2009
Location: High Desert, Arizona
Posts: 1,103
Quote:
Originally Posted by TheMercenary View Post
Fail. Widely known as the weakest forms of statistical measure.
Keep on digging. You'll get out of that hole someday.
Old 02-14-2009, 10:22 PM   #152
TheMercenary
“Hypocrisy: prejudice with a halo”
 
Join Date: Mar 2007
Location: Savannah, Georgia
Posts: 21,393
Recognizing the Impact of Statistics in Polls
A survey is an instrument that collects data through questions and answers and is used to gather information about the opinions, behaviors, demographics, lifestyles, and other reportable characteristics of the population of interest. What's the difference between a poll and a survey? Statisticians don't make a clear distinction between the two, but what people call a poll is typically a short survey containing only a few questions (maybe that's how researchers get more people to respond — they call it a poll rather than a survey!). But for all intents and purposes, surveys and polls are the same thing.

You come into contact with surveys and their results on a daily basis. Surveys even have their own television program: The game show Family Feud is completely based on surveys and the ability of the contestants to list the top answers that people provided on a survey. Contestants on this show must correctly identify the answers provided by respondents to survey questions such as, "Name an animal you may see at the zoo" or "Name a famous person named John."

Compared to other types of studies, such as medical experiments, surveys are relatively easy to conduct and aren't as expensive to carry out. They provide quick results that can often make interesting headlines in newspapers or eye-catching stories in magazines. People connect with surveys because they feel that survey results represent the opinions of people just like themselves (even though they may never have been asked to participate in a survey). And many people enjoy seeing how other people feel, what they do, where they go, and what they care about. Looking at survey results makes people feel connected with a bigger group, somehow. That's what pollsters (the people who conduct surveys) bank on, and that's why they spend so much time doing surveys and polls and reporting the results of this research.

Getting to the source
Who conducts surveys these days? Pretty much anyone and everyone who has a question to ask. Some of the groups that conduct polls and report the results include:

News organizations (for example, ABC News, CNN, Reuters)
Political parties (those in office and those trying to get into office)
Professional polling organizations (such as The Gallup Organization, The Harris Poll, Zogby International, and so on)
Representatives of magazines, TV shows, and radio programs
Professional organizations (such as the American Medical Association, which often conducts surveys of its membership)
Special-interest groups (such as the National Rifle Association)
Academic researchers (who conduct studies on a huge range of topics)
The U.S. government (which conducts the American Community Survey, the Crime Victimization Survey, and numerous other surveys through the Census Bureau)
Joe Public (who can easily conduct his own survey on the Internet)
Not everyone who conducts a poll is legitimate and trustworthy, so be sure to check the source of any survey in which you're asked to participate and for which you're given results. Groups that have a special interest in the results should either hire an independent organization to conduct (or at least to review) the survey, or they should offer copies of the survey questions to the public. Groups should also discuss in detail how the survey was designed and conducted, so that you can make an informed decision about the credibility of the results.

Surveying what's hot
The topics of many surveys are driven by current events, issues, and areas of interest; after all, timeliness and relevance to the public are two of the most attractive qualities of any survey. Here are just a few examples of some of the subjects being brought to the surface by today's surveys, along with some of the results being reported:

Does celebrity activism influence the political opinions of the American public? (Over 90% of the American public says no, according to CBS News.)
What percentage of Americans have dated someone online? (Only 6% of unmarried Internet users, according to CBS News.)
Is pain something that lots of Americans have to deal with? (According to CBS News, three-quarters of people under 50 suffer pain often or at least some of the time.)
How many people surf the Web to find health-related information? (About 98 million, according to The Harris Poll.)
What's the current level of investor optimism? (According to a survey by The Gallup Organization, it should be called investor pessimism.)
What was the worst car of the millennium? (The Yugo, according to listeners of the NPR radio show Car Talk.)
When you read the preceding survey results, do you find yourself thinking about what they mean to you, rather than first asking whether they're valid? Some of these results are more valid and accurate than others; question them before accepting them at face value.

Making an impact on lives
Whereas some surveys are fun to look at and think about, other surveys can have a direct impact on your life or your workplace. These life-decision surveys need to be closely scrutinized before action is taken or important decisions are made. Surveys at this level can cause politicians to change or create new laws, motivate researchers to work on the latest problems, encourage manufacturers to invent new products or change business policies and practices, and influence people's behavior and ways of thinking. The following are some examples of recent survey results that can impact you:

Teens drive under the influence: A recent Reuters survey of 1,119 teenagers in Ontario, Canada, from grades 7 through 13 found that, at some point during the previous year, 15% of them had driven a car after consuming at least two drinks.
Children's health care suffers: A survey of 400 pediatricians by the Children's National Medical Center in Washington, D.C., reported that pediatricians spend, on average, only 8 to 12 minutes with each patient.
Crimes go unreported: According to the U.S. Bureau of Justice 2001 Crime Victimization Survey, only 49.4% of violent crimes were reported to police. The reasons victims gave for not reporting crimes to the police are listed in Table 1.

http://www.dummies.com/how-to/conten...-in-polls.html

Follow the links:

http://www.dummies.com/how-to/conten...ntire-pop.html

http://www.dummies.com/how-to/conten...tatistics.html
__________________
Anyone but this most fuked up President in History in 2012!
Old 02-14-2009, 10:31 PM   #153
TheMercenary
“Hypocrisy: prejudice with a halo”
 
Join Date: Mar 2007
Location: Savannah, Georgia
Posts: 21,393
Page 8

http://books.google.com/books?id=il9...lt#PRA3-PA8,M1
Old 02-14-2009, 10:33 PM   #154
TheMercenary
“Hypocrisy: prejudice with a halo”
 
Join Date: Mar 2007
Location: Savannah, Georgia
Posts: 21,393
Surveys tend to be weak on validity and strong on reliability. The artificiality of the survey format puts a strain on validity. Since people's real feelings are hard to grasp in terms of such dichotomies as "agree/disagree," "support/oppose," "like/dislike," etc., these are only approximate indicators of what we have in mind when we create the questions. Reliability, on the other hand, is a clearer matter. Survey research presents all subjects with a standardized stimulus, and so goes a long way toward eliminating unreliability in the researcher's observations. Careful wording, format, content, etc. can reduce significantly the subject's own unreliability.

http://writing.colostate.edu/guides/...vey/com2d2.cfm
Old 02-14-2009, 10:37 PM   #155
TheMercenary
“Hypocrisy: prejudice with a halo”
 
Join Date: Mar 2007
Location: Savannah, Georgia
Posts: 21,393
How Many Subjects Do I Need for a Statistically Valid Survey?

by Daryle Gardner-Bonneau, Ph.D.
Office of Research
Michigan State University/Kalamazoo Center for Medical Studies
Reprinted from Usability Interface, Vol 5, No. 1, July 1998

Beware of people who give quick, pat answers in response to the question - "I’m doing a survey. How many subjects do I need?" They probably haven’t a clue as to what they’re talking about.

There aren’t any valid quick answers to this question. I work in the medical domain and advise faculty/residents/medical students on sample size determination for survey research studies all the time because, in medicine, survey results are often discounted and are not publishable unless you can support/validate the decision you made regarding sample size. We do this through power analysis, and except for the simplest power analyses, it's good to have the advice and assistance of a statistician.

That said, I can tell you how we generally approach the problem for surveys and what information a statistician needs to do a power analysis to determine sample size.

Usually, surveys involve a number of hypotheses. You do a power analysis and get a sample size estimate with respect to each hypothesis, but I usually ask folks to give me the two or three most important survey questions or, more specifically, hypotheses, they want to explore. We do power analyses for those, get a sample size estimate for each one, and from there make a decision as to the sample size for the survey as a whole.

Here's an example to give you some idea of what your statistician needs to know to determine the sample size for a survey. Let's say you're looking for a difference in patient satisfaction between two departments in a hospital - obstetrics and cardiology - and in your survey patients are asked to rate their satisfaction on a scale from 1 to 100. To determine how many patients to sample, the statistician needs information/estimates with respect to the following questions:

1. What do you consider an "important" difference in satisfaction ratings that you'd like to be able to detect between the two departments (e.g., 10 points? 20 points?)?

2. What do you think the variability is in satisfaction ratings?
Note: This might be a tough question to answer, and in the absence of any data you may have to guess. But what you might use, for example, is the standard deviation of ratings in the last survey of patient satisfaction you did, unless there was something more specific available.

3. What is, in your mind, an acceptable probability of an alpha error - an alpha error meaning that you will see a statistically significant difference in the samples, when no difference actually exists in the populations? This is often set by convention at .05.

4. Similarly, what is an acceptable probability for a beta error - that you may NOT find a statistically significant difference between the samples when there actually is a difference in the populations? This is also often estimated by convention as .20, .15, or .10, the first of these being the most common.

If you can answer these four questions, the statistician can then estimate the number of obstetrics and cardiology patients you need to sample. Sometimes, when we're really "iffy" on the answer to a question, we'll run several power analyses, say, with different values for the alpha, beta, and/or the variability estimates just to see how these variables affect the final result (i.e., the sample size estimate). This can be an especially useful exercise when there are tradeoffs that must be considered (e.g., when the cost per survey administered is significant).

One word of caution: The estimate given to you by the statistician is the number of subjects from whom you need valid data. This number is going to be less than the number of people you actually approach with the survey, because some will fail to respond and some may respond inappropriately and their data will not be usable. Referring to the example above, if the statistician tells you that you’ll need 65 cardiology patients and 65 obstetrics patients, and you know, based on past experience, that the non-respondent rate is 25%, you want to send your survey to 88 cardiology patients and 88 obstetrics patients in order to receive 65 responses from each group. Hopefully, if your survey is well-designed, all of the responses you receive will be valid...but that’s another issue.
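The four questions above map directly onto the standard normal-approximation formula for two independent groups, n = 2((z_alpha + z_beta) * sigma / delta)^2, followed by the non-response inflation the caution describes. A minimal sketch in Python; the 10-point difference and 20-point standard deviation are purely illustrative assumptions, not values from the article:

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_groups(delta, sigma, alpha=0.05, beta=0.20):
    """Per-group sample size to detect a mean difference of `delta`,
    assuming standard deviation `sigma`, via the two-sided
    normal-approximation formula n = 2*((z_a + z_b) * sigma / delta)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = .05
    z_beta = z.inv_cdf(1 - beta)        # ~0.84 for beta = .20
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

def adjust_for_nonresponse(n_needed, response_rate):
    """Inflate a required n to cover expected non-respondents."""
    return ceil(n_needed / response_rate)

# Illustrative inputs only: a 10-point "important" difference and a
# 20-point standard deviation of satisfaction ratings.
n = sample_size_two_groups(delta=10, sigma=20)           # 63 per group
to_send = adjust_for_nonresponse(n, response_rate=0.75)  # 84 per group
```

A z-based formula slightly understates the t-test requirement at small n, so the figure a statistician reports (such as the 65 per group in the example) may come out a little higher.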

The rationale is pretty much the same for any power analysis, though I've given you a fairly straightforward and simple example. The calculations can get "hairy" once you have more than two comparison groups, for example, but there are computer programs to help with that, and statisticians generally know this area pretty well.

The best source of information about power analysis and sample size estimation is Jacob Cohen’s book, Statistical Power Analysis for the Behavioral Sciences (Erlbaum). First published in 1969, revised, and published again in a second edition in 1988, this book is still considered the "Bible" among those who do power analysis. A short, highly readable, basic treatment of the subject, which may suffice nicely for the simpler power analysis problems, is found in the book, How Many Subjects? by Helena C. Kraemer and Sue Thiemann (Sage Publications, 1987). Finally, for those who feel confident doing their own power analyses without the guidance of a statistician, there is some excellent software available. nQuery Advisor, from Statistical Solutions Ltd., does a power analysis for almost any research design situation. It costs several hundred dollars, but is certainly worth the price for those who must do these analyses quite often. For more information about nQuery Advisor, contact the company’s Boston office at 1-800-262-1171, or visit their web site (http://www.statsolusa.com).

http://www.stcsig.org/usability/news...ysubjects.html
Old 02-14-2009, 10:43 PM   #156
TheMercenary
“Hypocrisy: prejudice with a halo”
 
Join Date: Mar 2007
Location: Savannah, Georgia
Posts: 21,393
Disadvantages of the online survey

Disadvantages include the inability to accurately monitor who is completing the survey. Paper-based surveys tend to be more controlled, in that the cohort for the survey is normally invited to participate in the research, after which the instrument is sent to them to complete. An online survey, on the other hand, is potentially available to anyone who has access to a particular internet site. The online survey also has the disadvantage that it can only be completed by people who have computer access, although with increasing internet availability this is not the issue it once was. Further, some online surveys may not be appropriate when specific cohorts of respondents are required, for example, in surveying lower socio-economic groups or geographically isolated communities. There is also the assumption that people who complete online surveys have a certain level of computer proficiency and the confidence to first locate the required web site and then complete the instrument. Another potential problem lies in identifying an appropriate cohort and the most appropriate way to advise respondents on how to complete the survey. In these days of spam email, people can become cynical about 'another survey' and disregard it, potentially biasing the results obtained.

http://ausweb.scu.edu.au/aw04/papers...ton/paper.html

For the record I never answer truthfully in either on-line polls or telephone polls.
Old 02-14-2009, 10:46 PM   #157
TheMercenary
“Hypocrisy: prejudice with a halo”
 
Join Date: Mar 2007
Location: Savannah, Georgia
Posts: 21,393
The World of Statistics is not so Cut and Dried
People are bombarded by statistics on a daily basis and many fall for them, hook, line and sinker. Why? Because most people do not understand how statistics work.

Funny thing about statistics...they are usually very biased, slanted or give a poor representation of the real world and most people's situations.

See, I took a few statistics classes in college and they taught us some interesting things...

Statistics LIE

People LIE about statistics

Statistics are taken from a SAMPLE demographic of usually 1200 people (and there are how many people in the US??? A little disproportionate, don't ya think?)

There are books about how to lie with statistics, including the book "How to Lie with Statistics" (Darrell Huff)

Statistics take for granted that there is a "typical" price or behavior, but, remember, it comes from a SAMPLE of 1,200 - does it apply to you? Your odds of even being in that sample are about 1 in 250,000 (based on US population of 301,139,947 as supplied by CIA World Fact Book - July 2007)
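For context, the "3%" usually quoted alongside samples of about 1,200 is the poll's margin of error, which depends on the sample size, not on the population size. A quick sketch of the standard calculation (normal approximation, worst-case p = 0.5, 95% confidence):

```python
from math import sqrt
from statistics import NormalDist

def margin_of_error(n, p=0.5, confidence=0.95):
    """Half-width of the normal-approximation confidence interval for a
    proportion p estimated from a simple random sample of size n."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return z * sqrt(p * (1 - p) / n)

# A sample of 1,200 carries roughly a +/- 2.8-point margin of error at
# 95% confidence, whether the population is 300 thousand or 300 million.
print(round(100 * margin_of_error(1200), 1))  # 2.8
```

This is why professional pollsters consider 1,000-1,500 respondents sufficient for national polls, provided the sample is randomly drawn.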

Statistics often leave out pertinent information

For instance, a scholarly journal in 1995 stated that "every year since 1950 the number of American children gunned down has doubled." (The author claimed that the statistic came from the Children's Defense Fund.)

Hmmm, let us take that into consideration for a moment. Suppose just 1 child was gunned down in 1950. In 1951, the number of children gunned down would have been 2. In 1952, the number would have been 4 (remember, we are doubling!) and so on. Well, by 1965, it would have been 32,768 children gunned down, but in 1965, the FBI only identified 9,960 criminal homicides in all of the US, including adult and child victims combined. To jump to the chase, by 1995, when this article was published, the annual number of children gunned down would have been more than 35 trillion. Where are all of these extra children? And why haven't we heard of this mass gunning down of children that escalates so exponentially?

Because it is a part of flawed statistics and an assertion to my point that people use statistics to twist "facts" to suit their soap box rally or rant at the time. It is usually very weak when you look at the specifics - and this includes governmental and non profit statistics.

In truth, the Children's Defense Fund did indeed publish a statistic regarding children being gunned down. In The State of America's Children Yearbook - 1994, it was stated, "The number of American children killed each year by guns has doubled since 1950."

Difference in wording equals different meaning. It is just a matter of twisting the statistics to suit your needs.
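The arithmetic behind that debunking is easy to reproduce: one child in 1950, doubling every year, implies 2^(year - 1950) deaths.

```python
def implied_deaths(year, base_year=1950, base_count=1):
    """Count implied by 'doubling every year since 1950', starting
    from a single death in the base year."""
    return base_count * 2 ** (year - base_year)

print(implied_deaths(1965))  # 32768, already above the FBI's 9,960
                             # total criminal homicides for 1965
print(implied_deaths(1995))  # 35184372088832, i.e. about 35 trillion
```

Exponential claims like "doubled every year" blow past reality within a couple of decades, which is exactly why this sanity check works.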

So, how do you make sure that YOU do not fall for faulty statistics?

1. Be wary of statistics spouters who fail to direct you to the exact location where you can view the statistics for yourself. (book, article, journal, link)

2. READ the fine print that explains the sample (for polls and certain statistics) and situation, including environment, geographical region, ages, etc.

3. Don't believe everything that you read. If your source does not have information to divulge the conditions of the survey or poll, does not provide a link to how the poll or survey was conducted or other pertinent information that allows you to discern the validity of the poll or survey, disregard it as bunk.

4. Ask these questions:

Where did the data originate from?

Who conducted the survey?

Does the administrator of the survey have an ulterior motive for slanting the results in a particular direction?

How was the data collected?

What questions were asked?

How were the questions asked?

Who asked the questions?

5. You should be careful of comparisons. When two things happen at the same time, it does not mean that the two things are necessarily related. This is basic logic, but many people who are not skilled in logic and statistics rush to put, what they erroneously think, are two and two together. It ain't equaling four, that is for sure! Many people use this slanting of information to "prove" their point. Politicians are famous for this, but people who are desperately grasping at straws to substantiate an argument are very guilty of it as well.

6. Watch for numbers that are taken out of context. Affectionately referred to as "cherry picking," this slanting refers to adjusting the analysis so that it concentrates solely on the data that supports a specific claim and ignores or shuts out everything else. In other words, certain "facts" that suit the person's claim are "cherry picked" or selected while other pertinent information is swept under the rug.

One of the primary things that statistics teaches us is that "the average" is not a person. Rank pet owners by responsibility and half of them must fall at or below the median, however you define the scale; changing the definition does not change the fact that every distribution has a bottom half.

This, in turn, leads us to the next issue people have with interpreting statistics: they want to make the statistics fit the normal distribution. This is significantly flawed, because many real-world distributions are non-normal, and methods designed for normal distributions are usually not appropriate for distributions that are blatantly non-normal.

Finally, most people do not understand the terminology behind statistics. They assume that "average" always refers to the mean. In statistics, however, "average" is an umbrella term for any number that typifies a set of numbers; depending on context, it can refer to the mean, the median, or the mode.

So, what do these terms mean?

Mean - the arithmetic average: the sum of the values divided by the number of values

Median - the middle value of a distribution

Mode - the value or item occurring most frequently in a series of observations or statistical data

These two examples will aid in understanding this:

Set 1

2, 5, 5, 6, 9, 12, 13

Mean - 7.43 (typifies the set - add the numbers, divide by 7)

Median - 6 (is the number directly in the middle of the distribution)

Mode -5 (occurs most frequently in the set)

Set 2

4, 5, 5, 5, 8, 12, 86

Mean - 17.857

Median - 5

Mode - 5
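Both sets can be verified with Python's statistics module; the mean of Set 1 works out to 52/7, roughly 7.43, and Set 2 shows how a single outlier separates the mean from the median:

```python
from statistics import mean, median, mode

set1 = [2, 5, 5, 6, 9, 12, 13]
set2 = [4, 5, 5, 5, 8, 12, 86]

for name, data in (("Set 1", set1), ("Set 2", set2)):
    print(name, round(mean(data), 2), median(data), mode(data))
# Set 1 -> 7.43 6 5    (mean, median, and mode sit close together)
# Set 2 -> 17.86 5 5   (the outlier 86 drags the mean far above the median)
```

This is why "average income" reported as a mean can be pulled upward by a few large values while the median barely moves.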

Mark Twain, crediting Benjamin Disraeli, popularized the line, "There are three kinds of lies: lies, damned lies and statistics." Numbers are provocative and have a certain power, but in the wrong, uneducated hands they are nothing more than a mess. Even accurate statistics can be used to strengthen inaccurate arguments, and honesty and accuracy are compromised as a result.

http://hubpages.com/hub/Understanding_Statistics
Old 02-14-2009, 10:47 PM   #158
TheMercenary
“Hypocrisy: prejudice with a halo”
 
Join Date: Mar 2007
Location: Savannah, Georgia
Posts: 21,393
Any Poll Can Be Manipulated To Support An Agenda
by Ed Garvey

There are lies, damned lies and then there are statistics. That old saw comes to mind every time I hear about another poll. Polls have become a wonderful substitute for serious analysis of current events. Who cares about social justice, women's rights, tuition rates or civil liberties if news people can report the percentage of the American people on one side or the other? Poll results are almost as compelling as crime on the late news.
Even more interesting, those reporting on the poll results never spend a minute telling the audience about the bias of the group releasing the poll, their experience or track record or how they are being used. Polls have become an incredibly effective weapon in the hands of spin doctors. If they can show that a big majority supports their position, the media need only report that the verdict is in, the results are clear and there is no need to examine the underlying premise. Bush wins Florida.

And polls taken privately guide our elected leaders. That is a certainty. Bill Clinton never took a position without hearing from his pollster and Bush is worse. The motto of Al From and his Democratic Leadership Council is to ride with the majority on all issues. "Why fight it?" is their guiding principle. If the majority of the nation supports NAFTA, get behind it.

And so it goes. Polls guide politicians and polls make the radio and television reporter's job easy. They are the modern-day oracle at Delphi, the elders discussing great issues in the temple, or political parties adopting and following platforms. (Ah, if Socrates had only listened to his pollster.) No need for a static party platform. After all, if a plank is adopted in June, public opinion could change by August so why get stuck with an old poll?

Knowing the significance of polls, those in power use their ideological and economic partners to shape opinion in advance of a poll to help pass legislation or detain people who don't look like the majority. Most polls don't just "happen." They are issued by some group with an ax to grind. When do they take the poll and when do they release it? Are there any rules? Will the right-wing Bradley Foundation front calling itself the Policy Institute issue poll results showing that the vast majority of Wisconsinites oppose school vouchers for religious schools? Hardly, when the now-departed Michael Joyce and his compliant board gave millions in support of this privatization effort. The institute releases results only when they support its agenda.

If the institute finds that time after time the majority opposes vouchers, then it will ask about the failures of public schools, thereby advancing the agenda through the back door. Then the "nonprofit Wisconsin Policy Institute" releases a poll showing that an alarming number of citizens are unhappy with public schools. What to do? To get started, how about an experiment with vouchers in Milwaukee for poor African-American children? Ease people into this privatization effort and before they wake up, public schools will be reserved for those very poor African-American kids used to justify their efforts. It is that simple.

What questions are asked? Is the University of Wisconsin School of Journalism contacted? Of course not. Independent experts are not asked for advice because a poll is not about objective information, it is about propaganda.

Polls have become as dangerous as 30-second advertising spots for candidates. Now, 30-second spots are used to "educate" the electorate based on private polls that inform the perpetrator which buttons to push to move large numbers of people to one side of an issue. Then a later poll can be released showing the new opinion and thus convincing the people that the leaders are following them. How local newscasters will respond is a given.

All any group with an agenda has to do is make sure it gets the results right. But that's easy. If it crafts the message to get the desired response and fails, it has three choices. Kill the results, release altered figures, or go back to more commercials to move the public like Pavlov's dogs to the "correct" position.

It is almost impossible to listen to radio or television without having the incredible message driven into one's brain that 80 percent (or is it 90 percent?) of the American people agree with John Ashcroft's plan to eliminate the Sixth Amendment right to an attorney, to eliminate the Fourth Amendment right against unreasonable search and seizure, and to hold people who look different from Laura and W incommunicado for weeks even after a judge finds there is no evidence against them. We are told day in and day out that there should be no presumption of innocence and that public trials are too messy when fighting the modern day communists called terrorists.

None of the media finds out who asked what questions to whom. No one asks for the underlying data to back up the assertions. No one asks or ever learns how many polls were taken with the results remaining silent. And all this before the fundamental question: "Suppose 80 percent believe that the Bill of Rights should be eliminated. Is that all you need to report or might we discuss, for a moment, what those rights were intended to protect?"

Had Lincoln taken a poll before issuing the Emancipation Proclamation, or had George Washington asked about the percentage of colonists who would be willing to take up arms, or had Martin Luther King Jr. asked what percentage of the American people supported direct action in opposition to segregation laws, we would be a much different nation. John Ashcroft recently declared, "To those who scare peace-loving people with phantoms of lost liberty, my message is this: Your tactics only aid terrorists." To those elected officials who were neither laughing out loud nor silently crying when Ashcroft spoke those words, I say, forget the polls and find your moral compass.

To the media: Stop telling us what we think. Start explaining why we have the Bill of Rights.

http://www.commondreams.org/views01/1212-05.htm
Old 02-14-2009, 10:49 PM   #159
TheMercenary
“Hypocrisy: prejudice with a halo”
 
Join Date: Mar 2007
Location: Savannah, Georgia
Posts: 21,393
HOW POLLS ARE MANIPULATED
Filed under: Media, blogging — admin @ 11:14 am
by Simon Owens

During the Republican National Convention, NOW, a PBS weekly TV news magazine, posted an unscientific poll on its website asking viewers to vote on whether they thought vice presidential nominee Sarah Palin was qualified for the position. Like most polls the show posts every week, it was taken down from the front page and replaced by a new one after gathering a few thousand votes.

But in the weeks after it was removed, someone unearthed the still-present URL for the poll and linked to it at the conservative website, Free Republic. The site has become famous for sending hordes of readers to crash unscientific online polls, so much so that the act of doing so has been termed “freeping.” In this particular instance, members of the Free Republic felt that the poll showed a sign of bias, and the poster linked to it to “provide them with a result they did not expect.”

“Send this email to every non-liberal you know,” the person wrote. “Let’s get some balance into this survey group. This is the easiest vote you will ever make. It takes literally two seconds.”

Predictably, the numbers on the poll in favor of Palin began to move up, but during the freep several liberal websites got wind of it. Typical of the blogosphere, the poll became a link-fest version of tug-of-war. Close to a hundred bloggers linked to it and liberals and conservatives began forwarding email chains to their friends asking them to vote (I actually received one of these emails less than an hour before I sat down to begin writing this article).

One of the bloggers who eventually linked to the poll was PZ Myers. An associate professor of biology at the University of Minnesota-Morris, Myers is arguably the most popular atheist and science blogger on the Net. His blog, Pharyngula, is published as part of the ScienceBlogs network (owned by Seed Media Group) and averages more than 50,000 readers a day. In recent months, he and a small group of other atheist bloggers have begun a constant and often-successful campaign to crash unscientific online polls, usually to counterbalance or push back against what they see as either anti-science or overly dogmatic beliefs.

After Myers finds a poll dealing with religion or science on a news website, he’ll provide a link to the site along with a pithy or mocking comment. “The Edmonton Sun asks, ‘Should God be left out of the University of Alberta’s convocation speech?’” he noted in one such post recently. “I should think so. They should also leave Odin, Zeus, and the Tooth Fairy out of it, unless it’s to make a joke. Surprisingly, though, 67% of the respondents disagree with me so far. Will that have changed when I wake up in the morning, I wonder…?”

Why Poll Crash?
When I spoke to him, Myers said that linking to a poll typically lets him swing the results by 10,000 to 20,000 votes in a particular direction. Indeed, within an hour of his linking to the Sun’s poll, the results went from 67 percent of respondents saying “no” to 91 percent saying “yes.” Though he has participated in poll crashes for more than a year, he began conducting them on a semi-daily basis only within the last month and a half.

“It’s a very popular thing with some people because they can flex a little itty bitty muscle, and a group going there and doing something shows we have some clout, a clout in expressing an opinion,” Myers said. “There have been a couple places where the polls are so poorly done and so easily manipulated, and people go nuts; they write a script and send in hundreds of thousands of votes. Which is kind of cheating, but the whole point is that these polls are silly and useless anyway.”

The bloggers’ motivation in linking to these polls, he said, was, in essence, to delegitimize them. Because these polls are unscientific and therefore largely biased toward the demographic of the website on which they’re posted, Myers argued that poll crashing makes it harder for people to use the polls simply to reaffirm their own biases.

“For instance, if I put a poll on my blog asking whether evolution is true, everyone would say ‘yes’ with just a few outliers,” he explained. “If you put it on something like [Christian conservative group] Focus on the Family, everyone there will say ‘no.’ So the point is to show that these are highly prejudicial polls, they’re sampling unscientifically, and they’re really kind of worthless. And you can’t use those results to say anything at all. I mean, what can you say about such a poll?”
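Myers’s point about self-selected samples is easy to demonstrate with a small simulation (a hypothetical sketch; the percentages and sample sizes are invented for illustration):

```python
import random

random.seed(42)

# Hypothetical numbers: a population evenly split on a question,
# and a website whose readership leans 90 percent one way.
POPULATION_SUPPORT = 0.50
SITE_READER_SUPPORT = 0.90

def poll(share_yes, n):
    """Observed 'yes' share among n respondents drawn from a group
    in which share_yes actually agree."""
    return sum(random.random() < share_yes for _ in range(n)) / n

# A random sample of the population lands near the true 50 percent...
print(f"random sample of the population: {poll(POPULATION_SUPPORT, 1000):.0%}")
# ...while a self-selected poll samples only the site's readers, so it
# reports the readership's lean, not the population's opinion.
print(f"self-selected online poll:       {poll(SITE_READER_SUPPORT, 1000):.0%}")
```

No amount of extra respondents fixes this: the second poll converges on the readership’s 90 percent, not the public’s 50, which is exactly why crashing it changes nothing of substance.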

But the inaccurate data isn’t the only problem that Myers has with these polls; he also detests the poor construction of many poll questions and the limited answer choices given. It’s not uncommon for him to link to a poll while issuing the caveat that — due to the perceived inanity of the question or answers — he doesn’t know which choice his readers should pick.

In speaking to Myers, I learned that his aversion to these polls sometimes carries over even to their scientific counterparts. He argued, as have others, that media coverage of elections is far too poll-obsessed and that covering a campaign this way perpetuates misconceptions about why voters should choose a particular candidate.

“If you look at the major networks’ coverage of the election, for instance, what you find is that they turn it into a horse race,” he said. “All they report is who’s ahead, who’s behind and by how much. It is distracting and detracts from the coverage of the actual issues. So that’s another reason to get in there and disrupt these polls: it’s because the polls really don’t matter. You shouldn’t vote on whether someone is ahead or not. What you should be voting for is whether they have policies that you agree with.”

Measuring Enthusiasm
I spoke to a few of the people responsible for publishing polls that Myers had crashed, and surprisingly there were no bitter feelings toward bloggers who deliberately try to skew the results. In fact, both of the people I interviewed said they welcomed such online participation, arguing that poll crashes let them gauge the level of enthusiasm for a particular issue.

Joel Schwartzberg, NOW’s director of new media, outright rejected the notion that the poll question on the website (whether Sarah Palin was qualified to be vice president) was somehow biased or leading. When the news magazine formulates each week’s poll question, he said, it bases it on a pressing issue that has become part of the national conversation. In this instance, there had been a sizable amount of discussion during the Republican National Convention over Palin’s qualifications for the position.

“As an example, during the Democratic convention, we asked people if they thought the party is unified,” he told me. “So we did not pull this issue out of a vacuum, it was the most relevant and talked-about issue. When the convention ended, that poll was retired. We don’t link to old polls, nor do we have an archive of old polls. So what people did was they found that poll sort of drifting in the vast outer space of the Internet, and looking at the source code found the URL, and that’s what became viral. It did not even begin to become viral until it was formally retired on our website.”

To date, more than 50 million votes have been registered on the poll, both from constant freeping and from bots running rampant and falsely inflating the numbers. Eventually, NOW changed the poll to set a cookie so that each computer could vote only once.
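The cookie guard NOW added can be sketched roughly as follows (a hypothetical illustration; the function and variable names are mine, and a real server would carry the token in an HTTP cookie header rather than a function argument):

```python
import secrets

# Server-side state: tokens that have already voted, and the tallies.
seen_tokens = set()
votes = {"yes": 0, "no": 0}

def handle_vote(choice, cookie_token=None):
    """Record a vote unless this browser's token has voted already.
    Returns (accepted, token_to_set_in_the_response_cookie)."""
    if cookie_token in seen_tokens:
        return False, cookie_token          # repeat vote: reject it
    token = cookie_token or secrets.token_hex(16)  # first visit: mint a token
    seen_tokens.add(token)
    votes[choice] += 1
    return True, token

ok, token = handle_vote("yes")       # first vote from a browser: accepted
repeat, _ = handle_vote("yes", token)  # same browser again: rejected
print(ok, repeat, votes)             # True False {'yes': 1, 'no': 0}
```

The guard is deliberately weak: clearing cookies (or voting from a script that never sends one) defeats it, which is one more reason such polls remain unscientific however the votes are throttled.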

Because of this one poll, Schwartzberg said, both NOW and PBS as a whole have experienced traffic that far surpassed previous viewership records. And in attracting all that traffic, they were able to drive readers to other NOW content linked at the bottom of the Palin poll. In this respect, the poll engaged the online community and exposed a much larger audience to more reputable and scientific information.

I asked the new media director about the unscientific nature of such polling and whether it could present a misleading picture of public opinion.

“I don’t find any online polls to be accurate enough to be worthy of public broadcast,” Schwartzberg said. “We do not announce these poll results on air. If we were going to announce them on air you can be assured that it’d be a scientific poll that’d be very official. We don’t offer up these results to measure scientifically any demographics. The point of these polls and other polls is so that people can register their vote…And the poll engine has a way to generate enough excitement to look at our investigative reports, which are still very thoroughly vetted and meticulously fact checked and very scientific.”

Schwartzberg said that people like polls in the same way that they like memes and lists, and part of using new media is understanding that “these other devices are a way to get people to come to your table. But you want to rely on your bread and butter, and, in our case, the video investigations are the meat of what we do, and what best serves our mission. So the poll is a way for people to express themselves and bring people to our larger core mission, which is to reveal what’s going on in our democracy.”

http://www.socialistunity.com/?p=3056
__________________
Anyone but the this most fuked up President in History in 2012!
TheMercenary is offline   Reply With Quote
Old 02-14-2009, 10:51 PM   #160
TheMercenary
“Hypocrisy: prejudice with a halo”
 
Join Date: Mar 2007
Location: Savannah, Georgia
Posts: 21,393
Herald pulls manipulated poll

By Brent Curtis Rutland Herald - Published: September 30, 2004


RUTLAND — Editors at the Rutland Herald pulled an online poll Monday night after a local radio talk show host urged his listeners to skew the results.

On Saturday, the Herald posted the question "Who is your likely choice for governor in the Nov. 2 vote?" The poll included the top four candidates running for that office — Republican Gov. James Douglas, Democrat Peter Clavelle, Liberty Union candidate Peter Diamondstone and Libertarian Hardy Machia. The informal poll was intended to remain posted at www.rutlandherald.com until Friday.

Editors at the Herald decided to remove the poll Monday night after Tim Philbin, host of "On the Air with Tim Philbin," told his listeners that the poll was biased and described how they could skew the results by voting more than once.

Philbin, who often discusses politics on his morning show broadcast by WSYB in Rutland, said he was appalled to check the poll Monday to find that Clavelle had the lead after about 450 people had voted.

"The point is, please don't tell me that represents reality," Philbin said after his show Tuesday. "That's called manufactured news."

Philbin said the unscientific poll was a sham because it flew in the face of other polls he had seen this election year showing Douglas with a commanding lead.

He said he suspected the newspaper's reason for posting the poll was to shape public opinion, not reflect it.

"If they have a poll that can be manipulated, the results of which should represent reality, you can manipulate reality to represent what the press wants," Philbin said.

To prove his point, he told his listeners how to effectively stuff the ballot box.

Hours after his radio show Monday, the number of votes cast on the Web site climbed from about 450 to 1,003.

After the editorial staff at the newspaper heard about Philbin's broadcast, the decision was made to pull the plug on the poll Monday night.

"We ended the poll when we realized it had been rigged," city editor John Dolan said Tuesday. "We assumed something like this could happen in small amounts, but not this kind of organized rigging."

Dolan said the decision to withdraw the poll was not a partisan issue. He said the results would have been removed no matter which candidate's results were rigged.

While the online poll doesn't follow standard polling methods, Dolan said the results — available on the Web site and in the newspaper's Street Talk section — provided readers with a picture of public opinion and an opportunity to participate in the process.

"It's valuable because it lets people participate willingly instead of waiting to be called and asked," he said. "And while the results are not scientifically accurate, they give some indication of what the community is thinking."

Dolan also pointed out that Philbin conducts his own informal polls during his two-hour radio show.

"It's interesting to note that today, Mr. Philbin boasted about his prank and then spent the remaining hour of his show conducting a poll on the Leahy-McMullen race," he said. "One person called. We had 450 people vote in the two days before his show. Which poll was more useful? People can decide for themselves."

However, the number of participants in a poll might not be an accurate gauge of its usefulness or accuracy, according to professors at the Poynter Institute, a journalism school in St. Petersburg, Fla.

Aly Colón, an ethics group leader at Poynter, said Tuesday that online polls and polling in general could ruin a newspaper's credibility if the publication wasn't upfront about its methodology and accuracy.

"If people think it's definitive information that's pure and unbiased, then they see the results and aren't sure how the findings could happen. They will think the newspaper tried to skew the results in a particular way even if it wasn't," he said.

Al Tompkins, who teaches broadcasting and online issues at Poynter, was even more skeptical of online polling.

"The big issue with online polling is that no matter how many responses you get, it's not 100 percent accurate," he said. "What you end up with is a very lopsided demographic. It gives you a wild idea at best of what a community is thinking."

Tompkins said online polling was once a popular way for newspapers to get their readership involved. But, he said, many newspapers have evolved to using online forums and community chat rooms to have interactive discussions with their readers.

"Most of the time, I've found the online polls to be a complete waste of time because they're so wildly unscientific," he said. "They're called polls, but they're not really polls at all. One is borderline voodoo, the other is scientific polling."


Contact Brent Curtis at brent.curtis@rutlandherald.com.

http://www.timesargus.com/apps/pbcs....23/1003/NEWS02
TheMercenary is offline   Reply With Quote
Old 02-14-2009, 10:56 PM   #161
TGRR
Horrible Bastard
 
Join Date: Feb 2009
Location: High Desert, Arizona
Posts: 1,103
Wow.



Someone disagreed with Merc, and he pooped a solid PAGE of cut and paste.
TGRR is offline   Reply With Quote
Old 02-14-2009, 10:56 PM   #162
TheMercenary
“Hypocrisy: prejudice with a halo”
 
Join Date: Mar 2007
Location: Savannah, Georgia
Posts: 21,393
There have been numerous books pointing to flaws in public opinion polls. In Superpollsters: How They Measure and Manipulate Public Opinion in America (1995), David W. Moore, vice president and senior analyst of the Gallup Organization, writes, "The views that people express in polls are very much influenced by the polling process itself, by the way questions are worded, their location in the interview, and the characteristics of the interviewers." By way of example, Moore describes a 1940 experiment which found that swapping the words "forbid" and "allow" in a single question yielded significantly different results. People were asked whether (1) "the United States should forbid speeches against democracy" and whether (2) "the U.S. should allow speeches against democracy." The results: 46 percent of Americans responded "no" to the first question, yet only 25 percent agreed with the second.
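The forbid/allow numbers above are easier to compare when flipped onto the same scale; a few lines of arithmetic make the wording effect concrete:

```python
# Figures as reported above for the 1940 wording experiment. Logically
# the two questions ask the same thing, yet the answers differ sharply.
no_to_forbid = 0.46   # "no" to: should the U.S. FORBID such speeches?
yes_to_allow = 0.25   # "yes" to: should the U.S. ALLOW such speeches?

# Flip both onto one scale: the share effectively against the speeches.
would_forbid = 1 - no_to_forbid      # 54% would forbid them
would_not_allow = 1 - yes_to_allow   # 75% would not allow them

gap = would_not_allow - would_forbid
print(f"wording gap: {gap:.0%}")     # prints "wording gap: 21%"
```

A 21-point swing from one word is exactly the kind of "entirely trivial change" Zaller describes below, and it dwarfs the few-point margins of error pollsters usually report.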

In Constructing Public Opinion: How Political Elites Do What They Like and Why We Seem to Go Along with It (2001), Justin Lewis, professor of communication and deputy head of the School of Journalism, Media and Cultural Studies at Cardiff University in Wales, writes, "It is well known in the poll literature that disparity in response can be generated by something as basic as question wording or by apparently innocuous information given by the interviewer." Supporting this statement, UCLA political science professor John R. Zaller writes in his book The Nature and Origins of Mass Opinion (1992): "Entirely trivial changes in questionnaire construction can easily produce 5 to 10 percentage point shifts in aggregate opinion, and occasionally double that." Citing a number of comprehensive studies, Lewis comments that the media fails to represent, especially in the United States, "the degree of support for a variety of political positions on the left, from gun control to social justice issues." He argues that not only do polls inaccurately portray public opinion, but the media further distorts that portrait by failing to report the complete range of opinions. Lewis additionally suggests that in reporting on opinion polls, the news media particularly favors the interests of political and economic elites.

Web Resources:
Wall Street Journal article featuring John Zogby of the polling company Zogby International: Polling Isn't Perfect: Why voter surveys so often get it wrong (11/14/02)

Article titled Public Opinion Polling Fraud (10/2003) by the polling organization Retro Poll, which "designs and performs opinion polls that look at the relationship between public knowledge and public opinion."

The Program on International Policy Attitudes (PIPA), at the University of Maryland, "carries out research on public attitudes on international issues by conducting nationwide polls, focus groups and comprehensive reviews of polling conducted by other organizations."

Articles by Institute for Public Accuracy founder and executive director, Norman Solomon:
Polls: When Measuring Is Manipulating (10/18/02)
Polls give Numbers, but Truth Is More Elusive (5/17/96)

Polling Report is "an independent, nonpartisan resource on trends in American public opinion."

The Washington Post published a list of poll research organizations and associations.

The Roper Center at the University of Connecticut offers links to polling organizations and associations at their web site.

National Council on Public Polls published the following articles:
20 Questions A Journalist Should Ask About Poll Results
Answers to Questions We Often Hear From the Public

About Polling offers information on the field of opinion research gathered from a variety of sources and commentators by the nonprofit organization called Public Agenda, which was founded decades ago by Cyrus Vance and Daniel Yankelovich.

http://www.askquestions.org/details.php?id=25
TheMercenary is offline   Reply With Quote
Old 02-14-2009, 10:57 PM   #163
TheMercenary
“Hypocrisy: prejudice with a halo”
 
Join Date: Mar 2007
Location: Savannah, Georgia
Posts: 21,393
So yes, the poll is the weakest form of statistical measure.
TheMercenary is offline   Reply With Quote
Old 02-14-2009, 11:20 PM   #164
Redux
Guest
 
Posts: n/a
Quote:
Originally Posted by TheMercenary View Post
So yes, the poll is the weakest form of statistical measure.
LMAO I like the Polling for Dummies!

I'll stick with the American Statistical Association, the American Political Science Association, the Democratic Party, the Republican Party, the major media, social science research organizations, the list is endless...all of which recognize the value of polls in providing a snapshot of public opinion with a relatively high degree of accuracy.

If I think a poll is relevant to a discussion, I will post it....you can post your disclaimer.

And others following the discussion can decide for themselves.
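For what it's worth, the "relatively high degree of accuracy" Redux cites is quantifiable: under simple random sampling, the 95 percent margin of error for a scientific poll depends only on the sample size (a back-of-the-envelope sketch using the standard formula, not anything specific to the polls in this thread):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p observed
    in a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# The margin shrinks with sample size, independent of population size:
# a typical 1,000-person national poll is good to about +/-3.1 points.
for n in (100, 400, 1000, 1500):
    print(f"n={n:>4}: about +/-{margin_of_error(n):.1%}")
```

That guarantee holds only for random samples; a self-selected online poll has no margin of error at all, no matter how many millions of votes it collects, which is the crux of the whole argument above.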
  Reply With Quote
Old 02-15-2009, 12:38 AM   #165
TheMercenary
“Hypocrisy: prejudice with a halo”
 
Join Date: Mar 2007
Location: Savannah, Georgia
Posts: 21,393
Quote:
Originally Posted by Redux View Post
the Democratic Party, the Republican Party,
Yea, you just hang on to those.
TheMercenary is offline   Reply With Quote