The Cellar

The Cellar (http://cellar.org/index.php)
-   Politics (http://cellar.org/forumdisplay.php?f=5)
-   -   Bush suddenly an interesting character again (http://cellar.org/showthread.php?t=19229)

Redux 02-04-2009 06:04 PM

Quote:

Originally Posted by Aliantha (Post 530655)
Oh yeah...I come to all the good bitch fests around here. If I don't, I have to worry about my reputation as a bitch slipping. ;)

I won't be doing too much partying for a little while though. I've got a little under two months to go and then I'll be having a baby. :D

Now we just need the mad Greek to show up! Remember her? I loved how she kicked ass!

Hey..congrats on the baby!!!!

Aliantha 02-04-2009 06:08 PM

Thanks Redux. :)

Raine used to post here, and so did dov, but his views weren't popular with some people and Raine got the huffs and left.

It's a shame really. They both had some interesting points of view.

TheMercenary 02-04-2009 06:11 PM

Quote:

Originally Posted by Aliantha (Post 530652)
How can you know better? Do you have inside information? He's only just begun, so I don't see how you can possibly make any kind of rational judgement about his abilities just yet. Give him a few months and then start, but don't go making unfounded judgements just yet.

k... until then I will point and laugh where appropriate. :D

Aliantha 02-04-2009 06:12 PM

Excellent. lol And let's just think for a moment how that sort of repetitive type posting worked for you last month. lol

TheMercenary 02-04-2009 06:46 PM

Quote:

Originally Posted by Aliantha (Post 530666)
Excellent. lol And let's just think for a moment how that sort of repetitive type posting worked for you last month. lol

Yea, well I guess it is ok to laugh and point at the president. Or I could just throw my shoe at him. :D

Aliantha 02-04-2009 06:50 PM

Hey now there's a thought. lol

Griff 02-05-2009 04:05 PM

Quote:

Originally Posted by Pico and ME (Post 530522)
Daaaaaaaammmmmmmmmmn.

This guy provides cite after cite, and in between he throws in a little conjecture and opinion, but mostly he has been citing his stuff...which in a previous post Merc deliberately ignored. Therefore he says he won't cite for Merc anymore. So when Merc asked for one, he didn't get it. I don't blame him.

Typical merc nonsense.

Quote:

Originally Posted by TheMercenary (Post 530646)
Unlike popular belief about me, I want him to be the most successful president eva. EVA! But I know better. Keep your eye on the ball.

Gonna call Bullshit on this. The board spam would indicate otherwise.

TheMercenary 02-05-2009 08:29 PM

Quote:

Originally Posted by Griff (Post 530952)
Typical merc nonsense.



Gonna call Bullshit on this. The board spam would indicate otherwise.

That's cool. I call bullshit on your feeling that you can call bullshit on anything I post. I couldn't give a rat's ass what you think, say or comment about my posts. So I guess the feeling is at the very least a common one between us.

Griff 02-06-2009 05:03 AM

...and yet you read and reply.

Since we're making our wishes known, please leave the Cellar.

TheMercenary 02-06-2009 06:27 PM

Quote:

Originally Posted by Griff (Post 531111)
...and yet you read and reply.

Since we're making our wishes known, please leave the Cellar.

Fuck off.

sugarpop 02-06-2009 07:16 PM

Quote:

Originally Posted by TheMercenary (Post 530496)
No.

That is way too broad. There are many reasons not to release information. So you want to see the information, and then you get to tell us whether or not it would be a threat to national security? There are other people who are paid to do that. You just aren't one of them. And special interest groups aren't either. There is a system in place for this process. It works.

Not any more it doesn't. Hopefully that will be restored with the new administration.

TheMercenary 02-06-2009 08:56 PM

Quote:

Originally Posted by sugarpop (Post 531443)
Not any more it doesn't. Hopefully that will be restored with the new administration.

Don't bet on it or hold your breath.

sugarpop 02-07-2009 10:53 PM

Quote:

Originally Posted by TheMercenary (Post 531469)
Don't bet on it or hold your breath.

Why? He is already doing a lot to restore the integrity of this country.

Urbane Guerrilla 02-12-2009 08:53 PM

For, it seems, a very strange value of "doing a lot." I'm certainly not embarrassed to have voted for the other guy. The Repubs are doing all they can to keep the Dems from taking the bit in their teeth and running wild. If the Dems succeed in running wild, they will crash, terribly.

Redux 02-12-2009 10:48 PM

Quote:

Originally Posted by Urbane Guerrilla (Post 533886)
The Repubs are doing all they can to keep the Dems from taking the bit in their teeth and running wild. If the Dems succeed in running wild, they will crash, terribly.

You mean like the Republicans did for six years...running wild, rubber stamping everything Bush wanted, conducting virtually no Congressional oversight of the Executive Branch, giving Bush unprecedented 'war powers' (warrantless wiretaps of citizens, circumventing US treaty obligations...) w/o a Congressional war powers resolution, politicizing the Dept of Justice, nearly doubling the national debt to over $9 trillion...and then crashing with a thud!

I guess we shall see if the Democrats meet that lofty standard, but people w/o an agenda, by that I mean most Americans, will probably wait beyond one month before making that judgement.

I like the new EOs on government contracting, reviewing detention policies, ensuring lawful interrogations, and protecting presidential records. I like the new FOIA memorandum and guidelines. I like the action by the Dept of Interior to put a hold on oil/gas exploration leases in proximity to national parks. I like the SCHIP expansion and the Pay Equity Act. And I like the bold action on the stimulus bill, although I would have preferred it being bolder.

I like that many Americans give Obama and the Democratic Leaders in Congress high approval ratings as well:
http://pollingreport.com/images/CNNjob.GIF

I like that even a majority of Republicans are optimistic about Obama.
http://pollingreport.com/images/CBSobama1.GIF
But I know the public has little patience and high expectations.

And I know that Merc doesn't believe in polls. :headshake

sugarpop 02-13-2009 01:54 AM

Thanks Redux for pointing all that out.

TheMercenary 02-13-2009 02:59 PM

OH LOOK! a Poll! :lol2:

classicman 02-13-2009 05:08 PM

Redux, do you have the actual questions to that poll? Who was defined as "leaders"? Were they local, national, not specified? I'm seriously interested. Polls fascinate me. I am one of those people who answer polls and surveys all the time. Problem is, they mostly offer some really bad choices, which virtually forces an answer that is usually what the pollster or their backers wanted in the first place.
Many times, when I have given alternate answers because the options were not accurate enough, the pollster stops in the middle, thanks me and moves on.
I do find it strange that the congressional approval ratings nearly doubled in the last few weeks/months according to the poll you posted.

classicman 02-13-2009 05:18 PM

Hmm...

I decided to look them up myself -
CONGRESS – Job Rating in national polls

Poll                        Date         App   Disapp   Unsure   +/-
Ipsos/McClatchy             2/6-9/09      37     59       *      -22
CNN/Opinion Research        2/7-8/09      29     71       -      -42
CBS                         2/2-4/09      26     62      12      -36
FOX/Opinion Dynamics        1/27-28/09    40     46      14       -6
FOX/Opinion Dynamics        1/13-14/09    23     68      10      -45
NBC/Wall Street Journal     1/9-12/09     23     68       9      -45
USA Today/Gallup            1/9-11/09     19     76       5      -57

I must be looking at different data than you.

CNN/Opinion Research        2/7-8/09      29     71       -      -42
CNN/Opinion Research        10/3-5/08     23     76       1      -53

With approval ratings consistently in the 20s over the last two polls, I fail to see how suddenly the ratings are jumping into the 50s and 60s as in your poll by the same organization.

Redux 02-13-2009 05:57 PM

Quote:

Originally Posted by classicman (Post 534219)
Hmm...

I decided to look them up myself -
CONGRESS – Job Rating in national polls

Poll                        Date         App   Disapp   Unsure   +/-
Ipsos/McClatchy             2/6-9/09      37     59       *      -22
CNN/Opinion Research        2/7-8/09      29     71       -      -42
CBS                         2/2-4/09      26     62      12      -36
FOX/Opinion Dynamics        1/27-28/09    40     46      14       -6
FOX/Opinion Dynamics        1/13-14/09    23     68      10      -45
NBC/Wall Street Journal     1/9-12/09     23     68       9      -45
USA Today/Gallup            1/9-11/09     19     76       5      -57

I must be looking at different data than you.

CNN/Opinion Research        2/7-8/09      29     71       -      -42
CNN/Opinion Research        10/3-5/08     23     76       1      -53

With approval ratings consistently in the 20s over the last two polls, I fail to see how suddenly the ratings are jumping into the 50s and 60s as in your poll by the same organization.

Yep...you are looking at different data.

Merc and I have been through this....polls of Congress as a whole are vastly different and have many more variables than polls of 1-2 individuals or polls of the parties.

Congress' low numbers as a whole (a body of 535) over the last two years are attributed to many factors:

Some Democratic voters rated Congress very low for not impeaching Bush; some Republican voters because of all the talk of impeaching Bush and holding so many oversight hearings.

Some Democratic voters rated Congress very low for being rolled over on Iraq funding; some Republicans because Democrats tried to block Iraq war funding.

Some Democratic voters rated Congress very low because of all the Republican filibusters in the Senate; some Republicans because the Republicans didn't filibuster enough.

When you are rating a person or a party, you are generally rating an easily identified ideology and voting record. When you rate Congress as a whole, there is no single ideology or voting record.

The polls asking the public (of both parties and indys) to rate Congress by party rather than as a single body are one means of addressing some of these questions....and the term "Congressional leaders" would generally be explained by the pollsters.

Job rating - Democrats in Congress

Job rating - Republicans in Congress

Perhaps you understand the difference.....Merc doesn't.

I won't bet my house on poll numbers, but the results of a poll, or poll trends, do represent a reasonably valid snapshot of public opinion at and/or over a defined period of time.

There is a reason why both parties spend millions on polls...they do provide that snapshot.

TheMercenary 02-13-2009 08:23 PM

Bottom line, if you are going to use the most popular polls: Congress has had approval ratings well below Bush's for over 2 years. Maybe they can ride Obama's coattails and gain some ground on his positive energy, but even that appears to be slipping.

TheMercenary 02-13-2009 08:31 PM

Quote:

Originally Posted by Redux (Post 534221)
I wont bet my house on poll numbers but results of a poll or poll trends do represent a reasonably valid snapshot of public opinion at and/or over a defined period of time.

False. Most of these polls are based on the opinions of 1,000 people, give or take. Now given that the current population of the United States is 305 million, what you are saying is that you believe that 0.00033% of the population speaks for the other 99+% of the total US. That would be false.

Was it a telephone poll? Who did they call? Who took the time to answer the questions? What are the demographics? How do you extrapolate that to 305 million people? You can't. Anyone who studies statistics knows that the poll is the weakest form of statistical measure. Straw Poll = Straw Man.

Redux 02-13-2009 09:12 PM

Quote:

Originally Posted by TheMercenary (Post 534282)
False. Most of these polls are based on the opinions of 1,000 people, give or take. Now given that the current population of the United States is 305 million, what you are saying is that you believe that 0.00033% of the population speaks for the other 99+% of the total US. That would be false.

Was it a telephone poll? Who did they call? Who took the time to answer the questions? What are the demographics? How do you extrapolate that to 305 million people? You can't. Anyone who studies statistics knows that the poll is the weakest form of statistical measure. Straw Poll = Straw Man.

Straw polls, like what you may find on many websites, are not scientific polls like those used by polling organizations.

In a straw poll, anyone can participate.

The credible polling organizations use representative samples to predict the larger universe of voters with a relatively small margin of error.

They are widely accepted in politics, economics, sociology, statistics, and any field of research.

Objective observers know the difference.
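The statistics at issue in this exchange are worth pinning down. For a simple random sample, the margin of error for a proportion is roughly z * sqrt(p(1-p)/n), which depends on the sample size n, not on what fraction of the country was sampled. A minimal Python sketch (the 95% z-value and the worst-case p = 0.5 are standard conventions, not figures from the thread):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a proportion estimated from a
    simple random sample of size n, at 95% confidence (z = 1.96).
    p = 0.5 is the conservative worst case."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1,000 yields roughly a +/-3 point margin of error --
# the same whether the population is 1 million or 305 million.
print(round(100 * margin_of_error(1000), 1))  # ~3.1
```

This is why national pollsters can stop at roughly a thousand respondents: quadrupling the sample only halves the margin of error.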

Urbane Guerrilla 02-13-2009 09:18 PM

Quote:

Originally Posted by Redux (Post 533923)
You mean like the Republicans did for six years...running wild, rubber stamping everything Bush wanted,

Running wild? Not at all: discommoding the Left is hardly "running wild" among wise persons. The "rubber stamping" was bipartisan, I'll have you recall and henceforth keep in mind. Keep your memory good, or I'm likely to embarrass you.

Quote:

giving Bush unprecedented 'war powers' (warrantless wiretaps of citizens, circumventing US treaty obligations...)w/o a Congressional war powers resolution,
Trying to tell somebody who remembers that Congress did authorize the President to do whatever he had to do to win the war, and did authorize him to prosecute the conflict, that matters were otherwise doesn't say a lot for your understanding of recent history, Redux. See how very badly served hewing to liberal-left opinion leaves you? Congress' resolution did tell GWB "go to it." Nobody responsible or thoughtful (in other words, the left-liberals aren't in the picture) says otherwise.

The war powers are by no means "unprecedented." Compared to war powers during declared states of war, the Bush Administration's are somewhat reduced -- check what Roosevelt did with strikers during WW2. Granted, what we saw was a try at assuming war powers without the legal aegis of a Congressional declaration of war, which would have completely smoothed the President's road. Those exact war powers are still held by the Obama Presidency, by the way.

Quote:

politicizing the Dept of Justice,
Here you seem to be mistaking the Bush Administration for its unfortunate predecessor. Watch your sources -- the Left is full of shitheads who assume their audience either has always had bad memories -- or convenient Memory Holes.

Quote:

nearly doubling the national debt to over $9 trillion
A fiscal sin that is totally bipartisan, so I say you're throwing a null at me. Besides which, the present Administration is on track to double that nine trillion, no? Bipartisan idiocy, helloooo... no wonder I profess a third party. One that hasn't had a chance to run the deficit up or down.

Quote:

I guess we shall see if the Democrats meet that lofty standard, but people w/o an agenda, by that I mean most Americans, will probably wait beyond one month before making that judgement.
We are gathering data. These data will be reflected in the judgement we make -- now, or ninety days from now. I'm still quite without reasons to trust the Democratic Party.

The Republicans did things I wanted done, that I really wanted done, which I think will ring down the decades as heroic, wise things. The Democrats haven't managed that in any particular since 1991, and I think the last time I voted for a Democratic candidate might have been well before that year. That's a long time for a national party to be consigned to the "Idiots" box.

Redux 02-13-2009 09:25 PM

Quote:

Originally Posted by Urbane Guerrilla (Post 534302)
Running wild? Not at all: discommoding the Left is hardly "running wild" among wise persons. The "rubber stamping" was bipartisan, I'll have you recall and henceforth keep in mind. Keep your memory good, or I'm likely to embarrass you.

Feel free to embarrass me by posting the roll call votes on the initial Patriot Act or the Iraq war AUMF. I don't think a majority of Democrats voted for either. You might even add the Bush $1.5 trillion tax cuts that mostly benefited the top wage earners.

Quote:

Trying to tell somebody who remembers the Congress did authorize the President to do whatever he had to to win the war, and did authorize the President to prosecute the conflict that matters were otherwise, doesn't say a lot for your understanding of recent history, Redux. See how very badly served hewing to liberal-left opinion leaves you? Congress' resolution did tell GWB "go to it." Nobody responsible or thoughtful (in other words, the left-liberals aren't in the picture) says otherwise.
Please read the Authorization for Use of Military Force...it does not give the president the authority to do "whatever he had to to win the war," which is how it differs from a Congressional "war powers" resolution. In fact, there were two AUMFs - one immediately following 9/11 (wide bipartisan support) and a second to authorize the invasion of Iraq (not as bipartisan).

Quote:

The war powers are by no means "unprecedented." Compared to war powers during declared states of war, the Bush Admininstration's are somewhat reduced -- check what Roosevelt did with strikers during WW2. Granted, what we saw was a try at assuming war powers without the legal aegis of a Congressional declaration of war, which would have completely smoothed the President's road. Those exact war powers are still held by the Obama Presidency, by the way.
That is the difference you fail to recognize from previous presidents (FDR): Congress declared war with a "war powers resolution." It did not for Bush's "war on terrorism"....there was no Congressional "declared state of war" as in WW II.

Many (most?) constitutional scholars, conservative and liberal, would suggest that an AUMF is not equal to a War Powers Resolution or Declaration of War.

Next- politicization of the Department of Justice
Quote:

Here you seem to be mistaking the Bush Administration for its unfortunate predecessor. Watch your sources -- the Left is full of shitheads who assume their audience either has always had bad memories -- or convenient Memory Holes.
Please read the latest report (one of several) by Bush's own DoJ Inspector General (a liberal shithead?) on the politicization of the Dept of Justice over the last eight years. (Report slams politicized hiring process at DoJ) (DoJ Internal Report - pdf)

Redux 02-13-2009 09:50 PM

UG...I am still waiting for you to explain your "Republicans = integrity" assertion in another discussion, in light of what I posted in response.
Quote:

Originally Posted by Redux (Post 531504)
UG...I suggest you start here:

A report on corruption investigations of members of the 109th Congress..the last Republican majority Congress:
Below is a rundown of all 21 lawmakers, current and former. Ten of them are no longer in office. Investigations of seven are part of the Abramoff investigation. Seventeen are Republicans, four are Democrats.

http://www.propublica.org/article/po...gation-wrap-up
You might also want to read about the K Street project.

The Grover Norquist/Tom DeLay/Karl Rove plan of influence peddling with the hope of creating a permanent Republican majority.

Kinda backfired on them after Abramoff's arrest and Tom DeLay's resignation from Congress under an ethical cloud.

You went silent after that...perhaps you were embarrassed?

TheMercenary 02-14-2009 08:50 AM

Quote:

Originally Posted by Redux (Post 534300)
They are widely accepted in politics, economics, sociology, statistics, and any field of research.

Objective observers know the difference.

Fail. Widely known as the weakest forms of statistical measure.

Redux 02-14-2009 08:59 AM

Quote:

Originally Posted by TheMercenary (Post 534395)
Fail. Widely known as the weakest forms of statistical measure.

Cite please! From a reputable source!

A cite that polling using random samples to reflect the larger universe, along with margins of error and review of questions for bias, has little validity.

Sorry, but you are blowing smoke out of your ass.

TheMercenary 02-14-2009 09:19 AM

Quote:

Originally Posted by Redux (Post 534398)
Cite please! From a reputable source!

A cite that polling using random samples to reflect the larger universe, along with margins of error and review of questions for bias, has little validity.

Sorry, but you are blowing smoke out of your ass.

Anyone who hangs much validity on polls is the smoke coming out of my ass. So far you have not proven a damn thing.

Redux 02-14-2009 09:23 AM

Quote:

Originally Posted by TheMercenary (Post 534395)
Fail. Widely known as the weakest forms of statistical measure.

Cite please....from a reputable source!

You might start with this publication from the American Statistical Association:
Quote:

What is a Survey

Today the word "survey" is used most often to describe a method of gathering information from a sample of individuals. This "sample" is usually just a fraction of the population being studied...

In a bona fide survey, the sample is not selected haphazardly or only from persons who volunteer to participate. It is scientifically chosen so that each person in the population will have a measurable chance of selection. This way, the results can be reliably projected from the sample to the larger population...

...When it is realized that a properly selected sample of only 1,000 individuals can reflect various characteristics of the total population, it is easy to appreciate the value of using surveys to make informed decisions in a complex society such as ours. Surveys provide a speedy and economical means of determining facts about our economy and about people's knowledge, attitudes, beliefs, expectations, and behaviors...

http://www.whatisasurvey.info/
National polling organizations like Gallup, Harris, Zogby, etc have the policies and practices in place to ensure that they meet or surpass the accepted standards of reliability....or they would be out of business very quickly.

Polls you see on Drudge, CNN.com, etc., where anyone can click and submit, have no standards.
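The "only 1,000 out of 305 million" objection runs into the finite-population correction, the factor that scales sampling error when drawing without replacement from a finite population. A short sketch (the population figures below are illustrative, not from the thread):

```python
import math

def fpc(N, n):
    """Finite-population correction factor: multiplies the standard
    error when a sample of n is drawn without replacement from a
    population of size N."""
    return math.sqrt((N - n) / (N - 1))

# For a 1,000-person poll the correction is essentially 1, whether the
# population is a mid-sized city or the entire United States.
for N in (100_000, 305_000_000):
    print(N, round(fpc(N, 1000), 6))
```

In other words, once the population dwarfs the sample, its exact size stops mattering; what matters is how the sample was drawn.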

TGRR 02-14-2009 10:31 AM

Quote:

Originally Posted by TheMercenary (Post 534395)
Fail. Widely known as the weakest forms of statistical measure.

Keep on digging. You'll get out of that hole someday.

TheMercenary 02-14-2009 10:22 PM

Recognizing the Impact of Statistics in Polls
A survey is an instrument that collects data through questions and answers and is used to gather information about the opinions, behaviors, demographics, lifestyles, and other reportable characteristics of the population of interest. What's the difference between a poll and a survey? Statisticians don't make a clear distinction between the two, but what people call a poll is typically a short survey containing only a few questions (maybe that's how researchers get more people to respond — they call it a poll rather than a survey!). But for all intents and purposes, surveys and polls are the same thing.

You come into contact with surveys and their results on a daily basis. Surveys even have their own television program: The game show Family Feud is completely based on surveys and the ability of the contestants to list the top answers that people provided on a survey. Contestants on this show must correctly identify the answers provided by respondents to survey questions such as, "Name an animal you may see at the zoo" or "Name a famous person named John."

Compared to other types of studies, such as medical experiments, surveys are relatively easy to conduct and aren't as expensive to carry out. They provide quick results that can often make interesting headlines in newspapers or eye-catching stories in magazines. People connect with surveys because they feel that survey results represent the opinions of people just like themselves (even though they may never have been asked to participate in a survey). And many people enjoy seeing how other people feel, what they do, where they go, and what they care about. Looking at survey results makes people feel connected with a bigger group, somehow. That's what pollsters (the people who conduct surveys) bank on, and that's why they spend so much time doing surveys and polls and reporting the results of this research.

Getting to the source
Who conducts surveys these days? Pretty much anyone and everyone who has a question to ask. Some of the groups that conduct polls and report the results include:

News organizations (for example, ABC News, CNN, Reuters)
Political parties (those in office and those trying to get into office)
Professional polling organizations (such as The Gallup Organization, The Harris Poll, Zogby International, and so on)
Representatives of magazines, TV shows, and radio programs
Professional organizations (such as the American Medical Association, which often conducts surveys of its membership)
Special-interest groups (such as the National Rifle Association)
Academic researchers (who conduct studies on a huge range of topics)
The U.S. government (which conducts the American Community Survey, the Crime Victimization Survey, and numerous other surveys through the Census Bureau)
Joe Public (who can easily conduct his own survey on the Internet)
Not everyone who conducts a poll is legitimate and trustworthy, so be sure to check the source of any survey in which you're asked to participate and for which you're given results. Groups that have a special interest in the results should either hire an independent organization to conduct (or at least to review) the survey, or they should offer copies of the survey questions to the public. Groups should also discuss in detail how the survey was designed and conducted, so that you can make an informed decision about the credibility of the results.

Surveying what's hot
The topics of many surveys are driven by current events, issues, and areas of interest; after all, timeliness and relevance to the public are two of the most attractive qualities of any survey. Here are just a few examples of some of the subjects being brought to the surface by today's surveys, along with some of the results being reported:

Does celebrity activism influence the political opinions of the American public? (Over 90% of the American public says no, according to CBS News.)
What percentage of Americans have dated someone online? (Only 6% of unmarried Internet users, according to CBS News.)
Is pain something that lots of Americans have to deal with? (According to CBS News, three-quarters of people under 50 suffer pain often or at least some of the time.)
How many people surf the Web to find health-related information? (About 98 million, according to The Harris Poll.)
What's the current level of investor optimism? (According to a survey by The Gallup Organization, it should be called investor pessimism.)
What was the worst car of the millennium? (The Yugo, according to listeners of the NPR radio show Car Talk.)
When you read the preceding survey results, do you find yourself thinking about what the results mean to you, rather than first asking yourself whether the results are valid? Some of the preceding survey results are more valid and accurate than others, and you should think about whether to believe the results first, before accepting them without question.

Making an impact on lives
Whereas some surveys are fun to look at and think about, other surveys can have a direct impact on your life or your workplace. These life-decision surveys need to be closely scrutinized before action is taken or important decisions are made. Surveys at this level can cause politicians to change or create new laws, motivate researchers to work on the latest problems, encourage manufacturers to invent new products or change business policies and practices, and influence people's behavior and ways of thinking. The following are some examples of recent survey results that can impact you:

Teens drive under the influence: A recent Reuters survey of 1,119 teenagers in Ontario, Canada, from grades 7 through 13 found that, at some point during the previous year, 15% of them had driven a car after consuming at least two drinks.
Children's health care suffers: A survey of 400 pediatricians by the Children's National Medical Center in Washington, D.C., reported that pediatricians spend, on average, only 8 to 12 minutes with each patient.
Crimes go unreported: According to the U.S. Bureau of Justice 2001 Crime Victimization Survey, only 49.4% of violent crimes were reported to police. The reasons victims gave for not reporting crimes to the police are listed in Table 1.

http://www.dummies.com/how-to/conten...-in-polls.html

Follow the links:

http://www.dummies.com/how-to/conten...ntire-pop.html

http://www.dummies.com/how-to/conten...tatistics.html

TheMercenary 02-14-2009 10:31 PM

Page 8

http://books.google.com/books?id=il9...lt#PRA3-PA8,M1

TheMercenary 02-14-2009 10:33 PM

Surveys tend to be weak on validity and strong on reliability. The artificiality of the survey format puts a strain on validity. Since people's real feelings are hard to grasp in terms of such dichotomies as "agree/disagree," "support/oppose," "like/dislike," etc., these are only approximate indicators of what we have in mind when we create the questions. Reliability, on the other hand, is a clearer matter. Survey research presents all subjects with a standardized stimulus, and so goes a long way toward eliminating unreliability in the researcher's observations. Careful wording, format, content, etc. can reduce significantly the subject's own unreliability.

http://writing.colostate.edu/guides/...vey/com2d2.cfm

TheMercenary 02-14-2009 10:37 PM

How Many Subjects Do I Need for a Statistically Valid Survey?

by Daryle Gardner-Bonneau, Ph.D.
Office of Research
Michigan State University/Kalamazoo Center for Medical Studies
Reprinted from Usability Interface, Vol 5, No. 1, July 1998

Beware of people who give quick, pat answers in response to the question - "I’m doing a survey. How many subjects do I need?" They probably haven’t a clue as to what they’re talking about.

There aren’t any valid quick answers to this question. I work in the medical domain and advise faculty/residents/medical students on sample size determination for survey research studies all the time because, in medicine, survey results are often discounted and are not publishable unless you can support/validate the decision you made regarding sample size. We do this through power analysis, and except for the simplest power analyses, it's good to have the advice and assistance of a statistician.

That said, I can tell you how we generally approach the problem for surveys and what information a statistician needs to do a power analysis to determine sample size.

Usually, surveys involve a number of hypotheses. You do a power analysis and get a sample size estimate with respect to each hypothesis, but I usually ask folks to give me the two or three most important survey questions or, more specifically, hypotheses, they want to explore. We do power analyses for those, get a sample size estimate for each one, and from there make a decision as to the sample size for the survey as a whole.

Here's an example to give you some idea of what your statistician needs to know to determine the sample size for a survey. Let's say you're looking for a difference in patient satisfaction between two departments in a hospital - obstetrics and cardiology - and in your survey patients are asked to rate their satisfaction on a scale from 1 to 100. To determine how many patients to sample, the statistician needs information/estimates with respect to the following questions:

1. What do you consider an "important" difference in satisfaction ratings that you'd like to be able to detect between the two departments (e.g., 10 points? 20 points?)?

2. What do you think the variability is in satisfaction ratings?
Note: This might be a tough question to answer, and in the absence of any data you may have to guess. But what you might use, for example, is the standard deviation of ratings in the last survey of patient satisfaction you did, unless there was something more specific available.

3. What is, in your mind, an acceptable probability of an alpha error - an alpha error meaning that you will see a statistically significant difference in the samples, when no difference actually exists in the populations? This is often set by convention at .05.

4. Similarly, what is an acceptable probability for a beta error - that you may NOT find a statistically significant difference between the samples when there actually is a difference in the populations? This is also often estimated by convention as .20, .15, or .10, the first of these being the most common.

If you can answer these four questions, the statistician can then estimate the number of obstetrics and cardiology patients you need to sample. Sometimes, when we're really "iffy" on the answer to a question, we'll run several power analyses, say, with different values for the alpha, beta, and/or the variability estimates just to see how these variables affect the final result (i.e., the sample size estimate). This can be an especially useful exercise when there are tradeoffs that must be considered (e.g., when the cost per survey administered is significant).

One word of caution: The estimate given to you by the statistician is the number of subjects from whom you need valid data. This number is going to be less than the number of people you actually approach with the survey, because some will fail to respond and some may respond inappropriately, leaving their data unusable. Referring to the example above, if the statistician tells you that you’ll need 65 cardiology patients and 65 obstetrics patients, and you know from past experience that the non-response rate is 25%, you want to send your survey to at least 87 cardiology patients and 87 obstetrics patients (65 divided by 0.75, rounded up) in order to receive 65 responses from each group. Hopefully, if your survey is well-designed, all of the responses you receive will be valid...but that’s another issue.
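The procedure described above can be sketched with the standard normal-approximation formula for a two-group comparison. The figures below (a 10-point difference, a standard deviation of 20, the conventional alpha of .05 and power of .80) are illustrative assumptions, not the article's data; software that uses the t distribution will return slightly larger numbers, such as the 65 per group quoted above.

```python
# Sketch of a two-sample power analysis (normal approximation), plus the
# non-response inflation described in the article. Illustrative numbers only.
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Subjects per group to detect a difference `delta` (two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha = .05
    z_beta = NormalDist().inv_cdf(power)            # e.g. 0.84 for power = .80
    return math.ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

n = n_per_group(delta=10, sd=20)   # ~63 per group by this approximation
to_mail = math.ceil(n / 0.75)      # inflate for a 25% non-response rate
```

Running the same function with different alpha, beta, or variability estimates is exactly the "several power analyses" exercise the article mentions for seeing how sensitive the sample size is to those inputs.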

The rationale is pretty much the same for any power analysis, though I've given you a fairly straightforward and simple example. The calculations can get "hairy" once you have more than two comparison groups, for example, but there are computer programs to help with that, and statisticians generally know this area pretty well.

The best source of information about power analysis and sample size estimation is Jacob Cohen’s book, Statistical Power Analysis for the Behavioral Sciences (Erlbaum). First published in 1969, revised, and published again in a second edition in 1988, this book is still considered the "Bible" among those who do power analysis. A short, highly readable, basic treatment of the subject, which may suffice nicely for the simpler power analysis problems, is found in the book, How Many Subjects? by Helena C. Kraemer and Sue Thiemann (Sage Publications, 1987). Finally, for those who feel confident doing their own power analyses without the guidance of a statistician, there is some excellent software available. nQuery Advisor, from Statistical Solutions Ltd., does a power analysis for almost any research design situation. It costs several hundred dollars, but is certainly worth the price for those who must do these analyses quite often. For more information about nQuery Advisor, contact the company’s Boston office at 1-800-262-1171, or visit their web site (http://www.statsolusa.com).

http://www.stcsig.org/usability/news...ysubjects.html

TheMercenary 02-14-2009 10:43 PM

Disadvantage of the online survey

Disadvantages include the problem of being unable to accurately monitor the people completing the survey. Paper-based surveys tend to be more controlled in that the cohort is normally invited to participate in the research, after which the instrument is sent to them to complete. An online survey, on the other hand, is potentially available to anyone who has access to a particular internet site. The online survey also has the disadvantage that it can only be completed by people who have computer access, although with increasing internet availability this is less of an issue than in the past. Further, some online surveys may not be appropriate when specific cohorts of respondents are required - for example, in surveying lower socio-economic groups or geographically isolated communities. There is also the assumption that people who complete online surveys have a certain level of computer proficiency and the confidence first to locate the required web site and then the skills to complete the instrument. Another potential problem lies in identifying an appropriate cohort and the most appropriate way to advise respondents on how to complete the survey. In these days of spam email, people can become cynical about 'another survey' and therefore disregard it, potentially biasing the results obtained.

http://ausweb.scu.edu.au/aw04/papers...ton/paper.html

For the record I never answer truthfully in either on-line polls or telephone polls.

TheMercenary 02-14-2009 10:46 PM

The World of Statistics is not so Cut and Dry
People are bombarded by statistics on a daily basis and many fall for them, hook, line and sinker. Why? Because most people do not understand how statistics work.

Funny thing about statistics...they are usually very biased, slanted or give a poor representation of the real world and most people's situations.

See, I took a few statistics classes in college and they taught us some interesting things...

Statistics LIE

People LIE about statistics

Statistics are taken from a SAMPLE demographic of usually 1200 people (and there are how many people in the US??? A little disproportionate, don't ya think?)

There are books about how to lie with statistics, including the book "How to Lie with Statistics" (Darrell Huff)

Statistics take for granted that there is a "typical" price or behavior, but, remember, it is out of the SAMPLE of 1,200 - does it apply to you? You actually have less than a 3% chance that it does (based on US population of 301,139,947 as supplied by CIA World Fact Book - July 2007)
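For context, the "3%" usually attached to samples of roughly 1,200 people is the poll's worst-case margin of error at 95% confidence, which depends on the sample size rather than on what fraction of the population was sampled. A minimal sketch of that calculation:

```python
# Worst-case margin of error for a simple random sample of proportions.
# Assumes p = 0.5 (the most conservative case) and 95% confidence.
import math
from statistics import NormalDist

def margin_of_error(n, confidence=0.95, p=0.5):
    z = NormalDist().inv_cdf((1 + confidence) / 2)  # 1.96 for 95% confidence
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(1200)   # about 0.028, i.e. roughly +/- 3 points
```

Note that `margin_of_error(1200)` is the same whether the population is 300 thousand or 300 million, which is why pollsters consider 1,200 respondents adequate for national samples (assuming the sample is genuinely random, which is the real sticking point).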

Statistics often leave out pertinent information

For instance, a scholarly journal in 1995 stated that "every year since 1950 the number of American children gunned down has doubled." (The author claimed that the statistic came from the Children's Defense Fund.)

Hmmm, let us take that into consideration for a moment. Suppose just 1 child was gunned down in 1950. In 1951, the number of children gunned down would have been 2. In 1952, the number would have been 4 (remember, we are doubling!) and so on. Well, by 1965, it would have been 32,768 children gunned down, but in 1965 the FBI identified only 9,960 criminal homicides in all of the US, adult and child victims combined. To cut to the chase, by 1995, when this article was published, the annual number of children gunned down would have been more than 35 trillion. Where are all of these extra children? And why haven't we heard of this mass gunning down of children that escalates so exponentially?

Because it is flawed statistics, and it demonstrates my point that people use statistics to twist "facts" to suit their soapbox rally or rant of the moment. The claim is usually very weak when you look at the specifics - and this includes governmental and non-profit statistics.

In truth, the Children's Defense Fund did indeed publish a statistic regarding children being gunned down. In The State of America's Children Yearbook - 1994, it was stated, "The number of American children killed each year by guns has doubled since 1950."

Difference in wording equals different meaning. It is just a matter of twisting the statistics to suit your needs.
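The arithmetic behind the doubling example above is easy to verify: starting from one child in 1950 and doubling annually gives 2 to the power of (year minus 1950).

```python
# The "doubling every year" claim, checked: 1 child in 1950, doubling annually.
def children(year):
    """Children gunned down in `year` if the count doubled every year from 1 in 1950."""
    return 2 ** (year - 1950)

children(1965)   # 32768, already above 1965's 9,960 total criminal homicides
children(1995)   # 35184372088832, i.e. more than 35 trillion
```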

So, how do you make sure that YOU do not fall for faulty statistics?

1. Be wary of statistics spouters who fail to direct you to the exact location where you can view the statistics for yourself. (book, article, journal, link)

2. READ the fine print that explains the sample (for polls and certain statistics) and situation, including environment, geographical region, ages, etc.

3. Don't believe everything that you read. If your source does not have information to divulge the conditions of the survey or poll, does not provide a link to how the poll or survey was conducted or other pertinent information that allows you to discern the validity of the poll or survey, disregard it as bunk.

4. Ask these questions:

Where did the data originate from?

Who conducted the survey?

Does the administrator of the survey have an ulterior motive for slanting the results in a particular direction?

How was the data collected?

What questions were asked?

How were the questions asked?

Who asked the questions?

5. Be careful of comparisons. When two things happen at the same time, it does not mean the two are necessarily related. This is basic logic, but many people unskilled in logic and statistics rush to put together what they erroneously think are two and two. It ain't equaling four, that is for sure! Many people use this slanting of information to "prove" their point. Politicians are famous for it, but people desperately grasping at straws to substantiate an argument are very guilty of it as well.

6. Watch for numbers that are taken out of context. Affectionately referred to as "cherry picking," this slanting refers to adjusting the analysis so that it concentrates solely on the data that supports a specific claim and ignores or shuts out everything else. In other words, certain "facts" that suit the person's claim are "cherry picked" or selected while other pertinent information is swept under the rug.
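Point 5's warning about comparisons can be made concrete with a toy example: two series that merely move together over time (hypothetical monthly ice-cream sales and drowning counts, both driven by summer) correlate almost perfectly, yet neither causes the other.

```python
# Two made-up series that trend together correlate strongly without any
# causal link between them (both simply peak in summer).
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

ice_cream = [20, 40, 80, 120, 110, 60]   # hypothetical monthly sales
drownings = [2, 4, 7, 11, 10, 5]         # hypothetical monthly counts
r = pearson(ice_cream, drownings)        # close to 1.0: correlated, not causal
```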

One of the primary things that statistics teaches us is to be careful with "averages." By definition, half of any population falls at or below its median - if 50% of pet owners rate above the median on responsibility, 50% must rate at or below it - and changing the definition doesn't help, because that property follows from the median itself, not from bell-curve graphs.

This, in turn, leads us to the next place where people have problems interpreting statistics: they want to make the data fit the normal distribution. That is significantly flawed because many distributions are non-normal, and methods designed for normal distributions are usually not appropriate for distributions that are blatantly non-normal.

Finally, most people do not understand the terminology behind statistics. They assume that the term "mean" and the everyday word "average" are interchangeable. In fact, "mean" is a precise mathematical term, while in the mathematical context - the context in which it is used when referring to statistics - "average" is a looser word that can refer to the mean, the median, or the mode.

So, what do these terms mean?

Mean - a number that typifies a set of numbers

Median - the middle value of a distribution

Mode - the value or item occurring most frequently in a series of observations or statistical data

These two examples will aid in understanding this:

Set 1

2, 5, 5, 6, 9, 12, 13

Mean - 7.43 (typifies the set - add the numbers, divide by 7)

Median - 6 (the number directly in the middle of the distribution)

Mode - 5 (occurs most frequently in the set)

Set 2

4, 5, 5, 5, 8, 12, 86

Mean - 17.857

Median - 5

Mode - 5
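The two sets above can be checked with Python's statistics module. Note how the single outlier (86) in Set 2 drags the mean far from the median and mode, which is exactly why "average" needs unpacking:

```python
# Mean, median, and mode for the two worked sets above.
from statistics import mean, median, mode

set1 = [2, 5, 5, 6, 9, 12, 13]
set2 = [4, 5, 5, 5, 8, 12, 86]   # the 86 is an outlier

round(mean(set1), 2), median(set1), mode(set1)   # (7.43, 6, 5)
round(mean(set2), 3), median(set2), mode(set2)   # (17.857, 5, 5)
```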

Mark Twain popularized the saying "There are three kinds of lies: lies, damned lies and statistics," which he attributed to Benjamin Disraeli. Numbers are provocative and have a certain power, but in the wrong, uneducated hands they are nothing more than a mess. Even accurate statistics can be used to strengthen inaccurate arguments, and honesty and accuracy are thereby compromised.

http://hubpages.com/hub/Understanding_Statistics

TheMercenary 02-14-2009 10:47 PM

Any Poll Can Be Manipulated To Support An Agenda
by Ed Garvey

There are lies, damned lies and then there are statistics. That old saw comes to mind every time I hear about another poll. Polls have become a wonderful substitute for serious analysis of current events. Who cares about social justice, women's rights, tuition rates or civil liberties if news people can report the percentage of the American people on one side or the other? Poll results are almost as compelling as crime on the late news.
Even more interesting, those reporting on the poll results never spend a minute telling the audience about the bias of the group releasing the poll, their experience or track record or how they are being used. Polls have become an incredibly effective weapon in the hands of spin doctors. If they can show that a big majority supports their position, the media need only report that the verdict is in, the results are clear and there is no need to examine the underlying premise. Bush wins Florida.

And polls taken privately guide our elected leaders. That is a certainty. Bill Clinton never took a position without hearing from his pollster and Bush is worse. The motto of Al From and his Democratic Leadership Council is to ride with the majority on all issues. "Why fight it?" is their guiding principle. If the majority of the nation supports NAFTA, get behind it.

And so it goes. Polls guide politicians and polls make the radio and television reporter's job easy. They are the modern-day oracle at Delphi, the elders discussing great issues in the temple, or political parties adopting and following platforms. (Ah, if Socrates had only listened to his pollster.) No need for a static party platform. After all, if a plank is adopted in June, public opinion could change by August so why get stuck with an old poll?

Knowing the significance of polls, those in power use their ideological and economic partners to shape opinion in advance of a poll to help pass legislation or detain people who don't look like the majority. Most polls don't just "happen." They are issued by some group with an ax to grind. When do they take the poll and when do they release it? Are there any rules? Will the right-wing Bradley Foundation front calling itself the Policy Institute issue poll results showing that the vast majority of Wisconsinites oppose school vouchers for religious schools? Hardly, when the now-departed Michael Joyce and his compliant board gave millions in support of this privatization effort. The institute releases results only when they support its agenda.

If the institute finds that time after time the majority opposes vouchers, then it will ask about the failures of public schools, thereby advancing the agenda through the back door. Then the "nonprofit Wisconsin Policy Institute" releases a poll showing that an alarming number of citizens are unhappy with public schools. What to do? To get started, how about an experiment with vouchers in Milwaukee for poor African-American children? Ease people into this privatization effort and before they wake up, public schools will be reserved for those very poor African-American kids used to justify their efforts. It is that simple.

What questions are asked? Is the University of Wisconsin School of Journalism contacted? Of course not. Independent experts are not asked for advice because a poll is not about objective information, it is about propaganda.

Polls have become as dangerous as 30-second advertising spots for candidates. Now, 30-second spots are used to "educate" the electorate based on private polls that inform the perpetrator which buttons to push to move large numbers of people to one side of an issue. Then a later poll can be released showing the new opinion and thus convincing the people that the leaders are following them. How local newscasters will respond is a given.

All any group with an agenda has to do is make sure it gets the results right. But that's easy. If it crafts the message to get the desired response and fails, it has three choices. Kill the results, release altered figures, or go back to more commercials to move the public like Pavlov's dogs to the "correct" position.

It is almost impossible to listen to radio or television without having the incredible message driven into one's brain that 80 percent (or is it 90 percent?) of the American people agree with John Ashcroft's plan to eliminate the Sixth Amendment right to an attorney, to eliminate the Fourth Amendment right against unreasonable search and seizure, and to hold people who look different from Laura and W incommunicado for weeks even after a judge finds there is no evidence against them. We are told day in and day out that there should be no presumption of innocence and that public trials are too messy when fighting the modern day communists called terrorists.

None of the media finds out who asked what questions to whom. No one asks for the underlying data to back up the assertions. No one asks or ever learns how many polls were taken with the results remaining silent. And all this before the fundamental question: "Suppose 80 percent believe that the Bill of Rights should be eliminated. Is that all you need to report or might we discuss, for a moment, what those rights were intended to protect?"

Had Lincoln taken a poll before issuing the Emancipation Proclamation, or had George Washington asked about the percentage of colonists who would be willing to take up arms, or had Martin Luther King Jr. asked what percentage of the American people supported direct action in opposition to segregation laws, we would be a much different nation. John Ashcroft recently declared, "To those who scare peace-loving people with phantoms of lost liberty, my message is this: Your tactics only aid terrorists." To those elected officials who were neither laughing out loud nor silently crying when Ashcroft spoke those words, I say, forget the polls and find your moral compass.

To the media: Stop telling us what we think. Start explaining why we have the Bill of Rights.

http://www.commondreams.org/views01/1212-05.htm

TheMercenary 02-14-2009 10:49 PM

HOW POLLS ARE MANIPULATED
by Simon Owens

During the Republican National Convention, NOW, a PBS weekly TV news magazine, posted an unscientific poll on its website asking viewers to vote on whether they thought vice presidential nominee Sarah Palin was qualified for the position. Like most polls the show posts every week, it was taken down from the front page and replaced by a new one after gathering a few thousand votes.

But in the weeks after it was removed, someone unearthed the still-present URL for the poll and linked to it at the conservative website, Free Republic. The site has become famous for sending hordes of readers to crash unscientific online polls, so much so that the act of doing so has been termed “freeping.” In this particular instance, members of the Free Republic felt that the poll showed a sign of bias, and the poster linked to it to “provide them with a result they did not expect.”

“Send this email to every non-liberal you know,” the person wrote. “Let’s get some balance into this survey group. This is the easiest vote you will ever make. It takes literally two seconds.”

Predictably, the numbers on the poll in favor of Palin began to move up, but during the freep several liberal websites got wind of it. Typical of the blogosphere, the poll became a link-fest version of tug-of-war. Close to a hundred bloggers linked to it and liberals and conservatives began forwarding email chains to their friends asking them to vote (I actually received one of these emails less than an hour before I sat down to begin writing this article).

One of the bloggers who eventually linked to the poll was PZ Myers. An associate professor of biology at the University of Minnesota-Morris, Myers is arguably the most popular atheist and science blogger on the Net. His blog, Pharyngula, is published as part of the Science Blog network (owned by Seed Media Group) and averages more than 50,000 readers a day. In recent months, he and a small group of other atheist bloggers have begun a constant and often-successful campaign to crash unscientific online polls, usually to counterbalance or push back against what they see as either anti-science or overly-dogmatic beliefs.

After Myers finds a poll dealing with religion or science on a news website, he’ll provide a link to the site along with a pithy or mocking comment. “The Edmonton Sun asks, ‘Should God be left out of the University of Alberta’s convocation speech?’” he noted in one such post recently. “I should think so. They should also leave Odin, Zeus, and the Tooth Fairy out of it, unless it’s to make a joke. Surprisingly, though, 67% of the respondents disagree with me so far. Will that have changed when I wake up in the morning, I wonder…?”

Why Poll Crash?
I spoke to the science blogger, and Myers told me that when he links to a poll he can typically swing the results by 10,000 to 20,000 votes in a particular direction. Indeed, within an hour after he linked to the Sun’s poll, the results went from 67 percent of the respondents saying “no” to 91 percent “yes.” Though he has participated in poll crashes dating back to over a year ago, he has only begun conducting them on a semi-daily basis within the last month and a half.

“It’s a very popular thing with some people because they can flex a little itty bitty muscle, and a group going there and doing something shows we have some clout, a clout in expressing an opinion,” Myers said. “There have been a couple places where the polls are so poorly done and so easily manipulated, and people go nuts; they write a script and send in hundreds of thousands of votes. Which is kind of cheating, but the whole point is that these polls are silly and useless anyway.”

The bloggers’ motivation in linking to these polls, he said, was, in essence, to delegitimize them. Because these polls are unscientific and therefore largely biased toward the demographic of the website on which they’re posted, Myers argued that poll crashing makes it harder for people to use the polls simply to reaffirm their own biases.

“For instance, if I put a poll on my blog asking whether evolution is true, everyone would say ‘yes’ with just a few outliers,” he explained. “If you put it on something like [Christian conservative group] Focus on the Family, everyone there will say ‘no.’ So the point is to show that these are highly prejudicial polls, they’re sampling unscientifically, and they’re really kind of worthless. And you can’t use those results to say anything at all. I mean, what can you say about such a poll?”

But the inaccurate data isn’t the only problem that Myers has with these polls; he also detests the poor construction of many poll questions and the limited answer choices given. It’s not uncommon for him to link to a poll while issuing the caveat that — due to the perceived inanity of the question or answers — he doesn’t know which choice his readers should pick.

In speaking to Myers, I learned that his aversion to these polls sometimes extends even to their scientific counterparts. He argued, as have others, that media coverage of elections is much too poll-obsessed and that covering a campaign in such a way perpetuates misconceptions about why voters should choose a particular candidate.

“If you look at the major networks’ coverage of the election, for instance, what you find is that they turn it into a horse race,” he said. “All they report is who’s ahead, who’s behind and by how much. It is distracting and detracts from the coverage of the actual issues. So that’s another reason to get in there and disrupt these polls: it’s because the polls really don’t matter. You shouldn’t vote on whether someone is ahead or not. What you should be voting for is whether they have policies that you agree with.”

Measuring Enthusiasm
I spoke to a few of the people responsible for publishing polls that Myers had crashed, and surprisingly there were no bitter feelings toward bloggers who deliberately try to skew their results. In fact, both people I interviewed said they welcomed such online participation, arguing that poll crashes allowed them to gauge the level of enthusiasm for a particular issue.

Joel Schwartzbert, the director of new media for NOW, outright rejected the notion that the poll question on the website - whether Sarah Palin was qualified to be vice president - was somehow biased or leading. When the news magazine formulates each week's poll question, he said, it bases it on a pressing issue that has become part of the national conversation. In this particular instance, there had been a sizable amount of discussion during the Republican National Convention over Palin's qualifications for the position.

“As an example, during the Democratic convention, we asked people if they thought the party is unified,” he told me. “So we did not pull this issue out of a vacuum, it was the most relevant and talked-about issue. When the convention ended, that poll was retired. We don’t link to old polls, nor do we have an archive of old polls. So what people did was they found that poll sort of drifting in the vast outer space of the Internet, and looking at the source code found the URL, and that’s what became viral. It did not even begin to become viral until it was formally retired on our website.”

To date, more than 50 million votes have been registered on the poll, both from constant freeping and from bots running rampant and falsely inflating the numbers. Eventually, NOW changed the poll to track a user’s cookie so they could only vote one time per computer.
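The cookie fix mentioned above can be sketched in a few lines. The names and structure here are hypothetical illustrations, not NOW's actual code: a vote is rejected if the browser already presents a "voted" cookie, and since cookies are trivially cleared, this discourages rather than prevents repeat voting, one more reason such polls stay unscientific.

```python
# Minimal sketch (hypothetical names) of one-vote-per-browser deduplication
# via a cookie: reject any request that already carries "voted".
from http.cookies import SimpleCookie

votes = {"yes": 0, "no": 0}

def handle_vote(choice, cookie_header):
    """Return (accepted, Set-Cookie string or None) for one vote request."""
    jar = SimpleCookie(cookie_header or "")
    if "voted" in jar:                      # this browser has voted already
        return False, None
    votes[choice] += 1
    out = SimpleCookie()
    out["voted"] = "1"                      # mark the browser for next time
    return True, out["voted"].OutputString()

accepted, set_cookie = handle_vote("yes", "")          # first vote: accepted
again, _ = handle_vote("yes", set_cookie)              # repeat vote: rejected
```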

Because of this one poll, Schwartzbert said, both NOW and PBS as a whole experienced traffic that surpassed previous viewership records by wide margins. And in attracting all that traffic, they were able to drive readers to other NOW content linked at the bottom of the Palin poll. In this respect, the poll engaged the online community and exposed a much larger audience to more reputable and scientific information.

I asked the new media director about the unscientific nature of such polling and whether it could be misleading in displaying public opinion.

“I don’t find any online polls to be accurate enough to be worthy of public broadcast,” Schwartzbert said. “We do not announce these poll results on air. If we were going to announce them on air you can be assured that it’d be a scientific poll that’d be very official. We don’t offer up these results to measure scientifically any demographics. The point of these polls and other polls is so that people can register their vote…And the poll engine has a way to generate enough excitement to look at our investigative reports, which are still very thoroughly vetted and meticulously fact checked and very scientific.”

Schwartzbert said that people like polls in the same way that they like memes and lists, and part of using new media is understanding that “these other devices are a way to get people to come to your table. But you want to rely on your bread and butter, and, in our case, the video investigations are the meat of what we do, and what best serves our mission. So the poll is a way for people to express themselves and bring people to our larger core mission, which is to reveal what’s going on in our democracy.”

http://www.socialistunity.com/?p=3056

TheMercenary 02-14-2009 10:51 PM

Herald pulls manipulated poll

By Brent Curtis Rutland Herald - Published: September 30, 2004


RUTLAND — Editors at the Rutland Herald pulled an online poll Monday night after a local radio talk show host urged his listeners to skew the results.

On Saturday, the Herald posted the question "Who is your likely choice for governor in the Nov. 2 vote?" The poll included the top four candidates running for that office — Republican Gov. James Douglas, Democrat Peter Clavelle, Liberty Union candidate Peter Diamondstone and Libertarian Hardy Machia. The informal poll was intended to remain posted at www.rutlandherald.com until Friday.

Editors at the Herald decided to remove the poll Monday night after Tim Philbin, host of "On the Air with Tim Philbin," told his listeners that the poll was biased and described how they could skew the results by voting more than once.

Philbin, who often discusses politics on his morning show broadcast by WSYB in Rutland, said he was appalled to check the poll Monday to find that Clavelle had the lead after about 450 people had voted.

"The point is, please don't tell me that represents reality," Philbin said after his show Tuesday. "That's called manufactured news."

Philbin said the unscientific poll was a sham because it flew in the face of other polls he has seen this election year that show Douglas in a commanding lead.

He said he suspected the newspaper's reason for posting the poll was to shape public opinion, not reflect it.

"If they have a poll that can be manipulated, the results of which should represent reality, you can manipulate reality to represent what the press wants," Philbin said.

To prove his point, he told his listeners how to effectively stuff the ballot box.

Hours after his radio show Monday, the number of votes cast on the Web site climbed from about 450 to 1,003.

After the editorial staff at the newspaper heard about Philbin's broadcast, the decision was made to pull the plug on the poll Monday night.

"We ended the poll when we realized it had been rigged," city editor John Dolan said Tuesday. "We assumed something like this could happen in small amounts, but not this kind of organized rigging."

Dolan said the decision to withdraw the poll was not a partisan issue. He said the results would have been removed no matter which candidate's results were rigged.

While the online poll doesn't follow standard polling methods, Dolan said the results — available on the Web site and in the newspaper's Street Talk section — provided readers with a picture of public opinion and an opportunity to participate in the process.

"It's valuable because it lets people participate willingly instead of waiting to be called and asked," he said. "And while the results are not scientifically accurate, they give some indication of what the community is thinking."

Dolan also pointed out that Philbin conducts his own informal polls during his two-hour radio show.

"It's interesting to note that today, Mr. Philbin boasted about his prank and then spent the remaining hour of his show conducting a poll on the Leahy-McMullen race," he said. "One person called. We had 450 people vote in the two days before his show. Which poll was more useful? People can decide for themselves."

However, the number of participants in a poll might not be an accurate gauge of their usefulness or accuracy, according to professors at the Poynter Institute, a journalism school in St. Petersburg, Fla.

Aly Colón, an ethics group leader at Poynter, said Tuesday that online polls and polling in general could ruin a newspaper's credibility if the publication wasn't upfront about its methodology and accuracy.

"If people think it's definitive information that's pure and unbiased, then they see the results and aren't sure how the findings could happen. They will think the newspaper tried to skew the results in a particular way even if it wasn't," he said.

Al Tompkins, who teaches broadcasting and online issues at Poynter, was even more skeptical of online polling.

"The big issue with online polling is that no matter how many responses you get, it's not 100 percent accurate," he said. "What you end up with is a very lopsided demographic. It gives you a wild idea at best of what a community is thinking."

Tompkins said online polling was once a popular way for newspapers to get their readership involved. But, he said, many newspapers have evolved to using online forums and community chat rooms to have interactive discussions with their readers.

"Most of the time, I've found the online polls to be a complete waste of time because they're so wildly unscientific," he said. "They're called polls, but they're not really polls at all. One is borderline voodoo, the other is scientific polling."


Contact Brent Curtis at brent.curtis@rutlandherald.com.

http://www.timesargus.com/apps/pbcs....23/1003/NEWS02
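Tompkins's "lopsided demographic" point in the article above is, in statistical terms, self-selection bias. As a rough illustration (all numbers here are hypothetical, not from the article), a toy Python simulation shows how an opt-in online poll can badly misreport an evenly split population when one side is simply more motivated to respond:

```python
import random

random.seed(0)

# Hypothetical population split 50/50 between candidates A and B,
# but A's supporters are assumed to be three times as likely to
# respond to an opt-in online poll (illustrative numbers only).
population = ["A"] * 5000 + ["B"] * 5000
respond_prob = {"A": 0.30, "B": 0.10}

# Each person independently decides whether to take the poll.
responses = [v for v in population if random.random() < respond_prob[v]]
share_a = responses.count("A") / len(responses)

print(f"True support for A: 50%; opt-in poll shows: {share_a:.0%}")
```

With these assumed response rates the opt-in result lands around 75% for A, despite true support of 50% — and no amount of extra respondents fixes it, since the bias comes from who chooses to answer, not from sample size.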

TGRR 02-14-2009 10:56 PM

Wow.

:lol:

Someone disagreed with Merc, and he pooped a solid PAGE of cut and paste.

TheMercenary 02-14-2009 10:56 PM

There have been numerous books pointing to flaws in public opinion polls. In Superpollsters: How They Measure and Manipulate Public Opinion in America (1995), Vice President and senior analyst of the Gallup Organization David W. Moore writes, "The views that people express in polls are very much influenced by the polling process itself, by the way questions are worded, their location in the interview, and the characteristics of the interviewers." By way of an example, Moore describes a 1940 experiment which found that the use of the words "forbid" and "allow" would yield significantly different results in one question. People were asked whether (1) "the United States should forbid speeches against democracy" and whether (2) "the U.S. should allow speeches against democracy." The results: 46 percent of Americans responded "no" to the first question, yet only 25 percent of Americans agreed with the second question.

In Constructing Public Opinion: How Political Elites Do What They Like and Why We Seem to Go Along with It (2001), Justin Lewis, Professor of Communication and Deputy Head of the School of Journalism, Media and Cultural Studies at Cardiff University in Wales, writes, "It is well known in the poll literature that disparity in response can be generated by something as basic as question wording or by apparently innocuous information given by the interviewer." Supporting this statement, UCLA political science Professor John R. Zaller writes in his book, The Nature and Origins of Mass Opinion (1992): "Entirely trivial changes in questionnaire construction can easily produce 5 to 10 percentage point shifts in aggregate opinion, and occasionally double that." Citing a number of comprehensive studies, Lewis comments that the media fail to represent, especially in the United States, "the degree of support for a variety of political positions on the left from gun control to social justice issues." He argues that not only do polls inaccurately portray public opinion, but the media further distort the perception of the polls by failing to report the complete range of opinions. Lewis additionally suggests that the news media particularly favor the interests of political and economic elites in their reporting about opinion polls.
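For scale, the 5-to-10-point wording shifts Zaller describes can be compared against ordinary sampling error. A minimal Python sketch of the standard margin-of-error formula for a simple random sample (this formula is textbook statistics, not something from the excerpt above) shows why such shifts dwarf the error bar usually reported:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of 1,000 respondents:
moe = margin_of_error(1000)
print(f"±{moe * 100:.1f} points")  # roughly ±3.1 points
```

So a poll advertised as accurate to about ±3 points can still swing twice or three times that much from question wording alone — a source of error the headline margin never captures.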

Web Resources:
Wall Street Journal article featuring John Zogby of the polling company, Zogby International, Polling Isn't Perfect: Why voter surveys so often get it wrong. (11/14/02)

Article titled Public Opinion Polling Fraud (10/2003) by polling organization, Retro Poll, which "designs and performs opinion polls that look at the relationship between public knowledge and public opinion."

The Program on International Policy Attitudes (PIPA), of the University of Maryland, "carries out research on public attitudes on international issues by conducting nationwide polls, focus groups and comprehensive reviews of polling conducted by other organizations."

Articles by Institute for Public Accuracy founder and executive director, Norman Solomon:
Polls: When Measuring Is Manipulating (10/18/02)
Polls give Numbers, but Truth Is More Elusive (5/17/96)

Polling Report is "an independent, nonpartisan resource on trends in American public opinion."

The Washington Post published a list of poll research organizations and associations.

The Roper Center at the University of Connecticut offers links to polling organizations and associations at their web site.

National Council on Public Polls published the following articles:
20 Questions A Journalist Should Ask About Poll Results
Answers to Questions We Often Hear From the Public

About Polling offers information on the field of opinion research gathered from a variety of sources and commentators by the nonprofit organization called Public Agenda, which was founded decades ago by Cyrus Vance and Daniel Yankelovich.

http://www.askquestions.org/details.php?id=25

TheMercenary 02-14-2009 10:57 PM

So yes, the poll is the weakest form of statistical measure.

Redux 02-14-2009 11:20 PM

Quote:

Originally Posted by TheMercenary (Post 534697)
So yes, the poll is the weakest form of statistical measure.

LMAO :D I like the Polling for Dummies!

I'll stick with the American Statistical Association, the American Political Science Association, the Democratic Party, the Republican Party, the major media, social science research organizations, the list is endless...all of which recognize the value of polls in providing a snapshot of public opinion with a relatively high degree of accuracy.

If I think a poll is relevant to a discussion, I will post it....you can post your disclaimer.

And others following the discussion can decide for themselves.

TheMercenary 02-15-2009 12:38 AM

Quote:

Originally Posted by Redux (Post 534707)
the Democratic Party, the Republican Party,

Yea, you just hang on to those.

sugarpop 02-17-2009 11:09 PM

Quote:

Originally Posted by classicman (Post 534215)
Redux, Do you have the actual questions to that poll? Who was defined as "leaders" Were they local, national, not specified? I'm seriously interested. Polls fascinate me. I am one of those people that answer them and surveys all the time. Problem is they mostly offer some really bad choices which virtually force an answer that is usually what the pollster or their backers wanted in the first place.
Many times when I have given alternate answers, because the options were not accurate enough, the pollster stops in the middle, thanks me and moves on.
I do find it strange that the congressional approval ratings nearly doubled in the last few weeks/months according to the poll you posted.

I agree. Polls are never accurate enough. I almost always want to answer differently than any of the choices given.

sugarpop 02-17-2009 11:21 PM

Hey Merc, again, you might want to tell that guy at fivethirtyeight.com that polls are useless, because he pretty consistently accurately predicts stuff based on polls.

