Here's What's Wrong with Science...
...Most studies are designed by stupid people.
I've often thought that my ideal job, the one where I could contribute the most to humanity, would be to sit on a committee that reviewed proposed scientific studies before they were done, pointed out the logical flaws or complicating factors in their design, and helped redesign them to produce more accurate or meaningful results.

Case in point: this recent study, which claims to have found a trend whereby people with a "negative gut reaction" to a picture of their new spouse are more likely to be unhappy or divorced in the coming years than those with a "positive gut reaction." Big design flaw: they showed these people a picture of their spouse and then measured how rapidly they could pick out positive or negative words. But they never established whether the same people tended to pick out positive or negative words without being shown a picture of their spouse. My suspicion is that the people with supposed "negative gut reactions" to their spouses were just more negative overall, and a negative person is less likely to have a successful relationship. This would have been an easy thing to control for in the experiment, either by testing the participants in multiple rounds, or by splitting them into a test group and a control group that was not shown a picture of their spouse. As it is, the study is a waste.

A while back I had a long email exchange with a graduate student, who asked me point-blank why I had participated in her study, and for my opinion on why most people were refusing. I told her the honest truth: I only finished it because I have a very ingrained need to finish what I've committed to starting, and the whole time I was thinking what a stupid study it was. It was trying, roughly, to determine the relationship between poor sleep and autism, but again had many major design flaws, which I detailed for her. 
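For what it's worth, the control described above can be sketched in a few lines. This is purely illustrative: the function name and all the numbers are hypothetical, and it shows the design idea, not the study's actual method. Score each participant's word-valence reactions once without the photo and once with it, then analyze the difference, so a generally negative person isn't mislabeled as having a "negative gut reaction" to their spouse.

```python
# Sketch of the within-subject control proposed above (hypothetical data).
# Each participant is scored on positive/negative word recognition twice:
# once with no photo (baseline) and once after seeing the spouse's photo.
def photo_effect(baseline_scores, with_photo_scores):
    # The per-person difference cancels out overall negativity,
    # isolating the shift attributable to the photo itself.
    diffs = [w - b for b, w in zip(baseline_scores, with_photo_scores)]
    return sum(diffs) / len(diffs)

baseline   = [0.2, -0.5, 0.1, 0.4]   # valence scores, no photo shown
with_photo = [0.1, -0.4, 0.3, 0.6]   # same people, after the photo
print(photo_effect(baseline, with_photo))  # mean shift due to the photo
```

Without the baseline round, the second participant's low scores would look like a "negative gut reaction" to the spouse, when they're just a negative person across the board.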
Meanwhile I've participated in literally dozens of other autism studies that I can see have smaller but no less fatal design flaws. I don't really have a point here, I'm just venting a little. A good study is a thing of beauty, but they are so rare. |
Your local university's IRB is for you!
|
|
Science is not really about one study; that's a media issue. It's much more about whether that study can be reproduced, and how other studies confirm or contradict its findings. A scientist should assume that any single study can be wrong through any number of possible mistakes. This is why experiments are endlessly reproduced. Reproduction either confirms the finding and validates the science, or fails to confirm it and brings the science into question.
Also, the media is hopelessly attracted to soft sciences like psychology, sociology, and economics, and not so concerned with hard sciences such as bio, chem, and physics where more stringent methodology is used. |
Agree. The above 'study' is not science.
The people doing 'hard sciences' research are not stupid people; they are probably the brightest, most underpaid people out there. A media article does not make a fluff 'research' project science. |
Quote:
reported in the Pittsburgh Post-Gazette last month showing that if a dog wags its tail to the right side, it's feeling happy emotions... to the left denotes negative emotions. But then maybe that depends on which way the dog is facing. :rolleyes: |
My dad did a short, week-long stint reviewing grants at the NSF as part of some panel or something. He wasn't impressed with most of the applications he saw, but there were a few that were very good. He took what he saw in those good applications and used it to improve his own application writing. It paid off: most of his grants were approved after that, when previously most had been denied.
|
Quote:
Quote:
Now, another part of that definition is that we can disprove and otherwise retract it from our understanding, and that's excellent, and I have faith that we will eventually do so with this particular study. But even in the hard sciences, there are many examples of flawed studies that sit around unchallenged for decades, or even continue to be referenced after they've been rejected. Quote:
|
Dazza used to sit on a board doing exactly that. Also part of it was deciding what was relevant research for the industry.
|
Quote:
My dog wags its tail up and down! I suspect it may have been crooked in utero, though, or perhaps broken very soon after birth. Either way, it's just her funny thing. |
It's like Winston Churchill's assessment of democracy:
Quote:
|
Quote:
There isn't a fail-safe mode to prevent people who deliberately falsify data, as Wakefield did. The scientific community has heretofore relied on its members to meet a minimum standard of basic honesty. Since Wakefield, editorial committees have to wrestle with the question of whether everything in a submission is false.

Researchers, on the whole, do not demand a pedestal. As I said before, these people are some of the most brilliant among us, and they are drastically underpaid and neglected. We benefit from their work and never bother to ask who deserves our thanks. The one thing you can count on is that 'science', i.e. serious researchers, will demand truth and honesty. Over time, errors and deliberate falsehoods will be exposed and corrected. People who turn away from 'science' because things change over time are wholly mistaken: it is the honesty of true research that demands that errors, once exposed, be admitted. It's the charlatans who claim infallibility and demand unquestioning adherence.

Clod - what exactly is your experience of scientific research? You say that it's fucking amazing but plagued with human error. Have you any first-hand experience with it? What are your credentials? My credentials have been questioned and examined in this forum very recently, as being pertinent to my qualification to comment on scientific findings. In the same spirit, I would like to know your educational background. |
You misunderstand me completely, ortho. I am not "turning away from science." Quite the opposite. I desire for science's mechanism for ruling out flawed data to work faster and more efficiently, that's all. I am impatient.
I wasn't the one who questioned your qualifications, and I have no desire to get into some sort of personal pissing match with you. I'll readily admit I don't have the qualifications you are looking for. I do hold two separate bachelor's degrees, one of which included upper-division classes in physics, biology, and chemistry. I got a nearly perfect score on my SATs, and when they totaled up all the college credits I accrued through various testing (including AP tests in physics, biology, and chemistry), I entered as a sophomore, halfway through my second year. I finished my two degrees in three years total. But no, they weren't hard-science degrees.

There are smart researchers out there. The best man at my wedding got his PhD studying the physics of muon spin resonance. He's brilliant. I also know he regularly complained about how several of the other grad students he had to work with were idiots. "Idiot" is relative. If I find a flaw in a study's design in less than five minutes, that researcher is an idiot to me, regardless of how smart they may seem to you. Not all researchers are brilliant, and if you think they are, you are missing the point of the system that allows for their work to be weeded out over time. |
Quote:
Quote:
However, a much larger topic applies. Some researchers are literally falsifying facts - as Dr Wakefield did to claim autism is created by the MMR vaccine. (And a stripper named Jenny McCarthy refuses to admit she was brainwashed by the fraud.) Or as Dr Schön did in research on organic transistors at Bell Labs. In that case, it took five years for the fraud to finally be exposed - a major setback for once-promising research that could have put tablets (i.e., the iPad) on an electronic sheet of paper.

The problem is discussed in much greater detail in various publications. For example, the social sciences (i.e., psychology) suffer from an often misunderstood use of statistics. Not necessarily fraud; just bad science. The confusion extends to a controversy over free publications ("minimal-threshold" journals) versus the mainstay peer-reviewed publications (i.e., the Lancet). The former are about getting as much science out to others as fast as possible. Peer review is about getting it right. The problem is that many who peer-review papers do not do a good job, especially since reviewers are not paid and get little credit for the work.

The Economist defines two types of errors. Type 1 is the false positive, created by concluding something is true when it is not. Type 2 is the false negative - for example, assuming the 5% of data that contradicts a conclusion is just outliers and can be ignored. Worse is the amount of data that must be crunched to prove a conclusion. This makes peer review even more difficult and expensive - especially in big pharma, where trial data is not provided (the exception being GlaxoSmithKline). In another study, the authors of only 143 of 351 randomly selected papers would share their raw data for review. Granted, this is mostly in the pharmaceutical industry, where lawyers and business-school graduates now dominate top management. It may not be as flagrant in other sciences. Also not required to be shared is the software necessary to make the research possible. 
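Type 1 errors of the kind described above are easy to demonstrate with a toy simulation. This is a sketch, not anything from the articles cited: the data is synthetic, the test uses a normal approximation, and the 5% threshold is just the conventional significance level. Run many experiments in which there is, by construction, no real effect, and roughly one in twenty will still come back "positive":

```python
import math
import random

# Toy demonstration of Type 1 errors (synthetic data, normal approximation).
def p_value(a, b):
    # Two-sided two-sample z-test p-value.
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(0)
trials, false_positives = 2000, 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(50)]
    b = [random.gauss(0, 1) for _ in range(50)]  # same population: no real effect
    if p_value(a, b) < 0.05:
        false_positives += 1  # Type 1 error: "finding" an effect that isn't there
print(false_positives / trials)  # close to 0.05 by construction
```

With 1.4 million papers a year, even a perfectly honest 5% false-positive rate guarantees a steady stream of published "findings" that aren't real, which is why replication matters so much.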
1.4 million papers are published annually. The number of retractions has increased and yet still remains at about 0.2%. It is a problem compounded by what too many of us do: we don't like people who contradict. Too many of us want to feel good rather than grasp reality. And yet that is what science really needs - more papers that describe failed experiments. But if you want your paper to be published, then you had better have positive (cheery) results.

A Harvard biologist (John Bohannon) intentionally wrote a paper chock-full of what were described as clangers - intentionally fraudulent claims, written to be obviously wrong. He submitted it to 304 publications. 157 approved it for publication. Fiona Godlee of the British Medical Journal submitted a paper with eight glaring and obvious mistakes to 200 of their regular reviewers. No one identified all the mistakes. The average was only two mistakes per reviewer. Some found no mistakes. In another study spanning 14 years, 92% of reviewers found steadily fewer mistakes with each passing year. With 14 years of experience, one should identify more mistakes? Apparently not. Some years ago, Amgen tried to replicate the results of 53 published studies they considered relevant. Only six papers were confirmed. Fraud is not widespread or universal. But concerns exist over so much research time lost to so much flawed science. It is essential to reproduce the study.

Diederik Stapel was described in April 2013 by the NY Times as a fraudster in maybe 55 papers on psychology. 30 were definitively found fraudulent. The fraud was confirmed in November 2011 when two students blew the whistle. He returned his PhD to the University of Amsterdam. But worse are the dissertations of 10 PhD candidates whose research and reputations are now tarnished by then-Dr Stapel. He got away with it for so long because he did extensive preliminary research and created hypotheses that were credible. So no one questioned his research. 
Dr Schön, on the other hand, was making breathtaking conclusions. Even so, it took almost five years to discover his fraud. IBM's Watson labs (and others) repeatedly tried and failed to reproduce his results. Science also takes time to discredit. The questions are whether we have become more suspicious, or whether new 'low-threshold' internet publications have put a spotlight on the entire peer-review process. One hypothesis is that 'softer' science (i.e., psychology, sociology, pharma) is so subjective as to make fraud easier. Also critically important is the expression "publish or perish" - a reference to what professors must do to protect their university positions. Nobody wants to hear why some lines of research fail, since most of us want positive and cheery results rather than the hard reality that most experiments in groundbreaking research fail. |
Quote:
I have already pointed out that errors in research are identified and corrected with time. It's only the frauds and charlatans, like Andrew Wakefield and his ilk, who persist in the face of contradictory evidence and lead people who pride themselves on being intelligent and knowledgeable, in spite of having no education in the area, down false paths.

And, no - having good SAT scores and/or some AP courses doesn't take the place of actual graduate-level courses in the hard sciences. Nor does having had a best man with a PhD. Most intelligent people accept that they don't have expert knowledge about everything - law, for example - and will consult an expert about matters of importance outside their area of education. You have admitted that your education is not in the sciences, but you nevertheless view yourself as an expert in scientific research, qualified to dictate who is, and is not, an idiot.

Can you describe the different types of studies, the advantages and drawbacks of each, and the situations in which each is appropriate, clod? Do you understand what makes a good study? Do you understand how to analyze the data from a given study - which statistical tests are appropriate and which can't be used, and whether the results are statistically significant? Can you tell us whether this study was designed with sufficient power to render significant results? Do you know what the term 'power' refers to, in this context? If you knew anything in this area, you wouldn't have made such a statement in the first place. |
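For readers wondering what "power" means in that question: it's the probability that a study detects an effect that genuinely exists. A quick simulation sketch (synthetic data, normal approximation; the 0.3-standard-deviation effect size and the group size of 30 are arbitrary choices, not figures from the study) shows how easily an underpowered study misses a real effect:

```python
import math
import random

# Illustration of statistical power: simulate many small studies of a
# real 0.3-standard-deviation effect and count how often a two-sided
# z-test reaches p < 0.05. All numbers here are invented.
def p_value(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
hits, trials, n = 0, 1000, 30
for _ in range(trials):
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(0.3, 1.0) for _ in range(n)]  # a real effect exists
    if p_value(control, treated) < 0.05:
        hits += 1
print(hits / trials)  # the fraction of studies that detect the real effect
```

With only 30 participants per group, most of these simulated studies fail to detect the effect even though it is real: a Type 2 error. That's why "was the study designed with sufficient power?" is a meaningful question, not just jargon.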
[Rod Steiger] You are about to enter ... your data for a chi-square contingency table analysis. [/Rod Steiger]
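Since the chi-square contingency table came up: for an outcome study like the one in the OP, the analysis boils down to a 2x2 table of outcomes by group. Here is a bare-bones sketch in pure Python; the divorce counts are entirely invented for illustration. With one degree of freedom, a statistic above the 5% critical value of 3.84 suggests the two groups really do differ:

```python
# Pearson chi-square statistic for a 2x2 contingency table, pure Python.
def chi_square_2x2(table):
    # table = [[a, b], [c, d]] of observed counts
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total  # counts expected if independent
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical data: divorced vs. still married, by gut reaction.
table = [[30, 70],   # "negative gut reaction": 30 divorced, 70 married
         [15, 85]]   # "positive gut reaction": 15 divorced, 85 married
stat = chi_square_2x2(table)
# With 1 degree of freedom, the 5% critical value is 3.84.
print(stat)
```

Of course, as the OP points out, a significant chi-square only tells you the groups differ; it says nothing about *why*, which is exactly where the missing baseline control bites.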
|
Quote:
Quote:
I'm sorry I offended you, ortho. And I'm sorry you hate Wakefield so very much, and by extension me. That's kind of not my problem. |
I don't hate you, Clod. If I am angry at Wakefield for what he did, that doesn't extend to you in any way. I disagree with some of your beliefs, which is not the same as having a 'personal problem' with them. You probably wouldn't describe yourself as having a 'personal problem' with some of my conclusions, even though you disagree with them. You're an intelligent person who has devoted great time and energy to your family's health and well-being. Given your intelligence, though, I didn't expect a statement like 'Here's what's wrong with science: most studies are designed by stupid people' from you. I'm sorry I took the bait.
|
MAJOR CAT FIGHT FAIL.
|
You took the words RIGHT OUT OF MY HEAD.
|
Quote:
Here's what's wrong with government... most laws are written by stupid people. Here's what's wrong with public education... most educators are stupid people. Here's what's wrong with humanity... most people are stupid people. I believe all of the above to be true. I'm just misanthropic, that's all. |
There were some who believed that with the invention of the first motor vehicle, you could never go over 60 miles an hour because it'd be impossible to breathe. Obviously we know that's not true, but it doesn't change the fact that a scientist claimed to have proven it at the time.
Science, like all other facets of life, is an evolving creature. What seems dumb at first, might in fact be true (like flying to the moon for example). As to whether or not negative thoughts lead to negative outcomes, we all know by now that this is usually true, so in a way, it's possible to believe this study has at least some merit when it says that people with a negative view of their marriage will probably end up divorced. I would suggest you don't need to be a scientist to make that judgement though. Anyway, you girls need to kiss and make up. I think the blokes want pics too btw. eta: Ortho, don't take things too personally. It's the interwebz and we're all friends here (sorta lol), so we're allowed to make sweeping statements during moments of frustration. We all do it now and then. Some more than others. In fact, I do it ALL the time! ;) |
|
Well, I did not react positively to my husband-to-be when I first saw him, and we have been married for 11 years. Fairly happily, too.
I know that is not the point of the OP... ;) I think it's fair to say that most humans are stupid - that is, not perfect specimens who always think and react intelligently. So, of course, their scientific studies will often follow suit, especially if the motive for the study is flawed or the study just has an agenda. But that is just my 'layman's' conclusion. |
Here's an example of taking way too long to correct.
Quote:
|
Citing fraudulent studies brings this story to mind:
Quote:
|
Quote:
Quote:
They are not stupid. They are too emotional; therefore not logical. Too many want to tell you what you want to hear. |
This is what is wrong with science
Interesting comment piece in the Guardian, from early December (only just read it).
Randy Schekman, a cell biologist, and winner of the 2013 Nobel prize for medicine, considers the way the primacy of 'luxury journals' impacts on scientific research: Quote:
Quote:
It's an interesting read. http://www.theguardian.com/commentis...damage-science |
Clod, I found that a very interesting article
... mainly because I retired from my own research career before Gore invented the internet. I followed the link in the article, and was amused that the first screen was a "sign up" page. But at least it did not ask for subscription $ or an institutional certification. I was also amused that the first actual article I scanned had 100+ references. In my day, the editors would have been more concerned about saving paper!

OK, to the main points. The concept of eLife does seem different from paper journals in the length of time-to-publication. But the consolidation of reviews seems the best improvement. I do remember receiving editor- and peer-review letters requesting almost opposing changes. This seems to be something all editors could/should do, not just online journals. So, I get the better time factor, the consolidated reviews, and the editorial decisions, and now to my remaining question: what is meant by an "open-source" model?

Professional competition, jealousy, and antagonism did exist, and probably still do. But authors were usually allowed to recommend or exclude certain people as "peer reviewers" of their submission. I assume most journal editors still allow this sort of guidance. As I read the link, there is still a "senior editor" and "co-editors" who act as the gate-keepers. The "peer review" seems to come only after these editors have already committed to publication.

ETA: I just received an eLife email asking me to confirm my subscription ... requesting email address and password. Now why would they need/want that? |
(I am flattered to be mistaken for DanaC, but I didn't post that last article, she did. :))
|
Oooops... :redface:
But even so, I'd be interested in your reactions |
It was interesting. I didn't see a description of the actual process eLife uses for submissions and review, but I am in favor of keeping things open-source in general. Transparent is always better.
|
Of course, the problem with 'peer review' - validating a hypothesis or reproducing the same results before publication - is that if it isn't published, then peers can't run with it. A chicken-and-egg scenario.
|