Three Million Scientific Papers Wrong: Statistical Method Wrong

I have been maintaining in this blog that Science is, at best, a workable hypothesis for the time being, and that there is no certainty about it.

And the Scientists hide under the cloak of Axioms, which you are not allowed to question.

They say these are self-evident.

Science is built on faulty logic.

That a certain result shall follow from a given set of causes or events under similar conditions.

What people forget, or do not dare to question, is that the conditions in any scientific test or experiment are not all under our control; we do not know how reliable they are, and we are not guaranteed that the circumstances will remain the same on repetition.

That Nature shall behave uniformly is a fallacy not supported by Logic.

We cannot say that Nature shall behave uniformly, for we have not examined all the cases of Nature, and it is impossible to do so.

We assume it shall.

[Image] Many researchers have labored under the misbelief that the p-value gives the probability that their study’s results are just pure random chance. Credit: Lenilucho/Wikipedia

Secondly, Science is certain of the Causal relationship: that an effect has a Cause, and that a Cause must produce a result.

Logically, a Cause may have more than one effect, and one effect may have more than one Cause.

Therefore this is also faulty.

(Indian Philosophy addresses this problem through Parinama Vada and Vivarta Vada.)

And scientists also assume many tools for the verification of data, tools that are purely imaginary and have no factual basis.

Now a testing tool used by Psychology has been found to be wrong, and so are the three million scientific papers based on it.

Worse still, this has happened in Applied Psychology.

How patients were ever cured on the basis of these scientific papers, God only knows!

Psychology researchers have recently found themselves engaged in a bout of statistical soul-searching. In apparently the first such move ever for a scientific journal, the editors of Basic and Applied Social Psychology announced in a February editorial that researchers who submit studies for publication would not be allowed to use a common suite of statistical methods, including a controversial measure called the p-value.

These methods, referred to as null hypothesis significance testing, or NHST, are deeply embedded in the modern scientific research process, and some researchers have been left wondering where to turn. “The p-value is the most widely known statistic,” says biostatistician Jeff Leek of Johns Hopkins University. Leek has estimated that the p-value has been used in at least three million scientific papers. Significance testing is so popular that, as the journal editorial itself acknowledges, there are no widely accepted alternative ways to quantify the uncertainty in research results, and uncertainty is crucial for estimating how well a study’s results generalize to the broader population.

Unfortunately, p-values are also widely misunderstood, often believed to furnish more information than they do. Many researchers have labored under the misbelief that the p-value gives the probability that their study’s results are just pure random chance. But statisticians say the p-value’s information is far less specific, and can be interpreted only in the context of hypothetical alternative scenarios: the p-value summarizes how often results at least as extreme as those observed would show up if the study were repeated an infinite number of times when in fact only pure random chance were at work.
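
To make that definition concrete, here is a minimal sketch of my own (not from the article) that estimates a p-value by brute-force simulation. The scenario is hypothetical: we observe 60 heads in 100 coin flips and ask how often pure random chance, meaning a fair coin, would produce a result at least that extreme.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

n_flips = 100             # size of the hypothetical experiment
observed_heads = 60       # the (assumed) observed result
n_replications = 100_000  # stand-in for the "infinite" hypothetical replications

# Simulate the null hypothesis: a fair coin, i.e. pure random chance at work.
null_heads = rng.binomial(n=n_flips, p=0.5, size=n_replications)

# Two-sided p-value: the fraction of chance-only replications whose result is
# at least as extreme as the observed one (60 or more heads, or 40 or fewer).
extreme = np.abs(null_heads - n_flips / 2) >= abs(observed_heads - n_flips / 2)
p_value = extreme.mean()

print(f"Estimated p-value: {p_value:.3f}")  # close to the exact value of about 0.057
```

Note what that number is and is not: it describes the imaginary chance-only replications, not the probability that the observed 60 heads were themselves a fluke.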

This means that the p-value is a statement about imaginary data in hypothetical study replications, not a statement about actual conclusions in any given study. Instead of being a “scientific lie detector” that can get at the truth of a particular scientific finding, the p-value is more of an “alternative reality machine” that lets researchers compare their results with what random chance would hypothetically produce. “What p-values do is address the wrong questions, and this has caused widespread confusion,” says psychologist Eric-Jan Wagenmakers at the University of Amsterdam.
Ostensibly, p-values allow researchers to draw nuanced, objective scientific conclusions as long as they are part of a careful process of experimental design and analysis. But critics have complained that in practice the p-value, in the context of significance testing, has been bastardized into a sort of crude spam filter for scientific findings: if the p-value on a potentially interesting result is smaller than 0.05, the result is deemed “statistically significant” and passed on for publication, according to the recipe; anything with a larger p-value is destined for the trash bin.
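
The “spam filter” recipe the critics describe amounts to nothing more than the following. This is a deliberately crude sketch of the practice being criticized, not a recommendation; the 0.05 threshold is the conventional one mentioned above.

```python
def crude_significance_filter(p_value: float, alpha: float = 0.05) -> str:
    """The 'recipe' critics object to: one threshold decides a finding's
    fate, regardless of effect size, design quality, or context."""
    if p_value < alpha:
        return "statistically significant: pass on for publication"
    return "not significant: destined for the trash bin"

print(crude_significance_filter(0.049))  # published
print(crude_significance_filter(0.051))  # trashed, on nearly identical evidence
```

The two calls differ by a hair in evidential terms yet land on opposite sides of the filter, which is precisely the complaint.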

Quitting p-values cold turkey was a drastic step. “The null hypothesis significance testing procedure is logically invalid, and so it seems sensible to eliminate it from science,” says psychologist David Trafimow of New Mexico State University in Las Cruces, editor of the journal.

In plain English, the p-value is imaginary and has no basis.

I can foresee a host of scientists coming out against this post using fancy jargon, while the questions raised by me remain unanswered, and will remain so.

Long Live Science!

News Source:
