I’ve heard and read several statisticians, such as Andrew Gelman on his blog or Nassim Taleb on his YouTube channel, use the following phrase or an equivalent:
But what does this mean exactly? I did not fully understand it at first, so I decided to look into it, and I think I now have a good grasp of the idea. I am writing this short post to share my understanding of it, which might still be imperfect. Statistical significance is a tool that is very often misinterpreted, and I think it is important to have an honest discussion about what it can and cannot achieve. Here is my explanation of the issue; hopefully you’ll find it succinct and clear:
When you run a hypothesis test, you’re trying to decide whether or not to reject the null hypothesis. Take the Z-test as an example. You first fix the value that the parameter you’re interested in would take under the null hypothesis. You then gather data from a sample of the population and estimate the same parameter from that sample. Assuming your estimator follows a Gaussian distribution, you reject the null hypothesis if the sample estimate and the null value are separated by enough standard deviations. The number of standard deviations separating the two values is called the z-score, and the probability, under the null hypothesis, of observing a z-score at least as extreme as the one you got is called the p-value. If the p-value is smaller than the chosen significance level, the result is deemed statistically significant.
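To make this concrete, here is a minimal sketch of a one-sample Z-test. The numbers are hypothetical, and the population standard deviation is assumed to be known, as the Z-test requires:

```python
import math

def z_test(sample_mean, null_mean, sigma, n):
    """One-sample Z-test: returns the z-score and two-sided p-value.

    Assumes the population standard deviation `sigma` is known and
    the sample mean is (approximately) Gaussian.
    """
    # Standard error of the sample mean
    se = sigma / math.sqrt(n)
    # Number of standard errors separating the estimate from the null value
    z = (sample_mean - null_mean) / se
    # Two-sided p-value from the standard normal CDF,
    # using Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical study: null mean 10.0, observed sample mean 10.4,
# known sigma 2.0, sample size 100
z, p = z_test(sample_mean=10.4, null_mean=10.0, sigma=2.0, n=100)
print(f"z = {z:.2f}, p = {p:.3f}")  # z = 2.00, p ≈ 0.046: significant at 0.05
```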
But here is the catch: if you repeatedly draw samples of the same size from an initial population, the parameter estimates will vary from sample to sample and follow their own probability distribution. Their z-scores and p-values will also vary and follow their own probability distributions! In fact, p-values computed from same-sized samples of the same population can vary a lot, and the difference between a significant result and a non-significant result is itself not necessarily statistically significant in the p-value distribution.
In fact, in his paper on this subject, Nassim Taleb generated the p-value distribution through a Monte Carlo simulation. He found that if the “true” p-value of the population is 0.12, 60% of the p-values estimated from samples could be below the traditional significance threshold of 0.05.
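You can see this variability for yourself with a simplified Monte Carlo sketch in the spirit of that experiment (this is not Taleb's exact setup; the population mean, sigma and sample size below are illustrative choices, picked so that the population-level p-value comes out around 0.12):

```python
import math
import random

random.seed(0)

def p_value(sample, null_mean, sigma):
    """Two-sided Z-test p-value for a sample, with sigma assumed known."""
    n = len(sample)
    z = (sum(sample) / n - null_mean) / (sigma / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical population: true mean 10.31, null mean 10.0, sigma 2.0.
# At the population mean itself, the p-value is about 0.12 (not significant).
# Each simulated "study" draws its own sample of 100 and gets its own p-value.
p_values = []
for _ in range(10_000):
    sample = [random.gauss(10.31, 2.0) for _ in range(100)]
    p_values.append(p_value(sample, null_mean=10.0, sigma=2.0))

share_significant = sum(p < 0.05 for p in p_values) / len(p_values)
print(f"share of 'significant' studies: {share_significant:.0%}")
```

In this simple Gaussian setup, roughly a third of the simulated studies come out below 0.05 even though the population-level p-value is about 0.12; Taleb's paper reports even larger fractions under his assumptions.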
This problem has serious consequences. People very often say that “statistical significance does not imply practical significance”, but in fact, finding a statistically significant result in your sample does not even imply that the result is truly “statistically significant” at the population level.
P-values and statistical significance are tools that are misunderstood by a lot of researchers, and I think this information needs to be spread. The fact that p-values can vary so greatly makes p-hacking much easier than it would otherwise be, which has terrible consequences for the scientific literature. One solution advanced by several statisticians is to lower the significance threshold substantially, to 0.01 or even 0.005. This might be a good start, but will it be sufficient? Hopefully, time will tell.
Over the last year and a half, a phenomenon in academia has caught my attention: a big chunk of the scientific papers published in reputable journals do not replicate. In this article, we will try to explain the reasons behind this crisis, its implications, and what we might do about it.
What is replication?
You could make a solid case for the view that the main goal of science is to find the laws of the world. Indeed, the scientific enterprise has, since its inception, expanded our understanding of our universe. A reason for that is the scientist’s ability to discover patterns or constants in our world. Metals expand when they are heated. That holds true whether you live in ancient Mesopotamia or modern Australia, and it will most likely be true in the future. This property is unaffected by either time or space, which, one could argue, is the definition of a law.
Even in the legal sense, laws should be applied equally across the jurisdiction for which they are designed, and they should be stable through time. That property demarcates the rule of law from arbitrary trials. It gives citizens a sense of legal security and makes judges predictable: if I do X, then law Y will apply.
Scientific laws have roughly the same purpose: they should hold universally and make the world more predictable and understandable to us. Since I know that metals expand when heated, I know that if I heat a piece of steel in two weeks, it will expand. And since I know how the metal will behave in the future, I can use that knowledge to solve problems I might have.
“What does any of that have to do with the problems in academia?” you might ask. Well, replicating a study means conducting it again, using the same methods and gathering new data the same way the original study did, or even re-analyzing the same data a second time.
If our research methods are valid and enable us to find properties of the world that are perennially true, then if I conduct the same study twice, I should get the same outcome twice.
Unfortunately, for a significant portion of the studies in scientific journals, even the most prestigious ones, results do not replicate. And this phenomenon affects almost all disciplines, with some hit harder than others: medicine, psychology, economics, sociology, criminology, neuroscience, artificial intelligence and many more.
Why are scientific journals full of false findings?
First of all, I have to say that it’s normal for some studies to yield false findings; that is just part of science. Look at it this way: of all the possible hypotheses you could make about the world, only a tiny fraction are true. There are far more molecules that don’t cure headaches than molecules that do. Imagine if you had to test them all to find a cure for headaches.
Let’s say you were to test 100,000 compounds, among which only one can cure a headache, and your testing methodology returned a false positive only 1% of the time. Even with that relatively low false positive rate, after testing every single compound you would end up with about 1,000 findings claiming that a compound works even though it doesn’t.
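The arithmetic above can be sketched in a few lines. I also assume, optimistically, that the test always detects the one true cure, which makes the conclusion even starker:

```python
# Hypothetical screen: 100,000 compounds, only 1 of which truly works,
# tested with a 1% false positive rate and (optimistically) perfect power.
n_compounds = 100_000
n_true = 1
false_positive_rate = 0.01

# Expected number of compounds flagged as "working" even though they don't
false_positives = (n_compounds - n_true) * false_positive_rate  # ≈ 1,000
true_positives = n_true  # perfect power: the real cure is always detected

# Share of all positive findings that are actually wrong
share_false = false_positives / (false_positives + true_positives)
print(f"{false_positives:.0f} false positives; "
      f"{share_false:.1%} of positive findings are wrong")
```

Even under these generous assumptions, about 999 out of every 1,000 “positive” findings are false, purely because true hypotheses are so rare.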
Of course, when researching a given subject, you don’t test every single hypothesis; you make theory-backed guesses as to what might work and then test the hypotheses you find plausible (this is called inference to the best explanation, or abduction). Nonetheless, the asymmetry between true and false propositions still holds.
However, it seems unlikely that this asymmetry is the sole reason for the epidemic of false results in scientific journals. Let’s take a look at some numbers.
Note that in rows marked with an asterisk, the replicability rate has been estimated through surveys of researchers rather than actual replication attempts, whereas for the other fields the studies were actually conducted again.
As you can see, it is not at all uncommon to find fields with a replicability rate of 50% or below. The problem is severe and it seems like it is worse in over-hyped disciplines such as machine learning or oncology.
Indeed, these findings are the result of perverse incentives created by the science publication system. To get their grants renewed, scientists have to publish papers in scientific journals, preferably prestigious ones, otherwise their careers might come to an end; this is called the publish-or-perish effect. At the same time, journals strongly favor novel, positive findings over replications and null results.
Combined, these pressures drive researchers to produce novel, positive findings at all costs, even if that means engaging in questionable research practices or outright falsifying results.
Questionable research practices are widespread in academia, but their exact prevalence is very hard to gauge, since, almost by definition, the individuals who engage in them try to conceal them.
Nonetheless, we do have some numbers. In a survey of biomedical postdocs, 27% said they were willing to select or omit data to improve their results in order to secure funding. Note that, as far as I know, that survey was not even anonymous! What is more, an anonymous survey of psychology researchers found that a majority had engaged in questionable research practices.
We can add to this body of evidence the testimony of a young social psychology researcher who was outright expelled from her degree program for refusing to engage in p-hacking. She also reported that her fellow researchers would p-hack to further their left-wing political agenda.
Yet another testimony by an economics researcher makes several concerning accusations. She reports that senior economists silence opinions that diverge from theirs, take credit for work that is not theirs, discriminate against some minorities and more.
Richard Thaler, an eminent researcher known for his contributions to behavioral economics and a former president of the American Economic Association, reportedly tried to discredit valuable research because it contradicted his views. Among that research is a paper reporting that only 33% of economics studies can be replicated without contacting the original authors, which I used in the table above to estimate the replicability rate in economics.
To top it off, there appears to be no correlation between whether a paper replicates and its number of citations. This could reflect several issues that plague academia: researchers have been observed refusing to cite colleagues with whom they compete for grants, and forming citation alliances sometimes referred to as citation rings.
What can we do about it?
There are several initiatives that could be implemented in order to mitigate this situation.
First of all, science should be freely accessible, since it is funded with our tax money. This reform is necessary, but could prove very difficult to implement given the scientific journals’ considerable lobbying power.
There are dozens of open-access journals, and many scientists choose to publish only in those. Nonetheless, early-career scientists have a strong incentive to publish in renowned journals to advance their careers and possibly get tenure; in many universities, tenure is conditional on publishing in these outlets.
Secondly, more replication studies should be undertaken. Online repositories are beginning to emerge to host this type of study, which does not get much love from the oligarchs of scientific publishing.
Beyond that, scientists should submit their data and code along with their papers, not only because this helps detect fraud, but also because open data and code make replication a lot easier. As the saying goes, “In God we trust; all others must bring data.”
Finally, I am personally of the opinion that we should ditch peer review entirely, as there is scant evidence that it even beats random screening. It is likely that, in the future, statistical models will be devised to rate the quality of a paper and its probability of replicating. To extract the features needed for such a model, one could turn to natural language processing.
Such models are already being used in a related way: Brian Uzzi, a professor at Northwestern University, trained an NLP model to detect elements of language that indicate fraud or low confidence in the findings, rather than relying on the measurements and metrics of the study.
Hopefully this piece has fulfilled its purpose of giving a thorough yet brief introduction to some of the major problems currently facing academia. It is regrettable that such a noble pursuit has become so corrupted, discouraging many young people from pursuing a career in academia, myself included.
Despite it all, I am still optimistic, insofar as academia does not have a monopoly on science, far from it. Private companies and institutes have been responsible for many scientific breakthroughs over the past two centuries; the most recent notable example would be Google’s demonstration of quantum supremacy. The private sector is particularly proficient at advancing applied, practical science, in other words, technology development.
I encourage everyone who is interested in science but critical of academia not to get disheartened with science as a whole. If you consider yourself a humanist, perhaps solving people’s everyday problems through knowledge matters more than theoretical progress. After all, don’t we pursue science to better our lot?