Distinguish between Statistical and Practical Significance
It's essential to understand the difference between statistical and practical significance, a distinction often overlooked by students learning statistics.
In this blog post, I'll explain both concepts and how they differ.
What's the difference between statistical and practical significance?
In a nutshell: statistical significance shows that an effect exists, while practical significance tells you whether that effect is large enough to lead to a meaningful, actionable outcome.
Let's start with understanding statistical significance.
Suppose you conduct a hypothesis test and get a p-value less than 0.05. Your immediate reaction is to reject the null hypothesis. You conclude, in favor of the alternative hypothesis, that something has changed.
The result of this experiment is statistically significant.
Let's take a simple example. Suppose you have developed a vaccine for a new disease. In tests, you find that the vaccine cures the disease in 85% of patients. That is good news.
Further research suggests that the vaccine's effectiveness could be improved by giving patients a second dose one week later. You run an experiment: half of the patients receive a single dose, and the other half (randomly selected) receive two doses one week apart. When you analyze the results, the p-value is less than 0.05, so you have sufficient statistical evidence to believe that the two-dose regimen is more effective.
That means the results of this experiment are statistically significant.
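As a rough sketch of how such a conclusion is reached: the comparison of two cure rates is commonly done with a pooled two-proportion z-test. The sample sizes below (5,000 patients per arm) are hypothetical, chosen only to match the 85% and 87% cure rates in this example.

```python
import math

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """One-sided two-proportion z-test.
    H0: the two groups have equal cure rates.
    H1: group B's cure rate is higher than group A's."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled proportion under the null hypothesis of equal rates
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # One-sided p-value: P(Z > z) for a standard normal Z
    p_value = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p_value

# Hypothetical trial: 5,000 patients per arm, 85% vs 87% cured
z, p = two_proportion_z_test(4250, 5000, 4350, 5000)
print(f"z = {z:.2f}, p-value = {p:.4f}")
```

With these numbers the p-value falls below 0.05, so the experiment is statistically significant, exactly as described above. Note that statistical significance here depends heavily on the sample size: the same 2-point gap measured on a few hundred patients would likely not cross the threshold.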
What about practical significance?
As you analyze the data, you find that the two-dose system cures 87% of patients, compared with 85% for a single dose. To keep it simple, I am neglecting the confidence interval and the margin of error in this post.
Now the question is: is it worth it? In a real-life situation, would you be willing to launch a two-dose program for a 2-percentage-point improvement in the cure rate?
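That question is easier to weigh once the effect is expressed in practical terms. A minimal sketch, using the example's cure rates (85% and 87%) and an arbitrary population of 10,000 patients:

```python
# Cure rates from the example above
single_dose_rate = 0.85
two_dose_rate = 0.87

# Absolute improvement: 2 percentage points
absolute_gain = two_dose_rate - single_dose_rate

# Extra patients cured per 10,000 treated -- the practical payoff
extra_cured = absolute_gain * 10_000

# Second doses administered per one additional cure
# (every two-dose patient gets an extra dose, but only 2% benefit)
doses_per_extra_cure = 1 / absolute_gain

print(f"Extra patients cured per 10,000: {extra_cured:.0f}")
print(f"Second doses needed per extra cure: {doses_per_extra_cure:.0f}")
```

Framed this way, the decision becomes a cost-benefit judgment: roughly 50 extra doses per additional cure may or may not be worthwhile, depending on the cost, logistics, and severity of the disease. Those trade-offs are what practical significance is about.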
A statistically significant experiment might not be practically significant.