Let us recall that the Welch test is an extension of Student's t-test: it tests whether two independent samples drawn from Gaussian distributions have the same mean, in the case where the variances may differ.
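For reference, writing $\bar{x}_1, \bar{x}_2$ for the sample means, $s_1^2, s_2^2$ for the sample variances and $n_1, n_2$ for the sample sizes, the Welch statistic and its approximate degrees of freedom (the Welch–Satterthwaite approximation) are:

```latex
t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{s_1^2/n_1 + s_2^2/n_2}},
\qquad
\nu \approx \frac{\left(s_1^2/n_1 + s_2^2/n_2\right)^2}
{\dfrac{(s_1^2/n_1)^2}{n_1 - 1} + \dfrac{(s_2^2/n_2)^2}{n_2 - 1}}.
```

Under the null hypothesis, $t$ approximately follows a Student distribution with $\nu$ degrees of freedom; $\nu$ is generally non-integer, which is why `t.test` reports a fractional `df`.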
Let us start with a simple application of the Welch test. We generate two datasets from two normal distributions that have the same mean but different variances.
library(ggplot2);
library('ramify');
##
## Attaching package: 'ramify'
## The following object is masked from 'package:graphics':
##
## clip
s1 <- randn(200, mean=0, sd=1);
s2 <- randn(200, mean=0, sd=1.6);
We can check their distributions with a histogram plot:
df <- data.frame(
  samples=append(s1,s2),
  origin=append(rep('sample1',200),rep('sample2',200))
)
ggplot(df, aes(x=samples, color=origin, fill=origin)) +
  geom_histogram(aes(y=..density..), position="identity", alpha=0.5) +
  geom_density(alpha=0.2)
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
Next, we perform the Welch test to compare the means of these two samples (since their variances differ).
tTest = t.test(s1, s2, var.equal = FALSE);
print(tTest);
##
## Welch Two Sample t-test
##
## data: s1 and s2
## t = 0.22867, df = 333.19, p-value = 0.8193
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.2200795 0.2779782
## sample estimates:
## mean of x mean of y
## 0.07368490 0.04473557
The p-value is high, so we cannot reject the null hypothesis at the 0.05 level. The Welch test correctly finds that the two samples have the same mean.
The KS test is a non-parametric test that allows us to test whether two continuous distributions are different. Since the sequences generated in the previous section have different standard deviations, the KS test should reject the null hypothesis.
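Concretely, the two-sample KS statistic is the largest gap between the two empirical cumulative distribution functions $F_{1,n}$ and $F_{2,m}$:

```latex
D_{n,m} = \sup_x \left| F_{1,n}(x) - F_{2,m}(x) \right|.
```

Large values of $D_{n,m}$ are evidence against the null hypothesis that the two samples come from the same distribution; this is the `D` value reported by `ks.test`.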
ksTest <- ks.test(s1,s2);
print(ksTest);
##
## Two-sample Kolmogorov-Smirnov test
##
## data: s1 and s2
## D = 0.145, p-value = 0.02984
## alternative hypothesis: two-sided
The difference is confirmed by the small p-value.
Now we consider two samples generated from two Gaussian distributions with the same variance but different means. We want to study the distribution of p-values obtained from the Student test and the KS test in this case.
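As a reminder, the power of a test at level $\alpha$ is the probability of rejecting the null hypothesis when the alternative holds. Over $300$ simulated pairs of samples with p-values $p_1, \dots, p_{300}$, a common empirical estimate is (the threshold $\alpha = 0.05$ is our choice here; the code below instead compares the full p-value distributions via boxplots):

```latex
\widehat{\mathrm{power}} = \frac{1}{300} \sum_{i=1}^{300} \mathbf{1}\{p_i \le \alpha\}.
```

A more powerful test therefore concentrates its p-value distribution near zero under the alternative, which is what the boxplots let us compare visually.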
powercomparison <- function (mu1,mu2,sigma1,sigma2,nametest='Student Test'){
  n <- 200;
  p_ttest = rep(0,300);
  p_kstest = rep(0,300);
  for (i in 1:300){
    s1 <- randn(200, mean=mu1, sd=sigma1);
    s2 <- randn(200, mean=mu2, sd=sigma2);
    if (nametest == 'Student Test'){
      tTest = t.test(s1,s2,var.equal=TRUE);
    }else{
      tTest = t.test(s1,s2,var.equal=FALSE);
    }
    p_ttest[i] = tTest$p.value;
    ksTest = ks.test(s1,s2);
    p_kstest[i] <- ksTest$p.value;
  }
  df <- data.frame(pvalue = append(p_ttest,p_kstest),
                   test = append(rep(nametest,length(p_ttest)),
                                 rep('KS Test',length(p_kstest))))
  boxplot(pvalue ~ test, data = df, varwidth = TRUE)
}
mu1 <- 1;
mu2 <- 1.2;
sigma1 <- 1;
sigma2 <- 1;
powercomparison(mu1,mu2,sigma1,sigma2);
We see that the power of the Student test is larger than that of the KS test: we obtain smaller p-values with the Student test. This was predictable, since we perfectly match the conditions of application of the Student test. By contrast, the KS test can be used with any continuous distributions, and the no-free-lunch principle would lead us to expect that this generality must be paid for in power in some situations.
We now consider different variances for the two Gaussian distributions. In the following, we show that the power comparison between the KS test and the Welch test is less clear-cut in this case.
Based on the result of the previous section, it seems natural to expect that, if the two variances are close enough, the power of the Welch test will still be larger than that of the KS test. This is illustrated by the example below.
mu1 <- 1;
mu2 <- 1.2;
sigma1 <- 1;
sigma2 <- 1.02;
powercomparison(mu1,mu2,sigma1,sigma2,nametest='Welch Test');
However, when the difference between variances is getting larger, the KS test can become more powerful.
mu1 <- 1;
mu2 <- 1.2;
sigma1 <- 1;
sigma2 <- 1.25;
powercomparison(mu1,mu2,sigma1,sigma2,nametest='Welch Test');