Performance of Normality Tests Under Non-Normal Distributions: A Simulation Approach
Keywords:
normality tests, Laplace distribution, Gamma distribution, simulation, empirical power, Shapiro-Wilk, Anderson-Darling, skewness, heavy tails

Abstract
Assessing normality is a fundamental step in statistical analysis, particularly for methods that assume normally distributed data such as regression, ANOVA, and t-tests. However, real-world datasets often exhibit characteristics inconsistent with the normal distribution, such as skewness or heavy tails. This study investigates the empirical power of ten widely used classical normality tests under two non-normal distributions: the Laplace distribution, which is symmetric but heavy-tailed, and the Gamma distribution, which is positively skewed. A Monte Carlo simulation was conducted using four different sample sizes (n = 25, 30, 100, 150), with 1000 repetitions for each condition. The tests analyzed include Shapiro-Wilk, Anderson-Darling, Jarque-Bera, Kolmogorov-Smirnov, Lilliefors, and others.
Results reveal that the Shapiro-Wilk, Anderson-Darling, and Jarque-Bera tests consistently demonstrate high power in detecting deviations from normality across both distributions. In contrast, the Kolmogorov-Smirnov and Lilliefors tests show substantially lower power, particularly in smaller samples. The Anderson-Darling test performs exceptionally well in detecting heavy tails (Laplace), while the Shapiro-Wilk and D’Agostino’s K² tests are effective for identifying skewness (Gamma).
These findings underscore the importance of selecting a normality test that matches the suspected characteristics of the data distribution. Rather than defaulting to low-power tests such as Kolmogorov-Smirnov, researchers should prefer more sensitive alternatives to improve the robustness of statistical conclusions when working with potentially non-normal data.
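The Monte Carlo design described above (generate samples from a non-normal alternative, apply each test at a fixed significance level, and record the rejection rate as empirical power) can be sketched as follows. This is a minimal illustration, not the study's actual code: it covers only three of the ten tests, and the Gamma shape parameter and significance level are assumptions.

```python
# Sketch of the empirical-power simulation: power is estimated as the
# proportion of 1000 replications in which a test rejects normality.
import numpy as np
from scipy import stats


def empirical_power(pvalue_fn, sampler, n, reps=1000, alpha=0.05, seed=0):
    """Fraction of replications in which the test rejects H0: normality."""
    rng = np.random.default_rng(seed)
    rejections = sum(pvalue_fn(sampler(rng, n)) < alpha for _ in range(reps))
    return rejections / reps


# The two non-normal alternatives from the study:
# Laplace (symmetric, heavy-tailed) and Gamma (positively skewed).
laplace = lambda rng, n: rng.laplace(size=n)
gamma = lambda rng, n: rng.gamma(shape=2.0, size=n)  # shape=2 is an assumption

# Three of the tests considered; all return a p-value via scipy.
tests = {
    "Shapiro-Wilk": lambda x: stats.shapiro(x).pvalue,
    "Jarque-Bera": lambda x: stats.jarque_bera(x).pvalue,
    "D'Agostino K2": lambda x: stats.normaltest(x).pvalue,
}

for name, pvalue_fn in tests.items():
    for dist_name, sampler in [("Laplace", laplace), ("Gamma", gamma)]:
        for n in (25, 30, 100, 150):
            power = empirical_power(pvalue_fn, sampler, n)
            print(f"{name:14s} {dist_name:8s} n={n:4d} power={power:.3f}")
```

As expected from the results above, the estimated power of every test increases with sample size, and running the same loop on normally distributed data recovers a rejection rate close to the nominal alpha of 0.05.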