What Is Effect Size?

Last updated: April 1, 2026

Quick Answer: Effect size is a statistical measure of the magnitude of an outcome or difference between groups in a study. It quantifies the practical significance of results, independent of sample size.

Understanding Effect Size in Research

Effect size measures the magnitude of a phenomenon or the strength of a relationship between variables in research studies. While statistical significance (a p-value) indicates whether a result is likely due to chance, effect size indicates whether the result is meaningful in practical terms. A study might show a statistically significant result with a tiny effect size, meaning the real-world impact is negligible. Conversely, a large effect size that narrowly misses statistical significance, as can happen in a small sample, may still signal a practically important finding worthy of follow-up.

Common Effect Size Measures

Cohen's d is the most common effect size measure for comparing two group means. It's calculated as the difference between means divided by the pooled standard deviation. Cohen's d values of 0.2, 0.5, and 0.8 are typically considered small, medium, and large effects respectively. Pearson's r measures correlation strength between variables, ranging from -1 to +1. Odds ratios measure the odds of an outcome occurring in one group versus another. Eta-squared measures the proportion of variance explained in analyses of variance.
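The Cohen's d formula above (mean difference divided by pooled standard deviation) can be sketched directly. This is a minimal illustration using only the standard library; the sample data are invented for demonstration.

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)  # sample variance (n - 1 denominator)
    var_b = statistics.variance(group_b)
    # Pool the two sample variances, weighting by degrees of freedom.
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical scores from a treatment and a control group.
treatment = [23, 25, 28, 30, 32]
control = [20, 22, 24, 26, 27]
print(round(cohens_d(treatment, control), 2))  # → 1.16, a large effect
```

Note that several pooled-SD conventions exist (e.g., Hedges' g applies a small-sample correction); this sketch uses the textbook pooled estimate.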

Interpreting Effect Sizes

Jacob Cohen established benchmarks for interpreting effect sizes, though their meaning varies by context and field. Small effects (d = 0.2) might be practically irrelevant despite reaching statistical significance in large samples. Medium effects (d = 0.5) represent meaningful differences noticeable in most contexts. Large effects (d = 0.8) are obvious and substantial in magnitude. These benchmarks provide reference points, but researchers must weigh them against domain-specific knowledge. In medicine, even a small effect size can be clinically important when the outcome is serious, such as reducing significant patient suffering.
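Cohen's conventional thresholds can be expressed as a small lookup helper. The cutoffs below are the standard 0.2/0.5/0.8 benchmarks; as the text stresses, they are context-dependent guides, not rules.

```python
def label_cohens_d(d):
    """Map |d| to Cohen's conventional benchmarks (guides, not hard rules)."""
    magnitude = abs(d)
    if magnitude < 0.2:
        return "negligible"
    if magnitude < 0.5:
        return "small"
    if magnitude < 0.8:
        return "medium"
    return "large"

print(label_cohens_d(0.3))   # → small
print(label_cohens_d(-0.9))  # → large (sign indicates direction, not size)
```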

Why Effect Size Matters

Statistical significance depends heavily on sample size: large samples can yield significant results even for trivial effects, which makes effect size crucial. Because effect size is independent of sample size, it is a purer measure of the phenomenon itself. That independence makes effect sizes essential for comparing studies with different sample sizes, conducting meta-analyses, and evaluating the real-world importance of findings. Professional standards, such as APA reporting guidelines, now call for effect sizes to be reported alongside p-values in research publications.

Applications and Reporting

Researchers report effect sizes to help readers understand practical significance of results beyond mere statistical significance. Meta-analyses combine effect sizes from multiple studies to estimate overall effect magnitude across research. Power analysis uses expected effect sizes to determine necessary sample sizes for studies to detect effects. Modern statistics education emphasizes effect size interpretation, recognizing its importance for evaluating research quality.
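The power-analysis use mentioned above can be sketched with the standard normal-approximation formula for a two-sample comparison: n per group ≈ 2 · ((z₁₋α/₂ + z₁₋β) / d)². The critical values below assume a two-sided α = 0.05 and 80% power; exact t-based calculations give slightly larger answers.

```python
import math

def n_per_group(d, z_alpha=1.96, z_power=0.84):
    """Approximate participants needed per group to detect Cohen's d
    (normal approximation; z values assume alpha = 0.05 two-sided, 80% power)."""
    return math.ceil(2 * ((z_alpha + z_power) / d) ** 2)

print(n_per_group(0.5))  # medium effect → 63 per group
print(n_per_group(0.2))  # small effect → 392 per group
```

The contrast between the two calls shows why expected effect size drives study planning: detecting a small effect requires roughly six times as many participants as a medium one.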

Related Questions

What is the difference between effect size and statistical significance?

Statistical significance (p-value) indicates whether results are likely real versus chance, while effect size measures the magnitude of results. Large samples can produce significant results with tiny effect sizes that may lack practical importance.

What is Cohen's d and how is it used?

Cohen's d measures the standardized difference between two group means, calculated as the difference divided by pooled standard deviation. Values of 0.2, 0.5, and 0.8 represent small, medium, and large effects respectively.

Why should researchers report effect sizes?

Effect sizes indicate practical significance independent of sample size, enable meta-analysis and power analysis, and help readers understand real-world importance of findings. They're now standard requirements in scientific journals.

Sources

  1. Wikipedia - "Effect size" (CC BY-SA 4.0)
  2. Investopedia - "Effect Size" (Fair Use)