
P-value adjustments

In an enrichment analysis, multiple categories are tested simultaneously. For each individual test, the same significance threshold $\alpha$ is used to judge whether a category is significant; $\alpha$ is the probability of making a false positive prediction (Type-I-Error). Consequently, each test has probability $\alpha$ of producing a Type-I-Error, and the problem with multiple testing is that these error probabilities accumulate.

For $k$ independent tests, the probability of at least one false positive result is:

$$ P(\text{at least one significant result}) = 1-(1-\alpha)^k $$
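
For example, testing $k=20$ categories at $\alpha=0.05$ already gives

$$ 1-(1-0.05)^{20} \approx 0.64, $$

i.e. roughly a two-in-three chance of at least one false positive even if no category is truly enriched.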

Multiple testing procedures adjust the p-values derived from multiple statistical tests to correct for this accumulation of false positive predictions (Type-I-Errors).

Familywise error rate controlling p-value adjustments

When performing multiple hypotheses tests, the familywise error rate (FWER) is the probability of making at least one false positive prediction, or Type-I-Error, among all the tested null hypotheses [1].

$$FWER = Pr(|FP| > 0)$$

Bonferroni

The Bonferroni method [2], [3], [4] multiplies each p-value by the number $n$ of tested null hypotheses. The Bonferroni test is conservative and always controls the familywise error rate [1].

$$\tilde p_{i}\ =\ np_{i} $$
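
As a minimal sketch of this adjustment, assuming Python with NumPy (the function name `bonferroni_adjust` is illustrative, not part of DTI), the adjusted values are additionally capped at 1, a common convention not shown in the formula above:

```python
import numpy as np

def bonferroni_adjust(pvalues):
    """Bonferroni adjustment: multiply each p-value by the number of tests."""
    p = np.asarray(pvalues, dtype=float)
    return np.minimum(len(p) * p, 1.0)  # cap at 1 so adjusted values remain valid p-values

# Example: bonferroni_adjust([0.01, 0.02, 0.04]) -> [0.03, 0.06, 0.12]
```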

Sidak

The Sidak method [2], [5] is slightly less conservative than the corresponding Bonferroni adjustment [6]. This adjustment is guaranteed to control the familywise error rate when all of the p-values are uniformly distributed and independent [7], [1].

$$\tilde p_{i}\ =\ 1 - ( 1 - p_{i})^{n} $$
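
A corresponding sketch under the same assumptions (NumPy, illustrative function name):

```python
import numpy as np

def sidak_adjust(pvalues):
    """Sidak adjustment: p_adj = 1 - (1 - p)^n for n tests."""
    p = np.asarray(pvalues, dtype=float)
    return 1.0 - (1.0 - p) ** len(p)

# Example: sidak_adjust([0.01, 0.02, 0.04]) -> approx. [0.0297, 0.0588, 0.1153]
```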

Step-down methods

Step-down methods were first introduced by Holm [8]. These procedures examine p-values in order, from smallest to largest. If, after correction, a p-value is smaller than its predecessor, it takes the value of its predecessor. This ensures that the adjusted p-values are monotone in this ordering. The benefit of using step-down methods is that the tests are made more powerful (smaller adjusted p-values) while, in most cases, maintaining strong control of the familywise error rate [1].

Holm

The Holm adjustment [8] is a step-down approach for the Bonferroni method.

$$\tilde p_{i}\ =\ \begin{cases} np_{i} & \text{for } i=1\\ max \left( \tilde p_{(i-1)}, (n - i +1) p_{i} \right) & \text{for }i=2,...,n \end{cases}$$
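
A minimal NumPy sketch of this step-down procedure (sort ascending, apply the per-rank factor, then enforce monotonicity with a running maximum); the function name is illustrative and the cap at 1 is a common convention:

```python
import numpy as np

def holm_adjust(pvalues):
    """Holm step-down adjustment (Bonferroni-style factors n, n-1, ..., 1)."""
    p = np.asarray(pvalues, dtype=float)
    n = len(p)
    order = np.argsort(p)                                  # smallest to largest
    factors = n - np.arange(n)                             # n, n-1, ..., 1
    adjusted = np.maximum.accumulate(p[order] * factors)   # running max enforces monotonicity
    adjusted = np.minimum(adjusted, 1.0)
    out = np.empty(n)
    out[order] = adjusted                                  # restore the original order
    return out

# Example: holm_adjust([0.01, 0.02, 0.04]) -> [0.03, 0.04, 0.04]
```
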
Holm-Sidak

The Holm-Sidak adjustment [8], [5] is a step-down approach for the Sidak method.

$$\tilde p_{i}\ =\begin{cases} 1-(1-p_{i})^{(n)} & \text{for } i=1\\ max \left( \tilde p_{(i-1)}, 1-(1-p_{i})^{(n-i+1)} \right) & \text{for }i=2 ,...,n \end{cases}$$
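
The same step-down pattern with Sidak-style factors; a sketch under the same assumptions as above:

```python
import numpy as np

def holm_sidak_adjust(pvalues):
    """Holm-Sidak step-down adjustment: 1 - (1 - p_i)^(n - i + 1), monotonized."""
    p = np.asarray(pvalues, dtype=float)
    n = len(p)
    order = np.argsort(p)                                  # smallest to largest
    exponents = n - np.arange(n)                           # n, n-1, ..., 1
    adjusted = np.maximum.accumulate(1.0 - (1.0 - p[order]) ** exponents)
    out = np.empty(n)
    out[order] = adjusted
    return out
```
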
Finner

The Finner method [9], [10] is a step-down approach for a slightly adapted Sidak method.

$$\tilde p_{i}\ =\ \begin{cases} 1-(1-p_{i})^{n} & \text{for } i=1\\ max \left( \tilde p_{(i-1)}, 1-(1-p_{i})^{(\frac{n}{i})} \right) & \text{for }i=2 ,...,n \end{cases}$$
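
Again the same step-down pattern, now with the exponent $n/i$; a sketch with an illustrative function name:

```python
import numpy as np

def finner_adjust(pvalues):
    """Finner step-down adjustment: 1 - (1 - p_i)^(n / i), monotonized."""
    p = np.asarray(pvalues, dtype=float)
    n = len(p)
    order = np.argsort(p)                                  # smallest to largest
    ranks = np.arange(1, n + 1)                            # 1, 2, ..., n
    adjusted = np.maximum.accumulate(1.0 - (1.0 - p[order]) ** (n / ranks))
    out = np.empty(n)
    out[order] = adjusted
    return out
```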

Step-up methods

In step-up methods, p-values are examined in order, from largest to smallest. If, after correction, a p-value is larger than its predecessor, it takes the value of its predecessor. This ensures that the adjusted p-values are monotone in this ordering.

Hochberg

The Hochberg adjustment [11] is a step-up approach for the Bonferroni method. Hochberg showed that Holm's step-down adjustments also control the familywise error rate even when calculated in step-up fashion. Since p-values adjusted by Hochberg's method are always smaller than or equal to p-values adjusted by Holm's method, the Hochberg method is more powerful [1].

$$\tilde p_{i}\ =\ \begin{cases} p_{i} & \text{for } i=n\\ min \left( \tilde p_{(i+1)}, (n-i+1)p_{i} \right) & \text{for }i=n-1,...,1 \end{cases}$$
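
A step-up sketch under the same assumptions: process the p-values from largest to smallest and enforce monotonicity with a running minimum; the function name and the cap at 1 are illustrative conventions:

```python
import numpy as np

def hochberg_adjust(pvalues):
    """Hochberg step-up adjustment (factors 1, 2, ..., n from the largest p-value down)."""
    p = np.asarray(pvalues, dtype=float)
    n = len(p)
    order = np.argsort(p)[::-1]                            # largest to smallest
    factors = np.arange(1, n + 1)                          # n - i + 1 for i = n, n-1, ..., 1
    adjusted = np.minimum.accumulate(p[order] * factors)   # running min enforces monotonicity
    adjusted = np.minimum(adjusted, 1.0)
    out = np.empty(n)
    out[order] = adjusted
    return out
```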

False discovery rate controlling p-value adjustments

FDR-controlling adjustments are less conservative than adjustments controlling the familywise error rate [12], [1]. The false discovery rate is the expected proportion of false positive predictions among all rejected null hypotheses:

$$FDR=E\left(\frac{FP}{FP+TP}\right)\text{, where }\frac{FP}{FP+TP}=0\text{ when }FP=TP=0$$

Benjamini-Hochberg

The Benjamini-Hochberg method [13], [12] is a step-up approach to control the false discovery rate. It assumes all p-values to be independent.

$$\tilde p_{i}\ =\ \begin{cases} p_{i} & \text{for } i=n\\ min \left( \tilde p_{(i+1)}, \frac{n}{i}p_{i} \right) & \text{for }i=n-1 ,...,1 \end{cases}$$
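
A sketch of this step-up adjustment under the same assumptions (NumPy, illustrative name, cap at 1):

```python
import numpy as np

def benjamini_hochberg_adjust(pvalues):
    """Benjamini-Hochberg step-up adjustment: p_i * n / i, monotonized from the top."""
    p = np.asarray(pvalues, dtype=float)
    n = len(p)
    order = np.argsort(p)[::-1]                            # largest to smallest
    ranks = np.arange(n, 0, -1)                            # n, n-1, ..., 1
    adjusted = np.minimum.accumulate(p[order] * n / ranks)
    adjusted = np.minimum(adjusted, 1.0)
    out = np.empty(n)
    out[order] = adjusted
    return out

# Example: benjamini_hochberg_adjust([0.01, 0.02, 0.04]) -> [0.03, 0.03, 0.04]
```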

Benjamini-Yekutieli

The Benjamini-Yekutieli method is an extension of the Benjamini-Hochberg adjustment that can also be applied when the p-values are dependent [14]. This method controls the false discovery rate under arbitrary dependence, but is therefore quite conservative [1].

$$\gamma = \sum_{i=1}^{n} \frac{1}{i} $$

$$\tilde p_{i}\ =\ \begin{cases} \gamma p_{i} & \text{for } i=n\\ min \left( \tilde p_{(i+1)}, \gamma \frac{n}{i}p_{i} \right) & \text{for }i=n-1 ,...,1 \end{cases}$$
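
This variant simply multiplies the Benjamini-Hochberg factors by the harmonic sum $\gamma$; a sketch under the same assumptions:

```python
import numpy as np

def benjamini_yekutieli_adjust(pvalues):
    """Benjamini-Yekutieli adjustment: gamma * p_i * n / i, monotonized from the top."""
    p = np.asarray(pvalues, dtype=float)
    n = len(p)
    gamma = np.sum(1.0 / np.arange(1, n + 1))              # harmonic correction factor
    order = np.argsort(p)[::-1]                            # largest to smallest
    ranks = np.arange(n, 0, -1)                            # n, n-1, ..., 1
    adjusted = np.minimum.accumulate(gamma * p[order] * n / ranks)
    adjusted = np.minimum(adjusted, 1.0)
    out = np.empty(n)
    out[order] = adjusted
    return out
```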

Bibliography

  1. SAS Institute Inc., p-Value Adjustments, SAS/STAT(R) 9.22 User's Guide.
  2. Abdi, Herve, The Bonferroni and Sidak Corrections for Multiple Comparisons.
  3. Bonferroni, C. E., Il calcolo delle assicurazioni su gruppi di teste, Studi in Onore del Professore Salvatore Ortu Carboni.
  4. Bonferroni, C. E., Teoria statistica delle classi e calcolo delle probabilità, Pubblicazioni del R Istituto Superiore di Scienze Economiche e Commerciali di Firenze.
  5. Sidak, Zbynek, Rectangular confidence regions for the means of multivariate normal distributions, Journal of the American Statistical Association.
  6. Westfall, Peter H and Wolfinger, Russell D, Multiple tests with discrete distributions, The American Statistician.
  7. Holland, Burt S and Copenhaver, Margaret DiPonzio, An improved sequentially rejective Bonferroni test procedure, Biometrics.
  8. Holm, Sture, A simple sequentially rejective multiple test procedure, Scandinavian Journal of Statistics.
  9. Finner, H., On a monotonicity problem in step-down multiple test procedures, Journal of the American Statistical Association.
  10. Finner, Helmut, Some new inequalities for the range distribution, with application to the determination of optimum significance levels of multiple range tests, Journal of the American Statistical Association.
  11. Hochberg, Yosef, A sharper Bonferroni procedure for multiple tests of significance, Biometrika.
  12. Hochberg, Yosef and Benjamini, Yoav, More powerful procedures for multiple significance testing, Statistics in Medicine.
  13. Benjamini, Yoav and Hochberg, Yosef, Controlling the false discovery rate: a practical and powerful approach to multiple testing, Journal of the Royal Statistical Society, Series B (Methodological).
  14. Benjamini, Yoav and Yekutieli, Daniel, The control of the false discovery rate in multiple testing under dependency, Annals of Statistics.