The Ranking Probability Approach and Its Usage in Design and Analysis of Large-Scale Studies
Main Authors:
Format: Online Article, Text
Language: English
Published: Public Library of Science, 2013
Subjects:
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3869737/
https://www.ncbi.nlm.nih.gov/pubmed/24376639
http://dx.doi.org/10.1371/journal.pone.0083079
Summary:
In experiments with many statistical tests there is a need to balance type I and type II error rates while taking multiplicity into account. In the traditional approach, the nominal α-level, such as 0.05, is adjusted by the number of tests, L, i.e., as 0.05/L. Assuming that some proportion of tests represent "true signals", that is, originate from a scenario where the null hypothesis is false, power depends on the number of true signals and the respective distribution of effect sizes. One way to define power is for it to be the probability of making at least one correct rejection at the assumed α-level. We advocate an alternative way of establishing how "well-powered" a study is. In our approach, useful for studies with multiple tests, the ranking probability P(k, j) is controlled, defined as the probability of making at least k correct rejections while rejecting the hypotheses with the j smallest P-values. The two approaches are statistically related. The probability that the smallest P-value is a true signal (i.e., P(1, 1)) is equal to the power at the level 1/L, to an excellent approximation. Ranking probabilities are also related to the false discovery rate and to the Bayesian posterior probability of the null hypothesis. We study properties of our approach when the effect size distribution is replaced for convenience by a single "typical" value taken to be the mean of the underlying distribution. We conclude that its performance is often satisfactory under this simplification; however, substantial imprecision is to be expected when L is very large and α is small. Precision is largely restored when three values with the respective abundances are used instead of a single typical effect size value.
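A minimal Monte Carlo sketch of the ranking probability described in the summary, under assumptions that are illustrative rather than taken from the paper: one-sided Z-tests, a single shared effect size delta, and arbitrary choices of L, m, k, j, and delta.

```python
# Monte Carlo sketch of the ranking probability P(k, j): the probability of
# making at least k correct rejections when the hypotheses with the j smallest
# P-values are rejected. Illustrative only; not code from the paper.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def ranking_probability(L, m, delta, k, j, n_sim=2000):
    """Estimate P(k, j): at least k of the j top-ranked tests are true signals."""
    hits = 0
    for _ in range(n_sim):
        z = rng.standard_normal(L)
        z[:m] += delta                  # the first m tests carry a true effect (assumed one-sided Z-tests)
        pvals = norm.sf(z)              # one-sided P-values
        top_j = np.argsort(pvals)[:j]   # indices of the j smallest P-values
        hits += (top_j < m).sum() >= k  # true signals occupy indices 0..m-1
    return hits / n_sim

L, delta = 10_000, 4.5                  # hypothetical number of tests and effect size

# P(1,1): probability that the single smallest P-value is a true signal,
# here with one true signal among L tests.
p11 = ranking_probability(L, m=1, delta=delta, k=1, j=1)

# Power of that signal at the per-test level 1/L, the quantity the summary
# relates to P(1,1).
power_1_over_L = norm.sf(norm.isf(1 / L) - delta)

print(f"P(1,1) by simulation : {p11:.3f}")
print(f"power at level 1/L   : {power_1_over_L:.3f}")

# A ranking probability further down the list: at least 5 true signals among
# the 10 smallest P-values, with 10 true signals present.
print(f"P(5,10) by simulation: {ranking_probability(L, m=10, delta=delta, k=5, j=10):.3f}")
```

With these illustrative settings the first two printed quantities land in the same neighborhood, which is the kind of relationship between P(1, 1) and power at the 1/L level that the summary describes; the paper itself works with a distribution of effect sizes rather than a single fixed delta.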