    Hypothesis Testing
Citations: 0 · References: 5 · Related Papers: 10
Keywords: Alternative hypothesis, Null hypothesis, p-value, Statistic, Sample (statistics)
In this paper we investigate the significance level and power of Hartley's maximum F-ratio test for testing the null hypothesis that the ratio of the largest to the smallest of the variances of several normal distributions does not exceed a specified constant against the alternative that it does. Under the null hypothesis, a least favorable configuration (LFC) of the variances for the event that the maximum F-ratio is greater than or equal to a critical value is determined. This result is important for calculating the critical value of the test statistic so that a specified significance level is guaranteed under the null hypothesis. Furthermore, it may be used to construct lower confidence bounds for the ratio of the largest to the smallest variance. A numerical procedure to implement the testing procedure is provided.
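To make the test statistic concrete, here is a minimal Python sketch (illustrative, not the paper's algorithm): it computes Hartley's maximum F-ratio for several samples and approximates a critical value by Monte Carlo simulation under the equal-variance configuration. The function names, sample sizes, and simulation settings are assumptions made for illustration.

```python
import numpy as np

def f_max(samples):
    """F_max = largest sample variance / smallest sample variance."""
    variances = [np.var(s, ddof=1) for s in samples]
    return max(variances) / min(variances)

def mc_critical_value(k, n, alpha=0.05, n_sim=100_000, seed=None):
    """Approximate the upper-alpha quantile of F_max for k normal samples of
    size n, simulating with all variances equal (a natural reference
    configuration when calibrating the size of the test)."""
    rng = np.random.default_rng(seed)
    data = rng.standard_normal((n_sim, k, n))
    v = data.var(axis=2, ddof=1)          # sample variances per simulated group
    ratios = v.max(axis=1) / v.min(axis=1)
    return np.quantile(ratios, 1.0 - alpha)

# Example: reject the null hypothesis when the observed F_max exceeds the cutoff.
rng = np.random.default_rng(0)
groups = [rng.normal(0.0, 1.0, 10) for _ in range(4)]
print(f_max(groups), mc_critical_value(k=4, n=10, seed=1))
```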
Keywords: p-value, Statistic, Value (mathematics), Alternative hypothesis, Null hypothesis, Ratio test
    Citations (3)
A randomized procedure is described for constructing an exact test from a test statistic F for which the null distribution is unknown. The procedure is restricted to cases where F is a function of a random element U that has a known distribution under the null hypothesis. The power of the exact randomized test is shown to be greater in some cases than the power of the exact nonrandomized test that could be constructed if the null distribution of F were known.
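One standard way to exploit a known null distribution of U is a Monte Carlo test that draws independent copies of U, recomputes F, and ranks the observed value among them. The sketch below illustrates that general idea only; it is not claimed to be the randomized construction of the cited paper, and the statistic and distributions used are illustrative.

```python
import numpy as np

def monte_carlo_p_value(f, u_observed, sample_u_null, n_draws=999, seed=None):
    """Exact-level p-value: rank F(u_observed) among F of null draws of U."""
    rng = np.random.default_rng(seed)
    f_obs = f(u_observed)
    f_null = np.array([f(sample_u_null(rng)) for _ in range(n_draws)])
    # (1 + #{F_null >= F_obs}) / (n_draws + 1) is a valid p-value when the
    # null draws and the observed U are exchangeable under H0.
    return (1 + np.sum(f_null >= f_obs)) / (n_draws + 1)

# Toy usage: U is a sample of 20 observations that is standard normal under H0,
# and F is the absolute standardized sample mean.
rng = np.random.default_rng(1)
u_obs = rng.normal(0.3, 1.0, 20)
p = monte_carlo_p_value(
    f=lambda u: abs(u.mean()) * np.sqrt(len(u)) / u.std(ddof=1),
    u_observed=u_obs,
    sample_u_null=lambda r: r.standard_normal(20),
)
print(p)
```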
Keywords: Null hypothesis, p-value, Exact statistics, Goldfeld–Quandt test, Power function
    Citations (28)
Summary: To perform a test of significance of a null hypothesis, a test statistic is chosen which is expected to be small if the hypothesis is false. Then the significance level of the test for an observed sample is the probability that the test statistic, under the assumptions of the hypothesis, is as small as, or smaller than, its observed value. A “good” test statistic is taken to be one which is stochastically small when the null hypothesis is false. Optimal test statistics are defined using this criterion, and the relationship of these methods to the Neyman–Pearson theory of hypothesis testing is considered.
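As a small numeric illustration of this definition (an assumed example, not one from the paper), the significance level of an observed sample is P(T ≤ t_obs) computed under the null hypothesis, estimated here by simulation for a statistic that is stochastically small under the alternative.

```python
import numpy as np

def observed_significance_level(t_obs, draw_t_under_h0, n_sim=100_000, seed=None):
    """Monte Carlo estimate of P(T <= t_obs | H0), the significance level of
    the observed sample for a statistic T that is small under the alternative."""
    rng = np.random.default_rng(seed)
    t_null = np.array([draw_t_under_h0(rng) for _ in range(n_sim)])
    return float(np.mean(t_null <= t_obs))

# Illustrative choice of T: the maximum of 10 Uniform(0, 1) observations, which
# is stochastically small if the data actually come from a distribution
# concentrated near zero.
p = observed_significance_level(
    t_obs=0.55,
    draw_t_under_h0=lambda r: r.uniform(0.0, 1.0, 10).max(),
)
print(p)  # approximately 0.55**10 ≈ 0.0025
```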
Keywords: p-value, Alternative hypothesis, Statistic, Null hypothesis, Null model, Chi-square test
    Citations (53)
We define a general statistical framework for multiple hypothesis testing and show that the correct null distribution for the test statistics is obtained by projecting their true distribution onto the space of mean zero distributions. For common choices of test statistics (based on an asymptotically linear parameter estimator), this distribution is asymptotically multivariate normal with mean zero and the covariance of the vector influence curve for the parameter estimator. This test statistic null distribution can be estimated by applying the non-parametric or parametric bootstrap to correctly centered test statistics. We prove that this bootstrap estimated null distribution provides asymptotic strong control of most type I error rates. We show that obtaining a test statistic null distribution from a data null distribution only provides the correct test statistic null distribution if the covariance of the vector influence curve is the same under the data null distribution as under the true data distribution. This condition is the formal analogue of the subset pivotality condition of Westfall and Young (1993). We also show that our multiple testing methodology controlling type I error is equivalent to constructing an error-specific confidence region for the true parameter values and checking if it contains the hypothesized value. We conclude with a discussion of applications.
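The following is a deliberately simplified sketch of the resampling idea described above, under strong assumptions (each hypothesis tests a zero mean, and a single-step maximum-statistic cutoff is used for family-wise error control). It illustrates bootstrapping correctly centered test statistics rather than reproducing the authors' estimator.

```python
import numpy as np

def bootstrap_max_cutoff(X, alpha=0.05, n_boot=2000, seed=None):
    """X: n x m data matrix; hypothesis j is H0_j: E[X_j] = 0.
    Returns the observed t-statistics and a bootstrap cutoff for |t|."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    mean, sd = X.mean(axis=0), X.std(axis=0, ddof=1)
    t_obs = np.sqrt(n) * mean / sd
    max_null = np.empty(n_boot)
    for b in range(n_boot):
        Xb = X[rng.integers(0, n, n)]                       # nonparametric bootstrap resample
        tb = np.sqrt(n) * (Xb.mean(axis=0) - mean) / Xb.std(axis=0, ddof=1)
        max_null[b] = np.abs(tb).max()                      # statistics centered at the observed means
    return t_obs, np.quantile(max_null, 1.0 - alpha)

rng = np.random.default_rng(2)
X = rng.normal(0.0, 1.0, size=(50, 20))
X[:, 0] += 1.0                                              # one genuinely non-null mean
t_obs, cutoff = bootstrap_max_cutoff(X)
print(np.flatnonzero(np.abs(t_obs) > cutoff))               # hypotheses rejected at FWER alpha
```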
Keywords: Resampling, p-value, Null hypothesis, Sampling distribution
    Citations (4)
We analyzed the effect of deviations of the exact distribution of the p-values from the uniform distribution on the Kolmogorov–Smirnov (K–S) test when it is implemented as a second-level randomness test. We derived an inequality that provides an upper bound on the expected value of the K–S test statistic when the distribution assumed under the null hypothesis differs from the exact distribution. Furthermore, we proposed a second-level test based on the two-sample K–S test with an ideal empirical distribution as a candidate improvement.
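A minimal sketch of the two-level setup, with assumed inputs: simulated first-level p-values are tested against Uniform(0, 1) with the one-sample K–S test, and against an evenly spaced reference sample as a crude stand-in for the ideal empirical distribution used in the proposed two-sample variant. The paper's exact construction is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# First level: p-values from many randomness tests on blocks of a bit stream
# (simulated here as exactly uniform for illustration).
p_values = rng.uniform(0.0, 1.0, 1000)

# Second level, one-sample form: K-S distance between the empirical
# distribution of the p-values and Uniform(0, 1).
d1, p1 = stats.kstest(p_values, "uniform")

# Second level, two-sample form: compare against an ideal reference sample
# (here an evenly spaced grid standing in for the exact p-value distribution).
ideal = (np.arange(1000) + 0.5) / 1000
d2, p2 = stats.ks_2samp(p_values, ideal)

print(d1, p1, d2, p2)
```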
Keywords: Anderson–Darling test, Empirical distribution function, Statistic, p-value, Randomness tests
    Citations (0)
Several statistical methods have recently been developed that use the distribution of P-values from multiple tests of hypotheses to analyze data from high-dimensional experiments. These methods are only as valid as the P-values derived from the test statistics. If an incorrect distribution for a test statistic is used, the P-value will not be valid, and the distribution of P-values from multiple test statistics can give misleading results. Moreover, even if the correct distribution of a test statistic is used, the distribution of P-values may still give misleading results if the P-values are correlated. A primary focus of this paper is the distribution of a P-value under a null hypothesis, and the test statistic considered is the number of rejected null hypotheses. Two issues are demonstrated using six data examples, two simulated and four from actual microarray experiments. The results provide some insight into how much of an effect can be introduced into a distribution of P-values if invalid P-values are computed or if the P-values are correlated. Additional illustration is given regarding the distribution of a P-value under an alternative hypothesis, and some approaches to modeling it are presented.
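A hedged toy simulation (not one of the paper's six data examples) of the quantities discussed here: the distribution of P-values when most null hypotheses are true and a few alternatives hold, with the number of rejected null hypotheses as the summary statistic. The sample sizes, effect size, and threshold are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
m_null, m_alt, n, effect, alpha = 900, 100, 25, 0.8, 0.05

# One-sample z-style tests: most means are truly zero, a few are shifted.
means = np.concatenate([np.zeros(m_null), np.full(m_alt, effect)])
z = rng.standard_normal(m_null + m_alt) + np.sqrt(n) * means
p_values = 1.0 - stats.norm.cdf(z)              # one-sided p-values

print("rejected nulls:", np.sum(p_values <= alpha))
print("expected false rejections if all nulls were true:", alpha * m_null)
```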
Keywords: p-value, Statistic, Value (mathematics), Alternative hypothesis
    Citations (12)