    Does rapid guessing prevent the detection of the effect of a time limit in testing?
    Citations (2)
    References (28)
    Related Papers (10)
    Abstract:

    Rapid guessing is a test-taking strategy recommended for increasing the probability of achieving a high score when a time limit prevents an examinee from responding to all items of a scale. The strategy requires responding quickly and without cognitively processing item details. Although rapid guessing leaves no omitted responses, an open question remains: do such data show the unidimensionality expected of data collected by a scale, or the bi-dimensionality that characterizes data collected under a time limit (speeded data)? To answer this question, we simulated speeded and rapid-guessing data and performed confirmatory factor analysis using one-factor and two-factor models. The results revealed that speededness was detectable despite the presence of rapid guessing. However, detection may depend on the number of response options for a given set of items.
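    As a rough illustration of the kind of simulation the abstract describes, the numpy sketch below (a simplified setup of my own, not the authors' design: the item counts, the 40% share of slow examinees, and the 0.25 guessing rate are all assumptions) generates unidimensional 2PL responses, replaces the end-of-test responses of slow examinees with rapid guesses, and inspects the leading eigenvalues of the inter-item correlation matrix. A clearly non-negligible second eigenvalue is a crude proxy for the bi-dimensionality that the one-factor versus two-factor confirmatory models in the paper test formally.

        import numpy as np

        rng = np.random.default_rng(1)
        n_persons, n_items, n_speeded = 2000, 20, 6   # last 6 items affected by the time limit (assumed)
        p_guess = 0.25                                 # 4 response options -> chance level (assumed)

        theta = rng.normal(size=n_persons)             # ability
        a = rng.uniform(0.8, 2.0, size=n_items)        # discriminations
        b = rng.normal(size=n_items)                   # difficulties

        # Unidimensional 2PL responses
        p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
        x = (rng.uniform(size=p.shape) < p).astype(float)

        # Speededness: the slowest 40% of examinees rapid-guess on the last items
        slow = rng.uniform(size=n_persons) < 0.40
        guesses = (rng.uniform(size=(n_persons, n_speeded)) < p_guess).astype(float)
        x[slow, -n_speeded:] = guesses[slow]

        # Crude dimensionality check: eigenvalues of the inter-item correlation matrix
        eigvals = np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))[::-1]
        print("largest eigenvalues:", np.round(eigvals[:4], 2))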

    Keywords:
    Time limit
    The Connor-Davidson Resilience Scale (CD-RISC) is inarguably one of the best-known instruments in the field of resilience assessment. However, the evidence for the psychometric quality of the instrument has been based only on classical test theory. This paper focuses on the calibration of the CD-RISC with a nonclinical sample of 444 adults using the Rasch-Andrich Rating Scale Model, in order to clarify its structure and analyze its psychometric properties at the item level (a sketch of the rating scale model follows the keyword list below). Two items showed misfit to the model and were eliminated. The remaining 22 items essentially form a unidimensional scale. The CD-RISC has good psychometric properties: the fit of both the items and the persons to the Rasch model was good, and the response categories functioned properly. Two of the items showed differential item functioning. The CD-RISC has an obvious ceiling effect, which suggests including more difficult items in future versions of the scale.
    Differential item functioning
    Polytomous Rasch model
    Ceiling effect
    Citations (65)
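    For reference, the Rasch-Andrich Rating Scale Model used to calibrate the CD-RISC above is commonly written as follows, with person ability \theta_n, item difficulty \delta_i, and a set of thresholds \tau_1, \dots, \tau_m shared by all items (m is one less than the number of response categories, e.g. a 5-point format gives m = 4):

        P(X_{ni} = k \mid \theta_n) = \frac{\exp\left(\sum_{j=1}^{k} (\theta_n - \delta_i - \tau_j)\right)}{\sum_{c=0}^{m} \exp\left(\sum_{j=1}^{c} (\theta_n - \delta_i - \tau_j)\right)}, \qquad k = 0, \dots, m, \quad \sum_{j=1}^{0}(\cdot) \equiv 0.

    Because the thresholds are shared across items, one common diagnostic of whether the response categories function properly is whether the estimated \tau_j are ordered.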
    Item bank
    Classical test theory
    Item analysis
    Test theory
    Psychometric testing
    Psychometric tests
    Citations (150)
    Item response time data can now be obtained easily through computerized testing, so examinees can be evaluated not only on test scores but also on response times. It is well known that item response theory (IRT) is useful for item analysis in the evaluation of test scores. Similarly, the ideas of IRT can be applied to item analysis for the evaluation of examinees' response times. The authors propose an IRT model for item response time. In this paper, the authors show (1) the validity of the theory, (2) item analysis based on the theory, and (3) estimation of examinees' ability with respect to response time, and they demonstrate the utility of the theory by applying it to practical data.
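    The abstract does not spell out the proposed model, so purely as a point of comparison, one widely used IRT-style model for response times is the lognormal model, in which the density of the time t_{ij} that examinee i spends on item j is

        f(t_{ij} \mid \tau_i) = \frac{\alpha_j}{t_{ij}\sqrt{2\pi}} \exp\left(-\frac{1}{2}\left[\alpha_j\bigl(\ln t_{ij} - (\beta_j - \tau_i)\bigr)\right]^2\right),

    where \tau_i is the examinee's speed, \beta_j the item's time intensity, and \alpha_j a discrimination-like parameter for time. Estimating \tau_i plays the same role for response time that estimating ability plays for scores; this is not necessarily the authors' own specification.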
    Response time
    Classical test theory
    Differential item functioning
    Item analysis
    Test theory
    Citations (0)
    Low-dimensional materials have excellent properties that are closely related to their dimensionality. However, the growth mechanism underlying the tunable dimensionality of such materials, from 2D triangles to 1D ribbons, remains unclear. Here, we establish a general kinetic Monte Carlo model for the growth of transition metal dichalcogenides (TMDs) to address this issue. Our model reproduces several key findings in experiments and reveals that the dimensionality is determined by the lattice mismatch and the interaction strength between the TMD and the substrate. We predict that the dimensionality can be well tuned by the interaction strength and the geometry of the substrate. Our work deepens the understanding of the tunable dimensionality of low-dimensional materials and may inspire new concepts for the design of such materials with the desired dimensionality.
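    The abstract stays at the level of the model's predictions, but the core of any kinetic Monte Carlo simulation is the rejection-free event-selection step. The sketch below shows that generic step only, with invented event types and rates; it is not the authors' TMD growth model and encodes no lattice or substrate physics.

        import numpy as np

        rng = np.random.default_rng(0)

        def kmc_step(rates, time):
            """One rejection-free kMC step: choose an event with probability
            proportional to its rate, then advance the clock by an
            exponentially distributed waiting time."""
            total = rates.sum()
            event = rng.choice(len(rates), p=rates / total)
            time += rng.exponential(1.0 / total)
            return event, time

        # Toy event table (illustrative rates only, not fitted to any material):
        # 0 = adatom attachment at an edge, 1 = detachment, 2 = edge diffusion
        rates = np.array([5.0, 0.5, 2.0])
        t, counts = 0.0, np.zeros(3, dtype=int)
        for _ in range(10_000):
            event, t = kmc_step(rates, t)
            counts[event] += 1
        print("event counts:", counts, "| simulated time:", round(t, 2))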
    Lattice
    Kinetic Monte Carlo
    Item response theory (IRT) model applications extend well beyond cognitive ability testing, and various patient-reported outcome (PRO) measures are among the more prominent examples. PRO (and similar) constructs differ from cognitive ability constructs in many ways, and these differences have model-fitting implications. With a few notable exceptions, however, most IRT applications to PRO constructs rely on traditional IRT models, such as the graded response model. We review some notable differences between cognitive and PRO constructs and how these differences can present challenges for traditional IRT model applications. We then apply two models (the traditional graded response model and an alternative log-logistic model) to depression measure data drawn from the Patient-Reported Outcomes Measurement Information System project. We do not claim that one model is “a better fit” or more “valid” than the other; rather, we show that the log-logistic model may be more consistent with the construct of depression as a unipolar phenomenon. Clearly, the graded response and log-logistic models can lead to different conclusions about the psychometrics of an instrument and the scaling of individual differences. We underscore, too, that the question of which model is more appropriate cannot, in general, be decided by fit index comparisons alone; such decisions may require integrating psychometrics with theory and research findings on the construct of interest.
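    For concreteness, the traditional graded response model referenced here specifies, for an item j with ordered categories k = 0, \dots, m_j, the cumulative probabilities

        P(X_{ij} \ge k \mid \theta_i) = \frac{1}{1 + \exp\bigl(-a_j(\theta_i - b_{jk})\bigr)}, \qquad k = 1, \dots, m_j,

    with the category probability P(X_{ij} = k) obtained as the difference of adjacent cumulative probabilities. The log-logistic alternative discussed in the abstract treats the latent trait as unipolar rather than ranging over the whole real line; its exact form is not reproduced here.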
    Multidimensional scaling
    Citations (19)
    Psychometric theory offers a range of tests that can be used as supportive evidence of both validity and reliability of instruments aimed at measuring patient-reported outcomes (PRO). The aim of this paper is to illustrate psychometric tests within the Classical Test Theory (CTT) framework, comprising indices that are frequently applied to assess item- and scale-level psychometric properties of PRO instruments.
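    As a concrete illustration of the kind of item- and scale-level CTT indices such papers refer to, the sketch below computes Cronbach's alpha and corrected item-total correlations with numpy. The data and item count are hypothetical, and the specific indices covered by the paper may differ.

        import numpy as np

        def cronbach_alpha(scores):
            """Cronbach's alpha for an items-in-columns score matrix."""
            k = scores.shape[1]
            item_var = scores.var(axis=0, ddof=1).sum()
            total_var = scores.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_var / total_var)

        def corrected_item_total(scores):
            """Correlation of each item with the sum of the remaining items."""
            total = scores.sum(axis=1)
            return np.array([np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
                             for j in range(scores.shape[1])])

        # Illustrative data: 200 respondents, 5 Likert-type items sharing a common factor
        rng = np.random.default_rng(2)
        common = rng.normal(size=(200, 1))
        scores = np.clip(np.round(3 + common + rng.normal(scale=0.8, size=(200, 5))), 1, 5)
        print("alpha:", round(cronbach_alpha(scores), 2))
        print("item-total r:", np.round(corrected_item_total(scores), 2))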
    Item bank
    Classical test theory
    Differential item functioning
    Item analysis
    Criterion validity
    Citations (32)
    The assessment of the dimensionality of data is important to item response theory (IRT) modelling and other multidimensional data analysis techniques. The fact that the parameters of the factor analysis formulation for dichotomous data can be expressed in terms of the parameters of the multidimensional IRT model suggests that the assessment of the dimensionality of the latent trait space can also be approached from the factor-analytic viewpoint. Some problems connected with the assessment of the dimensionality of the latent space are discussed, and the conclusions are supported by simulation results for sample sizes of 250 and 500 on a 15-item test. Five tables contain data from the simulation, and 48 graphs illustrate eigenvalues and plotted mean residuals.
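    The correspondence this abstract relies on, between the factor-analytic formulation for dichotomous data and the multidimensional IRT parameters, is usually written for a normal-ogive link with standardized, uncorrelated factors, item loading vector \lambda_j, and threshold \tau_j as

        a_j = \frac{\lambda_j}{\sqrt{1 - \lambda_j^{\top}\lambda_j}}, \qquad d_j = \frac{-\tau_j}{\sqrt{1 - \lambda_j^{\top}\lambda_j}},

    so that P(X_{ij} = 1 \mid \boldsymbol{\theta}_i) = \Phi(a_j^{\top}\boldsymbol{\theta}_i + d_j). Assessing the dimensionality of the latent space is then the same question as deciding how many columns the loading matrix needs.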
    Factor Analysis
    Sample (statistics)
    Citations (17)
    Classical test theory
    Multiple choice
    Test theory
    Item analysis
    Psychometric testing
    Citations (0)
    With developments in computing technology, item response theory (IRT) has advanced rapidly and has become a user-friendly tool in the psychometrics world. The limitations of classical test theory are one factor that encourages the use of IRT. In this study, the basic concepts of IRT are discussed. In addition, ability parameter estimation is briefly reviewed, particularly maximum likelihood estimation (MLE) and expected a posteriori (EAP) estimation. This review aims to provide a fundamental understanding of IRT, MLE, and EAP, which should help evaluators in psychometrics recognize the characteristics of test participants. Keywords: Expected A Posteriori, Item Response Theory, Maximum Likelihood Estimation
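    As a minimal sketch of the EAP idea reviewed here (not the paper's own implementation; the grid, prior, and item parameters below are assumptions), the posterior mean of ability under a 2PL model can be approximated by simple quadrature over a standard normal prior:

        import numpy as np

        def eap_2pl(x, a, b, n_nodes=61):
            """EAP ability estimate under a 2PL model with a standard normal prior,
            using simple grid quadrature. x: 0/1 responses, a/b: item parameters."""
            theta = np.linspace(-4.0, 4.0, n_nodes)
            prior = np.exp(-0.5 * theta**2)                          # N(0,1) up to a constant
            p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))      # (nodes, items)
            like = np.prod(np.where(x == 1, p, 1.0 - p), axis=1)     # likelihood at each node
            post = prior * like
            return np.sum(theta * post) / np.sum(post)

        # Hypothetical item parameters and one response pattern (illustration only)
        a = np.array([1.2, 0.9, 1.5, 1.1, 0.8])
        b = np.array([-1.0, -0.3, 0.0, 0.6, 1.2])
        x = np.array([1, 1, 1, 0, 0])
        print("EAP estimate:", round(eap_2pl(x, a, b), 2))

    An MLE analogue on the same grid would return theta[np.argmax(like)] instead of the posterior mean; EAP shrinks the estimate toward the prior mean and stays finite even for all-correct or all-incorrect patterns, which is one reason the two estimators are usually discussed together.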
    Classical test theory
    Test theory
    Citations (11)

    Background

    Frequently used questionnaires to measure fatigue in rheumatoid arthritis (RA) are not developed from the patients’ perspective or have a fixed-length format. Modern psychometric methods make it possible to measure patient-reported outcomes precisely at the individual level with few items. To construct a computer adaptive test (CAT), which successively selects items based on a patient’s previous answers, an item pool has to be constructed and calibrated according to item response theory (IRT).
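    To make the item-selection step concrete, the sketch below picks the next CAT item by maximum Fisher information at the current ability estimate. It uses a dichotomous 2PL pool purely for brevity; the fatigue items in this study are polytomous and calibrated with the GPCM (see Methods), and the item parameters here are invented for illustration.

        import numpy as np

        def next_item_2pl(theta_hat, a, b, administered):
            """Pick the unadministered item with maximum Fisher information
            at the current ability estimate (2PL: I(theta) = a^2 * p * (1 - p))."""
            p = 1.0 / (1.0 + np.exp(-a * (theta_hat - b)))
            info = a**2 * p * (1.0 - p)
            info[list(administered)] = -np.inf          # exclude items already given
            return int(np.argmax(info))

        # Hypothetical pool of 6 items; the examinee's current estimate is 0.4
        a = np.array([1.0, 1.4, 0.7, 1.2, 1.6, 0.9])
        b = np.array([-1.5, -0.5, 0.0, 0.4, 0.8, 2.0])
        print("next item:", next_item_2pl(0.4, a, b, administered={0, 2}))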

    Objectives

    The goal of this study was to calibrate an item pool to measure fatigue in RA. The pool was based on the patients’ perspective, captured in an interview study, and was examined for face and content validity in a previous Delphi study with patients and professionals. The item pool contained 245 questions about fatigue. The fit of the items to the underlying dimensions was assessed with item response theory (IRT), and the dimensionality structure of the item pool was examined with factor analysis and multidimensional IRT.

    Methods

    Participants were 551 patients with RA from three hospitals in the Netherlands. Because it was not feasible to have each patient score all 245 items of our item pool, we used an item administration design to construct seven different questionnaire versions. Each patient completed one version of the questionnaire, containing at most 126 items. IRT analysis using the generalized partial credit model (GPCM) was conducted for each dimension of fatigue, and poorly fitting items were removed. Subsequently, exploratory and confirmatory factor analyses were performed on the remaining items and a multidimensional IRT model was fitted.
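    For reference, the generalized partial credit model used here gives the probability that patient i scores in category k (k = 0, \dots, m_j) of item j as

        P(X_{ij} = k \mid \theta_i) = \frac{\exp\left(\sum_{v=1}^{k} a_j(\theta_i - b_{jv})\right)}{\sum_{c=0}^{m_j} \exp\left(\sum_{v=1}^{c} a_j(\theta_i - b_{jv})\right)}, \qquad \sum_{v=1}^{0}(\cdot) \equiv 0,

    where a_j is the discrimination referred to in the Results below and the b_{jv} are step difficulties.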

    Results

    In the IRT analysis, 49 items showed insufficient item characteristics: items with a discrimination below 0.60 and/or model misfit effect sizes above 0.10 were removed. Exploratory and confirmatory factor analysis on the 196 remaining items revealed three dimensions of fatigue: severity, impact, and variability of fatigue. These dimensions were further confirmed in the multidimensional IRT analysis.

    Conclusions

    This study provided an initially calibrated multidimensional item bank and showed which of the dimensions and items that emerged from previous studies are important for the development of a multidimensional computerized adaptive test (CAT) for fatigue in RA.

    Disclosure of Interest

    None Declared
    Item bank
    Exploratory factor analysis
    Differential item functioning
    Item analysis
    Face validity
    Delphi Method