Internal Seminars 2017: New developments for IRT test equating

  • Date: 21 November 2017

  • Venue: Dipartimento di Scienze Statistiche - Via delle Belle Arti 41 - Bologna - Aula I

Speaker
Valentina Sansivieri

Abstract
Equating is a statistical process used to adjust scores on test forms so that scores on the forms can be used interchangeably (Kolen and Brennan, 2014). A good way to increase the accuracy of the estimated equivalent scores could be to use covariates on the examinees. Sansivieri and Wiberg (2017) show, with both a simulation study and an empirical example, how the equating standard error in item response theory (IRT) observed-score equating can be decreased by using covariates. However, two main limitations emerge: the bias of the equivalent scores is not calculated, and covariates are not included in the estimation of the examinees’ abilities. To fill the first gap, a new parametric bootstrap method is proposed: essentially, it generates the bootstrap samples from a multivariate normal distribution. A simulation study shows that the new bootstrap method performs well. To address the second limitation, a Bayesian approach is more flexible than a classical maximum likelihood approach for including covariates in the estimation of the examinees’ abilities: an empirical example shows that the equating standard error obtained with a Bayesian approach including one discrete covariate is lower than that obtained with a classical maximum likelihood approach without covariates. Finally, we describe multidimensional item response theory (MIRT) observed-score equating (Brossman and Lee, 2013) and propose several possible developments within its framework.
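The abstract gives only an outline of the bootstrap, so the sketch below shows, under stated assumptions, what such a parametric bootstrap for IRT observed-score equating might look like in Python. Everything here is illustrative rather than the authors' method: the 2PL model, the standard-normal ability distribution, the simplified equipercentile step (no continuity corrections), the invented item-parameter estimates est_x/est_y, and the diagonal covariance matrices Sigma_x/Sigma_y all stand in for quantities that would come from fitted IRT models.

```python
# Hypothetical sketch: item-parameter estimates are resampled from a
# multivariate normal distribution and the IRT observed-score equating is
# recomputed on each draw, giving bootstrap bias and standard-error estimates
# for the equivalent scores. All numbers are invented for illustration.
import numpy as np

def score_dist(a, b, theta):
    """P(X = x | theta) for 2PL items via the Lord-Wingersky recursion."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # item response probabilities
    dist = np.array([1.0])
    for pj in p:                                 # add one item at a time
        dist = np.convolve(dist, [1.0 - pj, pj])
    return dist

def marginal_score_dist(a, b, nodes, weights):
    """Observed-score distribution marginalised over N(0,1) abilities."""
    return sum(w * score_dist(a, b, t) for t, w in zip(nodes, weights))

def equipercentile(fx, fy):
    """Map each X score to the Y score with the same percentile rank
    (simplified: no continuity correction)."""
    Fx, Fy = np.cumsum(fx), np.cumsum(fy)
    return np.interp(Fx, Fy, np.arange(len(fy)))

rng = np.random.default_rng(2017)
nodes, weights = np.polynomial.hermite_e.hermegauss(31)
weights = weights / weights.sum()                # N(0,1) quadrature weights

# Invented 2PL estimates for forms X and Y, stacked as (a_1..a_n, b_1..b_n),
# with assumed diagonal covariance matrices of the estimates.
nx, ny = 20, 20
est_x = np.concatenate([rng.uniform(0.8, 2.0, nx), rng.normal(0, 1, nx)])
est_y = np.concatenate([rng.uniform(0.8, 2.0, ny), rng.normal(0, 1, ny)])
Sigma_x = 0.01 * np.eye(2 * nx)
Sigma_y = 0.01 * np.eye(2 * ny)

def equate(px, py):
    fx = marginal_score_dist(px[:nx], px[nx:], nodes, weights)
    fy = marginal_score_dist(py[:ny], py[ny:], nodes, weights)
    return equipercentile(fx, fy)

point = equate(est_x, est_y)                     # equating at the estimates

B = 500
boot = np.empty((B, nx + 1))
for r in range(B):
    # the key step from the abstract: bootstrap samples generated
    # from a multivariate normal distribution
    boot[r] = equate(rng.multivariate_normal(est_x, Sigma_x),
                     rng.multivariate_normal(est_y, Sigma_y))

bias = boot.mean(axis=0) - point                 # bootstrap bias per score
se = boot.std(axis=0, ddof=1)                    # bootstrap SE per score
print(np.column_stack([np.arange(nx + 1), bias, se]))
```

Similarly hedged, a minimal sketch of the Bayesian idea: a discrete examinee covariate (here a hypothetical two-level group indicator) shifts the prior mean of the ability, and abilities are estimated by expected a posteriori (EAP) on a quadrature grid. The group means mu are invented; in a full Bayesian analysis they would themselves be estimated or given hyperpriors.

```python
# Hypothetical sketch: including a discrete covariate in a Bayesian ability
# estimate by letting it shift the prior mean of theta.
import numpy as np

def eap_ability(responses, a, b, group, mu, grid):
    """EAP estimate of theta with a group-specific N(mu[group], 1) prior."""
    p = 1.0 / (1.0 + np.exp(-a[:, None] * (grid[None, :] - b[:, None])))
    like = np.prod(np.where(responses[:, None] == 1, p, 1.0 - p), axis=0)
    prior = np.exp(-0.5 * (grid - mu[group]) ** 2)  # unnormalised normal prior
    post = like * prior
    return np.sum(grid * post) / np.sum(post)

rng = np.random.default_rng(1)
grid = np.linspace(-4, 4, 161)
a = rng.uniform(0.8, 2.0, 20)
b = rng.normal(0, 1, 20)
mu = np.array([-0.3, 0.4])        # assumed covariate-group prior means

# one simulated examinee from covariate group 1
theta_true = 0.5
resp = (rng.random(20) < 1.0 / (1.0 + np.exp(-a * (theta_true - b)))).astype(int)
print(eap_ability(resp, a, b, group=1, mu=mu, grid=grid))
```

Both sketches depend only on numpy and can be run as-is to see the shape of the output; none of the numbers reproduce results from the talk.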

Organizers
Alessandra Luati, Silvia Cagnone