Single and Multiple Ability Estimation in the SEM Framework: A Noninformative Bayesian Estimation Approach

Su Young Kim, Youngsuk Suh, Jee Seon Kim, Mark A. Albanese, Michelle M. Langer

Research output: Contribution to journal › Article › peer-review


Abstract

Latent variable models with many categorical items and multiple latent constructs result in many dimensions of numerical integration, and the traditional frequentist estimation approach, such as maximum likelihood (ML), tends to fail due to model complexity. In such cases, Bayesian estimation with diffuse priors can be used as a viable alternative to ML estimation. This study compares the performance of Bayesian estimation with ML estimation in estimating single or multiple ability factors across 2 types of measurement models in the structural equation modeling framework: a multidimensional item response theory (MIRT) model and a multiple-indicator multiple-cause (MIMIC) model. A Monte Carlo simulation study demonstrates that Bayesian estimation with diffuse priors, under various conditions, produces results quite comparable with ML estimation in the single- and multilevel MIRT and MIMIC models. Additionally, an empirical example utilizing the Multistate Bar Examination is provided to compare the practical utility of the MIRT and MIMIC models. Structural relationships among the ability factors, covariates, and a binary outcome variable are investigated through the single- and multilevel measurement models. The article concludes with a summary of the relative advantages of Bayesian estimation over ML estimation in MIRT and MIMIC models and suggests strategies for implementing these methods.
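For context, the two measurement models named in the abstract can be summarized as follows; this is a minimal, illustrative sketch assuming a logistic two-parameter item response function, and the article's exact parameterization within the SEM framework (e.g., a probit/normal-ogive link) may differ.

\[
P\!\left(y_{ij} = 1 \mid \boldsymbol{\theta}_i\right)
= \frac{\exp\!\left(\mathbf{a}_j^{\top}\boldsymbol{\theta}_i - b_j\right)}
       {1 + \exp\!\left(\mathbf{a}_j^{\top}\boldsymbol{\theta}_i - b_j\right)}
\quad \text{(MIRT measurement part)}
\]

\[
\boldsymbol{\theta}_i = \boldsymbol{\Gamma}\,\mathbf{x}_i + \boldsymbol{\zeta}_i,
\qquad \boldsymbol{\zeta}_i \sim N(\mathbf{0}, \boldsymbol{\Psi})
\quad \text{(MIMIC structural part)}
\]

Here \(\boldsymbol{\theta}_i\) is the vector of ability factors for examinee \(i\), \(\mathbf{a}_j\) and \(b_j\) are item discrimination and difficulty parameters, and \(\mathbf{x}_i\) collects the observed covariates; in the MIMIC specification the same measurement part links the categorical items to \(\boldsymbol{\theta}_i\), while the structural part regresses the abilities on the covariates.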

Original language: English
Pages (from-to): 563-591
Number of pages: 29
Journal: Multivariate Behavioral Research
Volume: 48
Issue number: 4
DOIs
State: Published - Jul 2013
