OBJECTIVES: Developing methods to assess the quality of data from patient preference surveys is an active research area. Comprehension questions have been proposed to identify respondents who might not understand discrete-choice experiment (DCE) tasks; these questions also teach respondents how to interpret the survey's text and graphics and reinforce important survey elements. We reviewed 13 DCE surveys to assess the performance of a comprehension question.
METHODS: We reviewed data from 13 DCE surveys that used a common risk grid comprehension question. The percentage of respondents who failed the comprehension question was calculated and summarized by recruitment source and respondent type.
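The summary described here amounts to a simple grouped aggregation. Below is a minimal sketch, not the authors' actual code, of how failure rates might be tabulated from respondent-level records; the column names (failed_risk_grid, recruitment_source, respondent_type) are illustrative assumptions.

```python
import pandas as pd

# Illustrative respondent-level records; in practice these would come from
# the individual survey datasets. Column names are assumptions.
df = pd.DataFrame({
    "survey_id":         [1, 1, 1, 2, 2, 2],
    "recruitment_source": ["online panel"] * 3 + ["clinical site"] * 3,
    "respondent_type":   ["patient", "caregiver", "patient",
                          "patient", "patient", "caregiver"],
    "failed_risk_grid":  [1, 0, 0, 0, 1, 0],  # 1 = failed the comprehension question
})

# Percent of respondents failing, by recruitment source and by respondent type.
by_source = df.groupby("recruitment_source")["failed_risk_grid"].mean() * 100
by_type = df.groupby("respondent_type")["failed_risk_grid"].mean() * 100
print(by_source.round(1))
print(by_type.round(1))
```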
RESULTS: The failure rate on the risk grid comprehension question ranged from 5% to 36%. Failure rates by recruitment source were 11%-36% for online panels (n=6), 5%-15% for patient groups (n=4), and 5%-13% for clinical sites (n=3). In all 13 surveys, the analysis of the preference weights using the full sample yielded intuitive results. Across respondent types, failure rates were 5%-15% for patients (n=8), 13%-36% for caregivers (n=4), and 17% for the general population sample (n=1). In several studies, we compared results estimated with and without respondents who failed the risk grid comprehension question and found them qualitatively similar.
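The with/without comparison is a straightforward sensitivity analysis: estimate preference weights on the full sample and again on the subsample that passed the comprehension question, then compare. A minimal sketch follows; fit_preference_weights is a hypothetical helper standing in for whatever choice model (e.g., a conditional logit) a given study uses, and the column name is assumed as above.

```python
def compare_with_without_failures(df, fit_preference_weights):
    """Fit preference weights on the full sample and on the subsample that
    passed the risk grid comprehension question, for side-by-side comparison
    (e.g., checking for disordered weights or inflated confidence intervals)."""
    full_sample = fit_preference_weights(df)
    passed_only = fit_preference_weights(df[df["failed_risk_grid"] == 0])
    return full_sample, passed_only
```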
CONCLUSIONS: We provide a summary of failure rates for a comprehension question across multiple studies against which other researchers can benchmark their own studies. Datasets with failure rates as high as 36% still produced reasonable estimates of preference weights (no extreme disordering or large confidence intervals). Our experience suggests that comprehension questions help respondents learn and provide a check on respondent understanding. Very high failure rates may signal data quality problems, such as attributes that are difficult to understand or inattentive respondents.