Blue Paper
The Unreliability of Value Elicitation Methods in Valuing Development Interventions
Jeremy Shapiro, Chaning Jang, and Nicholas Owsley
May 17, 2021

OVERVIEW
This study assesses the relative reliability of the most common incentive-compatible value elicitation techniques and compares the valuations generated by each technique to those from a simple hypothetical question. Specifically, we collect valuations for 18 common aid interventions from 793 potential aid recipients using six randomly assigned elicitation methods. In a follow-up survey, respondents were given a 'take-it-or-leave-it' (TIOLI) offer for an intervention; we measure reliability as whether the elicited valuation predicts the respondent's choice at follow-up. Our results show that valuations are systematically overstated across methods and are generally not consistent with responses to a concrete TIOLI offer: only 40% of valuations were consistent with TIOLI choices.
Valuations are also sensitive to the elicitation method used and to framing. Overall, incentive-compatible techniques do not perform meaningfully better than a hypothetical question. We conclude that valuations can be obtained inexpensively using a hypothetical question, but that policymakers should interpret valuation outputs with caution and refrain from treating them as point estimates given their limited reliability.