
Quantifying Uncertainty in Expert Judgment: Initial Results

Technical Report
Publisher

Software Engineering Institute

CMU/SEI Report Number
CMU/SEI-2013-TR-001
DOI (Digital Object Identifier)
10.1184/R1/6582719.v1

Abstract

The work described in this report, part of a larger SEI research effort on Quantifying Uncertainty in Early Lifecycle Cost Estimation (QUELCE), aims to develop and validate methods for calibrating expert judgment. Reliable expert judgment is crucial across the program acquisition lifecycle for cost estimation and, perhaps most critically, for tasks related to risk analysis and program management. This research is based on three field studies that compare and validate training techniques intended to help participants make more realistic judgments commensurate with their knowledge.

Most of the study participants completed three batteries of software engineering domain-specific test questions. For comparison, some participants completed four batteries of questions on a variety of general knowledge topics. Results from both sets of questions showed improvement in the participants' recognition of their true uncertainty. The domain-specific training was accompanied by notable improvements in the relative accuracy of the participants' answers when more contextual information about the questions was provided, along with "reference points" about similar software systems. Moreover, the additional contextual information in the domain-specific training helped the participants improve the accuracy of their judgments while also reducing their uncertainty in making those judgments.
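To make the notion of recognizing one's true uncertainty concrete, here is a minimal sketch in Python of one common calibration measure; it is illustrative only and not taken from the report. Experts supply 90% confidence intervals for each question, and the fraction of intervals that contain the true answer (the "hit rate") is compared against the 0.90 target. All names and values below are hypothetical.

# Illustrative sketch (not from the report): a well-calibrated expert's
# 90% intervals should contain the true value about 90% of the time.
# A much lower hit rate signals overconfidence (intervals too narrow).

def interval_hit_rate(answers):
    """answers: list of (low, high, true_value) tuples for one expert."""
    hits = sum(1 for low, high, true in answers if low <= true <= high)
    return hits / len(answers)

# Hypothetical battery of five questions (all values made up):
battery = [
    (10, 50, 42),     # hit
    (100, 200, 250),  # miss: interval too narrow or misplaced
    (1, 5, 3),        # hit
    (20, 30, 29),     # hit
    (0, 10, 15),      # miss
]

print(f"hit rate: {interval_hit_rate(battery):.2f}")  # 0.60, well below 0.90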

Cite This Technical Report

Goldenson, D., & Stoddard, R. (2013, March 1). Quantifying Uncertainty in Expert Judgment: Initial Results. (Technical Report CMU/SEI-2013-TR-001). Retrieved April 24, 2024, from https://doi.org/10.1184/R1/6582719.v1.

@techreport{goldenson_2013,
  author = {Goldenson, Dennis and Stoddard, Robert},
  title = {Quantifying Uncertainty in Expert Judgment: Initial Results},
  month = mar,
  year = {2013},
  number = {CMU/SEI-2013-TR-001},
  institution = {Software Engineering Institute, Carnegie Mellon University},
  url = {https://doi.org/10.1184/R1/6582719.v1},
  note = {Accessed: 2024-Apr-24}
}

Goldenson, Dennis, and Robert Stoddard. "Quantifying Uncertainty in Expert Judgment: Initial Results." (CMU/SEI-2013-TR-001). Carnegie Mellon University, Software Engineering Institute's Digital Library. Software Engineering Institute, March 1, 2013. https://doi.org/10.1184/R1/6582719.v1.

D. Goldenson and R. Stoddard, "Quantifying Uncertainty in Expert Judgment: Initial Results," Carnegie Mellon University, Software Engineering Institute's Digital Library. Software Engineering Institute, Technical Report CMU/SEI-2013-TR-001, 1-Mar-2013 [Online]. Available: https://doi.org/10.1184/R1/6582719.v1. [Accessed: 24-Apr-2024].

Goldenson, Dennis, and Robert Stoddard. "Quantifying Uncertainty in Expert Judgment: Initial Results." (Technical Report CMU/SEI-2013-TR-001). Carnegie Mellon University, Software Engineering Institute's Digital Library, Software Engineering Institute, 1 Mar. 2013. https://doi.org/10.1184/R1/6582719.v1. Accessed 24 Apr. 2024.

Goldenson, Dennis; & Stoddard, Robert. Quantifying Uncertainty in Expert Judgment: Initial Results. CMU/SEI-2013-TR-001. Software Engineering Institute. 2013. https://doi.org/10.1184/R1/6582719.v1