TY - CONF
T1 - Using uncertainty as a model selection and comparison criterion
T2 - Proceedings of the 5th International Conference on Predictor Models in Software Engineering
Y1 - 2009
A1 - Sarcia', Salvatore Alessandro
A1 - Basili, Victor R.
A1 - Cantone, Giovanni
KW - accuracy
KW - Bayesian prediction intervals
KW - calibration
KW - cost estimation
KW - cost model
KW - model evaluation
KW - model selection
KW - prediction interval
KW - uncertainty
AB - Over the last 25+ years, software estimation research has been searching for the best model for estimating variables of interest (e.g., cost, defects, and fault proneness). This research effort has not led to a common agreement. One problem is that researchers have been using accuracy as the basis for selection and comparison. But accuracy is not invariant; it depends on the test sample, the error measure, and the chosen error statistics (e.g., MMRE, PRED, and the mean and standard deviation of error samples). Ideally, we would like an invariant criterion. In this paper, we show that uncertainty can be used as an invariant criterion to determine which estimation model should be preferred over others. The majority of this work is empirically based, applying Bayesian prediction intervals to some COCOMO model variations with respect to a publicly available cost estimation data set from the PROMISE repository.
JA - Proceedings of the 5th International Conference on Predictor Models in Software Engineering
T3 - PROMISE '09
PB - ACM
CY - New York, NY, USA
SN - 978-1-60558-634-2
UR - http://doi.acm.org/10.1145/1540438.1540464
M3 - 10.1145/1540438.1540464
ER -