Computational approaches to Bayesian inference for software reliability

Date of Completion

January 1994


Statistics; Computer Science




The Gibbs sampling approach is developed for Bayesian inference and prediction in software reliability models. In many cases a data augmentation technique is introduced to simplify the specification of the full conditional densities required by the Gibbs sampler. When a conditional density cannot be identified in closed form, a Metropolis step is used within the Gibbs algorithm. The software reliability models considered include the Jelinski and Moranda model, the Littlewood and Verrall model, nonhomogeneous Poisson processes, and superpositions of several independent nonhomogeneous Poisson processes.

On the modeling side, we propose a unified approach in which the failure epochs of the software are modeled by two classes: the general order statistics model and the record value statistics model. The general order statistics model treats the observed failure epochs as the first n order statistics of N i.i.d. observations with density f supported on $R^+$, where N is the number of bugs in the software at the beginning of the testing stage. To incorporate the possible addition of new faults during repairs, we use the record value statistics model, in which the failure epochs are taken to be the record-breaking values of i.i.d. observations from f supported on $R^+$. The point processes corresponding to both classes can be related to nonhomogeneous Poisson processes.

Model selection based on a mean squared prediction error and on a prequential likelihood of conditional predictive ordinates is developed.
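As a concrete illustration of the Gibbs sampling approach, the following sketch implements a sampler for the Jelinski-Moranda model, assuming a Gamma(a, b) prior on the per-fault failure rate phi and a Poisson(theta) prior on the initial number of faults N, truncated to N >= n (the function name `gibbs_jm` and all prior parameters are illustrative choices, not the dissertation's). With these conjugate priors both full conditionals are available in closed form: phi | N is Gamma, and N - n | phi is Poisson. In models where a conditional is not identifiable like this, the corresponding draw would be replaced by a Metropolis update, as described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_jm(x, n_iter=5000, a=1.0, b=1.0, theta=50.0):
    """Gibbs sampler for the Jelinski-Moranda model (a hedged sketch).

    x : inter-failure times x_1..x_n; the hazard between failures i-1
        and i is phi * (N - i + 1).
    Priors (illustrative): phi ~ Gamma(a, b) with rate b,
                           N ~ Poisson(theta) truncated to N >= n.
    Returns an (n_iter, 2) array of (N, phi) draws.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    T = x.sum()                        # total time on test
    i = np.arange(1, n + 1)
    C = ((i - 1) * x).sum()            # sum (i-1) x_i, constant in N
    N, phi = n, 1.0                    # initial state
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        S = N * T - C                  # sum (N - i + 1) x_i
        # phi | N, data ~ Gamma(a + n, rate = b + S)
        phi = rng.gamma(a + n, 1.0 / (b + S))
        # N | phi, data: N - n ~ Poisson(theta * exp(-phi * T)),
        # since the likelihood contributes (theta * e^{-phi T})^N / (N-n)!
        N = n + rng.poisson(theta * np.exp(-phi * T))
        draws[t] = N, phi
    return draws
```

Posterior summaries of N and phi (e.g., sample means of the two columns after a burn-in period) then feed directly into reliability predictions such as the expected number of remaining faults, N - n.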