Improving predictive models of software quality using search-based metric selection and decision trees
Abstract
Predictive models are used to identify potentially problematic components that decrease product quality. Design and source code metrics serve as input features for these models; however, a large number of structural measures exist, capturing different aspects of coupling, cohesion, inheritance, complexity, and size. An important question to answer is: which metrics should be used with a model for a particular predictive objective? Identifying a metric subset that improves classifier performance may also provide insight into the structural properties that lead to problematic modules.
In this work, a genetic algorithm (GA) is used as a search-based metric selection strategy. A comparative study has been carried out between GA, the Chidamber and Kemerer (CK) metrics suite, and principal component analysis (PCA) as metric selection strategies on different datasets. Program comprehension is important for programmers, and the first dataset evaluated uses source code inspections as a subjective measure of cognitive complexity. Predicting the likely location of system failures is important for improving a system’s reliability, so the second dataset uses an objective measure of faults found in system modules in order to predict fault-prone components.
The aim of this research has been to advance the current state of the art in predictive models of software quality by exploring the efficacy of a search-based approach to selecting appropriate metric subsets. Results show that GA performs well as a metric selection strategy when used with a linear discriminant analysis classifier. When predicting cognitively complex classes, GA achieved an F-value of 0.845, compared to 0.740 using PCA and 0.750 for the CK metrics.
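The search-based selection strategy described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the dataset is synthetic, a nearest-centroid classifier stands in for linear discriminant analysis, and the fitness function and GA parameters (population size, crossover, mutation rate) are all assumptions. The GA evolves bit-masks over candidate metrics, scoring each subset by the F-measure of the classifier it induces.

```python
import random

def make_dataset(n=200, seed=0):
    # Synthetic stand-in for a metrics dataset: 6 "metrics" per module,
    # of which only metrics 0 and 1 actually correlate with the label.
    rng = random.Random(seed)
    X, y = [], []
    for _ in range(n):
        label = rng.randint(0, 1)
        row = [rng.gauss(2.0 * label, 0.5), rng.gauss(1.5 * label, 0.5)]
        row += [rng.gauss(0.0, 1.0) for _ in range(4)]  # noise metrics
        X.append(row)
        y.append(label)
    return X, y

def f_measure(X, y, mask):
    # Nearest-centroid classifier restricted to the selected metrics,
    # scored by F-measure on the same data (a sketch; a real study
    # would use held-out data or cross-validation).
    idx = [i for i, bit in enumerate(mask) if bit]
    if not idx:
        return 0.0
    cent = {c: [sum(X[j][i] for j in range(len(X)) if y[j] == c) /
                max(1, sum(1 for lab in y if lab == c))
                for i in idx]
            for c in (0, 1)}
    tp = fp = fn = 0
    for row, label in zip(X, y):
        dist = {c: sum((row[i] - cent[c][k]) ** 2
                       for k, i in enumerate(idx)) for c in (0, 1)}
        pred = 0 if dist[0] < dist[1] else 1
        if pred == 1 and label == 1: tp += 1
        elif pred == 1: fp += 1
        elif label == 1: fn += 1
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def ga_select(X, y, n_metrics=6, pop_size=20, generations=30, seed=1):
    # GA over bit-masks: each individual selects a metric subset,
    # fitness is the resulting classifier's F-measure.
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_metrics)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda m: f_measure(X, y, m), reverse=True)
        nxt = scored[:2]                       # elitism
        while len(nxt) < pop_size:
            a, b = rng.sample(scored[:10], 2)  # truncation selection
            cut = rng.randrange(1, n_metrics)  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:             # bit-flip mutation
                child[rng.randrange(n_metrics)] ^= 1
            nxt.append(child)
        pop = nxt
    best = max(pop, key=lambda m: f_measure(X, y, m))
    return best, f_measure(X, y, best)
```

On this synthetic data the GA reliably converges on a mask that includes the informative metrics, illustrating how a search-based strategy can pick out a small predictive subset from a larger pool of structural measures.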
Examining the GA-chosen metrics with a white-box predictive model (a decision tree classifier) yielded additional insights into the structural properties of a system that degrade product quality. Source code metrics have been designed for human understanding and program comprehension, and predictive models of cognitive complexity perform well with source code metrics alone. Models of fault-prone modules do not perform as well when using only source code metrics and need additional non-source-code information, such as module modification history or testing history.
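The appeal of a white-box model is that its splits read as human-interpretable rules over metrics. A minimal sketch of that idea, under assumed synthetic data (metric 0 correlates with the fault label, the rest are noise), is a decision stump: the depth-1 core of a decision tree, found by exhaustive search over (metric, threshold) splits.

```python
import random

def make_metrics(n=200, seed=0):
    # Toy stand-in for a module-metrics dataset: metric 0 correlates
    # strongly with the fault label, metrics 1-3 are pure noise.
    rng = random.Random(seed)
    X, y = [], []
    for _ in range(n):
        label = rng.randint(0, 1)
        X.append([rng.gauss(2.0 * label, 0.5)] +
                 [rng.gauss(0.0, 1.0) for _ in range(3)])
        y.append(label)
    return X, y

def best_stump(X, y):
    # Exhaustive search for the single (metric, threshold) split that best
    # separates faulty from clean modules -- the depth-1 core of a decision
    # tree, and the kind of readable rule a white-box model exposes.
    best = (0, 0.0, -1.0)  # (metric index, threshold, accuracy)
    for i in range(len(X[0])):
        values = sorted({row[i] for row in X})
        for lo, hi in zip(values, values[1:]):
            t = (lo + hi) / 2.0
            acc = sum((row[i] > t) == (label == 1)
                      for row, label in zip(X, y)) / len(y)
            if acc > best[2]:
                best = (i, t, acc)
    return best

X, y = make_metrics()
metric, threshold, accuracy = best_stump(X, y)
print(f"rule: metric[{metric}] > {threshold:.2f}  (accuracy {accuracy:.2f})")
```

The resulting rule ("flag a module when metric 0 exceeds the learned threshold") is exactly the kind of structural insight the abstract attributes to examining GA-chosen metrics through a decision tree.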
Citation
Vivanco, Rodrigo and Jin, Dean (2007). Improving a Predictive Model of Cognitive Complexity Using an Evolutionary Computational Approach – A Case Study, In Proceedings of the 17th Annual International Conference of IBM Centers for Advanced Studies (CASCON’07), Toronto, Ontario, October 2007, pp. 109-123.