Gain a competitive edge in software quality with our unique, patented software-diversity technology. The Diversity Analyzer lets software quality-assurance professionals and software developers automatically measure and improve the quality of their testing by measuring code coverage. At the same cost as coverage, you can go well beyond it: analyze the program's internal control-flow diversity and data-flow diversity, measure dynamic code complexity, and improve bug isolation.
Quickly find out where you test, what you test, how well you test, how complex your code is, and how well your test cases isolate faults. Use this unique information to improve the quality of your testing by diversifying it to increase the probability of defect detection. Measure dynamic code complexity to identify portions of code with high run-time complexity. Identify the test cases best suited for debugging your application.
What is test diversity? Test diversity is a test-dispersion measure that tells you where your testing is concentrated, what you have tested, and how well you have tested your code. It can be used to evaluate any type of testing, including black-box testing, white-box testing, and statistical testing. Test diversity relates the quality of any type of testing to control diversity and data diversity at the source-code level.
Conditional diversity:
- Determine test distribution
- Determine control-flow variation
- Balance your test
- Improve test quality

Data diversity:
- Determine data-flow variation
- Determine data-flow distribution
- Diversify your test
- Improve test quality
Determine where your testing is concentrated and how well you are testing at the source-code level.
The true/false evaluation frequencies of conditional expressions in conditional statements are used to measure the quality of a test suite. For a particular test suite, these evaluation frequencies may be unevenly distributed; for example, the true branches in the code could be exercised more heavily than the false ones. Conditional diversity points to portions of the source code with high and low test concentration.
Conditional diversity measures the control-flow variation of the program under test. Test suites with higher control-flow variation are more dispersed in the control and data space than test suites with lower control-flow variation. Use conditional diversity to determine whether you are gaining false confidence in your testing by running the “same” or “similar” test over and over again, and to determine where in the code you need to apply balancing and skewing schemes to diversify your testing and increase the chances of defect detection.
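The product does not describe its instrumentation here, so the following is only a minimal Python sketch of the idea behind conditional diversity: record how often each conditional expression evaluates true versus false, and report how balanced the two branches are. The `track` helper, the conditional identifier, and the balance metric are all illustrative assumptions, not the tool's actual implementation.

```python
from collections import defaultdict

# Per-conditional evaluation counts: cond_id -> [false_count, true_count].
counts = defaultdict(lambda: [0, 0])

def track(cond_id, value):
    """Record one evaluation of a conditional expression, then pass it through."""
    counts[cond_id][int(bool(value))] += 1
    return value

def balance(cond_id):
    """Fraction of evaluations on the less-exercised branch
    (0.0 = completely one-sided, 0.5 = perfectly balanced)."""
    false_count, true_count = counts[cond_id]
    total = false_count + true_count
    return min(false_count, true_count) / total if total else 0.0

# Instrumented code under test: a simple guard with one conditional.
def clamp(x, limit):
    if track("clamp.if", x > limit):
        return limit
    return x

# A skewed test suite: almost every input takes the false branch.
for x in [1, 2, 3, 4, 50]:
    clamp(x, 10)

print(balance("clamp.if"))  # 0.2 -- the true branch is under-exercised
```

A low balance score like this flags a spot where the test suite is concentrated on one branch, i.e., a candidate for the balancing and skewing schemes mentioned above.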
To learn more about conditional diversity, including a concrete example, go to diversity.
Determine whether you are testing your code with the same, similar, or different data. Testing with different data at the GUI level that results in the same or similar internal program data gives you false confidence that your testing is diversified.
Code coverage suffers from the problem that code may be covered with data that does not expose defects. In general, multiple sets of data can cover the same program, some that expose defects and others that do not, so a program can be completely control-covered yet tested only with data that exposes no defects. It is therefore highly desirable to know the internal data distribution involved in testing, and to continually measure, improve, and diversify it. Higher internal-program data diversity indicates high data variation among test cases, whereas lower data diversity indicates that the code is covered with the same or similar data over and over again.
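The coverage pitfall described above can be made concrete with a small, hypothetical example. The function, its bug, and the test values below are invented for illustration: the suite achieves 100% branch coverage, yet the covering data never triggers the defect.

```python
def discount(price, is_member):
    """Apply a member discount. Bug: the intended 10% discount is coded
    as a flat 10 units off, which drives cheap items below zero."""
    if is_member:
        return price - 10  # defect: should be price * 0.9
    return price

# This suite covers both branches...
assert discount(100, True) == 90    # true branch: result looks plausible
assert discount(100, False) == 100  # false branch: correct

# ...yet never exposes the defect. More diverse data would:
print(discount(5, True))  # -5 -- a negative price reveals the bug
```

Both suites cover the same code, but only the more diverse data reaches the defect-triggering region (cheap prices), which is exactly the gap that data diversity is meant to reveal.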
Control and data are tightly related with respect to test diversity. For example, branch selection in the code is governed by the values of program variables and, vice versa, the values of program variables are governed by branch selection. If two test suites produce different conditional diversities, then the internal data states involved in the test suites differ, which in turn means different data flows through the program. In effect, conditional diversity can be used to measure data diversity. Determine whether you are gaining false confidence in your testing by running the “same” or “similar” tests over and over again.
Data diversity is calculated as the average of the individual data diversities for each conditional statement. Each individual data diversity is calculated as the percentage of test cases for which a conditional expression has distinct conditional diversities. If two test cases have different conditional diversities, then they execute different paths in the code, which is possible only if different data flows through the program. Therefore, higher data diversity means that the internal program data involved in testing is more diverse.
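The calculation in the paragraph above might be sketched as follows. The exact per-test-case definition of conditional diversity is not given here, so the true-branch fraction used below, along with the conditional identifiers and observation counts, are illustrative assumptions.

```python
def conditional_diversity(true_count, false_count):
    """Per-test-case conditional diversity, taken here as the
    fraction of evaluations that took the true branch."""
    total = true_count + false_count
    return true_count / total if total else None

def data_diversity(per_case_counts):
    """per_case_counts maps cond_id -> list of (true, false) count pairs,
    one pair per test case. The individual data diversity of a conditional
    is the fraction of test cases with a distinct conditional-diversity
    value; the overall data diversity is the average over conditionals."""
    scores = []
    for pairs in per_case_counts.values():
        distinct = {conditional_diversity(t, f) for t, f in pairs}
        scores.append(len(distinct) / len(pairs))
    return sum(scores) / len(scores)

# Two conditionals observed across three test cases (hypothetical counts).
observations = {
    "c1": [(3, 1), (3, 1), (1, 3)],  # two distinct profiles out of three
    "c2": [(2, 2), (4, 0), (0, 4)],  # three distinct profiles
}
print(round(data_diversity(observations), 4))  # 0.8333
```

Here "c1" scores 2/3 because two of its three test cases exercise it identically, while "c2" scores 3/3; averaging gives the overall data diversity.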
Higher data diversity can be obtained by adding more test cases, by reordering existing test cases, or by replacing existing test cases with new ones using balancing and skewing.
To learn more about the theory and the practice behind conditional diversity, including a concrete example, go to diversity.