2nd International Competition on Software Testing (Test-Comp 2020), held at FASE 2020 in Dublin, Ireland.
Motivation
Tool competitions are a special form of comparative evaluation, in which each tool has a team of developers or supporters associated with it who make sure that the tool shows its best possible performance. Tool competitions have been a driving force for the development of mature tools that represent the state of the art in several research areas. This web site describes the competition on automatic software testing, which is held in 2020 as a satellite event of the conference FASE 2020 (part of ETAPS 2020).
There are several new and powerful tools for automatic software testing available, but they are difficult to compare: until now, no widely distributed benchmark suite of testing tasks has been available, and most concepts are validated only in research prototypes. This competition aims to change that by establishing a set of test tasks for comparing software testers; the participating tools are publicized on the Test-Comp web site.
Only a few projects aim at producing stable tools that can be used by people outside the respective development groups, and the development of such tools is not continuous. Also, PhD students and postdocs do not adequately benefit from tool development, because theoretical papers count more than papers that present technical and engineering contributions, such as tool papers. Through its visibility, this competition wants to change this: it shows off the latest implementations of the research results in our community and gives credit to researchers and students who spend considerable amounts of time developing test-generation algorithms and software packages.
Goals of the Competition
- Provide a snapshot of the state of the art in software testing to the community. This means to compare, independently of particular paper projects and specific techniques, different test-generation tools in terms of precision and performance.
- Increase the visibility and credit that tool developers receive. This means to provide a forum for the presentation of tools and the discussion of the latest technologies, and to give students the opportunity to publish about the development work that they have done.
- Establish a set of benchmarks for software testing in the community. This means to create and maintain a set of programs together with coverage criteria, and to make them publicly available so that researchers can use them in performance comparisons when evaluating new techniques.
Overview
One test run of a test generator gets as input (i) a program from the benchmark suite and (ii) a test specification (find bug, or coverage criterion), and returns as output a test suite (i.e., a set of test vectors). The test generator is contributed by the competition participant, and the test runs are executed centrally by the competition organizer. The test validator takes the test suite as input and validates it by executing the program on all test vectors: for bug finding, it checks whether the bug is exposed; for coverage, it reports the achieved coverage using gcov. The picture below may help to get an idea of the process.
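For intuition only, the following minimal sketch illustrates the validation step under simplifying assumptions: the program is compiled with gcov instrumentation, reads one input value per line from stdin, and signals the bug by a non-zero exit code. The file names, the test-vector representation, and the exit-code convention are illustrative; the competition itself uses its own exchangeable test-suite format and validation tooling.

import subprocess

def run_test_vector(binary, vector):
    # Feed the input values of one test vector to the program, one per line.
    stdin_data = "".join(str(v) + "\n" for v in vector)
    result = subprocess.run([binary], input=stdin_data, text=True,
                            capture_output=True, timeout=900)
    return result.returncode  # non-zero exit is assumed to signal the bug

def validate(binary, source_file, test_suite, goal="coverage"):
    # Replay every test vector, then report bug exposure or coverage.
    exit_codes = [run_test_vector(binary, v) for v in test_suite]
    if goal == "find bug":
        return {"bug_exposed": any(code != 0 for code in exit_codes)}
    # The instrumented runs above wrote .gcda files; ask gcov for a summary.
    report = subprocess.run(["gcov", "-b", source_file],
                            capture_output=True, text=True)
    return {"gcov_summary": report.stdout}

if __name__ == "__main__":
    # Hypothetical usage: ./a.out built with `gcc --coverage program.c`,
    # validated against a test suite of two test vectors.
    suite = [[0, 5], [42, -1]]
    print(validate("./a.out", "program.c", suite, goal="coverage"))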
Contact
For questions about the competition, this web page, the benchmarks, or the organization of the competition, please contact the competition chair: Dirk Beyer, LMU Munich, Germany.
For discussion of topics of broad interest that are related to the competition on software testing, please consider posting on our mailing list: test-comp@googlegroups.com.
Web archive: https://groups.google.com/forum/#!forum/test-comp
Sponsors
Test-Comp 2020 is sponsored by:
- Ludwig-Maximilians-Universität München (LMU Munich), Germany