Fifth Round of the SBST Java Unit Testing Tool Contest (with SBST 2017 @ ICSE)

Four successful JUnit tool competitions have left their mark on the Search-Based Software Testing International Workshops since 2013. We are grateful for the tremendous effort that goes into the tool competition every year. Big thanks to the workshop attendees, the paper authors, the organising and technical committees, and to Tanja E.J. Vos for chairing the contests and keeping them alive every year.

It has been decided that the competition will have a new Chair every year, supported by a Deputy Chair who becomes Chair the following year. We are now making this a reality: we are proud to announce the Fifth Round of the competition, chaired by Annibale Panichella with Urko Rueda as Deputy Chair, to be held at SBST in conjunction with ICSE 2017: http://sbst2017.lafhis.dc.uba.ar/

The competition setup will remain largely the same as in previous editions. Information about the 1st, 2nd, 3rd and 4th contests can be found below.

We will soon announce the information needed to participate in the upcoming 5th contest (2017). Meanwhile, you can check the lists of published papers:

1st contest, SBST2013 papers:

2nd contest, FITTEST2013 papers:

3rd contest, SBST2015 papers:

4th contest, SBST2016 papers:

Fourth Round of the SBST Java Unit Testing Tool Contest (with SBST 2016 @ ICSE)

After three successful competitions, we once again invite developers of tools for Java unit testing at the class level (both SBST and non-SBST) to participate in the 4th round of our tools competition!

Competition entries take the form of short papers (maximum of 4 pages) describing an evaluation of your tool against a benchmark supplied by the workshop organizers. In addition to comparing your tool to other popular and successful tools such as Randoop, we will manually create unit tests for the classes under test, so that benchmark scores for manual and automated test generation can be obtained and compared!

The competition paper should be uploaded to EasyChair (https://easychair.org/conferences/?conf=sbst2016).

The results of the tools competition will be presented at the SBST 2016 workshop.

Important dates

  • Deadline for installing a copy of your tool on the server, ready to be run and to collect results: 22nd of January
  • Tests will be run and results will be communicated after: 5th of February
  • Deadline for uploading the 4-page competition paper: 18th of February
  • Camera-ready deadline: 26th of February

The Contest

The contest is targeted at developers/vendors of testing tools that generate test input data for unit testing Java programs at the class level. Each tool will be applied to a set of Java classes taken from open-source projects and selected by the contest organizers. The participating tools will have a fixed time budget to generate tests for the Java classes. The score will be calculated based on:

  • statement and branch coverage ratios
  • fault detection and mutation scores
  • preparation, generation and execution times

Each participant should install a copy of their tool on the server where the contest will be run; to this end, each participant will have SSH access to the contest server. The benchmark infrastructure will run the tools and measure their outputs fully automatically; therefore, tools must be capable of running without human intervention.

Participate

If you are interested in participating, please send an email to Tanja Vos describing the following characteristics of your tool: 1) name, 2) testing techniques implemented (SBST or other), 3) compatible operating systems, 4) tool inputs and outputs, and 5) optionally any evaluative studies already published.

You will be sent credentials to log in to the sbst-contest-server.

Instructions

To allow automatic execution of the participating tools, they need to be configured and installed on the contest server:

  • Host: sbstcontest.dsic.upv.es
  • OS: Ubuntu 12.04 LTS

You should install and configure your testing tool in the home directory of your account. You may set it up in any way you want, with the following exceptions:

  • You must have an executable (or shell script) named $HOME/runtool that implements the protocol described below
  • Your tool must store intermediate data in $HOME/temp/data
  • Your tool must output the generated test cases in JUnit4 format in $HOME/temp/testcases

The Benchmark Automation Protocol

The runtool script/binary is the interface between the benchmark framework and the participating tools. Communication between runtool and the benchmark framework uses a very simple line-based protocol over the standard input and output channels. The following table describes the protocol; every step consists of a line of text received by the runtool program on STDIN or sent to STDOUT.

STEP | MESSAGE ON STDIN | MESSAGE ON STDOUT | DESCRIPTION
1 | BENCHMARK | | Signals the start of a benchmark run; directory $HOME/temp is cleared
2 | directory | | Directory with the source code of the SUT
3 | directory | | Directory with the compiled class files of the SUT
4 | number | | Number of entries in the class path (N)
5 | directory/jar file | | Class path entry (repeated N times)
6 | number | | Number of classes to be covered (M)
7 | | CLASSPATH | Signals that the testing tool requires additional class path entries
8 | | number | Number of additional class path entries (K)
9 | | directory/jar file | Additional class path entry (repeated K times)
10 | | READY | Signals that the testing tool is ready to receive challenges
11 | time budget | | Time budget per class, in seconds; scoring for each class under test takes place after this fixed amount of time, and any test artefact generated after it will be ignored
12 | class name | | The name of the class for which unit tests must be generated
13 | | READY | Signals that the testing tool is ready to receive more challenges; test cases in $HOME/temp/testcases are analyzed and the directory is then cleared; go to step 11 until M class names have been processed

To ease the implementation of a runtool program according to the protocol above, we provide a skeleton implementation in Java.
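
Purely for illustration (this is not the provided skeleton), a minimal sketch of what a conforming runtool might look like follows. The RunTool class name is ours, and generateTests is a placeholder for invoking your actual tool; per the table above, it assumes the time budget is sent before each class name:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of a runtool obeying the benchmark automation protocol.
public class RunTool {

    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));

        // Step 1: the framework announces the benchmark run.
        expect(in.readLine(), "BENCHMARK");

        String srcDir = in.readLine();                      // step 2: SUT sources
        String binDir = in.readLine();                      // step 3: compiled SUT classes
        int n = Integer.parseInt(in.readLine().trim());     // step 4: class path size N
        List<String> classPath = new ArrayList<String>();
        for (int i = 0; i < n; i++) {
            classPath.add(in.readLine());                   // step 5: class path entries
        }
        int m = Integer.parseInt(in.readLine().trim());     // step 6: number of classes M

        // Steps 7-9: declare extra class path entries the tool needs (none here).
        System.out.println("CLASSPATH");
        System.out.println(0);
        // Step 10: signal readiness for challenges.
        System.out.println("READY");
        System.out.flush();

        // Steps 11-13, repeated for each of the M classes.
        for (int i = 0; i < m; i++) {
            int timeBudget = Integer.parseInt(in.readLine().trim()); // step 11: seconds
            String className = in.readLine();                        // step 12
            generateTests(srcDir, binDir, classPath, className, timeBudget);
            System.out.println("READY");                             // step 13
            System.out.flush();
        }
    }

    private static void expect(String line, String token) {
        if (!token.equals(line)) {
            throw new IllegalStateException("Expected " + token + " but got " + line);
        }
    }

    private static void generateTests(String srcDir, String binDir,
            List<String> classPath, String className, int timeBudgetSeconds) {
        // Placeholder: run the actual tool here and write JUnit4 test classes
        // to $HOME/temp/testcases before the time budget expires; intermediate
        // data goes to $HOME/temp/data.
    }
}

This sketch declares no additional class path entries (K = 0); a tool that needs extra entries would print each one after the count in steps 8-9.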

Test the Protocol

To test whether your runtool implements the protocol correctly, we have installed a utility called sbstcontest on the machine. If you run it, it will output:

sbstcontest <toolname> <benchmark> <tooldir> <run_number> <timebudget>
Available benchmarks: [Closure-9, Math-9, Chart-5, Time-6, Lang-61]

The first line shows how the tool is used. <toolname> is a short string identifying your testing tool, <benchmark> is one of the installed benchmarks shown on the second line, <tooldir> is the directory where your runtool resides, <run_number> is a positive integer, and <timebudget> is the time budget per class in seconds. The benchmarks are collections of classes from different open-source projects. An example invocation with a time budget of 120 seconds would be:

sbstcontest MyToolName Math-9 . 1 120

If you have implemented the protocol correctly and generated all the test cases, the tool will create a transcript file in the runtool’s directory. This file shows several statistics, such as the achieved branch coverage and the mutation score.

You will be able to run the full set of benchmarks through the script:

sbstcontest_4th_auto.sh MyToolName <runs_number> <timebudget>

It will run each available benchmark <runs_number> times. Output is stored in the ./results folder. You may also want to obtain a single transcript.csv file covering all runs:

transcript_single.sh ./results MyToolName

Test Case Format

The tests generated by the participating tools must be stored as one or more Java files containing JUnit4 tests. In particular, you must:

  • declare classes public
  • add zero-argument public constructor
  • annotate test methods with @Test
  • declare test methods public

The generated test cases will be compiled against

  • JUnit 4.10
  • The benchmark SUT
  • Any dependencies of the benchmark SUT
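
Purely for illustration, a minimal test class satisfying these constraints might look like the following; Calculator and its add method are hypothetical stand-ins for a class under test:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// The class under test (Calculator) is a hypothetical stand-in for a benchmark class.
public class CalculatorTest {

    // Public zero-argument constructor; the implicit default constructor would also do.
    public CalculatorTest() {
    }

    // Test methods are public and carry the JUnit4 @Test annotation.
    @Test
    public void addsTwoPositiveNumbers() {
        assertEquals(4, new Calculator().add(2, 2));
    }
}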

Third Round of the SBST Java Unit Testing Tool Contest (with SBST 2015 @ ICSE in Florence)

After two successful competitions in 2012 and 2013, we once again invite developers of tools for Java unit testing at the class level (both SBST and non-SBST) to participate in the third round of our tools competition!

Competition entries take the form of short papers (maximum of 4 pages) describing an evaluation of your tool against a benchmark supplied by the workshop organizers. In addition to comparing your tool to other popular and successful tools such as Randoop, we will manually create unit tests for the classes under test, so that benchmark scores for manual and automated test generation can be obtained and compared!

The competition paper should be uploaded to EasyChair (https://easychair.org/conferences/?conf=sbst2015).

The results of the tools competition will be presented at the SBST 2015 workshop.

Important dates

  • Deadline for installing a copy of your tool on the server, ready to be run and to collect results: 6th of February
  • After the 6th of February, the tests will be run and the results will be communicated as soon as possible.
  • Deadline for uploading the 4-page competition paper: 18th of February

The Contest

The contest is targeted at developers/vendors of testing tools that generate test input data for unit testing Java programs at the class level. Each tool will be applied to a set of Java classes taken from open-source projects and selected by the contest organizers. The participating tools will be compared on:

  • execution time
  • achieved branch coverage
  • mutation score

Each participant should install a copy of their tool on the server where the contest will be run; to this end, each participant will have SSH access to the contest server. The benchmark infrastructure will run the tools and measure their outputs fully automatically; therefore, tools must be capable of running without human intervention.

Participate

If you are interested in participating, please send an email to Tanja Vos describing the following characteristics of your tool: 1) name, 2) testing techniques implemented (SBST or other), 3) compatible operating systems, 4) tool inputs and outputs, and 5) optionally any evaluative studies already published.

You will be sent credentials to log in to the sbst-contest-server.

Instructions

To allow automatic execution of the participating tools, they need to be configured and installed on the contest server:

  • Host: sbstcontest.dsic.upv.es 
  • OS: Ubuntu 12.04 LTS

You should install and configure your testing tool in the home directory of your account. You may set it up in any way you want, with the following exceptions:

  • You must have an executable (or shell script) named $HOME/runtool that implements the protocol described below
  • Your tool must store intermediate data in $HOME/temp/data
  • Your tool must output the generated test cases in JUnit4 format in $HOME/temp/testcases

The Benchmark Automation Protocol

The runtool script/binary is the interface between the benchmark framework and the participating tools. Communication between runtool and the benchmark framework uses a very simple line-based protocol over the standard input and output channels. The following table describes the protocol; every step consists of a line of text received by the runtool program on STDIN or sent to STDOUT.

STEP | MESSAGE ON STDIN | MESSAGE ON STDOUT | DESCRIPTION
1 | BENCHMARK | | Signals the start of a benchmark run; directory $HOME/temp is cleared
2 | directory | | Directory with the source code of the SUT
3 | directory | | Directory with the compiled class files of the SUT
4 | number | | Number of entries in the class path (N)
5 | directory/jar file | | Class path entry (repeated N times)
6 | number | | Number of classes to be covered (M)
7 | | CLASSPATH | Signals that the testing tool requires additional class path entries
8 | | number | Number of additional class path entries (K)
9 | | directory/jar file | Additional class path entry (repeated K times)
10 | | READY | Signals that the testing tool is ready to receive challenges
11 | class name | | The name of the class for which unit tests must be generated
12 | | READY | Signals that the testing tool is ready to receive more challenges; test cases in $HOME/temp/testcases are analyzed and the directory is then cleared; go to step 11 until M class names have been processed

To ease the implementation of a runtool program according to the protocol above, we provide a skeleton implementation in Java.
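
This protocol matches the fourth-round protocol sketched earlier, except that no time budget line precedes each class name. Under that assumption, the challenge loop of the earlier RunTool sketch would reduce to the following (with the time budget parameter dropped from the placeholder generateTests):

// Challenge loop for this round's protocol (steps 11-12): the framework
// sends only the class name; READY acknowledges each processed class.
for (int i = 0; i < m; i++) {
    String className = in.readLine();   // step 11: class under test
    generateTests(srcDir, binDir, classPath, className);
    System.out.println("READY");        // step 12: framework collects the test cases
    System.out.flush();
}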

Test the Protocol

To test whether your runtool implements the protocol correctly, we have installed a utility called sbstcontest on the machine. If you run it, it will output:

sbstcontest <benchmark> <tooldirectory> <debugoutput>
Available benchmarks: [mucommander, jedit, jabref, triangle, argouml]

The first line shows how the tool is used. <benchmark> is one of the installed benchmarks shown on the second line, <tooldirectory> is the directory where your runtool resides, and <debugoutput> is a boolean value that enables debug output. The benchmarks are collections of classes from different open-source projects. triangle lends itself to testing the basic functionality, as it is a very simple benchmark consisting of only two classes. An example invocation would be:

sbstcontest triangle . true

If you have implemented the protocol correctly and generated all the test cases, the tool will create a transcript file in the runtool’s directory. This file shows several statistics, such as the achieved branch coverage and the mutation score.

Test Case Format

The tests generated by the participating tools must be stored as one or more Java files containing JUnit4 tests. In particular, you must:

  • declare classes public
  • add zero-argument public constructor
  • annotate test methods with @Test
  • declare test methods public

The generated test cases will be compiled against

  • JUnit 4.10
  • The benchmark SUT
  • Any dependencies of the benchmark SUT

Second Round of the SBST Java Unit Testing Tool Contest (Deadline: September 20th 2013)

After our successful competition earlier this year, we invite developers of tools for Java unit testing at the class level (both SBST and non-SBST) to participate in the second round of our tools competition! Competition entries take the form of short papers (maximum of 4 pages) describing an evaluation of your tool against a benchmark supplied by the workshop organizers. In addition to comparing your tool to other popular and successful tools such as Randoop, we will manually create unit tests for the classes under test, so that benchmark scores for manual and automated test generation can be obtained and compared!

The results of the tools competition will be presented at the workshop. We additionally plan to co-ordinate a journal paper jointly authored by the tool developers that evaluates the results of the competition.

The Contest

The contest is targeted at developers/vendors of testing tools that generate test input data for unit testing Java programs at the class level. Each tool will be applied to a set of Java classes taken from open-source projects and selected by the contest organizers. The participating tools will be compared on:

  • execution time
  • achieved branch coverage
  • mutation score

Each participant should install a copy of their tool on the server where the contest will be run; to this end, each participant will have SSH access to the contest server. The benchmark infrastructure will run the tools and measure their outputs fully automatically; therefore, tools must be capable of running without human intervention.

Participate

To participate, please send an email to Tanja Vos describing the following characteristics of your tool: name, testing techniques implemented (SBST or other), compatible operating systems, tool inputs and outputs, and optionally any evaluative studies already published.

You will be sent credentials to log in to the sbst-contest-server.

Instructions

To allow automatic execution of the participating tools, they need to be configured and installed on the contest server:

  • Host: sbstcontest.dsic.upv.es 
  • OS: Ubuntu 12.04 LTS

You should install and configure your testing tool in the home directory of your account. You may set it up in any way you want, with the following exceptions:

  • You must have an executable (or shell script) named $HOME/runtool that implements the protocol described below
  • Your tool must store intermediate data in $HOME/temp/data
  • Your tool must output the generated test cases in JUnit4 format in $HOME/temp/testcases

The Benchmark Automation Protocol

The runtool script/binary is the interface between the benchmark framework and the participating tools. Communication between runtool and the benchmark framework uses a very simple line-based protocol over the standard input and output channels. The following table describes the protocol; every step consists of a line of text received by the runtool program on STDIN or sent to STDOUT.

STEP | MESSAGE ON STDIN | MESSAGE ON STDOUT | DESCRIPTION
1 | BENCHMARK | | Signals the start of a benchmark run; directory $HOME/temp is cleared
2 | directory | | Directory with the source code of the SUT
3 | directory | | Directory with the compiled class files of the SUT
4 | number | | Number of entries in the class path (N)
5 | directory/jar file | | Class path entry (repeated N times)
6 | number | | Number of classes to be covered (M)
7 | | CLASSPATH | Signals that the testing tool requires additional class path entries
8 | | number | Number of additional class path entries (K)
9 | | directory/jar file | Additional class path entry (repeated K times)
10 | | READY | Signals that the testing tool is ready to receive challenges
11 | class name | | The name of the class for which unit tests must be generated
12 | | READY | Signals that the testing tool is ready to receive more challenges; test cases in $HOME/temp/testcases are analyzed and the directory is then cleared; go to step 11 until M class names have been processed

To ease the implementation of a runtool program according to the protocol above, we provide a skeleton implementation in Java.

Test the Protocol

To test whether your runtool implements the protocol correctly, we have installed a utility called sbstcontest on the machine. If you run it, it will output:

sbstcontest <benchmark> <tooldirectory> <debugoutput>
Available benchmarks: [mucommander, jedit, jabref, triangle, argouml]

The first line shows how the tool is used. <benchmark> is one of the installed benchmarks shown on the second line, <tooldirectory> is the directory where your runtool resides, and <debugoutput> is a boolean value that enables debug output. The benchmarks are collections of classes from different open-source projects. triangle lends itself to testing the basic functionality, as it is a very simple benchmark consisting of only two classes. An example invocation would be:

sbstcontest triangle . true

If you have implemented the protocol correctly and generated all the test cases, the tool will create a transcript file in the runtool’s directory. This file shows several statistics, such as the achieved branch coverage and the mutation score.

Test Case Format

The tests generated by the participating tools must be stored as one or more Java files containing JUnit4 tests. In particular, you must:

  • declare classes public
  • add zero-argument public constructor
  • annotate test methods with @Test
  • declare test methods public

The generated test cases will be compiled against

  • JUnit 4.10
  • The benchmark SUT
  • Any dependencies of the benchmark SUT

Final Results

You can find your benchmark score in the “score.txt” file within your tool’s directory.

The final ranking is as follows:

  1. evosuite with 156.9459 points on average
  2. randoop with 101.8129 points on average
  3. t2 with 50.4938 points on average
  4. dsc with 43.3208 points on average

We do not have a ranking for the manual tests, since we do not know how much time the developers spent developing the manual test cases. The benchmark score can therefore only be given as a function of the generation time t_gen: score_manual = 246.5554 - 5 * t_gen

The zip file also contains the score tool, which aggregates the results of a single run as well as of multiple runs.

./score ../evosuite/run1/results.txt    # score for a single run
./score ../evosuite                     # final score averaged over all runs

Preliminary Results

We have the first results of the benchmark. Be aware, however, that these can still change; they are just meant to give a first impression of your results.

The zip file contains the test classes as well as the results from each run of a participant’s tool. Each run comprises the generated test cases, a log file of the run and the results.txt file with the metrics for each class.

The folder “scoretool” contains the tool “score”, which can be used to calculate the benchmark score for each run. Just pass it one of the results.txt files:

./score ../evosuite/run1/results.txt

SBST Java Unit Testing Tool Contest

We invite developers of tools for Java unit testing at the class level (both SBST and non-SBST) to participate in a tools competition! Competition entries take the form of short papers (maximum of 4 pages) describing an evaluation of your tool against a benchmark supplied by the workshop organisers.

The results of the tools competition will be presented at the workshop. We additionally plan to co-ordinate a journal paper jointly authored by the tool developers that evaluates the results of the competition.

The Contest

The contest is targeted at developers/vendors of testing tools that generate test input data for unit testing Java programs at the class level. Each tool will be applied to a set of Java classes taken from open-source projects and selected by the contest organizers. The participating tools will be compared on:

  • execution time
  • achieved branch coverage
  • test suite size
  • mutation score

Each participant should install a copy of their tool on the server where the contest will be run; to this end, each participant will have SSH access to the contest server. The benchmark infrastructure will run the tools and measure their outputs fully automatically; therefore, tools must be capable of running without human intervention.

Participate

To participate, please send an email to Tanja Vos describing the following characteristics of your tool: name, testing techniques implemented (SBST or other), compatible operating systems, tool inputs and outputs, and optionally any evaluative studies already published.

You will be sent credentials to log in to the sbst-contest-server.

Instructions

To allow automatic execution of the participating tools, they need to be configured and installed on the contest server:

  • Host: sbstcontest.dsic.upv.es 
  • OS: Ubuntu 12.04 LTS

You should install and configure your testing tool in the home directory of your account. You may set it up in any way you want, with the following exceptions:

  • You must have an executable (or shell script) named $HOME/runtool that implements the protocol described below
  • Your tool must store intermediate data in $HOME/temp/data
  • Your tool must output the generated test cases in JUnit4 format in $HOME/temp/testcases

The Benchmark Automation Protocol

The runtool script/binary is the interface between the benchmark framework and the participating tools. Communication between runtool and the benchmark framework uses a very simple line-based protocol over the standard input and output channels. The following table describes the protocol; every step consists of a line of text received by the runtool program on STDIN or sent to STDOUT.

STEP | MESSAGE ON STDIN | MESSAGE ON STDOUT | DESCRIPTION
1 | BENCHMARK | | Signals the start of a benchmark run; directory $HOME/temp is cleared
2 | directory | | Directory with the source code of the SUT
3 | directory | | Directory with the compiled class files of the SUT
4 | number | | Number of entries in the class path (N)
5 | directory/jar file | | Class path entry (repeated N times)
6 | number | | Number of classes to be covered (M)
7 | | CLASSPATH | Signals that the testing tool requires additional class path entries
8 | | number | Number of additional class path entries (K)
9 | | directory/jar file | Additional class path entry (repeated K times)
10 | | READY | Signals that the testing tool is ready to receive challenges
11 | class name | | The name of the class for which unit tests must be generated
12 | | READY | Signals that the testing tool is ready to receive more challenges; test cases in $HOME/temp/testcases are analyzed and the directory is then cleared; go to step 11 until M class names have been processed

To ease the implementation of a runtool program according to the protocol above, we provide a skeleton implementation in Java.

Test the Protocol

To test whether your runtool implements the protocol correctly, we have installed a utility called sbstcontest on the machine. If you run it, it will output:

sbstcontest <benchmark> <tooldirectory> <debugoutput>
Available benchmarks: [mucommander, jedit, jabref, triangle, argouml]

The first line shows how the tool is used. <benchmark> is one of the installed benchmarks shown on the second line, <tooldirectory> is the directory where your runtool resides, and <debugoutput> is a boolean value that enables debug output. The benchmarks are collections of classes from different open-source projects. triangle lends itself to testing the basic functionality, as it is a very simple benchmark consisting of only two classes. An example invocation would be:

sbstcontest triangle . true

If you have implemented the protocol correctly and generated all the test cases, the tool will create a transcript file in the runtool’s directory. This file shows several statistics, such as the achieved branch coverage and the mutation score.

Test Case Format

The tests generated by the participating tools must be stored as one or more Java files containing JUnit4 tests. In particular, you must:

  • declare classes public
  • add zero-argument public constructor
  • annotate test methods with @Test
  • declare test methods public

The generated test cases will be compiled against

  • JUnit 4.10
  • The benchmark SUT
  • Any dependencies of the benchmark SUT