Testing NetBSD: Easy Does It
In a software project as large as NetBSD, the interactions between different software components are not always immediately obvious even to the most skilled programmers. Tests help ensure that the system functions according to the desired criteria. Periodic automated runs of these tests, with the results visible on the web, ensure both that the tests are run regularly and that the results are available to all interested parties.
This short article explains the NetBSD test strategies and provides a brief overview of the enabling technologies. It also details how effortless it is to run the test suite and why doing so is in every developer's, patch submitter's and system administrator's best interest. The intended audience is people with a keen interest in testing and quality assurance, and a desire to reduce personal headache. The article is written against NetBSD-current as of June 2010 and applies to what will eventually become NetBSD 6.
Automated Testing Framework (ATF)
Julio Merino's Automated Testing Framework (ATF) unifies the interface for running tests, enables customizable test report formats and provides a standard interface for implementing tests. ATF also provides a mechanism for tests to determine if the feature under test, such as hardware, is present in the system and skip the test instead of incorrectly failing it. The goal is to make the tests run conveniently in batch mode without human supervision -- hence the name automated. ATF is shipped with NetBSD and all new NetBSD tests should be written against ATF.
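To give a flavor of what implementing a test against the standard interface looks like, here is a minimal atf-sh sketch; the file, the test case names and the audio device check are made up for illustration, and the ATF manual pages remain the authoritative reference:

#! /usr/bin/atf-sh
# Hypothetical example test program: one plain test case and one that
# skips itself when the hardware it would exercise is not present.

atf_test_case echo_hello
echo_hello_head() {
	atf_set "descr" "Checks that echo produces the expected output"
}
echo_hello_body() {
	atf_check -o inline:"hello\n" echo hello
}

atf_test_case audio_device
audio_device_head() {
	atf_set "descr" "Sketch of skipping when hardware is absent"
}
audio_device_body() {
	# Skip instead of incorrectly failing when there is no audio device.
	[ -c /dev/audio ] || atf_skip "no audio device configured"
	# ... exercise the audio device here ...
}

atf_init_test_cases() {
	atf_add_test_case echo_hello
	atf_add_test_case audio_device
}

Each test case declares a head with metadata such as a description and a body with the actual checks; atf-run takes care of executing the cases and collecting the results.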
ATF tests are found under /usr/tests in a standard NetBSD installation. As pointed out on the ATF website, this is done to enable a system administrator to run the NetBSD test suite for the specific hardware/software setup with minimal effort. Executing the tests should be viewed as insurance for a particular installation, and reporting any test failures immediately may save a lot of head scratching down the road.

The tests can be run with the atf-run command in the appropriate subdirectory for a partial set of tests, or at the top level of /usr/tests for the entire NetBSD test suite. Since the output of atf-run is meant to be post-processed by other tools, the idiomatic command for creating a human-readable report includes a pipe to the report generator:
atf-run | atf-report

This gives a verdict for all the tests. Also, a summary like the following one is presented:
Summary for 25 test programs:
    83 passed test cases.
    0 failed test cases.
    0 expected failures.
    2 skipped test cases.

Further documentation for running the tests and controlling the report format is available from the ATF manual pages, specifically atf-run and atf-report.
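As a concrete example of a partial run, the file system tests alone could be run like this, assuming they are installed under /usr/tests/fs as in a standard installation:

cd /usr/tests/fs
atf-run | atf-report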
Automated NetBSD Installation and Test Application (anita)
The Automated NetBSD Installation and Test Application (anita) is a tool written by Andreas Gustafsson. When anita is run, it is given a URL to release set binaries as an argument. Anita downloads the release sets, creates a disk image, boots the downloaded release in a virtual machine and installs it. For example, the following command will download and install NetBSD/i386 5.0.2, and boot the resulting installation to a command prompt:
anita interact ftp://ftp.NetBSD.org/pub/NetBSD/NetBSD-5.0.2/i386/
Currently, anita supports only QEMU and the i386 port, although there has been interest in adding support for other virtualization technologies and other NetBSD ports. Since installation is done in a virtual machine, the environment is theoretically the same regardless of the host the command is run on. This is both a blessing and a curse: different anita runs are comparable regardless of where they are executed, but features specific to certain machine configurations are not exercised. Nevertheless, if an anita install is successful, there is reasonably high confidence that the release it was executed for works.
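Anita also has other modes besides interact; for instance, an install-only run which performs the installation and then exits would presumably look like the following (consult the anita documentation for the full list of modes and options):

anita install ftp://ftp.NetBSD.org/pub/NetBSD/NetBSD-5.0.2/i386/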
What makes anita especially effective for testing installation is that it "screen scrapes" the sysinst installer: the display output of sysinst is read and interpreted by anita, and commands are given in response to that output. This tests that the installation works the same way it would for a human performing it, and sets anita apart from testers that use machine-oriented scripts to perform test installations.
Test Reports On The Web
In addition to anita, Andreas has written a set of tools which fetch the current sources from CVS, build a release, and use anita to install the release and run the ATF tests. The results are currently available on his website. The source revisions committed between each build/install/test run are available behind the "Details" links on the page. Furthermore, if the system build breaks, the tools make an effort to hunt down the exact commit that broke it before publishing the result.
If, despite testing efforts, a regression does slip through, the logs from the runs make it easy to track down which commit introduced it, even days later -- although hopefully it will not take that long to correct things. Once enough logs have accumulated, they also provide material for figuring out what breaks often, why, and for how long. This information can be used to prevent similar problems from occurring in the future.
Running The Test Suite With Anita
The results mentioned in the previous section are used as a reference point for determining the current health of NetBSD. Developers and users submitting patches are encouraged to repeat the anita test run to make sure their changes do not have unwanted side effects. Additionally, developers are committed to not causing long-term regressions in the anita test runs -- a clean test report for a submitted patch may further convince them that the patch was sufficiently tested and should be included in NetBSD. Nevertheless, common sense applies as to when this is necessary.
Although writing tests is not covered in this article, it is also highly recommended that new features be submitted with the relevant tests in the same package. This is to the advantage of the submitter since, as mentioned above, developers are required to make sure future changes do not cause existing tests to fail.
Anita can be found in pkgsrc under misc/py-anita. As of this writing, pkgsrc-current (and what will become pkgsrc-2010Q2) is required. Also, QEMU version 0.12nb3 or later is required due to a bugfix in the CPU emulation -- test runs will hang indefinitely without this bugfix.
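For reference, building and installing both anita and QEMU from source via pkgsrc might go along these lines, assuming a pkgsrc tree checked out under /usr/pkgsrc:

cd /usr/pkgsrc/emulators/qemu && make install clean
cd /usr/pkgsrc/misc/py-anita && make install clean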
A full anita install/boot/test cycle including the ATF test report is accomplished by using the test option. For example, when build.sh is used to build release sets into /objs/obj.i386/releasedir/i386, the following command would be used to run an install/boot/test cycle:
anita test /objs/obj.i386/releasedir/i386/
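For completeness, release sets matching that path could have been produced with build.sh roughly as follows; the flags and paths are illustrative and should be adjusted to the local setup:

cd /usr/src
./build.sh -U -m i386 -O /objs/obj.i386 -R /objs/obj.i386/releasedir release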
If the development host runs NetBSD and is sufficiently up-to-date, it is possible to simply run the ATF tests there. However, for reasons already mentioned, the results might or might not reflect the anita run. The recommended "no brains necessary" method of making sure that a change does not cause a regression in the anita run is to do an anita run. Since it requires neither additional hardware nor disrupting current work with a reboot, there is little excuse for not doing so. On the flip side, the anita run does not test the system configuration of the development host, so ultimately the best choice is to run the tests in both environments. This arguably doubles the number of necessary command lines from one to two, but still leaves little excuse for not executing both.
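Concretely, reusing the paths from the earlier example, the two command lines in question could be:

cd /usr/tests && atf-run | atf-report
anita test /objs/obj.i386/releasedir/i386/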
Conclusions
This short article presented NetBSD testing options and gave a brief introduction to the tools that make it happen. NetBSD testing is done periodically, with results available from a web page, but manual execution, either piecemeal or wholesale, is possible and highly recommended. Ultimately, tests help ensure the quality of NetBSD, and it is in everyone's personal interest to run tests on their local machines and to include test cases along with code submissions.
Acknowledgments
Thanks to Andreas and Julio for their work on these awesome tools and for comments on a draft of this article.