If the special variable TESTS is defined, its value is taken to be a list of programs or scripts to run in order to do the testing. Under the appropriate circumstances, it’s possible for TESTS to also list data files to be passed to one or more test scripts defined by different means (the so-called “log compilers”, see Parallel Test Harness).
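As a minimal sketch (the script names are hypothetical), a Makefile.am could declare two shell scripts as tests:

# Hypothetical test scripts shipped in the source tree.
TESTS = foo.sh bar.sh
# Static scripts are not built, so they must be distributed explicitly.
EXTRA_DIST = $(TESTS)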
Test scripts can be executed serially or concurrently. Automake supports both kinds of test execution, with the parallel test harness being the default. The concurrent test harness relies on the concurrency capabilities (if any) offered by the underlying make implementation, and can thus only be as good as those are.
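Projects that prefer the older serial harness can select it explicitly; as a sketch, in Makefile.am:

# Opt out of the default parallel harness (the option can also be
# passed to AM_INIT_AUTOMAKE in configure.ac).
AUTOMAKE_OPTIONS = serial-tests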
By default, only the exit statuses of the test scripts are considered when determining the testsuite outcome. But Automake also allows the use of more complex test protocols, either standard (see Using the TAP test protocol) or custom (see Custom Test Drivers). You can’t enable such protocols when the serial harness is used, though. In the rest of this section we concentrate mostly on protocol-less tests, since test protocols are covered in a later section (again, see Custom Test Drivers).
When no test protocol is in use, an exit status of 0 from a test script will denote a success, an exit status of 77 a skipped test, an exit status of 99 a hard error, and any other exit status will denote a failure.
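As an illustrative sketch (the program name and its option are made up), a protocol-less test script might use these exit codes like this:

#! /bin/sh
# Hypothetical protocol-less test; 'frob' and '--self-test' are
# invented names used only for illustration.
if ! command -v frob >/dev/null 2>&1; then
  echo "frob is not installed"
  exit 77                    # report the test as skipped
fi
frob --self-test || exit 1   # any other nonzero status is a failure
exit 0                       # success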
You may define the variable XFAIL_TESTS to a list of tests (usually a subset of TESTS) that are expected to fail; this will effectively reverse the result of those tests (with the provision that skips and hard errors remain untouched). You may also instruct the testsuite harness to treat hard errors like simple failures, by defining the DISABLE_HARD_ERRORS make variable to a nonempty value. Note however that, for tests based on more complex test protocols, the exact effects of XFAIL_TESTS and DISABLE_HARD_ERRORS might change, or they might even have no effect at all (for example, in tests using TAP, there is no way to disable hard errors, and the DISABLE_HARD_ERRORS variable has no effect on them).
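For example, a Makefile.am might mark one known-bad test as an expected failure and downgrade hard errors (a sketch with hypothetical test names):

TESTS = foo.sh bar.sh baz.sh
# baz.sh currently fails; report its failure as XFAIL instead of FAIL.
XFAIL_TESTS = baz.sh
# Treat exit status 99 as an ordinary failure rather than a hard error.
DISABLE_HARD_ERRORS = yes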
The result of each test case run by the scripts in TESTS will be printed on standard output, along with the test name. For test protocols that allow more test cases per test script (such as TAP), a number, identifier and/or brief description specific to the single test case is expected to be printed in addition to the name of the test script. The possible results (whose meanings should be clear from the previous section, Generalities about Testing) are PASS, FAIL, SKIP, XFAIL, XPASS and ERROR. Here is an example of output from a hypothetical testsuite that uses both plain and TAP tests:
PASS: foo.sh
PASS: zardoz.tap 1 - Daemon started
PASS: zardoz.tap 2 - Daemon responding
SKIP: zardoz.tap 3 - Daemon uses /proc # SKIP /proc is not mounted
PASS: zardoz.tap 4 - Daemon stopped
SKIP: bar.sh
PASS: mu.tap 1
XFAIL: mu.tap 2 # TODO frobnication not yet implemented
A testsuite summary (expected to report at least the number of run, skipped and failed tests) will be printed at the end of the testsuite run. By default, the first line of the summary has the form:
Testsuite summary for package-string
where package-string is the name and version of the package. If you have several independent test suites for different parts of the package, though, it can be misleading for each suite to imply it is for the whole package. Or, in complex projects, you may wish to add the current directory or other information to the testsuite header line. So you can override the ‘ for package-string’ suffix on that line by setting the AM_TESTSUITE_SUMMARY_HEADER variable. The value of this variable is used unquoted in a shell echo command, so you must include any necessary quotes. For example, the default value is

AM_TESTSUITE_SUMMARY_HEADER = ' for $(PACKAGE_STRING)'

including the quotes (interpreted by the shell) and the leading space (since the value is output directly after the ‘Testsuite summary’). The $(PACKAGE_STRING) is substituted by make.
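For instance, a project with separate test suites for different components might override it like this (a sketch; the wording in parentheses is made up):

AM_TESTSUITE_SUMMARY_HEADER = ' for $(PACKAGE_STRING) (parser tests)'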
If the standard output is connected to a capable terminal, then the test results and the summary are colored appropriately. The developer and the user can disable colored output by setting the make variable ‘AM_COLOR_TESTS=no’; the user can in addition force colored output even without a connecting terminal with ‘AM_COLOR_TESTS=always’. It’s also worth noting that some make implementations, when used in parallel mode, have slightly different semantics (see Parallel make in The Autoconf Manual), which can break the automatic detection of a connection to a capable terminal. If this is the case, the user will have to resort to the use of ‘AM_COLOR_TESTS=always’ in order to have the testsuite output colorized.
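For instance, the variable can be passed on the make command line:

make check AM_COLOR_TESTS=always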
Test programs that need data files should look for them in srcdir (which is both a make variable and an environment variable made available to the tests), so that they work when building in a separate directory (see Build Directories in The Autoconf Manual), and in particular for the distcheck rule (see Checking the Distribution).
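As a sketch (the program and data file names are hypothetical), a test script would locate its input through srcdir instead of assuming the data sits in the build directory:

#! /bin/sh
# The harness exports srcdir to the tests; default to '.' so the script
# can also be run by hand from the source directory.
: "${srcdir:=.}"
# input.txt and expected.txt are hypothetical data files kept in srcdir.
./frobnicate < "$srcdir/input.txt" > output.txt || exit 99
cmp output.txt "$srcdir/expected.txt"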
Automake ensures that each file listed in TESTS is built before it is run; you can list both source and derived programs (or scripts) in TESTS; the generated rule will look both in srcdir and ‘..’. For instance, you might want to run a C program as a test. To do this you would list its name in TESTS and also in check_PROGRAMS, and then specify it as you would any other program.
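As a sketch (program and file names are hypothetical), mixing a script and a compiled C test might look like this:

TESTS = sanity.sh test-parser
check_PROGRAMS = test-parser
test_parser_SOURCES = test-parser.c
EXTRA_DIST = sanity.sh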
Programs listed in check_PROGRAMS (and check_LIBRARIES, check_LTLIBRARIES, ...) are only built during make check, not during make all. You should list there any program needed by your tests that does not need to be built by make all. The programs in check_PROGRAMS are not automatically added to TESTS because check_PROGRAMS usually lists programs used by the tests, not the tests themselves. If all your programs are in fact test cases, you can set TESTS = $(check_PROGRAMS).
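A sketch of that distinction (the names are hypothetical): a helper program built for the tests but not run as one stays out of TESTS.

# gen-fixtures is only a helper used by the test scripts, so it is
# built for 'make check' but not run as a test itself.
check_PROGRAMS = gen-fixtures test-core
TESTS = test-core
# If every check program were a test case, one could write instead:
#   TESTS = $(check_PROGRAMS)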