This tutorial describes how to write a test suite using the TET C language API binding. It is derived from Chapter 8 of the TETware Programmers Guide. The source code for the test suite can be found in Appendix A.
Table of Contents
1.1 Introduction
1.2 Defining a Test Suite
1.3 Defining Common Test Case Functions and Variables
1.4 Initializing Test Cases
1.5 Controlling and Recording Test Case Execution Results
1.5.1 Child Processes and Subprograms
1.6 Cleaning Up Test Cases
1.1 Introduction
This tutorial illustrates how a test suite can be structured under TET, and how individual test cases and their test purposes relate to each other and to the API. The test suite has been deliberately kept simple, yet realistic. For example, one test purpose checks the error code returned by a failed system call against its expected value, while another test purpose in the same test case checks that the call succeeds.
Small segments of code from the test suite appear in the following sections to help illustrate specific points. Refer to the appropriate section in Appendix A to see the code in its entirety.
1.2 Defining a Test Suite

Test suites reside in subdirectories of $TET_ROOT (or, alternatively, $TET_SUITE_ROOT, which is an Extended TET feature). The subdirectory and the test suite share the same name. The following listing shows the component files of the sample test suite, called C-API; the configuration file names shown follow the standard TET conventions:
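C-API
|-- bin             install script and cleantool
|-- tetbuild.cfg    build-mode configuration
|-- tetexec.cfg     execute-mode configuration
|-- tetclean.cfg    clean-mode configuration
|-- tet_scen        control file
|-- tet_code        result codes file
|-- ts              test case sources: chmod, fileno, stat, uname
`-- results         journal files from test runs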
The make-up of this test suite is similar to that of the demo test suite that comes with TET: an install script and cleantool in the bin directory; configuration files for test build, execution, and cleanup; a control file, tet_scen; a result codes file, tet_code; several test cases in a directory structure under the ts directory; and a results directory.
The control file, tet_scen, lists the components of the test suite, and its contents determine the scenarios that can be used in running the test suite. The following figure shows the contents of the control file, tet_scen, for the C-API test suite.
# chmod, fileno, stat, uname test suite.
all
"Starting Full Test Suite"
/ts/chmod/chmod-tc
/ts/fileno/fileno-tc
/ts/stat/stat-tc
/ts/uname/uname-tc
"Completed Full Test Suite"
chmod
"Starting chmod Test Case"
/ts/chmod/chmod-tc
"Finished chmod Test Case"
fileno
"Starting fileno Test Case"
/ts/fileno/fileno-tc
"Finished fileno Test Case"
stat
"Starting stat Test Case"
/ts/stat/stat-tc
"Finished stat Test Case"
uname
"Starting uname Test Case"
/ts/uname/uname-tc
"Finished uname Test Case"
# EOF
Figure 8. The C-API Control File
The control file lists five scenarios for the test suite: all (which is required), chmod, fileno, stat, and uname. The test suite is composed of four test cases, one each for the chmod, fileno, stat, and uname system calls, and the control file has been written so that each test case can be run as a separate scenario, or the whole test suite can be run at once with the all scenario.
The lines enclosed in double quotation marks are optional information lines that are passed into the journal file. The lines that begin with a slash character (/) name the executable test cases associated with each scenario. Note that even though these lines begin with a slash, the paths are interpreted relative to the test suite root directory; in this instance, the test cases live in a subdirectory named ts.
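With this control file in place, a single scenario can be selected when the suite is run under tcc. For example, assuming a standard TETware installation with TET_ROOT set, the chmod test case alone could be built, executed, and cleaned with:

tcc -bec C-API chmod

When no scenario name is given, tcc runs the all scenario by default, which is why the control file must always define it.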
The cleantool is used to remove unwanted files after the build of each test case. It is invoked in the source directory of the test case. In this case it is set to exec make clean to remove unwanted object files as defined in each makefile.
1.3 Defining Common Test Case Functions and Variables

Most test suites contain a good deal of repeated code, so making an effort to group common functions and variables together can greatly simplify the writing and debugging of a test suite. Because the C-API test suite is very small, no common functions or variables were created beyond the standard ones in tet_api.h.

One additional result code was invented, however, which would normally be defined in a test-suite-specific header file. Because it is used within only one test case in this very small suite, it is instead defined in uname-tc.c as follows:
#undef TET_INSPECT /* must undefine because TET_ is reserved prefix */
#define TET_INSPECT 33 /* this would normally be in a test suite header */
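For the journal to report this numeric result as INSPECT, the code must also appear in the suite's result codes file, tet_code, mentioned earlier. A plausible entry, following the standard format of a numeric code, a quoted name, and an action keyword, would be:

33  "INSPECT"  Continue

The standard codes, such as 0 ("PASS") and 1 ("FAIL"), are built into TET and need not be repeated there.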
1.4 Initializing Test Cases

Every test case requires some minimum initialization of functions and variables. The fileno-tc test case provides a good illustration of how this initialization can be handled.
/* fileno-tc.c : test case for fileno() interface */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <fcntl.h>
#include <tet_api.h>
extern char **environ;
static void cleanup();
static void tp1(), tp2(), tp3(), tp4(), ch4();
/* Initialize TCM data structures */
void (*tet_startup)() = NULL;
void (*tet_cleanup)() = cleanup;
struct tet_testlist tet_testlist[] = {
{ tp1, 1 },
{ tp2, 2 },
{ tp3, 3 },
{ tp4, 4 },
{ NULL, 0 }
};
/* Test Case Wide Declarations */
static char msg[256]; /* buffer for info lines */
After the #include statements, several functions are declared. TET provides the option of naming both a startup and a cleanup function: the named startup function is called before the first test purpose is executed, and the cleanup function is called after all test purposes have been executed. In this test case only a cleanup function is named; cleanup() removes files created during the course of the test case.
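The body of cleanup() is not reproduced at this point; a plausible sketch (fileno.4 is the file created by the child function ch4(), shown later, and the real code is in Appendix A):

static void
cleanup()
{
	/* remove the file created during the test purposes */
	(void) unlink("fileno.4");
}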
The stat-tc test case includes a more substantial cleanup function, as well as a startup function. It requires that a file be created before the first test purpose, so this is handled by the startup function; this same file, as well as another file and a directory created during the tests, is then removed in the cleanup function. See Appendix A for a complete code listing of the stat-tc test case.
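A startup function is wired in exactly like a cleanup function, by assigning it to tet_startup. The following is a sketch of the arrangement stat-tc plausibly uses; the creation mode chosen here is an assumption, and the real code is in Appendix A:

static void startup(), cleanup();

void (*tet_startup)() = startup;	/* called before the first test purpose */
void (*tet_cleanup)() = cleanup;	/* called after the last test purpose */

static void
startup()
{
	int fd;

	/* create the file required by the first test purpose */
	if ((fd = creat(tfile, S_IRWXU)) != -1)
		(void) close(fd);
}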
The fileno-tc test case includes four test purposes, contained in the functions tp1, tp2, tp3, and tp4. First the functions are declared (including ch4, an extra function that runs in a child process created by tp4), as shown above. Then they are listed in the tet_testlist array along with the invocable component (IC) to which each belongs. In this case each test purpose can be executed individually, so each is assigned to a separate invocable component; if, say, tp2 depended on prior execution of tp1, the two would be assigned the same IC number, as sketched below. After the array is set up, any test-case-wide declarations are made; these commonly include a buffer used to construct information lines for output with tet_infoline().
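For example, a hypothetical testlist in which tp2 always runs together with tp1 would look like this:

struct tet_testlist tet_testlist[] = {
	{ tp1, 1 },
	{ tp2, 1 },	/* same IC as tp1: the two cannot be selected separately */
	{ tp3, 2 },
	{ tp4, 3 },
	{ NULL, 0 }
};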
1.5 Controlling and Recording Test Case Execution Results

Identifying and executing highly specific tests is central to any test case. Each test purpose typically targets one specific test that is loosely or strongly related to the other test purposes in the same test case. The central job of a test purpose is to relay information about the execution of its test for the tester to examine later. This information can take the form of messages describing the test being executed, fatal or non-fatal errors that were encountered, and a specific test execution result, such as pass or fail.
The chmod-tc test case contains three test purposes:
tp1: successful chmod of a file, expecting a return code of 0.
tp2: failed chmod of a non-existent file, expecting a return code of -1 and errno set to ENOENT.
tp3: failed chmod of a path that contains a non-directory component, expecting a return code of -1 and errno set to ENOTDIR.
Functions tp1 and tp2 are shown here and are described below.
static void
tp1() /* successful chmod of file: return 0 */
{
int ret, err;
mode_t mode;
tet_infoline("SUCCESSFUL CHMOD OF FILE");
/* change mode of file created in startup function */
errno = 0;
if ((ret=chmod(tfile, (mode_t)0)) != 0)
{
err = errno;
(void) sprintf(msg, "chmod(\"%s\", 0) returned %d, expected 0",
tfile, ret);
tet_infoline(msg);
if (err != 0)
{
(void) sprintf(msg, "errno was set to %d", err);
tet_infoline(msg);
}
tet_result(TET_FAIL);
return;
}
/* check mode was changed correctly */
if (stat(tfile, &buf) == -1)
{
(void) sprintf(msg,
"stat(\"%s\", buf) failed - errno %d", tfile, errno);
tet_infoline(msg);
tet_result(TET_UNRESOLVED);
return;
}
mode = buf.st_mode & (S_IRWXU|S_IRWXG|S_IRWXO); /* isolate permission bits */
if (mode != 0)
{
(void) sprintf(msg, "chmod(\"%s\", 0) set mode to 0%lo, expected 0",
tfile, (long)mode);
tet_infoline(msg);
tet_result(TET_FAIL);
}
else
tet_result(TET_PASS);
}
static void
tp2() /* chmod of non-existent file: return -1, errno ENOENT */
{
int ret, err;
tet_infoline("CHMOD OF NON-EXISTENT FILE");
/* ensure file does not exist */
if (stat("chmod.2", &buf) != -1 && unlink("chmod.2") == -1)
{
tet_infoline("could not unlink chmod.2");
tet_result(TET_UNRESOLVED);
return;
}
/* check return value and errno set by call */
errno = 0;
ret = chmod("chmod.2", (mode_t)0);
if (ret != -1 || errno != ENOENT)
{
err = errno;
if (ret != -1)
{
(void) sprintf(msg,
"chmod(\"chmod.2\", 0) returned %d, expected -1", ret);
tet_infoline(msg);
}
if (err != ENOENT)
{
(void) sprintf(msg,
"chmod(\"chmod.2\", 0) set errno to %d, expected %d (ENOENT)",
err, ENOENT);
tet_infoline(msg);
}
tet_result(TET_FAIL);
}
else
tet_result(TET_PASS);
}
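The third test purpose, tp3, is not shown above; it follows the same pattern as tp2. A plausible sketch, in which the path chmod.p/file and the assumption that chmod.p is a plain file created beforehand are illustrative only (the real code is in Appendix A):

static void
tp3() /* chmod of path with non-directory component: return -1,
	 errno ENOTDIR */
{
	int ret, err;

	tet_infoline("CHMOD OF PATH WITH NON-DIRECTORY COMPONENT");

	/* "chmod.p" is assumed to be a plain file, so the path
	   "chmod.p/file" contains a non-directory component */
	errno = 0;
	ret = chmod("chmod.p/file", (mode_t)0);
	err = errno;
	if (ret != -1 || err != ENOTDIR)
	{
		if (ret != -1)
		{
			(void) sprintf(msg,
			    "chmod(\"chmod.p/file\", 0) returned %d, expected -1",
			    ret);
			tet_infoline(msg);
		}
		if (err != ENOTDIR)
		{
			(void) sprintf(msg,
			    "chmod(\"chmod.p/file\", 0) set errno to %d, expected %d (ENOTDIR)",
			    err, ENOTDIR);
			tet_infoline(msg);
		}
		tet_result(TET_FAIL);
	}
	else
		tet_result(TET_PASS);
}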
The comments in the code should clarify what is happening on each line. It is worth noting, however, that a lot of useful diagnostics have been written right into the tests: if any system call fails, whether it is the one specifically under test or one the test merely relies on, that failure is reported. The tests also all begin the same way, with a message stating the test's purpose, and end the same way, with a pass/fail result being reported.
This sort of consistency yields two important benefits:
Test purposes are easier to write when they follow a template of some sort (a skeleton is sketched after this list).
Test purposes are easier to debug and evaluate when diagnostic information is built in from the very start.
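A hypothetical skeleton capturing the pattern these tests follow (all names and messages here are placeholders):

static void
tpN()	/* one-line statement of what is being verified */
{
	int ret;

	/* 1: announce the test purpose */
	tet_infoline("ONE-LINE DESCRIPTION OF THE TEST");

	/* 2: establish any preconditions; report TET_UNRESOLVED and
	   return if they cannot be met */

	/* 3: make the call under test, capturing errno immediately */
	errno = 0;
	ret = call_under_test();	/* placeholder for the real call */

	/* 4: report diagnostics with tet_infoline() for anything
	   unexpected, then register exactly one result */
	if (ret != 0)
		tet_result(TET_FAIL);
	else
		tet_result(TET_PASS);
}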
Some test cases may require user verification of the information they generate. An example of this can be found in the uname-tc test case, where system-specific information is reported.
static void
tp1() /* successful uname: return 0 */
{
int ret, err;
struct utsname name;
tet_infoline("UNAME OUTPUT FOR MANUAL CHECK");
/* The test cannot determine automatically whether the information
returned by uname() is correct. It therefore outputs the
information with an INSPECT result code for checking manually. */
errno = 0;
if ((ret=uname(&name)) != 0)
{
err = errno;
(void) sprintf(msg, "uname() returned %d, expected 0", ret);
tet_infoline(msg);
if (err != 0)
{
(void) sprintf(msg, "errno was set to %d", err);
tet_infoline(msg);
}
tet_result(TET_FAIL);
}
else
{
(void) sprintf(msg, "System Name: \"%s\"", name.sysname);
tet_infoline(msg);
(void) sprintf(msg, "Node Name: \"%s\"", name.nodename);
tet_infoline(msg);
(void) sprintf(msg, "Release: \"%s\"", name.release);
tet_infoline(msg);
(void) sprintf(msg, "Version: \"%s\"", name.version);
tet_infoline(msg);
(void) sprintf(msg, "Machine Type: \"%s\"", name.machine);
tet_infoline(msg);
tet_result(TET_INSPECT);
}
}
Since the information from uname() differs on every machine, the output needs to be reported and then verified by hand. Here the information is simply printed for the tester to see and check; no attempt is made to interact with the tester, obtain verification of the information, and use that verification to set a pass/fail result. Instead, a result code of INSPECT is used.
1.5.1 Child Processes and Subprograms

Some test purposes require the creation of a child process or execution of a subprogram. The Toolkit provides three interfaces to facilitate this:
tet_fork()
an API function called by test purposes to create a
child process and perform processing in parent and
child concurrently.
tet_exec()
an API function called by child processes to execute
subprograms.
tet_main()
a user-supplied function to be defined in subprograms
executed by tet_exec().
An example of their use can be found in test purpose tp4 of the fileno test case:
static void
tp4() /* on entry to main(), stream position of stdin, stdout and
	 stderr is same as fileno(stream) */
{
tet_infoline("ON ENTRY TO MAIN, STREAM POSITION OF STDIN, STDOUT AND STDERR"
);
/* fork and execute subprogram, so that unique file positions can be
set up on entry to main() in subprogram */
(void) tet_fork(ch4, TET_NULLFP, 30, 0);
}
static void
ch4()
{
int fd, ret;
static char *args[] = { "./fileno-t4", NULL };
/* set up file positions to be inherited by stdin/stdout/stderr
in subprogram */
	for (fd = 0; fd < 3; fd++)
	{
		(void) close(fd);
		if ((ret = open("fileno.4", O_RDWR|O_CREAT, S_IRWXU)) != fd)
		{
			(void) sprintf(msg, "open() returned %d, expected %d",
				ret, fd);
			tet_infoline(msg);
			tet_result(TET_UNRESOLVED);
			return;
		}
		if (lseek(fd, (off_t)(123 + 45*fd), SEEK_SET) == -1)
		{
			(void) sprintf(msg, "lseek() failed - errno %d", errno);
			tet_infoline(msg);
			tet_result(TET_UNRESOLVED);
			return;
		}
	}

	/* execute subprogram to carry out remainder of test */
	(void) tet_exec(args[0], args, environ);
	(void) sprintf(msg, "tet_exec(\"%s\", args, env) failed - errno %d",
		args[0], errno);
	tet_infoline(msg);
	tet_result(TET_UNRESOLVED);
}
All the testing is done in the child, so the function tp4() simply calls tet_fork() and ignores the return value. If it needed to do any processing after the call to tet_fork(), it should check that the return value was one of the expected child exit codes before continuing. The arguments to tet_fork() are: a function to be executed in the child; a function to be executed in the parent (in this case no parent processing is required, so the null function pointer TET_NULLFP, defined in tet_api.h, is used); a timeout period in seconds; and a bitwise OR of the valid child exit codes (in this case the only valid exit code is zero).
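If parent-side processing were needed, the check just described might look like this sketch; note that tet_fork() itself registers a result when the child exits with an unexpected code:

/* continue in the parent only if the child exited with the
   expected code (0 for this test) */
if (tet_fork(ch4, TET_NULLFP, 30, 0) != 0)
	return;	/* tet_fork() has already registered a result */
/* ... further parent-side processing would go here ... */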
The file fileno-t4.c contains the definition of tet_main():
int
tet_main(argc, argv)
int argc;
char **argv;
{
long ret, pos;
int fd, err, fail = 0;
static FILE *streams[] = { stdin, stdout, stderr };
static char *strnames[] = { "stdin", "stdout", "stderr" };
/* check file positions of streams are same as set up in parent */
	for (fd = 0; fd < 3; fd++)
	{
		pos = 123 + 45*fd;	/* must match lseek() in parent */
		errno = 0;
		if ((ret = ftell(streams[fd])) != pos)
		{
			err = errno;
			(void) sprintf(msg, "ftell(%s) returned %ld, expected %ld",
				strnames[fd], ret, pos);
			tet_infoline(msg);
			if (err != 0)
			{
				(void) sprintf(msg, "errno was set to %d", err);
				tet_infoline(msg);
			}
			fail = 1;
		}
	}

	if (fail == 0)
		tet_result(TET_PASS);
	else
		tet_result(TET_FAIL);

	return 0;
}
1.6 Cleaning Up Test Cases

Since test cases often change and/or create data, it is important to clean up this data before exiting the test case. As explained earlier, one way to do this is to name a cleanup function through the API's tet_cleanup variable. The cleanup function named in the stat-tc test case provides a good example.
static void
cleanup()
{
/* remove file created by start-up */
(void) unlink(tfile);
/* remove files created by test purposes, in case they don't run
to completion */
(void) rmdir("stat.d");
(void) unlink("stat.p");
}
The cleanup function is called when all the test purposes have finished executing. As shown, it simply removes the files and directory that were created during the test.