
TESSY's basic functionality

For programs written in the C programming language, a unit is a function in the sense of C. To start unit testing with TESSY, you simply browse for the C source module containing the function to test. TESSY parses this module and lists the C functions in it. You can select the function you want to test.
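For illustration, a unit in this sense is simply one C function, such as the following (the function and its behavior are a made-up example, not part of TESSY):

```c
#include <stdint.h>

/* Hypothetical unit under test: clamps a raw sensor reading
 * to a valid range. This single C function is the "unit". */
int16_t clamp_sensor(int16_t raw, int16_t min, int16_t max)
{
    if (raw < min)
        return min;
    if (raw > max)
        return max;
    return raw;
}
```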

Interface analysis

TESSY then analyzes the interface of that function, i.e. it determines the variables used by the function and whether each is an input, an output, or both. TESSY displays its findings in the Test Interface Editor (TIE), where you can change them if necessary.
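A small made-up example shows why this classification matters: a function's interface can include parameters, pointer targets, globals, and the return value, each playing a different role.

```c
#include <stdint.h>

int16_t g_offset = 3;   /* global read by the function: an input      */
int16_t g_last_out;     /* global written by the function: an output  */

/* 'raw' is an input parameter; '*result' is an output via pointer;
 * the return value is a further output. */
int16_t scale(int16_t raw, int16_t *result)
{
    *result = (int16_t)(raw + g_offset);
    g_last_out = *result;
    return (int16_t)(*result * 2);
}
```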

Test data acquisition

The next step is to define the test data. The test data of a test case consists of values for the input variables of the function under test and the expected output values (i.e. the results) belonging to those inputs. The test data is entered using the Test Data Editor (TDE). The TDE automatically saves the test data in a database for later re-use; this happens transparently to the user.

Generating the test application

In order to perform dynamic testing, TESSY must first generate a test driver. The test driver is necessary to call the function under test, and TESSY generates it automatically. The test driver and the function under test together form a complete (embedded) test application, including its startup code. If the function under test uses external variables that are not defined elsewhere, TESSY can define those variables in the test driver. Similarly, if the function under test itself calls other functions (subroutines), TESSY can provide replacement functions (stubs) for the missing subroutines along with the test driver. TESSY features two types of stub functions: (1) stub functions for which you may provide the C source code for the stub bodies, and (2) stub functions for which TESSY checks whether the expected value for a parameter is passed into the stub and returns any value specified by the user. The test driver, the function under test, the stubs, and everything else are automatically compiled and linked by TESSY using an appropriate compiler for the particular embedded microcontroller architecture in use.
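A hand-written sketch conveys what such a driver and stub amount to. TESSY's generated code is more elaborate, and all names below are invented for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* External variable used but not defined by the unit:
 * the test driver supplies the definition. */
int16_t g_gain;

/* Configuration for the stub, set by the driver per test case. */
static int16_t stub_expected_arg;
static int16_t stub_return_value;

/* Stub replacing a missing subroutine: it checks the parameter
 * it receives and returns a value chosen for the test case. */
int16_t read_adc(int16_t channel)
{
    assert(channel == stub_expected_arg);  /* expected value passed in? */
    return stub_return_value;              /* user-specified return     */
}

/* Hypothetical function under test: calls read_adc(), uses g_gain. */
int16_t sample(int16_t channel)
{
    return (int16_t)(read_adc(channel) * g_gain);
}

/* Driver: set the inputs, call the unit, check the output.
 * Returns 1 on pass, 0 on fail. */
int run_one_test(void)
{
    g_gain = 2;
    stub_expected_arg = 4;
    stub_return_value = 100;
    return sample(4) == 200;
}
```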

Running a test

TESSY then executes the test application on the target system, e.g. using an in-circuit emulator, either stand-alone or connected to the real target hardware. TESSY transfers the input values of each test case to the target system and runs the function under test with these values. After executing the function under test, TESSY determines the resulting output values and checks whether they conform to the expected results specified previously for that test case. Test values for subsequent test runs are taken from the test data database and are not included in the test application; hence, in principle, the number of test cases is unlimited.
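In spirit, the per-test-case check is a comparison of actual against expected output. The following host-side sketch (hypothetical; on a real target the values travel over the debugger connection) shows that comparison for a small clamping function:

```c
#include <stdint.h>

/* Hypothetical function under test. */
static int16_t clamp(int16_t v, int16_t lo, int16_t hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Run each test case and count mismatches between
 * actual and expected output. */
int count_failures(void)
{
    const int16_t in[3]  = { 5, -3, 99 };
    const int16_t exp[3] = { 5,  0, 10 };
    int failures = 0;

    for (int i = 0; i < 3; i++) {
        if (clamp(in[i], 0, 10) != exp[i])
            failures++;   /* actual != expected: the test case fails */
    }
    return failures;
}
```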

Test evaluation

Test result evaluations are visualized using colors: Tests that delivered the expected results are marked green, those that did not deliver the expected results are marked red. Not yet executed test cases or test cases for which the evaluation has become invalid are marked yellow.

Test reports

TESSY generates comprehensive reports on the test case results in selectable degrees of detail. Of course, these reports also indicate if the execution of a test case yielded the expected result.

Error detection

If a test case did not yield the expected results, the cause needs to be examined. TESSY's tight integration with HiTOP allows TESSY to set a breakpoint at the entry point of the function under test and then re-run the test using the input values that caused the unexpected result. Debugging in HiTOP therefore starts at exactly the right position, and the debug features of the in-circuit emulator can then be used to find the bug. Besides HiTOP, TESSY supports many other debuggers.

Regression testing

After the bug is fixed, all tests can easily be re-run to verify that every test case still yields the same results as before.

Determining code coverage

Determining the code coverage achieved by the test cases requires no extra effort.
