GATeL: Automatic Test Generation from Lustre Descriptions
by Benjamin Blanc
GATeL is a tool developed by the French Atomic Energy Commission (CEA). Its main objective is to automate the testing phase of critical control applications. The key point here is that automation is not used as a means of generating numerous test cases, but rather helps to exercise truly problematic cases. GATeL is currently in use by the French Nuclear Certification Agency, and in avionics and automotive research centres.
Lustre is a formal modelling language belonging to the synchronous data-flow family. It describes programs that cyclically receive input from sensors and send output to actuators during the same cycle. Given a control program and a partial description of its behaviour as a Lustre model, the main role of GATeL is to automatically generate test sequences according to user-oriented test cases. These test sequences (representing an evolution of input data flows over time) can then be submitted to the program being tested. Depending on the development process followed, GATeL also provides the basis for an automatic oracle, whether or not the program has been automatically generated from the model.
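To give a flavour of the language, here is a minimal Lustre node (a hypothetical example, not drawn from a real application) that raises an alarm when a measured level has exceeded a threshold for two consecutive cycles:

    -- Hypothetical node: alarm when the level has been above the
    -- threshold for two consecutive cycles.
    node monitor(level, threshold: int) returns (alarm: bool);
    var high: bool;
    let
      high  = level > threshold;
      alarm = false -> (high and pre(high));
    tel

The arrow operator '->' gives a flow its value at the first cycle, and 'pre' refers to a flow's value at the previous cycle.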
In addition to the Lustre model, GATeL requires two more user inputs: a model of some aspects of the environment in which the program will run, and a declarative definition of the desired test cases. The model of the environment, also written in Lustre, is intended to filter out unrealistic behaviours, retaining only realistic reactions and thereby decreasing the state space to be explored. Unrealistic behaviours could include incompatible values for two input flows, or an input flow whose value depends on a past event on an output flow. Each filtering expression is stated by an assert directive that must be true at each cycle of the generated sequences.
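As a hypothetical illustration, such filters could rule out sensor readings that drift apart and mutually exclusive status flags:

    -- Hypothetical environment filters over assumed input flows
    -- sensor1, sensor2, door_open and door_locked:
    assert sensor1 - sensor2 <= 5 and sensor2 - sensor1 <= 5;
    assert not (door_open and door_locked);

Every assertion must hold at every cycle of any sequence GATeL generates.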
Requested test cases can then be finely characterised in order to exercise meaningful situations. Test case selection is a crucial part of the testing process. Several approaches have been proposed to automate it, but none is universally recognised. For instance, one can try to perform a statistical coverage of the input domains through a forward simulation, but meaningful test cases may have a very low probability of occurrence (since they require the evolution of a particular input flow over hundreds of cycles). Another approach is to define a structural coverage criterion of the Lustre model, and to then generate test sequences until a given percentage is reached. However, relying only on a predefined coverage criterion assumes that the fault model encoded within this criterion will discover all possible bugs, which may not be sufficient for highly critical systems.
For all these reasons, we prefer to allow users to define their own selection strategies. The first step in this direction is the definition of a test objective, which states a number of important properties that the program under test is expected to satisfy. These can be either invariant properties or reachability properties, both expressed in Lustre. Invariant properties are stated with assert directives. Properties that must be satisfied in at least one cycle (in fact, in the last cycle of the sequences built by GATeL) are stated with reach directives. To build a sequence reaching the test objective, these three elements (the Lustre model of the program, the model of its environment and the test objective) are automatically translated into a constraint system. A resolution procedure then solves this system through alternating propagation and labelling phases. Propagation checks the local coherence of the system, while labelling incrementally eliminates constraints by choosing a variable and assigning it a value within its authorised domain.
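Continuing the hypothetical monitor example above, a test objective could combine an invariant with a reachability property (the directive syntax is shown as a sketch):

    -- Invariant: the manual override is never used in the sequence.
    assert not override;
    -- Reachability: the last cycle of the sequence raises the alarm.
    reach alarm;

Through the propagation and labelling phases described above, GATeL then builds a sequence whose last cycle satisfies the reach condition while every assertion holds throughout.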
Left: The GATeL main window in which the Lustre model, the description of the environment and the test objectives are displayed. Top right: the test sequences generated.
Bottom: a tree representing a splitting process and the current test cases obtained.
Because the resolution procedure makes random choices, the input domains are not covered evenly, and quite distinct sequences may be generated for the same objective (for instance, different ways to raise an alarm). A second step in the definition of a selection strategy is therefore to help GATeL distinguish these sequences. This can be achieved by splitting the constraint system so that each sub-system characterises a particular class of behaviour reaching the objective. The splitting can be performed interactively, either by applying predefined decompositions of the boolean/integer/temporal operators in the Lustre expressions corresponding to the current constraint system, or by declaratively stating, through a dedicated split directive, the various behaviours one wants to observe. Notice that a systematic unfolding of the different classes of operators would lead to the usual structural coverage criteria.
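As an illustration only (the exact directive syntax may differ), the alarm objective above could be split into a large-overshoot case and a barely-crossing case:

    -- Sketch of a split directive: each listed case characterises a
    -- class of behaviour and yields its own constraint sub-system,
    -- hence its own test sequence.
    split level > threshold + 10,
          level <= threshold + 10;

Solving each sub-system separately then produces one sequence per class of behaviour, all reaching the same objective.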
Finally, test submission consists of reading the input sequences generated by GATeL, computing the program outputs, and comparing these values to the expected ones evaluated during the generation procedure. When the program has not been automatically generated from the Lustre model, this provides an automatic oracle. In the alternative case, the truth value of the test objective can play the role of a partial oracle. Methodological and efficiency aspects of GATeL are still under development. However, it has been successfully applied to industrial case studies, and has generated sequences of a thousand cycles after the resolution of several thousand constraints over more than forty input flows.
Please contact:
Bruno Marre, Benjamin Blanc
Commissariat à l'Energie Atomique, France
Tel: +33 1 6908 5416
E-mail: benjamin.blanc@cea.fr