MoMuT::UML

Model-based Mutation Testing for UML

Bernhard Aichernig, Harald Brandl, Elisabeth Jöbstl, Willibald Krenn, Rupert Schlick and Stefan Tiran

AIT Austrian Institute of Technology, Graz University of Technology, AVL List GmbH
{name.surname}@ait.ac.at, {name.surname}@tugraz.at, {name.surname}@avl.com

Abstract—Model-based mutation testing (MBMT) is a promising testing methodology that relies on a model of the system under test (SUT) to create test cases. Hence, MBMT is a so-called black-box testing approach. It is also fault based, as it creates test cases that are guaranteed to reveal certain faults: after inserting a fault into the model of the SUT, it looks for a test case revealing this fault. This makes MBMT one of the most powerful and versatile test case generation approaches available, as its tests are able to demonstrate the absence of certain faults, can achieve both control-flow and data-flow coverage of model elements, and may also include information about the behaviour in the failure case. The latter comes in handy whenever the test execution framework is bounded in the number of observations it can make and, as a consequence, has to restrict them. However, this versatility comes at a price: MBMT is computationally expensive. The tool MoMuT::UML1 is the result of a multi-year research effort to bring MBMT from the academic drawing board to industrial use. In this paper we present the current stable version, share the lessons learnt when applying two generations of MoMuT::UML in an industrial setting, and give an outlook on the upcoming, third, generation.

I. INTRODUCTION

Fault-based testing plays an important role in the verification and validation of systems as it is able to demonstrate the absence of certain faults [1]. Model-based mutation testing (MBMT) is a particular instance of fault-based testing. MBMT works in a black-box setting, which means that it does not look at the implementation during test case generation. Instead, it works off a model of the system under test (SUT) that is directly derived from the requirements and is usually more abstract than the implementation. Hence, MBMT is able to generate test cases for systems lacking an accessible implementation (e.g., integrated circuits).

Given a behavioural model of the SUT and a set of generic fault models, i.e. so-called mutation operators, MBMT strives to automatically generate test cases that are able to reveal whether any modelled fault has been implemented. To this end, MBMT takes the original model, applies one mutation operator at one particular location at a time, deriving a so-called mutant, compares the behaviour of the original model with that of the mutant, and, once a difference is found, writes out a test case that steers the SUT towards this difference. As a result, MBMT achieves fault coverage of the model, and the resulting test suite will uncover whether any of the modelled faults was implemented. In connection with the coupling effect, MBMT is able to, and has been shown to, produce high-quality test suites (cf. [2], [3]) that will also satisfy certain control- and data-flow coverage criteria [4, page 186].

1https://www.momut.org

The principal drawback of MBMT is the computational cost associated with comparing the behaviours of the original model and the mutant. This amounts to a model-checking problem and hence typically suffers from the same issues, such as state space explosion and support for only a restricted set of mathematical functions. In contrast to white-box testing or model checking of source code, however, MBMT has the advantage of working with an abstract, simplified model of the SUT. Hence the setting is more amenable to automated reasoning, and the elements of the modelling language exposed to the user can be balanced to keep the computational costs within bounds. To further reduce costs, partial models can be employed (cf. [5]) and incremental techniques used (cf. [6]). Another challenge MBMT has to deal with is that some mutations do not lead to a different behaviour of the mutant. These so-called equivalent mutants do not yield any test case, since they cannot be detected. However, they still consume computation time and contribute to the overall cost of MBMT. Finally, the selection of fault models also influences computational cost in another way: if the fault models are not chosen well, a lot of duplicate tests might be computed, as different faults can yield the same test case.

The tool MoMuT::UML (pronounced "MoMuT for UML") is an attempt to investigate and overcome the challenges posed. The name stems from MoMuT's main test case generation strategy, namely MOdel-based MUtation Testing. Since its inception in the MOGENTES2 project, the tool has been evaluated in a number of industry-driven case studies and research projects (cf. [2], [5], [7]), with yet unpublished work on-going. The current stable version of MoMuT::UML supports two different test-case generation back ends, different test-case generation strategies, and different conformance relations. MoMuT needs a conformance relation to decide whether the behaviour of a mutant is considered different from the original model. This paper describes the stable version of MoMuT::UML, the lessons learnt while evaluating the tool, and hints at improvements under development for the next major version.

The paper is structured as follows. Section II presents an overview of MoMuT::UML's main workflow. This is followed in Section III by a detailed description of the UML models the tool is able to work with. Then, in Sections IV and V the current test case generation back ends are presented, before Section VI discusses the lessons learnt from applying MoMuT::UML. Before the paper concludes in Section VIII, a brief look at related research is given in Section VII.

2http://www.mogentes.eu


Fig. 1: MoMuT::UML Architecture

II. OVERVIEW: UML-BASED MUTATION TESTING WITH MOMUT::UML

MoMuT::UML uses UML state charts, class diagrams, and instance diagrams as input for the test case generation. In addition, a small UML profile is employed so the tool knows which events, e.g., method calls, are supposed to come from the test environment and which are provided by the SUT in response. In our terminology the former are called "controllables" (ctr), while we refer to the latter as "observables" (obs). Notice that UML state charts form an asynchronous model, which means that controllables (input to the SUT) and observables (output of the SUT) can be interleaved in an almost arbitrary way. While this leads to more expressive models, it also makes test case generation more challenging and is unlike synchronous systems, where a controllable is always followed by an observable ad infinitum.

Figure 1 shows the principal architecture of the tool. Because the execution semantics of UML state charts is not fully defined, MoMuT::UML needs to settle for one interpretation and converts the UML model to an object-oriented action system (OOAS), as explained in [8], [9]. OOASs are a simple extension of action systems as proposed by Back et al. [10]. They model parallel processes through non-deterministic choice of actions, and their formal semantics is defined using the weakest precondition predicate transformer [11]. MoMuT::UML will also prepare mutated versions of the UML model by applying a set of user-selectable mutation operators to a set of user-selected name-spaces. The resulting UML-model mutants are converted to OOAS as well. After the OOAS of the original model and all mutants are available, MoMuT::UML is ready for test case generation.
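To make the action-system view concrete, the following is a minimal sketch of non-deterministic choice over guarded actions, the mechanism by which OOASs interleave parallel behaviour. It is illustrative only: the state variables, guards, and labels are invented for a car-alarm-like example and do not reflect MoMuT::UML's actual OOAS syntax.

```python
# Minimal action-system sketch (illustrative; not MoMuT::UML's OOAS notation).
# An action is a guarded command; in each step one enabled action is chosen
# non-deterministically, which is how parallel processes are interleaved.
import random

state = {"closed": False, "locked": False, "armed": False}

actions = [
    ("ctr_Close", lambda s: not s["closed"], lambda s: s.update(closed=True)),
    ("ctr_Lock",  lambda s: not s["locked"], lambda s: s.update(locked=True)),
    ("obs_AlarmArmed_SetOn",
     lambda s: s["closed"] and s["locked"] and not s["armed"],
     lambda s: s.update(armed=True)),
]

def step(s):
    """Pick one enabled action at random and execute its effect."""
    enabled = [(label, effect) for label, guard, effect in actions if guard(s)]
    if not enabled:
        return None                 # no action enabled: the system is quiescent
    label, effect = random.choice(enabled)
    effect(s)
    return label                    # the label becomes a transition in the LTS view

trace = [step(state) for _ in range(3)]
print(trace, state)
```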

MoMuT::UML offers two distinct test case generation back ends. One back end, developed over the course of the MOGENTES project, runs an explicit state exploration of the system, while the other one, developed within the TRUFAL4 project, uses symbolic techniques. In the following we refer to the former as the "enumerative back end" and to the latter as the "symbolic back end". Both back ends support three different test case generation strategies: (1) random, (2) mutation, and (3) combined random+mutation. All strategies have in common that, conceptually, they work on a labelled transition system (LTS, cf. [8]) representation of the given OOAS. Deriving this representation is straightforward and mostly involves remembering the labels of actions that were active when animating the OOAS. Also, test case generation (TCG) is always bounded by an upper limit on the search depth.

4https://trufal.wordpress.com/

The random strategy only involves a random walk over the original model. It is the cheapest way to produce test cases and does not rely on mutants. The sole complication is with non-deterministic models, which yield possibly tree-shaped, non-linear test cases containing inconclusive verdicts next to the standard pass and fail. A model is said to be non-deterministic if, given a unique current state and some input, it can move to more than one possible next state. In other words, the SUT is free to choose a next state. This means that during TCG MoMuT::UML needs to either insert inconclusive verdicts for allowed observables that branch away from the chosen random sequence when it wants to produce a linear test case, or explore all observables and generate tree-like adaptive test cases. Notice that adaptive test cases will also include inconclusive verdicts for observables that lead away from the test goal. MoMuT::UML tries to end each random test case with an observable for better testability. While being cheap, the random strategy cannot guarantee a certain test coverage.

When using the mutation strategy, MoMuT::UML needs to decide whether the behaviour of the mutant differs from that of the original and generate a test case leading to this difference. To decide whether the behaviours are different, MoMuT::UML mainly employs Tretmans' ioco [12] relation. Another option, exclusively offered by the symbolic back end, is a refinement relation defined over the state [13]. Notice that the former implements so-called strong mutation testing (the difference must be observable), while the latter corresponds to weak mutation testing (a difference in the internal state suffices) [13]. By default MoMuT::UML uses the ioco relation, which is defined over the traces of two LTSs (SUT, Model) as follows.

SUT ioco Model =def ∀σ ∈ traces(Model) : out(SUT after σ) ⊆ out(Model after σ)

Here, after denotes the set of reachable states after a trace σ, and out denotes the set of all observable events in a set of states. The observable events are all output events plus one additional quiescence event indicating the absence of any output. The ioco relation supports non-deterministic models (cf. the subset relation) as well as partial models (only traces of the Model are tested), but does not lend itself easily to compositional techniques [14]. For input-complete models, ioco is equivalent to trace inclusion (language inclusion). To compute ioco conformance, MoMuT::UML builds a product LTS from the mutant ("SUT") and the original ("Model") on the fly. If an ioco violation is found, a test case is extracted from the LTS. This step is usually done with multiple worker processes in parallel, each worker processing one mutant at a time. In contrast to the random strategy, the mutation strategy is expensive but guarantees fault coverage of the model.
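The following explicit-state sketch spells out this definition for small, fully enumerated LTSs. The data representation and helper names are ours; the actual back ends evaluate the relation on the fly over a product LTS instead of iterating over a precomputed trace set.

```python
# Explicit-state sketch of the ioco check (illustrative data layout and names).
QUIESCENCE = "delta"   # pseudo-output signalling the absence of any output

def after(lts, states, sigma):
    """States reachable from 'states' via the visible trace 'sigma'."""
    current = set(states)
    for label in sigma:
        current = {t for s in current
                     for (a, t) in lts["trans"].get(s, ()) if a == label}
    return current

def out(lts, states):
    """Observable events (outputs, or quiescence if there are none) in 'states'."""
    events = set()
    for s in states:
        outputs = {a for (a, _) in lts["trans"].get(s, ()) if a in lts["outputs"]}
        events |= outputs if outputs else {QUIESCENCE}
    return events

def ioco(sut, model, model_traces):
    """SUT ioco Model: out(SUT after sigma) must be a subset of
    out(Model after sigma) for every trace sigma of the model."""
    for sigma in model_traces:
        sut_out = out(sut, after(sut, {sut["init"]}, sigma))
        model_out = out(model, after(model, {model["init"]}, sigma))
        if not sut_out <= model_out:
            return False, sigma     # sigma exposes the difference -> test case seed
    return True, None
```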

Finally, MoMuT::UML offers a combined random and mutation strategy. If selected, the tool will first generate a small set of random tests and use this set to filter the mutants: any mutant that is detected by the random tests is removed from the set and only the ones not detected remain. Next, MoMuT::UML runs the mutation-based strategy on the reduced set of mutants. Past evaluations have shown this strategy to deliver test suites with the best detection rates (cf. [2], [5]).
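A sketch of the filtering step behind the combined strategy is given below; all function parameters are placeholders for the actual random-walk generator, the mutant-killing check, and the mutation-based generator.

```python
# Sketch of the combined random+mutation strategy (parameter names are placeholders).
def combined_strategy(model, mutants, n_random, random_test, kills, mutation_tcg):
    """Generate cheap random tests first, drop every mutant they already kill,
    and run the expensive mutation-based TCG only on the surviving mutants."""
    random_tests = [random_test(model) for _ in range(n_random)]
    surviving = [m for m in mutants
                 if not any(kills(t, m) for t in random_tests)]
    return random_tests + mutation_tcg(model, surviving)
```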

MoMuT::UML produces positive, abstract test cases in the Aldebaran format. The test cases are abstract as they are on the same level of abstraction as the UML model. Positive means that only the expected, correct behaviour is specified in the test. Consequently, each test case can be seen as an LTS with explicit pass and inconclusive verdicts and an implicit fail. Depending on the model and the capabilities of the test case generation back end, the test cases are either in linear form or adaptive, i.e. graphs. Before the test cases can be executed on the SUT, they need to be made concrete, which means that any abstraction made in the UML model needs to be undone so the test matches the interface of the SUT. Please note that this step can be fully automated.

III. UML MODELS FOR TEST CASE GENERATION

Figure 2 shows a typical UML test model. It comprises a class diagram, an instance diagram, and a state machine. Notice that in the class diagram the AlarmSystem class is marked as system under test, while the other classes are marked as environment. Both stereotypes are provided by the TCG profile shipped with MoMuT::UML and are used to define the testing interface: whenever the AlarmSystem calls a method of, or sends a signal to, an object that is part of the environment, this action is assumed to be observable. In return, whenever the environment sends a signal to or calls a method of the AlarmSystem object, it will be deemed a controllable event.

In testing there is a principal distinction between state-based and event-based testing methodologies. MoMuT::UML falls into the latter category: it is assumed that the system under test communicates with the tester via events. This setting has certain implications. One is that, in the model, non-void method calls between the SUT and the environment have to be treated specially when generating the OOAS: they need to be split into two consecutive events. Whenever the SUT calls a method of the environment that returns a value, MoMuT::UML will generate a sequence of an observable and a controllable event. Similarly, when the environment calls a method on the SUT that returns a value, it will yield a controllable followed by an observable event. MoMuT::UML supports parametrised events, which are not used in Figure 2a, as the car alarm system model is too simplistic. That said, MoMuT::UML will automatically add one parameter to each event when generating the OOAS in case the UML model uses timed triggers. The new parameter holds the time when an event has to occur.
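The sketch below illustrates how a non-void call across the testing interface is split into two consecutive events; the event naming scheme is invented for the example and is not MoMuT::UML's actual label format.

```python
# Illustrative event splitting for non-void calls (label format is made up).
def split_nonvoid_call(caller_is_sut, method, return_value):
    if caller_is_sut:
        # SUT calls the environment: observable call, then controllable return
        return [("obs", method + "_call"), ("ctr", f"{method}_return({return_value})")]
    # environment calls the SUT: controllable call, then observable return
    return [("ctr", method + "_call"), ("obs", f"{method}_return({return_value})")]

print(split_nonvoid_call(False, "GetStatus", 1))
```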

Currently, MoMuT::UML supports UML types that can be mapped to integers, such as integer, boolean, and enumerations. In addition, it supports aggregate types like classes, signals, and lists (cf. Table I). MoMuT::UML also supports arbitrary associations and a limited version of sub-typing. In essence, late binding, overriding, and overloading are not available. Due to the ambiguous semantics, inheritance of state machines is not supported: only leaf classes in the inheritance hierarchy are allowed to own a state machine.

Figure 2c shows a simple state machine MoMuT::UML is able to work with. The tool supports a large subset of UML's state machine language (cf. Table I). For example, orthogonal regions, nesting, time triggers, change triggers, signal and call triggers, choices, junctions, and OCL expressions are available. Currently, the Activity and Guard Specification Language (AGSL), developed in MOGENTES [15], is used as an action language.

TABLE I: List of supported UML elements

Types: Boolean, Integer, Enum, List/Set, Signal, Class
Classes: Active/Passive, Associations, Restr. Inheritance, Member Fields, Signal Receptions, Methods (Def. + Body)
OCL: Literals (Number, Bool, Set, Null); and, or, not, implies, <, ≤, >, ≥, =; +, −, ∗, /, mod, xor; exists, forall, select, at, size; oclIsInState, oclIsKindOf, oclIsTypeOf, oclAsType
State Machines: Substate Machine; Orthogonal Regions; (Final-, Initial-, Pseudo-) State; Entry, Exit Action; Transitions with Trigger, Guard and Effect; Change-, Signal-, Call-, Time-, Completion Trigger; OCL Constraints; Junction, Choice

Fig. 3: Graphical User Interface

MoMuT::UML also honours SysML requirements annotations and allows mapping of tests to requirements. There are only a few elements that are currently not supported, such as history states. Instantiation is done via a separate instance diagram (cf. Figure 2b) that specifies all objects and their links. Notice that MoMuT::UML was primarily designed with embedded control systems in mind, hence the dynamic creation and destruction of objects needs to be emulated.

The preferred UML editor is Eclipse Papyrus (Kepler), although MoMuT::UML has a limited ability to work with UML models coming from Visual Paradigm 10.2. MoMuT::UML also comes with a small graphical user interface (cf. Figure 3) that allows monitoring the progress of test case generation.

A. Model Mutation

Currently, the mutation engine works directly on the UML model. Table II lists the set of mutation operators the user can choose from. Past evaluations have shown that some of the operators may overlap, i.e. sometimes produce syntactically equivalent mutants. Therefore it is recommended to use a subset for test case generation, for which we are currently developing guidelines. We are also looking into the set of mutations offered: past evaluations have shown the current set not to be optimal, in the sense that it produces many mutants that yield the same test cases. For the upcoming MoMuT::UML version we want to improve the situation and move to a more efficient engine offering a refined set of mutation operators.
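As an example of how such operators behave, the sketch below applies the "Invert Guard" operator from Table II to a toy transition list, producing one first-order mutant per guarded transition; the data structure is ours, while the real engine mutates the UML model itself.

```python
# Sketch of the "Invert Guard" mutation operator on a toy transition list
# (illustrative; the actual engine operates on the UML model).
from copy import deepcopy

transitions = [
    {"src": "Armed", "trg": "Alarm", "trigger": "Open", "guard": "locked"},
    {"src": "Alarm", "trg": "SilentAlarm", "trigger": None, "guard": "t >= 30"},
    {"src": "Open",  "trg": "ClosedAndLocked", "trigger": "Close", "guard": None},
]

def invert_guard_mutants(trans):
    """Yield one first-order mutant per transition that carries a guard (O(t))."""
    for i, t in enumerate(trans):
        if t["guard"] is not None:
            mutant = deepcopy(trans)
            mutant[i]["guard"] = "not (" + t["guard"] + ")"
            yield mutant

for mutant in invert_guard_mutants(transitions):
    print(mutant)
```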

Fig. 2: MoMuT::UML Test Model of a Car Alarm System: (a) Class Diagram, (b) Instance Diagram, (c) State Machine

TABLE II: List of supported UML mutation operators

Mutation Operator | Upper Bound on # of Mutants
Change AGSL Int. Literal | O(v), v … # literals in AGSL statements
Change OCL Int. Literal | O(v), v … # literals in OCL expressions
Change Time Trigger Val. | O(t), t … # transitions with time triggers
Invert Change Expr. | O(t), t … # transitions with change trigger
Invert Guard | O(t), t … # transitions with guard
Invert OCL Sub-Expr. | O(e), e … # boolean OCL sub-expressions
Remove AGSL Stmnt | O(a), a … # AGSL statements
Remove Call Trigger | O(t), t … # transitions with call trigger
Remove Change Trigger | O(t), t … # transitions with change trigger
Remove Effect | O(t), t … # transitions with effect
Remove Entry Action | O(s), s … # states with entry actions
Remove Exit Action | O(s), s … # states with exit actions
Remove Signal Trigger | O(t), t … # transitions with signal trigger
Remove Time Trigger | O(t), t … # transitions with time trigger
Replace Effect | O(t·te), t … # transitions, te … # trans. w. effects
Replace Entry Action | O(a·oa), a … # entry actions, oa … # other entry actions
Replace Exit Action | O(a·oa), a … # exit actions, oa … # other exit actions
Replace OCL Enum. Val. | O(e), e … # enum. literals in OCL expressions
Replace OCL Operator | O(o), o … # of ∧, ∨, <, ≤, >, ≥, =, ≠, +, −, ∗, /
Replace Signal Event | O(t·sg), t … # signal-triggered trans., sg … # signals
Set Guard to False | O(t), t … # transitions
Set Guard to True | O(t), t … # transitions with guard
Set OCL Sub-Expr. False | O(e), e … # boolean OCL sub-expressions
Set OCL Sub-Expr. True | O(e), e … # boolean OCL sub-expressions

IV. ENUMERATIVE BACK END

The first back end offered by MoMuT::UML is an explicit ioco conformance checker called Ulysses (cf. [5], [16], [17]). It operates on a preprocessed and fully flattened OOAS, which means that only one object remains. Ulysses simultaneously interprets two action systems, one marked as original and one marked as mutant, and generates the corresponding LTSs. While the LTSs are generated, a synchronous product modulo ioco is computed on the fly. This computation also includes a determinisation of the LTSs, in which silent (= hidden) transitions are removed. Silent transitions often occur in action systems generated from UML models, as they are used for model-internal computations that encode the UML semantics.

While constructing the synchronous product in a breadth-first manner, ioco conformance is checked implicitly. Whenever non-conformance is detected, a so-called fail state is added to the product LTS. In general, the maximum exploration depth, i.e. the number of visible actions on a path beginning from the initial state, is limited by some user-defined number. Once Ulysses has found a fail state, however, it will restrict further exploration to the depth of the fail state found. This optimisation was implemented in order to limit the test case generation time of complex systems to reasonable durations [5]. Notice that further exploration is conducted, as Ulysses always computes the full product graph.

Finally, test cases are generated by choosing a trace within the product graph starting from the initial state and ending at a fail state. Action systems usually describe non-deterministic behaviour, i.e. there can be multiple observable actions enabled at the same time and an implementation is allowed to choose among them. So whenever there is a branch on observables in the product graph along the chosen trace to the fail state, this branch is also copied into the generated test case, yielding an adaptive test case.

Fig. 4: Abstract Test Case for the Car Alarm System (trace: ctrLock(0), ctrClose(0), obsAlarmArmed SetOn(20), ctrOpen(0), obsOpticalAlarm SetOn(0), obsAcousticAlarm SetOn(0), obsAcousticAlarm SetOff(30), obsOpticalAlarm SetOff(270), pass)

Only if no path leads to the fail state is an inconclusive verdict added to the test case. Figure 4 shows a test case generated from the deterministic car alarm system model. Since this UML model includes time triggers, the events are parametrised, with the first (and in this case only) parameter being the amount of time after which the event needs to be given (for controllables) or has to occur (for observables).

To optimise the number of test cases generated, Ulysses first checks whether an already existing test is able to detect the non-conforming behaviour of a mutated action system. This is done by computing a synchronous product between the test case and the LTS associated with a given mutant. Only if none of the test-mutant product LTSs reveals a difference in behaviour, i.e. no existing test kills the mutant, is the full mutation-based test case generation started.
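For linear test cases, the reuse check can be pictured as follows; this is a deliberate simplification of the synchronous test/mutant product (it ignores quiescence, for instance), and both the step format and the mutant interface are assumptions made for the sketch.

```python
# Simplified reuse check for a *linear* test case against one mutant
# (illustrative; the real check builds a synchronous product of the two LTSs).
def kills(test_steps, mutant_outputs_after):
    """test_steps: list of ("ctr", label) or ("obs", expected_label).
    mutant_outputs_after(trace): outputs the mutant can emit after that trace."""
    trace = []
    for kind, label in test_steps:
        if kind == "obs":
            possible = mutant_outputs_after(tuple(trace))
            if not possible <= {label}:   # mutant can emit an unexpected output
                return True               # -> existing test already kills it
        trace.append(label)
    return False
```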

The back end supports a rich set of data types, such as booleans, integers, lists, and tuples. These data types can not only be used to define state variables, but also for parameters of actions and functions. Due to Ulysses' support of complex data types, no further restrictions on the supported UML elements are necessary. However, since Ulysses enumerates all possible traces of actions, it is essential to restrict the ranges of the integer types used as rigorously as possible. Especially if there are states with many enabled actions, or if there are parametrised actions, the state-space explosion problem occurs rather easily. Notice that parametrised actions are automatically included in the OOAS as soon as time-triggered transitions occur in the UML state machine.

V. SYMBOLIC BACK END

In order to cope with more complex models, a second back end was implemented. It tackles the challenge of parametrised actions by using a symbolic approach based on Microsoft's SMT solver Z3 [18]. The back end relies on the assumption that, in practice, solving constraints with an optimised SMT solver is more efficient than enumerating all possible values.

In addition to ioco, the back end supports a conformance relation based on UTP refinement [19]. This conformance relation is based on a predicative semantics, i.e. the behaviour of an action in the OOAS is expressed as a predicate over variable valuations before and after the action has been executed [20]. According to this relation, an implementation refines its specification iff the predicate encoding the implementation implies the predicate encoding the specification. So, in order to generate a distinguishing test case, the symbolic back end checks whether the mutant model refines the original model and whether a non-conforming state is reachable from the initial state of the system. Both checks are encoded as satisfiability problems and solved using the SMT solver. The predicative semantics also encodes which action is executed, including all its parameters.

One difference between this conformance relation and ioco is that refinement is stricter than ioco, as any difference in the variable valuation causes non-refinement [21]. In ioco only labels are considered, and an implementation is allowed to react to additional, unspecified input labels and still conform. The latter allows ioco to handle partial models. We consider using refinement as weak mutation testing, since any difference in the state variable valuation is sufficient for non-conformance, and ioco as strong mutation testing, since introduced faults have to be propagated until a wrong output occurs.
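A one-step non-refinement check along these lines can be posed directly to Z3 through its Python API (z3-solver); the state variables and the toy action predicates below are our own and merely indicate how "mutant allows a transition the original forbids" becomes a satisfiability query.

```python
# Hedged sketch of a one-step non-refinement check with Z3's Python API
# (pip install z3-solver); predicates and variable names are illustrative.
from z3 import Ints, And, Not, Solver, sat

x, x_next = Ints("x x_next")              # state variable before / after the action

# predicative semantics of one action in the original and in a mutated model
original = And(x >= 0, x_next == x + 1)
mutant   = And(x >= 0, x_next == x + 2)   # e.g. a mutated integer literal

# the mutant does not refine the original iff it admits a transition the
# original forbids, i.e. Mutant(x, x') AND NOT Original(x, x') is satisfiable
solver = Solver()
solver.add(mutant, Not(original))
if solver.check() == sat:
    print("non-refinement witness:", solver.model())
else:
    print("the mutated action refines the original")
```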

The symbolic back end also allows combining the two conformance relations, in which case refinement is checked first. This singles out equivalent and conforming mutants. Only if non-refinement is identified is an ioco check performed. Note that this ioco check usually does not start at the initial state, but instead at the state in which the refinement was violated. This is a good trade-off between the better performance of the refinement check and the better fault-finding power of strong mutation testing.

The back end currently supports the data types boolean and integer. Complex data types such as lists and tuples have not been implemented yet. That said, finite lists that occur in the UML model are flattened to multiple OOAS variables in a preprocessing step and are thus eliminated. Unfortunately, this may result in huge test models and hence limits the applicability of the back end to UML models with one reactive instance as the system under test. The reason for this is that the asynchronous communication between objects is translated into list operations when transforming the UML model to an action system. Another limitation of the symbolic back end is that the generation of adaptive test cases has not been implemented and linear test cases are created instead. Whenever the system specification is non-deterministic, in the sense that the implementation is allowed to choose among different output events, an inconclusive verdict is given immediately instead of trying to reach the fail state anyway.


In summary, our symbolic back end is highly optimised for test models with parametrised actions [7]. This applies to most models in which time constraints are considered. Models comprising multiple objects, however, have to be refactored before the symbolic back end can be applied.

VI. LESSONS LEARNT WHEN APPLYING MOMUT::UML

Over time, MoMuT::UML has been applied to a number of case studies of increasing complexity. The starting point was the MOGENTES car alarm system, also used in this paper, featuring one live state machine, an OOAS model of approximately 900 LoC, and an explicit state vector of about 0.3 kB. The next step, as supported by the current stable version, were models with a couple of live state machines, about 2000 LoC of OOAS, and an explicit state vector of around 1 kB. Models of this size can be found when testing measurement devices, see [2]. For the next version of MoMuT::UML we target models with about 150 parallel running state machines, leading to an OOAS of about 10 000 LoC and an explicit state size of about 20 kB. Early development versions of the upcoming MoMuT::UML are already able to produce random tests and run mutation analyses on models of this size. So, including the current development version, we have three very different test-case generation back ends and a number of different case studies available to draw conclusions from.

A. MBMT Methodology

The main finding after applying MoMuT::UML to several case studies is that mutation-based test case generation lives up to its promise and delivers strong test suites: they are compact, both in the number and in the length of the test cases, and have an excellent fault detection capability. For example, in [2] MBMT-generated tests not only found new bugs in software components that had been in use for several years, but were also efficient in discovering manually seeded faults. Often the fault detection capabilities can be further improved by combining MBMT with random testing (cf. [2], [5]). While this combination might reduce the compactness of the test suite, both strategies seem to complement one another very well in general. Unsurprisingly then, our combined strategy often found the most bugs.

We also did a few tests with pure random TCG: after generating a comparatively large set of random tests we ran the mutation analysis on the full set of model-mutants. To our surprise the many random tests proved quite good at revealing model-mutants and often killed more than 70% of the mutants. Hence this is another argument for the combination of random and mutation-based test case generation strategies.

An outcome of the evaluation in [2] was that test cases for 5% of the mutants covered the faults of an additional 40%, which means that we should be able to significantly reduce the number of mutants and still maintain fault coverage on the type of models investigated. Put differently, there is a large and unexploited potential in reducing the computational cost of MoMuT::UML’s MBMT strategy.

In one particular setting we faced a test automation system that had to query the SUT for state changes and was limited in the number of queries it could make during a certain interval of time. In this situation the information about the faulty behaviour of the mutant, which MBMT provides, comes in handy, as it allows the test automation system to concentrate on fault-relevant queries only. However, it is clear that in doing so one narrows the fault detection capabilities towards the modelled faults and might miss others.

B. Modelling Languages

An interesting lesson we learnt was that UML's state machine language is not as easily comprehensible as one might hope. Initially chosen due to the high interest of the industry partners in MOGENTES, UML was less intuitive to use than we had expected. In part this is due to the modelling tools' lack of usability, but engineers also complained about the subtleties of the notation itself. Hence, the feedback we got from our case studies suggests that a specially tailored domain specific language (DSL) might be preferable in most cases. Also, from the implementation's point of view, UML is not an easy target to support. Its lack of formal semantics and its general openness make UML state machines highly dependent on the modelling tool. We also had to introduce such tool dependencies ourselves: we chose our own action language, as no standard one was available at the time. Also, in order to support partial test models, we had to deviate slightly from the standard UML semantics. Unlike the UML standard, MoMuT::UML does not treat state machines as input-complete specifications. In particular, the UML standard enforces a kind of angelic completion of state machines, as events that cannot trigger a transition are automatically dropped. This is the same as adding a transition that self-loops with the ignored event as trigger. In MoMuT::UML the modeller has to explicitly create such self-loops, otherwise the behaviour is assumed to be unspecified.

The second modelling language MoMuT::UML supports is object-oriented action systems. OOASs are quite powerful and allow for a relatively simple mapping of UML. That said, the UML-to-OOAS conversion oftentimes produces less than optimal OOAS code due to requirements stemming from the event-based state machine semantics. Due to its expressiveness, some OOAS language features easily lead to the state-space explosion problem. Two are especially noteworthy. First, OOASs are asynchronous and use non-deterministic choice of actions to emulate parallel running processes. Given a model with over 100 state machines running in parallel, the possible number of action interleavings skyrockets. Hence, some dynamic partial order reduction is mandatory and can achieve astonishing results, as we see in our development version. Lacking this algorithm, the current stable version of MoMuT::UML can handle smaller models only. Second, OOASs allow the modeller to "let the system search for a permitted value" (projection). For example, when looking for the point in time when the next event will fire, an enumerative approach will have to enumerate all time-progression values until it either is out of range or has found a permitted value. To counteract this issue we have developed the symbolic back end, which drastically improves the situation. Unfortunately, however, the current implementation cannot handle lists very well, hence UML operators over sets need to be "flattened", which can lead to a significant increase in the OOAS size, up to the point where the back end solver cannot handle the symbolic transition system any more. We deem the way out of this dilemma to be a proper combination of enumerative and symbolic approaches, which we are currently working on.


C. The Implementation

MoMuT::UML is distributed as an executable .jar file and builds on the Eclipse platform (esp. the EMF, UML, and OCL frameworks). All parts of MoMuT::UML that are concerned with language parsing, handling of UML, OOAS generation, and job management are implemented in Java 7. The implementation heavily relies on the visitor pattern, a lot of Eclipse's libraries, and ANTLR [22]. This turned out to be a good choice, although we ran into performance and memory issues with some of the Eclipse modelling libraries in the mutation engine when working on large UML models. At the time of writing, the Java code responsible for the UML-to-OOAS conversion and the UML mutation engine comprises about 10k LoC (not counting the libraries MoMuT::UML depends on). Adding the remaining modules – including the ANTLR-generated OOAS parser – brings the total amount of Java code to about 55k LoC.

Both back ends of the current stable version were written in SICStus Prolog, with the symbolic back end interfacing to Microsoft's Z3 solver after the Prolog-integrated solver could no longer handle the given formulae. While Prolog is fine for rapid prototyping, it does have significant limitations. For one, Prolog does not have a nice way of multi-threading. MoMuT::UML works around this issue by running several back-end processes in parallel. For another, we found our enumerative back end's Prolog implementation needed too much memory and was not fast enough. Hence we moved to C++ and full just-in-time compilation via LLVM [23] for the back end we are currently working on. At the time of writing, this new back end comprises about 25k LoC of C++ code.

VII. RELATED WORK

Jia and Harman [24] give a detailed survey of publications on mutation testing techniques since Lipton introduced the idea in 1971 [25]. They also state that comparatively little work has been done on using mutations to drive test case generation; this concept was initially described by Ammann, Black and Majurski [26].

Work on mutation-coverage-driven TCG has picked up in recent years. For example, the work of Papadakis et al. [27], [28] is concerned with automated white-box TCG for programs. The authors rely on symbolic/concolic execution, mutant schemata, and the weak mutation testing criterion to lower the cost of mutation-based TCG. While we deem weak mutation testing not strong enough in our black-box and event-based setting, MoMuT::UML will most likely benefit from techniques like concolic execution and mutant schemata. The latter term refers to meta-programs that embody all mutated versions, saving mutant compilation time. Incidentally, we are incorporating a technique similar to mutant schemata in the upcoming MoMuT::UML version. One important additional difference between our setting and that of Papadakis et al. is that we deal with highly non-deterministic models, while sequential programs are usually deterministic.

A lot of work has been done on test case generation from UML, including UML state machines (e.g. for integration testing by Ali, Briand et al. [29]; for achieving control and data flow coverage by Kim, Hong, Cho, Bae and Cha [30]), but mutation was usually only used to evaluate the results and not to drive the TCG. MBMT, i.e. model-mutation-driven TCG, was applied to probabilistic and stochastic finite state machines by Hierons and Merayo [31] and to abstract state machines (ASM) by Gargantini [32]. However, these works seemingly were not taken further towards industrial applicability.

Fraser and Gargantini [33] evaluated various coverage criteria for driving test case generation via model checking. The mutation coverage in their comparison provided a high cross coverage with the other criteria used, while producing a low to average number of tests of low to average length. A major result of this comparison was that there is no single superior coverage criterion subsuming all others; regarding coverage, combinations of criteria proved to be the best strategy. Taking this into account, the architecture of the upcoming MoMuT::UML version has been adapted to easily add support for additional coverage criteria.

VIII. CONCLUSION AND FUTURE WORK

We have presented MoMuT::UML, a black-box test case generation tool that specialises in fault-based test case generation. More precisely, MoMuT::UML generates model mutants and finds test cases that reveal them. To our knowledge, it is the first and, up to now, only tool doing so for UML state machine models. The generated test cases are both effective in finding faults and efficient when executed.

We have already shown the value of the generated tests over several case studies. In this paper we shared an outlook on our planned future work, driven by the lessons learnt from the case studies. In particular, the upcoming MoMuT::UML 3.0 will feature an LLVM-based enumerative TCG back end, use dynamic partial order reduction algorithms, and feature a new dynamic and context-sensitive mutation engine that will address most of the shortcomings identified in the current version. Early development versions have already shown their ability to handle the mutation analysis of networks of 150 UML state machines, or an OOAS model of about 10k LoC. Beyond the scope of MoMuT::UML 3.0, we are looking into combining explicit and symbolic techniques to further improve the tool's performance. In parallel, we also want to add further test case generation strategies and supported coverage criteria. Since the effort for building the test model and the achieved model quality are crucial factors for industry adoption of the approach, we are working on supporting the complete feature set for multiple input model formalisms, in particular domain specific modelling languages (DSLs), extending and maturing the MoMuT tool family. Further, we will explore options to support engineers in efficiently building the test models they need. While most of those options lie in the hands of the providers of the model editors, online animation and model analysis building on the execution engines of MoMuT might additionally improve the situation.

The current version of MoMuT::UML is available for free from its homepage at www.momut.org for academic and evaluation purposes. Future versions will be available from there too, as well as an online TCG service currently under development. We plan to open source MoMuT::UML once we have reached version 3.0 and invite the community to take part in our vision of bringing efficient and effective model-based testing to the average engineer.


ACKNOWLEDGEMENT

In addition to past projects MOGENTES, MBAT and TRUFAL, the research leading to these results has received funding from the European Union’s 7th Framework Program (FP7/2007-2013) for the Artemis Joint Undertaking projects CRYSTAL – Critical System Engineering Acceleration (grant agreement #332830) and SafeCer – Safety Certification of Software-Intensive Systems with Reusable Components (grant agreements #269265 and #295373) and from the Austrian Research Promotion Agency (FFG) on behalf of the Austrian Federal Ministry for Transport, Innovation and Technology.

REFERENCES

[1] L. Morell, "A theory of fault-based testing," IEEE Transactions on Software Engineering, vol. 16, no. 8, pp. 844–857, Aug 1990.

[2] B. K. Aichernig, J. Auer, E. Jöbstl, R. Korosec, W. Krenn, R. Schlick, and B. V. Schmidt, "Model-based mutation testing of an industrial measurement device," in Tests and Proofs - 8th International Conference, TAP 2014, Held as Part of STAF 2014, York, UK, July 24-25, 2014, Proceedings, ser. LNCS, M. Seidl and N. Tillmann, Eds., vol. 8570. Springer, 2014, pp. 1–19. [Online]. Available: http://dx.doi.org/10.1007/978-3-319-09099-3_1

[3] B. K. Aichernig, F. Lorber, and D. Nickovic, "Time for mutants - model-based mutation testing with timed automata," in Tests and Proofs - 7th International Conference, TAP 2013, Budapest, Hungary, June 16-20, 2013, Proceedings, ser. LNCS, M. Veanes and L. Viganò, Eds., vol. 7942. Springer, 2013, pp. 20–38. [Online]. Available: http://dx.doi.org/10.1007/978-3-642-38916-0_2

[4] P. Ammann and J. Offutt, Introduction to Software Testing, 1st ed. New York, NY, USA: Cambridge University Press, 2008.

[5] B. K. Aichernig, H. Brandl, E. Jöbstl, W. Krenn, R. Schlick, and S. Tiran, "Killing strategies for model-based mutation testing," Software Testing, Verification and Reliability, Feb. 2014. [Online]. Available: http://onlinelibrary.wiley.com/doi/10.1002/stvr.1522/abstract

[6] W. Krenn, D. Nickovic, and L. Tec, "Incremental language inclusion checking for networks of timed automata," in Formal Modeling and Analysis of Timed Systems - 11th International Conference, FORMATS 2013, Buenos Aires, Argentina, August 29-31, 2013, Proceedings, ser. LNCS, V. A. Braberman and L. Fribourg, Eds., vol. 8053. Springer, 2013, pp. 152–167.

[7] B. K. Aichernig, E. Jöbstl, and S. Tiran, "Model-based mutation testing via symbolic refinement checking," Science of Computer Programming, vol. 97, Part 4, pp. 383–404, 2015, special issue: Selected Papers from the 12th International Conference on Quality Software (QSIC 2012). [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0167642314002329

[8] W. Krenn, R. Schlick, and B. K. Aichernig, "Mapping UML to labeled transition systems for test-case generation - a translation via object-oriented action systems," in Formal Methods for Components and Objects - 8th International Symposium, FMCO 2009, Eindhoven, The Netherlands, November 4-6, 2009, Revised Selected Papers, ser. LNCS, F. S. de Boer, M. M. Bonsangue, S. Hallerstede, and M. Leuschel, Eds., vol. 6286. Springer, 2009, pp. 186–207.

[9] B. K. Aichernig, H. Brandl, E. Jöbstl, and W. Krenn, "UML in action: a two-layered interpretation for testing," ACM SIGSOFT Software Engineering Notes, vol. 36, no. 1, pp. 1–8, 2011.

[10] R.-J. Back and R. Kurki-Suonio, "Decentralization of process nets with centralized control," in PODC. ACM, 1983, pp. 131–142.

[11] G. Nelson, "A generalization of Dijkstra's calculus," ACM Trans. Program. Lang. Syst., vol. 11, no. 4, pp. 517–561, 1989.

[12] J. Tretmans, "Test generation with inputs, outputs and repetitive quiescence," Software - Concepts and Tools, vol. 17, no. 3, pp. 103–120, 1996.

[13] E. Jöbstl, "Model-based mutation testing with constraint and SMT solvers," Ph.D. dissertation, Graz University of Technology, Institute for Software Technology, 2014.

[14] P. Daca, T. A. Henzinger, W. Krenn, and D. Nickovic, "Compositional specifications for ioco testing," in IEEE Seventh International Conference on Software Testing, Verification and Validation, ICST 2014, March 31 - April 4, 2014, Cleveland, Ohio, USA. IEEE, 2014, pp. 373–382.

[15] G. Weissenbacher, "D 3.2b - modelling languages (final version)," MOGENTES, Tech. Rep., 2010. [Online]. Available: http://www.mogentes.eu/public/deliverables/MOGENTES 3-13 1.0r D3.2b ModellingLanguages.pdf

[16] B. K. Aichernig, H. Brandl, and W. Krenn, "Qualitative action systems," in Formal Methods and Software Engineering, 11th International Conference on Formal Engineering Methods, ICFEM 2009, Rio de Janeiro, Brazil, December 9-12, 2009, Proceedings, ser. LNCS, vol. 5885. Springer, 2009, pp. 206–225.

[17] B. K. Aichernig, H. Brandl, E. Jöbstl, and W. Krenn, "UML in action: a two-layered interpretation for testing," ACM SIGSOFT Software Engineering Notes, vol. 36, no. 1, pp. 1–8, 2011. [Online]. Available: http://doi.acm.org/10.1145/1921532.1921559

[18] L. De Moura and N. Bjørner, "Z3: An efficient SMT solver," in Proceedings of the Theory and Practice of Software, 14th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, ser. TACAS'08/ETAPS'08. Berlin, Heidelberg: Springer-Verlag, 2008, pp. 337–340. [Online]. Available: http://dl.acm.org/citation.cfm?id=1792734.1792766

[19] C. Hoare and J. He, Unifying Theories of Programming. Prentice Hall, 1998.

[20] B. K. Aichernig, E. Jöbstl, and M. Kegele, "Incremental refinement checking for test case generation," in TAP 2013, ser. LNCS, vol. 7942, 2013, pp. 1–19.

[21] M. Weiglhofer and B. K. Aichernig, "Unifying input output conformance," in Unifying Theories of Programming, ser. LNCS, A. Butterfield, Ed. Springer, 2010, vol. 5713, pp. 181–201.

[22] T. Parr, The Definitive ANTLR Reference: Building Domain-Specific Languages. Pragmatic Bookshelf, 2007.

[23] C. Lattner and V. Adve, "LLVM: A compilation framework for lifelong program analysis and transformation," in International Symposium on Code Generation and Optimization, San Jose, CA, USA, Mar 2004, pp. 75–88.

[24] Y. Jia and M. Harman, "An analysis and survey of the development of mutation testing," IEEE Trans. Softw. Eng., vol. 37, no. 5, pp. 649–678, Sep. 2011. [Online]. Available: http://dx.doi.org/10.1109/TSE.2010.62

[25] R. J. Lipton, "Fault diagnosis of computer programs," student report, 1971.

[26] P. E. Ammann, P. E. Black, and W. Majurski, "Using model checking to generate tests from specifications," in Proceedings of the Second IEEE International Conference on Formal Engineering Methods (ICFEM'98). IEEE Computer Society, 1998, pp. 46–54.

[27] M. Papadakis and N. Malevris, "Mutation based test case generation via a path selection strategy," Information and Software Technology, vol. 54, no. 9, pp. 915–932, 2012.

[28] ——, "Automatically performing weak mutation with the aid of symbolic execution, concolic testing and search-based testing," Software Quality Journal, vol. 19, no. 4, pp. 691–723, 2011.

[29] S. Ali, L. C. Briand, M. J. ur Rehman, H. Asghar, M. Z. Z. Iqbal, and A. Nadeem, "A state-based approach to integration testing based on UML models," Information and Software Technology, vol. 49, no. 11-12, pp. 1087–1106, 2007.

[30] Y. G. Kim, H. S. Hong, S. M. Cho, D. H. Bae, and S. D. Cha, "Test cases generation from UML state diagrams," in IEE Proceedings: Software, 1999, pp. 187–192.

[31] R. M. Hierons and M. G. Merayo, "Mutation testing from probabilistic and stochastic finite state machines," Journal of Systems and Software, vol. 82, no. 11, pp. 1804–1818, 2009.

[32] A. Gargantini, "Using model checking to generate fault detecting tests," in Tests and Proofs '07, ser. LNCS, Y. Gurevich and B. Meyer, Eds. Springer Berlin Heidelberg, 2007, vol. 4454, pp. 189–206. [Online]. Available: http://dx.doi.org/10.1007/978-3-540-73770-4_11

[33] G. Fraser and A. Gargantini, "An evaluation of specification based test generation techniques using model checkers," in Testing: Academic & Industrial Conference - Practice And Research Techniques, 2009, pp. 72–81.
