4. Previous Work on Quality Attribute Variability

4.3 Designing Variability and Deriving Variants (RQ3, RQ4)

From the viewpoint of the product line architecture, the design of quality attribute variability (RQ3) and the derivation of product variants (RQ4) are rarely covered explicitly in previous studies. Instead, both aspects are mostly addressed in conjunction, or through feature models. This is because features can represent not only problem domain entities but also solution domain entities, even down to the level of design decisions and technologies (Lee et al., 2014; Jarzabek et al., 2006; Kang et al., 2002). Therefore, many studies on feature models and quality attribute variability actually describe design. As a concrete example, Linux Ubuntu packages, that is, architectural entities, are modeled as varying features, each with a certain impact on memory footprint (Quinton et al., 2012). As another example, the redundancy tactics active and standby, that is, design tactics for availability, are modeled as alternative implementation technique features (Kang et al., 2002). Additionally, there are studies that focus on the product line architecture design activities (Kishi and Noda, 2000; Kishi et al., 2002). The link from feature models to the product line architecture has also been studied (Kang et al., 1990, 1998, 2002).

The nature of quality attributes makes their design and derivation more challenging. It has been argued that quality attributes cannot be directly derived, that is, derived by simply selecting single features that represent product quality attributes in the feature models (Sincero et al., 2010). This is because quality attributes are the result of, and impacted by, many functional features (Sincero et al., 2010). In other words, quality attributes are indirectly affected by other variability in the software product line (Niemelä and Immonen, 2007). This impact of other variability must be taken into account in the design and derivation approach.

To address the impact of other variability, most studies use model-based, externalized approaches. In particular, the impact of other variability is explicitly represented in the feature models as feature impacts. A feature impact characterizes how a particular feature contributes to a specific quality attribute: for example, selecting the feature Credit Card adds 50 ms to the overall response time (Soltani et al., 2012). There can be two kinds of feature impacts, depending on the nature of the specific quality attribute. Firstly, there can be qualitative feature impacts, for example, the feature Verification improves reliability (Siegmund et al., 2012b); these can be used as guidelines during product derivation. Secondly, there can be quantitative feature impacts that can be either directly measured or inferred from other measurable properties; for example, one can compute to which extent a feature influences the memory footprint of an application (Siegmund et al., 2012b). However, there are also quality attributes that are quantifiable but not measurable per feature: for example, it has been claimed that response time can only be measured per product variant (Siegmund et al., 2012b).
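The distinction between the two kinds of feature impacts can be sketched as annotations on features. The following minimal Python sketch is illustrative only: the feature names and impact values are modeled loosely after the Credit Card and Verification examples above, and the naive summation deliberately ignores feature interactions, which are discussed next.

```python
from dataclasses import dataclass

# Hypothetical sketch: features annotated with quality attribute impacts.
# Names and values are illustrative, not taken from any real product line.

@dataclass
class Feature:
    name: str
    response_time_ms: float = 0.0  # quantitative impact, measurable per feature
    reliability_note: str = ""     # qualitative impact, a derivation guideline only

CATALOG = {
    "CreditCard": Feature("CreditCard", response_time_ms=50.0),
    "Verification": Feature("Verification", reliability_note="improves reliability"),
    "BasicPayment": Feature("BasicPayment", response_time_ms=20.0),
}

def estimate_response_time(selected):
    """Naively sum single-feature impacts; ignores feature interactions."""
    return sum(CATALOG[name].response_time_ms for name in selected)

print(estimate_response_time(["BasicPayment", "CreditCard"]))  # 70.0
```

The qualitative note cannot be aggregated numerically; in this sketch it would simply be surfaced to the engineer as guidance during derivation.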

To complicate matters further, the impact of one feature may depend on the presence of other features (Siegmund et al., 2013, 2012a; Sincero et al., 2010; Etxeberria and Sagardui, 2008). Thus, the derivation must take feature interactions into account. The features in a software product line are not independent of each other; their combinations may have unexpected effects on quality attributes compared with having them in isolation. For example, when both features Replication and Cryptography are selected, the overall memory footprint is 32 KB higher than the sum of the footprints of each feature when used separately (Siegmund et al., 2012b). Feature interactions may occur when the same code unit participates in implementing multiple features, when a certain combination of features requires additional code, or when two features share the same resource (Siegmund et al., 2012b, 2013). An approach to approximate and measure feature interactions has been proposed and evaluated for memory footprint, main memory consumption, and time behavior (Siegmund et al., 2012a, 2013).
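Interaction-aware estimation can be sketched by adding pairwise correction terms to the per-feature sum. In the following minimal sketch, the per-feature footprints are invented, while the 32 KB delta for the Replication and Cryptography pair follows the example above.

```python
# Sketch: pairwise interaction terms added to a per-feature footprint sum.
# Per-feature values are hypothetical; the 32 KB pair delta mirrors the
# Replication/Cryptography example in the text.

FOOTPRINT_KB = {"Replication": 200, "Cryptography": 150, "Logging": 40}

# Extra footprint incurred when both features of a pair are selected together.
INTERACTIONS_KB = {frozenset({"Replication", "Cryptography"}): 32}

def estimate_footprint(selected):
    """Sum single-feature footprints, then apply pairwise interaction deltas."""
    base = sum(FOOTPRINT_KB[f] for f in selected)
    extra = sum(delta for pair, delta in INTERACTIONS_KB.items()
                if pair <= set(selected))
    return base + extra

print(estimate_footprint({"Replication", "Cryptography"}))  # 382
print(estimate_footprint({"Logging"}))                      # 40
```

Restricting the model to pairwise terms is itself an approximation; higher-order interactions would require additional entries keyed by larger feature sets.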

Managing feature impacts and interactions may require explicit representation and dedicated tool support, as manifested by the Intrada product line derivation (Sinnema et al., 2006). It may be possible to manage the feature impacts manually by trying to codify tacit knowledge into heuristics or by comparing with predefined reference configurations (Sinnema et al., 2006). However, when Intrada needed to create a high-performance product variant, the complications of manual impact management caused the derivation to take up to several months, compared with only a few hours (Sinnema et al., 2006). Even when supported with a tool that was able to evaluate the performance of a given configuration, only a few experts were capable of conducting a directed optimization towards the high-performance configuration (Sinnema et al., 2006).

Based on this, we identified two distinct strategies for designing quality attribute variability and for deriving product variants from the design.

Firstly, quality attribute variability can be designed and derived through design tactics and patterns, that is, by varying architectural patterns (Hallsteinsen et al., 2003; Matinlassi, 2005) and tactics (Kishi and Noda, 2000; Kishi et al., 2002). This approach is used both for performance (Kishi and Noda, 2000; Kishi et al., 2002; Ishida, 2007) and for security (Fægri and Hallsteinsen, 2006). The challenge with varying tactics and patterns is that they may crosscut the architecture (Hallsteinsen et al., 2006a) and thus may be costly to implement, test, manage, and derive. In addition, the impact of other variability may need to be managed separately.
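As an illustration of tactics as variation points, the following sketch models the active and standby redundancy tactics, mentioned earlier as alternative implementation technique features, as alternatives bound when a variant is derived. All class and function names here are hypothetical.

```python
# Hypothetical sketch: availability tactics as alternative features,
# exactly one of which is bound per derived variant.

class ActiveRedundancy:
    """Tactic variant: all replicas process requests in parallel."""
    def describe(self):
        return "active: all replicas process requests in parallel"

class StandbyRedundancy:
    """Tactic variant: a passive replica takes over on failure."""
    def describe(self):
        return "standby: a passive replica takes over on failure"

# The alternative group from which a variant selects one tactic.
AVAILABILITY_TACTICS = {"active": ActiveRedundancy, "standby": StandbyRedundancy}

def derive_availability_tactic(choice):
    """Bind exactly one alternative tactic feature for the variant."""
    return AVAILABILITY_TACTICS[choice]()

print(derive_availability_tactic("standby").describe())
```

The crosscutting cost noted above shows up even in this toy form: a real tactic would touch many components, not one class, which is what makes such variation points expensive to implement and test.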

Secondly, quality attribute variability can be designed and derived by using indirect variation only. We have termed this the indirect variation strategy. Instead of using explicit design mechanisms, this approach primarily relies on other variability in the product line to create the required quality attribute differences. That is, indirect variation (Niemelä and Immonen, 2007) is used as a way to purposefully vary quality attributes. For example, a variant with a different memory footprint is derived by leaving the features Replication and Cryptography out of the product (Siegmund et al., 2012b). Indirect variation is especially used for resource consumption (Tun et al., 2009; White et al., 2007; Siegmund et al., 2013), but other quality attributes are covered as well (Siegmund et al., 2012b).

To create desired products with the indirect variation strategy, one must estimate and represent the impacts and interactions from other variability. Most of the studies use feature models as the basis. During derivation, one needs to aggregate single feature impacts into overall product quality and manage interactions. The derivation can be about finding a variant that meets specific quality attributes, for example, finding a product that has 64 MB or smaller memory usage (Sinnema et al., 2006), or about optimizing over one or more quality attributes, for example, finding the most accurate possible face recognition system that can be constructed with a given budget (White et al., 2009). From the computational point of view, the algorithms needed for finding and optimizing variants from feature models are very expensive. Earlier solvers based on constraint satisfaction problems resulted in solution times exponential in the size of the problem (Benavides et al., 2005). White et al. (2009) showed that finding an optimal variant that adheres to feature model and system resource constraints is an NP-hard problem. Therefore, several approximation algorithms have been proposed to find partially optimized feature configurations (Guo et al., 2011; White et al., 2009). Other proposals utilize hierarchical task network planning (Soltani et al., 2012). In particular, when the derivation takes place at runtime, the scalability of the approach becomes an issue (Wang et al., 2006; Hallsteinsen et al., 2006b).
