Measurement, Analysis with Visualization for Better Reliability










Zeeshan Ahmed and Saman Majeed, Department of Bioinformatics, Biocenter, University of Wuerzburg, Germany


1 Introduction

The software industry is growing day by day, along with customers' high expectations of high quality products delivered within the shortest possible (software engineering) time, on a low budget and with less manpower and fewer resources (Ahmed, 2007). Yet the problem of maintaining a high success rate of quality software production in the industry remains. In the beginning of software engineering, products were developed using traditional product engineering concepts. For every new product a new design was implemented, because there was no concept of reusability. With the passage of time, software products started expanding and becoming larger in size (lines of source code), which in turn raised the levels of complexity in production processes. This increase in size and complexity had a strongly negative impact on software productivity, effort and scheduling.

To cope with the challenge of resolving these levels of complexity, researchers started considering the product line engineering concept, known from the manufacturing industry, which might be adopted to improve quality during software development processes (Ahmed, 2011). The software product line (see section 2 for details) provided a way to develop high quality products at lower cost, in less time and with a limited number of resources. At the same time as it was becoming popular among software engineering communities, it also introduced an additional level of complexity that may reduce or eliminate potential productivity gains, due to feature interaction and composition problems (Robak and Franczyk, 2001).

In product line engineering, multiple sub-products have to be developed and integrated with each other. Managing the process of integrating products is one of the most complex tasks, and software practitioners have little support to cope with this additional level of complexity and gain the maximum benefit out of it. Some researchers have explicitly claimed solutions to these problems, but those solutions, though promising, did not prove themselves when adopted. Meeting the targeted goals of this research, we discuss an approach to analyze, measure and visualize software product artifacts, including preprocessed source code characteristics, to support software practitioners with a reliable, effective and efficient software project management solution. We try to show that a statistical measurement and quantitative management based approach can be helpful in improving the process of decision making at both strategic and implementation levels. Going into the details of this chapter, we present traditional and product line software architectures, hypothesizing variability as one of the major causes of unreliable software application production. We elaborate a measurement analysis based approach, with an engineered design and implemented software application, and evaluate it using the data set of a real time software application, i.e. Intelligent Semantic Oriented Agent based Search, I-SOAS (Ahmed, 2009a) (Ahmed, 2009b). The focus of this research is limited to the implementation phase, as we consider the preprocessed source code of only those projects which are developed using the Java object oriented programming language.

The remainder of this chapter is organized as follows: section 2 provides introductory details of the field of software product lines, section 3 discusses the targeted problems, section 4 presents related research and development work, section 5 describes the proposed and implemented solution in the form of a prototype software application, section 6 discusses the results obtained during experimentation with the implemented tool by analyzing the preprocessed source code of a real time product line application, i.e. I-SOAS, and section 7 concludes the chapter.


2 Software Product Line

The software product line is one of the most recent paradigms, initiated in the late nineties to meet market demands (cost, time, quality). The idea of the product line was originally initiated by the engineering industry for the production of different kinds of hard goods, e.g. cars, aeroplanes and heavy machinery. The concept was to manufacture high quality goods in the shortest possible time, with a limited number of resources and budget. The adopted approach was to assemble already designed and manufactured parts of particular goods (e.g. from third party manufacturers) to make a new product. This idea of reusability provided software practitioners with a constructive means to adopt the product line approach and produce high quality software products in less time, with a limited number of resources and at less cost (Ahmed, 2011).

In the product line approach, the software development life cycle splits into two concurrent phases, i.e. Development for Reuse and Development with Reuse. Development for reuse is about implementing modules which can be integrated with other (compatible) modules to form a complete application; development with reuse is that integration. Development for reuse is applied in Domain Engineering whereas development with reuse is applied in Application Engineering (Kolb et al., 2005).

A software product line is defined as a family of products designed to take advantage of their common aspects and predicted variability (Weiss and Lai, 1999). Commonality and variability are the two key properties of a software product line.

Commonality represents the common (similar) functionalities in a line of products, leading to the concepts of reusability and standardization. For example, every database application has to implement a database layer (DBL) to communicate with the relational database management system. If the DBL is designed with the concept of reusability in mind, then it can be used in every kind of database application instead of rewriting a new DBL each time from scratch. As a result, this saves effort, time and cost.

Variability represents the differences across the line of products. For a better understanding of variability, we can take the example of an already developed product line based Java application which takes Java preprocessed source code as input, analyzes it and then visualizes the results. The process of input analysis and visualization of the results is the same for every dataset, which represents the common behavior, whereas the process of analyzing the preprocessed source code differs for each input, which represents the variable behavior of the product line application. Variability is identified during domain engineering and exploited during application engineering (Pohl et al., 2005). It is resolved by binding from domain engineering to application engineering at binding time (the time at which the decisions for a variation point are made) (Chakravarthy and Eide, 2006). Binding can be performed at compile, link, run and update time. Moreover, variability is divided into external, internal, discrete, continuous and abstract types (Becker et al., 2001), which can be helpful in decreasing the complexity levels and increasing the performance of the project.
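The run-time flavour of binding can be sketched in a few lines of Java; the interface and variant names below are invented for illustration and are not taken from any tool discussed in this chapter:

```java
// A variation point for an "analyzer" feature: which implementation is
// used is decided at run time (e.g. from configuration), illustrating
// run-time binding; fixing the choice in the build would correspond to
// compile- or link-time binding of the same variation point.
interface SourceAnalyzer {
    String analyze(String code);
}

class JavaAnalyzer implements SourceAnalyzer {
    public String analyze(String code) {
        // Toy behavior: report the language and the input length.
        return "java:" + code.length();
    }
}

class VariationPoint {
    // Binding time is here: the decision for the variation point is
    // made when this method runs, not when the product is compiled.
    static SourceAnalyzer bind(String variant) {
        if (variant.equals("java")) return new JavaAnalyzer();
        throw new IllegalArgumentException("unbound variant: " + variant);
    }
}
```

Earlier binding times shift the `bind` decision out of the running program and into the build or link step, which removes run-time flexibility but also removes the run-time failure case shown above.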


3 Problem Definitions & Scope

Problem 1: The software product line provides a way to develop higher quality products, and at the same time it introduces an additional level of complexity that may reduce or eliminate potential productivity gains. In product line engineering, multiple sub-products have to be developed and integrated with each other. Managing the process of integrating products is one of the most complex tasks. Software practitioners have little support to cope with this additional level of complexity and gain the maximum benefit out of it.

Problem 2: In order to quantitatively manage the variability in a software product line system, it is mandatory to analyze all the components and features of the developed system (software application) to identify and trace the variation points and design complexities.

Problem 3: Another aspect of preprocessed source code analysis is to apply measures (metrics) for quantitative measurement analysis, which could be helpful in extracting meaningful, design based and statistically useful information, e.g. the complete project structure, lines of source code, package and class information etc.

Problem 4: One of the main problems in project management is the presentation of results. When a project's preprocessed source code is analyzed, the output needs to be presented in well understandable formats. The best way to deliver the results is to represent them graphically.

Some researchers have claimed to provide solutions to these problems, but those are still not convincing. We have also tried to address these issues in our research and development. Due to the limited scope of our research, at the moment we have only focused on Java based systems.

4 Related Work / Literature Review

The literature review presented in this chapter encompasses only the most relevant approaches, i.e. the Relation-based Approach (Succi and Liu, 1999), the Static Scanning Approach (Viega et al., 2000), Columbus (Ferenc et al., 2004), TUAnalyser & Visualization (Gschwind et al., 2004), CodeCrawler (Lanza et al., 2005), the Polymorphism Measure (Benlarbi and Melo, 1999), Assessing Reusability of C++ Code (Fatma and David, 2002) and the Evaluation of Object Oriented Metrics (Denaro et al., 2003). A brief description of each approach follows:

Relation based Approach; proposed for simplifying metrics extraction from object oriented code. In this approach, relations are designed like Prolog clauses to describe the language entity relationships.

Static Scanning; this approach breaks a non-preprocessed code file into a series of lexical tokens and then matches patterns in that series of tokens. Code matching is added manually, so that non-regular patterns can be recognized. It does not use real parsing techniques, accepting some loss of precision in order to stay fast enough to support interactive programming environments. The resulting series of tokens is then matched against an existing database of vulnerable constructs. The static scanning approach is validated by the implementation of ITS4.


Columbus; a reverse engineering framework to extract facts from preprocessed code. It acquires the project information required to carry out the extraction process.

TUAnalyser & Visualization; Gschwind, Pinzger and Gall presented an approach to visualize information extracted from source code. A compiler first extracts the information from preprocessed source code and stores it in RSF format. This RSF format is then passed as input to Graphviz to produce visual output.

CodeCrawler; based on a tool called FAMIX and used to visualize object oriented source code. This approach visualizes software metrics and other source code semantics in lightweight 2D & 3D and polymetric views.

Polymorphism Measure; a measurement method used to identify software reliability problems in the early stages of the software development life cycle. The measurement is categorized into static polymorphism and dynamic polymorphism: static polymorphism is based on compile time binding decisions and dynamic polymorphism on runtime binding decisions. Five metrics are introduced to combine the early identified polymorphism forms with the inheritance relationship.

Component Reusability; Fatma and David have presented a method for judging the reusability of components and assessing indirect quality attributes from the direct attributes. The method is divided into two phases. The first phase identifies and analytically validates a set of measurements for assessing direct quality attributes. The second phase identifies and validates a set of measurements for assessing indirect quality attributes. Moreover, a set of relations is also provided which maps directly measurable software quality attributes to another set of indirectly measurable quality attributes. This method is validated in the publication via an empirical study conducted on C++ code components.

Evaluation of Object Oriented Metrics; Giovanni, Mauro and Luigi analyzed the relationships between the object oriented metrics defined by Chidamber and Kemerer (Chidamber and Kemerer, 1994) and fault-proneness across three different versions of their target application. The authors collected data from three different sources, performed system and integration testing, and then stored the faultiness data in a database. Whenever a fault was revealed, references to the version under test and the corresponding faulty code were traced in the database.

5 Approach / Tool Implementation

We have analyzed the targeted problems, taking advantage of the reviewed literature and personal research and development experience, and propose a systematic approach as the solution (Ahmed, 2006). The aim of our solution is to support software practitioners with a comprehensive approach to cope with the additional software complexity introduced by SPL (Ahmed, 2010). Our proposed approach provides a systematic way to analyze, measure and visualize the complex behavior of the system by analyzing the preprocessed source code of a software application, in order to help in identifying, tracing and resolving the variabilities. As discussed in section 3, the scope of this research is limited: we focus only on projects developed using Java to estimate, measure and visualize size and complexity, which can be helpful in predicting the overall behavior of product line based applications. This approach consists of three major components, as shown in Figure 1, i.e. Analysis, Measurement and Visualization.

Figure 1: Three component based approach (Ahmed, 2010). The three components, i.e. Analysis, Measurement and Visualization, work in sequential order to analyze preprocessed source code, apply some useful measures and produce different visual diagrams to help practitioners develop understanding.

Analysis; the process of analyzing preprocessed source code is initiated in this component. During the analysis part, all project artifacts and preprocessed source code files are analyzed (tokenized and parsed) to identify software product line and traditional characteristics, especially those which contribute to complex behavior.

Measurement; a potential analytical solution to software project management. Measurement based approaches support project managers in analyzing project performance in a quantitative manner. They provide software practitioners with an objective, cyclic way to characterize, control and improve software processes and their outputs. Successful project management is, in turn, the basis for meeting the aforementioned market demands (CHAOS, 2005). Quantitative project management is the way towards successful project management: it applies measurement analysis to predict the minimum possible cost and time required for software application development. Measurement starts with the definition of quantitative objectives and follows a top-down specification to define the respective measures. This phase can be applied to all artifacts of the software development life cycle to measure performance and improve quality. It allows characterizing and controlling in a quantitative manner to achieve the organization's business goals by mapping them to software and measurement goals, using approaches such as GQM (Basili et al., 1994). Measurement allows stating quantitative organizational improvement objectives and decomposing them down to measurable software process and product characteristics. This helps project managers track their improvements and mitigate potential risks.

“You cannot control what you cannot measure” (DeMarco, 1982)

Visualization; the next component of our approach is the visualization of the results obtained during preprocessed source code analysis and quantitative measurement analysis.
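The GQM-style decomposition used in the measurement component, from goals through questions down to measures, can be sketched as a small Java data model; the structure follows Basili et al. (1994), while the class names here are our own invention:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal GQM-style model: a goal is refined into questions, and each
// question into the metrics that answer it (Basili et al., 1994).
class GqmModel {
    record Metric(String name) {}
    record Question(String text, List<Metric> metrics) {}
    record Goal(String text, List<Question> questions) {}

    // Collect every metric a goal ultimately decomposes into.
    static List<String> metricsFor(Goal goal) {
        List<String> names = new ArrayList<>();
        for (Question q : goal.questions())
            for (Metric m : q.metrics())
                names.add(m.name());
        return names;
    }
}
```

For example, a reliability goal could decompose into questions about package complexity and inheritance depth, yielding the PCM and DIT measures used later in this chapter.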

This approach needed to be implemented in the form of a software tool capable of taking preprocessed (Java based) source code as input in order to analyze, measure and visualize the results. It is quite difficult to implement such a comprehensive application, because product line based software applications are not easy to analyze due to their (sometimes) unpredictable and complex nature. We have implemented a tool to perform the targeted operations (Ahmed and Majeed, 2010) (Ahmed, 2010). It is divided into four components, i.e. Analyzer, Data Manager, Measurer and Visualizer, which work in a cyclic order as shown in Figure 2, consisting of five steps:


Figure 2: Conceptual Design (Ahmed and Majeed, 2010). The conceptual diagram consists of five conceptual (also implemented) steps, i.e. Source Code, Analyzer, Data Manager, Measurer and Visualizer, working in a cyclic sequential order to perform preprocessed source code analysis.

1. Input; taking preprocessed source code as input.

2. Analysis; analyzing internal software characteristics e.g. packages, classes, methods, functions, declarations, expressions and conditions.

3. Database; generating results and maintaining them in a database.

4. Measurement; calculating preprocessed source code metrics.

5. Visualization; presenting results in different diagrams e.g. graphs, line charts, bar charts and tree maps etc.
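As a toy stand-in for steps 1-3, the sketch below extracts a few internal characteristics from Java source text with regular expressions; the real tool uses an ANTLR-generated lexer and parser rather than this simplification, and the class name is invented:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy analyzer: tokenize Java source with regular expressions and
// record two simple characteristics (package and class names), the
// kind of result the tool would maintain in its database.
class MiniAnalyzer {
    static Map<String, List<String>> analyze(String source) {
        Map<String, List<String>> result = new LinkedHashMap<>();
        result.put("packages", matches(source, "\\bpackage\\s+([\\w.]+)"));
        result.put("classes", matches(source, "\\bclass\\s+(\\w+)"));
        return result;
    }

    static List<String> matches(String source, String regex) {
        List<String> found = new ArrayList<>();
        Matcher m = Pattern.compile(regex).matcher(source);
        while (m.find()) found.add(m.group(1));
        return found;
    }
}
```

A real parser is needed as soon as nesting matters (inner classes, anonymous classes), which is exactly why the implemented tool relies on an ANTLR grammar instead of pattern matching.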

The implemented application has been developed using freely available tools and technologies, i.e. Java (an object oriented software development language), ANTLR (a language recognizer which interprets, compiles and translates grammatical descriptions containing actions in a variety of target languages) (Volkmann, 2008) (Ashley, 2005), MySQL (a relational database management system) and Graphviz (an open source graph visualization software API used to produce visual representations of structural results as abstract graphs and networks).

The designed internal work flow of this application consists of six components, i.e. Source Code Analyzer, Semantic Modeler, Data Manager, Measurer, Visualizer and Editor, as shown in Figure 3. All six components work in a sequence to attain the final goal. At first, the preprocessed source code of the input software application is validated by the Source Code Analyzer with the use of a lexer and parser (designed according to the syntax and semantics of the Java programming language using ANTLR). The tokenized preprocessed source code is then maintained in the designed relational database (using MySQL). The designed Measurer module is applied to the tokenized code to perform quantitative measurement analysis, and the user requested visualization (selected from the graphical user interface) is produced using Graphviz.


Figure 3: Internal Work Flow (Ahmed and Majeed, 2010). The internal work flow diagram consists of six components: Source Code Analyzer, Semantic Modeler, Data Manager, Measurer, Visualizer and Editor.

6 Experimentation Using Implemented Tool

To validate the potential and effectiveness of the developed prototype application, we have experimented with the preprocessed source code of a real time software application. We have used a web application, i.e. I-SOAS (Ahmed, 2009a), especially designed for Product Data Management Systems and developed mainly using the Java programming language. I-SOAS is a prototype application proposed to help implement an advanced Product Data Management System with a flexible graphical user interface and intelligent semantic based search.

During the experimentation, we took the complete project preprocessed source code as input along with all supporting project files, and applied some complexity and size measures to analyze the overall behavior of the application and estimate the different levels of complexity. The following measures have been applied: File Information Measure (FIM), Number of Artifacts (NOA), Package Complexity Measure (PCM), Package Inheritance Tree (PIT), Number of Children (NOC) and Depth of Inheritance Tree (DIT).


1. FIM provides information about the project files (Figures 4 and 5).

2. NOA measures the number of artifacts used by computing direct ancestors (Figure 6).

3. PCM measures the level of complexity of each package (Figure 7).

4. PIT estimates the number of ancestors of the package (Figure 8).

5. NOC calculates the number of direct descendants for each class (Figure 9).

6. DIT indicates the rate of fault proneness in the application (Figure 10).
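To make NOC and DIT concrete, the following sketch computes both from a class-to-superclass map; the map contents are invented example data, not drawn from I-SOAS:

```java
import java.util.Map;

// NOC and DIT over a toy inheritance structure: each entry maps a
// class to its direct superclass ("A" is a root with no entry).
class InheritanceMetrics {
    static final Map<String, String> PARENT =
        Map.of("B", "A", "C", "A", "D", "B");

    // NOC: number of classes whose direct superclass is cls.
    static long noc(String cls) {
        return PARENT.values().stream().filter(p -> p.equals(cls)).count();
    }

    // DIT: number of ancestors between cls and the root of its tree.
    static int dit(String cls) {
        int depth = 0;
        while (PARENT.containsKey(cls)) {
            cls = PARENT.get(cls);
            depth++;
        }
        return depth;
    }
}
```

In this toy hierarchy, class A has two direct children (NOC = 2) and class D sits two levels below the root (DIT = 2), which is the shape of data behind Figures 9 and 10.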

Figure 4: I-SOAS – Multi Colour Tree File Map. The multi colour tree file map represents the different artifacts of the analyzed project, i.e. I-SOAS. Blue boxes are executable ("exe") files, yellow boxes are Jar files, green boxes are Java files, pink boxes are JPG (image) files, brown boxes are dll files, dark blue boxes are Java class files, faint green boxes are bitmap files, light gray boxes are HTML files, purple boxes are GIF files, dark brown boxes are database files and light green boxes are SQL script files.

Figure 4 shows a multicolour file tree map generated by the developed prototype after project artifact (including source code file) analysis. The map is based on the files used in the input application (I-SOAS), plotted with respect to size, format (type) and placement (directory) and classified in different colours, e.g. files represented by blue boxes are executable (exe) files, files drawn as yellow boxes are Jar files, files drawn as green boxes are Java files, files drawn in pink are JPG (image) files, etc. The size of each box corresponds to the size of the file, whereas the placement of each box reflects the association of files with each other and with the directory structure. This visual representation can help software practitioners analyze the overall structure of the project. The preprocessed source code consists of 736 packages, 876 classes and in total 1619 files, as shown in Figure 5 in the form of a project overview bar chart. The length of each bar is based on the calculated average size of the respective files.
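A textual analogue of such an overview bar chart can be sketched as follows; the counts are the ones reported for I-SOAS, while the scaling and rendering are invented for illustration (the implemented tool draws graphical charts instead):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy renderer for a project-overview bar chart: one bar per source
// code element kind, scaled so the bars fit in a terminal.
class OverviewChart {
    static String render(Map<String, Integer> counts, int unitsPerHash) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            sb.append(String.format("%-8s %-20s %d%n",
                e.getKey(), "#".repeat(e.getValue() / unitsPerHash), e.getValue()));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        counts.put("packages", 736);
        counts.put("classes", 876);
        counts.put("files", 1619);
        System.out.print(render(counts, 100));  // one '#' per 100 items
    }
}
```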

Figure 5: I-SOAS – Project Overview. The project overview consists of three main bars, i.e. Packages, Files and Classes, representing the three main source code elements.

A tree graph is shown in Figure 6, representing the overall inheritance relationships of the programmed preprocessed source code, based on the packages and classes used in the development of I-SOAS, e.g. association or inheritance between packages and the class hierarchy and relationships. The measure Number of Artifacts (NOA) is used to measure the number of artifacts by computing the direct ancestor(s) of each artifact.


Figure 6: I-SOAS – Number of Artifacts (NOA) Graph. The figure presents a tree graph consisting of two-coloured (green and white) interconnected source code elements (artifacts). Green represents roots and white represents leaves.


This kind of visual representation can help practitioners analyze the overall structure of the packages and classes used in a project. Moreover, software practitioners can also use it to analyze the level of complexity of the relationships between packages and classes. If the number of artifacts increases, then most probably the number of dependencies between the artifacts will also increase, which ultimately increases complexity. The higher the NOA, the higher the probability of fault proneness in the product line system.

Figure 7, Package Complexity Overview, presents a bar chart representing each package with respect to its size and complexity. The Package Complexity Measure (PCM) is applied to measure the level of complexity of each package. This visual representation helps practitioners analyze the overall complexity level of the packages used and programmed in the application.

Figure 7: I-SOAS – Package Complexity Overview. The package complexity overview is a bar chart representing packages in blue; the height of each bar represents the level of complexity.

Figure 8, Package Structure, presents the graph of packages used in I-SOAS, including both programmed and used libraries. As shown in the figure, there are 736 packages; the default package is the parent namespace. This visual representation helps practitioners analyze the overall structure of the packages used in the project; moreover, software practitioners can also use it to analyze the complexity of package relationships. The Package Inheritance Tree (PIT) measure is used to estimate the number of ancestors of each package. PIT can be very helpful as an indicator of fault proneness, because the package is one of the major components of any product line software application: every component of the project, like a class or library, exists inside a package. If the number of packages increases, then the number of ancestors will also increase, which raises the complexity. An increase in PIT may therefore increase fault proneness in the software application.

Figure 8: I-SOAS – Package Inheritance Tree (PIT). The package inheritance tree is a complex tree graph consisting of the relationships between packages (navy blue boxes).


Figure 9, Number of Children (NOC), provides a visual presentation in the form of a 3D line bar chart, based on the level of complexity of the classes in packages, depending on the number of descendants of each class. The measure NOC is used to estimate the number of direct descendants of each class. NOC helps in the indication of fault proneness: the higher the NOC, the lower the rate of fault proneness.

Figure 9: I-SOAS – Number of Children (NOC). The figure represents the number of direct descendants of each class in a 3D line bar chart.

Figure 10, Depth of Inheritance Tree (DIT), provides a 3D line bar chart of the measured ancestors of each class used in each package. DIT helps in the indication of fault proneness: the larger the DIT, the larger the probability of fault proneness in the software.


Figure 10: I-SOAS – Depth of Inheritance Tree (DIT). The figure provides a 3D line bar chart representing the measured ancestors of each class used in each package.

7 Conclusion

The aim of this chapter has been to address the importance of preprocessed source code and project artifact measurement for better reliability analysis of software applications, visualizing the obtained results in different diagrams to help analyze the overall behavior of a software project by predicting the levels of complexity at different stages and estimating the rate of fault proneness. To meet the aforementioned goals of this research, we have discussed the conceptual architecture of the used approach and briefly presented the developed prototype, validating its potential strengths and effectiveness with an experiment analyzing the complete project preprocessed source code of a product line application, i.e. I-SOAS. During the experimentation we measured preprocessed source code elements and applied size measures (NOC etc.) and inheritance measures (DIT, PIT etc.), along with visual presentations in the form of tree maps, graphs and charts. The provided visual representations can help software practitioners analyze the overall structure of a project by examining the composition of packages and classes and their complex relationships, to bring out the intensity of fault proneness of any product line software application.



Acknowledgements

We (the authors) are thankful to the University of Wuerzburg, Germany, for giving us the opportunity to work on this research project. We are thankful to Prof. Dr. Thomas Dandekar for his support during this research. We are also thankful to the blind reviewers and the publishers for publishing this manuscript as a book chapter.


References

Ahmed, Z. (2006). Integration of variants handling in M-system NT, Master Thesis, Department of Computer Science, Blekinge Institute of Technology, Sweden, in cooperation with the Fraunhofer Institute for Experimental Software Engineering, Germany.

Ahmed, Z. (2007). Measurement Analysis and Fault Proneness Indication in Product Line Applications (PLA), Chapter 7, Frontiers in Artificial Intelligence and Applications, Volume 161, IOS Press, pp. 391-400, ISBN 978-1-58603-794-9 (print), ISBN 978-1-60750-281-4 (online).

Ahmed, Z. (2009a). Proposing Semantic Oriented Agent and Knowledge base Product Data Management, Information Management and Computer Security, Volume 17, Issue 5, pp. 360-371, ISSN 0968-5227, September.

Ahmed, Z. (2009b). Intelligent semantic oriented agent based search (I-SOAS), In Proceedings of the 7th International Conference on Frontiers of Information Technology, Article No. 55.

Ahmed, Z. and Majeed, S. (2010). Towards Increase in Quality by Preprocessed Source Code and Measurement Analysis of Software Applications, IST Transactions on Information Technology - Theory and Applications, Vol. 1, No. 1(2), ISSN 1913-8822, pp. 8-13.

Ahmed, Z. (2010a). Towards Performance Measurement and Metrics based Analysis of PLA Applications, International Journal of Software Engineering & Applications, Vol. 1, No. 3, pp. 66-80, ISSN 0975-9018 (online), 0976-2221.

Ahmed, Z. (2011). Integration of variants handling in M-system NT: Empirically evaluated approach towards the identification of correlation between traditional and product line measures, ISBN 978-3-639-32553-9, pp. 112, Verlag Dr. Müller (VDM) Publishers.

Ashley, J.S. (2005). ANTLR.

Basili, V.R., Caldiera, G. and Rombach, H.D. (1994). The goal question metric approach, Encyclopedia of Software Engineering, pp. 528-532, John Wiley & Sons, Inc.

Becker, M., Geyer, L., Gilbert, A. and Becker, K. (2001). Comprehensive Variability Modelling to Facilitate Efficient Variability Treatment, In Software Product Family Engineering: 4th International Workshop, PFE 2002, Bilbao, Spain, October 3-5.

Benlarbi, S. and Melo, W. (1999). Polymorphism Measure for Early Risk Prediction, In Proceedings of the 21st International Conference on Software Engineering, ICSE 99, Los Angeles, USA.

Briand, L. and Wüst, J. (2001). The Impact of Design Properties on Development Cost in Object-Oriented Systems, IEEE Transactions on Software Engineering, vol. 27, no. 11.

Chakravarthy, V. and Eide, E. (2005). Binding-time flexibility for managing variability, In Proceedings of OOPSLA.


Chidamber, S.R. and Kemerer, C.F. (1994). A metrics suite for object oriented design, IEEE Transactions on Software Engineering, 20(6):476-493.

Fatma, D. and David, C.R. (2002). A Method for Assessing the Reusability of Object-Oriented Code Using a Validated Set of Automated Measurements, In Proceedings of the 2002 ACM Symposium on Applied Computing, pp. 997-1003, ISBN 1-58113-445-2.

Ferenc, R., Beszedes, A. and Gyimothy, T. (2002). Extracting Facts with Columbus from C++ Code, In Proceedings of the 6th International Conference on Software.

Succi, G. and Liu, E. (1999). A Relations-Based Approach for Simplifying Metrics Extraction, Department of Electrical and Computer Engineering, University of Calgary, 2500 University Drive NW, Calgary, AB T2N 1N4.

Denaro, G., Pezzè, M. and Lavazza, L. (2003). An Empirical Evaluation of Object Oriented Metrics in Industrial Setting, In 5th CaberNet Plenary Workshop.

Gschwind, T., Pinzger, M. and Gall, H. (2004). TUAnalyzer: Analyzing Templates in C++ Code, In Proceedings of the 11th Working Conference on Reverse Engineering (WCRE'04), IEEE.

Kolb, R., Muthig, D., Patzke, T. and Yamauchi, K. (2005). Refactoring a legacy component for reuse in a software product line, IEEE International Conference on Software Maintenance (ICSM 2005): 18(2), pp. 109-132.


Lanza, M., Ducasse, S., Gall, H. and Pinzger, M. (2005). CodeCrawler: An Information Visualization Tool for Program Comprehension, In Proceedings of the 27th International Conference on Software Engineering, ICSE 2005: 672-673.

Pohl, K., Böckle, G. and van der Linden, F. (2005). Software Product Line Engineering: Foundations, Principles and Techniques, Springer, pp. 59.

Robak, S. and Franczyk, B. (2001). Feature interaction product lines, In Proceedings of Feature Interaction in Composed Systems.


DeMarco, T. (1982). Controlling Software Projects: Management, Measurement and Estimation, Yourdon Press.

Volkmann, M. (2008). ANTLR 3, Object Computing, Inc.

Viega, J., Bloch, J.T., Kohno, Y. and McGraw, G. (2000). ITS4: A static vulnerability scanner for C and C++ code, In 16th Annual Computer Security Applications Conference (ACSAC'00), p. 257, December.

Weiss, D.M. and Lai, C.T.R. (1999). Software Product-Line Engineering: A Family-based Software Development Process, Addison-Wesley.




