Abstract: This volume of Electronic Communications of the EASST contains a selection of the best submissions to the third edition of the International ERCIM Symposium on Software Evolution, co-located with ICSM in Paris in October 2007. The event was organised by the ERCIM Working Group on Software Evolution, gathering researchers from all over the world to identify and discuss recent advancements and emerging trends in the state of the art in research and practice on Software Evolution.
Abstract. There are known classes of software systems that can benefit from dynamic software evolution, including 24x7 systems that require on-line upgrades and adaptive systems that need to adapt to frequent changes in their execution environment. This paper investigates the use of dynamic software architectures and architectural reflection in building adaptive systems. We introduce the K-Component model and its architecture meta-model for building a dynamic software architecture. We address the issues of the integrity and safety of dynamic software evolution by modelling dynamic reconfiguration as graph transformations on a software architecture, and cleanly separate adaptation-specific code from functional code by encapsulating it in reflective programs called adaptation contracts. The paper also introduces the prototype implementation of our K-Component model.
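The separation the abstract describes can be illustrated with a minimal sketch. This is not the K-Component implementation (which targets a C++-based component model); all names here are hypothetical, and the point is only the clean split between functional code and an adaptation contract that reconfigures it at runtime.

```python
class Component:
    """Purely functional code: knows nothing about adaptation."""
    def __init__(self, strategy):
        self.strategy = strategy

    def handle(self, request):
        return self.strategy(request)

def safe_strategy(request):
    return f"safe:{request}"

def fast_strategy(request):
    return f"fast:{request}"

class AdaptationContract:
    """Adaptation-specific code, kept out of the component itself:
    observes the environment and reconfigures the component."""
    def __init__(self, component):
        self.component = component

    def adapt(self, load):
        # Reconfiguration rule: under high load, swap in the fast strategy.
        self.component.strategy = fast_strategy if load > 0.8 else safe_strategy

comp = Component(safe_strategy)
contract = AdaptationContract(comp)
contract.adapt(load=0.9)
print(comp.handle("req"))  # fast:req
```

Because the contract holds all adaptation logic, the functional `Component` can evolve independently of the reconfiguration policy.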
Abstract— As the variety and complexity of software increase day by day, software quality assurance must strike a balance between quality and productivity. In software applications, defect density and defect prediction are essential for efficient resource allocation during software evolution. We present a new approach to software defect prediction based on value series of evolution attributes. In an empirical study we applied data mining techniques to value series of evolution attributes, developing models based on genetic programming and linear regression to predict software defects accurately. In our study, we investigated the data of three independent projects: two open source and one commercial software system. The results show that by utilizing series of these attributes we obtain models with high correlation coefficients (between 0.716 and 0.946). Further, we argue that prediction models based on series of a single variable are sometimes superior to the model including all attributes: in contrast to other studies that identified size or complexity measures as predictors, we found the number of authors and the number of commit messages to versioning systems to be excellent predictors of defect density.
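The single-variable idea can be sketched in a few lines: fit a linear regression of defect density on one evolution attribute (here, the number of authors per release) and check the correlation of the fit. The data below is invented for illustration and is not from the study, which also uses genetic programming alongside plain linear regression.

```python
import statistics

authors = [2, 3, 4, 6, 7, 9]               # evolution attribute per release
defects = [0.5, 0.9, 1.1, 1.8, 2.0, 2.7]   # defect density per release

# Ordinary least squares for y = a*x + b with a single predictor.
mx, my = statistics.mean(authors), statistics.mean(defects)
sxy = sum((x - mx) * (y - my) for x, y in zip(authors, defects))
sxx = sum((x - mx) ** 2 for x in authors)
syy = sum((y - my) ** 2 for y in defects)

a = sxy / sxx                   # slope
b = my - a * mx                 # intercept
r = sxy / (sxx * syy) ** 0.5    # Pearson correlation coefficient
print(round(a, 3), round(b, 3), round(r, 3))
```

A correlation coefficient in the range the abstract reports (0.716 to 0.946) would indicate a usable single-variable model; real studies would of course validate on held-out releases rather than the fitted series.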
The subjects were 13 students from the Laboratory of Software Analysis course, in the last year of their master's degree in computer science at the University of Trento. The subjects had a good knowledge of programming, in particular Java, and an average knowledge of software engineering topics (e.g. design, testing, software evolution). Subjects were trained in the meaning and usage of FIT tables and Fitnesse with two theoretical lessons and two practical lessons (two hours each).
The terms software evolution and software maintenance are used interchangeably in a number of publications. Both terms revolve around changes applied to software. However, according to Priyadarshiv and Kshivasagar, there are differences between software evolution and software maintenance. They argued that software maintenance comprises bug-fixing activities that rectify defects in order to ensure the software meets its development purpose. These bug-fixing activities happen after the implementation phase, and the functionality of the software remains unchanged.
The EB approach assumes that links between requirements and UML artifacts, including test cases, are already available. Source code is excluded. The approach then runs its maintenance mechanism on those artifacts and their links. The artifact scope of the EB approach includes all high-level and low-level artifacts (UML design documents, i.e. class diagrams, sequence diagrams, and other design diagrams, are classified as low-level artifacts). This approach has a medium level of granularity, since the source code is excluded from the established links. In the EB approach, a change is applied only to requirements. The change is then used to analyse which parts of the other artifacts are impacted; this impact is assessed at the system-wide level. The EB approach provides traceability analysis as well as change impact analysis to support software evolution.
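Change impact analysis over traceability links can be sketched as reachability in a link graph. The artifact names below are hypothetical, and this is only an illustration of the general mechanism, not the EB approach's actual algorithm; note that source code deliberately does not appear in the graph.

```python
from collections import deque

# Traceability links: requirement -> UML artifacts (no source code).
links = {
    "REQ-1": ["UC-login", "TC-7"],        # requirement -> use case, test case
    "UC-login": ["SD-login", "CD-auth"],  # use case -> sequence/class diagram
    "SD-login": [],
    "CD-auth": [],
    "TC-7": [],
}

def impacted(changed, links):
    """Return every artifact reachable from the changed one."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for nxt in links.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(impacted("REQ-1", links)))  # ['CD-auth', 'SD-login', 'TC-7', 'UC-login']
```

Because the change is applied only to a requirement, the impact set naturally spans the whole system of linked design and test artifacts.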
Evaluating a particular data graphs tool for ODSE is essential. Common practice is to follow a set of guidelines and produce a qualitative summary. However, these guidelines do not usually allow a comparison of competing techniques or tools. A comparison is important because it identifies possible flaws in the research area or in software development. Thus, a framework for describing attributes of tools is needed. Once the tools have been assessed in this common framework, a comparison is possible. Moreover, such a framework can be used for comparison, discussion, and formative evaluation of the tools. Such a framework was proposed in . The major contribution of this paper is therefore to show how the framework can be applied to compare the data graphs tools, which is presented in Section 4. The framework for visualizing Ontology-Driven Software Evolution falls into key areas (views): Context View, Inter-model View, City View, Metric View, Transformation View, Evolution View and Evaluation View, and 22 key features are identified across these areas. The framework is used to evaluate data graphs tools and also to assess tool appropriateness from a variety of stakeholder perspectives.
Abstract: Software evolution entails more than just redesigning and reimplementing functionality of, fixing bugs in, or adding new features to source code. These evolutionary forces induce similar changes on the software's build system too, with far-reaching consequences for both overall developer productivity and software configurability. In this paper we take a look at this phenomenon in the Linux kernel from its inception up until the present day. We do this by analysing the kernel's build traces with MAKAO, our re(verse)-engineering framework for build systems. This helps us detect interesting idioms and patterns in the dynamic build behaviour. Finding a good balance between obtaining a fast, correct build system and migrating in a stepwise fashion turns out to be the general theme throughout the evolution of the Linux build system.
Goal: The long-term objective of this research is to evaluate metrics that identify successful FLOSS projects, and to provide guidelines to FLOSS developers about practical actions to foster the successful evolution of their applications. Based on two samples from Debian and SourceForge, their product and process characteristics will be compared to determine which sample should be considered more successful in terms of its evolution. This will also give an indication of the forges and distributions in which developers should include their projects so that they may achieve the best outcomes for their projects' future development.
Abstract: Current software development practice requires efficient model-based iterative solutions. The high costs of maintenance and evolution during the software life cycle can be reduced by tool-aided iterative development. This paper presents how model-based iterative software development can be supported through efficient model-code change propagation. The presented approach facilitates bi-directional synchronization between the modified source code and the refined initial models. The synchronization technique is based on three-way abstract syntax tree (AST) differencing and merging; the AST-based solution enables syntactically correct merge operations. OMG's Model-Driven Architecture describes a proposal for platform-specific model creation and source code generation; we extend this vision with the synchronization feature to assist iterative development. Furthermore, a case study is also provided.
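The core decision rule of a three-way merge can be sketched on a flat structure. This is a deliberate simplification of the paper's AST-based technique: real AST merging operates on trees, but the per-node rule is the same, namely keep whichever side diverged from the common base, and report a conflict when both sides diverged differently.

```python
def merge3(base, model, code):
    """Three-way merge rule, applied key by key over a flat dict."""
    merged, conflicts = {}, []
    for key in base:
        b, m, c = base[key], model[key], code[key]
        if m == b:                 # model side unchanged: take the code side
            merged[key] = c
        elif c == b:               # code side unchanged: take the model side
            merged[key] = m
        elif m == c:               # both sides made the same change
            merged[key] = m
        else:                      # divergent edits: conflict, keep the base
            conflicts.append(key)
            merged[key] = b
    return merged, conflicts

# Hypothetical attributes of one class, seen from the base version,
# the refined model, and the modified source code.
base  = {"name": "Account", "field": "id",    "method": "open"}
model = {"name": "Account", "field": "ident", "method": "open"}
code  = {"name": "Account", "field": "id",    "method": "openAccount"}
print(merge3(base, model, code))
```

Here the model's rename of `field` and the code's rename of `method` both survive the merge, which is exactly the bi-directional propagation the approach aims for.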
A UML model is composed of different diagrams that address different aspects of a software system. The application of model refactorings may generate inconsistencies between these UML diagrams. Future work should explore the possibility of preserving consistency among different kinds of UML models after the application of model refactorings, by expressing inconsistency detections and their resolutions as graph transformation rules. Mens, Van Der Straeten and D'Hondt [MVD06] propose to express inconsistency detections and resolutions as graph transformation rules, and to apply the theory of critical pair analysis to analyse potential dependencies between the detection and resolution of model inconsistencies.
The main objective of the research described here is to assess how a system changes through the analysis of its packages, and to compare that data with corresponding results from refactoring the same system. Knowledge of trends and changes within packages is a starting point for understanding how effective the original design may have been and how susceptible certain types of packages may be to change, and can also inform our knowledge of facets of software such as coupling and cohesion. To this end, a case study approach was adopted using multiple versions of an evolving system. This system was a large OSS called ‘Velocity’ – a template engine allowing web designers to access methods defined in Java. For each version, we collected the number of added classes, lines of code (LOC), methods and attributes. Hereafter, we define a LOC as a single executable statement; we therefore disregard comment lines and white space when calculating LOC.
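The LOC definition above (executable statements only, no comments or blank lines) can be made concrete with a small counter. This is a minimal sketch, not the tooling used in the study: it assumes Java-style sources and handles block comments in a simplified way (e.g. it ignores code following `*/` on the same line).

```python
def count_loc(source):
    """Count executable lines, skipping blanks and comment lines."""
    loc, in_block = 0, False
    for raw in source.splitlines():
        line = raw.strip()
        if in_block:                      # inside a /* ... */ comment
            if "*/" in line:
                in_block = False
            continue
        if not line or line.startswith("//"):
            continue                      # blank line or line comment
        if line.startswith("/*"):
            if "*/" not in line:
                in_block = True
            continue
        loc += 1
    return loc

java = """\
// a comment
int x = 1;

/* block
   comment */
int y = 2;
"""
print(count_loc(java))  # 2
```

Counting only executable statements makes LOC comparable across versions even when comment density changes.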
In previous work, we have studied typical growth and change patterns in open-source, object-oriented software systems and shown that although software grows and changes over time, the structure and scope of both growth and change is, in general, predictable rather than erratic or purely random [VSWC05, VLS07, VSN07]. This leads us to ask whether we can gain a more detailed insight into where change occurs, and the degree to which change can be expected.
To support maintenance scenarios for large software systems, where manual effort is error-prone or does not scale, developers need tools. In the past, several engineering techniques have been examined for their contribution to unravelling the evolution knot. In the identification of maintenance hot spots, metrics provide quantitative indicators. Visualization serves a key role in understanding the system composition and in facilitating design/architecture communication. Patterns have been successfully introduced as a means to document both best practices (e.g., recurring programmatic needs [GHJV94]) and worst practices (e.g., anti-patterns documenting historical design decisions evaluated as suboptimal [BMMM98]). To assist impact and effort analysis, dependency analysis teaches the developer about component interactions. Together, these techniques form the key means to reverse engineer a given system from source.
Our second goal is accomplished by demonstrating that the transformation rules can be adapted to the evolution of both the applications and the techniques used in the transformation. Such adaptation is facilitated by the low coupling between design and implementation models. Design models are independent of the platform/framework used to implement the system, i.e., design models are not concerned with any characteristic of the implementation technique. In addition, the framework is defined completely independently of the modeling language used in the design models.
Views have been used in different activities of software construction because they help the developer to delimit the scope of a problem and thus, its complexity. Therefore, the developer can analyze the correctness and completeness of one concern or a set of concerns at a time. During the requirements definition process, as well as during the design process, i.e., during the elaboration of solutions, it is important that developers be able to obtain different views from a base model in order to facilitate the analyses of the solutions created from different viewpoints and perspectives.
Following this direction, Model Management [BHP00] is an emerging discipline that pursues an abstract, reusable solution for problems of this kind, independently of the metamodel under study. The Model Management discipline deals with software artifacts by means of generic operators that do not depend on their internal implementation, because they work on mappings between models [Ber03]. These operators treat models as first-class citizens and raise the level of abstraction of the solution, avoiding programming tasks and improving its reusability.
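A generic operator of this kind can be sketched very simply. The following is an illustration of the spirit of Model Management, not of any concrete operator from [BHP00] or [Ber03]: models are treated as opaque element sets, and a Merge operator works only on the mapping between them, never on metamodel-specific internals. All model and element names are hypothetical.

```python
def merge(model_a, model_b, mapping):
    """Generic Merge: union of both models, with mapped pairs unified
    to the element name used in model_a."""
    renamed_b = {mapping.get(e, e) for e in model_b}
    return model_a | renamed_b

# Two models from different metamodels, seen only as element sets.
uml = {"Customer", "Order"}
er  = {"Client", "Order", "Invoice"}

# Mapping produced by a (hypothetical) Match operator:
# "Client" in the ER model corresponds to "Customer" in the UML model.
match = {"Client": "Customer"}

print(sorted(merge(uml, er, match)))  # ['Customer', 'Invoice', 'Order']
```

Because `merge` only consults the mapping, the same operator works unchanged whatever metamodel the element sets come from, which is precisely the reusability claim made above.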
This contribution tries to bridge the gap between industrial, code-centric development with C++ and model-centric development employing languages like the Unified Modeling Language (UML). We discuss a C++ metamodel and present first ideas on how development and maintenance of C++ artifacts can be performed on instantiations of this C++ metamodel based on refactorings. Our proposal is to express C++ refactorings and development steps as graph transformations. We think it is important to clearly express transformation concepts for an involved domain like C++ software development. Graph transformations possess a sound theoretical basis and allow properties to be expressed on a conceptual level, not only on an implementation level. Surprisingly, graph transformations have not yet been applied to C++ software development.
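The flavour of a refactoring expressed as a graph transformation can be sketched on a toy program graph. This is not the authors' C++ metamodel; the node types, names, and the rule below are invented for illustration. The rule's left-hand side matches every method node with a given name, and the right-hand side rewrites that name, leaving the rest of the graph (ownership and call edges) untouched.

```python
# Toy program graph: typed nodes plus labelled edges between node ids.
graph = {
    "nodes": {
        1: {"type": "class",  "name": "Buffer"},
        2: {"type": "method", "name": "sz"},
        3: {"type": "method", "name": "resize"},
    },
    "edges": [(1, 2, "owns"), (1, 3, "owns"), (3, 2, "calls")],
}

def rename_method(graph, old, new):
    """Apply the transformation rule: match method nodes named `old`
    (left-hand side) and rewrite the name to `new` (right-hand side)."""
    for node in graph["nodes"].values():
        if node["type"] == "method" and node["name"] == old:
            node["name"] = new
    return graph

rename_method(graph, "sz", "size")
print(graph["nodes"][2]["name"])  # size
```

Because callers reference the method node by id rather than by name, the `calls` edge from `resize` stays valid after the rewrite; this is the kind of conceptual-level property that a graph-transformation formulation makes easy to state and check.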