The EB approach assumes that links between requirements and UML artifacts, including test cases, are already available; source code is excluded. The approach then runs its maintenance mechanism on those artifacts and their links. Its artifact scope covers all high-level and low-level artifacts (UML design documents, i.e., class diagrams, sequence diagrams, and other design diagrams, are classified as low-level artifacts). The approach has a medium level of granularity, since source code is excluded from the established links. In the EB approach, a change is applied only to requirements; the change is then used to analyze which parts of the other artifacts are impacted, and this impact is analyzed at the system-wide level. The EB approach provides both traceability analysis and change impact analysis to support software evolution.
Evaluating a particular data graphs tool for ODSE is essential. Common practice is to follow a set of guidelines and produce a qualitative summary. However, such guidelines do not usually allow a comparison of competing techniques or tools. A comparison is important because it identifies possible flaws in the research area or in software development. Thus, a framework for describing the attributes of tools is needed; once the tools have been assessed in such a common framework, a comparison is possible. Moreover, a framework can be used for comparison, discussion, and formative evaluation of the tools. Such a framework was proposed in . The major contribution of this paper is therefore to show how the framework can be applied to compare the data graphs tools, which is presented in Section 4. The framework for visualizing Ontology-Driven Software Evolution falls into key areas (views): Context View, Inter-model View, City View, Metric View, Transformation View, Evolution View, and Evaluation View, and 22 key features are identified across these areas. The framework is used to evaluate data graphs tools and also to assess tool appropriateness from a variety of stakeholder perspectives.
Computer systems that support dynamic software evolution can change their implementation at runtime, allowing them to extend, customise or upgrade the services that they provide without the need for system recompilation or reboot. Designers have traditionally sought alternatives to runtime change wherever it can be avoided, and several techniques have been devised to circumvent the need for it, including regularly scheduled downtimes, redundancy, and manual overrides. There are, however, certain classes of systems that benefit from dynamic adaptability. These include 24x7 systems, such as telecommunication switches, where shutting down and rebuilding the system for upgrades may result in unacceptable delays and increased cost, and adaptive systems that adapt their provided functionality in response to frequent changes in their usage context [Pui98]. Mobile systems, in particular, benefit from dynamic adaptability: dynamic software evolution allows a mobile system to adapt its provided functionality in response to the often frequent changes in the device's context. There has already been much research into building middleware that supports dynamic software evolution [Blair01, Kon01, DC00].
In this paper the focus is on the lifetime of source files during software evolution. For this we measure a set of evolution attributes for each source file over time and compose multiple value series describing the data points of the attributes as a sequence of measures. In our field study we use two months of development time to predict the defects of the following two months (see Section 6.2). The first two months comprise 61 days. On each day of this series period we measure the attributes for each file; for example, the number of lines added within one day is summarized as one data point of this attribute in the value series. As a result, many values in the series are zero, since in a development project not all source files are modified on each day. The number of defects is then predicted for the entire following two-month period for each source file. Thus, the instances for the prediction models are files. In the following we describe the different evolution attributes and the generation of the series in detail.
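The construction described above can be sketched as follows. This is a minimal illustration, not the authors' tooling: the record format `(filename, day, lines_added)` and the function name are assumptions, standing in for whatever the version-control log actually provides. Days with no change for a file stay zero, which produces the sparse series the text mentions.

```python
from collections import defaultdict
from datetime import date

def build_value_series(changes, start, days=61):
    """Build a per-file daily value series of one evolution attribute
    (here: lines added). `changes` is an iterable of
    (filename, day, lines_added) records; days on which a file is not
    modified keep the value zero."""
    series = defaultdict(lambda: [0] * days)
    for fname, day, lines_added in changes:
        idx = (day - start).days          # position of this day in the series
        if 0 <= idx < days:               # ignore changes outside the window
            series[fname][idx] += lines_added
    return dict(series)

# Two edits to a.c on the same day are summed into one data point;
# b.c is untouched on all days but one.
changes = [("a.c", date(2024, 1, 2), 10),
           ("a.c", date(2024, 1, 2), 5),
           ("b.c", date(2024, 1, 5), 3)]
series = build_value_series(changes, start=date(2024, 1, 1), days=7)
print(series["a.c"])   # -> [0, 15, 0, 0, 0, 0, 0]
```

Each file's series then becomes one instance for the prediction model, with the defect count of the following period as the label.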
In  two case studies were selected to characterize the initial development of a FLOSS project. A closed process, performed by a small group of developers, has some commonalities with traditional software development. Major differences appear when a FLOSS project either never leaves this initial stage, as documented for a large majority of the projects hosted on SourceForge , or when it leverages a “bazaar”, i.e. a large and increasing number of developers. Figure 2 displays the number of developers contributing to a system (the Arla Network File System), showing how it has remained, throughout its lifecycle, an effort of a small team. It has been argued that this should not be interpreted as a sign of the overall failure of a
Two promising approaches to software adaptation are the middleware-based and policy-based approaches. Middleware is said to be an important building block of software development . Adaptive middleware, on the other hand, allows modification of application systems when the user or operating environment changes . Middleware can be designed to separate adaptive behavior from non-adaptive behavior in an enterprise system. This research adopts the middleware approach in developing the proposed framework. Sub-section 2.5.4 presents a discussion on the topic of middleware.
Historically, there has always been a split between the process of software development and the process of software evolution (software maintenance). People think of software development as a creative activity in which a software system is developed from an initial concept through to a working system. However, they sometimes think of software maintenance as dull and uninteresting. Although the costs of maintenance are often several times the initial development costs, maintenance processes are sometimes considered to be less challenging than original software development. This distinction between development and maintenance is increasingly irrelevant. Hardly any software systems are completely new systems and it makes much more sense to see development and maintenance as a continuum. Rather than two separate processes, it is more realistic to think of software engineering as an evolutionary process (Figure 2.8) where software is continually changed over its lifetime in response to changing requirements and customer needs.
Dynamic AOP is used to extend the features of an application at run time. Dynamic creation of aspects by the system designer can lead to unexpected software evolution: in our case, the scope of extensions is constrained to middleware code, i.e., future modifications of the connector implementation and configuration that were not considered in the early phases of the design. The remote transmission of aspects in a transactional way allows all the components of the architecture to dynamically upload new classes. The necessary condition to obtain these results is running JADDA with dynamic AOP features enabled; moreover, when dynamic AOP is set, the application developer can run JADDA in two distinct AOP modes. In the first one, developers handle each remote method invocation in the code, writing local methods having the same signature as the needed methods on remote interfaces. These local methods, initially with an empty body, are completed by the JADDA architectural framework, using the code of aspects and classes inserted at run time by the dynamic aspect-oriented platform PROSE , which wraps a standard JVM, enhancing it with dynamic AOP features.
To address this issue, we present an approach, called EvolTrack, that, based on software visualization, captures and communicates with minimal human intervention each contribution made to a specific software project. Here, contribution means any action (this can in fact be configured) resulting in a software evolution. EvolTrack can be deployed in collocated or distributed settings, but its main focus is distributed development. The communication is made simultaneously to all members of the team using EvolTrack, showing everybody the design that emerges from each individual contribution. EvolTrack also keeps track of all intermediate designs generated up to the most current one, enabling the user to navigate, if necessary, through the entire evolution history. Moreover, it provides visual features that enhance awareness of what has changed from one evolution step to another and, with a zooming feature, also supports working with large projects.
How social technologies can support the development of platforms in SECOs is a challenge, considering the aspects of CSCW, global SE, and free and open source software (FOSS). The set of tools provided by social network sites can be organized to explore some solutions . As pointed out by Jansen et al. , social technologies should be evaluated, customized and integrated into SE environments and tools. In this sense, this paper analyses the impact of social networks on SECOs through an integrated framework of SECO and social network challenges. A proposal for a sociotechnical architecture for the SECO lifecycle is presented, based on open innovation and FOSS. In this sense, social networks are graphs of nodes (actors and artifacts) and edges (their dependencies). In turn, sociotechnical networks extend them to contemplate a multidisciplinary view, including other elements to analyze SECO facts and artifacts based on actor-network theory .
An autonomic cloud computing system differs from a general software system, and its autonomous units differ from general components in that they have a unique life cycle. This life cycle must provide self-management support: an autonomous unit begins with design and implementation, is then tested and validated, and can subsequently be installed, configured, and deployed to run. The transition of an autonomous unit at run time from clients, service catalogs, resource provisioning, virtualization, and management to data centers is shown in Figure 2.
The absence of this awareness at universities weakens the quality of graduates: they lack the skills and knowledge needed for these modern approaches, and thus face difficulty in dealing with modern systems in the labor market. Based on the above, the researcher developed a set of recommendations, one of which was the importance of restructuring methods in software engineering education so that educational plans are continuously updated to match the requirements of the labor market.
Abstract: As technology advances day by day, the complexity of telecommunication network systems increases, and subscribers have become more interested in advanced, easy-to-use, and fast cellular network technology. Wireless communication has therefore evolved through successive generations (1G to 5G). In addition, a main purpose of wireless communication is to reduce human effort. We are in the midst of a major change in wireless networks, and the primary objective of wireless network operation has been to satisfy users' needs. This paper presents the generations of wireless communication, the network architecture of wireless communication, the evolution of its hardware and software logic, network security, and the future technologies 6G and 7G.
To reiterate, counter to extant IS theorising on architectural evolution in digital infrastructures, in the advent of SDN infrastructures, network operators’ networking infrastructures were not replaced due to the need to introduce new underlying architecture (Hanseth & Lyytinen, 2010; Grisot, et al., 2014). Further, generally for networking infrastructures, the use of gateways as a means of underlying architectural evolution, the second position of IS theorising on architectural evolution in digital infrastructures (Hanseth, 2001; Hanseth & Lundberg, 2001; Edwards, et al., 2007; Egyedi & Spirco, 2011), has been limited to problems that are narrow in scope (Monteiro, 1998), and was not the means by which SDN infrastructures came about. Architectural evolution by interconnection, which is the third position taken in IS research explaining digital infrastructure scaling and evolution (Hanseth, 2001; Hanseth & Lundberg, 2001; Edwards, et al., 2007; Hanseth & Lyytinen, 2010; Grisot, et al., 2014), is limited to deployment architecture evolution which does not change underlying architecture in digital infrastructures. Therefore, it did not provide theoretical insight into how underlying architecture in such extensively sociotechnically ossified traditional networking infrastructures was evolved. Given these shortcomings of existing IS theorising, I searched for an alternative explanation (Sayer, 2000, pp. 13-17; Easton, 2010; Wynn & Williams, 2012; Reichertz, 2014; Kelle, 2014, pp. 561-562), framed by Archer’s critical realist morphogenetic approach to the transformation of structure (Archer, 1982; Archer, 1995), to ascertain how from an architectural perspective, production SDN infrastructures came about.
Example 1 Let us have only two classes A and B in the application, not connected by an association, and corresponding tables tab_a and tab_b in the database, which contain some data. We decide to merge A with B during development. On a structural level this means that the result of the merging is a new class A', which contains all properties of the old A and all properties of B, and B is removed from the application. The database schema is generated automatically by the ORM framework and contains only the table tab_a'. The data migration has to be created manually, so the developer has to define the evolution twice. The mapping between the data in tab_a and tab_b (a cartesian product of the data in both tables, equality of some columns, etc.) has to be provided to merge the stored data correctly. Next, the impact of this mapping on the database has to be verified: are there any data that can be lost during inlining, and is this loss intentional?
The rest of this paper is organized as follows. In Section 2 we take a look at the most important wireless transmission standards currently used in Europe and specify their main parameters. Section 3 provides an overview of design approaches for mobile SDR terminals, especially over PaC-SDRs. In Section 4 the software communications architecture (SCA), as it is used in the US Joint Tactical Radio System (JTRS), is introduced. The notion of cognitive radio (CR) is discussed in Section 5 and the need for a modified spectrum management in at least some major portions of the electromagnetic spectrum is underlined in Section 6. Finally, in Section 7 we propose the development of technology centric CRs as a first step towards terminals that may sense their environment and react upon their findings. Conclusions are drawn in Section 8.
distinguished by a high scanning accuracy, whereby the diameter of the ruby ball is set to the smallest grinder in the milling system, with the result that all data collected by the system can also be milled. The 3D scanners usually consist of a light source, one or more cameras, and a motion system. The light source projects light onto the surface of the object, and the camera(s) capture the images. Based on the known angle and distance between camera and light source (jointly called the scan head), the 3D position(s) where the projected light is reflected can be calculated using trigonometry. This is known as “triangulation.” 9-12 Special software is provided by the
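The trigonometry behind this triangulation can be shown in a few lines. This is a simplified planar sketch, not the software of any particular scanner: the baseline is the known distance between light source and camera in the scan head, and the two angles (measured from the baseline to the projected ray and to the camera's line of sight) fix the triangle, so the law of sines yields the depth of the illuminated point.

```python
import math

def triangulate_depth(baseline_m, alpha_deg, beta_deg):
    """Perpendicular depth of a surface point from the scan-head baseline.

    baseline_m: distance between light source and camera (metres)
    alpha_deg:  angle of the projected ray at the light source
    beta_deg:   angle of the viewing ray at the camera
    Both angles are measured from the baseline; the triangle's third
    angle sits at the illuminated point, so the law of sines gives
    camera-to-point distance b*sin(alpha)/sin(alpha+beta), and
    multiplying by sin(beta) projects it onto the depth axis."""
    a = math.radians(alpha_deg)
    b = math.radians(beta_deg)
    return baseline_m * math.sin(a) * math.sin(b) / math.sin(a + b)

# Symmetric 60/60 setup with a 20 cm baseline:
depth = triangulate_depth(0.2, 60.0, 60.0)
print(round(depth, 4))   # -> 0.1732 (i.e. ~17.3 cm from the baseline)
```

Real scanners apply this per camera pixel after calibration, but the geometric principle is exactly this triangle.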
Domain: In a broad context it is "a sphere of activity or interest: field" [Webster]. In the context of software engineering it is most often understood as an application area, a field for which software systems are developed. Examples include airline reservation systems, payroll systems, communication and control systems, spreadsheets, and numerical control. Domains can be broad, like banking, or narrow, like arithmetic operations. Broad domains consist of clusters of interrelated narrower domains, usually structured in a directed graph. To "reserve a seat" in the domain of airline reservation systems, for example, an update operation is called from the domain of database systems. To "update a record" in the database domain, operations from a still more basic domain, like programming languages, are needed. Other domains like user interfaces (e.g., screen manipulation, mouse interaction) are also instrumental for airline reservation systems. Domains, therefore, can be seen as networks in a semi-hierarchical structure where primitive, narrow domains such as assembly language and arithmetic operations are at the bottom and broader, more complex domains are at the top. Domain complexity can be characterized by the number of interrelated domains required to be operational.
Abstract: This research developed a solution approach that is a combination of a web application and the modified differential evolution (MDE) algorithm, aimed at solving a real-time transportation problem. A case study involving an inbound transportation problem in a company that has to plan the direct shipping of a finished product to be collected at the depot where the vehicles are located is presented. In the newly designed transportation plan, a vehicle goes to pick up the raw material required by a certain production plant from the supplier and delivers it to the production plant, in a manner that aims to reduce the transportation costs for the whole system. The routing is reoptimized whenever new information is found; the updated information is obtained from the web application, and the reoptimization is executed using the MDE algorithm developed to solve the problem. Generally, the original DE comprises four steps: (1) randomly building the initial set of solutions, (2) executing the mutation process, (3) executing the recombination process, and (4) executing the selection process. Originally, the selection process in DE accepted only the better solution, but in this paper four new selection formulas are presented that can accept a solution that is worse than the current best solution. These formulas are used to increase the possibility of escaping from a local optimal solution. The computational results show that the MDE outperformed the original DE in all tested instances. The benefit of using real-time decision-making is that it can increase the company's profit by 5.90% to 6.42%.
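The four DE steps and the relaxed selection idea can be sketched as follows. This is a generic illustration under stated assumptions: the acceptance rule below (a small, decaying probability of keeping a worse trial) is a stand-in for the paper's four selection formulas, which are not reproduced here, and all parameter values are conventional DE defaults rather than the study's settings.

```python
import random

def mde_minimize(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200, seed=0):
    """Differential evolution with a relaxed, MDE-style selection step."""
    rng = random.Random(seed)
    dim = len(bounds)
    # (1) randomly build the initial set of solutions
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for t in range(iters):
        temp = 1.0 - t / iters            # acceptance loosens early, tightens late
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            # (2) mutation: v = x_a + F * (x_b - x_c)
            v = [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
            # (3) binomial recombination, clipped to the search bounds
            jrand = rng.randrange(dim)
            u = [v[d] if (rng.random() < CR or d == jrand) else pop[i][d]
                 for d in range(dim)]
            u = [min(max(u[d], bounds[d][0]), bounds[d][1]) for d in range(dim)]
            # (4) selection: keep better trials, or occasionally a worse one,
            #     to allow escape from local optima (illustrative rule)
            fu = f(u)
            if fu <= fit[i] or rng.random() < 0.1 * temp:
                pop[i], fit[i] = u, fu
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Minimize the sphere function over [-5, 5]^2:
x, fx = mde_minimize(lambda v: v[0] ** 2 + v[1] ** 2, [(-5, 5), (-5, 5)])
```

In the original greedy DE the `or rng.random() < ...` clause would be absent; replacing it with the paper's four formulas recovers the MDE variants compared in the computational results.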
As the number of components available on the market increases, it is becoming more important to devise software metrics to quantify the various characteristics of components and their usage. Software metrics are intended to measure quantitatively the software quality and performance characteristics encountered during the planning and execution of software development. They can serve as measures of software products for the purposes of comparison, cost estimation, fault prediction and forecasting. Metrics can also be used to guide decisions throughout the life cycle and to determine whether software quality improvement initiatives are financially worthwhile (Sedigh et al., 2001). A lot of research has been conducted on software metrics and their applications. Most of the metrics proposed in the literature are based on the source code of the application. However, these metrics cannot be applied to components and component-based systems, as the source code of the components is not available to application developers. Therefore, a different set of metrics is required to measure various aspects of component-based systems and their quality issues.