be connected in one database by the use of semantics, which can be used to find data entities (see (Stuckenschmidt, 2009) and (Hitzler, 2008)). An important piece of UOM information is the "Transformation". A UOM does not only contain descriptive information; it can also carry information for transforming UOM data. A transformation can be, for example, the translation of one technology into another, such as transforming Java byte code into .NET byte code (Frijters, 2008), or of a UML diagram into Java source code. The original UOM can thus be used in another (technology) domain. The service-based software construction process focuses on the transformation of implementation data; moreover, it can be necessary to transform descriptive information. The service-based software construction process defines "Transformation" along the lines of Model Driven Development (MDD/MDSD) (Meimberg et al., 2006), (Stahl, 2007) and Generative Programming (GP) (Czarnecki and Eisenecker, 2000): transformation in this area means that information is transformed into the same or another domain-specific model for later reuse. (Garcia-Magarino, 2008) surveys different modelling languages and frameworks in the area of CASE-tool interoperability, and (Czarnecki and Helsen, 2003) present a classification of different model transformation approaches. Within the scope of this research the focus is on MDD and GP; in this publication, transformation is performed using existing transformation tools.
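As an illustration of the model-to-text transformations used in MDD/GP, the following hypothetical sketch transforms a minimal UML-like class model into Java source code. The dictionary model format and all names are assumptions for illustration, not taken from any of the cited tools:

```python
# Minimal model-to-text transformation sketch in the MDD sense:
# a UML-like class model (hypothetical dictionary format) is turned
# into Java source code through a simple template.

def transform_class(model: dict) -> str:
    """Generate Java source from a minimal class model."""
    fields = "\n".join(
        f"    private {jtype} {name};"
        for name, jtype in model["attributes"].items()
    )
    return f"public class {model['name']} {{\n{fields}\n}}\n"

# Illustrative input model and the generated Java source
uml_model = {"name": "Order", "attributes": {"id": "int", "total": "double"}}
print(transform_class(uml_model))
```

Real MDD tool chains replace the hand-written template with a template engine and a metamodel, but the structure of the transformation is the same: traverse the source model, emit text in the target domain.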
Numerous solutions have previously been proposed for knowledge management (KM) in software development. An earlier analysis found that ontology- and semantics-based solutions have been gaining popularity, while other frameworks have also been considered, such as pattern-based approaches, the experience factory, social networking, wikis, the Software and Systems Process Engineering Meta-Model (SPEM), agent-based approaches and taxonomies. In a study by , a 2-layer ontology model in the Web Ontology Language (OWL) was introduced for an effective KM system to ease the processes of knowledge search and learning. In another study by , a domain-specific ontology-based system for distributed teams was proposed as a tool for handling and searching information from the knowledge base. An approach combining multi-agent and ontology solutions for processing context information has been introduced with a tool called Distributed Software Engineering Environment (DiSEN), which is intended to support communication, persistence and collaboration among geographically distributed teams . Further collaborative solutions have been presented, such as FLOW Maps , the Enterprise Software Engineering Model (ESEM) , and a collaborative KMS framework .
Based on the evaluation presented in Table 2, several conclusions can be drawn. First, no existing framework or model meets all the requirements of the reuse management tool. Secondly, the RiSE maturity model satisfies most criteria, but fails at crucial points, namely in addressing changing reusable assets and in providing practical guidance. The assessment method was out of scope when the RiSE maturity model was presented, and the model was never validated; no additional papers were found describing such an assessment method or a validation of the model. Changing reusable assets are also barely addressed by the model: some reuse maturity models present configuration or change management as a way of managing changes, but in the RiSE maturity model this aspect is buried in other factors. Some characteristics of changing reusable assets are recognized in factors such as 'previous development of reusable assets', 'origin of the reused asset' and 'technology support'; nevertheless, this point is only weakly addressed. The model's strong point is that it elaborates on the work of previous reuse maturity models. Thirdly, almost all reuse models and frameworks use the Capability Maturity Model (CMM) of the Software Engineering Institute (SEI), see e.g. . Although many models and frameworks claim to use CMM, they actually use it as guidance rather than as a prescription. Ted Davis mentioned that 'maturity as an indicator of capability' and 'the use of maturity levels' are not present in his model, and the same is true for many other reuse frameworks and models. Fourthly, the reuse factors mentioned by the existing reuse maturity models resemble each other, but at the same time lack a systematic theoretical foundation. These reuse factors can serve as a basis for describing software reuse.
This article introduces a computer-aided software requirement analysis tool, Software Inspection Support & Requirement Traceability (SIS-RT), which provides inspection, traceability analysis, and formal analysis capabilities. Inspection and requirement traceability analysis are widely considered the most effective software verification and validation (V&V) methods. Although formal methods are also regarded as an effective V&V technique, they are not easy to apply properly in the nuclear field because of their mathematical nature. These techniques are labor-intensive and therefore need to be partially automated. SIS-RT is designed to partially automate the software inspection process and requirement traceability analysis. To support easy inspection and effective use of formal methods, SIS-RT offers three kinds of views: an Inspection View, a Traceability View, and a Structure View. With further development, SIS-RT is expected to become a unique and promising software requirement analysis tool.
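A minimal sketch of what automated traceability analysis involves is shown below. This is a hypothetical illustration, not SIS-RT's implementation; the requirement identifiers are invented:

```python
# Hypothetical traceability-analysis sketch: given trace links between
# upstream and downstream requirements, build a traceability matrix and
# report upstream requirements with no downstream trace (a typical
# finding that inspection tools flag).

def build_trace_matrix(links):
    """links: iterable of (upstream_req, downstream_req) pairs."""
    matrix = {}
    for up, down in links:
        matrix.setdefault(up, set()).add(down)
    return matrix

def untraced(upstream_reqs, matrix):
    """Upstream requirements with no downstream trace link."""
    return [r for r in upstream_reqs if not matrix.get(r)]

links = [("SRS-1", "SDS-1"), ("SRS-1", "SDS-2"), ("SRS-2", "SDS-3")]
matrix = build_trace_matrix(links)
print(untraced(["SRS-1", "SRS-2", "SRS-3"], matrix))  # ['SRS-3']
```

Manually maintaining such a matrix across documents is exactly the labor-intensive step that motivates partial automation.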
The audit software helps to simplify the energy audit and to reduce its time and cost. The results of the study will be generally useful in facilitating the analysis of an energy audit, because the resulting practical software covers most of the measures and analyses to be performed in an energy audit. Moreover, it also gives an understanding of how energy is used in buildings and of how to identify opportunities to reduce energy consumption.
The main focus of this chapter is the development of control algorithms that can effectively regulate the magnitude and frequency of the generated output ac voltage with minimal harmonics for photovoltaic power plants in the following modes of operation: a) single-phase operation, b) three-phase grid-connected operation, and c) three-phase standalone operation. First, dc-ac converter interfaces are reviewed and classified according to their utilization topologies, followed by a review of such converters based on their schematic topologies. Thirdly, the popular modulating strategies for dc-ac conversion used in single-phase applications are reviewed, followed by the vector control scheme applied to the control of a three-phase PV power plant in both the grid-connected and standalone modes. The applicable standards for the integration of PV inverters into the grid are also briefly reviewed. This is followed by the development of the proposed control system for a single-phase inverter to achieve a sinusoidal output waveform. Finally, the simulation results and discussions are presented, followed by the chapter conclusions.
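The sinusoidal PWM idea behind such single-phase modulating strategies can be sketched as follows. The 50 Hz reference, 1 kHz carrier and modulation index of 0.8 are illustrative values, not parameters from this chapter:

```python
import math

# Illustrative sinusoidal PWM (SPWM) sketch: the gate signal is high
# whenever the sinusoidal reference exceeds a triangular carrier.

def triangle(t, f_carrier):
    """Triangular carrier wave in [-1, 1]."""
    phase = (t * f_carrier) % 1.0
    return 4 * phase - 1 if phase < 0.5 else 3 - 4 * phase

def spwm(t, f_ref=50.0, f_carrier=1000.0, m_a=0.8):
    """Gate signal (0/1) from comparing reference against carrier."""
    ref = m_a * math.sin(2 * math.pi * f_ref * t)
    return 1 if ref > triangle(t, f_carrier) else 0

# Sample one fundamental period (0.02 s) at 20 kHz; the average duty
# cycle of a symmetric sinusoidal reference is close to 0.5.
samples = [spwm(i / 20000) for i in range(400)]
print(sum(samples) / len(samples))
```

Filtering this pulse train recovers a scaled copy of the sinusoidal reference, which is why SPWM yields low harmonic distortion at the output.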
Only the error introduced during development passes through the rework discovery valve of the Scrum model; the two other errors can be discovered only at the end of the iteration. Note that the requirements have the same size and weight and, before being developed, are subdivided into different Sprint backlogs. Each backlog includes a random number of features, drawn from a Gaussian distribution. These Sprint backlogs are developed during short fixed-length iterations. As requirements are implemented, they flow into the "Integration Testing", "System Testing" and "User Acceptance Testing" level-variable stocks. If the tests are passed successfully, the requirements are accepted and considered completed; consequently, the accepted requirements flow into the level variable called "Production Environment". Otherwise, rework must be performed, which entails a delay due to the time needed for the correction. From the "Production Environment" level variable the requirements flow into the "Live" level variable and the work is finished. As we have already said, in our model the time to finish the "Original Work to Do" is affected by two main effects: delays and errors.
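A strongly simplified sketch of this flow is given below. It is not the authors' system dynamics model; the failure probability and rework delay are invented parameters, and each test stage simply sends a requirement back for rework with some probability:

```python
import random

# Simplified sketch of requirements flowing through the three testing
# stocks; each stage may reject a requirement, adding a rework delay
# before the requirement is re-tested and can move on.

def simulate(n_reqs, fail_prob=0.2, rework_delay=2, seed=42):
    random.seed(seed)
    stages = ["Integration Testing", "System Testing",
              "User Acceptance Testing"]
    total_delay = 0
    for _ in range(n_reqs):
        for _stage in stages:
            # Each failed test adds one rework cycle's delay.
            while random.random() < fail_prob:
                total_delay += rework_delay
    return total_delay

print(simulate(100))  # total rework delay accumulated by 100 requirements
```

Even in this toy version, the accumulated rework delay grows with the failure probability, mirroring the model's point that errors and delays jointly stretch the time to finish the original work.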
• The LVST project concerns the development of the software tool that will be able to select, among all available launch vehicles, those that meet the mission requirements and constraints imposed
The review was limited to the activities around the development of the concourse, including the identification of material that would be included in the concourse, what framework, if any, was used to guide the inclusion of material, and the themes identified to pursue prior to the review of the concourse. How the concourse material was gathered, although interesting and included in Table 1, was not a focus of the review, as the ways of collecting concourse data and the types of data gathered are varied and are usually clearly articulated within the research articles. However, the framework or means used to ensure that the concourse is fully representative of the opinions available on the topic is rarely articulated. The articles selected were published between 1996 (Brown 1996) and 2017 (Dune et al., 2017; Grimshaw et al., 2017). Generally, the types of concourse data identified are
Software system engineering performs a variety of functions at various stages of the product development life cycle. Software requirement analysis is the phase in which reevaluation has to be started. Requirements are classified into functional and non-functional requirements relevant to the product or process being developed. The functional requirements, categorized as user interface, transaction performance, auditing and authentication, were grouped in accordance with their functions. The basic representations relevant to software system engineering, such as the world, domain, elementary and component views, are discussed; these views are approached top-down. This paper focuses on using the GA tool in an iterative process of identifying and refining the requirements through a method of reevaluation. A series of repeated steps using the tool helps to elicit all the requirements needed to carry out the design phase successfully. The implementation of the tool will be discussed in future work.
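Assuming the GA tool refers to a genetic algorithm, one common formulation of such iterative refinement is selecting a requirement subset that maximizes stakeholder value under an effort budget, improving the selection generation by generation. The paper does not give the encoding; the sketch below, with invented values, efforts and parameters, illustrates the idea:

```python
import random

# Hypothetical GA sketch: each genome is a bit vector selecting a
# subset of requirements; fitness is total value if the effort budget
# is respected, else zero. Elitist selection plus one-point crossover
# and single-bit mutation refine the population iteratively.

def fitness(bits, values, efforts, budget):
    effort = sum(e for b, e in zip(bits, efforts) if b)
    value = sum(v for b, v in zip(bits, values) if b)
    return value if effort <= budget else 0

def ga(values, efforts, budget, pop=20, gens=50, seed=3):
    random.seed(seed)
    n = len(values)
    population = [[random.randint(0, 1) for _ in range(n)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda b: fitness(b, values, efforts, budget),
                        reverse=True)
        survivors = population[:pop // 2]          # elitism
        children = []
        for _ in range(pop - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)           # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(n)] ^= 1        # single-bit mutation
            children.append(child)
        population = survivors + children
    return max(population, key=lambda b: fitness(b, values, efforts, budget))

best = ga(values=[10, 6, 8, 4], efforts=[5, 3, 4, 2], budget=9)
print(best)
```

Each generation corresponds to one "reevaluation" pass over the candidate requirement sets.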
Abstract — As mobile technology rapidly expands into many areas, there is high demand from industry for graduates with mobile development skills. Graduates entering the mobile development world need to understand how the characteristics of mobile devices and applications affect decisions about software design, and must be able to select and use appropriate standards, APIs and toolkits to build mobile applications. In view of that, an electronic decision matrix based on the Pugh method, to be used as one of the learning tools in a mobile development course, is introduced. The electronic matrix is designed and developed to assist mobile application developers, especially novices, in choosing the methodology that suits the requirements of their mobile development projects. Detailed descriptions of how the electronic matrix can facilitate the learning process of mobile development are given in this paper.
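The Pugh-method computation underlying such an electronic matrix can be sketched as follows. The criteria, weights and candidate methodologies below are illustrative, not taken from the paper:

```python
# Minimal Pugh decision matrix sketch: each candidate methodology is
# scored +1 / 0 / -1 against a baseline on every criterion, and the
# weighted totals are compared to pick the best candidate.

def pugh_totals(scores, weights):
    """scores: {candidate: {criterion: -1|0|+1}}; weights: {criterion: w}."""
    return {
        cand: sum(weights[c] * s for c, s in crit.items())
        for cand, crit in scores.items()
    }

# Illustrative criteria, weights and candidates (not from the paper)
weights = {"time to market": 3, "tool support": 2, "team familiarity": 1}
scores = {
    "Mobile-D": {"time to market": 1, "tool support": 0, "team familiarity": 1},
    "Scrum":    {"time to market": 0, "tool support": 1, "team familiarity": -1},
}
totals = pugh_totals(scores, weights)
print(max(totals, key=totals.get))  # Mobile-D (4) beats Scrum (1)
```

The electronic matrix essentially automates this bookkeeping, so a novice only supplies the per-criterion judgments.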
frequency, processor designers came up with a new model – multicore processors. Major microprocessor producers like INTEL and AMD both offer multicore processors that are now common in desktop computers. Multicore processors turn normal desktops into truly parallel computers. These facts emphasize a major shift in processor and computer design: further improvements in performance will come from adding multiple cores on the same chip, with many processors sharing the same memory. Having high-performance multicore processors raises a new problem: how to write software that fully exploits the capabilities of the processors in a manner that does not expose the complexity of the underlying software to the programmer.
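A minimal sketch of hiding that complexity behind a high-level interface is shown below, using Python's `concurrent.futures` as an illustrative choice (not one made in this text): the programmer writes a plain function and a map call, while process creation and scheduling stay inside the library:

```python
from concurrent.futures import ProcessPoolExecutor

# Splitting a large summation across worker processes: the executor
# hides process management behind a simple map interface.

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    chunk = n // workers
    bounds = [(i * chunk, (i + 1) * chunk if i < workers - 1 else n)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, bounds))

if __name__ == "__main__":
    print(parallel_sum(1_000_000))  # equals sum(range(1_000_000))
```

The same shape (data decomposition plus a pool abstraction) appears in OpenMP, TBB and Java's fork/join framework, which is exactly the kind of abstraction the paragraph calls for.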
Recently, with the development of computer technology, the dependency of railway systems on computer software has increased rapidly, and high reliability and safety are required of vital railway software in accordance with this development. Accordingly, this paper presents an automated tool, developed for the first time at home and abroad, for accurate performance testing of vital signaling system embedded software, which is required by international standards as a highly recommended measure and one of the validation items. First, the functional design of the developed tool for performance testing of railway-dedicated software is explained, and the results of its implementation are shown concretely. The tool presents the results of testing against performance requirements (overload and stress, response time, and memory restrictions, among others), obtained by monitoring the current state of target memory systems and task stacks, in graph form so that they can be grasped easily by the user. Users can easily check the collected task information, the system resource usage rate, the memory resource usage rate, and the results of monitoring the current state of the target memory system and task stacks. This automated tool for performance testing of railway software is intended primarily for the software validation and maintenance stages, but its degree of utilization can also be sufficiently high in the software development process. If this tool is used widely at the software validation and development stages, it will contribute greatly to securing safety and reliability by preventing errors in vital signaling system software before anything happens.
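The kind of metric collection such a tool performs can be sketched as follows. This is a hypothetical illustration, not the presented tool; note that `resource.getrusage` is Unix-only and reports peak RSS in kilobytes on Linux but bytes on macOS:

```python
import resource
import time

# Illustrative sketch of collecting two of the metrics the tool
# monitors: the response time of a task and the process's peak
# memory use, gathered for later graphing.

def measure(task, *args):
    start = time.perf_counter()
    result = task(*args)
    elapsed = time.perf_counter() - start
    # Peak resident set size of this process (KB on Linux, bytes on macOS)
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return {"result": result,
            "response_time_s": elapsed,
            "peak_rss": peak}

report = measure(sum, range(100000))
print(report["response_time_s"], report["peak_rss"])
```

A real embedded-target tool would read these figures from the target's memory system and task stacks over a debug or monitoring channel rather than from the host OS.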
The research setting of this study is Sigmax Public Transport, a Dutch company operating in the ICT and software development industry. Data in this feedback session will be collected using a qualitative research method: individual interviews. This method was selected because it allows for in-depth conversations, leaves room for questioning beyond the standard predefined subjects if necessary, and benefits from the expertise and creativity of the participants. Furthermore, introverted and less well-spoken interviewees will have the chance to express themselves and will not be overruled by more extraverted ones. In order to gather a variety of perspectives on the design of the first prototype, employees with different backgrounds and roles within SPT are approached. In addition, a Sales Consultant of Sigmax Group is consulted for his expertise in business value
Peptide-based vaccine design and immunodiagnosis are among the most important approaches in the diagnosis and therapy of various infectious and noninfectious diseases. They critically require the identification of regions in native pathogen protein sequences that are recognized by either B-cell or T-cell receptors. The antigenic regions of a protein recognized by the binding sites of immunoglobulin molecules are called B-cell epitopes. The experimental identification of epitopes that bind specifically to anti-peptide antibodies requires a binding assay for each peptide in an antigenic protein sequence, which is very laborious and time consuming . A bioinformatics approach to predicting linear B-cell epitopes in a protein sequence can be the best alternative for reducing the number of peptides to be synthesized for wet-lab experimentation. The aim of this study is to develop Python-based software with a graphical user interface for predicting the antigenic properties of proteins. Hence the tool was named Analysis of protein Sequence and Antigenicity Prediction (ASAP). ASAP predicts the antigenicity of a protein from its amino acid sequence, based on Chou-Fasman turns and the antigenic index.
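Window-based antigenicity scoring of the kind ASAP performs can be sketched as follows. The per-residue propensity values below are placeholders for illustration, not the published Chou-Fasman or antigenic-index tables:

```python
# Sliding-window antigenicity scoring sketch: each window of residues
# gets the mean propensity of its amino acids; peaks in the score
# profile suggest candidate linear B-cell epitopes.

# Placeholder propensities (NOT the published tables)
PROPENSITY = {"A": 1.06, "G": 0.77, "P": 0.95, "S": 1.05, "T": 1.04,
              "K": 1.10, "D": 1.15, "E": 1.08, "L": 0.90, "V": 0.85}

def window_scores(sequence, window=7):
    """Average propensity over a sliding window; higher = more antigenic."""
    scores = []
    for i in range(len(sequence) - window + 1):
        frag = sequence[i:i + window]
        # Unknown residues default to a neutral propensity of 1.0
        scores.append(sum(PROPENSITY.get(aa, 1.0) for aa in frag) / window)
    return scores

print(window_scores("ADKGSLTPVE"))
```

Peptides are then synthesized only for the highest-scoring windows, which is the cost saving the paragraph describes.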
program and the video player. Logging that is limited to real time introduces temporal and observational inaccuracies: it relies on rapid and accurate marking-up of the log while the observer's attention is divided between observing the interaction and controlling the logging, and it does nothing to facilitate the more considered analysis of critical incidents. Retrospective logging usually involves pausing and reviewing, and is therefore only really practicable when the logging software can be used to drive the video player. During refinement of the requirements, it was considered essential that DRUM should support both real-time and retrospective logging of an extendible set of user-defined events as well as standard events; that the log should itself provide a means of controlling the video (by point and click); and that it should be possible to annotate events recorded in the log with comments, both in real time and retrospectively.
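A hypothetical sketch of an annotatable event log meeting these requirements is given below; the class and field names are invented, not DRUM's:

```python
import time

# Sketch of a timestamped event log supporting both real-time marking
# (timestamp defaults to "now") and retrospective annotation of
# previously recorded events.

class EventLog:
    def __init__(self):
        self.events = []

    def mark(self, name, timestamp=None, comment=""):
        """Record a standard or user-defined event; returns its index."""
        self.events.append({
            "t": timestamp if timestamp is not None else time.time(),
            "name": name,
            "comment": comment,
        })
        return len(self.events) - 1

    def annotate(self, index, comment):
        """Retrospectively attach a comment to a recorded event."""
        self.events[index]["comment"] = comment

log = EventLog()
i = log.mark("critical incident", timestamp=12.5)
log.annotate(i, "user failed to find the save command")
print(log.events[i])
```

Driving the video player would then amount to seeking to an event's stored timestamp when it is clicked in the log.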
In developing this software, the developer was inclined to use Macromedia Authorware 7.0 as the main programming tool. Although the developer had used the software before on previous assignments, it was still difficult to use, since it had previously been used for group assignments whereas this is an individual assignment; thus a much more thorough skill was needed when using this software. While developing the software, when problems arose, coursemates' assistance was often not readily available, since few people were doing a similar project and they were more inclined to handle their own related problems.
Abstract: Software testing is one of the most critical phases of the software development life cycle, and the time and cost it consumes are among its most critical limitations. The testing process can be performed manually or automatically. Recently, software test automation has been applied in many software organizations to guarantee software quality and to reduce the cost and time consumed by manual testing. A software test automation framework is an independent application that maximizes the automation effort by facilitating the execution of automated test scripts. Many software test automation frameworks (STAFs) are available in the marketplace, and automation testers face a problem in selecting the STAF that best meets their testing requirements. The main objective of this paper is to provide automation testers with a good understanding of STAFs. This work evaluates each STAF in terms of its scripting approach, features, advantages and disadvantages. Furthermore, it conducts a comparative analysis among STAFs using the essential parameters of automation projects, such as scripting capabilities, time, application size, scripting approach, modularity, scalability, reusability, maintainability, and complexity. This analysis aims to help testers select the best-fit STAF.
Frameworks represent semi-complete code that captures time-tested, highly reusable architectural skeleton design experience, and hence they are very useful in the development of software applications and systems. As per Gamma et al. , famous in the reuse literature as the GoF, an object-oriented framework is a set of cooperating classes that make up a reusable design for a specific class of software; it provides architectural guidance by partitioning the design into abstract classes and defining their responsibilities and collaborations. Being a reusable pre-implemented architecture, a framework is deliberately 'abstract' and 'incomplete', designed with predefined points of variability, known as hot spots, to be customized later at the time of framework reuse . A hot spot contains default and empty interfaces, known as hook methods, to be implemented during customization [3,4]. Applications are built from frameworks by extending or customizing the framework while retaining the original design. New code is attached to the framework through hook methods to alter the behavior of the framework. Hook descriptions provide guidance about how and where to perform the changes in the hook method to fulfill some requirement within the application being developed. With
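The hot-spot and hook-method idea can be illustrated with a minimal sketch; all names below are invented for illustration:

```python
from abc import ABC, abstractmethod

# Minimal framework illustration: the frozen spot (run) fixes the
# overall control flow, while the hook method (process) is the
# predefined point of variability customized by the application
# developer, retaining the framework's original design.

class ReportFramework(ABC):
    def run(self, data):
        """Frozen spot: fixed workflow calling into the hook."""
        cleaned = [d for d in data if d is not None]
        body = self.process(cleaned)      # hook call: application-specific
        return f"REPORT\n{body}"

    @abstractmethod
    def process(self, data):
        """Hook method: empty interface to be implemented on reuse."""
        ...

class TotalsReport(ReportFramework):
    """Application code attached to the framework via the hook."""
    def process(self, data):
        return f"total={sum(data)}"

print(TotalsReport().run([1, None, 2, 3]))
```

Note the inversion of control that distinguishes a framework from a library: the framework's `run` calls the application's `process`, not the other way around.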
The main function of the tool is to assist project managers in evaluating the project duration at an early stage of the project life cycle, taking into account the risk around the project schedule. The tool can be used by those analyzing project schedule risk in project management to predict when a project will finish and to decide whether or not to take it on, which can be a very important issue for organizations and the business areas they work in. The proposed tool has two main parts: simulation and distribution type. The first part allows the user to decide the number of iterations the simulation will apply to a specific project, and the second part lets the user choose the best statistical distribution to be applied to each individual activity of the project. After that, during the second phase of this study, a simulation run using data will be conducted and the results will be compared and analyzed to verify the accuracy of the software. Project success depends on risk analysis tools. The tool will make project managers more confident when developing a new project for the organizations they work in, and it will enhance the risk analysis skills in the organization. With the software tool for schedule risk analysis, software development will capture the key specifications of the project duration. This software can be used for the following functions:
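A minimal sketch of the two parts described, iterating a simulation over per-activity statistical distributions, is shown below. The triangular estimates, activity data and iteration count are invented for illustration, and the distribution choice stands in for the tool's per-activity distribution selection:

```python
import random

# Monte Carlo schedule-risk sketch: sample each activity's duration
# from its chosen distribution (here triangular low/mode/high
# estimates) over many iterations, producing a distribution of the
# total project duration.

def simulate_schedule(activities, iterations=1000, seed=1):
    """activities: list of (low, mode, high) duration estimates in days."""
    random.seed(seed)
    totals = []
    for _ in range(iterations):
        totals.append(sum(random.triangular(lo, hi, mode)
                          for lo, mode, hi in activities))
    return totals

acts = [(2, 3, 6), (4, 5, 9), (1, 2, 4)]   # illustrative activity estimates
totals = sorted(simulate_schedule(acts))
p80 = totals[int(0.8 * len(totals))]        # 80th-percentile finish time
print(round(p80, 1))
```

A percentile of the simulated totals, rather than a single-point estimate, is what lets a manager quote a finish date with a stated confidence level.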