In our work, GA and Parallel GA are used to generate test data. To investigate the performance of both algorithms, we applied them to real-world problems. The CFG of each program is constructed manually from its source code, and all feasible paths are identified manually. We implemented the Genetic Algorithm and the Multi-Population Genetic Algorithm (P3PGA) in MATLAB and tested their performance on various MATLAB programs. The programs used in our work are given below:
forward sigmoidal neural networks. It does not use any encoding scheme for the networks but rather uses the networks themselves as the chromosomes. Montana and L. Davis explained that multilayered feed-forward neural networks possess a number of properties which make them particularly suited to complex pattern classification problems; they also explained the concepts of genetics and neural networks. Alba et al. proposed training neural networks with GA hybrid algorithms. They suggested the concept of weak hybridization (simply the combination of two algorithms) by introducing and testing a GA with the BP algorithm (GABP) and a GA with LM (GALM). In both cases the problem-specific algorithm (BP or LM) is used as a mutation-like operation of the general search template (GA). H. Kitano compared genetic algorithms (GA) with BP (back propagation) and presented a hybrid approach, GA-BP, which proved faster than GA alone. GA was found to be as efficient as the faster variants of BP in small-scale networks but less efficient in large networks. J. N. D. Gupta et al. and Zhen Guo Che et al. compared standard EBP with GA for optimizing artificial neural networks; GA was found to be better than EBP in effectiveness, ease of use and efficiency in training NNs. V. Saishanmuga Raja et al. compared three optimization techniques, GA, ACO and PSO, in a biomedical application based on processing time, accuracy and the time taken to train neural networks. The authors concluded that GA outperformed the other two algorithms, ACO and PSO, and is the most suitable for training the neural network with minimum time and minimum mean square error. Richa Mahajan et al. combined neural networks with a genetic algorithm, proposing a genetic algorithm implementation that gives a maximal approximation of the problem while reducing cost.
There is already a number of related works that deal with test case generation for MC/DC coverage. Jones and Harrold introduce two strategies for generating MC/DC-compliant test cases: the first is based on a breakdown algorithm, whilst the second is based on a prioritization algorithm. At the start, both strategies generate the exhaustive set of MC/DC pairs as the basis for selection. In the first strategy, test candidates are selected by iteratively identifying essential test cases, where a test case's essentiality is established by summing its contribution towards MC/DC coverage; in each iteration, the least contributing test case is systematically removed, leaving only the remaining candidates available for selection. In the second strategy, selection is also done iteratively: in each iteration, the candidate test cases are prioritized by greedy ordering, that is, the candidate covering the most pairs is chosen first. The iteration stops when no more pairs are available for selection. Although helpful, both strategies appear unsuitable for handling large predicates owing to the need to generate all exhaustive MC/DC pairs.
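The greedy prioritization strategy described above can be sketched as follows. This is a hedged illustration rather than Jones and Harrold's actual algorithm: the test-case ids, the input format, and the function name are invented for this sketch, and covering a pair is simplified to selecting both of its member test cases.

```python
def greedy_mcdc_selection(pairs):
    """Greedily pick test cases until every MC/DC pair is covered.

    `pairs` is a collection of 2-element frozensets of test-case ids;
    here a pair counts as covered once both of its members are selected.
    """
    remaining = set(pairs)
    selected = []
    while remaining:
        # Score each not-yet-selected candidate by how many uncovered
        # pairs it would contribute to (the greedy ordering).
        scores = {}
        for pair in remaining:
            for tc in pair - set(selected):
                scores[tc] = scores.get(tc, 0) + 1
        best = max(scores, key=lambda tc: scores[tc])
        selected.append(best)
        # Drop pairs whose members are now both selected.
        remaining = {p for p in remaining if not p <= set(selected)}
    return selected
```

The loop terminates because every remaining pair has at least one unselected member, so each iteration selects a new test case; the exhaustive pair generation up front is exactly what limits the approach for large predicates.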
V. CONCLUSION AND FUTURE WORK Software testing is necessary to validate the requirements and to maintain software quality. This paper has presented a comparison of the DFS and BFS algorithms for producing automatic test cases from ADs and SDs. Suitable test cases are produced by both algorithms, and the test cases produced from the AD, SD and STG graphs are given. The activity-diagram-based coverage criterion is considered for test case generation, but the resulting test cases cannot display the whole process; for example, information about filling in the application is missing during testing, and the absence of standard wording in the diagram makes further processing onerous. The test cases produced from the SD screen only the system section, yet they display the whole information of the business process inside the system. The test cases produced from the integrated graph show the features of both AD and SD, but contain unnecessary information. Optimization of the test cases using a genetic algorithm will be combined with the current work.
The first group of methods used a conventional GA. Pargas et al. (1999) present a goal-oriented technique for automatic test-data generation that uses a GA and the Control Dependence Graph (CDG) of the software; the algorithm can be executed in parallel on multiple processors to reduce execution time. In another approach, Girigis (2005) proposes an automatic test case generation method for structural testing that exploits a GA and the data flow dependencies of the program, improving the effectiveness of the test cases. Similarly, Alzabidi et al. (2009) propose a GA-based technique for structural testing that uses a path coverage criterion and an improved fitness function. In another work, Srivastava and Kim (2009) propose a method that improves the efficiency of software testing by recognizing the most critical path in the software code.
The common prefix indicates the users' common events, i.e. the same or similar operations; it also shows that the users share the same or similar interests. The longer the common prefix is, the more evident this is, as is the case for most users. In addition, there is a special group of user sessions whose greatest common prefix of requested URL traces is shortest. This group of user sessions often contains quite different URL requests, which represent distinct requirements for a Web application. In these sessions, many aberrant events occur with unwonted input data; they are boundary cases, on which the Web application can very easily go wrong. Herein, we prioritize test suites: the test suite with the shortest common prefix ranks first, and the remaining test suites are arranged in increasing order of common-prefix length, so the test suite in the final position is the one with the longest common prefix. In the test suites S1, S2, S3 and S4, the lengths
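The prioritization described above can be sketched in a few lines. This is an illustrative assumption about the data layout, not the paper's implementation: suite names, the representation of URL traces as lists of strings, and both helper functions are hypothetical.

```python
def common_prefix_len(traces):
    """Length of the greatest common prefix of a list of URL traces."""
    if not traces:
        return 0
    n = 0
    # zip stops at the shortest trace; advance while all traces agree.
    for urls in zip(*traces):
        if any(u != urls[0] for u in urls):
            break
        n += 1
    return n

def prioritize_suites(test_suites):
    """Rank suites so the one with the shortest common prefix of its
    sessions' URL traces comes first (ascending prefix length)."""
    return sorted(test_suites,
                  key=lambda name: common_prefix_len(test_suites[name]))
```

Suites whose sessions diverge immediately (prefix length 0) surface first, matching the intuition that such boundary-case sessions are the most failure-prone.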
One area where SBSE has seen much application is test data generation. Search-based test data generation techniques have been applied to automatically generate data for testing functional and non-functional properties of software. For structural testing, the criterion used is most often branch coverage; however, a single criterion is not enough for effective testing, and for wider acceptance of search-based test data generation techniques, much stronger criteria like MCDC are needed. Structural testing provides confidence in the correct functioning of the software in its intended environment. Experiments have been performed using simulated annealing to generate test data for simple programs, with the parameters of simulated annealing optimized through a small set of experiments. This concept can be extended to include other search-based algorithms and to test programs with difficult branching structures. Comparisons are made with the results obtained from genetic algorithm versus random testing. The experimental results show that the generated test cases give higher structural coverage, including condition, statement, MCDC and multiple condition coverage.
In 2012, Nirpal et al. showed that genetic algorithms can be used to automatically generate test cases for path testing. Using a triangle classification program as an example, experimental results show that Genetic Algorithm based test data generation is more effective and efficient than the existing method. The quality of test cases produced by genetic algorithms is higher than that of randomly produced test cases, because the algorithm can quickly direct the generation of test cases towards the desirable range. The paper shows that genetic algorithms are useful in meaningfully reducing the time required for lengthy testing by generating test cases for path testing. The paper also presents a novel approach to generate automated test paths. Owing to delays in software development, testing has to be done in a short time; this has led to the automation of testing, because of its efficiency and lower manpower requirements. In this proposed approach, by using one of the most standard Unified Modelling Language (UML) diagrams, the Activity Diagram,
The results show that the proposed methodology is more efficient than the simple Genetic Algorithm: it requires fewer generations to find the test cases. It could be made more efficient still by applying fuzzy logic along with the Genetic Algorithm for test case generation in data flow testing.
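The GA-based path testing discussed in the surrounding paragraphs can be sketched minimally using the classic triangle-classification example. Everything here is an illustrative assumption rather than any cited author's implementation: the toy classifier, the branch-distance-style fitness steering the search towards the "equilateral" path, and all parameter values.

```python
import random

def classify(a, b, c):
    """Toy triangle classifier used as the program under test."""
    if a == b and b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

def fitness(ind):
    """Branch-distance-style fitness for the 'equilateral' path:
    smaller is better, and 0 means the target path is executed."""
    a, b, c = ind
    return abs(a - b) + abs(b - c)

def ga_search(pop_size=30, generations=200, seed=1):
    """Minimal GA: elitism, tournament selection, one-point crossover,
    single-gene mutation. Parameter values are illustrative."""
    rng = random.Random(seed)
    pop = [[rng.randint(1, 10) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        if fitness(pop[0]) == 0:          # target path reached
            return pop[0]
        nxt = pop[:2]                     # elitism: keep the two best
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 3), key=fitness)   # tournament
            p2 = min(rng.sample(pop, 3), key=fitness)
            cut = rng.randint(1, 2)
            child = p1[:cut] + p2[cut:]                 # one-point crossover
            if rng.random() < 0.3:                      # mutation
                child[rng.randrange(3)] = rng.randint(1, 10)
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)
```

With these toy settings the search routinely reaches the target path, which is the mechanism behind the claim that GA-generated test data converges on the desired range faster than random generation.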
The most striking feature of the SDLC is software testing. It is a very labour-intensive and expensive process in software development, handling and maintenance. The main objective of this paper is to extend the testing technique. Testing aims to show incorrectness and is considered to succeed when an error is detected [Myers79]. Today, automatic testing has replaced manual testing to a great extent; automating testing is very helpful in reducing the human effort needed to generate test cases or test data. Test data or test case generation is a very tiresome task in software testing. A test case has multiple sets of values or data that are used to test the functionality of a particular feature; all of the test values and conditions are maintained in separate files and stored as test data. Test case or data generation is a set of conditions or rules that are developed for finding the failure points in software under development. Nowadays, many researchers have paid considerable attention to test data generation techniques. This paper adopts a case study and proposes a technique for test data generation based on a genetic algorithm using the critical path. Critical path testing is considered to solve the looping problem and improve testing efficiency. The test data scenario is derived from a sequence diagram, which reveals the sequence of calls in a system through the exchange of messages among the system's objects.
Abstract. Distribution is a challenging and interesting problem to be solved. Distribution problems have many facets because they are highly complex, with variants such as multi-level with one product and one-level with multiple products, and the cost objective itself has several different versions. This study proposes an adaptive genetic algorithm that proves able to obtain more efficient and promising results than the classical genetic algorithm. As an extension of the previous study, it applies an adaptive genetic algorithm to the combined problem of multi-level distribution with various products, considering both the fixed cost and the variable cost of each product at each distribution level. By using the adaptive genetic algorithm, the complexity of multi-level, multi-product distribution problems can be handled. In terms of cost, the adaptive genetic algorithm produces promising results compared to the existing algorithm.
Among the operational characteristics of GA, elitism preserves the best chromosomes (test patterns) by reducing the genetic drift between them; these chromosomes are allowed to pass their characteristics to the next generation. Genetic drift is the stochastic change in gene frequency caused by random sampling in a finite population. Some genes of a few chromosomes turn out to be more critical to the final solution than others, while a chromosome encoding a decision variable that does not influence the final solution will not be under selection pressure. Hence it is necessary to apply the minimum selection pressure required by the application. If genetic drift takes over, the required selection pressure can be restored by raising the elitism count or the tournament size. Selection pressure can be increased by elitism, which halts the loss of 'salient' genes of
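The relationship between tournament size and selection pressure mentioned above can be illustrated with a small simulation. This is an assumption-laden sketch, not code from any cited paper: for a minimization problem, a larger tournament yields winners with lower average fitness, i.e. stronger selection pressure.

```python
import random

def mean_winner_fitness(pop_fitness, tournament_size, rounds=2000, seed=0):
    """Average fitness of tournament winners (minimization), estimated
    by sampling; a lower mean indicates stronger selection pressure."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(rounds):
        # One tournament: draw `tournament_size` individuals, best wins.
        total += min(rng.sample(pop_fitness, tournament_size))
    return total / rounds

# A hypothetical population whose fitness values are simply 1..100.
fitnesses = list(range(1, 101))
weak = mean_winner_fitness(fitnesses, 2)    # small tournament
strong = mean_winner_fitness(fitnesses, 8)  # large tournament
```

Raising the tournament size (or the elitism count) is thus a direct knob for the selection pressure the paragraph above argues must be tuned per application.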
Improvements to the alignment algorithm can be evaluated against the full-search selection algorithm. The evaluation results from section 3.3 suggest, however, that the margin for improvement here is very small. Thus, we do not expect any improvements here to bring serious boosts in overall performance. Nevertheless, we plan to investigate one possible modification to the greedy search. It can be argued that each newly induced link in a sentence pair should affect the decisions regarding which links to select further in the alignment process for this sentence pair. This can be simulated to a certain extent by the introduction of a simple re-scoring module to the aligner. Each time a new link has been selected, this module will be used to recalculate the scores of the remaining links, considering the restrictions on the possible word-level alignments introduced by this link, e.g. that words within the spans of the nodes being linked cannot be aligned to words outside those spans.
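A hedged sketch of greedy link selection with such a re-scoring hook follows. The link representation, the scores, and the `penalize_crossing` module are invented for illustration and stand in for the paper's actual score recalculation over node spans.

```python
def greedy_align(candidates, rescore=None):
    """Greedy link selection over scored candidate links.

    `candidates` maps (src_index, trg_index) pairs to scores. After each
    selection, links reusing either word are dropped (one-to-one
    alignment), and an optional `rescore` hook may adjust the scores of
    the remaining links, mimicking a re-scoring module.
    """
    remaining = dict(candidates)
    links = []
    while remaining:
        best = max(remaining, key=lambda l: remaining[l])
        links.append(best)
        s, t = best
        remaining = {l: sc for l, sc in remaining.items()
                     if l[0] != s and l[1] != t}
        if rescore is not None:
            remaining = rescore(links, remaining)
    return links

def penalize_crossing(links, remaining):
    """Invented re-scoring module: halve the score of links that would
    cross the most recently selected link."""
    s, t = links[-1]
    return {l: (sc * 0.5 if (l[0] - s) * (l[1] - t) < 0 else sc)
            for l, sc in remaining.items()}
```

The hook is called after every selection, so each induced link can influence all subsequent decisions, which is exactly the modification the paragraph proposes.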
A common objective of testing is to detect all or most modeled faults. Although fault coverage has a somewhat nonlinear relationship with the tested product's quality or defect level (parts per million), for practical reasons fault coverage continues to be a measure of test quality. The increase in design complexity and reduced feature sizes have elevated the probability of manufacturing defects in the silicon. These defects can result from shorts between wires/vias, breakage in wires/vias, transistor opens/shorts, etc. Fault diagnosis is the process of finding the fault candidates from the erroneous response. Any vector that can produce different responses for two different faults is called a distinguishing vector for those faults. Hence, to reduce the number of fault candidates, a test set that is able to distinguish between all distinguishable faults is highly desirable. The process of generating such distinguishing patterns is termed Diagnostic Pattern Generation. The goal of automatic diagnostic pattern generation (ADPG) is to generate a set of test patterns that both detects all the detectable faults and makes fully distinguishable all (detectable) faults that are not equivalent to each other. In general, we prefer such a set to contain a small number of vectors. Most test generation systems are built around a core ATPG algorithm for
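The notions of distinguishing vectors and a small diagnostic test set can be sketched as a greedy cover. This is illustrative only: real ADPG operates on circuit-level fault simulation, whereas `responses` here is an assumed precomputed table of fault responses.

```python
from itertools import combinations

def diagnostic_set(responses):
    """Greedily pick vectors until every distinguishable fault pair is
    distinguished. `responses[fault][vector]` is the assumed (simulated)
    response of the circuit with `fault` injected under `vector`."""
    faults = list(responses)
    vectors = list(next(iter(responses.values())))
    # Pairs that some vector can tell apart; equivalent faults (same
    # response on every vector) are skipped.
    todo = {(f, g) for f, g in combinations(faults, 2)
            if any(responses[f][v] != responses[g][v] for v in vectors)}
    chosen = []
    while todo:
        # Greedy choice: the vector that distinguishes the most
        # still-confused pairs (it is a distinguishing vector for a pair
        # when the two faults produce different responses under it).
        best = max(vectors,
                   key=lambda v: sum(responses[f][v] != responses[g][v]
                                     for f, g in todo))
        chosen.append(best)
        todo = {(f, g) for f, g in todo
                if responses[f][best] == responses[g][best]}
    return chosen
```

Greedy selection keeps the vector count small, reflecting the stated preference for compact diagnostic pattern sets.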
The system is simultaneously tuned by including the 5 scenarios shown in Table I in the tuning process. These scenarios represent the system under severe conditions on the major weak tie lines that cause low-frequency oscillations. In the tuning process, the gains of each PSS were set up with bounds ranging from 0 to 40, and the PSS time constants with bounds ranging from 0 to 5 seconds. k2, k4 and k5 were fixed at 400, 40 and 40 respectively; these weights were set so as to distribute the importance of each objective function equally. The population size P was set at 5 and the maximum generation N was 60 for each subpopulation. The system counter was 10.
The C2M-transformation, as shown in Figure 3, can be divided into two steps. First, a Web Application (WA) parser parses the source code and creates the corresponding WebParseTree (the DOM tree). Second, the WA generator, based on the resulting DOM tree, generates a corresponding WAPD model. This model conforms to the WAPD meta-model, which defines all the information necessary for generating test cases. In addition, a constraint validator, which is part of the WAPD meta-model, is used while generating the model. The constraints define the connection rules between nodes and the dependencies within the proposed graph, so as to produce a proper WAPD model.
During spacecraft integrated testing, partly because of the complexity of the spacecraft, one test program is often required to operate many computers and components of the spacecraft. Conducting multi-task parallel testing on the same spacecraft may leave those computers or components in an uncertain state. Therefore, when testing a spacecraft, it is impossible to directly group the test tasks and run the tasks within each group in parallel. The key point is thus how to analyze the running status of test programs, determine their partially independent parts, and finally realize parallel testing between those independent parts. Since spacecraft test cases are numerous and complex, it is almost impossible for testers to analyze the test programs one by one and determine their internal mutual dependencies. Consequently, current spacecraft automated testing systems mostly use the traditional sequential testing method. Given the characteristics of spacecraft, how to conduct parallel testing and how to build the runtime environment need to be studied further.
Different research works on regression testing proposed by various researchers are briefly analyzed in this section. Harrold et al. proposed a technique which emphasizes the effects of changes within a module when the software is modified: data flow graphs are used to identify the affected definition-use pairs, and sub-paths are not necessarily used. As retesting is performed only for the affected def-use paths and the new paths, the test effort is comparatively reduced, which is an added advantage of this technique. The technique was later expanded so that it can also identify affected procedures at the inter-procedural level. Three other researchers, Laski, Benedusi and Prather, have also proposed techniques based upon control flow graphs that work for both procedures and functions in identifying the affected control paths in a module. H. K. J. Leung and L. White introduced the firewall concept, which encloses the affected modules arising from a module modification; a call graph is defined according to the concept of a control-related firewall. Here the test effort is reduced by restricting retesting to the modules and links in the firewall of the changed module. Other significant test approaches have also been introduced, such as the top-down, bottom-up and sandwich approaches, in which the tester performs selections to minimize test effort and cost.
Genetic Algorithms are modern and powerful search techniques that have been used successfully to solve difficult problems in optimization, neural networks, pattern recognition, robotics, data mining, etc. The premature convergence of the simple GA led research towards various Parallel Genetic Algorithms (PGAs) that achieve a balance between exploration and exploitation of the search space. The main objective of this paper is to review and present the important research works on PGAs in a unified manner. To point out possible directions for further research, the paper also highlights unresolved problems that have remained unaddressed or have not been studied in a systematic manner. After covering the initial benchmark studies, the main focus is on relatively recent research advances. Finally, promising parallel approaches, the Gradual Distributed GA and Hierarchical Genetic Algorithms, are discussed, using crossover and migration operators with different degrees of exploration and exploitation to solve difficult search problems.
In this paper we take as the test hypothesis that implementations, too, can be described as an IOLTS IS=(Σ', LO, σ', M-step'). The test hypothesis thus allows us to reason about implementations as if they were IOLTSs. Given specifications and implementations, the next important step is to define what it means for an implementation to conform to a specification; otherwise no useful test case can ever be generated. Conformance is defined by means of a conformance/implementation relation between the models of implementations and the specifications [1,8].
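As a toy illustration of such a conformance relation, the sketch below checks plain trace inclusion between two labeled transition systems. This is a drastic simplification of the relations cited above (e.g. ioco compares allowed outputs after suspension traces, not just traces), and the state names and transition encoding are assumptions for this example.

```python
def traces(lts, state, depth):
    """All action sequences of length <= depth from `state`.
    `lts` maps a state to a list of (action, next_state) pairs."""
    result = {()}
    if depth == 0:
        return result
    for action, nxt in lts.get(state, []):
        for t in traces(lts, nxt, depth - 1):
            result.add((action,) + t)
    return result

def conforms(impl, spec, start="s0", depth=5):
    """Toy conformance: every implementation trace is also a
    specification trace (trace inclusion; real implementation relations
    are finer than this)."""
    return traces(impl, start, depth) <= traces(spec, start, depth)
```

Under this relation an implementation may resolve the specification's choices (offering only coffee where the specification allows coffee or tea) but must not exhibit behavior the specification never allows.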