•Diagnosis: in this step, SFDR decides whether the system is in good health or needs to be healed. If the system is infected or in a defective state, a solution is required and suggested. SFDR provides diagnosis for the software in an automated manner, covering many cases such as failure of the software to run. SFDR corrects the fault by reverting the software component to its original state, which heals the software. Using the information recorded during the preprocess phase, SFDR can diagnose the following cases: deletion of a component that causes the system to fail to run, change of a component by an external factor (human or non-human), replacement of an original component, or addition of an external component to the software folder.
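A minimal sketch of this kind of component diagnosis, assuming a baseline of component digests recorded during a preprocess phase (the function names and the hash-based comparison are illustrative assumptions, not SFDR's actual mechanism):

```python
import hashlib
from pathlib import Path

def hash_file(path):
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def diagnose(folder, baseline):
    """Compare a software folder against a baseline {filename: digest} map.

    Returns a diagnosis per component: 'deleted', 'changed', or 'added'.
    Healing would then copy the original component back over the faulty one.
    """
    findings = {}
    current = {p.name: hash_file(p) for p in Path(folder).iterdir() if p.is_file()}
    for name, digest in baseline.items():
        if name not in current:
            findings[name] = "deleted"      # component removed: system fails to run
        elif current[name] != digest:
            findings[name] = "changed"      # component altered by an external factor
    for name in current:
        if name not in baseline:
            findings[name] = "added"        # external component added to the folder
    return findings
```
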
Whenever a mismatch is detected, the corresponding counter is incremented by one. Using this configuration, counter ij will contain the number of mismatches between modules i and j after Lsc clock cycles. In the SMERTMR technique, upon completion of the comparison mode, the Fault Locator Unit (FLU) determines the faulty modules using the fault module detection algorithm, outlined in Algorithm 1. As can be seen, if all counters are zero, there is no faulty module and consequently the system returns to its normal mode. The condition statement in line 3 checks for the existence of one faulty module: as discussed earlier, if there is only one faulty module, two out of three counters will have the same non-zero value while the third counter will be equal to zero. The condition statement in line 6 checks for the existence of two faulty modules with no common faulty flip-flops. In the last two cases, the system enters the recovery mode to restore the correct state of the faulty modules using the state of the fault-free modules. If none of the previous conditions holds, the system enters the unrecoverable condition. The FLU stores the faulty module numbers in a register named the faulty modules register (FMR); if the FMR is equal to 110, it means that modules I and II are faulty. This information is used by the SMERTMR controller during the recovery mode. It is worth mentioning that in SMERTMR, instead of directly comparing and voting the outputs of the three scan chains, we first make sure that we have correctly identified the fault-free module. If one directly compares and votes the outputs of the three scan chains, it is possible that two out of three replica flip-flops are erroneous and a wrong state is written back to all three modules.
In this case, the system will continue to work in a wrong state. Such a condition is not acceptable in safety-critical applications.
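The counter conditions described above can be sketched as follows. This is an illustrative reconstruction of the FLU logic, not the paper's Algorithm 1; c12, c13, c23 denote the mismatch counters between the corresponding module pairs, and the two-faulty-module test assumes disjoint faulty flip-flops, so one counter equals the sum of the other two:

```python
def locate_faulty_modules(c12, c13, c23):
    """Classify the fault situation from pairwise mismatch counters.

    Returns the set of faulty module numbers, or None when the
    situation is unrecoverable.
    """
    if c12 == c13 == c23 == 0:
        return set()                      # no faulty module: normal mode
    # One faulty module: the two counters involving it share the same
    # non-zero value, while the remaining counter is zero.
    if c12 == c13 != 0 and c23 == 0:
        return {1}
    if c12 == c23 != 0 and c13 == 0:
        return {2}
    if c13 == c23 != 0 and c12 == 0:
        return {3}
    # Two faulty modules with no common faulty flip-flops: all counters
    # are non-zero and one equals the sum of the other two.
    if c12 and c13 and c23:
        if c12 == c13 + c23:
            return {1, 2}
        if c13 == c12 + c23:
            return {1, 3}
        if c23 == c12 + c13:
            return {2, 3}
    return None                           # unrecoverable condition
```

Under this sketch, the FMR content 110 mentioned in the text would correspond to the result {1, 2}: modules I and II faulty.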
Cipher level fault analysis. Khanna et al. [KRH17] recently proposed XFC, a framework for exploitable fault characterization in block ciphers. It takes a cipher specification as input and analyzes it w.r.t. DFA by coloring the fault propagation throughout the cipher state. While the authors show that this approach works when analyzing a high-level representation of a cipher, it is not sufficient to discover vulnerabilities that are implementation specific. Agosta et al. [ABPS14] utilized an approach that works on intermediate representations in order to identify single bit-flip vulnerabilities in the code. While this approach takes the analysis one level lower, it still aims at detecting spots that can be exploited from the cipher level instead of finding implementation-specific vulnerabilities. Hardware level analysis. Dureuil et al. [DPP+16] presented a fault model inference
In wireless sensor networks, multi-hop routing is commonly performed through a routing tree. Eventually, the routing tree needs to be rebuilt to accommodate failures, balance the energy consumption, or improve data aggregation. Most current solutions do not detect when the routing topology needs to be rebuilt. Existing work shows it is important to provide failure recovery and to avoid unnecessary traffic when the routing topology is rebuilt. This work presents an inference engine, called Diffuse, designed to detect when the routing topology needs to be rebuilt based on different goals, such as recovering from routing failures, improving data aggregation, and balancing the energy consumption. The Diffuse approach efficiently avoids unnecessary topology constructions. The authors use information/data fusion to detect routing failures, which is a different and promising approach: information fusion techniques can reduce the amount of data traffic, filter noisy measurements, and make predictions and inferences about a monitored entity by exploiting the synergy among the available data.
is given in Table I. The number of variables instrumented for each module accounted for no less than 90% of the total number of variables in that module. All code locations where an instrumented variable could be read were instrumented for fault injection. Those variables and locations not instrumented were associated with execution paths that would not be executed under normal circumstances, e.g., test routines. Fault injection was used to determine the spatial and temporal impact associated with each software module. The Propagation Analysis Environment was used for fault injection and logging. A golden run was created for each test case, where a golden run is a reproducible fault-free run of the system for a given test case, capturing information about the state of the system during execution. Bit-flip faults were injected at each bit position for all instrumented variables. Each injected run entailed a single bit-flip in a variable at one of these positions, i.e., no multiple injections. For FG, each single bit-flip experiment was performed at 3 injection times uniformly distributed across the 2200 simulation loop iterations that followed system initialisation, i.e., 600, 1200 and 1800 control loop iterations after initialisation. For 7Z and M3, each single bit-flip experiment was performed at 25 distinct injection times uniformly distributed across the 25 time units of each test case. The state of all modules used in the execution of all test cases was monitored during each fault injection experiment. The data logged during fault injection experiments was then compared with the corresponding golden run, with any deviations being deemed erroneous and thus contributing to variable importance.
D. Failure Specification
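The single bit-flip corruption of a variable can be sketched as follows. This is a generic illustration of the fault model, not the Propagation Analysis Environment's actual interface; the word width and function names are assumptions:

```python
def flip_bit(value, bit_position, width=32):
    """Return value with exactly one bit flipped, within a fixed word width."""
    assert 0 <= bit_position < width
    return (value ^ (1 << bit_position)) & ((1 << width) - 1)

def injected_values(golden_value, width=32):
    """Enumerate every single bit-flip corruption of one variable value,
    one per bit position (no multiple injections)."""
    return [flip_bit(golden_value, b, width) for b in range(width)]
```

Each experiment would then run the system with one corrupted value at one injection time and compare the logged state against the golden run.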
The basic objective of e-software testing comprises executing the application using combinations of inputs and state to reveal failures. Here a failure means an error in the system due to which it cannot perform a designated task within the stipulated time. Errors are classified as faults in the application or anomalies in the running environment: the running environment mainly affects performance, stability, or compatibility, while the application is responsible for the functional requirements. Web application testing therefore has to be considered from two distinct perspectives. The first perspective verifies the conformance of the application with all non-functional requirements; the second considers the problem of testing functional requirements. These two perspectives are complementary but not mutually exclusive.
Developers used a combination of the ranking tool (especially for locating faults) and conventional troubleshooting (especially for understanding faults). Automated test cases are executed during automated debugging. An automated debugging tool can suggest many promising starting places for developers exploring unfamiliar code. Even when the tool could not pinpoint the correct location of the fault, it displayed appropriate code entry points and thereby helped program understanding. Developers quickly disregarded the tool if they felt they could not trust the results or understand how those results were computed. If they were provided with such details, developers would be able to explore the failure in a more methodical and data-driven manner. When using these tools, rather than working with the familiar and reliable step-by-step approach of a traditional debugger, developers are currently presented with a set of apparently disconnected statements and no additional support.
Software fault tolerance demands additional tasks like error detection and recovery through executable assertions, exception handling, and diversity- and redundancy-based mechanisms. These mechanisms do not come for free; rather, they introduce additional complexity on top of the core functionality. This paper presents lightweight error detection and recovery mechanisms based on the rate of change in signal or data values. Maximum instantaneous and mean rates are used as plausibility checks to detect erroneous states and recover from them. These plausibility checks are exercised in a novel aspect-oriented software fault-tolerant design framework that reduces the additional logical complexity. A Lego NXT Robot based case study has been completed to demonstrate the effectiveness of the proposed design framework.
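A rate-of-change plausibility check of this kind can be sketched as follows. This is a minimal illustration assuming a fixed sampling period; the class name and the recovery policy (hold the last accepted value) are assumptions, not the paper's design:

```python
class RateCheck:
    """Executable assertion: flag samples whose instantaneous rate of
    change exceeds what is physically plausible for the signal."""

    def __init__(self, max_rate, dt):
        self.max_rate = max_rate   # largest plausible |dv/dt| for the signal
        self.dt = dt               # fixed sampling period
        self.last_good = None

    def check(self, value):
        """Return a plausible value: the sample itself, or the last
        accepted sample when the rate of change is implausible."""
        if self.last_good is not None:
            rate = abs(value - self.last_good) / self.dt
            if rate > self.max_rate:
                return self.last_good   # recovery: reject the erroneous sample
        self.last_good = value
        return value
```

For example, a sensor reading jumping from 10.5 to 500.0 within one 0.01 s tick would be replaced by the last accepted value.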
In this paper we consider the relationship between two published works with a view to better understanding the design of efficient EDMs. The first work developed a metric suite to assess the vulnerability of software, and was the first to address the EDM design problem from a variable-centric perspective. The approach produces a total ordering, known as an importance ranking, on the set of variables in a particular software module, according to the impact that these variables have on the software system as a whole. The second work proposed a machine learning approach for efficient EDM generation, which uses fault injection data to generate efficient predicates represented as tree structures. It would be beneficial for a software engineer to understand how the importance ranking is reflected in these trees, as this can lead to the development of templates for designing efficient error detection predicates.
A. Contributions
No single fault-detection technique is capable of addressing all fault-detection concerns. One such technique is static analysis, the process of evaluating a system or component based on its form, structure, content, or documentation, which does not require program execution. Inspections are an example of a classic static analysis technique that relies on the visual examination of development products to detect errors, violations of development standards, and other problems. Tools are increasingly being used to automate the identification of anomalies that can be removed via static analysis, such as coding standard non-compliance, uncaught runtime exceptions, redundant code, inappropriate use of variables, division by zero, and potential memory leaks. We term the use of static analysis tools automated static analysis (ASA). Henceforth, the term “inspections” refers to manual inspections. ASA may enable software engineers to fix faults before they surface more publicly in inspections or as test and/or customer-reported failures. In this paper, we report the results of a study into the value of ASA as a fault-detection technique in the software development process.
Existing Self-configuring Algorithm- In a WSN, sensor nodes are arranged in a virtual grid structure in which the network nodes are divided into several cells. One node in each cell is selected as the cell manager. The cell managers form the upper level of the grid, and the remaining nodes occupy the lower level. A large virtual group can be formed by several virtual cells, and these groups can contain hundreds to thousands of sensor nodes. A group manager is appointed for each virtual group and is responsible for managing and organizing the sensor nodes in its group. The group managers from the different groups create another virtual grid structure, shown in Figure 5. The top level of the management hierarchy is the sink, which sits above the group managers. We refer to the algorithm of M. Asim et al. as the existing algorithm. This self-configuring algorithm follows a cellular approach [31, 32]. In the self-detection mechanism, sensor nodes monitor their residual battery energy periodically
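The per-cell manager election can be illustrated with a small sketch. Selecting the node with the highest residual energy is an assumption consistent with the periodic energy self-monitoring described, not necessarily the exact criterion of the existing algorithm:

```python
def elect_cell_managers(nodes):
    """Appoint one cell manager per cell.

    nodes: list of dicts with 'id', 'cell', and 'energy' (residual battery
    energy, as monitored periodically by each node).
    Returns {cell: node_id}, choosing the highest-energy node in each cell.
    """
    best = {}
    for node in nodes:
        cell = node["cell"]
        if cell not in best or node["energy"] > best[cell]["energy"]:
            best[cell] = node
    return {cell: node["id"] for cell, node in best.items()}
```
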
with respect to the largest correlation between a given fault-free fabric image and the texture structure of input fabric images. This approach differs from the other conventional inspection methods. Also, the output cartoon structure from the ID method is identified for examination and visual inspection. Third, defective manually labeled image databases of dot-, star-, and box-patterned fabrics are newly constructed for performance evaluation; most earlier literature lacked such databases and simply counted the quantity of white pixels in the resultant images to determine accuracies. Fourth, a rigorous performance assessment is conducted on the databases. Based on the defective manually labeled images, the proposed method achieves fault-detection accuracies of 94.9% to 99.6%. We also apply an FPR-CPR graph (FPR stands for false positive rate and CPR stands for correct positive rate) analysis, which is new in the literature, to show the robustness of the proposed method compared to the other methods.
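The two rates plotted on such a graph can be computed from per-pixel confusion counts against the manually labeled ground truth (a generic sketch; the variable names are illustrative):

```python
def fpr_cpr(tp, fp, tn, fn):
    """False positive rate and correct positive rate from pixel counts.

    tp/fn: defective pixels detected / missed by the method;
    fp/tn: fault-free pixels wrongly flagged / correctly passed.
    """
    fpr = fp / (fp + tn)   # fraction of fault-free pixels wrongly flagged
    cpr = tp / (tp + fn)   # fraction of defective pixels correctly detected
    return fpr, cpr
```

Sweeping a detection threshold and plotting one (FPR, CPR) point per threshold yields the FPR-CPR curve.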
The software engineering literature contains many studies of the efficacy of fault finding techniques. Few of these, however, consider what happens when several different techniques are used together. We show that the effectiveness of such multi-technique approaches depends upon quite subtle interplay between their individual efficacies and the dependence between them. The modelling tool we use to study this problem is closely related to earlier work on software design diversity. The earliest of these results showed that, under quite plausible assumptions, it would be unreasonable even to expect software versions that were developed ‘truly independently’ to fail independently of one another. The key idea here was a ‘difficulty function’ over the input space. Later work extended these ideas to introduce a notion of ‘forced’ diversity, in which it became possible to obtain system failure behaviour even better than could be expected if the versions failed independently. In this paper we show that many of these results for design diversity have counterparts in diverse fault detection in a single software version. We define measures of fault finding effectiveness, and of diversity, and show how these might be used to give guidance for the optimal application of different fault finding procedures to a particular program. We show that the effects upon reliability of repeated applications of a particular fault finding procedure are not statistically independent; in fact such an incorrect assumption of independence will always give results that are too optimistic. For diverse fault finding procedures, on the other hand, things are different: here it is possible for effectiveness to be even greater than it would be under an assumption of statistical independence. We show that diversity of fault finding procedures is, in a precisely defined way, ‘a good thing’, and should be applied as widely as possible.
The new model and its results are illustrated using some data from an experimental investigation into diverse fault finding on a railway signalling application.
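The claim that repeated applications of one procedure are not independent can be illustrated numerically. With a difficulty function giving, per demand, the probability that a single application finds the fault, the true probability that two applications of the same procedure both miss is the mean of (1−θ)², which is never smaller than the square of the mean of (1−θ), the value a naive independence assumption predicts (the θ values below are made up for illustration):

```python
# difficulty function: per-demand probability that one application of a
# fault finding procedure detects the fault (illustrative values only)
theta = [0.05, 0.20, 0.50, 0.80, 0.95]

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

# true probability that two applications of the SAME procedure both miss:
# the difficulty is shared, so the misses are positively correlated
p_miss_true = mean((1 - t) ** 2 for t in theta)

# what an (incorrect) independence assumption would predict
p_miss_indep = mean(1 - t for t in theta) ** 2

assert p_miss_true >= p_miss_indep   # independence is always optimistic
```

The gap between the two quantities is exactly the variance of 1−θ over the demand space, which is why the optimism vanishes only when the difficulty function is constant.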
These tools ensure that an application does not give errors, and they are used to test the functionality of the application. The user interface of an application might change regularly because of incompatibilities between browsers and server or client platforms. Some tools that make it easier to build and execute automated tests for web applications are given below.
In recent years, many studies on artificial neural networks have been carried out with the aim of developing intelligent fault diagnosis and investigating potential applications in pattern recognition. It is common to train a neural network using samples so that it can learn the required input-output characteristics and classify unknown input patterns. This type of neural network is based on supervised learning, including back-propagation (BP) neural networks, fuzzy networks, probabilistic neural networks, etc.; they are commonly used in fault diagnosis. However, only the patterns that occur in the training samples can be classified; if a new pattern is presented to the neural network, an incorrect result will be given. Both new patterns combined with the original training samples and renewed training are needed in order to enable the neural network to recognize new patterns. Therefore, a neural network based on supervised learning cannot function without training samples. To overcome this issue, some unsupervised neural networks have been developed, including self-organizing competitive neural networks, self-organizing feature map neural networks, and adaptive resonance theory networks; they are all used for implementing pattern recognition without training samples. Regarding this matter, an adaptive resonance theory (ART) neural network can not only recognize objects in a way similar to a brain learning autonomously, but can also solve the plasticity-stability dilemma. Its algorithm can accept new input patterns
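The plasticity-stability behaviour can be sketched with a heavily simplified ART1-style procedure for binary patterns. This is a textbook-style simplification (vigilance test plus fast-learning prototype intersection), not a full ART implementation; the function name and threshold are assumptions:

```python
def art1_fit(patterns, vigilance=0.7):
    """Assign each binary pattern (a set of active feature indices) to a
    category, creating a new category when no existing prototype matches.

    Returns the list of category indices, one per input pattern.
    """
    prototypes = []          # one binary prototype per learned category
    assignments = []
    for p in patterns:
        p = set(p)
        chosen = None
        for i, w in enumerate(prototypes):
            # vigilance test: fraction of the input explained by the prototype
            if len(p & w) / len(p) >= vigilance:
                prototypes[i] = p & w        # resonance: fast-learning update
                chosen = i
                break
        if chosen is None:
            prototypes.append(p)             # plasticity: learn a new category
            chosen = len(prototypes) - 1     # without overwriting old ones
        assignments.append(chosen)
    return assignments
```

A novel pattern thus opens a fresh category instead of being forced into an existing (and wrong) class, which is exactly what a purely supervised classifier cannot do without retraining.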
Areas in 3-D Magnetic Resonance Images (MRI) of the brain can be labeled using protocols for manually segmenting and marking structures. For large cohorts, the expertise and time required make such an approach impractical. To achieve automation, a single segmentation can be propagated to another subject using an estimated anatomical correspondence linking the atlas image to the target image. The accuracy of the resulting target labeling has been established, but it can be enhanced by aggregating multiple segmentations using a decision fusion process.
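The simplest decision fusion rule, per-voxel majority voting over the propagated segmentations, can be sketched as follows (an illustrative baseline; the paper's fusion process may be more sophisticated, e.g. weighted by registration quality):

```python
import numpy as np
from collections import Counter

def vote_fuse(label_maps):
    """Fuse several propagated atlas label maps by per-voxel majority vote.

    label_maps: list of integer label arrays, all of the same shape.
    Returns one fused label array of that shape.
    """
    stacked = np.stack([np.asarray(m) for m in label_maps])
    flat = stacked.reshape(len(label_maps), -1)
    # most common label at each voxel across all propagated segmentations
    fused = np.array([Counter(col).most_common(1)[0][0] for col in flat.T])
    return fused.reshape(stacked.shape[1:])
```
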
In the above work we tried to build an efficient fault detection and recovery algorithm that not only identifies a faulty node but at the same time tries to find an alternative path in the network. Sensor networks always suffer from issues such as limited network lifetime and faulty node discovery and recovery. Existing techniques were investigated, and a new feasible and efficient solution was developed. Our work is based on a fault node recovery algorithm that uses grade diffusion with a genetic algorithm to find the faulty nodes and replace them with active nodes. The proposed routing technique introduces a new master-and-slave scheme to discover and recover the routing path efficiently. As this technique makes use of primary and secondary paths and data segmentation, only a few sensor nodes have to be replaced in the whole network. The results show that the proposed system efficiently finds all possible faults and repairs routes compared to existing fault-tolerant techniques in WSNs.
This study followed the tenets of the Declaration of Helsinki and was approved by the Khon Kaen University Ethics Committee for Human Research. Written informed consent from the patients to review their fundus photographs was not required by the ethics committee. The fundus photographs had no link to the patients’ identities, and the researchers respected the privacy of the patients. A total of 400 fundus images with a 45° field of view from the KKU Eye Center, Faculty of Medicine, Khon Kaen University were initially diagnosed and classified based on DR severity into three groups, no DR, NPDR and PDR, by the ophthalmologist (TR). The conceptual framework of this study was to develop the software by top-down programming. MATLAB R2015a with the MATLAB Image Processing Toolbox (The MathWorks Inc., Natick, MA, USA) was used to extract the clinically significant features of DR pathologies and classify the severity of DR. The accuracy of the software was measured by comparing the obtained results with the reference standard, the diagnosis from the ophthalmologist. The sensitivity, specificity, positive predictive value and negative predictive value of this software were also reported.
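The four reported measures follow directly from a 2x2 comparison of the software's output against the ophthalmologist's reference standard. A small sketch (generic formulas; the counts below are illustrative, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table.

    tp/fn: diseased images flagged / missed by the software;
    fp/tn: healthy images wrongly flagged / correctly passed.
    """
    return {
        "sensitivity": tp / (tp + fn),  # diseased images correctly detected
        "specificity": tn / (tn + fp),  # healthy images correctly passed
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }
```
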
From the above literature review, we find that the techniques are segregated into different software modules without much accuracy. Our software will overcome the above challenges and also optimize the existing algorithms for scanning the prescription. It will also provide a hands-on, user-friendly interface, with features such as voice-over assistance and other healthcare services, all integrated on a single platform. It will keep track of the medical profile of the patient and display it graphically in the form of demographic data.