Component-level reliability models

A Survey of Software Reliability Models

The objective is to develop a framework that enables early prediction of software reliability by incorporating reliability measurement at each stage of software development. Leslie et al. [50] state that the ability to predict the reliability of a software system early in its development can help to improve the system's quality in a cost-effective manner. The proposed framework therefore measures and minimizes the complexity of the software design at an early stage of the development lifecycle, leading to a reliable end product. To calculate the reliability of a software product, the reliabilities at the different stages of product development, such as requirements analysis, design, development, testing and implementation, have to be evaluated; this facilitates improving the overall product reliability. It is observed that modifications and error identification during operation and implementation can lead to reengineering of large parts of the system, which has been shown to be costly. Hence, to ensure the quality of the developed system, it is important to ensure quality at each stage of development. The few approaches that do consider component-level reliability (Goseva et al. [51], Reussner et al. [52]) assume that the reliabilities of a given component's elements, such as its services, are known. Reliability prediction is useful in a number of ways: a prediction methodology provides a uniform, reproducible basis for evaluating potential reliability during the early stages of a project, and predictions assist in evaluating the feasibility of proposed reliability requirements and provide a rational basis for design and allocation decisions.
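As a minimal sketch of the stage-wise idea (not the paper's own formulation; the per-stage values below are invented), treating the development stages as a series system multiplies their estimated reliabilities into an overall product figure:

```python
# Minimal sketch (hypothetical values): overall product reliability as the
# product of per-stage reliabilities, treating the stages as a series system.
stage_reliability = {
    "requirements": 0.98,
    "design": 0.97,
    "development": 0.95,
    "testing": 0.99,
    "implementation": 0.99,
}

overall = 1.0
for stage, r in stage_reliability.items():
    overall *= r

print(f"Predicted product reliability: {overall:.4f}")  # ~0.8851
```

Under this assumption, a weak early stage (e.g. design) caps the achievable end-product reliability no matter how well later stages perform, which is the motivation for measuring reliability early.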

Simulation of Reliability of Software Component

time under specified operating conditions. In other words, by estimating or predicting the reliability [1] of a component, the quality of the software product can be estimated. Customer satisfaction depends directly on the quality of that software. The analysis commonly used to describe software reliability is derived from observed failure data, expressed as failure intensity. Failure intensity is defined as the number of failures observed per unit time period, and it is a good measure for reflecting the user's perspective of software quality. As computer applications become more diverse and spread through almost every area of everyday life, reliability becomes a very important characteristic of software and component systems. A reliable component is the basis of the system and of its parts, i.e. the client, the administrator and the working environment. Producing a system with a documented, estimated reliability [2] is a matter of cost and performance; it is therefore necessary to measure the reliability of any software before releasing it. When reliability reaches a threshold level, the software component can be released for further use. To this end, a number of models [3] have been proposed and continue to be developed. Software reliability modeling is a statistical estimation [4] method applied to failure data collected or simulated from a software component, or from a system developed by integrating software components; it can be applied once component testing has been executed so that failure data are available. Newly developed and modified models try to improve the system and help predict reliability accurately. The most important parameters of any software product are its level of quality, its time of delivery and its final cost. Time of delivery and cost are quantitative and decided in advance, whereas quality is difficult to define quantitatively; reliability is one quality attribute, and probably the most important. Software reliability relates directly to the operation and performance of a component rather than to its design.
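As a small illustration of the definition (hypothetical failure times), failure intensity can be computed by counting failures per fixed-length time window:

```python
# Minimal sketch (hypothetical data): failure intensity as the number of
# observed failures per unit time period.
failure_times_hours = [3.2, 7.9, 8.4, 15.0, 21.7, 22.3, 30.1]  # cumulative

def failure_intensity(failure_times, window):
    """Count failures falling in each `window`-hour period."""
    horizon = max(failure_times)
    n_windows = int(horizon // window) + 1
    counts = [0] * n_windows
    for t in failure_times:
        counts[int(t // window)] += 1
    return [c / window for c in counts]  # failures per hour, per window

print(failure_intensity(failure_times_hours, window=10.0))
# [0.3, 0.1, 0.2, 0.1] -- a decreasing trend suggests reliability growth
```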

An Optimization Framework for “Build-or-Buy” Strategy for component Selection in a Fault Tolerant Modular Software System under Recovery Block Scheme

alternative, and only one version will be selected for each alternative of a module. If a component is built in-house, then that alternative of the module is selected. A schematic representation of the software system is given in Figure 1. We select the components for the modules so as to maximize system reliability while simultaneously minimizing cost. The frequency with which the software's functions are used is not the same for all of them, and not all modules are called during the execution of a given function in the software's menu. Software whose failure can have serious after-effects can be made fault tolerant through redundancy at the module level (Belli and Jadrzejowicz [1]). We assume that functionally equivalent and independently developed alternatives (i.e., in-house or COTS) are available for each module, each with an estimated reliability and cost. The first optimization model (optimization model-I) of this paper maximizes system reliability while simultaneously minimizing cost. The model contains four problems, (P1), (P2), (P3) and (P4). Problem (P1) is not in normalized form; it has therefore been normalized and transformed into problems (P3) and (P4). The second optimization model (optimization model-II) addresses compatibility between different alternatives of modules, since it is observed that some COTS components cannot integrate with all the alternatives of another module. The models discussed are illustrated with a numerical example.
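A minimal sketch of the selection idea (invented reliabilities and costs; not the paper's models (P1)-(P4)): choose one alternative per module so as to maximize series-system reliability under a budget.

```python
# Minimal sketch (hypothetical data): build-or-buy selection by brute force.
# Pick one alternative per module to maximize system reliability subject to
# a budget, treating the modules as a series system.
from itertools import product

# (reliability, cost) of each alternative, per module; values are invented.
modules = [
    [(0.95, 12.0), (0.99, 20.0)],                # module 1: in-house vs COTS
    [(0.90, 8.0), (0.97, 15.0), (0.93, 10.0)],   # module 2: three alternatives
]
BUDGET = 32.0

best = None
for choice in product(*modules):
    cost = sum(c for _, c in choice)
    if cost > BUDGET:
        continue
    reliability = 1.0
    for r, _ in choice:
        reliability *= r
    if best is None or reliability > best[0]:
        best = (reliability, cost, choice)

print(best)  # (0.9215, 27.0, ((0.95, 12.0), (0.97, 15.0)))
```

Real formulations of this kind are solved as integer programs rather than by enumeration; the sketch only makes the reliability/cost tradeoff concrete.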

Simultaneous allocation of reliability & redundancy using minimum total cost of ownership approach

To compare the performance of the proposed TCO-based allocation approach with traditional reliability allocation approaches, four mixed-integer nonlinear reliability design problems (P1~P4) are solved. These examples are a series system, a series-parallel system, a complex (bridge) system and an overspeed protection system. All of the above problems are solved separately in three cases. The mathematical formulations of the four reliability-redundancy problems are furnished below.
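As an illustrative sketch (not the paper's exact formulations), the standard objective in such reliability-redundancy problems for a series arrangement of redundant subsystems is R_s = Π_i [1 − (1 − r_i)^n_i], where r_i is the component reliability and n_i the redundancy level of subsystem i:

```python
# Minimal sketch (invented values): system reliability of a series
# arrangement of subsystems, where subsystem i has n_i identical
# redundant components, each of reliability r_i.
def series_parallel_reliability(r, n):
    R = 1.0
    for ri, ni in zip(r, n):
        R *= 1.0 - (1.0 - ri) ** ni  # subsystem works if any copy works
    return R

# e.g. three subsystems with component reliabilities and redundancy levels
print(series_parallel_reliability(r=[0.80, 0.85, 0.90], n=[3, 2, 2]))  # ~0.96
```

The allocation problem then chooses the r_i (continuous) and n_i (integer) jointly, which is what makes P1~P4 mixed-integer nonlinear programs.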

New Paradigm for Software Reliability Estimation

A careful examination of the assumptions listed in Table 1 reveals that the inaccuracy of the predictions of traditional reliability estimation models is mainly due to the unrealistic nature of those assumptions and the absence of mathematical implementation. All traditional software reliability growth models use system test data fitted to some distribution to predict the number of defects remaining in the software; however, real-time data was never actually fitted to these distributions, nor were the distributions actually estimated. Further, the efficacy of each of these models is directly related to its analytical ability, which implies that the number of residual defects predicted by the model should be the same as the actual number found in field use [23]. Under real-time operation this is never the case, and this is the major reason for the inaccurate estimates produced by the traditional models. Having understood the underlying cause of the inaccuracy, we now analyze the foundations of the faulty predictions made by the traditional models. The term software reliability quantifies our confidence in the ability of software to provide acceptable levels of performance under a given operational environment [25]. The inherent probabilistic nature of the term itself is a source of difficulty for software designers and developers. Software performance under a given operational environment can be influenced by a large number of internal and environmental factors, such as schedule pressure, unstructured development practices, resource limitations, volatile and evolving user requirements, and interdependence among modules. All of these factors can negatively impact software reliability estimation and measurement. Further, it is difficult to determine whether the software is as reliable as predicted until it is actually deployed. The above estimation problem stems from the fact that we generally estimate the reliability of a software component during the testing phase on the assumption that its behavior during real-time execution is similar to that during the testing periods when the failure data was collected. However, a hard-to-ignore fact that overrules the

Analysis of the Reliability of a Three-Component System with Two Repairmen

In the present model an important aspect of repair has been considered: how to obtain the reliability measures of a system when two repairmen with different repair rates are involved in repairing jointly. It is not uncommon to see a wide range of performance across repairmen, owing to the high degree of variability that exists within organizations providing the job as well as the diverse range of training and experience among employees. Keeping this fact in view, i.e. two repairmen, a foreman (boss) and an apprentice (assistant), and incorporating human error, the authors study the reliability measures of the system under the assumptions mentioned in the next section. In the present analysis it is assumed that any failure whatsoever is first taken up by the foreman for repair. If another unit fails while he is busy repairing a unit, it will be taken up for repair by the apprentice. Whenever both the repairmen are involved in
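To make the setup concrete, here is a minimal sketch (invented rates, and a simplified birth-death structure rather than the paper's exact model, which also incorporates human error) of a continuous-time Markov chain with foreman rate mu1 and an added apprentice rate mu2 once a second unit is down:

```python
# Minimal sketch (invented rates, simplified from the paper's setup):
# 3 identical units failing at rate lam each; the foreman repairs at mu1,
# and the apprentice joins at mu2 once a second unit is down.
# State k = number of failed units.
import numpy as np

lam, mu1, mu2 = 0.02, 0.5, 0.2
rates = {  # (from_state, to_state): rate
    (0, 1): 3 * lam, (1, 2): 2 * lam, (2, 3): lam,   # failures
    (1, 0): mu1, (2, 1): mu1 + mu2, (3, 2): mu1 + mu2,  # repairs
}

Q = np.zeros((4, 4))
for (i, j), r in rates.items():
    Q[i, j] = r
np.fill_diagonal(Q, -Q.sum(axis=1))  # generator matrix rows sum to zero

# Steady-state probabilities: pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("P(all three units down):", pi[3])
```

Changing mu2 relative to mu1 shows directly how the apprentice's slower rate affects the system-level measures, which is the question the paper poses.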

Delayed maintenance modelling considering speed restriction for a railway section

The track system consists of rails, rail joints, the fastening system, sleepers and ballast. Degradation of these main components may cause dangerous track failures, including track geometry faults and rail failures, which may lead to train derailment. Rail failures include rail profile problems, rail breakages and rail cracks, which can be fixed by rail grinding and rail renewal. Track gauge, cant, level and alignment are the geometry parameters widely used in the literature to describe track geometry condition. There are four major kinds of track geometry faults: track gauge spread, track buckle, track top and twist. Gauge spread due to poor fastening or sleeper condition can be fixed by tie-bars, spot-sleepering and track renewal. Track vertical problems (such as track twist and top) due to poor ballast condition can be controlled by tamping and stoneblowing. In this paper, track vertical geometry problems (track top and twist) are considered the dangerous track problems leading to railway accidents. These vertical geometry problems can be identified when the track cant measurement exceeds a threshold, and in this work the cant evolution is taken as an illustrative and characteristic deterioration process of the railway track geometry.

Analyses of delivery reliability in electrical power systems

These simulations show that the mean duration in the functioning state, or mean time to failure, for load delivery varies in steps based on the power generation capacity. In situations where the capacity limit is reached for one of the branches in a parallel structure, the reliability of the parallel branches behaves like that of a serial structure. Including spinning reserve in the system contributes to increased delivery reliability. In these simulation cases the focus has been on the production units and spinning reserve, but the same principle applies to all the branches in a meshed system. Assume two parallel lines supplying one load branch can each deliver 80 % of the power demand. Then the redundancy only applies up to 80 % load. Above 80 % the parallel power lines act as a series structure, meaning that both lines have to be in operation to transmit power at this level; at this load the grid changes reliability behavior from a parallel structure to a serial structure. This type of change in reliability behavior makes analysis of large meshed grids very complex.
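The 80 % example can be written down directly. A minimal sketch (invented line reliability) of how delivery reliability switches from parallel to series logic with load:

```python
# Minimal sketch (invented numbers): two parallel lines, each able to carry
# 80% of peak demand. Below 80% load one line suffices (parallel logic);
# above 80% both lines are needed (series logic).
def delivery_reliability(load_fraction, r_line=0.99, line_capacity=0.8):
    if load_fraction <= line_capacity:
        return 1.0 - (1.0 - r_line) ** 2   # parallel: either line suffices
    return r_line ** 2                      # series: both lines required

for load in (0.5, 0.8, 0.9):
    print(load, delivery_reliability(load))
# 0.5 -> 0.9999, 0.8 -> 0.9999, 0.9 -> 0.9801: a step change at the limit
```

The step change at the capacity limit is exactly the behavior the simulations report, and in a meshed grid every branch contributes such a threshold, which is why the analysis becomes complex.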

ADAPTIVE COLOR FILTER ARRAY INTERPOLATION ALGORITHM BASED ON HUE TRANSITION AND EDGE DIRECTION

The results of the software architecture analysis with respect to security, component reliability, architecture reliability, adaptability and risk value have been presented.

One Sample Bayesian Predictive Analyses for an Exponential Non Homogeneous Poisson Process in Software Reliability

The Goel-Okumoto software reliability model, also known as the exponential nonhomogeneous Poisson process, is one of the earliest software reliability models to be proposed. From the literature, it is evident that most of the work done on the Goel-Okumoto model concerns parameter estimation using the MLE method and model fit. It is widely known that predictive analysis is very useful for modifying, debugging and determining when to terminate the software development testing process. However, there is a conspicuous absence of literature on both classical and Bayesian predictive analyses for the model. This paper presents some results about predictive analyses for the Goel-Okumoto software reliability model. Driven by the requirement for highly reliable software used in computers embedded in automotive, mechanical and safety control systems, industrial and quality process control, real-time sensor networks, aircraft and nuclear reactors, among others, we address four issues in single-sample prediction associated closely with the software development process. We adopt Bayesian methods based on non-informative priors to develop explicit solutions to these problems. An example with real data in the form of time between software failures is used to illustrate the developed methodologies.
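For reference, the standard form of the model (with illustrative parameter values, not estimates from the paper's data) has mean value function m(t) = a(1 − e^(−bt)), where a is the expected total number of failures and b the per-fault detection rate:

```python
# Minimal sketch of the Goel-Okumoto (exponential NHPP) model:
# m(t) = a * (1 - exp(-b t)) is the expected cumulative number of failures
# by time t; lambda(t) = a * b * exp(-b t) is the failure intensity.
import math

a, b = 120.0, 0.05  # illustrative parameters only

def mean_failures(t):
    return a * (1.0 - math.exp(-b * t))

def intensity(t):
    return a * b * math.exp(-b * t)

t = 40.0
print(f"expected failures by t={t}: {mean_failures(t):.1f}")   # ~103.8
print(f"expected remaining defects: {a - mean_failures(t):.1f}")  # ~16.2
```

Predictive analyses of the kind the paper develops ask questions such as how many further failures to expect in a future interval, which follow from m(t) once the posterior of (a, b) is available.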

Estimation of the Reliability Measures of a Three component System with Human Errors and Common Cause Failures

The present paper discusses the problem of estimating the reliability measures of a three-component identical system when the system is affected by common cause shock (CCS) failures as well as human errors. The maximum likelihood estimators of reliability measures such as the reliability function and the mean time between failures of the present model are obtained. The performance of the proposed estimators is evaluated in terms of mean square error, using simulated data.
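As a sketch of the general methodology only (not the paper's three-component model), the following hypothetical example estimates the mean square error of a maximum likelihood estimator by simulation, here for the rate of exponential times between failures:

```python
# Minimal sketch (invented parameters): Monte Carlo evaluation of an MLE's
# mean square error, for the rate of exponential time-between-failures.
import random

random.seed(1)
true_rate, n, trials = 0.5, 30, 2000
se_sum = 0.0
for _ in range(trials):
    sample = [random.expovariate(true_rate) for _ in range(n)]
    mle = n / sum(sample)            # MLE of an exponential rate
    se_sum += (mle - true_rate) ** 2
print("MSE of MLE:", se_sum / trials)
```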

Determining Path Flows in Networks: Quantifying the Tradeoff between Observability and Inference

consumers. Hopefully, the costs will drop over time as technological advances enable more networks to be highly instrumented. In condition B, budget constraints may have hindered a higher level of sensing, so more reliance on inference is required. This inference may come in the form of engineering judgment, such as applying conservation-of-flow equations; it may also include information derived from driver surveys; finally, traffic assignment models may be applied to the available data to fill in the gaps. In condition B there is a strong call for quantifying the tradeoff between observability and inference, likely dependent on the type of each, so that the greatest number of path flows can be distinguished. For instance, a network with 50 percent of links sensed with loop detectors may be better served by inference in the form of an OD survey with route choice questions rather than by conservation-of-flow equations, since the survey asks what paths a traveler follows through the network; condition B in the survey case is expected to yield a more accurate picture of the path flows than in the conservation-of-flow case. Finally, condition C is the worst possible one, with little or no observed data on which to base an estimate of path flows. In this case the analyst relies heavily on inferring the correct path flows, quite a challenging task with weak substantiation. In summary, quantifying the relationship between observability and inference to create the best estimate of path flows is a key part of this proposed work.
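As a toy illustration of inference from partial sensing (invented incidence matrix and counts, not data from this work), path flows can be estimated from sensed link counts by least squares; the underdetermined cases are precisely where the observability-inference tradeoff bites:

```python
# Minimal sketch (toy network): inferring path flows from observed link
# counts via least squares. A is the link-path incidence matrix; rows are
# sensed links, columns are paths. With too few sensed links the system is
# underdetermined and extra inference (surveys, assignment models) is needed.
import numpy as np

A = np.array([   # 3 sensed links x 2 paths (invented topology)
    [1, 0],
    [1, 1],
    [0, 1],
])
link_counts = np.array([300.0, 500.0, 200.0])  # vehicles/hour, invented

path_flows, *_ = np.linalg.lstsq(A, link_counts, rcond=None)
print("estimated path flows:", path_flows)  # [300. 200.]
```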

Overview of Grid Computing Environments

Research in “2-level” programming models: a basic user interface to middle-tier proxies controlling backend software resources; component models like ICENI (Imperial College) or the DoE Common Com…

Significant Factors for Reliability Estimation of Component Based Software Systems

The primary goals of software security are the preservation of the confidentiality, integrity and availability of the information assets and resources that the software creates, stores, processes or transmits, including the executing programs themselves. In other words, users of secure software have a reasonable expectation that their data is protected from unauthorized access or modification and that their data and applications remain available and stable. Clearly, some applications need a much higher degree of assurance than others. Security constraints impact a project in two ways. Functional security requirements increase the functional size of the software system being developed and need to be treated in the same way as all other functional requirements being met by COTS components or home-grown code. Non-functional security requirements, intended to attain a specific level of security assurance, require additional processes, documentation, testing and verification. COTS components are typically black-box products developed by third parties. Using them in your enterprise's information system can introduce significant security and reliability risks. If your organization uses the Internet, for example, COTS components can leak internal information across a globally connected network.

Reliability Models Applied to Smartphone Applications

In Chapter 4, a description of the data collection process adopted in this work is presented for each of the three chosen mobile applications: Skype, Vtok, and a Windows phone application. The experiment is then pursued by applying the chosen Software Reliability Growth Models to the collected failure data. A discussion of the obtained results is followed by a thorough analysis of why the present models cannot give a satisfactory account of the failure data and the need to reexamine their basic assumptions is stressed. In Chapter 5, a thorough study of newly collected failure data of the same above applications is carried out and two common distributions, Weibull and Gamma, as well as their particular cases, the Rayleigh and S-Shaped, respectively, are used to model the failure data after sorting them by application version number and grouping them into larger time periods. A comparative study of the performance of these distributions, based on error evaluation criteria, is presented and detailed.
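As a minimal sketch of this kind of distribution fitting (invented failure data; the scipy parameterization is one common choice, not necessarily the one used in the thesis):

```python
# Minimal sketch (invented data): fitting a Weibull distribution to
# times-between-failures, as one might for the grouped failure data above.
import numpy as np
from scipy import stats

tbf = np.array([5.0, 9.5, 12.0, 20.0, 26.5, 31.0, 44.0, 60.0])  # hours

# Fix the location at 0 so only the shape (c) and scale are estimated.
shape, loc, scale = stats.weibull_min.fit(tbf, floc=0)
print(f"shape={shape:.2f}, scale={scale:.1f}")

# Reliability at t hours: probability of surviving past t.
t = 24.0
print("R(24h) =", stats.weibull_min.sf(t, shape, loc=loc, scale=scale))
```

A shape parameter near 1 reduces the Weibull to the exponential; the Rayleigh case mentioned above corresponds to shape = 2, so comparing fitted shapes across application versions is a quick diagnostic.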

A SYSTEMATICAL STUDY OF RELIABILITY ON VARIOUS MODELS

The twenty-first century is a century of technologies. Today everyone is affected by these technologies, including industries, which are totally dependent on machines for their routine work. The great challenge for researchers and engineers is to produce highly reliable products at minimum cost; the fastest-growing industries therefore need to select highly reliable systems. The aim of the present paper is to analyze the reliability and the behavior of the mean time to system failure, availability, busy period and profit function with respect to the system parameters (failure rate, repair rate, service rate, etc.). A tabular and graphical study has also been done to highlight the important results.

Synopsis of Software Reliability Growth Models

In sharp contrast with the rapid advancement of hardware technology, the development of software technology has not been able to keep pace in all measures, including quality, efficiency, performance and cost. The market demand for complex software/hardware systems has increased more swiftly than the ability to design, implement, test, deliver and maintain them. As dependence on computers increases, the chances of failure from computer faults also increase. The effect of these failures ranges from inconvenience (e.g., malfunctions of domestic appliances) to monetary losses (e.g., interruptions of financial systems) to loss of life (e.g., failures of medical software or air traffic systems). Needless to say, the reliability of computer systems has become a main concern for our society.

Reliability Evaluation Optimal Selection Model of Component Based System

Component-based system reliability evaluation models include the path-based approach [1,2,19], the state-based approach [3-5,20,21], and the additive model [22]. Path-based approaches [1,2,19] evaluate system reliability by considering all possible execution paths of the software. The set of execution paths is obtained by algorithm, experiment or simulation; the reliability of each path is estimated, and the system reliability is then estimated by calculating the average reliability over all paths. State-based models [4,5,20,21] regard the execution of the software as a state-transfer process [6]. This class of models applies stochastic process theory to the analysis of system reliability. Some typical approaches adopt Markov process theory, assuming that the system's state changes form a Markov process; they include the discrete-time Markov chain model (DTMC) [5,20], the continuous-time Markov chain model (CTMC) [4] and the semi-Markov process (SMP) [21]. Additive models [22] are mainly used in the software testing stage. They assume that the reliability of each component can be modeled by a nonhomogeneous Poisson process (NHPP), so that the failure behavior of the system is also an NHPP [23].
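A minimal sketch in the spirit of the state-based (DTMC) class, following a Cheung-style composition (all values invented): components are states, control transfers between them with known probabilities, and the system succeeds if execution reaches termination with every visited component working.

```python
# Minimal sketch (toy values) of a DTMC/state-based evaluation:
# R[i] is the reliability of component i, P[i][j] the probability that
# control transfers from component i to component j.
import numpy as np

R = np.array([0.99, 0.97, 0.98])        # component reliabilities
P = np.array([                           # control-transfer probabilities
    [0.0, 0.7, 0.3],
    [0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0],                     # component 3 exits to termination
])
exit_prob = np.array([0.0, 0.0, 1.0])    # probability of terminating from i

Rhat = np.diag(R) @ P                    # transfer AND component succeeds
# Expected successful traversals starting from component 1:
N = np.linalg.inv(np.eye(3) - Rhat)
system_reliability = N[0] @ (R * exit_prob)
print(system_reliability)                # ~0.9498
```

The same machinery extends to larger architectures: only R, P and the exit probabilities change, which is why the state-based class scales well to component-based systems.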

Imprecise system reliability and component importance based on survival signature

The concept of the survival signature has recently attracted increasing attention for performing reliability analysis on systems with multiple types of components. It opens a new pathway for a structured approach with high computational efficiency, based on a complete probabilistic description of the system. In practical applications, however, some of the parameters of the system might not be completely defined due to limited data, which implies the need to take imprecision in the component specifications into account. This paper presents a methodology to include the imprecision explicitly, which leads to upper and lower bounds of the survival function of the system. In addition, the approach introduces novel and efficient component importance measures. By computing the relative importance index of each component, with or without imprecision, the most critical component in the system can be identified as a function of the service time of the system. A simulation method based on the survival signature is introduced to deal with imprecision within components; it is precise and efficient. A numerical example is presented to show the applicability of the approach.

Keywords: imprecision; survival signature; system reliability; component importance; sensitivity analysis.
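A minimal sketch of the underlying computation (toy 2-out-of-3 system with a single component type and assumed exponential lifetimes; the signature values are hypothetical): the survival function is P(T_s > t) = Σ_l Φ(l) · P(exactly l components function at t), where Φ(l) is the survival signature.

```python
# Minimal sketch (toy system, one component type): survival function from
# the survival signature Phi(l) = P(system works | exactly l components work).
from math import comb, exp

m = 3
# Signature of a 2-out-of-3 system: it works iff at least 2 components work.
Phi = {0: 0.0, 1: 0.0, 2: 1.0, 3: 1.0}

def component_survival(t, rate=0.1):
    return exp(-rate * t)   # exponential component lifetimes (assumed)

def system_survival(t):
    r = component_survival(t)
    return sum(
        Phi[l] * comb(m, l) * r**l * (1 - r) ** (m - l) for l in range(m + 1)
    )

print(system_survival(5.0))  # ~0.657
```

The separation is the point: Φ captures the structure once, while the component lifetime models can be swapped or made imprecise (interval-valued rates, say) without recomputing the structure, which yields the upper and lower bounds discussed above.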

Reliability analysis techniques explored through a communication network example

The use of phase-type distributions dates back to the pioneering work of Erlang on congestion in telephone systems at the beginning of the twentieth century [7]. His approach (named the method of stages), although simple, was very effective in dealing with non-exponential distributions and has been considerably generalized since then. The age (repair time) of a component is assumed to consist of a combination of stages, each of which is exponentially distributed. The whole process becomes Markovian provided that the description of the state of the system contains the information as to which stage of the component state duration has been reached. The division into stages is an operational device and may not necessarily have any physical significance, and any distribution with a rational Laplace transform can, in principle, be represented exactly by a phase-type expansion.
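A minimal sketch of the method of stages (assumed rates): an Erlang repair time built from k exponential stages in series, with each stage rate chosen as k·mu so the overall mean stays 1/mu; the coefficient of variation falls as 1/sqrt(k), allowing repair times less variable than an exponential.

```python
# Minimal sketch: the method of stages represents a non-exponential repair
# time as k exponential stages in series (an Erlang distribution).
import math
import random

def sample_erlang(k, mu):
    """Sum of k exponential stages, each with rate k*mu (mean 1/mu total)."""
    return sum(random.expovariate(k * mu) for _ in range(k))

def erlang_pdf(t, k, mu):
    rate = k * mu
    return rate**k * t ** (k - 1) * math.exp(-rate * t) / math.factorial(k - 1)

random.seed(0)
samples = [sample_erlang(k=4, mu=0.5) for _ in range(10000)]
print("empirical mean:", sum(samples) / len(samples))  # ~ 1/mu = 2.0
print("pdf at t=2:", erlang_pdf(2.0, k=4, mu=0.5))
```

Tracking which stage a repair has reached is exactly the extra state information that makes the overall process Markovian, as described above.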
