on the performance of the interval estimation; that is, larger sample sizes lead to narrower confidence intervals and higher coverage probabilities. This indicates that the estimators of the parameters are consistent. The coverage probabilities show no clear pattern with respect to changes in the true parameter values. However, the coverage probabilities of all the confidence intervals exceed 0.95 (greater than the nominal confidence coefficient), which indicates that the interval estimation is reliable. The confidence intervals for parameter ( ) are skewed to the right, while the intervals for parameter ( ) are left-aligned. As a natural consequence, an increased censoring rate results in slower convergence of the estimates, inflated MSEs, wider confidence intervals and smaller coverage probabilities. However, the effects of the left-censored observations are less severe for larger sample sizes. Further, for a fixed sample size and censoring rate, higher true values of the parameters degrade the performance (in terms of MSEs, convergence rate and widths of confidence intervals) of the estimates. This leads to the conclusion that estimating extremely large values of the parameters of the Burr type III distribution may become difficult, and that the Fisher information matrix may be a decreasing function of the parameters; moderate to large sample sizes, however, can offset this problem.
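The sample-size effect described above can be illustrated with a small Monte Carlo sketch. This is not the authors' simulation design: the Burr III parameter values and the distribution-free order-statistic confidence interval for the median are illustrative assumptions, used only to show intervals narrowing as n grows while coverage stays near the nominal level.

```python
# Monte Carlo sketch for the Burr III distribution, F(x) = (1 + x**-c)**-k.
# Parameter values and the CI construction are illustrative assumptions.
import math
import random

C, K = 2.0, 1.5                                  # hypothetical true shape parameters
true_median = (0.5 ** (-1.0 / K) - 1.0) ** (-1.0 / C)

def rburr3(n, rng):
    """Inverse-transform sampling: x = (u**(-1/k) - 1)**(-1/c)."""
    return [(rng.random() ** (-1.0 / K) - 1.0) ** (-1.0 / C) for _ in range(n)]

def median_ci(sample):
    """Distribution-free ~95% CI for the median from order statistics."""
    xs = sorted(sample)
    n = len(xs)
    d = 1.96 * math.sqrt(n) / 2.0                # normal approx. to Binomial(n, 1/2)
    lo = max(int(math.floor(n / 2.0 - d)) - 1, 0)
    hi = min(int(math.ceil(n / 2.0 + d)), n - 1)
    return xs[lo], xs[hi]

def study(n, reps=400, seed=1):
    rng = random.Random(seed)
    hits, width = 0, 0.0
    for _ in range(reps):
        lo, hi = median_ci(rburr3(n, rng))
        hits += lo <= true_median <= hi
        width += hi - lo
    return hits / reps, width / reps

cov_small, w_small = study(30)
cov_large, w_large = study(120)
# Larger n yields narrower intervals with coverage near (or above) 0.95.
```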


The acceptance of a software system depends on its reliability. Assessing reliability with classical hypothesis testing takes ever more time as the volume of data grows day by day. The data can be transformed using order statistics, which deal with ordered random variables and functions of those variables. Sequential analysis, a branch of statistical science, is very quick at deciding whether developed software is reliable or unreliable. The method adopted here is the Sequential Probability Ratio Test (SPRT) for continuous monitoring of the software. The likelihood-based SPRT proposed by Wald is very general and can be used with many different probability distributions. In this paper, the mean value function of the Burr type III distribution based on a Non-Homogeneous Poisson Process (NHPP), combined with order statistics and the Sequential Probability Ratio Test, is applied to analyze the results. Maximum Likelihood Estimation (MLE) is used to derive the unknown parameters of the mean value function.
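The SPRT decision rule described above can be sketched as follows. This is a minimal illustration under assumed values: the Burr III mean value function parameters, the hypothesized intensity ratio and the error rates are all hypothetical, not taken from the paper.

```python
# Sketch of Wald's SPRT on cumulative failure counts from an NHPP whose
# mean value function is a scaled Burr III CDF, m(t) = a*(1 + t**-c)**-k.
# All parameter values below are illustrative assumptions.
import math

def m(t, a=100.0, c=1.0, k=1.0):
    """Burr III based mean value function of the NHPP."""
    return a * (1.0 + t ** -c) ** -k

def sprt_decision(n_obs, t, lam=2.0, alpha=0.05, beta=0.05):
    """Continue / reliable / unreliable, from the Poisson log-likelihood ratio.

    H0: mean m(t) (reliable)  vs  H1: mean lam * m(t) (unreliable).
    """
    log_a = math.log((1.0 - beta) / alpha)       # upper (reject) boundary
    log_b = math.log(beta / (1.0 - alpha))       # lower (accept) boundary
    llr = n_obs * math.log(lam) - (lam - 1.0) * m(t)
    if llr >= log_a:
        return "unreliable"
    if llr <= log_b:
        return "reliable"
    return "continue"

# Example: by t = 1 the model expects m(1) = 50 failures; observing only
# 40 drives the log-likelihood ratio below the lower boundary.
decision = sprt_decision(n_obs=40, t=1.0)
```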

Here we present a technique that uses Maximum Likelihood Estimation (MLE) within the SPRT to detect reliable software. Wald's SPRT procedure distinguishes two classes into which software under test can fall: reliable or unreliable, pass or fail, certified or uncertified. The SPRT can provide a statistically optimal correct decision in a shorter time than other tests with equivalent decision errors. Based on the estimated likelihood under each hypothesis, the procedure detects fault-prone software systems. The Burr type III distribution model, along with the principle of Stieber [1], is considered as a software reliability model to identify reliable or unreliable software and thus accept or reject the developed software.

From Table 2 it can be seen that the value of SSE is smaller and the value of R-square is closer to 1. The results indicate that our NHPP Burr type III distribution model based on the fault detection rate fits the given datasets best and predicts the number of residual faults in the software most accurately.
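The two fit statistics cited from Table 2 are standard quantities; a minimal sketch of how they are computed (with made-up observed/predicted counts, not the paper's data) is:

```python
# SSE and R-square between observed and model-predicted cumulative
# failure counts.  The numbers below are illustrative, not from Table 2.
observed  = [5.0, 12.0, 20.0, 26.0, 30.0]
predicted = [6.0, 11.0, 19.0, 27.0, 29.5]

sse = sum((o - p) ** 2 for o, p in zip(observed, predicted))
mean_obs = sum(observed) / len(observed)
sst = sum((o - mean_obs) ** 2 for o in observed)
r_square = 1.0 - sse / sst                       # closer to 1 means a better fit
```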

Assessment of software reliability is a vital aspect of the software development process. Software reliability is the probability that given software functions work without failure in a specific environment during a specified time. It can be assessed using Statistical Process Control (SPC), a quality-control method that uses statistical techniques to control and monitor a software process and thereby contributes significantly to the improvement of software reliability. Control charts are widely used SPC tools for monitoring software quality. The proposed model involves estimating the parameters of the mean value function; these values are then used to develop the control charts. The Maximum Likelihood Estimation (MLE) method is used to derive the estimators of the distribution. In this paper we propose a mechanism to monitor software quality based on order statistics of cumulative observations of time-domain failure data, using the mean value function of the Burr type III distribution based on a Non-Homogeneous Poisson Process.
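Control limits in such charts are conventionally placed at the 0.00135, 0.5 and 0.99865 probability levels. A sketch of computing them from the Burr III CDF, with illustrative parameter values standing in for the MLEs the paper would plug in:

```python
# Shewhart-style control limits from the Burr III CDF,
# F(t) = (1 + t**-c)**-k, at probabilities 0.00135, 0.5 and 0.99865.
# The shape parameters are illustrative assumptions.
def burr3_inv(q, c=2.0, k=1.5):
    """Inverse CDF: t = (q**(-1/k) - 1)**(-1/c)."""
    return (q ** (-1.0 / k) - 1.0) ** (-1.0 / c)

LCL = burr3_inv(0.00135)   # lower control limit
CL  = burr3_inv(0.5)       # centre line
UCL = burr3_inv(0.99865)   # upper control limit
# Points of the plotted failure statistic outside (LCL, UCL) would
# signal an out-of-control software process.
```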


The analysis in this study involves the characteristics of the Burr Type III and XII distributions, the Maximum Likelihood Estimation (MLE) and Expectation-Maximization (EM) algorithm approaches, and the copula method. The work develops a methodology for simulating complete and censored data from the 2- and 3-parameter Burr Type III and XII distributions, based on their characteristics and on the derivation of parametric forms of the probability density function (pdf) and cumulative distribution function (cdf) of the distributions. It also develops the simulation process for censored data and estimates the 3-parameter Burr Type XII distribution for censored data as well as the 2- and 3-parameter Burr Type III distribution, since other researchers have mostly focused on the 2-parameter Burr Type XII distribution.
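The core of such a simulation methodology is inverse-transform sampling from the Burr III CDF followed by a censoring pass. A minimal sketch, in which the shape parameters and the Type-I censoring point are illustrative assumptions:

```python
# Inverse-transform sampling from the 2-parameter Burr III distribution,
# F(x) = (1 + x**-c)**-k, plus simple Type-I (right) censoring.
# Parameters and the censoring time are illustrative assumptions.
import random

def simulate_burr3(n, c, k, seed=0):
    rng = random.Random(seed)
    # If U ~ Uniform(0,1), then (U**(-1/k) - 1)**(-1/c) ~ Burr III(c, k).
    return [(rng.random() ** (-1.0 / k) - 1.0) ** (-1.0 / c) for _ in range(n)]

def censor_type1(data, t0):
    """Return (value, observed-flag) pairs: values above t0 are censored at t0."""
    return [(min(x, t0), x <= t0) for x in data]

complete = simulate_burr3(1000, c=2.0, k=1.5)
censored = censor_type1(complete, t0=2.0)
prop_censored = sum(not obs for _, obs in censored) / len(censored)
```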


Software reliability is assessed quantitatively by Software Reliability Growth Models (SRGMs) for tracking and measuring the growth of reliability. Software reliability is the probability of failure-free operation during a specified period in a specified environment. To improve the reliability and quality of the selected process, the execution of the software process must be controlled, and the accepted choice for monitoring a software process is Statistical Process Control (SPC). This helps professionals identify anomalies while monitoring the process and take the necessary action. In this paper we propose a control mechanism based on cumulative observations of time-domain data using the mean value function of the Burr type III distribution, which is based on a Non-Homogeneous Poisson Process (NHPP). Maximum likelihood estimation is used to estimate the unknown parameters of the model. The failure data are analyzed with the proposed mechanism and the results are exhibited through control charts.
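For time-domain failure data, the NHPP log-likelihood that such maximum likelihood estimation maximizes has a standard form. A sketch of evaluating it for the Burr III mean value function, with hypothetical failure times and parameter values (not the paper's data):

```python
# Time-domain NHPP log-likelihood: log L = sum_i log(lambda(t_i)) - m(t_n),
# with Burr III mean value function m(t) = a*(1 + t**-c)**-k and
# intensity lambda(t) = m'(t).  All numbers are illustrative assumptions.
import math

def m(t, a, c, k):
    return a * (1.0 + t ** -c) ** -k

def intensity(t, a, c, k):
    # d/dt of a*(1 + t**-c)**-k = a*k*c * t**(-c-1) * (1 + t**-c)**(-k-1)
    return a * k * c * t ** (-c - 1.0) * (1.0 + t ** -c) ** (-k - 1.0)

def log_likelihood(times, a, c, k):
    return sum(math.log(intensity(t, a, c, k)) for t in times) - m(times[-1], a, c, k)

failure_times = [0.5, 1.2, 2.1, 3.4, 5.0]        # hypothetical cumulative times
ll = log_likelihood(failure_times, a=20.0, c=1.2, k=0.8)
```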

This model is a three-parameter Burr type distribution discussed in two versions, the Burr Type III and the Burr Type XII. In this paper, the performance of the two versions of the suggested model is compared on five real software failure data sets. The versions perform with variable accuracy, which suggests that no universal "best" between the two versions of the model can be identified.

The remainder of the paper is organized as follows. Maximum likelihood estimation is presented in Section II. Bayesian estimation under SEL and LINEX loss functions is described in Section III. In Section IV, E-Bayesian estimation under SEL and LINEX loss functions is introduced. The MCMC method is presented in Section V. In Section VI, illustrative examples, a real data set and conclusions are reported.

The choice of plotting position formula for fitting distributions has been discussed many times in the hydrology and statistics literature. Different plotting positions attempt to achieve near quantile-unbiasedness for different distributions. In this paper, the focus is on finding the best plotting position formula for fitting the Pearson Type III (PIII) distribution. To determine which plotting position formula is the most suitable for the PIII distribution, the probability plot correlation coefficient test, RMSE and RMAE were used. The parameters of each distribution were estimated using the method of moments.

2.0 PEARSON TYPE III DISTRIBUTION
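The plotting position formulas being compared are members of one family, p_i = (i - A)/(n + 1 - 2A). A minimal sketch of three common choices (the sample size here is an illustrative assumption):

```python
# Common plotting position formulas as members of the family
# p_i = (i - A) / (n + 1 - 2A), for rank i = 1..n.
def plotting_positions(n, a):
    return [(i - a) / (n + 1.0 - 2.0 * a) for i in range(1, n + 1)]

n = 10
weibull    = plotting_positions(n, 0.0)    # i/(n+1), quantile-unbiased
gringorten = plotting_positions(n, 0.44)   # suited to Gumbel-type fits
cunnane    = plotting_positions(n, 0.40)   # general-purpose compromise
# Each list pairs with the ordered sample to form the probability plot
# whose correlation coefficient (PPCC) the comparison evaluates.
```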


a parallel system of k components whose lifetimes are independently and identically distributed random variables, each with a common CDF F(x), if k is an integer. Extending this concept to non-integer values of k as well, many researchers in the recent past have made extensive studies of models of the type [ ( )] generated by a basic well-known model F(x). Such new models are called exponentiated models by some authors and generalised models by others. For instance, if the basic F(x) is exponential, [ ( )] is named the generalised exponential (Gupta & Kundu, 1999); if F(x) is Weibull, [ ( )] is named the exponentiated Weibull (Mudholkar and Srivastava, 1993). Building on this notion, the generalised Rayleigh distribution was studied by Raqab and Kundu (2006) as a revisit of the Burr type X distribution. Its cumulative distribution function (CDF), probability density function (PDF) and hazard function are given by ( ) ( ) ; x>0, k>0 (1.4)
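The exponentiated construction described above is simply G(x) = [F(x)]^k. A minimal sketch with an exponential base, which yields the generalised exponential of Gupta & Kundu (1999); the rate parameter below is an illustrative assumption:

```python
# Exponentiated-model construction: new CDF G(x) = [F(x)]**k from a
# base CDF F.  The exponential base rate is an illustrative assumption.
import math

def base_cdf(x, lam=1.0):
    """Exponential base model F(x) = 1 - exp(-lam * x)."""
    return 1.0 - math.exp(-lam * x)

def exponentiated_cdf(x, k, lam=1.0):
    """Generalised exponential CDF G(x) = [F(x)]**k."""
    return base_cdf(x, lam) ** k

g = exponentiated_cdf(1.0, k=2.0)                # (1 - e**-1)**2
```

For integer k this is exactly the CDF of the lifetime of a parallel system of k i.i.d. components, since the system survives until its last component fails.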


If other parameters are involved, they are assumed to be known; for example, if the shape parameter of a distribution is unknown, it is very difficult to design the acceptance sampling plan. In quality control analysis, the scale parameter is often called the quality parameter or characteristic parameter. Therefore it is assumed that the distribution function depends on time only through the ratio t/σ.
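How the ratio t/σ enters an acceptance sampling plan can be sketched as follows. This is a generic time-truncated single sampling plan, not the source's exact design; the Burr III form of F and all numeric values are illustrative assumptions.

```python
# Time-truncated single acceptance sampling sketch: accept the lot when
# at most `acc` failures occur among n items tested to time t.  With
# scale sigma, the per-item failure probability is p = F(t/sigma); the
# Burr III F and all numbers here are illustrative assumptions.
import math

def burr3_cdf(x, c=2.0, k=1.5):
    return (1.0 + x ** -c) ** -k

def prob_acceptance(n, acc, t, sigma):
    p = burr3_cdf(t / sigma)                     # per-item failure probability
    return sum(math.comb(n, d) * p ** d * (1.0 - p) ** (n - d)
               for d in range(acc + 1))

pa = prob_acceptance(n=20, acc=2, t=1.0, sigma=2.0)
```

Because F depends on time only through t/σ, better quality (larger σ) lowers p and raises the acceptance probability.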


We provide two applications to illustrate the importance, potentiality and flexibility of the WGBXII model. For these data, we compare the WGBXII distribution with the BXII, Marshall-Olkin BXII (MOBXII), Topp Leone BXII (TLBXII), Zografos-Balakrishnan BXII (ZBBXII), five-parameter beta BXII (FBBXII), BBXII, beta exponentiated BXII (BEBXII), five-parameter Kumaraswamy BXII (FKwBXII) and KwBXII distributions given in Afify et al. (2018), Yousof et al. (2018a, b), Altun et al. (2018a, b) and Yousof et al. (2019). Data set I, the breaking stress data, consists of 100 observations of the breaking stress of carbon fibres (in GPa) given by Nichols and Padgett (2006). Data set II, the leukaemia data, gives the survival times, in weeks, of 33 patients suffering from acute myelogenous leukaemia (see the data sets in Appendix A). We consider the following goodness-of-fit statistics: the Akaike information criterion (AIC), Bayesian information criterion (BIC), Hannan-Quinn information criterion (HQIC) and consistent Akaike information criterion (CAIC), where
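The four criteria named above are computed from the maximised log-likelihood, the number of parameters k and the sample size n. A sketch using their standard forms (the numeric inputs are illustrative, not results from the paper):

```python
# AIC, BIC, HQIC and CAIC from a maximised log-likelihood.  The
# log-likelihood value, k and n below are illustrative numbers.
import math

def info_criteria(loglik, k, n):
    aic  = -2.0 * loglik + 2.0 * k
    bic  = -2.0 * loglik + k * math.log(n)
    hqic = -2.0 * loglik + 2.0 * k * math.log(math.log(n))
    caic = -2.0 * loglik + 2.0 * k * n / (n - k - 1.0)   # small-sample corrected AIC
    return {"AIC": aic, "BIC": bic, "HQIC": hqic, "CAIC": caic}

crit = info_criteria(loglik=-140.0, k=4, n=100)
# Smaller values of each criterion indicate a better-fitting model.
```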


To elicit a prior density, Aslam (2003) forms some new methods based on the prior predictive distribution. For the elicitation of hyper-parameters, he considers prior predictive probabilities, the predictive mode and the confidence level. In this study, the method of prior predictive probabilities is used to obtain the hyper-parameters of the considered informative prior. In effect, the prior predictive removes the uncertainty in the parameter(s) to reveal a distribution for the data alone. We require that the elicited prior predictive probabilities satisfy the laws of probability, because these laws ensure that the expert is consistent in eliciting the probabilities; otherwise inconsistencies may arise, though these are not usually serious.
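The mechanics of matching a hyper-parameter to an elicited prior predictive probability can be sketched numerically. The setup below is an illustrative assumption, not Aslam's exact model: exponential data with a Gamma(a, b) prior on the rate give a Lomax prior predictive, and with a fixed we solve for b by bisection.

```python
# Hyper-parameter elicitation sketch: solve for b so that the prior
# predictive probability P(X <= x0) = 1 - (b/(b + x0))**a matches the
# expert's elicited value.  Model and numbers are illustrative assumptions.
def predictive_prob(b, a=2.0, x0=1.0):
    return 1.0 - (b / (b + x0)) ** a

def elicit_b(target, a=2.0, x0=1.0, lo=1e-6, hi=1e6, tol=1e-10):
    """Bisection: predictive_prob is decreasing in b."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if predictive_prob(mid, a, x0) > target:
            lo = mid          # probability too large -> need larger b
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

b_hat = elicit_b(target=0.6)
```

With several elicited probabilities, the same idea extends to solving for several hyper-parameters simultaneously.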



An experimenter might decide to group the test units into several sets, each an assembly of test units, and then run all the test units simultaneously until the first failure occurs in each group. Such a censoring scheme is called a first-failure censoring scheme. Jun et al. [6] discussed a sampling plan for a bearing manufacturer: the bearing test engineer decided to save test time by testing 50 bearings in sets of 10 each, and the first-failure times from each group were observed. Wu et al. [7] and Wu and Yu [8] obtained maximum likelihood estimates (MLEs), exact confidence intervals and exact confidence regions for the parameters of the Gompertz and Burr type XII distributions, respectively, based on first-failure-censored sampling. If an experimenter wishes to remove some sets of test units before observing the first failures in those sets, the life test plan is called a progressive first-failure-censoring scheme, which was recently introduced by Wu and Kuş [9].
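First-failure censoring is easy to illustrate by simulation: split the units into groups and record only each group's minimum lifetime, as in the 50-bearings-in-sets-of-10 example. The Burr XII lifetimes and group sizes below are illustrative assumptions.

```python
# First-failure censoring sketch: n_groups * group_size units, only the
# first failure time of each group is observed.  The Burr XII lifetime
# model and its parameters are illustrative assumptions.
import random

def rburr12(rng, c=2.0, k=1.5):
    # Burr XII inverse CDF: x = ((1 - u)**(-1/k) - 1)**(1/c)
    u = rng.random()
    return ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)

def first_failure_sample(n_groups, group_size, seed=7):
    rng = random.Random(seed)
    return [min(rburr12(rng) for _ in range(group_size))
            for _ in range(n_groups)]

firsts = first_failure_sample(n_groups=5, group_size=10)
```

The resulting minima carry less information per unit than a complete sample, which is exactly the trade-off against the saved test time.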


The software reliability growth model is one of the fundamental techniques for assessing software reliability quantitatively. A number of testing-effort functions for modeling software reliability based on the non-homogeneous Poisson process (NHPP) have been proposed over the past decades. Although these models are quite helpful for software testing, more testing-effort considerations still need to be incorporated into software reliability modeling. This paper develops a software reliability growth model based on the non-homogeneous Poisson process which incorporates a Burr Type III testing-effort function. This scheme has a flexible structure and may cover many earlier results on software reliability growth modeling. Model parameters are estimated by the maximum likelihood and least squares estimation methods, and software reliability measures are investigated through numerical experiments on actual data from three software projects. The results are compared with other existing models and reveal that the proposed software reliability growth model has better prediction capability and depicts real-life situations more faithfully. These results can also provide a flexible decision-making tool for software engineers, software scientists and software managers in a development company. Furthermore, the optimal software release policy for this model based on cost-reliability criteria is discussed.
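A testing-effort SRGM of this kind is commonly written in the Yamada form m(t) = a(1 - exp(-r·W(t))), with the cumulative testing effort W(t) following the chosen curve. A sketch with a scaled Burr III effort curve; all parameter values are illustrative assumptions, not the paper's estimates.

```python
# NHPP SRGM with a testing-effort function, in the common Yamada form
# m(t) = a * (1 - exp(-r * W(t))), where the cumulative testing effort
# W(t) follows a scaled Burr III curve.  All values are illustrative.
import math

def effort(t, alpha=100.0, c=2.0, k=1.5):
    """Cumulative Burr III testing effort W(t) = alpha*(1 + t**-c)**-k."""
    return alpha * (1.0 + t ** -c) ** -k

def mean_failures(t, a=50.0, r=0.05):
    """Expected cumulative number of faults detected by time t."""
    return a * (1.0 - math.exp(-r * effort(t)))

curve = [mean_failures(t) for t in (0.5, 1.0, 2.0, 5.0, 10.0)]
# m(t) is nondecreasing and saturates below a = 50 total faults.
```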

The rest of the paper is outlined as follows. Section 2 is devoted to deriving some mathematical properties of the OEHLBXII distribution. In Section 3, we use maximum likelihood to estimate the model parameters. Two real data sets are analyzed to demonstrate the flexibility of the OEHLBXII model in Section 4. Finally, some concluding remarks are presented in Section 5.


Plots of the PDF and HRF of the EWBXII distribution are displayed in Figures 1 and 2, respectively. The plots in Figure 1 show that the PDF of the EWBXII distribution can be reversed-J-shaped, right-skewed, left-skewed, symmetric or concave down. Figure 2 shows that the HRF of the proposed model can be constant, decreasing, increasing, J-shaped, unimodal or bathtub-shaped.


In this work, we introduce a new extension of the Fréchet distribution. A set of its mathematical and statistical properties is derived. The parameters are estimated by several different methods of estimation, and the performance of these methods is studied by Monte Carlo simulations. The potential of the proposed model is analyzed through two data sets. The weighted least squares method is best for modelling the breaking stress data and the least squares method is best for modelling the strengths data, although all the other methods also performed well for both data sets. Moreover, the new model gives the best fit among all the fitted extensions of the Fréchet model to these data, so it can be chosen as the best model for modeling the breaking stress and strengths data.
