and outputs in real-world problems are often uncertain. Uncertain data in DEA models have been examined in the literature in different forms. Some researchers have suggested fuzzy data envelopment analysis and interval data envelopment analysis for dealing with uncertain data. More recently, uncertain data have been expressed by means of two approaches: interval data envelopment analysis, first proposed by Cooper, Park, and Yu (1999), and fuzzy data envelopment analysis, first proposed by Sengupta. Cooper, Park, and Yu (1999) developed an interval approach that allows a mix of uncertain and certain data by transforming the DEA model into an ordinary linear programming form. Assessing the lower and upper limits of DMU efficiencies has been regarded as one of the difficulties of the interval approach [4]. Despite this problem, some researchers have proposed a variety of interval approaches [5].


On account of the existence of uncertainty, DEA occasionally faces imprecise data, especially when a set of DMUs includes missing data, ordinal data, interval data, stochastic data, or fuzzy data. How to evaluate the efficiency of a set of DMUs in interval environments is therefore a problem worth studying. In this paper, we discuss a new method for evaluating and ranking interval data with stochastic bounds. The approach is illustrated with numerical examples.


Moreover, two numerical examples are demonstrated; in Example 2 we apply the testing procedure to trapezoidal interval data that were previously analysed by Wu [21] and Chachi et al. [9] using triangular fuzzy numbers, and we extend this idea to trapezoidal interval data with some modifications.

2. Preliminaries and definitions

In this paper, the focus is on additive models with interval data. An additive model can be converted to a multi-objective linear problem if information about preferences for the consumption of inputs and the production of outputs is taken into account. Here, the data are not exact but of interval kind. Moreover, the most preferred solutions consistent with the available information are sought by means of interval additive models. It has also been shown that if additional information is available, an axial solution can be applied, and the most preferred target settings are computed as well. In this study, twenty bank branches in Iran are evaluated; target settings and efficiency are compared with the original case, and significant decisions are made.

Abstract. A method for the two-person non-zero-sum game whose payoffs are represented by interval data has been investigated. In this paper, a new method for solving bimatrix games with triangular fuzzy numbers using the linear complementarity problem (LCP) is applied. The solution obtained for this fuzzy LCP is the solution of the given fuzzy bimatrix game.

In this paper, we present a method for ranking decision making units (DMUs) with interval data in data envelopment analysis that is different from other methods. In this method we use a new pair of interval DEA models constructed on the basis of interval arithmetic, which differ from the existing DEA models that work with interval data: it is a linear CCR model that needs no extra variable alternations and uses a fixed and unified production frontier (i.e., the same constraint set) to measure the efficiencies of DMUs with interval input and output data. The methodology imposes an appropriate minimum weight restriction on all input and output weights, called the maximin weight. A new linear programming (LP) model is constructed for each efficient unit to determine the maximin weight. By choosing a maximin weight, all efficient units are ranked as fully or partially efficient. One numerical example is examined using the proposed ranking methodology to illustrate its ability to discriminate among DEA-efficient units with interval data.
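The fixed-frontier idea above can be sketched as a pair of linear programs. The sketch below is a generic interval CCR bound computation (not the paper's exact model, and without its maximin weight restriction): every DMU's frontier constraint uses its most favorable data (lower inputs, upper outputs), while the evaluated DMU uses its best or worst data for the upper or lower efficiency bound.

```python
# Hedged sketch of interval CCR efficiency bounds with a fixed frontier.
# Variables z = [u, v]: output weights u (length s) and input weights v (length m).
import numpy as np
from scipy.optimize import linprog

def interval_ccr_bounds(xL, xU, yL, yU, o):
    """Return (eff_lower, eff_upper) for DMU index o.

    xL, xU: (n, m) arrays of interval input bounds
    yL, yU: (n, s) arrays of interval output bounds
    """
    n, m = xL.shape
    s = yL.shape[1]
    # Shared frontier constraints: u . yU_j - v . xL_j <= 0 for every DMU j
    A_ub = np.hstack([yU, -xL])
    b_ub = np.zeros(n)

    def solve(y_obj, x_norm):
        c = -np.concatenate([y_obj, np.zeros(m)])              # maximize u . y_obj
        A_eq = np.concatenate([np.zeros(s), x_norm])[None, :]  # v . x_norm = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=(0, None))
        return -res.fun

    eff_upper = solve(yU[o], xL[o])   # best case for DMU o
    eff_lower = solve(yL[o], xU[o])   # worst case for DMU o
    return eff_lower, eff_upper
```

With exact data (lower bound equal to upper bound) both LPs coincide and the interval collapses to the ordinary CCR efficiency score.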


In this paper, the difference between multiplicative and envelopment models of network DEA is examined: the network DEA multiplicative model can calculate efficiency, while the envelopment model can calculate the projection onto the frontier. Here, a model is presented that can calculate both the frontier projection and the efficiency in network DEA. Since in the real world many data are interval data, we present a model that calculates the efficiency of the evaluated units from such interval data. Since the data are intervals, the resulting efficiencies are intervals as well. We present two models for calculating the lower and upper efficiency bounds for any DMU and prove that these models indeed give upper and lower bounds of efficiency.

In the network technology era, collected data are growing more complex and larger than before. In this article, we focus on estimating linear regression parameters for symbolic interval data. We propose two approaches to estimate regression parameters for symbolic interval data under two different data models, and we compare the proposed approaches with existing methods via simulations. Finally, we analyze two real datasets with the proposed methods for illustration.
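For orientation, a classical baseline for symbolic interval regression is the centre method: fit ordinary least squares on the interval midpoints (the centre-and-range variant adds a second fit on the half-ranges). This is a minimal sketch of that baseline, not the estimators proposed in the article above.

```python
# Hedged sketch: the "centre method" baseline for symbolic interval regression.
import numpy as np

def centre_method(xL, xU, yL, yU):
    """OLS on interval midpoints; returns [intercept, slope].

    xL, xU, yL, yU: 1-d arrays of interval bounds for one predictor.
    """
    xc = (xL + xU) / 2.0          # predictor midpoints
    yc = (yL + yU) / 2.0          # response midpoints
    X = np.column_stack([np.ones_like(xc), xc])
    beta, *_ = np.linalg.lstsq(X, yc, rcond=None)
    return beta
```

A fitted interval prediction can then be formed by applying the coefficients to both endpoints (or, in the centre-and-range method, to the midpoint and half-range separately).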


In this paper, we combined location/allocation models with DEA in interval input and output environments to improve the performance of these models. Solving for the DEA efficiency measure simultaneously with other location modeling objectives provides a promising, rich approach to multi-objective location problems. The ability to use location models to test trade-offs between spatial efficiency and facility efficiency provides a promising new approach for multi-objective location analysis. We presented a new pair of location/DEA models for dealing with interval data. The presented models use the interval CCR model and combine it with the UPLP and CPLP models to optimize two efficiencies: spatial efficiency and facility efficiency. Due to the uncertainty present in real-world conditions, our models deal with interval inputs and outputs. The models were run with the data discussed in Numerical Example I and the results were obtained. Since interval efficiencies measure the performance of DMUs more comprehensively than the traditional DEA efficiency, they are expected to have wide potential applications in the future.


We developed in this paper an approach for dealing with interval data in context-dependent DEA. It is done by considering each DMU with interval data as two DMUs with exact data, from which we obtain an interval attractiveness for each DMU. For this purpose, we introduced some DEA models for evaluating these DMUs, and in the next step we merge the results of these models to obtain the interval attractiveness. We also show that if we choose n arbitrary DMUs with exact data, the attractiveness of these DMUs belongs to those intervals. The manager can then decide which combination within each interval is appropriate.
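The splitting step described above can be sketched as follows: each DMU with interval data is replaced by two exact-data DMUs, a "best" copy (lower inputs, upper outputs) and a "worst" copy (upper inputs, lower outputs). Any exact-data DEA or attractiveness model can then be run on the resulting 2n DMUs; function and array names here are illustrative only.

```python
# Hedged sketch of expanding n interval DMUs into 2n exact-data DMUs.
import numpy as np

def split_interval_dmus(xL, xU, yL, yU):
    """(n, m)/(n, s) interval bounds -> (2n, m) inputs, (2n, s) outputs.

    Rows 2k and 2k+1 are the best- and worst-case copies of DMU k.
    """
    X = np.empty((2 * xL.shape[0], xL.shape[1]))
    Y = np.empty((2 * yL.shape[0], yL.shape[1]))
    X[0::2], X[1::2] = xL, xU   # best case uses lower inputs
    Y[0::2], Y[1::2] = yU, yL   # best case uses upper outputs
    return X, Y
```

The pair of scores obtained for the two copies of a DMU then bound its interval attractiveness.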

Many models in DEA have been proposed to estimate returns to scale. Determining the nature of returns to scale is of considerable importance in the theory of production. Knowing whether a decision making unit exhibits constant, increasing, or decreasing returns to scale, proper actions can be taken to develop the decision making units. In this study, the efficiency of parallel production systems with shared resources is evaluated where the data are inexact and interval-valued, so the performance of these systems is interval-valued too. A model is then proposed to estimate returns to scale from interval data on these systems; when the data are inexact, the nature of returns to scale of these units is inexact too, so returns to scale is estimated under both the best and worst conditions.


Contemporary computers bring us very large datasets, datasets which can be too large for those same computers to analyse properly. One approach is to aggregate these data (by some suitably scientific criteria) to provide more manageably sized datasets. These aggregated data will perforce be symbolic data consisting of lists, intervals, histograms, etc. An observation is then a p-dimensional hypercube or Cartesian product of p distributions in R^p.
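The aggregation step can be sketched in miniature: raw observations are grouped by some criterion and each group is summarised as a symbolic interval [min, max] per variable (histograms or lists could be substituted). Names below are illustrative only.

```python
# Hedged sketch: aggregating classical point data into symbolic interval data.
from collections import defaultdict

def aggregate_to_intervals(records):
    """records: iterable of (group_key, value) -> {group_key: (min, max)}."""
    groups = defaultdict(list)
    for key, value in records:
        groups[key].append(value)
    # Each group becomes one symbolic interval observation.
    return {k: (min(v), max(v)) for k, v in groups.items()}
```

With p variables per record, repeating this per variable yields the p-dimensional hypercube observation described above.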


identical to that provided by the panel of human observers (the a priori partition). This partition was also obtained by Guru et al. (2004), using a similarity measure for estimating the degree of similarity among patterns (described by interval-type data) in terms of multivalued data, and an unconventional agglomerative clustering technique, by introducing the concept of mutual similarity value (Guru et al., 2004). The partition into four clusters provided by the AV1 and AVB methods (see Table 4 and Figure 2) is not the same as the a priori partition (other authors, e.g., De Carvalho (2007), have also obtained partitions into four clusters that were not identical to the a priori partition). It can be seen that the partition into three clusters provided by the ( , ′) coefficient combined with the three applied aggregation criteria is quite close to the a priori partition given by the panel of human observers, except for the location of city 18, which in the classification given by the panel of human observers is a cluster with only one element (a singleton) [see Figures 1 and 2]. This is also the most significant partition (the best partition) according to all applied validation indexes, as shown in Table 5, due to the maximum values of STAT (18.9912), DIF (2.3429 in the case of AV1 and AVB), and (0.8524), and to the minimum value (0.3176) of P(I2mod,).

DEA is one of the most important and common methods for assessing DMUs. The analysis compares the relative efficiency of organizational units such as bank branches, hospitals, vehicles, and shops, and other cases where units perform similar work. The main assumptions of classical DEA are that there is no error or noise in the input/output data and that the information of all DMUs is available. However, as previously discussed, the goal of this paper is to propose a bootstrapped robust DEA model to achieve correct efficiency scores and rankings for telecommunication companies. The proposed bootstrap DEA model can be used to overcome disturbances in the data and sampling error in many real-world case studies, and here it is applied to hospitals. To account for the effects of disturbance and sampling error, we used robust optimization and bootstrap methods. In this study, both the robust concept and interval data were considered simultaneously for the first time. The performance of the suggested method is shown using data from 16 hospitals in Iran. The results show that the RDEA efficiency scores are biased upwards and that the bias-corrected bootstrap efficiency scores from the BRDEA model are lower, indicating that the RDEA efficiency scores tend to be biased in small samples. The RDEA model yields an average efficiency of [0.61, 0.673], while the BRDEA generates an average bias-corrected score of [0.60, 0.66]. The results show that accounting for disturbances in the data and sampling error by applying a bootstrapped robust data envelopment analysis model can make efficiency assessment more reliable. For future studies we suggest the following directions to extend the current study:
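The bias-correction mechanics can be sketched generically. The paper's BRDEA combines a robust DEA model with a proper bootstrap; the sketch below uses naive resampling of DMUs with a placeholder scoring function `score_fn`, so it illustrates only the bias-correction arithmetic (corrected = original − (mean of bootstrap scores − original)), not the authors' model.

```python
# Hedged sketch of bootstrap bias correction for efficiency scores.
import numpy as np

def bootstrap_bias_correct(score_fn, data, B=200, seed=0):
    """score_fn(dmus, reference) -> scores; returns (original, bias-corrected)."""
    rng = np.random.default_rng(seed)
    orig = score_fn(data, data)                     # score each DMU vs full sample
    n = data.shape[0]
    boot = np.empty((B, len(orig)))
    for b in range(B):
        sample = data[rng.integers(0, n, size=n)]   # resample DMUs with replacement
        boot[b] = score_fn(data, sample)            # rescore vs the resampled frontier
    bias = boot.mean(axis=0) - orig
    return orig, orig - bias
```

Because the resampled reference set can only shrink the frontier, the bootstrap scores here are upward-biased and the correction pulls the scores down, mirroring the direction of the effect reported above.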


Abstract—Recently, there has been much research into effective representation and analysis of uncertainty in human responses, with applications in cyber-security, forest and wildlife management, and product development, to name a few. Most of this research has focused on representing the response uncertainty as intervals, e.g., "I give the movie between 2 and 4 stars." In this paper, we extend the model-based interval agreement approach (IAA) for combining interval data into fuzzy sets and propose the efficient IAA (eIAA) algorithm, which enables efficient representation of and operation on the fuzzy sets produced by IAA (and other interval-based approaches, for that matter). We develop methods for efficiently modeling, representing, and aggregating both crisp and uncertain interval data (where the interval endpoints are intervals themselves). These intervals are assumed to be collected from individual or multiple survey respondents over single or repeated surveys; although, without loss of generality, the approaches put forth in this paper could be used for any interval-based data where representation and analysis are desired. The proposed method is designed to minimize the loss of information when transferring the interval-based data into fuzzy set models and then when projecting onto a compressed set of basis functions. We provide full details of eIAA and demonstrate it on real-world and synthetic data.
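The core interval-agreement idea can be illustrated very simply: the fuzzy membership at a point x is the fraction of collected intervals that contain x. This sketch shows only that concept, not the eIAA algorithm or its basis-function compression.

```python
# Hedged sketch of the interval agreement principle behind IAA.
import numpy as np

def interval_agreement(intervals, xs):
    """intervals: list of (lo, hi); xs: points -> membership values in [0, 1]."""
    xs = np.asarray(xs, dtype=float)
    counts = np.zeros_like(xs)
    for lo, hi in intervals:
        counts += (xs >= lo) & (xs <= hi)   # does this interval contain x?
    return counts / len(intervals)
```

For the movie-rating example above, the intervals [2, 4] and [3, 5] would give full membership on [3, 4], where both respondents agree, and half membership elsewhere within either interval.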

To assess the performance of the maximum likelihood and Bayesian estimators (the latter using Lindley's approximation and Markov Chain Monte Carlo with the Metropolis-Hastings algorithm) for the scale and shape parameters, the mean squared errors (MSE) of each method were calculated using 10,000 replications for sample sizes n = 25, 50 and 100 of the Weibull distribution with interval-censored data, with scale parameter λ = 2 and shape parameter α = 0.5, 1, 1.5 and 2; the considered values of λ and α are meant for illustration only, and other values can also be taken for generating samples from the Weibull distribution.
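The maximum-likelihood side of such a comparison can be sketched directly: each interval-censored observation (L, R] contributes log(F(R) − F(L)) to the log-likelihood, where F is the Weibull CDF with shape α and scale λ (following the text's notation). This is a generic MLE sketch, not the Bayesian/Lindley/MCMC estimators above.

```python
# Hedged sketch of Weibull MLE from interval-censored data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def weibull_interval_mle(L, R):
    """L, R: arrays of interval endpoints -> (alpha_hat, lambda_hat)."""
    def negloglik(theta):
        alpha, lam = np.exp(theta)   # optimise on the log scale for positivity
        p = (weibull_min.cdf(R, alpha, scale=lam)
             - weibull_min.cdf(L, alpha, scale=lam))
        return -np.sum(np.log(np.clip(p, 1e-300, None)))
    res = minimize(negloglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
    return tuple(np.exp(res.x))
```

An MSE study like the one above would wrap this in a loop: simulate event times, censor them onto a grid of intervals, estimate, and average the squared errors over replications.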

Abstract – Evidence theory addresses a specific kind of uncertainty: the element of interest (the true world) may be included in subsets of other similar elements (possible worlds). In the original evidence theory, the estimates of the basic probability masses for the focal elements are given in an unambiguous form. In practice, obtaining such estimates is often difficult or even impossible; in such situations, the relevant estimates are given in interval or fuzzy form. The goal of this paper is to present and analyse calculation procedures for determining the belief and plausibility functions in evidence theory when the initial estimates are given in interval or fuzzy form.
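For reference, the point-valued case the paper generalises looks like this: Bel(A) sums the masses of focal elements contained in A, and Pl(A) sums those that intersect A. With interval-valued masses (the paper's subject), each of these sums would instead yield a pair of bounds.

```python
# Hedged sketch of belief and plausibility with point-valued masses.
def belief(masses, A):
    """masses: dict {frozenset focal element: mass}; A: set of worlds."""
    return sum(m for B, m in masses.items() if B <= A)   # B subset of A

def plausibility(masses, A):
    return sum(m for B, m in masses.items() if B & A)    # B intersects A
```

Bel(A) ≤ Pl(A) always holds, and the two coincide when every focal element is a singleton.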

The vertical grid for COSMO presented in this study seems a good alternative to the standard vertical layering of the COSMO-DE domain when focusing on the upper troposphere and lower stratosphere in polar latitudes. It has been shown to run stably, simulating almost a year. By comparing with data from synoptic radiosondes and regridded reanalysis data, it could be shown that the model is able to reproduce temperature measurements well and produce reasonable values of relative humidity. The enlarged time series show a small-scale variability in the model that is not present in the measurements and cannot be expected from regridding the boundary data. The stability against varying the boundary forcing interval and the extent of the damping layer was


Therefore, we still consider the PH model due to its good properties and straightforward interpretation based on hazard ratios for interval-censored data. However, the partial likelihood does not work under the PH model when the data are interval-censored. There are several ways to estimate the coefficients and the baseline hazard function of the PH model with interval-censored data. Basically, it is a maximization problem subject to monotonicity constraints on the cumulative hazard function. For example, a two-step algorithm, sometimes called the generalized Gauss-Seidel algorithm [7], is usually recommended to obtain the maximum likelihood estimators. However, this algorithm can be very slow to compute the profile likelihood curve when the sample size is large.
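The likelihood being maximized has a simple form: under the PH model, S(t | x) = S0(t)^exp(x'β), and an interval (L, R] contributes log(S(L | x) − S(R | x)). The sketch below takes an exponential baseline S0(t) = exp(−rt) purely for illustration; the semiparametric estimation with monotonicity constraints (the generalized Gauss-Seidel step) is not reproduced here.

```python
# Hedged sketch of the interval-censored PH log-likelihood with a
# parametric (exponential) baseline and a single covariate.
import numpy as np

def ph_interval_negloglik(beta, rate, L, R, x):
    """Negative log-likelihood; L, R, x are 1-d arrays (R may contain inf)."""
    hr = np.exp(beta * x)                    # per-subject hazard ratio exp(x'beta)
    S = lambda t: np.exp(-rate * t) ** hr    # S(t | x) = S0(t)^exp(x'beta)
    p = S(L) - S(R)                          # P(L < T <= R | x); S(inf) = 0
    return -np.sum(np.log(np.clip(p, 1e-300, None)))
```

Right-censored subjects are the special case R = ∞, so the same expression covers mixed censoring; minimizing this over (β, r) with any smooth optimizer gives the parametric analogue of the constrained maximization described above.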
