(2) M. K. LUHANDJULA 260
Copyright © 2011 SciRes. AJOR

are members of $\bar{C}(0,1]$. We will use the following notation for $Z \in \bar{C}(0,1]$: $Z = \langle Z^L, Z^U \rangle$. For $Z^1, Z^2 \in \bar{C}(0,1]$, the distance between $Z^1$ and $Z^2$ is given by

$$d_{\bar{C}(0,1]}\left(Z^1, Z^2\right) = \int_0^1 d_H\left(Z^1(\alpha), Z^2(\alpha)\right) d\alpha,$$

where $d_H$ denotes the Hausdorff metric. Moreover, $I$ and $1_A$ denote the interval $[0,1]$ and the indicator function of $A$, respectively.

1.4. Structure of the Paper

The remainder of the paper is organized as follows. In the following section, we introduce the notions of random closed set and fuzzy random variable, and we briefly discuss some of their properties. In Section 3, we prove that the set of fuzzy random variables can be embedded isomorphically and isometrically into the set of random closed sets. This embedding result is then exploited in Section 4 to describe an approach for solving mathematical programs with fuzzy random coefficients. Section 5 is devoted to a numerical example for the sake of illustration. We end in Section 6 with some concluding remarks, along with lines for further development in this field.

2. Random Set and Fuzzy Random Variable

2.1. Random Set

Consider a probability space $(\Omega, \mathcal{B}, P)$ and let $\mathcal{F}$ be a collection of subsets of $E$. A random set in $E$ is a map $\Gamma: \Omega \to \mathcal{F}$ that satisfies suitable measurability conditions [17]. For our purposes, $E = \mathbb{R}$ and we consider random sets of the form

$$X: \Omega \to \bar{C}(0,1], \quad X(\omega) = \left(X_\alpha(\omega)\right)_{\alpha \in I}.$$

The class of the above random sets is denoted by $R(\Omega)$. $R(\Omega)$ can be endowed with the following metric based on the Hausdorff metric $d_H$: for $X, Y \in R(\Omega)$,

$$d_{R(\Omega)}(X, Y) = \int_\Omega \int_I d_H\left(X_\alpha(\omega), Y_\alpha(\omega)\right) dP \, d\alpha.$$

Given $X, Y \in R(\Omega)$, we say that $X$ is less than or equal to $Y$, in symbols $X \preceq_{R(\Omega)} Y$, if for all $\alpha \in I$ and $\omega \in \Omega$, $\sup X_\alpha(\omega) \le \inf Y_\alpha(\omega)$. For details on random sets, we refer the reader to [18,19].

2.2. Fuzzy Random Variable

Consider again a probability space $(\Omega, \mathcal{B}, P)$. A map $X: \Omega \to F_{cc}(\mathbb{R})$ is a fuzzy random variable if for every $\alpha \in (0,1]$ and for every Borel set $B$ of $\mathbb{R}$, $X_\alpha^{-1}(B) \in \mathcal{B}$, where $X_\alpha: \Omega \to 2^{\mathbb{R}}$ is defined as follows:

$$X_\alpha(\omega) = \left\{ x \in \mathbb{R} \mid X(\omega)(x) \ge \alpha \right\}.$$

In the sequel, the set of fuzzy random variables in the above sense is denoted by $F(\Omega)$. A remarkable property of a fuzzy random variable (frv) is that Zadeh's decomposition principle for fuzzy quantities extends naturally to frvs; that is, for $Y \in F(\Omega)$,

$$Y(\omega) = \sup_{\alpha \in (0,1]} \alpha \, 1_{Y_\alpha(\omega)}.$$

Another fundamental fact about fuzzy random variables, which is of interest in its own right and has a huge impact on applications, is that an $\alpha$-level set of a fuzzy random variable is a random interval [17].

Arithmetic operations on $F(\Omega)$ are defined as follows. Given $Y, Z \in F(\Omega)$, $\lambda \in \mathbb{R}$ and $\omega \in \Omega$, we have:

$$(Y \oplus Z)(\omega) = Y(\omega) \oplus Z(\omega); \quad (\lambda \odot Y)(\omega) = \lambda \odot Y(\omega);$$
$$Y \preceq Z \text{ if and only if } Y(\omega) \preceq Z(\omega) \; \forall \omega \in \Omega,$$

where the operations on the left-hand sides are on $F(\Omega)$ and those on the right-hand sides are on $F_{cc}(\mathbb{R})$. It is worth mentioning that $\oplus$ and $\odot$ are based on Zadeh's extension principle [20]. Moreover, $Y(\omega) \preceq Z(\omega)$ is tantamount to $Y_\alpha(\omega) \le_I Z_\alpha(\omega)$ for all $\alpha \in (0,1]$, where $\le_I$ stands for inequality between intervals.

We equip $F(\Omega)$ with a distance defined as follows. Given $Y, Z \in F(\Omega)$,

$$d_{F(\Omega)}(Y, Z) = \int_\Omega \int_I d_H\left(Y_\alpha(\omega), Z_\alpha(\omega)\right) dP \, d\alpha.$$

As in the case of random variables, it is efficient to describe the distribution of an frv by means of measures summarizing some of its most relevant characteristics. In this way, the first two moments of a fuzzy random variable $Y$ are defined as follows. The expectation of an frv $Y$, in symbol $EY$, is the fuzzy quantity whose $\alpha$-level sets are given by:

$$(EY)_\alpha = \left\{ \int_\Omega f \, dP \;\middle|\; f \text{ is a selection of } Y_\alpha \right\}.$$

The variance of $Y$, in symbol $VY$, is given by the following relation:

$$VY = E\left[d_{F(\Omega)}(Y, EY)\right]^2.$$

Limit theorems [22] have been obtained for frvs based on the above notions of expectation and variance. Moreover, fuzzy random variables enjoy the Radon-Nikodým property [21]. That is, if $F: \mathcal{B} \to F_{cc}(\mathbb{R})$ is a $P$-continuous fuzzy measure of bounded variation, then there is $Y \in F(\Omega)$ such that

$$F(B) = \int_B Y \, dP, \quad \forall B \in \mathcal{B}.$$

3. Embedding Theorem for Fuzzy Random Variables

3.1. Auxiliary Mappings

The following three maps will play a starring role in the statement and the proof of an Embedding Theorem for fuzzy random variables.

3.1.1. Mapping $\pi$

$\pi$ maps $F_{cc}(\mathbb{R})$ into $\bar{C}(0,1]$ as follows:

$$\pi: F_{cc}(\mathbb{R}) \to \bar{C}(0,1], \quad \pi(a) = \langle a^L, a^U \rangle,$$

where $a^L(\alpha) = \inf a_\alpha$ and $a^U(\alpha) = \sup a_\alpha$, $\alpha \in I$. It is well known (see e.g. [23,24]) that $\pi$ thus defined is injective, isometric and satisfies the following relation: for $a, b \in F_{cc}(\mathbb{R})$ and $s, t \in \mathbb{R}$, $s \ge 0$, $t \ge 0$,

$$\pi\left(\tilde{1}_s \odot a \oplus \tilde{1}_t \odot b\right) = s \, \pi(a) + t \, \pi(b).$$

3.1.2. Mappings $f$ and $\delta$

The two other auxiliary maps are given below:

$$f: F(\Omega) \to F_{cc}(\mathbb{R})^\Omega \quad \text{with} \quad f(X)(\omega) = X(\omega),$$
$$\delta: F(\Omega) \to \bar{C}(0,1]^\Omega \quad \text{with} \quad \delta(X)(\omega) = \pi\left(X(\omega)\right).$$

3.1.3. Remark

Relationships between the auxiliary mappings are shown in Figure 1; the three auxiliary mappings make the diagram given in Figure 1 commutative. As a matter of fact,

$$(\pi \circ f)(X)(\omega) = \pi\left(X(\omega)\right) = \delta(X)(\omega).$$

Therefore $\pi \circ f = \delta$.

Figure 1. Diagram involving auxiliary functions.

3.2. Main Mapping $\tau$

The mapping that is used in our Embedding Theorem for fuzzy random variables is defined as follows:

$$\tau: F(\Omega) \to R(\Omega), \quad \tau(X) = \tilde{X},$$

where

$$\tilde{X}: \Omega \to \bar{C}(0,1], \quad \tilde{X}(\omega) = \langle X^L(\omega), X^U(\omega) \rangle$$

and $X^L(\omega)(\alpha) = \inf X_\alpha(\omega)$, $X^U(\omega)(\alpha) = \sup X_\alpha(\omega)$, $\alpha \in I$.

Relationships between the main mapping and the auxiliary mappings are given in Figure 2; the mappings involved make the diagram given in Figure 2 commutative. As a matter of fact, for $\omega \in \Omega$,

$$\tau(X)(\omega) = \pi\left(X(\omega)\right) = \delta(X)(\omega).$$

Hence $\tau = \delta = \pi \circ f$.

Figure 2. Diagram involving the main mapping.

For the sake of convenience, we identify a real number $a$ with the following degenerate fuzzy number:

$$\tilde{1}_a: \mathbb{R} \to [0,1], \quad \tilde{1}_a(t) = \begin{cases} 1 & \text{if } t = a \\ 0 & \text{otherwise.} \end{cases}$$

Moreover, for a random variable $A: \Omega \to \mathbb{R}$, we denote by $\tilde{1}_A$ the degenerate fuzzy random variable

$$\tilde{1}_A: \Omega \to F_{cc}(\mathbb{R}), \quad \omega \mapsto \tilde{1}_{A(\omega)}.$$

3.3. Statement and Proof of the Embedding Theorem

Building on the mappings $\pi$, $f$, $\delta$, $\tau$ and on the above terminological conventions, we are now ready to present the main result of this section.

3.3.1. Theorem 1

Consider $X, Y \in F(\Omega)$ and $\lambda, \mu \in \mathbb{R}$ with $\lambda \ge 0$, $\mu \ge 0$, and let $\tau$ be as in Section 3.2. Then the following statements hold true:

1) $\tau$ is injective;
2) $\tau\left(\tilde{1}_\lambda \odot X \oplus \tilde{1}_\mu \odot Y\right) = \lambda \, \tau X + \mu \, \tau Y$;
3) $d_{F(\Omega)}(X, Y) = d_{R(\Omega)}(\tau X, \tau Y)$;
4) $X \preceq Y$ if and only if $\tau X \preceq_{R(\Omega)} \tau Y$.

In other terms, $\tau$ maps $F(\Omega)$ into $R(\Omega)$ isomorphically and isometrically. Moreover, $\tau$ is order preserving.

3.3.2. Proof of Theorem 1

1) Let $X, Y \in F(\Omega)$ and assume $\tau X = \tau Y$. Then for $\omega \in \Omega$ we have that

$$\pi\left(f(X)(\omega)\right) = \pi\left(f(Y)(\omega)\right).$$

As $\pi$ is injective, we conclude that $X(\omega) = Y(\omega)$ for all $\omega \in \Omega$. Therefore $X = Y$ and we are done.

2) For $\omega \in \Omega$,

$$\tau\left(\tilde{1}_\lambda \odot X \oplus \tilde{1}_\mu \odot Y\right)(\omega) = \pi\left(\tilde{1}_\lambda \odot X(\omega) \oplus \tilde{1}_\mu \odot Y(\omega)\right) = \lambda \, \pi\left(X(\omega)\right) + \mu \, \pi\left(Y(\omega)\right) = \left(\lambda \, \tau X + \mu \, \tau Y\right)(\omega),$$

as desired.

3) As $\pi$ is isometric, we have that

$$d_{R(\Omega)}(\tau X, \tau Y) = \int_\Omega \int_I d_H\left((\tau X)_\alpha(\omega), (\tau Y)_\alpha(\omega)\right) dP \, d\alpha = \int_\Omega \int_I d_H\left(X_\alpha(\omega), Y_\alpha(\omega)\right) dP \, d\alpha = d_{F(\Omega)}(X, Y).$$

4) $X \preceq Y$ if and only if $X(\omega) \preceq Y(\omega)$, $\forall \omega \in \Omega$. This is tantamount to saying that:

$X \preceq Y$ if and only if $X_\alpha(\omega) \le_I Y_\alpha(\omega)$, $\forall \alpha \in (0,1]$, $\forall \omega \in \Omega$. (1)

As $X_\alpha(\omega) = \left[X^L(\omega)(\alpha), X^U(\omega)(\alpha)\right]$ and $Y_\alpha(\omega) = \left[Y^L(\omega)(\alpha), Y^U(\omega)(\alpha)\right]$, (1) can be written:

$X \preceq Y$ if and only if $\left[X^L(\omega)(\alpha), X^U(\omega)(\alpha)\right] \le_I \left[Y^L(\omega)(\alpha), Y^U(\omega)(\alpha)\right]$, $\forall \alpha, \omega$. (2)

(2) is equivalent to:

$X \preceq Y$ if and only if $\sup X_\alpha(\omega) \le \inf Y_\alpha(\omega)$, $\forall \alpha, \omega$. (3)

Therefore (3) can be written: $X \preceq Y$ if and only if $\tau X \preceq_{R(\Omega)} \tau Y$, and we are done.

4. Solving Fuzzy Random-Valued Optimization Problems

4.1. Case of Deterministic Feasible Set

Here we are interested in solving the following optimization problem:

$$(P1) \quad \min_{x \in X} \tilde{f}(x),$$

where $\tilde{f}: \mathbb{R}^n \to F(\Omega)$ and $X$ is a convex and bounded subset of $\mathbb{R}^n$. As $\tau$ is an isomorphism, isometric and order preserving, solving (P1) is tantamount to finding a solution of the mathematical program:

$$(\widetilde{P1}) \quad \min_{x \in X} \tau \tilde{f}(x).$$

More formally, we have:

Proposition 1. $x^*$ is an optimal solution of (P1) if and only if $x^*$ is an optimal solution of $(\widetilde{P1})$.

Proof. Assume $x^*$ is an optimal solution for (P1). Then $x^* \in X$ and $\tilde{f}(x^*) \preceq \tilde{f}(x)$, $\forall x \in X$. Then by Theorem 1, part 4), we have that

$$\tau \tilde{f}(x^*) \preceq_{R(\Omega)} \tau \tilde{f}(x), \quad \forall x \in X.$$

This means $x^*$ is optimal for $(\widetilde{P1})$. Assume now that $x^*$ is optimal for $(\widetilde{P1})$. Then $x^* \in X$ and $\tau \tilde{f}(x^*) \preceq_{R(\Omega)} \tau \tilde{f}(x)$, $\forall x \in X$. By Theorem 1, part 4), again, we have that $\tilde{f}(x^*) \preceq \tilde{f}(x)$, $\forall x \in X$, and we are done.

By definition of $\preceq_{R(\Omega)}$, $(\widetilde{P1})$ is equivalent to:

$$(\overline{P1}) \quad \min_{x \in X} \left( \tilde{f}^L_\alpha(x), \tilde{f}^U_\alpha(x) \right), \quad \alpha \in (0,1].$$

Worthy of note here is the fact that $(\overline{P1})$ is a stochastic multiobjective mathematical program with infinitely many objective functions. To the best of our knowledge, there is no available solution technique for it. This is the price to pay for considering an equivalent approach to treat fuzziness instead of an approximate one. To be able to carry out a fair discussion of $(\overline{P1})$, we find it convenient to assume that:

1) the expectation model is acceptable for tackling randomness;
2) minimizing an interval can be well handled by minimizing its midpoint.

It might be pointed out in passing that assumption 1) is often used in the literature for derandomization purposes [25,26]. Moreover, assumption 2) grants us a way of transforming intervals into real numbers. This transformation generalizes quite canonically the real case; as a matter of fact, the midpoint of $[a, a]$ is $a$.

Bearing in mind 1) and 2), and considering the fact that multiplying an objective function by a constant does not alter the localisation of an optimum, $(\overline{P1})$ may be written as follows:

$$(P2) \quad \min_{x \in X} \left( E\tilde{f}^L_\alpha(x) + E\tilde{f}^U_\alpha(x) \right), \quad \alpha \in (0,1].$$

From now on, $\bar{f}_\alpha(x)$ stands for $E\tilde{f}^L_\alpha(x) + E\tilde{f}^U_\alpha(x)$. Therefore (P2) reads merely:

$$(P2) \quad \min_{x \in X} \bar{f}_\alpha(x), \quad \alpha \in I.$$

Let us now select a finite subset of $I$, $S = \{\alpha_1, \ldots, \alpha_m\}$, and let $\varphi_1, \ldots, \varphi_m$ be real-valued functions such that:

1) $\varphi_j(\alpha) \ge 0$, $\alpha \in I$, $j = 1, \ldots, m$;
2) $\varphi_j(\alpha_i) = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \ne j. \end{cases}$

Define a positive interpolating operator $K$ with nodes $\alpha_1, \ldots, \alpha_m$ as follows:

$$(Kh)(\alpha) = \sum_{j=1}^m \varphi_j(\alpha) \, h(\alpha_j).$$

Consider now the following mathematical programs:

$$(P3) \quad \min_{x \in X} (K\bar{f})_\alpha(x), \quad \alpha \in I;$$

$$(P4) \quad \min_{x \in X} \bar{f}_{\alpha_j}(x), \quad j = 1, \ldots, m.$$

The following result bridges the gap between (P3) and (P4).

Proposition 2. If $x^*$ is efficient for (P3) then $x^*$ is efficient for (P4).

Proof. Suppose that $x^*$ is an efficient solution for (P3) and not efficient for (P4). Efficiency of $x^*$ for (P3) means that there is no $x \in X$ such that

$$(K\bar{f})_\alpha(x) \le (K\bar{f})_\alpha(x^*), \quad \forall \alpha \in I \quad (\S)$$

and

$$(K\bar{f})_\alpha(x) < (K\bar{f})_\alpha(x^*) \quad \text{for some } \alpha \in I. \quad (\S\S)$$

As $x^*$ is not efficient for (P4), there is $x \in X$ such that

$$\bar{f}_{\alpha_j}(x) \le \bar{f}_{\alpha_j}(x^*), \quad j = 1, \ldots, m,$$

and

$$\bar{f}_{\alpha_s}(x) < \bar{f}_{\alpha_s}(x^*) \quad \text{for some } s \in \{1, \ldots, m\}.$$

Consider now $\alpha \in I$ arbitrarily chosen. As $\varphi_j(\alpha) \ge 0$ and $\varphi_j(\alpha_j) = 1$, we have that

$$\varphi_j(\alpha) \left( \bar{f}_{\alpha_j}(x) - \bar{f}_{\alpha_j}(x^*) \right) \le 0, \quad j = 1, \ldots, m,$$

and

$$\varphi_s(\alpha) \left( \bar{f}_{\alpha_s}(x) - \bar{f}_{\alpha_s}(x^*) \right) < 0 \quad \text{for some } s \in \{1, \ldots, m\}.$$

Therefore we can say that there is $x \in X$ such that

$$\sum_{j=1}^m \varphi_j(\alpha) \left( \bar{f}_{\alpha_j}(x) - \bar{f}_{\alpha_j}(x^*) \right) < 0.$$

This means, as $\alpha$ has been chosen arbitrarily, that there is $x \in X$ such that

$$(K\bar{f})_\alpha(x) < (K\bar{f})_\alpha(x^*), \quad \alpha \in I.$$

This contradicts the fact that there is no $x \in X$ such that (§) and (§§) hold. Therefore $x^*$ is efficient for (P4).

It is commonplace to say that (P3) is the same as (P2) with $\bar{f}$ replaced by $K\bar{f}$. Unfortunately, both (P2) and (P3) are too cumbersome for mathematical tractability. For practical purposes, we will resort to (P4), which is a discretization of (P2). Thanks to the contrapositive of Proposition 2, we know that only an efficient solution of (P4) can be efficient for (P3). It might also be pointed out in passing that the discretization error decreases when the grid $S = \{\alpha_1, \ldots, \alpha_m\}$ is refined [30]. This means we should keep the roughness of the grid, i.e.

$$h = h(\alpha_1, \ldots, \alpha_m) = \max_{\alpha \in I} \min_{1 \le i \le m} |\alpha - \alpha_i|,$$

as low as possible.
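The roughness $h$ just introduced, and the way a grid is refined until $h$ is small enough, lend themselves to a short sketch. This is only an illustration with helper names of our own; the maximum over $\alpha \in I$ is approximated on a fine uniform probe grid.

```python
# Sketch of the grid roughness h(S) = max_{alpha in [0,1]} min_i |alpha - alpha_i|
# and of one natural refinement rule (insert midpoints).  Helper names are ours.

def roughness(S, probes=2001):
    """Approximate h(S) by probing [0, 1] on a fine uniform grid."""
    return max(min(abs(k / (probes - 1) - s) for s in S)
               for k in range(probes))

def refine(S):
    """Halve the mesh by inserting the midpoint of each pair of
    consecutive nodes."""
    S = sorted(S)
    return sorted(S + [(u + v) / 2 for u, v in zip(S, S[1:])])

eps = 0.05                    # acceptable bound of error for h
S = [0.0, 0.5, 1.0]           # initial discretization of I
while roughness(S) >= eps:    # refine until h < eps
    S = refine(S)
print(len(S))                 # -> 17 (mesh 1/16, h about 1/32 < eps)
```

In the full scheme, an efficient solution of the corresponding (P4) would be recomputed after every refinement; only the grid management is shown here.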
The foregoing discussion leads us to describe the following algorithm for solving (P1).

Description of the algorithm

Step 0: Fix $\varepsilon > 0$, an acceptable bound of error for $h$.
Step 1: Read the data of (P1).
Step 2: Frame (P1) as (P2).
Step 3: Put $k = 0$.
Step 4: Take a discretization $S = \{\alpha_1, \ldots, \alpha_m\}$ of $I$.
Step 5: Write (P4).
Step 6: Find an efficient solution for (P4).
Step 7: Compute $h = \max_{\alpha \in I} \min_{1 \le i \le m} |\alpha - \alpha_i|$ and check whether $h < \varepsilon$. If this is true, go to Step 9; otherwise go to Step 8.
Step 8: Take a finer discretization $S'$, put $S = S'$ and go to Step 5.
Step 9: Print the solution obtained.
Step 10: Stop.

4.2. Case of Fuzzy Random Constraints

Here we are interested in the following optimization problem:

$$(P5) \quad \min \tilde{f}(x) \quad \text{subject to} \quad \tilde{g}_i(x) \preceq \tilde{b}_i, \; i = 1, \ldots, m,$$

where $\tilde{f}$ and $\tilde{g}_i$, $i = 1, \ldots, m$, are fuzzy random-valued functions on $\mathbb{R}^n$ and $\tilde{b}_i \in F(\Omega)$, $i = 1, \ldots, m$. Consider the following optimization problem:

$$(P6) \quad \min \tau\tilde{f}(x) \quad \text{subject to} \quad \tau\tilde{g}_i(x) \preceq_{R(\Omega)} \tau\tilde{b}_i, \; i = 1, \ldots, m.$$

Before stating a result that bridges the gap between (P5) and (P6), we introduce the following surrogates of (P5) and (P6), respectively:

$$(P5') \quad \min_{x \in \widetilde{X}} \tilde{f}(x); \qquad (P6') \quad \min_{x \in \widetilde{X}'} \tau\tilde{f}(x),$$

where $\widetilde{X}$ and $\widetilde{X}'$ are deterministic counterparts of the following sets, respectively:

$$\left\{ x \in \mathbb{R}^n \mid \tilde{g}_i(x) \preceq \tilde{b}_i, \; i = 1, \ldots, m \right\}$$

and

$$\left\{ x \in \mathbb{R}^n \mid \tau\tilde{g}_i(x) \preceq_{R(\Omega)} \tau\tilde{b}_i, \; i = 1, \ldots, m \right\}.$$

Proposition 3. $x^*$ is an optimal solution for (P5') if and only if $x^*$ is optimal for (P6').

Proof. By Theorem 1, part 4), we have that $\widetilde{X} = \widetilde{X}'$. Moreover, by Proposition 1, (P5') and (P6') are equivalent.

In this sense, we can say that (P5) and (P6) are

equivalent. To solve (P5), we have to consider (P6') and then apply the method described in the previous section for solving (P2).

5. Numerical Example

For the sake of illustration, we consider the following simple example:

$$(PE) \quad \max \; \tilde{c}_1 x_1 \oplus \tilde{c}_2 x_2 \quad \text{subject to} \quad x_1 + x_2 \le 8; \; 2x_1 + 3x_2 \le 19; \; x_1 \ge 0; \; x_2 \ge 0,$$

where $\tilde{c}_1$ and $\tilde{c}_2$ are fuzzy random variables defined on $\Omega = \{\omega_1, \omega_2\}$ with $P(\omega_1) = p_1 = 2/5$ and $P(\omega_2) = p_2 = 3/5$. Details on the two frvs are given in Table 1. $\delta(a, b, c)$ stands for a triangular fuzzy number with membership function $\mu_{\delta(a,b,c)}$ defined as in Figure 3.

Figure 3. Triangular fuzzy number delta(a, b, c).

Table 1. Details on frvs $\tilde{c}_1$ and $\tilde{c}_2$.

frv | fuzzy values | probabilities
$\tilde{c}_1$ | $\tilde{c}_{11} = \delta(1; 1; 1)$ | $p_{11} = 2/5$
  | $\tilde{c}_{12} = \delta(4; 2; 0.5)$ | $p_{12} = 3/5$
$\tilde{c}_2$ | $\tilde{c}_{21} = \delta(2; 1.5; 3)$ | $p_{21} = 2/5$
  | $\tilde{c}_{22} = \delta(3; 0.5; 0.25)$ | $p_{22} = 3/5$

According to Proposition 1, (PE) is equivalent to the following mathematical program:

$$(\widetilde{PE}) \quad \max \; \tau\tilde{c}_1 x_1 + \tau\tilde{c}_2 x_2 \quad \text{subject to} \quad x_1 + x_2 \le 8; \; 2x_1 + 3x_2 \le 19; \; x_1 \ge 0; \; x_2 \ge 0,$$

which, in the (P2)-form, reads:

$$(\overline{PE}) \quad \max \; E\left(\tilde{c}^L_{1\alpha}\right) x_1 + E\left(\tilde{c}^L_{2\alpha}\right) x_2 + E\left(\tilde{c}^U_{1\alpha}\right) x_1 + E\left(\tilde{c}^U_{2\alpha}\right) x_2, \quad \alpha \in I,$$

subject to the same constraints. Let $\varepsilon = 0.25$ and consider the following discretization of $I$: $S = \{0; 0.25; 0.5; 0.75; 1\}$. With this grid, (P4) takes the form:

$$(\widehat{PE}) \quad \max \; \left( p_{11} \tilde{c}^L_{11\alpha} + p_{12} \tilde{c}^L_{12\alpha} + p_{11} \tilde{c}^U_{11\alpha} + p_{12} \tilde{c}^U_{12\alpha} \right) x_1 + \left( p_{21} \tilde{c}^L_{21\alpha} + p_{22} \tilde{c}^L_{22\alpha} + p_{21} \tilde{c}^U_{21\alpha} + p_{22} \tilde{c}^U_{22\alpha} \right) x_2,$$
$$\alpha \in \{0; 0.25; 0.5; 0.75; 1\}, \quad \text{subject to} \quad x_1 + x_2 \le 8; \; 2x_1 + 3x_2 \le 19; \; x_1 \ge 0; \; x_2 \ge 0.$$

Considering the data of Tables 1-3, $(\widehat{PE})$ becomes:

$$\max \; \left( 6.5 x_1 + 4.75 x_2, \; 6.27 x_1 + 4.86 x_2, \; 6.05 x_1 + 4.97 x_2, \; 5.82 x_1 + 5.08 x_2, \; 10.4 x_1 + 5.2 x_2 \right)$$
$$\text{subject to} \quad x_1 + x_2 \le 8; \; 2x_1 + 3x_2 \le 19; \; x_1 \ge 0; \; x_2 \ge 0.$$

A Pareto optimal solution of this multiobjective program may be obtained by solving the following weighting program.

Table 2. Left endpoints of α-level cuts of $\tilde{c}_{ij}$.

α | 0 | 0.25 | 0.5 | 0.75 | 1
$\tilde{c}^L_{11\alpha}$ | 0 | 0.25 | 0.5 | 0.75 | 1
$\tilde{c}^L_{12\alpha}$ | 3.5 | 3.625 | 3.75 | 3.875 | 4
$\tilde{c}^L_{21\alpha}$ | -1 | -0.25 | 0.5 | 1.25 | 2
$\tilde{c}^L_{22\alpha}$ | 2.75 | 2.8125 | 2.875 | 2.9375 | 3
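As a quick cross-check (ours, not part of the paper), the coefficients of the discretized objectives can be recomputed from Table 1: at each grid value of $\alpha$, the coefficient of $x_i$ is $\sum_k p_{ik} (\tilde{c}^L_{ik\alpha} + \tilde{c}^U_{ik\alpha})$. We read a triple $\delta(m; b; c)$ as the triangular number with peak $m$, right spread $b$ and left spread $c$; this reading, like the helper names below, is our own assumption, chosen to be consistent with the printed endpoint tables.

```python
# Recompute the objective coefficients of the discretized program from
# Table 1.  A triple (m, b, c) is read as peak m, right spread b, left
# spread c -- our reading of delta(a, b, c); names are illustrative.

def cut(m, b, c, alpha):
    """Alpha-cut [inf, sup] of the triangular fuzzy number delta(m; b; c)."""
    return (m - c * (1 - alpha), m + b * (1 - alpha))

p = [0.4, 0.6]                         # p_i1 = 2/5, p_i2 = 3/5
c1 = [(1, 1, 1), (4, 2, 0.5)]          # fuzzy values of c1 (Table 1)
c2 = [(2, 1.5, 3), (3, 0.5, 0.25)]     # fuzzy values of c2 (Table 1)

def coeff(frv, alpha):
    """sum_k p_k (inf + sup) of the alpha-cut: twice the expected midpoint."""
    return sum(pk * sum(cut(*t, alpha)) for pk, t in zip(p, frv))

for alpha in (0.0, 0.25, 0.5, 0.75):
    print(coeff(c1, alpha), coeff(c2, alpha))
```

Up to the two-decimal rounding used in the text, this reproduces the first four objectives, from 6.5x1 + 4.75x2 through 5.82x1 + 5.08x2.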

Table 3. Right endpoints of α-level cuts of $\tilde{c}_{ij}$.

α | 0 | 0.25 | 0.5 | 0.75 | 1
$\tilde{c}^U_{11\alpha}$ | 2 | 1.75 | 1.5 | 1.25 | 1
$\tilde{c}^U_{12\alpha}$ | 6 | 5.5 | 5 | 4.5 | 4
$\tilde{c}^U_{21\alpha}$ | 3.5 | 3.125 | 2.75 | 2.375 | 2
$\tilde{c}^U_{22\alpha}$ | 3.5 | 3.375 | 3.25 | 3.125 | 3

$$\max \; \lambda_1 \left(6.5 x_1 + 4.75 x_2\right) + \lambda_2 \left(6.27 x_1 + 4.86 x_2\right) + \lambda_3 \left(6.05 x_1 + 4.97 x_2\right) + \lambda_4 \left(5.82 x_1 + 5.08 x_2\right) + \lambda_5 \left(10.4 x_1 + 5.2 x_2\right)$$
$$\text{subject to} \quad x_1 + x_2 \le 8; \; 2x_1 + 3x_2 \le 19; \; x_1 \ge 0; \; x_2 \ge 0; \; \lambda_s > 0, \; s = 1, \ldots, 5.$$

For $\lambda_1 = \lambda_2 = \lambda_3 = \lambda_4 = \lambda_5 = 1$ we have the program:

$$\max \; 35.04 x_1 + 24.86 x_2 \quad \text{subject to} \quad x_1 + x_2 \le 8; \; 2x_1 + 3x_2 \le 19; \; x_1 \ge 0; \; x_2 \ge 0,$$

which yields the solution $x^* = (8, 0)$ using the LINGO software. As the roughness $h$ of the grid is small, this solution is a good approximation of the solution of the original problem (PE).

6. Concluding Remarks

Though significant progress has been made in recent years on Fuzzy Stochastic Optimization [3-7], there remain, from an algorithmic point of view, many challenges. Developing effective and efficient techniques for handling such problems is still an important issue. This paper has been written to address some of the above-mentioned challenges. It is also filled with many references for those whose appetite has been sufficiently whetted that they are hungry for more.

It might be pointed out that a general methodology for solving Fuzzy Stochastic Optimization problems has been outlined in [3]. The quintessence of that methodology is to perform a couple of transformations (possibilistic and probabilistic), say $f_1$ and $f_2$, either sequentially or in parallel, in a way to put the original problem into deterministic terms. To be in tune with uncertainty principles [27], these transformations should be able to introduce possibilistic and probabilistic information at a more fundamental level. They should also capture the essence of the involved fuzziness and randomness. This is the reason why, in this paper, we found it convenient not to let both $f_1$ and $f_2$ be mere approximations.
The possibilistic transformation is an equivalence obtained from connections between fuzzy random variables and random closed sets. Therefore our approach contrasts markedly with those where an approximation of fuzzy values by real ones is followed by an approximation of random variables by their moments [7]. It also differs from approaches based on fuzzy-stochastic simulation [28]. Moreover, our approach can handle both linear and nonlinear optimization problems. It is also less demanding in terms of the information that the decision maker should provide before having his problem solved. This departs strongly from extant methods, as illustrated by the following sample. In [4,5], for example, emphasis is placed on linear optimization problems. The method described in [28] is based on the assumption that the involved fuzzy random variables are of the L-R type. Techniques discussed in [6,29] require that the decision maker be able to set appropriate targets and suitable thresholds, or to manipulate complex indexes.

The price to pay for using the method described here is the computational challenge brought up by the resulting problem, which is a stochastic program with infinitely many objective functions. An algorithm for solving this problem has been presented. An efficient implementation and numerical testing of the proposed algorithm is a topic for future research.

7. References

[1] H. Bandemer and W. Gerlach, "Evaluating Implicit Functional Relationships from Fuzzy Observations," Freiberger Forschungshefte, Vol. D170, 1985, pp. 101-118.

[2] C. M. Hwang, "A Theorem of Renewal Process for Fuzzy Random Variables and Its Application," Fuzzy Sets and Systems, Vol. 116, No. 2, 2000, pp. 237-244. doi:10.1016/S0165-0114(98)00143-2

[3] M. K. Luhandjula, "Fuzzy Stochastic Linear Programming: Survey and Future Research Directions," European Journal of Operational Research, Vol. 174, No. 3, 2006, pp. 1353-1367. doi:10.1016/j.ejor.2005.07.019

[4] H. Katagiri, E. B. Mermri, M. Sakawa, K. Kato and I. Nishizaki, "A Possibilistic and Stochastic Programming Approach to Fuzzy Random MST Problems," IEICE Transactions on Information and Systems, Vol. E88-D, No. 8, 2008, pp. 1912-1919.

[5] H. Katagiri, M. Sakawa, K. Kato and I. Nishizaki, "Interactive Multiobjective Fuzzy Random Linear Programming: Maximization of Possibility and Probability," European Journal of Operational Research, Vol. 188, No. 2, 2008, pp. 530-539. doi:10.1016/j.ejor.2007.02.050

[6] B. Liu, "Fuzzy Random Chance-Constrained Programming," IEEE Transactions on Fuzzy Systems, Vol. 9, No.

5, 2001, pp. 713-720. doi:10.1109/91.963757

[7] Y. K. Liu and B. Liu, "A Class of Fuzzy Random Optimization: Expected Value Models," Information Sciences, Vol. 155, No. 1-2, 2003, pp. 89-102. doi:10.1016/S0020-0255(03)00079-3

[8] Y. K. Liu and B. Liu, "Fuzzy Random Programming with Equilibrium Chance Constraints," Information Sciences, Vol. 170, No. 2-4, 2005, pp. 363-395. doi:10.1016/j.ins.2004.03.010

[9] E. E. Ammar, "On Fuzzy Random Multiobjective Quadratic Programming," European Journal of Operational Research, Vol. 193, No. 2, 2009, pp. 329-341. doi:10.1016/j.ejor.2007.11.031

[10] Z. Zmeškal, "Value at Risk Methodology under Soft Conditions Approach (Fuzzy-Stochastic Approach)," European Journal of Operational Research, Vol. 161, No. 2, 2005, pp. 337-347. doi:10.1016/j.ejor.2003.08.048

[11] P. Dutta, D. Chakraborty and A. R. Roy, "A Single-Period Inventory Model with Fuzzy Random Variable Demand," Mathematical and Computer Modelling, Vol. 41, No. 8-9, 2005, pp. 915-922. doi:10.1016/j.mcm.2004.08.007

[12] F. Ben Abdelaziz, L. Enneifar and J. M. Martel, "A Multiobjective Fuzzy Stochastic Program for Water Resource Optimization: The Case of Lake Management," 2005. http://www.sharjah.ac.ae/academi/

[13] M. K. Luhandjula, "Optimization under Hybrid Uncertainty," Fuzzy Sets and Systems, Vol. 146, No. 2, 2004, pp. 187-203. doi:10.1016/j.fss.2004.01.002

[14] C. Mohan and H. T. Nguyen, "An Interactive Satisfying Method for Solving Multiobjective Mixed Fuzzy Stochastic Programming Problems," Fuzzy Sets and Systems, Vol. 117, No. 1, 2001, pp. 61-79. doi:10.1016/S0165-0114(98)00269-3

[15] S. Nanda, G. Panda and J. Dash, "A New Solution Method for Fuzzy Chance Constrained Programming Problem," Fuzzy Optimization and Decision Making, Vol. 5, No. 4, 2006, pp. 355-370. doi:10.1007/s10700-006-0018-8

[16] W. M. Kirby, "Paradigm Change in Operations Research: Thirty Years of Debate," Operations Research, Vol. 55, No. 1, 2008, pp. 1-13. doi:10.1287/opre.1060.0310

[17] R. Kruse and K. D. Meyer, "Statistics with Vague Data," Reidel, Dordrecht, 1987. doi:10.1007/978-94-009-3943-1

[18] I. Molchanov, "Theory of Random Sets," Springer, New York, 2005.

[19] G. Matheron, "Random Sets and Integral Geometry," John Wiley & Sons, New York, 1975.

[20] D. Dubois and H. Prade, "Fuzzy Sets and Systems: Theory and Applications," Academic Press, New York, 1980.

[21] J. Bán, "Radon-Nikodým Theorem and Conditional Expectation of Fuzzy-Valued Measures and Variables," Fuzzy Sets and Systems, Vol. 34, No. 3, 1990, pp. 383-392. doi:10.1016/0165-0114(90)90223-S

[22] E. P. Klement, M. L. Puri and D. A. Ralescu, "Limit Theorems for Fuzzy Random Variables," Proceedings of the Royal Society A, Vol. 407, No. 1832, 1986, pp. 171-182. doi:10.1098/rspa.1986.0091

[23] C. X. Wu and M. Ma, "Embedding Problem of Fuzzy Number Space," Fuzzy Sets and Systems, Vol. 44, No. 1, 1991, pp. 33-38. doi:10.1016/0165-0114(91)90030-T

[24] H. C. Wu, "Evaluate Fuzzy Optimization Problems Based on Biobjective Programming Problems," Computers and Mathematics with Applications, Vol. 47, No. 6-7, 2004, pp. 893-902. doi:10.1016/S0898-1221(04)90073-9

[25] P. Kall, "Stochastic Linear Programming," Springer, New York, 1976. doi:10.1007/978-3-642-66252-2

[26] S. Vajda, "Probabilistic Programming," Academic Press, New York, 1972.

[27] G. J. Klir, "Principles of Uncertainty: What Are They? Why Do We Need Them?" Fuzzy Sets and Systems, Vol. 74, No. 1, 1995, pp. 13-31. doi:10.1016/0165-0114(95)00032-G

[28] J. Li, J. Xu and M. Gen, "A Class of Multiobjective Linear Programming Model with Fuzzy Random Coefficients," Mathematical and Computer Modelling, Vol. 44, No. 11-12, 2006, pp. 1097-1113. doi:10.1016/j.mcm.2006.03.013

[29] V. H. Nguyen, "Solving Linear Programming Problems under Fuzziness and Randomness Environment Using Attainment Values," Information Sciences, Vol. 177, No. 14, 2007, pp. 2971-2984. doi:10.1016/j.ins.2007.01.032

[30] K. Glashoff and S. A. Gustafson, "Linear Optimization and Approximation," Springer-Verlag, Berlin, 1983. doi:10.1007/978-1-4612-1142-6
