To simplify the typical FFS scheduling problem with stochastic processing times, the following assumptions are made: (1) Preemption is not allowed for job processing; (2) Each machine can process at most one operation at a time; (3) All jobs are released at the same time for the first stage; (4) There is no travel time between machines; (5) There is no setup time for job processing; (6) Buffers between machines are infinite; (7) For the same job, the expected processing time at any parallel machine of a stage is identical; (8) The actual processing time of a job on a machine is uncertain; and (9) Since parallel machines at a stage are functionally identical, they lead to the same CPTV when processing any job, although the CPTV may differ across stages.
Chanas and Kasperski considered two single-machine scheduling problems with fuzzy processing times and fuzzy due dates. They defined the fuzzy tardiness of a job in a given sequence as the fuzzy maximum of zero and the difference between the job's fuzzy completion time and fuzzy due date. In the first problem, they considered the minimization of the maximal expected value of fuzzy tardiness; in the second, the minimization of the expected value of the maximal fuzzy tardiness.
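In standard fuzzy-scheduling notation (a sketch; here $\pi$ denotes a job sequence and $\tilde{C}_j$, $\tilde{d}_j$ the fuzzy completion time and due date of job $j$), the definition and the two objectives read:

```latex
% fuzzy tardiness of job j under sequence \pi
\tilde{T}_j \;=\; \widetilde{\max}\bigl(\tilde{0},\; \tilde{C}_j \ominus \tilde{d}_j\bigr),
% first problem: minimise the maximal expected fuzzy tardiness
\min_{\pi} \; \max_{j} \, E\bigl[\tilde{T}_j\bigr],
% second problem: minimise the expected maximal fuzzy tardiness
\min_{\pi} \; E\Bigl[\,\widetilde{\max}_{j}\, \tilde{T}_j\Bigr].
```

The two objectives differ in whether the expectation or the (fuzzy) maximum is taken first.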
There is a considerable number of approaches in the PERT literature. Approaches based on approximating or bounding the distributions of the activity completion times are common [13-16]. Another natural and flexible way to approximate the distribution function of a performance measure is Monte Carlo simulation. However, simulation alone can only evaluate one specific solution to the SJSSP at a time; it cannot search the solution space for an optimal or even good solution. Owing to the theoretical hardness of the stochastic counterparts, heuristics such as priority dispatching rules admit an elegant solution only for a few special scheduling problems. In many applications, classical approaches that guarantee an optimal solution require considerable computational effort and are limited to small instances.
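The "evaluate one solution at a time" limitation can be made concrete with a minimal Python sketch (the `flowshop_makespan` helper and the 3 × 2 mean-time matrix are our own illustrative choices, not taken from the cited works): Monte Carlo scores a single fixed job sequence and performs no search over sequences.

```python
import random

def flowshop_makespan(p):
    """Makespan of one fixed job sequence in a permutation flow shop,
    given a matrix p[job][stage] of realized processing times."""
    n, m = len(p), len(p[0])
    c = [[0.0] * m for _ in range(n)]
    for j in range(n):
        for s in range(m):
            prev_job = c[j - 1][s] if j > 0 else 0.0
            prev_stage = c[j][s - 1] if s > 0 else 0.0
            c[j][s] = max(prev_job, prev_stage) + p[j][s]
    return c[-1][-1]

def expected_makespan(mu, cv=0.1, samples=2000, seed=0):
    """Monte Carlo estimate of E[Cmax] when each time is drawn from
    N(mu, cv*mu), truncated at zero. Evaluates ONE sequence only."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        p = [[max(0.0, rng.gauss(m, cv * m)) for m in row] for row in mu]
        total += flowshop_makespan(p)
    return total / samples

mu = [[3.0, 2.0], [2.0, 4.0], [4.0, 1.0]]  # mean times, 3 jobs x 2 stages
est = expected_makespan(mu)  # near the deterministic makespan of 10
```

Searching for a good sequence would require wrapping this evaluator in a separate optimization loop, which is exactly what simulation alone does not provide.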
In this experiment, the processing times of all jobs are assumed to be stochastic and to follow a normal distribution. Since the normal distribution has two parameters, the standard deviation is given as a ratio to the mean; this ratio is known as the variability and is assumed to be 10%, i.e., N(µij, σij) with σij = 0.1 µij.
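This parameterization can be sketched as follows (truncation at zero is our own assumption to prevent negative draws; it is not stated in the text):

```python
import random

def sample_time(mu, variability=0.10, rng=random):
    """Draw a processing time from N(mu, sigma) with sigma set to
    `variability` * mu (10% by default), truncated at zero."""
    return max(0.0, rng.gauss(mu, variability * mu))

rng = random.Random(42)
draws = [sample_time(20.0, rng=rng) for _ in range(10000)]
mean = sum(draws) / len(draws)  # close to the nominal mean of 20
```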
The open shop problem with uncertainty constitutes a relatively new and complex research line. While there are many contributions to solving fuzzy job shop problems (we can cite, among others, ,,  or ), the literature on fuzzy open shop is still scarce. Among the few existing proposals, a heuristic approach is proposed in  to minimise the expected makespan of an open shop problem with stochastic processing times and random breakdowns; in  the expected makespan of an open shop with fuzzy durations is minimised using a genetic algorithm hybridised with local search. Finally, within a multiobjective framework, a possibilistic mixed-integer linear programming method is proposed in  for an OSP with setup times, fuzzy processing times and fuzzy due dates to minimise total weighted tardiness and total weighted completion time, while in  a goal programming model based on lexicographic multiobjective optimisation of both makespan and due-date satisfaction is adopted and solved using a particle swarm algorithm.
This paper presents a multi-objective simulated annealing algorithm for mixed-model assembly line balancing with stochastic processing times. Since stochastic task times can affect the bottlenecks of a system, three objectives are considered: maximizing the weighted line efficiency (equivalent to minimizing the number of stations), minimizing the weighted smoothness index, and maximizing the system reliability. After solving an example in detail, the performance of the proposed algorithm is examined on a set of test problems. The experimental results show that the new approach performs well.
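The core simulated-annealing acceptance mechanism behind such an approach can be sketched on a scalarized objective (the task data, the two-station load-gap proxy for the smoothness index, and all parameter values below are hypothetical, not the authors' formulation):

```python
import math
import random

def simulated_annealing(x0, objectives, weights, neighbor,
                        t0=10.0, cooling=0.95, iters=500, seed=0):
    """Minimise a weighted sum of objectives with the classic SA rule:
    always accept improvements; accept a worse neighbour with
    probability exp(-delta / T), then cool T geometrically."""
    rng = random.Random(seed)
    f = lambda x: sum(w * g(x) for w, g in zip(weights, objectives))
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        y = neighbor(x, rng)
        fy = f(y)
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# toy usage: split 6 tasks over two stations; the "smoothness" proxy
# is the absolute load gap between the two halves of the ordering
tasks = [4, 7, 2, 5, 6, 3]
def load_gap(order):
    k = len(order) // 2
    return abs(sum(tasks[i] for i in order[:k]) -
               sum(tasks[i] for i in order[k:]))
def swap(order, rng):
    i, j = rng.sample(range(len(order)), 2)
    y = list(order)
    y[i], y[j] = y[j], y[i]
    return y

best, fbest = simulated_annealing(list(range(6)), [load_gap], [1.0], swap)
```

A true multi-objective variant would replace the weighted sum with Pareto-dominance bookkeeping, but the acceptance rule is the same.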
In this paper, a novel MUD scheme was presented based on the theory of neural networks and adaptive detectors. The new detector is capable of serving many users in heavily loaded environments. The robustness of the method lies in the application of the stochastic recurrent neural network and the self-organizing feature map. As an improvement, a control unit is also proposed that can switch between stochastic and hysteretic operation depending on the circumstances; however, the rules governing this switch have not yet been investigated. The system has been tested by extensive simulations. The simulation results are promising and show that the performance of the proposed scheme is close to that of the optimal detector.
The performance of telephone-based speaker verification systems can be severely degraded by linear and nonlinear acoustic distortion caused by telephone handsets. This paper proposes to combine a handset selector with stochastic feature transformation to reduce the distortion. Specifically, a Gaussian mixture model (GMM)-based handset selector is trained to identify the most likely handset used by the claimants, and then handset-specific stochastic feature transformations are applied to the distorted feature vectors. This paper also proposes a divergence-based handset selector with out-of-handset (OOH) rejection capability to identify the "unseen" handsets. This is achieved by measuring the Jensen difference between the selector's output and a constant vector with identical elements. The resulting handset selector is combined with the proposed feature transformation technique for telephone-based speaker verification. Experimental results based on 150 speakers of the HTIMIT corpus show that the handset selector, either with or without OOH rejection capability, is able to identify the "seen" handsets accurately (98.3% in both cases). Results also demonstrate that feature transformation performs significantly better than the classical cepstral mean normalization approach. Finally, by using the transformation parameters of the seen handsets to transform the utterances with correctly identified handsets, and processing the utterances with unseen handsets by cepstral mean subtraction (CMS), verification error rates are reduced significantly (from 12.41% to 6.59% on average).
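The baseline the transformation is compared against, cepstral mean subtraction, is simple enough to sketch (the toy 4 × 3 feature matrix is hypothetical):

```python
import numpy as np

def cepstral_mean_subtraction(features):
    """CMS: subtract the per-utterance mean from each cepstral
    dimension. A stationary convolutive channel (e.g. a handset) is
    additive in the cepstral domain, so this removes its offset."""
    features = np.asarray(features, dtype=float)
    return features - features.mean(axis=0, keepdims=True)

# toy utterance: 4 frames x 3 cepstral coefficients, each dimension
# shifted by a constant "handset" offset
utt = np.array([[1.0, 2.0, 3.0],
                [2.0, 3.0, 4.0],
                [3.0, 4.0, 5.0],
                [4.0, 5.0, 6.0]])
normalized = cepstral_mean_subtraction(utt)  # each column now has zero mean
```

The stochastic feature transformation proposed in the paper is more elaborate than this; CMS is only the classical reference point mentioned in the results.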
2. Growth of the Internet: is the growth exponential in time, and how can this be measured? 3. How can we model end-to-end delays? Is the observed "long-range dependence" present throughout the entire Internet? Is there any connection between the delay and the hopcount? We intend to address these questions by relating empirical evidence obtained from Internet data to mathematical, in particular stochastic, models of the Internet. Obviously, a successful model will not only enhance our understanding of current phenomena in the Internet, but will also lead to recommendations for improvements to both the network infrastructure and the network protocols.
SELECT COUNT(*) FROM sensors s, modern_light ml
WHERE ml.nodeid > s.nodeid
SAMPLE EPISODE 10S
AND nodeid = 4 SAMPLE EPISODE 5S
The above statement illustrates the kind of temporal operations a user may want to perform. It outputs a stream of counts indicating the number of modern-light readings brighter than the current reading every 10 seconds; the additional predicate nodeid = 4 with a 5-second sample episode is introduced by the mutation operation of the GA. The GA crossover operation combines queries using the AND operator during query processing. The SHO-GA framework operates over multiple such queries with a stochastic heuristic optimization function. As a result, the SHO-GA framework improves the accessibility level while requiring minimal computation operations.
In this chapter we describe the results of the analysis of the longitudinal bed elevation profiles taken from the bathymetry data. With these profiles as input, we used the bedform tracking tool to determine the geometric properties of every sand wave in the study: sand wave height, sand wave length, crest elevation, trough elevation and asymmetry. We analysed the stochastic characteristics of the sand waves by plotting probability density functions and determined whether these characteristics follow known probability distributions. A goodness-of-fit test is used to decide whether a data set can be represented by a given probability distribution. We also determined the total vertical difference between the cumulative distribution of the data and that of different candidate distributions; a low value of this difference means a closer resemblance between the candidate distribution and the distribution of the data.
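The "total vertical difference" between the empirical and candidate cumulative distributions is essentially the Kolmogorov-Smirnov statistic; a self-contained sketch (the height values below are hypothetical, not drawn from the bathymetry data set):

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of N(mu, sigma) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(data, cdf):
    """Maximum vertical distance between the empirical CDF of `data`
    and a candidate cumulative distribution function `cdf`."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # ECDF jumps from i/n to (i+1)/n at x; check both sides
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

heights = [1.2, 0.8, 1.5, 1.1, 0.9, 1.4, 1.0, 1.3, 0.7, 1.6]  # toy data (m)
mu = sum(heights) / len(heights)
sigma = (sum((h - mu) ** 2 for h in heights) / len(heights)) ** 0.5
d = ks_statistic(heights, lambda x: normal_cdf(x, mu, sigma))
```

A small `d` indicates close resemblance between the fitted distribution and the data; repeating the computation for several candidate distributions selects the best fit.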
proved that the mean-square order of (8) is 1 and that of (9) is 2; for the latter we give only empirical evidence through numerical tests in the next section and leave the theoretical investigation to another paper. For the implementation of ∆nWi (i = 1, 2) and the stochastic integrals in
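A standard way to implement Brownian increments on a uniform partition (a generic sketch, not the specific schemes (8)-(9) of the text) is to draw i.i.d. N(0, h) variables:

```python
import random

def brownian_increments(t_end, n, seed=0):
    """Increments Delta_n W = W(t_{k+1}) - W(t_k) on a uniform
    partition of [0, t_end]: i.i.d. N(0, h) with h = t_end / n."""
    rng = random.Random(seed)
    h = t_end / n
    return [rng.gauss(0.0, h ** 0.5) for _ in range(n)]

dW = brownian_increments(1.0, 1000)
W1 = sum(dW)                   # one sample of W(1) ~ N(0, 1)
qv = sum(x * x for x in dW)    # quadratic variation, close to t_end = 1
```

Higher-order schemes additionally require iterated stochastic integrals, which can be simulated from such increments (or approximated), but that is beyond this generic sketch.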
Estimation fusion has been investigated for more than two decades. Target tracking and motion analysis demand practical fusion systems for both military and non-military applications. Most research efforts on estimation fusion focus on the optimal fusion rule [4, 16, 54, 53, 50, 48, 49, 85] at the fusion center. With the advances of modern sensor technology, such as wireless networks connecting MEMS microsensors, many new issues in the estimation fusion area arise. In general, estimation fusion can occur at any level at which the data processor has access to measurements from multiple sensors regarding the same target state. It is therefore natural to consider communication, computation and power constraints in the fusion system. In a real situation, both the optimal sensor data processing and the optimal fusion rule need to be taken into consideration, since information is shared by the whole fusion system under various constraints.
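For independent, unbiased local estimates, the textbook information-weighted fusion rule at the fusion center (a generic sketch, not necessarily the optimal rule of the cited references, which handle correlated estimates and constraints) is:

```python
import numpy as np

def fuse(estimates, covariances):
    """Information-weighted fusion of independent, unbiased estimates:
    P = (sum_i P_i^{-1})^{-1},  x = P * sum_i P_i^{-1} x_i."""
    infos = [np.linalg.inv(P) for P in covariances]
    P = np.linalg.inv(sum(infos))
    x = P @ sum(I @ np.asarray(xi, dtype=float)
                for I, xi in zip(infos, estimates))
    return x, P

# two 1-D sensors observing the same state with equal uncertainty
x1, P1 = np.array([1.0]), np.array([[1.0]])
x2, P2 = np.array([3.0]), np.array([[1.0]])
xf, Pf = fuse([x1, x2], [P1, P2])  # midpoint 2.0, variance halved to 0.5
```

Note that the fused covariance is smaller than either input covariance, which is the basic payoff of fusing multiple sensors.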
in a fuzzy environment. In this study, part demands, machine capacities, and Exceptional Elements (EEs) elimination costs were considered as fuzzy numbers. The objective functions included the minimization of the EE elimination cost, the minimization of inter-cell movements, and the maximization of utilized machine capacity. Ghezavati and Saidi-Mehrabad  addressed an integrated mathematical model of CF and group scheduling problems in an uncertain environment. It was assumed that the processing time of parts on machines was a stochastic parameter represented by discrete scenarios. The main goal of their model was to minimize the total expected costs of maximum tardiness, EE subcontracting, and resource under-utilization. A hybrid genetic-simulated annealing algorithm was employed as a solution method. Das and Abdul-Kader  presented a bi-objective integer-programming model for designing a CMS by considering dynamic changes in machine reliability and part demands. The first objective function was to minimize the total system costs, including the manufacturing, inter-cell material handling, machine under-utilization, and machine duplication costs. The second objective function was to maximize the total system reliability. An ε-constraint solution method was used to solve the problem. Ghezavati and Saidi-Mehrabad  applied a queuing theory approach to design a CMS with exponentially distributed service and arrival times. It was assumed that each machine worked as a server and each part was a customer to be served by the machines. They formulated a mathematical model to maximize the average utilization level of machines. A hybrid method based on genetic and simulated annealing algorithms was exerted to solve the problem. Rabbani et al.  proposed a bi-objective CF problem in which part demands were expressed by probabilistic scenarios.
A two-stage stochastic programming model was presented to handle the uncertain demand of parts. The expected variable cost of all machines and the expected inter-cell material handling cost were considered in the first objective function; the total expected cell load variation was the second objective function. They applied a two-phase fuzzy linear programming approach to solve the problem. Forghani et al.  applied an interval robust optimization approach to take the uncertainty of part demands into account. An integrated CF and layout problem was then formulated to minimize the inter- and intra-cell material handling costs.
Paper outline. The paper is organized as follows. In Section 2, the problem of interest is formulated and analyzed. In Section 3, a new stochastic proximal point (SPP) algorithm is introduced and its relations with previous work are highlighted. In Section 4 we provide the first main result of this paper, regarding the nonasymptotic convergence of SPP in the convex case. Stronger convergence results are presented in Section 5 for smooth, strongly convex objective functions. To improve the convergence of the simple SPP scheme, in Section 6 we introduce a restarted variant of the algorithm. In Section 7 we provide preliminary numerical simulations highlighting the empirical performance of our schemes. Long proofs are deferred to the Appendix.
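To fix ideas, here is a minimal stochastic proximal point sketch for least squares, where the proximal step of a sampled component has a closed form (the problem data, stepsize rule, and iteration count are illustrative assumptions, not the paper's setup):

```python
import random

def spp_least_squares(A, b, steps=5000, alpha0=1.0, seed=0):
    """SPP for f(x) = 0.5 * sum_i (a_i . x - b_i)^2: at each step,
    sample one component and apply its exact proximal operator,
        x+ = x - a_i * (a_i . x - b_i) / (1/alpha_k + ||a_i||^2),
    with a diminishing stepsize alpha_k = alpha0 / k."""
    rng = random.Random(seed)
    n, d = len(A), len(A[0])
    x = [0.0] * d
    for k in range(1, steps + 1):
        alpha = alpha0 / k
        i = rng.randrange(n)
        a, bi = A[i], b[i]
        r = sum(aj * xj for aj, xj in zip(a, x)) - bi
        scale = r / (1.0 / alpha + sum(aj * aj for aj in a))
        x = [xj - aj * scale for aj, xj in zip(a, x)]
    return x

# consistent 2x2 system with solution (1, 2)
A = [[1.0, 0.0], [0.0, 1.0]]
b = [1.0, 2.0]
x = spp_least_squares(A, b)  # approaches (1, 2)
```

Unlike SGD, the step solves the sampled subproblem exactly, which is what gives SPP its stability for poorly scaled components.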
With this basis we have reformulated the GIATI methodology to infer stochastic k-TSS bi-languages for machine translation purposes, taking advantage of the knowledge about stochastic k-TSS languages and their application to natural language tasks. Moreover, the finite-state formalism allows easy integration of other automata representing target language models or acoustic models in speech translation tasks. However, the monotonic segmentation does not allow us to deal with long-distance alignments, which is a problem when the distance between the pair of languages is large. On the other hand, smoothing techniques dealing with any pair of strings also need to be further explored.
equations are used in the study of queues, insurance risk, dams and, more recently, mathematical finance. On the other hand, some recent research in automatic control, such as Boukas and Liu (2002) and Ji and Chizeck (1990), has been devoted to stochastic differential equations with Markovian jumps. As a popular and important topic, the stability of stochastic differential equations has always lain at the center of our understanding of the stochastic models described by these equations. Dong and Xu (2007) proved the global existence and uniqueness of the strong, weak and mild solutions, and the existence of an invariant measure, for the one-dimensional Burgers equation on [0, 1] with a random perturbation of the body forces in the form of Poisson and Brownian motion. Later, uniqueness of the invariant measure was given in Dong (2008). Röckner and Zhang (2007) established existence and uniqueness of solutions of stochastic evolution equations driven both by Brownian motion and by Poisson point processes via successive approximations. In addition, a large deviation principle was obtained for stochastic evolution equations driven by additive Lévy noise. Svishchuk and Kazmerchuk (2002) made a first attempt to study the pth-moment exponential stability of solutions of linear Itô stochastic delay differential equations with Poisson jumps and Markovian switching, motivated by practical applications in mathematical finance. Quite recently, Luo and Liu (2008) considered a strong-solution approximation approach for mild solutions of stochastic functional differential equations with Markovian switching driven by Lévy martingales in Hilbert space. In addition, sufficient conditions for moment exponential stability and almost sure exponential stability were established by Razumikhin-Lyapunov type function methods and comparison principles.