
[PDF] Top 20 ON THE FIRST PASSAGE g-MEAN-VARIANCE OPTIMALITY FOR DISCOUNTED CONTINUOUS-TIME MARKOV DECISION PROCESSES

10,000 documents matching "ON THE FIRST PASSAGE g-MEAN-VARIANCE OPTIMALITY FOR DISCOUNTED CONTINUOUS-TIME MARKOV DECISION PROCESSES" were found on our website. Below are the top 20 most relevant.

ON THE FIRST PASSAGE g-MEAN-VARIANCE OPTIMALITY FOR DISCOUNTED CONTINUOUS-TIME MARKOV DECISION PROCESSES

... The mean and variance of the discounted total reward for each policy are evaluated up to the (random) first passage time of the controlled process to a target set, instead of over the ...

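For readers skimming this listing, the criterion in the excerpt can be written out explicitly (a sketch in our own notation, which may differ from the paper's):

    \[
    R^\pi \;=\; \int_0^{\tau_B} e^{-\alpha t}\, r(x_t, a_t)\, dt,
    \qquad
    \mu(\pi) \;=\; \mathbb{E}^\pi_x\!\left[R^\pi\right],
    \qquad
    \sigma^2(\pi) \;=\; \mathbb{E}^\pi_x\!\left[\bigl(R^\pi - \mu(\pi)\bigr)^2\right],
    \]

where \tau_B is the first passage time of the controlled process to the target set B and \alpha > 0 is the discount rate; mean-variance optimality then asks for a policy that minimizes the variance \sigma^2(\pi) among the policies attaining the optimal mean \mu(\pi).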

Variance Optimization for Continuous Time Markov Decision Processes

... the variance optimization problem of continuous-time Markov decision processes, which is different from the mean-variance optimization problem previously ...


Randomized and Relaxed Strategies in Continuous-Time Markov Decision Processes

... a and a current state x), CTMDPs are much more popular; see articles and monographs [7, 9, 10, 11, 16, 20, 23, 25] and references therein. In the case of discounted total expected cost, an excellent discussion ...


Continuous time Markov decision process of a single aisle warehouse

... the first open empty location to store the product or ...the time the number of classes is restricted to three, although more classes might reduce travel times in some cases (de Koster et ...using ...


Random walks on complex trees

... dynamical processes taking place on top of them ...pair-annihilation processes [14], and a model for norm spreading ...dynamical processes on looped network can be reasonably accounted for by ...


Optimization of Cash Management Fluctuation through Stochastic Processes

... Managing the cash balance is important in business administration, but rarely do we apply the techniques presented in this study in practice. Much of this neglect is due to the difficulty in developing models closer to ...


Sufficient Markov Decision Processes.

... The fight against human traffickers is dynamically evolving with adversarial opponents. Traffickers often avoid using explicit keywords when advertising victims, but instead use acronyms, intentional typos, and emojis. ...


Continuous-observation partially observable semi-Markov decision processes for machine maintenance

... semi-Markov decision processes (POSMDPs) provide a rich framework for planning under both state transition uncertainty and observation ...yet continuous-observation ...is continuous ...


Continuous Observation Partially Observable Semi Markov Decision Processes for Machine Maintenance

... To date, the POSMDP model has not been developed for the case of discrete states, discrete actions and continuous observations. In addition, many of the applications of the POSMDP have failed to capture the ...

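To make the "continuous observations" point concrete: the belief filter such a model rests on replaces observation probabilities with densities. A minimal sketch of the generic POMDP-style update in NumPy (ignoring the semi-Markov sojourn times; the function name and toy numbers are ours, not the paper's):

    import numpy as np

    def belief_update(b, P_a, obs_density, o):
        """Discrete-state belief update with a continuous observation o.
        b: (n,) current belief; P_a: (n, n) transition matrix for the chosen action;
        obs_density(o, s): density f(o | s). Returns the normalized posterior belief."""
        predicted = b @ P_a                          # prediction through the dynamics
        likelihood = np.array([obs_density(o, s) for s in range(len(b))])
        posterior = likelihood * predicted           # correction by the observation density
        return posterior / posterior.sum()

    # toy machine-maintenance example: 2 hidden conditions, Gaussian sensor reading
    P_a = np.array([[0.9, 0.1],      # healthy machine may degrade
                    [0.0, 1.0]])     # degraded machine stays degraded
    gauss = lambda o, s: np.exp(-0.5 * (o - [0.0, 2.0][s]) ** 2) / np.sqrt(2 * np.pi)
    print(belief_update(np.array([0.8, 0.2]), P_a, gauss, o=1.5))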

THE VARIANCE-OPTIMAL MARTINGALE MEASURE FOR CONTINUOUS PROCESSES

... a continuous process S = M + α · ⟨M⟩ such that there exist equivalent martingale measures Q (even with dQ/dP uniformly bounded) but nevertheless the local martingale ℰ(−α · M) is not uniformly ...

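For orientation, the object in the title admits a one-line definition (our paraphrase of the standard one; in the original theory the minimum runs over signed martingale measures with square-integrable density):

    \[
    \widehat{Q} \;=\; \operatorname*{arg\,min}_{Q \in \mathcal{M}_s(S)} \; \mathbb{E}\!\left[\Bigl(\tfrac{dQ}{dP}\Bigr)^{2}\right],
    \]

where \mathcal{M}_s(S) denotes the signed measures Q with Q(\Omega) = 1 and dQ/dP \in L^2 that turn S into a martingale in the appropriate weak sense; the excerpt's point is that the natural candidate density ℰ(−α · M) can fail to be uniformly integrable, so it need not define an equivalent martingale measure even when such measures exist.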

Small sets and Markov transition densities

... for Markov chain Monte Carlo (see also the extended notion of pseudo-small sets described by Roberts and Rosenthal [20, 21]) and also (under the rubric of gamma-coupling) to produce effective Coupling from the ...

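For readers unfamiliar with the term, the notion behind the title is the standard minorization condition (our paraphrase of the usual textbook definition): a set C is small if there exist m ≥ 1, ε > 0 and a probability measure ν such that

    \[
    P^m(x, A) \;\ge\; \varepsilon\, \nu(A) \qquad \text{for all } x \in C \text{ and all measurable } A;
    \]

the pseudo-small sets mentioned in the excerpt relax this to a pairwise version of the same bound, which is still enough to couple two copies of the chain started in C.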

Asymptotics of rare events in birth–death processes bypassing the exact solutions

... of mean first passage times in a class of discrete state space Markov processes, namely one-variable birth–death ...of processes that we consider here, exact—albeit ...

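The "exact solutions" the excerpt alludes to are the classical first-passage sums for birth–death chains. As a reference point (a sketch assuming birth rates λ_n, death rates μ_n and a reflecting boundary at 0, which may differ from the paper's setup), the mean time to go from n to n+1, and from 0 to N, are

    \[
    T_n \;=\; \frac{1}{\lambda_n} + \frac{\mu_n}{\lambda_n}\,T_{n-1}
    \;=\; \sum_{j=0}^{n} \frac{1}{\lambda_j} \prod_{i=j+1}^{n} \frac{\mu_i}{\lambda_i},
    \qquad
    \mathbb{E}\!\left[\tau_{0 \to N}\right] \;=\; \sum_{n=0}^{N-1} T_n ,
    \]

with T_0 = 1/\lambda_0 starting the recursion.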

Some contributions to Markov decision processes

... Chapters 5 and 6 tackle MDPs with the long-run expected average cost criterion. In Chapter 5, we consider a constrained MDP with possibly unbounded (from both above and below) cost functions. Under Lyapunov-like ...


ICTS AND ECONOMIC GROWTH IN AFRICAN COUNTRIES

... discrete Markov processes in pure birth-death processes with a finite number of states were ...the Markov pure birth-death processes can be distinguished by the value of the ...


Portfolio Selection in Mean Minimum Return Level Expected Bounded First Passage Time Framework

... The paper is structured in four main parts. The second section examines the differential equations which represent the multi-dimensional Itô processes and constructs the portfolio process. Within this section, ...


Augmenting Markov Decision Processes with Advising

... Advice-MDPs aim to enable operators to augment the planning model with advice, towards fitting the irregularities of the environment (like ASRL) with prescriptive a priori advising (like MIPSs). This way, Advice-MDPs ...


Compositional Reasoning for Markov Decision Processes

... After performing the action up it finds itself either in a state in which the action down will accrue the benefit 2, or 25% of the time there will be a nondeterministic choice between it accruing either 1 or 6. In ...

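As a reading aid, the quoted toy scenario can be evaluated in a few lines (the 75%/25% split and the payoffs are our reading of the truncated excerpt, not the paper's exact model):

    # Our reading of the excerpt: after "up", with prob. 0.75 the agent reaches a state
    # where "down" accrues benefit 2; with prob. 0.25 a nondeterministic choice
    # yields either benefit 1 or benefit 6.
    p = 0.25
    best = (1 - p) * 2 + p * max(1, 6)    # angelic resolution of the nondeterminism
    worst = (1 - p) * 2 + p * min(1, 6)   # demonic resolution
    print(best, worst)                    # 3.0 1.75

The gap between the two values is exactly what distinguishes nondeterministic branching from probabilistic branching in an MDP.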

Generalized inverses, stationary distributions and mean first passage times with applications to perturbed Markov chains

... 1, G(α, β, γ) satisfies condition 3 if and only if α = π/π′π, G(α, β, γ) satisfies condition 4 if and only if β = e/e′e, G(α, β, γ) ...

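For concreteness, the classical fundamental-matrix computation of stationary distributions and mean first passage times, which this line of generalized-inverse work extends, can be sketched in a few lines of NumPy (the function name and toy chain are ours):

    import numpy as np

    def mean_first_passage_times(P):
        """Stationary distribution and mean first passage times of an ergodic chain,
        via the Kemeny-Snell fundamental matrix Z = (I - P + e*pi')^{-1};
        M[i, j] = (Z[j, j] - Z[i, j]) / pi[j] for i != j, and M[j, j] = 1 / pi[j]."""
        n = P.shape[0]
        evals, evecs = np.linalg.eig(P.T)
        pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
        pi = pi / pi.sum()                                   # stationary distribution
        Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
        M = (np.diag(Z)[None, :] - Z) / pi[None, :]
        np.fill_diagonal(M, 1.0 / pi)                        # mean recurrence times
        return pi, M

    # toy 3-state chain, for illustration only
    P = np.array([[0.5, 0.3, 0.2],
                  [0.1, 0.6, 0.3],
                  [0.2, 0.2, 0.6]])
    pi, M = mean_first_passage_times(P)
    print(pi)
    print(M)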

Strategy improvement algorithm for singularly perturbed discounted Markov decision processes

... the discounted expected criterion; we give explicitly the limit Markov control problem (limit MCP) that is entirely different from the original unperturbed MDP, which forms an appropriate asymptotic to a ...


Ring structures and mean first passage time in networks

... s of random walkers starting from different nodes on a generic network. We have introduced a new approximate method, based on the concept of rings, which maps the original Markov process on another Markov ...

