In this paper, a nonlocal convection-diffusion model is introduced for the master equation of Markov jump processes in bounded domains. With minimal assumptions on the model parameters, the nonlocal steady-state and unsteady-state master equations are shown to be well-posed in a weak sense. Finally, the nonlocal operator is shown to be the generator of finite-range nonsymmetric jump processes and, when certain conditions on the model parameters hold, the generators of finite- and infinite-activity Lévy and Lévy-type jump processes are shown to be special instances of the nonlocal operator.
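A finite-range nonlocal operator of the kind described above can be approximated by a simple quadrature on a uniform grid. The sketch below is purely illustrative — the kernel, grid, and interaction radius delta are placeholder choices, not the paper's model parameters:

```python
import numpy as np

def nonlocal_operator(u, x, kernel, delta):
    """Quadrature sketch of a finite-range nonlocal (jump-type) operator
       (L u)(x_i) = sum over |x_j - x_i| <= delta of (u_j - u_i) * kernel(x_i, x_j) * h
    on a uniform grid x with spacing h.  The kernel plays the role of a
    (possibly nonsymmetric) jump-rate density; this is an illustration,
    not the discretization analyzed in the paper."""
    h = x[1] - x[0]
    Lu = np.zeros_like(u)
    for i, xi in enumerate(x):
        mask = np.abs(x - xi) <= delta   # finite interaction horizon
        Lu[i] = np.sum((u[mask] - u[i]) * kernel(xi, x[mask])) * h
    return Lu
```

As a sanity check, the operator annihilates constant functions, since every difference u_j - u_i vanishes.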

We provide a new approach for analyzing both static and dynamic randomized load balancing strategies. We demonstrate the approach by providing the first analysis of the following model: customers arrive as a Poisson stream of rate λn, λ < 1, at a collection of n servers. Each customer chooses some constant d servers independently and uniformly at random from the n servers, and waits for service at the one with the fewest customers. Customers are served according to the first-in first-out (FIFO) protocol, and the service time for a customer is exponentially distributed with mean 1. We call this problem the supermarket model. We wish to know how the system behaves, and in particular we are interested in the expected time a customer spends in the system in equilibrium. The model provides a good abstraction of a simple, efficient load balancing scheme in the setting where jobs arrive at a large system of parallel processors. This model appears more realistic than similar models studied previously, in that it is both dynamic and open: that is, customers arrive over time, and the number of customers is not fixed. Our approach consists of two distinct stages: we first develop a limiting, deterministic model representing the behavior as n → ∞, and then show how to translate results from this model to results for large, but finite, values of n. The analysis of the deterministic model is interesting in its own right. This methodology proves effective for studying a number of similar problems, and simulations demonstrate that the method accurately predicts system behavior even for relatively small systems.
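The supermarket model lends itself to direct simulation. The following event-driven sketch implements the dynamics described above — Poisson arrivals at rate λn, each customer sampling d servers and joining the shortest queue, exponential service with mean 1, FIFO per server. The specific parameter values (n, lam, d, number of customers) are illustrative, not taken from the abstract:

```python
import heapq
import random

def supermarket_sim(n=50, lam=0.7, d=2, num_customers=20000, seed=1):
    """Estimate the mean time a customer spends in the supermarket model.
    Arrivals form a Poisson process of rate lam*n; each arrival samples d
    servers uniformly at random and joins the one with the fewest customers;
    service times are exponential with mean 1, served FIFO at each server."""
    rng = random.Random(seed)
    queues = [[] for _ in range(n)]    # per-server list of arrival times
    events, seq = [], 0                # heap of (time, seq, kind, server)
    heapq.heappush(events, (rng.expovariate(lam * n), seq, 'arrive', -1))
    sojourn_sum, completed = 0.0, 0
    while completed < num_customers:
        t, _, kind, s = heapq.heappop(events)
        if kind == 'arrive':
            # d choices: join the server with the fewest customers
            s = min(rng.sample(range(n), d), key=lambda i: len(queues[i]))
            queues[s].append(t)
            if len(queues[s]) == 1:    # server was idle: begin service now
                seq += 1
                heapq.heappush(events, (t + rng.expovariate(1.0), seq, 'depart', s))
            seq += 1                   # schedule the next Poisson arrival
            heapq.heappush(events, (t + rng.expovariate(lam * n), seq, 'arrive', -1))
        else:                          # departure: record sojourn time
            arrival_time = queues[s].pop(0)
            sojourn_sum += t - arrival_time
            completed += 1
            if queues[s]:              # start service for the next in line
                seq += 1
                heapq.heappush(events, (t + rng.expovariate(1.0), seq, 'depart', s))
    return sojourn_sum / completed
```

With d = 1 each server behaves as an independent M/M/1 queue (mean sojourn 1/(1 - λ)); already at d = 2 the simulated mean sojourn time drops sharply, consistent with the abstract's claim that the method predicts system behavior well even for modest n.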

This paper deals with a mean-variance problem for finite horizon semi-Markov decision processes. The state and action spaces are Borel spaces, while the reward function may be unbounded. The goal is to seek an optimal policy with minimal finite horizon reward variance over the set of policies with a given mean. Using the theory of N-step contraction, we give a characterization of policies with a given mean and convert the second-order moment of the finite horizon reward to the mean of an infinite horizon reward/cost generated by a discrete-time Markov decision process (MDP) with a two-dimensional state space and a new one-step reward/cost under suitable conditions. We then establish the optimality equation and the existence of mean-variance optimal policies by employing the existing results of discrete-time MDPs. We also provide a value iteration algorithm and a policy improvement algorithm for computing the value function and mean-variance optimal policies, respectively. In addition, a linear program and its dual are developed for solving the mean-variance problem.
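For context, the value iteration routine for a generic finite discrete-time MDP — the class of algorithm the paper adapts — can be sketched as follows. The transition matrices and rewards below are illustrative placeholders, not the paper's two-dimensional mean-variance construction:

```python
import numpy as np

def value_iteration(P, r, gamma=0.95, tol=1e-8, max_iter=10000):
    """Standard value iteration for a finite discrete-time MDP.
    P[a] : (n_states, n_states) transition matrix under action a
    r[a] : (n_states,) one-step reward for action a in each state
    Returns the (approximate) optimal value function and a greedy policy."""
    n_states = P[0].shape[0]
    V = np.zeros(n_states)
    for _ in range(max_iter):
        # Bellman backup: Q[a, s] = r[a][s] + gamma * sum_s' P[a][s, s'] V[s']
        Q = np.array([r[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    policy = Q.argmax(axis=0)   # greedy policy from the converged Q-values
    return V, policy
```

On a toy two-state example in which one action always pays reward 1 and the other pays 0, the iteration converges geometrically (at rate gamma) to the fixed point V = 1/(1 - gamma) and the greedy policy selects the rewarding action in every state.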

We introduce singular perturbation methods for constructing asymptotic approximations to the mean first passage time for Markov jump processes. Our methods are applied directly to the integral equation for the mean first passage time and do not involve the use of diffusion approximations. An absorbing interval condition is used to properly account for the possible jumps of the process over the boundary, which leads to a Wiener-Hopf problem in the neighborhood of the boundary. A model of unimolecular dissociation is considered to illustrate our method.
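To make the target quantity concrete, the sketch below estimates a mean first passage time by Monte Carlo for a simple nearest-neighbour jump process. This illustrates only the quantity being approximated, not the paper's singular perturbation method; note also that a nearest-neighbour walk cannot overshoot the boundary, whereas the paper's absorbing interval condition is needed precisely because general jump processes can. All parameters here are illustrative:

```python
import random

def mean_first_passage(x0=0, boundary=10, rate=1.0, p_up=0.6,
                       trials=5000, seed=0):
    """Monte Carlo estimate of the mean first passage time of a
    continuous-time nearest-neighbour jump process started at x0,
    run until it first reaches |x| >= boundary.  Jumps occur at
    exponential rate `rate`; each jump is +1 w.p. p_up, else -1."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x, t = x0, 0.0
        while abs(x) < boundary:
            t += rng.expovariate(rate)            # waiting time to next jump
            x += 1 if rng.random() < p_up else -1  # jump direction
        total += t
    return total / trials
```

As a check, with p_up = 1 the walk moves deterministically upward, so the first passage from 0 to the boundary at 10 takes exactly 10 jumps and the mean first passage time is 10 (the sum of 10 unit-mean exponential waiting times).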