Rare event simulation for stochastic models of complex systems remains a major challenge, even for Markovian models. We review results on importance sampling for Markov chains, provide new viewpoints and insights, and pose directions for future research.
Packet losses can significantly degrade the quality of service (QoS) provided by networks. In this paper, asynchronous bufferless optical packet switched networks with multiple service classes and full wavelength conversion are considered. When the number of wavelengths per fibre is large, packet losses become rare and thus intractable by conventional performance evaluation methodologies. We present a novel fast simulation technique for efficiently estimating the loss rates in this setting. Our technique applies Importance Sampling, which proceeds by simulating under an alternative probability distribution to accelerate the occurrence of losses and by appropriately weighting the results to obtain unbiased estimators. The parameters determining the alternative distribution are obtained adaptively by ant colony optimization, a metaheuristic in widespread use for solving combinatorial optimization problems. Thereby, no intimate a priori knowledge of the system under consideration is required to set up the simulation; rather, the parameters adapt to the model. The accuracy and efficiency of the novel technique are demonstrated by numerical results.
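As a minimal illustration of the likelihood-ratio weighting that importance sampling rests on (not of the ant colony adaptation itself), the following sketch estimates, for a single-class loss model with W wavelengths, the probability that all wavelengths become busy before the system empties again. The rates lam and mu, the number of wavelengths W, and the chosen change of measure are illustrative assumptions, not parameters or rules taken from the paper.

import random

lam, mu, W = 1.0, 1.0, 20           # arrival rate, per-wavelength service rate, wavelengths (illustrative)

def one_path():
    # One embedded-chain path started with a single busy wavelength;
    # returns the likelihood-ratio weighted indicator of reaching W before 0.
    n, L = 1, 1.0
    while 0 < n < W:
        p = lam / (lam + n * mu)    # original probability that the next event is an arrival
        q = 1.0 - p                 # IS measure: interchange arrival and service roles (classic heuristic)
        if random.random() < q:     # arrival under the IS measure
            L *= p / q
            n += 1
        else:                       # departure under the IS measure
            L *= (1.0 - p) / (1.0 - q)
            n -= 1
    return L if n == W else 0.0

samples = [one_path() for _ in range(100_000)]
print(sum(samples) / len(samples))  # unbiased estimate of P(all W wavelengths busy before emptying)

Because the estimated probability depends only on the embedded jump chain, the likelihood ratio involves only the embedded transition probabilities. In the paper, the transition probabilities of the change of measure are not fixed a priori as above but tuned adaptively by the ant colony scheme.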
Systems of stochastic chemical kinetics are modeled as infinite level-dependent quasi birth-and-death (LDQBD) processes. In contrast to many other applications, the number of states per level grows with the level number, and the probability mass may reside arbitrarily far away from the lower levels. Ideas from Lyapunov theory are combined with existing matrix-analytic formulations to obtain accurate approximations to the stationary probability distribution when the infinite LDQBD process is ergodic. Results of numerical experiments on a set of problems are provided.
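For context, one commonly used Lyapunov (drift) bound of this kind can be sketched as follows; this is a textbook-style statement, and the precise conditions and bounds employed in the paper may differ. For an ergodic process with generator entries q(x,y) and a nonnegative Lyapunov function g,

\[
  d(x) \;=\; \sum_{y \neq x} q(x,y)\,\bigl(g(y)-g(x)\bigr), \qquad
  d(x) \le c \ \text{ for all } x, \qquad
  d(x) \le -\gamma < 0 \ \text{ for all } x \notin C
  \;\;\Longrightarrow\;\;
  \sum_{x \notin C} \pi(x) \;\le\; \frac{c}{c+\gamma}.
\]

Choosing g so that C is finite and c/(c+\gamma) \le \varepsilon then confines all but an \varepsilon fraction of the stationary probability mass to finitely many levels, on which a level-dependent matrix-analytic computation can be carried out.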
Scheduling policies significantly influence the performance of queueing systems, and performance measures such as response and waiting times, throughput, or related properties have been investigated comprehensively in queueing and scheduling theory. The important issue of quantifying user-perceived fairness has received less attention. Measuring fairness is complicated by its subjective nature, and it has attracted growing attention only in the past few years. Recently, an intuitive discrimination-frequency-based queueing fairness measure possessing important axiomatic properties has been introduced. In this paper, we derive analytical expressions for the expected discrimination frequency in M/M/1, M/D/1 and M/GI/1 queues operating under FCFS, non-preemptive LCFS, and SJF scheduling. Variances are evaluated by simulation, and special attention is paid to Pareto distributed service times.
Importance Sampling is a potentially powerful variance reduction technique for speeding up simulations whose objective depends on the occurrence of rare events. However, it is crucial to find a change of the underlying probability measure that yields estimators with significantly reduced variance compared to direct estimators. In this paper, we present a new dynamic and adaptive method for this purpose. The method is inspired by ant-based systems, which are in widespread use for solving optimization problems. No intimate knowledge of the model under consideration is necessary; instead, the method adapts to it. The new method is described in terms of a transition class formalism, which supports commonly used modeling paradigms such as queueing and reliability models, among many others. Simulation results demonstrate the accuracy of the obtained estimates, and details of the adapted change of measure are investigated to gain insight into the inner workings of the method.
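To convey the flavour of such an ant-style adaptation, the following generic sketch adapts the state-dependent arrival probabilities of the embedded chain of a birth-death model through a pheromone-style evaporation-and-deposit update. It is only an illustration of that idea, not the transition-class method of the paper; the rates, the evaporation factor rho, the batch size, and the number of adaptation rounds are assumptions made for this sketch.

import random

lam, mu, W, rho = 1.0, 1.0, 20, 0.2         # model rates, target level, evaporation factor (illustrative)
tau_up = {n: 0.5 for n in range(1, W)}      # pheromone = current IS probability of an arrival in state n

def p_up(n):
    # original embedded-chain probability that the next event in state n is an arrival
    return lam / (lam + n * mu)

for it in range(50):                        # adaptation rounds
    deposit = {n: 0.0 for n in range(1, W)}
    norm = {n: 0.0 for n in range(1, W)}
    for _ in range(1000):                   # batch of paths under the current pheromone measure
        n, L, visits = 1, 1.0, []
        while 0 < n < W:
            q = tau_up[n]
            if random.random() < q:
                L *= p_up(n) / q
                visits.append((n, True))
                n += 1
            else:
                L *= (1.0 - p_up(n)) / (1.0 - q)
                visits.append((n, False))
                n -= 1
        if n == W:                          # successful path: deposit along its transitions, weighted by L
            for state, went_up in visits:
                norm[state] += L
                if went_up:
                    deposit[state] += L
    for s in range(1, W):                   # evaporation plus normalized deposit
        if norm[s] > 0.0:
            tau_up[s] = (1.0 - rho) * tau_up[s] + rho * (deposit[s] / norm[s])

print([round(tau_up[s], 3) for s in (1, W // 2, W - 1)])   # a few adapted arrival probabilities

Paths that reach the rare level deposit pheromone on the transitions they used, weighted by their likelihood ratios, so that the adapted probabilities approximate the conditional (zero-variance) dynamics; a final estimation run would then apply ordinary likelihood-ratio weighting under the adapted measure.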
Many complex systems can be modeled via Markov jump processes. Applications include chemical reactions, population dynamics, and telecommunication networks. Rare-event estimation for such models can be difficult and is often computationally expensive, because typically many (or very long) paths of the Markov jump process need to be simulated in order to observe the rare event. We present a state-dependent importance sampling approach to this problem that is adaptive and uses Markov chain Monte Carlo to sample from the zero-variance importance sampling distribution. The method is applicable to a wide range of Markov jump processes and achieves high accuracy, while requiring only a small sample to obtain the importance parameters. We demonstrate its efficiency through benchmark examples in queueing theory and stochastic chemical kinetics.
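A toy sketch of this two-stage idea, under illustrative assumptions: the rare event is a large sum of independent Exp(1) variables, the sampler is single-coordinate random-walk Metropolis, and the fitted importance sampling family consists of independent exponentials matched to the conditional means. None of this reproduces the paper's state-dependent scheme for Markov jump processes.

import math, random

d, level = 10, 40.0                        # dimension and rare-event threshold (illustrative)
burn_in, n_mcmc, n_is = 1000, 5000, 100_000

# Stage 1: random-walk Metropolis targeting the zero-variance IS density, i.e. the
# Exp(1)^d density conditioned on the rare event {x_1 + ... + x_d > level}.
x = [level / d + 1.0] * d                  # a feasible starting point inside the event
cond_samples = []
for step in range(burn_in + n_mcmc):
    i = random.randrange(d)
    new_xi = x[i] + random.gauss(0.0, 1.0)          # symmetric single-coordinate proposal
    if new_xi > 0.0 and sum(x) - x[i] + new_xi > level:
        if random.random() < math.exp(-(new_xi - x[i])):   # Exp(1) density ratio in coordinate i
            x[i] = new_xi
    if step >= burn_in:
        cond_samples.append(list(x))

# Stage 2: fit a simple parametric IS family (independent exponentials) by matching
# the conditional means, a cross-entropy-style choice of the importance parameters.
means = [sum(s[i] for s in cond_samples) / len(cond_samples) for i in range(d)]

# Stage 3: ordinary importance sampling under the fitted change of measure.
acc = 0.0
for _ in range(n_is):
    y = [random.expovariate(1.0 / m) for m in means]
    if sum(y) > level:
        # log-likelihood ratio of the original Exp(1)^d density over the IS density
        log_w = sum(-yi + math.log(m) + yi / m for yi, m in zip(y, means))
        acc += math.exp(log_w)
print(acc / n_is)                          # estimate of P(X_1 + ... + X_d > level), X_i i.i.d. Exp(1)

The Markov chain Monte Carlo stage needs only a modest number of samples because it is used solely to obtain the importance parameters; the final unbiased estimate comes from the ordinary importance sampling stage.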
In many scenarios, services are provided in successive stages. While tandem queues appropriately reflect the structure of such scenarios, the typical assumption that service times at different stages are independent often does not hold in reality. We examine, via simulation, the impact of dependencies among service times on expected customer waiting times. The usual network simulation overhead caused by event list handling is avoided by extending the Lindley recursion to the two-stage case. Numerical results are presented for exponentially and uniformly distributed service times with different types of dependencies and varying server utilizations.
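A minimal sketch of such a two-stage Lindley recursion, with an illustrative common-component dependence between the service times of the two stages; the arrival rate, the service-time distributions, and the dependence construction are assumptions made for this sketch, not the settings studied in the paper.

import random

n_customers, lam = 200_000, 0.5        # number of customers and Poisson arrival rate (illustrative)

def dependent_services():
    # illustrative positive dependence: both stages share a common exponential component
    z = random.expovariate(2.0)
    return z + random.expovariate(2.0), z + random.expovariate(2.0)

w1 = w2 = 0.0                          # waiting times of the previous customer (Lindley state)
s1_prev = s2_prev = 0.0                # service times of the previous customer
d1_prev = 0.0                          # previous customer's departure epoch from stage 1
t = 0.0                                # arrival epoch of the current customer
sum_w1 = sum_w2 = 0.0
for k in range(n_customers):
    a = random.expovariate(lam)        # interarrival time
    t += a
    s1, s2 = dependent_services()
    w1 = max(0.0, w1 + s1_prev - a)    # classical Lindley recursion at stage 1
    d1 = t + w1 + s1                   # departure from stage 1 = arrival at stage 2
    # stage 2: Lindley recursion driven by the inter-departure times of stage 1
    w2 = 0.0 if k == 0 else max(0.0, w2 + s2_prev - (d1 - d1_prev))
    sum_w1 += w1
    sum_w2 += w2
    s1_prev, s2_prev, d1_prev = s1, s2, d1

print(sum_w1 / n_customers, sum_w2 / n_customers)   # estimated mean waiting times at stages 1 and 2

Because only waiting times are tracked, no event list is needed; the inter-departure times of stage 1 play the role of interarrival times for stage 2.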