A Constrained Langevin Approximation for Chemical Reaction Networks

S. C. LEITE and R. J. WILLIAMS


Continuous-time Markov chain models are often used to describe the stochastic dynamics of networks of reacting chemical species, especially in the growing field of systems biology. These Markov chain models are often studied by simulating sample paths to generate Monte Carlo estimates. However, discrete-event stochastic simulation of these models rapidly becomes computationally intensive. Consequently, more tractable diffusion approximations are commonly used in numerical computation, even for modest-sized networks. Unfortunately, existing approximations either do not respect the constraint that chemical concentrations are never negative (linear noise approximation) or are typically only valid until the concentration of some chemical species first becomes zero (Langevin approximation).
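For readers less familiar with discrete-event stochastic simulation of such Markov chains, the sketch below shows a minimal Gillespie-type (stochastic simulation algorithm) implementation in Python for a hypothetical birth-death network with one species and two reactions. The network, rate constants, and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical example network (not from the paper): a birth-death system
#   0 -> S   at rate k1          (production)
#   S -> 0   at rate k2 * #S     (degradation)
k1, k2 = 10.0, 0.1
changes = np.array([[1], [-1]])                       # stoichiometric change vectors

def propensities(x):
    # Reaction rates (propensities) evaluated at the current state x.
    return np.array([k1, k2 * x[0]])

def gillespie(x0, t_end, rng=np.random.default_rng(0)):
    """Exact (Gillespie) simulation of the continuous-time Markov chain."""
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensities(x)
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)                # time to the next reaction
        j = rng.choice(len(a), p=a / a0)              # which reaction fires
        x += changes[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

times, states = gillespie([0], t_end=50.0)
```

Each simulated jump requires fresh random numbers and a propensity update, which is why the cost of this kind of simulation grows quickly with the number of reaction events.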

In this paper, we propose an approximation for such Markov chains via reflected diffusion processes that respects the fact that concentrations of chemical species are never negative. We call this a constrained Langevin approximation because it behaves like the Langevin approximation in the interior of the positive orthant, to which it is constrained by instantaneous reflection at the boundary of the orthant. An additional advantage of our approximation is that it can be written down immediately from the chemical reactions. This contrasts with the linear noise approximation, which involves a two-stage procedure: first solve a deterministic reaction rate ordinary differential equation, and then solve a stochastic differential equation for fluctuations around that solution. Our approximation also captures the interaction of nonlinearities in the reaction rate function with the driving noise. In simulations, we have found the computation time for our approximation to be at least comparable to, and often better than, that for the linear noise approximation.
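To make the flavor of the approximation concrete, the following is a rough Euler-type sketch for the same hypothetical birth-death network as above, in which the drift and noise terms are built from the reaction rate functions in the manner of a chemical Langevin equation and negative excursions are simply projected back onto the nonnegative orthant. The projection step is only a crude stand-in for the oblique reflection at the boundary that defines the constrained Langevin approximation in the paper; this is an illustration under stated assumptions, not the paper's numerical scheme.

```python
import numpy as np

# Same hypothetical birth-death network as before (an assumption for illustration):
#   0 -> S at rate k1,   S -> 0 at rate k2 * x.
k1, k2 = 10.0, 0.1
changes = np.array([[1.0], [-1.0]])                   # stoichiometric change vectors

def rates(x):
    return np.array([k1, k2 * x[0]])

def constrained_langevin_sketch(x0, t_end, dt=1e-3, rng=np.random.default_rng(0)):
    """Euler-type sketch: Langevin-style drift/noise plus projection onto the orthant."""
    n_steps = int(t_end / dt)
    x = np.array(x0, dtype=float)
    path = np.empty((n_steps + 1, len(x)))
    path[0] = x
    for i in range(1, n_steps + 1):
        lam = np.maximum(rates(x), 0.0)               # rate functions at the current state
        drift = changes.T @ lam                       # sum_j v_j * lambda_j(x)
        noise = changes.T @ (np.sqrt(lam) * rng.standard_normal(len(lam)))
        x = x + drift * dt + noise * np.sqrt(dt)
        x = np.maximum(x, 0.0)                        # crude stand-in for reflection at the boundary
        path[i] = x
    return path

path = constrained_langevin_sketch([0.0], t_end=50.0)
```

Note that both the drift and the diffusion coefficients are read off directly from the reaction rate functions, which is the sense in which this kind of approximation can be written down immediately from the chemical reactions.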

Under mild assumptions, we first prove that our proposed approximation is well defined for all time. Then we prove that it can be obtained as the weak limit of a sequence of jump-diffusion processes that behave like the Langevin approximation in the interior of the positive orthant and like a rescaled version of the Markov chain on the boundary of the orthant. For this limit theorem, we adapt an invariance principle for reflected diffusions, due to Kang and Williams, and modify a result on pathwise uniqueness for reflected diffusions, due to Dupuis and Ishii. Some numerical examples illustrate the advantages of our approximation over direct simulation of the Markov chain or use of the linear noise approximation.

Published in Annals of Applied Probability, 2019, Vol. 29, No. 3, 1541-1608.