Notes:
The original publication is available at link.springer.com.
Abstract.
Probabilistic model checking provides formal guarantees for stochastic models with respect to a wide range of quantitative properties. Typically, however, these guarantees concern only the expected value of such quantities, which can mask important aspects of the full probability distribution. We propose a distributional extension of probabilistic model checking for discrete-time Markov chains (DTMCs) and Markov decision processes (MDPs). We formulate distributional queries, which can reason about a variety of distributional measures, such as variance, value-at-risk or conditional value-at-risk, for the reward or cost accumulated until a co-safe linear temporal logic formula is satisfied. For DTMCs, we propose a method to compute the full distribution to an arbitrary level of precision. For MDPs, we approximate the optimal policy using distributional value iteration. We implement our techniques and investigate their performance and scalability across a range of large benchmark models.
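As a concrete illustration of the distributional measures named above (variance, value-at-risk, conditional value-at-risk), the following sketch computes them for a finite discrete distribution over accumulated rewards. This is not the paper's implementation, only a minimal reference for the definitions, using the standard Rockafellar–Uryasev formulation of CVaR so that atoms (point masses) are handled correctly; all function names here are illustrative.

```python
# Illustrative only: distributional measures over a finite discrete
# distribution, given as a list of (value, probability) pairs.

def mean(dist):
    """Expected value of the distribution."""
    return sum(v * p for v, p in dist)

def variance(dist):
    """Variance: E[(X - E[X])^2]."""
    m = mean(dist)
    return sum(p * (v - m) ** 2 for v, p in dist)

def value_at_risk(dist, alpha):
    """VaR at level alpha: smallest value v with P(X <= v) >= alpha."""
    acc = 0.0
    for v, p in sorted(dist):
        acc += p
        if acc >= alpha - 1e-12:  # tolerance for floating-point sums
            return v
    return max(v for v, _ in dist)

def cvar(dist, alpha):
    """CVaR at level alpha via the Rockafellar-Uryasev formula:
    VaR_alpha + (1 / (1 - alpha)) * E[max(X - VaR_alpha, 0)],
    i.e. the expected cost in the worst (1 - alpha) tail."""
    q = value_at_risk(dist, alpha)
    return q + sum(p * max(v - q, 0.0) for v, p in dist) / (1.0 - alpha)

# Example: a distribution where the mean hides a costly tail event.
dist = [(0, 0.5), (10, 0.3), (100, 0.2)]
print(mean(dist))                 # 23.0
print(variance(dist))             # 1501.0
print(value_at_risk(dist, 0.9))   # 100
print(cvar(dist, 0.9))            # 100.0
```

The example shows why expected values alone can be misleading: the mean cost is 23, but with probability 0.2 the cost is 100, which the tail-sensitive measures expose.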