Inverse problems: A Bayesian perspective
Frequently Asked Questions (11)
Q2. Why is sequential updating of the posterior natural for this class of algorithms?
The importance of this class of algorithms stems from the fact that, in many applications, solutions are required online, with updates required as more data are acquired; sequential updating of the posterior measure at the current time is therefore natural.
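A minimal sketch of such sequential updating, assuming a hypothetical scalar conjugate Gaussian model (an illustration, not an example from the paper): observations $y_k = u + \eta_k$ arrive one at a time, and the posterior after each datum serves as the prior for the next, yielding the same posterior as a batch update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar model: y_k = u + eta_k with eta_k ~ N(0, gamma^2),
# and Gaussian prior u ~ N(m, s2). Conjugacy gives a closed-form update.
u_true, gamma = 1.5, 0.3
m, s2 = 0.0, 1.0  # prior mean and variance

for k in range(20):
    y_k = u_true + gamma * rng.standard_normal()
    # Sequential update: the current posterior is the prior for the next datum.
    s2_post = 1.0 / (1.0 / s2 + 1.0 / gamma**2)
    m = s2_post * (m / s2 + y_k / gamma**2)
    s2 = s2_post

print(f"posterior mean {m:.3f}, posterior std {s2**0.5:.3f}")
```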
Q3. What is a commonly used method for interrogating a probability measure in high dimensions?
Another commonly used method for interrogating a probability measure in high dimensions is sampling: generating a set of points $\{u_n\}_{n=1}^{N}$ distributed (perhaps only approximately) according to $\pi^y(u)$.
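Once such samples are available, expectations under the posterior are estimated by empirical averages; the standard Monte Carlo estimator (stated here for reference) is

```latex
\mathbb{E}^{\pi^{y}}[f(u)] \;\approx\; \frac{1}{N}\sum_{n=1}^{N} f(u_n),
\qquad u_n \sim \pi^{y}(u)\ \text{(perhaps only approximately)}.
```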
Q4. What is the key ingredient in the definition of well-posed posterior measures?
This will enable us to measure the distance between pairs of probability measures, and is a key ingredient in the definition of well-posed posterior measures described in this article.
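A standard choice of such a distance in the Bayesian inverse-problems setting (recorded here for reference; the notation $\nu$ for a common dominating measure is ours) is the Hellinger metric between two measures $\mu$ and $\mu'$ that are absolutely continuous with respect to $\nu$:

```latex
d_{\mathrm{Hell}}(\mu,\mu')^{2}
  \;=\; \frac{1}{2}\int \left( \sqrt{\frac{d\mu}{d\nu}} \;-\; \sqrt{\frac{d\mu'}{d\nu}} \right)^{\!2} d\nu .
```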
Q5. What is the role of decay of the covariance operator in determining the regularity properties?
In particular, the rate of decay of the eigenvalues of the covariance operator plays a central role in determining the regularity properties.
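A sketch of this effect, assuming a Karhunen-Loève expansion on $[0,1]$ with hypothetical eigenvalues $\lambda_k = k^{-\alpha}$ (these choices are illustrative, not the paper's): faster eigenvalue decay produces visibly smoother draws from the Gaussian measure.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 500)
K = 200  # number of Karhunen-Loeve modes retained

def kl_draw(alpha):
    """Draw u(x) = sum_k sqrt(lambda_k) xi_k phi_k(x), lambda_k = k^(-alpha)."""
    u = np.zeros_like(x)
    for k in range(1, K + 1):
        lam = k ** (-float(alpha))                  # covariance eigenvalue
        phi = np.sqrt(2.0) * np.sin(k * np.pi * x)  # eigenfunction on [0, 1]
        u += np.sqrt(lam) * rng.standard_normal() * phi
    return u

rough = kl_draw(alpha=1.1)   # slow decay: rough sample path
smooth = kl_draw(alpha=4.0)  # fast decay: smooth sample path
# Increment size is a crude proxy for the roughness of each draw.
print(np.std(np.diff(rough)), np.std(np.diff(smooth)))
```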
Q6. What are the powerful tools for sampling?
Among the most powerful generic tools for sampling are the Markov chain Monte Carlo (MCMC) methods, which the authors review in Section 5.2.
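As a concrete instance, here is a random-walk Metropolis sampler, the simplest MCMC method; the target density below is a stand-in chosen for illustration, not one from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_post(u):
    # Stand-in unnormalized log posterior density log pi^y(u).
    return -0.5 * np.sum((u - 1.0) ** 2) - 0.25 * np.sum(u ** 4)

def random_walk_metropolis(n_samples, step=0.5, dim=2):
    u = np.zeros(dim)
    lp = log_post(u)
    samples = np.empty((n_samples, dim))
    for n in range(n_samples):
        v = u + step * rng.standard_normal(dim)  # symmetric Gaussian proposal
        lp_v = log_post(v)
        # Accept with probability min(1, pi^y(v) / pi^y(u)).
        if np.log(rng.random()) < lp_v - lp:
            u, lp = v, lp_v
        samples[n] = u
    return samples

samples = random_walk_metropolis(10_000)
print("posterior mean estimate:", samples.mean(axis=0))
```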
Q7. What is the case where (i) is satisfied trivially?
That paper contains Theorems 4.1 and 4.2 under Assumptions 2.6 in the case where (i) is satisfied trivially because Φ is bounded from below by a constant; note that this case occurs whenever the data is finite-dimensional.
Q8. What is the formula for the mean derived by completing the square?
The authors have
$$\Gamma^{-1} - (\Gamma + AC_0A^*)^{-1}AC_0A^*\Gamma^{-1} = (\Gamma + AC_0A^*)^{-1}. \qquad (6.16)$$
The formula for the mean, derived by completing the square, gives
$$m = C\left((C^{-1} - A^*\Gamma^{-1}A)m_0 + A^*\Gamma^{-1}y\right) = m_0 + CA^*\Gamma^{-1}(y - Am_0).$$
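A finite-dimensional numerical check of the identity (6.16) and of the equivalence of the two expressions for the mean, using $C = (C_0^{-1} + A^*\Gamma^{-1}A)^{-1}$; the dimensions and random matrices below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(3)
n, q = 4, 3  # dimensions of the unknown and the data (illustrative)

A = rng.standard_normal((q, n))
M0 = rng.standard_normal((n, n)); C0 = M0 @ M0.T + n * np.eye(n)     # prior covariance
M1 = rng.standard_normal((q, q)); Gamma = M1 @ M1.T + q * np.eye(q)  # noise covariance
m0 = rng.standard_normal(n)
y = rng.standard_normal(q)

# Posterior covariance: C = (C0^{-1} + A^T Gamma^{-1} A)^{-1}
C = np.linalg.inv(np.linalg.inv(C0) + A.T @ np.linalg.inv(Gamma) @ A)

# Two expressions for the posterior mean from completing the square:
m_a = C @ (np.linalg.inv(C0) @ m0 + A.T @ np.linalg.solve(Gamma, y))
m_b = m0 + C @ A.T @ np.linalg.solve(Gamma, y - A @ m0)
print(np.allclose(m_a, m_b))  # True

# Identity (6.16):
# Gamma^{-1} - (Gamma + A C0 A^T)^{-1} A C0 A^T Gamma^{-1} = (Gamma + A C0 A^T)^{-1}
S = Gamma + A @ C0 @ A.T
lhs = np.linalg.inv(Gamma) - np.linalg.inv(S) @ A @ C0 @ A.T @ np.linalg.inv(Gamma)
print(np.allclose(lhs, np.linalg.inv(S)))  # True
```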
Q9. What is the third assumption important for showing that the posterior probability measure is well-defined?
The third assumption is important for showing that the posterior probability measure is well-defined, whilst the fourth is important for showing continuity with respect to data.
Q10. How is the probability of a small ball of radius ε maximized?
Thus the Lebesgue density of $\mu$ is maximized by minimizing $I$ over $\mathbb{R}^n$. Another way of looking at this is as follows: if $\bar{u}$ is such a minimizer, then the probability of a small ball of radius $\varepsilon$ centred at $u$ will be maximized, asymptotically as $\varepsilon \to 0$, by choosing $u = \bar{u}$.
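A sketch of this optimization view, assuming a linear forward map with white Gaussian noise and prior (a hypothetical setup chosen for concreteness, not the paper's example): minimizing the Tikhonov-Phillips functional $I$ numerically recovers the MAP point, which in this linear-Gaussian case also has a closed form that serves as a check.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n, q = 5, 3

A = rng.standard_normal((q, n))
y = rng.standard_normal(q)
gamma2, sigma2 = 0.1, 1.0  # noise and prior variances (white for simplicity)

def I(u):
    # Tikhonov-Phillips functional: data misfit plus prior penalty.
    misfit = 0.5 * np.sum((y - A @ u) ** 2) / gamma2
    penalty = 0.5 * np.sum(u ** 2) / sigma2
    return misfit + penalty

u_map = minimize(I, np.zeros(n)).x

# Closed-form minimizer for the linear-Gaussian case, used as a check:
# u* = (A^T A / gamma2 + I / sigma2)^{-1} A^T y / gamma2.
u_exact = np.linalg.solve(A.T @ A / gamma2 + np.eye(n) / sigma2, A.T @ y / gamma2)
print(np.allclose(u_map, u_exact, atol=1e-5))  # True
```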
Q11. What is the generalization of the theorems to allow for (i)?
Generalizing the theorems to allow for (i) as stated here was undertaken in Hairer, Stuart and Voss (2010b), in the context of signal processing for stochastic differential equations.