Applying methods from computational statistics to create more realistic models of how financial assets behave, in order to obtain better estimates of losses.
Value-at-Risk (VaR) is a commonly used measure in finance and banking to quantify the maximum loss (at a certain confidence level) that can be incurred over a certain time period (a day, a week, a month) in a portfolio of financial assets. This could, for example, be a collection of stocks or funds that make up your savings for retirement. Banks and other financial institutions are required by law to report measures such as VaR to regulatory agencies to prove that they are robust enough not to go bankrupt from these losses.
VaR can be computed using a range of different approaches, and one of the most common is to simulate the outcome of the portfolio using historical data. Another method is to estimate a model of the price changes in the assets, a so-called stochastic volatility model, and use it to simulate millions of possible future outcomes. The accuracy of this second method depends on the type of model used. One problem is that the most common models underestimate the risk of large losses, because they tend to be light-tailed. The use of heavy-tailed models is therefore important for obtaining better estimates of VaR, but such models are often more complicated to estimate. The aim of this project is to develop new methods for inference that are much faster than the existing ones, making it possible to obtain better estimates of VaR.
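The effect of tail behaviour on VaR can be illustrated with a small simulation sketch. The snippet below is a hypothetical example (not the project's actual model): it draws one-day returns from a light-tailed Gaussian model and from a heavy-tailed Student's t model with the same standard deviation, then reads off the 99% VaR as an empirical quantile of the simulated losses. The parameters (volatility 1%, 3 degrees of freedom) are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000          # number of simulated one-day returns
confidence = 0.99      # VaR confidence level

# Light-tailed model: Gaussian returns with 1% daily volatility
# (illustrative parameters, not fitted to any real asset).
normal_returns = rng.normal(loc=0.0, scale=0.01, size=n)

# Heavy-tailed model: Student's t with 3 degrees of freedom,
# rescaled so its standard deviation matches the Gaussian's.
df = 3
t_returns = rng.standard_t(df, size=n) * 0.01 / np.sqrt(df / (df - 2))

def var(returns, level):
    """VaR at the given confidence level: the loss (as a positive
    number) that is exceeded with probability 1 - level."""
    return -np.quantile(returns, 1.0 - level)

var_normal = var(normal_returns, confidence)
var_t = var(t_returns, confidence)

print(f"99% VaR, Gaussian model:  {var_normal:.4f}")
print(f"99% VaR, Student-t model: {var_t:.4f}")
```

Even though both models have the same volatility, the heavy-tailed model produces a noticeably larger 99% VaR, which is precisely why a light-tailed model can understate the risk of large losses.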
Image is used under Creative Commons with credit to Pictures of Money on Flickr.