Abstracts

Data-based forecasting using stochastic processes
Houda Ghamlouch (Troyes University of Technology, France)

Tuesday June 3, 16:00-16:30 | session P2 | Poster session | room lobby

This document gives a brief description of the first step in our project on asset price modeling and risk prediction. Having the price evolution of 13 equity indexes from different countries over almost 20 years, the aim of our project is to suggest a suitable model describing the dynamics of these indexes. This model can then be applied to price forecasting and risk prediction with a small confidence interval and a relatively short calculation time. For each index, the collected dataset consists of the daily closing prices from 1 January 1993 until 11 January 2013, which forms 5000 daily points. First, the probability distribution of price evolution, represented by the log-returns, is studied. After data analysis, a mixture of Normal and Laplace probability distributions is fitted to the log-returns. Afterwards, the study focuses on modeling the price dynamics. Several types of asset price models have been proposed and improved over previous years. In this paper, two types are considered: stochastic processes and time series models (ARCH/GARCH). First, the adequacy of time series models is tested using the Hinich portmanteau bicorrelation statistical test. The results show that the GARCH model, and any of its variants, is inadequate for the 13 indexes under consideration. The remainder of this paper therefore focuses on stochastic processes. Stochastic volatility (SV) models are considered among the most reliable models when the volatility, as in our case, varies significantly over time. Therefore, SV models in which the volatility follows different types of stochastic processes are considered. Furthermore, a jump-diffusion process with log-uniform jump amplitude is proposed. Comparing the volatility behavior of the different indexes, the economic environment is divided into three states (calm, normal and agitated). Transitions between these three states are controlled by an external covariate following a Markov chain. The jump-diffusion model parameters are calibrated for each index and each covariate state to build an overall model that represents the price variation behavior. The parameters are calibrated for both the SV model and the jump-diffusion process. Work is in progress to clarify the strengths and weaknesses of each model. The comparison between the models will be based on the reliability of price and return prediction, making use of the data already collected.
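As a rough illustration of the proposed dynamics, the following sketch simulates one index path under a jump-diffusion with log-uniform jump amplitude, with drift, volatility and jump intensity switching between the three states according to a Markov-chain covariate. All parameter values, including the transition matrix, are hypothetical placeholders rather than the calibrated values of the paper.

import numpy as np

rng = np.random.default_rng(0)
dt, n = 1/250, 5000                      # daily steps, roughly 20 years

# Hypothetical per-state parameters: drift, volatility, jump intensity per year
mu    = {0: 0.08, 1: 0.05, 2: -0.02}     # 0 = calm, 1 = normal, 2 = agitated
sigma = {0: 0.10, 1: 0.18, 2: 0.35}
lam   = {0: 2.0,  1: 5.0,  2: 20.0}
a, b  = -0.05, 0.04                      # log-jump amplitude ~ Uniform(a, b)

# Hypothetical transition matrix of the covariate Markov chain
P = np.array([[0.99, 0.01, 0.00],
              [0.01, 0.98, 0.01],
              [0.00, 0.02, 0.98]])

state, logS = 0, np.zeros(n)
for t in range(1, n):
    state = rng.choice(3, p=P[state])                      # regime transition
    jump = rng.uniform(a, b) if rng.random() < lam[state]*dt else 0.0
    logS[t] = (logS[t-1]
               + (mu[state] - 0.5*sigma[state]**2)*dt      # diffusion drift
               + sigma[state]*np.sqrt(dt)*rng.standard_normal()
               + jump)                                     # log-uniform jump
S = np.exp(logS)                                           # simulated index level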


Coherent foreign exchange market models
Alessandro Gnoatto (LMU München, Germany)

Tuesday June 3, 16:00-16:30 | session P2 | Poster session | room lobby

We consider the pricing of foreign exchange (FX) European options. The foreign-domestic parity provides a no-arbitrage relationship between call options on, e.g., the EURUSD exchange rate and puts on USDEUR. This no-arbitrage requirement implies a set of restrictions on the parameters of the model under different pricing measures. We generalize the results in Del Bano Rollin (2008) to the class of affine stochastic volatility models and exponential Lévy processes and show that these models price calls and puts coherently, i.e. in line with the foreign-domestic parity. We then provide an example which generalizes the model of De Col et al. (2013). Paper available at http://www.fm.mathematik.uni-muenchen.de/download/coherent.pdf
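In the Garman-Kohlhagen (Black-Scholes for FX) special case, the foreign-domestic parity can be checked numerically: a USD-priced call on EURUSD must equal $S_{0}K$ times the EUR-priced put on USDEUR with inverted spot and strike and swapped rates. A minimal sketch with arbitrary parameter values (not taken from the paper):

from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def gk_price(S0, K, rd, rf, vol, T, call=True):
    """Garman-Kohlhagen FX option price in domestic currency units."""
    d1 = (log(S0/K) + (rd - rf + 0.5*vol**2)*T) / (vol*sqrt(T))
    d2 = d1 - vol*sqrt(T)
    if call:
        return S0*exp(-rf*T)*N(d1) - K*exp(-rd*T)*N(d2)
    return K*exp(-rd*T)*N(-d2) - S0*exp(-rf*T)*N(-d1)

S0, K, r_usd, r_eur, vol, T = 1.35, 1.40, 0.02, 0.01, 0.10, 1.0
call_eurusd = gk_price(S0, K, r_usd, r_eur, vol, T, call=True)       # USD call on EURUSD
put_usdeur  = gk_price(1/S0, 1/K, r_eur, r_usd, vol, T, call=False)  # EUR put on USDEUR
assert abs(call_eurusd - S0*K*put_usdeur) < 1e-12   # foreign-domestic parity holds

The paper's contribution is to establish this coherence for the much larger class of affine stochastic volatility models and exponential Lévy processes.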


Risk Management in Global Financial Markets: A Practical Perspective
Franklin Goncalves (Spinnaker Capital Group, Brazil)

Tuesday June 3, 16:00-16:30 | session P2 | Poster session | room lobby

This paper is about the risk management of a portfolio of global assets from a practical quantitative finance perspective. The intention is to provide concepts, understanding and possibly guidance for the activity of financial risk control in the real world of investments.
The paper will approach the subject of risk management along three avenues: (1) finance theory, statistics and computing; (2) economic concepts, market knowledge and intuition; and (3) thematic perspectives and historical events.
At level 1 we will cover the main tools, models and applications of finance, statistics and computing, in order to be able to quantify the risks of a portfolio under several relevant conditions. At level 2 we will attempt to merge the technical / quantitative knowledge of level 1 with two central fields of investments: markets and economics. Very often the subject of “risk management” does not take actual market and economic conditions into account.
Finally, at level 3 we have the subjective dimension of risk control: complementing economic knowledge, historical events generate institutions and shape behavior, as do “themes” that allow us to understand, at each point in time, the main sources of risk over the portfolio's investment horizon. This will be illustrated with several results and simulations from a realistic portfolio of international liquid financial assets.
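As a small level-1 illustration of the kind of quantification meant here, a historical-simulation Value-at-Risk sketch for a multi-asset portfolio; the return data and weights are synthetic placeholders standing in for observed market data:

import numpy as np

rng = np.random.default_rng(1)
# Placeholder for 500 days of observed daily returns on 4 liquid assets
cov = 0.0001*np.array([[1.0, 0.3, 0.2, 0.0],
                       [0.3, 1.0, 0.4, 0.1],
                       [0.2, 0.4, 1.0, 0.2],
                       [0.0, 0.1, 0.2, 1.0]])
returns = rng.multivariate_normal(np.zeros(4), cov, size=500)
weights = np.array([0.4, 0.3, 0.2, 0.1])

pnl = returns @ weights                            # daily portfolio returns
var_99 = -np.quantile(pnl, 0.01)                   # 1-day 99% historical VaR
es_99 = -pnl[pnl <= np.quantile(pnl, 0.01)].mean() # expected shortfall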


Information Content of Option Prices on Idiosyncratic and Systematic Equity Risks
Elise Gourier (Princeton University, USA)

Tuesday June 3, 16:00-16:30 | session P2 | Poster session | room lobby

This paper analyzes the relationship between the dynamics of individual stocks and those of the market. We implement a rigorous time-series estimation approach which incorporates information on single name and index option prices as well as high-frequency underlying returns in different market situations. We identify the idiosyncratic and systematic risk components of single stocks and investigate their impact on the risk-neutral distributions of returns and on the volatility smiles. Finally, we analyze the structure of single name equity and variance risk premia in relation to index risk premia.
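The estimation approach itself is beyond the scope of an abstract, but the basic decomposition being refined can be sketched: regress single-name returns on index returns, so that the fitted part is the systematic component and the residual the idiosyncratic one. This is a textbook baseline, not the authors' procedure, and the data below are synthetic:

import numpy as np

def decompose(stock_ret, index_ret):
    """Least-squares split of stock returns into a systematic part
    (beta times the index) and an idiosyncratic residual."""
    X = np.column_stack([np.ones_like(index_ret), index_ret])
    alpha, beta = np.linalg.lstsq(X, stock_ret, rcond=None)[0]
    systematic = beta*index_ret
    idiosyncratic = stock_ret - alpha - systematic
    return beta, systematic, idiosyncratic

rng = np.random.default_rng(2)
idx = 0.01*rng.standard_normal(1000)                     # synthetic index returns
stk = 0.002 + 1.3*idx + 0.008*rng.standard_normal(1000)  # synthetic stock returns
beta, sys_part, idio_part = decompose(stk, idx)          # beta close to 1.3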


Modelling the variance risk premium of equity indices: the role of stochastic volatility, stochastic dependence and self- and mutually exciting jump processes
Andrea Granelli (Imperial College, UK)
Joint work with Almut Veraart

Tuesday June 3, 16:00-16:30 | session P2 | Poster session | room lobby

Understanding variance risk is of key importance in mathematical finance since it affects risk management, asset allocation and derivative pricing. Variance risk is priced in financial markets by the so-called variance risk premium (VRP), which refers to the premium demanded for holding assets whose variance is exposed to stochastic shocks. The importance of the VRP is also evident from the proliferation of derivative products such as variance swaps, whose market volume has increased steeply over the past few years.
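For reference, with $\mathrm{QV}(t,t+\Delta)$ denoting the quadratic variation of the log-price over $[t,t+\Delta]$, one common convention (sign conventions differ across the literature) defines the premium as the gap between risk-neutral and physical expectations,

\begin{align*}
\mathrm{VRP}_{t} \;=\; \mathbb{E}^{\mathbb{Q}}_{t}\big[\mathrm{QV}(t,t+\Delta)\big] \;-\; \mathbb{E}^{\mathbb{P}}_{t}\big[\mathrm{QV}(t,t+\Delta)\big],
\end{align*}

which is precisely the quantity that a variance swap trades on.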
The aim of this paper is to identify a suitable parsimonious model for the stochastic dynamics of equity indices, capable of producing dynamics of the implied VRP in line with the empirical findings that the VRP of equity indices displays stochastic dynamics and jumps, whereas the VRP of individual stocks does not exhibit such stochastic fluctuations.
Existing theoretical and empirical work has so far only advocated univariate models to explain the dynamics of the VRP. However, we argue that dependencies across assets play a key role in explaining the stochastic dynamics of the VRP of stock indices and hence a multivariate stochastic model is needed. This paper presents for the first time explicit analytical formulas for the VRP in a multivariate stochastic volatility framework, which includes multivariate non-Gaussian Ornstein-Uhlenbeck processes and Wishart processes, as well as a new model specification. Moreover, we propose to incorporate self- and mutually exciting multivariate Hawkes processes in the model and find that the resulting dynamics of the VRP represent a convincing alternative to the models studied in the literature to date.
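For concreteness, a univariate self-exciting special case can be simulated by Ogata's thinning algorithm; the parameters below are hypothetical, and the setting of the paper is multivariate with mutual excitation:

import numpy as np

def simulate_hawkes(mu, alpha, beta, T, rng):
    """Ogata thinning for a Hawkes process with intensity
    lambda(t) = mu + sum over past events t_i of alpha*exp(-beta*(t - t_i))."""
    t, events = 0.0, []
    while t < T:
        lam_bar = mu + sum(alpha*np.exp(-beta*(t - s)) for s in events)
        t += rng.exponential(1.0/lam_bar)      # candidate event time
        lam_t = mu + sum(alpha*np.exp(-beta*(t - s)) for s in events)
        if rng.random() <= lam_t/lam_bar and t < T:
            events.append(t)                   # accepted: intensity jumps by alpha
    return np.array(events)

rng = np.random.default_rng(3)
jumps = simulate_hawkes(mu=1.0, alpha=0.8, beta=2.0, T=100.0, rng=rng)
# Stationarity requires alpha/beta < 1; the mean intensity is mu/(1 - alpha/beta).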
We show that we can identify the Hawkes intensity process as a possible driver for the stochastic dynamics of the VRP. As a by-product of our work, we also prove useful explicit formulas involving conditional expectations of this popular process.
In addition, we find that our new model can explain the key stylised facts of both equity indices and individual assets and their corresponding VRP, while popular (multivariate) stochastic volatility models, including the Wishart model, fail.
We finally prove the existence of a structure-preserving risk neutral measure for our model, laying the theoretical foundations for the derivations described above. In particular, we establish the class of equivalent probabilities that preserve the self-affecting structure of the Hawkes process.


A fast adjoint-based quasi-likelihood parameter estimation method for diffusion processes
Josef Höök (Uppsala University, Sweden)
Joint work with Erik Lindström

Tuesday June 3, 16:00-16:30 | session P2 | Poster session | room lobby

Likelihood based parameter estimation for diffusion processes is an important topic in many areas of mathematical finance. For a general irreducible diffusion model it is common to approximate the transition density using either Monte Carlo based methods or a finite difference discretization of the Fokker-Planck equation. These methods require the evaluation of an approximate probability density between each pair of observations, which quickly becomes very time consuming as the number of observations increases. Instead of approximating the transition density explicitly in the construction of the likelihood, a simple strategy is to replace the exact, but unknown, density by an approximate density with exact moments. This technique is known as the quasi-likelihood method, and many previous studies have focused on diffusion models where one can obtain analytical expressions for the moments. Monte Carlo estimation is the standard method of choice when analytical moments are unavailable.
Instead of using Monte Carlo based methods, we here suggest estimating the moments for the quasi-likelihood from the approximate solution of the Kolmogorov backward equation, computed using finite differences. The Kolmogorov backward equation is the adjoint of the Fokker-Planck equation. The immediate advantage of this is that we only need to solve one backward equation for any number of observations, which is a dramatic reduction in computational complexity. Another nice property of the backward equation is the well-behaved initial condition in terms of moments, which should be contrasted with the initial condition of the Fokker-Planck equation, given by a Dirac measure.
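A minimal sketch of this idea for the CIR model, $dX_{t}=\kappa(\theta-X_{t})\,dt+\sigma\sqrt{X_{t}}\,dW_{t}$: solve the backward equation $u_{t}+\kappa(\theta-x)u_{x}+\tfrac{1}{2}\sigma^{2}x\,u_{xx}=0$ with the polynomial condition $u(T,x)=x$ once, and read off the conditional mean $\mathbb{E}[X_{T}\,|\,X_{0}=x]$ for every grid value $x$. The parameters and the explicit finite-difference discretisation are illustrative choices, not necessarily those of the paper:

import numpy as np

kappa, theta, sigma = 0.5, 0.04, 0.2   # hypothetical CIR parameters
dt_obs = 1.0/252                       # one observation interval

x = np.linspace(0.0, 0.2, 401)         # spatial grid
dx = x[1] - x[0]
drift = kappa*(theta - x)
half_b2 = 0.5*sigma**2*x

dt = 0.4*dx**2/half_b2.max()           # explicit-scheme stability bound
steps = max(int(np.ceil(dt_obs/dt)), 1)
dt = dt_obs/steps

u = x.copy()                           # condition at observation time: u = x
for _ in range(steps):
    ux = np.gradient(u, dx)
    uxx = np.zeros_like(u)
    uxx[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2])/dx**2
    u = u + dt*(drift*ux + half_b2*uxx)            # one backward time step
    u[0], u[-1] = 2*u[1] - u[2], 2*u[-2] - u[-3]   # linear boundary extrapolation

exact = theta + (x - theta)*np.exp(-kappa*dt_obs)  # known CIR conditional mean
print(np.max(np.abs(u - exact)))                   # small discretisation error

The second conditional moment is obtained the same way from the condition $u(T,x)=x^{2}$, and the two moments feed a Gaussian quasi-likelihood for all observations at once.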
The quasi-likelihood method with approximate moments from the discretized backward equation is tested on common models, e.g. CIR and GEN1, and its low computational complexity and high performance are demonstrated.


On pricing-hedging duality in robust mathematical finance
Zhaoxu Hou (University of Oxford, UK)
Joint work with Jan Obłój

Tuesday June 3, 16:00-16:30 | session P2 | Poster session | room lobby

In the framework of robust mathematical finance, Dolinsky and Soner (2013) showed that there is no duality gap between the robust hedging of path-dependent European options and a martingale optimal transport problem, when option prices for one maturity are given. In this work, we present a duality result in the setup of multiple maturities. The proof proceeds through a discretisation of the problem. Key steps are to relate the robust hedging problem to a probabilistic super-hedging problem, and to use a classical duality result to connect the probabilistic super-hedging problem to a discretised martingale optimal transport problem.
Furthermore, in discrete time, we extend a duality result proved by Beiglböck, Henry-Labordère and Penkner (2011) to markets with bubbles. In both continuous and discrete time, motivated by Mykland's (2005) idea of having a prediction set of paths (i.e. super-replication of a contingent claim is required only for paths falling in the prediction set), we add a path restriction to the pricing and hedging framework.
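Schematically, with statically traded options $g_{i}$ quoted at prices $p_{i}$ for maturities $T_{i}$, a dynamic trading strategy $\gamma$, and a prediction set $\mathcal{P}$ of paths, the duality takes the form

\begin{align*}
&\inf\Big\{x:\ \exists\,(\gamma,a)\ \text{with}\ x+\sum_{i}a_{i}\big(g_{i}(S_{T_{i}})-p_{i}\big)+\int_{0}^{T}\gamma_{u}\,dS_{u}\ \ge\ F(S)\ \ \text{for all}\ S\in\mathcal{P}\Big\}\\
&\qquad=\ \sup\Big\{\mathbb{E}_{\mathbb{Q}}[F]:\ \mathbb{Q}\ \text{a martingale measure with}\ \mathbb{E}_{\mathbb{Q}}\big[g_{i}(S_{T_{i}})\big]=p_{i}\ \text{for all}\ i,\ \ \mathbb{Q}(\mathcal{P})=1\Big\},
\end{align*}

where the precise admissibility conditions on $(\gamma,a)$ and the exact form of the support constraint on $\mathbb{Q}$ are as in the paper.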


Analysis of optimal dynamic withdrawal policies in withdrawal guarantee products
Yao Tung Huang (The Hong Kong University of Science and Technology, Hong Kong)
Joint work with Yue Kuen Kwok

Tuesday June 3, 16:00-16:30 | session P2 | Poster session | room lobby

Guaranteed Minimum Withdrawal Benefits (GMWB) are popular riders in variable annuities with withdrawal guarantees. With withdrawals spread over the life of the annuity contract, the benefit promises to return the entire initial annuitization amount irrespective of the market performance of the underlying fund portfolio. Treating the dynamic withdrawal rate as the control variable, the earlier works on GMWB have considered the construction of a continuous singular stochastic control model and the numerical solution of the resulting pricing model. This paper presents a more detailed characterization of the pricing properties of the GMWB and performs a full mathematical analysis of the optimal dynamic withdrawal policies under the competing factors of the time value of the fund, the optionality value provided by the guarantee, and the penalty charge on excessive withdrawals. When a proportional penalty charge is applied to any withdrawal amount, we can reduce the pricing formulation to an optimal stopping problem with lower and upper obstacles. We then derive the integral equations for the determination of a pair of optimal withdrawal boundaries. When a proportional penalty charge is applied only to the amount above the contractual withdrawal rate, we manage to characterize the behavior of the optimal withdrawal boundaries that separate the domain of the pricing model into three regions: no withdrawal, continuous withdrawal at the contractual rate, and an immediate withdrawal of a finite amount. Under certain limiting conditions, such as high policy fund value, time close to expiry, or low value of the guarantee account, we obtain analytical approximate solutions to the singular stochastic control model of dynamic withdrawal.
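For orientation, in the case where the proportional penalty $k$ applies only to withdrawals above the contractual rate $G$, the pricing function $V(t,W,A)$, with $W$ the policy fund value and $A$ the guarantee account balance, formally satisfies a Hamilton-Jacobi-Bellman variational inequality of the type

\begin{align*}
\min\Big\{-V_{t}-\tfrac{1}{2}\sigma^{2}W^{2}V_{WW}-(r-\eta)WV_{W}+rV-\max_{0\le\gamma\le G}\gamma\,(1-V_{W}-V_{A}),\ \ V_{W}+V_{A}-(1-k)\Big\}=0,
\end{align*}

where $\gamma$ is the withdrawal rate and $\eta$ the insurance fee: the first argument governs no withdrawal ($\gamma=0$) and continuous withdrawal at the contractual rate ($\gamma=G$), while the obstacle $V_{W}+V_{A}\ge 1-k$ binds in the region of immediate finite withdrawal. This is a stylized version of the formulation in the earlier singular control literature; the paper's precise model may differ.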


Portfolio optimization under a partially observed stochastic volatility model
Dalia Ibrahim (École Centrale Paris, France)

Tuesday June 3, 16:00-16:30 | session P2 | Poster session | room lobby

A basic problem in mathematical finance is that of an agent who wants to maximize his expected utility of terminal wealth. In this paper, we study this problem in a financial market where the risky asset evolves according to the following stochastic volatility model:
\begin{align*}
dS_{t}&=S_{t}[\mu_{t}dt +g(V_{t}) dB_{t}]\\
dV_{t}&=f(\beta_{t},V_{t}) dt + h(V_{t}) dW_{t}
\end{align*}
The special feature of this paper is that the only information available to the investor is that generated by the asset prices. In particular, the trend $\mu_{t}$ cannot be observed directly but can be modelled by a stochastic process, for example an Ornstein-Uhlenbeck process. $\beta_{t}$ is either a stochastic process or a constant. $B$ and $W$ are two correlated Brownian motions. $g$, $f$, $h$ are Borel measurable functions such that a unique solution of the above dynamics exists.
We are in the context of a portfolio optimization problem under partial information. In order to solve it, we first need to reduce this problem to one with complete information. This step can be done with the change of probability method and filtering theory. The idea is to exploit all the information coming from the market in order to update the knowledge of the not fully known quantities; precisely, we need to replace the unknown risk premia processes of the models by their filter estimates, which are given by the conditional expectations of these variables with respect to the filtration generated by the price process. Secondly, we aim to find explicit computations of these filters. We show that in the uncorrelated case ($B$ and $W$ are independent), we can use the Bayesian framework to deduce explicit computations of these filters. In the correlated case, we use non-linear filtering theory to show that these processes satisfy Kushner-Stratonovich equations.
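In the uncorrelated case with a Gaussian (e.g. Ornstein-Uhlenbeck) trend and known constant volatility, the filter reduces to the classical Kalman-Bucy filter. A discretised sketch with placeholder parameters (the setting of the paper is more general):

import numpy as np

# Hidden OU trend mu_t observed through log-returns with volatility g
lam, mu_bar, sig_mu, g, dt = 2.0, 0.05, 0.3, 0.2, 1/252
phi = np.exp(-lam*dt)                      # discrete-time AR(1) coefficient
q = sig_mu**2*(1 - phi**2)/(2*lam)         # state-noise variance per step
r_obs = g**2*dt                            # observation-noise variance

rng = np.random.default_rng(4)
n = 2000
mu_true = np.empty(n); mu_true[0] = mu_bar
for k in range(1, n):                      # simulate the unobserved trend
    mu_true[k] = mu_bar + phi*(mu_true[k-1] - mu_bar) + np.sqrt(q)*rng.standard_normal()
y = mu_true*dt + g*np.sqrt(dt)*rng.standard_normal(n)   # observed log-returns

m, p = mu_bar, sig_mu**2/(2*lam)           # filter mean and variance (stationary prior)
m_hist = np.empty(n)
for k in range(n):
    m = mu_bar + phi*(m - mu_bar); p = phi**2*p + q     # predict
    K = p*dt/(dt**2*p + r_obs)                          # Kalman gain (H = dt)
    m += K*(y[k] - m*dt); p -= K*dt*p                   # update with return y[k]
    m_hist[k] = m                          # E[mu_k | returns up to time k]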
Finally, following the above steps, we reduce our problem to a full-information optimization problem. We solve it with the martingale duality approach in the uncorrelated case, and with the dynamic programming approach in the correlated case. We show that the dynamic programming approach leads to a characterization of the value function as a viscosity solution of a nonlinear PDE (the Hamilton-Jacobi-Bellman equation). However, by a logarithmic transformation, we show that the value function and the optimal portfolio can be expressed in terms of a smooth solution of a semilinear parabolic equation.


Bayesian inference for stochastic volatility models driven by fractional Brownian Motion
Konstantinos Kalogeropoulos (London School of Economics, UK)

Tuesday June 3, 16:00-16:30 | session P2 | Poster session | room lobby

In this paper we consider continuous-time diffusion models driven by fractional Brownian Motion (fBM), with observations obtained at discrete time instances. As a prototypical scenario we emphasise a stochastic volatility (SV) model allowing for memory in the volatility increments through an fBM specification. Due to the non-Markovianity of the model and the high dimensionality of the latent volatility path, estimating posterior expectations is a computationally challenging task. We present a novel simulation and re-parameterisation framework based on the Davies and Harte method and use it to construct a Markov chain Monte Carlo (MCMC) algorithm that allows for computationally efficient parametric Bayesian inference for such models. The algorithm is based on an advanced version of the so-called Hybrid Monte Carlo (HMC) algorithm that allows for increased efficiency when applied to the high-dimensional latent variables relevant to the models of interest in this paper. The inferential methodology is examined and illustrated on the SV models, on simulated data as well as real data from the S&P500/VIX time series, which may include intra-day data. Contrary to the long-range dependence of the SV process (Hurst parameter H > 1/2) often assumed in the literature, the posterior distribution favours H < 1/2, which points towards medium-range dependence.
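The Davies and Harte method samples fractional Gaussian noise exactly by circulant embedding of its covariance; a minimal sketch (unit time step, with the re-parameterisation and HMC machinery of the paper omitted):

import numpy as np

def fgn_davies_harte(n, H, rng):
    """Exact draw of n fractional Gaussian noise increments with Hurst
    index H, via FFT of the circulant embedding of the covariance."""
    k = np.arange(n + 1)
    gamma = 0.5*((k + 1)**(2*H) - 2*k**(2*H) + np.abs(k - 1.0)**(2*H))
    row = np.concatenate([gamma, gamma[-2:0:-1]])   # circulant row, length 2n
    lam = np.fft.fft(row).real                      # eigenvalues (>= 0 for fGn)
    z = rng.standard_normal(2*n) + 1j*rng.standard_normal(2*n)
    w = np.fft.fft(np.sqrt(np.maximum(lam, 0.0)/(2*n))*z)
    return w[:n].real

rng = np.random.default_rng(5)
fgn = fgn_davies_harte(5000, H=0.4, rng=rng)   # H < 1/2: medium-range dependence
fbm = np.cumsum(fgn)                           # fBM path driving the volatility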


Fully Liquidity-Adjusted CVA
Christian Kamtchueng (CTK Corp, UK)

Tuesday June 3, 16:00-16:30 | session P2 | Poster session | room lobby

After the Lehman default (in the credit crisis which started in 2007), practitioners came to consider default risk a major risk, and the industry began to charge for the default risk of any derivative. In this article we define a methodology to fully adjust the close-out premium used to compute the CVA according to liquidity risk. To our knowledge, it is the first time that the CVA is adjusted to take market liquidity risk into account. We introduce market liquidity risk methodologies in order to quantify, going forward, the impact of this risk on the close-out premium. The CVA, or exposure, therefore becomes a function of our risk view or appetite.
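For reference, the standard unilateral CVA that the adjustment targets can be written as

\begin{align*}
\mathrm{CVA} \;=\; (1-R)\,\mathbb{E}\Big[\mathbf{1}_{\{\tau\le T\}}\,D(0,\tau)\,\big(V(\tau)\big)^{+}\Big],
\end{align*}

where $R$ is the recovery rate, $\tau$ the counterparty default time, $D(0,\tau)$ the discount factor and $V(\tau)$ the close-out value of the derivative; the proposed liquidity adjustment enters through $V(\tau)$.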


Pricing Interest Rate Derivatives in a Multifactor HJM Model with Time-Dependent Volatility
Boda Kang (University of York, UK)
Joint work with Ingo Beyna and Carl Chiarella

Tuesday June 3, 16:00-16:30 | session P2 | Poster session | room lobby

We investigate the partial differential equation (PDE) for pricing interest rate derivatives in the multi-factor Cheyette Model, which involves time-dependent volatility functions with a special structure. The resulting high-dimensional parabolic PDE is solved numerically via a modified sparse grid approach, which turns out to be accurate and efficient. In addition we study the corresponding Monte Carlo simulation, which is fast since the distribution of the state variables can be calculated explicitly. The results obtained from both methodologies are compared to the known analytical solutions for bonds and caplets. Where there is no analytical solution, both European and Bermudan swaptions have been evaluated using the sparse grid PDE approach, which is shown to outperform the Monte Carlo simulation.
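For context, the "special structure" of the volatility is its separability: in the simplest one-factor Cheyette case, the HJM forward-rate volatility is $\sigma_{f}(t,T)=\eta(t)\,e^{-\int_{t}^{T}\kappa(s)\,ds}$, which makes the forward curve an explicit affine function of two Markovian state variables,

\begin{align*}
dx_{t} &= \big(y_{t}-\kappa(t)\,x_{t}\big)\,dt+\eta(t)\,dW_{t}, \qquad x_{0}=0,\\
dy_{t} &= \big(\eta(t)^{2}-2\,\kappa(t)\,y_{t}\big)\,dt, \qquad y_{0}=0.
\end{align*}

With deterministic $\eta$, $x_{t}$ is Gaussian and $y_{t}$ deterministic, which is why the distribution of the state variables is explicit and the Monte Carlo simulation fast; the paper works with the multi-factor version.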


Investment Incentives in the Presence of a Credit Rating Agency
Mariana Khapko (Stockholm School of Economics, Sweden)
Joint work with Emanuela Iancu

Tuesday June 3, 16:00-16:30 | session P2 | Poster session | room lobby

When making investment decisions, shareholders do not factor in benefits to the firm's creditors, and so they have an incentive to underinvest in the presence of risky debt. Our objective is to understand whether dynamic investment choices are different when rating sensitivity of debt is introduced, and how they are affected by the policy of a rating agency.
To this end, we study a dynamic continuous time economy with two agents: a firm and a credit rating agency. Cash flows of the firm follow a diffusion process. The growth rate of the firm’s cash flows is an endogenous investment decision controlled by equity holders. Moreover, the firm has debt in place in the form of rating-sensitive liabilities, which require the firm to pay higher coupons in case of deteriorating creditworthiness.
The problem of the equity holders is to choose the investment time and the default time that maximize the equity value of the firm. The rating agency’s objective is to continuously report its accurate assessment of the default probability of the firm in the form of a discrete rating. The interaction between the firm and the rating agency displays strategic complementarities: a lower rating triggers higher interest payments, which leads to faster default, which in turn brings about new downgrades.
In solving our model we rely on tools developed for the study of optimal switching and stopping problems as well as on a number of results on first passage times.
Our findings suggest that the credit rating agency alters the incentives of the firm to invest. With rating-sensitive debt in place, the firm weighs the cost of investing in a better project against the benefits stemming from reduced debt payments in the case of a higher credit rating. We show that there exist two types of equilibria, and we present valuations as well as optimal policies in both cases. Rating sensitivity of debt indeed plays a role in mitigating the stockholder-bondholder conflict. Incentives to invest earlier are created when, by investing, the firm benefits from an upgrade or can avoid a downgrade for a longer time. Depending on its rating policy, the rating agency can reduce or worsen the underinvestment problem. Too harsh a rating policy deters early investment, as investing will not lower costs; too lenient a rating policy also deters early investment, as the status quo is not bad enough.
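Among the first-passage results used, a basic building block is presumably the hitting-time law of a geometric Brownian motion: if cash flows follow $dX_{t}=\mu X_{t}\,dt+\sigma X_{t}\,dW_{t}$ and $\tau_{b}=\inf\{t: X_{t}\le b\}$ with $b<X_{0}$, then

\begin{align*}
\mathbb{P}(\tau_{b}\le t) \;=\; \Phi\!\left(\frac{m-\nu t}{\sigma\sqrt{t}}\right)+e^{2\nu m/\sigma^{2}}\,\Phi\!\left(\frac{m+\nu t}{\sigma\sqrt{t}}\right),\qquad m=\ln\frac{b}{X_{0}},\ \ \nu=\mu-\tfrac{1}{2}\sigma^{2},
\end{align*}

where $\Phi$ is the standard normal distribution function; default and downgrade thresholds in such models correspond to barriers of this type.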