Abstracts

Theory of dynamical models of covariance swaps
Ozan Akdogan (ETH Zürich, Switzerland)
Joint work with Josef Teichmann

Tuesday June 3, 10:30-11:00 | session P1 | Poster session | room lobby

We present a model for the market of covariance swaps with a particular emphasis on calibration and (consistent) re-calibration. To this end we start with a particularly tractable class of time-homogeneous Markov processes and introduce a dynamic version of the Hull-White extension that makes consistent re-calibration possible without losing too much of the model's tractability.
In more detail, we parametrize the market by means of a stochastic volatility model such that the log-prices and the instantaneous spot-covariation are jointly polynomial. Polynomial processes were recently introduced as time-homogeneous Markov processes with the property that the computation of moments reduces to the computation of matrix exponentials. This class generalizes the widely used class of affine processes and contains popular non-affine models such as Pearson diffusions and (jump-)diffusion limits of GARCH models. We slightly generalize this class to processes which take their values in the cone of positive semi-definite matrices. The corresponding forward covariation processes inherit the polynomial property and are thus tailor-made for calibration. However, a perfect fit to an arbitrary initial matrix-valued curve of forward covariations is not to be expected.
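The matrix-exponential moment formula is easy to illustrate in a scalar toy case. The following sketch (our illustration with an Ornstein-Uhlenbeck process, a Pearson-type polynomial diffusion, not the authors' matrix-valued specification) computes conditional first and second moments by exponentiating the generator's action on polynomials of degree at most two:
```python
import numpy as np
from scipy.linalg import expm

# Toy example: for dX_t = kappa*(theta - X_t) dt + sigma dW_t the generator
# maps polynomials of degree <= 2 into themselves, so conditional moments
# follow from a single 3x3 matrix exponential.
kappa, theta, sigma, x0, t = 1.5, 0.04, 0.3, 0.1, 2.0

# Matrix of the generator on the monomial basis (1, x, x^2), acting on
# coefficient vectors c with p(x) = c0 + c1*x + c2*x^2.
G = np.array([[0.0, kappa * theta, sigma**2],
              [0.0, -kappa,        2.0 * kappa * theta],
              [0.0, 0.0,           -2.0 * kappa]])

def conditional_moment(c, t, x0):
    """E[p(X_t) | X_0 = x0] for the polynomial with coefficient vector c."""
    return np.array([1.0, x0, x0**2]) @ expm(t * G) @ c

mean = conditional_moment(np.array([0.0, 1.0, 0.0]), t, x0)
second = conditional_moment(np.array([0.0, 0.0, 1.0]), t, x0)
# Cross-check against the classical OU formulas.
assert np.isclose(mean, theta + (x0 - theta) * np.exp(-kappa * t))
assert np.isclose(second - mean**2,
                  sigma**2 / (2 * kappa) * (1.0 - np.exp(-2.0 * kappa * t)))
```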
Similar problems arise in the interest rate world, where affine diffusion models for the short rate lack the ability to fit an arbitrary initial forward curve. Here the popular notion of Hull-White extension provides a somewhat minimal extension towards a time-inhomogeneous affine diffusion that fixes this shortcoming. However, when it comes to re-calibration, it is not obvious how to apply this extension repeatedly in a consistent way.
We build on a model recently introduced for the forward characteristics of affine processes in discrete time and apply it to our setting. To do so we discretize the above-mentioned model and provide the extension needed to make consistent re-calibration possible. The model is realized as a weak solution of a (discrete-time) SPDE. Finally, we consider the continuous-time limit of this model and provide an algorithm for calibration and simulation. We discuss applications including risk management, portfolio optimization, and the pricing of relevant derivatives.


Integrating Black-Litterman and the Mental Accounting Framework: A Sensible and Intuitive Approach
Alexandre Alles Rodrigues (NEOMA Business School, France)
Joint work with Sébastien Lleo

Tuesday June 3, 10:30-11:00 | session P1 | Poster session | room lobby

Black and Litterman (1992) developed a framework that produces more accurate and stable estimates of asset returns as inputs to Markowitz's (1952) mean-variance portfolio optimization (MV). Within the Black-Litterman (BL) model, investors combine a prior vector, based on market equilibrium returns obtained through MV reverse optimization, with their views on the future returns of the assets to arrive at a posterior vector. Previous studies have demonstrated that BL helps to overcome the shortcomings of traditional MV.
Das et al. (2010) proposed a mental accounting (MA) framework that integrates MV with the behavioral portfolio theory (BPT) of Shefrin and Statman (2000). In MA, investors divide their wealth into subportfolios, each with a different goal stated as a threshold return and a maximum probability of not reaching it. Hence, instead of specifying a single risk-aversion coefficient, MA investors can intuitively set multiple VaR-like constraints for their mental accounts.
We integrate BL and MA to achieve a full integration of standard and behavioral portfolio theory, from parameter specification through optimization to monitoring. The objective is to build portfolios that deliver strong risk-adjusted performance as a consequence of sensible and intuitive processes. We compute the prior vector by reverse optimization using the MA equations, with market capitalization weights and a market threshold return and probability as inputs. The level of confidence in the views is treated as in Idzorek (2004). Once the posterior vector is computed through the BL equation, the weights of each subportfolio are calculated subject to the investor's goals. Finally, we obtain the aggregated portfolio with the selected distribution of wealth.
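For reference, the posterior combination step uses the standard BL formula; the following minimal sketch (with illustrative placeholder numbers, not the study's data; in the proposed approach the prior pi would come from MA reverse optimization rather than a single risk-aversion coefficient) shows the computation:
```python
import numpy as np

def black_litterman_posterior(pi, Sigma, P, Q, Omega, tau=0.05):
    """Posterior expected returns given prior pi and views P @ mu = Q + error."""
    tS_inv = np.linalg.inv(tau * Sigma)   # precision of the prior
    O_inv = np.linalg.inv(Omega)          # precision of the views
    A = tS_inv + P.T @ O_inv @ P
    b = tS_inv @ pi + P.T @ O_inv @ Q
    return np.linalg.solve(A, b)

# Two assets, one absolute view: "asset 1 returns 6% per year".
pi = np.array([0.04, 0.05])                       # equilibrium prior
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])    # return covariance
P = np.array([[1.0, 0.0]])                        # view pick matrix
Q = np.array([0.06])                              # view level
Omega = np.array([[0.02]])                        # view uncertainty (Idzorek-style tuning)
print(black_litterman_posterior(pi, Sigma, P, Q, Omega))
```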
A preliminary out-of-sample analysis with the 10 S&P 500 Sector Indices from Jan/2001 until Sep/2013 indicated that the proposed strategy outperforms the market portfolio. The investor's views were set as the historical means of each asset (with 50% confidence) and we used three mental accounts. The BL-MA strategy produced an annualized Sharpe ratio and 5% monthly VaR and CVaR of 0.33, -6.44%, and -9.22%, while the market portfolio returned 0.24, -7.81%, and -10.09%. The break-even transaction cost of the BL-MA strategy was 0.3%. The BL-MA strategy is thus capable of outperforming its prior even in the presence of naïve views and when a high confidence level is assigned to them.


A Computational Application for Portfolio Fitting and 'Inui-Kijima' ES Estimation
Sotiris Zisis (University of the Aegean, Greece)
Joint work with Vasil Bankov and Christos Kountzakis

Tuesday June 3, 10:30-11:00 | session P1 | Poster session | room lobby

We introduce an application and extension of the computation of Expected Shortfall (ES) as a coherent risk measure, as proposed by Inui and Kijima (2005). Under this method a probability distribution is fitted to a weighted portfolio of financial instruments and its parameters are estimated. A Monte Carlo simulation is then applied to this distribution to obtain N resampled portfolios, their respective ES values are calculated, and, with the use of Richardson extrapolation, a coherent Expected Shortfall is obtained, following the method proposed by Inui and Kijima (2005). Finally, the results and their biases are compared, and mean squared errors against the theoretical ES are calculated for inference. We show that quick computation of coherent Expected Shortfall measures is possible in an automated environment and, further, that there is scope for application to shorter time horizons.
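The following sketch illustrates the resample-and-extrapolate idea: fit a distribution to portfolio losses (standard normal here, purely for illustration), resample, and combine ES estimates at two sample sizes to cancel a leading bias term. It illustrates the Richardson extrapolation step only, assuming an O(1/n) bias, and is not a line-by-line reproduction of Inui and Kijima (2005):
```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def expected_shortfall(losses, alpha=0.99):
    """Empirical ES: average of losses at or beyond the alpha-quantile."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

alpha, n = 0.99, 2000
losses = rng.standard_normal(2 * n)           # resampled portfolio losses
es_n = expected_shortfall(losses[:n], alpha)
es_2n = expected_shortfall(losses, alpha)
es_extrapolated = 2.0 * es_2n - es_n          # removes the O(1/n) bias term

es_true = norm.pdf(norm.ppf(alpha)) / (1.0 - alpha)   # theoretical normal ES
print(es_n, es_2n, es_extrapolated, es_true)
```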


A BNS-type stochastic volatility model with non-joint jumps in the asset price and volatility
Karl Friedrich Bannör (Technische Universität München, Germany)
Joint work with Thorsten Schulz

Tuesday June 3, 10:30-11:00 | session P1 | Poster session | room lobby

We present an extension of the Barndorff-Nielsen-Shephard (BNS) stochastic volatility model class in which the jumps in the asset price and the jumps in the asset volatility are partially disentangled. For certain special cases, where jump dependence is established by linear or time-change constructions, we deduce the characteristic function in semi-closed form. In the case of jumps following a compound Poisson process with exponential jumps, we derive the weak-link Gamma-OU-BNS model, which has a true closed-form characteristic function and yields the Gamma-OU-BNS model as a limit case.
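The weak-link construction itself is not spelled out in the abstract; the following toy simulation (our own thinning construction, offered only to illustrate what "partially disentangled" jumps can mean, not the authors' model) lets a compound Poisson process with exponential jumps always drive the Gamma-OU variance while each jump reaches the log-price only with probability p:
```python
import numpy as np

rng = np.random.default_rng(1)

# p = 1 recovers fully joint jumps as in the classical BNS model; p = 0
# decouples price jumps from volatility jumps entirely.
lam, jump_rate, jump_mean = 2.0, 1.0, 0.05   # OU speed, jump intensity, jump mean
rho, p, T, n = -0.5, 0.6, 1.0, 10_000
dt = T / n

v, x = 0.04, 0.0          # spot variance and log-price
for _ in range(n):
    dN = rng.poisson(jump_rate * dt)
    J = rng.exponential(jump_mean, size=dN)   # exponential jump marks
    dZ = J.sum()                              # subordinator increment (all jumps)
    dZ_price = J[rng.random(dN) < p].sum()    # only thinned jumps reach the price
    x += -0.5 * v * dt + np.sqrt(max(v, 0.0) * dt) * rng.normal() + rho * dZ_price
    v += -lam * v * dt + dZ
print(x, v)
```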


A stochastic programming model for hedging options in a market with transaction costs
Mathias Barkhagen (Linköping University, Sweden)
Joint work with Jörgen Blomvall

Tuesday June 3, 10:30-11:00 | session P1 | Poster session | room lobby

In this paper we consider the problem of hedging a portfolio of options with standardized options and futures. For this problem we propose a stochastic programming (SP) hedging model which minimizes a portfolio risk measure while penalizing transaction costs in order to produce a cost-effective hedge. We evaluate the hedging model using historical market data from the Swedish index options market. In order to evaluate the model under realistic market conditions, all transactions occur at the observed bid and ask prices for the options and futures.
To accurately measure the risk in the option portfolio we need a good representation of the distribution of the risk factors that affect option prices. We use historical innovations of the empirically estimated local volatility surface to determine the risk factors to which the option portfolio is exposed. With the help of principal component analysis (PCA) of these innovations we can build an arbitrage-free stochastic process for a collection of option prices and thereby determine an optimal hedge with the SP model. In order to extract the true dynamics via the PCA, it is vital to have a method that can estimate local volatility surfaces of high quality. To this end we estimate local volatility surfaces with a novel non-parametric estimation method, which produces surfaces that are stable over time.
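The PCA step can be sketched as follows (synthetic surfaces stand in for the empirically estimated ones; grid size, day count, and the 95% cutoff are illustrative assumptions):
```python
import numpy as np

# Each row is one day's local volatility surface, flattened over a
# (strike, maturity) grid.
rng = np.random.default_rng(0)
n_days, n_nodes = 250, 40
surfaces = 0.2 + 0.01 * rng.standard_normal((n_days, n_nodes))

innovations = np.diff(surfaces, axis=0)        # day-over-day surface changes
innovations -= innovations.mean(axis=0)        # center before PCA
# SVD of the centered innovation matrix yields the principal components.
U, s, Vt = np.linalg.svd(innovations, full_matrices=False)
explained = s**2 / np.sum(s**2)
k = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1
factors = Vt[:k]    # surface modes used to drive the option-price scenarios
print(k, explained[:3])
```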
We perform an evaluation that studies the hedging result with respect to risk and costs and compare the results with those obtained with traditional methods such as delta and delta-vega hedging. Tests show that we are able to construct an effective hedge at a low hedging cost compared to the traditional methods.


A method to solve optimal stopping problems for Lévy processes in infinite horizon
Elena Boguslavskaya (Brunel University, UK)

Tuesday June 3, 10:30-11:00 | session P1 | Poster session | room lobby

Recently, a series of papers found the solution of optimal stopping problems for Lévy processes and random walks with monotone reward functions in terms of the maximum/minimum of the process. In the present paper we use the integral transform method and develop it further to cover non-monotone reward functions. We present a constructive method to solve optimal stopping problems (i.e. we show how to find the optimal stopping boundary) for a fairly general reward function $g$. The key ingredient is the integral transform $\mathcal{A}^{\eta(x)}$ based on the random variable $\eta(x) = \operatorname{arg\,max}_{0 \leq s \leq e_q} g(x + X_s) - x$, where the arg max denotes the spatial point at which $g(x+X_\cdot)$ attains its maximum over $[0, e_q]$, and $e_q$ is an independent exponential time.
We find the stopping region as those arguments at which the function $\mathcal{A}^{\eta(x)}\{g\}$ (where $\mathcal{A}^{\eta(x)}$ is a form of an Esscher-Laplace transform) is non-negative, and the continuation region as those arguments at which it is negative. Our algorithm for solving the optimal stopping problem is the following: firstly, we introduce the auxiliary random variable $\eta(x)$, pathwise tracking the value of $X_t$ that achieves the running maximum of $g(x+X)$. Secondly, we use $\eta(x)$ to define the integral transform $\mathcal{A}^{\eta(x)}$, which maps the reward function $g=g(\cdot)$ into the function $\mathcal{A}^{\eta(x)}\{g\}(\cdot)$ for each $x$. Finally, we find the stopping region $S$ as those arguments $(x,y)$ at which $\mathcal{A}^{\eta(x)}\{g\}(y)$ is non-negative. The optimal strategy is to stop whenever $(x,x) \in S$, and to continue observations while $(x,x) \notin S$. We illustrate this approach with some examples. The proposed method is computationally attractive as it does not require solving the differential or integro-differential equations that arise with traditional methods.
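In compact form (our restatement in the abstract's own notation, with $X$ the driving Lévy process), the resulting stopping rule reads
\[
S=\bigl\{(x,y):\ \mathcal{A}^{\eta(x)}\{g\}(y)\ge 0\bigr\},
\qquad
\tau^{*}=\inf\bigl\{t\ge 0:\ (X_t,X_t)\in S\bigr\}.
\]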


Almost worst case distributions
Thomas Breuer (PPE Research Centre, Austria)

Tuesday June 3, 10:30-11:00 | session P1 | Poster session | room lobby

A worst case distribution is defined to be a distribution minimising, among the risk factor distributions satisfying some plausibility constraint, the expectation of some fixed random payoff. The plausibility of a risk factor distribution is quantified by a convex integral function. This includes the special cases of relative entropy, Bregman distance, and $f$-divergence.
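In symbols (our notation, with payoff $X$, plausibility functional $H$, and plausibility threshold $k$): a worst case distribution attains
\[
\inf_{Q\,:\,H(Q)\leq k}\ \mathbb{E}_Q[X],
\]
where $H$ is the convex integral functional mentioned above.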
An ($\epsilon$-$\gamma$)-almost worst case distribution is a risk factor distribution which violates the plausibility constraint by at most $\gamma$ and for which the expected payoff exceeds the worst case value by no more than $\epsilon$. From a practical point of view, the localisation of almost worst case distributions determines the efficiency of a hedge against the worst case distribution. We prove that almost worst case distributions cluster in the Bregman neighbourhood of a positive function, which may be interpreted as a worst case localiser. In regular cases, the worst case localiser is at the same time the worst case density. But it may also happen that the infimum of payoff expectations over the plausible distributions is not attained, in which case the worst case localiser is not a worst case density, and perhaps not even a density.


On robustness of arbitrage from intrinsic market model properties
Matteo Burzoni (Università degli studi di Milano, Italy)
Joint work with Marco Frittelli and Marco Maggis

Tuesday June 3, 10:30-11:00 | session P1 | Poster session | room lobby

The introduction of Knightian uncertainty into mathematical models for finance has recently renewed attention on foundational issues such as the existence of option pricing rules or super-hedging theorems. In this context an agent allows for the possibility of describing a certain market with a probabilistic model but cannot be sure about a single reference probability; hence, a whole class of reference probabilities is taken into account. Although the literature on this topic is rapidly growing, it is not yet clear what a good notion of arbitrage opportunity is, nor which properties of martingale measures follow from it. In the present work we slightly change the point of view, avoiding the need to fix a subset of probability measures, which might be problematic in some settings. Given a certain market model, described by a discrete-time stochastic process, we study the intrinsic properties of the model, independently of any reference probability. In particular, two questions arise naturally: 1) which markets are feasible, in the sense that the properties of the market are nice for most probabilistic models? 2) which markets exhibit no arbitrage for most probabilistic models?
A precise notion of “most” probabilistic models is clearly needed; we develop it from a topological point of view. Different results are obtained depending on the coarseness of the topology under consideration.
A key property is the existence of martingale measures. An important difference from the classical case is that we do not provide equivalent conditions in terms of absence of arbitrage opportunities; instead, we study the structural properties of the market needed for the existence of such measures.
Once this minimal property of the market is guaranteed, we introduce definitions of arbitrage as trading strategies that work for most probabilistic models, in the spirit of the rest of the work. Additional properties of the martingale measures are retrieved under the no-arbitrage hypothesis.


Financial Instability Contagion – Dynamical Systems Approach
Youngna Choi (Montclair State University, USA)
Joint work with Giuseppe Castellacci

Tuesday June 3, 10:30-11:00 | session P1 | Poster session | room lobby

We build a multi-agent economic model as a dynamical system on a compact set, and show that market instability is closely related to the leverage and borrowing capacity of the market participants. The economy under consideration can be a single domestic economy, or a global one consisting of multiple subeconomies. First, we divide an economy into finitely many aggregates called economic “agents,” and build a deterministic dynamical system of their wealth. We use well-known theories of dynamical systems to represent a financial crisis as the breakage of a financial equilibrium and the subsequent propagation of negative shocks to wealth. Secondly, we define an early warning system, the market instability indicator, as the spectral radius of the Jacobian matrix of the wealth dynamical system. We show that the size of the indicator is proportional to the instability level of the economic system, so that monitoring the indicator would enable us to predict upcoming financial crises. Thirdly, using the market instability indicator we give a quantitative definition of financial instability contagion, and provide a thorough mathematical analysis of the mechanism of instability contagion. Lastly, we use macroeconomic data from a period containing recent financial crises to test the market instability indicator and the instability contagion model, and discuss limitations of their application in practice, such as data availability and frequency. Our contribution is to provide a methodology to quantify and monitor the level of financial instability in sectors and stages of a structured global economic model, and to describe how it may propagate between its components.
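A minimal sketch of the indicator computation (the wealth map F, its exposure matrix, and the equilibrium point below are illustrative placeholders, not the authors' calibrated model):
```python
import numpy as np

def spectral_radius(J):
    """Market instability indicator: spectral radius of the Jacobian."""
    return np.max(np.abs(np.linalg.eigvals(J)))

def numerical_jacobian(F, w, eps=1e-6):
    """Finite-difference Jacobian of the wealth update map F at state w."""
    n = len(w)
    J = np.zeros((n, n))
    f0 = F(w)
    for j in range(n):
        dw = np.zeros(n)
        dw[j] = eps
        J[:, j] = (F(w + dw) - f0) / eps
    return J

# Toy three-agent wealth map: each agent's next wealth depends linearly on
# the others through borrowing/lending exposures, plus a saturation term
# keeping the dynamics on a compact set.
A = np.array([[0.5, 0.3, 0.1],
              [0.2, 0.6, 0.3],
              [0.1, 0.2, 0.7]])
F = lambda w: A @ w - 0.01 * w**2

w_star = np.array([1.0, 1.0, 1.0])     # candidate equilibrium wealth vector
rho = spectral_radius(numerical_jacobian(F, w_star))
print("instability indicator:", rho)   # values above 1 signal loss of stability
```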


Optimal Timing for Short Covering of a Security
Tsz-Kin Chung (Tokyo Metropolitan University, Japan)

Tuesday June 3, 10:30-11:00 | session P1 | Poster session | room lobby

Short-selling is the sale of a financial security which the investor does not own. It is an important portfolio management tool for hedging the downside risk of the security price. In this paper, we formulate a short-selling strategy and seek an optimal timing of short covering as an optimal stopping problem. The aim is to study how the optimal trading strategy of the short-seller is influenced by various features of the stock borrowing market, including random recall by broker-dealers, the loan fee payment and the short interest rebate. We characterize the optimal timing of short covering depending on the conditions that lead to different costs and benefits of keeping the position. We find that the short-seller should stop earlier in the presence of broker recall and should stop immediately when the recall risk is high during an up market. Moreover, the value function can become negative due to a forced termination at the recall time. When there are borrowing costs and an interest rebate, the optimal stopping strategy depends on a delicate balance between the loan fee (cost) and the interest rate (benefit). When the loan fee is too high, the short-seller's optimal strategy is to stop immediately; in contrast, when the interest rate is sufficiently high, the short-seller forfeits the optionality to stop and waits until the recall time. More interestingly, in other cases the optimal stopping rule is of down-and-out type (put-type problem) or up-and-out type (call-type problem), depending on the levels of the loan fee and the interest rate relative to the short-seller's discount rate and the expected return of the stock.
The solution to the optimal stopping problems is obtained in closed form and we show its optimality by a verification theorem. Given the closed-form solution, we are able to characterize explicitly the regimes (in terms of loan fee and interest rate) in which the short-seller is active or inactive. We can also determine the optimal loan fee charged by the broker-dealer given the short-seller's optimal strategy. The analysis in this paper can readily be extended to incorporate a stop-loss limit and regime switching in the stock price process.
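To convey the flavor of the down-and-out (put-type) structure mentioned above, recall the classical perpetual benchmark (our illustration, not the paper's exact payoff, which also involves loan fees, rebates and recall): for a geometric Brownian motion $dS_t = \mu S_t\,dt + \sigma S_t\,dW_t$ and discount rate $r>0$, problems of the form $\sup_\tau \mathbb{E}\bigl[e^{-r\tau}(K-S_\tau)^+\bigr]$ are solved by a constant lower threshold,
\[
\tau^{*}=\inf\{t\ge 0:\ S_t\le b^{*}\},\qquad
b^{*}=\frac{\gamma_{-}}{\gamma_{-}-1}\,K,
\]
where $\gamma_{-}<0$ is the negative root of $\tfrac12\sigma^{2}\gamma(\gamma-1)+\mu\gamma-r=0$; the up-and-out (call-type) regime reverses the roles of the boundary and the continuation region.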