# Abstracts

**Optimum Strategy in Market Order Execution Associated with the Poisson Process**

*Amirhossein Sadoghi (Frankfurt School of Finance and Management, Frankfurt am Main, Germany)*

Thursday June 5, 16:00-16:30 | session P6 | Poster session | room lobby

High-frequency trading is now a highly automated business, and automatic trading has been introduced in the major stock exchanges. A participant in these markets who wants to trade a large quantity of shares within a given time, via limit orders or market orders, faces the question of how to slice the order and when and how to trade optimally. In this research, we consider optimal order execution exclusively in an illiquid market in which this participant is the only price taker. We address the optimal execution problem for market order trading under different microstructure features, such as the shape of the order book and the resilience of the price impact. In such a market, the trader takes offers from the limit order book and seeks the optimal trade-off between liquidity and minimal price impact.

In this research, the discrete market order flow is viewed as occurring at random times, say according to a Poisson process. Total price impact can be minimized by splitting large orders into smaller pieces over a given time frame. The highlight of the study is the construction of numerical execution boundaries based on the discrete order flow: whenever the bid price hits or exceeds a boundary, we execute the notional amount associated with that boundary.

We start from a toy example in which the analytical boundary for single-unit execution is derived. We then extend the model to the multi-unit order case and construct the boundaries numerically via Monte Carlo simulation. In numerical experiments, we study the convergence of the algorithm to the optimal execution strategy with respect to time-dependent parameters such as the order book shape, the order intensity and the resilience function, as well as other constraints on the optimal strategy.
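
The boundary-based execution rule can be illustrated with a minimal Monte Carlo sketch. All dynamics here are illustrative assumptions (the Poisson arrival rate `lam`, a Gaussian random-walk bid quote, the parameter values), not the paper's model:

```python
import random
import statistics

def simulate_single_unit(boundary, lam=5.0, horizon=1.0, p0=100.0,
                         sigma=1.0, n_paths=2000, seed=7):
    """Expected proceeds from selling one unit the first time a
    Poisson-arriving bid quote reaches `boundary`; if the boundary is
    never reached, the unit is liquidated at the last quote at the
    horizon. Bid quotes follow a Gaussian random walk observed at
    Poisson arrival times (an illustrative assumption)."""
    rng = random.Random(seed)
    proceeds = []
    for _ in range(n_paths):
        t, p = 0.0, p0
        fill = None
        while True:
            t += rng.expovariate(lam)      # waiting time to next bid
            if t > horizon:
                break
            p += rng.gauss(0.0, sigma)     # new bid quote
            if p >= boundary:
                fill = p                   # boundary hit: execute
                break
        proceeds.append(fill if fill is not None else p)
    return statistics.mean(proceeds)

# crude search for the best single-unit boundary on a small grid
best = max(((b, simulate_single_unit(b)) for b in (99.0, 100.0, 101.0, 102.0)),
           key=lambda pair: pair[1])
```

A multi-unit extension would attach one such boundary to each remaining unit and optimize the boundaries jointly.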

**Market Completion with Derivative Securities**

*Daniel Schwarz (Carnegie Mellon University, USA)*

Thursday June 5, 16:00-16:30 | session P6 | Poster session | room lobby

In mathematical finance a model of a financial market is said to be complete if any payoff can be obtained as the terminal value of a self-financing trading strategy. The completeness property is important because it allows the hedging of any non-traded derivative security. Yet numerous models, such as stochastic volatility models or structural models in commodity markets, are incomplete. We present conditions, in a general diffusion framework, which guarantee that in such cases the market of primitive assets, enlarged with an appropriate number of traded derivative contracts, is complete. Key to the proof is the analyticity in time and space of the coefficients of the model, which allows us to draw upon recent results on the analytic smoothing properties of linear parabolic partial differential equations.

**Tail-risk protection trading strategies**

*Peter Schwendner (ZHAW, Switzerland)*

Thursday June 5, 16:00-16:30 | session P6 | Poster session | room lobby

We derive robust portfolio protection trading strategies by taking into account different aspects of the time variation and dynamics of the distributional parameters of financial time series. To model financial time series, we first account for their time-dependent dynamics via a GARCH(1,1) process, which allows us to incorporate volatility clustering and autoregressive behavior in volatility, both well-documented stylized facts of financial time series. Second, we fit the GARCH residuals (innovations) to different families of distributions, including the generalised hyperbolic (GH) distribution and tempered stable distributions. Because of their ability to incorporate a wide range of empirical stylized facts, both distribution families are popular in modelling financial data. In particular, the GH distribution contains the normal and Student-t distributions as special cases.

Aside from examining the time-varying behavior of the distributional parameters, we study the spread of the value-at-risk (VaR) between a non-normal and a normal GARCH-innovation process. Because of the GARCH component, the magnitude of VaR, viewed as a process over time, adapts quickly to changes in volatility. The distributional properties of the innovation process, on the other hand, provide information on skewness, excess kurtosis and, in particular, the heaviness of the tails present in the data. The resulting VaR spread can therefore be used to derive an expectation of the frequency of extreme events, which in turn generates signals of the temporal presence of tail risks.
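
As an illustration of the VaR spread between a heavy-tailed and a normal innovation process, the sketch below compares the 1% VaR of unit-variance Student-t innovations with the normal case; the choice of a t distribution with `nu = 4` is an illustrative stand-in for the GH and tempered stable families, and the t quantile is obtained by bisection on a numerically integrated CDF:

```python
import math
from statistics import NormalDist

def t_pdf(x, nu):
    """Density of the Student-t distribution with nu degrees of freedom."""
    c = math.gamma((nu + 1.0) / 2.0) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2.0))
    return c * (1.0 + x * x / nu) ** (-(nu + 1.0) / 2.0)

def t_cdf(x, nu, n=4000):
    """Student-t CDF via trapezoidal integration on [0, |x|] and symmetry."""
    a = abs(x)
    h = a / n
    s = 0.5 * (t_pdf(0.0, nu) + t_pdf(a, nu))
    for i in range(1, n):
        s += t_pdf(i * h, nu)
    half = s * h
    return 0.5 + half if x >= 0 else 0.5 - half

def t_quantile(p, nu):
    """Quantile by bisection on the CDF."""
    lo, hi = -50.0, 50.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if t_cdf(mid, nu) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def var_spread(sigma, nu=4.0, alpha=0.01):
    """Difference between the alpha-level VaR under unit-variance
    Student-t innovations and under standard normal innovations,
    both scaled by the current (GARCH) volatility sigma."""
    z = NormalDist().inv_cdf(alpha)
    q_t = t_quantile(alpha, nu) * math.sqrt((nu - 2.0) / nu)  # unit variance
    return -sigma * q_t + sigma * z
```

Scaling both quantiles by the current GARCH volatility yields the time-varying VaR spread that serves as the tail-risk signal.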

This information, in particular the information about potential 'tail risks' contained in the short-term VaR, is used to generate trading signals with the intention of protecting against extreme losses while not missing the upside. This portfolio protection trading strategy is compared with CPPI and protective put strategies, two popular portfolio insurance strategies. Based on DAX returns from 1996 to 2013, we find that the tail-risk protection strategy outperforms the classical strategies, for example in terms of a higher Sharpe ratio and when comparing excess returns relative to maximum drawdown (Calmar ratio). These results are backed by robustness tests, e.g. by comparing the trading strategy to a randomly generated one.

**Robust parameter estimation for the stochastic volatility model using Markov chain Monte Carlo**

*Youngdoo Son (Seoul National University, South-Korea)*

Thursday June 5, 16:00-16:30 | session P6 | Poster session | room lobby

In this paper, a new technique for estimating the parameters of the stochastic volatility model is developed. Although several techniques, including Markov chain Monte Carlo, have been applied to this problem, most of their results are very sensitive to the initial value or the random seed, which makes it difficult to apply these methods directly to real data. In the proposed method, unlike in the original Markov chain Monte Carlo methods, the parameters are separated by certain criteria and estimated iteratively, as in other iterative methods such as the expectation-maximization algorithm, which yields robust results. Applied to artificial datasets, the proposed method recovers the parameters, and the results are robust compared with those of the original Markov chain Monte Carlo. It is also successfully applied to real market data, including several indices and stock prices.
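
The parameter-separated iteration can be caricatured as a Metropolis-within-Gibbs scheme in which each parameter block is updated conditionally on the others. The sketch below does this for a plain normal model with unknown mean and variance, purely to illustrate the blockwise structure; it is not the stochastic volatility sampler of the paper:

```python
import math
import random
import statistics

def blockwise_mcmc(data, n_iter=3000, seed=1):
    """Metropolis-within-Gibbs toy: the location and scale parameters
    are updated in separate blocks, each conditional on the other,
    mimicking the parameter-separated iteration of the abstract.
    Illustrative normal model with flat priors, not the SV model."""
    rng = random.Random(seed)
    n = len(data)

    def loglik(m, ls):
        s2 = math.exp(2.0 * ls)
        return -n * ls - sum((x - m) ** 2 for x in data) / (2.0 * s2)

    mu, log_s = 0.0, 0.0
    mus, log_ss = [], []
    for _ in range(n_iter):
        # block 1: random-walk Metropolis update of mu given sigma
        prop = mu + rng.gauss(0.0, 0.2)
        if math.log(rng.random()) < loglik(prop, log_s) - loglik(mu, log_s):
            mu = prop
        # block 2: random-walk Metropolis update of log-sigma given mu
        prop = log_s + rng.gauss(0.0, 0.2)
        if math.log(rng.random()) < loglik(mu, prop) - loglik(mu, log_s):
            log_s = prop
        mus.append(mu)
        log_ss.append(log_s)
    burn = n_iter // 2
    return statistics.mean(mus[burn:]), math.exp(statistics.mean(log_ss[burn:]))
```

Separating the blocks keeps each conditional update low-dimensional, which is what makes the iteration behave like an EM-style scheme.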

**Explicit Solutions and Performance Evaluation of Discrete-time Quadratic Optimal Hedging Strategies for European Contingent Claims**

*Easwar Subramanian (Tata Consultancy Services, India)*

Thursday June 5, 16:00-16:30 | session P6 | Poster session | room lobby

We consider the problem of optimally hedging a European contingent claim (ECC) with its underlying in a discrete-time setting. The ECC may be written on multiple underlyings and may be path-dependent. Specifically, we consider two quadratic optimal hedging strategies: minimum-variance hedging in a risk-neutral measure and optimal local-variance hedging in a market probability measure. The objective function of the former is the variance of the hedging error calculated in a risk-neutral measure; the latter optimizes the variance of the mark-to-market value of the portfolio over a trading interval in a market measure. The motivation for introducing and deriving expressions for optimal local-variance hedging is twofold. First, it is useful to consider strategies that minimize the variance of the mark-to-market value of the portfolio locally in time. Second, their analysis is simpler, even in the general semi-martingale case. The main aim of the work is to derive explicit closed-form solutions for hedging different types of ECCs under the above quadratic hedging schemes. To arrive at closed-form solutions, we assume geometric Brownian motion (GBM) as the model for the underlying asset prices. The ECCs considered include a general path-independent ECC, an exchange option representing a multi-asset ECC, and a discretely monitored path-dependent ECC. All the hedging solutions are expressed in terms of the pricing function of the hedged ECC and the prices of the underlying assets. Used in place of complex Monte Carlo based solutions, these explicit formulas make the proposed hedging solutions well suited for computer implementation.

Another motive of the work is to compare the effectiveness of the two quadratic trading strategies with standard delta hedging. Since the delta hedging strategy is based on the theory of continuous-time trading, our intention is to show that, when trading is performed in discrete time, quadratic optimal strategies perform better than standard delta hedging. The comparison is done on multiple performance measures, such as the probability of loss, the expected loss, different moments of the hedging error, and shortfall measures such as value at risk (VaR) and conditional value at risk (CVaR). The outputs of these measures can then be used to evaluate the appropriateness of a particular trading strategy in a given scenario. We argue that the evaluation is better performed if a trader relies on the results of multiple performance metrics instead of just one. Our performance evaluation results on path-independent and exchange-like options show that, in the discrete-time setting, the quadratic optimal hedging strategies outperform the delta hedging strategy. Indeed, as the re-balancing is done more sparsely, the quadratic optimal trading schemes fare increasingly better than standard delta hedging.
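
The local-variance criterion behind the comparison can be sketched in a single discrete trading step: the optimal hedge ratio is Cov(dC, dS)/Var(dS), estimated here by Monte Carlo under a GBM with illustrative parameters (the paper derives closed forms instead of simulating):

```python
import math
import random

def bs_call(s, k, r, sig, tau):
    """Black-Scholes call price, used only as the pricing function C."""
    if tau <= 0.0:
        return max(s - k, 0.0)
    sq = sig * math.sqrt(tau)
    d1 = (math.log(s / k) + (r + 0.5 * sig * sig) * tau) / sq
    d2 = d1 - sq
    ncdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s * ncdf(d1) - k * math.exp(-r * tau) * ncdf(d2)

def local_variance_hedge(s0=100.0, k=100.0, r=0.0, sig=0.2, tau=0.5,
                         dt=1.0 / 52.0, n=20000, seed=3):
    """Monte Carlo estimate of the one-step local-variance-optimal
    hedge ratio h* = Cov(dC, dS) / Var(dS) under GBM."""
    rng = random.Random(seed)
    c0 = bs_call(s0, k, r, sig, tau)
    ds, dc = [], []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        s1 = s0 * math.exp((r - 0.5 * sig * sig) * dt + sig * math.sqrt(dt) * z)
        ds.append(s1 - s0)
        dc.append(bs_call(s1, k, r, sig, tau - dt) - c0)
    mds, mdc = sum(ds) / n, sum(dc) / n
    cov = sum((a - mds) * (b - mdc) for a, b in zip(ds, dc)) / n
    var = sum((a - mds) ** 2 for a in ds) / n
    return cov / var
```

For weekly re-balancing this ratio is close to, but not identical to, the Black-Scholes delta; the gap widens as the re-balancing interval grows, which is exactly the regime in which the quadratic strategies pay off.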

**A Payoff Consistent Approach to Cash-Settled Swaptions and CMS Replication**

*Chyng Wen Tee (Singapore Management University, Singapore)*

Thursday June 5, 16:00-16:30 | session P6 | Poster session | room lobby

European interest rate markets use cash settlement (CS) for swaptions, the main interest rate volatility instrument. In the market only at-the-money (ATM) straddles and out-of-the-money (OTM) payers/receivers are liquidly quoted and usable as calibration instruments. As a result, all ITM payer/receiver swaption prices are model-based. CS-swaptions form the basis for all volatility-sensitive interest rate product valuations, such as constant maturity swaps (CMS), Bermudan swaptions and callable Libor exotics. Therefore, mispricings of CS-swaptions impact the whole spectrum of interest rate volatility products.

As the difference between physical and cash settlement is often considered minor, a Black-model-based market-approximation (MA) formula is heavily used across the industry to value cash-settled swaptions. We show that at the forward swap rate (ATM) the values of cash-settled payer and receiver swaptions differ in an economically significant way, contrary to physically settled swaptions; the MA formula, however, implies equality for ATM payers/receivers. We show that using the MA formula leads to economically significant mispricings of both ATM and ITM swaptions. Furthermore, we show problems in swaption portfolio risk management and significant mispricing of constant maturity swap (CMS) products using replication.

This paper provides an overview of the market formula, followed by a detailed analysis of the key reasons for valuation/sensitivity differences. We propose a stochastic volatility model that accurately captures the convex/concave payoff profile. The model accounts for the non-linear payoff profile while remaining analytically tractable; furthermore, it has enough degrees of freedom to be calibrated accurately to the implied volatility smile and skew in the swaption market. We report several key findings. The implied volatilities for cash-settled payer and receiver swaptions are shown to differ due to the convex and concave payoff profiles. ITM swaptions can be valued effectively using the semi-analytical model formulated in this work, allowing for efficient risk management of large swaption portfolios. In addition, we observe that the vega of cash-settled payer swaptions can become negative in high interest rate scenarios. Finally, we detail the implications of these mispricings for CMS products.

**The value of monitoring trading signals**

*Antonella Tolomeo (University of Salento, Italy)*

Thursday June 5, 16:00-16:30 | session P6 | Poster session | room lobby

We discuss the continuous-time utility maximization problem under asymmetric information in financial markets, in the context of fads models (Shiller, 1981) and the signaling theory of trade. In particular, our work is based on the asset pricing approach introduced by Guasoni (2006), which models price dynamics with both a martingale component and a stationary component. Based on this framework, we formulate a different process for the price dynamics: whereas Guasoni (2006) builds the price dynamics from a GBM and an Ornstein-Uhlenbeck process, our analysis draws on two Ornstein-Uhlenbeck processes. The asset price for the informed agents lives in the larger filtration and is specified initially. From this we decompose the asset price dynamics of the uninformed agents in the smaller filtration. We find non-Markovian dynamics for the uninformed agents, which requires analysis of the filtering problem by applying the Hitsuda representation (1968). We then solve the logarithmic utility maximization problem in order to compare both agents' levels of market information. In contrast to the method in Guasoni (2006), where the above dynamics are obtained by applying the theory of the negative resolvent of a Volterra kernel to a product function, we start out from the representations of the process and then verify our solution using the fundamental equation from the general scheme given in Cheridito (2003). This allows us to confirm the existence of the negative resolvent associated with a Volterra kernel that is a sum of functions. Subsequently, recalling the logarithmic utility maximization problem for terminal wealth, we obtain the two agents' optimal trading strategies. Our study supports the results of Guasoni (2006), where informed and uninformed agents' log utilities are found to differ. In particular, our analysis yields positive excess utility for the informed agents.

**Estimating and Backtesting Distortion Risk Measures**

*Hideatsu Tsukahara (Seijo University, Japan)*

Thursday June 5, 16:00-16:30 | session P6 | Poster session | room lobby

The class of distortion risk measures is the one to use if we insist on law invariance and comonotonic additivity in addition to the four axioms of coherence, and it is broad enough to fully express agents' subjective assessment of risk (conservatism). To apply such a risk measure to practical financial risk management problems, it is necessary to pick one distortion function depending on one's attitude towards risk.

In our previous work, we have shown that estimating a distortion risk measure is possible based on general weakly dependent time series data. The next step in financial risk management is backtesting. The purpose of backtesting is twofold: to monitor the performance of the model and estimation methods for risk measurement, and to compare the relative performance of models and methods. It is a tool for the validation process, which is indispensable for adequate risk management and is even required by the Basel II Capital Accord.

Statistically, it is just a form of cross-validation: the ex ante risk measure forecast from the model is compared with the ex post realized portfolio loss. In the case of Value-at-Risk (VaR), a popular backtesting procedure depends on the number of VaR violations; we say that a VaR violation occurs when the loss exceeds VaR. Extending a backtesting procedure for the renowned expected shortfall (ES), we suggest, based on a simple observation, a backtesting procedure for distortion risk measures, and check its effectiveness in a simulation study using ES as well as the proportional odds distortion risk measure.
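
For the VaR case, the standard violation-count backtest is Kupiec's unconditional coverage test. A minimal sketch (the distortion-measure procedure proposed in the paper is a different construction):

```python
import math

def kupiec_lr(n, x, alpha):
    """Kupiec's unconditional-coverage likelihood-ratio statistic for x
    VaR violations in n periods at level alpha; asymptotically
    chi-squared with one degree of freedom under correct coverage."""
    def ll(p):
        a = (n - x) * math.log(1.0 - p) if x < n else 0.0
        b = x * math.log(p) if x > 0 else 0.0
        return a + b
    return -2.0 * (ll(alpha) - ll(x / n))

def backtest_var(losses, var_forecasts, alpha=0.01):
    """Count violations (realized loss exceeding the ex ante VaR
    forecast) and return the count with the Kupiec statistic."""
    x = sum(1 for l, v in zip(losses, var_forecasts) if l > v)
    return x, kupiec_lr(len(losses), x, alpha)
```

If the statistic exceeds the chi-squared critical value (3.84 at the 5% level), the observed violation frequency is inconsistent with the nominal level alpha.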

Many people claim that it is easier to backtest VaR than ES and other risk measures, for the following reasons: (i) the existing tests for ES are based on parametric assumptions for the null distribution; (ii) asymptotic approximation is needed for the null distribution of the test statistics; (iii) testing an expectation is harder than testing a single quantile. Summing up these arguments, what they call the 'backtestability' of a risk measure means that it can be nonparametrically backtested with small samples.

It has recently been claimed that expected shortfall (and distortion risk measures) cannot be backtested because it fails to satisfy the so-called elicitability condition. Roughly speaking, a statistical functional is called elicitable if it is the unique minimizer of some expected loss function. While elicitability is useful when we want to compare and rank several estimation procedures, there seems to be no clear connection with backtestability. We will try to illustrate this with simple examples, including the expectiles.

**How Sensitive Is Corporate Demand for Internal Liquidity to Financial Development?**

*Alexander Vadilyev (Australian School of Business, University of New South Wales, Australia)*

Thursday June 5, 16:00-16:30 | session P6 | Poster session | room lobby

Khurana, Martin and Pereira (2006) and subsequent studies find that the sensitivity of cash holdings to cash flows (referred to as the cash flow sensitivity of cash) decreases with the level of financial development; that is, firms from financially developed countries are less exposed to financing constraints and thus exhibit a lower propensity to save cash out of their cash flows. While financing frictions are arguably less binding on such firms, the negative relationship with a country's financial development holds only if the cash flow sensitivity of cash is linear. Using a large sample of public firms from 44 markets over the period 1995 to 2010, the study reveals that (i) the corporate propensity to save cash from internal cash flows is non-linear and highly sensitive to the sign of the cash flow, and (ii) the inverse relationship between a country's financial development and the cash-cash flow sensitivity becomes insignificant after controlling for this non-linearity. The findings further support the hypothesis that positive-cash-flow firms persistently save cash from internal resources regardless of financial market advances. Negative-cash-flow or loss-making firms are also insensitive to the level of financial development because their access to capital markets is generally limited or closed. In conclusion, my results indicate that corporate saving propensities reflect a multitude of forces and are largely independent of cross-country financial integration.

**Stochastic control model for R&D race in a mixed duopoly with spillovers and knowledge stocks**

*Jingjing Wang (The Hong Kong University of Science and Technology, Hong Kong)*

Thursday June 5, 16:00-16:30 | session P6 | Poster session | room lobby

We consider a stochastic control model with finite time horizon for a mixed duopoly R&D (research and development) race between a profit-maximizing private firm and a welfare-maximizing public firm. In our two-firm stochastic control R&D race model, the control variable is the private firm's rate of R&D expenditure, and the hazard rate of successful innovation depends on the R&D effort and the knowledge stock. Given the fixed R&D effort of the public firm, the optimal control is determined so as to maximize the private firm's value function subject to market uncertainty arising from the stochastic profit flow of the new innovative product. Our R&D race model also incorporates the impact of input and output spillovers. We apply the Bellman optimality condition to construct the Hamilton-Jacobi-Bellman equation of the stochastic control model. Finite difference schemes, together with a policy iteration procedure, are constructed for the numerical solution of the value function and the optimal control of R&D expenditure of the private firm. We conduct various sensitivity tests with varying model parameters to analyze the effects of input spillover, output spillover and knowledge stock on the optimal control policy and the value function of the profit-maximizing private firm. The R&D effort of the private firm is found to increase when the profit flow rate increases. Moreover, the optimal R&D effort level may decrease with increasing private-firm knowledge stock and output spillover. The effects of input spillover on the optimal control policy and value function are seen to be relatively small. We examine the robustness of various observed phenomena of the two-firm R&D race with varying values of the fixed R&D effort of the public firm. With regard to public policy, we examine the level of the fixed public-firm R&D effort at which social welfare is maximized.
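
The policy iteration procedure alternates between evaluating the value function for a fixed control and improving the control given the value function. The toy below applies it to a one-state stationary caricature: effort `u` buys a success hazard equal to `u` that pays a reward `R`, against a quadratic cost `c*u**2` and a discount rate `rho`. All of this illustrates only the numerical scheme, not the paper's two-firm finite-horizon model:

```python
def policy_iteration(R=10.0, c=1.0, rho=0.1, tol=1e-10, max_iter=100):
    """Policy iteration for the stationary toy HJB equation
        rho * V = max_u [ u * R - c * u**2 - u * V ],
    where effort u buys a success hazard u that pays R and stops the
    problem. Evaluation solves the linear equation for the current
    policy; improvement takes the pointwise maximizer u = (R - V)/(2c)."""
    u = 1.0
    V = 0.0
    for _ in range(max_iter):
        V = (u * R - c * u * u) / (rho + u)     # policy evaluation
        u_new = max((R - V) / (2.0 * c), 0.0)   # policy improvement
        if abs(u_new - u) < tol:
            return V, u_new
        u = u_new
    return V, u
```

In the full problem the same two steps run on a finite difference grid over the state variables at each time level, rather than on a single scalar equation.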

**Optimal trading with multiple assets and cross-price impact**

*Marko Weber (Dublin City University - Scuola Normale di Pisa, Ireland)*

Thursday June 5, 16:00-16:30 | session P6 | Poster session | room lobby

We solve a portfolio choice model in a market where the order flow in each asset has a linear price impact on all assets. For small illiquidity costs we derive the asymptotic equivalent safe rate and an asymptotically optimal policy, which trades towards the frictionless portfolio only if the cross-impact between assets is proportional to their covariance. Trading volume approximately follows a multivariate Ornstein-Uhlenbeck process. Leverage and short-selling in each asset are endogenously ruled out.
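
A mean-reverting multivariate Ornstein-Uhlenbeck process of the kind that approximates trading volume can be simulated with a plain Euler scheme; the matrix `theta`, the level `mu` and the scalar noise size `sigma` below are illustrative choices, not quantities from the paper:

```python
import math
import random

def mat_vec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, v)) for row in A]

def simulate_ou(x0, theta, mu, sigma, dt=1.0 / 250.0, n_steps=2000, seed=5):
    """Euler scheme for the multivariate Ornstein-Uhlenbeck dynamics
        dX_t = -Theta (X_t - mu) dt + sigma dW_t,
    with a matrix mean-reversion speed Theta and, for simplicity,
    a scalar diffusion coefficient with independent Brownian drivers."""
    rng = random.Random(seed)
    x = list(x0)
    path = [list(x)]
    for _ in range(n_steps):
        pull = mat_vec(theta, [xi - mi for xi, mi in zip(x, mu)])
        x = [xi - p * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
             for xi, p in zip(x, pull)]
        path.append(list(x))
    return path
```

The off-diagonal entries of `theta` are what couple the assets: volume in one asset is pulled back not only by its own deviation but also by deviations in the others.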

**Fast Simultaneous Calibration and Quadratic Hedging under Parameter Uncertainty**

*Magnus Wiktorsson (Lund University, Sweden)*

Thursday June 5, 16:00-16:30 | session P6 | Poster session | room lobby

It is a well-established empirical fact that there are jumps in real-world asset prices. These complicate calibration due to the additional parameters, and they also make hedges based on Taylor expansions (delta, gamma, etc.) perform suboptimally; see Brodén and Tankov [2011].

It was found in Lindström et al. [2008] that the calibration can be written as a non-linear filtering problem, which is faster and more robust than (penalized) weighted least squares. The filtering idea was extended to simultaneous calibration and quadratic hedging (computed under the risk-neutral measure) in Lindström and Guo [2013], a result achieved by augmenting the filtering problem with the underlying asset as an additional state variable.

There are two main contributions in this paper. The primary contribution is computational efficiency: we are not aware of any calibration method that uses fewer function evaluations (price calculations) than the one presented here. The second contribution is that parameter uncertainty (cf. Lindström [2010]) is taken into account when calculating the hedges. This is important because the price of a flexible model is often a large number of parameters, and hence large parameter uncertainty.

We find that the resulting algorithm, implemented using an unscented Kalman filter and Fourier based option valuation, is computationally very competitive, with hedges that often are similar to the ordinary quadratic hedges when these can be computed. A nice feature is that quadratic hedges using several hedge instruments are obtained with very few (and inexpensive) additional computations, since the required covariances are already calculated in the filter. Another feature of the proposed algorithm is that only prices are required to compute hedges for exotic options, simplifying quadratic hedging of these options substantially.

**A Theory of risk management with model risk and finiteness of number of issued securities**

*Daisuke Yoshikawa (Bank of Japan, Japan)*

Thursday June 5, 16:00-16:30 | session P6 | Poster session | room lobby

In our previous paper, "A pricing theory on finite number of issued securities", we captured the effect of the constraint that only a finite number of securities is issued. However, we postulated a very simple stochastic model: a one-period binomial model. The binomial model is closely related to the normal distribution, yet normality is often rejected empirically: non-normality, with fat tails as the typical example, is observed for many kinds of assets. Statistics and mathematical finance offer several methods for describing non-normality. The simplest way to embed it is to extend the binomial model to a trinomial, quadrinomial or higher-dimensional model with extreme lower values. However, these extreme lower values are often very difficult to capture, because they occur with small, unobservable probabilities.

Furthermore, information on such rare events is limited. One market participant might have sufficient information about them while another might not. In such circumstances, the former participant can specify the probability space exactly from his information, while the latter may construct a probability different from that of the well-informed participant. In this sense, asymmetry in the cognition of market participants is included in the model. In the previous paper, the heterogeneity of market participants was expressed through differences in risk aversion. In addition to this heterogeneity, we consider heterogeneity in the cognition of market participants and analyze its effect on the security price, under the condition that only a finite number of securities is issued.

This new heterogeneity clearly reduces to a problem of model risk, since model risk appears whenever market participants are not necessarily able to specify exactly the model parameters and/or the structure of the probability space.

Our goal is to describe the security price with a finite number of issued securities under this additional, model-risk-related heterogeneity. In this paper, we control this heterogeneity using the method of maxmin expected utility and derive a security price formula consistent with our previous paper.