Risk-Sensitive Markov Decision Processes with Applications to Finance and Insurance
Nicole Bäuerle (KIT, Germany)
Joint work with Anna Jaskiewicz and Ulrich Rieder

Wednesday June 4, 17:00-17:30 | session 6.2 | Portfolio Optimization | room CD

In the first part of the talk we investigate the problem of minimizing a certainty equivalent of the total or discounted cost generated by a Markov Decision Process (MDP), over both finite and infinite horizons. The certainty equivalent is defined by $U^{-1}(EU(Y))$, where $U$ is an increasing function. In contrast to the risk-neutral criterion, this optimization criterion takes the variability of the cost into account. It contains as a special case the classical risk-sensitive optimization criterion with an exponential utility. We show that this optimization problem can be solved by an ordinary MDP with an extended state space and give conditions under which an optimal policy exists. Interestingly, it turns out that in the case of a power utility the problem simplifies and is of similar complexity to the exponential utility case, yet it has not been treated in the literature so far. A simple portfolio problem is considered to illustrate the influence of the certainty equivalent and its parameters. At the end, we will also consider risk-sensitive dividend problems.
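As a numerical illustration of the criterion (my sketch, not from the talk): the certainty equivalent $U^{-1}(EU(Y))$ of a random cost $Y$ can be evaluated by Monte Carlo for both utilities mentioned above. With an exponential utility it reduces to the classical risk-sensitive value $\frac{1}{\gamma}\log E[e^{\gamma Y}]$; the distribution of $Y$ and the parameter values below are arbitrary choices for demonstration.

```python
import numpy as np

# Sketch: certainty equivalent U^{-1}(E[U(Y)]) of a random cost Y.
# Distribution and parameters are illustrative assumptions.
rng = np.random.default_rng(0)
Y = rng.uniform(0.0, 2.0, size=100_000)  # random total cost, mean 1

# Exponential utility U(y) = exp(gamma*y): classical risk-sensitive value
gamma = 1.0
ce_exp = np.log(np.mean(np.exp(gamma * Y))) / gamma

# Power utility U(y) = y**p for y >= 0, inverse z**(1/p)
p = 2.0
ce_pow = np.mean(Y ** p) ** (1.0 / p)

risk_neutral = np.mean(Y)
print(risk_neutral, ce_pow, ce_exp)
# For a cost-minimizer, both certainty equivalents exceed the plain
# expectation, penalizing the variability of the cost.
```

By Jensen's inequality, for any convex increasing $U$ the certainty equivalent of a cost is at least its mean, which is what the printout shows.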

Good Coskewness, Bad Coskewness
Konark Saxena (University of New South Wales, Australia)
Joint work with Petko Kalev and Leon Zolotoy

Wednesday June 4, 17:30-18:00 | session 6.2 | Portfolio Optimization | room CD

We extend the influential framework of Campbell (1993) and Campbell and Vuolteenaho (2004) to an intertemporal CAPM in which coskewness of asset returns with news about cash flows and discount rates is priced. In the model, news about future risk, defined as the variance of the SDF plus the market portfolio, is driven by the quadratic terms of shocks to discount rates and shocks to cash flows. Assets with good coskewness have lower expected returns because they act as hedges against increases in the volatility of cash-flow news and the volatility of discount-rate news. Assets with bad coskewness have higher expected returns because they are negatively correlated with simultaneous shocks to cash-flow news and discount-rate news, which predict an increase in future risk. An empirical implementation of the intertemporal model captures approximately 73% of the return variation across size/book-to-market, size/momentum, and industry portfolios. We find that the momentum strategy has significantly higher exposure to risk related to simultaneous news about discount rates and cash flows that increases future risk. We show that the price of risk for hedging against simultaneous shocks to cash flows and discount rates is economically and statistically significant and an important determinant of the momentum premium.
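To make the good/bad coskewness distinction concrete, a standardized three-way comoment can be estimated from simulated data (my construction for illustration, not the authors' estimator): a return with negative coskewness loses value precisely when the two news series move together, i.e. when future risk is predicted to rise.

```python
import numpy as np

def coskewness(r, x, y):
    """Standardized coskewness E[(r-Er)(x-Ex)(y-Ey)] / (sd_r sd_x sd_y)."""
    r, x, y = (np.asarray(v, dtype=float) for v in (r, x, y))
    rc, xc, yc = r - r.mean(), x - x.mean(), y - y.mean()
    return np.mean(rc * xc * yc) / (r.std() * x.std() * y.std())

rng = np.random.default_rng(1)
n = 50_000
x = rng.standard_normal(n)    # stand-in for cash-flow news
y = rng.standard_normal(n)    # stand-in for discount-rate news
eps = rng.standard_normal(n)  # idiosyncratic noise

r_bad = -0.5 * x * y + eps    # loses value on simultaneous shocks
r_good = 0.5 * x * y + eps    # hedges simultaneous shocks

print(coskewness(r_bad, x, y), coskewness(r_good, x, y))
# negative for r_bad ("bad coskewness"), positive for r_good ("good")
```

In the model's pricing logic, the first asset would command a higher expected return and the second a lower one.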

Black-Litterman in Continuous Time: Jump-Diffusion Processes and Applications
Sebastien Lleo (NEOMA Business School, France)
Joint work with Mark Davis

Wednesday June 4, 16:30-17:00 | session 6.2 | Portfolio Optimization | room CD

Black and Litterman (1991, 1992) developed a successful one-period mean-variance optimization model in which the expected risk premia of the asset returns incorporate views formulated by securities analysts. The model has proved extremely popular, although most discussions take place in the original one-period setting and hardly any dynamic extensions exist.
We developed a continuous-time analogue of the Black-Litterman model, using a standard linear filtering argument to incorporate analyst views and stochastic control to solve the asset allocation problem [QFL, 1 (2013)]. The key to our approach is that the filtering problem and the stochastic control problem are effectively separable. Hence we can incorporate analyst views and non-investable assets as observations in our filter even though they are not present in the portfolio optimization.
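The filtering idea can be sketched in a minimal discrete-time form (assumed toy dynamics, not the paper's continuous-time filter): an unobserved risk premium follows a random walk, and each period we observe both a realized excess return and an analyst "view", each equal to the premium plus noise. A scalar Kalman filter blends the two observation channels; all parameter values are illustrative.

```python
import numpy as np

def kalman_step(b_hat, P, obs, obs_var, q=1e-6):
    """One predict/update cycle with several independent scalar observations."""
    b_pred, P_pred = b_hat, P + q           # random-walk state prediction
    for z, r in zip(obs, obs_var):
        K = P_pred / (P_pred + r)           # Kalman gain for this channel
        b_pred = b_pred + K * (z - b_pred)  # update with observation z
        P_pred = (1.0 - K) * P_pred
    return b_pred, P_pred

rng = np.random.default_rng(2)
b_true, b_hat, P = 0.05, 0.0, 1.0  # true premium, initial guess, prior var
for _ in range(500):
    ret = b_true + 0.20 * rng.standard_normal()   # noisy realized return
    view = b_true + 0.05 * rng.standard_normal()  # sharper analyst view
    b_hat, P = kalman_step(b_hat, P, [ret, view], [0.20**2, 0.05**2])

print(b_hat)  # estimate converges toward the true premium 0.05
```

The sharper view channel dominates the update, which mirrors how confident analyst views pull the estimated risk premia in the Black-Litterman framework.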
In this paper, we generalize the continuous time Black-Litterman model in three significant ways. First, we examine the selection of the prior, that is the initial uninformed vector of risk premia. Black and Litterman’s choice of prior through a reverse optimization process is an important reason for the success of their model. However, the question of the prior is often overlooked in filtering theory, where the prior is drawn from a given distribution. We propose a method to construct the prior in a continuous time setting.
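For reference, the one-period reverse-optimization prior mentioned above has a standard textbook form (shown here only to illustrate what "choice of prior" means; the paper's contribution is its continuous-time counterpart): given market-capitalization weights $w$ and return covariance $\Sigma$, the implied equilibrium risk premia are $\pi = \delta \Sigma w$, where $\delta$ is a risk-aversion coefficient. The numbers below are made up.

```python
import numpy as np

delta = 2.5                                # assumed market risk aversion
w = np.array([0.6, 0.3, 0.1])              # market-cap weights
Sigma = np.array([[0.040, 0.012, 0.006],   # illustrative return covariance
                  [0.012, 0.030, 0.009],
                  [0.006, 0.009, 0.020]])

pi = delta * Sigma @ w                     # implied equilibrium risk premia
print(pi)  # array([0.0705 , 0.04275, 0.02075])
```

Feeding $\pi$ back into a mean-variance optimizer recovers the market weights $w$, which is exactly why this prior is "uninformed" in Black and Litterman's sense.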
Second, we use jump-diffusion processes to model the observations. An obvious motivation is to model asset price jumps and the higher moments of the return distribution. A less obvious but equally important reason is to define non-Gaussian confidence intervals around the analyst views. The literature on expert opinions suggests that the Gaussian distribution may not generate appropriate confidence intervals. Lévy processes give us access to a wider class of distributions and enable us to develop a more accurate probabilistic characterization of the analyst views.
Finally, we discuss applications to stochastic control problems in general and to risk-sensitive investment management models in particular. We consider three examples: portfolio management, benchmarked asset management, and asset and liability management. Although the filtering step and the stochastic control problem are effectively separable, the nature of the investment problem has an impact on both. These three examples show the impact of a replicable benchmark and of an unhedgeable liability on the filtering process.