IS08: Emerging Topics in Stochastic Finance
Organizer: Ju-Yi Yen (University of Cincinnati)
Convergence Analysis of Real-time Recurrent Learning (RTRL) for a class of Recurrent Neural Networks
Konstantinos Spiliopoulos
Recurrent neural networks (RNNs) are commonly trained with the truncated backpropagation-through-time (TBPTT) algorithm. For computational tractability, TBPTT truncates the chain rule and calculates the gradient on a finite block of the overall data sequence. Such an approximation can lead to significant inaccuracies, since the block length for the truncated backpropagation is typically much smaller than the overall sequence length. In contrast, real-time recurrent learning (RTRL) is an online optimization algorithm that asymptotically follows the true gradient of the loss on the data sequence as the number of sequence time steps t→∞. RTRL forward propagates the derivatives of the RNN hidden/memory units with respect to the parameters and, using these forward derivatives, performs online updates of the parameters at each time step in the data sequence. RTRL’s online forward propagation allows for exact optimization over extremely long data sequences, although it can be computationally costly for models with large numbers of parameters. We prove convergence of the RTRL algorithm for a class of RNNs. The convergence analysis establishes a fixed point for the joint distribution of the data sequence, the RNN hidden layer, and the forward derivatives of the RNN hidden layer as the number of data samples from the sequence and the number of training steps tend to infinity. We then prove convergence of the RTRL algorithm to a stationary point of the loss. Numerical studies illustrate our theoretical results. One potential application area for RTRL is the analysis of financial data, which typically involve long time series and models with small to medium numbers of parameters; this makes RTRL computationally tractable and a potentially appealing optimization method for training such models. We therefore include an example of RTRL applied to limit order book data.
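The forward-sensitivity recursion described above can be sketched for a small tanh RNN. The architecture, dimensions, toy target, and learning rate below are illustrative assumptions, not the model class analyzed in the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2                      # hidden size, input size (toy choices)
W = 0.1 * rng.standard_normal((n, n))
U = 0.1 * rng.standard_normal((n, m))
w_out = 0.1 * rng.standard_normal(n)
p = n * n + n * m                # number of recurrent parameters

h = np.zeros(n)
S = np.zeros((n, p))             # S[i, k] = d h_i / d theta_k (forward sensitivities)

lr = 1e-2
losses = []
for t in range(200):
    x = rng.standard_normal(m)
    target = x.sum()             # toy regression target
    pre = W @ h + U @ x
    h_new = np.tanh(pre)
    D = 1.0 - h_new ** 2         # derivative of tanh at the pre-activation

    # direct partials of the pre-activation w.r.t. the flattened (W, U)
    dpre = np.zeros((n, p))
    for i in range(n):
        dpre[i, i * n:(i + 1) * n] = h                   # d pre_i / d W[i, :]
        dpre[i, n * n + i * m:n * n + (i + 1) * m] = x   # d pre_i / d U[i, :]

    # RTRL recursion: propagate sensitivities forward in time
    S = D[:, None] * (W @ S + dpre)
    h = h_new

    y = w_out @ h
    err = y - target
    losses.append(err ** 2)

    # online gradient step using the forward sensitivities
    grad_theta = 2 * err * (w_out @ S)
    W -= lr * grad_theta[:n * n].reshape(n, n)
    U -= lr * grad_theta[n * n:].reshape(n, m)
    w_out -= lr * 2 * err * h
```

Unlike TBPTT, no backward pass over a block is needed: the sensitivity matrix `S` carries all derivative information forward, at a per-step cost that grows with the number of parameters.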
Stochastic filtering equations for diffusions on infinite graphs
Tomoyuki Ichiba
We shall discuss diffusions with mean-field interactions and interactions through a tree structure by introducing stochastic differential equations with distributional constraints. This class of systems includes directed chain stochastic differential equations; such a system is infinite-dimensional, as is the corresponding system of filtering equations we discuss. We also discuss particle-system approximations for determining the distributions, and their application to portfolio optimization.
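As a toy illustration of the particle approximation, a directed chain of Euler-discretized diffusions can be closed into a ring of N particles, each interacting with its chain neighbour; the linear drift, interaction weight, and discretization parameters below are illustrative assumptions, not the system studied in the talk:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, dt = 500, 200, 0.01   # particles, time steps, step size (toy choices)
u = 0.5                      # weight on the chain-neighbour interaction

def drift(x, x_next):
    # linear mean-reverting drift pulling each particle toward its neighbour
    return -x + u * x_next

X = rng.standard_normal(N)
for _ in range(T):
    X_next = np.roll(X, -1)          # close the directed chain into a ring
    X = X + drift(X, X_next) * dt + np.sqrt(dt) * rng.standard_normal(N)

# the empirical law of the particles approximates the common marginal law
# enforced by the distributional constraint along the chain
print(X.mean(), X.var())
```

The ring closure is a standard device for simulating a finite particle system whose empirical distribution approximates the law of the infinite directed chain.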
Median process in fragmented crypto-markets: robust estimation, hedging
Emmanuel Gobet
Crypto-finance is characterized by a highly fragmented market in which digital assets are traded on dozens of Centralized Exchanges and thousands of Decentralized Exchanges operated with blockchain technology. These markets have developed strongly in recent years and are currently the subject of experimentation at the highest level by regulatory authorities, in particular in Europe and in the USA.
In this context of large market fragmentation, the consensus "market price" is challenging to evaluate. We will present how medians (weighted by volumes and/or times) emerge as a key ingredient for robust aggregation: for such medians, I will show how to derive relevant sub-gamma concentration-of-measure inequalities. Then, since this consensus "market price" can serve as a reference for derivatives pricing, I will investigate how it can be hedged by studying its Malliavin differentiability properties.
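A minimal sketch of a volume-weighted median aggregator across venues (the quotes and volumes below are hypothetical, and this simple estimator is only meant to illustrate the robustness property):

```python
import numpy as np

def weighted_median(prices, weights):
    """Smallest price whose cumulative weight reaches half the total weight."""
    order = np.argsort(prices)
    p, w = np.asarray(prices)[order], np.asarray(weights)[order]
    cum = np.cumsum(w)
    return p[np.searchsorted(cum, 0.5 * cum[-1])]

# hypothetical quotes for the same asset on five venues
prices  = [100.2, 99.8, 100.5, 100.1, 250.0]   # last entry: an aberrant venue
volumes = [ 30.0, 25.0,  10.0,  33.0,   2.0]
print(weighted_median(prices, volumes))  # → 100.1
```

Unlike a volume-weighted average, the weighted median is essentially unaffected by a low-volume venue quoting an aberrant price, which is why median-based aggregation is attractive in a fragmented market.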
The talk is based on joint works with M. Allouche (Kaiko), M. Echenim (Grenoble Institute of Technology), Y. Ma (Tokyo Institute of Technology), A.C. Maurice (Kaiko).