Poster session 3
Wednesday, 7/16, 12:00-13:00, Congress Centre
P7: Doubly stochastic resetting
Kacper Taźbierski (WUST)
stand: S2
Stochastic resetting is a powerful branch of the theory of stochastic processes. It can model sudden (or gradual) returns of a process, or of some of its properties, to the initial state.
One form of stochastic resetting is the so-called partial resetting, where at the moment of a reset the value of the process changes according to the rule \(X(\tau)= c X(\tau^-)\), \(c\in [0,1]\), and the process loses its memory. We discuss the problems of complete resetting and memory resetting, the special cases \(c=0\) and \(c=1\), respectively. The main focus is on the case where the resetting mechanism is guided by a Cox process, where interesting anomalous diffusion properties emerge.
Theorem 1. \([1]\) Let the underlying stochastic process \(X(t)\) display anomalous diffusion with exponent \(\alpha\) for large times, and let the memory resetting mechanism be guided by a mixed Poisson process with random intensity \(R\) whose PDF displays the power law \(p_R(r)\stackrel{r\downarrow 0}{\sim}L(r)r^{\varrho-1}\), where \(L\) is a slowly varying function and \(\varrho>0\). Then the process under resetting displays anomalous behavior with exponent \(\alpha-\varrho\) if \(\varrho<\alpha-1\), and normal diffusion otherwise.
Theorem 2. \([2]\) Let the underlying stochastic process \(X(t)\) display anomalous diffusion with exponent \(\alpha\) for large times, and let the complete resetting mechanism be guided by a Cox process whose random intensity function \(r(t)\) scales like \(t^{\varrho-1}\) for some \(\varrho>0\). Then the process under resetting displays anomalous behavior with exponent \(\alpha(1-\varrho)\).
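To make the resetting rule concrete, here is a minimal simulation sketch (not part of the poster) of a Brownian trajectory under partial resetting at the epochs of a constant-rate Poisson process; all parameter choices are illustrative, and the mixed Poisson or Cox mechanisms of the theorems would replace the constant rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def partial_resetting_bm(T=100.0, dt=0.01, c=0.0, rate=0.1):
    """Brownian path with partial resetting X(tau) = c * X(tau-) at the
    epochs of a rate-`rate` Poisson process (thinning per time step).
    c = 0 is complete resetting; c = 1 leaves the value unchanged
    (memory resetting only matters for non-Markovian drivers)."""
    n = int(T / dt)
    x, path = 0.0, np.empty(n)
    for i in range(n):
        if rng.random() < rate * dt:         # a reset epoch hits this step
            x *= c
        x += np.sqrt(dt) * rng.standard_normal()
        path[i] = x
    return path

# Ensemble MSD under complete resetting saturates instead of growing like t.
paths = np.array([partial_resetting_bm() for _ in range(200)])
print("MSD at t = T:", (paths[:, -1] ** 2).mean())
```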
Bibliography
\([1]\) Taźbierski, K., Magdziarz, M., Metzler, R. Stochastic memory resetting. In preparation.
\([2]\) Taźbierski, K., Magdziarz, M. Doubly stochastic resetting. In preparation.
P26: Risk Control in Federated Learning via Threshold Aggregation
Onrina Chandra (Rutgers University)
stand: S15
Introduction
In federated learning, multiple local servers collaboratively train models without sharing raw data. However, aggregating their predictions can result in uncontrolled risk due to data heterogeneity and model uncertainty. We propose a novel risk control framework that leverages knowledge transfer to calibrate and control risk across local servers. Our approach assumes that each server can supply threshold parameters that bound its risk at desired confidence levels, even when the local models are black-box. By aggregating these thresholds through a calibrated weighting scheme, our method guarantees that the overall risk remains below a target level with high probability.
Model Setup
Let \(K\) be the number of local servers. For each server \(k \in \{1, \dots, K\}\):
\(\{(X^{(k)}_i, Y^{(k)}_i)\}_{i=1}^{n_k} \sim P_k\) are exchangeable observations.
Data is split into training and calibration sets: \(I^{(k)}_{\text{train}}\) and \(I^{(k)}_{\text{cal}}\).
A score function \(S(X, Y)\) is defined.
Set-valued predictor \(\mathcal{T}^k_\lambda: \mathcal{X} \to \mathcal{Y}'\), e.g., \[\mathcal{T}^k_\lambda(X) = \{ Y : S(X, Y) \leq \lambda \}.\]
Predictors are nested: \(\lambda_1 < \lambda_2 \Rightarrow \mathcal{T}^k_{\lambda_1}(x) \subseteq \mathcal{T}^k_{\lambda_2}(x)\).
Loss function \(L_k(Y, \mathcal{T}^k_\lambda(X))\), e.g., \[L_k(Y, \mathcal{T}^k_\lambda(X)) = 1 - \frac{|Y \cap \mathcal{T}^k_\lambda(X)|}{|Y|}.\]
Local Risk
Local risk for server \(k\) is \[R_k(\lambda) = \mathbb{E}_{(X,Y) \in I_{\text{cal}}^{(k)}} \left[ L_k(Y, \mathcal{T}^k_\lambda(X)) \right].\]
For a fixed cutoff \(\alpha\), assume \(R_k(\lambda) \leq \alpha\) for all \(\lambda \in \Lambda\), for all \(k\).
Fix \(M\) levels \(\delta_1, \delta_2, \ldots, \delta_M \in [0,1]\). Each server provides \(M\) threshold-risk pairs \(\{(\tilde{\lambda}^{(k)}_{\delta_m}, \delta_m)\}_{m=1}^M\). Given only these pairs and the cutoff \(\alpha\), can we control the global risk at a level \(\gamma\)? We employ conformal prediction to aggregate the values \(\{\{(\tilde{\lambda}^{(k)}_{\delta_m}, \delta_m)\}_{m=1}^M\}_{k=1}^K\) and define a parameter \(\tilde{\lambda}_\gamma\) such that \[\mathbb{P}_{\{I^{(k)}_{\text{cal}}\}_{k=1}^K} \left( R(\tilde{\lambda}_\gamma) \leq \alpha \right) \geq 1 - \gamma.\]
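A minimal end-to-end sketch of this pipeline, under stylized assumptions: per-server thresholds are computed split-conformal style from hypothetical calibration scores, and the aggregation step below is a simple conservative quantile placeholder, not the poster's calibrated weighting scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def server_threshold(scores, delta):
    """Smallest lambda whose empirical risk on the calibration scores is
    at most delta, with the usual (n + 1) conformal rank correction."""
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - delta)))
    return np.sort(scores)[min(k, n) - 1]

K, deltas = 5, [0.05, 0.10, 0.20]            # servers and risk levels
tables = []                                  # per-server (lambda, delta) pairs
for k in range(K):
    cal_scores = rng.exponential(1.0 + 0.2 * k, size=500)   # hypothetical
    tables.append([(server_threshold(cal_scores, d), d) for d in deltas])

# Placeholder aggregation: at the target level, take a conservative
# quantile of the per-server thresholds (the poster's calibrated
# weighting scheme would replace this step).
target = 0.10
m = deltas.index(target)
lam_global = np.quantile([tables[k][m][0] for k in range(K)], 0.9)
print("aggregated threshold:", round(float(lam_global), 3))
```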
Bibliography
\([1]\) Bates, Stephen, et al. "Distribution-free, risk-controlling prediction sets." Journal of the ACM (JACM), 68.6 (2021).
\([2]\) Mohri, Christopher, and Tatsunori Hashimoto. "Language models with conformal factuality guarantees."
\([3]\) Lu, Charles, et al. "Federated conformal predictors for distributed uncertainty quantification."
\([4]\) Angelopoulos, Anastasios N., et al. "Conformal risk control." arXiv preprint arXiv:2208.02814 (2022).
P29: Functional convergence of self-normalized partial sums of linear processes with random coefficients
Danijel Krizmanić (University of Rijeka)
stand: S1
We derive a self-normalized functional limit theorem for strictly stationary linear processes with i.i.d. heavy-tailed innovations and random coefficients under the condition that all partial sums of the series of coefficients are a.s. bounded between zero and the sum of the series. The convergence takes place in the space of càdlàg functions on \([0,1]\) equipped with the Skorokhod \(M_{2}\) topology.
Bibliography
\([1]\) Danijel Krizmanić. "A functional limit theorem for self-normalized linear processes with random coefficients and i.i.d. heavy-tailed innovations." Lithuanian Mathematical Journal, vol. 63, 2023, pp. 32.
P31: Risk-Sensitive First Exit Time Control with Varying and Constant Discount Factors on a General State Space with Approximation Algorithms
Amit Ghosh (Indian Institute of Technology Guwahati)
stand: S3
We analyze the risk-sensitive first exit time stochastic control problem for discrete-time processes on a general state space with state-dependent as well as constant discount factors. Under suitable assumptions, we prove the existence, Bellman characterization and uniqueness of the optimal value function over the space of randomized history-dependent policies for bounded costs. We not only propose a Policy Improvement Algorithm (PIA) but also prove its convergence on a general state space using a stochastic representation of the optimal value function. To estimate the value function, a value iteration scheme is proposed. Lastly, we verify our convergence results through an example using a MATLAB simulation.
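As a loose illustration of the value iteration idea (a finite-state toy, not the paper's general-state-space setting): for the exponential risk-sensitive criterion the first exit value \(V(x)=\inf_\pi \mathbb{E}\exp(\theta\sum_{t<\tau}c(X_t,A_t))\) satisfies a multiplicative Bellman equation, which the fixed-point iteration below solves for a hypothetical controlled random walk.

```python
import numpy as np

# Toy value iteration for a risk-sensitive first exit problem on a finite
# state space; V(x) = inf over controls of E[exp(theta * running cost
# accumulated before exit)], zero exit cost, exit set {0, N}.
N, theta = 10, 0.5
actions = (-1, +1)

def cost(x, a):
    return 0.1 + 0.05 * abs(a)               # hypothetical running cost

def transition(x, a):
    """Controlled random walk: a = +1 biases the step to the right."""
    p = 0.5 + 0.2 * a
    return [x + 1, x - 1], [p, 1 - p]

V = np.ones(N + 1)                            # exit states keep V = exp(0) = 1
for _ in range(1000):                         # iterate the multiplicative
    newV = V.copy()                           # Bellman operator to a fixed point
    for x in range(1, N):
        newV[x] = min(
            np.exp(theta * cost(x, a)) *
            sum(p * V[y] for y, p in zip(*transition(x, a)))
            for a in actions)
    if np.max(np.abs(newV - V)) < 1e-12:
        break
    V = newV
print("certainty-equivalent values:", (np.log(V[1:N]) / theta).round(3))
```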
Bibliography
\([1]\) Wei Q, Chen X. Continuous-time Markov decision processes under the risk-sensitive first passage discounted cost criterion. Journal of Optimization Theory and Applications. 2023 Apr;197(1):309-33.
\([2]\) Biswas A, Pradhan S. Ergodic risk-sensitive control of Markov processes on countable state space revisited. ESAIM: Control, Optimisation and Calculus of Variations. 2022;28:26.
P32: Representation of a class of nonlinear SPDEs driven by Lévy space-time noise
Boubaker Smii (King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia)
stand: S4
In this work, we discuss a class of nonlinear stochastic partial differential equations (SPDEs) driven by Lévy space-time noise. A perturbative strong solution is constructed using tree expansion, while the truncated moments of the solution are expressed as sums over a specific class of graphs. Additionally, we will discuss some applications.
P33: Learning optimal search strategies
Maximilian Philipp Thiel (Friedrich Schiller University Jena)
stand: S5
We investigate a repeated search problem in a stochastic environment. In particular, we deal with the so-called parking lot problem in continuous time. We model the arrival of free parking spaces by an (unknown) inhomogeneous Poisson process. The search for a free parking space near a desired location is repeated, and optimal stopping rules, i.e. the time at which the first free parking space is taken, are to be determined based on the available observations. The analysis of the problem therefore requires results and approaches from stochastic control theory, reinforcement learning and, in this context, bandit problems. We develop an algorithm to determine optimal stopping rules under uncertainty. The algorithm achieves an asymptotically logarithmic regret rate under mild conditions on the Poisson process. Using a minimax criterion from statistical decision theory, we show that a logarithmic rate is a lower bound for the regret in this problem, which in turn means that our algorithm is asymptotically optimal.
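A toy discrete version of the repeated parking problem (illustrative only, and much simpler than the continuous-time Poisson model of the poster): vacancies occur i.i.d. along the approach, the unknown vacancy probability is estimated across episodes, and a plug-in acceptance radius of the classical \(\ln 2 / p\) flavor serves as a heuristic stopping rule.

```python
import numpy as np

rng = np.random.default_rng(2)
D, p_true = 50, 0.1                          # horizon and vacancy probability

def run_episode(free, accept_radius):
    """Drive in from distance D-1 toward the target at 0 and park at the
    first vacancy within the acceptance radius; cost = walking distance."""
    for d in range(len(free) - 1, -1, -1):
        if free[d] and d <= accept_radius:
            return d
    return len(free)                         # drove past the target: penalty

n_free, n_seen, costs = 0, 0, []
for t in range(2000):
    p_hat = (n_free + 1) / (n_seen + 2)      # smoothed vacancy-rate estimate
    radius = np.log(2) / max(p_hat, 1e-3)    # plug-in acceptance threshold
    free = rng.random(D) < p_true
    d = run_episode(free, radius)
    costs.append(d)
    start = d if d < D else 0                # positions observed this episode
    n_free += free[start:].sum()
    n_seen += D - start
print("average cost over the last 500 episodes:", np.mean(costs[-500:]))
```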
P34: Stable Thompson Sampling: Valid Inference via Variance Inflation
Budhaditya Halder (Rutgers University, New Brunswick)
stand: S6
We consider the problem of statistical inference when the data is collected via a Thompson Sampling-type algorithm. While Thompson Sampling (TS) is known to be both asymptotically optimal and empirically effective, its adaptive sampling scheme poses challenges for constructing confidence intervals for model parameters. We propose and analyze a variant of TS, called Stable Thompson Sampling, in which the posterior variance is inflated by a logarithmic factor. We show that this modification leads to asymptotically normal estimates of the arm means, despite the non-i.i.d. nature of the data. Importantly, this statistical benefit comes at a modest cost: the variance inflation increases regret by only a logarithmic factor compared to standard TS. Our results reveal a principled trade-off: by paying a small price in regret, one can enable valid statistical inference for adaptive decision-making algorithms.
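The following sketch conveys the idea in a Gaussian bandit (my stylization, not necessarily the paper's exact algorithm): standard Thompson Sampling with the posterior variance multiplied by a \(\log t\) factor, which keeps every arm sampled often enough for the arm-mean estimates to be asymptotically normal.

```python
import numpy as np

rng = np.random.default_rng(3)

def stable_ts(means, T=20000):
    """Gaussian Thompson Sampling with the posterior variance inflated
    by a log(t) factor; rewards are N(mean, 1)."""
    K = len(means)
    n, s = np.zeros(K), np.zeros(K)          # pull counts, reward sums
    for t in range(1, T + 1):
        if t <= K:
            a = t - 1                        # pull each arm once to start
        else:
            mu = s / n
            sd = np.sqrt(np.log(t) / n)      # inflation: log(t) instead of 1
            a = int(np.argmax(rng.normal(mu, sd)))
        r = rng.normal(means[a], 1.0)
        n[a] += 1
        s[a] += r
    return s / n, n

est, pulls = stable_ts([0.0, 0.2, 0.5])
print("arm-mean estimates:", est.round(3), "pulls:", pulls)
```

The extra \(\log t\) spreads exploration across all arms at a logarithmic rate, which is exactly the regret price the abstract describes.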
P35: Estimation for functionals in Renewal models and applications
Anna Kouroukli (University of Piraeus)
stand: S7
We introduce a fully non-parametric methodology for estimating function-valued performance measures of renewal systems, with particular emphasis on the mean-availability curve \(\bar{A}(t)=t^{-1}\!\int_{0}^{t}A(x)\,dx\) of alternating working/repair cycles. The renewal equation is discretised on a uniform time grid, and the resulting convolution sums are evaluated with a single pass of the Fast Fourier Transform (FFT). This grid-based strategy eliminates distributional assumptions, scales as \(O(n\log n)\) in the time-horizon length, and remains numerically stable even on high-resolution grids or with large samples.
Starting from the empirical distributions of the observed working and repair periods, we obtain the estimator \(\hat{\bar{A}}_{n}\) in closed FFT form. A refined Glivenko–Cantelli theorem in weighted càdlàg spaces guarantees strong consistency, while the Functional Delta Method yields asymptotic normality: \(n^{1/2}\bigl(\hat{\bar{A}}_{n}-\bar{A}\bigr)\overset{d}{\longrightarrow}\mathcal{G},\) where \(\mathcal{G}\) is a centred Gaussian process. Because every step of the construction is Hadamard-differentiable, bootstrap resampling supplies valid, data-driven confidence bands for \(\bar{A}(t)\) across the entire timeline.
The proposed toolkit thus combines computational speed with rigorous convergence guarantees for the non-parametric estimators.
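A stripped-down illustration of the plug-in construction under hypothetical gamma/exponential work and repair data: the renewal equation \(A(t)=\mathbb{P}(W>t)+\int_0^t A(t-x)f_{W+R}(x)\,dx\) is discretised on a uniform grid; for brevity a direct \(O(n^2)\) recursion replaces the single-pass FFT evaluation of the convolution sums.

```python
import numpy as np

rng = np.random.default_rng(4)

h, n = 0.05, 400                              # grid step and grid length
grid = np.arange(n) * h
work = rng.gamma(2.0, 1.0, size=300)          # hypothetical working periods
repair = rng.exponential(0.5, size=300)       # hypothetical repair periods

S_w = np.array([(work > t).mean() for t in grid])        # empirical P(W > t)
cycle = work + repair                                    # full cycle W + R
f_c = np.histogram(cycle, bins=n, range=(0, n * h), density=True)[0]

A = np.empty(n)                               # availability on the grid
A[0] = 1.0
for i in range(1, n):                         # A(t) = P(W>t) + (A * f_C)(t)
    conv = np.dot(A[:i][::-1], f_c[1:i + 1])
    A[i] = S_w[i] + h * conv
Abar = np.cumsum(A) * h / np.maximum(grid, h)  # mean-availability curve
print("estimated A-bar at t =", grid[-1], ":", round(Abar[-1], 3))
```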
Bibliography
\([1]\) Grübel, R. and Pitts, S. (1993). Non-parametric estimation in renewal theory I: The empirical renewal function. Annals of Statistics 21(1), 143–168.
\([2]\) Politis, K. and Pitts, S. (2000). Non-parametric estimation in renewal theory II: Solutions of renewal-type equations. Annals of Statistics 28(1), 88–115.
\([3]\) Bøgsted, M. and Pitts, S. (2006). Non-parametric inference from the M/G/1 workload. Bernoulli 12(4), 737–759. doi:10.3150/bj/1155735934
\([4]\) Van der Vaart, A. W. (2012). Asymptotic Statistics. Cambridge University Press.
P36: Neural network correction for numerical solutions of stochastic differential equations
Marcin Miśkiewicz (Wrocław University of Science and Technology)
stand: S8
Neural networks, known for their universal approximation capabilities, are widely used to solve ordinary and partial differential equations. In addition to directly approximating solutions, deep learning models can be incorporated into conventional numerical schemes to reduce the associated local truncation error. In this work, we explore the idea of a similar enhancement in the context of non-deterministic systems by introducing a neural network as a correction term in the Euler–Maruyama and Milstein methods for SDEs. Without assuming any prior knowledge of the exact solution, a single training run for a given equation enables the subsequent generation of more accurate trajectories. As demonstrated numerically on several test SDEs, augmenting the baseline scheme reduces the global approximation error in both the weak and strong sense.
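A compact PyTorch sketch of the idea (the SDE, architecture, and training choices are illustrative, not those of the poster): a small network is trained so that the corrected Euler–Maruyama step matches a fine-resolution reference path driven by the same Brownian increment.

```python
import torch

torch.manual_seed(0)

# Toy SDE dX = -X dt + 0.5 X dW; h is the coarse step, `sub` fine substeps.
mu = lambda x: -x
sigma = lambda x: 0.5 * x
h, sub = 0.1, 100

net = torch.nn.Sequential(                   # correction term g(x0, dW)
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x0 = torch.rand(256, 1) * 2 - 1          # random initial states
    dw_fine = torch.randn(sub, 256, 1) * (h / sub) ** 0.5
    dW = dw_fine.sum(0)                      # shared coarse Brownian increment
    x = x0                                   # fine-resolution EM reference
    for k in range(sub):
        x = x + mu(x) * (h / sub) + sigma(x) * dw_fine[k]
    coarse = x0 + mu(x0) * h + sigma(x0) * dW
    corrected = coarse + h * net(torch.cat([x0, dW], dim=1))
    loss = ((corrected - x) ** 2).mean()     # strong (pathwise) one-step error
    opt.zero_grad(); loss.backward(); opt.step()

print("final one-step training loss:", float(loss))
```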
P37: Differential equations driven by exponential Besov-Orlicz signals
František Hendrych (Charles University)
stand: S9
The poster describes the extention of the rough path theory to cover paths from the exponential Besov-Orlicz space \[B^\alpha_{\Phi_\beta,q}\quad\mbox{ for }\quad \alpha\in (1/3,1/2],\,\quad \Phi_\beta(x) \sim \mathrm{e}^{x^\beta}-1\quad\mbox{with}\quad \beta\in (0,\infty), \quad\mbox{and}\quad q\in (0,\infty],\] which is then used to treat nonlinear differential equations driven by such paths. The exponential Besov-Orlicz-type spaces, rough paths, and controlled rough paths are defined, a sewing lemma for such paths is given, and the existence and uniqueness of the solution to differential equations driven by these paths is stated. The results cover equations driven by paths of continuous local martingales with Lipschitz continuous quadratic variation (e.g. the Wiener process) or by paths of fractionally filtered Hermite processes in the \(n\)th Wiener chaos with Hurst parameter \(H\in (1/3,1/2]\) (e.g. the fractional Brownian motion).
P38: Learning payoffs while routing in skill-based queues
Sanne van Kempen (Eindhoven University of Technology)
stand: S10
Motivated by applications in service systems, we consider queueing systems where each customer must be handled by a server with the right skill set. We focus on optimizing the routing of customers to servers in order to maximize the total payoff of customer–server matches, where the customer–server-dependent payoff parameters are assumed to be unknown a priori. We construct a machine learning algorithm that adaptively learns the payoff parameters while maximizing the total payoff and prove that it achieves polylogarithmic regret. Moreover, we show that the algorithm is asymptotically optimal up to logarithmic terms by deriving a regret lower bound. The algorithm leverages the basic feasible solutions of a static linear program as the action space; an illustrative instance is sketched below. The regret analysis overcomes the complex interplay between queueing and learning by analyzing the convergence of the queue length process to its stationary behavior. We also demonstrate the performance of the algorithm numerically, and include an experiment with time-varying parameters that highlights the potential of the algorithm in non-static environments.
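To indicate what the static linear program looks like, here is an illustrative instance (all rates and payoffs hypothetical) solved with scipy; an algorithm of this kind plays among the basic feasible solutions of such an LP while the payoff estimates are refined.

```python
import numpy as np
from scipy.optimize import linprog

# Static routing LP: x[i, j] = rate of class-i customers routed to server
# pool j; maximize estimated payoff subject to arrival and capacity limits.
arrival = np.array([1.0, 1.5])                # arrival rate per class
capacity = np.array([1.2, 1.0, 0.8])          # service capacity per pool
payoff_hat = np.array([[1.0, 0.4, 0.0],       # estimated match payoffs;
                       [0.2, 0.9, 0.7]])      # 0.0 marks a missing skill
I, J = payoff_hat.shape

c = -payoff_hat.ravel()                       # linprog minimizes
A_ub, b_ub = [], []
for i in range(I):                            # route at most the arrival rate
    row = np.zeros(I * J); row[i * J:(i + 1) * J] = 1
    A_ub.append(row); b_ub.append(arrival[i])
for j in range(J):                            # respect pool capacity
    row = np.zeros(I * J); row[j::J] = 1
    A_ub.append(row); b_ub.append(capacity[j])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, None))
print("optimal routing rates:\n", res.x.reshape(I, J).round(3))
```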
P40: The strong law of large numbers and a functional central limit theorem for general Markov additive processes
Víctor Rivero (Centro de Investigacion en Matemáticas)
stand: S12
In this note we revisit the fundamental questions of the strong law of large numbers and the central limit theorem for processes in continuous time with conditionally stationary and independent increments. For convenience we refer to them as Markov additive processes, or MAPs for short. Historically used in the setting of queueing theory, MAPs have mostly been studied in the case where the underlying modulating process is an ergodic Markov chain on a finite state space, cf. \([1]\) and \([2]\), not to mention the classical contributions of Prabhu \([3]\) and \([4]\). Recent works have addressed the strong law of large numbers when the underlying modulating process is a general Markov process; cf. \([5]\) and \([6]\). We add to the latter with a different approach based on an ergodic theorem for additive functionals and on the semi-martingale structure of the additive part. This approach also allows us to deal with the setting in which the modulator of the MAP is either positive or null recurrent. The methodology additionally inspires a CLT-type result.
Key words: Markov additive processes, strong law of large numbers, central limit theorem.
MSC 2020: 60J80, 60E10.
Bibliography
\([1]\) S. Asmussen. Applied probability and queues, volume 51 of Applications of Mathematics (New York). Springer-Verlag, New York, second edition, 2003. Stochastic Modelling and Applied Probability.
\([2]\) S. Asmussen and H. Albrecher. Ruin probabilities, volume 14 of Advanced Series on Statistical Science & Applied Probability. World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, second edition, 2010.
\([3]\) A. Pacheco and N. U. Prabhu. Markov-Additive Processes of Arrivals, pages 103–140.
\([4]\) N. U. Prabhu. Stochastic storage processes, volume 15 of Applications of Mathematics (New York). Springer-Verlag, New York, second edition, 1998. Queues, insurance risk, dams, and data communication.
\([5]\) A. E. Kyprianou, V. Rivero, B. Sengul, and T. Yang. Entrance laws at the origin of self-similar Markov processes in high dimensions, 2019.
\([6]\) M. Çaglar and C. U. Yaran. Long time behaviour of general Markov additive processes, 2024. Preprint.
P41: Weak convergence of stochastic integrals with applications to SPDEs
Salim Boukfal (Universitat Autònoma de Barcelona (UAB))
stand: S13
In the present work, see \([1]\) and \([2]\), we provide sufficient conditions for sequences of stochastic processes defined by stochastic integrals of the form \(\int_{D} f(x,u)\theta_n(u)\,du\), \(x \in D\), where \(D\) is a rectangular region, to converge weakly to the stochastic integral with respect to Brownian motion, \(\int_{D} f(x,u)W(du)\), in the multidimensional parameter set case. We then apply these results to establish the weak convergence of solutions of the stochastic Poisson equation.
Bibliography
\([1]\) Xavier Bardina and Salim Boukfal. "Weak convergence of stochastic integrals." 2025. arXiv: 2504.00733 [math.PR].
\([2]\) Xavier Bardina and Salim Boukfal. "Weak convergence of stochastic integrals with applications to SPDEs." 2025. arXiv: 2504.08317 [math.PR].
P42: From Text to Trends: The Feasibility of LLMs in Quantitative Finance
Rahul Tak (Bucharest University of Economic Studies)
stand: S14
Predicting stock movements remains a formidable challenge due to the dynamic nature of financial markets and the impact of external factors such as investor sentiment and macroeconomic events. Conventional time-series statistical models frequently fail to capture the intricate nonlinear dependencies and latent market signals present in unstructured data. Recent advancements in Large Language Models (LLMs) offer new opportunities to enhance predictive accuracy by integrating textual data from financial news, social media, and earnings reports. In this study, we use the Chronos and LLaMA families of LLMs to evaluate stock price forecasts generated by combining historical price data with news sentiment. We compare the performance of LLaMA-3.3, LLaMA-3.1, Chronos, and a benchmark ARIMA model to assess their efficacy in capturing temporal patterns and textual signals. Our findings reveal that while LLaMA-3.3 effectively extracts sentiment cues, all models exhibit limitations in accurately predicting market direction. Furthermore, our analysis suggests that market sentiment influences stock returns, particularly in driving short-term return changes. By incorporating news sentiment into LLM prompts, we achieve improved forecasting performance compared to models relying solely on numerical data. These results underscore the importance of integrating structured (time series) and unstructured (sentiment) data for robust financial modeling. Our study suggests that LLM-driven sentiment analysis holds considerable promise for traders, analysts, and financial institutions seeking enhanced market insights. Future research should explore fine-tuning LLaMA models for domain-specific financial applications and improving interpretability in investment decision-making processes.
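For concreteness, a hypothetical prompt-construction helper in the spirit of the study's setup; the field names and wording are my own illustration, not the authors' actual prompts.

```python
# Hypothetical template combining recent prices with a news sentiment
# summary before querying an LLM; the model call itself is omitted.
def build_prompt(ticker, closes, sentiment_score, headlines):
    history = ", ".join(f"{p:.2f}" for p in closes)
    news = "; ".join(headlines)
    return (
        f"Recent daily closes for {ticker}: {history}.\n"
        f"Aggregated news sentiment (-1 to 1): {sentiment_score:+.2f}.\n"
        f"Headlines: {news}.\n"
        "Forecast the next 5 daily closes as a comma-separated list."
    )

print(build_prompt("XYZ", [101.2, 102.8, 101.9, 103.4],
                   0.35, ["XYZ beats earnings estimates",
                          "Analysts raise XYZ price target"]))
```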
Keywords: LLMs, LLaMA, Chronos, News sentiment, Time series forecasting, Stock price
Bibliography
\([1]\) Alamsyah, A., Ayu, S. P., & Rikumahu, B. (2019). "Exploring relationship between headline news sentiment and stock return" In 2019 7th International Conference on Information and Communication Technology (ICoICT). (pp. 1-6). IEEE.
\([2]\) Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., Letman, A., ... & Ganapathy, R. (2024). "The llama 3 herd of models." arXiv preprint, arXiv:2407.21783.
\([3]\) Hossain, A., & Nasser, M. (2011). "Comparison of the finite mixture of ARMA-GARCH, back propagation neural networks and support-vector machines in forecasting financial returns." Journal of Applied Statistics, 38(3), 533-551.
\([4]\) Ansari, A. F., Stella, L., Turkmen, C., Zhang, X., Mercado, P., Shen, H., ... & Wang, Y. (2024). "Chronos: Learning the language of time series." arXiv preprint, arXiv:2403.07815.
\([5]\) Xu, Y., Liang, C., Li, Y., & Huynh, T. L. (2022). "News sentiment and stock return: Evidence from managers’ news coverages." Finance Research Letters, 48, 102959.