CS19: Reinforcement Models: Elephant Random Walk
Organizer: Krishanu Maulik (Indian Statistical Institute)
Step Reinforced Random Walks with Regularly Varying Memory
Aritra Majumdar
The Step Reinforced Random Walk (SRRW) is a generalization of the usual random walk with memory. At every epoch, the walker chooses a past step uniformly at random and repeats it with probability \(p\), or innovates with probability \(1-p\), independently of the past. Here, \(p\) is called the memory parameter. The SRRW has attracted great interest in the last few decades since, unlike the usual walk, it exhibits a phase transition from diffusive to superdiffusive behavior depending on \(p\). Bertenghi and Laulin (2024) studied a variant of the SRRW in which the past step is selected with probability proportional to a weight sequence that is regularly varying of index \(\gamma>-1\). The innovation steps were assumed to be independent and identically distributed with finite variance. The law of large numbers and the fluctuations around the large number limit were obtained for certain values of \(p\in \left(\frac{\gamma}{\gamma+1},\frac{\gamma+1/2}{\gamma+1}\right)\) and \(\gamma\geq 0\). In this talk, we provide a complete picture of the limiting behavior of the random walk: the law of large numbers and functional limit theorems for all \(p\in[0,1]\) and \(\gamma>-1\). We discuss the phase transition emerging in such a walk, with novel superdiffusive scalings in the critical regime. Along with the usual \(\sqrt{n\log n}\) scale observed for the SRRW, a variety of different scalings (e.g. \(\sqrt{n(\log n)^{\frac{1}{2}}}\), \(\sqrt{n(\log n)^{-2}}\), \(\sqrt{n\log n\log\log n}\), etc.) appear in the generalized walk. Further, depending on the scaling exponent, one can have both almost sure and weak functional limit theorems in the critical regime, in contrast to the usual SRRW where only weak limits are observed.
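For concreteness, the following is a minimal simulation sketch of the dynamics described above. The specific weight choice \(w_j = j^{\gamma}\) and the standard Gaussian innovation distribution are illustrative assumptions, not prescriptions from the talk.

```python
import numpy as np

def srrw_regular_memory(n, p, gamma, rng=None):
    """Sketch of a step reinforced random walk with regularly varying memory:
    with probability p a past step is repeated, chosen with probability
    proportional to the weight w_j = j**gamma; with probability 1 - p an
    i.i.d. innovation is taken (standard Gaussian here, an illustrative choice)."""
    rng = np.random.default_rng() if rng is None else rng
    steps = np.empty(n)
    steps[0] = rng.standard_normal()              # the first step is always an innovation
    weights = np.arange(1, n + 1, dtype=float) ** gamma
    for k in range(1, n):
        if rng.random() < p:
            w = weights[:k]                       # weights of the k available past steps
            j = rng.choice(k, p=w / w.sum())      # pick a past index proportionally to w_j
            steps[k] = steps[j]                   # repeat that step
        else:
            steps[k] = rng.standard_normal()      # innovate, independently of the past
    return np.cumsum(steps)                        # positions S_1, ..., S_n

# Example: one path with strong memory and linear weights
path = srrw_regular_memory(10_000, p=0.9, gamma=1.0)
print(path[-1] / np.sqrt(10_000))
```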
The talk is based on joint work with Krishanu Maulik \[2\].
\([1]\) Bertenghi, M. and Laulin, L. (2024). A universal scaling limit for diffusive amnesic step-reinforced random walks. ALEA Lat. Am. J. Probab. Math. Stat., 22(1): 517-536.
\([2]\) Majumdar, A. and Maulik, K. (2025). Limit theorems for step reinforced random walks with regularly varying memory. arXiv preprint arXiv:2505.05921.
Some results for variations of the Elephant random walk
Ulrich Stadtmüller
Elephant random walks are random walks with steps \(\pm1\) depending on the whole past. In this talk we will discuss limit theorems for various variations of such walks; e.g., we will consider walks where the Elephant has a restricted memory only, remembering, e.g., only the last few or the first few steps, or walks where the Elephant may stop from time to time. For some results see, e.g., \([1]\).
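A minimal sketch of one such restricted-memory variant is given below. The window parameter \(m\), the choice between "first" and "last" memory, and the fair first step are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def erw_restricted_memory(n, p, memory="last", m=1, rng=None):
    """Sketch of an elephant random walk with +/-1 steps where the elephant
    remembers only part of its past: either the first m steps ("first") or the
    most recent m steps ("last").  A remembered step is chosen uniformly and
    repeated with probability p, reversed otherwise (illustrative assumptions)."""
    rng = np.random.default_rng() if rng is None else rng
    steps = np.empty(n, dtype=int)
    steps[0] = 1 if rng.random() < 0.5 else -1     # first step: fair coin (assumption)
    for k in range(1, n):
        if memory == "first":
            window = steps[:min(k, m)]             # only the earliest m steps are recalled
        else:
            window = steps[max(0, k - m):k]        # only the most recent m steps are recalled
        recalled = rng.choice(window)              # uniform choice among the remembered steps
        steps[k] = recalled if rng.random() < p else -recalled
    return np.cumsum(steps)

# Example: elephant remembering only its very first step
print(erw_restricted_memory(10_000, p=0.75, memory="first", m=1)[-1])
```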
Bibliography
\([1]\) Gut, A. and Stadtmüller, U. (2023). Elephant random walks: a review. Ann. Univ. Sci. Budapest, Sect. Comp., 54: 171-198.
Elephant Random Walk with multiple extractions
Simone Franchini
Consider a generalized Elephant Random Walk in which the step is chosen by selecting \(k\) previous steps, with \(k\) odd, then going in the majority direction with probability \(p\) and in the opposite direction otherwise. In the \(k=1\) case, the model is the original one and can be solved exactly by analogy with Friedman's urn. However, the analogy already breaks down for \(k>2\). In this work we show how to treat the model for each \(k\) by analogy with the more general urn model of Hill, Lane and Sudderth. Interestingly, for \(k>2\) we find a critical dependence on the initial conditions above certain values of the memory parameter \(p\), and regions of convergence with entropy that is sub-linear in the number of steps.
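The extraction rule can be sketched as follows. Sampling the \(k\) past steps uniformly with replacement and starting from a fair first step are illustrative assumptions, not specified in the abstract.

```python
import numpy as np

def erw_multiple_extractions(n, p, k=3, rng=None):
    """Sketch of the elephant random walk with multiple extractions: at each
    epoch, k past steps (k odd) are extracted and the walker follows their
    majority direction with probability p, the opposite direction otherwise.
    Uniform sampling with replacement and a fair first step are assumptions."""
    rng = np.random.default_rng() if rng is None else rng
    steps = np.empty(n, dtype=int)
    steps[0] = 1 if rng.random() < 0.5 else -1
    for t in range(1, n):
        extracted = rng.choice(steps[:t], size=k, replace=True)  # k past steps
        majority = 1 if extracted.sum() > 0 else -1              # k odd, so no ties
        steps[t] = majority if rng.random() < p else -majority
    return np.cumsum(steps)

# Example: k = 3 extractions with strong memory
print(erw_multiple_extractions(10_000, p=0.9, k=3)[-1] / 10_000)
```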