Jack Griffiths (University of Sheffield)
Generative Modelling for Physical Reservoir Computing
Artificial intelligence systems consume large amounts of energy, raising environmental concerns and limiting their use in portable, edge-AI devices [1]. Physical reservoir computing offers a low-energy alternative by exploiting the natural dynamics of, for example, "spintronic" devices [2]. In this seminar, I will introduce reservoir computing and its physical implementations [3], which process time-varying signals from dynamical systems by mapping them into high-dimensional, highly non-linear representations that capture temporal dynamics and long input histories, enabling tasks such as time-series prediction and speech recognition. Single reservoir devices, however, empirically struggle to provide both long-term memory and predictive power. By connecting different devices (each with its own timescales, memory capacity, or non-linear response) as nodes of a neural network, we hypothesise that richer dynamics can be created, achieving performance beyond what any individual device can offer. Designing such networks, however, requires accurate, differentiable models of the underlying physical systems, and I will discuss how we build these "digital twins" using diffusion models [4,5], originally developed for image generation. The principles of diffusion modelling will be introduced generally, as a tool for modelling any experimental data or dynamical system.
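The core idea of reservoir computing can be illustrated with a minimal echo state network, where a fixed random recurrent network stands in for the physical device and only a linear readout is trained. This is a hedged toy sketch (the network sizes, spectral-radius scaling, and sine-wave prediction task are illustrative choices, not details from the seminar):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reservoir: a fixed random recurrent network plays the role of the
# physical device; only the linear readout below is trained.
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 for fading memory

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        # Non-linear, high-dimensional state carrying a history of the input.
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Illustrative task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)
X = run_reservoir(u[:-1])   # reservoir states
y = u[1:]                   # targets: next input value

# Ridge-regression readout -- the only trained part of the system.
reg = 1e-6
W_out = np.linalg.solve(X.T @ X + reg * np.eye(n_res), X.T @ y)
pred = X @ W_out
nmse = np.mean((pred - y) ** 2) / np.var(y)
print("train NMSE:", nmse)
```

In a physical reservoir, the random recurrent update would be replaced by the device's own dynamics; the appeal is that training still reduces to fitting the cheap linear readout.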
[1] Grollier et al., Neuromorphic Spintronics. Nat. Electron. 3 (2020).
[2] Stenning et al., Neuromorphic overparameterisation and few-shot learning in multilayer physical neural networks. Nat. Commun. 15 (2024).
[3] Tanaka et al., Recent advances in physical reservoir computing: A review. Neural Netw. 115 (2019).
[4] Ho et al., Denoising Diffusion Probabilistic Models. Adv. Neural Inf. Process. Syst. 33 (NeurIPS 2020).
[5] Song et al., Score-Based Generative Modeling through Stochastic Differential Equations. arXiv:2011.13456 (2020).