Chaos Researchers Can Now Predict Perilous Points of No Return
Introduction
Predicting complex systems like the weather is famously difficult. But at least the weather’s governing equations don’t change from one day to the next. In contrast, certain complex systems can undergo “tipping point” transitions, suddenly changing their behavior dramatically and perhaps irreversibly, with little warning and potentially catastrophic consequences.
On long enough timescales, most real-world systems are like this. Consider the Gulf Stream in the North Atlantic, which transports warm equatorial water northward as part of an oceanic conveyor belt that helps regulate Earth’s climate. The equations that describe these circulating currents are slowly changing due to the influx of fresh water from melting ice sheets. So far the circulation has slowed gradually, but decades from now it may abruptly grind to a halt.
“Suppose everything is OK now,” said Ying-Cheng Lai, a physicist at Arizona State University. “How do you tell that it’s not going to be OK in the future?”
In a series of recent papers, researchers have shown that machine learning algorithms can predict tipping-point transitions in archetypal examples of such “nonstationary” systems, as well as features of their behavior after they’ve tipped. The surprisingly powerful new techniques could one day find applications in climate science, ecology, epidemiology and many other fields.
A surge of interest in the problem began four years ago with groundbreaking results from the group of Edward Ott, a leading chaos researcher at the University of Maryland. Ott’s team found that a type of machine learning algorithm called a recurrent neural network could predict the evolution of stationary chaotic systems (which don’t have tipping points) stunningly far into the future. The network relied only on records of the chaotic system’s past behavior — it had no information about the underlying equations.
The network’s learning approach differed from that of deep neural networks, which feed data through a tall stack of layers of artificial neurons for tasks like speech recognition and natural language processing. All neural networks learn by adjusting the strength of the connections between their neurons in response to training data. Ott and his collaborators used a less computationally expensive training method called reservoir computing, which adjusts only the connections in a single output layer of artificial neurons, leaving the rest of the randomly wired network untouched. Despite its simplicity, reservoir computing seems suited to the task of predicting chaotic evolution.
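To make the idea concrete, here is a minimal reservoir computer written in plain NumPy. It is an illustrative sketch, not the code used by Ott's group: a small, fixed network of randomly connected neurons is driven by a chaotic time series (a logistic map here), only the linear readout weights are fitted, and the trained network then forecasts by feeding its own output back in. The reservoir size, spectral radius and ridge penalty are arbitrary choices for the demonstration.

```python
# A minimal reservoir computer (echo state network) sketch, using only NumPy.
# Illustrative hyperparameters; not the setup from the papers discussed above.
import numpy as np

rng = np.random.default_rng(0)

# --- Generate a chaotic time series (logistic map, r = 3.9) ---
n_steps = 3000
x = np.empty(n_steps)
x[0] = 0.5
for t in range(n_steps - 1):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])

# --- Fixed, random reservoir: only the readout weights are ever trained ---
n_res = 300                                      # number of reservoir neurons
W_in = rng.uniform(-0.5, 0.5, size=n_res)        # input weights (fixed)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def run_reservoir(inputs):
    """Drive the reservoir with a 1-D input series; return all states."""
    states = np.zeros((len(inputs), n_res))
    r = np.zeros(n_res)
    for t, u in enumerate(inputs):
        r = np.tanh(W @ r + W_in * u)
        states[t] = r
    return states

# --- Train the linear readout by ridge regression (the only learned part) ---
train_len = 2000
states = run_reservoir(x[:train_len])
targets = x[1:train_len + 1]                     # next-step values
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                        states.T @ targets)

# --- Autonomous forecast: feed predictions back in as the next input ---
r = states[-1]
u = x[train_len]
preds = []
for _ in range(50):
    r = np.tanh(W @ r + W_in * u)
    u = r @ W_out
    preds.append(u)

print("first forecast steps:", np.round(preds[:5], 3))
print("true values:         ", np.round(x[train_len + 1:train_len + 6], 3))
```

The same readout-only training carries over to the richer setups described below; only the inputs fed to the reservoir change.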
Impressive as the 2018 results were, researchers suspected that machine learning’s data-driven approach wouldn’t be able to predict tipping-point transitions in nonstationary systems or infer how these systems would behave afterward. A neural network trains on past data about an evolving system, but “what’s happening in the future is evolving by different rules,” said Ott. It’s like trying to predict the outcome of a baseball game only to find that it’s morphed into a cricket match.
And yet, in the past two years, Ott’s group and several others have shown that reservoir computing works unexpectedly well for these systems too.
In a 2021 paper, Lai and collaborators gave their reservoir computing algorithm access to the slowly drifting value of a parameter that would eventually send a model system over a tipping point — but they provided no other information about the system’s governing equations. This situation pertains to a number of real-world scenarios: We know how the carbon dioxide concentration in the atmosphere is rising, for instance, but we don’t know all the ways that this variable will influence the climate. The team found that a neural network trained on past data could predict the value at which the system would eventually become unstable. Ott’s group published related results last year.
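The setup described by Lai's team can be sketched along the following lines, again as an illustration rather than their actual code: each training step carries two input channels, the observed state and the known, slowly drifting parameter, so the readout can learn how the dynamics depend on the drift. The toy system and all parameter values below are assumptions made for the example.

```python
# Illustrative sketch: feed both the observed state and the slowly drifting
# parameter to a reservoir, so the readout can learn the parameter dependence.
import numpy as np

rng = np.random.default_rng(1)

# Logistic map whose parameter drifts slowly upward during the recording.
n_steps = 4000
r_values = np.linspace(3.5, 3.82, n_steps)    # known, slowly drifting parameter
x = np.empty(n_steps)
x[0] = 0.5
for t in range(n_steps - 1):
    x[t + 1] = r_values[t] * x[t] * (1.0 - x[t])

# Two input channels per time step: the observed state and the drifting parameter.
inputs = np.column_stack([x, r_values])       # shape (n_steps, 2)

# Fixed random reservoir; only the linear readout is trained, as before.
n_res = 300
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 2))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

states = np.zeros((n_steps, n_res))
r_state = np.zeros(n_res)
for t in range(n_steps):
    r_state = np.tanh(W @ r_state + W_in @ inputs[t])
    states[t] = r_state

# Ridge-regression readout predicting the next state. To probe a future tipping
# point, one would then run the trained network forward while continuing to
# extrapolate the parameter channel past the end of the training record.
targets = x[1:]
ridge = 1e-6
W_out = np.linalg.solve(states[:-1].T @ states[:-1] + ridge * np.eye(n_res),
                        states[:-1].T @ targets)
print("readout trained on", states[:-1].shape[0], "time steps")
```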
In a new paper, posted online in July and now undergoing peer review, Ott and his graduate student Dhruvit Patel explored the predictive power of neural networks that see only a system’s behavior and know nothing about the underlying parameter responsible for driving a tipping-point transition. They fed their neural network data recorded from a simulated system while the hidden parameter was drifting, unbeknownst to the network. Remarkably, in many cases the algorithm could both predict the onset of tipping and provide a probability distribution of possible post-tipping-point behaviors.
Surprisingly, the network performed best when trained on noisy data. Noise is ubiquitous in real-world systems, but it ordinarily hinders prediction. Here it helped, apparently by exposing the algorithm to a wider range of the system’s possible behavior. To take advantage of this counterintuitive result, Patel and Ott tweaked their reservoir computing procedure to enable the neural network to recognize noise as well as the system’s average behavior. “That’s going to be important for any approach that’s trying to extrapolate” the behavior of nonstationary systems, said Michael Graham, a fluid dynamicist at the University of Wisconsin, Madison.
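One simple way to mimic this effect in a toy setting is to inject small random kicks while generating the training data, so the trajectories visit a wider swath of state space. The snippet below is only an illustration; the noise-handling scheme Patel and Ott actually used may differ.

```python
# Illustrative only: add small dynamical noise while generating training data.
import numpy as np

rng = np.random.default_rng(2)
n_steps, noise_level = 4000, 0.002

x = np.empty(n_steps)
x[0] = 0.5
for t in range(n_steps - 1):
    # Deterministic logistic-map update plus a small random kick at every step.
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t]) + noise_level * rng.normal()
    x[t + 1] = np.clip(x[t + 1], 0.0, 1.0)   # keep the map in its valid range

# The noisy series visits states the noise-free trajectory would rarely reach,
# which is what appears to help the trained network extrapolate.
```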
Patel and Ott also considered a class of tipping points that mark an especially stark change in behavior.
Suppose the state of a system is plotted as a point moving around in an abstract space of all its possible states. Systems that undergo regular cycles would trace out a repeating orbit in the space, while chaotic evolution would look like a tangled mess. A tipping point might cause an orbit to spiral out of control but remain in the same part of the plot, or it might cause initially chaotic motion to spill out into a larger region. In these cases a neural network may find hints of the system’s fate encoded in its past exploration of relevant regions of the state space.
More challenging are transitions in which a system is suddenly expelled from one region and its later evolution unfolds in a distant region. “Not only are the dynamics changing, but now you’re wandering into territory you’ve never ever seen,” explained Patel. Such transitions are typically “hysteretic,” meaning they’re not easily reversed — even if, say, a slowly increasing parameter that caused the transition is nudged down again. This kind of hysteresis is common: Kill one too many top predators in an ecosystem, for instance, and the altered dynamics might cause the prey population to suddenly explode; add a predator back again and the prey population stays elevated.
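A standard toy model makes this hysteresis concrete (the model below is an illustrative example, not one from the papers discussed): in the overdamped system dx/dt = p + x - x^3, slowly raising the parameter p eventually tips the state from the lower stable branch to the upper one, but lowering p back past the original tipping value does not undo the jump.

```python
# A toy hysteretic tipping point: dx/dt = p + x - x**3 has two stable states
# over a range of p, so the jump up and the jump back occur at different p.
import numpy as np

def sweep(p_values, x0, dt=0.01, substeps=200):
    """Slowly vary p, letting x relax at each value; return x after each p."""
    x = x0
    xs = []
    for p in p_values:
        for _ in range(substeps):
            x += dt * (p + x - x**3)
        xs.append(x)
    return np.array(xs)

p_up = np.linspace(-1.0, 1.0, 400)
x_up = sweep(p_up, x0=-1.0)                  # start in the lower state, sweep p up
x_down = sweep(p_up[::-1], x0=x_up[-1])      # then sweep p back down

jump_up = p_up[np.argmax(np.diff(x_up))]              # p where the upward jump occurs
jump_down = p_up[::-1][np.argmin(np.diff(x_down))]    # p where it finally falls back
print(f"tips upward near p = {jump_up:.2f}, returns only near p = {jump_down:.2f}")
```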
When trained on data from a system exhibiting a hysteretic transition, Patel and Ott’s reservoir computing algorithm was able to predict an imminent tipping point, but it got the timing wrong and failed to predict the system’s subsequent behavior. The researchers then tried a hybrid approach combining machine learning and conventional knowledge-based modeling of the system. They found that the hybrid algorithm exceeded the sum of its parts: It could predict statistical properties of future behavior even when the knowledge-based model had incorrect parameter values and therefore failed on its own.
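One way such a hybrid can be wired up is sketched below, purely as an illustration of the general idea rather than the architecture in Patel and Ott's paper: an imperfect knowledge-based model (here a logistic map with the wrong parameter value) supplies a one-step prediction, and the trained readout combines that prediction with the reservoir's state, so the data can correct the model's systematic error.

```python
# Compact hybrid sketch: readout features = reservoir state + imperfect-model
# prediction. Illustrative only; the published architecture may differ.
import numpy as np

rng = np.random.default_rng(3)

# "True" system: logistic map with r = 3.9. The knowledge-based model assumes r = 3.7.
n_steps = 3000
x = np.empty(n_steps)
x[0] = 0.5
for t in range(n_steps - 1):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])

def imperfect_model(u):
    return 3.7 * u * (1.0 - u)    # wrong parameter value, as in the scenario above

# Fixed random reservoir driven by the observations.
n_res = 300
W_in = rng.uniform(-0.5, 0.5, size=n_res)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

states = np.zeros((n_steps, n_res))
r = np.zeros(n_res)
for t in range(n_steps):
    r = np.tanh(W @ r + W_in * x[t])
    states[t] = r

# Readout features: reservoir state plus the imperfect model's one-step prediction.
features = np.column_stack([states[:-1], imperfect_model(x[:-1])])
targets = x[1:]
ridge = 1e-6
W_out = np.linalg.solve(features.T @ features + ridge * np.eye(features.shape[1]),
                        features.T @ targets)

# In-sample comparison, just to show how the combination corrects the model.
hybrid_pred = features @ W_out
model_only = imperfect_model(x[:-1])
print("model-only in-sample RMS error:", np.sqrt(np.mean((model_only - targets) ** 2)).round(4))
print("hybrid in-sample RMS error:    ", np.sqrt(np.mean((hybrid_pred - targets) ** 2)).round(4))
```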
Soon Hoe Lim, a machine learning researcher at the Nordic Institute for Theoretical Physics in Stockholm who has studied the short-term behavior of nonstationary systems, hopes the recent work will “serve as a catalyst for further studies,” including comparisons between the performance of reservoir computing and that of deep learning algorithms. If reservoir computing can hold its own against more resource-intensive methods, that would bode well for the prospect of studying tipping points in large, complex systems like ecosystems and Earth’s climate.
“There’s a lot to do in this field,” Ott said. “It’s really wide open.”