
Researchers Discover a More Flexible Approach to Machine Learning

“Liquid” neural nets, based on a worm’s nervous system, can transform their underlying algorithms on the fly, giving them unprecedented speed and adaptability.
[Image: a worm superimposed with lines and nodes representing neural networks. Credit: Kristina Armitage/Quanta Magazine; source: Frank Fox/Science Source]

Introduction

Artificial intelligence researchers have celebrated a string of successes with neural networks, computer programs that roughly mimic how our brains are organized. But despite rapid progress, neural networks remain relatively inflexible, with little ability to change on the fly or adjust to unfamiliar circumstances.

In 2020, two researchers at the Massachusetts Institute of Technology led a team that introduced a new kind of neural network based on real-life intelligence, but not our own. Instead, they took inspiration from the tiny roundworm Caenorhabditis elegans to produce what they called liquid neural networks. After a breakthrough last year, the novel networks may now be versatile enough to supplant their traditional counterparts for certain applications.

Liquid neural networks offer “an elegant and compact alternative,” said Ken Goldberg, a roboticist at the University of California, Berkeley. He added that experiments are already showing that these networks can run faster and more accurately than other so-called continuous-time neural networks, which model systems that vary over time.

Ramin Hasani and Mathias Lechner, the driving forces behind the new design, realized years ago that C. elegans could be an ideal organism to use for figuring out how to make resilient neural networks that can accommodate surprise. The millimeter-long bottom feeder is among the few creatures with a fully mapped-out nervous system, and it is capable of a range of advanced behaviors: moving, finding food, sleeping, mating and even learning from experience. “It lives in the real world, where change is always happening, and it can perform well under almost any conditions thrown at it,” Lechner said.

Respect for the lowly worm led him and Hasani to their new liquid networks, where each neuron is governed by an equation that predicts its behavior over time. And just as neurons are linked to each other, these equations depend on each other. The network essentially solves this entire ensemble of linked equations, allowing it to characterize the state of the system at any given moment — a departure from traditional neural networks, which only give the results at particular moments in time.

“[They] can only tell you what’s happening at one, two or three seconds,” Lechner said. “But a continuous-time model like ours can describe what’s going on at 0.53 seconds or 2.14 seconds or any other time you pick.”
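To make the continuous-time idea concrete, here is a minimal sketch, not the authors' model: a handful of neurons whose coupled differential equations are integrated numerically, producing a solution that can be read out at any instant. The dynamics, weights and time constant below are invented for illustration.

```python
# A minimal sketch of the continuous-time idea, not the authors' model:
# each neuron's state follows a differential equation, the equations are
# coupled, and the solved system can be read out at any instant.
import numpy as np
from scipy.integrate import solve_ivp

def coupled_dynamics(t, x, W, tau):
    """Toy linked equations: each neuron decays toward a nonlinear
    mix of the others' states (W and tau are invented parameters)."""
    return (-x + np.tanh(W @ x)) / tau

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(4, 4))  # hypothetical coupling weights
x0 = rng.normal(size=4)                 # initial neuron states

# dense_output=True returns a continuous interpolant of the solution,
# so the state can be queried at 0.53 s, 2.14 s, or any other time.
sol = solve_ivp(coupled_dynamics, (0.0, 3.0), x0, args=(W, 1.0),
                dense_output=True)
print(sol.sol(0.53))  # network state at t = 0.53 s
print(sol.sol(2.14))  # network state at t = 2.14 s
```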

Liquid networks also differ in how they treat synapses, the connections between artificial neurons. The strength of those connections in a standard neural network can be expressed by a single number, its weight. In liquid networks, the exchange of signals between neurons is a probabilistic process governed by a “nonlinear” function, meaning that responses to inputs are not always proportional. A doubling of the input, for instance, could lead to a much bigger or smaller shift in the output. This built-in variability is why the networks are called “liquid.” The way a neuron reacts can vary depending on the input it receives.
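A toy example makes the contrast clear. In the hypothetical synapse below, the response is a saturating function of the presynaptic activity rather than a simple product with a weight; the weight and bias values are arbitrary.

```python
# A hedged illustration of a nonlinear synapse: the response saturates,
# so doubling the input does not double the output. Weight and bias
# values are arbitrary.
import numpy as np

def synapse(presynaptic, weight=2.0, bias=-1.0):
    """Hypothetical saturating connection: a sigmoid of the presynaptic
    activity instead of a plain weight * input product."""
    return 1.0 / (1.0 + np.exp(-(weight * presynaptic + bias)))

for x in (0.25, 0.5, 1.0, 2.0):
    print(f"input {x:4.2f} -> output {synapse(x):.3f}")
# Each doubling of the input shifts the output by a different amount,
# unlike a fixed-weight linear connection.
```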

[Photo: Mathias Lechner (left) and Ramin Hasani, who envisioned a new kind of flexible neural network based on the nervous system of the Caenorhabditis elegans worm. Credit: Courtesy of Ramin Hasani]

While the algorithms at the heart of traditional networks are set during training, when these systems are fed reams of data to calibrate the best values for their weights, liquid neural nets are more adaptable. “They’re able to change their underlying equations based on the input they observe,” specifically changing how quickly neurons respond, said Daniela Rus, the director of MIT’s Computer Science and Artificial Intelligence Laboratory.
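In code, that adaptability can be sketched as an input-dependent "gate" that feeds into a neuron's effective decay rate, loosely following the general shape of the team's liquid time-constant equation. Everything below, from the gate's parameters to the integration scheme, is simplified for illustration.

```python
# A simplified, single-neuron rendition of the liquid time-constant idea:
# an input-dependent gate feeds into the neuron's effective decay rate,
# so stronger inputs make the neuron respond faster. Parameter values
# and the integration scheme are invented for illustration.
import numpy as np

def gate(x, I, w=1.0, w_in=3.0, b=0.0):
    """Hypothetical input-dependent nonlinearity."""
    return 1.0 / (1.0 + np.exp(-(w * x + w_in * I + b)))

def dxdt(x, I, tau=1.0, A=1.0):
    g = gate(x, I)
    # The effective decay rate (1/tau + g) grows with the input,
    # which is what "changing how quickly neurons respond" means here.
    return -(1.0 / tau + g) * x + g * A

# Crude Euler integration, for illustration only.
x, dt = 0.0, 0.01
for step in range(300):
    I = 1.0 if step >= 100 else 0.0  # input switches on at t = 1 s
    x += dt * dxdt(x, I)
print(f"state after the input turns on: {x:.3f}")
```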

One early test of this ability involved steering an autonomous car. A conventional neural network could only analyze visual data from the car’s camera at fixed intervals. The liquid network, consisting of 19 neurons and 253 synapses (making it minuscule by the standards of machine learning), could be much more responsive. “Our model can sample more frequently, for instance when the road is twisty,” said Rus, a co-author of this and several other papers on liquid networks.

The model successfully kept the car on track, but it had one flaw, Lechner said: “It was really slow.” The problem stemmed from the nonlinear equations representing the synapses and neurons — equations that usually cannot be solved without repeated calculations on a computer, which goes through multiple iterations before eventually converging on a solution. This job is typically delegated to dedicated software packages called solvers, which would need to be applied separately to every synapse and neuron.
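A miniature version of that bottleneck: an implicit solver step is itself an iterative search for a self-consistent value, and a full network repeats that inner loop for every neuron and synapse. The dynamics and step sizes below are illustrative only, not the authors' code.

```python
# The bottleneck in miniature: an implicit solver step is itself an
# iterative search for a self-consistent value. All dynamics and step
# sizes here are illustrative, not the authors' code.
import numpy as np

def f(x):
    """Toy nonlinear neuron dynamics."""
    return -x + np.tanh(2.0 * x + 1.0)

def backward_euler_step(x_old, dt, max_iters=50, tol=1e-10):
    """Solve x_new = x_old + dt * f(x_new) by fixed-point iteration."""
    x_new = x_old
    for i in range(max_iters):
        x_next = x_old + dt * f(x_new)
        if abs(x_next - x_new) < tol:
            return x_next, i + 1  # converged after i + 1 inner iterations
        x_new = x_next
    return x_new, max_iters

x, total_inner = 0.0, 0
for _ in range(100):  # 100 time steps
    x, n = backward_euler_step(x, dt=0.1)
    total_inner += n
print(f"final state {x:.4f} after {total_inner} inner iterations")
# A full network repeats this inner loop for every neuron and synapse
# at every step, which is the cost the closed-form solution avoids.
```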

In a paper last year, the team revealed a new liquid neural network that got around that bottleneck. This network relied on the same type of equations, but the key advance was a discovery by Hasani that these equations didn’t need to be solved through arduous computer calculations. Instead, the network could function using an almost exact, or “closed-form,” solution that could, in principle, be worked out with pencil and paper. Typically, these nonlinear equations do not have closed-form solutions, but Hasani hit upon an approximate solution that was good enough to use.

“Having a closed-form solution means you have an equation for which you can plug in the values for its parameters and do the basic math, and you get an answer,” Rus said. “You get an answer in a single shot,” rather than letting a computer grind away until deciding it’s close enough. That cuts computational time and energy, speeding up the process considerably.
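The flavor of that single-shot evaluation can be captured with a simplified stand-in: if the gate in the earlier sketch is held fixed over a step, the neuron's equation has an exact solution that can be evaluated directly. The published closed-form solution handles the fully input-dependent case approximately; this toy version only conveys the idea.

```python
# The single-shot idea with a simplified stand-in: if the gate from the
# earlier sketch is held fixed, the neuron equation
#   dx/dt = -(1/tau + g) * x + g * A
# has an exact solution. Plug in the parameters, do the math, get the
# answer, with no solver loop. (The published closed-form solution
# handles the input-dependent gate approximately; this toy version
# only conveys the flavor.)
import numpy as np

def closed_form_state(x0, t, g, tau=1.0, A=1.0):
    rate = 1.0 / tau + g
    x_inf = g * A / rate  # steady state the neuron decays toward
    return x_inf + (x0 - x_inf) * np.exp(-rate * t)

# One evaluation per query, at any time, versus many solver iterations.
print(closed_form_state(x0=0.0, t=0.53, g=0.7))
print(closed_form_state(x0=0.0, t=2.14, g=0.7))
```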

“Their method is beating the competition by several orders of magnitude without sacrificing accuracy,” said Sayan Mitra, a computer scientist at the University of Illinois, Urbana-Champaign.

As well as being speedier, Hasani said, their newest networks are also unusually stable, meaning the system can handle enormous inputs without going haywire. “The main contribution here is that stability and other nice properties are baked into these systems by their sheer structure,” said Sriram Sankaranarayanan, a computer scientist at the University of Colorado, Boulder. Liquid networks seem to operate in what he called “the sweet spot: They are complex enough to allow interesting things to happen, but not so complex as to lead to chaotic behavior.”

At the moment, the MIT group is testing its latest network on an autonomous aerial drone. Though the drone was trained to navigate in a forest, the team has moved it to the urban environment of Cambridge to see how it handles novel conditions. Lechner called the preliminary results encouraging.

Beyond refining the current model, the team is also working to improve their network’s architecture. The next step, Lechner said, “is to figure out how many, or how few, neurons we actually need to perform a given task.” The group also wants to devise an optimal way of connecting neurons. Currently, every neuron links to every other neuron, but that’s not how it works in C. elegans, where synaptic connections are more selective. Through further studies of the roundworm’s wiring system, they hope to determine which neurons in their system should be coupled together.

Apart from applications like autonomous driving and flight, liquid networks seem well suited to the analysis of electric power grids, financial transactions, weather and other phenomena that fluctuate over time. In addition, Hasani said, the latest version of liquid networks can be used “to perform brain activity simulations at a scale not realizable before.”

Mitra is particularly intrigued by this possibility. “In a way, it’s kind of poetic, showing that this research may be coming full circle,” he said. “Neural networks are developing to the point that the very ideas we’ve drawn from nature may soon help us understand nature better.”
