What Is Analog Computing?
Introduction
Computing today is almost entirely digital. The vast informational catacombs of the internet, the algorithms that power AI, the screen you’re reading this on — all run on electronic circuits manipulating binary digits — 0 and 1, off and on. We live, it has been said, in the digital age.
But it’s not obvious why a system that operates using discrete chunks of information would be good at modeling our continuous, analog world. And indeed, for millennia humans have used analog computing devices to understand and predict the ebbs and flows of nature.
Among the earliest known analog computers is the Antikythera mechanism from ancient Greece, which used dozens of gears to predict eclipses and calculate the positions of the sun and moon. Slide rules, invented in the 17th century, executed the mathematical operations that would one day send men to the moon. (The abacus, however, doesn’t count as analog: Its discrete “counters” make it one of the earliest digital computers.) And in the late 19th century, William Thomson, who later became Lord Kelvin, designed a machine that used shafts, cranks and pulleys to model the influence of celestial bodies on the tides. Its successors were used decades later to plan the D-Day beach landings at Normandy.
What do these devices have in common? They are all physical systems set up to obey the same mathematical equations that govern the phenomena they are meant to model. Thomson’s tide-calculating computer, for example, was inspired by 19th-century mathematical advances that turned the question of predicting the tide into a complex trigonometric expression. Evaluating that expression by hand was both laborious and error-prone. The cranks and pulleys in Thomson’s machine were configured so that spinning them produced an output equal to the value of the expression to be evaluated.
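In modern notation, that trigonometric expression is a sum of cosine waves, one for each tidal constituent. The Python sketch below, using illustrative amplitudes and phases rather than data for any real harbor, evaluates the kind of sum that Thomson’s machine computed by spinning pulleys:

```python
import math

def tide_height(t_hours, mean_level, constituents):
    # Harmonic tide prediction: mean sea level plus a sum of cosine
    # terms, each with its own amplitude, angular speed and phase.
    return mean_level + sum(
        amplitude * math.cos(speed * t_hours + phase)
        for amplitude, speed, phase in constituents
    )

# Illustrative constituents (amplitude in meters, angular speed in
# radians per hour, phase in radians) -- not real harbor data.
constituents = [
    (1.20, 2 * math.pi / 12.42, 0.3),  # principal lunar semidiurnal term
    (0.45, 2 * math.pi / 12.00, 1.1),  # principal solar semidiurnal term
    (0.20, 2 * math.pi / 23.93, 2.0),  # a diurnal term
]

for hour in range(0, 25, 6):
    height = tide_height(hour, 2.0, constituents)
    print(f"t = {hour:2d} h   predicted height = {height:.2f} m")
```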
Analog computing reached its apotheosis in the differential analyzer, first built by Vannevar Bush at the Massachusetts Institute of Technology in 1931. The analyzer used a complicated series of gears and shafts driven by electric motors. It could solve a huge variety of differential equations — the kind of equation used to model physical phenomena. But to modify an equation, the machine had to be laboriously reconfigured by hand.
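Bush’s machine solved such equations by wiring mechanical integrators together in feedback loops. As a rough software analogue (the harmonic oscillator below is just a chosen example, and the real analyzer integrated continuously with wheel-and-disc integrators rather than in discrete steps), here is what chaining two integrators looks like in Python:

```python
# A rough software analogue of a differential analyzer setup for the
# simple harmonic oscillator x'' = -x: two integrators in a feedback
# loop. The real machine integrated continuously; here we approximate
# the same wiring with small discrete time steps.
dt = 0.001
x, v = 1.0, 0.0                 # initial position and velocity

for _ in range(int(10 / dt)):   # run until t = 10
    a = -x                      # the equation being modeled: x'' = -x
    v += a * dt                 # first integrator: acceleration -> velocity
    x += v * dt                 # second integrator: velocity -> position

print(f"x(10) ~= {x:.3f}   (exact answer: cos(10) ~= -0.839)")
```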
When modern digital computing began in the late 1930s, it was clunky, expensive and in many ways inferior to the analog machines of the day. But it had benefits: Digital computers were easier to program and often more accurate. And with the rise of the transistor and the subsequent advances fueled by Moore’s law, digital processing soon took over.
But as our digital world has exploded, its costs have as well. Every switch of a digital bit takes a smidgen of energy. And new artificial intelligence systems require huge amounts of computing power. To take just one example, news reports have revealed that Microsoft and OpenAI are planning a $100 billion data center that would suck about 5 gigawatts of power. That’s roughly the output of five nuclear reactors.
Analog computing offers an alternative. The neural networks that power AI systems make predictions by repeatedly blasting through a sequence of multiplication and addition operations. In an analog computer that uses electrical signals — not gears and pulleys — a current could pass through a circuit that uses carefully chosen resistors to model those operations, at a significant savings in power.
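One common scheme (an assumption here; the article doesn’t spell out a particular circuit) encodes each input as a voltage and each neural-network weight as a conductance: Ohm’s law performs the multiplications, and Kirchhoff’s current law performs the additions as the currents merge on a shared wire. A toy Python illustration of the arithmetic, not a circuit simulation:

```python
# Toy illustration of a resistive multiply-accumulate (not a circuit
# simulator). Inputs are encoded as voltages, weights as conductances
# (1 / resistance); each resistor contributes a current I = G * V, and
# the shared output wire carries their sum -- one multiply-and-add step
# of a neural network, done by physics rather than clocked logic.
voltages = [0.20, 0.50, 0.10]          # illustrative inputs, in volts
conductances = [0.010, 0.004, 0.020]   # illustrative weights, in siemens

total_current = sum(g * v for g, v in zip(conductances, voltages))
print(f"summed current = {total_current * 1000:.2f} mA")   # 6.00 mA
```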
The advantages of digital computing are real, but so are the drawbacks. Perhaps, by reaching back to computing’s past, researchers will be able to steer a sustainable path toward our computational future.
Correction: August 2, 2024: The $100 billion data center would require 5 gigawatts of power, not 5 megawatts.