We have witnessed a co-evolution between simulations and the hardware on which they run. Better hardware enables better simulations, which in turn demand that we make better hardware. They may do this through market pressures that register demand for faster, more complex and ever-more essential models that only semiconductor innovation can provide, or through the creation of new possibilities that would otherwise be inconceivable, as with quantum computers whose activities may only be understood through approximation.

As the philosopher of cognition Andy Clark and others have argued, knowledge is acquired and sorted spatially. “Thought is movement,” writes Jeff Hawkins in A Thousand Brains. If thought can be understood as patterns of interaction between cells constantly shifting inside a plastic frame of galaxy-like complexity, it stands to reason that the computing devices best able to calculate the relationships between moving objects and the physics that governs their movement would play a crucial role in the artificial expansion of intelligence.

In order to provide an overview, we may plot a chronology beginning with Kriegsspiel in 1811 – acknowledging that the use of modeling likely far preceded it – leading up to the “simulation crisis” of the early 2020s: an inflection point defined by the Covid-19 pandemic, the 2021 cryptocurrency bull market, factory bottlenecks in China and elsewhere, and early flourishings of AGI. During this period we see the evolution of modeling, prediction and actuation using computer simulations. Seemingly distinct fields overlap and open pathways for one another, and a new interdisciplinary practice is embedded across the board.

The application of GPU computing across the full breadth of scientific disciplines – as opposed to in studies of the brain, AI or computation itself – was relatively slow to take hold. One reason for this was decades of legacy code that would require laborious ports and rewrites, making it harder to introduce new hardware. Another was the shortage of GPUs themselves, though as supply normalized by the mid-2020s, the co-design of algorithms and hardware (with increased integration of machine learning) began to reshape research. While the atomic limit of transistors had been demonstrated by placing a single phosphorus atom within a silicon crystal at liquid helium temperatures, engineers predicted that it would be crises in the machinery of chip production itself, rather than the physical limits of silicon, that would eventually bring about the end of Moore’s Law.

From 2025 onwards, high-numerical-aperture extreme-ultraviolet (EUV) photolithography became the means by which chip engineers were able to continue shrinking the geometry of transistor gates toward the critical lower limit of 0.25 nanometers.32 Over a period of three decades, engineers had reduced the wavelength of the light used in lithography by well over an order of magnitude: first from 365 nm, generated using a mercury lamp, to 248 nm, via a krypton-fluoride laser in the late 1990s, and then to 193 nm, from an argon-fluoride laser, at the beginning of the 21st century. Immersion lithography – which kept the 193 nm wavelength but used water to significantly enlarge the numerical aperture (NA) – pushed resolution further still, but it was the 20-year development of EUV which brought an unparalleled reduction to just 13.5 nm by accessing an entirely new way of generating light.33
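To see why each of these jumps mattered, consider the standard Rayleigh criterion for lithographic resolution, in which the minimum printable feature size scales as k1 · λ / NA. The sketch below applies it to the generations named above; the numerical apertures and the k1 factor are illustrative assumptions, not figures taken from this essay.

```python
# Minimal sketch of the Rayleigh criterion for lithographic resolution.
# Wavelengths follow the generations named in the text; NA and k1 values
# are illustrative assumptions, not figures cited in the essay.

def min_feature_nm(wavelength_nm: float, na: float, k1: float = 0.3) -> float:
    """Approximate printable feature size: k1 * wavelength / numerical aperture."""
    return k1 * wavelength_nm / na

generations = [
    ("i-line (mercury lamp)", 365.0, 0.60),
    ("KrF excimer laser",     248.0, 0.80),
    ("ArF excimer laser",     193.0, 0.93),
    ("ArF immersion (water)", 193.0, 1.35),  # water lifts the NA above 1
    ("EUV",                    13.5, 0.33),
    ("high-NA EUV",            13.5, 0.55),
]

for name, wavelength, na in generations:
    print(f"{name:>22}: ~{min_feature_nm(wavelength, na):6.1f} nm features")
```

Under these assumed values the same k1 yields features of roughly 180 nm for the mercury lamp and well under 10 nm for high-NA EUV, which is the gap the passage describes.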

The issue is not the theoretical lower limit of electron gates – an on-off gate built from a single silver atom could potentially work – but rather the complex interplay between specialized elements as well as factors such as heat, quantum tunneling and interference from the background radiation that constantly bombards Earth. Moore’s Law is not a law of nature but an aspiration. It requires an elevated level of R&D investment and relies on some of the most sophisticated experimental technologies ever devised, and creative minds to devise them. Yet it has limits.

Along with continued improvements in silicon computers, a speciation across alternate substrates carried forward the capacity to compute. Each of these in turn enabled new types of simulation. This “Cambrian explosion of hardware” advanced most immediately in the service of brain emulation and biological digital twins, which used neuromorphic chip architectures and evolutionary algorithms to more accurately mirror the biological phenomena they were conceived to model. Other areas in which breakthroughs occurred included inverse design and agent-based modeling, which proved well suited to spectral photonics: the parallelizing of computations across different wavelengths of light.

Each breakthrough – in quantum computation, analog inference or cortical processing using biological neurons – involved some degree of simulation across research, development or manufacturing. The best-suited hardware, once released into the world, found applications nobody could have predicted, including deep-mantle geothermal energy, advanced gene therapies and ecosystem planning. The role of simulations was so integral that even in common parlance the word ceased to be synonymous with the “imitations” of western literature and philosophy and came instead to denote a tool that replicated the basic processes by which we come to know the world.

Autopoiesis refers to a “network of processes that recursively depend on each other for their own generation and realization.” An autopoietic system, then, develops the “capacity to maintain [its] identity in spite of fluctuations and perturbations coming from without,” but also, it must be said, from within. This system is never static. For as long as it persists it must be constantly remade, “maintaining the physiochemical and information processing capacities that constitute its own ‘going-on’.” The integration of multi-scalar simulations capable of describing, predicting and projecting causal, complex, collective, emergent, and open-ended phenomena was integral to planetary intelligence. Among other things, it provided the technical means to unify the aggregate activities of individual actors and their planetary-scale consequences. It was not long, however, before the consequences of such simulations made themselves apparent in the data.

EPILOGUE

Part-way through the 2030s, a number of Earth’s most advanced simulations began to show the same anomaly. As the world’s supercomputers ran scenarios for coastline erosion policies and urban adaptation projects, an uncategorizable event increased noise in the system, leading to frequent errors that cast their usefulness into doubt.

In time researchers determined that the error was not in fact something unforeseeable – not a solar flare or an unsurpassable technical threshold – but something that had been expected since the 2020s: the moment, estimated to arrive around 2040, when the energy requirements of the system doing the modeling would surpass those of everything it was expected to model.34 Despite the staggering efficiencies of unconventional computation, the energy use of information processing alone had reached a hundred quintillion (10²⁰) joules per year, compared to a hundred trillion (10¹⁴) joules just twenty years prior.
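Taking the epilogue’s figures at face value – an illustrative reading, not a sourced projection – the jump from 10¹⁴ to 10²⁰ joules per year over two decades implies the following rate of growth.

```python
# Hypothetical arithmetic based only on the epilogue's own figures: energy
# used by information processing rises from ~1e14 J/yr to ~1e20 J/yr in 20 years.

start_j_per_year = 1e14   # circa 2020, per the text
end_j_per_year = 1e20     # circa 2040, per the text
years = 20

annual_factor = (end_j_per_year / start_j_per_year) ** (1 / years)
print(f"implied annual growth factor: {annual_factor:.2f}x")
# ~2.0x: the energy budget of the modeling system doubles roughly every year.
```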

Knowing that this enormous energy-suck was inevitable, computational devices had been launched into orbit around the Earth in increasing numbers and were now being assembled and augmented as a federated entity. Of the roughly 90,000 terawatts of solar power continuously absorbed by the Earth’s surface, a quantity was diverted to power the machine, which operated at cooler temperatures than were available anywhere on the planet’s surface while simultaneously reducing radiative forcing when a reduction in heat was most critical.

Computation joined many other industrial activities – mining, smelting, toxic synthesis – off-planet, leaving the biosphere to bloom beneath its technological shell. Some assumed this was the inevitable path on all planets where life lasted long enough: an artificial superorganism with its own metabolic relationship between sun and floating minerals, operated from within by both ancient and newer forms of intelligence. Yet even as the very long-term ambitions for the Infinity Mirror were realized, the sun’s luminosity would eventually increase to the degree that no life and no infrastructure would survive.

With a planet-sized boost in computational capacity operating beyond the scope of anything that would have been possible on Earth, models of the galaxy predicted similar fluctuations in the output of stars: technosignatures taken as statistically incontrovertible – the arrival of life not as presence but as mathematical certainty. It remained unlikely we could ever reach them, but the number of twinkling suns increased the longer the simulation ran. This is what it meant to be the first. Our best methods for continuing to exist would always be our tombstone.

Infinity Mirror by Philip Maughan
Graphic design and development: Joel Fear, Son La Pham
Typeface: Schengen by Seb McLauchlan
November 2023
Program Director: Benjamin Bratton
Studio Director: Nicolay Boyadjiev
Associate Director: Stephanie Sherman
Senior Program Manager: Emily Knapp
Network Operatives: Dasha Silkina, Andrey Karabanov
Thanks to The Berggruen Institute and One Project for their support for the inaugural year of Antikythera.