To understand how the Infinity Mirror came to be, it is necessary to unpack the interplay between hardware, simulations, and intelligence. So far we have witnessed the co-evolution of simulations and the hardware on which they run. Better hardware enables better simulations, which in turn demand and help build better hardware. They may do this through market pressures that call for faster, more complex, and more time-efficient models, requiring semiconductor innovation beyond the status quo. They may also shed light on possibilities that would otherwise be inconceivable, as with quantum computers whose activities can only be understood through simulated results.

As the philosopher of cognition Andy Clark and others have argued, knowledge is acquired and sorted spatially. “Thought is movement,” writes Jeff Hawkins in A Thousand Brains. If human thought can be understood as patterns of interaction between cells, constantly shifting inside a plastic frame of galaxy-like complexity, it stands to reason that the computing devices best able to calculate both the relationships between moving objects and the physics that governs those objects play a crucial role in the artificial expansion of intelligence.

To provide an overview, we can lay out a chronology beginning with Kriegsspiel in 1811—acknowledging that the use of physical and numerical modeling likely far preceded it—leading to the “simulation crisis” of the early 2020s: an inflection point defined by the COVID-19 pandemic, the 2021 cryptocurrency bull market, factory bottlenecks in China and elsewhere, and the early flourishing of artificial general intelligence. During this period we see the evolution of modeling, prediction, and intervention based on computer simulations. Seemingly distinct fields overlap and open adjacent pathways for one another. A new all-encompassing interdisciplinary practice is embedded globally.

The application of GPU computing across the full breadth of scientific disciplines, as opposed to studies of the brain, AI, or computation itself, was relatively slow to take hold. One reason was decades of legacy code that required laborious ports and rewrites, making it harder to introduce new hardware. Another was the shortage of GPUs themselves, though as supply normalized the co-design of algorithms and computation (with increased integration of machine learning) began to reshape research. While the atomic limit of transistors had been demonstrated49 by placing a single phosphorus atom within a silicon crystal at liquid-helium temperatures, engineers predicted it would be crises in the machinery of chip production itself, rather than the physical limits of silicon, that would eventually bring about the end of Moore’s Law.

From 2025 onward, high-numerical-aperture extreme ultraviolet (EUV) photolithography became the means by which chip engineers were able to continue shrinking the geometry of transistor gates toward the critical lower limit of 0.25 nanometers. Over the three decades prior, chip designers and engineers had improved resolution by roughly two orders of magnitude, largely by shrinking the exposure wavelength:50 first from the 365 nanometers generated by a mercury lamp to 248 nanometers via a krypton–fluoride laser in the late 1990s, and then to 193 nanometers from an argon–fluoride laser in early 2001. Immersion lithography, which uses water to significantly enlarge the numerical aperture, pushed resolution well below what 193-nanometer light could otherwise print, but it was the 20-year development of EUV that brought the wavelength itself down to just 13.5 nanometers by leveraging an entirely new way of generating light.51
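
The scaling at work here can be made concrete with the Rayleigh criterion, which estimates the smallest printable feature as roughly k1 · λ / NA. The sketch below is illustrative only: the k1 factor and the numerical-aperture values are typical assumptions for each generation, not figures drawn from this essay.

```python
# Rough Rayleigh-criterion sketch: critical dimension ~ k1 * wavelength / NA.
# The k1 and NA values are typical, illustrative assumptions, not figures
# taken from the essay.
def critical_dimension(wavelength_nm: float, numerical_aperture: float,
                       k1: float = 0.35) -> float:
    """Approximate smallest printable half-pitch, in nanometers."""
    return k1 * wavelength_nm / numerical_aperture

generations = [
    ("i-line mercury lamp", 365.0, 0.60),
    ("KrF laser", 248.0, 0.80),
    ("ArF laser (dry)", 193.0, 0.93),
    ("ArF laser (water immersion)", 193.0, 1.35),
    ("EUV", 13.5, 0.33),
    ("high-NA EUV", 13.5, 0.55),
]

for name, wavelength, na in generations:
    print(f"{name:>28}: ~{critical_dimension(wavelength, na):5.1f} nm")
```

Seen this way, immersion attacks the denominator by pushing the numerical aperture above 1, while EUV attacks the numerator, which is why it demanded an entirely new light source rather than an incremental refinement.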

The issue is not the theoretical lower limit of electron gates (an on-off gate built from a single silver atom could do the trick) but rather the complex interplay between specialized elements, as well as factors such as heat, quantum tunneling, and interference from the background radiation that constantly bombards Earth. Moore’s Law is not a law of nature but an aspiration. It demands sustained investment in research and development, and it relies on some of the most sophisticated experimental technologies ever devised, along with the creative minds to devise them. Yet it has limits.

Along with continued improvements in silicon computers, a speciation across alternate substrates extended the capacity to compute. Each of these substrates in turn enabled new types of simulation. This “Cambrian explosion of hardware” advanced most immediately in the service of brain emulation and biological digital twins, which used neuromorphic chip architectures and evolutionary algorithms to more accurately mirror biological phenomena. Breakthroughs also occurred in inverse design and agent-based modeling, which proved well suited to spectral photonics: the parallelization of computations across different wavelengths of light.

Each breakthrough—in quantum computation, analog inference, or cortical processing using biological neurons—involved some degree of simulation across research, development, or manufacturing. The best-suited hardware, once released into the world, found applications nobody could have predicted: deep-mantle geothermal energy, advanced gene therapies, ecosystem monitoring and planning. By 2030, the need for simulation technology was so widespread that its pre-twentieth century associations with falsehood and misrepresentation were all but forgotten.

The integration of simulations capable of describing, predicting, and responding to causal, collective, emergent, and open-ended phenomena is fundamental to planetary intelligence. It was not long, however, before the costs of running so many codependent simulations on Earth made themselves apparent in the data.

Autopoiesis refers to self-creation or self-organization: a network of processes that recursively depend on one another for their generation and realization. An autopoietic system, then, develops the “capacity to maintain [its] identity in spite of fluctuations and perturbations coming from without,” but also from within. This system is never static. For as long as it persists it must be constantly remade, “maintaining the physiochemical and information processing capacities that constitute its own ‘going-on.’”52

EPILOGUE

In 2034, a number of Earth’s most advanced simulations began to show a similar anomaly. As the world’s supercomputers ran scenarios of future coastline erosion, galactic mining schemes, and rapid urban redevelopment, an uncategorizable event introduced noise into the system, multiplying errors and casting the simulations’ usefulness into doubt.

In time researchers determined that the error was not in fact a black swan event, not a solar flare or an unsurpassable technical threshold, but something they had been expecting since the 2020s: the moment when the representational complexity of the system doing the modeling surpassed all the available data it was expected to model. Like Y2K or artificial general intelligence before it, this simulation point omega had been a focus of speculation for decades. Would overfitting reality lead to epistemic black holes, a total disintegration of any ground truth on which to act?
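
The worry about overfitting reality has a familiar toy analogue, sketched below purely as an illustration (none of it comes from the text): give a model at least as many free parameters as it has observations and it will reproduce its training data exactly while saying little that is reliable about anything it has not seen.

```python
# Toy overfitting illustration (not from the essay): a polynomial with as many
# coefficients as data points interpolates its training data exactly, yet its
# predictions away from those points, especially beyond them, are unreliable.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 6)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, x_train.size)

# Degree-5 polynomial: six coefficients for six observations.
coeffs = np.polyfit(x_train, y_train, deg=5)

x_new = np.linspace(0, 1.5, 7)          # includes a modest extrapolation
y_true = np.sin(2 * np.pi * x_new)
y_pred = np.polyval(coeffs, x_new)

print("worst training residual:", np.max(np.abs(np.polyval(coeffs, x_train) - y_train)))
print("worst held-out error:   ", np.max(np.abs(y_pred - y_true)))
```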

Despite the efficiency gains of unconventional computation, the energy use of information processing alone had reached a hundred quintillion (10²⁰) joules per year by 2040, compared with a hundred trillion (10¹⁴) joules just twenty years prior. Since energy demand was expected to keep increasing wherever it could, solar-powered satellite computers were launched into orbit around Earth in huge quantities. Once stable, they combined to form a federated entity known as the Infinity Mirror: a fleet of supercomputers that encircled Earth, a distant descendant of vacuum tubes, microprocessors, GPUs, and photonic cores.
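
As a back-of-the-envelope check on these figures (a sketch using only the numbers quoted above), the jump from roughly 10¹⁴ to 10²⁰ joules per year is a millionfold increase, which over twenty years amounts to approximately a doubling every year.

```python
# Growth implied by the figures quoted in the text: ~1e14 J/yr around 2020,
# ~1e20 J/yr by 2040.
import math

e_2020, e_2040, years = 1e14, 1e20, 20

growth_factor = e_2040 / e_2020                  # one million times
annual_rate = growth_factor ** (1 / years)       # ~2.0x per year
print(f"total growth: {growth_factor:.0e}x, i.e. ~{annual_rate:.2f}x per year")
print(f"implied doubling time: ~{math.log(2) / math.log(annual_rate):.2f} years")
```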

Of the 90,000 terawatts of solar power absorbed by Earth’s surface, a small fraction was captured to power the machine: a glassy overseer suspended at all times above the Arctic, running the simulations integral to planetary intelligence and bouncing the rest of the light back into space. Computation had joined mining, material processing, toxic synthesis, and many other industrial activities off-planet, leaving the biosphere to bloom once more beneath its technological shell.
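
A similar rough check, again using only the figures given in the text, suggests why the captured quantity reads as small: 10²⁰ joules per year averages out to a few terawatts of continuous power, a vanishing share of the 90,000 terawatts absorbed at the surface.

```python
# Average power implied by ~1e20 J/yr, compared with the ~90,000 TW of solar
# power the text says is absorbed by Earth's surface.
SECONDS_PER_YEAR = 365.25 * 24 * 3600        # ~3.16e7 s
annual_compute_energy_j = 1e20               # figure quoted for 2040
solar_power_absorbed_tw = 90_000             # figure quoted in the text

avg_compute_power_tw = annual_compute_energy_j / SECONDS_PER_YEAR / 1e12
share = avg_compute_power_tw / solar_power_absorbed_tw
print(f"average compute power: ~{avg_compute_power_tw:.1f} TW")
print(f"share of absorbed sunlight: ~{share:.5%}")
```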

With the functionally limitless boost in computational capacity operating beyond the scope of anything possible on Earth, models of the galaxy began to predict fluctuations in the output of distant stars comparable to our own: technosignatures considered statistically incontrovertible, a suggestion of life at work. Though the distance between star systems remained, the predictions were considered contact of a sort. The discovery of life not as physical presence but mathematical certainty.

It remained unlikely that frail humans would ever reach those distant places to find out for sure. But the number of twinkling suns continued to increase the longer the simulation ran. Perhaps this was what it meant to be the first intelligence in a youthful universe, they thought, pained by their ongoing solitude but confident of things to come.

Infinity Mirror

Simulation and planetary intelligence
by Philip Maughan


Infinity Mirror by Philip Maughan. Graphic design and development: Joel Fear, Son La Pham. Typeface: Schengen by Seb McLauchlan. November 2023.
Program Director: Benjamin Bratton. Studio Director: Nicolay Boyadjiev. Associate Director: Stephanie Sherman. Senior Program Manager: Emily Knapp. Network Operatives: Dasha Silkina, Andrey Karabanov.
Thanks to The Berggruen Institute and One Project for their support for the inaugural year of Antikythera.