1. Rainer Werner Fassbinder’s Welt am Draht (1973) The appearance of a frightening imitation is a common trope in cinema. Christopher Nolan’s The Prestige (2006), Denis Villeneuve’s Enemy (2013) and Richard Ayoade’s The Double (2013) weave contemporary concerns about identity, the unconscious and the limits of perception into classic storylines inspired by Nikolai Gogol, Edgar Allan Poe, E. T. A. Hoffmann and others. It may be, however, that the true on-screen inheritor of Platonic-Cartesian skepticism is the firmly established genre of simulation cinema – most obviously Rainer Werner Fassbinder’s Welt am Draht (1973), which inspired The Matrix (1999), without which it would be hard to imagine Westworld (2016-2022) or Ready Player One (2018), all of which are attuned to the possibilities of totalizing computer simulations and the philosophy of universal computationalism as popularized in Nick Bostrom’s 2003 paper, “Are You Living in a Computer Simulation?” The paper assures us “it is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium: silicon-based processors inside a computer could in principle do the trick as well.”

  2. In the 1760s the French naturalist and encyclopedia-maker Georges-Louis Leclerc, Comte de Buffon started a series of experiments heating iron balls until they glowed white, then measuring how long they took to cool to room temperature. He didn’t use a thermometer for these experiments; he relied on touch alone, yet his results were surprisingly accurate. Next, he heated and cooled a variety of materials including sandstone, glass and various rocks. He was hoping to learn the age of the Earth. Buffon had come to believe that the Earth indeed had a beginning: it was originally a molten mass too hot for any living creature, and it had to cool before it could support life. In 1778, he published Les Époques de la Nature, asserting that the Earth was 74,832 years old, a roughly 68,000-year increase on the Biblical chronology that had held since the Middle Ages.

  3. In Simulation and Similarity: Using Models to Understand the World (2013) philosopher Michael Weisberg explains: “Roughly speaking, concrete models are physical objects whose physical properties can potentially stand in representational relationships with real-world phenomena. Mathematical models are abstract structures whose properties can potentially stand in relation to mathematical representations of phenomena. Computational models are sets of procedures that can potentially stand in relation to a computational description of the behavior of a system.”

  4. One example might be the continued idealization of pastoral life despite millennia of peasants fleeing agricultural labor at the first opportunity, and despite the dramatic reduction in food-borne illness, the improvements in nutrition and agricultural productivity, and the range of culinary choice now available throughout much of the world. For more on this specific topic see Cooking Earth (2023) by Black Almanac.

  5. The use of the planetary in the context of an astrobiology paper is twofold. First, it enables the abstraction of historical dynamics on Earth that may prove to be generic, useful as a benchmark with which to understand evolutionary principles elsewhere in the universe. Second, it means recognizing that small tweaks in the DNA of microscopic organisms can cause changes that cascade upward and become significant enough in scale to change the weather. New states are created by this upward causation and in turn constrain and shape behavior at lower levels. As such, the idea of a planetary boundary is neither a single ultimate region of activity nor a hierarchy of scales in any conventional sense. Instead we have a prismatic spheroid where both less and more are different. For more see the original paper: Frank A, Grinspoon D and Walker S (2022) Intelligence as a planetary scale process. International Journal of Astrobiology, 1-5.

  6. These include emergence as seen in biological systems driving the organization of matter from chemistry to cells and up to the planetary boundary. It is worth noting too that emergence is often associated with top-down causation, as newly formed modes of behavior modulate the parts (cells, organisms, individuals) from which they have gradually become distinct. Others are the active use of information (a distinguishing feature of living systems), the semantic aspect of that information (its meaning, or what it does) and the instantiation of complex adaptive systems which self-regulate via the use of signals and boundaries – which here may mean a planet defending itself from asteroids as much as a cell membrane guarding against invading parasites.

  7. Frank A, Carroll-Nellenback J, Alberti M and Kleidon A (2018) The anthropocene generalized: evolution of exo-civilizations and their planetary feedback. Astrobiology 18, 503-518. The five classes of planet featured here are an update of astrophysicist Nikolai Kardashev’s energy-harvesting measuring stick, the Kardashev Scale – a rubric for searching out intelligent civilizations according to their capacity to tap into the energy supplies of their planet, star, or galaxy of origin. Later adaptations of the Kardashev Scale have ranked the capacity to compute, to bounce back from natural disasters, or to engineer at ever smaller scales, from objects to genes and molecules, atoms to nuclei, quarks and leptons, and finally time and space.

  8. Book Cover: The Major Transitions in Evolution, Eörs Szathmáry and John Maynard Smith The paper borrows the concept of “evolutionary transitions” from Eörs Szathmáry and John Maynard Smith’s The Major Transitions in Evolution (1995) which proposes a series of “changes in the way information is stored and transmitted” to account for an apparent increase in biological complexity in evolutionary lineages.

  9. Haff, P (2014) Humans and technology in the Anthropocene: six rules. The Anthropocene Review 1, 126-136.

  10. “In living systems, information always carries a semantic aspect – its meaning – even if it is something as simple as the direction of a nutrient gradient in chemotaxis.”

  11. Clark A (2023) The Experience Machine: How Our Minds Predict and Shape Reality. London: Allen Lane. Simulation technologies that artificialize this dynamic conform to what Richard Dawkins described as an “extended phenotype,” extending the output of genes beyond protein synthesis or eye color to the changes they make to their environment.

  12. The anecdotal parallel to this line of neuroscientific enquiry would be the “method of loci” or “memory palace,” placing objects, words or symbols into a familiar environment, such as one’s home, then mentally moving through that environment in order to recollect the new information. In his book Moonwalking with Einstein (2011), journalist Joshua Foer recounts his victory at the 2006 US Memory Championship at which he memorized 52 cards in 1 minute and 40 seconds despite having no real aptitude (the book opens with him forgetting his keys a year earlier). This suggests that learning is eased by attaching information to an existing reference frame, but also that thinking itself evolved out of sensory-motor interactions between body and world.

  13. IMBUILD001 – TABLES AND MATRICES: In Kriegsspiel a set of tables is used to calculate combat outcomes. The game’s map features different types of terrain – forests, mountains, rivers – each of which influences the effectiveness of a battalion. Outcomes derived from a unit’s artillery, strength and relative position are then perturbed by dice rolls, introducing a stochastic element to mimic the uncertainty of war. What’s more, the game’s umpire maintains a delay between orders being given and their implementation, recreating the communication problems of 19th-century warfare. In a sense, the umpire represents the “human computer” of the age: people who were employed to carry out numerical procedures without deviation – producing and working with mathematical tables used by the insurance industry, in agriculture, science, industry and government. In the Netherlands a (human-run) computer simulation was used to predict the effects of building the Afsluitdijk between 1927 and 1932. The majority of calculations related to the Manhattan Project were carried out by uncredited “computers.”
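
A rough illustration of the mechanism, not a reconstruction of any historical ruleset: a lookup table, a dice roll to perturb the result, and a queue standing in for the umpire’s order delay. The table values, unit attributes and delays below are invented.

```python
import random
from collections import deque

# Invented outcome table: (terrain, attacker advantage bucket) -> casualties inflicted.
# Historical Kriegsspiel tables were far more detailed; these numbers are placeholders.
OUTCOME_TABLE = {
    ("open",   "superior"): 30, ("open",   "even"): 20, ("open",   "inferior"): 10,
    ("forest", "superior"): 20, ("forest", "even"): 12, ("forest", "inferior"): 6,
    ("river",  "superior"): 10, ("river",  "even"): 6,  ("river",  "inferior"): 3,
}

def resolve_combat(terrain, attacker_strength, defender_strength):
    """Look up a base result, then perturb it with a dice roll (the stochastic element)."""
    ratio = attacker_strength / defender_strength
    bucket = "superior" if ratio > 1.5 else "inferior" if ratio < 0.75 else "even"
    base = OUTCOME_TABLE[(terrain, bucket)]
    dice = random.randint(1, 6) + random.randint(1, 6)   # 2d6, as an umpire might roll
    return round(base * dice / 7)                        # scale around the average roll

# The umpire's delay between orders and their execution, modeled as a two-turn queue.
order_queue = deque()

def issue_order(order, delay=2):
    order_queue.append((order, delay))

def advance_turn():
    """Age every pending order by one turn and return those whose delay has elapsed."""
    executed = []
    for _ in range(len(order_queue)):
        order, delay = order_queue.popleft()
        if delay - 1 <= 0:
            executed.append(order)
        else:
            order_queue.append((order, delay - 1))
    return executed

issue_order("battalion A attacks across the river")
print(advance_turn())   # [] -- the order is still in transit
print(advance_turn())   # the order arrives this turn
print(resolve_combat("river", attacker_strength=900, defender_strength=700))
```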

  14. IMBUILD002 – MATHEMATICAL FORMULAS: Calculations that approximate, in numbers and from first principles, the behavior of objects, matter and agents are the foundation of the study of physics. They are not representations of any deeper reality, however, but extensions of intelligence’s capacity to know its environment, no less real than any other type of thought.
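
A minimal example of what such a first-principles calculation looks like in practice: stepping a projectile forward in time from Newton’s second law. The time step and launch values here are arbitrary choices.

```python
# Euler integration of projectile motion from first principles (F = m*a),
# ignoring air resistance; dt and the launch parameters are arbitrary.
g = 9.81             # gravitational acceleration, m/s^2
dt = 0.01            # time step, s
x, y = 0.0, 0.0      # position, m
vx, vy = 30.0, 40.0  # initial velocity components, m/s

while y >= 0.0:
    x += vx * dt
    y += vy * dt
    vy -= g * dt     # only gravity acts on the vertical component

print(f"Predicted range: {x:.1f} m")  # analytic answer: 2*vx*vy/g, roughly 244.6 m
```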

  15. Fairchild Semiconductor division head Robert Noyce in his office, with diagrams of semiconductors and microchips (Photo by Ted Streshinsky/Getty Images). IMBUILD003 – TRANSISTORS: The first Turing-complete, programmable electronic digital computer, ENIAC, was finished in 1945. The 27-ton machine was intended to calculate ballistic trajectories at far higher resolution than was possible using tables and human computers, though its actual first program simulated the potential for a fusion-powered, thermonuclear bomb. The ENIAC’s vacuum tubes and crystal diodes were soon outpaced by the advent of point-contact, MOSFET (metal-oxide-semiconductor) and mesa transistors, the last of which involved covering a block of germanium – another semiconductor material – with a drop of wax. Jay Lathrop, an engineer with the US Army’s Diamond Ordnance Fuze Lab in the 1950s, needed a transistor that would fit inside a mortar shell. The wax method was difficult to miniaturize, so Lathrop and lab partner James Nall devised a method that involved turning microscope optics upside down. As Chris Miller writes in Chip War: The Fight for the World’s Most Critical Technology (2023): “Lathrop had spent years looking through microscopes to make something small look bigger. As he puzzled over how to miniaturize transistors, he and Nall wondered whether microscope optics turned upside down could let something big—a pattern for a transistor—be miniaturized. To find out, they covered a piece of germanium material with a type of chemical called a photoresist, which they acquired from Eastman Kodak, the camera company. Light reacts with photoresist, making it either harder or weaker. Lathrop took advantage of this feature and created a ‘mask’ in the shape of a mesa, placing it on the microscope with upside-down optics. Light that passed through holes in the mask was shrunk by the microscope’s lens and projected onto the photoresist chemicals. Where the light struck, the chemicals hardened. Where light was blocked by the mask, they could be washed away, leaving a precise, miniature mesa of germanium. A way to manufacture miniaturized transistors had been found.” The new process, photolithography, was patented in 1957. Nall was hired to work at Fairchild Semiconductor by Robert Noyce, who later went on to found Intel with Gordon Moore, and the lenses for the company’s photolithography program were bought in a San Francisco camera shop. This was the innovation that enabled chip fabricators to keep pace with the prediction that the number of transistors in an integrated circuit would double every two years, also known as Moore’s Law.
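
Expressed as a formula (and only as the rough heuristic Moore intended), the two-year doubling means transistor counts grow as:

```latex
N(t) \;\approx\; N_{0}\cdot 2^{\,(t - t_{0})/2}
```

As a back-of-envelope check, starting from the roughly 2,300 transistors of the Intel 4004 in 1971 (see note 18), a two-year doubling gives 2,300 × 2^25 ≈ 8 × 10^10 by 2021 – the right order of magnitude for the flagship processors of the early 2020s.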

  16. IMBUILD004 – HYBRID SYSTEMS: The first flight simulator was built by Edwin Link in 1929, a chunky blue aircraft frame with an engine underneath that used pneumatic bellows to control pitch and roll and a small motor-driven device to produce turbulence. Link built the “Link Trainer” because his father, a piano and organ maker, could not afford flight school for his son, yet after a series of fatal accidents attributed to pilot error the US Army Air Corps bought six of the devices for $3,500 each, and many of the pilots heading into World War Two were trained on them. The first computer image generation (CIG) systems for simulation were produced by General Electric for the space program. Early versions produced a patterned “ground plane” image while later systems were able to generate images of three-dimensional objects, enabling a continuous 180-degree field of view. The interplay of computed scenery in a loop with the on-board instruments and the human between them represents a specific type of hybrid simulation, one that combines physical or “concrete” testing features with computer modeling and, in this case, a human-in-the-loop.
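
A schematic of that loop, offered only as a sketch of the idea rather than any real trainer’s architecture; every function below is an invented placeholder.

```python
import time

def read_pilot_controls():
    """Placeholder: stick and throttle inputs from the physical cockpit."""
    return {"pitch": 0.02, "roll": -0.01, "throttle": 0.8}

def step_flight_model(state, controls, dt):
    """Placeholder physics update; a real trainer would model aerodynamics here."""
    state["altitude"] += controls["pitch"] * 500 * dt
    state["heading"] = (state["heading"] + controls["roll"] * 30 * dt) % 360
    return state

def render_scene(state):
    """Placeholder for the computed 'ground plane' or 3D imagery fed to the displays."""
    return f"horizon drawn for altitude {state['altitude']:.0f} ft"

def drive_instruments(state):
    """Placeholder for the physical gauges driven by the simulated state."""
    return f"altimeter set to {state['altitude']:.0f} ft"

state = {"altitude": 3000.0, "heading": 90.0}
for _ in range(3):                     # the human sits inside this loop in real time
    controls = read_pilot_controls()   # physical, "concrete" inputs
    state = step_flight_model(state, controls, dt=0.1)   # computed model
    print(render_scene(state), "|", drive_instruments(state))
    time.sleep(0.1)
```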

  17. America’s Army was the first overt use of video games as a recruitment and propaganda tool, a tactic that later spread to fields from stock picking to air traffic control. It incorporated real-world data based on the latest armaments, vehicles, medical training (though this was optional) and even disciplinary protocols for poor behavior. It cost just $7 million from a recruitment budget of $3 billion and was continually developed with an additional $3 million every year until the series was discontinued in 2022. According to Colonel Wardynski, the game generated interest from other agencies, which resulted in the development of a training version for internal government use only. In a 2005 blog post, Scott Miller – the game developer and publisher who created Duke Nukem in 1991 and worked on America’s Army in the early 2000s – wrote about furious disagreements and a run of resignations after the moonshot project turned out to be a runaway success and its history was gradually rewritten: “When the project was just a fly-by-night rogue mission, no one paid much attention to it. Once the Army figured out that the game was the single most successful marketing campaign they'd ever launched (at 1/3rd of 1 percent of their annual advertising budget), we suddenly came under a very big microscope […] The Army is basically clueless when it comes to making games and they don’t know how to treat people, especially game developers. They had an A-level team, but I honestly don’t see them building another one (particularly since they weren’t the ones who built the first one). It’ll be interesting to watch what happens though. Essentially, there was a magic couple of years there where two totally alien cultures came together to do something cool.”

  18. IMBUILD005 – INTEL 4004: The first microprocessor, the 4-bit Intel 4004 CPU, was released to the market in 1971. It had been an under-resourced side project at Intel, whose primary concern at the time was producing memory chips. The company lacked specialists capable of patterning the logic gates to be imprinted on silicon. Somewhat late in the day they hired an Italian engineer named Federico Faggin, who left his initials, “F. F.,” on the design like an artist’s signature transferred into mass production.

  19. IMBUILD006 – PICTURE PROCESSING UNIT: The Japanese electronics giant Ricoh developed what it described as a “video co-processor” in the early 1980s and named it the PPU – or picture processing unit – later incorporated into Nintendo’s Famicom (1983) and the Nintendo Entertainment System, or NES (1985). Up until that point the entire command center of a machine had been the central processing unit or CPU – essentially its brain and the most expensive, most tightly IP-controlled part of the product. The major realization for Nintendo was that their system didn’t need to be a universal machine that could run any kind of software. Nobody was planning to run payroll or inventory management on it. The device was to be used for gaming and gaming alone – meaning they could pair a cheaper CPU with a dedicated parallel chip for processing the background scenery and sprites (moving characters), which it did with the 8-bit, 256×240-pixel PPU.
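
A toy sketch of that division of labor: game logic in one routine (the CPU’s job), frame composition from background tiles and sprites in another (the PPU-style co-processor’s job). The dimensions are shrunk and the tile and sprite data are invented.

```python
# Toy frame composition in the spirit of a PPU: a tiled background layer with
# sprites drawn over it. Real NES output was 256x240; this grid is tiny.
WIDTH, HEIGHT, TILE = 16, 8, 4

background_tiles = [[(x // TILE + y // TILE) % 2 for x in range(WIDTH)]
                    for y in range(HEIGHT)]          # checkerboard "scenery"
sprites = [{"x": 5, "y": 3, "pixel": 8}]             # one moving character

def update_game_logic(sprites):
    """What the CPU does: gameplay rules, here just nudging the sprite right."""
    for s in sprites:
        s["x"] = (s["x"] + 1) % WIDTH

def compose_frame(tiles, sprites):
    """What the co-processor does: merge background and sprite layers per pixel."""
    frame = [row[:] for row in tiles]
    for s in sprites:
        frame[s["y"]][s["x"]] = s["pixel"]
    return frame

update_game_logic(sprites)
for row in compose_frame(background_tiles, sprites):
    print("".join(".#@"[min(v, 2)] for v in row))
```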

  20. IMBUILD007 – GEFORCE 8800GTX: The 128 stream processors (or cores) inside Nvidia’s GeForce 8800GTX allowed graphical tasks to be parallelized more efficiently than had been possible up to that moment. This enabled richer and more immersive gameplay but also opened the door to general-purpose GPU computing using graphics cards. A new paradigm in simulation capabilities emerged from the furnace of industrial competition to support the hunger for improved graphics and in-game physics. It was gradually appreciated and absorbed in fields beyond its origin, mostly during the 2010s. Gaming produced the core technology behind complex modeling, advanced plasma physics and machine learning, but also Bitcoin mining. In the early 2020s the bull market and crash of cryptocurrencies, the expansion of working from home, factory shutdowns across Asia and PC gamers trapped indoors by the Covid-19 pandemic produced a worldwide shortage of graphics cards we might think of as a “simulation crisis.” It brought the widespread realization that an expanding virtual realm is not without cost – the energy and resources required to keep these worlds operational, but also the immense price paid to manufacture the hardware on which they run.
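
The shift described here is from shading pixels to applying the same data-parallel pattern to arbitrary numbers. A rough illustration using NumPy, whose whole-array expressions stand in for the style of computation a GPU spreads across its cores (GPU array libraries such as CuPy or JAX expose essentially the same idiom):

```python
import numpy as np

# One elementwise expression over a million values instead of a Python loop.
values = np.random.rand(1_000_000)

# Loop version: one value at a time, the way a single scalar core would work.
looped = [3.0 * v * (1.0 - v) for v in values[:5]]

# Data-parallel version: the whole array in one expression, the pattern that
# general-purpose GPU computing accelerates.
vectorized = 3.0 * values * (1.0 - values)

print(looped)
print(vectorized[:5])   # identical numbers, computed as one bulk operation
```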

  21. Melencolia I, Albrecht Dürer (1514) Ray tracing is a means of calculating light transport plotted across a grid system, with a history that reaches back to the German draftsman Albrecht Dürer. The necessary computations can be expensive, which is why Nvidia’s RTX (the RT is for “ray tracing”) GPUs feature dedicated ray-tracing cores. Unreal Engine 5, first demonstrated in 2020, incorporates Lumen, a dynamic global illumination and reflections system based on ray tracing and designed to produce photorealistic effects on next-generation consoles. Deep learning super sampling (DLSS), meanwhile, uses dedicated deep learning cores to upscale frames to the target resolution while rendering them at only a fraction of it.
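
A bare-bones version of the grid idea: cast one ray per cell of an image grid from a viewpoint, test it against a single sphere, and shade by how directly the surface faces the camera. The scene values are arbitrary and there is no bouncing or global illumination here.

```python
import math

WIDTH, HEIGHT = 40, 20
sphere_center, sphere_radius = (0.0, 0.0, 3.0), 1.0

def ray_sphere(origin, direction, center, radius):
    """Return the nearest positive intersection distance, or None (standard quadratic test)."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c                 # direction is unit length, so a = 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

for j in range(HEIGHT):
    row = ""
    for i in range(WIDTH):
        # One ray per cell of the image grid, Duerer-style.
        x = (i / WIDTH - 0.5) * 2
        y = (0.5 - j / HEIGHT) * 2
        length = math.sqrt(x * x + y * y + 1)
        d = (x / length, y / length, 1 / length)
        t = ray_sphere((0, 0, 0), d, sphere_center, sphere_radius)
        if t is None:
            row += " "
        else:
            hit = tuple(d[k] * t for k in range(3))
            n = tuple((hit[k] - sphere_center[k]) / sphere_radius for k in range(3))
            facing = -n[2]               # 1 when the surface faces the camera head-on
            row += ".:*#@"[min(4, int(facing * 5))]
    print(row)
```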

  22. Large Hadron Collider In Reconstructing Reality, Margaret Morrison uses the example of the Large Hadron Collider and the discovery of the long-theorized Higgs particle within Peter Higgs’s lifetime to explain how simulations contribute to experimental knowledge. She notes that the detected five-sigma signal (a result with a 0.00003 percent likelihood of being a statistical fluctuation) could only have been established through the use of simulation data – never mind the vast number of simulations necessary in the production of the machine itself.
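
The arithmetic behind that quoted figure: the one-sided tail probability of a five-standard-deviation excursion under a normal distribution, computed here via the standard error-function identity.

```python
import math

# One-sided tail probability of a 5-sigma excursion under a normal distribution.
p = 0.5 * math.erfc(5 / math.sqrt(2))
print(p)                          # ~2.87e-07
print(f"{p * 100:.5f} percent")   # ~0.00003 percent, the figure quoted in the note
```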

  23. In the same interview Morrison described some of the work partner militaries had been doing to simulate future battle spaces, including a “snake robot” for the Australian Defence Force, which has been conducting trials with the Virtual Battle Space series since 2003. “It was a robot snake that laid explosive devices,” Morrison said. “Each device was a part of the snake.”

  25. The 2023 Hummer EV used Epic Games’s Unreal Engine to visualize a digital twin of itself on its 13.4-inch dashboard display. The interface was designed by Perception, the creative studio responsible for much of the on-screen interface technology in Marvel movies (and it bears no small resemblance to the inside of Iron Man’s helmet). As well as running real-time diagnostics on the vehicle, users are able to customize the skins of the vehicle on-screen and to reimagine the terrain as something quite other than the suburban roads they may be driving on – the moon, for example.

  26. Pollok T, Junglas L, Ruf B and Schumann A (2019) UnrealGT: Using Unreal Engine to Generate Ground Truth Datasets. International Symposium on Visual Computing pp 670-682.

  27. Simulations are places where multiple competing interests overlap – whether that be challenger theories in science, wants and needs in social models, or the competitive and collaborative aspects of game worlds. “Serious games” are instances in which the interests of players are pitted against one another in order to arrive at useful outcomes, as when volunteers were asked to play a farming simulator in order to develop protocols against African Swine Fever, or to run operations in a shipping port as a way to develop greener practices. Sometimes the dynamics are unplanned but worthy of study: case studies of governance in EVE Online, which can host some 20,000 players at any one time across roughly 8,000 solar systems, or loot systems like Dragon Kill Points, which stand in as a political economy for solving redistribution problems in massively multiplayer online games like EverQuest or World of Warcraft.
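
A minimal sketch of how a Dragon Kill Points ledger works as a redistribution mechanism: players accrue points for participation and spend them to claim drops. The point values, rules and names below are invented; real guild systems vary widely.

```python
# A toy Dragon Kill Points (DKP) ledger: earn points by showing up, spend them
# to claim loot. All values and player names are invented for illustration.
ledger = {"Aria": 120, "Borin": 85, "Cale": 85}

def award_raid_attendance(ledger, players, points=10):
    """Everyone who turned up for the raid earns the same participation points."""
    for p in players:
        ledger[p] = ledger.get(p, 0) + points

def allocate_drop(ledger, bidders):
    """Highest accumulated DKP among the bidders wins and spends their points."""
    winner = max(bidders, key=lambda p: ledger.get(p, 0))
    cost = ledger.get(winner, 0)    # simple "spend-all" rule; many variants exist
    ledger[winner] = 0
    return winner, cost

award_raid_attendance(ledger, ["Aria", "Borin", "Cale"])
winner, cost = allocate_drop(ledger, ["Borin", "Cale"])
print(winner, "claims the drop for", cost, "DKP")   # ties broken arbitrarily here
print(ledger)
```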

  28. IMBUILD008 – NVIDIA A100 AND SUNWAY SW26010: The open commercial field that marked the early months after the public release of GPT-3.5 was characterized both approvingly and with great reservation as an “AI arms race.” Both LLMs and generative AI were certainly being considered for their applicability in warfare – but this was taking place in parallel with a period of learning and experimentation that spread to all fields where intelligence was at work, from script writing to cooking. OpenAI’s GPT models were trained on Microsoft servers running cabinets filled with Nvidia A100 Tensor Core GPUs, sold on boards of eight units at around $200,000 each. “One day a new application that wasn’t possible before discovers you,” said Jensen Huang when interviewed by the CNBC network, highlighting the role Nvidia GPUs and the company’s general-purpose GPU platform CUDA played in AlexNet, the convolutional neural network that won the 2012 ImageNet image recognition contest. Often cited as the big bang moment for AI in the 2010s, the contest proved that the sort of parallel processing being used to generate realistic computer graphics could be the substrate on which deep learning could evolve. As advancements accelerated towards the end of the decade, and with breakthroughs in image generation in 2020 and beyond, Nvidia’s generalization of its hardware with the $8 billion CUDA project paid off. Its new range of chips provided relief for the company following the boom and bust cycles of 2020-2023 brought about by a viral pandemic, factory shutdowns, an energy crisis exacerbated by wars in eastern Europe and the Middle East, a rapid increase in the number of people working from home and a spike then crash in cryptocurrency prices that saw demand for Nvidia GPUs skyrocket then fall. After almost a decade of flatlining, Nvidia’s share price went through the roof. When the US announced export controls designed to hamper Chinese technological advancement on October 7, 2022, the company needed to rewire its supply chains to keep serving a market that accounts for a full quarter of its revenue. The lower-spec H800 was designed specifically to get around the US’s rules, though this too was included in the ban when the controls were updated to “close loopholes” on October 17, 2023. The sanctions caused enormous upset in China. Among the many areas of business and defense applications hampered by the controls, their impact was felt on another arms race of sorts: the Top500 Project world ranking of supercomputers. In 2022, the United States dominated the top of the league with the most powerful individual systems but had just 128 supercomputers in the rankings compared with China’s 173 (China, it should be noted, had just one system in the top 500 in the year 2000). Where the Tianhe-2 supercomputer ran on 16,000 computer nodes, each comprising two Intel Ivy Bridge Xeon processors and three Xeon Phi coprocessor chips, the machine that succeeded it, Sunway TaihuLight, ran on an architecture built from 40,960 Chinese-designed SW26010 manycore 64-bit RISC processors, based on the Sunway architecture developed at the National Supercomputer Center in Wuxi, Jiangsu province. Likewise, Tianhe-3, which many believed to be China’s first exascale computer – meaning it was capable of performing one quintillion operations per second – stacked a greater number of Phytium 2000+ FTP chips, Arm-based processors designed by the Chinese firm Phytium on an architecture licensed from the majority Japanese-owned Arm, with Matrix 2000+ MTP accelerators produced by China’s National University of Defense Technology (NUDT), requiring an undisclosed but presumably gargantuan amount of power and space to achieve comparable results.

  29. Bauer P, Stevens B and Hazeleger W (2021) A digital twin for the green transition. Nature Climate Change Vol 11, pp 80-83.

  30. IMBUILD009 – EXTREME SILICON COMPUTING: In “The digital revolution of Earth-system science,” published in Nature Computational Science (vol 1, February 2021), Peter Bauer and colleagues note serious concern within the community about the delivery of reliable weather and climate predictions “in the post-Moore/Dennard era,” and propose a novel infrastructure “that is scalable and more adaptable to future, yet unknown computing architectures.” Dennard scaling refers to the observation that as transistors shrink, their power density stays roughly constant; its breakdown created a heat ceiling that contributed to the “multicore crisis” and to the turn toward parallel computation across distinct cores. “Increasingly,” they write, “the recognition that small scales matter for climate predictions and that Earth-system complexity matters for weather predictions dawns on our community.” This is an “extreme-scale computing and data-handling challenge,” and even though porting computationally intensive parts of the code to novel architectures such as GPU-accelerated systems has shown promising results, much of that code was written generations earlier and requires painstaking rewrites – as when the US National Center for Atmospheric Research’s high-resolution version of the Community Earth System Model (CESM) was adapted and optimized for the Sunway TaihuLight supercomputer. Instead the team looks to algorithms and computer architectures designed in tandem, as demonstrated by Nvidia’s Earth-2 supercomputer and its physics-ML framework Modulus – a move for which “the weather and climate community is largely unprepared.”
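
In the standard textbook formulation (stated here as the general scaling relation, not as anything specific to the Bauer paper), dynamic CMOS power is the product of switching activity, capacitance, voltage squared and frequency; Dennard’s observation was that shrinking a device by a factor κ lets those terms scale together so that power per unit area holds roughly constant:

```latex
P \approx \alpha\,C\,V^{2}f, \qquad
C \to \frac{C}{\kappa},\; V \to \frac{V}{\kappa},\; f \to \kappa f
\;\Rightarrow\; P \to \frac{P}{\kappa^{2}},\quad
\text{area} \to \frac{\text{area}}{\kappa^{2}},\quad
\frac{P}{\text{area}} \approx \text{constant}
```

Once supply voltages could no longer be lowered with each generation, the V² term stopped shrinking and heat per unit area climbed with transistor density – the post-Dennard constraint the authors refer to.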

  31. They are not only a clear example of “an emergent complex adaptive system composed of multi-layered networks of semantic information flows” (p. 9) but also a demonstration of the way that self-representation is a fundamental first step towards prolonged “going on.” Much as groups of neurons within cortical columns create thousands of competing models, the Earth is creating models of itself that, like all models, are tools for action and greater understanding.

  32. The “simulation intelligence” stack is outlined in a paper with a conspicuously high-profile line-up of co-authors drawn from across industry (Intel, Qualcomm AI) and research (Oxford University, the Alan Turing Institute and the Santa Fe Institute). It includes multi-physics and multi-scale modeling, surrogate modeling and emulation, simulation-based inference, causal modeling and inference, agent-based modeling, probabilistic programming, differentiable programming, open-ended optimization and machine programming. See Lavin A, Krakauer D, Zenil H et al. (2022) Simulation Intelligence: Towards a New Generation of Scientific Methods.
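
One of the listed motifs, surrogate modeling (or emulation), reduces to fitting a cheap statistical stand-in to a handful of expensive simulator runs. A schematic version, with an analytic toy function standing in for the costly simulator and a polynomial as the emulator:

```python
import numpy as np

def expensive_simulator(x):
    """Stand-in for a costly physics run; here just an analytic toy function."""
    return np.sin(3 * x) + 0.5 * x

# A small design of experiments: only a handful of full simulator evaluations.
train_x = np.linspace(0, 2, 8)
train_y = expensive_simulator(train_x)

# The surrogate: a cheap polynomial emulator fitted to those runs.
coeffs = np.polyfit(train_x, train_y, deg=5)
surrogate = np.poly1d(coeffs)

# The emulator can now be queried thousands of times at negligible cost,
# e.g. for optimization, sensitivity analysis or simulation-based inference.
query = np.linspace(0, 2, 5)
print(np.round(expensive_simulator(query), 3))
print(np.round(surrogate(query), 3))   # close to the simulator on this range
```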

  33. Over 30 years, engineers achieved a resolution reduction of two orders of magnitude by working on a combination of three factors: the wavelength of the light; the k1 coefficient, which stands in for a range of process-related factors; and the numerical aperture, a measure of the range of angles over which the system can collect and emit light. The critical dimension – that is, the smallest feature size that can be printed with a given photolithography-exposure tool – is proportional to the wavelength of light divided by the numerical aperture of the optics. To achieve smaller critical dimensions, engineers must use shorter light wavelengths, larger numerical apertures, or a combination of the two.
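
Written out, this is the Rayleigh criterion. As an indicative example (the k1 of roughly 0.4 is an assumption; current commercial EUV optics have a numerical aperture of 0.33), EUV’s 13.5-nm light yields printable features in the mid-teens of nanometers:

```latex
CD = k_{1}\,\frac{\lambda}{NA}
\qquad\Longrightarrow\qquad
CD \approx 0.4 \times \frac{13.5\ \text{nm}}{0.33} \approx 16\ \text{nm}
```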

  34. As ASML staff explained at IEEE, EUV involves “hitting molten tin droplets in mid flight with a powerful CO2 laser. The laser vaporizes the tin into a plasma, emitting a spectrum of photonic energy. From this spectrum, the EUV optics harvest the required 13.5-nm wavelength and direct it through a series of mirrors before it is reflected off a patterned mask to project that pattern onto the wafer. And all of this must be done in an ultraclean vacuum, because the 13.5-nm wavelength is absorbed by air.”

  35. A 2016 study by the Semiconductor Industry Association produced a road map assessment of the likely future use of energy in information processing. They estimated that sometime around 2040 the world’s computer chips would demand more electricity than is expected to be produced globally.