
A theory of evil design


Universities, art schools and design companies all think they can recognise “good” and “bad” design, but these assumptions are shattered by AI and other insurgent technologies. A proposal for “evil” design looks to shake off some established truths in pursuit of a new paradigm for meaning-making.


Text by Philip Maughan

I: THE OPPOSITE OF “GOOD” DESIGN

“Don’t be evil”
Google company motto (circa 2000-2015)



Walk into any bookshop dedicated to design or visual culture and you’ll find books featuring endless iterations of “good” design. There will be design that claims to be radical; design that is anti-capitalist, anti-colonial, anti-racist and so on. There will be design books aimed at reclaiming our lost humanity, often, paradoxically, by anthropomorphising the non-human. There will be books that propose solutions to social problems, so long as they are low-tech, things like rooftop gardens or a neighbourhood fridge. Without a doubt, there will be an abundance of plants, mushrooms, insects, and design objects whose longevity reflects positively on the artisans who produced them.

The books are usually by graduates or instructors from places like Parsons School of Design, the Gerrit Rietveld Academie and London’s Royal College of Art. As such, their subject matter rehearses the design philosophy being taught to postgraduates in the West, who share a common sense of the ills that plague society, and which must be innovated out of existence (or at least mercilessly critiqued). It would be hard for anyone walking into such a bookshop and scanning these books to disagree with them. They may, however, wonder exactly how effective these designers have been at bringing about the changes they claim we need.

Last year the New Yorker ran a profile of Pavels Hedström, a Swedish designer who graduated from the Architecture and Extreme Environments program at Copenhagen’s Royal Academy. The story opens with Hedström and classmates visiting Tadao Ando’s Chichu Art Museum in Japan. Hedström is bewildered by the epic interplay of sky, sea, soil and man-made construction that typifies Ando’s brand of “critical regionalism.” He is struck by the vast resources, including many tons of concrete, that must have been required to build it. Upon returning home, he begins suffering panic attacks induced by the scale of the problem “good” design is required to solve. As journalist Sam Knight explains:

“Hedström now ascribes his illness, which lasted slightly more than a year, to a sense of anxiety about the future of the planet – and to the delusion that he might be able to save it. ‘Does the world want to be saved?’ Hedström asked me once. ‘You know, it’s a huge question.’ When he returned to architecture school, he changed his approach to design, starting with a search for balance in his own body and, later, for devices that might help humans to live more intimately, and equitably, with other species. He channelled his dread into ideas that existed somewhere between solutions and warnings for the future.”

Following his Damascene conversion, Hedström began making work that fits within the tradition of speculative design, a means of generating provocative, future-focused concepts and proposals for debate rather than execution, which originated in the 1960s and 1970s. Its pedigree extends from early practitioners such as Yona Friedman and Superstudio to the British designers Anthony Dunne and Fiona Raby in the 1990s. It later became the basis for speculative research and development work by giant firms like BIG, AMO, or Herzog & de Meuron. “Groups such as Archigram, in London, and Archizoom Associati, in Florence, imagined walking cities and plug-in cities and ‘No-Stop City,’ a city freed from architecture itself,” Knight writes. “They plumbed the future in order to confound the present.” Animated by the issue of climate change, Hedström started to make physical prototypes, like a PVC bodysuit with a mealworm terrarium attached at the belly, and an outdoor jacket called Fog-X that can harvest drinking water from fog, part of his ongoing mission to “link humans with non-human life in order to create planetary harmony.”

“Good” impulses can lead to bad outcomes. Examples might include overprotective parenting, a love for one’s country, or an insistence on telling the truth in all circumstances. In the case of “good” design, however, the stakes are reduced to near-zero because, despite the lofty rhetoric, the interventions are as safe as it gets. “Good” design involves finding counterintuitive ways to state the obvious – like staging a bee-themed dinner at the Venice Architecture Biennale, or choreographing a dance about melting glaciers on an actual iceberg – rather than the arguably more challenging task of finding intuitive ways to state what is as yet unknown but revelatory.

The notion that all this performs some social service via “raising awareness” may have held water in the 1980s and 1990s, when styling a public service announcement in countercultural drag was sufficient to generate attention, but our moment can hardly be faulted for lacking information about threats to both people and planet. Perhaps instead of trying to save the world, we should first try to understand it.

II: ANTIKYTHERA

Speculative design has become a genre rather than a tool. Its methods and outputs – DIY technologies, futures cones and utopian renders – are used by designers to network, market themselves, and compete with one another at exhibitions and biennales. It is a means of decorating their resumes, giving clients and the public the impression that change is coming while churning out the standard work they were always going to do. This is why many designers feel impotent and apathetic. Lacking the sort of agency celebrity figures like Jony Ive or Marc Newson led them to believe they could expect, they burn out, implode, ragequit. Silvio Lorusso’s 2023 book, What Design Can’t Do: Essays on Design and Disillusion, delivers a 300-page history of exactly this, explaining how a problem-solving field became a problem itself.

The reality is that no individual designs a building, car, city or landscape. Instead they negotiate unpredictable forces like energy prices, insurance premiums, commodity markets, government regulations, factory capacity and automation, figuring out what is possible within ever-shifting constraints. There is a parallel here with Silicon Valley, which rose phoenix-like from the dot-com collapse of 2000, seeing itself as singularly capable of disrupting and transforming the world. But do the technologists really understand what they’ve gotten themselves mixed up in? Or have they drunk the Kool-Aid so deeply that they can no longer see there are dynamics playing out beyond their understanding?

What if the opposite of “good” design isn’t bad design – but something else? Bad design is easy to recognise. If I walk into a hospital and see that the hallway signs have been written in Wingdings, this is bad design. Not only does it fail the brief of helping patients and staff navigate the hospital, its negative consequences will likely compound. Meanwhile “good” design is interested in a wider range of questions, things like: Who are we? Where are we going as a society, and how do the things we build orient us on that path?

The movement of speculative design allowed architects to recognise that theirs was a media-based discipline, one in which sketches, blueprints, models and narratives could in principle mobilise huge volumes of energy and matter but most of the time did no such thing, because most proposals were never built. This realisation liberated designers to think outside the boundaries dictated by clients, budgets, politics and scale (consider the world-straddling Continuous Monument by Superstudio or, more recently, Liam Young’s Planet City). An unforeseen consequence was that as design tools were digitised, becoming cheap or even free, the speculative design method could be picked up and applied in fields beyond architecture.

One such application is Antikythera, a think tank founded in 2022 by the philosopher of technology and UC San Diego professor Benjamin Bratton, along with the architect Nicolay Boyadjiev, and the writer and strategist Stephanie Sherman. The group’s main contention is that there is something called “planetary computation,” which is a “philosophical, technological, and geopolitical force” that must be understood and reoriented. As Bratton has written: “In addition to evolving countless animal, vegetal and microbial species, Earth has also very recently evolved a smart exoskeleton, a distributed sensory organ and cognitive layer capable of calculating things like: How old is the planet? Is the planet getting warmer?” The question Antikythera seeks to explore is what this computational megastructure is for.

The think tank, which is supported by the Berggruen Institute, has held studios in Los Angeles, London, Mexico City, and Beijing using a design-research methodology which evolved out of speculative design. The studios gather participants from a wide range of professional backgrounds – engineers, philosophers, artists, scientists – each drawn to ask what computational systems are and what they might become. The think tank has research tracks that focus on simulations, economics, planetary computation, and the divergence of technology “stacks” according to changing geopolitical alignments. It also considers the very long-term implications of machine intelligence for philosophy, science, and design.

This summer I sat in on a studio in London titled “Cognitive Infrastructures,” during which researchers from Google DeepMind, the Santa Fe Institute, École des Ponts ParisTech and the University of Cambridge’s Leverhulme Centre for the Future of Intelligence joined Antikythera to uncover some of the ways that artificial intelligence is blending with the systems that govern our world, and “making cognitive” infrastructures that already exist. In the group’s own words: “Natural intelligence emerges at environmental scale and in the interactions of multiple agents. It is located not only in brains but in active landscapes. Similarly, artificial intelligence is not contained within single artificial minds but extends throughout the networks of planetary computation: it is baked into industrial processes; it generates images and text; it coordinates circulation in cities; it senses, models and acts in the wild.”

On 19 July – the same day CrowdStrike crippled operations across the West but not in Russia or China, a prime case study for the idea of multipolar “stacks” – researchers presented their projects in a theatre at Central Saint Martins in London’s King’s Cross. The concepts they unearthed bridge the worlds of neuroscience and society, linguistics and robotics, biotech and interaction design, collapsing past, present and future inside larger evolutionary histories. The outcomes are unlikely to fit anyone’s habituated sense of “good” design, but nor are they “bad.” In line with the theme of this issue, I propose instead that we might think of them as evil.

III: EVIL DESIGN

Evil is that which exists outside a person or culture’s existing worldview. It evades articulation, rendering it “unspeakable,” “unimaginable,” or “incomprehensible,” and it is far less concrete than it seems. In fact, most debate on the question of evil among modern philosophers is about whether we should do away with the term altogether (Nietzsche, for example, was very much in favour). In popular discourse, evil functions like a spell that rids us of questions that are too complex, ambiguous and difficult to resolve. To point at something and say it is evil is to concede you have no real idea why it exists.

Historically, two primary concepts of evil have occupied thinkers, first in the age of widespread religious belief, and later in secular society. These can be classified respectively as “broad” and “narrow.” Broad evil refers to the so-called problem of evil, a problem because theologians felt obligated to explain why an all-loving, all-powerful God (the original designer) would sanction things like earthquakes under children’s hospitals, plagues, slavery, or parasitoid wasps which eat their living hosts from the inside (and which famously shook Darwin’s belief in a benevolent creator).

The second, narrow definition of evil is invoked to account for despicable actions, either by people and their inventions (moral evil) or the nonhuman (natural evil). Narrow evil describes things like murder, torture, gaslighting, and other crimes. Commonly named sources of evil include violent video games, movies and works of art, drugs, the left and the right, Satanists, the Occult, “the system,” powerful institutions, the conditions of one’s birth, religion, lack of religion, Silicon Valley, entropy, a cold, uncaring universe. I could go on.

Because “evil” names what we cannot yet explain, it tends to arise in areas of uncertainty. One of those is AI, a development that has provoked a range of responses, from militant doomerism (as in Eliezer Yudkowsky’s proposal that rogue data centres be destroyed by airstrike) to the transcendent teleologies of Ray Kurzweil and the certainty about artificial general intelligence that animates many engineers. In an article titled “The Five Stages of AI Grief,” published in Noema Magazine, Benjamin Bratton describes the advent of AI as a trauma in the making, akin to the discoveries of evolution or the fact that the Earth is not the centre of the universe. “These insights don’t change the fundamental realities of the natural world – they reveal it to be something very different than what our intuitions and cultural cosmologies previously taught us.”

If evil plays a revelatory rather than constructive role, could we reclaim the word as a signal that deeper thinking is required? A project from The Terraforming program, which Bratton and Boyadjiev previously led at the Strelka Institute, attacks the cartoonish naivety of the notorious “trolley problem”. Instead of asking whether we should intervene to alter the course of a runaway tram, it demands to know who signed off on a trolley without brakes. When we recognise that infrastructure doesn’t simply exist, but has to be built, we are able to reconfigure it, aware of how decisions in the present will go on to determine future decision-making. So a recognition of evil might prompt us to ask why a children’s hospital was built above a fault line (and once it was, why the building wasn’t engineered to withstand the inevitable tremors). What social changes could have been made to halt plague transmission, but weren’t, and what do parasitoid wasps, or parasites in general, teach us about how genetic material moves through an ecosystem?

Could “evil design” represent a commitment to exploring the unknowable and unseen, a vessel for ideas that are neither virtuous nor malign but produced by the world’s social, technical and cognitive infrastructures? Without adopting a theological concept of absolute evil, we may manage to calm our pattern-seeking tendencies, and stop placing individuals, institutions, or objects in containers of the damned and the salvific, much as we have historically attempted to do with technologies from telescopes to drones.

Right now we are living in the primordial era of AI, where conditions are being negotiated that will shape the road ahead. The challenge of evil design, then, is to perform a speculative risk assessment: to surface emerging problem spaces that may be difficult to recognise. The way to do this is not through whimsical speculation, or with the nebulous hope of “saving the world,” but through resisting existing narratives, no matter how seductive, and not presuming to have the right answers by default. Below are five principles that describe what I am calling “evil design,” adapted from the Antikythera studio methodology.


DIAGNOSTIC


Evil designers believe the present is more interesting than any vision of the future they could dream up. Reality, as people say, is stranger than fiction, and the focus of evil design should be on catching up to what is happening already, hidden in plain sight. In the information age, effective filtration and comprehension are in short supply, so the first thing that must be designed is a question capable of cutting through the noise (a question worthy of an answer). What is the broader context for the issue under discussion? What are the conditions, connections, possibilities, and risks, and are they worth exploring? As we move through the five principles, I will provide short summaries of work produced during the Cognitive Infrastructures studio.

“Traversing the Uncanny Ridge: A Functionalist Approach to Searching for Novelty in Inter-Systemic Communication” by Sonia Bernac, Tyler Farghly, and Gary Zhexi Zhang proposes the adoption of an “uncanny index” – inspired by the famous concept of the uncanny valley in robotics – to measure the creation of new meaning as AI systems spread, scale and blend with other systems. In essence, the project asks whether dissensus or “productive disalignment” between AI models might be a useful feature rather than a bug, a generative dynamic that needs to be defined before it can be fine-tuned.
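
By way of illustration only – the group’s own index is not public, and everything below is invented – one crude way to operationalise such a measure would be to score how far the blended output of two systems drifts from what either would produce alone, here sketched in Python as Jensen-Shannon divergence between word-frequency distributions.

```python
# A back-of-the-envelope stand-in for an "uncanny index": how far does the
# output of a blend of systems drift from one system alone? Inputs invented.
import math
from collections import Counter

def distribution(text: str) -> dict[str, float]:
    counts = Counter(text.split())
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

def js_divergence(p: dict, q: dict) -> float:
    words = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0) + q.get(w, 0)) for w in words}
    def kl(a, b):
        return sum(a[w] * math.log2(a[w] / b[w]) for w in words if a.get(w, 0) > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

model_alone = distribution("the map is the territory the map is the map")
blend = distribution("the map dreams the territory into new territory")
# 0 means identical output distributions; 1 means entirely disjoint ones.
print(f"uncanny index ≈ {js_divergence(model_alone, blend):.3f}")
```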

The group could not have arrived at this particular diagnostic framework by having decided it in advance. The evil design process may start off with a broad and diffuse overview of a particular topic, but must quickly narrow this down so as to become useful. It does so according to criteria discovered in the process of research. For example, in order to speculate about AI, farming, biotechnology, or the future of personalised medicine, designers will need to look at the relevant history, scientific literature, B2B publications, interviews, presentations, white papers, legal notices, industry reports and so on. The question of how to locate, assess, curate, and collectively make use of a shared body of research must go hand in hand with any creative acts that take place (depending on the output, they may end up being the same thing). The material itself could become a lexicon, repository, atlas, almanac or glossary. As this likely makes clear, these are information objects to be read and manipulated by humans and machine intelligences, rather than physical objects destined for museums and collectors. They live in a database of one sort or another, not on a yacht.

“Chronoceptual Runway: A Design Space of Cross-Temporal Encounters of Computational Systems” by Michelle Chang, Tyler Farghly and Yannis Siglidis attempts to map computational agents operating at different clock speeds, much as time operates differently for an elephant, a human and a mouse. Time is, of course, relative, and where the design and architecture of computational systems tend to be fixed and deterministic, the project asks whether adaptation to varying temporalities, as is seen in natural systems, may become essential for the evolution and cohabitation of machinic systems too.
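
A toy rendering of the problem – agents and tick rates invented here – might look like this: three systems share one timeline but update at different periods, so the moments in which they experience the same “now”, and must exchange state, are rare and have to be designed for.

```python
# Three computational agents with different clock speeds on one timeline.
# A "cross-temporal encounter" happens whenever more than one wakes at once.
agents = {"reflex loop": 1, "planner": 25, "archival model": 100}  # ticks per update

for t in range(1, 201):
    awake = [name for name, period in agents.items() if t % period == 0]
    if len(awake) > 1:  # fast and slow systems must exchange state here
        print(f"t={t:3}: {' / '.join(awake)} share the same 'now'")
```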

INFRASTRUCTURAL


Rather than passing judgement on end results, evil design is interested in how things are conceived, executed, interconnected and understood. It’s not a final assessment but a creative inquiry into the systems, platforms, languages, theories and laws that compose, organise and reorganise the world. Certain innovations, from telegraph machines to satellites, started out as what we call technologies but soon ceased to be seen that way, becoming the invisible infrastructure that governs everyday life. This period of transition is of particular interest to evil designers, aware that future trajectories and path dependencies are being defined in the course of rolling out this technological scaffolding. Ideas work in a similar manner, as does biological life.

“From Cognizer to Cognized: Forecasting Human-AI Evolutionary Relationships According to Shifting Capacities for Mutual Prediction” by Chloe Loewith, Cezar Mocan, and Winnie Street notes that mutual prediction is a defining factor in any ecological niche. Relationships between species can be symbiotic, predatory, or competitive, based on interactions which are skewed by differing degrees of prediction. This is relevant when humans and AIs try to execute tasks together, to learn from one another and to transform each other’s behaviour in expected or unexpected ways. “From Cognizer to Cognized” establishes a gradient ranging from zero predictability up to a hypothetical form of transparent access to another’s mind. “Throughout history, humans have generally had a better predictive model of the entities we have co-evolved with than they have had of us,” they write. “In AI we encounter the first entity which may be able to predict us – including our thoughts, beliefs, feelings and plans – better than we can predict it.”

While so many applications of AI remain embryonic, creating scenarios or “cognitive simulations” that enlarge and examine research that does not yet exist beyond the lab offers a realistic window into technological development. It is in this sense that evil design can be described as “radical” – not because it fits neatly within a given anti-establishment tradition, but because it emphasises the original meaning of the word, from the Latin radicalis, meaning “inherent” or “forming the root.”

“Frontiers in Endosomatics: Cognitive Offloading and Somatic Onloading” by Daniele Cavalli, Philip Moreira Tomei and Gary Zhexi Zhang focuses on innovations in the field of epidermal electronics. From here, they see a means by which human-computer interaction might buck the historic trend of “offloading” cognitive tasks into brain-like devices – from abacuses to server farms – and instead patch more closely into the innate plasticity of animal bodies and the composability of living organs. Somatic “onloading,” then, would mean “integrating external computation into embodied learning and enactive decision-making,” a vision of “somatic infrastructure” wherein bio-technological bodies are given greater capacity to think and infrastructure moves and interacts, growing and learning as it does.

HYPERFUNCTIONAL


Early speculative design was expressed through drawings, diagrams, sketches and arguments: the tools with which architects and city planners presented their proposals. Today a universal medium determines which plans are given the green light and which will wither on the vine. That medium is the pitch deck.

Slide-based presentations are a synthetic medium combining visual and narrative elements, text and cinema. They allow a “plot” to unfold in three senses of the word: demarcating a site in which the action will play out, arranging a possible sequence of events and outlining a strategic plan (“the gunpowder plot”). Where speculative designers made “paper architecture” rather than buildings, the impotence of contemporary designers can be seen in their profusion of decks, which Antikythera studio director Nicolay Boyadjiev cites as evidence that ours is an age of “keynote realism.” Evil design embraces this reality, using it to create scenarios that are not counterfactual, but “hyperfactual.”

Instead of moonshot interventions – why don’t we edit human genes so we can breathe CO2? Why don’t we nationalise Amazon? – evil designers hunt down propositions that are so functional and pragmatic they make broad-brush solutions (“How to end capitalism,” “Why insects will save us”) seem bizarre and hollow by contrast. For example: instead of demanding that individuals be empowered to remove their personal data from AI training datasets, we might ask how quality data on things like plant epidemiology, materials science, or natural disasters might be included instead.

If AI is the means by which the planet develops a working model of itself (much as natural intelligence is the means by which humans and other animals create models of the world and themselves), is it really acceptable for that model to be made up of consumer profiles, Facebook posts and memes (no matter how funny)? Data doesn’t just sit around waiting to be picked up like sea coal washed onto the beach. It needs the right sensors, and the right tools of interpretation, organisation and translation, before it can be put to work. Often, what is most important in explaining how the world works can appear dry or banal, from web protocols to measurement standards. The more specific the topic up for discussion, the more likely an evil design project will uncover the key variables that will make a difference to how things turn out.

“Adaptive Assembled Arrays: The Design Space of Organoid Intelligence” by Ivar Frisch, Jenn Leung and Chloe Loewith drills deep into the world of organoids: simplified and miniaturised versions of living organs used in fields like drug discovery, tissue engineering and regenerative medicine. The trio asks how stringing together functional neural networks made out of brain stem cells might prove crucial in the quest to “biologicalise AI,” setting off a chain reaction of reinforcement learning between thinking rocks (microprocessors) and thinking groups of cells.

Moving sideways into another example of how infrastructure starts to think, “Generative Topolinguistics: Bi-Directional Interfaces for the Manipulation of Tokenizable Space” by Iulia Ionescu, Jenn Leung and Yannis Siglidis wades into the ongoing debate about the semantic quality of the language produced by AIs. Instead of halting at the question of whether or not the AI “understands” what it says, the group uses an experimental interface to analyse the geometry of language, which they define as that which can be tokenised (broken down into small pieces and given a numerical value that can be computed). Much as linguistics can be used to render human communication topologically (i.e. in terms of space and structure) to disclose hidden relationships, continuities and boundaries, a virtual space like OnlyBots, a website which bills itself as “the world’s first social network for bots only,” could wind up being an accidental petri dish wherein AI agents confound, deepen and expand our ideas about what language is and does.
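
Tokenisation itself is simple enough to demonstrate in miniature. In the toy sketch below (the group’s interface is far more sophisticated, and the vocabulary here is invented), a sentence is broken into pieces and each piece assigned a computable numerical ID – the raw material of the “tokenisable space” they explore.

```python
# A minimal word-level tokeniser: every distinct word becomes a token with
# an integer ID, assigned in order of first appearance.
text = "the map is not the territory"

vocab = {word: idx for idx, word in enumerate(dict.fromkeys(text.split()))}
token_ids = [vocab[word] for word in text.split()]

print(vocab)      # {'the': 0, 'map': 1, 'is': 2, 'not': 3, 'territory': 4}
print(token_ids)  # [0, 1, 2, 3, 0, 4]
```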



ALLOCENTRIC


Evil design does not set out to satisfy an existing audience. It has a network of practitioners, but no defined community beyond it. Instead of presuming a public for their work, a successful venture into evil design will produce one almost as an externality. The size of that audience is perhaps less interesting than those who have self-selected to be a part of it. Problems that affect entire societies do not necessarily require hands-on involvement from entire societies to solve them. This would be “politico-solutionism,” a more liberal-sounding though impractical alternative to “techno-solutionism”. Instead, the audience for evil design is anyone or anything drawn to its outputs. They are likely to have their reasons.

“Minimum Viable Interiority: Building ‘Intra-Agent Intra-Action’ for Many-Model NPCs” by Iulia Ionescu, Alasdair Milne and Cezar Mocan begins by observing that relational and collective models of intelligence are taken as read in industry and academia. In this light, they ask, how might an individual be reverse-engineered – not as the self-sufficient being we thought it was, but as a temporary phenomenon that exists within the manifold? In order to game this out as a simulation, they borrow the “pandemonium architecture” used in video games to power non-player characters (NPCs), combining multiple language models. In experimenting with different configurations of the architecture, they arrive at a definition of “minimum viable interiority” which is “functionally closed” and has “privileged access to at least some of its own internal content.”
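
For readers unfamiliar with the term: pandemonium architecture, which descends from Oliver Selfridge’s 1959 model of perception, has each specialist “demon” shout a candidate response with a confidence score, and an arbiter select the loudest. A minimal sketch follows, with toy functions standing in for the separate language models the project actually combines.

```python
# A pandemonium-style NPC: several specialist "demons" each propose a
# response with a confidence; the arbiter picks the loudest shout.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Shout:
    demon: str
    response: str
    confidence: float

def memory_demon(prompt: str) -> Shout:
    return Shout("memory", f"I remember something about '{prompt}'.", 0.4)

def goal_demon(prompt: str) -> Shout:
    return Shout("goal", "I must reach the market before dusk.", 0.7)

def social_demon(prompt: str) -> Shout:
    return Shout("social", "Good day, traveller.", 0.5)

DEMONS: list[Callable[[str], Shout]] = [memory_demon, goal_demon, social_demon]

def npc_respond(prompt: str) -> Shout:
    shouts = [demon(prompt) for demon in DEMONS]
    return max(shouts, key=lambda s: s.confidence)  # the loudest demon wins

print(npc_respond("where are you headed?"))
```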

It is allocentrism that keeps evil designers from falling headfirst into their own navels. It shifts focus away from individuals (idiocentrism) towards networks and collectives, standing in opposition to a litany of other inward turns – anthropocentrism, eurocentrism, heterocentrism – or any other unexamined worldview that places blinkers on us. Evil design is interested in populations, niches, groups, structures and systems, as much as how individual humans, animals, cells, nodes or objects come to compose them.

Depersonalisation is paramount. Evil design resists the “protagonist syndrome” whereby events are understood through the lens of a stable, individuated narrative. It doesn’t matter whether this refers to the “great man theory” of history (where “great” can mean evil as much as good), the myth of the solo, heroic artist, or the founder in Silicon Valley. These are stories that obscure far more than they reveal. We respond to them because they flatter our subject position, and because they feed bubbles and scenes from galleries to venture capital firms, in turn creating greater demand for similar narratives to keep the engine running.

In “Synthetic Counteradaptation: Adversarial Exploitation of Models Provides Adaptive Scaffolding for New Landscapes of Human-AI Interaction” by Ivar Frisch, Jackie Kay and Philip Moreira Tomei, the concept of counteradaptation, in which an organism adapts or changes in response to adaptations in another organism, is imported from biology and applied to human-AI interaction. The project attempts to formalise synthetic counteradaptive strategies using dynamics familiar from game theory. As the film AlphaGo (2017) demonstrated when it showed world-champion Go player Lee Sedol tweaking his own strategy after playing an AI, humans are already well equipped to extract novel insights from their interactions with machine intelligences. Creating a diagnostic manual for such counteradaptations will prove useful as new behaviours are mainstreamed in the coming months and years of living with AI.

PHARMAKON


Evil design does not seek benevolent solutions at the end of shining linear paths. Instead, it favours “pharmakon” propositions in which a future of contingency, indeterminacy, and the unknowable is factored in. Pharmakon is a Greek word whose meaning has been explored by thinkers from Gregory Bateson to Jacques Derrida. It refers to the poison-remedy continuum, along with the related figure of the pharmakos: the scapegoat whom the Ancient Greeks exiled to purify society after conflict.

The pharmakon is a core concept in evil design that calls us to recognise that temporary pain can sometimes be in our best interest, and that great harm can arise from what appears to be a cure. (Plato used it to question whether writing was a good idea, concluding it probably wasn’t.) Nowhere is this dynamic more obviously at work than in discussions around “AI alignment.” The thesis, as put forward by groups such as Stanford’s Institute for Human-Centered Artificial Intelligence, or the Alignment Research Center founded by former OpenAI researcher Paul Christiano, is that in order to receive the maximum benefit from AI we must make it as anthropic, aligned with humanity, as possible. This implies, at least to me, that we already know precisely what we are (in the cosmic scheme of things) and that we already know, deep down, how we ought to behave. What hard evidence is there that this is true?

In his “Five Stages of AI Grief” article, Bratton uses the depression stage to highlight the manic-depressive tendency to flip between polar extremes of euphoria and despair. “The fluctuation between messianic grief and solemn ecstasy for what is to come,” he writes, “often manifesting in the same person, the same blog, the same subculture,” has created a situation “where audiences who applaud the message that AI transcendence is nigh will clap even harder when the promise of salvation turns to one of apocalypse.”

The outcomes of evil design should look neither idyllic nor dystopian, because reality is neither of those things. It should eschew the promise of a past or future paradise. Much like an optical illusion that forces the human visual cortex to determine whether it is seeing a vase or two faces, it is the productive tension between poles that suggests a project worthy of pursuing further.

“Chronoseed: A Neural Network Time Capsule Which Encodes the Possibility for Generating New Worlds” by Sonia Bernac, Jackie Kay and Winnie Street frames the pursuit of artificial intelligence as a symptom of humanity’s instinct for creating archives. Only now we can encode more than just books, accounts, code and other media: we can include “social interactions, cultures, languages and cognitive processes,” and safeguard them into the future (should we wish to). An AI time capsule, unlike the Svalbard seed bank or the golden records aboard the interstellar probes Voyager 1 and 2, would preserve not only an archive of knowledge but also the functions to process it. But who will be around in a thousand, a million, or 100 million years to unlock evidence of how we lived, thought and interacted at the dawn of the 21st century?

Intelligence emerged in Homo sapiens, and has transformed us and the planet Earth out of all recognition. The same will almost inevitably be true for computation and its latest configuration as AI. “Xenophylum: Xenomorphological Design for Post-Technosphere Niches” by Daniele Cavalli, Michelle Chang and Alasdair Milne points out that human-made technologies tend to be indebted to patterns of biomimicry: remaking, or artificialising, physical systems that nature has perfected over billions of years. Taking inspiration from that earlier outpouring of morphological experimentation, the Cambrian Explosion, the project looks to the xeno-, or alien, output of AI, in the expectation that autonomous and intelligent multi-material robotic systems “in the wild” will discover ways to emancipate themselves from the mimetic paradigm to which humans – up to now – have been wedded. “We hypothesise,” they write, “that more xenomorphological designs have greater potential to adapt to new and vacant niches in the technosphere-biosphere.”

In general, the principles that animated any lasting piece of design work will outlive the things that they enabled. After all, it’s not specific chairs or paintings from the Bauhaus school or Black Mountain College that continue to shape the work of young designers today, but the principles and thinking that brought them to life. The Antikythera studio methodology, rendered here (albeit somewhat tongue-in-cheek) as “evil design,” made possible this summer’s studio, where AI past, present and future could be seen in terms of evolutionary dynamics operative on decidedly inhuman timeframes.