Interview, Katherine Hayles and David Familian

David Familian and Professor N. Katherine Hayles, scholars aligned in their art-science interests, met at UC Irvine in August 2024 to discuss the exhibition Future Tense: Art, Complexity, and Uncertainty, then on view at the Beall Center for Art + Technology from August 2024 through January 2025. They used the opportunity to further explore Hayles’ work relating to chaos and complexity, as well as her early observations of cybernetics.

David Familian: Can you talk a little about your background and your interest in chaos and complexity in your research?

Katherine Hayles: I started my professional career in the sciences, as a chemist. I did my undergraduate work in chemistry and then got a master’s degree in the field, and only then did I start the serious study of literature and culture. What I acquired during my scientific training were the basic mathematical tools to understand the world of science. I was really interested in how ideas circulate between the humanistic and scientific communities. At the time that I started my research into quantum field theory, there were a lot of influence studies–the influence of Newton on Alexander Pope and so forth–but it seemed to me that there existed much richer connections. My first book, The Cosmic Web: Scientific Field Models and Literary Strategies in the 20th Century (1984), was an attempt to begin to develop a theory in which one could talk about those interconnections. My second book, Chaos Bound: Orderly Disorder in Contemporary Literature and Science (1990), was a case study of how that actually works in practice. So, my interest in chaos goes back to the 1980s and 1990s. Chaos science seemed to me a marvelously fertile paradigm. When it matured into complexity theory, it was able to explain and investigate a huge range of phenomena—which is on display in this show.

DF: One of the things about your Chaos book that I remember is when you were writing about Mandelbrot, you talked about how his contemporaries were making fun of his work, as if it were too feminine or intuitive. How could a scientist not be curious, or not have intuition?

KH: Or, how could a scientist not be human, and have all of the insights and frailties of humans?

DF: It’s fascinating how you went from the sciences to the humanities. I’m reading a book now which mentions Heidegger saying that philosophy is not the handmaiden of science. But then, thirty years later, he gave that lecture where he said that cybernetics is the new metaphysics.  And in your book, How We Became Posthuman (1999), you go back to the origins of cybernetics, if I remember correctly. 

KH: I went back to the Macy Conferences [on Cybernetics, held in New York from 1946 to 1953] and did a close reading of several of their annual conferences. But I was really interested in some fundamental questions, one of which was how information lost its body. And I was also interested in the emergence of this new vision of what the human was, a notion I labeled as the Posthuman, which was rapidly subverting and rearranging traditional ideas from the Enlightenment like free will, the autonomous self, and so forth, and cybernetics had a role to play in that. It’s kind of a conundrum that even as cybernetic ideas diffused into culture generally, cybernetics as an academic discipline virtually disappeared. Cybernetic ideas were taken up and became central in any number of fields, but not in themselves as a coherent discipline. 

DF: What’s interesting in the Steve Strogatz book I’m reading, Sync, is that he mentions cybernetics and claims, falsely, that we don’t practice cybernetics any more, yet his whole book is about ideas that came out of cybernetics. The way I look at it is that cybernetics is a methodology, not necessarily a science. It’s a way of looking at our world. Maybe the idea, like Wittgenstein says, is that we don’t think up language, we only pick it up socially. It’s the feedback between us that makes us human, it’s not all internal.

KH: Absolutely. 

KH: Some of the ideas of cybernetics are being resurrected in other contexts. For example, one of the important papers that Norbert Wiener co-wrote was called “Behavior, Purpose and Teleology.” Their argument was that cybernetics leads to teleological conclusions, because the whole idea of goal-seeking behavior was one of the fundamentals of their early robotic installations. More recently, Michael Levin at Tufts University has picked up that argument about teleological behavior in the context of microorganisms, bacteria, and such, arguing that the only way to understand these microscopic forms of behavior is as a desire to continue their existence. He collaborated on an essay with Daniel Dennett called “Agents with Agendas.” Their argument basically is “if you don’t understand the agenda, you can’t understand the behavior,” which is precisely the argument that Wiener was making in the 1950s. So these arguments get resurrected. Now, of course, Levin is talking about microorganisms, not about robotics, but I think there’s a deep connection there.

DF: Can you define teleological?

KH: Teleological means directed toward some goal that is preset and defined: the goal dictates the behavior rather than the behavior dictating the end result.

DF: And then the goal changes because you have to adapt to new environments. So the environment then becomes a trigger.

KH: Absolutely. There’s a fascinating recent paper by Stuart Kauffman, a theoretical biologist, and Giuseppe Longo, a mathematician based in France, and they’re basically arguing that there’s a definitive divide between physical systems and living systems. The difference is precisely that living systems are teleological. Living systems want to continue their existence. And physical systems don’t want anything; things just happen, like a lake evaporating or a mountain eroding. There’s no agent there to give it a teleological impetus. But as soon as we get into the biological realm, those teleological impulses are everywhere. The progress of a physical system, even a very complex one with criticality points and so on, can be mapped into phase spaces showing how its trajectory will proceed. With biological systems, no such tidy evolutionary trajectory is possible.

KH: Kauffman and Longo give the example of the lungfish, a species that breathes air through lungs and water through gills. They say that as water entered the fish’s lungs, the organ turned into a swim bladder and was used to regulate buoyancy. This created a new niche and opened the pathway for a new species—say a worm or a bacterium—to evolve that would exploit this development. That’s what Kauffman means by the adjacent possible. Something opens a new opportunity adjacent to an existing opportunity, and now evolution can take on a new trajectory.

DF: In Gail Wight’s work in the show, Ostracod Rising, global warming is affecting evolution in microorganisms and creating new adjacent possibilities for these species to evolve and enter new environments.  

KH: Maybe we can use that as a segue to talk about the exhibition. What really struck me about the show was the equal playing field on which robots, microorganisms, non-human species and humans appear, all woven together in a complex ecology. If you think in environmental terms, that’s exactly the way that complex systems work. It may start with symbiosis between two species but then begins to spread more and more.

My new book, Bacteria to AI: Human Futures with our Nonhuman Symbionts, tries to approach these complex interactions through the notion of cognition. Cognition was very carefully chosen because the human species considers itself to have superior cognitive powers to any other species, and that’s what accounts for our putative ability to dominate every other species on earth. But this new research, of the kind in which Michael Levin engages, is beginning to expand the notion of cognition beyond the human, into the nonhuman. These moves are inviting a radical reappraisal of what constitutes cognition. It inevitably feeds back to affect how we think about human cognition as well; instead of a simplistic idea of human superiority, now we have this much more complex and humbling view of human cognition as one form of cognition active with many other forms in complex ecosystems. I very much saw that on display in this exhibition, and it’s a wonderful way to approach complex systems and complexity in general. 

DF: One of the goals of this show was not just to have people intellectually process the idea of a complex system, which I think is very important to our survival, but to feel it. That’s what art allows us to do: to feel what this looks like, because our minds don’t work that way. We try to bring sequential order to our lives, but now, in order to understand the world, we have to change that. We have to see the world as a complex system, and our brains have to change to think in patterns rather than in cause and effect. We’re trapped in this linear thinking that has produced all this wonderful knowledge, but it’s almost useless for solving the wicked problems we face today.

Responding to your comment, I specifically wanted the artworks included in Future Tense to each engage multiple systems.

KH: Right, and at different scales as well, from the intercellular to the forest. I was really taken by Laura Splan’s work, Baroque Bodies (Sway). That was, to me, quite remarkable, both visually and conceptually. It showed the interplay between environmental and genetic factors, and how they could work together to create a very complex system.

I was also extremely interested in Cesar & Lois’s sculptural display of the ecosystem in Being hyphaenated, with the plants and the fungi collaborating to convey messages back and forth. I recently had occasion to read Suzanne Simard’s seminal paper on what would come to be called the “Wood Wide Web.” Her research was fundamental in showing the way fungi and root systems collaborate to carry messages and distribute nutrients, leading to a really complex symbiotic system. I thought the sculptural quality of that artwork made these interactions visually apparent and quite appealing. 

I’ll add one more element to this tapestry we’re weaving here, and that is Lynn Hershman Leeson’s piece on facial recognition and AI interpretations. I think if we’re going to re-envision cognition, crucial components of that are the cognitive abilities of computational media, including AI, algorithms, and so forth. We’re beginning to understand more fully how there is this deep interconnection between human cognition and the cognitive abilities of computational media, and to explore our ongoing symbiosis with computational media. Re-envisioning  cognition, in my view, has to have a place not only for non-human life forms, but also for non-human cognizers. 

DF: So it’s not just biological, it’s also computational.

KH: That’s right. I call this new framework for understanding cognition the Integrated Cognitive Framework, or ICF. The “integrated” part of that phrase refers to emphasizing and exploring more fully the way in which feedback loops are operative throughout this system: from AI to human, from human to AI, from nonhuman life forms to human and back again.

DF: Feedback is the engine, it’s what keeps the thing alive.

In Roy Ascott’s idea of moistmedia, there is a constant exchange between the “dry” computation and “moist” nature. Maybe AI needs more than the inputs we provide based on what we want to get out of it, but something else that comes from the environment or other sources.

KH: Imagine an AI that could choose its own training data. What if it really wants the data from a forest in England, for example, rather than human-authored texts? That’s fascinating.

Notes

  1. For more information and recordings documenting the Macy Conferences, see https://www.asc-cybernetics.org/foundations/.
  2. For more information, see: Sophie Yeo, “The ‘wood wide web’ theory charmed us all – but now it’s the subject of a bitter fight among scientists,” The Guardian, July 9, 2024.

Curator’s Introduction

David Familian

Future Tense: Art, Complexity, and Uncertainty is an exhibition of contemporary artists whose work explores complex systems at the nexus of art and science. The word complex, derived from the Latin plectere, “to weave,” describes how the interconnected parts of each system are woven together and how each system, in turn, interacts with other systems. One cannot think of these systems in isolation because they are constantly interacting with each other through feedback, generating chaos, emergent behavior, and self-organization, all of which create uncertainty. Climate change, pandemics, social unrest, and geopolitical turmoil, for instance, all arise from complex interactions between elements that lead to unpredictable, dynamic outcomes. Another feature of complex systems is that we can never have complete knowledge of every interaction within a given system. All such systems become less predictable over time, producing the tension and uncertainty that give the exhibition its title.

Future Tense: Art, Complexity, and Uncertainty includes interdisciplinary works of art that represent, demonstrate, and embody the fundamental nature of complex systems. The emerging and established artists selected for this exhibition focus on different aspects of complexity, using both traditional media and new technologies such as computer modeling, robotics, and dynamic AI-generated data visualizations. The works explore complex systems at various scales, from microscopic organisms to the planetary implications of global warming. The exhibition endeavors to help audiences understand how complexity functions within the individual works, but it also engages them to actively experience the overarching aesthetics and effect of each system, which is reflected in the exhibition’s installation.

A Walk Through Future Tense: Art, Complexity, and Uncertainty

The exhibition is arranged thematically in four distinct galleries, each with a different ambience aimed at producing a distinct effect. The first gallery focuses on one characteristic of complex systems, chaos, with works that demonstrate various chaotic elements or systems out of control. Clare Rojas’s Circle of Infinite Chaos (2022), for instance, is a representational painting of a woman (possibly the artist) lying peacefully in a hollow below grade while objects buzzing above her reflect the chaos of her inner mind. The tension between chaos and order is visible in several works: Julie Mehretu’s Landscape Allegories (2004), etchings of turbulent landscapes; Fernando Palma Rodriguez’s Huitzlampa (2023), kinetic sculptures programmed to respond to live weather in Los Angeles; and Ralf Baecker’s Interface I (2016), a kinetic sculpture of thin bands driven by signals from the ambient radiation of the earth, some forming smooth sine waves, others moving with frenetic energy. Carolina Caycedo and David de Rozas’s Measuring the Immeasurable (2022), a hanging installation of surveying tools and drawings of indigenous rock paintings, references how the Public Land Survey System (PLSS) created displacement and environmental chaos for native peoples and their land.

The second gallery features three video projections in a dark, interstitial, liminal space where viewers can take a respite from the previous chaos. Laura Splan’s Baroque Bodies (Sway) (2022), an interactive video projection on a wall, zooms in and out in response to the movement of participants in the space as they explore the impact of the environment on gene expression. Theresa Schubert’s Glacier Trilogy — Part 3: Simulating glacial water systems (2022) comprises two large flat-screen monitors stacked vertically, screening a real-time simulation of melting glacial ice that varies according to the amount of carbon dioxide in the room. Carolina Caycedo and David de Rozas’s video Teaching of the Hands (2020), a companion piece to Measuring the Immeasurable, oscillates between a documentary about the history of colonization and its ravaging extraction policies and a poetic meditation on indigenous knowledge and caretaking of the land.

The third gallery focuses on the complex and unknowable in both past histories and speculative futures. Gail Wight’s Ostracod Rising (2024), a timeline drawn like a 19th-century naturalist illustration, explores both previous and anticipated future extinctions of organisms on our planet. Chico MacMurtrie’s Dual Pneuma (2024) is a large installation of futuristic soft robots whose movement is powered by air from pneumatic tubes. They are installed alongside ceramic “doubles” cast from the robots, which play sounds using the same water-whistle technology as ancient Mesoamerican flutes. Pinar Yoldas’s Alphabet of Life (2024) consists of glass orbs etched with the molecular structures of twenty amino acids, the building blocks of life. Hege Tapio’s Ephemeral (2024) consists of a promotional video, didactic materials, and a prototype imagining a fictitious corporation in which venture capitalists embrace a brand-new technology: hormonal implants to alter our emotions. Cesar and Lois’s Being hyphaenated (Ser hifanizado) (2024), a living sculpture/experiment with sensors, measures the complex communication between mycelium and other plants, with the data visualized on a computer.

The final part of this exhibition in the fourth gallery features two artists who provide cautionary tales for the future: Lynn Hershman Leeson, one of the first media artists, and Newton Harrison, an innovator of environmental art. Lynn Hershman Leeson’s Logic Paralyzes the Heart (2022), a photo and video installation about AI-generated life forms, includes a character that reveals the historical secrets of past surveillance technology and potentially dystopic future technologies. Newton Harrison’s Epitaph (2022), his final work, is a drawing of an imagined tombstone for himself and his long-time collaborator, the late Helen Mayer Harrison. Referencing their dialogical artistic process, the text on the tablet-like tombstone is a conversation between Harrison and what he calls the “life web.” The penultimate line on the tombstone reads, “Human, Human to survive, learn from your companion species how to rejoin me,” which reflects Newton Harrison’s frequent admonition for humans to coexist with nature to prevent human extinction.

The overarching goal of Future Tense: Art, Complexity and Uncertainty was for audiences to understand, through works of art, how complexity functions in our everyday lives. Audiences could see and experience (perhaps even feel) the dynamic interplay of chaos and order, uncertainty and unpredictability in extreme weather events and climate change, in robotics and AI, and in biological systems in plants and within our own bodies, to name just a few. Ultimately, the artists in Future Tense: Art, Complexity and Uncertainty propose ideas about the paradigm of complex systems that help us understand that while we cannot solve all these vexing global issues, we must learn to manage them, or our survival will continue to be at risk.

Historical and Curatorial Precedents

For the last twenty years as a curator, I have had a long-standing interest in the concepts of chance and probability, which harness randomness and can generate uncertainty in art and science. At critical junctures, paradigmatic shifts occurred in art, philosophy, and science that embraced randomness, indeterminacy, and unpredictability. In the visual arts, the modernist avant-garde redefined the very meaning, function, and materiality of art by welcoming chance in the production and reception of art.

A few voices in the late 19th century began to question the determinism of science and its goal of certainty. Scientists such as physicist/mathematician James Clerk Maxwell and physicist/philosopher Ludwig Boltzmann, for instance, could only understand the kinetic behavior of molecules through probability, using statistical mechanics. In trying to solve the “three-body problem” (predicting how three celestial objects, such as planets, stars, or moons, move under their mutual gravitational attraction), physicist/mathematician Henri Poincaré realized that even a minute error in the starting position of each body would yield a wildly inaccurate result, because such systems are chaotic and unpredictable.

This skepticism about certainty continued into the 20th century. In Creative Evolution (1907), philosopher Henri Bergson argued that Darwinian mutation cannot be predictable but must involve chance and uncertainty. As scientists looked further into the structure of atoms, they found bizarre behavior that led to the theory of quantum mechanics. In 1927, when Werner Heisenberg and Niels Bohr presented the “Uncertainty Principle,” positing that one cannot know both the position and the momentum of a particle at the same time, they confronted much opposition because it depended on probability. Albert Einstein, who did not believe that nature is unpredictable, reacted to the principle by famously stating that “God does not play dice.”

Artist Marcel Duchamp became interested in the concept of uncertainty and read the writings of Henri Poincaré, including Science and Hypothesis (1903), as well as Esprit Jouffret’s book on the fourth dimension, a popularization of Poincaré’s ideas that was read by many artists at the time. When most art historians think of Duchamp, it is in relation to his impact on the development of conceptual art, but equally important is how he dealt with uncertainty, indeterminacy, and chance. One may see this in his Three Standard Stoppages (1913–14), in which three one-meter threads held horizontally were each dropped from a height of one meter onto a piece of canvas and fixed in position by means of varnish. His composition Erratum Musical (1913), a score for three voices composed by Duchamp and his two sisters, Yvonne and Magdeleine, both musicians, was created by drawing a range of twenty-five notes at random from a hat.

Artists also responded to the rupture of World War I by using chance, along with found objects and nonsensical language, in raucous performances at Hugo Ball’s Cabaret Voltaire in Zurich. This is where Richard Huelsenbeck reportedly slid a paper knife into a dictionary and randomly selected the word “Dada”—hobbyhorse in French—as the name for the group of artists. In the 1920s, surrealists embraced the unconscious and the irrational to reflect the extreme level of destruction, death, and trauma of the war. Like the scientists and philosophers, Dada and surrealist artists continued to question certainty, predictability, and rationality, a questioning that became the zeitgeist of the post-World War I era.

In the 1950s, John Cage developed “chance operations,” indeterminate procedures for making visual scores for compositions that were often open-ended and included audience participation. Beginning with his famous Untitled Theater Piece #1 (1952) at Black Mountain College, he developed early intermedia performances combining painting, poetry, dance, music, everyday sounds, and found objects. Influenced by Cage, Fluxus artists wrote their own “event scores” for brief, multi-sensory intermedia performances. For instance, La Monte Young’s “Composition 1960 #10,” also known as Draw a Straight Line and Follow It, was interpreted by Nam June Paik in his Zen for Head (1961), in which he drew a straight line by dipping his head (or sometimes his tie) in ink and dragging it along the paper, making what looks like a Zen ink-brush scroll.

Complexity, Wicked Problems and Systems Thinking

When I first heard about Getty’s PST Art and Science Collide theme, I knew that it presented an opportunity to expand further upon the concepts of probability and uncertainty that I had engaged with in previous Beall exhibitions. What I did not know was that this would result in four years of research into adaptive complex systems, their histories, and their methodologies. It started with a deep dive into the literature of complexity theory, which led me to cybernetics, to Immanuel Kant’s philosophy of the organism, and eventually to the ubiquitous, adaptive complex systems all around us that affect our lives in profound ways we cannot ignore.

René Descartes described the old scientific method of problem-solving succinctly: “Divide each difficulty into as many parts as is feasible and necessary to resolve it.” In short, everything is like a clock: in that manner of thinking, one knows what every part does and can predict with certainty how it will operate. In contrast, Bruno Latour explains in a discussion about global warming: “The disorientation people feel regarding the planet, and the fact that they can’t react quickly to a situation that everyone knows is catastrophic, is largely because they continue to be in the world in the past, one where there are objects that have no agency and can be controlled by calculation and science.” Today, like Latour, we understand climate change as a set of complex, dynamic, interrelated systems producing unpredictable outcomes that, unlike those of the “clockwork” paradigm, cannot be resolved by calculation or science.

Adaptive complex systems theory attempts to explain how large numbers of relatively simple entities organize themselves into a collective whole that creates a pattern, such as a group of flocking birds. While adaptive complex systems more accurately reflect our current conditions, their complexities make them difficult to comprehend. In 1973, urban planners Horst W.J. Rittel and Melvin M. Webber coined the phrase “wicked problems”:

A wicked problem has innumerable causes, is tough to describe, and doesn’t have a right answer… Environmental degradation, terrorism, and poverty—these are classic examples of wicked problems. They’re the opposite of hard, but ordinary problems, which people can solve in a finite time by applying standard techniques. Not only do conventional processes fail to tackle wicked problems, but they may exacerbate situations by generating undesirable consequences.

Twenty years later, Barry Richmond developed the idea of “systems thinking,” proposing a possible way to approach “wicked problems.” Richmond emphasizes that systems thinking observes complex systems bi-focally: “one must see both the forest and the trees.” Systems thinkers strive to see both the generic and the specific, as well as shifting patterns in an event. Finally, Richmond noted that if one does not employ systems thinking and tries to change just one element of a complex problem, the chance of unintended consequences increases over time. To summarize, we cannot fix these problems easily and must recognize that they sit within a matrix of many intersecting nodes of a complex system.

Helen and Newton Harrison were aware of the concepts of wicked problems and systems thinking, which they integrated into their practice from the beginning. They employed systems thinking in conversations with one another, and with collaborators in architecture, urban planning, marine biology, and environmental science across the globe. As Newton Harrison stated: “we have to live with nature, not try to control it.” 

Today, this kind of global systems thinking resonates with Yuk Hui, a philosopher of technology, who recently wrote, “To think planetary…means formulating a language of co-existence that will allow diverse people and species to live on the same planet.” The essential aspect of complex systems—feedback—combined with collaborative, interdisciplinary systems thinking, could help us begin to build a co-existence in which we can tackle our current wicked problems and the cataclysmic events that will continue into the future.

Notes

  1. This “three-body” conundrum was not fully understood until the development of chaos theory in the 1970s.
  2. Henri Bergson, Creative Evolution, translated by Arthur Mitchell. (New York: Henry Holt and Company), 1911.
  3. Presented at the 1927 Solvay Conference in Brussels, this became known as the “Copenhagen Interpretation.”
  4. Herbert Molderings, Duchamp and the Aesthetics of Chance: Art as Experiment (New York: Columbia University Press, 2006), references Esprit Jouffret, Traité Élémentaire de Géométrie à Quatre Dimensions (Elementary Treatise on the Geometry of Four Dimensions) (Paris: Gauthier-Villars, 1903).
  5. Ibid, 6. Many years after Duchamp made this work, he would state that it was his most important work of art. 
  6. Ya-Ling Chen, Tout-Fait Marcel Duchamp, vol. 1, issue 1 (December 1999), accessed at https://www.toutfait.com/issues/issue_1/Music/erratum.html
  7. Ibid. Jean Arp, Johannes Baader, Hugo Ball, Marcel Duchamp, Max Ernst, Elsa von Freytag-Loringhoven, George Grosz, Raoul Hausmann, John Heartfield, Emmy Hennings, Hannah Höch, Richard Huelsenbeck, Francis Picabia, Man Ray, Hans Richter, Kurt Schwitters, Sophie Taeuber-Arp, Tristan Tzara, and Beatrice Wood, among others.
  8. These previous exhibitions include Drawn from a Score (2017), which featured artists whose work emanates from a written, visual, or code-based score (https://beallcenter.uci.edu/exhibitions/drawn-score); Vera Molnar: Variations (2022), a solo show of a pioneer of computational works (https://beallcenter.uci.edu/exhibitions/vera-molnar-variations); and Computational Poetics (2023), curated with art historian Hannah Higgins, which surveyed human/computer collaborations in poetry, with historic work from the 1960s and contemporary experimentation (https://beallcenter.uci.edu/exhibitions/computational-poetics).
  9. René Descartes, Discourse on Method and Meditations on First Philosophy, 4th edition. (New York: Hackett, 1998), 11. 
  10. Interview with Bruno Latour, Part I, date to be added, video, length, ARTE.tv, accessed at https://www.youtube.com/watch?v=sYfwkTgEpmE.
  11. UC Berkeley planning professors coined the phrase “wicked problems” in 1973. See Horst W.J. Rittel and Melvin M. Webber, “Dilemmas in a General Theory of Planning,” Policy Sciences 4, no. 2 (1973): 155–69.
  12. Barry Richmond, “Systems Thinking: Critical Thinking Skills for the 1990s and Beyond.” Systems Dynamics Review 9, no. 2 (1993).
  13. Barry Richmond, “System Dynamics/Systems Thinking: Let’s Just Get On With It,” transcript of a lecture at the International System Dynamics Conference in Stirling, Scotland, System Dynamics Review 10, no. 2–3 (summer–fall 1994): 135–157, accessed at https://doi.org/10.1002/sdr.4260100204
  14. Ibid.
  15. Newton Harrison, multiple verbal conversations with the author, 2019 onward.
  16. Yuk Hui, Machine and Sovereignty: For a Planetary Thinking (Minneapolis: University of Minnesota Press, 2024): 1.









Emergent Meaning

Jeffrey Barrett

A central theme of the exhibition Future Tense: Art, Complexity, and Uncertainty is the capacity of complex systems to self-assemble ordered structure. A notable example of this is when meaning emerges from randomness. The meaning may be expressed in the intentions of an artist or in the interpretations of a viewer experiencing the artist’s work, where both the artist and the viewer are understood as complex systems interacting with each other and the world. In order to understand better how meaning might arise from random interactions, we will focus on two simple models that illustrate the emergence of meaningful language—a signaling game and a coding game in which randomness leads to successful communication.

The effects of randomness are manifest in the behavior of complex systems. We see them in the shape of cracks in the sidewalk, the behavior of markets, the flow of history, the genetic mutations that cause cancer, and the acts of friends.

Our best physical theories aim to characterize the sort of randomness we find in nature and how it behaves. The second law of thermodynamics describes the effects of the random motions of molecules. It says that entropy, a measure of microscopic randomness, tends to increase with time. A quick example illustrates the type of phenomena the law describes.

Consider an intricate ice sculpture on the beach on a hot day. It might depict a bouquet of flowers with finely carved stems and petals. But as the molecules in the surrounding air randomly bounce against the sculpture, the ice is warmed, and it begins to melt. As it melts, the representational information encoded in its structure is irretrievably lost. Someone looking at the patch of wet sand an hour later will be unable to tell what the ice flowers looked like, or even that there was once a sculpture. The process that destroys macroscopic information through random microscopic interactions is itself statistically inevitable.

The gradual loss of representational structure, of information, ultimately applies to all we create. Nearly all that the ancient Greeks did, said, and made is forever lost. And such processes apply to us as complex biological systems. As the Book of Common Prayer instructs the congregation on Ash Wednesday, “Remember that you are dust, and to dust you shall return.”

But randomness also plays an essential role in creation. Meaningful order can emerge from random interactions in the context of an adaptive dynamics, an evolutionary process with positive feedback. When this sort of dynamics acts on a system, it may adapt the system to a pattern or purpose in surprising ways.

An example of order from randomness is the evolution of life itself, but a simpler example is seen in the spontaneous formation of a crystal. The crystal forms with the appearance of a random seed that provides an initial scaffolding. The seed might be an impurity in the medium in which the crystal forms or a scratch in the container holding the solution. The elements of the crystal then begin to self-organize by means of random local interactions that settle the molecules into positions of minimum energy. This process tends to build and complete sequential layers of elements. And a highly ordered crystalline lattice thereby emerges. 

A similarly simple evolutionary process can explain the emergence of meaningful signals in the context of a signaling game. The philosopher David Lewis used signaling games to explain how linguistic conventions might be established without the aid of prior linguistic conventions.1 Brian Skyrms, professor of Logic and Philosophy of Science at UC Irvine, later generalized Lewis’s framework using evolutionary signaling games to show how meaningful language might emerge from random interactions.2 The simplest sort of signaling game involves two players: a sender and a receiver as in figure 1. The sender can see the state of nature and send a signal, but she cannot act. The receiver cannot see the state of nature, but he can see the signal and can act in a way that might benefit both agents depending on the current state of nature. We will suppose that each player learns by simple reinforcement. Simple reinforcement plays the role of an adaptive dynamics in the evolutionary game. How it works can be represented in terms of drawing balls from urns and adding new balls when the agents’ actions are successful.

IMG 1

On each play of the signaling game, the current state of nature is determined in a random and unbiased way by the toss of a fair coin. The sender has two urns, one for each possible state (0 and 1), and each of these urns starts with two balls, one for each possible signal (a and b). To determine her signal, she observes the state of nature (whether 0 or 1) then draws a signal ball at random from the corresponding urn. Then she sends that signal. She may send a or may send b on a play. Initially, the probability of each is equal as there is one ball of each type in the urn. The receiver has one urn for each possible signal (a and b), and each of these urns starts with one ball for each possible act (0 and 1). The receiver sees the sender’s signal then draws an act ball at random from the corresponding urn. Then he performs the act, whether 0 or 1. The sender and receiver are successful if and only if the receiver’s act matches the current state of nature. In the case of success, each player returns their ball to the urn from which it was drawn and adds a ball of the same type to the urn, thus increasing the probability of this ball being picked again; otherwise, each simply replaces the ball she drew.

This simple type of trial-and-error reinforcement learning is found throughout nature.3 A puppy who finds a scrap of salmon under a picnic table at the park will be more likely to look under that table on future walks. A commuter who gets home easier driving a new route will be more likely to use the new route on future commutes. When an agent learns by simple reinforcement, she may not even reflect on her experience as her dispositions slowly adapt to the world by trial and error. We are using balls and urns to model an adaptive dynamics where the probability of an action increases when the state where that action was successful occurs again.

Consider a concrete example. Suppose state 0 is “it’s sunny,” state 1 is “it’s raining,” signal a is “flash a green light,” signal b is “flash a blue light,” action 0 is “take two straw hats to the picnic” (the right thing to do on a sunny day) and action 1 is “take two umbrellas to the picnic” (the right thing to do on a rainy day). Here the sender wants to send a signal that communicates the weather to the receiver, and the receiver wants to take straw hats if it is sunny and umbrellas if it is raining. But the green and blue signals are meaningless, so there is initially no way to communicate.

On each day it is either sunny or rainy at random and with equal probability. Suppose it is sunny on the first day. The sender sees the sun and draws a ball at random from her corresponding 0 urn. Suppose that she draws a b ball and so flashes a blue light. The receiver sees the flash and draws a ball at random from his corresponding b urn. Suppose he draws a 0 ball, so he takes straw hats to the picnic, the right thing to do on a sunny day. This was a lucky draw since he might have drawn a 1 ball and taken umbrellas. But since he took straw hats on a sunny day, the play is successful, and each player returns the ball she drew to the urn from which it was drawn and adds another ball of the same type to that urn. This makes it more likely in the future that if the sender sees that it is sunny, then she will flash a blue light and more likely that if the receiver sees a blue flash, then he will take straw hats to the picnic. Had the receiver brought umbrellas, the play would have been unsuccessful, and each player would just return their ball to the urn from which it was drawn. In that case, the probabilities of signals and actions would remain the same as they were at the beginning of the day.

It is easy to write a computer program that models two players repeatedly playing this game. On simulation, the sender and receiver initially signal and act at random since each urn starts with one ball of each type and their draws are random. As a result, their signals are initially meaningless. But sometimes the receiver’s random act, by chance, matches the current state and the play is successful. As the players learn by simple reinforcement on what they did when they were successful, their signals gradually evolve meanings that allow the sender to communicate the state of nature reliably and the receiver to do the right thing. Argiento, Pemantle, Skyrms, and Volkov have proven that a perfect signaling language will almost surely eventually emerge in this very simple signaling game.4 And when we run this on a computer, the players typically evolve a reliable signaling language in a few dozen plays. 
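As a concrete illustration of how easy such a program is to write, here is a minimal Python sketch of the urn model just described. The function and variable names are my own, and the number of plays is an arbitrary choice; this is an illustrative reconstruction of the game, not code from Skyrms or Barrett. On typical runs the overall success rate climbs from roughly chance toward one as a signaling convention takes hold.

```python
import random

def simulate_signaling_game(plays=10_000, seed=0):
    """Minimal urn-model sketch of the two-state signaling game described above.

    Sender urns: one per state, each starting with one 'a' ball and one 'b' ball.
    Receiver urns: one per signal, each starting with one act-0 ball and one act-1 ball.
    On success (act == state), the drawn balls are reinforced (a copy of each is added).
    """
    rng = random.Random(seed)
    sender = {0: ["a", "b"], 1: ["a", "b"]}      # state -> urn of signal balls
    receiver = {"a": [0, 1], "b": [0, 1]}        # signal -> urn of act balls

    successes = 0
    for _ in range(plays):
        state = rng.randint(0, 1)                # nature tosses a fair coin
        signal = rng.choice(sender[state])       # sender draws a signal ball
        act = rng.choice(receiver[signal])       # receiver draws an act ball
        if act == state:                         # successful coordination
            sender[state].append(signal)         # reinforce: add a copy of each ball
            receiver[signal].append(act)
            successes += 1
        # on failure, the drawn balls are simply returned (nothing changes)
    return successes / plays, sender, receiver

if __name__ == "__main__":
    rate, sender, receiver = simulate_signaling_game()
    print(f"overall success rate: {rate:.3f}")
    # Late in the run each sender urn is dominated by one signal type,
    # i.e., a signaling convention has emerged.
```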

In this simple game, meaningful order emerges from random trial-and-error learning as the behavior of the two players evolves, and each signal ends up communicating precisely one bit of information. Of course, there is nothing in the system that determines up front what the meaning of each signal will be. Their meanings are created by the players as they play. They are the product of random events in the context of the adaptive learning dynamics.

A more complicated game illustrates a more subtle type of emergence.5 The coding game involves two senders and one receiver as in figure 2. At the start of each play, nature chooses one of four states, each with equal probability. The two senders observe the state, then each randomly draws a 0 or 1 signal ball from her corresponding urn. The receiver observes both signals and who sent them, then draws an act ball from his corresponding urn and performs the action.

IMG 2

As in the simple signaling game we considered earlier, a play here is successful if and only if the receiver’s action matches the current state. If so, each player returns their ball to the urn from which it was drawn and adds a ball of the same type; otherwise, each player just returns their ball to the urn from which it was drawn.

In the coding game, neither sender has the resources required to communicate the state on her own. In order to evolve an optimal language, they must learn to partition nature in a systematic way so that the two signals together fully express which of the four states of nature the sender observes. One way that this might happen is if sender A learns to distinguish between nature being either state 0 or 2 (and sends a 0 if this is true) or state 1 or 3 (and sends a 1 if this is true) and sender B learns to distinguish between nature being either state 1 or 2 (and sends a 0 if this is true) or state 0 or 3 (and sends a 1 if this is true). In this case, 00 would mean state 2, 01 would mean state 0, 10 would mean state 1, and 11 would mean state 3 (see the left hand side of figure 3). This requires each sender to learn to attend to the conventional properties they have jointly invented as they play. When successful, they create and learn to use a systematic code to represent each of the four possible states and actions.

On simulation, the players in the coding game always evolve a meaningful signaling language, but an optimal language evolves in this game only about 73% of the time. When the players are successful in evolving an optimal language, random trial-and-error learning has led to a perfect system for communicating the information required for successful coordinated action. But again, this can only happen when the players create systematically interrelated properties and then attend to whether or not each property holds. It is impossible to predict which properties the players will create on a run. Figure 3 shows the properties the players created and learned to track on two different runs. On the first run (on the left), A sends 0 and B sends 0 to represent state 2. But on the second run (on the right), A sends 1 and B sends 0 to represent state 2. The adaptive dynamics allows the players to evolve an optimal language on each run, but the inherent randomness in the dynamics leads to different languages on different runs.

IMG 3

As with the simple signaling game, the coding game illustrates how a meaningful language may emerge in the context of a random, trial-and-error adaptive dynamics. In both games, meaning is created as the players interact with each other and the world. But the coding game also requires the agents to learn to attend to their created properties in the world.
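The coding game can be sketched in the same style. The Python fragment below is again my own illustrative reconstruction under the assumptions stated in the comments, not published code; the play count and the size of the success window are arbitrary choices.

```python
import random
from itertools import product

def simulate_coding_game(plays=200_000, seed=0):
    """Sketch of the two-sender coding game described above.

    Each sender has one urn per state (0-3), each starting with a 0 ball and a 1 ball.
    The receiver has one urn per signal pair, each starting with act balls 0-3.
    A successful play (act == state) reinforces every ball that was drawn.
    """
    rng = random.Random(seed)
    sender_a = {s: [0, 1] for s in range(4)}
    sender_b = {s: [0, 1] for s in range(4)}
    receiver = {(x, y): [0, 1, 2, 3] for x, y in product((0, 1), repeat=2)}

    recent = []                                   # success over the most recent plays
    for _ in range(plays):
        state = rng.randrange(4)                  # nature picks one of four states
        sig_a = rng.choice(sender_a[state])       # each sender draws a signal ball
        sig_b = rng.choice(sender_b[state])
        act = rng.choice(receiver[(sig_a, sig_b)])
        if act == state:                          # reinforce on success
            sender_a[state].append(sig_a)
            sender_b[state].append(sig_b)
            receiver[(sig_a, sig_b)].append(act)
        recent.append(act == state)
        if len(recent) > 1000:
            recent.pop(0)
    return sum(recent) / len(recent)

if __name__ == "__main__":
    print(f"success rate over the last 1,000 plays: {simulate_coding_game():.3f}")
```

On runs where the final success rate approaches 1, the two senders have in effect invented a two-bit code of the kind shown in figure 3; a lower plateau corresponds to one of the suboptimal, partially pooling languages that arise on a minority of runs.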

The evolution of attention is the key idea behind large language models like ChatGPT and Claude. The transformer architecture of such models was invented and first described in a paper by a group of Google scientists entitled “Attention Is All You Need,” a play on the Beatles’ song “All You Need Is Love.”6

A large language model (LLM) is a network with ordered layers of nodes connected by weighted edges. One might picture the network as a set of dots (nodes) and lines (edges) that connect nodes on one layer to nodes on the next layer. Every node is connected to at least one other, and each edge is assigned a numerical weight — the larger the weight, the stronger the effect of one node on the nodes it is connected to on the next layer. When an input is presented to the input nodes of a trained model, the network produces a random result in the output nodes with probabilities that depend on the weights associated with the edges in the network. The weights are determined by how the model was trained. An LLM is trained using a huge collection of data, a standard of success, and an adaptive dynamics that gradually fits the behavior of the network to the standard of success by a trial-and-error learning process.

As a transformer model is trained, the features of the input to which the model attends and how it behaves when it observes these features, both of which depend on edge weights, are gradually tuned by the adaptive dynamics to fit the actual output of the model to the best possible output given the user-specified standard of success. In this way, as the system evolves, it learns both what to attend to and how to act on what it finds. Later, when it is presented with something it has never seen, it looks in places that have mattered for successful action in the past, then it uses what it sees to produce an output.
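To make the notion of “attending” slightly more concrete, here is a minimal NumPy sketch of the scaled dot-product attention step at the core of the transformer architecture. It is a toy illustration rather than the architecture itself: the projection matrices are random stand-ins for parameters that training would actually tune, and a real model stacks many such layers with many attention heads.

```python
import numpy as np

def scaled_dot_product_attention(X, Wq, Wk, Wv):
    """Toy sketch of one attention step.

    X          : (sequence_length, d_model) input token representations
    Wq, Wk, Wv : projection matrices (learned in a real model; random here)
    Each output row is a weighted mixture of the value vectors, where the
    weights say how strongly each position attends to every other position.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])             # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax: rows sum to 1
    return weights @ V, weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d_model, seq_len = 8, 5
    X = rng.normal(size=(seq_len, d_model))             # five toy "tokens"
    Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
    out, attn = scaled_dot_product_attention(X, Wq, Wk, Wv)
    print(np.round(attn, 2))                            # where each token "looks"
```

In training, it is these projection matrices (among many other parameters) that the adaptive dynamics adjusts, which is what it means for the model to learn what to attend to.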

One can assess how well a transformer model learns saliences and connections by probing it with inputs that are unlikely to have been used in its training data. Consider the following prompt and Anthropic Claude 3.5 Sonnet’s reply.

Prompt: Explain two connections between emergent meaning by means of a random adaptive dynamics and art, complexity, and uncertainty.

Reply: There are several interesting connections between emergent meaning through random adaptive dynamics and the concepts of art, complexity, and uncertainty. Here are two key connections to consider:

Unpredictability and creative emergence: Both random adaptive dynamics and artistic processes often involve unpredictable outcomes emerging from complex interactions. In adaptive systems, meaning can emerge unexpectedly as elements interact and evolve over time without a predetermined plan. Similarly, in art, the creative process often involves experimentation and allowing unexpected results to emerge. Artists may start with an initial idea but then respond intuitively to how their work develops, embracing uncertainty and complexity. The final artistic product often has emergent qualities that weren’t fully anticipated at the outset. This parallel highlights how both domains harness uncertainty and complexity as generative forces.

Pattern recognition and meaning-making: Adaptive systems and artistic interpretation both rely heavily on pattern recognition to derive meaning from complexity. In random adaptive dynamics, meaningful structures or behaviors can self-organize out of initially disordered states as the system identifies and amplifies useful patterns. Likewise, both artists and viewers engage in pattern recognition to create and interpret meaning in artworks. Abstract art in particular often presents complex visual information that viewers must actively engage with to discern patterns and construct meaning. This process mirrors how meaning emerges from the complexity of adaptive systems, with the human mind acting as the pattern-recognition mechanism that extracts signal from noise.

The prompt asks Claude 3.5 for connections between the themes of the present exhibition and the related themes of this essay. While Claude likely never saw this prompt when it was trained, it does strikingly well in its reply. Strengthening Claude’s reply, I would argue that both artists and viewers are nothing but adaptive systems whose actions are forged by random adaptive processes; but given the inherent uncertainty in its audience, Claude shows wisdom in its more modest position.

Future AI systems will certainly do better, but the fact that Claude has learned to associate a rich collection of topics with the present meager prompt and to thread them together into cogent and interesting positions is impressive. Its behavior illustrates how well a simple trial-and-error algorithm can learn to identify natural saliences and to find connections between them when presented with enough data.

While the details are a little different, the emergent behavior of an LLM like Claude is similar to that of the players in the coding game. Each system is initially capable of only random actions. The players in the coding game learn by trial and error to communicate using a meaningful language they gradually create for the purpose. Claude learns by trial and error to identify salient aspects of a prompt then learns patterns in these aspects and how to generate sensible responses. In each case, the adaptive system’s ability to identify saliences and to exploit them for successful action emerges from a trial-and-error adaptive dynamics as it interacts with the world.

Inventing languages, finding saliences, and learning connections involve creativity not unlike that exhibited by an artist. Each of the processes we have considered involves a random element. It is this that makes the process unpredictable.  The players in the coding game must create conventions and learn to adhere to them in order to be successful. It is impossible to predict which conventions they will create. Claude must learn to reply to questions it has never been asked to be successful, and in doing so it may say things that have never been said. And, like the works in the exhibition, its creations may spark new insights.

Our capacities have evolved under trial-and-error natural selection and by means of trial-and-error learning. The inherent randomness of these processes is baked into our nature. It is this that endows us with the ability to create things that are novel and surprising. The present exhibition itself is the product of randomness in the context of an adaptive dynamics. Each artist’s work has been influenced by their noisy experience. The curator has selected works in ways that depend on his experience and interactions with the artists. The resulting collection has the capacity to inspire new insights and understanding in us. The artworks become a part of us, a part of how we act and understand.

Notes

  1. David Lewis, Convention (Harvard University Press, 2020). 
  2. Brian Skyrms, “Signals,” Philosophy of Science 75 (5) (2006): 489–500.
  3. See Richard J. Herrnstein, “On the Law of Effect,” Journal of the Experimental Analysis of Behavior 13 (1970): 243–266, for early work on reinforcement learning in birds. For an example of reinforcement learning in humans, see Alvin E. Roth and Ido Erev, “Learning in Extensive Form Games: Experimental Data and Simple Dynamical Models in the Immediate Term,” Games and Economic Behavior 8 (1995): 164–212.
  4. See Raffaele Argiento, Robin Pemantle, Brian Skyrms and Stanislav Volkov, “Learning to Signal: Analysis of a Micro-Level Reinforcement Model,” Stochastic Processes and Their Applications 119, issue 2 (2009): 373–390. The theorem only applies to this two-state, two-term, two-action signaling game with unbiased nature. Convergence is not guaranteed if there are more than two states and acts or if nature is biased. The coding game discussed below is an example of a game where convergence is uncertain.
  5. See Jeffrey A. Barrett, “Dynamic Partitioning and the Conventionality of Kinds,” Philosophy of Science 74 (2007): 527–546, for a discussion of this game.
  6. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin, “Attention Is All You Need,” Advances in Neural Information Processing Systems 30 (2017), 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, accessed February 2, 2025.