“How we experience emotions is driven by complex biochemistry. What if we could access that type of biochemical data to engineer and commodify emotions?”
Hege Tapio is a Norwegian artist based in Stavanger, the country’s oil capital. Tapio is currently pursuing artistic research with FeLT (Futures of Living Technologies) and is an interdisciplinary PhD fellow at the Innovation for Sustainability Program at OsloMet. Her practice examines the body as a landscape for “extreme self-mining” in bio-art installations, videos, and performances. She is the founder and director of i/o/lab – Center for Future Art, where she produced and curated a biennial from 2006–16.
Gabriel Tolson: It may be best to begin with the trajectory of how this work has unfolded.
Hege Tapio: I guess my question was this: How do we deal with creating human-like entities such as AI, and how will they operate when interacting with us, who also have emotions? How are we thinking of putting emotions back into the machine? That’s how I got involved with emotion technology and with how we currently interpret emotions, using computers to read facial expressions and biometric data. And I thought that this doesn’t get deep enough into how we experience emotions, because that is also driven by biochemicals. My intention was to ask: what if we could access that type of information, the data of our biochemicals?
I also thought about how that would involve neuropeptides, which signal our brain and cause us to experience emotions in our bodies, and how neuropeptides are also data that can be translated and synthetically produced. It’s an attempt to approach our human features as we construct digital entities like AI. Funnily enough, in conversations with more computationally minded scientists, their sense is that we’re able to program or compute everything. Even our emotions, many interpret as just datasets.
David Familian: Are you sure? Are you sure about that?
Hege Tapio: Well, if you listen to people like Joscha Bach, who says that everything is a simulation; I’ve heard the same in other conversations I’ve had.
David Familian: The problem with using the datasets is that they are limited in the dimensions used to simulate the interaction between systems or influences. The computer can only look at three or four dimensions of a complex system, and the minute you say datasets, you have to account for the dimensionality of how those datasets interact.
Hege Tapio: Yes, and Joscha Bach is not talking about true and false perspectives; he’s talking about states that move along gradients. A point that continues to trouble me is that we are driven by a long heritage of viewing living systems as machines, going back to La Mettrie’s materialist treatise Man a Machine (1747). We are not really embracing how complex living systems are. When I started to look into emotions, I found that they consist of so many complex layers. It’s not only the cultural, learned facial expressions. It’s not only whatever can be given away by the chemical compounds in your sweat or the biometrics of your heart rate. Even if we could access the biochemical layer, that would give us only a small amount of information, because it’s affected by many factors. Even the composition of our DNA determines how these emotions operate in our bodies, as does epigenetics, which is culturally formed over long periods of time.
There are so many complex layers contributing to our emotional states and expressions. If we were going to truly understand and replicate emotions, I believe we would need the full embodied experience. And I have a hard time buying into the viewpoint of the true computationalists, who state that everything can be computed or programmed. The fact that we have a body gives us the opportunity to have a genuine experience. I find it hard to believe we could replicate this in a machine. We could give an illusion of doing so, as in the Chinese room or the Turing test, and I believe that we can create machines that are very sophisticated and appear to have emotions.
“Emotions are incredibly complex. From cultural, learned expressions, to the chemical compounds in your sweat, the biometrics of your heart rate, to the composition of your DNA and the epigenetics that change how genes are expressed over time.”
David Familian: What you’re saying is really important. Yuk Hui implies in his book Cybernetics for the 21st Century (2024) that we’re still living under the assumption that human beings are machines that can be quantified. This was Descartes’s stance. Kant was the first thinker to talk about the organic in Western philosophy. My response to all this is that we’re now 300 years out from Kant, and we still haven’t fully transitioned from Descartes to Kant.
Hege Tapio: And much of the problem is that we carry these dualist perspectives of understanding things. We have very limited parameters for interpreting or conceptualizing very complex information, for grasping living systems as complex systems.
David Familian: And then the way the narrative goes is: where are the points at which we’ve made progress in this shift that’s taken 300 years? You have cybernetics in Britain dealing with the biological rather than the mechanical implementation of cybernetics. Then you have Varela and second-order cybernetics, which talks about the body, precisely as you are: embodiment, and the Umwelt. All of those ideas came out in the sixties and seventies. And so there is this narrative that is somewhat separate from science.
One of my arguments is that science has a direct trajectory of trying to understand and mechanize everything. And then you have this philosophical process in cybernetics where the two weren’t talking to one another. I’ve been asking different people whether this history is true, and so far, the answer has been a unanimous yes. That’s why it’s helpful to look at the view of cybernetics on all this: these ideas have a long history and were part of the original concept of cybernetics as advanced by the British.
Hege Tapio: Yeah, but the cyberneticists also battled with the big issues of consciousness and qualia, which we still keep pondering, because we cannot really explain how this all comes together. Cyberneticists took their understanding of biological systems and translated it into how they construct machines. But again, this is rooted in the perspective of Man a Machine. So it’s not really approaching or embracing the whole complexity. We should rid ourselves of this dualist perspective and try to look at more of the complexity.
David Familian: Which dualist systems are you referring to?
Hege Tapio: This dualist perspective is even found in the history of metabolism, which is rooted in the dualism of nature and society. So, we are constantly bringing in that divide between things and looking at things very separately instead of looking at things from a more complex point of view.
David Familian: Yeah. I agree. Those are two different things to me. The illusion that AI will be biologically intelligent is silly. AI, especially generative AI, is as mechanistic as the automaton to me. We’ve achieved what Descartes thought was impossible: that the mind and body could be quantified. And in some ways, we’re achieving that.
Hege Tapio: But what is interesting, and also a bit worrying, is that we’re now constructing machines, or bringing entities into our lives, that are also shaping us, to a large extent. For instance, the invention of photography really changed how we perceive, interpret, and even remember. It’s a mechanical tool, but it has affected so many of the ways we view, understand, and interpret the world around us. And I think computational tools and AI are going to have an even stronger effect on how we perceive and understand ourselves. Katherine Hayles also mentions this, referring back to cybernetics and how we are creating tools that reflect back to us, mirroring us.
I’m concerned that if we have very advanced AI with emotional capabilities in the future, and this is implemented into whatever kind of avatars or tools we interact with on a daily basis, it is going to affect the way we understand and relate, even how we deal with the emotional aspects of our lives.
David Familian: I just heard a story where they introduced the Internet to a tribe that had never had any technology. They put Starlink in some area in Africa, and all these people are doing is looking at porn and just existing, doing exactly what we do. They’re obsessed with looking at themselves and using their phones, and I agree that on a societal level, it separates us. We know that AI is used to generate rumors and amplify extreme differences and viewpoints. So, I think it’s already messing us up. And this is my problem with artists who use AI to make an image without reflecting on it critically.
When artists created net.art, no one knew what it would be. It was brand new, and they were experimenting with something. Some people, you know, critiqued business and stuff or what the Internet would become. But now we know what the danger is. Many artists use dangerous technology to make pretty pictures.
Hege Tapio: Yeah, we’ve made dangerous stuff many times throughout history. But it’s paramount that we spend time reflecting on the possible outcomes of how we are rigging this system, because it will affect our lives in ways we cannot even comprehend at present.
David Familian: Or we have to outlaw it in certain areas of society. You can use it to look at tumors or in other medical applications where it’s useful. Still, applied to any living system, AI is incredibly dangerous, and I don’t think they’ll ever be able to remove or perfect its hazardous elements. It’s sort of like when McLuhan said that every technology has good and bad attributes in relation to society. There is no proof that anyone has come up with a way to use AI in a positive way, to bring us together. Theoretically, it could, but that doesn’t seem to interest anyone because it doesn’t make money.
Hege Tapio: And on that note, there is the question of who drives this technology: big companies. So that’s also an issue. We talk a lot about machines and AI, but my project also involves how we deal with synthetic biology and use our knowledge of the machine to control living systems. By synthesizing emotions, or designing emotions, my project opens up a discussion of where our limits are. Where do we want to go with the development of our technology? How do we understand ourselves? What is genuinely human?
David Familian: I have a question about your project. Are you reflecting on, or pseudo-reflecting on, the scientific experiments that are feeding this technology? Or are you dealing with just the finished product?
Hege Tapio: Well, my project speculates on where emotion technology might go: what if affective technology went beyond what it is today, which is limited to analyzing facial expressions and biometric data? What if we could access our biochemical data, and how would that be utilized in the hands of someone with commercial interests, as is the case today with companies like Affectiva?
David Familian: Because one of the things that happens with things like this is that they think they’ve got it figured out, and then something goes haywire. It eventually generates some unintended consequences. So you’re bringing the utopian view of it, which is that it is successful. But there could be a dystopian one, too.
“I want the viewers to be worried and scared when they see the project. And that’s the reaction I’ve got when I presented it: people got really angry. And I love that.”
Hege Tapio: I want the viewers to be worried and scared when they see it. That’s also the reaction I’ve had when I’ve presented the project; people got really angry. And I love that.
David Familian: Oh, that’s great! They’re supplying the dystopia part. You don’t have to embed it into the work. You create such an optimistic illustration of ‘Here’s what we can do,’ that people are pushed to react in horror and defensiveness. That helps me to understand your piece, too.
Hege Tapio: Yeah, people were so annoyed, saying, “What if you can create synthetic love?” And they were like, “Wait a minute. What if you discovered that the guy you were with was just wearing an implant, and he’s not really in love with you? Or, who’s going to afford real love in the future?” Or maybe we can design feelings or emotions we couldn’t imagine, like a different kind of love.
David Familian: Yeah, there’s a funny Star Trek moment where Kirk says something along the lines of “I want my fears. I don’t want to get rid of what I’m afraid of. That’s what makes me who I am.” And I think that’s part of what makes people freak out. Part of it is that we want to be rid of our foibles, our psychological weaknesses, and all that, but those faults make us who we are. Getting rid of our faults would make us all the same.
Hege Tapio: Yeah, I mean, my project is probably a transhumanist’s wet dream. I know that Bostrom has a manifesto that aspires to ‘create the best love.’ But yeah, I agree with you. We need the night and the day. We need to have a balance in things.
David Familian: Well, there’s a certain duality we do need. It is not necessarily the mind or body duality, but a Zen kind of duality. You wonder if people would just stop producing and doing anything if they had this device because you don’t get any of the rewards that you get from accomplishments.
Hege Tapio: You need the full gradient of all the emotions. Of course, you do.
David Familian: So you’re presenting this, for lack of a better word, a deadpan, commercial version of this thing, and people are filling in their fear of it, which is rational. It’s not an irrational reaction. What’s interesting to think about is whether this is how we all react to new technology or are we now at a point where technology can so alter us as human beings that we will be transformed, and not in a good way.
Hege Tapio: Ideally, it will give you both a feeling of viewing something you would like to have and a moment of pause, where viewers ask, ‘What does this mean? Who’s going to drive this? What’s it going to look like if people start selling emotions?’
David Familian: Your presentation in this advertising format is really good. Is there a way for people to voice their fears, say on a website? And would you show that in this work, or perhaps store audience reactions for use in future works? I think there’s a lot of potential in sharing how people react. I don’t believe it’s necessary to include that in this installation, but would you want to create a way for people to enter their responses into a database, so you have all these comments to use later in another work or another way?
Hege Tapio: That’s a very interesting idea. I presented my research in an exhibition in Oslo last year, and there, I had two boards where people could write down what kind of emotions they would like to have and what kind of emotions they would not like to have. But yeah, to have that response to the whole project would be very interesting.
David Familian: What you just said about emotions is very interesting: there were some emotions people wanted to buy and some over which they still wanted control. So come up with a series of questions, and let’s see if we can put up a QR code and invite people to respond to the work; see what people say and whether they use it. That would be really interesting.
Hege Tapio: Yeah, the video I’m showing features this fake CEO running the Ephemeral Company. She’s insinuating that companies out there are changing their slogans and will sell you the feeling of their products. For example, the narrator suggests that the company that currently goes by the slogan ‘just do it’ will change it to ‘just feel it.’ You can even drive that toward sustainability: if Chanel decides it won’t produce so much fashion anymore, it can sell you the feeling of Chanel instead. Ask any woman why she likes to go shopping, buy makeup, or get new handbags. It’s because it makes us feel good.
“In the video I show the CEO of the fictional Ephemeral Company insinuating that major brands are pivoting from just selling you products to selling you feelings associated with those products.”
David Familian: And guys want to buy their toys.
Hege Tapio: So we can get rid of the products and go directly to the feeling of buying.
David Familian: What’s interesting about this is that people may not know, intellectually, that you’re talking about a complex system. But their reaction to it is that they don’t want to be controlled, which is exactly what you can’t do with a complex system. I mean, there’s a whole paradoxical thing in your work: if they come up with this, they’ve solved the problem of simulating a complex system. It’s similar to how specific medications are used for certain mental disorders. It’s not treating the body as a complex system; it’s just supplying hormones that give you the illusion that you’re happy.
Hege Tapio: Yes, I think my work is on the borderline of medicalization, truly. In a conversation I had with a neuroscientist who works with medication, she said that when we medicate people with psychiatric disorders, we’re just giving them something that we think might work. We’re not even in control of how it will regulate the body; it’s almost a blind shot. That’s a very coarse way of trying to regulate our bodies, yet it’s what we currently do. Still, she also admitted that we have not even started looking into other possibilities for regulating our bodies without synthetic or medical compounds.
David Familian: Is there any way in the advertisement, if you haven’t already done this, to say something like, ‘This isn’t about just supplying you a particular feeling. We are solving the complex interactions of all your emotions and how they feed off each other,’ in language that doesn’t sound like a complex system? Because they’re literally claiming they’ve solved the problem, which is not possible. It is impossible to solve a complex problem because there are always emergent properties. The minute they create a system, something invariably happens and they have to recalibrate it; something emerges that they can’t plan for. But is there anything in the text that propagates the idea that they’ve made this leap from, say, working with a single hormone, like what psychotropic drugs do to help people? Is this tuned to your own body? Does it take your personality into account, something over the top?
Hege Tapio: That’s going to be part of the storyline. The way it goes is that you receive an implant that will read your biochemicals, and you will self-report the kinds of emotions you are feeling. Before you receive this implant, all kinds of data will be measured, including your height and weight. All this information will then be fed into a database, so the company gains more and more parameters for how these biochemicals are affecting these bodies. Then you can come to this company and say, “I want this specific feeling,” and they will be able to synthesize it based on your height, your weight, and your personal parameters. So we’re talking about individually fitted emotions.
David Familian: Well, the interesting thing is whether they have to continually monitor you and keep track of you to adjust the chemicals. That would freak people out. Another thing: if a scientist walks up to this work and says, “What she’s describing is impossible,” are they getting that you think it’s impossible too? I’m sure you’ve shown this to knowledgeable people who know it’s ridiculous in some way. They get the joke and know that you know that this narrative you’re creating is over the top.
Hege Tapio: Of course. It is a speculative work, and I won’t hide that.
David Familian: No, no, that’s not what I’m saying. It’s like that fine line when telling a joke between insulting someone and getting them to laugh or getting someone just to say, oh, that’s silly. You know the difference between a joke being silly or insulting. There’s some tension between those two poles. The visceral part in this work can live in that dichotomy. So that’s how I’m envisioning the work to be experienced.
Hege Tapio: One of Elliott’s students picked up on that after she saw the movie. It was clear to her how our emotions are built upon many layers, and this is just one little piece of it.
David Familian: Well, that’s important, I agree. There’s this idea that language is not developed individually but socially. So if language is developed socially, and that’s the feedback system for language, how can this system account for all the complexity of people interacting in the world? That’s what Elliott’s student is sort of saying. You certainly don’t want to be happy if someone’s beating you up. And maybe you have to say this solves all the problems, that even when you’re interacting with people it can adjust; just go a bit over the top.
“Companies already create avatars with emotional capabilities. Give it five or ten years, and we’ll have a very complicated relationship between humans, human machines, and human avatars.”
Hege Tapio: But obviously, it doesn’t. I mean, even in the opposite scenario from the one you’re describing: if you’re in a state of shock, you won’t be able to approach someone lovingly, because your body is in shock. So the body has a very important say in how we feel about our environment. Also, coming back to cybernetics, I think an essential element of the work is how we view ourselves as living systems and how we treat features as things we can simply analyze, pick out, displace, or move around, because that’s what this work implies: that we can extract emotions and put them back in again.
This is also what we are trying to solve with AI, by getting it to operate with humans, to understand our emotions and respond to them. What will it be like when companies like Soul Machines are creating avatars with emotional, biochemical brains? They’ve replicated the whole nervous system in a computerized version that uses simulated biochemical responses to generate emotions. Give it five or ten years, and I think we’ll have a very complicated relationship between humans, human machines, human avatars, and whatnot.
David Familian: I know, and I don’t see how it’s going to be controlled. And I don’t think this is the traditional fear of a new technology. I believe that the more new technologies intersect with the complexity of our world, the more dangerous they get.
Hege Tapio: It’s paramount that we hold onto what makes us human or try to understand what makes us human. That is necessary to preserve humanity as we know it.
David Familian: Or, as Katherine Hayles says, we’re all post-human. If we could get cybernetics scientists together to create truly transdisciplinary responses, maybe we could come up with some safeguards, and it would change the way we see all these issues.
→ Hege Tapio, Ephemeral, 2024