Emergence Magazine
James Bridle:

We think so much like computers today because they’ve defined what is thinkable. And so for me, rethinking the computer rethinks what is computable, and therefore rethinks what is thinkable at all about the world.

An Ecological Technology

An Interview with James Bridle

Interviewee

James Bridle is a writer, artist, and technologist whose artworks have been commissioned by galleries and institutions and exhibited worldwide. The author of New Dark Age and Ways of Being, they have written on literature, culture, and networks for magazines and newspapers including Wired, The Atlantic, The New Statesman, The Guardian, and The Financial Times. For BBC Radio 4, they wrote and presented the four-part series “New Ways of Seeing.”

Interviewer

Emmanuel Vaughan-Lee is an Emmy- and Peabody Award–nominated filmmaker and a Sufi teacher. His films include Earthrise, Sanctuaries of Silence, The Atomic Tree, Counter Mapping, Marie’s Dictionary, and Elemental. His films have been screened at the New York Film Festival, the Tribeca Film Festival, SXSW, and Hot Docs, exhibited at the Smithsonian Museum, and featured on PBS POV, National Geographic, and New York Times Op-Docs. He is the founder and executive editor of Emergence Magazine.

In this expansive interview, writer, artist, and technologist James Bridle seeks to widen our thinking beyond human-centric ways of knowing. In questioning our fundamental assumptions about intelligence, they explore how radical technological models can decentralize power and become portals into a deeper relationship with the living world.


Transcript

Emmanuel Vaughan-Lee: In your latest book, Ways of Being, you explore the many types of intelligences that exist in the more-than-human worlds—intelligences that we need to learn from and integrate into our consciousness and technologies if we are to learn to live in balance with the living world. And you write that for far too long, at least in our dominant Western society, we’ve had a very limited definition and understanding of intelligence that you describe in the book as “what humans do,” and that this definition has played a profound role in shaping technology and how we use it, from computers to, most recently, artificial intelligence. Can you talk about this human-centered definition of intelligence and its impact on technology and AI?

James Bridle: I come to this sphere, to this area, to this thinking, from a background in technology. That’s mostly what I’ve worked on for the last decade or more. And a little bit of that focus throughout has been on artificial intelligence. In the last few years, I’ve tried to consciously reframe my practice around more ecological interests while seeing what I could bring from what I know about already. And so the cultural dominance of AI seemed like a really interesting thing to think through, particularly as in my own life I was starting to broaden my own interests and pay more attention to the things around me. And intelligence was an interesting place to start.

I knew, setting out to do this, that I would have to at some point, as a writer about intelligence, define what I meant by intelligence. But I was very frustrated by the lack of what seemed to me to be clear, good definitions of what it is we’re all talking about. You can get all these lists of what people mean when they talk about intelligence, and it’s a kind of grab bag of different qualities that changes all the time: things like planning, counterfactual imagining or coming up with scenarios, theories of mind, tool use, all these different qualities. People pick from them according to whatever their particular field is, but they all come from a human perspective. That seemed to me to be what actually united almost all our common discussions about intelligence: that it was just whatever humans did. And so all our discussions about other potential forms of intelligence, other intelligences that we encountered in the world, or intelligences that we imagined, were all framed in terms of how we understood ourselves and our own thinking. It really struck me that this became an incredibly limiting factor in how we were thinking about intelligence more broadly—and not just intelligence, really, but all relationships we have in the world that are so often mediated by our own intelligence. On the one hand this has restricted our ability to recognize the intelligences of other beings—and I think we’ll probably come to that—but it’s also deeply shaped our history of technology, and particularly AI.

What I find fascinating about AI is its cultural weight, the fact that we just seem to be so endlessly fascinated with it. And this goes all the way back to long before the development of modern computers, but really takes off with the development of what we now call computers in the 1940s and 1950s. It goes right back to Alan Turing and the definition of the early computer, when he’s already talking about how intelligent computers might be. And then it extends all the way through the last sixty, seventy years of research, when there’s always this tendency to take whatever the current form of computation is and extrapolate it into what it might be if it was intelligent. And so we’re always trying to build these intelligences, but what we think intelligence is really shapes that. All the different ways we’ve tried to build AI over the years have always been shaped by that definition of human intelligence. And increasingly that’s looked damaging and dangerous, for all the ways that I explore in the book.

EVL: You challenge the notion in the book that AI is actually artificial, and suggest that rather than only embodying the “what humans do” form of intelligence, it has the capacity to help us expand our definition of intelligence. And you ask the question, “What if the meaning of AI is not to be found in the way it competes with, supersedes, or supplants us? What if … its purpose is to open our eyes and minds to the reality of intelligence as something doable in all kinds of fantastic ways, many of them beyond our own rational understanding?” Tell me more about this other purpose of AI.

JB: The more I thought about AI, as I said, the more I came to understand it as an incredibly limited vision of what intelligence could be. In particular, I focus on what I call corporate AI: a vision of intelligence that mirrors the corporate model, which is something that’s incredibly combative, acquisitive, extractive, profit seeking, that seeks always to dominate and control and increase its own power. That’s the model of a corporation, but it’s also the model of most of the AI that we’re building, because that AI is being built by corporations in their own image. So most of the AI we have at present embodies this incredibly narrow definition. Yet at the same time, over the last decade, science in general has started to recognize the other ways in which nonhuman beings are intelligent. And so you have this strange kind of parallel process occurring. On the one hand you have the human cultural obsession with artificial intelligence, and on the other hand you have this growing awareness of all these other things that are starting to look a lot like intelligence to us. And something crossed over for me when I held together that very narrow definition of intelligence and the increasing strangeness of the intelligence actually manifesting in the world.

One of the key things about a lot of the latest forms of AI is that they’re quite obviously not like human intelligence. They’re doing things in systems, like “deep learning,” which are kind of inscrutable to us. They’re not understandable in terms of human cognitive processes, and yet they are clearly doing intelligent things in quite narrow domains. And so it struck me very clearly that while we’ve always insisted on the primacy of our own intelligence and always intended to build AI consciously as a model or a mirror of human intelligence, the fact that it’s actually turning out to be something quite different should tell us something important, which is that there are multiple ways of doing intelligence. And of course, if there’s more than one way of doing intelligence—the human way and this AI way—then of course there are infinite ways of doing intelligence. So for me what AI does, or these realizations about AI, is open up a window to thinking about all the many different forms that intelligence could take. And there’s a lot of interesting precedent for this in technological history.

One of the examples I use a lot is the development of network theory as a result of building the internet. When we started to build the internet, it was a very ad hoc kind of process. People just started connecting computers together and then building protocols, layered on top, that would allow the computers to communicate. But systems started to build up that were unlike any systems we’d seen before. In particular, we were building what we called a “scale-free network,” in which no single node was indispensable. It didn’t matter how many connections, inputs, or outputs a node had. You could take nodes out, replace them; you could cut bits off—the network would heal around them in an entirely novel way, in a way that we hadn’t encountered before. And one response to this was the development of a new branch of mathematics called network theory, which modeled this behavior using different methods from the kinds of mathematical topology that had previously described networks. So we developed an entirely new way of understanding how a network might function. It was only at that point that, retrospectively, this mathematics was applied to forest networks, to the networks of trees and mycelium that we now know are communicating at all times in the forest. Now, those networks are not the same thing, but it seems really crucial to me that we needed to build a model in technology for ourselves in order to develop the mental models—the metaphors, really—for seeing this thing existing in the natural world. And it feels to me that possibly this is something that’s happening with AI—the more we build these model toy intelligences, the more we build a potential understanding of all kinds of intelligences. We have this solipsistic need to build it ourselves, to put the pieces together—a really understandable urge, I think—before we can start to recognize the broader processes in the world around us.
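
The resilience Bridle describes is easy to demonstrate in a few lines. Below is a minimal sketch (an illustration, not anything from the interview) that assumes the Python networkx library: it grows a scale-free network with the Barabási–Albert preferential-attachment model, removes a fifth of the nodes at random, and checks how much of the network stays connected. All parameters are arbitrary.

```python
# Sketch: the "healing" property of a scale-free network.
import random
import networkx as nx

# Grow a scale-free network by preferential attachment.
G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)

# Knock out 20% of the nodes at random.
removed = random.Random(0).sample(list(G.nodes), 200)
G.remove_nodes_from(removed)

# Measure how much of the surviving network is still one connected piece.
giant = max(nx.connected_components(G), key=len)
print(f"{len(giant)} of {G.number_of_nodes()} surviving nodes remain connected")
```

Run repeatedly, the largest connected component almost always retains nearly every surviving node: the network heals around random failures, much as the internet’s early builders observed.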

EVL: There’s a term you use in the book—“ecology of technology”—that I really love and that you say we need to discover. I wonder if you could unpack this term and share how it offers an alternative to a technology based on human exceptionalism.

JB: If you look back at the history of most sciences over the last 100, 150 years, there’s been an ecological turn within all of the sciences. Every science seems to have discovered its own ecology in turn. It starts out as a very narrow field looking at one particular specialism. And as it develops, it starts to recognize that, actually, it’s a matter of interconnections, it’s a matter of relationships; that the things that matter within any particular discipline are really the ways in which it interconnects with other bodies of knowledge and other ways of knowing the world. This is a thing that has happened to every discipline; it’s not just a biological or natural sciences thing. It happens within mathematics, it happens within sociology, it happens within physics. Slowly, as fields expand, when they’re at their best, they become interdisciplinary—about learning from other disciplines. They become ecological. And they also become potentially more aware of their connections to the world, their social and political responsibilities. I think those are a key part of this kind of ecological thinking as well.

As I put it in the book, it feels like technology—computer science and its related disciplines—is the last discipline to discover its ecology, because there’s a deep bias within technology, within computational studies, within the whole field, towards a kind of solipsism, towards a deep abstraction: an assumption that what is happening here belongs entirely in the realm of mathematics, in cold ones and zeros, in an entirely constructed universe that separates itself from the world. But we know this not to be the case. We know it at a practical level, and we know it at a social and physical level. At the most physical level, our machines are built out of the earth. They’re made of materials extracted from the earth, often violently and in increasingly troubling ways. They also continue to affect the earth in very important ways. We tunnel through the earth, we bore into it to lay our cables, we build these huge data centers on top of it, which contribute vast amounts of greenhouse gases and other pollutants. So they are engaged materially with the earth.

We’re also increasingly aware that this technology shapes our societies in incredibly powerful ways. We live in a technologized civilization that is entirely dependent upon its technologies, and those technologies, through the ways they imagine our society, shape it in really important ways. But this has mostly been a subject of criticism of technology: it’s barely acknowledged within the field itself, and it’s the kind of position that is largely just put forward by critics. And so I think we’re still awaiting a real ecology of technology, a way of thinking, building, making technology that actually takes this into account from the beginning and starts to acknowledge the fact that technology is as connected as anything else to the world around it.

EVL: You wrote that part of your hope with the book is “to help destroy the idea that there is only one way of being and doing which deserves the name ‘intelligence’—and even, perhaps, that intelligence itself is part of a greater wholeness of living and being that deserves our wider attention, one that isn’t easily classifiable, defined, and by its very nature challenges hierarchy; that there are no single answers or single questions.”

JB: There’s a lot in that statement, which is summing up quite a lot of theses. But one of the things I do in the book, and probably the most joyous part of it, is just to explore what some of those other forms of intelligences look like: the incredible abilities of everything from cephalopods to slime molds; everything from our closest relations—other apes, simians—all the way out to creatures that we can barely imagine, that in all kinds of strange ways demonstrate forms of intelligence. Once you are prepared to pay attention to them—and that’s really, really key—once you are prepared to admit the possibility of their intelligence, it becomes almost instantly undeniable. And so the project, really, then is to integrate that awareness into our lives. But what I learned also by thinking about those forms of intelligence is a kind of parallel realization about the intelligence that I was talking about. The intelligence that I’m interested in is really not an intelligence that just happens inside the head. It’s an intelligence that’s situated in the world.

A good example of this that I always use for humans is the fact that you can go to a place and have a memory related to that place, or recall some piece of information that you would not have been able to think of or retrieve without visiting that place. So our memory and intelligence are linked to the physical world beyond our bodies. We can also see it in the fact that we come up with ideas in conversation with others—conversations that go places neither participant could reach alone; that intelligence is a kind of mutual thing, a thing of relationships; and that the relational nature of it can happen between all different kinds of bodies, and it happens differently depending on the body that you’re in.

My favorite example of this kind of embodied intelligence is one that’s quite close to the human, which is the gibbon. Famously, for decades now, we’ve been doing all these weird tests on animals to try and decide who gets to join the intelligence club. And they’re all modeled, of course, on how humans do things. So an absolute classic one is tool use. Can you give an animal a tool and see if it reacts in a particular way? Does it use that tool to achieve some kind of goal? And they did these tests for decades on other primates, and most of them do mostly what’s expected in various ways. If you give a gorilla or an orangutan or a chimpanzee a stick and leave some food outside its enclosure, it will use that stick to get the food. Critically, for decades, gibbons didn’t do this. They just seemed fundamentally uninterested in the task at hand, essentially. This posed a real problem for the scientists studying them, because it implied that gibbons weren’t as smart as a bunch of their close relatives on the evolutionary tree, including us, but also including animals like baboons and macaques, other monkeys, that we understood to have evolved earlier. And it was only when the framing of the experiment was changed that this shifted. What happened was, one time the experimenters hung the sticks from the ceiling of the gibbons’ enclosure, and immediately the gibbons reached up, took the tools, and used them to get the food, because gibbons are brachiators—they spend most of their time in the trees. And so they have a body pattern and an awareness that’s upward focused. So it was only when we created a situation in which they would employ their form of intelligence that we were able to recognize it as a kind of intelligence. And that’s a demonstration of our own blindness to other forms of intelligence. But it also emphasizes this embodied nature of intelligence that I’m talking about: it matters what kind of body you are in. And the recognition of intelligence is a relational process, one that allows us to make meaning out of it and to relate to one another by understanding it as something that emerges out of relationships.

EVL: There’s a lot of fear around AI, with people like Elon Musk, for example, saying things like it’s “summoning the demon”—which is the term you used in the book—and that it could destroy us, partly because it could become unknowable and uncontrollable and, thus, dangerous. You could say that equation lies at the core of many of our fears about technology and, historically, about the broader physical world. In the book you challenge this equation between the unknowable and the dangerous and say that we need to embrace the unpredictable or unknowable. That would have broad implications not just for our relationship with technology; perhaps it could also help shift our relationship to the more-than-human world, and be very humbling, as we recognize we’re not really as in control as we think we are.

JB: It’s really worth maybe casting back a little bit to some things that I’ve written about before. Before this book, I wrote another book, called New Dark Age, which was more centrally focused on technology. And one of the things that I investigate in that book quite intensely is this situation of unknowing: the fact that we exist, just at the technological level, in the middle of very large, complex systems that we do not fully understand; that no one person can fully understand—they are of a scale and complexity that is inherently mystifying to us. We will never fully understand their operations. And that is, or can be, a very frightening place to be, because it reduces our agency; it makes our every action within that system deeply precarious. What I realized in writing Ways of Being was that a lot of that unknowingness that I was writing about—that gap in our awareness—was also present in the natural world in more interesting and useful ways. In New Dark Age I made the case that we also need to trust that unknowing to some extent, because it is a decentering of the human. It’s a necessary admission that we can’t, in fact, know and control everything, because one of the greatest causes of misery in the present is the overriding desire on the part of some to control the world, to control everything they can and, thus, to profit from it. That’s a demand for domination, and that’s at the root of most human evil, I would say. And so an admission that we cannot control everything and we cannot know everything—because knowing itself is always a form of control, a form of domination—is a necessary step in acknowledging that we are not at the center of things; that actually we belong to a more-than-human chorus; and that if we are prepared to listen to it, rather than to know it in this aggressive fashion, we can live alongside it more hopefully.

It’s very telling to me that the greatest jeremiads against the dangers of AI come from the people who are actually making it and making huge amounts of money out of it, because of course they are the people who have the most to lose if the kind of terrifying AI they envision were to take over. But really it’s more of a psychological insight into the way that they see the world, because they can only imagine other intelligences as being as dominating as they are. They can only imagine intelligence as the urge to dominate and control. If there’s one thing one learns by looking at the intelligences of nonhumans—and, in fact, of non-rich people in many forms—it’s that that is not the only way to be intelligent in the world.

EVL: You talk about how the computational environment and the natural one are perhaps not as distinct and separate as we might imagine, and how acknowledging that the computational environment exerts a transformative influence upon us, in a similar way as the natural world once did, could allow us to realize several important things: (1) that this influence matters, (2) that the computational environment is continuous with the natural one, and (3) that acknowledging the reality of the technological landscape in which we’re embedded might allow us to reimagine our relationship with the biosphere. Could you talk about this and about this reimagining?

JB: Once again, that’s a concatenation of quite a lot of points into one, but I’ll try and back up a little bit.

EVL: That’s a big question.

JB: Of course. It is the one that I try to answer in the book. Again, we live inside this myth of technological superiority that is predicated in large part on the separation of ourselves from the environment, as though those are two entirely separate things. And the myth of technology supports that, because it tells us that we can become autonomous from the environment. So much of the mostly unconscious but often conscious intention of the way that we build technologies is to separate the human from the environment, in so many ways. And that goes for nondigital technologies too: most of the clothes we make, and all these practices that we engage in all the time, do similar work. But network technology, modern high technology, takes it to the cognitive level, where it says that we can think and live entirely separate from the world around us. And as I said in talking about the way that technology actually operates, as something that is embedded in the world and implicated in it in so many ways, that’s an illusion, and it’s an illusion we really need to break with.

One of the operating principles of my work is that we have all of these metaphors in our heads of the ways in which we relate to the world, and what I try to do is unpick some of these dominant metaphors to show how they actually, if considered another way, allow us a different access to the world around us. So every time we build some kind of tool that is intended to separate us from the world, I always feel that if you parse it slightly differently, it reveals within itself this constant reinforcing reconnection to the world.

EVL: You spoke about Alan Turing earlier and his influence on the development of modern computing. And it was interesting, because I learned in the book that at the same time he developed what became known as the Turing machine in the 1930s—the automatic machine, which, as I understand it, is given instructions and tasks that it completes or computes, and which has become the basis of modern computing—he developed a different kind of idea for a machine, which he called the oracle machine. Could you talk about these two different machines and why the oracle machine might be so relevant to us now?

JB: It’s just so endlessly fascinating to me to go back right into the moment of birth of the modern computer and see, laid out right there in a couple of Turing’s original papers, this entirely alternative vision that was almost never followed up in any way. When Turing first describes what we now know today as the Turing machine, which he called the automatic machine, he leaves hanging an alternative vision. It is really important to understand that what we call the Turing machine, and he called the automatic machine, accounts for almost all computers today—like, 99.9999 percent of computers today. Your laptop, your phone, the computer we’re talking through now, the ATM, the flight-control system, even the biggest supercomputers in the world—they’re all Turing machines, and they’re all one type of machine. They’re one possible way of thinking the world.

Turing called it an automatic machine because it’s simply a machine that acts out a step-by-step process. It does whatever you tell it to do, in Turing’s words. That also makes it a kind of closed system, right? It just takes a set of instructions and steps through them, and it has very little awareness of anything outside the world of its own programming and the set of its controls. But at the same time as he defined the automatic machine, Turing also mentioned, incredibly briefly, this thing called the oracle machine. He literally says we will not say anything more about the oracle machine except, of course, that it cannot be a machine, which is an incredibly brilliant paradoxical statement. But the oracle is literally something else communicating with the computer. So instead of just having an automatic machine—this completely self-contained computational object, a kind of tiny set of instructions in a box—you have something that is capable of communicating with the wider world; not just communicating but listening to it, taking some kind of prompt from it. And this is the kind of computation that was subsequently explored by fields like cybernetics and various kinds of robotics: basically, computer systems that tried to look to the world around them to understand something.
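
To make “a machine that acts out a step-by-step process” concrete, here is a toy automatic machine in Python. It is an illustrative sketch, not Turing’s own formalism: a fixed rule table, a tape, a read/write head, and nothing else, with no input from anywhere beyond its own instructions.

```python
# A toy "automatic machine": a fixed table of rules, stepped through blindly.
# (state, symbol) -> (symbol to write, head movement, next state)
RULES = {
    ("scan", "0"): ("1", +1, "scan"),  # flip 0 to 1, move right
    ("scan", "1"): ("0", +1, "scan"),  # flip 1 to 0, move right
    ("scan", "_"): ("_", 0, "halt"),   # blank beyond the input: stop
}

def run(tape: list[str]) -> list[str]:
    state, head = "scan", 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        write, move, state = RULES[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        head += move
    return tape

print(run(list("01101")))  # ['1', '0', '0', '1', '0']
```

Everything this machine will ever do is fixed in RULES before it starts; that closedness is exactly what the oracle machine breaks.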

My classic example of an oracle machine is, quite appropriately, a random-number generator. One of the real problems of computer systems is that they can’t generate truly random numbers, because of course they’re just stepping through a set of automatic processes, and you can’t create randomness—i.e., something completely unexpected—by going through a set of programmed, expected steps. And random numbers are really necessary. They’re needed for cryptography, for credit card transactions, for lotteries, for example. And people have come up with all kinds of weird ways of generating randomness. There’s a reason lotteries still use those ball-juggling machines, but they also do other things. There’s a series of British computers for picking lottery numbers that used neon tubes. You connect the computer to a neon tube and you measure the electrical flux within that tube, because that tube is connected to the universe. The electrical flux within the gas in that tube is affected by radiation passing through it, by cosmic particles arriving from space. You’ve connected the computer to the universe, and it’s listening to the universe to tell it something that it can’t arrive at through a step-by-step process. That’s an oracle machine, right? It’s a computer that acknowledges its connection to the universe, to the world around it.
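
The same distinction survives in everyday programming. In the sketch below (standard-library Python only), a seeded pseudorandom generator behaves like an automatic machine: rerun the program and it emits the identical “random” sequence, because it is only stepping through instructions. A call like os.urandom instead asks the operating system’s entropy pool, which mixes in noise from outside the program, such as device timings and hardware events; in a small way, the program is listening to the world.

```python
import os
import random

# Automatic: a seeded generator replays the same sequence on every run.
prng = random.Random(42)
print([prng.randint(0, 9) for _ in range(5)])  # identical every time

# Oracle-ish: the OS entropy pool draws on events outside the program,
# so these bytes differ from run to run.
print(os.urandom(8).hex())
```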

For me that’s a kind of refutation of everything we’ve built into almost all the computers that we have; because it shows the way in which all the computers that we have, and thus the world that those computers have contributed to building, are blind in all these crucial ways to the truth of the world. That the world is composed of interrelationships. That the world is more than human, more than human intelligence; that it is an ecological world. Computers have ignored that, most of them, since the foundations of computation. And that’s one of the big reasons we have the kind of world that we have. But as I wrote in the book, and as Turing suggested all those years ago, an entirely different form of computation is possible. And also—through my thinking, and through this fact that the technological is actually continuous with the rest of the world—another kind of thinking is possible for us as well. Computation has come to define the way that we think. We think so much like computers today because they’ve defined what is thinkable. And so for me, rethinking the computer rethinks what is computable, and therefore rethinks what is thinkable at all about the world.

EVL: It makes us blind, but it also leads to violence. You talk about this in the book, especially the reduction of the beauty of the world as a form of violence, and how numbers and data contribute to this, which obviously in turn leads to exploitation and destruction. Part of this happens when computers make models of the world, which of course make it abstract and then distant. Can you talk about this and how we could move away from this violent abstraction in our models and machines?

JB: Again, another of these properties of things like the Turing machine—like all computers, essentially—is that they think the world in ones and zeros; they think the world digitally. And the world is not digital; the world is analog. And I don’t mean that in terms of fuzzy records versus clean MP3s or something like that, though there is a corresponding kind of reduction in quality. But this is not some kind of nostalgic appeal. It’s simply the fact that the world is not divided into ones and zeros. And when you try and put everything into ones and zeros, something is lost. What happens in between those ones and zeros is lost, and the result of that is a deep violence, because what is lost is either erased or violently suppressed; because then you’ve started to act in the world according to the model that the computer provides, and you try to make the world more like the model. We are model-building creatures. It’s the way that our consciousness operates. We’re continually building a model of the world and essentially trying to make the world more like the model and the model more like the world—it’s what’s called being normative. We try and get those models to converge. But those models don’t really converge: one of them is digital and one of them is analog, or one of them is internal to a machine and one of them is the world itself. And so a huge amount of violence is the result of that. It is a violence done as much to our own ability to think as to the external world.
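
One way to see what is lost “in between those ones and zeros” is to quantize a smooth signal and measure the error. This standard-library sketch (an illustration, not anything from the book) rounds a sine wave to a handful of binary levels; the residue is precisely the part of the analog signal that the digital model cannot hold.

```python
import math

def quantize(x: float, bits: int) -> float:
    """Snap a value in [-1, 1] to the nearest of 2**bits evenly spaced levels."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)
    return round((x + 1.0) / step) * step - 1.0

# A smooth "analog" signal: one cycle of a sine wave.
samples = [math.sin(2 * math.pi * t / 100) for t in range(100)]

for bits in (1, 2, 4, 8):
    worst = max(abs(s - quantize(s, bits)) for s in samples)
    print(f"{bits}-bit model: worst-case error {worst:.4f}")
```

More bits shrink the gap but never close it; the model and the world converge without ever meeting.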

There are alternatives to this way of thinking and looking at the world. One thing I explore in the book quite extensively is this idea of analog computing. The 0.0001 percent of computers that aren’t Turing machines are incredibly fascinating and diverse. An example of an analog computer, one of my favorites, which I first encountered in the Science Museum in London when I was a small child, is a thing called the MONIAC. The MONIAC is a computer for simulating the British economy, and it’s made largely out of water. It’s the size of a big refrigerator, and it has a tank of water at the top and various pipes running down it. And then there are buckets marked with things like “personal savings,” “public spending,” “tax revenue,” and there are lots of little taps, little valves, that you can turn to tune things like various tax rates or the quantity of imports and exports. What is flowing through this computer is not ones and zeros, but water. It’s this beautiful enactment of a bunch of economic metaphors, really. We hear about the fluidity of markets and these kinds of things because they are real qualities. Even as much as the market itself is a kind of inhuman abstraction, it does reflect the flow of the world itself, the kind of chaos that it presents. And it turns out that the MONIAC modeled this better than any other method that had been developed at the time. It was originally built as a teaching tool at the London School of Economics, but it proved to be so successful at modeling the economy that versions were built and used in government departments to actually develop budgets, and so on and so forth.
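
For a feel of how a stock-and-flow machine like the MONIAC computes, here is a loose sketch in Python. The tank names, rates, and update rule are invented for illustration (the real MONIAC embodied Bill Phillips’s hydraulic model of the British economy in actual water), but the principle is the same: quantities flow continuously between reservoirs through adjustable valves.

```python
# Hypothetical stock-and-flow toy in the spirit of the MONIAC.
tanks = {"income": 100.0, "savings": 20.0, "treasury": 10.0}
TAX_RATE, SAVINGS_RATE, SPENDING_RATE = 0.25, 0.10, 0.30  # the "valves"

for week in range(3):
    tax = tanks["income"] * TAX_RATE        # income drains into taxes...
    saved = tanks["income"] * SAVINGS_RATE  # ...and into savings
    tanks["treasury"] += tax
    tanks["savings"] += saved
    tanks["income"] -= tax + saved

    spending = tanks["treasury"] * SPENDING_RATE  # public spending flows back
    tanks["treasury"] -= spending
    tanks["income"] += spending

    print(week, {name: round(level, 1) for name, level in tanks.items()})
```

Turning a valve (changing a rate) reshapes every downstream flow at once, which is why such machines made feedback visible in a way columns of figures never did.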

There’s a point in there that’s very critical for thinking about how we make technology, which is about making things that are legible—because you can stand in front of that machine and understand how it works. And that’s not true of most of the machines that we build. So there’s something very powerful in having access to a technology where you can literally see how it works, and that changes how you interact with it. That’s a very rare experience these days. But I also think it’s really important because it is a nonbinary computer. It’s a computer that recognizes the chaos and flux of the world rather than trying to split it and condense it and reduce it to a lesser representation in ones and zeros. You can also take this further, to something that connects even more explicitly with the world, which is where a discipline like cybernetics comes in. Cybernetics has taken many forms, but one of its core understandings is that cognition, the brain, intelligence, is not a static thing: it’s something that is performed and done in the world and responsive to the world; and so it’s something that’s changeable and flexible and that evolves over time. That kind of evolution, that kind of flexibility, is something that natural systems are capable of in ways that technological systems really have never been, and probably never will be, if they are entirely constrained and contained within boxes. But by connecting these systems to larger, nonmechanical, nonhuman systems, you can bring in some of that awareness and thinking. Some of the stories I tell in the book are about people like Stafford Beer, who attempted to build computers involving ponds and tiny aquatic organisms, trying to get them to interact with complex computer systems as a way of counteracting this conservative, constrained impulse of digital computation.

EVL: You offered so many interesting examples of ecological machines, but you also broke down what you propose as a better way to approach creating machines. You mentioned nonbinary systems, but you also talk about the importance of decentralized systems and of unknowing, which is related to randomness. Could you unpack the importance of nonbinary, decentralized, and unknowing-based systems and what these would offer?

JB: This is really a response to the way that I’ve come to understand how our contemporary computational systems operate, which is that they are centralizing and binary, and are based on systems of domination and control. The way in which we build computers today is largely centralizing. It centralizes power within certain computer systems, and those computer systems are owned by large corporations or nation-states. And so the effect of most of our technology at present is to give power to the already powerful, not just by centralizing that power but also by centralizing the knowledge of how that power operates, so that very, very few people, as I said before, really understand how the things that we use every day actually work. That’s incredibly dangerous, because it means power is handed over to the very small part of society that operates and owns these systems. And our agency, the agency of everyone else, is massively reduced, because we can only do the things that are permitted to us by these systems, which are controlled by other people. That centralization, which is part and parcel of most of the way that we build technological systems today, is incredibly damaging.

The first principle I have of more just and equal ways of thinking about building technologies is decentralization, which means spreading the tools of computation as broadly and widely as possible. And of course education, then, is a huge part of this. Any technological problem at sufficient scale is really a political one, and the political problem of decentralization is largely one of education. It’s not enough just to turn these tools over to people. We also need an education process, hopefully a collaborative one, in the use of those tools, so that people can imagine them anew. A couple of examples of decentralized technology that I always come back to are things like the open-source movement, the movement within programming that allows anyone to read the source code of programs, which sounds like an obvious thing to do but actually is not how most software is produced. Most software is proprietary. You don’t know how it works, and you can’t know how it works, because you can’t see the source code. By opening that up, you allow people to understand how the things around them work, but you also decentralize the knowledge of how they work. And because being able to see the code is also a way of learning not just about that particular piece of code but about how all code functions, you’re decentralizing a much broader understanding and literacy. And of course that decentralization is a political act: you’re spreading the power literally among more and more people. On the old web that we used to use, you could always view the source code. You can still see the source of some web pages, but most of the operation of the internet now is basically closed source. You can’t look at the “view source” of a website and see how it works anymore, because that’s not how the web works—by design of the powerful. So that centralization process is still ongoing, even through something as apparently decentralized as the web, which has become incredibly centralized.

The second thing is this question of nonbinariness. As I said, binary computation is the way we’ve come to think all computation should be, and hence all thinking. It lies behind so much of our construction of society, and thus also the construction of hierarchies within those societies. Because when everything has to be assigned a role within a binary system, the relationships between things in that system become immediately hierarchical, because there’s no other way to differentiate between them. And so an insistence upon the nonbinariness of our thinking, following from the possibilities of nonbinaries within computation, changes our relationships to everything within our own society and species: towards things like gender and sexuality, crucially, but also all other relationships between people, and relationships towards the world. There’s a big part of the book in which I talk about how our notions of things apparently as big and heavy and weighty and historical as species are really starting to fall apart. Our understanding of even how species are separated through evolutionary processes has come under incredible sustained attack within the last few decades, as we come to understand things like horizontal gene transfer and viral gene transfer, the fact that DNA does not only get written down the line through parental reproduction but can actually be rewritten across and between individuals. Not only have we started to lose our ability to draw such fine lines between species, we’re even starting to lose the concept of an individual at all. We know now that we are not singular, atomic individuals; we are ourselves walking, multispecies assemblies, and the language of binariness just doesn’t hold at almost any level—at the individual level, at the species level, at the planetary level. And so to continue to build our machines—through which we categorize, understand, and think the world—on a binary model doesn’t work anymore and requires total rethinking. That’s where the nonbinary requirement comes in.

And finally, the unknowing is a way of thinking the world without seeking to know it in this form of domination and control that’s deeply rooted in Enlightenment ideas and also in much of science. The ways we have of knowing the world destroy the world: whether that’s our colonialist, imperialist ways of overruling non-Western cultures with a Western way of viewing and understanding them, or the scientific practice of splitting everything into its component parts and understanding the world mechanically, knowing literally destroys the world in the process, and comes to dominate and rule what remains. We need another way of thinking about the world, another way of coming into understanding and agreement with it, of becoming with it, that doesn’t depend on knowing it in this kind of destructive, domineering way. And that’s why I talk about unknowing. Unknowing is not the same as ignorance. It’s not a blindness to the world, but it’s a refusal to project our own forms of thinking directly onto it in a way that obscures its actual reality.

EVL: When you talk about unknowing in your book, it evokes this sense of the importance of embracing mystery, which is a foundation of so many spiritual and mystical traditions and ways of being—the unknowable that creates awe and humility in how we relate to the world. And you don’t speak about the spiritual or mystical side directly, but it seems to be an underlying theme throughout your book—the spiritual side of needing to see how we relate to the world—that these relationships and technologies might need to be spiritual as well.

JB: It’s something that I think about a lot and struggle to write about. But to get to my own experience, let me use the example of someone who I think you’ve—I don’t know if you’ve had her on the show before, but you’ve certainly mentioned her—Monica Gagliano, whose work was hugely influential to me in understanding many things, but particularly the fact that it’s possible to approach an understanding with the world through multiple ways of understanding it, multiple approaches. Her experiments on plant memory are rigorous scientific experiments, designed according to the scientific method, that allow her to make extraordinary, powerful, and persuasive claims about the abilities of plants to remember and to do all kinds of other extraordinary things. And they’re constructed in such a way that they fit within the scientific method—they’re reproducible, they’re peer reviewed—but she’s also very explicit about the fact that she has a shamanic practice and had spiritual communication with the plants as well, and that this informed her ability to communicate with the plants in order to prepare them and work with them on the scientific side.

And I, too, have had encounters with plant spirits through the use of ayahuasca that deeply informed this writing. But even I struggle to bring those different ways of thinking and knowing together. That’s a huge part, I think, of what I’m doing in the book: finding a way within some of the discourses that I’m more familiar with and that I’m more comfortable writing in—and that a lot of other people might be able to follow more clearly—that also expresses what I consider to be the deep interconnectedness of everything on an entirely different dimensional level. That is absolutely fundamental to what I understand. But I’ve come to it, like Gagliano, along these multiple paths. I come to it through an open spiritual encounter with the world around me. And I also come to it through thinking through these deeply human technological models of thinking the world. To me they point towards the same thing, and for me that strengthens or empowers both approaches in a really powerful way, that you come to a lot of the same conclusions about our relationship with the world if you think deeply along any one of these lines. But the fact that they all converge seems to me to be the most powerful quality that they have.

EVL: Towards the end of the book, you write about solidarity, which you describe as “that form of politics which best describes a yearning towards entanglement, to the mutual benefit of all parties, and sets itself against division and hierarchy,” that we must “declare solidarity with the more-than-human world,” and that “solidarity is a product of imagination as well as of action.” What does solidarity with the more-than-human world look like to you?

JB: It looks like the awareness and care and attention that I’ve been maybe hinting at throughout this conversation. First of all, it implies a deep, deep equality. It says that there is no justification, no place, for any kind of human superiority in our relationships. But crucially, solidarity also removes the requirement for knowing, in all the dangerous ways I’ve been describing. There’s a common understanding that it’s only possible to really care for things that one understands on some kind of deep level. This is related to the failure of human empathy, the fact that we are only really capable of caring deeply, it seems quite often, for quite a close, narrow circle. And when people are far away from us in distance or in cultural experience, we seem as a society to find it harder to care for them, because we cannot imagine ourselves into their experience. But it is deeply necessary for us to care for and to think with all the forms of being whom we cannot imagine ourselves into. This is part of the unknowing that I was talking about earlier, because it’s impossible for us to know what it’s like to be so many other forms of being. And yet we have the same goals in mind, which are surviving and thriving on this planet. And we share this world.

These are the lessons that I draw from my understanding of the extraordinary abilities and life worlds of nonhuman beings: that as much as our worlds differ radically, we share a world. We inhabit the same world as all of these nonhuman creatures. That is our deep and shared connection. And for me, once we start to acknowledge the reality and the agency of nonhumans—just as we’ve always struggled to acknowledge the agency and beinghood of many humans—it is necessary to build a politics that enfolds all of that. And by “politics,” more broadly, I mean the ability to think and make decisions together, hopefully for our common benefit. And the politics that best fits that, for me, is this idea of solidarity, which simply starts from the position that you—unknowable you; unknowably, incredibly different you, whom I cannot imagine—I still care for you and value you and think you are as important, and I will stand with you. That, for me, is the heart of solidarity. It’s a simple acknowledgment of the value of all forms of life and of the common, shared goals that have to lie at the heart of any movement towards a more just and equal world.

EVL: And solidarity now means ecological politics just as much as it means ecological technology, right? There’s a demand for that, to embrace the more-than-human in the political system and the technological system that we’re being thrown towards realizing.

JB: For me there’s just no meaning to having a discussion about changing technological systems, changing political systems, without the acknowledgment that we have to include more-than-humans in those arrangements. Because that’s the absolutely necessary step beyond acknowledging that humans are not the only game in town: that human supremacism has to be done away with. We have to work out how we go on together. How do we make decisions together? In the book I explore quite extensively the world of nonhuman politics in all its forms. I look at the ways in which nonhuman animals do politics within their own societies, whether that’s the decision-making processes of herds of deer or the ways in which bees perform a form of direct democracy to make decisions within their hives.

EVL: The waggle dance.

JB: The waggle dance, exactly—this incredibly complex way of communicating information that allows a large group to make a decision, and a remarkably effective decision-making strategy for bringing as many different points of view as possible to bear upon a choice. There are all these ways in which animals do politics, and there are all these other ways in which animals, nonhumans, have done that politics in relation to humans over time. I look at medieval animal trials—early examples of having nonhumans within the human justice system. I look at the history of animals within captivity and the ways in which they demonstrate their politics through resistance to that captivity. And so if we are beginning to acknowledge the fact that nonhumans have all of this agency and that they do politics in all kinds of ways, then we have to develop a politics that includes them. I think that’s absolutely necessary. It’s an absolute foundation for any kind of ecological justice and collective future flourishing. If we don’t put nonhumans on the same level as humans within our decision-making processes, then all of that is meaningless. A fundamental step towards a brighter future is a more-than-human politics that is as broad and inclusive as we can possibly imagine, and probably more so.
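
The bees’ strategy can be caricatured in code. The toy model below is loosely inspired by Thomas Seeley’s honeybee research; the site names, qualities, and quorum threshold are all invented. Scouts recruit one another in proportion to a site’s quality and to how many scouts are already dancing for it, and the swarm “decides” when one site reaches a quorum, without any individual ever comparing the options directly.

```python
import random

rng = random.Random(7)
quality = {"hollow oak": 0.9, "wall cavity": 0.6, "old hive box": 0.4}
supporters = {site: 1 for site in quality}  # one initial scout per site
QUORUM = 30

while max(supporters.values()) < QUORUM:
    # One uncommitted scout follows a dance, with probability proportional
    # to site quality times the number of scouts already dancing for it.
    weights = [supporters[site] * quality[site] for site in quality]
    chosen = rng.choices(list(quality), weights=weights)[0]
    supporters[chosen] += 1

winner = max(supporters, key=supporters.get)
print(f"swarm chooses the {winner}: {supporters}")
```

Better sites tend to win because stronger dances recruit faster, a distributed comparison performed by the group rather than by any single bee.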

EVL: I have just one last question for you, James, and that’s about optimism and hope, which you wrote about in one of your latest blog posts, titled “Hope Needs a Place to Perch,” where you say that “optimism, pessimism, hope, and despair are not useful ways of thinking about the present crisis.” So talk to me about this, and about what hope needing a place to perch means.

JB: It’s nice that you’ve picked up on that, because it’s something that I’m still thinking through and trying to understand myself. The book that I mentioned earlier, New Dark Age, which concentrated on technology, contained quite a lot on climate, including a chapter on the climatic implications of high technology. And I learned a lot in writing that, a lot of things that terrified me, to be completely honest. And it seemed to strike a chord with readers as well; it’s something that gets cited back to me quite a lot. A lot of the work that I’ve been doing ever since has been a process of addressing and attempting to understand and deal with my own climate grief and trauma, which I know to be entirely real things, because I have experienced them and I continue to experience them in all kinds of ways. And so part of this book was looking for ways through that, ways of understanding that.

The thing that I refer to a lot in both books is this concept of agency. Agency, for me, is the ability to know where one is, to understand one’s own position, to know one’s own capabilities, and to have the ability to affect and change one’s own life, in as broad a range of ways as possible. It’s something that is under huge pressure for most of us, something that is actively taken away from us in many, many ways, and something that we struggle to realize. There is a crisis of agency in the present moment, an effect of capitalist systems of control and of the kinds of technology and systems we have to engage with all the time that limit us in so many ways. And I feel that the collapse of political consensus and meaningful politics at a state or global level—the horror that strikes us, and the sense of fear and anger that results, which is the dominant tenor of the world today—is a result of that loss of agency.

So how do you rebuild that agency? I do it through doing what I do, which is trying to think about and understand things in the world: by learning about things and, hopefully, listening, paying attention to the world around me, putting some of these ideas together. Those, for me, are ways of rebuilding agency, and that’s my response to climate trauma and grief as well, it turns out. It’s been the most effective thing for me. A good example of that is, a few years ago I started tinkering around with solar panels. Really, really simple stuff, just learning to wire them up and so forth, going back to basic electronics. But I discovered within that the same expansion of agency that I did when I first started tinkering with computers; I felt I had an agency within this complex system. Now, I don’t think solar panels alone are going to save us or anything like that, but I felt for the first time that I was engaging with something that mattered within this complex environmental, ecological system that up to that point had only been something that was essentially oppressing me, frightening me, worrying me, and knocking me down. And that work has continued through all kinds of making. I spend a lot of my time now working on forms of really simple regenerative technology, things like permaculture—things that I do in part because I think they will help in some abstruse way. I think that’s the least one can do. But also because they are psychologically comforting, in the sense that they build my sense of agency.

Without that sense of agency, we’re incapable of doing anything at all. That is the place that hope needs to perch, the thing that I’m trying to articulate: that hope without any foundation of actual agency, any thinking, any knowledge, any ability to make or do change in various ways, is a meaningless thing. Like optimism. They’re not quite the same thing, but for me they’re closely related, in that they’re just words without some kind of basis of action. And that action can be as simple as building a little wooden box in your garden to purify water, which is something that I’ve been doing this week. Or it can be as big as developing computer programs to analyze satellite photographs. Whatever it is, it doesn’t have to be the one piece that will save the world, but it definitely has to be something that increases one’s own psychological sense of the ability to make change, which is a prerequisite for any other kind of hope or optimism that we might hold.

EVL: It’s been a pleasure speaking with you today, James. Thank you so much.

JB: Thank you very much for having me.
