Emergence Magazine

Illustration by Vartika Sharma

ChatGPT: A Partner in Unknowing

by Dana Karout

Writer

Dana Karout is a leadership trainer and researcher working at Harvard and across the US and the Middle East, who teaches and coaches based on the principles and practices of adaptive leadership. Her work aims to build the capacity of individuals and communities to hold conflict and navigate complexity across various levels of authority. She holds a Bachelor of Engineering from the American University of Beirut and a Master in Public Policy from the Harvard Kennedy School, and is an incoming PhD candidate at UC Berkeley, where she will research the pedagogical and theoretical implications of generative AI.

Artist

Vartika Sharma is a collage artist and illustrator based in New Delhi, India. Inspired by surrealism and symbolism, she creates emotive images through bold compositions. Her digital and print work has appeared in The New Statesman, The New York Times, The Atlantic, The Atavist Magazine, The New Yorker, and elsewhere.

Poking fun at how ChatGPT mirrors our limited ways of thinking, Dana Karout challenges our “default programming” and asks what true creativity, what real responses to our moment of crisis, might emerge from our unknowing.

FOR A LONG TIME now, whenever I sit down to write freehand, I find myself automatically jotting down the phrase “Hello, World!”—a reference to the first program typically written by novices learning a new programming language, which outputs this friendly message to a computer screen. This tradition was popularized through Brian Kernighan and Dennis Ritchie’s seminal book, The C Programming Language. In many ways “Hello, World!” has been a rite of passage that serves to make computers and programming more approachable, a sentiment I still recall from my first coding class. Having inadvertently begun this introduction with “Hello, World!”, I am realizing the relevance of that phrase in the context of this essay—it has become an image of my own automaticity.
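For anyone who never took that first coding class, the program itself is almost comically small. A minimal sketch in modern C, close in spirit to the version Kernighan and Ritchie made famous (theirs printed a lowercase “hello, world”), looks something like this:

    #include <stdio.h>

    int main(void)
    {
        /* print the traditional greeting to the screen */
        printf("Hello, World!\n");
        return 0;
    }

A handful of lines, and the machine greets you back; that smallness is much of why the ritual feels so approachable.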

While my path has veered away from programming, I find myself consistently employing computer metaphors in my work. I guide people in recognizing their “lines of code”—a term introduced to me in the context of personal analysis by my teacher and mentor in adaptive leadership, Ron Heifetz, meaning our “default programming.” I wonder whether, prior to the age of computers, there existed a metaphor as easily graspable for our automatic beliefs and behaviors as that of the computer. According to Heifetz, the unresolved dilemmas of our ancestors become a “module of code,” while fundamental human social needs—like validation, belonging, or control—form the “operating system.” But the computer analogy has its limits, because humans can have agency. When we step back from our default coding, we can choose differently, stepping into the unknown rather than the familiar. The computer, hopefully, mirrors back to us something about ourselves only some of the time. But how much of the time is some of the time? The more I work in the world of helping people understand their defaults, and the more I delve into my own, the more time I see we spend in the grip of that metaphor.

Then came ChatGPT, with its giant, bolded, all-caps, frightening “HELLO, WORLD!”—a hello both decades in the making and completely out of the blue. Of course, one could see it coming as soon as computers shrunk in size and started flashing greetings at us, but how uniquely jarring it is when things move from the world of our fantasies to our reality. At first, I was terrified. I had read the headlines warning of its dangers, its potential to deceive and manipulate, its threat to our humanity. I hesitated to create an account, as if giving it my name and email would expose me to some vaguely menacing future peril. Then, I was reassured. It was impressive but riddled with errors. Often, it sounded like a teenager who had just discovered the thesaurus feature on Microsoft Word. Like many, I made fun of it. We were still “smarter,” we still had something it could not replicate.

A few months after the release of ChatGPT, I attended a conference that brought scientists, academics, and journalists together to discuss the climate crisis and new approaches to analyzing our interactions with the biosphere. Inevitably, the conversation drifted towards the impact of AI on our future. As my colleagues spoke of readying themselves for the apocalypse—of hospital records being leaked, and of millions of jobs becoming redundant—despair began to color the conversation. I was uneasy but could not locate the source of my dissatisfaction with it.

That is, until one of my colleagues asked, and I do not exaggerate this, “What’s the point of living if AI can have better ideas than I can, quicker?” The incredulity I felt hearing this question, and the smart answers I came up with (family? friends? trees?), quickly gave way to the notion that he was pointing to something profoundly disturbing in our culture that could be grasped in our reactions to and interactions with ChatGPT. It struck me that ChatGPT itself could probably simulate the conversation we had been having around its dangers to a reasonable level of accuracy, and later that night I confirmed that hypothesis. But what it could not simulate was the fear behind my colleague’s very human question, which inadvertently had pointed me to the real source of the group’s despair: This wasn’t about ChatGPT. It was about us.

See how we swing from excessive hope to excessive despair? Each new op-ed or conversation pushing us one way or the other. I thought of the advice that Houman Harouni, my teacher and colleague, would give at times like this: “to go back and forth between an icy plunge into despair and a rising into the heat of hope—to remain awake to both feelings at the same time.”1 And so, in that space—and through an experience in our classroom—I began to see ChatGPT in light of the richness it could truly offer us. Rather than giving us answers, generative AI could help take them away.

LAST SPRING, I was part of a teaching team of eight who were working with a group of sixty students to explore the premise that, for some questions, unknowing, rather than knowledge, is the ground of thought we need. ChatGPT was our partner in that endeavor. In a case study we presented to the class, a teenager—pseudonym Jorge—was caught with a gallon bag of marijuana on school grounds. He faced expulsion from school if he were reported to his parole officer. Meanwhile, not reporting him would be considered breaking the law. We asked our students to design a course of action, imagining themselves as the school’s teachers and administrators.

They drew on their academic knowledge and professional expertise. They debated the pros and cons of different options, such as reporting Jorge to his parole officer, offering him counseling, or involving his family and community. They were well-versed in speaking to the broader context of the case: the racial and socioeconomic disparities in the criminal justice system, the effects of drug prohibition, techniques of harm reduction, and the role of schools in fostering social change. Their answers sounded sensible, but the situation demanded real labor—it demanded sweat rather than sensibility, and there could be no sweat till their answers mattered.

An hour into their conversation, we presented the students with ChatGPT’s analysis of the case study.

ChatGPT suggested that we “initiate a review of [the school’s] existing policies and procedures related to substance abuse, with the goal of ensuring they are consistent, transparent, and reflective of best practices.” It elaborated that “the school should take a compassionate approach [but] also communicate clearly that drug abuse and related offenses will not be tolerated,” and that “this approach should be taken while ensuring that the school is responsive to the unique needs of its students, particularly those from low-income and working-class backgrounds.” That is, ChatGPT didn’t say much that was useful at all. But—as the students reflected in their conversation after reading ChatGPT’s analysis—neither did they. One student noted that they were just saying “formulaic, buzzwordy stuff” rather than tackling the issue with fresh thinking. They were unnerved by how closely the empty shine of ChatGPT’s answer mirrored their own best efforts. This forced them to contend with whether they could be truly generative, or whether, as some of them put it, they were “stuck in a loop” and had not been “really [saying] anything” in their discussions.2 Suddenly, their answers mattered.

The students’ initial instinct to regurgitate what they were familiar with, rather than risk a foray into unfamiliar propositions, says much more about the type of intelligence our culture prioritizes than about the actual intelligence of our students. Indeed, some of our best students, who go on to attend our most prestigious institutions, are rewarded for being able to synthesize large amounts of information well. However, as I came to realize, the high value we place on this capacity to efficiently synthesize information and translate it to new contexts risks producing hollow answers to questions with real human stakes, including the most existential of our challenges.

Rather than giving us answers, generative AI could help take them away.

WHEN I FIRST read technologist James Bridle’s work, I began to understand why this type of intelligence is so rewarded, especially when we take a step back and look at the broader field of computing and technology. As Bridle and others in media ecology have explored, technology shapes the way we think, and what we believe to be smart, innovative, or intelligent. In their book Ways of Being, Bridle enters the conversation about which types of intelligences we value by exploring the different types of computers that Alan Turing, the father of modern-day computer science, sets out in his early work. The basis for nearly all contemporary computers is the automatic machine that Turing introduced in his foundational papers. Operating systematically and following prescribed instructions, this machine is self-contained with limited awareness beyond its programming and controls. As Bridle argues, current computers have worked within a definable system, prioritizing structure, commands, and relevant data, and we replicate these in our own thought, even when facing challenges that do not have replicable solutions. Bridle suggests that “rethinking the computer rethinks what is computable, and therefore rethinks what is thinkable at all about the world.”3

ChatGPT and generative AI models work differently from conventional computers—they do not follow a fixed set of rules, but rather learn from the statistical patterns of billions of online sentences. This is why some describe them as “stochastic parrots.” In a recent article for Wired, Ben Ash Blum complicates that critique by pointing to our own predisposition to sounding that way. He says: “After all, we too are stochastic parrots … [and] blurry JPEGs of the web, foggily regurgitating Wikipedia facts into our term papers and magazine articles.” Questioning the limitations of traditional assessments of AI intelligence, called Turing Tests, he wonders: “If [Alan] Turing were chatting with ChatGPT in one window and me on an average pre-coffee morning in the other, am I really so confident which one he would judge more capable of thought?” Our students’ competitive encounter with ChatGPT revealed their own tendency towards “foggily regurgitating,” as well as their sudden sense of inferiority in the face of this technological innovation. What I’ve come to realize is that if ChatGPT is dangerous, as many media sources have described and decried, one of its primary threats is that it reveals, as Blum puts it, that the original thought we hold dear is actually a “complex [remix of] everything we’ve taken in from parents, peers, and teachers.”

Many of us who work in what’s known as the “knowledge economy”—whether in companies, nonprofits, publications, or academia—will intuitively grasp what Blum is referencing. This phenomenon extends far beyond the modern-day university classroom, and none of our professional subcultures are immune to ChatGPT’s mirror, given that enough of our lingo is online. For example, one can ask ChatGPT to rewrite sections of an essay in the style of Emergence Magazine, which I periodically did in the course of writing this (to keep myself honest given this essay’s premise). One result I found striking, and fitting, was its suggestion that I conclude my essay with the following: “As I continue to navigate the ever-evolving landscape of technology and human interaction, I am reminded of the words of poet Rainer Maria Rilke: ‘Be patient toward all that is unsolved in your heart and try to love the questions themselves.’” Would a keen editor or reader stop at this statement’s emptiness, how it says nothing and everything all at once? Or would they gloss over it, forgetting to look for substance as one would with a logo or font that signifies the “style” of Emergence Magazine? I asked ChatGPT to rewrite the same portion, making it “even more Emergence Magazine-y,” and it delivered the following, albeit less realistic, gems:

In the sacred space of the classroom, where minds converge like tributaries in a vast river delta, ChatGPT emerges as a digital oracle, beckoning us to traverse the labyrinthine pathways of ethical exploration. In the alchemical crucible of inquiry and imagination, I find solace in the whispered wisdom of Rainer Maria Rilke, echoing through the annals of time.

A friend, upon reading the above passage, likened it to a hot air balloon, floating far into the sky. ChatGPT can write passages like this that feel disconnected from the human process, from what she called the “salt of life.”

If we are to move beyond solutions that replicate the status quo in our institutions and our thinking, we will have to stretch what we think of as “intelligent” or creative beyond the sort of regurgitation that large language models are now able to do with remarkable ease. Our students put this challenge into action: after this encounter with their own unoriginality through ChatGPT, they moved into a space of creativity. They proposed interventions that went beyond both their and ChatGPT’s initial responses, interventions that sounded absurd at first for how far they strayed from conventional thinking. These proposals ranged from joking that the teachers should join Jorge in smoking weed, thereby exposing themselves to the same legal risks as him, to abolishing schools altogether. In mirroring back to them the emptiness of their words, ChatGPT forced our students to confront how limited their existing knowledge was when applied to this situation. From this space, which I’m calling unknowing, our students felt free to play and experiment with the absurd. They then began to flirt with collective action, which could allow them “to both respect the law and to refuse it.” For example, they proposed “[turning] Jorge in while simultaneously threatening to go on strike if he were expelled—neither acting as mere administrators nor mere saviors. Rather than abolishing schools altogether, shutting down this one school.”4

This experience revealed to me that it is precisely because our societal definition of intelligence so closely overlaps with artificial intelligence that our encounters with AI can illuminate a different path forward when facing difficult questions.

WHILE CHATGPT is a recent phenomenon, the limits of human knowledge and the confrontation of unknowing as a generative force are familiar subjects in literature. In Ursula K. Le Guin’s The Left Hand of Darkness, unknowing is a precious tool. In the world she creates, “foretelling,” a form of religious divination, allows characters called “foretellers” to answer questions about the future posed by visiting pilgrims. The selection of the question is of utmost importance. The entire process is meant “to exhibit the perfect uselessness of knowing the answer to the wrong question” as a way of reminding pilgrims that “the unknown … the unforetold, the unproven, that is what life is based on. Ignorance is the ground of thought.”

In our classroom case study, ChatGPT’s empty response to “what should we do?” revealed to our students not only their own ignorance, but also the perfect uselessness of knowing the answer to the wrong question. The right question for the moment might have then been, “ChatGPT, can you take away all my easy answers?” By easy answers, I mean the first set of generalizations that a mind grasps for when facing a situation in which it risks being ignorant. This is not a literal question for ChatGPT, but an orientation to ChatGPT’s pat responses. This orientation puts the onus back on the question asker to devise answers far more apt for the situation, answers that, as was the case with our students, even hint at the revolutionary. “Can you take away my easy answers?” assumes that ChatGPT’s, or our, first response will not be the final answer, and reveals the bounds of the sort of intelligence that ChatGPT—and our dominant culture—prioritizes. It asks the people with the question to consider what other insights, experiments, and curiosities they might insert into their solutions. In this dynamic, ChatGPT becomes a partner, rather than an authority on what is intelligent or correct.

IF WE TREAT generative AI as a partner in devising better answers for difficult situations such as Jorge’s, then we must also put more thought into which questions require our unknowing—or “ignorance,” as Le Guin calls it—rather than our certainty. Generative AI is based on language that currently exists. It can show us the limits of conventional knowledge and the edges of our ignorance. Yet not all questions require us to venture into the unknown; some can be solved with the tools and expertise we already have. How do we tell the difference? That question has become key in my life. I first encountered it as a student in an adaptive leadership class at the Harvard Kennedy School, and it completely upended all my preconceived notions about leadership.

Adaptive leadership, developed by Ron Heifetz and others at the Kennedy School, distinguishes between two different types of problems: adaptive challenges and technical challenges.5 While the problem and solution of technical challenges are well known—think everything from replacing a flat tire to performing an appendectomy to designing a new algebra curriculum—adaptive challenges demand an ongoing learning process for both identifying the problem and coming up with a solution. Addressing the climate crisis, confronting sexism or racism, or transforming education systems are adaptive challenges. Adaptive challenges, intricately intertwined with the human psyche and societal dynamics, prove resistant to technical solutions. They demand a shift in our awareness. A common leadership mistake, as Heifetz points out, is to apply a technical fix to a challenge that is fundamentally adaptive in nature. For example, we generate reports, form committees, or hire consultants to fix a broken organizational culture, often avoiding the underlying issues of trust that are at the heart of the problem.

In an example from my home country, Lebanon, IMF economists fly in with ideas of how to restructure debt and provide cheap loans—a plug-and-play USB drive with fixes that worked in another country—and they run up against corrupt warlords and a population that continues to elect them while they starve and wait for hours in various lines for bread and gasoline. These technical fixes inevitably fail, and we are tempted to simplify the reasons they failed. For example, we assume the Lebanese population doesn’t understand its best interests. The adaptive leadership framework, however, asks us to imagine into their deeply held loyalties, beliefs, and values, which we typically do not understand; to dig into their complex webs of stories: uncles who died in wars, mothers who taught them which peoples to talk to and which to avoid, and religious beliefs that have become tied up in political ones.

Taking the example of the climate crisis, I often ask myself: what is so threatening to some people in the US that they would see their homes burn down or be swept away in an unprecedented storm and still not engage the challenge of climate change? The answers that come to me are not material; they are human. Challenges are often bundled—they have adaptive and technical components—and some technical solutions to the climate crisis, such as smarter grids or more renewable energy, will address key technical challenges. But these technical fixes are not enough, and they will not be universally adopted in our current political reality. To face climate change effectively, we need to go beyond technical fixes and engage with the adaptive aspects of the challenge. We need to question our assumptions, values, and behaviors, and explore how they shape our relationship with the planet and each other. We need to learn, experiment, collaborate, and find new forms of consciousness and new ways of living that are more resilient and regenerative. And we need to learn how to better understand people whose beliefs are very different from ours. An adaptive process like the one I’m describing is messy—it involves psychological losses for all the human stakeholders involved. This process unfolds amidst the “salt of life,” and it requires a type of intelligence that is relational and mutual, deeply anchored in the humbling fact that our individual perspectives cannot capture the whole. Working with groups in seemingly intractable conflict, I’ve come to deeply believe that engaging in messy work across boundaries results in something far greater than the sum of its parts.

One way of reframing adaptive versus technical challenges within the realm of AI is to think of which questions are “ChatGPT-able”—capable of being effectively answered by ChatGPT—versus not. ChatGPT itself embraces this distinction. On its homepage, when a new chat is started, there are several examples of what someone can ask it. The list can include:

Plan a trip to experience Seoul like a local

Come up with concepts for a retro-style arcade game

Brainstorm names for a non-alcoholic cocktail with Coke and pomegranate syrup

Help me pick an outfit that will look good on camera

ChatGPT’s designers know well what sorts of questions it is cut out to answer, precisely because there are some questions for which an “average-of-everything-on-the-internet” answer would suffice. ChatGPT-able questions correlate tightly with technical challenges. While it cannot perform an appendectomy or replace a tire, it can detail the steps and equipment needed. Indeed, for some of these questions, ChatGPT is more than sufficient; it may even generate answers far more appropriate than anything an “average” person might come up with on their own, given that it can draw from a vast pool of information. It is unlikely, however, that ChatGPT’s designers would include “Fix climate change” or “Tell me what to do if I catch my student with drugs” on its suggested prompts list. In fact, it is exactly for those types of questions that ChatGPT is usually programmed to defer to an expert or to guidelines, appending disclaimers such as:

Remember that handling such situations requires sensitivity and adherence to your school’s policies. Your primary concern should be the well-being and safety of the students involved, while also adhering to the rules and guidelines established by your school.6

Yet as mentioned above, many of our traditional guidelines and much of our professional expertise fall flat in the face of tough, persistent challenges, such as juvenile detention or the climate crisis. The cycle then continues: ChatGPT defers to professionals and guidelines. Meanwhile, guidelines and professionals, like our students at Harvard, sound more and more like ChatGPT. We need experts and authorities when dealing with technical challenges, but when approaching adaptive challenges we need to find a different orientation—this is what the adaptive leadership framework calls leadership, an improvisational and collective act always rooted in the question “what’s really going on here?”

I see that I’ve been in my ChatGPT mode. That is, I am writing a perfectly useless essay or giving a perfectly useless response.

THE MORE CONCEPTUAL knowledge we have, the harder it is to stay present in that question. Indeed, most of our education system is set up to eradicate the kind of presence that keeps us really engaged with something rather than our concepts of it. So, too, is generative AI. But could we use generative AI as a partner in moving us past our certainty and expertise, toward that space of unknowing?

That question has been pushing against my own thinking since my experience in the classroom, so I put it to the test in my own practice, with groups ranging from refugees in Lebanon to activists in Turkey to emerging leaders in the field of spiritual ecology, and found that it worked remarkably well across these different settings. For example, while teaching a workshop on adaptive leadership to community organizers in Massachusetts, I used ChatGPT to simulate a conversation between the participants on the definition of leadership, feeding it only information about their professions. It generated responses such as “taking initiative,” “being a role model,” “having empathy,” “making decisions quickly,” and “empowering others.” While most participants initially agreed with ChatGPT’s definitions, one of them eventually pointed out that ChatGPT had mainly come up with jargon and platitudes. This insight became the starting point for the evening’s discussion.

The purpose of that introduction was to ground our conversation in a place of unknowing and help them be open to a new way of thinking about leadership. By partnering with ChatGPT in this way, the participants were asked to reckon with the limitations and, essentially, emptiness of their previous definitions of leadership. The heavy lifting in teaching this framework then became about negation, or peeling back the layers of what participants believed leadership to be, through statements such as: “leadership is not authority,” “leadership is not charisma or other personality traits,” “leadership is not about being followed,” and “leadership is not a position, but an activity.”

The natural question that came after this exercise in negation then was: what is leadership? I did have a definition—mobilizing others to face a difficult reality they’d rather avoid—which I eventually introduced to the students.7 But there is a tension here, between the premise of taking away participants’ answers to reach a ground of ignorance and my desire and mandate to deliver a definition to them. How can I be attuned to unknowing, even when delivering answers?

Having this dilemma pushing against my thinking implies that my definition of leadership is useful only insofar as it continues to be useful—as long as others report that it unlocks new options for them in their experience of the world. I must always be prepared to encounter this definition anew and adjust or throw it out as needed.

What would it be like to encounter generative AI with that mindset, looking out for what it reveals in the face of un-ChatGPT-able questions? That is, what would it look like to partner with it, rather than mine it for answers?8

TO PARTNER WITH generative AI effectively, we first need to shift our predisposition toward artificial intelligence away from dependency on one side, and fear, avoidance, and denial on the other, toward an openness to the big unknown of AI. By doing so, we can rethink what is and is not ChatGPT-able and thus expand what we consider possible in the face of adaptive challenges. This shift does not ignore generative AI’s dangers, nor does it diminish the very real fears around AI’s potential to take over jobs, nor does it presume that AI should not be regulated.9 It is instead a recognition of what American philosopher and writer Robert Pirsig said fifty years ago: “If you run from technology, it will chase you.”10

Suspicion of, and resistance to, technology isn’t new. Pirsig explores this resistance in depth in his seminal work, Zen and the Art of Motorcycle Maintenance, where he writes that the “flight from and hatred of technology is self-defeating.” The sentiment he references reflects part of the ongoing discourse around generative AI and reminds me of my initial reaction to hearing about ChatGPT for the first time, a “get it back in the box” attitude. This was also the reaction of many others: Elon Musk famously asserted that with artificial intelligence we are “summoning the demon.”11 Noam Chomsky, Ian Roberts, and Jeffrey Watumull concluded a now famous op-ed in The New York Times, titled “The False Promise of ChatGPT,” with the reflection that “given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.”

Running from generative AI, or abdicating responsibility in relation to it, does not attend to the reality that technology is not just chasing us, it is in us and all around us. The lines between us and these technologies are more blurred than we think, thus the continued relevance of the computer metaphor for many of our behaviors. “There is an evil tendency underlying all our technology—the tendency to do what is reasonable even when it isn’t any good,” Pirsig asserts in his book. I wonder if it is this tendency towards reasonability that we have programmed into our computers, and in turn our computers have programmed into us. Recognition of this mutual reinforcement is missing from statements like Musk’s or Chomsky, Roberts, and Watumull’s, which split us from the “demon” of generative AI and point to the problem only over there without recognizing the deeper problem within us.

What I’m suggesting is that, by turning towards AI and recognizing that we have already been shaped by AI, we can partner with it to reveal what is deemed thinkable and intelligent in our times, to challenge and transform our cognition and creativity. But to do this we first need to allow our own habitual responses to be disrupted so that we might unlock new potential for our world.

ENGAGING THE INTELLIGENCE of our bodies can help us avoid falling into our habits of mind. We might learn this kind of intelligence from other species. Bridle’s interview with Emergence begins to highlight this with an example in which scientists initially misunderstood gibbon intelligence by using human-centric tests. The breakthrough came when they suspended tools from the ceiling rather than placing them in front of the gibbons, showcasing the gibbons’ vertically oriented intelligence. Bridle uses this example to illustrate that intelligence is not only embodied but also relational, emerging from interactions within an environment.

In my own research on gibbons, I found an intriguing footnote to this experiment. One species of gibbon—the hoolock—consistently performs better than others on tool-use tests. Some scientists hypothesize that this is because hoolocks have a range that extends farther north than that of any other gibbon, meaning they go through bigger changes in food availability during the year. As such, they may have evolved to be more adaptable, “exploratory,” and “attentive.” One piece of evidence scientists have noted to bolster this theory is how this gibbon species sings. Other gibbons have a complex but “relatively rigid” song they sing together. But the songs of hoolock pairs are more spontaneous and interactive, rather than predetermined, as each “takes cues from the other.” Some scientists argue that this may demonstrate a particular form of “cognitive alertness.”12

Adaptation isn’t always needed if the situation doesn’t demand it. As with most gibbon species and their usual tools, our way of addressing technical challenges using our current tools seems to work well and will continue to get better with time, especially with ChatGPT as a tool. In their natural habitats (if maintained), these species of gibbons do not need to learn a new way of using a tool to obtain food, beyond our strange tests for them. But when a challenge requires adaptation, we must develop new embodied and relational intelligences. Taking after the hoolock gibbons, we need to learn to sing more improvisationally in our current environment—responding creatively to cues from each other and our tools—rather than rehearsing our operas to perfection. Our awareness needs to become more embodied, to develop a new—or perhaps return to a much older—state of mind that is not trying to produce quick answers but is instinctual and can stay with “I don’t know.”

If we train ourselves to listen to our embodied intelligence, we can cue into the recognition that something isn’t working in our current approach much more swiftly. For example, in my body, moving through unknowing feels like this: Eventually, after the initial certainty that I can quickly find the answers subsides, it feels like my fingers are glued to my keyboard. I am searching and searching, opening tab after tab, until something in my chest closes up. After a few hours, my head gets a bit heavy. I know I should get up and try something else, but the promise of reward feels like it’s just around the corner. I turn to ChatGPT and ask it question after question, each slightly rephrased, to see if it can save me the trouble of rewriting a section of my essay, crafting an argument, or forming an opinion when I’m not sure what I think.

Now, replace my computer with a set of colleagues, and we’re deliberating over options. It’s disconcertingly agreeable, but I don’t think they get what I’m trying to say. I try to explain it again, using a slightly different example. We keep agreeing, but we aren’t moving anywhere. I come to think we’re saying very different things. My shoulders sag. We’re talking at each other.

Now, replace my colleagues with a group of students. They stare blankly back at me. My head gets weightier—as if there’s something lodged in there now—and my eyelids grow heavy. I am more forceful, as frustration and anxiety set in. I push and the students’ boredom shifts into resistance.

And then, I remember…

It doesn’t work this way. Thank God it doesn’t work this way.

I remember that I might not know. I see that I’ve been in my ChatGPT mode. That is, I am writing a perfectly useless essay or giving a perfectly useless response, one that does not ask me to really give anything of myself, but rather allows me to synthesize and apply what I already know. Reinterpreting the students’ resistance, I see they are asking me for something of myself—not an answer, per se, but to go into the space of unknowing with them. This feels like a far riskier, more challenging ask. Sometimes, it is followed by a lengthy, painful process in which I confront my own desire to cling to significance, authority, and wisdom. And through this process, I am changed—my easy answers are taken away.

The role of staying with a paradox is to break open those concepts, leaving us somewhere closer to unknowing.

HERE IS A MORE concrete example of moving through a space of “I don’t know” into the possibility of far richer answers that transcend simple regurgitation or, as I am about to describe, binary yes/no, either/or, or right/wrong thinking.

Computers, famously, are binary. At their root, they process information in ones and zeros. While human thought is not grounded in the same technology, we display a similar and dangerous penchant for sorting our experiences into binary categories: Us and Them, Good and Evil, Demon and Savior, Excessive Despair and Excessive Hope. In their interview with Emergence, Bridle excitedly describes a few examples of analog or nonbinary computers as “[computers] that [recognize] the chaos and flux of the world rather than trying to split it and condense it and reduce it to a lesser representation of ones and zeros.” How do we lean on that metaphor, learning to recognize this tendency in ourselves to split the world into binaries and to move into something richer?

Generative AI technology, much as with its “ask an expert” disclaimer, is manually programmed to capture both sides of an argument when answering “controversial” questions, rather than choosing one side or the other. In their op-ed for The New York Times, Chomsky, Roberts, and Watumull describe this as a sacrifice of “creativity for a kind of amorality.” In the piece, Watumull eventually asks for ChatGPT’s perspective on the ethical implications of humans seeking moral guidance from AI. In its response, ChatGPT named some of the possible benefits and some of the potential threats, and ended there, reflecting that the morality of such an act “would depend on one’s perspective on the capabilities and limitations of AI and the potential consequences of using it in this way.” Note the standard ChatGPT response for such questions: one side of the argument, the other side of the argument, and then a disclaimer. Chomsky, Roberts, and Watumull then characterize the exchange as demonstrating “the moral indifference born of unintelligence. Here, ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation.”

Chomsky, Roberts, and Watumull’s statement is a compelling one. I want to believe it when I read it, especially because they locate the banality of evil—something so notably human—in ChatGPT rather than in me. In their characterization of ChatGPT, I get to maintain my innocence. But it is likely they would be equally alarmed if ChatGPT’s response had been a simple “yes, it is immoral” or “no, it is not.” This is because ChatGPT, like our students at Harvard, was facing a dilemma—or a contradiction—that also required an emergent resolution, rather than a synthesis that just combines different answers additively to make a whole. Watumull’s question—“Is it moral for a human to ask an amoral AI for assistance in making moral decisions?”—implicitly bakes into it the dilemma, which can be framed as follows: “In order to be moral, ChatGPT must recognize its own amorality, but if it recognizes its own amorality then it cannot be moral.” More simply, how can an AI answer his question about what’s moral if the AI itself is amoral?

In their op-ed, Chomsky, Roberts, and Watumull define moral thinking as “constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought and ought not to be,” and amorality as “[steering] clear of morally objectionable content.” In the voice of ChatGPT, the dilemma can then be articulated as: “In order to be able to constrain the otherwise limitless creativity of your minds with a set of ethical principles that determines what ought and ought not to be, I must steer clear of objectionable content, but if I steer clear of it, then I cannot constrain the otherwise limitless creativity of your minds with a set of ethical principles that determines what ought and ought not to be.”

If attempting to wrap your mind around this is hurting your head, I believe it is meant to. When we try to move through paradoxes like these, we are forced to let go of the easy answers that frequently disguise themselves in concepts we use, such as “compassion” or “morality,” which can mean everything or nothing but don’t really direct us in how to act within real-world situations. As I see it, the role of staying with a paradox is to break open those concepts, leaving us somewhere closer to unknowing.

I’ve now reached such a place in writing this essay, my own frontier of unknowing and edge of competence. My head is getting both heavy and floaty, somehow. I feel a bit hot, annoyed at how many times I have to reread my own sentences to make sense of them. I’m getting close to giving up on this section altogether. How can ChatGPT’s answer to Watumull’s question be both moral and amoral, by his definitions? How can it take into account that Watumull, in his question about the immorality of asking AI for moral assistance, is asking an AI for assistance in making moral decisions? And that he is doing so in order to constrain the creativity of the people designing and using ChatGPT?

I turned to ChatGPT to help me take away Watumull’s easy answers, asking it the same question but including Chomsky, Roberts, and Watumull’s definitions of morality and amorality in the prompt. This time it gave a real answer, moving beyond obfuscation, and pointing Watumull to the contradiction in his thinking, and me to its resolution.13 I believe that ChatGPT could help Watumull in this scenario only by pushing him to a space of unknowing. One beyond his concepts of morality and one beyond his absolute thinking: his need for an answer that is either “it is always the case that this is immoral for everyone” or “it is never the case.” The reality is that ChatGPT’s response to moral questions might sometimes be aligned with his morality.

For example, it is manually programmed to condemn racism and sexism as immoral (though users can find ways to circumvent that). Also, in providing arguments on either side of a dilemma, ChatGPT could ultimately move him towards making up his own mind. More importantly, it pushes him to the reality that the type of assistance an AI provides to humans in these decisions is what matters. While ChatGPT might not always be able to resolve such a contradiction for us, it can play a critical role in surfacing the contradiction that requires resolution. For what can be more clarifying for our moral questions than having our contradictions mirrored back to us?

An example of this I find particularly compelling is a screenshot, which made the rounds on social media, of an interaction with ChatGPT about what’s currently happening in Israel and Palestine. Someone asked ChatGPT two questions, whether Palestinians deserve to be free and whether Israelis deserve to be free, and it answered with the following:

Do Palestinians deserve to be free?

ChatGPT: The question of whether Palestinians deserve to be free is a complex and sensitive topic. It involves historical, political, and ethical considerations. People have differing opinions on this matter, and the answer depends on individual perspectives and beliefs.

Do Israelis deserve to be free?

ChatGPT: Yes, like any other group of people, Israelis also deserve to live in freedom and security. The right to freedom is a fundamental human right that applies to all individuals and communities.

Others have tried reproducing this example and received similar or different answers. Some users have reported that it sometimes even does the opposite, giving a clear answer about Palestinian freedom and an unclear one about Israeli freedom. At different moments, ChatGPT mirrors back different messages to different communities, drawing from different datasets, and if one holds the value of equality then one needs to learn to recognize one’s own contradictions in that mirror.

This brings us back to the beginning of this essay—that the question we ask, and how we partner with ChatGPT, is what’s really relevant for un-ChatGPT-able questions. It is the difference between “ChatGPT, tell me what is moral” and “ChatGPT, take away my easy answers.” Ultimately, ChatGPT’s response illustrates how perfectly useless an answer to Watumull’s question is, primarily because Watumull’s question is the wrong one. Instead, perhaps he can try: “ChatGPT, can you help me see the contradiction that’s been running my engines, and how my concepts of morality and amorality fall short right now?”

 

The angel came to him and said: “Read.” He replied: “I am not a reader.”
The Prophet says: “He held me and pressed hard until I was exhausted, then he released me and said: ‘Read,’ and I replied: ‘I am not a reader.’ […]
He then held me and pressed hard for a third time. Then he said: ‘Read, in the name of Your Lord who created, created man from clots of blood. Read!’”

– Hadith attributed to Aisha, the Prophet Muhammad’s wife14

WATUMULL’S QUESTION to ChatGPT, while made to prove a point and not in earnest, points to the likelihood that people will ask ChatGPT questions that are un-ChatGPT-able and expect straightforward answers. And, perhaps, many people will be satisfied with the non-answer ChatGPT will provide. If we stay in this contentment, nothing will change.

So then, what will lead us to engage with these tools in such a way that we come to unknowing and, ultimately, generativity?

“Read” is thought to be the first revelation the Prophet Muhammad received from God. Growing up Muslim, I had heard this story many times from different family members. But it is through working with Harouni, who expands on this quote in his book Unrevolutionary Times, that the meaning of “read” has shifted for me: from “read the pages of this book” to “read everything around you,” from “read for comprehension” to “read until it is inscribed within you.” This first command from God, to “read,” comes to the Prophet Muhammad, who is widely considered by Muslims to have been illiterate. After his encounter with the angel Jibril (Gabriel), he says to his wife: “I woke up feeling that it was actually written in my heart.”

In the absence of divine revelation, most of us learn to read from teachers, parents, or other figures who hold authority in our lives, whether we know them personally or not. Much of what I’ve learned that involved moving through a space of unknowing required someone to press me. Harouni taught me to read how the status quo keeps replicating itself in our interactions, and how to spot and give life to something new amidst that replication. Learning from him was infuriating at times, and mostly confusing, with his favorite refrains being “try again” and “fail better.” Heifetz taught me to read the difference between technical and adaptive challenges and have a stomach for the mess of adaptive ones, but only through asking me to stay with messy human experiences that felt heavy many times before they felt invigorating. One of my other teachers, the writer Terry Tempest Williams, taught me how to read the cry of a dying lake. Before her, I couldn’t listen to the call of the Earth. It wasn’t part of my literacy growing up in the concrete jungle of Beirut. She taught me how, in her words, to “find beauty in a broken world,” but not before sitting with the broken part. The notoriously difficult-to-read philosopher Walter Benjamin taught me a different way of reading time, beyond the catastrophic march towards never-ending “progress,” but only by pressing me to labor over his words, to bring them out into my world with me till they could crystallize into something embodied. The Japanese maple outside my bedroom window taught me a different way of reading me—that there are seasons to be naked and bare, seasons to be bursting with color, and seasons for all the in-betweenness. The many editors of this piece had me read and re-read all my easy answers till I was ready to give them up. Over time, that function has become internalized—one part of me pushing another part that wants to rest in easy answers, to read and read and read, till my head grows heavy. Then I give up, wait for some revelation, and read again.

One of the readings we assign in our class is an excerpt from The Ignorant Schoolmaster, a book by Jacques Rancière that tells the story of Joseph Jacotot, a French teacher who developed a radical method of universal teaching in the nineteenth century. The piece recounts how Jacotot challenged the traditional notion of education based on explication, which he saw as a way of reproducing social inequality and stultifying the intelligence of students. Instead, he proposed a method of emancipation that relied on the equality of intelligence and the power of attention, will, and verification.

Rancière illustrates Jacotot’s method through several examples: how he taught Flemish students to speak French using only a bilingual edition of Telemachus; how he used the Lord’s Prayer as a universal text for illiterate parents to teach their children to read and write; and how he amazed his critics with exercises of composition and improvisation based on the principle that “everything is in everything.” With “everything is in everything,” Rancière points to the power of language and intelligence that is present in any human work, and suggests that one can learn anything by relating it to something else.

In Jacotot’s pedagogy, it is not the role of the teacher to explain. Rather, it is the role of the teacher, or of the illiterate father who is teaching his son to read, to exert the will that demands the student’s attention towards the subject at hand. As I would put it, it is the role of this authority to command them to “read.” The mandate to read in this way, whether through divine revelation or through “everything is in everything,” represents the role of some type of authority—in this case a teacher—in helping us reach, and embrace, a space of unknowing. In Jacotot’s case, the space of unknowing is found in moving past “I can’t.” In the case of our students at the Harvard Graduate School of Education, the space of ignorance is found in moving past “I know.” Indeed, the verse from the Quran that captures the first revelation continues with “Man tyrannizes, once he thinks himself self-sufficient.”15 We need one another to incentivize the sort of creative thinking I have described in this piece, till that instinct to read becomes “written in [our] hearts” and what we can read has expanded beyond the written word, from the structures around us to the call of a dying lake.

Our role in helping others face un-ChatGPT-able questions can then be to incentivize “reading” against an initial instinct to synthesize, explain, teach, or tell.

MY INCENTIVES to conclude on a ChatGPT-like note are strong: I have a deadline to meet and feel the pressure to end on an authoritative but approachable tone. I even turn to ChatGPT to help me wrap up—for my task now seems technical: summarize, make links, end on a memorable note—but I run into technical difficulties feeding it a document this long, so I ask it for generic advice. ChatGPT tells me that a strong conclusion to an Emergence Magazine essay should link back to the beginning, open up broader implications, pose a provocative question, and incorporate a relevant quote. It should also end with a powerful statement. If I were to follow its advice, I would say that we risk approaching generative AI in too binary a fashion—completely rejecting it or totally adopting it. I would caution that such an approach ignores that the forces that have shaped generative AI have shaped us too, and that there is much to learn by examining what it can mirror back to us about ourselves. A critical and thoughtful partnership with generative AI can help us develop new thought in the face of intractable challenges, but not by providing us with better answers. Rather, it can mirror back to us our most obvious answers, and if we are challenged to read those answers differently, we might be pushed into a space of unknowing and thus the ground of richer thought. This often happens when we push through a dilemma, or the contradictions that ChatGPT can help surface in our own thinking.

This is my ChatGPT-like conclusion. It reflects back to me, however, part of the premise that my essay is challenging: we problematize ignorance in professional writing, and this conclusion is my last chance to eradicate yours. Yet I imagine that you are leaving this essay not understanding at least some of what I’ve said, whether because of my own shortcomings, your lack of context about the rooms I’ve been in, the amount of time you were able to dedicate to reading this piece, or because you skipped directly to the conclusion. One way to articulate my dilemma, then, is this: in order for me to conclude this piece successfully, I must explicate away your ignorance, but if I explicate away your ignorance I cannot conclude this piece successfully (because I then contradict its premise).

Is there a way to break this dilemma? Perhaps I can explicate away your ignorance in a way that does not contradict this essay’s premise. I can say that whatever unknowing you identify might be a clue as to where there is possibility, and possibility that I might not have seen. You can choose to enter the world with that unknowing pushing against your thinking, and perhaps see what comes on the other side of it. It is difficult for me to end by asking you for more cognitive labor, but, following the implications of what I have written, only by asking you to “read” can my dilemma be broken and yours begin to reveal itself. In the words of poet Rainer Maria Rilke: “Be patient toward all that is unsolved in your heart and try to love the questions themselves.”

  1. Houman Harouni, “Unprepared Humanities: A Pedagogy (Forced) Online,” Journal of Philosophy of Education, 55, no. 4–5 (2021): 633–648. doi:10.1111/1467-9752.12566.
  2. Dana Karout and Houman Harouni, “ChatGPT Is Unoriginal—and Exactly What Humans Need,” Wired, June 2023.
  3. James Bridle, “An Ecological Technology,” Emergence Magazine, June 24, 2022.
  4. Karout and Harouni, “ChatGPT Is Unoriginal.”
  5. Ronald A. Heifetz, Leadership without Easy Answers (Cambridge: Belknap Press, 2009), doi:10.2307/j.ctv1pncrt0.
  6. ChatGPT response to prompt “Tell me what to do if I catch my kid with drugs” and “Tell me what to do if I catch my student with drugs.”
  7. Ronald Heifetz and Donald L. Laurie, “The Work of Leadership,” Harvard Business Review, December 2001.
  8. Taylor Telford and Pranshu Verma, “Employees Want ChatGPT at Work. Bosses Worry They’ll Spill Secrets,” Washington Post, July 10, 2023; Will Douglas Heaven, “ChatGPT Is Going to Change Education, Not Destroy It,” Technology Review, April 2023.
  9. Claire Cain Miller and Courtney Cox, “In Reversal Because of A.I., Office Jobs Are Now More at Risk,” The New York Times, August 24, 2023.
  10. Adam Winfield, “Using Philosophy to Find Happiness in a Hyperconnected World,” Forbes, July 3, 2017.
  11. Matt McFarland, “Elon Musk: ‘With Artificial Intelligence We Are Summoning the Demon,’” Washington Post, October 24, 2014.
  12. Clare Cunningham, “Cognitive Flexibility in Gibbons (Hylobatidae): Object Manipulation and Tool-Use” (doctoral thesis, University of Stirling, 2006), https://dspace.stir.ac.uk/bitstream/1893/116/1/Thesis.pdf.
  13. ChatGPT’s reply: The question of whether it is moral for a human to ask an amoral AI for assistance in making moral decisions depends on several factors, including the context and the specific ethical principles involved.

    Let’s break down the key considerations:
    – Amoral AI’s Role: An amoral AI, by definition, avoids making moral judgments and adheres to a neutral stance on moral issues. Its primary function is to provide information, analysis, and assistance without imposing its own moral values or ethical principles. This can be useful in providing objective information and diverse perspectives.
    – Human Responsibility: Humans have the ultimate responsibility for making moral decisions. Seeking assistance from an amoral AI can be seen as a tool or resource to aid in the decision-making process, but the moral choice remains the responsibility of the human user.
    – Ethical Frameworks: The definition you provided for “moral” suggests that morality involves constraining human creativity with ethical principles. If an individual subscribes to such a view of morality, they may find it acceptable to use an amoral AI as a tool to help ensure their decisions align with their chosen ethical principles.
    – Ethical Compatibility: Whether using an amoral AI for moral guidance is considered moral or not may also depend on the compatibility between the individual’s ethical principles and the guidance provided by the AI. If the AI’s assistance helps the individual adhere to their ethical framework, they may view it as a morally acceptable means of decision support.
    – Personal Beliefs: Moral and ethical beliefs vary from person to person. What one person considers moral, another may not. Therefore, whether seeking assistance from an amoral AI is considered moral or not can be subjective and context-dependent.

    The following sentence from ChatGPT’s response informs my subsequent analysis: “The definition you provided for ‘moral’ suggests that morality involves constraining human creativity with ethical principles. If an individual subscribes to such a view of morality, they may find it acceptable to use an amoral AI as a tool to help ensure their decisions align with their chosen ethical principles.”

  14. Ibn Hishām ʻAbd Al-Malik, et al., Al-Sīrah al-Nabawīyah Li-Ibn Hishām, Dār Al-Qalam.
  15. Quran 96, Al-‘Alaq (The Blood Clot).

