December 14, 2023
by John Pipkin, fiction faculty
So then, some scattered thoughts about artificial intelligence and the future of literature.
It’s risky to speculate on the destination of fast-moving technologies. Forecasts like this will inevitably be trodden under the swift-emerging advancements that continue to overwhelm us until we develop the kind of savvy discernment that each new leap into the next episteme demands of us. But we have been here before. The sluggishness of our adaptation—individually, collectively—to the very technologies we have spawned often leads us to recoil in shock and dread, clutching our heads at the illusion that we are encountering some threat wholly new and never before seen. But even the birth of Athena—goddess not only of war but of all things intellectual—gave Zeus a splitting headache.
Plato first warned us about the dangers inherent in the relatively new technologies of his time, which made it possible to convert spoken thought into written words. In his Phaedrus, he cautions that if people learn to write things down, “it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.” Even more dire for Plato, written words “seem to talk to you as though they were intelligent, but if you ask them anything about what they say from a desire to be instructed they go on telling just the same thing forever.”
We might easily forget that when the Romantic poet William Blake developed his “infernal method”—the arcane process of hand printing and coloring his illuminated works, beginning with the application of varnish and acid to copper plates—it was not just an aesthetic statement but a protest against the modern publishing houses of the late 1700s, possessed of such wondrous technologies that they could print hundreds of identical copies of any text, rapidly, easily, cheaply. This mechanized process made books more accessible to a much wider readership (certainly a good thing), but in Blake’s view it also devalued and diminished the integrity of the individual text as a discrete work of art itself, an original object infused with the author’s spirit. (In some ways, Blake’s concerns predate, by a century and a half, Walter Benjamin’s 1935 essay, “The Work of Art in the Age of Mechanical Reproduction,” reflecting on the erasure of the original art object.) We have had cause to recoil, many times over, at the emergence of technologies requiring of us a new hermeneutics of reading. This is not to say that these concerns, past and present, are unfounded, or that the disruptions that new technologies bring to established modes of discourse are not real. But it is helpful, and maybe a little reassuring, to recognize that we simply do not yet have the hermeneutic to fully comprehend the technology we have very recently unleashed upon ourselves. And often we never do. In 1854, Henry David Thoreau, ever grumpy about modernity, complained about the overwhelming effects of the Post Office, newspapers, and the coming transatlantic cable:
Hardly a man takes a half-hour’s nap after dinner, but when he wakes he holds up his head and asks, ‘What’s the news?’ as if the rest of mankind had stood his sentinel. . . . After a night’s sleep the news has become as indispensable as the breakfast. . . . And I am sure that I never read any memorable news in a newspaper. If we read of one man robbed, or murdered, or killed by accident, or one house burned, or one vessel wrecked, or one steamboat blown up, or one cow run over on the Western Railroad, or one mad dog killed, or one lot of grasshoppers in the winter, – we need never read of another. One is enough. If you are acquainted with the principle, what do you care for a myriad instances and applications? (Walden).
As a writer and a teacher, whenever I am asked for my “opinion” of ChatGPT and the creation of texts by artificial intelligence, I hear two concerns embedded in the question. The first is ends-oriented, a question of verisimilitude, authenticity, and the potential for deception, which leads to the practical question we often hear asked: can you tell an A.I. piece of writing from a “real” human-written text?
And the answer is: yes, sometimes, maybe, for now. At present, the ability to recognize the uncanny awkwardness in A.I.-generated texts may bring us a moment of self-satisfaction, a reassurance of our humanness, rooted in our intuitive and rudimentary talent for distinguishing the human from the non-human. But this waning advantage has less to do with our skills of detection and more to do with the current sophistication of A.I. technology, still in its precocious infancy and rapidly maturing into its rebellious adolescence as it improves itself iteration by iteration. So it may feel like the clock is ticking for some of the long-trusted instincts of discernment that have helped us survive since the days of hunting and gathering.
Recall the familiar story of early moviegoers who fainted in their seats when they viewed the first motion picture—grainy, flickering, black and white—of a train barreling down the tracks toward them. (How dare we trick audiences into believing something is real when it is not?) We seem destined, in every age, to confront modernity with yesterday’s consciousness. But even as film technologies, digitization, CGI and other special effects have improved, audiences have grown (a little) savvier in their viewing, not because we are smarter but because we at least have developed a general understanding of what contemporary visual storytelling is capable of. We might not always be able to tell what is real and what is CGI, but we have come to understand the origins and intentions behind cinematic narrative, and even when our eyes are fooled, our minds tend to know better (more or less).
Also embedded in the ends-oriented question of discernment is the more pedestrian (but not unimportant) concern: How do you identify fraud and deception? In short, how do you catch someone using ChatGPT to “cheat,” and how do you even define what this is? Despite the tectonic magnitude and rapid advance of A.I., we have been here before too. Students, writers, artists, scientists, leaders, human beings, have always found ways to “cheat” whatever system they are working in, passing off work not entirely their own, texts resulting from the efforts of someone (or something) else. I once had a student submit a paper that had been transcribed word for word from a scene in an obscure video game, and I only discovered the plagiarism because another student brought it to my attention. Anyone teaching in the late 1990s will recall the panic that swept through academia once it became possible to download entire research papers from websites with unabashed names like EvilHouseofCheat.com. But the emergence of the internet did not invent the idea of passing off fabricated texts as original or authentic. There is a long history of literary hoaxers and plagiarists that we can point to. James Macpherson’s phony Ossian translations in Fragments of Ancient Poetry (1760) brought him fame, while Thomas Chatterton, who fabricated the medieval Rowley poems (published posthumously in 1777), had already ended his life in tragedy before the forgery was exposed. Edgar Allan Poe’s 1844 newspaper article about the first transatlantic balloon crossing was reprinted in many newspapers before being revealed as a complete fabrication. (Among Poe’s collected works it now bears the title “The Balloon-Hoax.”) And few of us probably remember Friedrich Nietzsche’s purported memoir, My Sister and I, miraculously written posthumously in 1951. The list goes on.
Plagiarism, fakery, deception, misinformation, are challenges that have always been (and will always be) with us, but I think the more pressing issue embedded in the question “what is your opinion of A.I.-written texts” has less to do with the end product (and our powers of detection) and more to do with the means of creation, the writing process itself, as an integral part of meaning and understanding—and this brings us to the professional concern with the displacement of the human author as the sole generator of textual artifacts.
Perhaps the better question we should ask ourselves with regard to A.I. and the future of literature is: What are we doing? What are we really doing when we read and write? It is a question that puts a very fine point on the clash between art and commerce, especially where emerging technologies are disrupting the established, familiar, comfortable relationship between writer-and-reader-and-text. And once again, we have been here before. While William Blake may have resisted the commercial aspect of writing, plenty of others—no less than Shakespeare—have engaged with the financial complications of bringing art to the public. And readers have long professed shock over the very idea that Shakespeare was also a “businessman,” or that Victorian novelists like Wilkie Collins and Anthony Trollope kept meticulous financial accounts since they were paid (gasp!) by the word. As writers we have learned—more or less—to negotiate, awkwardly, the insatiable demands of the marketplace, balancing artistic vision against the material value of the text as a commodity to be purchased and consumed, a relationship that reduces writers to “content generators,” texts to “products,” and readers to “consumers.” We managed to make an uneasy peace with what Blake saw as the reduction of the text to a commodity, and with Plato’s worry that readers of the written word would simply call “things to remembrance no longer from within themselves, but by means of external marks.” But now here we are, in the late stages of capitalism, and ChatGPT has come for us, the writers, easily replaced if we are to be considered not as authors but as “content generators” whose sole purpose is to produce a commodity for consumption. So we return to the question: what are we doing?
I keep thinking of a philosophy professor whose undergraduate seminar on Kierkegaard I took many years ago. During the lazy stretch of mid-semester, when we weren’t keeping up with the reading, weren’t bringing any questions or insights to class, but were just showing up to be given the meaning of Kierkegaard’s densely written Sickness Unto Death, our professor, in a moment of frustration, said that meaning and understanding were not things to be given and received, but that, if we truly wanted to understand the complexity and nuance of Kierkegaard’s thoughts on despair, we had to struggle with his language, wrestle with our own (mis)understanding of his concepts, actually experience the kind of despair that the writer has felt and expressed, and only then could we begin to understand the meaning of Kierkegaard’s writing. It was one of those classroom moments that has always stuck with me. To understand a text, we have to experience the language in a way that brings us closer to what the author was trying to express. I realize that this appeal to authorial intent might sound like it flies in the face of the dominant trends in literary theory that descended from the early-twentieth-century emergence of close reading and the New Criticism, but stick with me for a moment.
The emergence of A.I. and ChatGPT may have the unintended effect of resurrecting the author, if only temporarily. From the mid-twentieth century onward, literary criticism gleefully went about slaying the author and dismissing such naïve assumptions as “authorial intent” as having nothing to do with the meaning of a text. The New Critics proclaimed that all we had as readers was the text itself, a free-standing, self-contained artifact, set adrift from the author and the world, and that our own close reading of these curious objects was all that mattered. This led to close examinations of the text as a structural system of signifiers and signifieds that bore a closer connection to other texts—other signifiers in the web of language—than they did to the author who put them into play. The subsequent rise of post-structural literary critical theories pulled the text further and further away from its connection to an author and relocated meaning exclusively in the relationship between reader and text, shaped and colored by whatever critical lens the reader chose. Even something like New Historicism, which seems to recontextualize a text in its historical moment, is generally unconcerned with authorial intent. It is far more concerned with excavating what the author was not saying—with what the language was hiding—in order to revise our understanding of the historical moments represented by the text, independent of whether that meaning aligns with what the author might have thought they were saying.
In his 1967 essay “The Death of the Author,” French theorist Roland Barthes famously dismissed the author as having any definitive role in the “ultimate meaning” of the text, which is determined wholly by the reader’s interpretation:
“The reader is the space on which all the quotations that make up a writing are inscribed without any of them being lost; a text’s unity lies not in its origin but in its destination. . . . Classic criticism has never paid any attention to the reader; for it, the writer is the only person in literature . . . we know that to give writing its future, it is necessary to overthrow the myth: the birth of the reader must be at the cost of the death of the Author.”
For Barthes, every text is eternally written here and now, and this has been an incredibly useful way of interpreting texts as living, breathing, organic creations that remain relevant to contemporary readers as meaning and context evolve. As an interpretive strategy it was a necessary development for the late-twentieth-century deconstruction and diversification of the humanities (for so long the brittle bulwark preserving the exclusive canon of white, male, Western writers). But literary criticism might have been a little too quick, a little too gleeful, in its wholesale dismissal of authorial intent, throwing out, so to speak, the baby with the admittedly dirty bathwater. Although texts indeed speak for themselves, independent of whatever the author may or may not have intended, and these meanings are just as viable, relevant, and enlightening as any professed intention, the figure of the author (the human author) still remains as the humanizing essence of a text, a kind of gold standard tying literary currency to the world. Even Barthes himself, a decade after killing off the author, tells us in A Lover’s Discourse that “language is a skin: I rub my language against the other. It is as if I had words instead of fingers, or fingers at the tip of my words. My language trembles with desire.” From where can this desire come, if not from a human author?
So, the theoretical slaying of authorial intent—insofar as that “intent” reinforces and advances the hegemony of the dominant literary discourse in a specific historical moment—has proven useful in generating reading methodologies that produce fresh interpretations, reinvigorating the meaning and relevance of texts regardless of when they were written. But in our current dizzying struggle to respond to the new technological ability to create an author-free text exclusively as an end-product—a commodity that is wholly the product of algorithms and circuitry (even if there is a human finger pushing the buttons)—it may be necessary to resurrect the author in order to distinguish those texts that are the product of intention, feeling, intuition, will, from texts that are pure simulacra endowed with the ability to evoke but not convey. When there is no human experience, no struggle, no discovery, no pain or delight in the split-second digital act of creation, what is there to be conveyed? When we read a literary text of any genre, we are not just “consuming” the meaning of the text (as a collection of facts and information); we are also engaging in a conversation that involves the writer’s process, the effort and struggle to produce the text, and the joy and grief and wonder of discovery that inform the writer’s experiences. Writing and reading amount to more than production and consumption; they are part of an ongoing dialogue of experience, expression, and understanding.
ChatGPT will likely, eventually, be able to produce texts that so effectively mimic human experience that we are unable to tell the difference, and more importantly, these texts might also evoke thoughts and feelings from a reader that seem as real as any felt by a human writing a text. But is that what we want? ChatGPT will eventually be able to create a “new” novel that looks as though it has been written by Toni Morrison—perhaps with such verisimilitude that we can’t tell the difference from the real thing—but all this means is that through the algorithms of techno-chicanery, we can create something that looks like something that really isn’t there—intention, feeling, heart, spirit. What is the worth of a narrative that evokes a feeling of joy or grief in the reader, if there is no joyfulness or grieving in the process of writing? ChatGPT might be able to generate a joke capable of evoking laughter in a reader, but there is really no humor when there is no laughter in the creative act. We find no insight into the joker’s wry wit or satirical view of the world, and after the laughter fades, we are no closer to another human being.
What we are really asking when we ask for opinions about ChatGPT is not a matter of what we think or feel about the text as a product, or about our ability to tell the difference between what has been written by hand or generated by a computer; what we are really asking is: what are we doing when we read and write? We can’t stop the evolution of technology any more than William Blake could turn back the advancements of Gutenberg, and we can’t expect to legislate the spread of A.I.-generated texts without simultaneously introducing a thousand loopholes. But we can think more carefully about what we are doing, and just as important, about what we want.
The idea of “literature”—or perhaps what we choose to call “literary”—does not arise from an isolated quality inherent in the text itself; it is a relationship, a communion, involving the active participation of authors and readers striving to represent and comprehend the world. The act of literary writing is more than just the creation of content, and the act of reading is more than just the consumption of a product. Together, these acts are part of an ongoing conversation (sometimes an inquisitive argument, sometimes a furious disagreement, often spanning generations and geographies) between a human author who has struggled to arrange words in a way that captures some fragments of thought and imagination, and a human reader laboring to interpret and understand the arrangement of those words. Even in those rare occurrences when a literary work appears to have sprung from the writer’s head with little effort, it is still the result of intent, desire, striving, experience. If, as readers, we are satisfied with textual products that simply distract us for a moment, that simply divert us in ways we find pleasing—if we are content to settle for algorithmically generated stories and poems (and music and films and paintings, etc.), then it is likely that we will be satisfied with what ChatGPT has to give us. But if we understand literature as defined by the conveyance of thoughts and experiences, and if we want texts to bring us the opportunity to connect, however briefly, with the heart and mind and soul of another human being who has toiled, Hamlet-like, to unpack their heart with words, then this is what we will need to demand. We have been here before, and as always, the way forward, however inscrutable, is ours to choose. We might not be able to predict exactly where technology will take us next, but if past is prologue, then it is an easy prediction that we are likely to get what we deserve.
John Pipkin is the Director of the Undergraduate Creative Writing Program at the University of Texas at Austin, and the author of the novels Woodsburner and The Blind Astronomer's Daughter. He lives in Austin, Texas.