Are the Artificials Expressive? 🐂

Stepping into AI discussions since November 2022 has felt to me like stepping into a mixed-gravity bounce house, enthusiasts bounding miles-high right next to cautionaries clinging clutch-knuckled to whatever handles the seeming-eternal humanistic basics avail.

Me, I’m just doing what I can to check the conversations, keep walk-jog sideline pace, or possibly bounce high enough for an occasional dunk-thought, sort of like those tenth-grade lunch breaks when the gymnastics springboards were theatrically repurposed so that everyone who wanted one could have an attempt at reaching the rim. Just a touch! I hope that’s not too much mixing, from bounce house to springboard-boosted basketball, considering I am over here trying to make a point about artificial intelligence, large language model “writing,” and the scoops of words masquerading as discourse from ChatGPT.

I was listening to a podcast—Ezra Klein, I think—while driving to Virginia from Michigan on August 2, and although the podcast wasn’t about AI, per se, the discussion of First Amendment law and free speech got me puzzling through a question about whether AI-generated prose is legally expressive. I am not the first; I am also not a lawyer. But. To illustrate, consider this: a local politician is running for a seat on the Board of Supervisors. Not being much of a speechwriter, they tap GPT-4 on its non-shoulder, prompting it to return for them an applause-raising statement about democratic values. The AI returns a lukewarm soup of a statement, and it just so happens to include a damaging and slanderous falsehood about another local official. Litigious gloves are off. Legal teams are enlisted. And the candidate mea culpas with the grandest of agentic shifts: “GPT-4 made me say it!”

It reads to me as one of the most ground-floor conditions, a lower-order stasis: Is AI expressive? Is ChatGPT responsible, legally or otherwise, for its so-called writing?

If no, then follows a corresponding set of questions about what writing qua “content generation” actually boils down to. Humans are, arguably and correspondingly, small(er) language models (SLMs). Certainly this doesn’t mean that an SLM can’t every so often augment its repertoire of inventional, compositional, and interpretive range with a sidekick LLM, a backdrop behemoth spitting possibly everything ever. But my hunch is that the SLM should be cautious about surrendering its language to this other phenomenon overmuch, or all-out ventriloquizing the LLM as though its expressions will be satisfactory, sufficient, or both, just because it is big.

Writing, as a verb, doesn’t shield itself especially well from contending, sometimes mismatched, activities. In fact, three decades of scholarly activity in writing studies has worked mightily to expand writing, sparing writing its alphabetic-linear reduction, and pluralizing it loftily with overtures of multimodality. Much of this has been good and necessary and warranted, but there has been a trade-off. The trade-off is that you can fit a whole lot of yes-that-too under the baggiest of umbrellas, and then along came the LLMs. I wouldn’t argue that anyone should revert to exclusive or narrow-banded definitions of writing, tempting as it might be (e.g., only a pencil-holding activity, or a thing that happens when a human hand makes a keystroke). But I would say that the lines have blurred between “content generation” and “writing” in ways that are not always helpful for demarcating reasonably distinctive activities and in ways that risk promoting shortcut mindsets when writing is presumed to be ready-made, extractive, and infinitely/generically scoopable from an allegedly ever-improving LLM.

Collin recently referred me to Alan Jacobs’ entry, “on technologies and trust,” which aptly sketches the position that we wouldn’t ever think of enticing prospective students to cooking school only to tell them that everything they learn will be derived from HelloFresh boxes. A similar logic extends to graphic designers working from templated fallbacks. While the masticated options might be appealing to the uninitiated, they are not quite the same as learning by practicing when that practice entails selection, decision, trial and error, and so on.

I am not convinced that LLMs are expressive, and I want to work on making evaluative sense of AI more forwardly in these terms.

A final illustration: In April an HVAC technician visited the house for routine maintenance on the heat pump leading into the air conditioning season. Before leaving, he started to tell me about how he used to manage a big game preserve in Tennessee, though it closed, and so he changed careers. He then went on to tell me about his daughter who was taking an interest in cattle AI because she had a friend who was working with ranchers in Texas; the friend was finding cattle AI quite lucrative, he explained.

It took me a while to figure out that large-scale livestock procreation, too, has an artificial alternative; that’s “cattle AI,” for us non-ranchers. I think about this often as a checkpoint in conversations about AI and content generation. Might be, cattle AI is for cows what ChatGPT is for writing: artificial, expedient, not to be mistaken for the other embodied, developmentally dependent, organic-contextual (more than mechanistic) act.