Are the Artificials Expressive?

Stepping into AI discussions since November 2022 has felt to me like stepping into a mixed-gravity bounce house, enthusiasts bounding miles high right next to cautionaries clinging clutch-knuckled to whatever handles the seeming-eternal humanistic basics avail.

Me, I’m just doing what I can to check the conversations, keep a walk-jog sideline pace, or possibly bounce high enough for an occasional dunk-thought, sort of like those tenth-grade lunch breaks when the gymnastics springboards were theatrically repurposed so that everyone who wanted one could have an attempt at reaching the rim. Just a touch! I hope that’s not too much mixing, from bounce house to springboard-boosted basketball, considering I am over here trying to make a point about artificial intelligence, large language model “writing,” and the scoops of words masquerading as discourse from ChatGPT.

I was listening to a podcast—Ezra Klein, I think—while driving to Virginia from Michigan on August 2, and although the podcast wasn’t about AI, per se, the discussion of First Amendment law and free speech got me puzzling through a question about whether AI-generated prose is legally expressive. I am not the first to ask; I am also not a lawyer. But. To illustrate, consider this: a local politician is running for a seat on the Board of Supervisors. Not being much of a speechwriter, they tap GPT-4 on its non-shoulder, prompting it to return for them an applause-raising statement about democratic values. The AI returns a lukewarm soup of a statement, and it just so happens to include a damaging and slanderous falsehood about another local official. Litigious gloves are off. Legal teams are enlisted. And the candidate mea culpas with the grandest of agentic shifts: “GPT-4 made me say it!”

It reads to me as one of the most ground-floor conditions, a lower-order stasis: Is AI expressive? Is ChatGPT responsible, legally or otherwise, for its so-called writing?

If no, then there follows a corresponding set of questions about what writing qua “content generation” actually boils down to. Humans are, arguably and correspondingly, small(er) language models (SLMs). Certainly this doesn’t mean that an SLM can’t every so often augment its inventional, compositional, and interpretive range with a sidekick LLM, a backdrop behemoth spitting possibly everything ever. But my hunch is that the SLM should be cautious about surrendering its language to this other phenomenon overmuch, or all-out ventriloquizing the LLM as though its expressions will be satisfactory, sufficient, or both, just because it is big.

Writing, as a verb, doesn’t shield itself especially well from contending, sometimes mismatched, activities. In fact, three decades of scholarly activity in writing studies have worked mightily to expand writing, sparing it an alphabetic-linear reduction and pluralizing it loftily with overtures of multimodality. Much of this has been good and necessary and warranted, but there has been a trade-off. The trade-off is that you can fit a whole lot of yes-that-too under the baggiest of umbrellas, and then along came the LLMs. I wouldn’t argue that anyone should revert to exclusive or narrow-banded definitions of writing, tempting as it might be (e.g., only a pencil-holding activity, or a thing that happens when a human hand makes a keystroke). But I would say that the lines have blurred between “content generation” and “writing” in ways that are not always helpful for demarcating reasonably distinctive activities, and in ways that risk promoting shortcut mindsets when writing is presumed to be ready-made, extractive, and infinitely/generically scoopable from an allegedly ever-improving LLM.

Collin recently referred me to Alan Jacobs’ entry, “on technologies and trust,” which aptly sketches the position that we wouldn’t ever think of enticing prospective students to cooking school only to tell them that everything they learn will be derived from HelloFresh boxes. A similar logic extends to graphic designers working from templated fallbacks. While the masticated options might be appealing to the uninitiated, they are not quite the same as learning by practicing, when that practice entails selection, decision, trial and error, and so on.

I am not convinced that LLMs are expressive, and I want to work on making evaluative sense of AI more forwardly in these terms.

A final illustration: In April, an HVAC technician visited the house for routine maintenance on the heat pump heading into the air conditioning season. Before leaving, he started to tell me about how he used to manage a big-game preserve in Tennessee, though it closed, and so he changed careers. He then went on to tell me about his daughter, who was taking an interest in cattle AI because she had a friend working with ranchers in Texas; the friend was finding cattle AI quite lucrative, he explained.

It took me a while to figure out that large-scale livestock procreation, too, has an artificial alternative; that’s “cattle AI,” for us non-ranchers. I think about this often as a checkpoint in conversations about AI and content generation. Might be, cattle AI is for cows what ChatGPT is for writing: artificial, expedient, not to be mistaken for the other embodied, developmentally dependent, organic-contextual (more than mechanistic) act.

Primary Flavors

So that the sweet tooths of the house (my own included) would stop gnashing at me about how little we have on hand to please (and also to rot) them, I boiled together three half-batches of rock candy early this afternoon: peppermint, anise, and cinnamon. Can you tell from the photo that I’ve never made rock candy before?

Ting-a-ling

Alone on a plate, a tingaling is not the most eye-appealing treat of the season. But what of it? What their presentational aesthetic lacks is recovered ten times over in their flavor. These are indulgent, easy cookies.

Just like I do every year (it is customary), I mixed together a batch of them the other day. When I was a kid, these were a sure bet: a seasonal staple. They were in all of my grandparents’ kitchens (or cookie tins, elsewhere positioned) at the holidays. These simple cookies are, for me, like a portal to another time and place. By scent alone they relocate me to Sheboygan, Wisc., and fill my head with strong impressions of that happy, recurrent scene that played out year after year throughout the late ’70s and early ’80s.

First, the family recipe:
1 – 8 oz. bag of butterscotch chips
1 – 6 oz. bag of semisweet chocolate chips
1 – 4 oz. can of chow mein noodles
1 – cup salted Spanish peanuts

When I made them the other day, however, I used the following combination for a double batch:
2 – 8 oz. bags of butterscotch chips
1 – 8 oz. bag of milk chocolate chips
2 – 6 oz. bags of chow mein noodles
2 – cups dry roasted peanuts

Combine the crunchy noodles and the peanuts in a medium bowl. In a glass dish, melt the chips into a liquid. I did this using a medium setting in the microwave. Pour the melted chocolate and butterscotch over the dry ingredients in the bowl. Stir it together until everything is covered. Spoon the mixture onto parchment, wax paper, or aluminum foil, and let cool.

The gobstuff archive at E.W.M.–a well of alimentary delights–would not be complete (nor ready for The Food Network to sponsor) without this recipe in it.

Poco de Pica

During the summer of ’00, I spent six weeks in Xalapa, Veracruz, studying language and culture at the Universidad Veracruzana while on excursion from UMKC, the institution from which I took my MA in Aughtgust of aught-aught (language requirement completed). Typical arrangements: in pairs, students were matched with families. I lived with a family on the south side of Xalapa, maybe two miles from the Universidad’s space near the central district; out the family’s dining room windows, we looked toward Orizaba during most morning and evening meals.
