Collectanea 27.25 Atrophy-Monarchs-Garage

Week of June 30, 2025

Cognitive Atrophy

“The integration of LLMs into learning environments presents a complex duality: while they enhance accessibility and personalization of education, they may inadvertently contribute to cognitive atrophy through excessive reliance on AI-driven solutions [3]. Prior research points out that there is a strong negative correlation between AI tool usage and critical thinking skills, with younger users exhibiting higher dependence on AI tools and consequently lower cognitive performance scores [3]” (10).

—Nataliya Kosmyna, Eugene Hauptmann, Ye Tong Yuan, Jessica Situ, Xian-Hao Liao, Ashly Vivian Beresnitzky, Iris Braunstein, Pattie Maes (MIT, MassArt, Wellesley¹). (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task (arXiv:2506.08872). arXiv. https://doi.org/10.48550/arXiv.2506.08872 #cognition #debt #writing #LLMs #AI #frenzy #atrophy #performance #humanbrains #consequences

A prepublication version of the Kosmyna et al. article, “Your Brain on ChatGPT,” circulated a couple of weeks ago. At >200 pages, whew, it is long and ornately specialized at points. I’ve read enough of it to conclude a) it will, on the sunny side of review, be a landmark study, an important account leading to further research on the cognitive consequences of LLM over-reliance (notably, a developmental vortex fueled by enthusiastic, uncritical adoptions and battering-ram marketing efforts by boom-or-bust AI startups), b) the approach to writing essays at the center of the study is woefully reductive (i.e., timed for 20 minutes, AP-test-styled prompting without much context or purpose), c) the ability to quote one’s own essay shortly after writing it is a bizarre and not altogether persuasive indicator of cognitive performance, yet this was the greatest differentiator among the three participating groups (brain-only, search engine, and LLM), and d) there remains a vast gulf between cognitive neurosciences, rhetorical invention/eureka/epiphany studies², and reading and writing research as it is valued in the humanities, much less in Rhetoric and Composition/Writing Studies.


Visual Portmanteau: Monarch Butterfly + Mandala

Figure 1. “Monarchdala.”

Mature, blooming milkweed at the back of the holler is aflutter, buzzing with pollinators, including a small kaleidoscope of Monarch butterflies. And lately I have been exploring in Procreate various brush and palette customizations, watching a few tutorials, learning how to make stamps. What followed were experimental, exploratory pieces, like this one, which uses drawing guides for mirrored quadrants, then bends and combines selected elements, adds color from a custom butterfly-photograph palette, and inlayers a gradient backdrop for a fade of center-to-periphery brightness.


The Standard Way into the Sheepfold

“What is the good of research?
What is worth doing?
Shall we be allowed to do it?
Who will do it ?
In answering the first question, I hold that by the scholarship which is the product of research the standing of our work in the academic world will be improved. It will make us orthodox. Research is the standard way into the sheepfold” (17).

~

“Now, is there any reason, in this age when every other branch of human knowledge is being ruthlessly pulled to pieces and tested why our branch should be passed over?” (18).

—James Winans (Cornell U). (1915). The need for research. The Quarterly Journal of Speech, 1(1), 17–23. https://doi.org/10.1080/00335631509360453. #branches #research #orthodoxy #sheepfold #speech #communication #disciplinarity


From the Mail Bag 📭

Sadly, there was no mail from readers this week.


G-l-o-r-i-a

Figure 2. Summer 2025’s first morning glory bloom.

5ives

Five years, five program covers from past Conferences on College Composition and Communication. Why these? Albeit somewhat peripheral to my current research project, they’re quirky with their idiomatic, time-spanning expressions of then and now: the phrase “composition and communication” repeated 53 times from 1960; warpy, nested Cs from 1962; an optical illusion from 1974; seven missiles soaring from left to right in 1977; and an earth-sized pencil from 1983.


Walking as Artistic Practice Syllabus

“This workshop is designed as a brief survey of some of the origins, theories, processes, and manifestations of walking as art. We will read, watch, and discuss perspectives on walking-based projects. Using this information as a springboard, we will complete walking exercises, and execute our own original walking projects.”

—Ellen Mueller (Arts Midwest). (2021). Walking as Artistic Practice. https://teaching.ellenmueller.com/walking/.

I have been meandering in wide arcs toward a plan for this fall’s pair of online-asynchronous sections of ENGL 3844: Writing and Digital Media. The course description mentions digital writing within “the context of business, organizational, and political practices.” It also mentions production, devices, data visualizations, videos, web design, “and more.” Sample syllabi I have been able to track down tend to outline three major projects, usually something related to podcasting or sound editing, something related to data analysis and visualization, and something related to video. The official, CUSP-approved outcomes are keyed toward ethics (“ethical design strategies”) with three bullets emphasizing visual, video, and web. I haven’t taught this particular class before at Virginia Tech. The online-asynchronous format adds complications to the kinds of engagement and interaction one can reasonably expect, of course. But I have been thinking more about short-form exercises paired with an anthologics-styled (perhaps ABCDEary format) assortment: a self-introductory account of digital mecology/technologies of self; microthemes prompted with alternately terrestrial (food, walking, fieldwork) and digital (photo, sound, hypertext, map, 4D/time, etc.) subjects; inflections of Ashley Holmes’ device-mediated environs (writing on location); inflections of Ellen Mueller’s walking courses reframed as writing on foot; Geoffrey Sirc’s seriality; throwback maps of the imagination (e.g., what goes on in that building I walk past every day?); and more. “And more” as the inventive indeterminacy better fitted with digital writing than anything else I am finding or can think of as of yet, seven weeks or so from the start of classes.


Waiting There for Me

Figure 3. In the garage.

About Collectanea

Collectanea is a series I’m trying out in Summer 2025 at Earth Wide Moth. Each entry accumulates throughout the week and is formed by gathering quotations, links, drawings, and miscellany. The title of the entry notes the week and year (the sixth in this series from Week 27 of 2025, or the Week of June 30). I open a tab, add a little of this or that most days. Why? Years ago my habitude toward serial composition and, thus, toward blogging, favored lighter, less formal, and more varied fragments; gradually, social media began to reel in many of these short form entries, recasting them as posts dropped at Facebook or Instagram or Twitter (while it lasted), albeit with dwindling ripple effect into the ad-addled and algorithm-ambivalent streams. This space, meanwhile, began to feel to me like it wanted more thoughtfully developed entries bearing the shape and length of what you might find on Medium or Substack. But, because I am drafting toward a book project most mornings, I don’t quite have reliable essayistic bandwidth for Earth Wide Moth this summer. Collectanea, if it goes according to my small bites chicken scratch plan, will be a release valve for the piling up of too many open tabs, functioning as a shareable, intermittent (weekly?) repository for small pieces cut and pasted from stuff I am reading, and also as a scrapbook for illustrations. -DM


Notes

  • 1
    Why don’t citation systems include institutional affiliations?
  • 2
    We don’t really have anything like Eureka Studies or Epiphany Studies; perhaps all of the Humanities should retool in this direction, renaming minors as Epiphany Studies, or, if you are at a tech/stem school, Epiphany Engineering. The curriculum would draw upon writing and rhetoric, philosophy, history, language and literature, and cognitive neuroscience, regarding learning as a so-called “open period,” of the sort that the neuroscientists studying psychedelics describe in reparative/therapeutic terms as a window for synaptic rerigging.

No Telescope Except Our Attention

I shouldn’t pick back up here before first acknowledging, head bowed, hands folded, and humbly, that Earth Wide Moth received the John Lovas Award from Kairos last Friday evening at the 2025 Computers & Writing Conference. I learned about the award early that week, so I drove to Athens, Ga. to accept the award on Earth Wide Moth’s behalf. Striking to realize this event as punctuation, a pause, an EWM dash, to notice simultaneously how much and how little a two-decade-plus installation of this serial variety holds. The nomination was co-signed by sixteen or so brilliant, generous, and ever-supportive colleagues; some of them even wrote brief rationales, testimony to the value of what happens here from time to time. I’m grateful for the twenty-one years of write-living, a variation on life-living (Manning), the sorts of activation and articulation loops that, come what meandering-may, dance as moth to flame and flame to moth.

Figure 1. Athena statue, Athens, Ga., stony and still before the Classics Center at the University of Georgia.

While in Georgia, I attended a few sessions, the opening reception, the Kairos-Digital Rhetoric Collaborative karaoke event, a meeting, Saturday’s keynote by Jen Sano-Franchini, titled “What’s Critical about Critical Interface Analysis? A Recommitment to Humanistic Inquiry In the March to Hyper-Automation,” and the social gathering at Creature Comforts. I drove home on Sunday, on the road by 8 a.m. ET, 370 miles, four states, giant peach water towers and turbulent speed differentials from one lane to the other along I-85, and as I drove I kept thinking about conferences and bandwidths, about desires for disciplinary community and mutual attention. It’s not such a surprise that Computers & Writing was saturated with polemics, gestures, and questions revolving heavily around generative AI. What are we, 2.5 years on since the November 2022 release of ChatGPT?

Many have turned sharply to AI; love it or hate it, the polemic casts triumphalists and refusalists in sometimes-heated exchanges, though much of the time we are nevertheless grasping for context and honing definitions that eventually return us to earth.

Returning to Earth Wide Moth, I happened across an entry from a decade ago, “Overlooking,” the entire entry consisting of a quotation from Oliver Sacks’ book, A Leg to Stand On (1994). Here it is:

I thought of a dream related by Leibniz, in which he found himself at a great height overlooking the world–with provinces, towns, lakes, fields, villages, hamlets, all spread beneath him. If he wished to see a single person–a peasant tilling, an old woman washing clothes–he had only to direct and concentrate his gaze: “I needed no telescope except my attention.”

It helps to remember that dreams, though they are not the same as windows, shake up monadic tendencies. There was a time, too bad it has elapsed, when the digital opened up a comparable sense of possibility. Byung-Chul Han writes in Hyperculture about how the hypertextual world roils with “possibilit[ies] of choice” (43), its windowing refrains inviting inhabitants–hypercultural tourists–to experience the vastness of boundless opening. Yet, as Han continues, with screens akin to windows, the possibilities of choice run their course, and the “Being-before-a-window” resembles “the old windowless monads” (45).

I understand why there is so much wrapped up in generative AI, its swift onset flaring as it has across every sector, informational and communicative, industrial and material. Academics are thrashing AI for its promises and pitfalls, separating out its big-tech-pushed inevitabilities and coming to terms with its consequences. Monadic routines, or call them turtles, lurk all the way down. Post-C&W 2025, though, I don’t harbor any particularly renewed perspective on AI, digitality, or the panacea of a World Brain, impressively omnipotent. Something about a cheaper (seeming) writing tutorbot who never sleeps. Something about assessment magic and administrators raising course caps because automation frees up your time. Is the hype gaining? Fading by now? Still-glinty gewgaw, I don’t know. But I have returned from the conference uneasy about the hype cycle, for in the event of swivel-necking toward AI, what are we turning away from, abandoning, suspending mid-gesture like unsuspecting mortals covered over by volcanic ash? Almost had that last slurp of ramen, almost gathered that last fleck of pollen, almost fetched today’s eggs from the nesting box, almost sighted something marvelous through the telescope, almost, almost, but for AI’s dooming and dominant gusts.

AIlingualism

By omitting a space and setting it in a sans serif font, AIlingualism piles on ambiguities. On page or screen, it might tempt you to see all lingualism, the heteroglossic babelsong, much like Adriano Celentano’s Prisencolinensinainciusol might tempt you to hear Anglophone snippets in what is stylized nonsense. “AIlingualism” sounds like eye-lingualism, I suppose, or the act of entongued seeing, which without going into the subtleties of synesthesia might be as simple as tracing tooth-shape, fishing for an offshed hair from a bite of egg salad, or checking the odontal in-betweens for temporarily trapped foodstuff. Hull from a popcorn kernel? When did I have popcorn? A similar phenomenon would be something like “retronasal olfaction,” which Michael Pollan describes in Cooked as the crossover between senses, the role of olfactory processing within experiences of taste, or where smell and taste commingle and coinform.

Yet I mean something altogether different with AIlingualism. It used to be that over-assisted writing revealed itself owing to too many thesaurus look-ups. You’d betrayed a faithful expressive act because we could almost hear Peter Roget himself whispering through your words. But thesaurus overuse is a lesser crime than the wholesale substitutive “assists” that walk us nearer and nearer to overt plagiarism: patchwriting, ghostwriting, essay milling, unattributed quotation, and so on. An assist from a thesaurus was usually keyed to a smaller unit of discourse, which in turn amounted to petty ventriloquism. But as the discursive magnitude increases, so too does the feeling that the utterance betrays the spirit of humanistic communication, that fleshly-terrestrial milieu where language seats, swirls, and percolates, elemental and embodied. I think this is close to what Roland Barthes characterizes as the “pact of speech” (20) in “To Write: An Intransitive Verb” (1970) from The Rustle of Language (1989).

AIlingualism creates phrasal strings from a vast reservoir of language, not the ‘Grand Vat’ but in the vaguest of terms, a large language model, or LLM, whose largesse blooms on the shoulders of other people’s language–papers, books, discussion boards, social media chatter, and utterances in whatever additional ways collected and compiled. Not that utterances have shoulders. But they do, at their genesis, stem from beings in contexts, and although the writing itself is a technology that rebodies utterances, LLMs as an extractable reserve and pseudo-sense-making melange yet further extend that rebodiment. To invent with the assistance of artificial intelligence is to compose in a way uniquely hybridized and synthetic. Language games, in this case, work by different but non-obvious rules. AIlinguals, or users of LLMs to write, suspend the pact and engage in pactless speech.

It isn’t so much the case that pactless speech of this machine-assisted sort is destined to be disappointing, underwhelming, detached from terrestrial contexts, or otherwise experientially vapid. I can’t say I am in a hurry to devote any time to reading AI writing, other than what comes with the shallowest glancing at headlines. And now that we’re solidly a year and a half into this “summer” (or buzzy hot streak) of AI, it continues to hold true that most everyday people are still puzzling over what, exactly, is assisting when a writer enlists the assistance of AI. AI is as often as not fumbling along with poor customer service chat help, with returning Amazon orders, and with perfunctory WebMD advice (“Have you tried sipping chamomile tea for your sore throat, Derek?”). It is helping to offer safe-playing might-rain-but-might-not weather forecasts. Looks up; no rain. And in this sense, it still functions, albeit within my admittedly small and mostly rural lifeworld, innocuously.

In a section called “5. Creatures as Machines,” Wendell Berry puzzles out a series of questions that, though they appeared in Life Is A Miracle, which was published in 2000, might just as well have been about ChatGPT:

Is there such a thing as a mind which is merely a brain which is a machine? Would one have a mind if one had no body, or no body except for a brain (whether or not it is a machine)–if one had no sense organs, no hands, no ability to move or speak, no sensory pains or pleasures, no appetites, no bodily needs? If we grant (for the sake of argument) that such may be theoretically possible, we must concede at the same time it is not imaginable, and for the most literal of reasons: Such a mind could contain no image. (47)

Such a mind could contain no image. AIlingualism propagates pactless speech; its intelligence can generate but not contain an image. Its memory is contrived (or dependent upon contrivance), not organic, fleshly, or pulsed neurologically. This is the greatest and gravest indicator of all: still, it more than holds on. AI is ascendant, picking up steam. What can this mirror about the world we’ve built, grinding along with its paradoxically gainful backsliding, AIlingual utterances–today–amounting to no more and no less than the throat clearings, ahem ahem, of commercial science and militarism? Of all the possible energias to put to language, to sacrifice our tongues to, these? Ahem ahem ahem.

Are the Artificials Expressive?

Stepping into AI discussions since November 2022 has felt to me like stepping into a mixed-gravity bounce house, enthusiasts bounding miles-high right next to cautionaries clinging clutch-knuckled to whatever handles the seeming-eternal humanistic basics avail.

Me, I’m just doing what I can to check the conversations, keep walk-jog sideline pace, or possibly bounce high enough for an occasional dunk-thought, sort of like those tenth-grade lunch breaks when the gymnastics springboards were theatrically repurposed so that everyone who wanted one could have an attempt at reaching the rim. Just a touch! I hope that’s not too much mixing, from bounce house to springboard-boosted basketball, considering I am over here trying to make a point about artificial intelligence, large language model “writing,” and the scoops of words masquerading as discourse from ChatGPT.

I was listening to a podcast—Ezra Klein, I think—while driving to Virginia from Michigan on August 2, and although the podcast wasn’t about AI, per se, the discussion of First Amendment law and free speech got me puzzling through a question about whether AI-generated prose is legally expressive. I am not the first; I am also not a lawyer. But. To illustrate, consider this: a local politician is running for a seat on the Board of Supervisors. Not being much of a speechwriter, they tap GPT-4 on its non-shoulder, prompting it to return for them an applause-raising statement about democratic values. The AI returns a lukewarm soup of a statement, and it just so happens to include in it a damaging and slanderous falsehood about another local official. Litigious gloves are off. Legal teams are enlisted. And the candidate mea culpas with the grandest of agentic shifts: “GPT-4 made me say it!”

It reads to me as one of the most ground-floor conditions, a lower-order stasis: Is AI expressive? Is ChatGPT responsible, legally or otherwise, for its so-called writing?

If no, then follows a corresponding set of questions about what writing qua “content generation” actually boils down to. Humans are, arguably and correspondingly, small(er) language models (SLMs). Certainly this doesn’t mean that an SLM can’t every so often augment its repertoire of inventional, compositional, and interpretive range with a sidekick LLM, a backdrop behemoth spitting possibly everything ever. But my hunch is that the SLM should be cautious about surrendering its language to this other phenomenon overmuch, or all-out ventriloquizing the LLM as though its expressions will be satisfactory, sufficient, or both, just because it is big.

Writing, as a verb, doesn’t shield itself especially well from contending, sometimes mismatched, activities. In fact, three decades of writing studies scholarly activity have worked mightily to expand writing, sparing writing its alphabetic-linear reduction, and pluralizing it loftily with overtures of multimodality. Much of this has been good and necessary and warranted, but there has been a trade-off. The trade-off is that you can fit a whole lot of yes-that-too under the baggiest of umbrellas, and then along came the LLMs. I wouldn’t argue that anyone should revert to exclusive or narrow-banded definitions of writing, tempting as it might be (e.g., only a pencil-holding activity, or a thing that happens when a human hand makes a keystroke). But I would say that the lines have blurred between “content generation” and “writing” in ways that are not always helpful for demarcating reasonably distinctive activities and in ways that risk promoting shortcut mindsets when writing is presumed to be ready-made, extractive, and infinitely/generically scoopable from an allegedly ever-improving LLM.

Collin recently referred me to Alan Jacobs’ entry, “on technologies and trust,” which aptly sketches the position that we wouldn’t ever think of enticing prospective students to cooking school only to tell them that everything they learn will be derived from HelloFresh boxes. A similar logic extends to graphic designers working from templated fallbacks. While the masticated options might be appealing to the uninitiated, they are not quite the same as learning by practicing when that practice entails selection, decision, trial and error, and so on.

I am not convinced that LLMs are expressive, and I want to work on making evaluative sense of AI more forwardly in these terms.

A final illustration: In April an HVAC technician visited the house for routine maintenance on the heat pump leading into the air conditioning season. Before leaving, he started to tell me about how he used to manage a big game preserve in Tennessee, though it closed, and so he changed careers. He then went on to tell me about his daughter who was taking an interest in cattle AI because she had a friend who was working with ranchers in Texas; the friend was finding cattle AI quite lucrative, he explained.

It took me a while to figure out that large-scale livestock procreation, too, has an artificial alternative; that’s “cattle AI,” for us non-ranchers. I think about this often as a checkpoint in conversations about AI and content generation. Might be, cattle AI is for cows what ChatGPT is for writing–artificial, expedient, not to be mistaken for the other embodied, developmentally-dependent, organic-contextual (more than mechanistic) act.