Sub Insert()

Ended up working on the Sony Vaio all morning, its poor fan whirring like a twin-prop airplane, so I could execute this macro on the Big Data Set. Going to need a macro solution for the MacBook eventually, which would appear to require 1) figuring out AppleScript, 2) trying Keyboard Maestro, or 3) making better use of the Boot Camp partition. For good keeping, today’s macro:

Sub Insert()
'
' Insert Macro
' Macro recorded 7/12/2011 by Derek Mueller
'
' Keyboard Shortcut: Ctrl+w
'
ActiveCell.Offset(-1, 0).Range("A1:M1").Select   ' select columns A:M of the row above
Selection.Copy
ActiveCell.Offset(1).EntireRow.Insert            ' insert a blank row below the copied one
ActiveCell.Offset(1, 0).Range("A1").Select       ' move into the new row
ActiveSheet.Paste                                ' paste the copied row
ActiveCell.Offset(0, 7).Range("A1").Select       ' jump to column H
Application.CutCopyMode = False
ActiveCell.FormulaR1C1 = "D"                     ' overwrite with "D"
ActiveCell.Offset(0, 3).Range("A1").Select       ' jump to column K
Application.CutCopyMode = False
ActiveCell.FormulaR1C1 = " "                     ' blank out the cell
ActiveCell.Offset(0, 1).Range("A1").Select       ' jump to column L
Application.CutCopyMode = False
ActiveCell.FormulaR1C1 = "NAME"                  ' overwrite with "NAME"
ActiveCell.Offset(1, -11).Range("A1").Select     ' return to column A of the next row
End Sub

Bus Turns

From Flowing Data, a terrific public transportation mash-up, Mapnificent, which reports estimated travel times along multiple radiating routes relative to an adjustable map marker. AATA estimates for The Ride are included in the data set, although I assume the site would prove more useful in complex urban zones with more routes than we have running in Washtenaw County.

[Image: Mapnificent screenshot]

Mapnificent is elegant enough that it doesn’t require much more explanation. I’m saving it as a nice example of maps concerned with time, thinking about its resemblance to river turns and big box turns (not all that far removed from the CCC word turns I showed at C&W a couple of weeks ago).

Also calls to mind a question about whether there is some sort of time-scape parallel to trap streets. Trap streets are geographic fictions embedded into proprietary maps meant to shield them from theft. If a copy of a map turned up showing the trap street, it made for easy sleuthing. What, then, is the temporal equivalent of a trap street? I suppose it could be an altered time-to-destination whose falsehood would establish duplication (e.g., Carpenter Road Meijer to AA Public Library in 8 minutes). And I’m not so inclined to think of these traps out of an interest in security (or copyright or plagiarism), but rather as a variety of imagined geography (much like the character in Miéville’s Kraken who sets out to ground-truth London’s trap streets as if they might, by the cartographer’s articulation, conjure up a potential space).

Coding the UWC

I’m not the nimblest programmer, and because I can count my successes with PHP on one hand, I feel compelled to document them, to extend and preserve them through self-congratulatory accounts like this one.

I am working this semester as a faculty consultant to the University Writing Center. I probably mentioned that before. Basically, my charge is to get online consulting systems up and running at EMU, provide a few months of support and training, and spread the word. The main piece here is asynchronous consulting via email. Much like what we built at Syracuse, this process relies on a form. The student fills it out, uploads an attachment, submits it. The submission calls a PHP script, which in turn displays a "You did it!" message, a readout of the form data fed to the screen (for saving, for verification), and an email message that routes the form data and the attachment to a listserv. The listserv consists of a handful of subscribers who will comment and send back the uploads in turn, in time.
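The handler itself isn't reproduced here, but the flow described above — confirmation message, on-screen readout, and an email routing the form data plus attachment to the listserv — might be sketched roughly like this. Field names, addresses, and the list address are hypothetical placeholders, not the actual EMU script:

```php
<?php
// formemail.php — hypothetical sketch of the submission handler.
// Assumes form fields "name" and "email" and one uploaded file named "draft".
$name  = htmlspecialchars($_POST['name']);
$email = htmlspecialchars($_POST['email']);

// 1. Confirmation and readout, echoed back for saving/verification.
echo "<h1>You did it!</h1>";
echo "<p>Name: $name<br>Email: $email</p>";

// 2. Route the form data and attachment to the listserv as a MIME email.
$file     = $_FILES['draft'];
$content  = chunk_split(base64_encode(file_get_contents($file['tmp_name'])));
$boundary = md5(uniqid());

$headers  = "From: $email\r\nMIME-Version: 1.0\r\n";
$headers .= "Content-Type: multipart/mixed; boundary=\"$boundary\"";

$body  = "--$boundary\r\nContent-Type: text/plain\r\n\r\n";
$body .= "Name: $name\r\nEmail: $email\r\n";
$body .= "--$boundary\r\nContent-Type: application/octet-stream; name=\"{$file['name']}\"\r\n";
$body .= "Content-Transfer-Encoding: base64\r\nContent-Disposition: attachment\r\n\r\n";
$body .= "$content\r\n--$boundary--";

mail('consultants-list@example.edu', 'New draft submission', $body, $headers);
?>
```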

The system works reasonably well, but managing the queue can become a headache. Whose turn is it? At Syracuse, the queue was filled in with four or five rotations, and then as form-fed drafts arrived, consultants would access a shared Google Spreadsheet and manually enter a few vital details: name, email address, time received, time returned, and turnaround (time returned minus time received). These few crumbs of data were helpful, but many of the trackable-sortable pieces of the form were not otherwise captured systematically.

Until Zend Gdata. With this installed, it’s now possible to run a second PHP process that will push all of the form data into a shared Google Spreadsheet automatically. I puzzled over this on Friday, figured it out on Saturday. My initial stumble was that I was trying to integrate the new PHP code into the script that generated the email and screen readout. Didn’t work. But then I figured out that I could instead route the form to a relay file (I doubt this is what programmers would call it, but I don’t have the vocabulary to name it anything else). The relay file was something like simple.php.
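For anyone trying the same thing, the Zend Gdata push looks something like the sketch below. The spreadsheet key, credentials, and column keys here are hypothetical placeholders; `od6` is the Gdata API's id for a spreadsheet's default first worksheet:

```php
<?php
// formtospreadsheet.php — hypothetical sketch of the Zend Gdata push.
require_once 'Zend/Loader.php';
Zend_Loader::loadClass('Zend_Gdata_Spreadsheets');
Zend_Loader::loadClass('Zend_Gdata_ClientLogin');

// Authenticate against the Spreadsheets service (placeholder credentials).
$client  = Zend_Gdata_ClientLogin::getHttpClient(
    'uwc@example.edu', 'password', Zend_Gdata_Spreadsheets::AUTH_SERVICE_NAME);
$service = new Zend_Gdata_Spreadsheets($client);

// Column keys must match the spreadsheet's header row (lowercased, no spaces).
$row = array(
    'name'         => $_POST['name'],
    'email'        => $_POST['email'],
    'timereceived' => date('n/j/Y g:i A'),
);

// Append the row to the shared queue spreadsheet.
$service->insertRow($row, 'SPREADSHEET_KEY', 'od6');
?>
```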

Simple.php is a script with a couple of lines: include formemail.php and include formtospreadsheet.php. Now, when the form gets submitted, both scripts run. The email routes the document like it should, and the Google Spreadsheet (queue) grabs a new line of data. The only element requiring manual entry is the time the consultant returned the commented draft. The shared spreadsheet does everything else: calls the list of consultant names from another page, calculates the turnaround, and builds a comprehensive record of who is using the service, the classes they come from, etc. Over time, that record will allow us to sort by class, faculty, and college, which will help us identify patterns that might prove insightful for how writing is assigned and taught across the curriculum.
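In other words, the relay file is little more than the two includes named above (paths assumed relative to the web root):

```php
<?php
// simple.php — the "relay" file: one submission triggers both handlers.
include 'formemail.php';          // sends the email and prints the screen readout
include 'formtospreadsheet.php';  // pushes the form data into the Google Spreadsheet queue
?>
```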

I should add that our recent launch of the service limits it to four targeted programs. This is necessary because we are not currently staffed to handle a deluge of submissions, and while we do want the service to get solidly off the ground this semester, we want foremost to extend it to a segment of the 17,000 students who are enrolled in some sort of online class.

UL

  • How is the resolution to blog every day in 2011 going? Not too shabby. Not too shabby, at all.
  • Shabby or shabbily? Shab. Shabulous.
  • IHE today reports that distance ed critic David Noble died last week at the age of 65. I read an article or two by Noble in 2004, but I never did get around to picking up his book, Digital Diploma Mills. I should, though. In fact, it undoubtedly connects with work I’m doing lately (and in the semester to come) to shift EMU’s UWC into online consultation. Also, for that matter, stuff like power adjuncting (a topic of fascination for me more than anything else) and, too, the dissoi logoi that for all of our belly-aching about automaticity in higher ed (in the humanities, particularly), there are a whole lot of ways in which we could better adopt and apply automation to some aspects of our work, especially where long-term data-keeping is at issue. Anyway, I live in an Automation Alley county, surely indicative of something.
  • Winter semester begins Wednesday. I am teaching a Tuesday night grad class, ENGL516: Computers and Writing: Theory and Practice (the titular colonpede tempts me to add another segment: 011000010111011101100101011100110110111101101101011001010000110100001010).
  • That we meet on Tuesday the 11th for the first session leaves me no other choice than to assign two articles for the first class. Right? Right! I am mildly concerned the articles will be met with a chorus of “Shabulous!” Besides the grad class, I have a faculty consulting appointment in the UWC (mentioned that earlier) and then a course release carried over from last semester from an internal research grant. My plan is to make this the hardest working semester ever and actually get a couple, maybe three, of these two-thirds finished projects sent off by May.
  • Ph. flies back to Kansas City on Saturday, ending his month-long visit. I guess this can only mean I owe him a day snowboarding at Alpine Valley, probably tomorrow.
  • Will put together a slow-cooker lentil soup so that everybody has something hot and good to come home to. They might be thinking this tastes shabulous, but their mouths will be too full to say it.
  • Last thing: Weird about the fallen birds in Arkansas, right? I mean, 1,000 birds within one square mile? The question I can’t put down is to what extent this is rhetorical–a rhetorical happening, perhaps purely of nature’s precarious course. We don’t know a cause. But then! A school of fish were found belly up in the Arkansas River a few days later, and, according to one report, “Investigators said there is no connection between the dead fish and the dead birds.” No connection? If these are rare events whose cause(s) remain(s) unknown(s) and they are geographically proximate, why assert that they are disconnected? Even if it is too early to identify a causal connection, their coincidence does foist upon them at least a choral connection. Then again, what better than “no connection” and “this happens all the time” to suppress panic. (Reminds me of this entry on dropping paper messenger “birds” during wartime)

Saw a clever tweet linking this curious event with taking Angry Birds too seriously. I’m inclined to relate it to Twitter, though, more along the lines of subjecting my own Twitter account to “lightning or high-altitude hail.” To be continued.

    More: a turn to labs for answers. Though still no speculation about zombie scarecrows.

“The Humanities Was Nice”

In late May, media theorist Lev Manovich presented “How to Read 1,000,000 Manga Pages: Visualizing Patterns in Games, Comics, Art, Cinema, Animation, TV, and Print Media” at MIT’s HyperStudio (via). The talk is relevant to my work because Manovich wants to create visualizations that deliberately alter the default scale at which we experience something like magazine covers or Manga pages. His “exploratory analysis of visual media” offers insights into culture, he says; visualizations “allow you to ask questions you never knew you had.”

Manovich wears a t-shirt that reads, “Smart Critique Stupid Create,” and he uses this slogan to create some separation between his work (stupid create) and traditional humanities (smart critique). Manovich kicks sand–maybe playfully, though it’s hard to say for sure–at the humanities again at the end of the Q&A when he says, “The Humanities was nice, but it was a false dream.” Obviously machine-reading and computational processing of images ring heretical for anyone deeply (e.g., career-deep) invested in one-at-a-time interpretations of aesthetic objects. The all-at-once presentation brings us to the edge of gestalt and permits us to grasp large-scale continuities. Manovich also mentions that this works differently for visual media than for semantic mining because the images are not in the same way confined by the prison house of language. The “how” promised in the lecture’s title carries well enough, but I would expect to hear ongoing questions about the “why,” especially “why Manga?” or “why Time Magazine covers?”

The video includes a couple of unusual moments: at 17:30 when Manovich grumbles about not being able to see his screen and around the 59th minute when host Ian Condry poses an exposition-heavy “question.” As for the practical side of the talk, Manovich’s frameworks for “direct visualization” and “visualization without quantification” are worth noting, and I would be surprised if we don’t hear more about them as these projects play out and are variously composed and circulated.

I-Search and Quantified Self

I am 70-percent committed to a plan for ENGL326: Research Writing this fall revolving around research networks. I’ve been reading over the syllabus and materials Geof Carter generously shared with me from a similar class he taught at SVSU recently. The basic idea here is to begin with a key (or keyless, as circumstances warrant) scholarly article in a given field of study (i.e., the student’s declared major, probably) and then trace linkages from the article to/through the various places (inc. schools of thought), times, affinities (inspirational sources, pedigree/halo re: terminal degree), and semantic fields (inc. contested terms) out of which it was written. We will probably adopt a workshop model, maybe use CMap Tools for representing these research yarns, develop reading and research logs in something semi-private, such as Penzu, and, if things go well, lay some groundwork for a relatively focused going-over of what “research” entails in their respective areas while also doing a lot of reading and writing, including some sort of an update or response to the first article. We could even write those in Etherpad for the way it lets us present a document’s evolution as video (video which invites a layer of commentary and reflection, as I imagine it possibly working out). If this sounds like June thinking for a class that starts in September, well, it is. Anyway, what good is early summer if not for breezily mulling things over?

Now, had I to begin again, I might create a different version of Research Writing tied in with the Quantified Self stuff. Monday’s entry on Seth Roberts’ work reminded me about this. Here is a small slice of Roberts’ article abstract, which is posted on The QS blog:

My subject-matter knowledge and methodological skills (e.g., in data analysis) improved the distribution from which I sampled (i.e., increased the average amount of progress per sample). Self-experimentation allowed me to sample from it much more often than conventional research. Another reason my self-experimentation was unusually effective is that, unlike professional science, it resembled the exploration of our ancestors, including foragers, hobbyists, and artisans.

Although the QS projects are rooted in quantification, they are not exactly bound to traditional science or notions of experimentation and measurement for public good. Instead, they assume a useful blend between quantitative tracking and personal knowledge. I don’t have in mind a QS-based research writing class concerned so much with “optimal living” or with diet and exercise, although I guess there’s no good reason these things should be excluded from possibilities. I’m thinking more along the lines of Quantified Self meets McLuhan’s media inventories meets Macrorie’s I-Search. The class would inquire into data tracking, narrating spreadsheets, rhetorics/design of data visualization, and the epistemological bases of the sciences, while it “grabs hold of the word ‘authority’ and shakes it to find out what it means” (Macrorie, “Preface”). Again, just thinking aloud, June thinking for a class that, depending upon how things turn out this fall, starts in September 2011 or 2012.

Manovich, "Data Visualization as New Abstraction and as Anti-Sublime"

Manovich, Lev. "Data Visualization as New Abstraction and as Anti-Sublime." Small Tech: The Culture of Digital Tools. Eds. Byron Hawk, David M. Rieder, and Ollie Oviedo. Electronic Mediations Ser. 22. Minneapolis: U of Minnesota P, 2008.

Why render data visually? Lev Manovich, in "Data Visualization as New Abstraction and as Anti-Sublime," the opening chapter in Small Tech (reprinted from ArtPhoto, 2003), responds to this with an answer that, in spirit, moves beyond the "data epistemology" of a cumbersome, old (perhaps even mythical) scientism. Why render data visually? "[T]o show us the other realities embedded in our own, to show us the ambiguity always present in our perception and experience, to show us what we normally don’t notice or pay attention to" (9). By the end of this brief article, Manovich begins to get round to the idea of a rhetoric of data visualization, even if he never calls it this. Despite being caught up in a representationalist framework as he accounts for what data visualization does, Manovich eventually keys on "daily interaction with volumes of data and numerous messages" as the "more important challenge" facing us. That is, we are steeped now in a new "data-subjectivity."
