Coronavirus Questions

Reading Time: 3 minutes

Today Facebook, Reddit, Google, LinkedIn, Microsoft, Twitter, and YouTube circulated a joint statement on misinformation related to COVID-19, or Coronavirus. The statement is laudable and timely; its goals are sound. But it also sidesteps the wide gulf between facts and uncertainties in a messy, complexly unfolding public health crisis. Turn to social media with wonder, fine. Express curiosities, unknowns, and so on, yes. Speculate and even sense-make together. This is slow-built knowledge, and it’s especially messy when it intermixes non-experts, heightened anxieties, and unverifiable contagion. I have questions, too, and I’m no expert on viruses, much less the Coronavirus.

  1. How did it begin? From a March 12 Vox article, “The genetic evidence and epidemiological information, according to three esteemed infectious disease researchers writing in the New England Journal of Medicine, ‘implicates a bat-origin virus infecting unidentified animal species sold in China’s live-animal markets.'” There are numerous other conspiracy theories. Those aside, to the point of this “bat sniffles and some other succession of animals” theory, what does this mean for continued contagion that moves between humans and animals? I’ve read of swine flu and bird flu resulting in the slaughter of carrier-animals. In the case of COVID-19, is it clear yet that animal cross-contagion is not an issue?
  2. If there aren’t enough tests (yet), or if tests are issued so sparingly that everyday people calling their general practitioners to disclose symptoms are told they don’t meet the CDC criteria for testing, how can rate-of-spread analytics be considered reliable? Word of mouth indicates that some people with symptoms have been told they do not qualify for testing. They wait. But this alone would indicate serious limitations on what is knowable about rates of spread.
  3. How lightly experienced are the most lightly experienced cases? That is, can someone have Coronavirus, experience negligible or mild symptoms for only a short period of time, and thereafter carry on (after two weeks) without putting others at risk? Without further risk, themselves? Does the lightest possible case of Coronavirus generate in a person’s system the antibodies that will mitigate future risk of susceptibility or contagion?
  4. What is the relationship between viral load and severity of symptoms? If someone is exposed to a high viral load, is that person more likely to contract a serious case? Is the gravity of the illness linked to the viral load exposure? Does viral load in a patient fluctuate throughout the arc of affliction (the duration of the illness)?
  5. What is the relationship between the number of tests given and the number of people tested? Does one test mean one person has been tested? Two tests mean two people? Are most people who are tested tested twice? Is anyone tested more than twice? Are tests yielding inconsistent results counted as tests given?
  6. Has anyone answered directly/concretely how the Utah Jazz and other NBA teams were so swiftly able to get their player personnel tested? Or how an asymptomatic Idris Elba was tested? Are these simply matters of income or celebrity capitalizing on improved medical treatment?
  7. Is there any credence to homeopathic interventions, whether tinctures, infusions (vinegars), kombuchas or other fermented drinks, probiotics, or atmospherics (smudging)? That is, are there any dietary or physiological aids in anticipation of continued spread that chance mitigating the grip and spread of infection? Health advice circulating seems status quo generic–“take good care of yourself, eat right, get exercise, and so on.” Is there anything else likely to reduce or disrupt vulnerability? Like gargling salt water, taking extra vitamin C, use of a humidifier, and so on?
  8. Is there anything at all to be said for sequestered acceleration clusters (e.g., teams of ten who intentionally contract but who do so in isolation), particularly for intentionally getting some responders ahead of the curve? This is perhaps outlandish, and yet it chances being a reasonable tactic, if after contracting it and recovering, one’s system is emboldened so as to be better positioned for aiding others.

I realize the questions here cover quite a bit of range–from speculative scenarios to highly pragmatic decision points. They’re not meant to inspire misinformation but instead to put a finer point on concrete details that, to be fair, perhaps just are not known or knowable at this time. I’m wholly on board with curtailing the circulation of misinformation, but I hope we can do better to express uncertainties as questions that might find their way to those who can–sooner or later–answer them well.

Weinberger’s Talk at Michigan

Reading Time: 2 minutes

Earlier this month, I disregarded office-hour responsibilities (“Will return by 4:30 p.m. -DM”) on a Monday afternoon and went over to Ann Arbor for David Weinberger’s talk, “Too Big to Know: How the Internet Affects What and How We Know,” based on his soon-to-be-released book of a similar title.

It’s worth a look; the talk hits several important notes, particularly in light of the information studies slice of ENGL505, a rhetoric of science and technology class I’m teaching right now. In 505, we finished reading Brown and Duguid’s The Social Life of Information earlier this week, and although several aspects of the book are dated, that datedness is largely a function of print’s fixity. I know this isn’t big news, but because Weinberger’s talk works with a related set of issues, their pairing (for my thinking as much as for the class) has been worthwhile.

A couple of quick side notes:

  • Brown’s introduction of Weinberger is a nice illustration of differences between Information Studies and C&W or PTC. That “invent” is cast in the shadows of technological determinism is, well, curious. Or, it’s what happens when rhetoric has gone missing. I had to turn to an authoritative decision-maker to verify my sense that invent still has some mojo.
  • I like Weinberger’s account of the history of facts, and while I understand that facts are useful for argument, their solidity and their restfulness touch off other problems for argument.
  • I left Weinberger’s talk largely satisfied with his characterization of the moment we are in and the shifting epistemological sands digital circulation has stirred. But, if the paper paradigm has really met its match, why should Too Big To Know be printed at all? An obvious answer is that the book will produce substantially more revenue than the blog where bits and pieces of the book draft surfaced. Yet, this seems to cut against the grain of the talk. I will, of course, withdraw this question if Kindle copies of TBTK outsell paper copies.

Can Writing Studies Claim Craft Knowledge and More?

Reading Time: 3 minutes

Robert Johnson’s recent CCC article, “Craft Knowledge: Of Disciplinarity in Writing Studies,” argues that “craft knowledge” can function effectively as a warrant for disciplinary legitimacy.  He sets up “craft knowledge” against an Aristotelian backdrop of techne, or arts of making, and advances a view of “craft knowledge” as a solution to still-raging disputes over the disciplinary status of writing studies (notably not “rhetoric and composition”).  “Still-raging” is casting it too strongly; unsettled and ongoing are perhaps better matches with the characterization of those disputes in this speculative discipliniography–an article that imagines felicitous horizons for the field. As I read, I wasn’t especially clear whose conflicted sensibility would be rectified by invoking craft knowledge. Among Johnson’s concerns with the status of writing studies are 1) that it does not carry adequate clout (or recognition, for that matter) necessary for grant writing and 2) that it does not influence neighboring fields whose inquiries would be, by the input of those trained in writing studies, enriched.

On the problem of disciplinary status for grant writing, Johnson writes,

When the traditional disciplines–the so-called established fields of inquiry and production–work in an interdisciplinary manner, they in most cases still hold onto their disciplinary identity. This is painfully evident for those in writing studies when applying for external grant funding.  On the application forms from such agencies as the National Science Foundation (NSF), National Institutes of Health (NIH), and even the National Endowment for the Humanities (NEH), for example, applicants must identify their resident discipline in order to be eligible. (680-681)

Continue reading →

I-Search and Quantified Self

Reading Time: 2 minutes

I am 70-percent committed to a plan for ENGL326: Research Writing this fall revolving around research networks. I’ve been reading over the syllabus and materials Geof Carter generously shared with me from a similar class he taught at SVSU recently. The basic idea here is to begin with a key (or keyless, as circumstances warrant) scholarly article in a given field of study (i.e., the student’s declared major, probably) and then trace linkages from the article to/through the various places (inc. schools of thought), times, affinities (inspirational sources, pedigree/halo re: terminal degree), and semantic fields (inc. contested terms) out of which it was written.  We will probably adopt a workshop model, maybe use CMap Tools for representing these research yarns, develop reading and research logs in something semi-private, such as Penzu, and, if things go well, lay some groundwork for a relatively focused going over of what entails “research” in their respective areas while also doing a lot of reading and writing, including some sort of an update or response to the first article. We could even write those in Etherpad for the way it lets us present a document’s evolution as video (video which invites a layer of commentary and reflection, as I imagine it possibly working out). If this sounds like June thinking for a class that starts in September, well, it is. Anyway, what good is early summer if not for breezily mulling things over?

Now, had I to begin again, I might create a different version of Research Writing tied in with the Quantified Self stuff. Monday’s entry on Seth Roberts’ work reminded me about this. Here is a small slice of Roberts’ article abstract, which is posted on The QS blog:

My subject-matter knowledge and methodological skills (e.g., in data analysis) improved the distribution from which I sampled (i.e., increased the average amount of progress per sample). Self-experimentation allowed me to sample from it much more often than conventional research. Another reason my self-experimentation was unusually effective is that, unlike professional science, it resembled the exploration of our ancestors, including foragers, hobbyists, and artisans.

Although the QS projects are rooted in quantification, they are not exactly bound to traditional science or notions of experimentation and measurement for public good.  Instead, they assume a useful blend between quantitative tracking and personal knowledge.  I don’t have in mind a QS-based research writing class concerned so much with “optimal living” or with diet and exercise, although I guess there’s no good reason these things should be excluded from possibilities.  I’m thinking more along the lines of Quantified Self meets McLuhan’s media inventories meets Macrorie’s I-Search.  The class would inquire into data tracking, narrating spreadsheets, rhetorics/design of data visualization, and the epistemological bases of the sciences, while it “grabs hold of the word ‘authority’ and shakes it to find out what it means” (Macrorie, “Preface”). Again, just thinking aloud, June thinking for a class that, depending upon how things turn out this fall, starts in September 2011 or 2012.

Method’s Con-trails

Reading Time: 2 minutes

Caught a small blip of discussion yesterday concerned with whether or not Google Earth satellighted upon the lost city of Atlantis. Remnants of the elusive, underwater cityscape?

According to Google Maps Mania, Google says no:

It’s true that many amazing discoveries have been made in Google Earth
including a pristine forest in Mozambique that is home to previously unknown
species and the remains of an Ancient Roman villa.

In this case, however, what users are seeing is an artefact of the data
collection process. Bathymetric (or sea floor terrain) data is often
collected from boats using sonar to take measurements of the sea floor.

The lines reflect the path of the boat as it gathers the data. The fact
that there are blank spots between each of these lines is a sign of how
little we really know about the world’s oceans.

How little we know, indeed. Is this Atlantis? The conspiracy doesn’t interest me all that much. Instead, I’m struck by the impression: the stamp left by the “systematic” tracing, the residue of the surface-to-sea-floor method (a term others have smartly untangled into meta-hodos, something like ‘beyond ways,’ even ‘ways beyond’; this etymological dig lingers with me). The deep blue grid of “bathymetric data” elicits questions: why don’t we see these in the adjacent areas? What was it about this boat, this collection process, this translation from sound to image, that left behind the vivid trails?


Robert Sarmast elaborated on the image’s trail-grid, noting:

The lines you’re referring to are known as "ship-path artifacts" in the
underwater mapping world. They merely show the path of the ship itself as it
zig-zagged over a predetermined grid. Sonar devices cannot see directly
underneath themselves. The lines you see are the number of turns that the
ship had to make for the sonar to be able to collect data for the entire
grid. I’ve checked with my associate who is a world-renowned geophysicist
and he confirmed that it is artifact. Sorry, no Atlantis.
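Sarmast’s account of ship-path artifacts is easy to sketch in miniature. The toy simulation below is my own illustration, not any real survey pipeline–the grid size, transect spacing, and swath width are invented parameters–but it shows the basic geometry: a ship running parallel transects samples only narrow bands of sea floor, leaving blank stripes in between.

```python
import numpy as np

def survey_coverage(grid_size=100, spacing=10, swath=2):
    """Mark which sea-floor cells a zig-zag sonar survey actually samples.

    The ship steams along parallel east-west transects `spacing` cells
    apart; the sonar returns data only for a band roughly `swath` cells
    wide under each transect. Everything between transects stays blank.
    """
    sampled = np.zeros((grid_size, grid_size), dtype=bool)
    for row in range(0, grid_size, spacing):
        lo = max(0, row - swath // 2)
        hi = min(grid_size, row + swath // 2 + 1)
        sampled[lo:hi, :] = True  # one pass of the ship
    return sampled

cov = survey_coverage()
print(f"fraction of floor actually measured: {cov.mean():.0%}")
# → fraction of floor actually measured: 29%
```

The blank stripes are the point: the grid visible in the image is the method showing through, and the smooth-looking floor elsewhere is, presumably, interpolation over exactly this kind of sparse coverage.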

More provocations here: the grid’s unevenness, its predetermination, the inability of the sonar devices to see (erm…hear) directly below. And yet, a telling illustration of method alongside method: seems to me a subtle allegory in the adjacency of ocean floor imagery with lines and without. Presumably, the surrounding ground was measured similarly. Why no lines?