Project Name: Holographic Reading
Grantee: Andrew Dunn
Funding Cycle: 2017-2018
White Paper: Dunn – Holographic Reading – White Paper
About the Project
In the 2012 edition of Debates in the Digital Humanities, Johanna Drucker argues that computational tools and humanities scholarship are fundamentally incompatible: while the former function in “strictly quantitative, mechanistic, reductive and literal” ways, the latter function qualitatively, in ways that are “necessarily probabilistic rather than deterministic, performative rather than declarative” (“Humanistic Theory and Digital Scholarship,” para. 3). My project responds to key aspects of Drucker’s critique without conceding that computational tools and literary study are entirely incompatible. More specifically, this project essays a new computational protocol for literary study whose roots lie in traditional humanistic theories and methods, viz. structuralism, the phenomenology of reading, cognitive narratology, and Possible Worlds Literary Theory (PWLT). I call this protocol holographic reading in order to position it among other “post-critical” movements (surface reading, machine reading, distant reading, etc.). The protocol attempts to model the internal topography of narrative fiction: the structures and intersections of life-worlds, the internal arrangement of minds and voices, their relative levels of depth, and the ways in which they alternate to create a narrative. Its ultimate goal is to mark up, and thereby preserve, key aspects of narrative context, so that computational tools can be focused more precisely on subsets with similar features (a consistent narrator, consistent attribution to a single character, etc.). In addition, this project will result in a shared resource for critical analysis: a public database of encoded texts that can support a broad range of computational approaches.
Inspired by Roland Barthes’s methodology in S/Z, I will use the “elementary rules of manipulation” (textual segmentation, mark-up, the creation of sets, and computational analysis) to turn sample literary texts into databases: structures that critics can visualize and use to isolate unique passages and telling exceptions (for instance, passages with anomalous ratios between word count and modality), which they can then subject to further analysis.
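To make the workflow above concrete, here is a minimal sketch of the pipeline, segmentation, mark-up, set creation, and a simple computational pass. The schema, function names, modal-verb list, and threshold are my own illustrative assumptions, not the project's actual encoding or tooling; the point is only to show how marked-up segments can be queried for "telling exceptions" such as an anomalous ratio of modal verbs to word count.

```python
# Hypothetical sketch: segment a passage into attributed units, encode each
# as a database-like record, then isolate units whose modality ratio
# (modal verbs per word) exceeds a chosen threshold.

# Illustrative modal-verb list; the real protocol's markers would differ.
MODALS = {"may", "might", "must", "could", "should", "would", "can", "shall"}

def encode(segments):
    """Turn (speaker, text) pairs into records preserving narrative context."""
    records = []
    for i, (speaker, text) in enumerate(segments):
        words = text.lower().split()
        modal_count = sum(1 for w in words if w.strip(".,;!?") in MODALS)
        records.append({
            "id": i,
            "speaker": speaker,                 # narrator or attributed character
            "word_count": len(words),
            "modality": modal_count / len(words) if words else 0.0,
        })
    return records

def anomalies(records, threshold=0.10):
    """Isolate segments with an unusually high word-count/modality ratio."""
    return [r for r in records if r["modality"] > threshold]

# Invented example passage, segmented and attributed by hand.
segments = [
    ("narrator", "She walked to the window and looked out at the rain."),
    ("character:Anna", "I must go, and you should not try to stop me."),
    ("narrator", "He said nothing."),
]
db = encode(segments)
for record in anomalies(db):
    print(record["speaker"], round(record["modality"], 2))
```

A critic could then pull only the flagged segments, already grouped by a consistent speaker, into whatever further analysis the question demands; the encoding, not the analysis tool, carries the narrative context.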