Take It and Read – Andrew Piper

Katrina Abel
Publishing 401
September 27, 2016

In chapter one, “Take It and Read,” Piper discusses historical texts dating back to the end of the fourth century. He outlines the important relationship between text and touch and how the book has served as a tool of reflection, introducing ‘touch’ as the most elementary sense. Piper uses the sense of touch to show that the physical connection between humans and text is constantly being challenged by new technology, and he argues that digital text has altered our physical connection with the written word. Our generation has become reliant upon digital text, which provides a stimulation that diminishes the authenticity of the book.

Piper turns to the end of the fourth century, the time of St. Augustine and his influence on the development of Western Christianity. This was the period when the codex formally took the place of the scroll and became the most common, everyday form of reading material. St. Augustine’s conversion to Christianity was illuminated by the physical connection he felt when reading the Bible; the Bible and its material form were a symbol for him during his conversion. It “was an affirmation of the new technology of the book within the lives of individuals, indeed, as the technology that helped turn readers into individuals” (Piper, 2012). Turning the pages of the book was powerful enough to change him from a reader into a spiritual human being.

Piper conveys his ideas through St. Augustine’s experience with the codex. His idea that the book can turn a reader into an individual carries immense responsibility. I find it intriguing that by holding a book, the reader can feel the way it is bound together. The pages and print provide a different connection, one that many prefer over reading off a screen. Before reading this chapter I never put much thought into why I preferred the authenticity of holding a book over digital text. Piper’s discussion of the sense of ‘touch’ and its definition as “the most elementary sense” (Piper, 2012) provided some answers for me, and his explanation of the book as a tool of self-reflection allowed me to better understand his argument.

The ‘at hand’ quality of the book marked a climactic period in St. Augustine’s life. The materiality of the book created a physical sensation through touch, and it also created an unexpected spiritual relationship. I learned through this reading that the hand is symbolic of openness: while our hands are open, our minds are open. The transition from the scroll to the book allowed humans to physically feel what Piper calls the ‘graspability’ of a book. I prefer to hold a book and read its pages over reading the same story on an e-reader. When I am holding the book in my hands, the book also holds my attention, immersing me in the story and characters without any distractions. The codex form allows me to access the story that is inside the book.

I realized that my imagination is far more creative and meaningful when I plunge into a book’s story line than when I skim through digital text on an e-reader. The words on the page become multidimensional. For many, books have been an immediate source of media from a young age. It is comforting to know that the book is there, regardless of its format.

Piper continues his argument by stating that digital text cannot be held or grasped. The folds and pages of a book have been transformed into an unreliable online world; digital text can be altered and deleted, making it hard for readers to trust. I agree with Piper when he says the online world may seem a little out of touch, because anyone is capable of going online and changing information. I strongly believe that technology has made the minds of our generation lazy.

A book should always be a symbol of freedom of mind and imagination. As Piper states, “our hands are becoming brooms, sweeping away the alphabetic dust before us” (Piper, 2012), just as every new medium is sweeping away a book’s authenticity in some form. I hope that in future years, readers will still have an urge to pick up a book and make a connection similar to the one that I feel. Physical touch with a book has allowed me to realize a potential in my mind that I do not feel with digital text. I believe that there will always be an intimacy in holding the book and flipping the pages.

Work Cited

Piper, Andrew. 2012. “Take It and Read”. In Book Was There: Reading in Electronic Times. Chicago: University of Chicago Press. 1-44.

Mediation and the Vitality of Media, from Life after New Media: Mediation as a Vital Process by Kember, S. and Zylinska, J.

In the first chapter of Life after New Media: Mediation as a Vital Process (2015), Sarah Kember and Joanna Zylinska lay a theoretical foundation, supported by case studies, for understanding old and new media and their dynamic processes of mediation. They propose a shift away from seeing new media as “a set of discrete objects” toward “understanding media, old and new, in terms of the interlocked and dynamic processes of mediation” (p. 1), arguing that the object-centred view has resulted in a division of the world into categories, or false divisions. Our limited dualism, a binary or oppositional thinking such as analog vs. digital, readerly vs. writerly, mass vs. participatory, constructs ontological conceptualizations of media (p. 3). The authors then address the concept of ‘originary technicity,’ which proposes that “we have always been technical,” or in other words, “we have always been mediated” (p. 18). They argue that mediation is interconnected with ‘life.’ Their study concludes that “mediation” is being-with and emerging-with the technological world.

The most salient aspect of the term “new media” is ‘newness.’ However, the authors caution that the newness of the products and processes described as “new media” should not be taken at face value (p. 3); “newness” also functions as a commercial imperative (p. 4). For example, new products and services demand that we upgrade our computers, smartphones, and other devices in order to display advanced labour and social relations. The authors point to two key terms used to characterize “new media”: convergent and interactive. New media are commonly categorized as inviting active consumption, whereas old media such as newspapers, radio, and books are framed as passive. According to the reading, however, the authors depict the book, one of the oldest media, as a medium that is not so different from new media: “… a philosophical plane of immanence or a fictional world of novel has always required an active participation and contribution from the reader, not to mention the efforts of all those who have been involved in their editing, design, production and distribution. Arguably, books are thus as hypertextual, immersive and interactive as any computerized media” (p. 4). Old media, then, are already interactive and convergent, and there is no firm borderline between new and old media. Gary Hall, the author of Digitize This Book!, suggests that old media, including the book, are inherently unstable, lacking the interactivity between authors and readers found in the creativity and collaboration of “new media”; but this point of view reinforces the binary concept and results in linear, cause-and-effect thinking, which is the major problem behind such false divisions. This binary, cause-and-effect thinking about media and the process of mediation must be eliminated.

The authors then introduce the concept of ‘originary technicity’ through Stiegler’s account of the Greeks. This history reveals that the human is a technological being: a being that has the power to create but also relies on external elements to fully realize its being (p. 16). The authors state, “originary technicity can therefore be understood as a condition of openness to what is not part of the human, of having to depend on alterity – be it in the form of gods, other humans, fire or utensils – to fully constitute and actualize one’s being” (pp. 17-18). This statement underlines that technology was and still is part of us, and that we have always been technical, or mediated. Drawing on Bergson, mediation can be seen as another term for “life,” for being-in and emerging-with the world (p. 22). The authors believe in the possibility of the emergence of ever-new forms, a potentiality to generate unprecedented connections and unexpected events (p. 24).

This reading has changed the way I think of new media and the process of mediation. As a student in the field of Communication, I had understood digital media as a more structured, more technologically advanced version of old media, assuming a significant transition from old to new. However, through the philosophical literature surveyed in this reading, it becomes clear that digital media are part of a long historical trajectory. Stiegler’s study of the Greeks is one example that reveals the human’s instinctive power, which endures and reaches for what is not in the human, from creating tools to making fire. In this sense, the human is itself a tekhne, a technology for achievement.

Overall, it is impossible to speak about media without addressing the process of mediation. The limited dualism and false divisions of old and new media have been obstacles to understanding the actual meaning of media. Given these philosophical works, media need to be observed as particular tekhne that enable “temporary ‘fixings’ of technological and other forms of becoming” (p. 21). The authors remark, “by saying the logic of technology (as well as use, investment and so on) underpins and shapes mediation, we are trying to emphasize the forces at work in the emergence of media and ongoing processes of mediation” (p. 21). As mentioned in the introduction, the term “mediation” means ‘being-with’ and ‘emerging-with’ the technological world. The lifeness, or vitality, of media indicates that humans and media are strongly interrelated; more specifically, media understood through the process of mediation are already bound up with the world. Thus, we should avoid cause-and-effect or linear thinking when trying to understand media.

Work Cited

Kember, S., & Zylinska, J. (2015). Mediation and the Vitality of Media. In Life after New Media: Mediation as a Vital Process. Cambridge: The MIT Press. 1-28.

Immediacy, Hypermediacy, and Remediation

Jay David Bolter and Richard Grusin provide an in-depth exploration of the logic that surrounds new media in “Immediacy, Hypermediacy, and Remediation” (1998), particularly how digital technologies give rise to those ways of thinking through their existence across contrasting media outlets.

The first hurdle I came across with this reading was my lack of familiarity with the two latter terms, so I pulled some dictionary definitions off Google to start my note-taking process.

Hypermediacy is “a style of visual representation whose goal is to remind the viewer of the medium” (Bolter and Grusin 272). Hypermediacy plays upon the desire for immediacy and transparent immediacy, making us hyper-conscious of our act of seeing (or gazing).

We then learn from the authors that remediation is “the representation of one medium in another,” and later in the reading they argue that this is actually a defining characteristic of digital media.

Initially, I assumed that hypermediacy was something that always worked against establishing immediacy, which is considered the quality of bringing one into direct and instant involvement with something to give rise to a sense of urgency or excitement. This was not the case, as Bolter and Grusin begin introducing these three terms by saying:

“We do not claim that immediacy, hypermediacy, and remediation are universal truths; rather, we regard them as practices of specific groups in specific times” (p. 2).

To expand on that, we learn that immediacy can be seen differently from the perspective of artists, designers, theorists, or viewers with less knowledge of the processes behind the creation or presentation of media forms. However, we also discover that hypermediacy brings a wider array of reactions that occur according to contemporary ideas surrounding immediacy, so we already start to see how these two forms of logic are intertwined in that respect and not always polarized.

Furthermore, Bolter and Grusin assert that remediation will always operate under whatever cultural assumptions are associated with those two aforementioned themes. Yet before any contemporary examples of remediation are picked out for deciphering, the historical resonances of Renaissance painting, nineteenth-century photography, and twentieth-century film are examined, among many other technologies.

Beginning with virtual reality, we are introduced to the term “transparency” and its relation to the immediacy of a medium, or our way of getting lost in the moment of being exposed to it. Virtual reality is presented as a very obvious example, as Bolter and Grusin describe it as so realistic that we are meant to forget about the fact that we are interacting with technology.

Transparency is then identified in Renaissance painting methods, where artists make use of linear perspective to draw what Alberti (1972) calls “an open window through which the subject to be painted is seen” (55). Additionally, painters use erasive methods, such as smoothing away brush strokes, to establish a stronger sense of immediacy for viewers.

With the advent of photography and television, these technologies began to automate the techniques associated with linear perspective, making it even easier to conceal the artist and the artistic process through their remediation of painting concepts. The same can be said for computer animation, where it is now commonplace to function as film by presenting a “sequence of predetermined camera shots” (pp. 9-10).

From this point of analysis, Bolter and Grusin then begin to highlight how the logics of immediacy and hypermediacy are governed by contemporary thoughts surrounding new media, such as a computer desktop full of windows.

“If the logic of immediacy leads one either to erase or to render automatic the act of representation, the logic of hypermediacy acknowledges multiple acts of representation and makes them visible. Where immediacy suggests a unified visual space, contemporary hypermediacy offers a heterogeneous space, in which representation is conceived of not as a window on to the world, but rather as “windowed” itself—with windows that open on to other representations or other media” (p. 15).

Thus our contemporary logic of hypermediacy falls in line with how we interact with the digital media of today, which are proving to be increasingly multidimensional and versatile in how we handle them. The desktop computer’s graphical user interface (GUI) was mentioned above, but there are other examples, such as opening multiple tabs in an internet browser, that fall into Bolter and Grusin’s notion of “replacement” as the operative strategy of today’s windowed technology.

In addition, our ability to scroll through or zoom in on photos while using smartphones allows us as users to become mediators of the technology in a more transparent fashion, as these methods remediate the older conventions introduced by computers, where visible buttons for a magnifying glass tool or a scroll bar were accessible.

To conclude the analysis, the authors rebut the argument made by media theorist Steven Holtzman (1997), who states that digital media “cannot be significant until they make a radical break with the past” (p. 31). I agree with their rebuttal: there will always be a reflection or an idea of older media when newer digital media are examined.

It is exactly as Bolter and Grusin put it in their final sentence: “Repurposing as remediation is both what is ‘unique to digital worlds’ and what denies the possibility of that uniqueness” (p. 31).

Works Cited

Bolter, Jay David, and Richard Grusin. 1998. “Immediacy, Hypermediacy, and Remediation.” In Remediation: Understanding New Media. Cambridge: The MIT Press.

Immediacy, Hypermediacy, and Remediation

In “Immediacy, Hypermediacy, and Remediation,” Bolter and Grusin focus on three topics, each of which involves how media are portrayed, represented, and presented. The chapter highlights how remediation operates under cultural assumptions about immediacy and hypermediacy. Bolter and Grusin make the point that these three concepts did not get their start with the digital age; rather, they have existed long before that in various forms of media.

The first section of the chapter focuses exclusively on immediacy. Immediacy is our need to have media that reflect our reality as closely as possible, and there is a trail throughout our cultural history of attempts to create media that do this. The first example used is virtual reality: it is supposed to make us feel closer to reality, but it still contains many ruptures. Bolter and Grusin say that this sort of transparent interface is born out of the need to gloss over the fact that digital technology is by definition mediated. Later, the examples of Renaissance painting and photography are used to illustrate immediacy through transparency; each was the best attempt at immediacy, the best representation, up to that point (Bolter & Grusin, 26). They then connect the concept to recent times, suggesting that computer graphics are an extension of the need for immediacy. Finally, they state that the erasure of the human agent from the media is a big part of immediacy; it is what makes media seem legitimate or not.

The next section is based around the concept of hypermediacy. Hypermediacy can be defined simply as multiple forms of media combined in one viewing experience. Hypermediacy “privileges fragmentation, indeterminacy, and heterogeneity and emphasizes process or performance rather than finished art object” (31). Although one can think of the internet as a good example of this, the concept did not start there. Bolter and Grusin use the example of magazines such as Wired to illustrate that this is not new: a magazine layout features many combinations of media, such as text and images, all together but with none overbearing the others. Much like windows on a desktop, they do not all try to blend into each other; they contrast with each other and give you different perspectives. The authors also explain the difference between immediacy and hypermediacy: immediacy is a unified visual space, while hypermediacy is windows that open onto other representations or other media. They comment that the internet is culture’s “most influential expression of hypermediacy” (43), and that the internet is an exercise in replacement, most radical when the new space is a different medium, such as reading an online article and then switching to a video. Finally, they state that the difference between immediacy and hypermediacy is the difference between looking at versus looking through something.

Finally, they touch on the topic of remediation, a concept that should be familiar to most communications students. Remediation, to paraphrase Bolter and Grusin, is when content is borrowed from one form of media but presented in a different medium (44). With remediation, the medium borrowing the content rarely mentions the medium being borrowed from; for example, a movie based on a book would never mention the novel it is based on, because doing so would ruin the illusion of immediacy. Remediation has permeated culture and society, and Bolter and Grusin actually define remediation by different degrees. The first is when an older medium is represented digitally without irony or critique; an example of this is CD-ROM picture galleries. The second is when a medium emphasizes the differences rather than trying to erase them; the example they give is Microsoft’s Encarta, a digital encyclopedia that highlights the fact that it is a digital version. The third is refashioning the older medium while still marking its presence; an example is the Emergency Broadcast Network’s Telecommunications Breakdown, where television and movie clips are set into techno music. And finally, there is the degree at which a new medium tries to absorb the old medium entirely; an example of this is the video game Doom, which remediates cinema.

In all, the chapter focuses heavily on immediacy and hypermediacy, which is understandable: remediation is a fairly basic concept that most can easily understand, while the first two take a bit of time to grasp, hence the pages of examples. As well, the concept of remediation builds upon the first two concepts; you get a better understanding of remediation by knowing in depth what immediacy and hypermediacy are, and how they relate and contrast. It is also interesting how the authors use earlier forms of media to illustrate all three concepts. Most people assume these concepts have their origins in digital media, so it is illuminating to see the various examples showing that they have existed as long as media have existed.

Works Cited

Bolter, Jay David, and Richard Grusin. 1998. “Immediacy, Hypermediacy, and Remediation.” In Remediation: Understanding New Media. Cambridge: The MIT Press.

Print Culture (Other Than Codex): Job Printing and Its Importance by Lisa Gitelman

Print culture in itself is very ambiguous to define, as the word culture is often characterized by various aspects of collective behaviour and social constructs. In her exploration of this discourse, Lisa Gitelman examines the role of noncodex work in her piece, fittingly titled “Print Culture (Other Than Codex): Job Printing and Its Importance.” In this article Gitelman highlights the overlooked and almost erased history of job printing as a discipline of publishing that grew from distinct practices surrounding printers. As she reveals through her analysis, the meanings and definitions of print and print culture are not only difficult to pin down but are shaped by specific historical agents and structures. By focusing on job printing, Gitelman emphasizes its economic importance and its significance in changing the public, she argues, from passive readers to active users (p. 192).

Beginning by distinguishing publication formats, Gitelman discusses how the codex is essentially any form of text that resembles a book. In this sense the codex is interpreted in relation to older formats such as the scroll, through which she illustrates the dynamic connotations of media. Similarly, the semantics of the word print are also under scrutiny, as it has “come to encompass many diverse technologies for the mechanical reproduction of text” (Gitelman, p. 184). As new advancements in print have appeared over time, the use of the word print has become free of any particular technology, and even of the human hand. While this may be a result of such technology, when discussing print as a culture one cannot ignore the influence of socio-economic circumstances in any given time period. Print culture as a whole is then subject to the developments and usage of print in relation to modernity and the customs of social actors (Gitelman, p. 185). The rise of other expanding institutions in Western society intertwined with print to create new decentralized industries, with revisions to format and consumption.

Gitelman quotes Stallybrass in pointing out that “printers do not print books. They print sheets of paper” (p. 186). The quote is symbolic because it communicates the idea that not everything printed is traditionally published, in contrast to the historical assumption that publishing typically takes the codex format, as some sort of book. Although printing capabilities had been around for quite some time, it was not the technology that drove innovation but the social and institutional changes discussed earlier. The surge in noncodex work produced in the early 20th century brought about a new use for print that left behind the old presumed characteristics of the codex. Gitelman addresses this by looking at how noncodex works had slim survival rates and were consumed immediately, losing value over time. As a result, she views these textual snippets as vital aspects of the publishing industry that are often seen as meaningless, despite the overlapping implications they had for society, commerce, and print culture overall. While meant to be consumed rather than to last the test of time, this type of work, known as job printing, was transformative in using the noncodex format to expand the utility of publishing.

Since noncodex print stands in contrast to conventional publishing, it was not measured and recorded in circulation. Gitelman suggests that job printing was not heavily monitored, and at one point might have accounted for 30% of industry labour (p. 189). Despite such large numbers, work consisting of receipts, labels, letters, and so on is vastly underrepresented in publishing scholarship and studies. Ultimately, job printing became an underground section of the publishing industry, connected through modern capitalism to other forms of production as a dominant medium of the time. Printing’s orientation shifted from publisher-to-individual to business-to-business, coming to “function as instruments of corporate speech” (Gitelman, p. 190). Gitelman observes that this stands in opposition to most literary works, inviting us to see such printing as simply printing rather than distinct publication. With these changes to the product, citizens as agents consume it differently within the public sphere. Gitelman argues that readers under the sway of “corporate speech” become users of these texts instead of readers, because they do not read them or share the same romanticized ideals as the text fades (pp. 191-192). Job printing also raised contemporary issues of copyright and ownership that are still debated in the digital age under the “idea-expression dichotomy.”

As for my own interpretation of the topic, I think Gitelman presents a case for understanding publishing through its direct response and evolution relative to other establishments. Job printing existed not because publishers needed it, but because society saw print’s potential not being fully utilized. Just as with any technology, the changes brought to format and usage did not depend on the technology alone, but on its conjunction with social actors, as Gitelman notes. We see the same debates happening today over copyright, as noted in the article, but also over physical and digital books. While print is free of any particular technology, its definition, just like that of print culture, is constantly changing relative to the time and society at large. Whether it be the different formats text takes on via the codex, or the type of work performed, such as job printing, we cannot underestimate the ramifications of any technical instrument in shaping the future of publishing from what precedes it.

 

Works Cited

Gitelman, L. (2013). Print Culture (Other Than Codex): Job Printing and Its Importance. In Comparative Textual Media: Transforming the Humanities in the Postprint Era, 183-198. doi:10.5749/minnesota/9780816680030.003.0008

Pulp’s Big Moment

In The New Yorker’s “Pulp’s Big Moment” (January 5, 2015), Louis Menand traces the history of the pulp paperback and describes how its explosive entrance into the book market in the 1930s and 1940s changed the landscape of publishing. Prior to 1935, when Allen Lane launched Penguin paperbacks in the UK, books were sold primarily in bookstores (which were limited to urban areas) and through slow methods that required planning and intention on the part of the consumer, such as catalogues and book clubs. For the most part, books were seen as a “highbrow,” intellectual medium, targeted at consumers with a certain level of education and financial resources.

When cheap paperbacks hit the market in Britain in 1935 and four years later in America with Robert de Graff’s Pocket Books–the country’s first line of mass-market paperbacks–the market shifted dramatically. Suddenly, paperback books were accessible in both price and location. They were sold for pocket change in railway stations, drug and grocery stores, newsstands, and any other retail space that could fit a rack of small paperbacks.

Menand writes that once paperbacks flooded the market, “books were not like, say, classical music, a sophisticated pleasure for a coterie audience. Books were like ice cream; they were for everyone. Human beings like stories. In the years before television, mass-market paperbacks met this basic need” (Menand, 2015). These stories were not intended to promote moral messages or academic discussions–they were written and marketed purely for pleasure and to sell as many copies to as many people as possible. Often, this involved content that would be unacceptable in traditional literature. Although this was not a wholly new phenomenon (for instance, penny dreadfuls in the nineteenth century also capitalized on the scandalous and lurid and claimed little moral value), it was the first time “pulp” was produced and marketed in such mass quantities.

At the same time, novels such as The Catcher in the Rye and The Great Gatsby–as well as older, established classics such as Shakespeare–were packaged to look like pulp novels and sold at similar prices, blurring the lines between “highbrow” and “lowbrow” reading. Although the market was eventually oversaturated with cheap paperbacks and companies such as Pocket Books failed to make large profits because their books were sold so cheaply, the tidal wave of pulp fiction had altered the way books were approached and sold. Books written for pleasure and created for the masses were now commonplace, and even books considered classics or originally intended to educate were marketed to appeal to the casual reader looking for entertainment.

As well, these novels were not held to the same standard of morality that “serious” literature was–so that along with content meant purely to sensationalize, some novels were able to tell stories that would otherwise not be told, for instance of interracial or same-sex relationships. Although social change was not the intent of people like Robert de Graff, whose goal was to sell as many books as possible, according to Menand, pulp paperbacks were “market disrupters. They put pressure on the hardcover houses, and that meant putting pressure, in turn, on the legal regulation of print” (Menand, 2015). Eventually, reading for pleasure became mainstream enough that hardcover publishers could reclaim the practice as their own–the public had proved that there was a market for less censored, less intentionally constructive and moral reading material, and content that was previously seen as taboo was now legitimized.

One of the things that interested me most about Menand’s article is how the rise of the cheap paperback foreshadows in some ways what is happening now in the realm of ebooks and online publishing. As with the pulp market during its golden age, we are increasingly saturated in mass and digital media. Though the Internet has provided a platform for authors and other creators who otherwise might not have been able to pass through traditional gatekeepers such as publishing houses, the sheer amount of information and media “noise” set in front of consumers makes it difficult to differentiate oneself from the mass of other voices. Often, both mass market novels and online content were or are offered for rock-bottom prices compared to the forms of media that preceded them. Ultimately, although mass market paperbacks are far from dead, pulp fiction ceased to be the omnipresent force that it was in the mid-20th century after the market was oversaturated with cheap novels–how can online publishing avoid this fate while still remaining accessible?

In addition, self-published ebooks and online publishing platforms are raising questions similar to those that consumers, critics, and publishers of pulp novels faced regarding what qualifies as “legitimate” art. Fanfiction (along with fan art and other fan work), for example, is a genre that has exploded in popularity in the last two decades because of easy digital distribution, even though, like pulp novels, it is dismissed by critics and rarely seen as a valid form of writing. Fanfiction is not only read but also written purely for enjoyment, and is easily accessible to anyone with an internet connection. The internet has already changed the landscape of the publishing industry and continues to do so in ways that parallel Menand’s summary of the pulp paperback industry of the 20th century–cheap or free readily available content, blurred lines between traditional, acceptable and “lowbrow” art, and a potential for oversaturation of the market.

References:

Menand, Louis. “Pulp’s Big Moment.” The New Yorker, January 5, 2015. Retrieved from http://www.newyorker.com/magazine/2015/01/05/pulps-big-moment

Interview with Matthew Kirschenbaum – Kelsey Wilson

Kelsey Wilson
PUB 401
September 20, 2016

Development of the Digital Humanities in Manuel Portela’s
An Interview with Matthew Kirschenbaum

This interview provides a discussion with Matthew Kirschenbaum, author of Track Changes: A Literary History of Word Processing, released earlier this year. Following the themes of his book, Kirschenbaum answers several questions that compare and contrast the development of writing through word processing and through more traditional mediums, including the typewriter and longhand writing. His book centers on the two decades from 1964 to 1984, the era in which word processing went from new innovation to modern convenience. Importantly, neither Portela nor Kirschenbaum offers an overall theory of the impact of word processing technology on the publishing industry; they merely offer a fascinating exploration of the many developments and changes that have led to such rapid change in literature production. Portela divides his interview into a series of nine questions and responses, and I will focus on three points that interested me throughout these questions.

Kirschenbaum’s closing comment to Portela’s question about significant moments in the adoption of word processing for literary writing recalls the striking diagram drawn by our professor in last week’s class: “the history [of word processing in literary writing] itself is rarely one of simply linear progress” (Portela 2016). Technological advancements that impacted publishing seemed to come out every year from the late 1970s through the 1980s, exemplified by the release of the Apple II in 1977 and the Macintosh in 1984 from that company alone. However, despite many successful developments, many more were flops, resulting in circular patterns and dead ends throughout the history of word processing.

A very minor comment made by Kirschenbaum, answering a question about how discourse around word processing developed as the technology became more advanced and prevalent throughout society, really gave me pause. Moving beyond authors mentioning word processing in their works, Kirschenbaum cites 1984 as “the year the illustrator David Levine began sometimes drawing authors with computers instead of typewriters or fountain pens in his caricatures for the New York Review of Books” (Portela 2016). This moment seems incredibly significant to me, as it implies that the majority of viewers of these caricatures would have been familiar with computers and word processors and their use in literary writing in order for the illustrations to make sense. Additionally, it is interesting that this quotation mentions fountain pens alongside typewriters as the earlier tools of authors, which made me question the fall of the typewriter in the wake of word processing. For me personally, when writing comes to mind I think of both the computer and pen and paper, while the typewriter falls by the wayside, and this made me wonder what the next tool of the trade to disappear will be.

As a World Literature major, I am very interested in how texts are shaped, both physically and figuratively, and this is addressed in the seventh question of the interview. Kirschenbaum emphasizes the texture of the prose and cites composition theorist Christina Haas’s notion of the “sense of the text,” which I believe speaks to both senses of how texts are shaped (Portela 2016). Both of these concepts refer to the mental model that an author has of a work in progress, and through the ease of word processing the need for a purely mental model is disappearing. Rather than having to keep tabs on various elements of their prose, writers can now refer back to a physical model of their work at a single click. This is also exemplified by the ability to make instant changes to a draft throughout the writing process, which definitely impacts a writer’s approach to their work, as they are able to jump around and write in a non-linear fashion. Additionally, due to the ease that word processors lend to formatting, particularly in the more recent developments that Kirschenbaum mentions in his book and interview, the possibilities for unique and engaging formatting of a text are far more diverse than those a typewriter or earlier processor could offer.

To conclude, I believe that Kirschenbaum’s interview remains admirably neutral on such a polarizing and hotly debated topic as the development of technology and its impact on literature. It manages to bring facts about the history of word processing into a current debate, while making a variety of predictions about the future. I am most looking forward to observing the outcome of the paradox set forth by Portela regarding the “excess of information” and the “loss of information” surrounding word processing technologies and digital information, as I believe that this will be a crucial element in the near future (Portela 2016).

WORKS CITED

Portela, Manuel. “This strange process of typing on a glowing glass screen: An Interview with Matthew Kirschenbaum.” MATLIT 4.2 (2016): http://iduc.uc.pt/index.php/matlit/article/view/3017/2283. Accessed 16 September 2016.

Matthew Kirschenbaum: De-naturalizing the Word Processor

Growing up in the ’90s, word processors were simply part of the backdrop of my everyday life, and I never thought much about their emergence; they seemed to be the natural ‘upgrade’ to the typewriter as the dominant writing technology. What I appreciated about the interview, “This Strange Process of Typing on a Glowing Glass Screen,” was Matthew Kirschenbaum’s de-naturalization of a process that I had previously mistaken as natural: the transition from typewriting to word processing.

As Kirschenbaum stated, word processing was not inevitable. He demonstrated this by highlighting examples of how “messy” and non-linear the transition to word processing really was, beginning with significant differences in design. While the typewriter confined writers to a rigidly linear process due to its engineering (one did not even have the freedom to delete text and could thus only proceed in a forward fashion), word processors offered freedom: the ability to delete text and to start from virtually anywhere, since the cursor could move anywhere on the page, so one did not have to proceed linearly from beginning to end. In this way, Kirschenbaum suggested that word processing was actually more reminiscent of the pen than the typewriter in its freedom and its provision of access to the entire document space.

Kirschenbaum also outlined challenges that people faced when adapting to this new technology. For instance, buyers had a difficult time choosing the best word processing program, due to the sheer variety that emerged and their incompatible features. Adding to this was befuddlement about how to actually use the new technology: users needed to be instructed in the once complicated process of deleting text and to be reassured that the text still existed even when it rolled off the edge of the screen and out of sight. Such considerations, which seem like common sense to us nowadays, were once very foreign—a fact that I found amusing and interesting at the same time.

Further adding to Kirschenbaum’s argument about the “messiness” of this transition were the conflicting attitudes people had about word processors. This was surprising to me; I would have thought that the freedom of revision and composition that word processors offered would send people jumping for joy immediately—which some did, in fact, do—but others approached the change with fear, apprehension, and resistance. For them, the coding behind word processing was very strange. People even feared that the ‘finished look’ and professional appearance that word processors gave to written work would deceive writers into thinking their work was more polished than it really was. Even more fascinating was how this tone of apprehension shone through in cultural works, like fiction about “paranormal word processors” and the like.

Yet other cultural works served to do the opposite: rather than instilling more fear and apprehension towards the new technology, some cultural images—in the form of illustrations, advertisements, photographs, etc.—served to normalize and assimilate word processors into everyday life. For instance, portraying writers with word processors rather than typewriters helped naturalize the use of word processors in the creation of literature.

What I found particularly intriguing was the tendency for the producers of cultural imagery to feel the need to draw connections between word processors and their predecessor technologies (i.e. typewriters and pens) in order to make them ‘safer’ to consume. It was almost as if by putting an image of a pen directly beside an image of a word processor—the former having already been accepted by mainstream society—they hoped to transfer the ‘acceptance’ people had for the older technology onto the newer one (this would probably make for a very interesting semiotic analysis!).

Finally, I was drawn to Kirschenbaum’s assertion that word processing is the Internet’s infrastructure. Not only is text used primarily for navigating the Web, in the form of online searches and the like, but the coding behind Internet functions is also composed largely of text. Kirschenbaum’s interviewer, Manuel Portela, went a step further by suggesting that word processing has enabled a new form of “behavioral and social control,” handed to corporations in the form of Big Data: online searches are now analyzed computationally by algorithms, giving rise to recommendation systems, customized advertisements, and other forms of surveillance.

At the time of their invention, writing technologies like the pen, typewriter, and word processor emerged under the assumption that writers would profit from their own writing in the form of money, recognition, or the thrill of having their ideas spread widely beyond their lifetime. Yet who would have predicted that word processing, in the form of text used for online activities, would enable corporations to collect private information that writers (i.e. Internet users) never intended to be identified—such as their demographics, preferences, and online behaviors—and to profit by selling it without their direct consent?

Perhaps, as Kirschenbaum eloquently stated, writers had a small reason to be “simultaneously captivated and terrified by the prospect of consigning their prose to the mutely glowing glass screen, wondering what would happen once the pixels went out.”

Nicholas Lisicin-Wilson

PUB 401

13 September 2016

The Print and Digital Book Was There

Piper’s prologue to Book Was There takes a broad look at reading past, present, and future through the lens of his own experiences. He briefly examines the relationship between books and modern technology and how each relates to the experience of reading, noting that “reading is beginning to change” (xii). The altered physicality and interactive features of electronic reading contribute “to a different relationship to reading, and thus thinking” (Piper x). The tools that we use shape the way we think and approach an action, so if technological advancements are really changing the way that we think about reading, is it for better or for worse?

Piper specifies the “roamable, zoomable, or clickable surfaces” (x) of digital interfaces—these unique features can provide tools such as instant access to dictionaries or encyclopedias to explain a word or phrase without the need to pause reading and search somewhere else. In this manner, digital technology certainly heightens thinking, as it lessens the separation between the source text and external references. However, it is these same clickable screens that can also separate the reader from the text. While sometimes useful, scrolling and zooming around a text can serve as distractions, flipping between different pages can be cumbersome, and typically smaller screens spread related pieces of information farther from each other. The new tools available in digital formats often serve only to replace basic features of print books that were lost in translation, like the ability to highlight a passage or find a particular page. Where digital takes the clear advantage is in solving tedious tasks such as finding a specific phrase or defining a word. But these particular improvements do not actually change the way that we think; rather, they only speed up the same tasks.

Though the evidence is anecdotal, an overwhelming majority of readers, both casual and committed, will readily say that they prefer to read on paper rather than on a screen. Piper mentions the physical element of reading (x) as an undeniable factor. Staring at a lighted screen strains the eyes, affects sleep, and is often accompanied by a stiff posture if reading is done on a computer. The words on a screen intuitively feel less real than those on a page—something about ink on paper makes meaning clearer, mistakes more apparent, and reading more enjoyable. Print reading is also free of the miasma of distractions that plague e-reading, making it a more committed and immersive experience. It is easy to lose oneself in a book when the story is the sole focus, but on a screen, with notifications and sounds continually appearing and interactive elements pulling the reader away, deep involvement with the text is harder to maintain.

Through the prologue, Piper frequently refers to his own childhood growing up with books; while he uses these stories as a personalized introduction to the text, he is also addressing the role of nostalgia in print reading. The vast majority of the adult population grew up reading print books or having print books read to them—we can forget just how recently these technological advancements that permit e-reading have appeared. For many, reading a book is a form of childlike escapism not only for the stories and characters, but for the return to the familiar feeling of curling up with a book. E-reading simply hasn’t had the time (or more importantly, the opportunity in early childhood) to form the same emotional impression. It remains to be seen whether the next generation will view reading from a screen as fondly as we do ink. E-reading has objective benefits in convenience and portability—if the emotional component of print is removed, what purpose remains for books?

Piper succinctly introduces the core themes of Book Was There in his prologue, raising the questions that the rest of the book sets out to answer. He indirectly and implicitly asks: which is superior, the print book or electronic reading? Unlike most industries, which have readily embraced technological innovation, the publishing industry is temporally torn. Readers on one side want the convenience that comes with digital readers and all their accessories, while others cling to the print book for reasons of comfort, whether physical or emotional. One thing is certain: it is impossible to dismiss the concerns of the lovers of ink, as they still hold the majority of the publishing market (Perrin) and their concerns are widely shared. The future of publishing is in their hands, and their decision to embrace digital or stay the course will shape an industry.

Works Cited

Perrin, Andrew. “Book Reading 2016.” Pew Research Center, 1 September 2016, http://www.pewinternet.org/2016/09/01/book-reading-2016. Accessed 12 September 2016.

Piper, Andrew. “Prologue.” Book Was There: Reading in Electronic Times. University of Chicago Press, 2013, vii-xiii.

Introduction to “Planned Obsolescence” by Kathleen Fitzpatrick

In the introduction to her book Planned Obsolescence: Publishing, Technology, and the Future of the Academy, Fitzpatrick (2009) claims that academic publishing is in danger of becoming obsolete if it continues in its current state, not because of outdated technology per se, but mainly because of outdated social and institutional structures and scholarship practices. Therefore, in order for academic publishing to thrive, there must be “social, intellectual, and institutional” (Fitzpatrick, 2009, p. 9) changes, not just technological ones.

To support her claim that the current state of academic publishing is unsustainable, Fitzpatrick briefly recounts how academic publishing reached its current state of near obsolescence, and what that state entails. This was all new to me, since I hadn’t really thought about the world of academic publishing before, and I was still in elementary school when the impetus for the current decline occurred. In short, the dot-com collapse of 2000 led to a decrease in funding for universities, including their presses and libraries, which led to libraries buying far fewer titles and presses publishing fewer of them out of sales concerns (Fitzpatrick, 2009). As Fitzpatrick (2009) observes, in academic publishing, “marketing concerns have come at times, and of necessity, to outweigh scholarly merit in making publication decisions” (p. 6). That this situation exists in academic publishing is baffling to me. How can the marketability of one academic book over another be assessed? By the trendiness of its central argument? By the renown of the author, or whether they have been published before? I agree with Fitzpatrick and her colleague Matt Kirschenbaum, whom she quotes: “What ought to count is peer review and scholarly merit” (p. 6). If scholarly merit is no longer paramount in academic publishing, then I have to believe Fitzpatrick’s argument that academic publishing needs to change.

Through this reading, I learned a little about the culture surrounding career advancement for scholars in universities. It seems that publishing a book goes a long way in securing tenure or other promotion, while publishing articles is less valuable in that area. Reviewing the work of peers, according to Fitzpatrick (2009), does not count for much in the way of credentials, even though it “requires an astonishing amount of labour” (p. 9). This strikes me as unfortunate, since peer review is an essential part of the scholarship process: without it, the production of knowledge would be less rigorous, and a scholar’s work would not be taken as seriously. Yet, individual achievement, especially in the form of a book, is still prioritized, and the contribution of peers is relegated to a page or two of acknowledgments. Of course, the focus on individual achievement and originality is dominant outside of the university as well, so it is not surprising that peer review is overlooked.

To give peer review more weight, Fitzpatrick (2009) suggests that academic publishing become more community-oriented, privileging collaboration and the process of producing texts, as well as “bringing together and highlighting and remixing significant ideas in existing texts rather than remaining solely focused on the production of more ostensibly original text” (p. 9). That sounds similar to the kinds of activity I often see online, in places like YouTube. Though videos get posted to only one channel, many people can be involved in the production of a video and receive credit. Perhaps academic publishing could shift its focus to groups or publications, not individual authors, and everyone involved could receive the benefits. This would require a major shift in what universities value and how scholars think and work, which would be difficult, given the “fundamentally conservative nature of academic institutions and … the similar conservatism of the academics that comprise them” (Fitzpatrick, 2009, p. 8). In order for scholarship to change, scholars must change, but, as Fitzpatrick (2009) notes, they are not likely to do so if it is risky career-wise, or if the current system works for them. In response to this, I suggest that young scholars—those who don’t have as much at stake, who may find the current system lacking, and who are already familiar with new technologies online—can lead the way towards community-based scholarly work. This course, PUB 401, seems to be the perfect place to explore the possibility.

Though the introduction focuses on academic books, I believe Fitzpatrick’s main argument can be extended to books in general. Fitzpatrick (2009) argues that academic publishing faces obsolescence largely due to factors other than technological change. In the same way, the physical book has yet to become obsolete, even among the e-readers and other mobile devices that we have today, partly because we still find value in it beyond its technological value. Therefore, advances in technology by themselves are not sufficient to save or end the book. Instead, we should consider the cultural practices, systems, and values that surround it.

 

References

Fitzpatrick, Kathleen. 2009. Introduction. In Planned Obsolescence: Publishing, Technology, and the Future of the Academy. Media Commons Press. Retrieved from http://mcpress.media-commons.org/plannedobsolescence/introduction/
