Verification is the Key to Successful Crowdsourcing

Despite the immense increase in the accessibility of digital content creation, publishing in the digital sphere is not without its limitations, and content creators are not always capable of meeting the material and financial demands of their endeavours on their own. These challenges present themselves in manifold ways, most often stemming from time constraints, individual skill sets, and the monetary means necessary to fund one’s ventures.

Crowdsourcing was thus born as a way of tapping into the external talents, creative insights, and knowledge base of broader online communities to further one’s projects and ensure audiences receive what they want most. This essay investigates how crowdsourcing has been adopted by journalists and by those working toward the digitization of print books. By comparing how crowdsourcing is being used in these two endeavours, I hope to inspire discourse on how crowdsourced journalism may be more effectively implemented in the future.

 

Crowdsourcing as Commons-based Peer Production

The present decade has witnessed the rise of volunteer captioning, translation, and citizen journalism, proving crowdsourcing to be a consistently employed strategy for content creation. As Yochai Benkler and Helen Nissenbaum discuss at length in their article “Commons-based Peer Production and Virtue”, crowdsourcing is akin to what they define as commons-based peer production:

 

“Facilitated by the technical infrastructure of the Internet, the hallmark of this socio-technical system is collaboration among large groups of individuals, sometimes in the order of tens or even hundreds of thousands, who cooperate effectively to provide information, knowledge or cultural goods without relying on either market pricing or managerial hierarchies to coordinate their common enterprise.”

 

Indeed, the above examples (captioning, translating, and citizen journalism) coincide with this definition: the efforts are undertaken collaboratively, with the aim of enhancing the spread of knowledge and culture, and are contributed freely, without the expectation of financial compensation. Drawing on Wikipedia as an early example, Benkler and Nissenbaum illustrate how peer production begins with “a statement of community intent” and achieves its ends via “a technical architecture that allows anyone to contribute, edit and review the history of any document easily.”

 

Digitizing Books, One Word at a Time

First introduced as Google Print at the 2004 Frankfurt Book Fair, Google Books has now scanned over 25 million book titles using Optical Character Recognition (OCR) technology. OCR works by creating electronic conversions of images of typed, handwritten, or printed text, which Google Books then stores in its digital database. Since its inception, Google Books has slowed its output for two primary reasons: copyright violations, and errors in scanning relating to the OCR process. Such errors include pages that are unreadable, upside down, crumpled, or blurry, as well as fingers obscuring text.
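To make the scanning step concrete, here is a minimal sketch of the general OCR technique using the open-source Tesseract engine via the pytesseract Python package. It is an illustration under assumed tooling, not Google Books’ actual pipeline, and the file name is hypothetical.

```python
# A minimal OCR sketch using the open-source Tesseract engine via pytesseract.
# This illustrates the general image-to-text step, not Google Books' pipeline.
from PIL import Image
import pytesseract

def digitize_page(image_path: str) -> str:
    """Convert a scanned page image into plain text."""
    page = Image.open(image_path)
    # image_to_string runs Tesseract's recognition over the whole image;
    # faded ink, blur, or skew shows up as misrecognized or missing words.
    return pytesseract.image_to_string(page)

if __name__ == "__main__":
    print(digitize_page("scanned_page.png"))  # hypothetical file name
```

Faded ink, a skewed page, or a stray finger in the frame are exactly the conditions under which a call like this returns garbled or missing words.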

To begin remedying these scanning errors, Google acquired reCAPTCHA in 2009 as a means of correcting the unreadable pages and blurry scans that trip up the OCR process. reCAPTCHA is an evolution of CAPTCHA (Completely Automated Public Turing Test to Tell Computers and Humans Apart) in which the errors of the OCR scanning process are corrected by the users of ordinary websites. We have all filled out web-based forms containing CAPTCHAs at some point, and as their name implies, their purpose is to ensure that the user filling out the form is a human and not some sort of computer program.

[Image: an example reCAPTCHA widget, from Wufoo Team’s Flickr]

The co-founder of reCAPTCHA, Luis von Ahn of Carnegie Mellon, explains reCAPTCHA’s inception in his TED Talk “Massive-scale online collaboration”:

 

“Approximately 200 million CAPTCHAs are typed every day by people around the world…and each time you type a CAPTCHA, essentially you waste 10 seconds of your time. And if you multiply that by 200 million, you get that humanity as a whole is wasting about 500,000 hours each day…”

 

Von Ahn wished to seize these accumulated hours spent typing CAPTCHAs and put them to a secondary use, ultimately benefiting the global collective knowledge base. His innovation means that for every CAPTCHA an individual enters on a web form, books are being digitized one word at a time. He goes on to illustrate how reCAPTCHA draws on commons-based peer production to recover text that OCR has difficulty deciphering accurately:

“OCR is not perfect, especially for older books where the ink has faded and the pages have turned yellow… Things that were written more than 50 years ago, the computer cannot recognize about 30 percent of the words. So what we’re doing now is taking all of the words that the computer cannot recognize and getting people to read them for us while they’re typing a CAPTCHA on the internet.”

If you’ve noticed that CAPTCHAs now contain two words instead of one, it isn’t to speed up digitizing efforts. Rather, a two-step process is being followed. The first word has already been verified as correct, and entering it correctly serves to confirm that the user is human. The second word still requires verification, and is shown in CAPTCHAs to roughly ten other people as a means of ensuring the correct digitization of a text. As of 2011, 750,000,000 people (about 10% of the world’s population) had helped to digitize at least one word of a book because of reCAPTCHA.
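The two-word scheme can be sketched as a simple consensus check. The snippet below is a toy illustration of the logic just described; the names, data structures, and exact threshold are assumptions made for the example, not reCAPTCHA’s actual implementation.

```python
# A toy sketch of the two-word verification logic described above.
from collections import Counter

KNOWN_ANSWERS = {"challenge_17": "overlooks"}   # word already verified as correct
PENDING_VOTES: dict[str, Counter] = {}          # collected readings of each unknown word
AGREEMENT_THRESHOLD = 10                        # roughly how many people see an unknown word

def submit(challenge_id: str, known_attempt: str, unknown_attempt: str) -> bool:
    """Return True if the user passed the human check, and record their reading."""
    # Step 1: the already-verified word proves the user is human.
    if known_attempt.lower() != KNOWN_ANSWERS[challenge_id]:
        return False
    # Step 2: the unverified word accumulates independent readings from many users.
    votes = PENDING_VOTES.setdefault(challenge_id, Counter())
    votes[unknown_attempt.lower()] += 1
    word, count = votes.most_common(1)[0]
    if count >= AGREEMENT_THRESHOLD:
        print(f"'{word}' accepted as the digitized reading for {challenge_id}")
    return True
```

The point of the threshold is the one von Ahn describes: no single reading is trusted until enough independent humans agree on it.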

 

Journalism and Crowdsourcing: The Truth Is Out There

Traditionally, journalists have had to rely either on sources on the ground or on contacts acquired through personal and professional networks. It is undoubtedly beneficial to have a pool of experts at one’s disposal to provide statistical context, ideological interpretation, legal knowledge, and so on. Likewise, it is enviable to have dedicated individuals who interview scholars, document raw footage, and correspond with locals as events unfold in real time. However, not all events are created equal – “home-grown” stories vs. international reporting, for example – and not all events are known beforehand (natural disasters, for instance), so on-the-ground coverage is not always possible. Additionally, budgets do not always permit live reporting, especially for smaller publications already stretched thin paying in-house staff, freelance writers, and photographers.

The consequences of these monetary and time constraints most often manifest in the type of stories chosen for coverage, and in the level of depth afforded to their investigation. As Madelaine Drohan explains in her report Does serious journalism have a future in Canada?:

 

“When time is at a premium, other parts of the job inevitably fall by the wayside, like the research required for accuracy, context and balance. Journalists and their editors are tempted to avoid harder, longer projects that require both money and time in favour of quick and easy hits…”

 

Drohan also states in her report that time-crunched journalists are “prone to circulating misinformation” and “more inclined to put opinion over fact.” New solutions such as crowdsourced journalism thus serve to combat these stresses, ensuring timely coverage and more accurate detail.

Crowdsourcing can be observed in multiple dimensions, from interviewing to corroborating details, and from video footage to audio recordings. Certainly, Benkler and Nissenbaum’s discourse on commons-based peer production applies to these activities, which align with the two core characteristics of peer production itself: decentralization, and the use of social cues and motivations (rather than money and commands) to drive and direct the actions of participating agents. In the most direct sense, these efforts are inspired by a “call to action” from the news publications we read regularly, inviting us to share our photos, videos, and eyewitness accounts, and to correct any errors or typos noted in the articles posted.

[Image: a call for reader contributions, as seen on the BBC at the end of an article regarding wildfires in Israel]

[Image: the end of an article on the Vancouver Sun’s website, inviting readers to submit comments regarding typos and missing information]

Returning to Drohan’s report on journalism, it becomes apparent why news outlets are so dependent on peer production to source details and footage and to amend the content of articles – the financial limitations and time constraints plaguing the 24-hour news cycle prove challenging, even for large-scale outlets like the BBC. Realizing these demands, news outlets are honest about their inability to rapidly turn around factually correct, investigative pieces, inviting readers to wear the badge of citizen journalist in order to fill in the missing pieces and to provide refutation whenever necessary.

Another side of commons-based peer production in journalism concerns news outlets and government intervention. In his TED Talk “Citizen Journalism”, journalist Paul Lewis powerfully illustrates how journalism benefits from crowdsourcing to expose truths covered up by government bodies. His talk focuses on two stories he wished to investigate further, involving the controversial deaths of Ian Tomlinson and Jimmy Mubenga. In both instances, authorities released details of the deaths in a questionable, misleading fashion. As he explains, his decision to put out a call to action on Twitter stemmed from the following:

 

“For journalists, it means accepting that you can’t know everything, and allowing other people, through technology, to be your eyes and your ears… And for other members of the public, it means not just being the passive consumers of news, but also co-producing news… This can be a very empowering process. It can enable ordinary people to hold powerful organizations to account.”

 

Upon receiving tweets, emails, and raw footage from members of the public surrounding both of the stories above, Lewis was able to determine the truth behind Tomlinson’s death – he was knocked to the ground by police with a baton to the back of his leg – as well as Mubenga’s death – he was held down by three airplane security personnel until he lost consciousness.

While the truth behind these two cases came to light thanks to commons-based peer production, it is crucial to note that discretion is necessary when relying on crowdsourced information, because information gleaned via social media messaging and email needs to be vetted for bias, falsehood, and credibility to the same extent as in traditional journalism. As Lewis asserts: “Verification is absolutely essential.” Similarly, Anahi Ayala Iacucci of the Standby Task Force, a non-profit dedicated to providing a humanitarian link between the digital world and disaster response, explains the processes of judgment and filtering necessary to make sense of the deluge of information shared on social media: “Crowd sourced information is a lot of noise… not always comprehensible, not always relevant, not always precise or accurate, and that’s still something journalists need to do [curate and verify].”

Because individuals exist who aim to spread false information and divert attention elsewhere – as well as to outright confuse and deceive – I believe it is necessary to reconsider the means through which discretion is exercised and information is corroborated. As Benkler and Nissenbaum explain, commons-based peer production must achieve a system of checks and balances for a project’s or task’s goals to be met:

“It enforces the behavior it requires primarily through appeal to the common enterprise in which the participants are engaged, coupled with a thoroughly transparent platform that faithfully records and renders all individual interventions in the common project and facilitates discourse among participants about how their contributions do, or do not, contribute to this common enterprise.”

When crowdsourcing in journalism fails, it is because of the very means through which information is sourced. Social media may be transparent in the sense that it is a public platform, but it lacks transparency in terms of traceability and faithful recording; individuals do, after all, delete posts or accounts and amend the details they have shared, and once a post has been shared, read, and re-shared, the damage is already done. Moreover, not all participants possess overlapping motivations surrounding journalistic efforts. As noted above, many people are out to confuse, mislead, or outright lie about events because of wide-ranging personal interests.

 

Reading Crowdsourced Journalism and reCAPTCHA Together

The success of Luis von Ahn’s reCAPTCHA efforts is contingent on the meticulous method of verification he imposes; showing CAPTCHAs to ten different individuals to ensure their correct digitization demonstrates the level of checks and balances necessary to render commons-based peer production effective. Returning again to Benkler and Nissenbaum, one can observe this systematic order in their example of Wikipedia: “The ability of participants to identify each other’s actions and counteract them—that is, edit out “bad” or “faithless” definitions—seems to have succeeded in keeping this community from devolving into inefficacy or worse.” In the case of reCAPTCHA, this identification of actions can be understood as the corresponding text typed into a web form, and the editing can be perceived as the check performed when verifying which CAPTCHA responses yield overlapping interpretations.

Unfortunately, peer-produced journalism in its present state does not involve the same level of scrupulous verification. With news stories being churned out in incomplete variations to keep pace with the demands of the 24-hour news cycle, and news being heavily aggregated by sites like BuzzFeed and the Huffington Post, proper checks of facts and footage are not consistently conducted prior to publication. Moreover, people are more likely to share a story than to read it, and online reading completion rates aren’t reassuring either, underscoring the severity of unverified news being circulated en masse.

Thus, there is a great need for peer-produced journalism to implement more thorough systems of verification, and to shift its focus from speed of delivery to accuracy of reporting. Just as the Standby Task Force works to “filter the noise” of crowdsourced coverage to produce accurate mapping during crisis response, online news outlets, too, should consider partnering with similar external organizations to better corroborate details and “filter out” incorrect and misleading information.


Works Cited

Benkler, Yochai and Nissenbaum, Helen, “Commons-based Peer Production and Virtue”, https://www.nyu.edu/projects/nissenbaum/papers/jopp_235.pdf

Heyman, Stephen, “Google Books: A Complex and Controversial Experiment”, http://www.nytimes.com/2015/10/29/arts/international/google-books-a-complex-and-controversial-experiment.html?_r=1

Weir, David, “Google Acquisition Will Help Correct Errors in Scanned Works”, http://www.cbsnews.com/news/google-acquisition-will-help-correct-errors-in-scanned-works/

Wufoo Team, https://c2.staticflickr.com/4/3598/3683064794_95824f2135.jpg

Massive-scale online collaboration, https://www.youtube.com/watch?v=-Ht4qiDRZE8&t=609s

Drohan, Madelaine, “Does serious journalism have a future in Canada?”, http://www.ppforum.ca/sites/default/files/PM%20Fellow_March_11_EN_1.pdf

Citizen journalism, https://www.youtube.com/watch?v=9APO9_yNbcg

Death of Ian Tomlinson, https://en.wikipedia.org/wiki/Death_of_Ian_Tomlinson

Unlawful killing of Jimmy Mubenga, https://en.wikipedia.org/wiki/Controversies_surrounding_G4S#Unlawful_killing_of_Jimmy_Mubenga

The importance of crowdsourced mapping in journalism, https://www.youtube.com/watch?v=uSrpZ8UXyzw

Standby Task Force, http://www.standbytaskforce.org/about-us/

Dewey, Caitlin, “6 in 10 of you will share this link without reading it, a new, depressing study says”, https://www.washingtonpost.com/news/the-intersect/wp/2016/06/16/six-in-10-of-you-will-share-this-link-without-reading-it-according-to-a-new-and-depressing-study/

Manjoo, Farhad, “You Won’t Finish This Article”, http://www.slate.com/articles/technology/technology/2013/06/how_people_read_online_why_you_won_t_finish_this_article.single.html

Babeling Tongues: Literary Translation in the Technological Era

According to the Canada Council for the Arts, “It all starts with a good book. Then a translator, writer, or publisher is inspired to see it translated.” Indeed, in the present moment, translation is becoming ever more important to both the globalizing world generally and the publishing industry specifically. Despite the increased role translations must undoubtedly take in the world market today, Three Percent, a resource for international literature at the University of Rochester, reports that “Unfortunately, only about 3% of all books published in the United States are works in translation… And that 3% figure includes all books in translation—in terms of literary fiction and poetry, the number is actually closer to 0.7%.” This paper justifies the need for increasing the number of translations available in the market, and explores the problems and possibilities of doing so.

This essay is divided into three sections. It begins by examining the role of language in literature, using the political importance of a wider canon and the mass appeal of World Literature to establish the importance of works in translation. It then explores the different processes by which professional translators and machine translation software approach the translation of texts; in this section, I demonstrate how machine translations differ from human translations in their conception and execution. The final section discusses the limits and possibilities of both types of translation. In particular, it suggests that machine-based translation is, at present, largely capable of translating only literally, while literary texts require translations that go beyond the simply literal, relying as they do on cadence, metaphor, connotation, and a detailed knowledge of context. Finally, I conclude by showing how work in this regard nonetheless remains open, as different groups are attempting to perfect machines equipped with Artificial Intelligence that can handle the more complex types of decision-making required for such translation.

On Translation and World Literature

In order to realize the importance of translation today, we must first recognize that the 21st-century world has become incredibly globalized. For the first time in history, individuals from many different cultures interact with each other on a daily basis, and they take an interest in each other’s literatures. That said, this communication predominantly takes place in English, which, because of the British Empire, has historically been a very important language on the global scale. Thus, though individuals may speak several languages, the dominant language of communication across cultures is English; indeed, one would be hard-pressed to find someone who would dispute that English is the global language of the present world.

The position of English deserves more thought, particularly as, according to the Kenyan scholar Ngugi wa Thiong’o, languages are not politically neutral. They are not inert objects used simply for communication, but rather, every language is “both a means of communication and a carrier of culture [emphasis added]” (Ngugi 13). By calling language a carrier of culture, Ngugi informs us that each language, in its very form and vocabulary, carries the experience of a certain people. Effectively, languages carry “the entire body of values by which we come to perceive ourselves and our place in the world” (Ngugi 390).

Though insightful in itself, Ngugi’s provocation proves particularly pertinent when we consider the historic progenitors of the English language: usually first-world native speakers, who are almost always white. And while, in the past, this fact may not have been an issue, as the language circulated only within this same sphere of people, it is problematic in the modern world, where the literature published has failed to match the diversity of its readership. Indeed, as The Guardian’s recent list of the 100 Best Novels attests, the majority of what is considered ‘great literature in English’ is still thought to come from the people mentioned above. Despite surveying novels across five centuries, the Guardian list acknowledges fewer than 10 novels by authors of colour. Political repercussions notwithstanding, this imbalance reflects present literary trends even outside of novels published originally in English. At present, we find ourselves in a world whose supposedly ‘global’ literature does not represent the diversity of the people who read it. Given the role of editors and publishers in shaping the literary landscape, such individuals and groups must strive to ensure that the available literature reflects the experiences of those who are to read it. An integral step in this process is to make more English translations available for the current, global readership of the language.

Apart from the responsibility they hold, publishers need to increase the translations they publish because, quite frankly, it makes economic sense. Literature in translation is a growing market, particularly in diasporic communities whose second- and third-generation readers cannot read their original languages, yet still desire to reconnect with their roots. As more and more people have become aware of the limited perspectives inherent in ‘purely English’ literature, they have also recognized the importance of books in translation, creating a huge demand for literature from these languages that reflects their respective cultures. Such books are also of interest to native English readers, who are curious to know what people from other countries are writing about. For this reason, many authors have gained worldwide appeal despite never having written in English: Gabriel Garcia Marquez, Milan Kundera, Yukio Mishima, and Jorge Luis Borges, to name just a few.

This immense demand for translations is visible in the fact that such books generally claim a much greater share of the market than the ratio in which they are produced. According to Nielsen,

“‘On average, translated fiction books sell better than books originally written in English, particularly in literary fiction.’ Looking specifically at translated literary fiction, [we can see that] sales rose from 1m copies in 2001 to 1.5m in 2015, with translated literary fiction accounting for just 3.5% of literary fiction titles published, but 7% of the volume of sales in 2015.” (from The Guardian)

Given the above-mentioned statistics, it is obvious that the translation market is an incredibly fruitful avenue for publishers to explore. Moreover, given how small the current production share of translations is, there is still a great deal of untapped potential in the market for translated literature. Rebecca Carter corroborates this point when she notes that “Amazon had identified an extremely strong and under-served community: readers with an interest in books from other countries.”

In light of these findings, it is obvious that we need to increase the number of translations available, and to see what avenues exist for rendering high-quality translations. As we seek to do so, I argue that it becomes prudent to look into newer methods of translation, particularly machine translation (MT), which could prove more efficient and economical than traditional means. As such, it is necessary to see how these two processes of translation, by humans and by machines, work, and what the problems and possibilities of each are.

Translating: By Machine and By Hand

At present, the dominant translation methodology is that followed by professional translators. Given that translation is a niche profession, we must examine the motivations of professional translators in order to understand the techniques they use in translating works. Deborah Smith, translator of The Vegetarian, winner of the 2016 Man Booker International Prize, explains that “[p]art of the reason I became a translator in the first place was because Anglophone or Eurocentric writing often felt quite parochial” (from The Guardian). Smith’s view is very much in tune with the current ascendancy of World Literature and the movement towards a more global canon. Andrew Wilson, author of Translators on Translating and a translator himself, is “struck by the enjoyment that so many translators seem to get from their work” (Wilson 23). In fact, the various accounts from translators in his book indicate that passion is the driving force behind the profession. Per Dohler is immensely proud of himself and his fellow translators because “[w]e come from an incredible wealth of backgrounds and bring this diversity to the incredible wealth of worlds that we translate from and into” (Wilson 29). While many translators, like Dohler, come with a background in literature and linguistics, others, like Smith, are self-taught.

Building on the motivations expressed by these translators, Andrew Fenner describes a general approach to translation. First, the translator reads the whole work thoroughly, in order to get a sense of the concepts in the text, the tone of the author, the style of the document, and the intended audience. The translator then translates what they can, preparing a first draft and leaving unknown words as is. After doing so, the translator sets the work aside for a day and allows their subconscious to mull over ambiguous words or phrases. They then return to the work sometime later to make checks, correct any errors, and refine the translation. Lastly, the translator repeats this last step a few more times (Wilson 52-3).

The salient feature of Fenner’s process is that the human translator takes the work as a whole into consideration. They do not imagine the text simply as an object built from the connection of the literal meanings of words. This idea of the work as a whole being reflected in each individual segment will become particularly important later, when we explore machine-based translation. For now, however, we need only note that this approach ties into Peter Newmark’s diagram of the dynamics of translation:

[Figure: Peter Newmark’s diagram “The Dynamics of Translation”. SL = source language; TL = target language; item 9 reads “The truth (the facts of the matter)”.]

As the diagram shows, the translator must keep in mind the norms, culture, and setting of both the source and target languages, as well as the literal meaning of the text – a challenging task, to say the least, and one that involves a complex system of processes and judgements.

Machine translations, in contrast to human translations, use a different series of processes, which generally do not take these factors into account in the same way. For the purposes of this essay, we will look at two machine-assisted translation platforms: Duolingo and Google Translate. By its own admission, Duolingo uses a crowdsourcing model to “translate the Internet.” Founder Luis von Ahn strove to build a “fair business model” for language education, one where users pay with time, not money, and create value along the way. Duolingo allows users to learn a language on the app while simultaneously contributing to the translation of the internet.

Von Ahn introduced the project and the process of crowdsourcing these translations in his talk “Duolingo – The Next Chapter in Human Computation” (see Works Cited).

He claims that Duolingo “combines translations of multiple beginners to get the quality of professional translators” (from the video). In the talk, von Ahn demonstrates the quality of translations derived from the app with a slide showing translations from German to English by a professional translator, who was paid 20 cents a word (row 2), and by multiple beginners (row 3).

[Image: Comparing Translations: Professionals versus Duolingo Beginners]

As is evident, the two translations in the bottom rows are very similar to each other. Using the ‘power of the crowd,’ von Ahn estimates that it would take one week to translate Wikipedia from English to Spanish, a project that is “$50 million worth of value” (from the video). From this estimate alone, we can see that crowd-based translation offers the possibility of saving a great deal on the cost of translation – a prospect that, in itself, may allow many more translations to be produced with the same amount of financial capital.
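As a rough sketch of what “combining translations of multiple beginners” might look like, the snippet below merges several imperfect renderings of the same sentence by majority vote at each word position. It is an invented illustration; Duolingo’s real aggregation is undoubtedly more sophisticated.

```python
# Combine several imperfect translations of one sentence by majority vote.
# An illustrative toy, not Duolingo's actual aggregation method.
from collections import Counter

def combine_translations(candidates: list[str]) -> str:
    """Pick the wording most contributors agreed on, position by position."""
    tokenized = [candidate.split() for candidate in candidates]
    length = max(len(tokens) for tokens in tokenized)
    combined = []
    for i in range(length):
        words_at_i = [tokens[i] for tokens in tokenized if i < len(tokens)]
        combined.append(Counter(words_at_i).most_common(1)[0][0])
    return " ".join(combined)

beginner_versions = [
    "the storm caused heavy damage in the city",
    "the storm caused heavy damage in the town",
    "the storm caused big damage in the city",
]
print(combine_translations(beginner_versions))
# -> "the storm caused heavy damage in the city"
```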

Apart from Duolingo, one of the more common translation tools is Google Translate. Unlike Duolingo, which relies on the input of many users translating the same sentences, Google Translate works in an entirely computational manner. It performs a two-step translation, using English as an intermediary language, although it undertakes a longer process for certain other languages (Boitet et al.). As Google’s “Inside Google Translate” video (see Works Cited) shows, Google Translate relies on pre-existing patterns in a huge corpus of documents, and uses these patterns to determine appropriate translations.
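The two-step, English-pivot idea can be illustrated with a deliberately tiny example. The dictionaries below are invented stand-ins for the phrase mappings a statistical system would learn from its corpus; this is not Google Translate’s code or API.

```python
# A toy illustration of two-step ("pivot") translation through English.
PT_TO_EN = {"o": "the", "gato": "cat", "dorme": "sleeps"}       # Portuguese -> English
EN_TO_DE = {"the": "die", "cat": "katze", "sleeps": "schläft"}  # English -> German

def pivot_translate(sentence: str) -> str:
    english = [PT_TO_EN.get(word, word) for word in sentence.lower().split()]  # step 1: source -> English
    german = [EN_TO_DE.get(word, word) for word in english]                    # step 2: English -> target
    return " ".join(german)

print(pivot_translate("O gato dorme"))  # -> "die katze schläft"
```

Even in this toy form, the pivot step shows where errors can compound: any ambiguity lost in the hop into English cannot be recovered in the hop out of it.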

While we grant professional translators the benefit of the doubt, in that we do not expect their translations to be ‘perfect,’ it is important to note that we seem to have inflated expectations of work done by machines. Machine translations, with their statistically grounded algorithmic models, are assumed to provide accurate and appropriate translations. As we go forward with this essay, especially as we discuss the limitations and possibilities of these approaches to translation, it is important to realize that while machine-based translation may indeed advance the pace and quality of translations, we cannot assume its output to be perfect, or always reliable.

Translating: Problems and Possibilities

In terms of limitations, the primary issue with machine-based translation at present is that it seems capable only of producing literal translations. In short, this method of translation is most suitable for translating individual words occurring in simple sequences, one after the other. This limitation proves especially debilitating because many texts, particularly literary texts, do much more than simply convey literal meaning. As Philip Sidney explained in his Defence of Poesie, Literature with a capital L means to both “teach and delight.” Literature, in its attempt to delight and entertain, involves an infinitely complex interaction between words, their sound and cadence, their denotation and connotation. It does not simply convey information, but ascends to the level of metaphor, symbolism, and leitmotif, and, in so doing, becomes an object of beauty. To put it simply, when we talk about Literature, it is not just what is said (what we can capture in literal translation) that matters, but also how it is said (which is not easy to reproduce).

Given this supra-literal quality of literary fiction, we must question the applicability of machine translation to such literary forms. Indeed, because machine translations do not seem capable of accounting for this metaphoric dimension of literary language, they may be better suited to types of writing whose goal is the simple transferral of meaning, or communicative writing. Machine translation is thus more applicable to knowledge-oriented genres of writing such as encyclopaedic articles, newspapers, and academic texts, whose main focus is to educate and whose core linguistic operations are literal rather than metaphoric. However, though machines seem less apt for translating the more complex forms of writing, I maintain that there is the possibility of having machines translate such texts with the aid of limited human intervention and Artificial Intelligence.

According to recent coverage of Google’s Neural Machine Translation (GNMT) system in the MIT Technology Review, the quality of machine translations could possibly be made very similar to that of translations performed by a professional. Tom Simonite reveals that, “When people fluent in two languages were asked to compare the work of Google’s new system against that of human translators, they sometimes couldn’t see much difference between them.” The inherent challenge of translating literary works lies in the fact that multiple connotations of the same word are often context dependent, and therefore programming a system that can intelligently select one connotation over another is no easy feat. Will Knight explains the relevant advance in artificial intelligence: “In the 1980s, researchers had come up with a clever idea about how to turn language into the type of problem a neural network can tackle. They showed that words can be represented as mathematical vectors, allowing similarities between related words to be calculated…By using two such networks, it is possible to translate between two languages with excellent accuracy.”
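Knight’s point about word vectors can be shown with a few lines of arithmetic. The three-dimensional vectors below are made up for illustration; real systems learn vectors with hundreds of dimensions from large corpora.

```python
# Words as vectors, with similarity measured numerically (illustrative values).
import math

VECTORS = {
    "king":  [0.80, 0.65, 0.10],
    "queen": [0.78, 0.68, 0.12],
    "apple": [0.10, 0.20, 0.90],
}

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

print(round(cosine_similarity(VECTORS["king"], VECTORS["queen"]), 3))  # close to 1.0: related words
print(round(cosine_similarity(VECTORS["king"], VECTORS["apple"]), 3))  # much lower: unrelated words
```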

The official report from Google’s researchers concludes: “Using human-rated side-by-side comparison as a metric, we show that our GNMT system approaches the accuracy achieved by average bilingual human translators on some of our test sets.” There is no indication that this model is perfect as yet, but it is a fascinating possibility for the future of translation.

Similar to how we read and process language in texts, Google’s software “reads and creates text without bothering with the concept of words” (web). Simonite describes how the software, in a manner similar to humans’ processing of language, “works out its own way to break up text into smaller fragments that often look nonsensical and don’t generally correspond to the phonemes of speech.” Much as professional translators approach the text in chunks they feel are appropriate, the software does the same. For publishing, the benefits of machines performing high-quality translations equivalent to those of professional translators are manifold.
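Before turning to those benefits, here is an illustrative sketch of what “breaking text into smaller fragments” can look like in practice: a greedy longest-match segmentation over a tiny, invented fragment vocabulary, in the spirit of (but much simpler than) the wordpiece approach Simonite describes.

```python
# Greedy longest-match segmentation into subword fragments (toy vocabulary).
VOCAB = {"trans", "lat", "ion", "un", "break", "able"}

def segment(word: str) -> list[str]:
    """Split a word into the longest known fragments, left to right."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):        # try the longest candidate first
            if word[i:j] in VOCAB:
                pieces.append(word[i:j])
                i = j
                break
        else:                                    # no fragment matched: fall back to one character
            pieces.append(word[i])
            i += 1
    return pieces

print(segment("translation"))  # -> ['trans', 'lat', 'ion']
print(segment("unbreakable"))  # -> ['un', 'break', 'able']
```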

Primarily, such translations would mean shorter production times per work and greater accessibility. In the current system, where translations are usually undertaken only when funding or grant money is available, or when there is an assured demand or number of sales in the target market, quality machine translation would ensure that a lack of funds does not hinder the development of a translation project. When professional translators are not readily available for certain languages, machines could step in to do the work. Of course, the financial and physical accessibility of such software to publishers themselves is another matter of consideration. But these are dreams worth considering, and pursuing.

The question remains, however: how can this machine translation model be perfected? Without delving too much into the technicalities of the matter, it is evident that one of the best ways to fine-tune translation models such as these is to provide the system with as much parallel data as possible. According to Franz Josef Och, the former head of Machine Translation at Google, Google Translate has relied on documentation from the Canadian government (in both English and French) and on files from the United Nations database. In a similar manner, we can ask publishers to provide literary texts, either original works or translations, to which they currently hold the copyright. By providing copious amounts of data, and by using processes of machine learning, we may be able to teach computers to translate increasingly well. This, in turn, could lead to very advanced machine translation, capable of rendering even highly metaphoric forms of literature. In so doing, we may arrive at a stage where, in the words of Jo-Anne Elder, the former president of the Literary Translators Association of Canada, “A translated book is not a lesser book.” In pursuit of this goal, our aim must be not simply to give up in the face of the present hurdles confronting machine-based translation, but, like a literary Usain Bolt, to strive to rise above them, and succeed.


Works Cited

“About Three Percent.” Three Percent. University of Rochester. Web. 7 Nov. 2016.

Boitet, Christian, et al. “MT on and for the Web.” (2010):10. Web. 24 Nov. 2016.

Carter, Rebecca. “New Ways of Publishing Translations.” Publishing Perspectives. 05 Jan. 2015. Web. 20 Nov. 2016.

Duolingo – The Next Chapter in Human Computation. YouTube, 25 Apr. 2011. Web. 28 Nov. 2016.

English Essays: Sidney to Macaulay. Vol. XXVII. The Harvard Classics. New York: P.F. Collier & Son, 1909–14; Bartleby.com, 2001. Web. 24 Nov. 2016.

Flood, Alison. “Translated Fiction Sells Better in the UK than English Fiction, Research Finds.” The Guardian. Guardian News and Media, 09 May 2016. Web. 10 Nov. 2016.

Google. Inside Google Translate. YouTube, 09 July 2010. Web. 26 Nov. 2016.

Knight, Will. “AI’s Language Problem.” MIT Technology Review. MIT Technology Review, 09 Aug. 2016. Web. 25 Nov. 2016.

“Literary Translators Association of Canada.” Literary Translators Association of Canada. Web. 28 Nov. 2016.

Medley, Mark. “Found in Translation.” National Post. National Post, 15 Feb. 2013. Web. 28 Nov. 2016.

McCrum, Robert. “The 100 Best Novels Written in English: The Full List.” The 100 Best Novels. Guardian News and Media, 17 Aug. 2015. Web. 26 Nov. 2016.

Och, Franz Josef. “Statistical Machine Translation: Foundations and Recent Advances.” Google Inc. 12 Sept. 2009. Web. 25 Nov. 2016.

Simonite, Tom. “Google’s New Service Translates Languages Almost as Well as Humans Can.” MIT Technology Review. MIT Technology Review, 27 Sept. 2016. Web. 28 Nov. 2016.

“The Butterfly Effect of Translation.” Translation. The Canada Council for the Arts. Web. 13 Nov. 2016.

Thiong’o, Ngugi Wa. Decolonizing the Mind: The Politics of Language in African Literature. London: J. Currey, 1986. Web. 26 Nov. 2016.

Wilson, Andrew. Translators on Translating: Inside the Invisible Art. Vancouver: CCSP, 2009. Print.

Wu, Yonghui, et al. “Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation.” (2016). Web. 25 Nov. 2016.

Objective Journalism in the Online Age: Paramount or Pipe Dream?

The traditional ideals of journalism are under siege, as colourfully illustrated by John Oliver’s entertaining diatribe on modern journalism on Last Week Tonight. In particular, the idea of objectivity – one of the cornerstones of journalistic integrity – is in flux in the online age. This is especially apparent in headlines, where media outlets are becoming laxer about allowing their bias to show. In an ideal world, the public would be presented with the unbiased facts they need to come to their own informed decisions, but this type of coverage is becoming more and more rare.

There is a plethora of considerations in online publishing that go beyond objectivity and good writing, but that in turn influence those two concepts. Headlines in an online age must keep in mind Search Engine Optimization (SEO), click-through rates, and the fact that traditional journalism has to compete with think pieces written by citizen journalists, which can be more appealing to share online and can therefore go viral. If mainstream journalists include buzzwords in their headlines because those words are more likely to be Googled or shared, they may be adding bias to the piece – whether that bias is intended or not.

More than occasionally, social media users share or retweet articles based solely on the headline, without ever having read the article itself. Caitlin Dewey, for the Washington Post, reported on a study by computer scientists at Columbia University and the French National Institute which stated that “59 percent of links shared on social media have never actually been clicked: In other words, most people appear to retweet news without ever reading it.” Given that Facebook’s algorithm favours the posts that are most interacted with, these blind shares help determine what others read on their newsfeeds. Even for those who do click through to the article itself, studies show that they are unlikely to read it in full. In these cases, when consumers are forming opinions based solely on headlines and/or short summaries, any explicit bias in a headline becomes far more important than the editor or journalist who chose it may have originally intended.

When Oxford Dictionaries announced “post-truth” as its 2016 international word of the year, it was unsurprising in a time embroiled in emotions over Brexit and the US presidential election. Post-truth is defined by Oxford Dictionaries as an adjective “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.” Living in a post-truth society, it is perhaps inevitable that mainstream journalism is struggling against a tide of “fake news” that can be at best annoying and mischievous, and at worst propagandistic. The Washington Post reported on Russia’s involvement with the fake news cycle during the US election, saying “[t]he Russian campaign during this election season … worked by harnessing the online world’s fascination with “buzzy” content that is surprising and emotionally potent, and tracks with popular conspiracy theories about how secret forces dictate world events.”

It must be noted, however, that the idea of “buzzy” content is not unique to fake news, with sites like BuzzFeed dominating the online world using clickbait-y headlines that promise “You Won’t Believe” what is contained in their articles (or listicles). So, if these are the headlines that have the average web user clicking through to an article, how can the mainstream news media compete without abandoning the original ethical principles of journalism? Further, should they even be “competing” at all? To take it a step further, when the news being covered hits passion points for the journalists reporting on it, should they be allowed to take a personal stance if they feel it is important? Criticism of Donald Trump’s stances on immigration is one example. Perhaps we need to look back to allow us to move forward.

The Idea of Objectivity and Bias in Mainstream Journalism

Objectivity and bias are oft-debated topics in the world of journalism and ethics, going back much, much further than the 2016 election cycle. Some, like Walter Lippmann, argue that objectivity is paramount to an informed population, while others claim that it is impossible to truly avoid bias and that it is lazy for journalists not to use their investigative skills to present the public with fully formed opinions. In cases of social justice issues, the line between what bias is acceptable and what is not becomes blurred.

So, can journalism ever achieve true objectivity? As Robert McChesney said in his essay “That was Now and This is Then: Walter Lippmann and the Crisis of Journalism” for Will the Last Reporter Please Turn Out the Lights, “institutional and human biases are unavoidable, and the starting point is to be honest about it” (159). In Liberty and the News, Walter Lippmann said that “the really important thing is to try and make opinion increasingly responsible to the facts” (38).

In the essay “A Test of the News,” which Lippmann co-authored, he also referred to the public’s perception of the news as “a widespread and a growing doubt whether there exists such an access to the news about contentious affairs. This doubt ranges from accusations of unconscious bias to downright charges of corruption” (Lippmann and Merz 1). However, the main exception arises when the news that audiences consume aligns closely with their own pre-existing biases. In The News: A User’s Manual, Alain de Botton argues the dangers of personalizing the news, that is, of audiences paying attention only to subjects that are already of interest and in line with their current beliefs. The tendency to seek out news that confirms standing notions and ideologies rather than challenges them becomes a risk in a society that consumes media passively. Yet, when topics that are overwhelmingly seen as negative are covered, racism or homophobia for example, does it become acceptable to allow bias to creep into the coverage of such events? Chris Hedges, in his essay “The Disease of Objectivity,” argues that aiming for objectivity takes the journalist away from empathy and passion, and distracts them from one of the main purposes of reporting: a quest for justice. These are all things society should, in theory, be striving towards.

What Would Walter Lippmann Say?

Robert McChesney, whom I quoted in the previous section, is a scholar and professor concentrating on the history and political economy of communication, with a particular interest in journalism and self-governance. In his essay “That was Now and This is Then: Walter Lippmann and the Crisis of Journalism” he addresses many common criticisms of Walter Lippmann’s popular works, such as claims of his being elitist and “anti-democracy.” Most of the piece, however, focuses on Lippmann’s lesser-known works that deal directly with journalism: “A Test of the News,” an essay co-authored with Charles Merz, and Liberty and the News, a short book.

Lippmann was a Pulitzer Prize-winning journalist, writer, and political commentator who was outspoken in his views regarding journalism’s role in democracy. He is best known for his works Public Opinion and The Phantom Public. However, McChesney argues that the importance of “A Test of the News” and Liberty and the News is magnified by the fact that they were written at what he calls “the climax of the last truly great defining crisis for journalism” (McChesney 153). This lends them a feeling of being intensely timely and “of the moment” for the 1920s. And this is, of course, relevant today because we are now in another defining crisis for journalism.

The main issue of the day was the emerging trend towards organized propaganda, or what we now consider public relations. Lippmann referred to the public’s perception of the news at the time as “a widespread and a growing doubt whether there exists such an access to the news about contentious affairs. This doubt ranges from accusations of unconscious bias to downright charges of corruption” (Lippmann and Merz 1). With the rise of fake news, and with media now relying on native advertising and content marketing to fund online publications, these same doubts are once again being realized.

In “A Test of the News,” Lippmann focuses on the New York Times’ coverage of the Russian Revolution from 1917 to 1920. He was upset with how the news was coloured by “the wishes, distortions and lies of [anti-revolutionary] forces as gospel truths” (McChesney 153). The New York Times was particularly guilty of being misled by its reliance on the government as an official source of information. The frightening implications of such a system led Lippmann to propose that journalism be considered not a private enterprise but a public institution, and therefore to suggest that public money be used to improve its quality.

Lippmann, somewhat surprisingly given his socialist background, offered no class analysis when evaluating the state of the commercial news system. He did not “entertain the idea … that those with property and privilege greatly benefited by an ignorant and ill-informed populace” (McChesney 155). To him, the power of the news was in the hands of the editors, not the publishers. On that particular note, McChesney comments that Lippmann did not take into account how the concerns of those publishers influenced who became the editors, which was fairly clearly shortsighted.

Lippmann particularly respected C.P. Scott, publisher and editor of The Manchester Guardian. After his death, his family placed The Guardian in a nonprofit trust, to “preserve the financial and editorial independence of The Guardian in perpetuity while its subsidiary aims are to champion its principles and to promote freedom of the press in the UK and abroad” (McChesney 176). Today, The Guardian is still widely read and respected around the world.

Despite his support of The Guardian becoming a nonprofit newspaper, Lippmann was not actually calling for all news media to adopt that model. Instead, he was calling for them to change course from the status quo and to embrace professional training. He called for standards of “the highest quality of factually accurate and contextually honest information unpolluted by personal, commercial, or political bias” (Lippmann and Merz 41).

In his work, Lippmann wanted to move society away from remaining “dependent upon untrained accidental witnesses” (Lippmann 46). However, it seems that we are currently moving back towards that dependence with the rise of citizen journalism, which quite often invites personal biases.

Despite common criticisms of elitism, Lippmann was adamant that, for the news media to succeed in changing for the better, the public needed to become more loudly involved: “Change will come only by the drastic competition of those whose interest are not represented in the existing news-organization” (Lippmann 60).

He posed the following as “jobs” for the reporter:

  1. Ignore bias (personal or otherwise) to ensure an accurate understanding of events.
  2. Operate under, and enforce, a professional code of honor.

Under these guidelines, schools of journalism boomed after World War I, and “the notion that the news should be unbiased and objective became commonplace” (McChesney 158).

However, McChesney points out that the current standard of professional journalism in the United States has somehow “veered dramatically from the core values [Lippman] prescribed” (McChesney 158). He cites the coverage of the lead-up to the War on Terror as a prime example of the press’s tendency to take the claims of the government at face value.

Knowing the history and context of Lippmann’s works, we must acknowledge that his vision is not entirely feasible in a world ruled by the commercialism he disregarded. The resources that Lippmann’s theories relied on are no longer in place, and instead we are left with what McChesney calls the shambles of commercial journalism in a significantly monopolistic news media system.

What Should We Be Aiming For?

In the case of news stories related to social justice, where empathy and passion are more likely to be involved, it becomes a question of whether the news has an obligation to report as objectively as possible, or whether reporters can fulfill a personal, moral obligation to express distaste towards subjects such as homophobia and racism. When a topic is overwhelmingly seen as outdated or distasteful, should journalists be allowed to show their bias as long as it does not affect accurate and fair reporting? Potentially, emotional decisions could be made, leading to inaccurate reporting being posted online. In the days of the Internet, tides of public opinion can change quickly. With the rise of citizen journalism and the blogosphere, opinion touted as fact is becoming increasingly common, and the mainstream media (especially in its news reporting) should be held to a higher standard of objectivity.

This is not to say that journalists cannot follow their passions and take up the mantle for a cause, as Chris Hedges recommends. Rather, they must simply keep journalistic integrity in mind while doing so. Perhaps rather than trying to remain wholly objective, they should try to examine more angles than just the standard two disparate ones that journalists look for to prove they are unbiased. While standard news writing does not allow for in-depth analysis, owing to both word counts and time constraints, reporters such as the late David Carr of the New York Times are champions of well-researched, dogged investigative reporting. Acknowledging that a certain amount of bias is unavoidable, and doing their best to align opinion with fact, is integral to journalists keeping the public informed on world issues while staunching the flow of rampant misinformation. For society to progress beyond issues of sexuality and race, which should be outdated and obsolete, it is important to have passionate whistleblowers who have the skills and training necessary to get to the heart of the story.

The crucial lessons in Lippmann’s works remain relevant today, no matter the format journalists are publishing in – online or in print. The relationship between journalism and democracy, and the importance of the public’s role in holding journalists accountable, remain. Therefore the difficult, but not impossible, mission of creating an independent fourth estate is central to ideas of self-government and freedom. Despite journalists’ biases and feelings of moral obligation, the mainstream news media must do their best to maintain unbiased coverage. Presenting the facts of a news event without using language that leads readers to a conclusion, but rather allows them to come to their own, is one of the main purposes of media coverage. Citizen journalism can be extremely biased and one-dimensional, and as such it is increasingly important for the mainstream news media to remain unbiased in their reporting. If the tendency towards bias can be resisted by professional journalists, mainstream media has the potential to infiltrate the Internet with better-researched pieces.


Works Cited

Botton, Alain De. The News: A User’s Manual. New York: Pantheon Books, 2014. Print.

Dewey, Caitlin. “6 in 10 of you will share this link without reading it, a new, depressing study says.” The Washington Post 16 Jun 2016. Web. 25 Nov 2016.

Hedges, Chris. “The Disease of Objectivity.” Will the Last Reporter Please Turn Out the Lights. New York: New Press, 2011. Print.

Journalism: Last Week Tonight with John Oliver. Last Week Tonight with John Oliver, HBO, 7 Aug 2016.

Lippmann, Walter and Charles Merz. A Test of the News: An Examination of the News Reports in the New York times on Aspects of the Russian Revolution of Special Importance to Americans, March 1917 — March 1920. New York: New Republic, 1920. Print.

Lippmann, Walter. Liberty and the News. New York: Harcourt, Brace and Howe, 1920. Print.

Gabielkov, Maksym, Arthi Ramachandran, Augustin Chaintreau, and Arnaud Legout. “Social Clicks: What and Who Gets Read on Twitter?” ACM SIGMETRICS / IFIP Performance 2016, Jun 2016, Antibes Juan-les-Pins, France. 25 November 2016.

Manjoo, Farhad. “You Won’t Finish This Article.” Slate 6 Jun 2013. Web. 18 Nov 2016.

McChesney, Robert. “That Was Now and This Is Then: Walter Lippmann and the Crisis of Journalism.” Will the Last Reporter Please Turn Out the Lights. New York: New Press, 2011. Print.

Timberg, Craig. “Russian propaganda effort helped spread ‘fake news’ during election, experts say.” The Washington Post 24 Nov 2016. Web. 26 Nov 2016.

Something About a Book: Why Hasn’t Digital Reading Taken Over?

In 2011, Phillip Jones, deputy editor of The Bookseller, shared predictions that ebooks would “account for 50% of the US market by 2014 or 2015, and then… probably plateau” (qtd. in The Guardian). Yet here we are, rapidly nearing the end of 2016, and according to BookNet, ebook sales have levelled off at a fraction of that, hovering between 17% and 18% of the Canadian book market for the past three years.

And while it is true that digital reading materials of other kinds are constantly being made available online (blog posts, news articles, opinion pieces), the amount of actual reading the average user does online is questionable to say the least. We see this particularly in the ways marketers are adapting their content strategies to make everything more concise and skimmable. User experience professional Steve Krug wrote that when creating digital content, “We’re thinking ‘great literature’… while the user’s reality is much closer to ‘billboard going by at 60 miles an hour.’”

So, contrary to the expectations of many, the bulk of reading is still happening in print. In fact, according to a 2016 Pew Research survey, the number of Americans who read a print book in the last year is double the number who read an ebook. Even more interesting is the finding that only 6% of respondents claimed to read digitally exclusively, meaning that an overwhelming majority of ebook readers are still also reading print.

How could all the doom and gloom predictions that digital would take over be so far from the truth? Is there something inherently problematic with digital reading or does a bound book have some sort of je ne sais quoi factor that needs to be taken seriously for once?

At this point, we cannot keep brushing these statistics off by attributing them simply to nostalgia. There is abundant evidence that other forces are at work here, from studies indicating that deep reading is not achieved on digital platforms, to reports on the rise of screen fatigue, right back to that irritating argument that there is just something about a book. It’s time we considered some of these influences before the ebook is caught off-guard by some new disruptive innovation, as the print book was by the ebook.

 

Deep Reading 

Setting science aside for a moment, I will point out that for many years now I have suspected that I do not read as deeply off of screens. Within a few sentences of an article, I frequently catch myself skimming for relevant information rather than reading the text as it was intended. While this may at times be a helpful study skill, it becomes a problem when skimming stops being a conscious decision and starts becoming the way you automatically read digital works. To write this essay, I printed a dozen or so digital articles to ensure that I genuinely understood the sources I would be referencing. Yet, despite an almost limitless stream of dialogue on this subject online, the research in this area appears to be inconclusive.

A study conducted by Anne Niccoli with Educause last fall tested reading comprehension across ebook and print formats. The study tested 231 students with multiple-choice and short-answer questions based on an article (roughly 800 words) they were assigned to read either in print or digitally. Niccoli found no “statistically significant difference” (Niccoli) in the average test scores of groups reading on a digital device versus groups reading print.

However, in 2013 the UK National Literacy Trust studied children’s reading habits and found that daily print readers are nearly twice as likely to be above average readers as those who read daily on-screen.

The idea that digital reading may have an impact on one’s ability to read in the long term is supported by recent neuroscientific studies. Not only does the brain use different circuitry to read on-screen versus paper, but if you read on screen frequently, your mind may shift to “non-linear” reading rather than “deep reading” (Raphael).

“Because we literally and physiologically can read in multiple ways, how we read—and what we absorb from our reading—will be influenced by both the content of our reading and the medium we use,” explains Maryanne Wolf in her essay on the brain’s digital evolution. Reading on-screen promotes the rapid skimming of texts and “an incessant need to fill every millisecond with new information,” she notes, arguing that while this can increase our reading efficiency, we need to consider what we are losing in the process.

Is the loss of deep-reading the reason readers are putting their screens down in favor of paper, or is something else at work here?

 

Distraction

While different brain circuits could be to blame for the inability to focus online, the amount of distraction we face on a computer screen should not be discounted. A 2009 Stanford study found that people who multitask on digital devices have trouble focusing and do not perform as well on tests. “They’re suckers for irrelevancy… Everything distracts them,” explained researcher Clifford Nass (qtd. in Gorlick). It’s probably fair to say that distraction levels in 2009 were nowhere near what they are today, with smartphone use in 2016 more than triple what it was then. The truth is, we’re all digital multitaskers now.

When you click on a digital article, you’d be lucky to read more than a paragraph before stumbling upon a link to take you elsewhere. You then have to make a choice as to whether to follow the link or ignore it, or perhaps you’ll make a mental note to return to it later. Regardless of the decision, the split-second pause of consideration constitutes just one of many distractions we face with digital reading.

There are also the endless notifications from our social media feeds, and (assuming you don’t use an ad blocker) the flashing banners and towers in the margins. We might have any number of tabs and applications running at a given time that entice us to click away momentarily from whatever piece of writing we are focusing on.

This is a problem with ebooks as well now that people are turning to tablets rather than dedicated e-readers. How tempting is it to check your email or flip over to Netflix or Candy Crush? Even if we sincerely want to pay attention to what we’re reading, we’re likely to get distracted. So what happens if you’re reading out of obligation rather than pleasure and you already have motivation to procrastinate? No wonder 92% of post-secondary students in the US, Germany, Slovakia, and Japan recently reported a preference for print study materials over ebooks.

 

Screen Fatigue

Another possible reason for keeping paper books around is that users who already stare at a screen all day (most of us, now that the smartphone is here) are experiencing the negative side effects of those backlit pages. Young adult readers (ages 18-34) are especially tired of screen time, with over 30% of this group indicating that they would like to spend less time on digital devices (Publishers Weekly). And while we may require digital technology to listen to music or watch television, books are one area of entertainment that allows us to unplug.

According to a 2016 study by The Vision Council, the 18-34 group is also the greatest victim of digital eyestrain, with 73% reporting tired, sore eyes and headaches as well as neck, shoulder, and back pain. Millennials are also the most likely to use multiple devices simultaneously. But it’s not just the younger generation: 90% of Americans spend at least a couple of hours a day on a screen.

We awake to the glow of a phone acting as an alarm clock. We work for hours on our computer screens, perhaps stopping to look at something on another screen—a television, a tablet, a smartphone. The pattern is repeated again and again as our days are filled with electronic images of news reports, online shopping, video games, movies, emails and texts.

— The Vision Council

It could be that people just physically need a break from screens, and reading printed works is one means of entertaining and informing ourselves without using an electronic device.

 

Digital Technology Fatigue

Beyond just the physical symptoms of screen over-indulgence, there is a prevalence of general tech fatigue and a desire to escape from the virtual reality of the web and engage with people and ideas in the physical world.

We are constantly connected: a Mobile Mindset study by Lookout reports that 60% of us check our phones every hour, and there are even symptoms of smartphone addiction, such as “phantom smartphone twitches: the perception that your phone is ringing, buzzing or bleeping even when it’s nowhere in sight” (Lookout, 2012). The constant need to check our digital devices would suggest they make us happier, but more evidence is showing that our screens are actually causing anxiety.

Research has even shown that too much screen time, especially related to gaming, can cause physical damage to the brain and impair cognitive functioning. It’s no surprise people are now seeking ways to de-tech. You can even find travel guides for “remote vacation spots with unreliable cell phone service and internet access” (Fitzgerald). If people are willing to go to such extreme lengths to escape screens, picking up a physical book instead of an ebook seems like a no-brainer (no pun intended).

 

Access

But perhaps the persistence of print is simpler than I have so far suggested. I have talked about the prevalence of smartphones, but few people want to read exclusively on such a small display (roughly 14%), and tablets remain the top platform for ebook readers. Tablets, however, are still expensive luxury items, and despite the familiar claim that ebooks are more economical than print books, if you do not have a device to read them on, the cost becomes much higher (tablets start at around $100, and some cost ten times that).

On top of this, agency-lite pricing has increased ebook prices across the board, so there are times when it may even be cheaper to purchase the print copy. Moreover, as Michael Kozlowski of Good E-Reader points out, the price of an ebook doesn’t tend to depreciate as the book gets older the way a printed book’s does. With print, the initial hardcover costs more than the softcover that follows, and the softcover is discounted by the retailer when it is time to move inventory at the end of the season. The price of an individual ebook, by contrast, remains stagnant. “The entire reason I started to buy e-books was to save money,” Kozlowski laments. “Now the opposite is true; [it’s] more cost [efficient] to buy the hardcover or paperback. I can loan it out to friends, showcase it on my bookshelf and I truly own it.”

 

Physicality/Boundedness

Kozlowski’s last sentiment brings me to another point: it could be that the reason print has not died out is not some failure of digital products, but rather some added value provided by a physical, bound book. The very factor that prevents you from wrapping an ebook and sticking it under the tree at Christmas time could be the reason print has survived. After all, bookstores are busiest during the holiday shopping season, and publishers push out their biggest titles in the fall in preparation.

It’s not just about whether or not we can gift it. It’s the ever-nagging fact that there’s some value we find in the physicality of a print book that’s just missing from an ebook, “the bookness of the book,” as Verlyn Klinkenborg put it in his article “Books to Have and to Hold.” He discusses how, when you close an ebook, the text simply vanishes, but the reading experience “persists when you’ve finished it… A monument to the activity of reading.”

Joe Wikert acknowledged a similar valuation in his post “The Ebook Value Proposition Problem,” explaining how he had bought an expensive Harry Potter collection for his daughter and wondered why she didn’t just want the ebook. His daughter pointed out that she couldn’t showcase a list of ebooks. “I’m sure she’ll smile every time she looks at the box on her shelf,” Wikert concedes. “My collection is a library buried deep within my iPad. When I look at my iPad I don’t smile . . . I just wonder if it’s fully charged…”

Books have become part of the atmosphere of a home. They are something we expect to see around us, even if we aren’t reading them every day. They indicate our interests and tastes, guard our memories, and give us an opportunity to escape our regular lives once in a while. They linger quietly on our shelves and remind us of their existence. An ebook might as well not exist once you’ve closed the app.

The truth is, we are used to seeing books around. They are comforting and decorative. They are expected. But maybe it is only this expectation that is responsible for their continued popularity.

 

Comfort

The physical book is what we are used to. It is what most of us learned to read with, and what we plan to read with our own children. When, as children, we developed the brain circuitry that allows us to read, it was done with a printed page. Paper is our first literacy language, but we are rapidly becoming bi-literate.

As an avid reader, I have innumerable memories associated with a printed page. But is this only because, for the majority of my life, ebooks were not a feasible option? Will kids born in the last decade experience the same nostalgia for books when they grow up?

For many children today, a touch screen will appear in as many (or more) memories as a printed page. When I read a book, I feel more in my element than I do on the web or in an ebook, but that may not be true for the next generation. Maybe the only thing holding back digital reading is our ability to adapt to it. We’re evolving more slowly than the technology, but eventually we will get there, and if comfort is the only factor keeping the printed book alive, then it won’t last long.

 

Conclusion

That brings me to the real point of this conversation. Why have I just rambled on for a couple of thousand words, puzzling over an explanation for the continued persistence of books? Because the answer is vital to the future of the book industry. It will determine where publishers need to adapt and expand, and where they need to proceed with caution.

If we still read physical books only because they are what is comfortable or what we have most access to, then soon enough digital will take over. But if any of the other factors discussed come into play, then the book may continue to have a role in our society, providing a unique value that digital texts cannot.

Unfortunately, I cannot say for sure which of the influences I’ve described is really responsible for the failure of digital reading to completely take over our lives. I suspect that, in reality, it is a combination of all of the above. I’m sure there are also factors I haven’t even considered. It is an area where, I believe, a lot more research could greatly benefit the book industry.

Certainly, the worst thing that could be done is to ignore the subject, to celebrate the immortality of the printed book and to assume that there is inherently something about a book that will guarantee its continued existence. For if anything is to spell the death of the book, it will be blatant overconfidence.

 


 

Works Cited

BookNet Canada. (2016, January 18). Print book sales were up in 2015. BookNet Canada. Retrieved from http://www.booknetcanada.ca/press-room/2016/1/18/print-book-sales-were-up-in-2015

Dunckley, Victoria M.D. (2014, February 27). Gray Matters: Too Much Screen Time Damages the Brain. Psychology Today. Retrieved from https://www.psychologytoday.com/blog/mental-wealth/201402/gray-matters-too-much-screen-time-damages-the-brain

Fitzgerald, Britney. (2012, June 12). Social Media Is Causing Anxiety, Study Finds. Huffington Post. Retrieved from http://www.huffingtonpost.com/2012/07/10/social-media-anxiety_n_1662224.html

Fitzgerald, Britney. (2012, July 31). Technology-Free Vacation: 7 Places Where You Can Escape the Internet. Huffington Post. Retrieved from http://www.huffingtonpost.com/2012/07/26/technology-free-vacation_n_1707478.html

Flood, Alison. (2011, April 15). Ebook sales pass another milestone. The Guardian. Retrieved from https://www.theguardian.com/books/2011/apr/15/ebook-sales-milestone

Gorlick, Adam. (2009, August 24). Media multitaskers pay mental price, Stanford study shows. Stanford News. Retrieved from http://news.stanford.edu/2009/08/24/multitask-research-study-082409/

Klinkenborg, Verlyn. (2013, August 10). Books to Have and to Hold. New York Times. Retrieved from http://www.nytimes.com/2013/08/11/opinion/sunday/books-to-have-and-to-hold.html?ref=verlynklinkenborg&_r=2&

Kozlowski, Michael. (2015, June 4). Major Publishers Are Screwing Readers with High e-book Prices. Good E-Reader. Retrieved from http://goodereader.com/blog/e-book-news/major-publishers-are-screwing-readers-with-high-e-book-prices

Krug, Steve. (2014). How we really use the Web. Don’t Make Me Think, Revisited. Retrieved from http://www.sensible.com/chapter.html

Lookout. (2012, June). Mobile Mindset Study. Lookout. Retrieved from https://www.mylookout.com/img/images/lookout-mobile-mindset-2012.pdf

Maloney, Jennifer. (2015, August 14). The Rise of Phone Reading. The Wall Street Journal. Retrieved from http://www.wsj.com/articles/the-rise-of-phone-reading-1439398395

Meehan, Trista. (2015, August 6). How Much are People Reading Online? Blink UX. Retrieved from https://blinkux.com/blog/reading-online/

Milliot, Jim. (2016, June 17). As Ebook Sales Decline, Digital Fatigue Grows. Publishers Weekly. Retrieved from http://www.publishersweekly.com/pw/by-topic/digital/retailing/article/70696-as-e-book-sales-decline-digital-fatigue-grows.html

National Literacy Trust. (2013, May 16). Raising UK Literacy Levels. National Literacy Trust. Retrieved from http://www.literacytrust.org.uk/media/5371

Niccoli, Anne. (2015, September 28). Paper or Tablet? Reading Recall and Comprehension. Educause. Retrieved from http://er.educause.edu/articles/2015/9/paper-or-tablet-reading-recall-and-comprehension

Perrin, Andrew. (2016, September 1). Book Reading 2016. Pew Research Center. Retrieved from http://www.pewinternet.org/2016/09/01/book-reading-2016/

Rainie, Lee and Kathryn Zickuhr. (2014, January 16). E-Reading Rises as Device Ownership Jumps. Pew Research Center. Retrieved from http://www.pewinternet.org/files/old-media//Files/Reports/2014/PIP_E-reading_011614.pdf

Raphael, T.J. (2014, September 18). Your paper brain and your Kindle brain aren’t the same thing. PRI. Retrieved from http://www.pri.org/stories/2014-09-18/your-paper-brain-and-your-kindle-brain-arent-same-thing

Schaub, Michael. (2016, February 8). 92% of college students prefer print books to e-books, study finds. Los Angeles Times. Retrieved from http://www.latimes.com/books/jacketcopy/la-et-jc-92-percent-college-students-prefer-paper-over-pixels-20160208-story.html

Statista. (2016, October). Number of smartphone users in the United States from 2010 to 2021 (in millions). Statista. Retrieved from https://www.statista.com/statistics/201182/forecast-of-smartphone-users-in-the-us/

Vision Council. (2016). 2016 Digital Eye Strain Report. The Vision Council. Retrieved from https://www.thevisioncouncil.org/sites/default/files/2416_VC_2016EyeStrain_Report_WEB.pdf

Wikert, Joe. (2015, December 28). The Ebook Value Perception Problem. Book Business. Retrieved from http://www.bookbusinessmag.com/post/ebook-value-proposition-problem/

Wolf, Maryanne. (2010, June 29). Our ‘Deep Reading’ Brain: Its Digital Evolution Poses Questions. Nieman Reports. Retrieved from http://niemanreports.org/articles/our-deep-reading-brain-its-digital-evolution-poses-questions/

 

Looking at Accessibility and Inclusivity Online

Through the following observations on the current state of accessibility of digital content, I hope to inspire discourse toward making the internet more inclusive for users, with a focus on members of the blind and deaf communities.

To preface the conversation:

Data from 2015 shows that 11.5% of Canadians still do not have in-home access to the internet, as reported by the World Bank. Given the national population from a year ago, this leaves 4.1 million Canadians deprived of the ability to access, participate in, and create content in the digital sphere from their homes. This data represents access based on available infrastructure, and doesn’t take into account those with local infrastructure whose lack of access stems from financial barriers. A Vancouver Metro news article from February this year cites CRTC data from 2015 indicating that 41% of low-income Canadian households do not have internet access.

The Canadian National Institute for the Blind (CNIB) states that half a million Canadians are living with significant vision loss, with 50,000 individuals losing their sight completely each year. The Canadian Hearing Society states that one in four Canadian adults reports some degree of hearing loss, and cites StatsCan data reporting that over a million Canadian adults live with hearing-related disabilities.

For those living without access to the internet at home, local libraries provide users with the ability to access scholarly resources, read online articles, watch videos, listen to podcasts, and participate in conversation on social media platforms and forums.  In 2013, the Vancouver Public Library (VPL) saw 6.9 million visits by users, which included 1.3 million internet sessions and 572,554 searches on databases available through VPL.

However, not everyone is able to visit their local library and access the full spectrum of its offerings. For some, the cost of bus fare to and from the library on a consistent basis proves too expensive, or the library does not have the technology necessary to provide visually impaired and hearing impaired individuals with the same range of access other patrons are able to enjoy.

Individuals without in-home access to the internet are also excluded from submitting their work to many literary magazines and creative writing contests because of the popularity of Submittable, a cloud-based submissions manager used by many publications and organizations. If a publisher or organization only accepts submissions through Submittable, individuals without internet at home are not granted the opportunity to have their opinions and voices heard, resulting in said publications being inclusive only to those with access to Submittable.

Internet Access for the Blind: Available Technologies and Trends

At present, there are multiple devices on the market for blind and visually impaired users: traditionally, dedicated hardware like the Pacmate QX 420, HumanWare Apex, and Pacmate BT 400, and more recently, voice-over software built into everyday electronic products.

Pacmate QX 420, HumanWare Apex, and Pacmate BT 400

These are Braille note-taking devices that use voice recognition as a first point of access: the user speaks into the device, which then outputs Braille text one line at a time. But humans do not think in single lines of text; they think in strings of phrases and entire paragraphs of joint ideas and images. Bold, italics, bulleted lists, and more advanced formatting options are also not possible, further restricting a user’s creativity and ability to organize content.

Other immense shortfalls of these devices are their mechanical, closed-system platforms, which do not allow users to easily edit content or afford them any privacy (imagine reading your diary out loud with your family in the room). Moreover, they cannot display full web pages in Braille (display is restricted to what one sees when browsing on a mobile device), nor do they permit the user to read ebooks or PDF uploads. They are also very expensive, ranging between $6,600 and $9,500.

Seeing Through Sound

More recently, electronic devices such as laptops, tablets, and smartphones have integrated powerful voice-over software (VoiceOver on Apple products and screen readers such as JAWS on Windows PCs) to enable blind and visually impaired users to access digital content in a way much closer to how others do. Users run their finger across the screen, and the software reads aloud the corresponding names of apps, textual contents of web pages, emails, menu options, and so on. Navigation on a laptop differs in that users generally do not rely on the trackpad and instead employ keyboard commands, because of the size and display difference of the screen; full web pages showcase much more information, and these keyboard commands enhance navigability.

Molly Burke, a blind vlogger on YouTube, posted a video to her channel demonstrating how she engages with digital content on each of her electronic devices. When using social media sites or posting to YouTube, she explains that she prefers using her iPhone or iPad because sites like these are much more difficult to navigate on a laptop, and because she has more control with her finger being directly on the screen. For typing longer documents and emailing, laptops are more useful because of the full keyboard.

Because there is a discrepancy in how easy it is for blind users to navigate on different electronic devices, there is still work needed to ensure that these individuals are not excluded from content published online and in apps. For example, browser extensions and ads that cover up portions of a screen create accessibility issues for users, because the voice-over software is no longer able to read all of the content being displayed. Further, web developers need to consider the accessibility needs of blind and visually impaired users, ensuring that easy navigation via current voice-over software is considered at all times throughout the development process.

An example of fundamental accessibility considerations may be found in the checklist on Web Accessibility in Mind’s (WebAIM) website. This checklist comprises four key components of web accessibility for both blind and deaf users, stating that all elements of a web page should be perceivable, operable, understandable, and robust. Because many blind and visually impaired individuals rely on voice-over software, it’s imperative that online publishers and app and extension developers are trained in these aspects of web accessibility to ensure that users are not excluded due to poor navigability and operability.

Some examples of fundamental accessibility considerations for blind and visually impaired users include the following (a minimal markup sketch follows the list):

  • Contrast levels
  • Instructions existing independently from sounds and visuals
  • Colour not existing as the sole means through which meaning is conveyed
  • Page functionality is available via keyboard
  • Navigational order of links is logical
  • Each page contains a descriptive, relevant, and accurate title
  • Semantic mark-ups (<strong>, <ul>, <ol>, <h1>, etc.) are used appropriately
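To make these items concrete, here is a minimal, hypothetical sketch of the kind of markup the checklist encourages; the page title, headings, and link text are invented for illustration:

    <!DOCTYPE html>
    <html lang="en">
      <head>
        <!-- A descriptive, accurate title is the first thing voice-over software announces -->
        <title>Library Hours and Accessibility Services</title>
      </head>
      <body>
        <!-- Semantic elements give screen readers a logical structure to navigate -->
        <h1>Library Hours and Accessibility Services</h1>
        <nav>
          <ul>
            <li><a href="/hours">Opening hours</a></li>
            <li><a href="/services">Accessibility services</a></li>
          </ul>
        </nav>
        <main>
          <h2>Opening hours</h2>
          <p>The library is open <strong>seven days a week</strong>.</p>
          <!-- A real button (rather than a styled div) is reachable and operable by keyboard -->
          <button type="button">Request a screen-reader workstation</button>
        </main>
      </body>
    </html>

Voice-over software announces the title, reads the headings in order, and lets the user jump between the navigation and main regions, which is exactly the operability and logical navigation the checklist asks for.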

[Image: two versions of the same form, one less and one more accessible, taken from Google’s “Web Fundamentals in Accessibility.”]

The upper version is less accessible for blind and visually impaired users for the following reasons (a corrected form sketch follows the list):

  1. The text is lower contrast, making it harder to read for individuals with vision impairment.
  2. The labels on the left sit a great distance from their corresponding fields, making it challenging to associate them, especially if the user needs to zoom in a lot to read the page.
  3. The “Remember details?” checkbox isn’t associated with its label, so it wouldn’t be easy for users to tell whether they had checked the box.
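For illustration, a hypothetical version of such a form that avoids all three problems might pair every field and checkbox with its label explicitly; the field names below are invented:

    <form>
      <!-- Explicit for/id pairing keeps each label attached to its field,
           even when the user zooms in or a screen reader announces the control -->
      <label for="email">Email address</label>
      <input type="email" id="email" name="email">

      <label for="password">Password</label>
      <input type="password" id="password" name="password">

      <!-- Wrapping the checkbox in its label makes "Remember details?"
           clickable and clearly associated with the box it controls -->
      <label>
        <input type="checkbox" name="remember"> Remember details?
      </label>

      <button type="submit">Sign in</button>
    </form>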

A recent article published by the CBC discusses current typographical trends that are making websites less readable. As developer and technology writer Kevin Marks explains, the value of fashionable aesthetics is detracting from practical, accessible choices, with greyer, skinnier sans-serif typefaces being a popular choice among designers. Higher resolution screens on tablets and smartphones drive designers to select these lighter typefaces, but they worsen contrast levels between background and text, ultimately making reading more challenging for visually impaired users. Designers, therefore, should be mindful of choices concerning typefaces and contrast levels when laying out content published online.
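As a rough illustration of what is at stake, WCAG 2.0 asks for a contrast ratio of at least 4.5:1 between normal-sized text and its background; the hex values below are illustrative rather than taken from any particular site:

    <!-- Light grey on white is roughly a 2.3:1 ratio and fails the WCAG AA threshold -->
    <p style="color: #aaaaaa; background-color: #ffffff;">This trendy grey is hard to read for low-vision users.</p>

    <!-- A darker grey on white is roughly 7:1 and passes comfortably -->
    <p style="color: #595959; background-color: #ffffff;">This darker text keeps an understated look but stays legible.</p>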

Returning to Touch

In recent years, there have been substantial efforts toward revisiting Braille devices as a way to enable full-page digital display at a user’s fingertips, moving away from voice-over software.

A powerful example of these efforts is the Blitab, a tablet being touted as “the iPad for the blind.” With an estimated 285 million blind and visually impaired people worldwide, co-founders Kristina Tsvetanova, Slavi Slavev, and Stanislav Slavev are driven by their motivation to provide members of the blind and visually impaired community with the opportunity “to grow and prosper, where education, technology, and knowledge are open to them.”

Smart liquid technology is the Blitab’s distinguishing feature, creating instant tactile relief via small liquid bubbles that form the Braille text users pass their fingers along. The bottom of the Blitab has a small screen showing the contents of a web page, and the larger upper portion is the area comprising the tactile relief. Because the Blitab is fully electronic rather than mechanical, entire web pages can be viewed by the user rather than single lines or isolated portions. Moreover, the Blitab is capable of converting Word documents and PDF files directly from USB sticks and memory cards, enabling users to read ebook files. Tactile relief also permits visual objects such as maps and geometric shapes to be rendered, improving access to navigation tools, textbooks, works of non-fiction, and diagrams.

O Captions, Where Art Thou?

Information from Social Media Today’s “Top 5 Facebook Video Statistics for 2016” reveals that 8 billion video views are generated each day on Facebook, with videos earning 135% greater organic reach than photos. 85% of those views occur without sound (largely due to the popularity of food gif recipes and branded videos), a trend that has greatly improved accessibility for deaf users and those with other hearing-related disorders: it encourages publishers to create captions to accompany their uploads in their quest to attract greater audiences and reach on Facebook through shares, and those captions in turn enable deaf users to watch and share content that would otherwise not be open to them. Videos uploaded to YouTube, on the other hand, fail on the caption front.

Given that YouTube offers a multitude of ways to help uploaders with the captioning process, one would think this would not be the case, especially considering that companies such as Rev provide creators with closed captions for their videos for $1 per minute of audio, which is quite cheap considering the length of most videos online (3-7 minutes). Rikki Poynter, a deaf vlogger on YouTube, has created a series of videos on the subject, educating other YouTubers and viewers on how to caption videos in the most accurate, accessible fashion. She explains the options offered, which include uploading a text file containing a transcription of the video (either created by the YouTuber or by companies like Rev), manually entering captions into YouTube’s subtitles box, or crowdsourcing captions from viewers.
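For reference, one straightforward way to supply captions is a timed caption file; a common format YouTube accepts is SubRip (.srt), and the timestamps and dialogue below are invented purely to show the structure:

    1
    00:00:01,000 --> 00:00:04,000
    Hi everyone, welcome back to the channel.

    2
    00:00:04,200 --> 00:00:08,500
    Today I want to talk about how to caption your videos properly.

Each numbered cue is displayed exactly between its two timestamps, which is why a carefully prepared file avoids the syncing problems described in the next paragraph.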

As she explains in another video, however, crowdsourced captions do not always produce the desired outcome. Oftentimes, these captions include spelling, grammar, and punctuation errors that prove challenging to follow, are not properly synced with what is happening in the video, and can even contain extraneous content such as jokes or commentary added by the person doing the captioning.

[Image: two examples of poor captioning in the YouTube upload of the CBC’s Marketplace episode from October 28, 2016 (and yes, that is CBC, not the Centers for Disease Control).]

In response to the abundance of inadequate captioning, the movement #NoMoreCraptions has gained momentum with the mission of educating content creators about the problems deaf users experience when captioning fails to be mindfully executed. #NoMoreCraptions draws on the regulations for captioning enforced by the Federal Communications Commission (FCC) in the USA and the Canadian Radio-television and Telecommunications Commission (CRTC) in Canada. These regulations do not extend to video uploaded to websites, so advocacy efforts are aimed at educating YouTubers and others who caption videos about these best practices, with the hope of improving the quality (and resulting accessibility) of captions in online videos.

One important component of the CRTC’s regulations centres on the delivery and speed of captions:

“Captions must be verbatim representations of the audio, regardless of the age of the target audience. Speech must only be edited as a last resort, when technical limitations or time and space restrictions will not accommodate all of the spoken words at an appropriate presentation rate.”

Because captions are the only means through which deaf viewers are able to know what is being spoken, it’s important to consider the effect when spoken words are edited. People naturally use filler words, start sentences that they don’t finish, and utter swear words and slang when talking. If a person captioning selectively removes these filler words and unfinished sentences, and replaces swear words and slang with other words, the resulting captions will vary immensely from the true contents of the video, providing a false impression of the people in the video to the viewer; in many cases, an individual’s tone, speaking style, and level of articulation will change drastically when captions are not verbatim representations.

A free, open-source project bearing the same name has also been initiated by Michael Lockrey and can be found at nomorecraptions.com. Deaf himself, Lockrey created the site to combat YouTube’s automatic captions (craptions) by providing a fast and easy-to-use way for individuals to fix the captioning errors found in videos. Users begin by pasting a video’s URL and then proceed through four steps to amend all the errors they wish to fix. Users can then download their captions.

In an interview with Amara, a non-profit dedicated to reducing barriers to accessibility and fostering a more democratic media ecosystem, Lockrey describes his discontent with the present state of captioning on YouTube:

“YouTube has admitted recently that only 25% of YouTube videos have captioning and most of these only have automatic craptioning, which doesn’t provide me with any accessibility outcomes, and I wrote a blog post recently that suggests that this means that only 5% of YouTube videos are likely to have good quality captioning and this simply isn’t good enough.”

This is extremely problematic for multiple reasons. For one, upwards of 95% of all videos uploaded to YouTube are either not accessible at all or are not adequately accessible for members of the deaf community. Second, automatic captions serve as a cop-out for content creators by allowing them to claim they’ve captioned their videos when really they have not. Finally, it means that even some of the largest online publishers such as The New York Times have not been, or have only very recently begun, captioning their videos.

With this survey of the current accessibility challenges facing blind and deaf users, it’s clear that the emphasis on accessibility standards needs to not only be communicated in the media, but actively encouraged among designers, developers, and creators to ensure all users of digitally published content are granted inclusion.

– References –

The World Bank, “Internet users (per 100 people)”, http://data.worldbank.org/indicator/IT.NET.USER.P2

Statistics Canada, Population by year, by province and territory (Number), http://www.statcan.gc.ca/tables-tableaux/sum-som/l01/cst01/demo02a-eng.htm

Matt Kieltyka, “Low-income Canadians struggle to afford Internet, bridge digital divide”, http://www.metronews.ca/news/vancouver/2016/02/02/low-income-canadians-struggle-to-afford-internet.html

CNIB, “Fast Facts about Vision Loss”, http://www.cnib.ca/en/about/media/visionloss/pages/default.aspx#canadians

CHS, “Facts and figures”, http://www.chs.ca/facts-and-figures

Vancouver Public Library, “Annual Report 2013”, http://www.vpl.ca/about/details/AR2013_text

Submittable, https://www.submittable.com/

Molly Burke, “How I use technology as a blind person! – Molly Burke (CC)”, https://www.youtube.com/watch?v=TiP7aantnvE

WebAIM, “WebAIM’s WCAG 2.0 Checklist for HTML documents”, http://webaim.org/standards/wcag/checklist

Google, “Web Fundamentals: Accessibility”, https://developers.google.com/web/fundamentals/accessibility/

Dan Misener, “Having more trouble reading websites? You’re not alone”, http://www.cbc.ca/news/technology/misener-web-readability-1.3831009

Blitab, http://blitab.com/

Social Media Today, “Top 5 Facebook Video Statistics for 2016 [Infographic]”, http://www.socialmediatoday.com/marketing/top-5-facebook-video-statistics-2016-infographic

Digiday, “85 percent of Facebook video is watched without sound”, http://digiday.com/platforms/silent-world-facebook-video/

Rev, https://www.rev.com/

Rikki Poynter, “3 Ways to Caption Your Videos!”, https://www.youtube.com/watch?v=7t-1kFDPceo

Rikki Poynter, “#NoMoreCraptions: How To Properly Caption Your Videos”, https://www.youtube.com/watch?v=-O4YcVQt5NM

CRTC, “Quality standards for English-language closed captioning”, http://www.crtc.gc.ca/eng/archive/2012/2012-362.htm

No More Craptions, http://nomorecraptions.com/

Amara, “YouTube Automatic Captions Need Work: A Chat with Michael Lockrey”, https://about.amara.org/2015/05/01/youtube-automatic-captions-need-work-a-chat-with-michael-lockrey/

Amara, https://pro.amara.org/ondemand

Michael Lockrey (TheDeafGuy), “OMG! I just found out there’s only 5% captioning* on YouTube”, https://medium.com/@mlockrey/omg-i-just-found-out-theres-only-5-captioning-on-youtube-9bbb8bc604f6#.dtm620kyj

Margaret Sullivan, “Perfectly Reasonable Question: Closed Captions on Times Videos”, http://publiceditor.blogs.nytimes.com/2015/03/24/perfectly-reasonable-questions-closed-captions-on-times-videos/?_r=2

 

Journalism in the Digital Age

Since the 1800s, the distribution of news and information has undergone continuous change. With new technologies such as the printing press and, more recently, the internet, new voices can reach broader audiences at lower cost. In the modern age of the web, everyone from large media giants to local daily newspapers has felt the effects of declining advertising revenue and readership. This has required newspapers and journalists to adjust their production and distribution models and find new ways to keep their audiences engaged and informed. This paper will discuss the newspaper industry’s transition from print to digital media. It will explore how the internet has changed the way consumers receive their news and the way that news is reported. I will argue that these changes have affected how news is defined. Digital news can take many forms and come from a range of sources, which is why consumers must critically assess and make informed decisions about how and what they consume online.

Consumers are spending more time on the web than ever before. According to CBC, “Canadians are among the biggest online addicts in the world, visiting more sites and spending more time visiting websites via desktop computers than anyone else in the world.” As readers move their time and attention online, media organizations have followed suit by developing online formats to try new ways of producing revenue. Newspapers have introduced subscription models via digital reading apps for mobile phones and tablets, and created paywalls to fund the content they distribute on their websites. However, this does not make up for the loss of newsstand sales and advertising revenue. As indicated in a report in The Globe and Mail: “Postmedia Network Inc., publisher of the National Post and nine other metropolitan dailies, is looking to cut $120-million from its operating budget as part of a three-year program. Sun Media has cut more than a thousand jobs over the last several years, while the Toronto Star and Globe and Mail have both looked to buyouts and outsourcing to reduce their costs.” The new landscape of online publishing and content distribution has disrupted traditional news organizations and print journalism.

Readers are pulling news media into the digital world because that is where they consume content. This means that advertising firms and companies are also choosing to advertise online, which has resulted in a considerable loss of earnings for print newspapers, which, according to Suzanne M. Kirchhoff, traditionally relied on advertising for 80% of their overall revenues. Companies are choosing to advertise online because it is cheaper and more dynamic. They can advertise through Facebook for as little as $10 (depending on how many people they are trying to reach), while a half-page color advertisement in The Globe and Mail can cost over $8,000. On the web, companies are able to reach a much wider audience through targeted, interactive content. Advertisers no longer need to buy premium print ad space; instead they can advertise online, at very low cost, through large companies like Facebook and Google. This has resulted in a very unequal balance in the internet advertising market share, as indicated in the graph below:

[Graph: internet advertising market share (Winseck, 2015)]

Today, the entire internet churns out content at a very high volume, and it is all instantly available at any time. Print news companies are now competing with a very large number of online sources. Social media “tech giants” such as Facebook, Twitter, Instagram, Snapchat, and Google are finding new ways to distribute news. As noted in Madelaine Drohan’s report, “It started in January when Snapchat, used by 100 million people to share photos and short videos, started Discover…Facebook, with its estimated 1.6 billion users, caused a splash when it launched Instant Articles for mobile devices.” Evidently, traditional print newspapers cannot compete with the web. The large decline in ad revenue and the rise of competition have caused a significant drop in profit for traditional print newspapers, and have resulted in their retrenchment and transition to online publishing.

 

The Internet has changed the way people receive/consume/interact with news.

A survey of Canadian media consumption by Microsoft determined that the average attention span of a person is eight seconds, down from 12 in the year 2000 (Egan, 2016). The growth of digital news publishing has affected consumers’ reading experiences. Consumers do not pick one website or digital news source to gather their information; they move around, find journalists they like, and quickly scroll through their options. Readers are finding and interacting with news in different ways than ever before.

The internet allows you to be anywhere in the world, access the website of a foreign country’s newspaper, and read about the local news. Martin Belam writes: “It used to be the case that if I wanted to read the Belfast Telegraph, I pretty much had to be in Belfast, and hand over some cash to the newspaper sellers and newsagents around the city. Now, of course, I can read the website for free from the comfort of my own home, whether that is in London, New York or New Delhi.” Outside of the traditional boundaries of press circulation, consumers can access information from across the world: “Against an almost exclusively national consumption of their traditional media, the leading European newspapers receive 22.9% of their online visits from abroad” (Peña-Fernández et al., 2015). Readers are able to connect with news outside their own community and obtain a broader view of world events.

In addition to having access to a worldwide scope of news sources, consumers are actively engaging with online content. As noted by Drohan, there is “growing clamour among online readers, viewers and listeners to be active participants in the creation of news rather than passive consumers of a product.” Consumers desire a dynamic interaction with the content they are reading: “‘News is not just a product anymore,’ says Mathew Ingram. ‘People are looking for a service and a relationship of some kind.’ (Drohan)” This engagement with news means that news stories can actively be challenged and ideas questioned. No longer do readers simply take a news story as fact and put down the paper; they are commenting, sharing, tweeting, and re-posting the information.

In the digital age, online readers can visit several websites and choose the source they find most appropriate for the story they want to read. Independent digital media companies such as Buzzfeed and The Huffington Post offer an alternative way of discovering news and finding entertainment. As a news aggregator, The Huffington Post curates content and uses algorithms to gather and group together similar stories. According to The Economist, The Huffington Post “has 4.2m unique monthly visitors—almost twice as many as the New York Post.” There are similar aggregators that are entirely automatic:

“The Wal-Marts of the news world are online portals like Yahoo! and Google News, which collect tens of thousands of stories…most consist simply of a headline, a sentence and a link to a newspaper or television website where the full story can be read. The aggregators make money by funnelling readers past advertisements, which may be tailored to their presumed interests. They are cheap to run: Google News does not even employ an editor” (The Economist).

Increasingly, traditional news sites are being accessed indirectly by readers: “less than half of visits (44.6%) access the websites of the online Europeans newspapers directly through their URL” (Peña-Fernández et al.).

What do these aggregator sites mean for traditional newspapers and for online readers? News aggregators act as a source of traffic to news sites. On the other hand, the practices of news aggregators raise ethical questions. Newspapers produce and publish the original content, while aggregator sites earn money through advertising around this second-hand content. Further, consumers may simply scan through headlines rather than clicking on an article to read the whole story. This means that the original news sites do not receive these readers at all, and lose out on potential profit. Megan Garber highlights the complex nature of these sites: “Achieving all this through an algorithm is, of course, approximately one thousand percent more complicated than it sounds. For one thing, there’s the tricky balance of temporality and authority. How do you deal, for example, with a piece of news analysis that is incredibly authoritative about a particular story without being, in the algorithmic sense, “fresh”? How do you balance personal relevance with universal? How do you determine what counts as a “news site” in the first place?” When sites such as Google News use algorithms to compile information, a computer refines the content. This presents the question as to whether or not what’s being highlighted is “quality” journalism. Aggregators remove the human element; something that many would argue is essential to a process as important as finding news.

 

The internet has changed the way that news is reported.

The internet has added numerous layers to the news reporting process. Journalists from traditional media organizations are no longer the monitors of news, and the number of independent journalists is growing: “there are individuals, sometimes called citizen journalists, who cover events from their own vantage point, with various degrees of objectivity, accuracy and skill. Their emergence has led to an as yet unresolved debate over who has the right to call themselves a journalist” (Drohan). One example of a website publishing content from citizen journalists is Groundviews, out of Sri Lanka. Groundviews publishes uncensored content by citizen journalists who are pushing the boundaries of traditional media. The rise of citizen journalism means that voices outside of conventional media can be heard. It also raises questions as to accuracy and neutrality. As a reader, how do you evaluate how much weight to give to an individual reporting outside of traditional news media? This is an important question, and one that will remain relevant as journalism continues to change.

Attribution of writers in online articles has also changed, and varies between websites. Interestingly, sites such as The Huffington Post often include an image of the writer, as well as their credentials, in a prominent position on the page. In contrast, sites like The Guardian and The New York Times emphasize the content of the article and simply include a byline. For example, the front page of The Huffington Post Canada’s politics section looks like this:

[Screenshot: The Huffington Post Canada politics page (author’s screenshot).]

The variation in attribution highlights the unique differences of these sites. Perhaps sites like The Huffington Post feel the need to point out the credibility of their writers because it is not automatically implied; the website itself has not built a solid reputation as a news source. Perhaps readers are more concerned about who has written the article, and not where it’s been published. This grants independent journalists the freedom to build a loyal audience and publish across a variety of platforms. It means that large news corporations are no longer the sole authority on news topics.

The internet has changed the speed at which journalists must work to provide readers with information. Consumers expect immediate access to news. In an article in The Guardian, Belam discusses immediacy in digital publishing: “In years gone by, news of suicide bombers underground in the Russian capital would have meant producing a graphic for the following day’s paper – a lead time of several hours. Nowadays, Paddy Allen has to get an interactive map of the bombing locations finished, accurate, and published on the website as quickly as possible.” In order to remain competitive, journalists must provide instant access to the latest stories. They must prepare information for right now, instead of tomorrow. Does this emphasis on instantaneous news mean that journalists are less likely to ensure the accuracy of their content? For readers, immediate access to information has many benefits, but it may also have implications for the quality of content. According to Karlsson: “Immediacy means that provisory, incomplete and sometimes dubious news drafts are published.”

This also means that articles may be revised as new information arises, events progress, and facts unfold. Karlsson completed a study of several articles in The Guardian. In one example, two versions of an article were published two hours apart and consist of roughly 86% identical text. However, when you read the headline of each article, their messages are very different. The original version has been “complemented with other information that sets the new headline” and it has been framed differently. As journalists race to publish news online, in some cases, efficiency may be traded for accuracy, meaning that information is subject to change.

Digital news publishing also raises important questions about ethics. The vastness of the internet provides reporters and journalists with readily available information about individuals. Belam notes that, “Whenever a young person is in the news, Facebook or other similar social networks are usually a ready source of images. No longer does the news desk have to wait for a family to choose a cherished photo to hand over. A journalist can now lift photographs straight from social networking sites.” Privacy issues are at the forefront of the digital world, and they certainly impact news reporting.

The web has tested the endurance of news companies and forced industry leaders to creatively adapt and innovate. With all of these changes happening in news publishing, how do traditional news organizations continue to bolster development and move forward in a valuable, successful way? As technologies grow and change, news organizations must continue to do so as well. Mathew Ingram discusses how The Guardian is exploring alternatives to paywalls and digital-subscription models by offering a membership-based program, where content is available to paying members that isn’t available to non-paying readers. The Guardian views this membership plan as a ‘reverse-paywall:’ “Instead of penalizing your most frequent customers by having them run into a credit-card wall, you reward them with extra benefits (Ingram).” In order for traditional news companies to thrive in the digital age, they must maintain strong relationships with their readers.

 

Conclusions

David Marsh of The Guardian asks very important questions about journalism in today’s world: “If I tweet from a major news event – the Arab spring, say – is that journalism? If I start my own political blog, does that make me a journalist? If I’m a teacher, say, but contribute stories to a newspaper, does that make me a “citizen journalist”? Does it make any difference whether people are paid, or not, for such work? Should bloggers, tweeters and “citizen journalists” be held to, and judged by, the same standards as people working in more traditional journalistic roles?” Having unlimited access to instant news via the internet keeps us well informed and socially aware. Digital journalism also raises many questions. It makes it difficult to define news. With a plethora of new options for receiving news, are people replacing traditional formats with ones that are less culturally significant? Have we been conditioned as digital consumers to desire instant entertainment over well-researched, evidence-based news? Has social media diluted the core of news and diverted our attention away from informative sources? These are questions that are important for consumers to consider as they interact with and search for news online.

It is possible that with unlimited access to news via tablets and mobile devices, consumers are spending more time reading news than ever before. A younger population is now consuming digital news on a regular basis. With these positives also come negatives: social network conglomerates have a stronghold on media, quality reporting is declining, and on various platforms it can be difficult to distinguish between gossip and news. As consumers in the digital age, it is important to recognize that while technology and the internet continue to play an increasingly important role in our lives, we are in control of what we read and how we gather information. We must take responsibility for the way in which we engage with online content. There are ways we can counter a growing inclination to consume sharable, fleeting, surface-level information. Firstly, as consumers, we need to look at the value of what we’re reading and think deeply about how we’re engaging with media. Secondly, as publishers, journalists, and writers, we need to recognize that the audience should come first; circulating content that does not simply aim to capture engagement, but is enlightening and stimulates analytical thinking, is more important than it has ever been.

 

Works Cited

Belam, Martin. “Journalism in the digital age: trends, tools and technologies.” The Guardian. 14 April 2010, https://www.theguardian.com/help/insideguardian/2010/apr/14/journalism-trends-tools-technologies.

CBC News. “Desktop internet use by Canadians highest in world, comScore says.” 27 March 2015, http://www.cbc.ca/news/business/desktop-internet-use-by-canadians-highest-in-world-comscore-says-1.3012666.

Drohan, Madelaine. “Does serious journalism have a future in Canada?” Canada’s Public Policy Forum. 2016, http://www.ppforum.ca/sites/default/files/PM%20Fellow_March_11_EN_1.pdf.

Egan, Timothy. “The Eight-Second Attention Span.” The New York Times. 22 Jan. 2016, http://www.nytimes.com/2016/01/22/opinion/the-eight-second-attention-span.html?_r=0.

Garber, Megan. “Google News at 10: How the Algorithm Won Over the News Industry.” The Atlantic. 20 Sept. 2012, http://www.theatlantic.com/technology/archive/2012/09/google-news-at-10-how-the-algorithm-won-over-the-news-industry/262641/.

“Huffington Post Canada Politics, Front Page” 6 November 2016. Author’s screenshot.

Ingram, Mathew. “The Guardian, Paywalls, and the Death of Print Newspapers.” Fortune. 17 February 2016, http://fortune.com/2016/02/17/guardian-paywall/.

Karlsson, Michael. “The immediacy of online news, the visibility of journalistic processes and a restructuring of journalistic authority.” Journalism. April 2011 12: 279-295, https://www.academia.edu/561238/The_immediacy_of_online_news_the_visibility_of_journalistic_processes_and_a_restructuring_of_journalistic_authority

Kirchhoff, Suzanne, M. “The U.S. Newspaper Industry in Transition.” Congressional Research Service. 9 Sept. 2010, http://www.fas.org/sgp/crs/misc/R40700.pdf.

Ladurantaye, Steve. “Newspaper revenue to drop by 20 percent by 2017, report predicts.” The Globe and Mail. 5 June 2013, http://www.theglobeandmail.com/report-on-business/newspaper-revenue-to-drop-20-per-cent-by-2017-report-predicts/article12357351/.

Marsh, David. “Digital age rewrites the role of journalism.” The Guardian. 16 October 2012, https://www.theguardian.com/sustainability/sustainability-report-2012-people-nuj.

Peña-Fernández, Simon; Lazkano-Arrillaga, Inaki; García-González, Daniel. “European Newspapers’ Digital Transition: New Products and New Audiences.” Media Education Research Journal. 16 July 2015.

“Tossed by a gale.” The Economist. 14 May 2009, http://www.economist.com/node/13642689.

Winseck, Dwayne. “Media and Internet Concentration in Canada Report, 1984-2014,” Canadian Media Concentration Research Project, (Carleton University, November 2015), http://www.cmcrp.org/media-and-internet-concentration-1984-2013/.