The Adaptation Advantage

As it stands right now, Jellybooks is well-positioned to move in on one of the publisher’s most important (and hardest) jobs: determining whether a book will sell well. There is an opportunity for authors to harness this technology and share their books with readers to determine if they are print-ready, bypassing the publisher altogether.

Yet there is also an opportunity for publishers here, if they are able to move fast enough (which seems to be a lot to ask in this industry) to take it. If publishers incorporate technology like Jellybooks as a regular part of their service offerings and business practices, there is a chance that authors will feel they need publishers to help them get the most out of the technology to perfect their stories.

Publishers could send draft manuscripts to readers, which would be similar to ARCs but much less polished. The Jellybooks technology would measure reader interest, which the publisher could analyze alongside other decision-making factors (intuition, current trends, etc.) that determine if a book gets published or not.

The data would also help publishers determine how to allocate resources across different books. Books that most people finish and read quickly may only need minor suggestions and copy edits, while books that people stop reading after chapter three would be flagged for a closer look at what happens at that point. The editor could then analyze that section and work with the author on targeted revisions. This agile revision process would involve the editor, the author, and the reader (who has been missing from this equation in the past).
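To make this concrete, here is a minimal sketch (my own invention, not Jellybooks’ actual pipeline; the readers and chapter numbers are made up) of how reading-progress data could surface a completion rate and the chapter where a draft loses readers:

```python
# Hypothetical reading-telemetry analysis: each value is the furthest chapter
# a test reader reached in a 12-chapter draft. All numbers are invented.
from collections import Counter

furthest_chapter = [12, 3, 3, 12, 5, 3, 12, 12, 3, 2,
                    12, 3, 12, 4, 3, 12, 3, 12, 3, 12]
TOTAL_CHAPTERS = 12

finishers = sum(1 for c in furthest_chapter if c == TOTAL_CHAPTERS)
completion_rate = finishers / len(furthest_chapter)

# Where did the non-finishers stop?
drop_offs = Counter(c for c in furthest_chapter if c < TOTAL_CHAPTERS)
worst_chapter, readers_lost = drop_offs.most_common(1)[0]

print(f"Completion rate: {completion_rate:.0%}")        # 45%
print(f"Biggest drop-off at chapter {worst_chapter} "
      f"({readers_lost} readers stopped there)")        # chapter 3
```

An editor reading that output would know exactly which section to sit down with the author and revise.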

By getting more feedback on a book before it is published, publishers and authors can better ensure books will be well received by their target audience. Hopefully, the additional work that goes into getting a book ready for print will be balanced out by the increased sales that result from stronger books.

Other companies that release products often do rounds of focus group testing to perfect their products, and so it makes sense that this process should be adapted to the publishing industry, especially with the support of technology. Why not have research-based feedback to bolster the editing process? If editors can use this technology to help them do their jobs more efficiently and effectively (by becoming experts in interpreting and responding to the data), then they will be able to mitigate the threat of losing their jobs to the technology.

If we want to stay relevant, we need to find ways to use emerging technologies, like Jellybooks, to our advantage.

But do we really give a folk?

When we ask what kind of data we want to collect about readers’ impressions, what we’re really asking is how we would encourage a folksonomy; there’s no way to garner impressions other than through autonomous, organic input from readers. Impressions are thoughts and feelings. To ask for them directly would be leading at best and coercive at worst. Sure, there’s plenty of other data about readers that can be collected and still be helpful to publishers: print or digital, paperback or hardcover, point of sale, location, et cetera. But in order to gather impressions, a publisher would have to create a social media platform for its readers. The only publisher to have semi-successfully achieved this is Amazon, with its website Goodreads.

One problem with a future in which publishers collect their own data via social platforms with functional folksonomies is that once there is one really good platform, no one is going to be very receptive to the others. In fact, it will feel prohibitive to readers to have to go to the Penguin Random House social platform for some books and the Simon & Schuster social platform for others. And to an extent, it would also disrupt the users’ folksonomy. Perhaps the compromise would be for all publishers to hold an investment in Goodreads…but that gives a lot of power to Amazon. Ideally, the social network would be completely neutral and devoid of any vested commercial interests.

I’m a little biased against the idea that publishers should be finding new ways to capture data about readers’ impressions at all. A lot of the data (about everything but readers’ impressions) is already out there: sales data, POS systems, demographics, et cetera. And as for how to get data from a reader folksonomy of your books — that’s already out there too, if publishers are willing to dig for it.

Take, for instance, two pretty well-known fandom platforms: Tumblr and AO3.

Tumblr has been the go-to place for fangirls and fanboys since 2007. It has a decade-long evolution of tag-building and micro-communities that thrive around even the smallest of fandoms. While this is, to an extent, only relevant for fiction, it’s exactly the kind of natural, organic folksonomy that publishers could gauge impressions from.

The Organization for Transformative Works’ “Archive of Our Own” has a similar tagging culture to Tumblr, though it is more organized, literature-centric, and robust. AO3 is a goldmine of rich data about reader culture. If publishers want to know what readers are loving about their books, how fans are subverting a book’s themes, and how deep the fanbase runs, AO3 has that information. Though from the user’s perspective the site’s tagging system would be considered less a folksonomy and more a metadatabase, from a publisher’s perspective it’s an organically built pool of readers’ taxonomical reactions to a given book or series.
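As a toy example of what that digging might look like (my own sketch; AO3 offers no official public API, so assume the tags for one book’s fanworks have already been exported into a local list):

```python
# Hypothetical tag-mining sketch: the tag lists below are invented, standing
# in for tags exported from fanworks about a single book.
from collections import Counter

work_tags = [
    ["Alternate Universe", "Hurt/Comfort", "Found Family"],
    ["Found Family", "Slow Burn"],
    ["Hurt/Comfort", "Found Family", "Fix-It"],
    ["Fix-It", "Alternate Universe"],
]

tag_counts = Counter(tag for tags in work_tags for tag in tags)
print(tag_counts.most_common(3))
# -> [('Found Family', 3), ('Alternate Universe', 2), ('Hurt/Comfort', 2)]
# A high rate of "Fix-It" tags, for instance, would hint that readers are
# dissatisfied with how the canon story ended: exactly the kind of
# impression publishers are after.
```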

For non-fiction, literary fiction, and other types of books that don’t lend themselves well to fandom culture, there are other ways to gauge reader interaction. For scholarly books, impressions are pretty explicitly explained in citations of others’ works. For literary fiction, you’re more likely to see readers interact on Goodreads…to which we’ve come full circle.

The argument I’m making is that I don’t believe there’s any more data to collect from readers’ impressions than what is already available. Perhaps the existing data isn’t being mined correctly, but that doesn’t mean it’s not out there. Given an AI system like Booxby’s, a publisher may be able to unravel patterns in readers’ behavior, but that is by its very definition inorganic, and more about determining the next book than reactions to the last book.

The only way I can see the situation being any different is if, in a world where ebooks are the dominant form of literature consumption, books have become completely social network-capable; each book is its own interface for readers to react and interact. Though this tech is undoubtedly possible, and might even be the future, how long it would take to transition readers to accept that as a norm is yet to be seen.

TLDR: The data is already available if you just take the time to look for it, readers’ impressions aren’t any use if they aren’t organic, and we’ve got lots of data already that we maybe aren’t even using.

The Self-Driving Manuscript: How A.I. Will Revolutionize Book Acquisitions

It has always been the case that data about reader preferences heavily influences marketing, and that successful or unsuccessful marketing in turn influences later book acquisition decisions. AI has massively increased the amount of reader data available, and the degree to which it can be analyzed. But a more direct application of AI to acquisitions is also taking root, and I predict that artificial intelligence will become integrated into manuscript acquisition.

For a concrete example, we can look at Booxby, an app that “reads” and analyzes texts, and offers “A.I. generated data to the four stages of book discovery: manuscript development, acquisition, marketing, and consumer discovery.” Founder Holly Lynn Payne explains that AI can help solve the discovery problem; it can “provide meaningful analytics to inform acquisition decisions, understand a book’s full market potential, and create an effective mechanism to connect books to the readers who would most enjoy them.”

However, as my MPub colleague Taylor McGrath reminded us in a comment in our Hypothes.is group, readers tend to choose books based on personal recommendations, making an AI-driven, Netflix-like service for books unlikely to take hold. I agree, and that’s why I can’t see what it would look like if we used AI to create a “mechanism to connect books to the readers who would most enjoy them.” Payne is overstating the problem. (In fact, I think many readers, myself included, actually enjoy the process of looking for books, in a way that we do not necessarily enjoy shopping for other goods.)

I do think Payne gets it right when she says that AI can “provide meaningful analytics to inform acquisitions decisions” and “understand a book’s full market potential.” It’s acquisitions editors, not readers, who want help choosing books, and that’s where Booxby will shine. On top of providing comps for the books it processes, Booxby also offers “patent-pending Experiential Language Tags that quantify the reader experience.” I have no idea what those are, but if they’re anything like the applications of AI that I’ve been learning about lately, it sounds like a probably imperfect but very powerful tool.

For example, in one of next week’s b-side readings, “A Publisher’s Job is to Provide a Good API for Books,” Hugh McGuire explains how easy it is to use “semantic tagging” to build a smart index for your ebook. Like a conventional index, a smart index can tell you where all instances of John Smith appear in the book; but it can also tell you “where all people appear; where all instances of people named John appear; where all instances of people named Smith appear; that ‘my dear Granny Smith’ is a person and ‘my delicious Granny Smith’ is an apple.” A smart index is what McGuire calls a “semantic map” of the book. (Small Demons is a great illustration of what this might look like.)
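To illustrate (my own toy sketch, not McGuire’s implementation), a smart index is essentially a lookup keyed on an entity’s type as well as its name, which is exactly what lets it tell the grandmother apart from the apple:

```python
# Toy "smart index" over semantically tagged spans. The tags and page
# numbers are invented for illustration.
from collections import defaultdict

tagged_spans = [
    {"text": "John Smith",                "type": "person", "name": "John Smith",   "page": 3},
    {"text": "my dear Granny Smith",      "type": "person", "name": "Granny Smith", "page": 12},
    {"text": "my delicious Granny Smith", "type": "fruit",  "name": "Granny Smith", "page": 47},
]

index = defaultdict(list)
for span in tagged_spans:
    index[(span["type"], span["name"])].append(span["page"])

print(index[("person", "Granny Smith")])   # [12]  (the grandmother)
print(index[("fruit", "Granny Smith")])    # [47]  (the apple)

# All people named Smith, wherever they appear:
print({name: pages for (kind, name), pages in index.items()
       if kind == "person" and name.endswith("Smith")})
```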

Semantic mapping is impressive to me in three different ways, which I will explain in order of increasing impressiveness. First, it’s easy to see how semantic mapping would be a revolutionary tool for research. Such a tool could let you find a particular person or concept, or even all people or all concepts, in a particular book or collection of books (provided they are appropriately tagged and made available). You could also identify all books (that have been tagged and made available) that contain a reference to a particular event, person, or concept. I can’t tell you exactly how this would work, but semantic mapping could help you do all of these things at the speed of search.

After semantically mapping many books, this sort of AI application could create categories of these maps, outside of the narrow genres with which humans currently approach books. I don’t know what the categories that emerged would look like, but I’m sure they would be illuminating. We might find a long-neglected category of books that humans had never attended to as such; or to put it another way: we might find a category of book that humans don’t know how to market, which is the exact experience Payne had with her book that led her to create Booxby. The point is, it would definitely be interesting to see books sorted into categories, or genres, based on the way their semantic maps look to an AI application. (I bet it would look a lot like the weirdly specific, yet creepily accurate categories that Netflix recommends to me.)
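Here is a minimal sketch of what that sorting might look like under the hood (my own guess at the mechanism; the feature vectors are invented): reduce each book’s semantic map to numbers and let a clustering algorithm propose the categories.

```python
# Hypothetical clustering of semantic maps. Each row is one book; each
# column is an invented semantic feature (e.g. density of people, places,
# events, objects, dialogue). Real maps would be far richer.
import numpy as np
from sklearn.cluster import KMeans

semantic_maps = np.array([
    [0.8, 0.1, 0.3, 0.2, 0.9],
    [0.7, 0.2, 0.4, 0.1, 0.8],
    [0.1, 0.9, 0.8, 0.6, 0.2],
    [0.2, 0.8, 0.7, 0.7, 0.1],
    [0.5, 0.5, 0.1, 0.9, 0.4],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(semantic_maps)
print(kmeans.labels_)   # e.g. [0 0 1 1 2]: emergent "genres", not human ones
```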

Now, imagine this process coupled with an AI application that collects data on reader-reported experiences of each of these categories. This data could be measures of sensibilities and emotions that, from the semantic map alone, an algorithm would not know to expect from a particular book (because, as far as we know, AI doesn’t have emotions yet). These experiential measurements could be straightforward, like the ones taken by the Whichbook application Jesse Savage brought to our attention (happy or sad, beautiful or disturbing). Or they might be more obscure, asking readers to what degree they felt “mournful” at the end of a particular book, how much it reminded them of themselves when they were children, etc.

Of course, we’ve always been able to get this kind of human feedback on particular books, or particular genres of books; or more recently, on books that contain particular content, such as a high frequency of words indicating bodies of water. All of that allowed us to associate certain kinds of reader experiences with certain genres, or certain kinds of content. But this AI application could associate certain kinds of reader experiences with certain kinds of semantic map. This means it could find two books that were likely to make you feel mournful, even if they had absolutely no content or human-created genre in common.
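A minimal sketch of the mechanism I am imagining (entirely my own assumption, with invented numbers): predict an experiential measure for a new manuscript from the books whose semantic maps look most like it, regardless of shared content or genre.

```python
# Hypothetical experience prediction: books with known reader-reported
# "mournfulness" scores (0 to 1) train a nearest-neighbour model, which then
# scores an unseen manuscript by semantic-map similarity alone.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

known_maps = np.array([
    [0.8, 0.1, 0.3],
    [0.7, 0.2, 0.4],
    [0.1, 0.9, 0.8],
    [0.2, 0.8, 0.7],
])
mournfulness = np.array([0.90, 0.85, 0.10, 0.20])  # reader-reported averages

model = KNeighborsRegressor(n_neighbors=2).fit(known_maps, mournfulness)

new_manuscript = np.array([[0.75, 0.15, 0.35]])
print(model.predict(new_manuscript))   # ~[0.875]: likely a mournful read
```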

We would then have as data the content of the book, the semantic map of the book, and the experiential map of the book. Add to that the avalanche of consumer behaviour data that is already revolutionizing book discovery, and this would definitely yield some actionable results for acquisitions editors.

They could map their own collections and make associations between certain kinds of semantic maps and available sales data. They could also map a submitted manuscript to get an idea of the reader experience. They might learn that even though the manuscript seems like a great example of what is selling right now in terms of content or genre, it is actually likely to produce an unpopular reader experience. They might find that a reader experience they thought would be undesirable is doing quite well in terms of sales. Or they could search the slushpile for the weirdly specific, yet creepily accurate combination of content, genre, author profile, and reader experience they’re looking for. They could even semantically map the ocean of self-published manuscripts (provided those books were tagged in this manner and made available) and treat it as a gigantic slushpile. And they could do all this without cracking a single manuscript, without having to summarize the content or squint through a badly edited first draft.

[Edited to add: I’m not saying that I think this is necessarily a good way for acquisitions to be decided; I have doubts, for the same reasons I don’t think a book should be chosen based simply on an author’s sales record.] These are the ways I imagine a combination of AI, tagging, and data-driven marketing will affect acquisitions. My understanding of all of these things is quite limited, but it was a fun experiment, and I’d like to know whether any of you think it sounds useful, dangerous, completely implausible, or utterly obvious.

Opening Up the Publishing Business

There is certainly the potential for self-publishing, subscription reading, and internet-based custom publishers to continue to disrupt the established publishing industry, but is that such a bad thing? We have seen open sharing platforms increase visibility for authors and potential collaboration for researchers. The idea of looking at Open Access (OA) as a business model for mainstream book publishing is extremely exciting. Let’s consider what this business model is and how it could be applied beyond OA, across the traditional publishing industry.

To begin, we need to remember that “Open Access” in and of itself is not an income model, but a distribution model. However, when a publisher applies the principles of an OA distribution system, its income models must naturally adjust in order to support it. A scholarly publication qualifies as OA if either:

  • copyright holders of a work grant free and perpetual use of a work in any digital medium, subject to correct attribution to the original authorship; or
  • a complete work can be distributed and displayed electronically in an online repository that is supported by an academic, scholarly or government agency.

The foundation of OA is that community standards, not copyright law, hold users accountable for adhering to proper attribution and responsible use. The web is, of course, rife with copyright infringement, but as I argued in week 4, that infringement is only a matter of perspective based on one’s adoption of the Commons. OA has paved a way for work produced in the Commons to not only be protected, legally and morally, but also to be stable financially.

Figure 1: Past and Current business models used in OA (screenshot from the OAD).

The Open Access Directory (OAD) breaks down current revenue sources and business models used by OA journals. As we can see in figure 1, each funding model is different and would be applicable to different target audience segments. The important thing to remember here is that revenue models need to be looked at holistically: not one-size-fits-all, and not one-stream-pays-for-all. At the moment, in traditional book publishing, profits come almost entirely from sales revenue. As we keenly experienced in the Book Project, after royalties, sales and distribution, marketing, and production fees, there really isn’t all that much left for the publisher. OA, on the other hand, is clearly an agile system that takes into account the ever-changing landscape of the web and the business models that have developed there. The OAD readily admits that not all the models they’ve listed worked out, but that’s the point: the process has been iterative and has evolved from its mistakes.

Why should legacy publishing care about OA? If these corporations have their eyes open and look at scholarly OA journals as a case study, they should see some parallels and a compelling business scenario. As Juan mentioned in our discussion last week, both book publishing and scholarly publishing are ruled by “Big 5” corporate leaders, who up to this point have dictated the market. Both also, importantly, carry a kind of brand-name legitimacy that compels authors and researchers to submit in order to gain recognition (bestsellers in literature’s case, tenure in academia’s) for their work. But I would argue this paradigm is crumbling under the weight of Open Access journals and, in the literary world, self-publishing platforms. If book publishers want to stay competitive, they must look at a collection of solutions that, more often than not, means opening up.

An appeal based on revenue growth tends to garner more attention than an appeal on more idealistic terms like “for the good of the commons” or the “growth of creativity”; it is difficult to convince entrenched industries that openness and profitability can go together. Perhaps we don’t really need to worry if behemoths like the Big 5 don’t radically change their models. Perhaps they won’t, or not quickly enough. This means, however, that they stand at risk from e-commerce models that are currently growing, or have yet to emerge. At this point it’s important for mid-size, small, and emerging publishers to look to OA and consider the ways in which their practice can operate on openness, be built with agility in mind, and still be profitable.

Break Down the Barriers

As we know quite well by now, publishing has traditionally had some very high barriers to entry. You had to be the right person, you had to know the right people, and you had to have the luxury of being able to spend your time writing (a room of your own, if you will). And even after all of this, there were still (and still are) gatekeepers deciding if your work was worthy enough to be published.

On one hand, while the barriers are not as high as they once were, they are still a major issue in publishing today, as we discussed in the Emerging Leaders in Publishing Summit. But as we talked about in class, the advent of online business models has also helped to knock many of these barriers down. The space that was once reserved for a select few is now a space where everyone can be an author, and as such it is easier to access publishing platforms. By extension, if you are a consumer, it is also easier to access this abundance of content.

These new models are not inherently detrimental to the publishing business; rather, the publishing industry makes it appear so by remaining stagnant. Both models of publishing have the same goal (to publish books and profit), but they have different ways of meeting that goal. They are in the same business, but have different ways of doing business.

Publishers cannot just “go online” and assume that’s enough; they must examine why consumers are moving towards other models. They wouldn’t have to dive that deep to realize it’s because these other models better meet author and consumer needs. (There are clear examples of this same transition in the newspaper industry.) Publishers have to realize that their barriers to entry have harmed their business and are driving people to seek out more accessible models. It’s not the location that is the problem, but the service offering.

As much as traditional publishers may want to feel needed and necessary, the truth is that other models are beginning to push them out. In publishing we are providing a service, not a privilege. There is no reason publishers could not have evolved earlier on to better meet the needs of consumers, when issues (such as barriers to access) were first raised.

In order to compete (and consequently, survive), traditional publishers need to evolve. They need to give a platform to marginalized voices. They need to find better ways to cater to customers’ needs. They need to deliver specialized services to authors (not necessarily the whole publishing package). They need to step off their pedestal and share power with customers and authors by better involving them.

To summarize, publishers need to identify barriers to entry in the industry and then find concrete steps they can take to remove these barriers if they want to stay relevant. Otherwise, people will continue to find ways to go around the barriers that are still in place.

Balancing the rights of user and creator in the digital age

Building on the Past by Justin Cone is licensed under a Creative Commons Attribution (CC BY) license. https://creativecommons.org/about/videos/building-on-the-past/

In 2012 the Canadian government passed a series of reforms to the Copyright Act, which included a provision that the Act be reviewed every five years. In December 2017 the Canadian government ordered the first such review of the Canadian Copyright Act (Geist, “Copyright”) and we can expect the committee in charge to suggest significant updates and reforms. As a future publisher, I want to identify and test some of the underlying assumptions that inform Canadian copyright law so that I can better assess the coming changes.

Jay Makarenko describes a copyright as “a legal recognition of a person’s natural right of ownership over the things s/he creates.” Given that, this strikes me as the most basic assumption underlying Canadian copyright law:

Assumption 1: An individual has a “natural right of ownership over the things s/he creates” (Makarenko).

This notion of a natural right derives from the concept of droit d’auteur (Younging 57) and is baked into our legal conception of copyright. It explains why copyright is conferred automatically as soon as an idea is given original expression, rather than granted by statute. Canadian law assumes this natural right of the individual creator and tries to balance it with “public freedoms” (Younging 58) as well as the many social, cultural, and economic benefits to be gained from a robust and representative public domain.

This view of individual ownership of original expression is not universally accepted. Some view “intellectual goods” as “social property” (Makarenko), arguing that “one individual cannot make the claim ‘it is mine, because I made it.’ The reality is that society … participated in the making of the work and, as such, the work cannot be claimed as private property by one individual” (Makarenko). If this is so, it would seem that individuals have no natural right over our original expressions, because they are not in fact original.

But rejecting the natural rights view in favour of this social property view would not necessarily force us to say that copyright is illegitimate. Makarenko explains that another possible justification for copyright is that “copyright and private ownership of intellectual goods are valuable because they will bring great economic and cultural benefits to society.” On this view, creative production is incentivized because creators can sell their creations. This brings me to Assumption 2 underlying Canadian copyright law:

Assumption 2: Copyright incentivizes creative innovation.

Neil Gaiman challenges this assumption. He noticed that in “places where I was being pirated, particularly Russia … I was selling more and more books. People were discovering me through [my] being pirated and then they were going out and buying the real books.” Gaiman persuaded his publisher to make his work American Gods available for free for a month, even though it was still selling well; sales through independent bookstores tripled. This suggests that copyright infringement should not necessarily de-incentivize creative innovation. However, Gaiman also mentions that they only measured the impact on sales in independent bookstores, so it is impossible to conclude from this anecdote whether sales overall were affected positively or negatively.

Similarly, Ernesto Van der Sar interprets a study by Professor Tatsuo Tanaka of the Faculty of Economics at Keio University as indicating that “decreased availability of pirated comics doesn’t always help sales. In fact, for comics that no longer release new volumes, the effect is reversed.” He quotes Tanaka saying that “displacement effect is dominant for ongoing comics, and advertisement effect is dominant for completed comics.”

If Van der Sar’s interpretation is correct, then there are cases when copyright is infringed, yet creative expression is still rewarded by increasing exposure and sales. However, as my colleague Sarah Pruys has pointed out (in a private Hypothes.is discussion in my PUB 802 class at Simon Fraser University), Van der Sar takes some editorial liberties with Tanaka’s findings and ends up overstating the claim. So this anecdote, like Gaiman’s, does not constitute a conclusive rebuttal to Assumption 2.

Makarenko describes a rebuttal we can take a little more seriously:

Assumption 3: Copyright stifles the free flow of ideas.

This might happen when a copyright owner hoards their original expression, or limits its distribution by charging a fee, excluding those who cannot afford it. This keeps valuable ideas out of public access, limiting society’s resources of intellectual goods.

The case of orphan works can be said to support this assumption. Orphan works are those for which “the copyright owner cannot be identified or located” (Harris). The argument goes that if orphan works are protected too zealously, they will be inaccessible, and the public domain will suffer. To prevent this situation, the term of copyright should be short.

It’s worth pointing out that even if you were to find the author of the orphan work, there is no reason to assume you would be granted the right to copy. Also, an orphan work is not protected any longer than any other work. Copyright for works with unknown authors only lasts for “the remainder of the calendar year of the first publication of the work plus 50 years” (“Guide”). This means orphan works might actually enter the public domain significantly earlier than works with a known author.

Let’s say a book is published in 2018. If at any point I want to copy it and can’t find the author, I have to wait until the orphan work enters the public domain in 2068. However, the only case in which a work that is not orphaned would enter the public domain so soon after publication is if the author died in the year of publication. Let’s say instead that the author survives until 2068. Then the work would not enter the public domain until 2118. So it might well be easier to wait out copyright on many orphan works than on published works with known authors.
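The arithmetic can be sketched directly (a deliberate simplification of the rules quoted above, using the life-plus-50 term in force at the time and ignoring any later changes):

```python
# Simplified Canadian copyright term arithmetic; terms run to the end of
# the relevant calendar year.

def orphan_work_expires(publication_year: int) -> int:
    """Unknown author: remainder of year of first publication + 50 years."""
    return publication_year + 50

def known_author_expires(death_year: int) -> int:
    """Known author: remainder of year of death + 50 years."""
    return death_year + 50

print(orphan_work_expires(2018))    # 2068
print(known_author_expires(2068))   # 2118
```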

A work entering the public domain for everyone to copy freely is only one way that society can benefit from it. We can still use the work under fair dealing. And the Copyright Act allows for a license to be issued at the discretion of the Board if they are satisfied that “the applicant has made reasonable efforts to locate the owner of the copyright and that the owner cannot be located” (Sookman).

This does still involve some risk in case the copyright owner shows up within five years to recover royalties. However, in 2012 the then-Conservative Canadian government passed a series of reforms to the Copyright Act. The relevant change is basically a good-faith gesture that drastically lowers the amount of damages you would have to pay if you took a risk on using an orphan work for non-commercial purposes and were later found to have infringed copyright. This change seems fair, but Bill C-11 overall was not well-received by the publishing industry, and it will be interesting to see what comes out of this year’s five-year review.


These three points suggest that copyright protection for orphan works is no less fair than copyright protection for known authors. But that doesn’t mean that copyright terms are fair; that is, they don’t refute Assumption 3.

Several of my colleagues have suggested that creators should be required to frequently extend or renew their copyright in order to prevent orphan works from being withheld from the public domain. I think this would contradict two of copyright’s key characteristics. First, that it is meant to incentivize creative innovation. Cory Doctorow points out that “giving creators more copyright on works they’ve already created doesn’t get them to make new ones, and it reduces the ability of new artists to remix existing works.” If a work has already been created, it has clearly already been incentivized; so why would we require the creator to pay to reproduce their own work?

I think the way you answer this question determines, or is determined by, how you feel about Assumption 1, the second key characteristic of copyright: that it is a natural right. If a creator is required to re-purchase the right to their own creation, then copyright is conferred not by nature, but by law. So to call for a copyright term that expires within the author’s lifetime is to say that there is no natural right to copy in the first place; in that case, what exactly would the author be extending?

So, if I had to choose between Assumption 1 and Assumption 2 as justifications for copyright, I lean toward Assumption 2 and a social conception of intellectual goods. I don’t think the Canadian copyright situation is ready for any radical changes in that direction, but I would like to see the idea reflected more in copyright law.

Works Cited

“Bill C-11: The Copyright Modernization Act.” Copyright at UBC, University of British Columbia Scholarly Communications and Copyright Office, n.d., https://copyright.ubc.ca/guidelines-and-resources/support-guides/bill-c-11-the-copyright-modernization-act/

Copyright Act. Statutes of Canada, c. C-42. Department of Justice, 1985, http://laws-lois.justice.gc.ca/eng/acts/c-42/index.html

Doctorow, Cory. “Disney’s 1998 copyright term extension expires this year and Big Content’s lobbyists say they’re not going to try for another one.” Boingboing, Jason Weisberger, 8 January 2018, https://boingboing.net/2018/01/08/sonny-bono-is-dead.html

Geist, Michael. “Copyright Reform in Canada and Beyond.” Michaelgeist.ca, Michael Geist, 18 April 2017, http://www.michaelgeist.ca/2017/04/copyright-reform-canada-beyond/

Geist, Michael. “Know Your Limit: The Canadian Copyright Review in the Age of Technological Disruption.” Globeandmail.com, Globe and Mail Inc., 21 December 2017, https://via.hypothes.is/https://www.theglobeandmail.com/report-on-business/rob-commentary/know-your-limit-the-canadian-copyright-review-in-the-age-of-technological-disruption/article37411445/

Harris, Lesley Ellen. “‘Orphan Works’ in Canada – Unlocatable Copyright Owner Licences in 2012-2013.” Canadiancopyrightlaw.ca, 28 April 2014, http://canadiancopyrightlaw.ca/orphan-works-in-canada-unlocatable-copyright-owner-licences-in-2012-2013/

“A Guide to Copyrights.” Publications, Innovation, Science and Development Canada, 2010, https://www.ic.gc.ca/eic/site/cipointernet-internetopic.nsf/eng/h_wr02281.html

Makarenko, Jay. “Copyright Law in Canada: An Introduction to the Canadian Copyright Act.” Maple Leaf Web, 13 Mar 2009, http://www.mapleleafweb.com/features/copyright-law-canada-introduction-canadian-copyright-act.html#overview

OpenRightsGroup. “Gaiman on Copyright Piracy and the Web.” YouTube.com, Open Rights Group, 3 February 2011, https://www.youtube.com/watch?v=0Qkyt1wXNlI

Sookman, Barry. “Orphan works: the Canadian solution.” Barry Sookman, 27 April 2014, http://www.barrysookman.com/2014/04/27/orphan-works-the-canadian-solution/

Van der Sar, Ernesto. “Online Piracy Can Boost Comic Book Sales, Research Finds.” TorrentFreak.com, 20 February 2017, https://torrentfreak.com/online-piracy-can-boost-comic-book-sales-research-finds/

Younging, Greg. “Gnaritas Nullius (No One’s Knowledge): The Essence of Traditional Knowledge and Its Colonization through Western Legal Regimes.” In Indigenous Editors Circle and Editing Indigenous Manuscripts workshop course pack, Humber College, 2017, Etobicoke, ON.


Copyright Law needs to recognize Customary Law

The current Canadian Copyright Act leaves much to be desired. Based on European ideas of ownership and public domain, current Intellectual Property Rights (IPR) laws make no attempt to incorporate Indigenous customary law. Gregory Younging, in his new book Elements of Indigenous Style, says, “Neither common law nor international treaties place Indigenous customary law on equal footing with other sources of law. As a result, Traditional Knowledge (TK) is particularly vulnerable to continued misuse and appropriation without substantive legal protection” (Younging 2018, 158).

In Indigenous tradition, knowledge is shared, owned, and passed on in much different ways than settlers are familiar with. No one person owns a story; rather, stories are shared by a community or a family, who pass their stories down orally through generations. However, current Canadian copyright law states that a story belongs to an author (who recorded their story in a tangible way) until 50 years after their death, after which time their work enters the public domain.

As Younging points out, European IPR law is generations younger than Indigenous customary law, yet through colonization European law has been treated as the telos (Younging 2018, 149). As I see it, since our current law is already influenced by multiple other countries’ IPR laws, there is no reason not to also incorporate the laws that were in place in Canada long before settlers arrived.

Many Indigenous Peoples have customary laws in their communities that govern how stories are shared and used. However, these protocols are recognized only as guidelines in the legal system, and attempts to use existing IPR law to protect TK do not provide the proper protections to keep TK and stories safe from exploitation and appropriation.

In an essay I wrote last semester, “The Problem With Protocols: Why Traditional Knowledge Needs To Be Protected By Legislation,” I researched what could be done to reconcile these two opposing ways of thinking about copyright, and I came up with two legal pathways, which I will paraphrase here.

  1. Canada could empower Indigenous Peoples to pass their own legislation that would give their TK protocols equal footing with Canadian law.
  2. Canada could integrate Indigenous customary laws into the Copyright Act.

As the United Nations Declaration on the Rights of Indigenous Peoples (UNDRIP) states, “Indigenous peoples have the right to maintain, control, protect and develop their cultural heritage, traditional knowledge and traditional cultural expressions… They also have the right to maintain, control, protect and develop their intellectual property over such cultural heritage, traditional knowledge, and traditional cultural expressions” (UNDRIP 2007). To implement UNDRIP, Canada would need to revise the Constitution Act, which legislates that copyright is under federal jurisdiction, to afford Indigenous communities the power to draft their own laws to protect TK.

The second option would require opening up the Copyright Act for revisions, but this would be just as long and unlikely a process as revisiting the Constitution. “Canada hasn’t been very receptive to opening up its intellectual property laws — I think they think it’s a floodgate argument — that if you open it for any specific review, all the issues that exist with IP law people would want changed,” explained Merle Alexander, an Indigenous lawyer.

While I believe legal protection on par with current IPR law is necessary to protect TK, I am also aware of how difficult a process it is to open up acts like the Constitution or the Copyright Act, and then to have lawmakers agree on a solution that is workable for both Indigenous Peoples and settlers. I’m still not sure what the best solution is, but I do know something needs to change.

As I noted last semester, “However Canada chooses to move forward, it is imperative that Indigenous Peoples are informed, included, and consulted throughout the decision-making process… This urgent need to write or revise copyright law to account for TK comes with an opportunity and a responsibility to create legislation that is accessible and inclusive for all.”

Bibliography

Younging, Gregory. Elements of Indigenous Style: A Guide for Writing By and About Indigenous Peoples. Edmonton: Brush Education, 2018.

Death of an Empire

The year is 2048. Robots have taken over the world.

The great Stephen Hawking warned us for years of the coming age of robot supremacy, but no one listened. No one cared. We were so naive.

With every Siri and Alexa that became integrated into our everyday lives, we became complacent and even welcoming of artificial intelligence. Gone were the days of having to craft our own grocery lists. After a while, our homes could tell us what we wanted before we even knew we wanted anything at all.

Then came the AI Integration Laws of 2027. Robots had been used as sex slaves in most countries for several years, and a tide of resistance rose up against legislative bodies that had thus far refrained from legislating against any private use of personal robot property. Soon, any object programmed with artificial intelligence was entitled to certain unalienable rights.

Soon, Elon Musk came out with the call: we must merge with the machine in order to remain relevant! People took this to heart – literally. People began to marry their machines. And why wouldn’t they? The perfect human was within arm’s length – programmed to care about you, to know everything about you, to never falter, to never fail.

But where did they all come from, you ask?

Alphabet, Inc. The multinational conglomerate, using Google as an aggregating tool, studied the human race until it had perfected an adaptable algorithm for human desire. Your wish is their command. So long as you continue to pay the monthly subscription fee, love is at your fingertips.

It spread like wildfire. No one sells anything without Google. No one IS anything without Google. You are undiscoverable, irrelevant, nonexistent. And you have to pay to remain relevant.

Small businesses dwindled. Competitors crushed one another simply by paying for higher AI suggestion frequencies. Google was a market monopoly creating more and more market monopolies until everything was streamlined. Tidy. Perfect.

Perfect spouse. Perfect child. Perfect dinner. Perfect house. Predictable evening, suggestable everyone, controllable everything.

And here I sit, quietly, at my desk at Google Headquarters. (Shockingly, while automation caused a decline in jobs in almost every other career sector in the world, the number of programmers needed vastly increased.)

For years, I’ve been sifting through the code that allowed all of this to be possible – a little at a time, just enough that Big Brother would not catch on until it was already too late. I’ve looked through archives, Google’s behavior patterns, anything and everything that brought Google to where it is today. And I found my answer almost all the way back in the beginning, in 2002, with Project Ocean.

I then realized that I’d been approaching the situation from the wrong angle. I’d always viewed Google’s reign as malicious. Exploitative. Dictatorial. But in 2002, Google was selfless, frighteningly ambitious in their goal to improve society. For a long time it wasn’t about control. And when Project Ocean was shut down in the midst of industry backlash, the program wasn’t erased – it was barely even hidden. Anyone could, with the right database query, re-establish Project Ocean and make the massive online library public. But no one ever did.

What if now, like then, the solution is right there for anyone, should they choose to act?

Here I sit, at work, on a Tuesday, with the perfect windows automatically darkened to the perfect shade to block the sun, the perfect AC cooling the room at a perfect 70 degrees Fahrenheit, the perfect computer in front of me just barely unable to predict what I’m about to do.

I lean forward. Send the command. The lights go off, and my phone lights up with the notification:

Don’t be evil don’t be evil don’t be evil don’t be evil don’t be evil don’t be


Conspiracy Theories and the ability to adapt to Change

I like conspiracy theories, some of them at least. I do not believe most of them, but they help me gauge the amount of change (or stagnation) we have experienced as a society. “Man never landed on the moon,” some say, yet others state firmly that “a scientific calculator today has more processing capacity than the computers that guided the astronauts on that epic voyage.” Still, a cautious third group asks: “Then why are there no cities on the surface of the moon already?” A fair question too.

The changes that have taken place in technology, economics, and society during the last hundred years have been so rushed that our conception of “change” seems to have been warped; we forget our own limits as a species in adapting to new standards. It is difficult to conceive of so many generations living together and trying to survive tide after tide of market pressures, fashions, and work and living styles, each throwing new technologies, methods, laws, and foods at us. It is the same with the Internet: we have learned to operate it, access it, and navigate it, yet, for all its power, we as a society still do not know how to use it.

For ages, humankind survived using simple tools and complex technologies. It is hard to imagine people writing on papyrus over millennia, carefully choosing (editing) the words that would be committed to the treasured substrate. In an internet-connected world, such a task is no longer a wonder; information abounds, and our new problem is how to distill it in order to get what we really need, even if that means ceding our privileges to AIs or mega-corporations to lead our thinking and behavior.

In The New Yorker’s “How the Internet Gets Inside Us,” Adam Gopnik defines three types of change adopters: the Never-Betters, the Better-Nevers, and the Ever-Wasers. At first it seems comfortable to identify with (just) one of these schools, when reality is far more complex, as Gopnik himself elaborates in his essay (he later revealed, in an interview published in the Montreal Gazette two months after the first article, that he switches among these “moods”). So the real question is not which ideology appeals to us, but rather how we can assimilate change: how much we really need this change to happen as individuals, and how much we are required to implement it in our society.

Some nations, like China, have tended to take a historical perspective in which change unfolds over centuries, while Western civilizations have spent the last 500 years rushing toward an unknown and uncertain future: nobody knows where it leads, but everyone is certain we must rush forward as fast as we can.

And this is where those conspiracy theories come into play: they rebel against the prodigious wonders claimed by the Never-Betters (must be aliens!), the memory of a perfect world held by the Better-Nevers (Kennedy!), and the apparent wisdom and neutrality of the Ever-Wasers. These theories remind us there are still voices that pinpoint the map of the ever-changing internet world, whatever it is, whatever the masses ascertain to be true. Flat earth can be refuted easily; the reply: Photoshop! What an ingenious answer!

Sometimes I really wish there were an ice wall at the end of the world; at other times I just look at the pictures taken by the Hubble telescope, millions of years into the past. If the universe were actually a hologram, I could not care less; everything out there seems too far away to ever reach or conceive of anyway. Mythology has not abandoned us; we are recreating it through the web. The internet just has not had time to settle in. After all, we are human.

OW

Reading and Innovation in a Digital World

The act of reading has been evolving for decades, as have the publishing companies that edit, publish, and market books. From dictation, to pen and paper, to a typewriter, to a computer, to an iPad: the way that we communicate language has undergone a constant stream of change. It is also true that forms of the past have resurfaced in the age of digital literature. While the Dickensian novel may seem daunting to the modern reader with a short attention span, some of Dickens’ works were originally serialized in short installments. So while the digital world is causing publishers and writers to alter the way they distribute written material, some of these changes aren’t altogether new. All of this is to say that the way people gather knowledge and read books has undergone constant change throughout history, and will continue to do so as technology, and inevitably our brains, evolve; “Just as it seemed weird five centuries ago to see someone read silently, in the future it will seem weird to read without moving your body” (Kelly, 2010).

This paper will explore how online environments are altering the way we read and how we comprehend material. As reading habits change due to digital distraction and the dynamic nature of the web, our interest in pondering over long pieces of text and our capacity to focus are declining. Thus, not only are our desires changing, our brains are being exercised in a new way that is causing a shift in our cognitive processes. This transformation in the act of reading is affecting how writers and innovators are approaching literature. They are spearheading inventive reading-related ideas and technologies to cater to a new reader. In effect, I will argue that we should adjust our approach to reading; that we should focus on learning new skills to help us maintain the critical ability to read deeply.

In the digital age, people are constantly consumed with interactive online activities. In fact, according to CBC, Canadians visit an average of 80 sites and spend an average of 36.3 hours online on their desktop computers every month. This inspires the following questions: are people becoming skim readers who view the content of a book the same way they would the content of an app or Twitter feed? Have our online scrolling and browsing habits affected our ability and desire to read real works of literature? There is anecdotal and scientific evidence to suggest that online reading and digital consumption are changing the way people read in a number of ways.

Today’s readers are increasingly interested in reading shorter digital content. We are reading eBooks on e-reading devices and mobile phones, which often cause interruptions as we engage in an array of online activities. We have become distracted readers, moving from online article, to social media site, to email. An article by Maria Konnikova in the New Yorker discusses how digital reading encourages skimming behavior; it is causing us “to browse and scan, to look for keywords, and to read in a less linear, more selective fashion.” Websites are set up to encourage this skim reading. They are often designed with bold headings and sub-headings, lists, and an emphasis on an F-shaped reading pattern. Reading online is affecting how quickly we read and how we process information; the online world fatigues us, as we filter out hyperlinks and adjust our eyes to “shifting screens, layouts, colors, and contrasts” (Konnikova, 2014). The more time we spend online, the more our brains adjust to this new way of reading: “what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles” (Carr, 2008).

Maryanne Wolf’s research on deep reading has been referenced extensively in articles discussing the effects of online reading. According to Wolf, reading is learned, and the media we use to do this learning play an important role in shaping the neural circuits inside our brains (Carr, 2008). Wolf references the reading of ideograms vs. an alphabet and how readers of each develop a mental circuitry that varies: “the variations extend across many regions of the brain, including those that govern such essential cognitive functions as memory and the interpretation of visual and auditory stimuli. We can expect as well that the circuits woven by our use of the Net will be different from those woven by our reading of books and other printed works” (Carr, 2008). Similar to readers of different languages, we use different parts of our brain depending on whether we’re reading online or from a piece of paper. It is possible that the ever-increasing, non-linear reading that we do online will affect our cognitive processes and, therefore, how we process information.

Just as changing reading patterns have altered our mental habits, it is possible that they will also affect our deep reading abilities. Nicholas Carr writes:

“Wolf worries that the style of reading promoted by the Net, a style that puts “efficiency” and “immediacy” above all else, may be weakening our capacity for the kind of deep reading that emerged when an earlier technology, the printing press, made long and complex works of prose commonplace. When we read online, she says, we tend to become ‘mere decoders of information.’ Our ability to interpret text, to make the rich mental connections that form when we read deeply and without distraction, remains largely disengaged.”

Linear, traditional reading is slowly disappearing, and as we disengage from reading long-form content that forces us to think critically and analytically, we are no longer exercising our minds to think in this way.

On the other hand, online reading requires the ability to multi-task, prioritize, and focus. It invites us to hone new skills and change the way we absorb information: “Books were good at developing a contemplative mind. Screens encourage more utilitarian thinking. A new idea or unfamiliar fact will provoke a reflex to do something: to research the term, to query your screen “friends” for their opinions, to find alternative views, to create a bookmark, to interact with or tweet the thing rather than simply contemplate it” (Kelly, 2010). While there may be a decline in deep reading, we are gaining other valuable reading skills online: “Screen reading encourages rapid pattern-making, associating this idea with another, equipping us to deal with the thousands of new thoughts expressed every day. The screen rewards, and nurtures, thinking in real time” (Kelly, 2010). Reading online is a different type of reading, and it may take some time to adjust to this type of reading (just as it did when books were first introduced), and develop new, important skills for absorbing and interpreting information.

Evidently, there are advantages to both digital reading, and reading on paper. The significance of this is that while we replace one form of reading with another, and as our brains develop new functions to process these actions, it will affect our competence and capabilities as readers and thinkers. It is difficult to ascertain whether the effects of this are good or bad, and whether one way of reading is more effective than the other. As Wolf brilliantly notes: “We’re in a place of apprehension rather than comprehension.” So, perhaps, just as individuals excel in different learning environments based on their strengths and learning styles, we may excel individually in varying reading environments. And, perhaps we are only just developing the skills required to effectively deep read in a digital environment. Wolf presents the idea that deep reading can happen online, if we are taught how. Wolf understands that we can “use the digital world to teach the sorts of skills we tend to associate with quiet contemplation and physical volumes” (Konnikova, 2014). She asserts that we can duplicate deep reading in a new environment, and that it will be necessary as we immerse ourselves in digital media. It is important to be aware that our repetitive actions do have an effect on how we interpret and process information, and ultimately, the level at which we engage in deep thinking. Thus, finding a balance between reading on paper and reading online will allow us to enhance both our deep reading skills and our ability to focus our attention.

If scrolling, skimming, and scanning are the way of the future, what can publishers do to keep people—with so many distractions and a shorter attention span—interested in reading their books? How can publishing companies bring together traditional books and technology to keep “real” reading happening? These are certainly questions that are beginning to gain a lot more time and attention from many parties, including academics, publishing industry leaders, and authors. In order to address these questions, I will explore how writers are adjusting their content, style, and format to cater to the changing reader. In addition, I will explore new technologies that are blending traditional forms of reading with digital elements.

Authors are exploring new ways to immerse ever-distracted, constantly connected readers by adjusting their writing styles. Paul Mason of The Guardian writes: “In the 20th century, we came to value this quality of immersion as literary and to see clear narratives, with characters observed only through their actions, as sub-literary. But a novel such as Donna Tartt’s Pulitzer-winning The Goldfinch, subtly derided by the literary world for its readability, is not the product of the Kindle – but of a new relationship between writer and reader.” More than ever before, readers are responding to plot-driven literature, and authors have adapted their writing in order to connect with their audience. James Wood, a book critic at The New Yorker, writes of The Goldfinch: “Its tone, language, and story belong in children’s literature.” He notes: “I think that the rapture with which this novel has been received is further proof of the infantilization of our literary culture: a world in which adults go around reading Harry Potter” (Peretz, 2014). This Pulitzer Prize win is evidence that, as people’s reading habits change, so does the nature of literature.

The digital reader values simplicity and a rapidly moving plot. In response to this, there is a “literary backlash – not just against the eBook, and the short attention span, but against writing styles that authors have evolved in the post-Kindle world. The American novelist Joanna Scott last month bemoaned the tendency, even in award-winning serious fiction, to produce a “good read” with a gripping plot and unfussy writing, ‘instead of a work of art’” (Mason, 2015). Just as reading styles have changed over many years, the form of literature has been in flux. As writers adapt their voices and literary techniques to appease a shifting audience and speed up the process of reading, the classic form of literature is transformed.

The author James Patterson is changing his approach to writing in order to engage “people who have abandoned reading for television, video games, movies and social media” (Alter, 2016). Patterson has created a new line of short, propulsive novels, called Bookshots, that cost less than $5 and can be read in a single sitting. His hope is that Bookshots will appeal to readers who do not want to invest their time in a 300-page novel, and that Bookshots will provide an alternative way to read. It seems that Patterson is reviving the “dime novels and pulp fiction magazines that were popular in the late 19th and early 20th century” (Alter, 2016) in an effort to satisfy readers’ tastes for shorter works. Patterson’s plan is to make the books shorter, cheaper, more plot-driven, and more widely available to appeal to the digital reader of the modern age, who prefers bite-size, action-driven content. The benefit of Bookshots is that it will appeal to readers who might not normally read at all, and to readers who are interested in immersing themselves in a literary world, if only for a short time.

Thus far, I have discussed the ways that authors’ writing styles have been influenced by readers’ preferences in the evolving digital world. There are additional ways that creators are responding to changing reading habits. Since we cannot counter the growth of the digital in our lives, we must embrace it and find new ways of incorporating it into traditional manners of reading. Researchers at MIT’s Media Lab have done just this. They’ve created a wearable book that “adjusts lighting, vibrations, and even airbags around your body to feed you the characters’ emotions as you read.” As Meghan Neal notes: “All a fiction writer that’s trying to pull on your heartstrings has to work with are the 26 letters in the alphabet, a healthy imagination, and basic human empathy.” In the digital age, this simply isn’t enough to engage most readers for a full 300 pages. While this ‘wearable book’ doesn’t solve the problem of length, it does add a sensory aspect, in which lights and vibrations lend an electronic element to the experience of reading. Is this what the future of books will look like? Perhaps this type of engagement can combine the physical act of reading a book with a reader’s desire to engage with digital material. Perhaps a book like this can allow readers to immerse themselves so fully that they don’t want to put the book down to check their Instagram feed or pick up their phone to send a text.

A startup called Spritz is pushing the boundaries of reading even further. Spritz has developed “a speed reading text-box that shows no more than 13 characters at a time… [flashing] words at you in quick succession so you don’t have to move your eyes around a page” (Lefferts, 2014). With Spritz’s digital reading app you can read classics in under four hours: Herman Melville’s Moby Dick in just 3.5 hours, for example, or Jane Austen’s Mansfield Park in 2.7 hours. According to Spritz, “you spend as little as 20 percent of your reading time actually taking in the words you’re looking at, and as much as 80 percent physically moving your eyes around to find the right spot to read each word from” (Blain, 2014). As people continue to become more engaged online, they have less time for reading, and they want to do their reading where they spend the most time: online, or on their mobile phones. Spritz combines a digital element with classic reading in a very effective way, satisfying many of the desires of the digital reader; the words are few and the reading happens quickly. Of course, reading via the Spritz app means that you cannot linger over your favourite passages or slowly take in a work of poetry, but for the reader who has been conditioned to skim, it is an excellent middle ground for taking in some real works of literature.
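The arithmetic behind those claims is easy to check, and the core of Spritz-style rapid serial visual presentation (RSVP) fits in a few lines. Here is a minimal sketch in Python; the word counts, the 600 wpm default, and the rsvp function are illustrative assumptions, not Spritz’s actual figures or software:

    import time

    def implied_wpm(word_count: int, hours: float) -> float:
        """Words per minute implied by finishing a book in the given time."""
        return word_count / (hours * 60)

    # Rough word counts; assumptions for illustration only.
    print(f"Moby Dick: {implied_wpm(206_000, 3.5):.0f} wpm")       # ~981 wpm
    print(f"Mansfield Park: {implied_wpm(160_000, 2.7):.0f} wpm")  # ~988 wpm

    def rsvp(text: str, wpm: int = 600) -> None:
        """Flash one word at a time in a fixed 13-character box, RSVP-style."""
        delay = 60 / wpm
        for word in text.split():
            print(f"\r{word[:13]:<13}", end="", flush=True)
            time.sleep(delay)
        print()

    rsvp("Call me Ishmael. Some years ago, never mind how long precisely...")

Both classics work out to roughly 1,000 words per minute, several times a typical reading speed, which is consistent with the claim that eliminating eye movement is where the time savings come from.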

While there are many reasons why we are reading fewer works of literature, Mason points out one very important one. He notes that in the digital age, “people have multiple selves, and so what they are doing with an immersive story is more provisional and temporary” (Mason, 2015). Readers are losing their drive to plunge into the richness of a literary world and get lost in the lives of a set of characters. Before the internet, people had a single “self,” which they could immerse completely in the worlds of the literature they enjoyed. Mason reminisces about reading “novels because the life within them was more exciting, the characters more attractive, the freedom more exhilarating than anything in the reality around me, which seemed stultifying, parochial and enclosed.” Today, readers immerse themselves online, where they have access to the lives of other people through social media and the freedom to discover any information they seek. Traditional books are no longer the way that readers escape into another reality, as “life itself has become more immersive” (Mason, 2015). This means that writers must build a new relationship with their readers and find different ways of engaging them.

This paper has explored interesting and creative ways that writers and creators are adjusting their content to cater to a new way of reading. These are just a few of the ways that print books are being enhanced with technology, and that old media is adjusting its content and design to give readers quicker, more efficient access to what they want to read. The blending of digital technology with traditional books, and the transformation of reading patterns, will certainly have implications for readers of the future. Self-awareness about the complexities of digital comprehension may be the first step to adjusting how we read online. To encourage deep reading, we will have to adjust the way we think and develop new methods of self-control. It will be important to train our minds to read deeply online, and to develop tools to manage the multi-faceted nature of the online world.

The act of reading is changing, just as it has done over many years, and it is difficult to discern whether these changes are good or bad. Just as the first screens—televisions—reduced time spent reading and writing, the second wave of screens—computers, smartphones and tablets—has set in motion a new wave of reading and writing. Literature, and the way it is consumed, has been shifting for thousands of years, and with every new innovation our lives change a little, as do our desires, habits, motives, and ultimately, the way our brains function. This is the nature of evolution, and it always will be. As we continue to grow and change, we must embrace new ways of doing things without forgetting the old: “Education offers the potential for independence and empowerment, so let’s not replace difficult novels with easy ones, or pretend that the two are the same. Let’s not give up on the intricacies of ambitious fiction. Let’s not stop reading the kind of books that keep teaching us to read” (Scott, 2015).

Works Cited

Alter, Alexandra. “James Patterson has a Big Plan for Small Books.” The New York Times. 21 March 2016, http://www.nytimes.com/2016/03/22/business/media/james-patterson-has-a-big-plan-for-small-books.html?_r=0.

Blain, Loz. “Spritz reader: Getting words into your brain faster.” New Atlas. 4 March 2014, http://newatlas.com/spritz-speed-reading-galaxy-s5/31063/.

Carr, Nicholas. “Is Google Making Us Stupid?” The Atlantic. July/August 2008, http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/.

CBC News. “Desktop internet use by Canadians highest in world, comScore says.” 27 March 2015, http://www.cbc.ca/news/business/desktop-internet-use-by-canadians-highest-in-world-comscore-says-1.3012666.

Kelly, Kevin. “Reading in a Whole New Way.” Smithsonian Magazine. August 2010, http://www.smithsonianmag.com/40th-anniversary/reading-in-a-whole-new-way-1144822/.

Konnikova, Maria. “Being a Better Online Reader.” The New Yorker. 16 July 2014, http://www.newyorker.com/science/maria-konnikova/being-a-better-online-reader.

Lefferts, Daniel. “Spritz Reading App: 9 Classic Novels You Can Read in Under 4 Hours.” Bookish. 7 March 2014, https://www.bookish.com/articles/spritz-reading-app-9-classic-novels-you-can-read-in-under-4-hours/.

Mason, Paul. “Ebooks are changing the way we read, and the way novelists write.” The Guardian. 10 August 2015, https://www.theguardian.com/commentisfree/2015/aug/10/ebooks-are-changing-the-way-we-read-and-the-way-novelists-write.

Neal, Meghan. “A Wearable Book Feeds You Its Characters’ Emotions As You Read.” Vice. 25 January 2014, http://motherboard.vice.com/blog/a-wearable-book-feeds-you-its-characters-emotions-as-you-read.

Peretz, Evgenia. “It’s Tartt—But Is It Art?” Vanity Fair. 11 June 2014, http://www.vanityfair.com/culture/2014/07/goldfinch-donna-tartt-literary-criticism.

Rowe, Elizabeth. “If We ‘Wore’ These Books, All the Feels would End Us.” Bookish. 28 January 2014, https://www.bookish.com/articles/if-we-wore-these-books-all-the-feels-would-end-us/.

Scott, Joanna. “The Virtues of Difficult Fiction.” The Nation. 30 July 2015, https://www.thenation.com/article/the-democracy-of-difficult-fiction/.

Journalism in the Digital Age

Since the 1800s, the distribution of news and information has undergone continuous change. With new technologies such as the printing press and, more recently, the internet, new voices can reach broader audiences at lower costs. In the modern age of the web, everyone from large media giants to local daily newspapers has felt the effects of declining advertising revenue and readership. This has required newspapers and journalists to adjust their production and distribution models and find new ways to keep their audiences engaged and informed. This paper will discuss the newspaper industry’s transition from print to digital media. It will explore how the internet has changed the way consumers receive their news and the way that news is reported. I will argue that these changes have affected how news is defined. Digital news can take many forms and come from a range of sources, which is why consumers must critically assess and make informed decisions about how and what they consume online.

Consumers are spending more time on the web than ever before. According to CBC, “Canadians are among the biggest online addicts in the world, visiting more sites and spending more time visiting websites via desktop computers than anyone else in the world.” As readers move their time and attention online, media organizations have followed suit, developing online formats and trying new ways of producing revenue. Newspapers have introduced subscription models via digital reading apps for mobile phones and tablets, and created paywalls to fund the content they distribute on their websites. However, this does not make up for the loss of newsstand sales and advertising revenue. As indicated in a report in The Globe and Mail: “Postmedia Network Inc., publisher of the National Post and nine other metropolitan dailies, is looking to cut $120-million from its operating budget as part of a three-year program. Sun Media has cut more than a thousand jobs over the last several years, while the Toronto Star and Globe and Mail have both looked to buyouts and outsourcing to reduce their costs.” The new landscape of online publishing and content distribution has disrupted traditional news organizations and print journalism.

Readers are pulling news media into the digital world because that is where they consume. This means that advertising firms and companies are also choosing to advertise online, which has resulted in a considerable loss of earnings for print newspapers, which, according to Suzanne M. Kirchhoff, traditionally relied on ad revenue for 80% of their overall revenue. Companies are choosing to advertise online because it is cheaper and more dynamic. They can advertise through Facebook for as little as $10 (depending on how many people they are trying to reach), while a half-page colour advertisement in The Globe and Mail can cost over $8,000. On the web, companies are able to reach a much wider audience through targeted interactive content. Advertisers no longer need to buy premium print ad space; instead they can advertise online, at very low cost, through large companies like Facebook and Google. This has resulted in a very unequal balance in the internet advertising market share, as indicated in the figure below:

[Figure: Internet advertising market share in Canada (Winseck, 2015)]

Today, the entire internet churns out content at a very high volume—and it’s all instantly available at any time. Print news companies are now competing with a very large number of online sources. “Tech giants” such as Facebook, Twitter, Instagram, Snapchat, and Google are finding new ways to distribute news. As noted in Madelaine Drohan’s report, “It started in January when Snapchat, used by 100 million people to share photos and short videos, started Discover…Facebook, with its estimated 1.6 billion users, caused a splash when it launched Instant Articles for mobile devices.” Evidently, traditional print newspapers cannot compete with the web. The large decline in ad revenue and the rise of competition have caused a significant drop in profit for traditional print newspapers, forcing their retrenchment and their transition to online publishing.


The internet has changed the way people receive, consume, and interact with news.

A survey of Canadian media consumption by Microsoft determined that the average attention span of a person is eight seconds, down from 12 in the year 2000 (Egan, 2016). The growth of digital news publishing has affected consumers’ reading experiences. Consumers do not pick one website or digital news source to gather their information; they move around, find journalists they like, and quickly scroll through their options. Readers are finding and interacting with news in different ways than ever before.

The internet allows you to be anywhere in the world, access the website of a foreign country’s newspaper, and read about the local news. Martin Belam writes: “It used to be the case that if I wanted to read the Belfast Telegraph, I pretty much had to be in Belfast, and hand over some cash to the newspaper sellers and newsagents around the city. Now, of course, I can read the website for free from the comfort of my own home, whether that is in London, New York or New Delhi.” Outside of the traditional boundaries of press circulation, consumers can access information from across the world: “Against an almost exclusively national consumption of their traditional media, the leading European newspapers receive 22.9% of their online visits from abroad” (Peña-Fernández et al., 2015). Readers are able to connect with news outside their own community and obtain a broader view of world events.

In addition to having access to a worldwide scope of news sources, consumers are actively engaging with online content. As noted by Drohan, there is a “growing clamour among online readers, viewers and listeners to be active participants in the creation of news rather than passive consumers of a product.” Consumers want a dynamic interaction with the content they are reading: “‘News is not just a product anymore,’ says Mathew Ingram. ‘People are looking for a service and a relationship of some kind’” (Drohan). This engagement with news means that news stories can actively be challenged and ideas questioned. No longer do readers simply take a news story as fact and put down the paper; they are commenting, sharing, tweeting, and re-posting the information.

In the digital age, online readers can visit several websites and choose the source they find most appropriate for the story they want to read. Independent digital media companies such as Buzzfeed and The Huffington Post offer an alternative way of discovering news and finding entertainment. As a news aggregator, The Huffington Post curates content and uses algorithms to gather and group together similar stories. According to The Economist, The Huffington Post “has 4.2m unique monthly visitors—almost twice as many as the New York Post.” There are similar aggregators that are entirely automatic:

“The Wal-Marts of the news world are online portals like Yahoo! and Google News, which collect tens of thousands of stories…most consist simply of a headline, a sentence and a link to a newspaper or television website where the full story can be read. The aggregators make money by funnelling readers past advertisements, which may be tailored to their presumed interests. They are cheap to run: Google News does not even employ an editor” (The Economist).

Increasingly, traditional news sites are being accessed indirectly by readers: “less than half of visits (44.6%) access the websites of the online Europeans newspapers directly through their URL” (Peña-Fernández et al.).

What do these aggregator sites mean for traditional newspapers and for online readers? News aggregators act as a source of traffic to news sites. On the other hand, their practices raise ethical questions. Newspapers produce and publish the original content, while aggregator sites earn money through advertising around this second-hand content. Further, consumers may simply scan through headlines rather than clicking on an article to read the whole story. This means that the original news sites do not receive these readers at all, and lose out on potential profit. Megan Garber highlights the complex nature of these sites: “Achieving all this through an algorithm is, of course, approximately one thousand percent more complicated than it sounds. For one thing, there’s the tricky balance of temporality and authority. How do you deal, for example, with a piece of news analysis that is incredibly authoritative about a particular story without being, in the algorithmic sense, ‘fresh’? How do you balance personal relevance with universal? How do you determine what counts as a ‘news site’ in the first place?” When sites such as Google News use algorithms to compile information, a computer refines the content. This raises the question of whether what’s being highlighted is “quality” journalism. Aggregators remove the human element, something many would argue is essential to a process as important as finding news.
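To make Garber’s freshness-versus-authority tradeoff concrete, here is a minimal sketch of how an aggregator might score stories. The weights, the exponential-decay freshness model, and the half-life value are invented for illustration; this is not Google News’s actual algorithm:

    import time

    def score_story(published_ts: float, authority: float, relevance: float,
                    now: float, half_life_hours: float = 6.0) -> float:
        """Blend freshness, source authority, and personal relevance."""
        age_hours = max(0.0, (now - published_ts) / 3600)
        # Freshness halves every half_life_hours, so an authoritative
        # day-old analysis can still compete with weaker breaking news.
        freshness = 0.5 ** (age_hours / half_life_hours)
        return 0.5 * freshness + 0.3 * authority + 0.2 * relevance

    now = time.time()
    breaking = score_story(now - 1800, authority=0.4, relevance=0.6, now=now)
    analysis = score_story(now - 86400, authority=0.9, relevance=0.8, now=now)
    print(f"breaking: {breaking:.2f}, day-old analysis: {analysis:.2f}")

Even in this toy version, the design questions Garber raises reappear as parameter choices: the half-life decides how fast authority loses out to novelty, and the weights decide whose idea of relevance wins.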


The internet has changed the way that news is reported.

The internet has added numerous layers to the news reporting process. Journalists from traditional media organizations are no longer the sole monitors of news, and the number of independent journalists is growing: “there are individuals, sometimes called citizen journalists, who cover events from their own vantage point, with various degrees of objectivity, accuracy and skill. Their emergence has led to an as yet unresolved debate over who has the right to call themselves a journalist” (Drohan). One example of a website publishing content from citizen journalists is Groundviews, out of Sri Lanka. Groundviews publishes uncensored content by citizen journalists who are pushing the boundaries of traditional media. The rise of citizen journalism means that voices outside of conventional media can be heard. It also raises questions about accuracy and neutrality. As a reader, how do you evaluate how much weight to give to an individual reporting outside of traditional news media? This is an important question, and one that will remain relevant as journalism continues to change.

Attribution of writers in online articles has also changed, and it varies between websites. Interestingly, sites such as The Huffington Post often include an image of the writer, as well as their credentials, in a prominent position on the page. In contrast, sites like The Guardian and The New York Times emphasize the content of the article and simply include a byline. For example, the front page of The Huffington Post Canada’s politics page looks like this:

[Screenshot: The Huffington Post Canada politics front page (author’s screenshot)]

The variation in attribution highlights the unique differences of these sites. Perhaps sites like The Huffington Post feel the need to point out the credibility of their writers because it is not automatically implied; the website itself has not built a solid reputation as a news source. Perhaps readers are more concerned about who has written the article, and not where it’s been published. This grants independent journalists the freedom to build a loyal audience and publish across a variety of platforms. It means that large news corporations are no longer the sole authority on news topics.

The internet has changed the speed at which journalists must work to provide readers with information. Consumers expect immediate access to news. In an article in The Guardian, Belam discusses immediacy in digital publishing: “In years gone by, news of suicide bombers underground in the Russian capital would have meant producing a graphic for the following day’s paper – a lead time of several hours. Nowadays, Paddy Allen has to get an interactive map of the bombing locations finished, accurate, and published on the website as quickly as possible.” In order to remain competitive, journalists must provide instant access to the latest stories. They must prepare information for right now, instead of tomorrow. Does this emphasis on instantaneous news mean that journalists are less likely to ensure the accuracy of their content? For readers, immediate access to information has many benefits, but it may also have implications for the quality of content. According to Karlsson: “Immediacy means that provisory, incomplete and sometimes dubious news drafts are published.”

This also means that articles may be revised as new information arises, events progress, and facts unfold. Karlsson completed a study of several articles in The Guardian. In one example, two versions of an article were published two hours apart and consisted of roughly 86% identical text. Yet the headlines of the two versions send very different messages: the original has been “complemented with other information that sets the new headline,” and the story has been framed differently. As journalists race to publish news online, accuracy may in some cases be traded for efficiency, meaning that information is subject to change.
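Karlsson does not spell out his measurement method, but one rough way to quantify how much two versions of a story share is a sequence-similarity ratio over their words. A sketch using Python’s standard difflib, with invented snippets standing in for the two Guardian versions:

    from difflib import SequenceMatcher

    # Invented stand-ins for two versions of the same breaking story.
    v1 = ("Explosions hit two Moscow metro stations during the morning "
          "rush hour, officials said, with dozens feared dead.")
    v2 = ("Suicide bombers hit two Moscow metro stations during the morning "
          "rush hour, officials confirmed, with at least 35 dead.")

    ratio = SequenceMatcher(None, v1.split(), v2.split()).ratio()
    print(f"{ratio:.0%} of the text is shared")  # roughly 70% for these snippets

The point of the comparison stands either way: two stories can be nearly identical at the word level while their headlines and framing tell quite different stories.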

Digital news publishing also raises important questions about ethics. The vastness of the internet provides reporters and journalists with readily available information about individuals. Belam notes that, “Whenever a young person is in the news, Facebook or other similar social networks are usually a ready source of images. No longer does the news desk have to wait for a family to choose a cherished photo to hand over. A journalist can now lift photographs straight from social networking sites.” Privacy issues are at the forefront of the digital world, and they certainly impact news reporting.

The web has tested the endurance of news companies and forced industry leaders to adapt and innovate creatively. With all of these changes happening in news publishing, how do traditional news organizations continue to develop and move forward in a valuable, sustainable way? As technologies grow and change, news organizations must continue to do so as well. Mathew Ingram discusses how The Guardian is exploring alternatives to paywalls and digital-subscription models by offering a membership-based program, in which content is available to paying members that isn’t available to non-paying readers. The Guardian views this membership plan as a “reverse paywall”: “Instead of penalizing your most frequent customers by having them run into a credit-card wall, you reward them with extra benefits” (Ingram). In order for traditional news companies to thrive in the digital age, they must maintain strong relationships with their readers.
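Reduced to logic, the two models differ in only a few lines. A hedged sketch, with thresholds and perks invented for illustration rather than taken from The Guardian’s actual rules:

    def metered_paywall(articles_this_month: int, is_subscriber: bool,
                        free_limit: int = 10) -> bool:
        """Classic meter: frequent readers hit a wall unless they pay."""
        return is_subscriber or articles_this_month < free_limit

    def reverse_paywall(articles_this_month: int, is_member: bool) -> dict:
        """'Reverse paywall': everyone reads; members get extra benefits."""
        perks = ["events", "early access"] if is_member else []
        # Loyal non-members are nudged to join rather than blocked.
        prompt = not is_member and articles_this_month >= 10
        return {"can_read": True, "perks": perks, "prompt_membership": prompt}

    print(metered_paywall(12, is_subscriber=False))  # False: article is blocked
    print(reverse_paywall(12, is_member=False))      # readable, with a nudge to join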


Conclusions

David Marsh of The Guardian asks very important questions about journalism in today’s world: “If I tweet from a major news event – the Arab spring, say – is that journalism? If I start my own political blog, does that make me a journalist? If I’m a teacher, say, but contribute stories to a newspaper, does that make me a ‘citizen journalist’? Does it make any difference whether people are paid, or not, for such work? Should bloggers, tweeters and ‘citizen journalists’ be held to, and judged by, the same standards as people working in more traditional journalistic roles?” Having unlimited access to instant news via the internet keeps us well informed and socially aware. But digital journalism also raises many questions, and it makes news itself difficult to define. With a plethora of new options for receiving news, are people replacing traditional formats with ones that are less culturally significant? Have we been conditioned as digital consumers to desire instant entertainment over well-researched, evidence-based news? Has social media diluted the core of news and diverted our attention away from informative sources? These are questions that are important for consumers to consider as they interact with and search for news online.

It is possible that, with unlimited access to news via tablets and mobile devices, consumers are spending more time reading news than ever before. A younger population is now consuming digital news on a regular basis. With these positives also come negatives: social network conglomerates have a stranglehold on media, quality reporting is declining, and on various platforms it can be difficult to distinguish between gossip and news. As consumers in the digital age, it is important to recognize that while technology and the internet continue to play an increasingly important role in our lives, we are in control of what we read and how we gather information. We must take responsibility for the way in which we engage with online content. There are ways we can counter a growing inclination to consume sharable, fleeting, surface-level information. First, as consumers, we need to look at the value of what we’re reading and think deeply about how we’re engaging with media. Second, as publishers, journalists, and writers, we need to recognize that the audience should come first; circulating content that does not simply aim to capture engagement, but is enlightening and stimulates analytical thinking, is more important than it’s ever been.


Works Cited

Belam, Martin. “Journalism in the digital age: trends, tools and technologies.” The Guardian. 14 April 2010, https://www.theguardian.com/help/insideguardian/2010/apr/14/journalism-trends-tools-technologies.

CBC News. “Desktop internet use by Canadians highest in world, comScore says.” 27 March 2015, http://www.cbc.ca/news/business/desktop-internet-use-by-canadians-highest-in-world-comscore-says-1.3012666.

Drohan, Madelaine. “Does serious journalism have a future in Canada?” Canada’s Public Policy Forum. 2016, http://www.ppforum.ca/sites/default/files/PM%20Fellow_March_11_EN_1.pdf.

Egan, Timothy. “The Eight-Second Attention Span.” The New York Times. 22 Jan. 2016, http://www.nytimes.com/2016/01/22/opinion/the-eight-second-attention-span.html?_r=0.

Garber, Megan. “Google News at 10: How the Algorithm Won Over the News Industry.” The Atlantic. 20 Sept. 2012, http://www.theatlantic.com/technology/archive/2012/09/google-news-at-10-how-the-algorithm-won-over-the-news-industry/262641/.

“Huffington Post Canada Politics, Front Page” 6 November 2016. Author’s screenshot.

Ingram, Mathew. “The Guardian, Paywalls, and the Death of Print Newspapers.” Fortune. 17 February 2016, http://fortune.com/2016/02/17/guardian-paywall/.

Karlsson, Michael. “The immediacy of online news, the visibility of journalistic processes and a restructuring of journalistic authority.” Journalism, vol. 12, 2011, pp. 279-295, https://www.academia.edu/561238/The_immediacy_of_online_news_the_visibility_of_journalistic_processes_and_a_restructuring_of_journalistic_authority.

Kirchhoff, Suzanne M. “The U.S. Newspaper Industry in Transition.” Congressional Research Service. 9 Sept. 2010, http://www.fas.org/sgp/crs/misc/R40700.pdf.

Ladurantaye, Steve. “Newspaper revenue to drop 20 per cent by 2017, report predicts.” The Globe and Mail. 5 June 2013, http://www.theglobeandmail.com/report-on-business/newspaper-revenue-to-drop-20-per-cent-by-2017-report-predicts/article12357351/.

Marsh, David. “Digital age rewrites the role of journalism.” The Guardian. 16 October 2012, https://www.theguardian.com/sustainability/sustainability-report-2012-people-nuj.

Peña-Fernández, Simon; Lazkano-Arrillaga, Inaki; García-González, Daniel. “European Newspapers’ Digital Transition: New Products and New Audiences.” Media Education Research Journal. 16 July 2015.

“Tossed by a gale.” The Economist. 14 May 2009, http://www.economist.com/node/13642689.

Winseck, Dwayne. “Media and Internet Concentration in Canada Report, 1984-2014,” Canadian Media Concentration Research Project, (Carleton University, November 2015), http://www.cmcrp.org/media-and-internet-concentration-1984-2013/.

Game of Tomes: Embedded and Emergent Narrative in Video Games

Whether we like it or not, playing, discussing, and creating video games is an ever-popular pastime enjoyed by an increasingly wide range of people. Unlike books and films, which are treated as highbrow, the medium tends to be dismissed as lacking intellectual or cultural merit, partly because of the negative image of those who game. It is important to appreciate just how significant video games are to our culture, as they are increasingly becoming the way in which many individuals consume stories. This essay will explore the current standard of video game narrative, the appeal of these games for those who consume them, and the future of game writing and development. While only a few scholarly articles have been written on this subject, there is a high level of conversation about it among gamers, and much of the information in this essay comes from those who engage with games the most.


Where did the story of ebooks begin?

A history of the electronic book


What do publishers think about the electronic book, or ebook? Do they see it as a threat or an opportunity? In a world that is shifting more and more towards digital, publishing houses have to learn how to publish their books effectively in digital formats in order to keep up with the competition of the World Wide Web.

This essay looks at the beginnings of the electronic book: who first introduced the notion of the electronic book to the world, and who invented it. It also looks at how publishers see the ebook and how they have had to change in order to adopt this new product.

The traditional book has seen a lot of transformation in the past 80 years. Seeing the drastic changes happening in the industry, some publishers think we are now at a tipping point of no return, and that traditional publishing will give way to digital and self-publishing. In 2009, the ISBN agency reported that there were over a million self-published books (print or electronic). In this situation, the role of the publisher is threatened: contemporary authors, or even people who are not experienced authors, can simply publish books online, almost for free, devaluing the high-quality content of a book professionally published by a press.

Publishers feel that the self-publishing of books in electronic formats devalues the content of a good book, as there is increasing pressure to keep prices low and to offer more value for money. The competition has also doubled, perhaps tripled, and readers now get an overwhelming sense that they are constantly being sold to, making marketing more difficult.

Ebooks have also made publishers’ lives more difficult because there are many ebook formats, which means doing three or more times the work that could once be done only once. Publishing houses have to publish ebooks in at least three formats, as every retailer uses a different file format.

Their workload has increased significantly, as every format has to be run through its own list of technical steps to create the file properly, including numerous proofing and quality-control steps. The skill sets required for file preparation, output, and delivery cannot be found in traditional publishing roles, so publishers have to invest capital to train existing production and design staff or hire new people.
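In practice, much of that per-format work can be scripted around a conversion tool. A minimal sketch of such a pipeline, assuming Calibre’s ebook-convert command-line tool is installed; the file name and target-format list are placeholders:

    import subprocess
    from pathlib import Path

    SOURCE = Path("manuscript.epub")   # placeholder master file
    TARGETS = ["mobi", "azw3", "pdf"]  # one output per retailer requirement

    for fmt in TARGETS:
        out = SOURCE.with_suffix(f".{fmt}")
        # ebook-convert infers input/output formats from the file extensions.
        subprocess.run(["ebook-convert", str(SOURCE), str(out)], check=True)
        print(f"built {out}")  # each output still needs its own proofing pass

The conversion itself is the easy part; the proofing and quality-control steps mentioned above still have to be repeated for every output file, which is where the real multiplied cost lies.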

Metadata is very important when it comes to ebooks. If the metadata is incorrect, readers can’t find the ebook. Ebook buyers run into metadata problems all the time, which is why most publishing companies hire someone to be in charge of making sure ebooks have the correct metadata. This is yet another role created by ebooks, and more money to be invested.
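As a rough illustration of what “correct metadata” means in this role, here is a sketch that checks a title record for commonly required fields. The field list and the sample record are invented for illustration and do not reflect any particular retailer’s specification:

    # An illustrative list of commonly required fields, not a standard.
    REQUIRED = ["title", "author", "isbn", "language",
                "publication_date", "bisac_subject"]

    def missing_metadata(record: dict) -> list[str]:
        """Return the required fields that are absent or empty."""
        return [field for field in REQUIRED if not record.get(field)]

    book = {
        "title": "The Readies",
        "author": "Bob Brown",
        "isbn": "",  # empty: the title can't be matched in retail feeds
        "language": "en",
    }
    print(missing_metadata(book))  # ['isbn', 'publication_date', 'bisac_subject']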

First attempts at the so-called electronic book

There is a lot of controversy over who first introduced the notion of a book in electronic format and who invented it. Little did these pioneers know that, 80 years later, the so-called electronic book would create so much controversy in the publishing industry, threatening the existence of the traditional print book and leaving publishers struggling with the many ebook formats they must provide when publishing digitally.

Bob Brown (1930)

The idea of the electronic book was first introduced by Bob Brown in his 1930 book “The Readies”. The idea came to him after listening to his first “talkie”, a movie with sound, and the book plays off that notion for reading. In many ways, Brown predicted the contemporary ebook, writing that the reading machine would allow its readers to change the type size, avoid paper cuts, and save trees.


In his vision, the electronic book would change the medium of reading by bringing a completely new perspective: “A machine that will allow us to keep up with the vast volume of print available today and be optically pleasing”. He intended the innovation as a way for literature to keep up with the advancement of other industries, such as the advanced viewing practices of the cinema-going public, as seen in the “talkies”. Even though he was the first to introduce the idea of the electronic book, his book remained forgotten until 1993, when Jerome McGann wrote of it: “When the after-history of modernism is written, this collection . . . will be recognized as a work of signal importance”.

Even though Bob Brown introduced the notion of the electronic book approximately 85 years ago, and his notion is the closest to what ebooks and e-readers are today, the early commercial e-readers did not follow his model.

Candidates for inventor of the first ebook

Who first invented the ebook is much debated within the publishing industry. A few candidates are discussed below.

Roberto Busa (late 1940s)

One of the first candidates for creator of the ebook is Roberto Busa. His Index Thomisticus is a heavily annotated electronic index of the works of Thomas Aquinas, planned as a tool to perform text searches within those works.

The project began in 1949 with the help of a sponsorship from Thomas J. Watson, the founder of IBM. It took approximately 30 years and launched in the 1970s as 56 printed volumes of the Index Thomisticus, stored on a single computer. Ten years later, with the appearance of CD-ROMs, a new version was produced and made available on CD-ROM. In 2005, a web-based version was launched, sponsored by Fundación Tomás de Aquino and CAEL, and one year later, in 2006, the Index Thomisticus Treebank project began the syntactic annotation of the entire corpus.

Roberto Busa is considered by the industry to be a pioneer of the digital humanities. His project is seen as an outstanding milestone in informatics and computing in the humanities, as it marks the beginning of that field.

Angela Ruiz Robles (1949)

Angela Ruiz Robles is another pioneer of the electronic book. She invented the Mechanical Encyclopedia in 1949, with the aim of reducing the weight of books in students’ school bags; she also believed the gadget would make reading more accessible to all. She designed the device in her home country, Spain. Because Spain’s economy was suffering at the time, her design was not prioritized and never received the funding required for mass production. “The implementation of all the specifications of the invention was impractical,” said Maria Jose Rodriguez Fortiz, language professor at the University of Granada.

This first ebook functioned through air compression, with changeable spools that carried the content. It reportedly had zoom capabilities and used coils to move the scrolls. The spools and other inserts were housed within a hard metal case with a handle. Her original prototype still works and is now displayed in the National Museum of Science and Technology in Spain. In her later years, when the technology had become viable, she made another attempt at revamping the project, but again she did not manage to secure any funding.

Her patent is considered the closest device to what ebooks are today.

Doug Engelbart and Andries van Dam (1960s)

Some historians consider that electronic books started with the NLS project initiated by Doug Engelbart at Stanford Research Institute, and with the Hypertext Editing System and FRESS projects initiated by Andries van Dam at Brown University. Van Dam is considered to be the one who coined the term “electronic book”, which by 1985 was established enough to be used as an article title.

Michael S. Hart and the first ebook implementation (1971)

Despite the many attempts at creating the electronic book before the 1970s, Michael Hart is the one who finally managed to create the first ebook. The operators of the Xerox Sigma V mainframe at the University of Illinois provided him with computer time, which he used to type the United States Declaration of Independence into a computer in plain text. This was the first electronic document ever created. His plan was to create documents in plain text that could easily be downloaded and viewed on various electronic devices.

Hart initiated Project Gutenberg with the main aim of producing more electronic copies of texts, in particular books. Its mission was to provide electronic formats of literary works, for free, to everyone interested in them.

In January 2009 Michael Hart stated the following in an email interview: “On July 4, 1971, while still a freshman at the University of Illinois (UI), I decided to spend the night at the Xerox Sigma V mainframe at the UI Materials Research Lab, rather than walk miles home in the summer heat, only to come back hours later to start another day of school. I stopped on the way to do a little grocery shopping to get through the night, and day, and along with the groceries they put in the faux parchment copy of The U.S. Declaration of Independence that became quite literally the cornerstone of Project Gutenberg. That night, as it turned out, I received my first computer account – I had been hitchhiking on my brother’s best friend’s name, who ran the computer on the night shift. When I got a first look at the huge amount of computer money I was given, I decided I had to do something extremely worthwhile to do justice to what I had been given. This was such a serious, and intense thought process for a college freshman, my first thought was that I had better eat something to get up enough energy to think of something worthwhile enough to repay the cost of all that computer time. As I emptied out groceries, the faux parchment Declaration of Independence fell out, and the light literally went on over my head like in the cartoons and comics… I knew what the future of computing, and the internet, was going to be… ‘The Information Age.’ The rest, as they say, is history.”

Hart started keying in other works, and as disk space grew he gathered volunteers to type in the Bible, one individual book at a time. In 1989, Project Gutenberg completed its 10th ebook: The King James Bible. Project Gutenberg’s mission can be stated in eight words: “To encourage the creation and distribution of ebooks”, by everybody, and by every possible means, while implementing new ideas, new methods and new software.

Historians consider that credit for inventing the electronic book should go to Michael Hart, as he was the one who digitized the content of a book and distributed it in electronic format.

Libraries

US libraries started to provide free ebooks to the public in 1998 through their websites and associated services. These ebooks were primarily scholarly, technical, and professional in nature, and could not be downloaded. A few years later, libraries started offering free, downloadable popular fiction and non-fiction to the public, and launched ebook lending models.

In time, the number of libraries providing free downloadable ebooks and lending models increased; however, libraries started to face challenges as well. Publishers were selling ebooks to libraries, but only with a limited licence, meaning that libraries did not own the electronic text but were allowed to circulate it for a limited amount of time or a limited number of checkouts.

As can be seen above, even from the beginnings of the ebook, publishers have regarded it more as a threat than an opportunity. They began providing limited licences to libraries in order to ensure a stable profit from their ebooks.


Industry professionals have predicted that ebooks will soon take over from the traditional book. This hasn’t happened yet. The prediction has scared a lot of publishing houses, as it could mean bankruptcy for them. Researchers have investigated how people use, comprehend, and process digital and paper books, and they have found that people read better from a printed book than from an electronic one, where there are multiple distractions, such as hypertext, e-mail, videos, and pop-up advertisements.

The electronic book has brought a lot of changes to the publishing industry and has transformed how publishing houses function. It has brought extra costs for training staff in new skills, new production costs, and the risk of bankruptcy because of online self-publishing. The invention of the ebook may have been revolutionary, but from the point of view of a publisher, it is a threat to the current publishing model.

In conclusion, I will answer the question posed at the start of this essay. The electronic book is a revolutionary invention that has changed a great deal in the publishing sphere. Publishers see it as a threat because many studies have predicted that ebooks will take over from print books. This hasn’t happened yet, and in my opinion, it never will. Ebooks will remain an extra that you can buy alongside the traditional printed book, for whenever you are short on space while travelling, or can’t carry something heavy (like a few books) and just grab your e-reader to do your daily reading. Publishers shouldn’t see ebooks as a threat, but rather as an opportunity.

Computer Generated Fiction

If computers are able to write content that is indistinguishable from human authored writing, what will this mean? If they can one day write anything from travel guides to literary novels, will people read them? Would they trust that a computer knows where the best restaurants in Atlanta are or the best hotels in Paris? What about literature? Humans plumb the depths of their emotions and draw on extraordinary experiences to create great literary works; could a computer possibly do the same? If this comes to fruition and people can distinguish between a human and a computer, which will they prefer? If they prefer the computer-generated content, what would this mean for authors?

Starting with what computers can already do, we can look at information compilations such as travel guides. People will read them because computers can pull human reviews from the internet and compile more of them, more quickly, than any human ever could, making them the best authority on anything that is subject to reviews. A computer will likely have a better recommendation for you than your best friend’s actual experience.

Once computers have advanced past that stage and on to simple or formulaic literature, humans will not care who or what wrote it as long as it is entertaining. The most common formulaic genre is romance, and romance novels are the best-selling paperback fiction category in North America. Computers can be fed formulaic plot lines and stock characters to work with, and will be able to read through a million novels to get ideas about which words and phrases humans like best. Humans already read extremely formulaic books and will not mind if a computer writes them instead of a human, especially since the computer is pulling from sources written by humans.
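As a toy illustration of the “read through a million novels and reuse the phrasing humans like” idea, here is a minimal Markov-chain text generator in Python. It is a deliberately crude stand-in for the far more sophisticated systems discussed below, and the two-sentence “corpus” is made up:

    import random
    from collections import defaultdict

    def build_chain(text: str) -> dict[str, list[str]]:
        """Map each word to the words observed to follow it."""
        words = text.split()
        chain = defaultdict(list)
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
        return chain

    def generate(chain: dict[str, list[str]], start: str, length: int = 12) -> str:
        """Walk the chain, picking a random observed successor each step."""
        word, out = start, [start]
        for _ in range(length - 1):
            followers = chain.get(word)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

    # A tiny made-up corpus; real systems train on vast numbers of novels.
    corpus = ("her heart raced as he entered the room "
              "her heart ached as he left the room without a word")
    print(generate(build_chain(corpus), start="her"))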

Other formulaic examples come from syndicates like the Stratemeyer Syndicate, which put out Nancy Drew and The Hardy Boys. The Stratemeyer Syndicate is an excellent example of why computers could become “fiction factories”, as the syndicate was known. Founder Edward Stratemeyer hired unknown writers, gave them anything from a few sentences to a three-page outline and a plot, and expected to receive a finished book two weeks later, complete with chapter-ending cliffhangers and consistent-sounding dialogue.

Computers will soon be able to do even better; as of today they can write fiction that is almost comparable to that written by humans. By having access to every book and online resource possible, they have access to almost all human documentation thus far, giving them the power not only to find ideas and phrases that have received positive human feedback, but also millions of human experiences and the emotions these evoked. Alexander Prokopovich’s algorithm wrote its own version of Anna Karenina, entitled True Love, in 2008. It sounds close to a lot of human writing, apart from the odd phrase or two: ‘Kitty couldn’t fall asleep for a long time. Her nerves were strained as two tight strings.’ The Georgia Institute of Technology has developed a program called Scheherazade that can write fiction that sounds convincingly human. For example:

John took another deep breath as he wondered if this was really a good idea, and entered the bank. John stepped into line behind the last person and waited his turn. When the person before John had finished, John slowly walked up to Sally. The teller said, “Hello, my name is Sally, how can I help you?” Sally got scared when John approached because he looked suspicious. John pulled out a handgun that was concealed in his jacket pocket. John wore a stern stare as he pointed the gun at Sally. Sally was very scared and screamed out of fear for her life. In a rough, coarse voice, John demanded the money. John threw the empty bag onto the counter. John watched as Sally loaded the bag and then grabbed it from her once she had filled it. Sally felt tears streaming down her face as she let out sorrowful sobs. John strode quickly from the bank and got into his car tossing the money bag on the seat beside him. John slammed the truck door and, with tyres screaming, he pulled out of the parking space and drove away.

Reviewing this robot-written fiction, the critic Nicholas Lezard actually thought the passage was an excerpt from a new Dan Brown novel, but then realised Scheherazade could have been programmed using algorithms based on Brown.

Over the years, people have come up with tests to see if a computer can pass as a human. One such test is the Turing Test. Invented by Alan Turing, it consists of a human sitting at a terminal in one room and a computer at a terminal in a separate room. The human corresponds via text with whoever, or whatever, is in the other room, and then has to figure out whether the correspondent is another human or a computer. So far no computer program has definitively passed this test. People have come up with alternative Turing Tests in which people read different articles or stories and try to figure out whether a human or a computer wrote them. One such test can be found at http://www.nytimes.com/interactive/2015/03/08/opinion/sunday/algorithm-human-quiz.html.

As time moves on, humans are unlikely to uniformly prefer one type of writing over the other. Some people will happily read formulaic, computer-generated novels; others will be intrigued and will voraciously read literary novels written by computers; traditionalists will stick with their human-written works.

If the majority of humans ever do prefer computer-generated content, this will affect authors because they will be less in demand. If a computer can write the new Dan Brown while authors are working on literary novels, which already don’t sell as well as thrillers, they may lose out on work. That being said, there will always be traditionalists, so computers writing fiction may actually push human authors to compete harder, drawing forth some of the best literature we have ever read.

As of right now, although they are close, computers cannot write fiction equal to that authored by humans. “The hardest [for the computers] to crack will be the elements of great writing we ourselves struggle to explain: the poetic force of the sentences, the unique insights of the author, the sense of a connection.”


Sources:


http://www.scientificamerican.com/article/computers-vs-brains/

Studying the Romance Novel

http://www.trussel.com/books/strat.htm

www.bbc.com/culture/story/20150122-could-a-robot-write-a-novel

http://www.theguardian.com/books/2014/nov/11/can-computers-write-fiction-artificial-intelligence

http://psych.utoronto.ca/users/reingold/courses/ai/turing.html


Self-publishers setting the stage, but traditional publishers have a part to play

Self-publishing is setting the stage[1] for the future of publishing: the prevalence of “do-it-yourself” tools and applications has all but diminished the value of the traditional publisher as gatekeeper.

The digital context has given ordinary readers tools[2] to become self-published authors and publishers through several online platforms, and user-friendly technology with which to start up their own publishing, marketing, and data-analysis businesses. One such author is Scott Nicholson, who has published more than 70 books and sells them online through Amazon for the Kindle and other ereaders. “He handles the entire process himself”[3], and the lucrative 70% royalties on e-book sales attract authors more than the traditional publisher’s offer of a mere 25%[4]. As Amy-Mae Elliott puts it, “with the advent of e-books, social reading sites and simple digital self-publishing software and platforms, all that has changed. An increasing proportion of authors now actively choose to self-publish their work, giving them better control over their books’ rights, marketing, distribution and pricing” (Mashable, February 2014).

Moreover, editors and designers, as well as graduate publishing students, are forming start-up businesses geared towards content strategies for publishers and authors. For traditional publishers, the online context has emphasized the role of the publisher as an incubator and consultant.

According to Bowker’s statistics, “more than three million new titles were published in 2010. Of these, over 2.7 million were non-traditionally published books, including print-on-demand and self-published titles.”[5]

Traditional publishers, who already face competition from retail giants such as Amazon, now have to consider their competitive edge against a surprising opponent: the consumer, and in this respect, the reader. We can see this in social computing, described by Alan Liu as an evolutionary form of reading in which readers assume the role of annotator and thereby contribute to the work of the original author. In this sense, however, authorship is not overtly important; the overall collaboration on the project is. Readers, ranging from academics to ordinary non-scholars and literature students, are able to develop a shared network and create a community from which they can grow an audience base. Self-publishing tools offered by CreateSpace, online coding academies for building websites, and artificially intelligent website creators such as The Grid[6] give readers who become self-published authors the ability to create a brand around themselves and successfully publish online and printed books, without the help of the traditional publisher who often administered these tasks.

This paper argues that technology has revolutionized the way we approach publishing, its function, and who has the right to publish. Matthew Ingram says that, in the online context of the web, how we view publishing has been narrowed down to the push of a button.[7]

On the future of the book: “Almost as constant as the appeal of the book has been the worry that appeal is about to come to an end. The rise of digital technology—and especially Amazon—underlined those fears” (The Economist, From Papyrus to Pixels)[8].

Traditional publishers find themselves having to compete for the same market alongside ordinary people with little to no experience in the publishing field, but who are able to attract and maintain an audience with user-friendly, free tools and platforms on the web. Additionally, the serialization of content has been popularized by people consuming media in short form, from a mobile device or tablet, and often on the go. The reader who consumes in this fashion is well placed to come up with the right solution for whatever publishers are missing the mark on.

This ties into Brian O’Leary’s view of “Context, not Container” in A Futurist’s Manifesto, especially as the publishing industry takes on popular forms of Web 2.0. In the same sense, contexts such as social computing have blurred the lines between author and reader, with both having the capacity to adopt the role of publisher through networked channels.

What this means for traditional publishers is not only a change in their business models, but also in their approach to the nature of the digitized age. They have to align themselves with networked trends and find innovative ways to approach online distribution, marketing, and content creation. Additionally, instead of focusing on the plight of traditional publishing in the age of technology, this paper draws attention to the opportunities self-publishers exploit and how traditional publishers can co-exist alongside them within this context.

“The book is now a place as well as a thing and you can find its location mapped in cyberspace,” writes researcher Paula Berinstein[9], who discusses the notion of the networked book, where authors, publishers, and readers gather to think about, discuss, annotate, and reference the book. One can say that this was sparked by online journaling platforms such as blogs, and now by Web 2.0, which makes the book searchable, linkable, divisible, and mutable (Berinstein, 2015). A case study such as Gamer Theory (spelled GAM3R 7H3ORY) by McKenzie Wark, which started off as a draft online and invited reader interaction through annotation, comments, and feedback, points to how such a networked book was transformed into a better book for online and print. The book contained index cards with comments from readers, including prestigious academics. It was acquired by Harvard University Press for publication in 2007, and online editions are available. This changing approach to how the book is created, curated, promoted, and distributed points to cooperation between self-publishers and traditional publishers in a digitized context.

Other opportunities show that traditional publishers will need to unbundle their content and services in order to remain relevant. “They will have to reimagine their role. [They] could start offering ‘light’ versions of their services, such as print-only distribution, or editing, and not taking a cut of the whole pie”[10]. Moreover, publishers will need to work harder at proving to authors that they are capable of reaching a far larger audience. This challenge is compounded by the accessibility of the same technology, which makes it easier for self-publishers to explore new and alternative ways of discovering, marketing, sharing, distributing, and imitating the books of other self-published and traditional publishers (think fan fiction).

Furthermore, traditional publishers need not be at loggerheads with self-publishers, but should rather look for collaboration opportunities, declaring their importance by publishing quality content with the assistance of editors and customized content strategies. A recent case study shows how the self-publication on the web of a dystopian science fiction short story, “Wool”, led to a film adaptation and a contract with a traditional publisher, Simon and Schuster, which bought the licence rights to print the book. “Most writers still sign with publishers when they have the chance, because print books remain such a sizeable chunk of the market”[11]. With that said, the self-published author retains the rights to the e-book.

Besides this, self-published authors attract readers by selling their books at a low price, and often in e-book format. This puts traditional publishers under pressure to lower their prices too, especially in genre fiction such as romance, where the romance publisher Harlequin suffered financial losses and was ultimately acquired by HarperCollins in May 2014.[12] The acquisition has, for the most part, led to international opportunities for the now-imprint to publish in over 30 languages worldwide, a move they hope will attract authors. We can see again in this instance that traditional publishers are able to exploit an international brand presence.

In his article “A modest proposal for publishers and authors,” Jonathan Fields discusses the nature of the author-publisher relationship in the digitized age, and how the two can co-exist through partnerships. He notes that traditional publishers, even as well-known brands, never had direct access to buyers, and, according to him, they still do not.[13]

Self-publishers who are able to attract and maintain a profitable audience can explore the benefits that traditional publishers and booksellers offer in partnership. Barnes & Noble’s PubIt! platform, since relaunched as Nook Press, offers self-published authors e-book publishing and print book packages. Prospective self-publishers can build their book on the platform, prepare downloadable manuscript files, and follow instructions for creating and formatting e-books and print-on-demand books with the tools provided. Authors also have the option of acquiring professional input from Nook Press at any stage of the publishing process.

Author services packages can be purchased to guide authors through the publishing process, producing a printed book ready for shipping within a week.[14] The Nook Press print platform creates print books for personal use, whereas the e-book platform creates digital books for sale through Nook and the Barnes & Noble website, which distributes directly to the reader.

According to the company’s press release, the platform attracts at least 20% more independent authors each term, and its titles in the Nook Store grow by 24%. The report also states that at least 30% of customers purchase self-published content, which accounts for at least a quarter of Nook book sales every month.[15]

In conclusion, self-publishers have approached the web as a platform of endless opportunity, whereas traditional publishers have perceived it as a threat to their business models and, in turn, to their very purpose. Essentially, a new form of publishing has already set the stage: self-publishers are introducing new standards for creating and curating content, and for marketing and distributing it with user-friendly, accessible, and even free tools. The smart traditional publishers, and even booksellers, as we have seen, have taken this as inspiration to expand their own models and to collaborate with successful self-publishers, even emerging bloggers and annotators, by offering unbundled professional services and content strategies, as well as editing and formatting tools for publishing their own books. This new stage of “techno-publishing” (a term I coined myself), where publishers invest in coding skills, multiplatform marketing, and content disaggregation to reach the right audience at the right time, is where the business of publishing stands now. What is left is for us to decide which part we’ll play in it as future publishers.


Works cited:

Elliott, A. 2014. “People-Powered Publishing is changing all the rules.” Mashable. http://mashable.com/2014/02/09/self-publishing-digital/

McGuire, H., and O’Leary, B. 2012. “Context, Not Container.” In A Futurist’s Manifesto. PressBooks. http://book.pressbooks.com/chapter/context-not-container-brian-oleary


Amazon: The Big, Bad Wolf

We all know the story of the big, bad wolf, whether from Little Red Riding Hood or The Three Little Pigs. It is a trope in morality tales going back farther than any of our lifetimes. The big, bad wolf is a deceitful, predatory, sneaky, and viciously intelligent creature that eats grandma and blows the house down. In recent years, the big, bad wolf of the publishing industry has been Amazon.

Amazon: the reason bookstores are closing, the reason publishers go broke. Capitalism in the form of a big, bad wolf, slowly destroying publishers big and small (huffing and puffing) and putting bookstores everywhere out of business (eating Red Riding Hood’s grandma for breakfast). It is an apt metaphor, and I would argue that publishers, or perhaps better to say the publishing industry, are those three little pigs, still living in straw houses, who need to find some bricks and build houses that the big, bad wolf cannot huff and puff and blow down.
