Why Share?


The meaning of ‘sharing’ has evolved with the growth of the web, especially social media. To share something viral certainly doesn’t have the connotations it would have carried pre-internet, and certainly wouldn’t have been something to brag about. Like ‘pinning,’ ‘googling,’ and ‘liking,’ ‘sharing’ is part of the new vernacular of the digital world. In her cleverly titled article “The Six Things That Make Stories Go Viral Will Amaze and Maybe Infuriate You,” Maria Konnikova poses the question, “what pushes someone not only to read a story but to pass it on?” Why do we share? What gets shared the most, and, more importantly, what does what we choose to share say about us?

Continue reading “Why Share?”

For the Love of Metadata

In Laura Dawson’s chapter in Book: A Futurist’s Manifesto (2012), she addresses “What We Talk About When We Talk About Metadata”.

Beginning with a brief history of metadata, Dawson notes that, “because a book is no longer a physical object, discoverability via metadata is only just now becoming a front-office problem.” She explains how this coincides with Brian O’Leary’s chapter about the “book” no longer being the physical container that holds the content. I think this also ties into how the marketing and discoverability of a title have become more of a front-office problem. For example, if a potential author doesn’t have an existing web presence, that is considered a huge problem for a publisher. The author is expected to take on a marketing role, and without a platform to do so, they face a steeper climb to discoverability.
Continue reading “For the Love of Metadata”

You Won’t Believe How Long I Can Rant About Content Sharing In A World Where The Number of Shares Is The Most Important Thing. Oh, And Here’s a Cat Meme.

When considering Konnikova’s article “The Six Things That Make Stories Go Viral Will Amaze, and Maybe Infuriate, You” – a headline that is itself composed of click-bait-esque elements – all I can think is, is this all we’ve come to? Are we really primarily concerned with whatever makes people click on and share things the most often, even if it’s total garbage? Whatever happened to quality, to publishing content for reasons beyond just the potential to cast the net as wide as possible? Jonah Berger’s study is interesting and provides relevant data for any kind of online marketer or content creator in a time when the publisher without a social media campaign is outdated, but what does it really mean for publishing?

Sure, following a prescribed formula for making things “interesting” and “shareable” may result in thousands of shares and “likes,” but how long is this transparent form really going to be effective? Anyone who spends any amount of time on the internet is well-acquainted with the click-bait headline, the article promising to provide emotional arousal and excitement – it’s easy to spot, with its all-caps, its listicle form, its sloppy grammar. That is, this formula, as Berger has outlined in his studies, has become so prescribed and so expected that any article using it fades away into a sea of other articles that look and sound identical to it. What once worked – and for a short time longer still will, I imagine – to grab people’s attention and attract their clicks probably won’t work for that much longer. It has become the norm to see articles banging their heads, so to speak, just to get people to read them, and it’s no longer attractive to readers. How much longer is this formula going to suffice as a story-telling structure for the general reader, and is getting thousands of clicks and shares all that we care about in publishing anymore? Is it, more or less, all just about the best seller now?
Clicks equal money, but for how long will that be enough?

Call me a jaded cynic, but I’ve always imagined that content-sharing – publishing, really – was about more than just how many people you can get to “buy” your “product.” It’s about stories – and I agree with Konnikova when she questions the worth of a listicle or cat meme as a story. It’s about connecting people together, making some kind of greater change, moving readers in their everyday lives. It’s why we all got into publishing, isn’t it?

The publishing industry moves so fast, and if I were to take a wild guess, the formula for what makes content shareable won’t change too much – it hasn’t since Aristotle and his three principles of ethos, pathos, and logos – but the form of it might change, as it has over time. Eventually, people will tire of the click-bait article, the listicle, and the meme, despite their Aristotelian qualities. What the new form will look like is impossible to say, but it’s quite evident that people always have and always will be drawn to emotionally stirring stories and content. I can only hope that the publishing world doesn’t devolve into one big BuzzFeed. So maybe Konnikova was right: the six things that make stories go viral maybe do infuriate me.

Why You Don’t Remember What You Read Online Yesterday

Photo by Stefan Schmitz under CC-BY license

There’s no doubt that people today are reading more online, and even more on mobile. According to Mary Meeker’s Internet Trends, time spent with digital media per adult per day in the USA has gone up from 2.7 hours in 2008 to 5.6 hours in 2015. The same adult consumed digital media on mobile for only 0.3 hours in 2008, skyrocketing to 2.8 hours by 2015 (Meeker 14). In fact, Ziming Liu’s study on trends in reading behaviour over a ten-year period found that in the digital age, “people are spending more time on reading” for the very reasons of digital technology and the information explosion (704). No longer the sub-par experience of being chained to a desktop computer, digital reading has been significantly improved by new technologies. Screen-based devices have, most importantly, become “smaller and more portable with enhanced resolution and graphics” as well as optimized for mobile reading (Subrahmanyam et al. 6). But what’s the difference whether we read on-screen or on paper? We actually use different parts of our brain. But will our skill of “deep reading,” defined as the kind of “long-established linear reading you don’t typically do on a computer,” be lost forever as web reading becomes more and more prevalent (Raphael)? What’s most important for us, now more than ever, is being aware of the differences in how our brains read and learn in the digital environment versus on paper. This knowledge can help us make better choices to get the most out of our reading in both media.

Benefits of digital reading
There’s a reason why people read online: the web has so much more to offer than the print environment, even just in terms of interactivity. There are many other formidable advantages that are traditionally absent from print, such as the “immediacy of accessing information, and the convergence of text and images, audio and video” (Liu 701). The benefit of this multimodality is that finding reading material becomes easier, faster, and more relevant. The superiority of the computer system not only in composing documents, but also in the “storing, accessing and retrieving” of them makes reading online easier (Hillesund). Nicholas Carr highlights searchability and hyperlinking as the most attractive benefits of the online ecosystem, noting that “We like to be able to switch between reading and listening and watching without having to get up and turn on another appliance or dig through a pile of magazines or disks. We like to be able to find and be transported instantly to relevant data—without having to sort through lots of extraneous stuff” (91-92).

How we read on the web
In “You Won’t Finish This Article: Why People Online Don’t Read to the End,” Farhad Manjoo uses website traffic analysis from data scientist Josh Schwartz at Chartbeat to better understand online reading behaviour. It turns out that web readers exhibit behaviours that make it less likely that they will read an article online in its entirety. For example, out of those who visit a site, a large percentage (38 percent, according to his figures) will “bounce” immediately, meaning they will spend “no time ‘engaging’ with th[e] page at all” (Manjoo). According to aggregate data from a number of Chartbeat-analyzed websites, 10 percent of visitors will never scroll, and those that do stop at about halfway through the article. For most visitors, the content they are likeliest to view in full is the photos and videos embedded in an article (Manjoo). Manjoo explains that Schwartz’s data tells us two important things: one is that for those that do not scroll at all, there is a good chance they will see “at most, the first sentence or two” of an article; the other is that there is not a strong link between scrolling and sharing, meaning that “lots of people are tweeting out links to articles they haven’t fully read” (Manjoo).

Scrolling and skimming characterize online reading. Liu’s study found that “screen-based reading behaviour is characterized by more time on browsing and scanning, keyword spotting, one-time reading, non-linear reading, and more reading selectively” (705). One of Liu’s concerns about scrolling behaviour is that “Readers tend to establish a visual memory for the location of items on a page and within a document” and that scrolling “weakens this relationship” (703). Maria Konnikova suggests that scrolling and skimming dominate online reading because of the shift in physiology from reading on paper to a screen, and explains that “when we scroll, we tend to read more quickly (and less deeply) than when we move sequentially from page to page,” and that this tendency is a coping mechanism for the overload of information we are presented with on the web (Konnikova). She reiterates that the online reader browses and scans “to look for keywords” and thus reads very selectively.

With so many options and avenues for the online reader, the role of multitasking becomes more and more dominant; indeed, “it has become an integral part of reading on screens” (Subrahmanyam et al. 6). In both reading comprehension and synthesis studies, multitasking, as one can imagine, “significantly increased” reading time (Subrahmanyam et al. 15). One of the reasons multitasking becomes so attractive is the constant stimulation the web has to offer. But even if we choose not to go down the rabbit trails of hyperlinks and to focus instead only on the main text, the online world “tends to exhaust our [mental] resources. . . . We become tired from the constant need to filter out hyperlinks and possible distractions” (Konnikova). Konnikova summarizes researcher Julie Coiro’s findings that “good print reading doesn’t necessarily translate to good reading on-screen,” because more self-control and self-monitoring is needed when reading online. She argues that picking up a book is the one choice needed to focus attention on the page, whereas in the online world there is constant temptation to click away from the main text.

Studies tend to agree that the most efficient use of online reading is the searching for reading, but it is not necessarily the best venue for consuming texts. Terje Hillesund’s study of expert readers (scholars in various social sciences) finds that “proficient readers use the Web and computers for overview,” and mainly as a tool for “finding, scanning and downloading text.” Liu concurs with the findings in his study in terms of reader satisfaction with formats, noting that “paper-based media are preferred for actual consumption of information” (701). He relates this paper preference to the widespread use of Adobe’s PDF format, which he argues “discourages screen reading and encourages printing. People tend to print out documents that are longer than can be displayed on a few screens” (702).

How expert readers read
Though we might assume that the best readers read articles in full, and books cover to cover without distraction, this turns out not to be the case. Hillesund’s aforementioned study revealed that expert readers combine sustained and discontinuous reading. This means that they “seldom read a scholarly article or book from beginning to end, but rather in parts and certainly out of order.” So it is not reading a text fully and continuously that is crucial to their success or their choice of medium, but their sustained attention to reading relevant material, whether in snippets of an article or parts of a book. Another common factor among these prolific readers was their time spent reflecting, “underlining and annotating, often relating the reading to their own writing” (Hillesund). Liu also notes that in terms of preferences, people “like to annotate when they read” but that they are less likely to do so online (707).

Photo by Stefan Schmitz under CC-BY license

Language and reading researcher Maryanne Wolf explains the two stages the expert reader’s brain goes through when reading: the first is “decoding” the words to know their meanings, and the second is “connect[ing] the decoded information to all that we know” (Wolf). She explains that this second stage of reading is where we are “given the ability to think new thoughts of our own,” which forms “the generative core of the reading process.” The goal of the adult expert reader is to go beyond the text and to expand comprehension. Wolf explains in her book that “Reading is a neuronally and intellectually circuitous act, enriched as much by the unpredictable indirections of a reader’s inferences and thoughts, as by the direct message from eye to text,” meaning that reading is not just about the words on the page, but about the messages constructed inside the reader’s brain and the connections they make in order to form their own original ideas (16). Her research focuses on the implications for the digital reader, “who is immersed in a reading medium that provides little incentive to use the full panoply of cognitive resources available” (Wolf). The incentive is lower in the digital environment because we are served up so many attention-grabbing links and other media that are easier to follow than the effort required to forge our own intellectual pathways. She worries that the immediate information offered by the web “requires and receives less and less intellectual effort,” which may result in the deterioration of the most important stage of reading: connecting.

The brain wants what it wants
Nicholas Carr explains that the brain is naturally in a state of “distractedness,” and that we are predisposed to “shift our gaze, and hence our attention… to be aware of as much of what’s going on around us as possible” (63). The most effective way to distract the brain is “any hint of a change in our surroundings” (Carr 64). The major danger for online reading, especially when paired with multitasking, is the brain’s distraction by the slightest change in our field of view. This becomes very troubling considering Meeker’s Internet Trends, reporting that notifications on our mobile devices are “growing rapidly” and are “increasingly interactive” both with messaging platforms and with other apps (54). If we consider the younger generation and their use of screens and multitasking, she cites that 87 percent of millennials in the USA admit that “My smartphone never leaves my side” (Meeker 69).

Our brains are naturally lazy, and want to take the path of least resistance. Hillesund summarizes Anne Mangen’s research in which she explains “when we have options to easily rekindle our attention through outside stimuli, we are psychobiologically inclined to resort to these options. It requires less mental energy to click the mouse and rekindle our attention than to try to resist distractions” (Hillesund). It is hard to focus the brain’s attention whether the reading is being done online or offline. Hillesund explains that traditional reading immersion is when the reader is engaged and internally stimulated by the processes in the mind, whereas online reading immersion is a result of external stimuli: an information flow fed to the reader. Carr worries about the long-term influences of the internet on how we think, and the paradox of the internet only seizing our attention to scatter it (118).

Is the internet making us stupid?
It’s not all bad. There are skills that internet users develop that may actually bode well for our reading future. Carr explains that searching and browsing “strengthen brain functions related to certain kinds of fast-paced problem solving,” particularly those “involving the recognition of patterns” amid data and information overload (139). Internet users also become better at evaluating informational cues, such as “links, headlines, text snippets, and images,” to very quickly judge whether following a source will benefit the reader (Carr 139). Scanning and skimming are also useful abilities, but only insofar as they do not become “our dominant mode of reading” (Carr 138). Wolf understands that web reading will help us develop important skills, such as multitasking, and integrating and prioritizing vast amounts of information, and wonders if our accelerated intelligence via these new and faster intellectual capacities could in fact “allow us more time for reflection” (214).

Photo by Jason Devaun under CC-BY license

Carr defines the depth of our intelligence as a function of “our ability to transfer information from working memory to long-term memory and weave it into conceptual schemas” (124). But the problem with reading on the web is that our short-term memory is clogged with the overstimulation of links, photos, videos, and the like. The technology of media coupled with hypertext is defined as “hypermedia” (Carr 129). Educators have long accepted the notion of “the more inputs, the better” in regards to hypermedia, yet Carr says this notion has been completely contradicted by research (129). Having all these inputs simultaneously actually “further strains our cognitive abilities, diminishing our learning and weakening our understanding” (Carr 129). His deep fear is that if we cannot convert material from our working memory to our long-term understanding by engaging with it, “the information lasts only as long as the neurons that hold it maintain their electric charge—a few seconds at best” (193). Carr believes that if we cannot focus and consolidate our own knowledge from what we read online, we will use the internet “as a substitute for personal memory” (192). Wolf has similar fears; her concern with our use of the web’s immediate access to information is that it may cause our reading brains to be less developed in the “range of attentional, inferential, and reflective capabilities” that are associated with deep reading (214).

Making the brain better at reading online
Let’s face it: it’s highly unlikely that the web is going to become less distracting for the benefit of the worldwide reading brain. So what can we do to make sure that we get the most out of reading, both online and offline? We can make sure that we have a mix of print and online reading in our own lives, as Wolf recommends, in order to facilitate the processes of deep reading (Raphael). A skill we can take from expert readers is to annotate while we read. Konnikova notes that studies have suggested that annotation “helped improve comprehension and reading strategy use.” Hillesund explains in his study that highlighting, taking notes, and annotating helped expert readers in four ways: it helps “slow down the pace of reading,” improves overall comprehension, makes “visible relevant connections,” and gives a useful path for the “re-reading of passages.”

The reading studies cited focused on an older demographic, and agreed that more research will need to be done on the newest generation that has grown up with technology in order to better understand their reading habits and techniques for learning (Liu 710). Wolf worries that the next generation may never become expert readers and instead may be “a society of decoders… whose false sense of knowing distracts them from a deeper development of their intellectual potential” (226). Wolf recommends that teachers, parents, and guardians ensure that kids are “taking some time away from scattered reading” in order to develop deep reading skills, as well as providing “explicit instruction for reading multiple modalities of text presentation… [so] that our children learn multiple ways of processing information” (Raphael, Wolf 16). Konnikova advises that “Not only should digital reading be introduced more slowly into the curriculum; it also should be integrated with the more immersive reading skills that deeper comprehension requires.”

The best way to get the most out of your everyday online and offline reading is to give yourself more time. If you do that, you can follow web links and make your own links inside your brain. You should also be aware of your own multitasking behaviours. The allure of multitasking disrupts deep reading, so you have to teach yourself to focus first and foremost. Instead of skimming as fast as possible, Wolf reminds us that we need to “find the ability to pause and pull back from what seems to be developing into an incessant need to fill every millisecond with new information” (Wolf). The good news is that our brains are adaptable (hence “neuroplasticity”), so we can learn (and relearn) the skills of deep reading. Wolf believes that our ultimate goal as a reading public is to develop “a discerning bi-literate brain,” whereby our brains can recognize when we need to skim and when we need to read deeply, and have the wisdom to know when to use each mode (Raphael).


Works Cited:

Carr, Nicholas. The Shallows: What the Internet is Doing to Our Brains. New York: W. W. Norton & Co., 2010. Print.

Hillesund, Terje. “Digital Reading Spaces: How Expert Readers Handle Books, the Web and Electronic Paper.” First Monday 15.4 (5 April 2010). Web. 9 Feb. 2016.

Konnikova, Maria. “Being a Better Online Reader.” The New Yorker. The New Yorker, 16 July 2014. Web. 9 Feb. 2016.

Liu, Ziming. “Reading Behaviour in the Digital Environment: Changes in Reading Behaviour Over the Past Ten Years.” Journal of Documentation 61.6 (2005): 700-712. Web. 9 Feb. 2016.

Manjoo, Farhad. “You Won’t Finish This Article: Why People Online Don’t Read to the End.” Slate. Slate, 6 June 2013. Web. 9 Feb. 2016.

Meeker, Mary. “Internet Trends 2015 – Code Conference.” Kleiner Perkins Caufield Byers. 27 May 2015: 1-196. Web. 9 Feb. 2016.

Raphael, T.J. “Your Paper Brain and Your Kindle Brain Aren’t the Same Thing.” Public Radio International. PRI, 18 Sept. 2014. Web. 9 Feb. 2016.

Subrahmanyam, Kaveri, et al. “Learning from Paper, Learning from Screens: Impact of Screen Reading and Multitasking Conditions on Reading and Writing among College Students.” International Journal of Cyber Behaviour 3.4 (2013): 1-27. Web. 9 Feb. 2016.

Wolf, Maryanne. “Our ‘Deep Reading’ Brain: Its Digital Evolution Poses Questions.” Nieman Reports. Nieman Reports, 29 June 2010. Web. 9 Feb. 2016.

—. Proust and the Squid: The Story and Science of the Reading Brain. New York: HarperCollins, 2007. Print.

White, Western and Male: How Wikipedia fails to deliver on the promise of knowledge by all, for all


Wikipedia is the keystone project of the Wikimedia Foundation, a non-profit organisation whose mission is to “share the sum of all knowledge with every person in the world” and to “help bring new knowledge online, lower barriers to access, and make it easier for everyone to share what they know” (1). While the Foundation actively runs a dozen free knowledge projects, Wikipedia is by far the most prominent and well known of them. It consistently ranks as one of the most popular websites in the world (2), and is a remarkable anomaly as a non-profit in the company of multi-billion dollar technology giants. Much of this status comes from Wikipedia’s unchallenged dominance in the online encyclopaedia realm, with traditionally published encyclopaedias being unable to match its free proposition. And while it does solicit and receive considerable donations, the key to its ability to provide such a service free of charge is an estimated 100 million hours of unpaid work done by volunteers to create, edit and maintain articles. With 35 million articles in 290 languages, the economics of the project would simply be unfeasible without volunteer labour.

Click here for a visualisation of Wikipedia’s monthly page views over time

However, this system of crowd-sourced labour in its current state is a barrier to Wikipedia’s stated goal of building free encyclopaedias of neutral, cited information in all languages of the world, at the same time as being the only way to achieve it. According to Wikipedia’s own survey of its volunteer editors in 2011, they are overwhelmingly male, concentrated in North America and Europe, and 76 percent of them edit in the English Wikipedia (3). While this survey is now several years old, the percentage of female editors is still commonly estimated to be around 10-12 percent, although some studies put the number closer to 15-20 percent (4). Of members with more than 500 edits, however, only six percent are female (5). Furthermore, while the number of non-English articles has grown, the breakdown of the 290 languages represented is still weighted heavily in favour of the English and European markets (6):


This breakdown of editors clearly illustrates that Wikipedia’s workforce is overwhelmingly Western and male, and while no data exist on editor ethnicities, it is not a stretch to add that they are most likely overwhelmingly white as well. And in the pursuit of a collection of the world’s cumulative knowledge, the absence of voices from outside this narrow slice of the global population is significant.

The counterargument to this would be that Wikipedia mandates neutral, cited articles, in which case the profile of the majority of editors should not be important. The ten rules for editing and the five pillars of the project repeatedly emphasise neutrality in writing and that the site is not a platform for opinion or promotion. What is more, a large portion of editing is conducted by bots (7), including ClueBot NG, which detects vandalism with up to 90 percent accuracy, and others that perform useful, if more mundane, tasks like automatically tidying up categories, fixing links or correcting common misspellings. There is also COIBot, which reports potential conflicts of interest where account or usernames overlap with the subject of the article being edited. Considering the difficulty of ensuring impartiality and objectivity in an environment where anyone can edit a page, such a bot is important. Other attempts to monitor conflicts of interest have also appeared, such as @congress-edits, a Twitter bot that tweets when anonymous edits are made from IP addresses in the US Congress. Many of these edits turn out to be corrections of minor errors or misspellings (8), but the oversight acts as a preventative measure. However, active conflicts of interest are only the most obvious way in which objectivity can be violated, and bots can only do so much. With the production of content still entirely reliant on active participants, what pages are created and developed depends very much on the interests and expertise of the editors.
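The core idea behind a monitor like @congress-edits is simple: anonymous Wikipedia edits record the editor’s IP address, so an edit can be flagged whenever that address falls in a watched range. Below is a minimal sketch of that check in Python; the IP range, the `is_congress_edit` function, and the edit-record format are illustrative assumptions, not the actual bot’s code.

```python
import ipaddress

# Illustrative watched range only -- the real bot used the actual
# published ranges for the US Congress.
WATCHED_RANGES = [ipaddress.ip_network("143.231.0.0/16")]

def is_congress_edit(edit):
    """Return True if an anonymous edit's IP falls in a watched range."""
    if not edit.get("anonymous"):
        return False  # logged-in edits don't expose an IP
    # Anonymous edits record the IP address as the username.
    ip = ipaddress.ip_address(edit["user"])
    return any(ip in net for net in WATCHED_RANGES)

# An anonymous edit from inside the watched range is flagged.
edit = {"anonymous": True, "user": "143.231.10.5", "title": "Example article"}
print(is_congress_edit(edit))  # prints True
```

In practice such a bot would feed records like these from Wikipedia’s public recent-changes stream and post each flagged edit to Twitter; the range check above is the whole of the “conflict of interest” heuristic.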

Here, a participant base weighted heavily towards a certain segment of the population begins to have an impact. The first of these traits – Western – is in some ways not difficult to address given time. The pattern of distribution of editors globally largely matches the distribution of internet access and uptake. The result is clear in the numbers, with English (including North America and the Commonwealth) and European articles making up the vast majority of the total, and, by language, representing nine of the thirteen Wikipedias with over a million articles each. This means significantly more articles about Western issues and events, more articles written from a Western perspective, and more influence on Wikipedia policy and the community from Western editors. A 2011 study from the University of Oxford found that 84 percent of entries tagged with a location were about Europe or North America, and Antarctica had more entries than any nation in Africa or South America (9). However, the remaining four one-million-plus Wikipedias show the growth in uptake in Asia, with Japanese, Vietnamese and two Filipino languages represented. And while the English Wikipedia remains nearly three times as big as the next largest (German), the percentage of all articles that are in English and in the ten largest Wikipedias has dropped steadily over time.


While this data does not go beyond 2008, it has also been noted that new language Wikipedias seem to follow the same pattern of growth that English did (CITE), suggesting that the trend may continue as more languages are added and grow. While the English/Western dominance looks sure to continue for some time, as internet access spreads and improves globally it is feasible that Wikipedia’s geographic diversity will match it in progress. The Foundation is even engaging directly in the effort to expand access globally with its Wikipedia Zero initiative, which allows access to Wikipedia via mobile data, without incurring any charges, in 64 countries, mainly in the Global South.

Click here for an animated graph of Wikipedia’s growth

However, what this fails to address is the silo effect that is created when a group writes only for its own members. While the individual sections may grow, the information each language group can access will be limited. Many browsers do offer increasingly accurate web page translation, but discovery is severely limited for a user who does not speak the language in question. In this one can see a flaw in the Wikipedia mission, as simply collating the world’s knowledge is not the same as making it accessible to everyone. It could be argued that the English Wikipedia’s size counters this somewhat, as English is a common second language globally, making it the default version, accessible to most, if not all. Indeed, the infographic below demonstrates how much more popular the English version is than any other. However, the Western bias, and even a North American bias, is clear in the selection of articles and likely in the content, so it remains an inaccurate depiction of the world’s knowledge to whoever reads it. What is more, there are considerable drawbacks for native English speakers, who are far less likely than others to learn another language, as there is a very real likelihood that they would not even recognise that the information they were getting could be biased or incomplete. A collection of over five million articles (and counting) gives a convincing impression of being comprehensive, and there is considerable trust placed in an encyclopaedia, online or otherwise.


The effect of gender representation is in many ways more subtle, and more difficult to address. The 80-90 percent male editing force has resulted in some notable effects on the kinds of subjects that are added and developed, as well as creating more obvious controversies. Generally, the male bias has led to articles on typically male ventures like Pokemon or WWE (the fourth most contested article on the English Wikipedia) being given an enormous amount of attention, and articles about women, of specific interest to women, or about aspects of culture traditionally ascribed as feminine being absent or neglected (10). Founder Jimmy Wales has used the example of Kate Middleton’s wedding dress to show how the subject of an article can highlight the gender divide. In a speech to Wikimania he pointed out that while there are over a hundred articles on different Linux distributions, indicative of the influence of a male, tech-heavy community, a new article about Middleton’s dress was immediately flagged for deletion, with responses ridiculing it as trivial (11). Research has also shown that articles worked on by predominantly female editors, which were presumably of interest to female readers, were significantly shorter than those edited by mostly male or an equal mix of editors (12). It is also important to consider that studies such as this only use article length as an indicator, as analysing how much these trends are replicated in article content is difficult. As the recent OED controversy illustrated, however, biases can be insidious and go unnoticed for long periods of time, so it is likely that there are many instances of embedded gender prejudices throughout Wikipedia’s millions of articles, being read by an audience who look to the source as an authority.
This issue is discussed by philosopher Martin Cohen, who says that “all the prejudices and ignorance of its creators are imposed” on the content, and that at the time of writing (2008) the articles that had earned a ‘bronze star’ for being accurate, neutral and complete made up only 0.01% of the total. The number has not grown in the eight years since.

There have also been several far less subtle demonstrations of the gender divide, none more prominent than the GamerGate episode. In short, a female video game developer, Zoe Quinn, released an interactive fiction game based on her experiences with depression, and was immediately met with threats and harassment from the gaming community, including doxing that put her phone number and address online. The attacks escalated when an ex-boyfriend wrote a blog post claiming that Quinn had cheated on him with several people, including a journalist who had written about the game. That journalist was quick to point out that he had not reviewed the game, merely reported that it existed, but the story evolved into one purportedly about ethics in gaming journalism, while being vitriolic in its treatment of Quinn and women in gaming generally.

While there were layers of controversy throughout the story as it spilled into social media and other characters became involved, Wikipedia was a battleground between the two sides from early on. After a lengthy edit war over the GamerGate page, the issue went to Wikipedia’s highest arbitration committee, ArbCom, where eleven male and three female members ruled to ban five prominent feminist editors from editing either the GamerGate page or any other article about “gender or sexuality, broadly construed” (13). The breadth of the sanctions was widely criticised for leaving not only the original page but those of the people involved, Quinn and others, open to editing by their critics. The only accounts suspended on the opposing side of the issue were throwaways. Regardless of whether the ArbCom decision was justified, Gawker stated that at the very least “the episode punches a neat hole in the idea that Wikipedia is a neutral and democratic platform” but beyond that, “that the world’s seventh-most popular website would look at Gamergate and decide that what’s needed is a silencing of feminist perspectives is depressing, but it’s hardly surprising.”

This criticism is not the first the Foundation has faced around the gender gap and its impact, but the senior executives acknowledge it freely, as evidenced by Wales’ speech, and are making efforts to address it. Sue Gardner, a former executive director of the Wikimedia Foundation, set a goal in 2011 to raise the proportion of female editors to 25 percent by 2015, and numerous projects have been introduced to try to help. On the English Wikipedia these included a gender gap taskforce to help recruit and retain female editors, the Inspire Campaign grant funding, and projects like WikiProject Feminism, WikiProject Women’s History, WikiProject Women scientists, and WikiProject Women’s sport, designed to expand the article entries in under-developed areas. Efforts have also been made to redesign the user interface to make it more approachable, but this was met with stiff resistance from the established community when it was made the default, so it was eventually retained only as an opt-in option that is difficult for newcomers to find (14). Other external efforts include edit-a-thons, where both experienced and novice female editors arrange meet-ups and edit together, offering tutorial sessions, research support and guidance. Unfortunately, these have faced trouble from the community as well: at one event that used Smithsonian archives to create new articles on little-known female historical figures, two of the pages created were quickly flagged for deletion and subjected to debate (15). There is no evidence that these efforts have pushed female editor representation even close to the 25% mark, and the initiatives meet with resistance from the male core of editors at every turn. Jimmy Wales admitted in 2014 that the effort had ‘completely failed’. Representation remains around 10-15 percent according to most sources, and the environment remains toxic for many of the female editors who do participate, with stories of harassment commonplace.

What has been presented here is only an overview of the deeply entrenched issues of representation in the Wikipedia editing ecosystem. The Western bias is overwhelming in sheer numbers and undoubtedly affects content. Women remain isolated and excluded by hostility, harassment and a system designed by and for a community that does not want to include them (16). While ethnicity has not been explored here, many of the same issues of underrepresentation exist for minority groups. Entries on Eric Garner, who was choked to death by an NYPD officer in 2014, and other victims of police violence have been edited from an NYPD IP address to appear less inflammatory (17), and articles on black history and culture are absent or underdeveloped (18). The small efforts that are made to improve the situation, while valuable, face an enormous uphill struggle and are actively resisted by members of dominant groups. The problem threatens to completely undermine Wikipedia’s goal of collecting and sharing the world’s knowledge, and it already undermines its credibility as a neutral source of information. And yet the illusion of neutrality remains convincing, as internal conflicts remain invisible to the general public, and its authority only grows as more and more people and programs draw their information from within its pages. Ultimately, it is clear that while theoretically comprehensive and open to everyone, Wikipedia is an online microcosm of the restricted access, participation and representation that has existed throughout history. It does not, in its current form, look likely to meet the goal it has set for itself of knowledge for all, by all.

Pull vs. Push: An Online Model

The problem of discoverability is an extremely topical one, and one that I have put a lot of thought into since the start of this program. The disappearance of physical shelf space has publishers concerned enough as is. It means they have to find alternate avenues in which to sell their books. A few popular ones include clothing and specialty boutiques, food stores, and transportation-related shops like ferries and Greyhound stations. That being said, this model proves tricky. Consumers are used to buying books in traditional bookstores, as they are used to buying food in supermarkets and clothes in shopping malls. Consumers have shopping habits; they don’t want to go to seven different stores to find one title. However, this isn’t necessarily a bad thing; it just changes the game. Book buying becomes a game of stumbling upon a title rather than seeking one out. David Steinberger, president of Perseus Books, mentions in the article “Pull vs. Push” that the new publishing business model means publishers are no longer pushing books; rather, consumers are pulling them. While traditional bookstore shelf space may be disappearing, there is no shortage of shelf space as a general entity. But can the same be done on the web?


Events and the idea of a community are a great starting point. While a Simon and Schuster event might not attract as many guests as a Condé Nast or a Wired event, one of its authors may do so very successfully. Moreover, online communities are real, tangible spaces. These events, to a certain extent, already exist. The blogosphere, for instance, is a prime example; it serves as a hub for the online community. An i-Scoop article entitled “Online Communities and Social Communities: A Primer” addresses this point admirably. The article suggests that it is in people’s nature to build community. As such, they will seek one out regardless of the medium available. This has proved true, with online social communities forming around an array of social and professional interests (e.g. Facebook, Twitter, LinkedIn): “Community is a natural phenomenon, a mindset and a way of engagement. It is also the essence of social business. Communities of people have always existed and online communities existed long before we even used blogs. Social communities are online communities using social platforms.” An online community shares all the same components as a “real-life” community.


Because of this, the same can indeed be done on the web. A community devoted to certain content likely already exists. It becomes the publisher’s undertaking to seek it out and to then supply readers with content when they want it. These communities can prove beneficial in building readership.


Finally, one part in “Pull vs. Push” reads, “Many panellists felt that the best hope of introducing content to people was making it easy for existing customers to share with their networks of friends.” The article lists Goodreads and Shelfari as examples. While I understand the author’s intent, I personally don’t have any friends (aside from those in the MPub program) who use Goodreads as a source of book reviews. It hasn’t taken off in the same way as, for instance, Rotten Tomatoes has for movie reviews. I would argue that this is because a community has yet to fully develop around Goodreads. While Facebook may not be the answer, another online community likely is. In the same way that authors use Twitter to communicate with readers, publishers need an online community to do the same. Groups and communities, particularly as they relate to word-of-mouth, are key in creating anticipatory buzz and buzz in general for books.

Logging the Web: The Evolution of Blogging Platforms

No Middle Man Needed

Publishing, though arguably one of the oldest art forms in existence, has evolved dramatically in the recent past. While the web meant that publishers were unwise if they failed to consider an online presence, this online presence has more recently taken on a life of its own. Digital publishing constitutes an enormous part of the publishing world to the point that some publishers only publish online. What’s more, this has provided people with an outlet for self-publishing on the Internet; there is no longer a need for a middle man, i.e. the publisher.


It can be said that each new social media platform is created when a developer notices for themselves, or hears from someone else about, a flaw in the previous platform that can be improved. This trend and natural progression transcends the social world and applies directly to blogging platforms. Developers use the last platform as a basis for a new one, building on what works and removing what doesn’t.


The Early Days of Blogging (Weblogs)

Blogs have become an integral part of online culture. It is probable that everyone reads some form of blog, whether popular or lesser-known. Most Google searches will lead to a blog post on the topic, which generally stems from either opinion or research. Blogging platforms have not attracted the same name recognition as social platforms have. While everyone knows Facebook, Twitter, and Instagram, including which is trendiest at any given moment, not many keep tabs on Links.net, LiveJournal, and Blogger. Having said that, the first popular blog host was in fact Links.net, launched in January 1994. (MySpace was only founded in 2003.) Justin Hall, a student at Swarthmore College in Pennsylvania, started a webpage that had essentially the same form as what we have come to know as the blog. Hall’s blog remains active to this day; its format has not changed. His first entry read, “Howdy, this is twenty-first century computing… (Is it worth our patience?) I’m publishing this, and I guess you’re readin’ this, in part to figure that out, huh?” Hall’s mention of the word “publishing” has proven very telling of the present-day publishing industry.



By 1997, these personal homepages were increasing in popularity. The term weblog was finally coined by Jorn Barger, who ran Robot Wisdom, one of the web’s earliest blogs. The term referenced the notion of keeping a regular record of various incidents in the form of an online post. The premise of Barger’s blog was to give readers access to topical links on the Internet; Barger also wrote his own posts. Though web historians consider Hall to be the first blogger ever, Barger pioneered the concept and brought about its popularity. Barger always had an interest in math and computer science. He bounced around from college to college, never actually getting a degree. He finally decided formal education was not for him and spent a number of years farming at a famous hippie commune in Tennessee. While this may seem irrelevant, he emerged with a deep interest in human behaviour. Given his abilities and the need for computer programmers in the eighties, Barger started working in this field. In 1989, he took on a post at Northwestern University as an artificial intelligence researcher. He found that human behaviour was best analyzed through computer simulation, hence his founding of Robot Wisdom.


Although it is important to make mention of Justin Hall and his personal webpage, Barger did not necessarily draw inspiration from him. His inspiration came from Usenet. Usenet became available to the public in 1980; Barger encountered it around the time he began his position at Northwestern University, which is also when he developed his interest in the web. Two graduate students at Duke University conceived the idea for Usenet. Its main goal was to serve as an online community and as a host for people’s ideas; it allowed for online discussion. It provided categories to which users subscribed. When a new article appeared, those subscribed to its category would have access to it. It was on Usenet that the launch of the World Wide Web was announced. This concept was of great interest to Barger, who saw it as an almost-ideal online community. (What it lacked was the ability to appeal to non-tech people.) Barger helped create some of the first forums in the arts category, on singer Kate Bush and writer James Joyce. He is credited with sparking some of the earliest web studies of Joyce. A 2011 James Joyce Broadsheet article entitled “Joyce Journals in Review” addresses the notion of literary education evolving as the web developed. The article says that in the last 15 years there has been a revolution in higher learning, largely attributed to the web: “There is another set of Joyce journals that exist only digitally. The survival of the digital journals is sometimes even more precarious than the paper variety.” (Lernout 2) The article mentions the survival of Barger’s “one-man journal” as the earliest online study of Joyce; it has since evolved and serves as a trustworthy and respected guide on the author.


That said, Barger saw an opportunity to create something more personal and accessible. He saw Usenet as useful, but also believed it would be a much more incredible resource if it could be accessed by people who weren’t necessarily tech-savvy. Thomas Mason and J. Kent Calder wrote a book entitled Writing Local History Today: A Guide to Researching, Publishing, and Marketing Your Book, which illustrates the importance of Barger’s work as an early developer in online publishing. They say blogs and digital writing became popular because of Robot Wisdom: “Jorn Barger, the proprietor of the Robot Wisdom Weblog, originated the notion in 1997, and the blog as personal journal became popular among young people working in technology companies in the late nineties. As blogs became more popular, software packages arose that greatly simplified the task of creating and maintaining a blog.” (Mason and Calder 69) What’s important to remember about Barger and Robot Wisdom is that he saw a model, one in which readers gathered as part of an online community to discuss concepts, that had immense potential to reach a wider audience. In essence, he developed the model known today as the blog from an idea that required some improvement. From this, millions were inspired to start their own blogs as a means of digital expression. Today, blogging has become a large part of the publishing industry.


Is Blogging Publishing?

Another question that can be raised is whether or not blogging actually constitutes a legitimate form of publishing. Evan Williams, an Internet entrepreneur who has founded multiple Internet companies, including Blogger and Medium, and served as CEO of Twitter, argues that the two entities are completely different. Williams co-founded Pyra Labs, which originally set out to build project-management software. From this came Blogger, a tool for creating and organizing blogs. Blogger proved useful for writers who had discovered the joy of sharing their content on the Internet; it served as a gateway to more powerful blogging platforms. Like Barger, Williams saw an improvement that could be made to a previous model – his own previous model – and in 2012 created Medium, a much more accessible publishing platform. In 2015, Williams re-evaluated his beliefs and wrote an article, on Medium, entitled “Medium is not a publishing tool.”


In his article, Williams states that the intent of Blogger was to publish online. Because it was a hosted service, making a change proved complicated; every user had to accept that change, and the platform’s large user base complicated this further. Williams later moved on to Twitter. While it had more users, they were less invested in the quality of the content, given the nature of its concept (a maximum of 140 characters). Twitter deliberately kept its features minimal as a means of simplifying the production process. Its simplicity meant it was easier for people to use, and as such, content had a higher chance of going viral. This is where Medium comes in. The platform was designed as a simple means of publishing good writing. Medium proved useful as a writing platform for people who wanted to share ideas but didn’t want the trouble of actually creating a blog: “It’s clear that there are many more people who occasionally have valuable perspectives to share than there are people who want to be ‘bloggers.’ These people love writing on Medium, even if they see it as just a tool to create a nice page to point people to from Twitter.” One element that has gotten a huge amount of attention is the highlight tool, which, according to Williams, allows people to make note of important aspects of the writing and comment. Medium, however, is an online community; it is a networking tool for writers. It is more than a publishing tool. Arguably the same can be said, to a certain extent, for all blogging platforms. Whereas a traditional publishing platform does not allow for conversation, the blog was created with exactly this in mind.


As these examples show, web developers and entrepreneurs use previous platforms – their own or those of others – as inspiration; they see an area for improvement and work from that. Williams developed Blogger and led Twitter, viewing both as publishing tools. But when does publishing become social?


Social Media as an Offshoot of Blogging and Digital Publishing

This trend of improving from one platform to another is logical, has proved immensely successful, and is seen across both web-based blogging platforms and social media tools. As users develop expectations, it becomes critical to stay somewhat consistent from platform to platform. Examples include Facebook photos giving way to Instagram, the iPhone’s front-facing camera to Snapchat, and dating websites to dating apps, namely Tinder. It should be noted that some of these stemmed from a desire to publish content.


Having said that, however, the first noticeable “improvement” was MySpace to Facebook. Adam Hartung, a Forbes regular contributor, wrote an article supporting this point. Hartung suggests that both platforms had the exact same audience (and even an almost identical concept) in mind, but Facebook simply did it better; it noticed what was wrong with MySpace and used that as the basis for a new platform. Moreover, while MySpace was meant as a sort of publishing platform for writing, photos, and general content, Facebook co-founder Mark Zuckerberg allowed his site to take whatever direction users wanted. This meant not excluding games, such as Farmville, which account for a large amount of the site’s success. Inspired by its users, Facebook grew to serve them: “The brilliance of Mark Zuckerberg was his willingness to allow Facebook to go wherever the market wanted it. Farmville and other social games – why not? Different ways to find potential friends – go for it. The founders kept pushing the technology to do anything users wanted. If you have an idea for networking on something, Facebook pushed its tech folks to make it happen. And they kept listening. And looking within the comments for what would be the next application – the next promotion – the next revision that would lead to more uses, more users and more growth.” This platform, inspired by MySpace (arguably a publishing platform), became a social network. It evolved into what users wanted, which turned out to be something greater than a publishing platform. Like MySpace, it hosts an enormous online community, but it does not serve the same purpose.


In conclusion, blogging platforms have evolved from one to the next. Blogs are central to online communities and developers, knowing this, have improved upon their predecessors’ platforms. As per user requests, some of these platforms have blended with the social world. Whether or not social constitutes publishing is up for debate. That said, given its origins, it is likely that it does.


Works Cited


Djuraskovic, Ogi. “How Jorn Barger Invented Blogging.” Free Resources Guides Help for Web Newbies. First Site Guide, 20 Mar. 2015. Web. 15 Feb. 2016. <http://firstsiteguide.com/robot-wisdom-and-jorn-barger/>.


Hartung, Adam. “How Facebook Beat MySpace.” Forbes. Forbes Magazine, 14 Jan. 2014. Web. 15 Feb. 2016. <http://www.forbes.com/sites/adamhartung/2011/01/14/why-facebook-beat-myspace/#566d3ad47023>.


Lernout, Geert. “Joyce Journals in Review.” James Joyce Broadsheet. Oct. 2011: 1-6. Digital.


Mason, Thomas A., and J. Kent Calder. Writing Local History Today: A Guide to Researching, Publishing, and Marketing Your Book. Lanham: AltaMira, 2013. Print.


Read, Ash. “The Unabridged History of Social Media.” Buffer Social. Buffer Social, 10 Nov. 2015. Web. 16 Feb. 2016. <https://blog.bufferapp.com/history-of-social-media>.


“The History of Social Networking.” Digital Trends. Digital Trends, 04 Aug. 2014. Web. 15 Feb. 2016. <http://www.digitaltrends.com/features/the-history-of-social-networking/>.


Williams, Evan. “Medium Is Not a Publishing Tool – The Story.” Medium. Medium, 20 May 2015. Web. 15 Feb. 2016. <https://medium.com/the-story/medium-is-not-a-publishing-tool-4c3c63fa41d2#.th7bkmmhg>.


Reading Response: Which Kind of Innovation?

Baldur Bjarnason’s article “Which Kind of Innovation?” gave a lot of credit to ebooks, in my opinion. But I think he was on the right track when he said that ebooks weren’t disruptive innovations. The problem I see is that ebooks need to be disruptive to the entirety of the industry if they are to be adopted with any sort of staying power.

Print books have been improved upon for more than 500 years. So in a way, it makes sense for ebooks to be modelled after the print formula. However, how can ebooks compete with paperback books—physical takeaways—when their prices differ by only $0.00 to $5.00? Ebooks must offer something more substantial and satisfying than print books if the industry wants to have them adopted by a wide audience. It is almost comical when Bjarnason comments, “Amazon’s Kindle format remains for all intents and purposes a 1990s technology.” In reality, ebooks are a digital facsimile of a book, for the most part. They are laid out similarly and I would argue that the Kindle format is a 1500s technology. But Bjarnason seems to be on to that as well as he says “[Fixed layout ebooks] contain… no innovative features to speak of, they are merely an accumulation of complex print-like cruft to aid the transition of illustrated or designed print books into digital.”

Projects such as The Pickle Index, with its web 2.0 storytelling integration that unfolds simultaneously in story-time and in real-time over ten days, “revealing the narrative through the various features of the app: popular vinegar-based recipes, daily news updates, dynamic maps, and Q&A,” offer a much more interesting way to grab readers and have them read digitally. In fact, it is at this point that I would actually refer to digital reading as an “innovation.” When Bjarnason calls ebooks a “sustaining innovation,” in the sense that they sustain what already exists in the publishing world, I think he is using an oxymoron. If they are sustaining a status quo, they are not creating innovation at all.

I think the thinking around creating ebooks needs a major shift. They cannot just be an afterthought, a digital book. There has to be something altogether different about them, a reason for people to choose them over print books. But when prices are comparable, there is no physical takeaway, and print books are better designed than ebooks, there is no real point in adopting them.

Publishing Brands

In response to “Sifting Through All These Books” by Hugh McGuire. 2010.


In the online blogging world, similar blogs are linked together by those who produce them for the purpose of discoverability. If you like one blog, you will probably also be interested in another blog that it links to, because the writer believes that the other blog is similar to theirs (or that it is of good quality, approaches interesting questions, talks about important subject matter, etc.). In order to reproduce this in the real-life publishing world, perhaps publishing houses should focus on promoting their brand as a whole.

Continue reading “Publishing Brands”

Pull vs. Push: Publishers Search for New Ways to Help Readers Discover Their Content—Reading Response

Discoverability is a fear that has long plagued publishers. When publishers rejected Dickens’ A Christmas Carol, they feared for its discoverability. At a time when the very idea was considered akin to paganism, they couldn’t imagine any supporters for the book. And yet it went on to be a bestseller! With numerous reprints and movie editions, the book symbolizes the spirit and emotions that lie at the heart of Christmas to this day!

Before the emergence of the World Wide Web, mass communication was a risky venture—be it book publishing or movie making. With the web connecting the entire world under one roof, accessibility increased. As Rajiv Jain, chief technology officer of photo-marketing site Corbis, says in the article: “Discoverability has always been an issue, but there’s now infinite shelf space.”

Social media platforms emerged to compartmentalize the shelf space quite strategically. A publisher can now tweet, blog, Instagram, Facebook, YouTube (book trailers), Pinterest, and Tumblr its books! Add to the pile discoverability giants such as Goodreads, and publishers should be able to sleep easy. Yet what was yesterday the publisher’s venture has today been reduced to a reader’s fancy.

A tweet from the publisher about the latest shades of grey would hardly make a difference if 5,475 readers did not choose to retweet it. A YouTube trailer for The One by Kiera Cass—the latest YA epic romance—would not make its presence felt beyond its digital space if 7,804 fans were not to “like”, “share”, and “comment” on the trailer! 899 “repins” of “series for fans of The Hunger Games” on Pinterest helped me discover the next dystopian trilogy I wanted to read. Pinterest even uses SEO discoverability, such as linking to the mother of all search giants, Google! Imagine your book being a part of “The 10 trendiest books to read this summer”—blogged by a reader who fancied it. Or simply log onto Goodreads and see readers, authors, friends, and foes working their magic alike. The 5th Wave, the first in a post-apocalyptic dystopian trilogy written by Rick Yancey and now a major motion picture, has 160,754 ratings! Who published it again?

The article states that “content discoverability is vital to keeping publishers relevant”, but also acknowledges how relevant it is for readers to know what their friends are reading or to quote content directly from books they have read. A publisher must be on constant lookout for new ways to keep readers engaged, because this is no longer about the book of the season, or about the editor’s picks. This is a slowly unwinding movie about a publisher relinquishing control. It is more about what a reader thinks is worth reading, and if his or her opinion catches the fancy of the world, then Fifty Shades of Grey is bound to happen.

Publishing has always been about control. The question today is who controls whom?

The Evolving Force of Censorship In a Social Media Landscape

The prevalence of social media and online communities as platforms for expressing one’s personal thoughts, views, and opinions has reshaped the way we share, monitor, and regulate these expressions. In less than a generation, the meaning of censorship, and how we address it, has evolved into a multifaceted creature, and a challenging one to capture. Censorship in the print world has by no means diminished, but we see the rise of a greater, less definable counterpart in the online world. What censorship is, and, equally importantly, who has a right to enforce it, is more challenging to grasp than ever before.

Continue reading “The Evolving Force of Censorship In a Social Media Landscape”

The Case for Interactive Children’s Books

Children today are more adept at technology than their parents, having grown up with devices they have been exposed to since birth. Much is written on the dangers of screen time for children; however, in reaction to the reality of almost constant and unavoidable media exposure, research bodies have reevaluated this stance. With children having access to books in the palm of their hand, there are an increasing number of ways in which publishers can appeal to a new generation of readers. Interactive reading can change the manner in which children learn and is something publishers must fully take advantage of in order to build new readers from a young age upwards. Following existing research, “digital reading” in this essay will refer to the reading of fiction, non-fiction, and news on portable devices, both within the classroom and at home.

Continue reading “The Case for Interactive Children’s Books”

Participating On The Internet: The Most Unfair Playground In The World

Just about anyone can create and share content on the Internet nowadays. For the average millennial (lurker and/or troll) who engages with the Internet without giving it a second thought, this raises the question: should we be more conscious of what information we are consuming, and who is putting it out? Continue reading “Participating On The Internet: The Most Unfair Playground In The World”

Great text transcends nothing — Reading Response

Bjarnason’s article weighs the shortcomings of the print and ebook forms against each other quite beautifully. With the ebook the likely loser of the contest, Bjarnason nevertheless has the sagacity not to dismiss the digital form outright: “Ebooks, quite simply, have to improve.” I agree with that, but I also argue that great text has the power to transcend its borders.

While Bjarnason proposes that a beautifully designed, typeset, and packaged book brings joy to the beholder, an ebook is not without its own attractions. Its permanence, for one, is untouchable. Its contents are fluid and transferable; the ebook itself is light and portable, one ereader holding the mobile equivalent of your entire bookshelf. A physical book, by contrast, is restricted by its physical dimensions, not to mention its lack of portability. Its packaged beauty is its vulnerability and its curse.

Ebooks, as Bjarnason points out, look worse than their trade paperback or hardcover counterparts, but he also argues that this is simply a natural corollary of the fact that print and digital are two very different media. For example, choices of typography in the ebook might be restricted, but that is a conscious decision of ebook makers. Some studies say the eye generally prefers sans serif fonts when reading on screen, but ebooks are still a growth area for fonts and typography. In the article “Font Swap in iBooks,” Glenn Fleishman says Apple shipped iBooks for the iPad in 2010 with five font choices: Baskerville, Cochin, Palatino, Times New Roman, and Verdana. The latest iBooks version retains only Times New Roman of that lot, with seven new fonts added, most of them serif. Clearly, Apple is still experimenting with its reader.

The biggest boon of ebooks has perhaps been in the self-publishing field. In fact, the past and the present can be divided into two universes—one with Hugh Howey’s Wool and one without. Having written Wool as a stand-alone short story, Howey self-published it via Amazon’s Kindle Direct Publishing system in July of 2011. In “Wool by Hugh Howey – a review,” Flood says, “By October, readers were clamouring for more, and he duly obliged. His novel now runs to over 500 pages and has hit US bestseller lists, with book deals on both sides of the Atlantic, and film rights picked up by Ridley Scott.” The self-evident fact here is that had the poor looks of a Kindle reader—as opposed to its beautifully designed and glossed nemesis—been a deterrent, Howey would not have become the sensation that he is today. He has inspired thousands more to join him in the self-publishing movement—all made possible by the emergence of the Kindle.

Then there are the kinds of books that teach, and make for more than interesting reads. These books already have a bright future in digital form. With embedded media and other software features, their interactive worth goes far beyond what a print book can achieve.

Arion Press in the USA is considered “the nation’s leading publisher of fine-press books.” As Suich points out in “Essay – The Future of the Book,” “its two-volume Don Quixote with goatskin binding and lush illustrations sets readers back a bit more than $4,000.” For the novel widely credited with founding modern Western literature, a $4,000 price can only stand in the way of its worth—how many of us can afford an edition that expensive? Reading the text of this great “canonical” book becomes the main objective, not the form it comes in, and for those looking for less costly versions, the cheaper the better. After all, Allen Lane, the late founder of Penguin, devised paperbacks not so much to give joy to their beholders as to make books cheap enough for the masses to read. So isn’t the ebook simply carrying on that tradition?

The ebook might not yet do justice to the 500-year-old tradition of the print book as an art or craft or even as a precious object in people’s lives, but then again the digital revolution has only just begun. There is space and immense potential for improvement. And as far as the text of the book is concerned, the ebook is already doing its job—and doing it well. Ask Hugh Howey!

Information Sharing Online and in Coffeehouses: Gatekeepers and Social Discourse

Information sharing today has reached an unprecedented peak. Higher literacy rates, the accessibility of the Internet, and the sheer number of pages online, including blogs, comments, and profile pages, contribute to an endless stream of information that must be sorted through in order to be understood. What, then, are the side effects of the ways users sort through content? By examining the social changes in information sharing during the Age of Enlightenment and comparing them to the challenges of sharing knowledge on a website such as Facebook, this essay will argue that while algorithms are useful for managing the expansive amount of information on the web, they ultimately lead to a less knowledgeable, less informed online community. It will examine how the Age of Enlightenment thrived where the Internet, despite its potential for progressiveness and innovation, is failing.

The Age of Enlightenment was a period in eighteenth-century Europe marked by a movement against the then-current state of society, including church and government. In pre-Enlightenment Europe, “individual dignity and liberty, privacy, and human and civic rights… [were] virtually nonexistent… ‘burned and buried’ in medieval society and pre-Enlightenment traditionalism” (Zafirovski 9). This illustrates the church and state’s role as gatekeepers of knowledge, allowing society access only to what they deemed appropriate. Zafirovski states that during the Enlightenment, “Descartes, Voltaire, Diderot, Kant, Hume, Condorcet, and others emphasized overcoming ignorance and intellectual immaturity, including religious and other superstition and prejudice” (4). He is referring to the major thinkers of the time, those who wrote public essays on the tenets of enlightenment and reason. It was the age in which past ideals were rejected in order to champion the concept of individual thought and voice. It was not a period of “anti-” religion or state, but of individual liberty and of pushing against absolutism. During this time, the Encyclopédie was published, disseminating the thoughts of the Enlightenment. Diderot, the editor of the project, is quoted as saying that the goal of the Encyclopédie was to “change the way people think” (“Encyclopédie”). During the Enlightenment, the opinions of those who wanted to remain within the norms of pre-Enlightenment society existed alongside the dissertations of those who proclaimed it was time for change: “The inner logic, essential process, and ultimate outcome of the Enlightenment are the destruction of old oppressive, theocratic, irrational, and inhuman social values and institutions, and the creation of new democratic, secular, rational, and humane ones through human reason” (Zafirovski 7).
The pre-Enlightenment mode of thinking had to come first: the prominent thinkers emerged from a society whose rules they did not relate to. In other words, they had to know the culture they were living in very deeply in order to argue strongly against it.

As stated previously in regard to the Encyclopédie, the dissemination of knowledge was paramount during the Enlightenment. For the sake of this paper, the major sources of knowledge-spread are reduced to two: book publishing, and the salons and coffeehouses. As Martin Luther’s Ninety-Five Theses illustrated much earlier, the ability to spread printed information became much simpler and more efficient with Johannes Gutenberg’s invention of movable type. Prior to this invention, religious scribes wrote out by hand all of the books that were available. Because this was such a labour-intensive process and paper was handmade, books were very expensive. As time went on, the efficiency of the printing press grew, especially with the beginning of the Industrial Revolution. This meant lower prices and therefore greater availability; in turn, literacy grew. The low cost also allowed journals, books, newspapers, and pamphlets to spread more widely (“Age of Enlightenment”). More people could engage with texts because of higher literacy rates and the growing number of texts available. Once articles, essays, and books were read, they were discussed in coffeehouses and salons, where both men and women could meet to debate the ideas of the time. This created a social environment that was a catalyst for new philosophies. In fact, the idea for the Encyclopédie was conceived at the Café Procope, one of the Paris coffeehouses still operating today (“Age of Enlightenment”). Furthermore, because anyone could come to discuss politics and philosophy, these spaces undermined the existing class structure, allowing for multiple perspectives in one place.

At its introduction, no one could have predicted how ingrained the open public Internet would become in human society and culture. Douglas Comer considers the rapid growth of the Internet to be a result of its decentralization and the “non-proprietary nature of internet-protocols” (qtd. in “Internet”). As the Internet became popular, the speed of information growth was unprecedented. New websites with personalized homepages and links emerged as people began to explore the World Wide Web. Today, sites such as Facebook act as home websites, replacing the “homepages” of before. As shown in “The Rise of Homeless Media,” this is beginning to replace the old ways of the web. Facebook has become a much bigger entity than its developers imagined at its conception. While this change may mean the web is becoming streamlined, it comes at a cost of control for users. In the ’90s and early 2000s, popular free web-hosting services provided a very personalized experience. Sites such as Angelfire, Freewebs, LiveJournal, and DiaryLand relied on subscribers and ads in order to run freely while allowing users to personalize their content, with the exception of ad placement for non-subscribers. Personalization occurred through writing code such as HTML. Furthermore, serious bloggers acted as catalysts for other voices, creating communities in which readers were linked to other bloggers and informative sites of related ideologies and/or topics. For instance, Mike Shatzkin’s The Shatzkin Files hyperlinks to other sites that may interest a reader of that particular subject. Though it is a fairly recent blog, its design is basic, reminiscent of much earlier blogging interfaces. Today, blogs are increasingly popular and come with pre-made themes, making coding unnecessary, although still possible, on platforms such as WordPress.
On Facebook, however, users cannot change the style of their page. This control of style is one way the web is becoming more streamlined. The primary benefit of living on a home website such as Facebook, Twitter, Instagram, or LinkedIn is accessibility. Each site has its own niche purpose, and learning to code is not a prerequisite for running these pages. One simply needs to know how to link the various pages properly to allow integrated movement across platforms. Because users do not need to understand code in order to have a profile on these websites, their user base is much larger. This is comparable to the accessibility of literature in the eighteenth century, which made reading a pastime for more than just an educated elite.

This ease of use has led to a global reach of perspectives. In this sense, the age of the Internet can be compared with the Age of Enlightenment in that the proliferation of knowledge is now much easier than in the past. Today, over one billion pages exist on the web (Woollaston). The billions of people using the web have access to a multitude of differing perspectives and insights (“Internet Usage in the World by Regions”). Though this has the potential for tension, it has been shown to help develop critical thinking and empathy. In the article “How Diversity Makes Us Smarter,” Katherine Phillips states, “social diversity… can cause discomfort, rougher interactions, a lack of trust, greater perceived interpersonal conflict, lower communication, less cohesion, more concern about disrespect, and other problems.” However, being confronted with these problems and having to mediate across diversity enhances creativity, “leading to better decision making and problem solving” (Phillips). Thus, diversity creates adversity, but produces good results when people are encouraged to consider other people’s perspectives. Our minds are prompted to work harder when disagreement arises from social differences; a difference in perspective “[encourages] the consideration of alternatives” (Phillips). The article, published in Scientific American, puts words to a phenomenon being studied by a multitude of researchers, including “organizational scientists, psychologists, sociologists, economists and demographers” (Phillips). It illustrates why salons and coffeehouses were so important as places to spark conversation. They were hubs of discourse that generated innovative ideas and ideologies, sometimes for pleasure, but at other times to create planned social movements such as those that led to the French Revolution. Similarly, the web provides an outlet for people to create discourse.
Though not a physical space like salons, the web allows for a greater global discourse to occur; it should be the perfect platform for our globally-social world.

The most popular social network today is Facebook (“Leading Social Networks Worldwide as of January 2016, Ranked by Number of Active Users [in millions]”), with approximately 1.59 billion monthly active users (“Number of Monthly Active Facebook Users Worldwide as of 4th Quarter 2015 [in millions]”). Facebook is a platform on which users create profiles for personal or business use in order to connect with others. It also doubles as a publication platform, though Facebook would argue against this (Arthur and Kiss). Publishing is defined by the Oxford English Dictionary (OED) as “the act of making something publicly known” (“Publishing”). Users on Facebook create posts and share them with both strangers and friends, thus creating a public publishing platform. These posts and comments are as much a published form as blog posts or online fan fiction. They are documented proof of what has been said by whom; in fact, it is now possible to see the editing history of a single post or comment. Even if a post or comment is deleted, Facebook retains access to that content. Its Help Centre states, “When you choose to delete something you shared on Facebook, we remove it from the site. Some of this information is permanently deleted from our servers; however, some things can only be deleted when you permanently delete your account.” Thus, “deleted” content persists past the time the creator removes it; it remains available to some, still “published” on Facebook’s servers.

Facebook is a platform where unique content is created, in addition to being a site where users “share” and “like” content they deem relevant. This can result in a lively discourse of back-and-forth commenting, especially with the newer option for users to “reply” to previous comments. However, in order to find content that opposes one’s currently held views, one must often purposely seek it out. This is due to Facebook’s algorithms, largely invisible and secret to its users. Facebook created algorithms that filter its seemingly endless content into curated, personalized “news feeds.” An algorithm, as defined by the OED, is “a precisely defined set of mathematical or logical operations for the performance of a particular task” (“Algorithm”). As a business, Facebook succeeds at the task of retaining consumers; it delivers an appropriate amount of content to them. Where Facebook’s algorithms fail is in giving users unique content based not only on their specific “likes” but on their broader general interests. Furthermore, they are unsuccessful at providing “readers” with interesting and challenging content that opposes their currently held views. They are unable to show a snapshot of the multitude of voices that exist on the platform; instead, they amplify a user’s preconceived views and reinforce confirmation bias. Ultimately, Facebook is a business. The prediction algorithms that provide users with a personalized news feed are meant to generate a user-friendly experience; however, in doing so, computers become gatekeepers and users become confined to ideological bubbles.
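The feedback loop described above can be made concrete with a small sketch. The following Python snippet is entirely hypothetical (Facebook’s actual ranking system is proprietary and vastly more complex); it simply ranks a feed by a user’s past per-topic “likes” and shows how, under such a rule, minority interests vanish from a short feed altogether.

```python
from collections import Counter

def rank_feed(posts, like_history, top_n=3):
    """Rank posts by the user's past engagement with each topic.

    posts: list of (post_id, topic) tuples
    like_history: list of topics the user has previously liked
    Returns the top_n posts, highest predicted affinity first.
    """
    # Crude engagement model: raw count of past likes per topic.
    affinity = Counter(like_history)
    # Posts on never-liked topics score 0 and sink to the bottom.
    ranked = sorted(posts, key=lambda p: affinity[p[1]], reverse=True)
    return ranked[:top_n]

# A hypothetical user who has mostly liked liberal-leaning posts:
history = ["liberal"] * 8 + ["conservative"] * 2
posts = [(1, "liberal"), (2, "conservative"), (3, "liberal"),
         (4, "sports"), (5, "liberal")]

feed = rank_feed(posts, history)
print([topic for _, topic in feed])  # ['liberal', 'liberal', 'liberal']
```

Because the conservative and sports posts score lower than any liberal post, a three-slot feed never surfaces them; each round of engagement with such a feed then further skews the like history, which is precisely the self-reinforcing bubble Pariser describes.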

During the Age of Enlightenment, the book trade and the affordability of books allowed for the proliferation of new areas of thinking and novel philosophies. Censorship by the church and state was dying in favour of books that engaged readers and inspired discussion and debate. What made books and discourse interesting was not the sameness of opinion, but the diversity of opinions that grew louder during the eighteenth century. Facebook could have become a place of social diversity. Instead, its owners have engaged in gatekeeping and invisible editing in order to keep users returning to their site, at the expense of social and intellectual growth and change. The people who manage Facebook’s algorithms base many of them on “likes,” hidden posts, and the amount of time spent reading an article. “[Chris] Cox [Facebook’s chief product officer] and the other humans behind Facebook’s news feed decided that their ultimate goal would be to show people all the posts that really matter to them and none of the ones that don’t,” states Will Oremus in “Who Controls Your Facebook Feed.” However, humans are not as predictable as mathematical equations; using “likes” or time spent reading as a baseline for what is shown to people does not capture the whole complex picture of what human beings can, and should, engage with. In his TED Talk, Eli Pariser gives an example of algorithms attempting to understand a human being from these baselines alone. As a liberal, he engaged more often with progressive content, but he enjoys politics and likes reading about the conservative side of the political spectrum. He recognized he was engaging with right-wing content less often, and he was perceptive enough to notice when the conservative viewpoint disappeared from his feed, leaving only content from his liberal friends.
Opposing content, though interesting and necessary for Pariser, was gone, and he now had to seek it out. He had no active role in editing his news feed as content disappeared, and neither do other users. Yet most users do not notice the content shift happening; instead, they see their own views amplified. Before the Internet, broadcast and print media were the gatekeepers of information. The media is widely recognized as fallible, but journalistic ethics existed to promote multiple perspectives. The Internet undermined this old media as it expanded. Huge companies, such as Facebook and Google, grew, and computers have become the gatekeepers of information. Oremus states, “Facebook had become… the global newspaper of the 21st century: an up-to-the-minute feed of news, entertainment, and personal updates from friends and loved ones, automatically tailored to the specific interests of each individual user.” The idea of a platform presenting only one perspective to its readers, without an opposing opinion at arm’s reach as there is at a newspaper stand, is archaic considering the movements that have been made to prevent censorship, especially in regard to the importance of social diversity. Oremus’ article is informative and supportive of algorithms, yet he still laments, “Drowned out were substance, nuance, sadness, and anything that provoked thought or emotions beyond a simple thumbs-up.” In his TED Talk, Pariser recognizes the biggest issue at hand when computers control the information people see, and it is not always as simple as ideological bubbles. Ultimately, it extends into a dysfunctional democracy, removed from a conducive and just flow of information. To have a strong conviction requires knowing and understanding all sides of an issue.
As Katherine Phillips states, “We need diversity… if we are to change, grow, and innovate.” Facebook and Internet users cannot let website conglomerates be the only innovators, the only ones capable of seeing solutions from multiple angles, whether those problems involve an algorithm or differences in ideology, religion, or politics. Users cannot let computers be their personal gatekeepers, preventing them from understanding that other perspectives exist and are equally valuable.

Ultimately, Facebook’s algorithms serve vital purposes: generating revenue, retaining users, and making sense of the expanse of information available on the web. However, these secret, invisible algorithms prevent Facebook’s users from encountering novel information or opposing viewpoints. This in turn prevents people from understanding global events and instead creates ideological bubbles. Milan Zafirovski writes, “subjects were literally reduced to the servants of theology, religion, and church, thus subordinated and eventually sacrificed… to theocracy.” He is referring to pre-Enlightenment Europe; however, as people become accustomed to seeing their own views amplified on what many consider their main news source, they too are coming to accept that their view is the only one. As history shows, discourse and challenging opinions and ideas are what fuel social change. Facebook needs to sort through the massive amount of information on its site, but it cannot act as a gatekeeper distributing only the information it deems “important.” Facebook users need a voice in what is shown to them, and that voice needs to be bigger than a “thumbs up.”

Works Cited

“Age of Enlightenment.” Wikipedia: The Free Encyclopedia. Wikimedia Foundation, Inc. 30 January 2016. Web. 31 January 2016.

“algorithm, n.” OED Online. Oxford University Press, December 2015. Web. 26 January 2016.

Arthur, Charles and Jemima Kiss. “Publishers or Platforms? Media Giants May be Forced to Choose.” The Guardian. 26 July 2013. Web. 29 January 2016.

Chowdhry, Amit. “Facebook Changes News Feed Algorithm To Prioritize Content From Friends Over Pages.” Forbes. 24 April 2015. Web. 26 January 2016.

Dickey, Michael. “Philosophical Foundations of the Enlightenment.” Rebirth of Reason. Web. 26 January 2016.

“Internet.” Wikipedia: The Free Encyclopedia. Wikimedia Foundation, Inc. 29 January 2016. Web. 29 January 2016.

“Internet Usage in the World by Regions.” Internet World Stats. 26 January 2016. Web. 1 February 2016.

“Leading Social Networks Worldwide as of January 2016, Ranked by Number of Active Users (in Millions).” Statista. January 2016. Web. 31 January 2016.

Luckerson, Victor. “Here’s How Facebook’s News Feed Actually Works.” Time. 9 July 2015. Web. 26 January 2016.

Marconi, Francesco. “The Rise of Homeless Media.” Medium. 24 November 2015. Web. 15 January 2016.

“Number of Monthly Active Facebook Users Worldwide as of 4th Quarter 2015 (in Millions).” Statista. January 2016. Web. 26 January 2016.

Oremus, Will. “Who Controls Your Facebook Feed.” Slate. 3 January 2016. Web. 26 January 2016.

Pariser, Eli. “Beware Online ‘Filter Bubbles.’” TED. March 2011. Lecture.

Phillips, Katherine. “How Diversity Makes Us Smarter.” Scientific American. 1 October 2014. Web. 26 January 2016.

“publishing, n.” OED Online. Oxford University Press, December 2015. Web. 26 January 2016.

“What Happens to Content (Posts, Pictures) that I Delete from Facebook?” Facebook. Web. 29 January 2016.

Woollaston, Victoria. “Number of Websites Hits a Billion: Tracker Reveals a New Site is Registered Every Second.” Daily Mail Online. 17 September 2014. Web. 26 January 2016.

Zafirovski, Milan. The Enlightenment and Its Effects on Modern Society. New York: Springer. 2010. Web.