Reading is Reading is Reading is Reading is…

The summer I discovered fanfiction, I started to do the bulk of my reading online. I was thirteen years old at the time, reading on homemade fan sites and platforms that had either been co-opted or were fanfic-friendly with awful interfaces (those spaces were not the web libraries Frank Chimero envisions, let me tell you). Still, I have been reading online for years, and if my experience with digital reading has taught me anything, it’s that:

  1. We definitely have to train our brains to read digitally.
  2. People can be just as snobby about how they read online as how they read books.

Firstly, it’s important to acknowledge that the cultural capital of digital reading is already less than that of print—though the reasons why could fill an entire semester’s worth of blog posts and won’t really be covered here. Suffice it to say that there is something about the digital medium that makes it perceived as lesser than its print counterpart. Therefore, it’s no surprise that within the digital medium, those forms that most resemble print (i.e. eBooks, online articles) are the forms that hold the most cultural capital. Though I don’t agree that one form of reading is more “pure” than another, I do feel that the sentiment exists.

Audiobooks are a great example of this. Beth Rogowsky, an associate professor of education at Bloomsburg University of Pennsylvania, says she viewed audiobooks as “cheating.” This implies that listening to a print book is not a form of reading, but a way to consume stories that is viewed negatively due to its accessibility and ease of use; it’s a short-cut for people who don’t want to spend their time reading Real Books.

The act of reading a book traditionally is something that requires a certain degree of privilege: one must know how to read, which means having the ability to attend school. Traditionally reading a book also requires leisure time, whereas audiobooks can be listened to on the go—while driving, working, etc. This supports the idea of audiobooks as being less valuable, or as a technology that is used to “cheat.” The expectation is that it’s what you listen to when you can’t get to a Real Book, not as a valuable piece of technology in its own right. There are even misconceptions that we do not retain as much information when listening to audiobooks.

Essentially, these arguments use the same logic surrounding the question of which books do and do not have literary merit: those that use plain, easy-to-understand language and can be read quickly—like romance, crime, and erotica—are labelled commercial fiction, which is considered lowbrow for many reasons, but mostly for its accessibility (in language, in price point, etc.). Commercial fiction is not Important, and is therefore not part of the literary canon, which is curated by tastemakers and the Academy. Not called an ivory tower for nothing, university English departments are still rife with snobby professors who believe that the English literary canon—for all its lack of diversity and generally inaccessible language and writing (James Joyce, I’m looking at you)—is the only thing people should be reading. In my opinion, this argument has merely been superimposed onto the question of form in digital environments; instead of viewing commercial fiction as lesser due to its accessibility, we think of audiobooks as such. The scope has shifted from what you read to how you read, despite the fact that the underlying arguments are the same.

So, yes, I think that there continues to be a belief that “pure” or “tainted” reading experiences exist—but I want no part in it. People who feel this way about audiobooks do not consider how helpful they can be to those learning how to read, or to those who can’t read in a traditional manner due to accessibility issues. I believe that as technology changes, our ways of reading change as well, and no one method is better than another.

Audiobooks are my JAM*

 

In case you couldn’t tell from the title and the GIF, I love audiobooks. I love reading and I love performance, so an audiobook is the marriage of those two things into a consumable medium that I just devour. They’re also so handy when you’re traveling, doing chores, or cooking. Traveling is a particular draw for me, as the audiobooks I listen to are housed online or on my phone, which means I don’t have to carry any extra weight with me when I travel. Besides all this, I think they are just super neat! Seriously, of the fifteen non-school-related books I’ve read in 2019, eleven have been audiobooks.

But there is phrasing around audiobooks that really bothers me: the claim that listening to audiobooks isn’t ‘real’ reading.

Okay, I say after a deep, calming breath, I’ll bite. What are the reasons that audiobooks aren’t ‘real’ reading? 

“I was a fan of audiobooks, but I always viewed them as cheating,” says Beth Rogowsky, an associate professor of education at Bloomsburg University of Pennsylvania, quoted in Markham Heid’s article Are Audiobooks As Good For You As Reading? Here’s What Experts Say.

Rogowsky went on to conduct an experiment in 2016 in which she had students take in the same section of a book on an e-reader and as an audiobook. She found that retention of the material was the same in both formats, though she noted that this might be because e-books have sometimes been shown to have lower retention rates than physical books (Heid, Are Audiobooks). However, we know that this is not necessarily the case: in her article Being a Better Online Reader, Maria Konnikova finds that difficulties with retention while reading have more to do with distractions than with the physical format (Konnikova, Being).

The most compelling argument that audiobook reading is not ‘real’ reading, in my opinion, is that the spatial and physical aspects of reading a physical book are lost, leading to poorer retention of material (Heid). However, those issues also exist in e-book reading, and I haven’t heard many arguments that e-book reading is not ‘real’ reading—just that you need to read it differently (Konnikova).

Audiobooks have immense benefits that should not be undermined by negative connotations. They can help children who struggle with reading, as we read about in Linda Flanagan’s article, but they can also help readers with disabilities, like dyslexia and blindness. By writing audiobooks off as cheating, people are also writing off those who benefit from them. And in the end, people get the same story whether it comes in physical, audio, or e-book form.

Different people learn in different ways—for example, I’m a kinesthetic learner (with my audio and visual learning coming in second and third, respectively), which means I learn things best when I’m moving. Audiobooks work well for me here, as I can move while I’m listening.

In my opinion, audiobooks are just as much of a reading experience as reading a physical or e-book. By saying otherwise, people might forget the ways in which audiobooks excel where the other formats do not.

*seriously, I don’t listen to music anymore HELP ME

Works Cited

Flanagan, Linda. “How Audiobooks Can Help Kids Who Struggle with Reading.” KQED. 2016.

Heid, Markham. “Are Audiobooks As Good For You As Reading? Here’s What Experts Say.” Time. September 6, 2018. Accessed April 2, 2019.

Konnikova, Maria. “Being a Better Online Reader.” The New Yorker. July 16, 2014.

Time to Say Goodbye: A Review of PUB802

Before taking this class, not only did I not think critically about anything involving the digital technology in my day-to-day life, but I didn’t have the vocabulary to talk about anything tech-related in a serious way. Now, at the end of the semester, I can hold my own in a casual conversation about technology-related events and trends, drawing on the various lenses through which we looked at the digital technologies to do so.

Objective One
This class has definitely whet my appetite for thinking about the role and effects of digital technologies, and how they relate to the content I consume. Learning about the Web versus the Internet in our first class immediately captured my interest. In the future, I’m curious to learn more about some subjects than others—as a fan and frequent remixer, I’m still very interested in learning about copyright as laws continue to change—whereas I have less interest in online business models. In short, my eyes have been opened with regards to critically thinking about technology and the tech industry; the way the Web has evolved over time, the way we think of data collection and privacy versus what’s being collected and how that data is used, the dangers of using only one business model both on and offline, and the web as a space as it pertains to design were all of special interest to me.

Objective Two
As I said in my first blog post, this course has provided me a vocabulary and framework to analyze and talk about technology-related concepts, events, and trends. I’ve become much more cognizant of how I interact with technology in the digital spaces I frequent, and now have the framework to be critical of them. I can analyze any platform through multiple lenses: business model and data privacy, measuring and tracking user behaviour, design as an integral part of the online experience, etc. As such, I’ve been able to develop my own thoughts regarding various aspects of technology—especially concerning the issue of data privacy, and user measuring and tracking. After reading and discussing in class, I’ve managed to better understand what my comfort level with regards to these things is, and why I feel the way I do.

Objective Three
While I have a very good grasp of copyright law, XML, various online business models (subscriptions services, the Patreon model, advertising, etc.), and how the Internet works, I wish we had learned more about how to implement a lot of the technologies we talked about, such as spending time learning to code. That being said, I definitely understand how the technologies we covered work, and can implement this knowledge in my future endeavors. My knowledge of metadata comes to mind, here; knowing how it works as well as its function permits me to understand why it’s important and how it can be better used to help publishers in the future.

Objective Four
After completing all required blog posts, annotating all the readings, and posting my Wikipedia assignment, I can confidently say that I have experience with all three of these digital publishing tools. I really enjoyed annotating the readings—I feel that they helped me grasp the material, and the sense of community created within the annotations was a welcome addition to the class that provided further learning opportunities through links, explanations, and anecdotes. I’ll continue to use them. I found the blog posts extremely difficult to keep up with—they were very time consuming, and the expectations for the assignment were unclear until later in the semester, which I found frustrating. That being said, I think I’ve hit my stride with regards to the assignment objectives and requirements; I’m linking, tagging, and adding GIFs to my posts, and have balanced the narrative reflection with information and analysis.

I’m very happy the Wikipedia assignment was optional; the weekly blog posts and annotations are a lot of work by themselves, and combined with that assignment and my other classes, the workload would have been impossible to keep up with. Even so, the course was very demanding—I wish there had been fewer blog posts with longer word counts, and that they had been presented as mini-essays or articles.

All told, this class provided me with a solid framework to understand, use and analyze various digital technologies, and I’ve come out of it better equipped to be critical of the online world.

If I had unlimited access to the world

As Ken Michaels, global COO of Macmillan Science and Education, states, access to data and the analysis of what is out there allows publishers to “chart better strategic business objectives, improve the effectiveness and efficiency in all parts of the business, including developing better products and audience outreach, enhancing how we market, even one to one [marketing].”

I would use the information out there to do all of the above. I would not necessarily start letting data or computers make all of my marketing or acquisition decisions, but I would work to interpret the data and let it inform my decisions in a way that is collaborative. I also think once publishers have a greater wealth of data and a greater understanding of it, it makes sense that that data would then become a larger factor in pitching titles to Indigo, Barnes and Noble, and other buyers. I would also use the data to shape which kind of titles to commission, as the data would enable us to determine where there is a niche to be filled and what audiences exist.

Speaking on a more specific level, having all of Facebook’s user data would enable me to optimize my marketing by helping me learn more about specific reader demographic profiles, and to refine my audience information when generating ads for specific books and branded content. Using Facebook’s vast amounts of user data, we could learn more about how people read online, what makes them engage with content, and how to directly target consumers likely to actually read our products. As a publisher, I could use data to identify historical trends in what has traditionally succeeded in terms of themes, format, and more. The data from social media platforms could help me identify social trends, and I would use that knowledge to publish titles that are topical (with an understanding that some trends really are just “trends”) and to see which patterns exist in the overall market.

Using Amazon’s data, we could find out more about what kind of metadata works and how best to optimize our titles for discoverability in a way that takes advantage of Amazon’s algorithms. We could also create more effective comp titles if we had access to all the similar titles a consumer tends to buy (rather than just the ones listed on the website), and we could create more in-depth reader/persona profiles by having further access to the full purchasing or browsing history of users who bought these similar titles.

According to WNWP (What’s New With Publishing), a company called StoryFit has been using AI to determine which art is appropriate for which media. The artificial intelligence answers questions such as the following:

“Is this book a good fit for a Facebook marketing campaign across Europe? Is that book series a wise investment for a movie studio to option the film rights? In comparing these three books on sending a spaceship to Mars, which is the most likely to be the most popular and sell the most units, if all are priced the same way?”

The technology is likely not 100% dependable, but being able to gather data helps us improve discovery, create more effective marketing plans, and ultimately drive sales. Despite all the class discussions about the ethics of using data, I think that publishing right now is largely a guessing game, and any quantifiable information you can gather about the market and readers is an advantage that one would be foolish to ignore. While I do not think I would build my acquisition strategy on data alone, I think the data would prove pivotal for convincing other industry professionals once the practice of gathering better data fully catches on. Any data I could gather would give me a competitive edge and enable me to push for the books I am already passionate about.

You Either Die a Hero or Live Long Enough to See Yourself Become the Villain (or How to avoid becoming Lex Luthor)

Me, trying to convince myself that I wouldn’t monetize and vastly overuse data-mining as a publisher

If I were a publisher with access to any data that existed on the internet, I think I would be most interested in what readers enjoy about my books and what trends exist in books that sell best in the long run. I think it is very difficult to predict a bestseller, and even more difficult to get ahold of one as a publisher, but seeing what sells well consistently over time could be a solid plan for your backlist. This information could be used to pad out your income as a publisher in order to stay open as a company, and to take chances on work that is a bit different and is not a sure bet to become a bestseller.

It would also be awesome to be able to tell exactly what will be a bestseller before you spend a bunch of money publishing it, but I think that particular feat takes a bit more guts than digits, so I’ll leave it for Lynn Neary to debate (Neary, Publishers’).

It is easy as a business to fall back on ‘evil’ practices and just take what you want while your audience is unaware and dazzled by your amazing platform, especially with the commercial success of Facebook and Google to compete with. And it is understandable: unfettered access to people’s private data is a marketer’s candy land and can put tons of money in the bank.

Facebook and Google giving us their business advice

However, I think being straightforward about what you’re planning on taking and what you’re going to do with it stands on its own as a way to prevent privacy violations while still collecting data that can help you as a company. I would plan to be incredibly straightforward about the data I would collect and why. I would also try to be straightforward about what I was applying that data to, in order to de-mystify the process. Plain language is our friend in this situation.

Another big portion of this question is how the data is being gathered. The only way to ensure that the data is not misused is to collect it yourself and not sell it to marketers, or, if you get it from a company, to make sure that it doesn’t go further than your company and that those who have contributed the data know you have it and what you are doing with it. If you go with the first option, it can cost you a ton of money. So unless you have a big income outside of the data-mining operation, you could easily be tempted to sell the data you’ve collected to outside marketers. And a lot of times in business, the lure of money is too strong to resist, despite your best intentions.

Where everyone using data mining starts out…

Due to this, I think I would be more comfortable collecting the data through a company, but making sure that those whose data I was using were aware of it and of what it was being used for. I also would not collect super personal information, like addresses or names. I would say that a layer of anonymity is needed.

Data analytics and collection is a very controversial topic. Although it is an uncomfortable subject for many, it is easy to see yourself becoming the villain when you envision yourself as the business doing the data collection. The best thing to do is to be honest and upfront about what you’re doing, and to allow your visitors a way to opt out, if they so choose.

 

Works Cited

Neary, Lynn. “Publishers’ Dilemma: Judge A Book By Its Data Or Trust The Editor’s Gut?” NPR, NPR, 2 Aug. 2016, www.npr.org/sections/alltechconsidered/2016/08/02/488382297/publishers-dilemma-judge-a-book-by-its-data-or-trust-the-editors-gut.

🎶Don’t Wanna Be an American Idiot 🎶 (looking at you, Congress)

Overall, I am unsurprised by the lack of data privacy online. I’ve known for a while now that something is tracking what I’m doing as I do it, whether it be Google, Facebook, or Apple. However, it is a bit frightening to see it all laid out in places like Dylan Curran’s Twitter feed, and to see how Google Maps tracks our movements throughout the day. What frightens me more than either of these things is what unregulated entities might do with that data on a personal and political scale.

Although I would like to believe the government is attempting to regulate big businesses like Facebook and Google, every day we see that they are focusing on the wrong things. In the Google congressional hearing held on December 11th, 2018, the American Congress had the chance to question Google on how it abuses data privacy and how it handles that data after compiling it. Instead, the members of Congress decided to focus on things that had nothing to do with privacy and everything to do with the more self-explanatory algorithms almost anyone under 50 can understand (Lapowsky, Congress).

Footage of me watching congress date itself to the age of the dinosaurs

This not only proved that Congress is incredibly out of touch (watch this video for evidence: these congresspeople are ridiculously embarrassing), but that the government in general is focused only on the superficial issues surrounding tech giants because it does not understand the more pressing matters. Not to mention, the big companies do not want regulation, and we know that big companies have a big stake in government, regardless of what people say.

We’ve seen how companies like Facebook can influence political situations, as in the 2016 election with the Cambridge Analytica scandal. But on a more personal note, a lot of these companies gather data about buying habits that can negatively impact people on a day-to-day basis. In this case, I will refer to the experience of Gillian Brockell, a woman who continued to receive ads as though she had given birth after delivering a stillborn child (Kindelan, Woman).

She posted on Twitter, stating:

“Please, Tech Companies, I implore you: If your algorithms are smart enough to realize that I was pregnant, or that I’ve given birth, then surely they can be smart enough to realize that my baby died, and advertise to me accordingly — or maybe, just maybe, not at all […] We never asked for the pregnancy or parenting ads to be turned on; these tech companies triggered that on their own, based on information we shared. So what I’m asking is that there be similar triggers to turn this stuff off on its own, based on information we’ve shared…” (Kindelan).

This is just the tip of the iceberg when it comes to the ways data mining infringes on privacy. Situations like the Google hearing and Brockell’s experience (in which I doubt much has been done to change the algorithm, despite public outcry) make me doubt that any government-backed venture or internal change is likely to happen any time soon. Until then, I’m just going to accept that I have to be careful with my searches and try to limit what I put online.

 

Works Cited

Kindelan, Katie. “Woman Demands Change from Tech Sites like Facebook, Instagram after Receiving Parenting Ads after Stillbirth.” ABC News. December 13, 2018. Accessed March 13, 2019. https://abcnews.go.com/GMA/Wellness/woman-demands-change-tech-sites-facebook-instagram-receiving/story?id=59799116.

Lapowsky, Issie. “Congress Blew Its Hearing With Google CEO Sundar Pichai.” Wired. December 11, 2018. Accessed March 13, 2019. https://www.wired.com/story/congress-sundar-pichai-google-ceo-hearing/.

A Look at the Patreon Model

The fact that supposedly nobody makes a living on Patreon has never been an issue for me. It is a supplementary form of income that allows artists, cosplayers, writers, podcasters, and more to put their work behind a paywall or to receive donations from fans. Despite how Patreon advertises itself, it is not a model that allows creators to survive off freelancing alone. Still, it serves its purpose and enables creators with a platform to earn a little extra money each month. The problem I’m seeing more and more of is the gigantic gap between Patreon’s profits and priorities and those of its artists.

In a 2017 Medium article, Keith Parkins asked, “Is Patreon a Scam?” In it, Parkins highlights the controversy over the platform’s proposal that patrons pay an extra $0.37 per pledge, a change that would have hurt less popular creators who rely on accumulating 1 USD subscriptions. In the quoted Twitter thread, Julie Dillon argued that even those few extra dollars a month can be life-changing, and that it hurts to have the platform dismiss this. Of course, the changes were rolled back and Patreon apologized, but the episode ultimately revealed the core philosophy and priorities behind the platform. The change would have been devastating for small creators (who make up the majority of Patreon), somewhat profitable for larger creators, and incredibly profitable for Patreon. Twitter user @Burrito_Tim calculated that with his pledges, the platform would have received 118% more after the change. Again, even though this policy was rescinded, Patreon is in a position to decide that the demands of investors and its own pursuit of profit outweigh the bad PR of small creators’ outcries. After all, according to Patreon, they only value the “truly life-changing creators.”

In 2017, Patreon received around $60 million in investment capital from Thrive Capital, after already having received $30 million from them in 2016 and $17 million in 2014. According to Dan Olsen, Patreon has only actually earned $55 million in revenue since 2013, which makes it a highly unprofitable investment right now for those who have backed it, placing further pressure on the platform to generate revenue streams that serve neither the consumers nor the creators.

After the Patreon CEO’s recent announcement that the platform’s current model is “unsustainable,” Twitter user Dan Olsen predicts a “series of ill-advised feature rollouts, like they’ll probably go gonzo and build a livestreaming platform or pivot to Fortnite or buy Teespring or something equally confusing, with a slow degradation of the core user experience. Like you’ll sign in and there’ll be six popups asking if you’ve tried Patreon Mega and extolling how it can help you mega-engage with your audience, while you’re just like ‘can I have a commission button so people can make one-time payments?’ and they’re like ‘no.’” Unfortunately, the increasing pressure on Patreon to focus only on drawing more Hank Green-type clients and profiting off of them means the site often neglects its primary user base.

There will also likely be a big push to find ways to further monetize creators and have them pay for a better experience. So what is the solution, then? I definitely think there needs to be a cooperative platform made for and by creators. The cooperative version would ideally respect both small-tier and top-tier creators, offer more payment options (such as grouping together as channels, and one-time commission payments), and use a model that does not overcharge for payment transfer fees. It would serve the creators, rather than treating them like serfs. Until then, creators using Patreon are at the mercy of a platform that is at the mercy of venture capitalists. We need more platforms for creators that will proportionally put money into the hands of workers rather than into the pockets of corporations looking to expand the value of the platform so they can sell it for a profit. When the latter happens, the “target audience” of the platform becomes its venture capitalist investors, and what follows is censorship and a website that ultimately does not prioritize its users.

Citations:

https://twitter.com/FoldableHuman/status/1092870599985123329

https://theoutline.com/post/2571/no-one-makes-a-living-on-patreon

https://medium.com/dark-mountain/is-patreon-a-scam-a9d0e38bd69e

Something’s Gotta Give: The Perils of Dominant Business Models in Online Environments

The general consensus seems to be that the current online advertising system is broken. People don’t like online ads (based on views and/or clicks), so AdBlock Plus is extorting publishers and content creators like some kind of digital mafia boss, with those who rely on ad revenue helpless to stop it. This makes it very difficult to actually make any money, especially when we tend to pass off responsibility for dealing with the broken advertising model, and then use the excuse of “neutral” platforms, software, and extensions to explain why it is not our job to fix them, or why they cannot be fixed. Of course, no platform, software, or extension is neutral. At some point in the process, a biased human being is on the other side of the screen making very biased human decisions about how things are designed and how they operate. If we want to fix the system, we shouldn’t “[…] build systems that let us pass the buck to someone else, in exchange for passing them a few bucks”; we should demand and take responsibility for the things that affect us. Or, at least, that’s Anil Dash’s argument.

I think this is easier said than done.

The problem with a single business model becoming dominant in an online environment, and in fact in any environment, is that no one model is infallible. Being completely reliant on a single revenue stream makes you vulnerable should that stream dry up. Furthermore, when a business model becomes dominant, it limits the incentive for business owners to build new models or look for other revenue streams—if it ain’t broke, don’t fix it! This stagnancy and lack of creativity makes the model vulnerable as the market evolves, until everyone is in crisis because, say, traditional online advertising no longer works as effectively (if at all). This “panic mode” either forces creativity and an evolution of the model, or demands its replacement with something more sustainable—and the cycle continues. This loop can be seen in the evolution of TV and radio advertising: forced to compete with Netflix and a rapidly changing social climate, TV advertisers have had to become clever in their ads.

With the rise of ad blockers, it looks like we’ll soon be seeing the same shift in online advertising—though whether advertising will change, or whether business models will shift to remove it from their revenue streams completely, remains to be seen. Either way, something’s gotta give.

Thoughts on the Medium Model

With the growing dominance of ad blockers (which have decimated digital ad revenues), it is worth speculating about how publishers can adapt by creating models that enable website traffic and monetization without alienating readers. Medium’s recent model changes put into play an interesting structure: a membership model that, for $5 a month, enables readers to access “the best” of Medium’s content. Before deliberating on how publishing can apply such a model, I want to first look at what is and is not working with the system.


To Pay or Not to Pay? Why I’ll Start Supporting Creators

Though I’ve only very recently begun to think about paying for subscriptions (the last two years or so), I can pinpoint the exact moment when my thinking began to shift: I had been complaining to my brother that one of my friends was going to charge me for art I’d asked her to make for a piece of fanfiction I was writing, and had been really upset that she hadn’t offered to do it for free. I had written her a ton of fic in the past, I’d changed my travel plans to visit her in both Germany and Italy during the semester I’d been in Europe, and I’d been shocked that she hadn’t offered to do this for free when I thought we were friends.

My little brother was not sympathetic.

He first asked why I didn’t think my friend should be compensated for her labour, then pushed further by inquiring if I didn’t want to support her in her creative endeavours. I was gobsmacked.

I had honestly never thought of paying my friends for their creative labour before. Mostly, this can be attributed to how I grew up: I was always taught that you don’t charge your friends (or you at least give them a serious discount) because you love them, and that’s just how you behave towards the people you love. The rest can probably be explained by the general undervaluing of the arts: even in last year’s federal budget, the Canadian government failed to recognize the precarious position of 650,000 cultural workers, and “some forms of museum funding still remain at levels lower than they were in 1972”. That’s not even considering the fact that the arts are severely underfunded in Canadian grade schools… which is where you’d generally learn to appreciate and value various kinds of art.

Needless to say, my opinions shifted. Later, when I began to consider the possibility of publishing written fanworks in printed anthologies, I became aware that my attitudes towards monetizing print and visual art were also very different. Namely: I believed visual art to be inherently more expensive. I was willing to pay $20 to commission a piece of fanart, but I couldn’t conceive of compensating a fic writer for the same service. For a printed anthology, fine… but where I was willing to pay for art whether I received a print or it stayed on my screen, an online fic was something I very firmly believed was and should stay free of charge.

I think this might have had to do with a subconscious view of fanfiction as lesser due to its primarily female readership and authorship—but I think it also had to do with the way Western society values the visual over text. When was the last time you went into a place that displayed and showcased books? Museums don’t tend to have selections of books on display unless they’re very old, and libraries are not viewed as having nearly as much cultural capital as museums. Furthermore, if you want access to a special collection, you need permission. Part of the reason may also be that text is so very ubiquitous, both in print and online—we’re so used to seeing it that we have certain expectations when we do. I think a lot of these expectations have to do with form: I expect to pay for a newspaper, so I’ll subscribe to a newspaper. I expect to pay for a print book, so I pay for a print book. But the ideas of monetizing long-form content unaffiliated with traditional news sources, or of monetizing the creation of online fanfiction, are fairly recent; that kind of content had been indiscriminately free when I started using the web.

I have never paid for a subscription to any online magazine or blog. I tend to find quick fixes through switching browsers, or moving on to view free content. This is, I think, for all the reasons listed above, as well as the fact that my historical lack of disposable income has meant I’ve had to be very selective in where I allocate what few dollars I have to give. That doesn’t mean I’ll never pay, but right now, my priorities revolve around rent and groceries and allowing myself the odd night out when I spend all day reading on a screen. After I graduate and get a job? Chances are, my priorities will have shifted towards wanting to read long-form articles—ones I pay for, this time, in order to properly compensate authors for their labour.


[1] If the linked article doesn’t convince you due to its 2013 timestamp, take a look at this one, written specifically about Ontario and its practices (2018).

The Good, The Bad, and the Ever-Waser

It’s easy to put sets of beliefs into neat little categories, and I’m not saying it’s a bad thing when Adam Gopnik does so in his article “The Information.” It’s natural for us to try and make sense of a complicated and confusing world by simplifying it. Our relationship with technology is complicated, so there’s relief in simplifying society’s relationship with it into three camps. On one extreme of the spectrum there are the Never-Betters, who hail the power and innovation of technology––they’re positive and optimistic. On the other extreme there are the Better-Nevers, who mourn the past and fear the rapid change of technology––they’re negative and pessimistic. Right in the middle there are the Ever-Wasers who, like the neutral party they are, believe that technology has always been a fixture of modernity, that some people are going to enjoy the change and some won’t, and that these advancements bring positive effects along with negative ones.

Like many binaries in life (sexuality or political preference, for example), this Never-Better-Better-Never-Ever-Waser categorization falls on a spectrum, a sliding scale if you will, and you can fall anywhere in between. These socially constructed binaries are a way of simplifying complicated relationships, and while they’re nice and easy, they’re only a start to understanding those relationships; we may fall somewhere on the spectrum, but we can also fall totally outside of it.

I’m not going to spend this blog post deconstructing binaries, so if we’re using Gopnik’s Never-Better-Better-Never-Ever-Waser categorization, I’d have to say I fall in the Ever-Waser box, with a slight inclination towards Never-Better (though I don’t sport rose-coloured glasses). As for society as a whole, well, they’re all over the map, and I don’t think you can make a sweeping generalization about where they fall on the spectrum (or outside of it). As for myself, I don’t believe new technology is inherently good, and I don’t believe it’s evil either. New technology simply is; it’s how we use it that makes it good or bad.

Whenever we’re debating the positives and negatives of our relationship with new technology, I always have Marshall McLuhan’s “the medium is the message,” from Understanding Media: The Extensions of Man, running through my brain. Yes, this is crazy dated since it’s from the ’60s, and wow, times certainly have changed, but I think the core of what he was saying still stands. Mark Federman breaks down the phrase in his essay “What is the Meaning of the Medium is the Message.” The “message” is not “the content or use of the innovation, but the change in inter-personal dynamics that the innovation brings with it.” The “medium” is any extension of ourselves, something that allows us “to do more than our bodies could do on their own.” The point McLuhan is making is that we can understand the nature of these innovations through the behavioural changes they create within our society. It’s not the content of the internet that matters; it’s how it changes our behaviour that reveals something about us, and therefore about the medium (the internet). The medium is neither good nor bad; it’s how we interact with it that decides that.

Which brings me to Frank Chimero’s piece “The Good Room,” where he writes “technology’s influence is not a problem to solve through dominance; it’s a situation to navigate through clear goals and critical thinking. Attentiveness is key.” It’s this critical thinking that is key when we engage with technology. We need to consider if what we’re doing is for the betterment of society or not. Unfortunately, what a “better” world is depends on the person you ask. This blog post is not going to deconstruct the values of good and evil and the subjectivity of that either.

Technologies live and die, change and evolve, and they are always going to benefit someone while simultaneously being a detriment to someone else. It all depends on who you are and how you’ll use the new innovation. One can hope for that utopian vision of open knowledge and the infinite expansion of the mind, and hopefully prevent a Terminator-esque robot take-over dystopia, but in the end the choice is yours.




The Goldilocks Problem: Thoughts on Reductive Reasoning and New Tech

In his article “The Information: How the Internet Gets Inside Us”, Adam Gopnik presents each position—Never-Betters (optimists), Better-Nevers (pessimists), and Ever-Wasers (neutral; humanity’s love-hate relationship with technology has never changed)—as having its strengths and weaknesses: technology can and has been used to enslave just as easily as it has been used to empower; cognitive exasperation runs just as rampant as cognitive expansion; the Internet and Web inhibit meaningful social interaction while simultaneously acting as a hub of interconnectivity; and, finally, just as historical attitudes towards technologies tend to repeat themselves in a never-ending cycle, contemporary technology’s particular brand of omnipresence is something humanity has never encountered before. Frank Chimero, in his essay “The Good Room”, seems to agree wholeheartedly with the latter point when he writes: “technology has transformed from a tool that we use to a place where we live”. This is something I agree with as well.

I went into “The Information” thinking my own opinions regarding technology were so complex it would be impossible to fit them into one tidy category, and upon finishing all of last week’s readings, that position has remained the same. What has changed, however, is my ability to formulate my thoughts and opinions more clearly. As it turns out, I am a mix of all three positions Gopnik lays out, and I feel that North American society as a whole[1] also fits into this new, complex category: one where new technology is as exciting as it is scary, and something we both have and have not encountered throughout history.

When pushed, people tend to have slightly more nuanced opinions than they let on. I have observed this both in my personal life and in my interactions with others. For example: at first glance, I tend to present as a Never-Better. I use modern technology for everything, I’m rarely seen without my computer and phone, and I truly enjoy all the benefits modern tech affords me, so I defend them vehemently. That being said… technology also scares me half to death. The idea that companies harvest my information for commercial use is uncomfortable, I have a fear of getting doxed, and the commercialized Web (with all its negative implications) deeply upsets me. At the same time, I’m aware that most of my immediate resistance to new tech is a resistance to change, which is something humans tend not to enjoy—but I also know that humanity has never had such an intimate relationship with technology as we currently do, which makes me wary of writing off any and all emotional and critical responses as part of an ancient cycle of human behaviour.

In short, the way I feel about new technology is a complicated mess. These feelings mirror those of my roommate, my brother, my parents, and I’m willing to bet, just about every other North American who has been exposed to technology within the past decade. The more complex technology becomes, and the more parts of our lives it irrevocably changes—the more it has us living inside the library, instead of visiting at our leisure—the more complicated and complex our relationship with it becomes. Furthermore, fitting such a large part of contemporary life into simplistic black and white areas is reductive and potentially dangerous. If the Stream has taught me anything, it’s that we should be wary of easy answers and neat, boxed-up solutions… they tend to radicalize in a way that makes it easy to suspend critical engagement despite our nuanced thoughts and feelings.

[1] It’s worth thinking about how we define “society as a whole”: globally? The West? Canada? Vancouver? My experience of technology might be very different from that of someone who lives in China, or Georgia, so I’m using North America, based on my own experiences.

The Best Never-Betters-Better-Nevers-Ever-Waser Yet


In his article “The Information: How the Internet Gets Inside Us”, Adam Gopnik explains that there are three groups of people when it comes to thinking about technology. The Never-Betters are the optimists who believe intrinsically in technology, as if the Internet is our greatest creation and even more innovative technologies are to come. The Better-Nevers are the nostalgic ones who crave “how it used to be”, thinking the world will come to an end because nothing will ever be as great and powerful as the book. The Ever-Wasers are the rationalists who learn to deal with technology and its challenges as they come. If you were to ask me, I’d say all three, please!

We view the older generation: our grandparents, elders, and seniors in our community, as more likely to identify as Better-Nevers because they’ve lived longer than us, with viewpoints and lifestyles we’ll never fully understand. Lately, we’ve been thirsting for nostalgia; cue Stranger Things and its success in tapping into our retro early-to-mid-1980s sensibilities. Some of us weren’t even born then to enjoy the symbolic mementos, so how could we possibly be nostalgic for them? Alas, when we compare how life was then to how life is now, how simple things were back then to how things keep getting more and more complicated, I understand how one might feel like a Better-Never. Has modernity failed us? There are nights when nothing, not even a compelling Netflix show, can beat the feel of a new book in my hands. I devour it and slip into a different world that isn’t now.

While thinking about all the new technology that enters our world, I can understand why some (including a part of myself) are Never-Betters. Some of our technology is really mesmerizing, and I imagine people who were first introduced to the Internet felt the same. I can’t remember the first time I connected to the Internet, but I can remember the first time I got an iPhone as my first cellphone, and what an experience that was. I can also understand how technology sometimes really hurts us: how we’ve become a burnout generation, addicted to social media, gaming, and staring at the screen for so long our eyeballs melt. However, I can’t fathom what’s next for us or where the Internet can go. It’s scary, but sometimes fear drives us towards better things ahead.

I recently went back to my part-time job, where I see many Ever-Wasers: elderly people bravely and diligently learning new features to better optimize their phones. I often say to seniors watching a session from afar: “if those brave people can do it, then what’s holding you back from doing the same?” It shouldn’t be an age thing; technology does not discriminate about who a user can be. A person can be nostalgic but still hopeful for what’s to come, or better yet, use that nostalgia to inspire new innovations that capture a bit of the essence of the past. As Gopnik suggests, “Once it is not everything, it can be merely something. The real demon in the machine is the tirelessness of the user.” Technology is what we make of it; it is controlled by the people and our thinking towards it. Perhaps we have to take things as they come, because we don’t know a piece of technology until we’ve thoroughly tried to integrate it into our lives.

It’s all about balance. Maybe every time we get an iOS update, my heartbeat will quicken and I’ll spam text my friends “LOOK WHAT THEY DID NOW. TECHNOLOGY SUCKS!” But after a couple of days, I’ll give in and let my phone refresh into the not-so-scary future that awaits me. I’m still waiting to convert my grandma into an iPhone user, and when I’ve finally done it, I’ll be the first and possibly best Never-Betters-Better-Nevers-Ever-Waser yet.

“The More One Knows, the Quaggier the Mire Gets” – Sarah Vowell*

Having recently prepared a project that relies on the concept of “digital fatigue,” I have read a lot of online information on the topic. There are blog entries, such as Frank Buytendik’s futurist-focused one, where he writes, “we are moving towards a #digitalsociety. Not only business changes, not only work changes. Life itself changes.” At the same time, there are medical warnings against our continued and growing exposure to screens. For example, Dr. Aizman talks about ocular muscle strain and writes, “digital eye strain is very common because of our reliance on digital technology.”

Yet if you put these two observations together, you’re in Quagmire Land. Somewhere, somehow, the eyes (which recent studies say are part of the brain rather than separate organs) have to both do the work you’re demanding of them and preserve themselves as providers of one of your five senses. Perhaps this is why content retention when reading online is less reliable: there is ocular and brain stress that steals away from the energy one devotes to reading and reading comprehension.

So: should publishers care? It’s a question a budding publisher wonders about. I think the most reasonable answer is, “it depends on the publisher.” When I was finishing my Graphic Design diploma, the Head of the Department and Portfolio instructor had us do rigorous research into our “dream companies.” I had learned about Scholastic through my part-time work with children and made it one of my three winning companies. Now, at the tail end of the academic portion of my Master of Publishing, I know that if I were indeed to become part of the team, I would use this kind of medical and psychological research to encourage children to read real books, as well as to educate parents on the necessity of perpetuating this method of reading. In fact, if you haven’t heard this interesting factoid, it has become public knowledge over the last few years that the children of Silicon Valley techies attend no-technology schools. While this New York Times article is a bit dated, it offers a peek at some of their methodologies, such as: “Andie’s teacher, Cathy Waheed, who is a former computer engineer, tries to make learning both irresistible and highly tactile. Last year she taught fractions by having the children cut up food — apples, quesadillas, cake — into quarters, halves and sixteenths.”

Isn’t that so ironic? That the masterminds who brought personal computing to global levels are segregating their own children from their inventions? They must know something we don’t know.

So that’s if I were involved in publishing geared towards children and education.

Now, on the other hand, given Buytendik’s prediction that our future lives are inescapably digital and will only become more so over time, I can imagine improvements to technology that publishers could (and would have to) take advantage of. I have not seen any VR reading yet, but sci-fi films often touch on scientists finally unravelling the mysteries of the brain and plugging materials directly into neurons, the way we presently transfer data into devices via cables or miniSDs. Though I was never much of a sci-fi fan growing up, it never ceases to fascinate me that so many writers’ “predictions” from past decades are now part of our daily lives. A vast majority of people are so ungrateful, too, in their unquenchable thirst for “better,” “faster,” “more.” With new technology like this, new reading formats would inevitably dictate the way readers access information. Thus publishers would indeed have to lend an ear if they wished to survive into the 22nd century.

I’m 31 now and know that life will be so vastly different when I am 81.

*Vowell said this about American History but I find it applicable to everything in life.

Anna Stefanovici

Concept-Hopper

In his article “The Information: How the Internet Gets Inside Us,” staff writer for The New Yorker Adam Gopnik reflects on the development of “the Internet” and sorts books on the topic into three categories: the Never-Betters (those who think it the bringer of utopia), the Better-Nevers (those who think it should never have happened), and the Ever-Wasers (those who think this modernizing business is simply history repeating itself). Reflecting on these categories and on the question posed, which asks me to define myself via one of them, I realized that I concept-hop a lot. I have never been a big fan of extremism, and I always marvel, in the privacy and freedom of my home, at how astonishing it is that people can devote themselves so blindly to one belief or another.

For example, I am a Canadian… but the sort born elsewhere. A few years back I read a book entitled The Power of Why by Amanda Lang, which discusses and analyzes innovation. Lang explains that people who have points of comparison (who have lived in a few places or speak a few languages) also tend to be more innovative. You see, in my case, immigration was the closest one can come to time travel [– for now! Who knows what the future will bring?]

While 2001 was, for North America and the Vancouver IT industry in particular, the culminating year of the dot-com bubble (and the ensuing crisis), I had just left Romania as one of the few children in my class privileged enough to have a computer at home. Yes, the modem made those weird sounds, and yes, you stared at the basic graphics of the transporting envelope, but in those moments I was surely one of the Never-Betters. At the time I was also immensely mesmerized by Microsoft’s Encarta Encyclopedia, and I mean the CD-ROM version, not the online one. This was October of 2001. Then in November of 2001, I was suddenly under pressure to adapt very quickly to my peers’ speedy fingers. In Vancouver, schools already had computers, and by the end of high school and the beginning of university, the expectation was that all my research would involve an online reading/research component.

Now that I nanny to keep myself fed as a student, I see how the Internet really is inside children. You must truly reflect on this; I read it in an article and it stopped me in my tracks: we are the last generation to know life both with and without computers and “the Internet.” Some of the children I watch over have habits that definitely make me roll my eyes and think of myself as a Better-Never, mostly revolving around videogames. I have recently learned that many games are not even built for multiple players on one console anymore, since the dawn of the age of “network playing.” Before, you played outside or you played a videogame, but at least you played together. Now, if you want to play with a friend, you must geographically separate in order to each take on a digital persona. And this is not the worst of it, as far as I am concerned; to me, the most disturbing aspect is that children watch YouTube videos of o-t-h-e-r people playing videogames. The level of detachment is completely beyond me.

Overall, however, I look at each wave of technology brought upon us and realize that, from an eagle-eye view, the most sensible thing is to admit the facts, and so I ultimately admit defeat to the Ever-Wasers group. Who knows: with virtual reality, perhaps the next wave of children will indeed sit around in empty spaces while mentally immersed in rich, colourful worlds. I just deeply hope the real world around us does not end up looking like the universe of WALL•E, though.

Anna Stefanovici