The Sound Of Silence

The question about the rights of a writer and a commenter is full of gray areas. It used to be pretty straightforward when publishing was limited to print. Readers wrote letters to authors and editors. Reviewing a text was in the hands of the few critics or peers who had the credibility to comment. Books exchanged hands via libraries and used bookstores, gaining annotations along the margins with every exchange. The chain of dialogue was always consecutive and never concurrent.

Online publishing does not enjoy this privilege. The web has opened the floodgates of social interaction. Anyone can express any opinion and find a large audience with little effort. Publishing your thoughts is easy. So is commenting on them. We live in a day and age where everyone is a “Google expert” and feels it is their right to express an opinion. We rarely stop to think about what, why, who, when, where, and how we should articulate our thoughts.

Lack of barriers means there is a growing gap between what gets published and what actually needs to be published. Similarly, who gets to comment on what is a complicated question. In both instances, someone decides that it is a good idea to break the silence, whether to write about a topic or to comment on someone else’s work. Perhaps this Zen story can convey the conundrum of social interaction:

Four monks decided to meditate silently without speaking for two weeks. By nightfall on the first day, the candle began to flicker and then went out.

The first monk said, “Oh, no! The candle is out.”

The second monk said, “Aren’t we not supposed to talk?”

The third monk said, “Why must you two break the silence?”

The fourth monk laughed and said, “Ha! I’m the only one who didn’t speak.”

Each monk broke the silence for a different reason. The first monk became distracted by one element of the world (the candle) and so lost sight of the rest. The second monk was more worried about rules than the meditation itself. The third monk let his anger at the first two rule him. And the final monk was lost to his ego.

There is no right or wrong way of looking at who gets to moderate feedback or who is entitled to give it in the first place. What we, as a society, need to spare more thought for is our reasons for breaking silence. Yes, freedom of speech gives us the right to express ourselves, but this fundamental right comes attached with duty. We are responsible for what we express. And that applies equally to the writer and the commenter. Self-moderation is what we need where online text is concerned.

Maybe there was a fifth monk in the story, who slept through peacefully, blissfully unaware of the value of his silence.

 

 

A book is a book is a book: On Marginalia and Authority

“To publish is to make public.” This is a statement that has been repeated many times over. To publish is to seek out eyeballs. Whether it is done on the individual level (via self-publishing) or the collective level (traditional publishing), when work is put out there, audience engagement of some form is sought. “Eyeballs” are multidimensional: audiences do not only read works; they form opinions of those works and make them known. They comment, they highlight, they leave marginalia on texts, both online and in print. Do they have the right to interact with texts that have been made available to them? Yes, they do.

Is marginalia authoritative if it is never found, never made public, or if it never garners an audience? It has been argued that marginalia in print is long-lasting; however, in my opinion, it is less likely to gain an audience of more than a handful of the same people. For example, suppose a codex has a print run of 10,000 copies distributed all over Canada, and a person finds marginalia in one of those 10,000 books, possibly buried beside other books on a library shelf. Their likelihood of tracing the original creator of the marginalia is low, and their ability to create an instant community around the musings is even lower. In the digital sphere, however, marginalia is usually credited to a specific person (e.g., on Hypothes.is), and as much as S. Brent Plate argues that this marginalia is ephemeral, more people are likely to interact with it, and more quickly. Furthermore, the ease of community building around online marginalia may also stem from the fact that everyone is commenting on the same article regardless of their geographical location. In print, the marginalia might be in book 528 of the 10,000. Unless posted online (yet again), can this marginalia reach the author and be in conversation with them? Likely not. I acknowledge that entire communities have formed around print marginalia, but these are its limitations in the digital era.

The point I am getting at here is that both print and online audiences should be allowed to interact with texts if those texts have been made public. Whether they can “shape the text,” however, will be determined by the visibility of their marginalia and the community they can build around it.

Writers are also able to determine who can comment on their work by the simple act of defining the public it reaches and not publishing to all groups. For example, they can choose language that deters certain people from engaging with their works. This has a tendency to be discriminatory, however. By censoring interactions, the writer becomes a propagator of an opinion vacuum.

To summarise:

1) Audiences can react to texts if those texts have been made public. To publish something is to garner eyeballs. Interactions between published work and reader are part and parcel of the publishing process.

Marginalia requires an organised public of its own to be authoritative.

2) The writer can determine how their work is disseminated thereby deciding who has the right to comment on it. This can be discriminatory.

3) Should authors seek out eyeballs and subsequently not allow those eyeballs to engage with their works? I think not.

A small fun fact on this topic of marginalia: I am a person who had first-edition Jane Austen books and doodled in them, because a book is a book is a book.

 

Annotating EBooks—And Collecting Data

My first thought was that I would like to collect readers’ annotations on books in the future, but knowing nothing about ebooks I figured there was a chance this had already been done. And of course, it had. And so in this post, I’d like to review briefly where the technology is at currently, and where it could go in the future.

It turns out the Hypothesis project had the same idea to annotate books, and just a few months ago in September, as we were starting school, they announced “the world’s first open-source, standards-based annotation capability in an EPUB viewer.” The annotation tool, similar to the one Internet users can install to annotate web pages, is available on the “two most popular open-source frameworks, Readium and EPUB.js.” People are able to annotate within closed groups or publicly, just like in the web browser version.

However, the focus is on how to improve this experience for the annotators and not on how publishers can capitalize on the results of this program (understandably, as Hypothesis’ mission is to create “open source software, [push] for standards, and [foster] community”).

If the engagement data were collated into a report shared with publishers on a weekly or monthly basis, so that publishers could see the number of comments, which pages were bookmarked, what people were commenting on, or even the comments themselves if they were made public, it would be an amazing way to track readers’ impressions. But as far as I can tell, if publishers want to see what annotations have been made, they have to go to that specific page or book to see the engagement. Publishers have hundreds or thousands of books, each with many hundreds or thousands of pages. It is highly unlikely that they will be able to use this software in a way that is meaningful to them from a data collection perspective. In addition, the software would need to be accessible on all devices that feature all types of ebooks in order to create a well-rounded picture, and the reports generated would also need to be able to pull data from individual devices’ built-in annotation capabilities.
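To make the idea concrete, here is a minimal sketch of the kind of report I have in mind. It assumes the public Hypothesis search API (https://api.hypothes.is/api/search) and a hypothetical list of chapter URIs supplied by the publisher; the aggregation and report format are my own invention, not an existing Hypothesis feature.

```python
# Sketch: pull public Hypothesis annotations for a set of book/chapter URIs
# and tally them into a simple report a publisher could receive weekly.
# The chapter URIs and the report structure are hypothetical.
import collections
import requests

API = "https://api.hypothes.is/api/search"

def fetch_annotations(uri, limit=200):
    """Return public annotations whose target matches the given URI."""
    response = requests.get(API, params={"uri": uri, "limit": limit})
    response.raise_for_status()
    return response.json().get("rows", [])

def weekly_report(chapter_uris):
    """Count annotations and collect comment text per chapter URI."""
    report = {}
    for uri in chapter_uris:
        rows = fetch_annotations(uri)
        report[uri] = {
            "annotation_count": len(rows),
            "top_tags": collections.Counter(
                tag for row in rows for tag in row.get("tags", [])
            ).most_common(5),
            "comments": [row.get("text", "") for row in rows if row.get("text")],
        }
    return report

if __name__ == "__main__":
    # Hypothetical chapter URIs for one EPUB title.
    chapters = [
        "https://example-publisher.com/book/chapter-1",
        "https://example-publisher.com/book/chapter-2",
    ]
    for uri, stats in weekly_report(chapters).items():
        print(uri, stats["annotation_count"], "annotations")
```

Even a script this small suggests why a standardized report would matter: the hard part is not fetching annotations but deciding which URIs, groups, and fields are actually worth a publisher’s attention.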

So if one hurdle in capitalizing on annotation software is having it produce reports, the other hurdle is getting readers to actually use the software. People still need to create an account, install the software on their device, and then open it to highlight sections and type notes. None of these are complicated steps, but they all require actions that we have to inform people of and convince them to take.

While I’m dreaming, I’d like to look for other ways to reduce friction and make annotations just as simple as picking up a pen and scribbling in the margins of a book. For example: the software could come already installed on EPUB readers, the readers’ accounts could simultaneously log them on to Hypothesis and the account associated with their device so they wouldn’t require yet another account, or the program itself could allow readers to highlight passages with a swipe of their finger.

The possibilities are endless—and so are the challenges!

Hey Siri, What Should I Read Next?

The topic of AI, as I am beginning to appreciate, is a Pandora’s box. Once opened, it cannot be contained. And although AI promises to simplify complex things, it inadvertently adds complexity to our ‘once simple life’.

To imagine the next possible confluence of AI and publishing, we first need to evaluate the most urgent need for publishers. What is the most persistent need?

Considering that the publishing industry is going through a big shift, the fight has moved beyond two key parameters—content and availability. The age-old cornerstone of publishing was to find great content and make it available to as many readers as possible, usually through an extensive distribution network. Earlier, a book had to compete for shelf space, and the playing field was limited to bookstores and newsstands. But the market is different now. With innovation in e-commerce and Amazon’s hold over the market, the concept of shelf space has disappeared. Every book fends for itself now. Distribution is one of the publishing industry’s strongest assets, but with Amazon in the picture, it is no longer a unique advantage.

The publishers still hold the advantage on content, but not for long. Amazon has single-handedly revolutionized self-publishing, breaking one of the strongest barriers to entry—a publisher’s stamp. Anyone can publish now. That isn’t necessarily a bad thing for publishers. Some really promising writers have emerged from the cacophony of indiscriminate self-publishing, and there is a low-risk opportunity for publishers in that.

But going forward, the fight has moved to discoverability: it is all about reach now. And that is where AI can really benefit publishers. The market can no longer be limited by geographical boundaries, or demographics for that matter. With machine learning and natural language processing, it is becoming increasingly possible to track not only what people are buying but also why they are buying it. This deeper, non-linear understanding of human behaviour is leading the way to behavioural marketing. With the use of AI, publishers can expand their reach with better, more focused marketing.

Publishers can benefit a lot from AI. From content curation to SEO, from user-generated data (reviews, ratings, categories) to email marketing and social media reach, these tools can not only make publishers’ lives easier but also make them better at their jobs. The optimization of processes and faster turnaround times not only yield better results for businesses; they also keep publishers relevant to consumers, leading to better-informed buying decisions and higher conversion rates.
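To make the “more focused marketing” claim concrete, here is a minimal, hypothetical sketch of content-based discovery: book blurbs are turned into TF-IDF vectors and compared by cosine similarity with scikit-learn. The titles and blurbs are invented, and a real system would fold in purchase history, reviews, and behavioural signals rather than descriptions alone.

```python
# Minimal sketch of content-based book discovery: represent each blurb as a
# TF-IDF vector and recommend the most similar titles. Titles and blurbs are
# invented; a production system would also use purchase and browsing signals.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalogue = {
    "The Quiet Library": "a small-town librarian rebuilds a community through books",
    "Machine Dreams": "a journalist investigates how algorithms decide what we read",
    "Paper Trails": "essays on the history of print, publishing, and bookstores",
    "Silicon Shelf": "how online retail and recommendation engines reshaped bookselling",
}

titles = list(catalogue)
vectors = TfidfVectorizer(stop_words="english").fit_transform(catalogue.values())
similarity = cosine_similarity(vectors)

def recommend(title, top_n=2):
    """Return the top_n catalogue titles most similar to the given one."""
    idx = titles.index(title)
    ranked = similarity[idx].argsort()[::-1]
    return [titles[i] for i in ranked if i != idx][:top_n]

print(recommend("Machine Dreams"))
```

The design choice worth noticing is that nothing here requires a tech giant’s infrastructure: a publisher’s own metadata is already enough raw material to start experimenting with discovery.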

AI has already had a tremendous impact on the way users conduct online searches and discover books. This, in turn, is changing the way marketers create and optimize content. Innovations like the Amazon Echo, Google Home, Apple’s Siri, and Microsoft’s Cortana make it easier for people to conduct searches with just the press of a button and a voice command. That means the terms they are searching for are evolving too. Publishers need to observe this user behaviour closely. How people search for books is important in ascertaining how buying decisions are made and where the actual buying takes place. With the help of AI, publishers can re-establish a more efficient purchase funnel for readers.

I think publishers need to be smart here. The industry is going through a disruption right now, with the driving force in the hands of tech giants who can’t necessarily be identified as publishers. For all the waves Amazon is making, it couldn’t have gotten where it is today without the groundwork of traditional publishing. To me it seems quite clear that publishers need to embrace AI, because it is bound to reach them anyway. It makes sense to stay on top of the game rather than play catch-up all the time. If there is even the remotest possibility of publishers regaining the ground lost to Amazon, it is through AI. It is the only thing that will level the playing field once again.

Anumeha Gokhale

May The Force Be With You, Mr. Publisher

I grew up in a small town in Northern India, where bookstores were a rare sight. Academic reading was very much encouraged, but the concept of reading for leisure was foreign to the majority of folks. I come from a family of non-readers. Since I turned out to be the book-sheep of the family, I had to find my own ways to secure reading material. Begging and borrowing aside, I used to walk a couple of kilometers, twice a week, to visit the only library in our locality. Calling it a library would be stretching it. It was just a hole in the wall, lined with a couple of hundred books. But to my book-starved eyes, the place was salvation.

A couple of decades later, the picture is quite different. Today, I have access to almost every book that gets published worldwide. I can read anything, anytime, in any format, without moving an inch.

With internet business models taking more and more concrete shape, the publishing industry as we know it is undergoing a sea change. Access to publishing platforms and access to content are the two extreme ends of the traditional publisher’s role: to decide who gets published and how their books get distributed. Publishers have, in a way, acted as a wall between the reader and the author. That wall is crumbling as we speak.

Some believe that the role of the publisher as middleman is becoming increasingly redundant as self-publishing gains ground. With traditional distribution stuck in a rut, readers are getting click-happy. Authors are discontented because traditional publishing methods don’t pay out for the majority of them. Readers are loyal to the author alone, so they don’t really care how the books reach them. So where does that leave the publishers?

As part of this industry, we understand the value a publisher adds to the process of making a book. But an average reader is often unaware of the role the publisher plays in the making of a book. Most readers don’t spare much thought for the process of building a book. And as convenient as internet publishing models are, abundance isn’t always a good thing. We are moving away from a streamlined dissemination of content to indiscriminate publishing, creating less value and more noise in the process. Yes, ebooks are cheaper and easier to find, but it also means chaos as every book fends for itself on an algorithm-driven website. Most books run a hundred-meter sprint and die. Suddenly, the derelict library from my childhood is looking so much better.

To survive this era of digital transformation, publishers need to pivot and regroup. They need to rethink their ‘behind-the-scenes’ approach and start marketing not just the books but themselves as well. Readers need to understand the value publishers add to their favorite books. That is the only way to preserve the sanctity of this profession. Publishers need to bring the fight to where their strengths are—print books. Readers are still loyal to the printed book, and that is where publishers have the upper hand. The digital model of publishing completely sidelines the ‘form’ of the book. No ebook or print-on-demand copy can compete with a book lovingly reproduced by the hands of an experienced publisher. Publishers need to recalibrate their strategy to give readers a reason to buy more books or return to the print format. The digital distribution battle belongs to Amazon, because they got there first. But the publishing business as a whole is teetering on the precipice of big change. Publishers need to up their game, because this can go either way.

Anumeha Gokhale

 

Break Down the Barriers

As we know quite well by now, publishing has traditionally had some very high barriers to entry. You had to be the right person, you had to know the right people, and you had to have the luxury of being able to spend your time writing (a room of your own, if you will). And even after all of this, there were still (and still are) gatekeepers deciding if your work was worthy enough to be published.

On one hand, while the barriers are not as high as they once were, they are still a major issue in publishing today, as we discussed at the Emerging Leaders in Publishing Summit. But as we talked about in class, the advent of online business models has also helped to knock many of these barriers down. The space that was once reserved for a select few is now a space where everyone can be an author, and as such it is easier to access publishing platforms. By extension, if you are a consumer, it is also easier to access this abundance of content.

These new models are not inherently detrimental to the publishing business; rather, the publishing industry makes it appear so by remaining stagnant. Both models of publishing have the same goal (to publish books and profit), but they have different ways of meeting that goal. They are in the same business, but have different ways of doing business.

The publishing business model cannot just “go online” and assume that’s enough, but must examine why consumers are moving towards other models. They wouldn’t have to dive that deep to realize it’s because these other models better meet author and consumer needs. (There are clear examples of this same transition in the newspaper industry). Publishers have to realize that their barriers to entry have harmed their business and are driving people to seek out more accessible models. It’s not the location that is the problem, but the service offering.

As much as traditional publishers may want to feel needed and necessary, the truth is that other models are beginning to push them out. In publishing, we are providing a service, not a privilege. There is no reason publishers could not have evolved earlier on to better meet the needs of consumers when issues (such as barriers to access) were raised.

In order to compete (and consequently, survive), traditional publishers need to evolve. They need to give a platform to marginalized voices. They need to find better ways to cater to customers’ needs. They need to deliver specialized services to authors (not necessarily the whole publishing package). They need to step off their pedestal and share power with customers and authors by better involving them.

To summarize, publishers need to identify barriers to entry in the industry and then find concrete steps they can take to remove these barriers if they want to stay relevant. Otherwise, people will continue to find ways to go around the barriers that are still in place.

Babeling Tongues: Literary Translation in the Technological Era

According to the Canada Council for the Arts, “It all starts with a good book. Then a translator, writer, or publisher is inspired to see it translated.” Indeed, in the present moment, translation is becoming ever more important to both the globalizing world generally and the publishing industry specifically. Despite the increased role translations must undoubtedly take in the world market today, Three Percent, a resource for international literature at the University of Rochester, reports that “Unfortunately, only about 3% of all books published in the United States are works in translation… And that 3% figure includes all books in translation—in terms of literary fiction and poetry, the number is actually closer to 0.7%.” This paper justifies the need to increase the number of translations available in the market, and explores the problems and possibilities of doing so.

This essay is divided into three sections. It begins by examining the role of language in literature. It uses the political importance of a wider canon and the mass appeal of World Literature to establish the importance of works in translation. It then explores the different processes by which professional translators and machine translation software approach the translation of texts. In this section, I demonstrate how machine translations differ from human translations in their conception and execution. The final section of this essay discusses the limits and possibilities of both types of translation. In particular, it suggests that machine-based translations are, at present, largely capable of translating only literally, while literary texts require translations that go beyond simply literal forms, relying as they do on cadence, metaphor, connotation, and a detailed knowledge of context. Finally, I conclude by showing how work in this regard nonetheless remains open, as different groups are attempting to perfect machines equipped with artificial intelligence that can handle the more complex types of decision-making required for such translation.

On Translation and World Literature

In order to realize the importance of translation today, we must first recognize that we are in the 21st century, in which the world has become incredibly globalized. In this globalized world we have, for the first time in history, so many individuals from so many different cultures interacting with each other on a daily basis, and taking an interest in each other’s literatures. This having been said, the communication predominantly takes place in English, which has, because of the British Empire, historically been a very important language on the global scale. Thus, though individuals may speak several languages at once, the dominant language of communication across cultures is English. Indeed, one would be hard-pressed to find someone who does not agree that English is the global language of the present world.

The position of English deserves more thought, particularly as, according to the Kenyan scholar Ngugi wa Thiong’o, languages are not politically neutral. They are not inert objects used simply for communication, but rather, every language is “both a means of communication and a carrier of culture [emphasis added]” (Ngugi 13). By calling language a carrier of culture, Ngugi informs us that each language, in its very form and vocabulary, carries the experience of a certain people. Effectively, languages carry “the entire body of values by which we come to perceive ourselves and our place in the world” (Ngugi 390).

Though insightful in itself, Ngugi’s provocation proves particularly pertinent when we consider the historic progenitors of the English language: usually first-world native speakers, who are almost always white. And while, in the past, this fact may not have been an issue, as the language circulated only within this same sphere of people, it is problematic in the modern world, where the literature published has failed to match the diversity of its readership. Indeed, as The Guardian’s recent list of the 100 Best Novels attests, the majority of what is considered ‘great literature in English’ is still held to come from the people mentioned above. Despite surveying novels across five centuries, the Guardian list acknowledges fewer than 10 novels by authors of colour. Political repercussions notwithstanding, this imbalance reflects present literary trends even outside of novels published originally in English. At present, we find ourselves in a world whose supposedly ‘global’ literature does not represent the diversity of the people who read it. Given the role of editors and publishers in shaping the literary landscape, such individuals and groups must strive to ensure that the available literature reflects the experiences of those who are to read it. An integral step in this process is to make more English translations available for the current, global readership of the language.

Apart from the responsibility they hold, publishers need to increase the translations they publish because, quite frankly, it makes economic sense. Literature in translation is a growing market, particularly in diasporic communities whose second- and third-generation readers cannot read their original languages, yet still desire to reconnect with their roots. Moreover, as more and more people have become aware of the limited perspectives inherent in ‘purely English’ literature, they have also recognized the importance of books in translation. As a result, they have created a huge demand for literature that reflects these other languages and cultures. Such books are also of interest to native English readers, who are curious to know what people from other countries are writing about. For this reason, many authors have gained worldwide appeal despite never having written in English: Gabriel Garcia Marquez, Milan Kundera, Yukio Mishima, and Jorge Luis Borges, to name just a few.

This immense demand for translations is visible in the fact that such books generally claim a much larger share of the market than the ratio in which they are produced. According to Nielsen,

“‘On average, translated fiction books sell better than books originally written in English, particularly in literary fiction.’ Looking specifically at translated literary fiction, [we can see that] sales rose from 1m copies in 2001 to 1.5m in 2015, with translated literary fiction accounting for just 3.5% of literary fiction titles published, but 7% of the volume of sales in 2015.” (from The Guardian)

Given the above-mentioned statistics, it is obvious that the translation market is an incredibly fruitful avenue for publishers to explore. Moreover, given how small the current production share of translations is, there is still a lot of potential for exploiting the market for translated literature. Rebecca Carter corroborates this point as she notes that “Amazon had identified an extremely strong and under-served community: readers with an interest in books from other countries.”

In light of these findings, it is obvious that we need to increase the number of translations available and to see what avenues exist for rendering high-quality translations. As we seek to do so, I argue that it becomes prudent to look into newer methods of translation, particularly machine translation (MT), which could prove more efficient and economical than traditional means. As such, it is necessary for us to see how these two processes of translation, by humans and by machines, work, and what the problems and possibilities of each are.

Translating: By Machine and By Hand

At present, the dominant translation methodology is that followed by professional translators. Given that translation is a niche profession, we must examine the motivations of professional translators in order to understand the techniques they use in translating works. Deborah Smith, translator of The Vegetarian, winner of the 2016 Man Booker International Prize, explains that “[p]art of the reason I became a translator in the first place was because Anglophone or Eurocentric writing often felt quite parochial” (from The Guardian). Smith’s view is very much in tune with the current ascendancy of World Literature and the movement towards a more global canon. Andrew Wilson, author of Translators on Translating and a translator himself, is “struck by the enjoyment that so many translators seem to get from their work” (Wilson 23). In fact, the various accounts from translators in his book indicate that passion is the driving force behind the profession. Per Dohler is immensely proud of himself and his fellow translators because “[w]e come from an incredible wealth of backgrounds and bring this diversity to the incredible wealth of worlds that we translate from and into” (Wilson 29). While many translators, like Dohler, come with a background in literature and linguistics, others, like Smith, are self-taught.

Building off the motivations expressed by these other translators, Andrew Fenner describes a general approach to translation. He points out that, firstly, the translator reads the whole work thoroughly in order to get a sense of the concepts in the text, the tone of the author, the style of the document, and the intended audience. The translator then translates what they can, preparing a first draft and leaving unknown words as is. After doing so, the translator sets the work aside for a day and allows their subconscious to mull over ambiguous words or phrases. They then return to the work sometime later to make checks, correct any errors, and refine the translation. Lastly, the translator repeats this last process a few more times (Wilson 52-3).

The salient feature of Fenner’s process is that the human translator takes the work as a whole into consideration. They do not imagine the text simply as an object built from the connection of the literal meanings of words. This idea of the work as a whole being reflected in each individual segment will become particularly important later, when we explore machine-based translations. For now, however, we need only note that this approach ties into Peter Newmark’s diagram of the dynamics of translation:

[Figure: Peter Newmark’s “The Dynamics of Translation” diagram. SL = source language; TL = target language; item 9 reads “The truth (the facts of the matter).”]

As the above diagram shows, the translator must keep in mind both the source and target languages’ norms, cultures, and settings, as well as the literal meaning of the text: a challenging task to say the least, and one that involves a complex system of processes and judgements.

Machine translations, in contrast to human translations, use a different series of processes, which generally do not take these factors into account in the same way. For the purposes of this essay, we will look at two machine translation platforms: Duolingo and Google Translate. By its own admission, Duolingo uses a crowdsourcing model to “translate the Internet.” Founder Luis von Ahn strove to build a “fair business model” for language education, one where users pay with time, not money, and create value along the way. Duolingo allows users to learn a language on the app while simultaneously contributing to the translation of the internet.

Ahn introduced the project and the process of crowdsourcing these translations in a TED Talk:

He claims that Duolingo “combines translations of multiple beginners to get the quality of professional translators” (from the video above). In the video, Ahn demonstrates the quality of translations derived from the app. The image below contains translations from German to English by a professional translator, who was paid 20 cents a word (row 2), and by multiple beginners (row 3).

[Image: Comparing Translations: Professionals versus Duolingo Beginners]

As is evident, the two translations in the bottom rows are very similar to each other. Using the ‘power of the crowd,’ Ahn estimates that it would take one week to translate Wikipedia from English to Spanish, a project that is “$50 million worth of value” (from the video). From this estimate alone, we can see that crowdsourced translation offers the possibility of saving a great deal on the cost of translation, a prospect that, in itself, may allow many more translations to be produced with the same amount of financial capital.

Apart from Duolingo, one of the more common translation tools is Google Translate. Unlike Duolingo, which relies on the input of many users translating the same sentence, Google Translate works in an entirely computational manner. It performs a two-step translation, using English as an intermediary language, although it undertakes a longer process for certain other languages (Boitet et al.). As the video below shows, Google Translate relies on pre-existing patterns in a huge corpus of documents and uses these patterns to determine appropriate translations.
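To illustrate the two-step idea, here is a toy sketch of pivot translation: the source text is translated into English, and the English is then translated into the target language. The translate functions below are stand-in placeholders, not Google’s actual API; the point is only the shape of the pivot.

```python
# Toy illustration of pivot (two-step) translation through English.
# translate_to_english and translate_from_english are stand-ins for whatever
# engine is actually used; the tiny dictionaries exist only for demonstration.

def pivot_translate(text, source_lang, target_lang,
                    translate_to_english, translate_from_english):
    """Translate source -> English -> target, as in a two-step pivot system."""
    english = translate_to_english(text, source_lang)
    return translate_from_english(english, target_lang)

pt_en = {"obrigado": "thank you"}   # Portuguese -> English
en_de = {"thank you": "danke"}      # English -> German

result = pivot_translate(
    "obrigado", "pt", "de",
    translate_to_english=lambda t, src: pt_en.get(t, t),
    translate_from_english=lambda t, tgt: en_de.get(t, t),
)
print(result)  # danke
```

The sketch also hints at the cost of pivoting: any error or ambiguity introduced in the English intermediary is carried straight into the final output.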

While we grant professional translators the benefit of the doubt, in that we do not expect their translations to be ‘perfect,’ it is important to note that we seem to hold elevated expectations of work done by machines. Machine translations, with their statistically sound algorithmic models, are assumed to provide accurate and appropriate translations. As we go forward with this essay, especially as we discuss the limitations and possibilities of these approaches to translation, it is important to realize that while machine-based translation may indeed advance the pace and quality of translation, we still cannot assume its output to be perfect, or always reliable.

Translating: Problems and Possibilities

In terms of limitations, the primary issue with machine-based translation at present is that it seems capable only of literal translation. In short, this method of translation is most suitable for translating individual words occurring in simple sequences, one after the other. This limitation proves especially debilitating because many texts, particularly literary texts, do much more than simply convey literal meaning. As Philip Sidney explained in his Defence of Poesie, Literature with a capital L means to both “teach and delight.” Literature, in its attempt to delight and entertain, involves an infinitely complex interaction between words, their sound and cadence, their denotation and connotation. It does not simply convey meaning but ascends to the level of metaphor, symbolism, and leitmotif, and, in so doing, becomes an object of beauty. To put it simply, when we talk about Literature, it is not just what is said (what we can capture in literal translation) that matters, but also how it is said (which is not easy to reproduce).

Given this supra-literal quality of literary fiction, we must question the applicability of machine translation to such literary forms. Indeed, because machine translations do not seem capable of accounting for this metaphoric dimension of literary language, they may be better suited to types of writing whose goal is the simple transfer of meaning, or communicative writing. Machine translation is thus more applicable to knowledge-oriented genres of writing such as encyclopaedic articles, newspapers, and academic texts, whose main focus is to educate and whose core linguistic operations are literal rather than metaphoric. However, though machines seem less apt at translating these more complex forms of writing, I maintain that there is the possibility of having machines perform translations of such texts with the aid of limited human intervention and artificial intelligence.

According to a recent report on Google’s Neural Machine Translation system in the MIT Technology Review, the quality of machine translation could possibly be made very similar to translation performed by a professional. Tom Simonite reveals that, “When people fluent in two languages were asked to compare the work of Google’s new system against that of human translators, they sometimes couldn’t see much difference between them.” The inherent challenge of translating literary works lies in the fact that multiple connotations of the same word are often context dependent, and therefore, programming a system that can intelligently select one connotation over another is no easy feat. Will Knight explains the advancement of artificial intelligence: “In the 1980s, researchers had come up with a clever idea about how to turn language into the type of problem a neural network can tackle. They showed that words can be represented as mathematical vectors, allowing similarities between related words to be calculated…By using two such networks, it is possible to translate between two languages with excellent accuracy.”
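Knight’s point about word vectors can be illustrated in a few lines: if words are represented as vectors, the similarity between related words can be computed as a cosine. The three-dimensional vectors below are invented toy values; real embeddings are learned from large corpora and have hundreds of dimensions.

```python
# Toy illustration of words as vectors, with relatedness measured by cosine
# similarity. These 3-dimensional vectors are invented for demonstration;
# real systems learn high-dimensional embeddings from large corpora.
import numpy as np

embeddings = {
    "book":   np.array([0.90, 0.10, 0.05]),
    "novel":  np.array([0.85, 0.15, 0.10]),
    "carrot": np.array([0.05, 0.90, 0.20]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction, near 0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["book"], embeddings["novel"]))   # high: related words
print(cosine(embeddings["book"], embeddings["carrot"]))  # low: unrelated words
```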

Google’s official report concludes: “Using human-rated side-by-side comparison as a metric, we show that our GNMT system approaches the accuracy achieved by average bilingual human translators on some of our test sets” (Wu et al.). There is definitely no indication that this model is perfect as yet, but it is a fascinating possibility for the future of translation.

Similar to how we read and process language in texts, Google’s software “reads and creates text without bothering with the concept of words” (web). Simonite describes how the software, in a manner similar to humans’ processing of language, “works out its own way to break up text into smaller fragments that often look nonsensical and don’t generally correspond to the phonemes of speech.” Much like professional translators, who approach the text in chunks they feel are appropriate, the software does the same. For publishing, the benefits of machines performing high-quality translations equivalent to those of professional translators are manifold.

Primarily, such a form of translation would mean shorter production times per translation and increased accessibility of the work. In the current system, where translations are usually undertaken only when funding or grant money is available, or when there is an assured demand or number of sales in the target market, quality machine translation would ensure that a lack of funds does not hinder the development of a translation project. Where professional translators are not readily available for certain languages, machines could step in to do the work. Of course, the financial and physical accessibility of such software to publishers themselves is another matter of consideration. But these are dreams worth considering, and pursuing.

The question remains, however: how can this machine translation model be perfected? Without delving too far into the technicalities of the matter, it is evident that one of the best ways to fine-tune translation models such as these is to provide the system with as much parallel data as possible. According to Franz Josef Och, the former head of Machine Translation at Google, Google Translate has relied on documentation from the Canadian government (in both English and French) and files from the United Nations database. In a similar manner, we can ask publishers to provide literary texts, either original works or translations, to which they currently hold the copyright. By providing copious amounts of data, and by using processes of machine learning, we may be able to teach computers to translate better and better. This, in turn, could lead to very advanced machine translations, capable of translating even highly metaphoric forms of literature. In so doing, we can possibly arrive at a stage where, in the words of Jo-Anne Elder, the former president of the Literary Translators Association of Canada, “A translated book is not a lesser book.” In pursuit of this goal, our aim must not be simply to give up in recognition of the present hurdles confronting machine-based translation but, like a literary Usain Bolt, to strive to rise above them and succeed.


Works Cited

“About Three Percent.” Three Percent. University of Rochester. Web. 7 Nov. 2016.

Boitet, Christian, et al. “MT on and for the Web.” (2010):10. Web. 24 Nov. 2016.

Carter, Rebecca. “New Ways of Publishing Translations.” Publishing Perspectives. 05 Jan. 2015. Web. 20 Nov. 2016.

Duolingo – The Next Chapter in Human Computation. YouTube, 25 Apr. 2011. Web. 28 Nov. 2016.

English Essays: Sidney to Macaulay. Vol. XXVII. The Harvard Classics. New York: P.F. Collier & Son, 1909–14; Bartleby.com, 2001. Web. 24 Nov. 2016.

Flood, Alison. “Translated Fiction Sells Better in the UK than English Fiction, Research Finds.” The Guardian. Guardian News and Media, 09 May 2016. Web. 10 Nov. 2016.

Google. Inside Google Translate. YouTube, 09 July 2010. Web. 26 Nov. 2016.

Knight, Will. “AI’s Language Problem.” MIT Technology Review. MIT Technology Review, 09 Aug. 2016. Web. 25 Nov. 2016.

“Literary Translators Association of Canada.” Literary Translators Association of Canada. Web. 28 Nov. 2016.

Medley, Mark. “Found in Translation.” National Post. National Post, 15 Feb. 2013. Web. 28 Nov. 2016.

McCrum, Robert. “The 100 Best Novels Written in English: The Full List.” The 100 Best Novels. Guardian News and Media, 17 Aug. 2015. Web. 26 Nov. 2016.

Och, Franz Josef. “Statistical Machine Translation: Foundations and Recent Advances.” Google Inc. 12 Sept. 2009. Web. 25 Nov. 2016.

Simonite, Tom. “Google’s New Service Translates Languages Almost as Well as Humans Can.” MIT Technology Review. MIT Technology Review, 27 Sept. 2016. Web. 28 Nov. 2016.

“The Butterfly Effect of Translation.” Translation. The Canada Council for the Arts. Web. 13 Nov. 2016.

Thiong’o, Ngugi Wa. Decolonizing the Mind: The Politics of Language in African Literature. London: J. Currey, 1986. Web. 26 Nov. 2016.

Wilson, Andrew. Translators on Translating: Inside the Invisible Art. Vancouver: CCSP, 2009. Print.

Wu, Yonghui, et al. “Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation.” (2016). Web. 25 Nov. 2016.

Ah, Internet writing. What does one call thee?

What does it mean “to publish”? The Oxford English Dictionary defines it as making information available to the public. In “A Writing Revolution,” written by Denis Pelli and Charles Bigelow at Seed Magazine, the two make claims about what publishing means today. Yes, what they consider contemporary publishing is supported with graphs and statistics, conveying that the Internet is making it even easier for anyone to essentially publish (make things public); however, I’m not entirely on board with calling what they describe “publishing.”


Reading Response: Which Kind of Innovation?

Baldur Bjarnason’s article “Which Kind of Innovation?” gave a lot of credit to ebooks, in my opinion. But I think he was on the right track when he said that ebooks weren’t disruptive innovations. The problem I find is that ebooks need to be disruptive to the entirety of the publishing industry if they want to be adopted with any sort of staying power.

Print books have been improved upon for more than 500 years. So in a way, it makes sense for ebooks to be modelled after the print formula. However, how can ebooks compete with paperback books—physical takeaways—when their prices differ by only $0.00 to $5.00? Ebooks must offer something more substantial and satisfying than print books if the industry wants them adopted by a wide audience. It is almost comical when Bjarnason comments, “Amazon’s Kindle format remains for all intents and purposes a 1990s technology.” In reality, ebooks are, for the most part, a digital facsimile of a book. They are laid out similarly, and I would argue that the Kindle format is a 1500s technology. But Bjarnason seems to be on to that as well, as he says, “[Fixed layout ebooks] contain… no innovative features to speak of, they are merely an accumulation of complex print-like cruft to aid the transition of illustrated or designed print books into digital.”

Projects such as The Pickle Index, a Web 2.0 storytelling experiment that unfolds simultaneously in story-time and in real time over ten days, “revealing the narrative through the various features of the app: popular vinegar-based recipes, daily news updates, dynamic maps, and Q&A,” are a much more interesting way to get readers reading digitally. In fact, it is at this point that I would actually refer to digital reading as an “innovation.” When Bjarnason calls ebooks a “sustaining innovation,” in the sense that they sustain what already exists in the publishing world, I think he is using an oxymoron. If they are sustaining the status quo, they are not creating innovation at all.

I think there needs to be a major shift in the thinking around creating ebooks. They cannot just be an afterthought, a digital copy of the book. There has to be something altogether different about them, a reason for people to choose them over print books. But when prices are comparable, there is no physical takeaway, and print books are better designed than ebooks, there is no real reason to adopt them.

Information Sharing Online and in Coffeehouses: Gatekeepers and Social Discourse


Information sharing today has reached an unprecedented peak. Higher literacy rates, the accessibility of the Internet, and the sheer number of pages online, including blogs, comments, and profile pages, contribute to an endless stream of information that must be sorted through in order to be understood. Furthermore, what are the side effects of the ways users are sorting through content? By examining the social changes in information sharing during the Age of Enlightenment and comparing them to the challenges of sharing knowledge on a website such as Facebook, this essay will argue that while using algorithms is beneficial given the expansive amount of information on the web, it ultimately leads to a less knowledgeable, less informed online community. It will examine how the Age of Enlightenment thrived where the Internet is failing, despite the latter’s potential for progressiveness and innovation.

The Age of Enlightenment was a period in eighteenth-century Europe in which there was a movement against the then-current state of society, inclusive of church and government. In pre-Enlightenment Europe, “individual dignity and liberty, privacy, and human and civic rights… [were] virtually nonexistent… ‘burned and buried’ in medieval society and pre-Enlightenment traditionalism” (Zafirovski 9). This illustrates the church and state’s role as gatekeepers of knowledge, allowing only what they deemed appropriate to be accessed by society. Zafirovski states that during the Enlightenment, “Descartes, Voltaire, Diderot, Kant, Hume, Condorcet, and others emphasized overcoming ignorance and intellectual immaturity, including religious and other superstition and prejudice” (4). He is referring to the major thinkers of this time, those who wrote public essays on the tenets of enlightenment and reason. It was an age in which past ideals were rejected in order to champion the concept of individual thought and voice. It was not a period of being “anti-” religion or state, but of individual liberty and of pushing against absolutism. During this time, the Encyclopédie was published, which disseminated the thoughts of the Enlightenment. Diderot, the editor of the project, is quoted as saying that the goal of the Encyclopédie was to “change the way people think” (“Encyclopédie”). During the Enlightenment, the opinions of those who wanted to remain within the norms of pre-Enlightenment society existed alongside the dissertations of those who proclaimed it was time for change: “The inner logic, essential process, and ultimate outcome of the Enlightenment are the destruction of old oppressive, theocratic, irrational, and inhuman social values and institutions, and the creation of new democratic, secular, rational, and humane ones through human reason” (Zafirovski 7). The thinking that existed pre-Enlightenment had to come first; the prominent thinkers emerged from a society of rules they did not relate to. In other words, they had to know the culture they were living in very deeply in order to argue strongly against it.

As stated previously in regard to the Encyclopédie, the dissemination of knowledge was paramount during the Enlightenment. For the sake of this paper, the major sources of knowledge-spread are taken to be of two origins: book publishing, and the salons and coffeehouses. As illustrated much earlier by Martin Luther’s Ninety-Five Theses, the ability to spread printed information became much simpler and more efficient with Johannes Gutenberg’s invention of movable type. Before this invention, religious scribes wrote all of the available books by hand. Because this was such an intensive process and paper was handmade, books were very expensive. Yet, as time went on, the efficiency of the printing press grew, especially with the beginning of the Industrial Revolution. This meant lower prices and therefore more availability. In turn, literacy grew. Furthermore, the inexpensive cost allowed the increased spread of journals, books, newspapers, and pamphlets (“Age of Enlightenment”). More people could engage with texts because of higher literacy rates and the growing number of texts that were now available. Once articles, essays, and books were read, they were also discussed in places such as coffeehouses and salons, where both men and women could meet to debate and discuss the ideas of the time. This created a social environment that was a catalyst for new philosophies. In fact, the idea for the Encyclopédie was conceptualized at the Café Procope, one of the coffeehouses of Paris that is still maintained (“Age of Enlightenment”). Furthermore, because anyone could come to discuss politics and philosophy, these spaces undermined the existing class structure, thus allowing for multiple perspectives in one place.

At the time of its introduction, no one could have predicted how ingrained in human society and culture an open public Internet would become. The rapid growth of the Internet is considered by Douglas Comer to be a result of its decentralization and the “non-proprietary nature of internet-protocols” (qtd. in “Internet”). During the time in which the Internet became popular, the speed of information growth was unprecedented. New websites with personalized homepages and links emerged as people began to explore the World Wide Web. Today, sites such as Facebook act as home websites, replacing the “homepages” of before. This, as shown in “The Rise of Homeless Media,” is beginning to replace the old ways of the web. Facebook is becoming a much bigger entity than its developers imagined at its conception. While this change may mean that the web is becoming streamlined, it comes at a cost of control for these sites’ users. In the ’90s and early 2000s, the popular free web hosting services provided a very personalized experience. Sites such as Angelfire, Freewebs, LiveJournal, and DiaryLand relied on subscribers and ads in order to run freely and in a way that allowed users to personalize their content, with the exception of ad placement for non-subscribers. Personalization occurred through writing code such as HTML. Furthermore, serious bloggers acted as a catalyst for other voices, creating a community where readers were linked to other bloggers and informative sites of related ideologies and/or topics. For instance, Mike Shatzkin’s The Shatzkin Files hyperlinks to other sites that may be of interest to a reader of that particular subject. Though it is a fairly recent blog, it is basic in its design, reminiscent of much earlier blogging interfaces. Today, blogs are increasingly popular and come with pre-made themes, making coding unnecessary, although it is still possible on platforms such as WordPress. On Facebook, however, users cannot change the style of their page. This control of style is one way the web is becoming more streamlined. The primary benefit of living on a home website such as Facebook, Twitter, Instagram, or LinkedIn is accessibility. Each site has its own niche purpose, and learning to code is not a necessity to run these pages. One simply needs to know how to link the various pages properly to allow for integrated movement across platforms. Because users do not need to understand code in order to have a profile on these websites, their user base is much larger. This is comparable to the accessibility of literature in the eighteenth century, which made reading a pastime for more than just an educated elite.

This ease of use has led to a global reach of perspectives. In this sense, the age of the Internet can be compared with the Age of Enlightenment in that the proliferation of knowledge is now much easier than it was in the past. Today, over one billion pages exist on the web (Woollaston). The billions of people using the web are provided access to a multitude of differing perspectives and insights (“Internet Usage on the Web by Regions”). Though this has the potential for tension, it has been shown to help develop critical thinking and empathy. In the article “How Diversity Makes Us Smarter,” Edel Rodriguez states, “social diversity… can cause discomfort, rougher interactions, a lack of trust, greater perceived interpersonal conflict, lower communication, less cohesion, more concern about disrespect, and other problems.” However, being confronted with these problems and having to mediate around diversity enhances creativity, “leading to better decision making and problem solving” (Rodriguez). Thus, diversity creates adversity, but it provides good results when people are encouraged to consider other people’s perspectives. Our minds are prompted to work harder when disagreement arises as a result of social differences. Thus, a difference in perspective “[encourages] the consideration of alternatives” (Rodriguez). This article, published by Scientific American, puts words to a phenomenon being studied by a multitude of people, including “organizational scientists, psychologists, sociologists, economists and demographers” (Rodriguez). It illustrates why salons and coffeehouses were so important as places to spark conversation. They were hubs of discourse that generated innovative ideas and ideologies, sometimes for pleasure, but other times to create planned social movements such as those that led to the French Revolution. Similarly, the web provides an outlet for people to create discourse. Though not a physical space like the salons, the web allows a greater global discourse to occur; it should be the perfect platform for our globally social world.

The most popular social network today is Facebook (“Leading Social Networks Worldwide as of January 2016, Ranked by Number of Active Users [in millions]”), with approximately 1.59 billion active monthly users (“Number of Monthly Active Facebook Users Worldwide as of 4th Quarter 2015 [in millions]”). Facebook is a platform for users to create profiles for personal or business use in order to connect with others. Facebook also doubles as a publication platform, though Facebook would argue against this (Kiss and Arthur). Publishing is defined by the Oxford English Dictionary (OED) as “the act of making something publicly known” (“Publishing”). Users on Facebook create posts and share them with both strangers and friends, thus creating a public publishing platform. These posts and comments are as much a public form of writing as blog posts or online fan fiction. They are documented proof of what has been said by whom; in fact, it is now possible to see the editing history on a single post or comment. Even if a post or comment is deleted, Facebook retains access to that content. Its Help Centre website states, “When you choose to delete something you shared on Facebook, we remove it from the site. Some of this information is permanently deleted from our servers; however, some things can only be deleted when you permanently delete your account.” Thus, content considered “deleted” exists past the time the creator removes it; it is still available to some, remaining “published” on Facebook’s servers.

Facebook is a platform where unique content is created, in addition to being a site where users “share” and “like” content they deem relevant. This can result in a lively discourse of back-and-forth commenting, especially with the newer option for users to “reply” to previous comments. However, in order to find content that is in opposition to one’s currently held views, one must often purposely seek it out. This is due to Facebook’s algorithms, which are largely invisible and secret to Facebook users. Facebook created algorithms that filter its seemingly endless content into curated, personalized “news feeds” for its users. An algorithm, defined by the OED, is “a precisely defined set of mathematical or logical operations for the performance of a particular task” (“Algorithm”). As a business, Facebook succeeds at the task of retaining consumers; it is able to deliver an appropriate amount of content to them. Where Facebook’s algorithms fail is in giving users unique content, based not only on their specific “likes” but on their broader general interests. Furthermore, they are unsuccessful at providing “readers” with interesting and challenging content that opposes their currently held views. They are unable to show a snapshot of the multitude of voices that exist on the platform; instead, they amplify a user’s preconceived views and reinforce that user’s confirmation bias. Ultimately, Facebook is a business. The prediction algorithms that provide users with a personalized news feed are meant to generate a user-friendly experience; however, in doing this, computers become gatekeepers and users become confined to ideological bubbles.
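A toy sketch makes this gatekeeping mechanism concrete. If a feed is ranked purely by predicted affinity, inferred from past likes and reading time, then posts from outside a user’s established interests sink to the bottom or vanish behind a cutoff. The scoring rule below is invented for illustration; Facebook’s actual ranking signals are proprietary and far more elaborate.

```python
# Toy illustration of a feed ranked purely by predicted affinity.
# The engagement counts and scoring rule are invented; the point is only that
# topics a user rarely engaged with end up last (or, with a cutoff, unseen).

user_history = {"liberal_politics": 12, "publishing": 8, "conservative_politics": 1}

posts = [
    {"id": 1, "topic": "liberal_politics"},
    {"id": 2, "topic": "conservative_politics"},
    {"id": 3, "topic": "publishing"},
    {"id": 4, "topic": "sports"},
]

def affinity_score(post):
    """Score a post by how often the user engaged with its topic before."""
    return user_history.get(post["topic"], 0)

feed = sorted(posts, key=affinity_score, reverse=True)
print([p["topic"] for p in feed])
# ['liberal_politics', 'publishing', 'conservative_politics', 'sports']
```

Nothing in such a ranking is malicious in itself; it simply optimizes for what the user already engages with, which is exactly how an ideological bubble forms without anyone deciding to build one.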

During the Age of Enlightenment, the book trade and the affordability of books allowed new areas of thinking and novel philosophies to proliferate. Censorship by church and state was dying in favour of books that engaged readers and inspired discussion and debate. What made books and discourse interesting was not the sameness of opinion but the diversity of opinions growing louder during the eighteenth century. Facebook could have become a place of such social diversity. Instead, its owners have engaged in gatekeeping and invisible editing in order to keep users returning to the site, at the expense of social and intellectual growth and change. The people who manage Facebook’s algorithms base many of them on “likes,” hidden posts, and the amount of time spent reading an article. “[Chris] Cox [Facebook’s chief product officer] and the other humans behind Facebook’s news feed decided that their ultimate goal would be to show people all the posts that really matter to them and none of the ones that don’t,” states Will Oremus in “Who Controls Your Facebook Feed.” However, humans are not as predictable as mathematical equations; using “likes” or time spent reading as the baseline of what is shown to people does not capture the whole complex picture of what human beings can, and should, engage with. In his TED Talk, Eli Pariser gives an example of algorithms attempting to understand a human being from these baselines alone. As a liberal, he engaged more with progressive content, but he also enjoys politics and likes reading the conservative side of the political spectrum. He recognized he was clicking on right-wing content less often, and he was perceptive enough to notice when the conservative viewpoint disappeared from his feed, leaving only content from his liberal friends. Opposing content, though interesting and necessary for Pariser, was gone, and he now had to seek it out. He had no active role in editing his news feed as content was disappearing, and neither do other users. Yet most users do not notice the shift happening; they simply see their own views amplified. Before the Internet, broadcast and print media were the gatekeepers of information. The media is fallible, but journalistic ethics existed to promote multiple perspectives. As the Internet expanded, it undermined this old media; huge companies such as Facebook and Google grew, and computers became the gatekeepers of information. Oremus states, “Facebook had become… the global newspaper of the 21st century: an up-to-the-minute feed of news, entertainment, and personal updates from friends and loved ones, automatically tailored to the specific interests of each individual user.” A platform that presents only one perspective to its readers, with no opposing opinion within arm’s reach as there is at a newspaper stand, is an archaic idea given how far we have moved to prevent censorship and to value social diversity.
Oremus’ article is informative and broadly supportive of algorithms, yet he still laments, “Drowned out were substance, nuance, sadness, and anything that provoked thought or emotions beyond a simple thumbs-up.” In his TED Talk, Pariser identifies the biggest issue at hand when computers control the information people see, and it is not simply ideological bubbles: it is a dysfunctional democracy, cut off from the just and open flow of information it depends on. To hold a strong conviction requires knowing and understanding all sides of an issue. As Katherine Phillips states, “We need diversity… if we are to change, grow, and innovate.” Facebook and Internet users cannot let web conglomerates be the only innovators, the only ones capable of seeing problems from multiple angles, whether those problems involve an algorithm or differences in ideology, religion, or politics. Users cannot let computers be their personal gatekeepers, preventing them from understanding that other perspectives exist and are equally valuable.

Facebook’s algorithms serve a vital purpose: they generate revenue, retain users, and make sense of the expanse of information available on the web. However, these secret, invisible algorithms prevent Facebook’s users from being introduced to novel information or opposing viewpoints, which in turn prevents people from understanding global events and instead creates ideological bubbles. Milan Zafirovski writes, “subjects were literally reduced to the servants of theology, religion, and church, thus subordinated and eventually sacrificed… to theocracy.” He is referring to pre-Enlightenment Europe; yet as people grow accustomed to seeing their own views amplified on what many consider their main news source, they are coming to accept the idea that their view is the only one. As history shows, discourse and challenging opinions and ideas are what fuel social change. Facebook does need to sort through the massive amount of information on its site, but it cannot act as a gatekeeper that distributes only the information it deems “important.” Facebook users need a voice in what is shown to them, and that voice needs to be bigger than a “thumbs up.”

Works Cited

“Age of Enlightenment.” Wikipedia: The Free Encyclopedia. Wikimedia Foundation, Inc. 30 January 2016. Web. 31 January 2016.

“algorithm, n.” OED Online. Oxford University Press, December 2015. Web. 26 January 2016.

Arthur, Charles and Jemima Kiss. “Publishers or Platforms? Media Giants May be Forced to Choose.” The Guardian. 26 July 2013. Web. 29 January 2016.

Chowdhry, Amit. “Facebook Changes News Feed Algorithm To Prioritize Content From Friends Over Pages.” Forbes. 24 April 2015. Web. 26 January 2016.

Dickey, Michael. “Philosophical Foundations of the Enlightenment.” Rebirth of Reason. Web. 26 January 2016.

“Internet.” Wikipedia: The Free Encyclopedia. Wikimedia Foundation, Inc. 29 January 2016. Web. 29 January 2016.

“Internet Usage in the World by Regions.” Internet World Stats. 26 January 2016. Web. 1 February 2016.

“Leading Social Networks Worldwide as of January 2016, Ranked by Number of Active Users (in Millions).” Statista. January 2016. Web. 31 January 2016.

Luckerson, Victor. “Here’s How Facebook’s News Feed Actually Works.” Time. 9 July 2015. Web. 26 January 2016.

Marconi, Francesco. “The Rise of Homeless Media.” Medium. 24 November 2015. Web. 15 January 2016.

“Number of Monthly Active Facebook Users Worldwide as of 4th Quarter 2015 (in Millions).” Statista. January 2016. Web. 26 January 2016.

Oremus, Will. “Who Controls Your Facebook Feed.” Slate. 3 January 2016. Web. 26 January 2016.

Pariser, Eli. “Beware Online ‘Filter Bubbles.’” TED. March 2011. Lecture.

Phillips, Katherine. “How Diversity Makes Us Smarter.” Scientific American. 1 October 2014. Web. 26 January 2016.

“publishing, n.” OED Online. Oxford University Press, December 2015. Web. 26 January 2016.

“What Happens to Content (Posts, Pictures) that I Delete from Facebook?” Facebook. Web. 29 January 2016.

Woollaston, Victoria. “Number of Websites Hits a Billion: Tracker Reveals a New Site is Registered Every Second.” Daily Mail Online. 17 September 2014. Web. 26 January 2016.

Zafirovski, Milan. The Enlightenment and Its Effects on Modern Society. New York: Springer. 2010. Web.

Why Assign Fictional Characters an ISNI

When the International Organization for Standardization (ISO) published the International Standard Name Identifier (ISNI) in March of 2012, the idea was that it would be used to identify “the millions of contributors to creative works and those active in their distribution, including writers, artists, creators, performers, researchers, producers, publishers, aggregators, and more. … ISNI can be assigned to all parties that create, produce, manage, distribute or feature in creative content including natural, legal, or fictional parties, and is essential to those working in the creative industries for quick, accurate and easy identification” (ISNI International Agency).

The Unexplored Potential of the Internet as Art Medium

[Image: Marcel Duchamp’s Fountain]

Oh, Fountain. Go to any first-year art history course and Duchamp’s urinal-turned-art-piece will be one of the most debated topics of the term. There are always those students who insist the merit of any work of art is in the skill of the execution, of which Duchamp demonstrates none. But the concept—a critique of the posh High Art world—always prevails and Fountain is hailed as one of the strongest works of the European avant-garde.
