The Engineers' A.I.-driven Future of Publishing

Potentially, AIs could cover, more or less successfully, the whole range of activities leading to the selection, creation, and distribution of books and other printed materials: from the manuscript draft, to substantive and copy editing, to layout and cover design, printing (or encoding for e-books), and even distribution of the published works.

One possible, and likely, future is the "engineers'" approach to implementing AIs in publishing. Engineers are problem solvers and optimizers, so it's natural that the whole process will be driven, like many other fields in technology, by this vision.

The process would take several specialized AIs, but they will no doubt accomplish "something". What makes the difference is the approach we take to using them, and I mean WE, because as future professionals, decision makers, and leaders of this industry, we must be very wary of how we want this to happen.

I have also included "The Boss's" perspective on these outcomes (find them in red) to show how these technologies appeal to and shape the decisions of the managerial level, and their impact on the workforce.

Scenario I: The Engineers' Approach

This process would evolve systematically, starting with manuscript selection as a machine learning project called "Gutemberg" (named un-creatively after a long struggle with copyright holders… engineers, after all). "Gut" starts learning from the actions of a human editor; then, combining the data gathered from the choices of several editors, it gathers enough information to start making its own choices. Those choices would probably be corrected again by those editors, who would think it's wonderful to have some time to do anything else, or simply to increase their "productivity", focusing on "editing" twenty books at a time instead of ten.

What about the Boss? The Boss is happy to have invested in this promising technology that may save the Company a lot in unnecessary human and material resources. It is a very competitive world, and the ones with the best tools will win the battle (or so the Boss thinks).

With the new data set as input, "Gut" would optimize and start making more accurate decisions; "productivity" would increase to 50 books per editor, then 100, the process being refined with each iteration. Finally, the "editor" would only have to set parameters to filter the manuscripts the AI had selected and focus on making "high-level" choices.

At this point, the Boss is considering reducing the editorial workforce; the savings are huge and will allow for investment in other projects. After all, the mandate states the company must give voice to as many people as possible. The dream of "serving the community" seems to be coming true.

A side effect is that, with each successive iteration, the "editor(s)" doing the job become experts in data selection: no more reading required, no need to understand, the primary requirement being competence in evaluating the numbers. Not far from today, this "editor" will effectively become a data analyst with publishing insights. The same process would apply to substantive and copy editing, probably discarding the latter job position before any other.

In the big office: the Boss is very happy to have saved so much on a "not always reliable" workforce. Some new positions had to be created, of course, like the AI Tech Specialist, who monitors and maintains the correct working of the AI. It's a major expense, but "Gut" can do the work of dozens of people in the same amount of time, or even less; they have already developed version 26.11, which even has a simulated but stimulating sense-of-humor module to make "meetings" with it more pleasant.

In essence, this Boss has a five-figure salary, and his troubles have been reduced to dealing with his "chief editors", a big name for people who evaluate the numbers and read the one-page, bullet-point summaries the AI delivers to them so that they are at least informed about what a book is about and the major points of the plot.

Design and layout also seem simple to create artificially: just provide a set of proven templates, use machine learning to teach the AI how to correct widows, hyphens, and the like, and don't worry about the rest. By the time this occurs, people will already have re-learned to read on those (horrible) screen readers with accessibility features, zoom in/out, and convenient storage capacity.

Printed books don't fare better. Even today, publishers have sacrificed all the use and meaning of margins and white space to maximize the use of space and increase their profit margin, which is no surprise, but is deplorable: even margins as narrow as half an inch on each side of a 5×8" book mean only 70% of the page is used for text; add leading to the equation and that usage may drop to as low as 50%.
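
As a quick sanity check on that figure, here is the arithmetic for a 5×8" trim with uniform half-inch margins (a sketch only; real page geometry also involves gutters, running heads, and folios):

```python
# Rough check of the margin claim above: a 5x8" trim with 0.5" margins on every side.
page_w, page_h = 5.0, 8.0        # trim size in inches
margin = 0.5                     # margin on each side

text_w = page_w - 2 * margin     # 4.0"
text_h = page_h - 2 * margin     # 7.0"

coverage = (text_w * text_h) / (page_w * page_h)
print(f"Text block covers {coverage:.0%} of the page")   # prints 70%
```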

For the Boss, one of the happiest things brought by AI is a different one, called "Minuzio" in honor of the famous Italian typographer and printer. He finally got rid of those pesky freelancers who tried over and over to get a cover done when all that was needed was "more red". Fortunately, "Minuzio" is very obliging, so you only have to tell it what style you want and it will deliver tens or hundreds of options, all appealing and optimized for visual impact.

In the accounting, financing, and administrative departments, editors would long since have been relieved of the pain of doing the numbers and dealing with P&Ls. Why bother? The new system linked to "Gutemberg", called "MIDAS", analyzes market trends and predicts, with 95% accuracy, the best possible release date within a time frame for a new product. It also organizes and tracks orders, delivers prompt shipping to points of sale, and handles the e-commerce site where e-books are ordered, not to mention tracking sales across Amazon and other regional platforms. It can even do your tax reports.

MIDAS has saved the Boss the pain of dealing with faulty logistics; the AI is everything they promised, and more. He saves time, money, and resources, and now only decides on the best courses of action for the Company to invest in. The logistics feature means each book may have as few as a couple dozen copies in print, and probably double that number in e-book sales, but it is a steady market and return rates are under 5%!

The end result: the Boss only has to deal with AIs. They work 24/7, meaning no more delays, no more missed deadlines, everything just a stream of finished works. With so many projects managed by "Gutemberg" and designed by "Minuzio", sales are like a video game where you invest your resources in one project or another. If only writers could write faster; but then, that will be solved when they release "Cervantes", the Writing Author AI everyone is expecting. Then books will be a matter of inputting a number of parameters and dragging a project into the publishing console to produce.

5 years later: advances in AI systems allow for the total disposal of unnecessary personnel. At most, a company now has a CEO, one Executive Editor, and an Executive Manager, who are required to maintain a certain level of humanity behind the scenes of an otherwise automated process.

After a hard struggle, open access supporters finally release "improved" versions (mostly copies and rip-offs) of the different AIs with various, sometimes flamboyant, names. Some of these specialize in certain genres; others try to emulate the protocols of Gutemberg or Minuzio. Many are free but mediocre; most are paid per upgrade or feature.

Whatever the angle, this leads to a sudden burst of "one-man/one-woman" publishers managing hundreds of projects at a time, who seem to be good at becoming celebrities and influencers. Self-publishing is possible, but if you want to "write" something that does not stall at the dozen-sales mark, you need those people to become your "Publishers".

Grant systems for publishing, where applicable, collapse under the pressure of tens of thousands of applications. Sometimes the grant is so low it barely covers the cost of a site domain, or the price of a cup of "Hyper Cetacean Milk Coffee" (which uses no cetacean milk, by the way; it's just a brand name, with no sugar and no actual coffee, only the flavor, and it's very popular by then).

Widespread publishing is a reality: anyone can write, or give an idea to a "Cervantes" replica, have the book written, then process it and publish "a book". Mission accomplished, everyone can publish now. With so many works and everyone writing, nobody reads anyone else.

10 years after the total implementation of AI in publishing: with so many published failures from "Cervantes" and its clones, people start working back toward actually writing something appealing to humans. Technically, the AIs' works are brilliant, but for some reason people do not like the ending, or the story; it was too good, too sad, too real. Something was lacking. Perhaps a certain lack of perfection?

15 years after total implementation: book publishing could be considered at its peak since the invention of writing. Almost every person on the planet has "written" a book at some point or turned their life experiences into one; AIs recording people's travels or daily experiences can now turn them into movies, blogs, and, of course, books.

30 years after the total implementation of AIs in publishing: no one reads any longer; the new ODID (organic data and information input device) works marvels in providing people with the knowledge and experience they need. Books are obsolete, and reading is a skill that must be taught separately, because not even ODID can "install" such a complex process in one's brain. Besides, nobody cares about this elaborate system of symbols, meanings, and references required to provide a basic understanding of topics or evoke elemental imagery in the mind. Those who read are either old enough to have been taught to, or learn it out of pure historical interest.

 50 years later… internet unplugged…

27,000 years later… On its way to a red star (formerly AC +79 3888), a primitive space artifact is discovered. There is great expectation, as it may be the one sent by the former inhabitants of planet Earth thousands of cycles ago. Within it comes a rich description of a world the meta-humans do not know about. When the "Archorologist" finds some unusual markings on it, it uses the primitive code of a technosentient being trapped in a terminal to scan the drawings; the holo-projector replies: I REGAYOV.

 

Sorry about the length of this piece; I got carried away by the topic.

 

 

Hey Siri, What Should I Read Next?

The topic of AI, as I am beginning to appreciate, is a Pandora's box: once opened, it cannot be contained. And although AI promises to simplify complex things, it inadvertently adds complexity to our 'once simple life'.

To imagine the next possible confluence of AI and publishing, we first need to evaluate publishers' most urgent need. What is their most persistent need?

The publishing industry is going through a big shift, and the fight has moved beyond two key parameters: content and availability. The age-old cornerstone of publishing has been to find great content and make it available to as many readers as possible, usually through an extensive distribution network. Earlier, a book had to compete for shelf space; the possible field was limited to bookstores and newsstands. But the market is different now. With the innovation in e-commerce and Amazon's hold over the market, the concept of shelf space has disappeared. Every book fends for itself now. Distribution is one of the strongest assets of the publishing industry, but with Amazon in the picture, it's no longer a unique advantage.

Publishers still hold the advantage over content, but not for long. Amazon has single-handedly revolutionized self-publishing, breaking one of the strongest barriers to entry: a publisher's stamp. Anyone can publish now. That isn't necessarily a bad thing for publishers. Some really promising writers have emerged through the cacophony of indiscriminate self-publishing, and there's a low-risk opportunity for publishers there.

But going forward, the fight has moved to discoverability; it is all about reach now. And that's where AI can really benefit publishers. The market can no longer be limited to geographical boundaries, or demographics for that matter. With machine learning and NLP, it's becoming increasingly possible not only to track what people are buying, but also why they are buying it. This deeper, non-linear understanding of human behaviour is leading the way to behavioural marketing. With the use of AI, publishers can expand their reach with better, more focused marketing.

Publishers can benefit a lot from AI. From content curation and SEO, to user-generated data (reviews, ratings, categories), to email marketing and social media reach, these tools can not only make publishers' lives easier but make them better at their jobs. The optimization of processes and faster turnaround times not only yield better results for businesses; they also help by staying relevant to consumers, leading to better-informed buying decisions and higher conversion rates.
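
As one small illustration of what mining that user-generated data could look like in practice, here is a sketch that scores reader reviews with NLTK's off-the-shelf VADER sentiment model; the titles and review texts are invented, and this is only a crude stand-in for the richer tools described above.

```python
# Hypothetical sketch: scoring reader reviews so marketing can see which titles
# generate the most positive word of mouth.
# Setup: pip install nltk, then run nltk.download("vader_lexicon") once.
from nltk.sentiment import SentimentIntensityAnalyzer

reviews = {
    "Title A": ["Couldn't put it down, loved every chapter.",
                "A bit slow in the middle but a satisfying ending."],
    "Title B": ["Confusing plot and flat characters.",
                "Not what the cover promised."],
}

sia = SentimentIntensityAnalyzer()
for title, texts in reviews.items():
    avg = sum(sia.polarity_scores(t)["compound"] for t in texts) / len(texts)
    print(f"{title}: average review sentiment {avg:+.2f}")
```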

AI has already had a tremendous impact on the way users conduct online searches and discover books. This in turn is changing the way marketers create and optimize content. Innovations like the Amazon Echo, Google Home, Apple's Siri, and Microsoft's Cortana make it easier for people to conduct searches with just the press of a button and a voice command. That means the terms they're searching for are evolving too. Publishers need to observe this user behaviour closely. How people search for books is important to ascertain how buying decisions are made and where the actual buying takes place. With the help of AI, publishers can re-establish a more efficient purchase funnel for readers.

I think publishers need to be smart here. The industry is going through a disruption right now, with the driving force in the hands of tech giants who can't necessarily be identified as publishers. For all the waves Amazon is making, it couldn't have gotten where it is today without the groundwork of traditional publishing. To me it seems quite clear that publishers need to embrace AI, because it is bound to reach them anyway. It makes sense to stay on top of the game rather than play catch-up all the time. If there is the remotest possibility of publishers regaining the ground lost to Amazon, it is through AI. It is the only thing that will level the playing field once again.

Anumeha Gokhale

Say hello to the very efficient and very effective “racist robots”

“AI has a disconcertingly human habit of amplifying stereotypes. The data they rely on – arrest records, postcodes, social affiliations, income – can reflect, and further ingrain, human prejudice.”

It pleases me to say that talk about “diversity and inclusivity” in Western publishing has become so commonplace it is beginning to sound like a broken record, a vital one but repetitive nonetheless. Just two decades ago, entire publishing conferences would not have been dedicated to people of colour, queer people or people from different socioeconomic backgrounds. We are working in an industry that is slowly trying to open its doors and move away from the systemic imbalance that has for decades governed it.

Because the industry is overwhelmingly white, there is a dominant monolithic voice that determines which books are acquired and which ones subsequently make it onto the shelves, digital or otherwise. More editors of colour are needed to begin to see any change in this regard. If what Professor Juan Alperin said to the cohort on Monday, the 5th of March, is true (that acquisitions is the role in publishing that will be easiest for machines to replace), then, in my opinion, this will be a blow to diversity.

A report on the future effects of AI on the publishing industry stated that "machine learning can help editors move beyond gut feeling when making content decisions". But in my opinion, it is this very "gut feeling" that makes the human process of acquiring new books indispensable. Taking this opportunity away from editors of colour and shifting straight into the machine learning era is a disservice to representation.

It is not difficult to imagine a future run by AI, as the medical and retail industries have begun to show how helpful it can be in increasing efficiency. In publishing, however, the incorporation of machine learning, especially in the acquisitions sector, could set the industry back decades unless the machines are multidimensional and taught to value diverse content.

Imagine the scenario below:

Doubleday, June 2030

After the annual general meeting on Monday, it was decided that psychological thrillers with a literary angle are what readers are looking for. Roco, the machine that asks consumers exactly what they want to read every time they open their Amazon search engine, has told publishers everywhere to buckle down and give readers a mixture of the now-legacy book Gone Girl and the vintage classic The Catcher in the Rye. Doubleday, which is overseen by the one-woman publisher Em Kay, has also fully implemented Sirex, the acquisitions software that has taken the United States by storm. Sirex has been programmed to acquire books that are "full of imagistic thrill and visual realism".

This scenario was not difficult to conjure up: a world where entire publishing teams are made up of one human being because the rest of the editorial processes have been assigned to machines in the name of "operational efficiency". In this scenario, the machine is taught to look for books similar to what has worked in the mainstream past. We all know that this has, for the most part, been white; the machine is therefore just propagating the biases already in existence, and amplifying them too. I picked the term "visual realism" because, as I argued in my undergraduate dissertation, these are the types of words that literary bodies such as the Swedish Academy prize, and they are Eurocentric in their very nature. Unless data sets are expanded significantly to include people of all ethnicities, sexualities, and more, machine learning might be the weapon that keeps the publishing landscape as "comfortable" and closed off as it is now.

I will go as far as to say that what this industry needs first is the "gut instincts" of human beings of colour, not machines.

Metadata and the Machine

Artificial intelligence has much to offer any industry, and the publishing industry is no exception. One potential application of these systems is in creating comprehensive metadata. Metadata is data about other data; within the publishing industry, this means gathering information about books such as author, genre, and topics. ONIX is the industry standard for metadata and promises a plethora of benefits. The ONIX website cites the benefits of the system as:

Providing a consistent way for publishers, retailers and their supply chain partners to communicate rich information about their products. It is expressly designed to be used globally, and is not limited to any one language or the characteristics of a specific national book trade… As a communication format, [ONIX] makes it possible to deliver rich product information into the supply chain in a standard form, to wholesalers and distributors, to larger retailers, to data aggregators, and to affiliate companies.

Metadata is clearly extremely useful, but good metadata is also very time-consuming to create. To create metadata, a person or a team of people must consume the data in order to gather information about it. This is a very slow process that takes hours and hours of work. When my boyfriend, Peter, first started working at a small documentary company, he was creating metadata from footage for a polar bear documentary series the company was working on. For months Peter would watch footage of polar bears and record information like whether the polar bears were near snow or not. No matter how good a person is at their job, they cannot create the metadata faster than they can consume the source object.

A branch of artificial intelligence called Natural Language Processing has multiple capabilities that will allow machines to create comprehensive metadata with much greater speed than humans are capable of. Natural Language Processing is the development of computers that can understand and process large amounts of natural language, where natural languages are how humans naturally communicate rather than computer coding languages. For the purposes of the publishing industry, these natural languages would be published books and manuscripts. Instead of having an employee read a book or a manuscript and then record metadata, the machine can rapidly process the text and analyze or extract information. Andreessen Horowitz explains the different types of analysis and extraction that Natural Language Processing is capable of. The forms of analysis that are most applicable are sentiment analysis (understanding the affect of the text); entity extraction (pulling key words from the text and sorting them, which would be useful for building metadata that captures the concepts discussed in a text that lacks a detailed index); information extraction (building on the entity extraction and providing context); summarization (creating summaries of the text); and finally document analysis (the classification and categorization of documents and their content, which is the overarching method of creating metadata).

It won't be long before Natural Language Processing machines replace interns and entry-level positions in creating metadata. The machines can accomplish in seconds work that takes employees months, and cost a whole lot less to do so. Publishing companies, as a result, will have better, more robust metadata that is cheaper and faster to produce.
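
To make this concrete, here is a minimal sketch of what machine-generated descriptive metadata might look like, assuming spaCy's small English model; the manuscript text and the metadata fields are hypothetical, and producing real ONIX records would require a proper mapping layer on top.

```python
# Hypothetical sketch: pulling rough descriptive metadata out of a manuscript.
# Setup: pip install spacy && python -m spacy download en_core_web_sm
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")

manuscript = (
    "Anna left Vancouver for the Yukon in 1898, chasing rumours of gold. "
    "The long winters tested her, but the river valley slowly became home."
)
doc = nlp(manuscript)

metadata = {
    # entity extraction: people, places, and dates mentioned in the text
    "entities": sorted({(ent.text, ent.label_) for ent in doc.ents}),
    # crude subject keywords: the most frequent nouns, a stand-in for topic extraction
    "keywords": [word for word, _ in Counter(
        t.lemma_.lower() for t in doc if t.pos_ == "NOUN" and not t.is_stop
    ).most_common(5)],
}
print(metadata)
```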

Predictive Analytics and publishing

Artificial intelligence and its various applications like machine learning, Natural Language Processing, deep learning, etc. have made inroads into almost every area of human interest. And publishing is no exception. One doesn't have to imagine the ways in which AI will be integrated into publishing, because it's already happening. Predictive analytics, evolution in the search and discovery of books, targeted advertising, and dynamic pricing are just some of the ways in which AI is impacting the publishing industry. For the purposes of this essay, I will focus on predictive analytics, a branch of machine learning.

People have been predicting the success of books practically ever since trade publishing began. Years of experience in the industry and observing the performance of books over seasons have no doubt given some insiders the ability to gauge the market value and potential of a book. Lately, however, machines have been drafted for these very purposes. In 2016, Jodie Archer and Matthew L. Jockers wrote a book called The Bestseller Code: Anatomy of the Blockbuster Novel, in which they posited that an algorithm they had created could examine the literary elements in a book and assess its bestseller potential. They claimed that their algorithm could, with 97% certainty, predict a New York Times fiction bestseller. But publishing industry guru Mike Shatzkin is doubtful of technologies like these. He feels that bestsellers are made up of several complex moving parts, and that analyzing just the content (or text) of a book to predict its success is reductive and plain wrong, as it does not consider the "consumer analysis, branding, or the marketing effort" required to make a book successful. As a comparison, Shatzkin talks about how Google uses search algorithms to predict the success or failure of movies. While doing so, Google's algorithm takes into account various parameters like "search volume for the movie, YouTube views, genre, seasonality, franchise status, star power, competition", etc. Nowhere does the algorithm read the scripts of movies to predict their success, because that alone would not be enough to guarantee the success of a movie. Shatzkin is not entirely dismissive of the application of algorithms in publishing. He has helped develop OptiQly, an application that uses algorithms to generate scores that can help publishers optimize a book for discovery and sale and guide them regarding "the extent to which author-focused marketing [can] contribute to discovery and sale."
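
To make the contrast concrete, here is a rough sketch of the kind of feature-based model Shatzkin describes for movies, where success is predicted from market signals rather than from the text itself; the feature names and every number below are invented for illustration.

```python
# Hypothetical sketch: predicting hit/miss from market signals (not from the text),
# in the spirit of the Google movie example described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: search volume (thousands), trailer views (millions),
# franchise status (0/1), star power score (0-10); all numbers are made up
X = np.array([
    [120, 3.5, 1, 8],
    [ 15, 0.2, 0, 3],
    [ 80, 2.1, 0, 7],
    [  5, 0.1, 0, 2],
    [200, 6.0, 1, 9],
    [ 30, 0.4, 0, 4],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = hit, 0 = miss (toy labels)

model = LogisticRegression().fit(X, y)
new_title = np.array([[60, 1.0, 0, 6]])
print("Estimated hit probability:", round(model.predict_proba(new_title)[0, 1], 2))
```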

Neil Balthaser, founder of the SaaS platform Intellogo, which analyzes content for its clients, believes machine learning can predict a bestseller. According to him, machine learning can "identify similar tones, moods, topics and writing styles to books that are topping bestseller lists … and, in this way, better understand the reading audiences' desires." It is possible that if a machine were fed data and programmed to analyze bestsellers indicative of audience interests, it could analyze a book and recommend certain areas where the publisher could "focus its marketing efforts". In this way, "machine learning can remove the gut feeling or personal bias inherent in business decision making." Balthaser sees machine learning as an indispensable tool for publishers because he thinks it can provide publishers with "real-time information about their readers, figure out what is working in the marketplace, and, perhaps, make the bestseller lists more of an accurate depiction of what readers want to read, not simply what is available." Market research is extremely important for publishers, and very soon, instead of relying solely on focus groups, ARCs, and opinion polls, machine learning will be able to study much larger and more complex data, including audience research, social media, market trends, and popular searches, to forecast the reaction a certain book will receive.
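
A very simplified sketch of the content side of that idea follows: comparing a manuscript to recent bestsellers by textual similarity. The snippets are placeholders, and a system like Intellogo models tone, mood, and style far more richly than a bag-of-words score.

```python
# Hypothetical sketch: scoring a manuscript by its similarity to bestselling texts,
# a crude stand-in for "identify similar tones, moods, topics and writing styles".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

bestsellers = [
    "A missing wife, an unreliable husband, and a town full of secrets.",
    "Two sisters rebuild their lives after a storm destroys the family farm.",
]
manuscript = "A detective returns home to a town of secrets after his wife disappears."

vec = TfidfVectorizer(stop_words="english")
matrix = vec.fit_transform(bestsellers + [manuscript])

# the last row is the manuscript; compare it against each bestseller
scores = cosine_similarity(matrix[-1], matrix[:-1])[0]
for text, score in zip(bestsellers, scores):
    print(f"{score:.2f}  {text}")
```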

The advantages of AI and machine learning in publishing are enormous, but the people making and offering this technology are not from the publishing industry. It's companies like Apple, Google, Facebook, and Amazon that are investing heavily in machine learning and exploiting its capabilities. Bar a startup like OptiQly, which was created by people in the industry, publishers have pretty much been spectators rather than participants in AI. And that's something to think about. Cliff Guren, founder of the publishing consultancy Syntopical, feels that machine learning can quickly evolve from a forecasting tool into an authoring tool, in that it can formulate "machine-authored responses that synthesize information from a wide variety of sources." Publishers then need to decide the extent to which they're okay with third-party systems making crucial decisions for them. One way to mitigate this issue, according to Guren, would be for publishers to make their own investment in AI so that they use it to develop, not dictate, the industry.

Artificial Intelligence is a complex field and there is bound to be some friction when this 21st century technology meets a 500-year-old industry like publishing. That AI is here to stay is clear; publishers just need to be cognizant of its capabilities and neither entirely dismiss it nor blindly embrace it.

Eh, I think it’s pretty cool

Theories of the existence of artificial beings with human-level or greater intelligence have been part of some cultural canons for millennia; Greek myths, for example, spoke of artificial beings coming to life (though usually by means of divine intervention rather than mathematical processes). Since then, AI has become a very scientific idea that tries to use our understanding of how the brain works to reproduce it in mechanical terms. And the idea of AI being so human-like has spawned many, many interpretations (novels, movies, etc.) of a future where artificial beings, of organic or inorganic material, roam the world with us, think for themselves, and become the biggest threat to the human way of life.

While the fear of a singularity or an AI takeover has been an undercurrent in parts of society for a long time now, today that fear seems to be taking a more tangible form as actual AIs are constantly being developed, upgraded, released, and shown to be able to handle more and more human-like tasks (like that AI that can paint).

The fact that AI is proving itself to be, maybe not everything, but some things it’s chalked up to be makes some of those fears justified. It’s the industrial revolution all over again – what’s going to happen to people’s jobs? Especially now that it’s not just production and transportation and other practical jobs being threatened, but creative jobs as well!

But jobs are one thing; AI also has the potential to take over positions of power (I mean, if people will vote animals in as mayor, it's not far-fetched to see an AI being voted in as mayor some day). Will AI ever attain true consciousness and self-reflection and fulfill the roles of countless sci-fi stories? Who knows! We don't know enough about how consciousness is generated in our own brains to give a definitive answer, but what we do know is that machines modeled after processes in the human brain can replicate human thinking to some extent, and this will most certainly allow them to take on various roles in our society, and in our publishing industry.

Editing, publishing, authoring, distributing – all are theoretically possible and in some cases already happening. Will it be a smooth transition? Probably not. People are hesitant to trust the judgement of AI. But as time goes on and AI further grows into the industry, I think AI definitely has a shot at becoming the norm for such tasks as distribution and editing and design, in the self-publishing sector.

Beyond considering the ethical implications of job loss, there is the question of how society will receive AI-produced books. Traditional publishing is "supposed to" act as a filter, to provide us with the books worth reading. But if traditional publishing integrated with AI, I think it would lose a lot of that cultural power. There is a cognitive dissonance induced by a mind without a human body behind it (because we are biologically, evolutionarily programmed to seek connection with other humans), and learning that a particular artwork was produced by algorithms rather than organic human thought can give it a feeling of emptiness. Because of that human bias, I do not think AI's ability to fulfill those roles will align with the actual employment of those AIs, at least in the near future. (Will AI vs. human editing/designing/authoring become an important factor in metadata as well?? I think so!)

We’ve got a Data Kink

For publishers, AI is this huuuuuuge concept that I think is hard for us to wrap our heads around (for anyone to wrap their head around, honestly, but perhaps especially us creative-driven humans), and the ways that it will affect our industry as we move ever further into digital publishing are many and complex. I think one of the most obvious ways that we can anticipate artificial intelligence in publishing is through data mining, which Holly Lynn Payne does a good job of introducing in her efforts to get us to buy into her company, Booxby. While Payne's motivation for getting people into the idea of a data-driven, artificially intelligent book selection app was mostly self-promotion, the technology she's using has great potential to become an industry standard. The context is already there, to an extent. Users are at least already used to algorithmic recommendations, even if they're not always trusted. And, more importantly, publishers have been looking to data for the answers to their problems increasingly over these last two decades, as evidenced by the creation of BookNet Canada.

Data-driven marketing and acquisition is already a reality for book publishers, of course. Though it is a relatively recent development in Canadian publishing, BookNet gives its members access to sales data from all over the country, and most use it to their advantage when it comes to curating their year's list. What AI would do is transform sales data into full consumer analysis. An AI system that isn't only tracking what books are bought but by whom, and why, and what parts of which books are working, could drastically impact how publishers curate books. It wouldn't, then, be just about how we get books into the right hands, but about whether certain plot lines, certain character names, certain prosaic rhythms appeal to people. I see this as the next step for publishers, though I can't say whether it will all be through companies like Booxby, or whether major publishers will create their own, or even whether BookNet will begin to incorporate and release this type of data as part of membership.
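
As a toy illustration of that shift from "what sold" to "who is buying and what is working", here is a sketch over invented sales rows; real consumer analysis would of course draw on far larger and messier data than this.

```python
# Hypothetical sketch: moving from raw sales counts to simple segment signals.
import pandas as pd

sales = pd.DataFrame([
    {"title": "River North", "genre": "thriller", "reader_age": "18-29", "finished": True},
    {"title": "River North", "genre": "thriller", "reader_age": "30-44", "finished": False},
    {"title": "Cedar House", "genre": "literary", "reader_age": "30-44", "finished": True},
    {"title": "Cedar House", "genre": "literary", "reader_age": "45+",   "finished": True},
])

# plain sales data answers "what sold" ...
print(sales.groupby("title").size().rename("copies_sold"), "\n")
# ... segment analysis starts to answer "to whom, and did it hold their attention"
print(sales.groupby(["genre", "reader_age"])["finished"].mean().rename("completion_rate"))
```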

Of course, this has ethical implications for everyone involved: the reader, the author, and the publisher. The reader, though they'll give consent by default when they purchase a book, will have all their interests turned into aggregated data and sold back to them. The author may eventually have to change what they write to fit the expectations of publishers. And as AI tech starts to integrate itself into publishing, publishers will have less and less of a reason to exist. What I've talked about today will definitely affect marketing and editorial, but to some extent everything about publishing has the potential to be replaced or modified by AI. When we talk about digital publishing platforms, we often talk about how permanent positions in publishing are swiftly becoming freelance positions, but with the introduction of AI we risk losing those positions altogether, as companies look to automate certain aspects of the publishing process. I'll refer again to an article called "The Ethic of Expediency", written many years ago: the ethics of expediency are tricky at best, but it's pertinent for us to keep in mind the sacrifice we make when pushing for convenience.

I’ll also just leave here something daunting I found during my sleuthing: apparently, six years ago, someone already mastered AI book creation-on-demand.

How AI Can Positively Affect Accessibility of Digital Content

Technology has never been my strong suit, so some of the readings for this week had me confused and constantly looking up new terms. Artificial intelligence (AI) is an interesting topic that has become increasingly popular and studied as our technology develops and improves. I’m interested in how it can work in the publishing industry, and I think the one place it could drastically improve our current practices is accessibility.

According to multiple talks I've attended on accessibility in digital publishing (one by Iva Cheung and one by Laura Brady), one of the accessibility tools we most often forget to include is alt text for images. A text-to-speech reader will read the alt text when it comes to an image, so if the alt text is simply the name of the image file (which is sometimes the automatic setting), it doesn't offer any insight into what that image depicts for the person attempting to access the content. In the Forbes article "What is the Difference Between Artificial Intelligence and Machine Learning?" there is a section called "Neural Networks" that states: "A Neural Network is a computer system designed to work by classifying information in the same way a human brain does. It can be taught to recognize, for example, images, and classify them according to elements they contain." This specific usage could drastically improve not only the availability of alt text, but also the quality of that alt text, something else that is lacking, according to both Cheung and Brady.
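
Here is a rough sketch of how that could slot into an ebook production workflow, using a pretrained torchvision classifier as a crude stand-in for a real captioning model; the file name is hypothetical, and any generated alt text would still need human review.

```python
# Hypothetical sketch: proposing draft alt text for an image that has none,
# using a pretrained ImageNet classifier in place of a true captioning model.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

image = Image.open("figure_03.jpg").convert("RGB")   # hypothetical ebook image
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top = int(probs.argmax())
label = weights.meta["categories"][top]
confidence = probs[top].item()
print(f'Draft alt text: "Image of {label}" (confidence {confidence:.0%}); needs human review')
```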

Keeping on the topic of text-to-speech options, Lessley Anderson's article "Machine Learning: how Siri found its voice" goes into detail about the pros and cons of the developing and improving technology behind text-to-speech functions. One important point I'd like to pull out of this article in terms of accessibility is the software's ability to choose the correct pronunciation of a word: "When you say the word 'wind,' for instance, do you pronounce it the way you would if say, 'the wind is blowing' or 'wind' as in 'wind the thread around the spool'? An adult human will make the correct determination automatically based on context. A computer must be taught about context." This is important when you're thinking in terms of accessibility, because people might be reading an article like this through text-to-speech software and might not have the full context themselves to understand which 'wind' was meant. If the software happens to use the wrong pronunciation, the user will have to pause and parse out the sentence word by word to figure out what is wrong with it and why it doesn't make sense, which is not ideal for accessibility. However, if the software is able to read that context and figure out which 'wind' is meant to be there, it makes accessing that piece of text a lot easier for the end user.
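
One plausible way software could "read that context" is part-of-speech tagging: 'wind' as a noun gets the short-i pronunciation, and as a verb the long-i one. Below is a minimal sketch with spaCy; the pronunciation mapping and sentences are my own, the small model usually (though not always) tags these correctly, and real text-to-speech front ends are considerably more sophisticated.

```python
# Hypothetical sketch: choosing a pronunciation for the homograph "wind"
# from its part of speech, as a stand-in for a TTS front end's context model.
import spacy

nlp = spacy.load("en_core_web_sm")   # pip install spacy; download the model first

PRONUNCIATIONS = {
    "NOUN": "short i, as in 'the wind is blowing'",
    "VERB": "long i, as in 'wind the thread around the spool'",
}

for sentence in ["The wind is blowing hard tonight.",
                 "Please wind the thread around the spool."]:
    doc = nlp(sentence)
    token = next(t for t in doc if t.text.lower() == "wind")
    choice = PRONUNCIATIONS.get(token.pos_, "unknown")
    print(f"{sentence!r}: tagged {token.pos_}, so pronounce with a {choice}")
```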

Some forms of AI can be scary—think robots taking over the earth. But other forms of AI and machine learning can be a valuable resource to improve accessibility of digital content, which is something we should all be striving for.

AI for Audience

Imagine and explain one way in which AI (Machine Learning, Natural Language Processing, or another application of AI) will be integrated into publishing. You can go as near or far into the future as you like. You can also explore the ethical implications of this technology becoming a publishing norm.

AI comes with a lot of advantages, and although some of us are scared of it taking over humans' jobs, we simply can't deny its existence. It is becoming more and more mainstream. As publishers, we can't simply turn a blind eye to this innovation, especially when it could solve one of publishers' biggest problems: audience.

We all know that there is plenty of good content out there. However, is that content reaching its potential audience? You can create the best content in the world, and it will be useless if your ideal readers don't know about it. If you do not have a clear understanding of who your audience is, you are less likely to produce, or get your hands on, the content that they want.

AI could provide near-precise audience analysis and engagement. It is already known to and used by many marketers at big companies, and publishers could definitely benefit from it as well. Say you want to understand your audience and reach them better: you analyse what content they visit on your website, what topics they talk about on their social media, and through which platforms you can best reach them. You also build a persona to better understand your ideal readers. You aggregate data from Google Analytics, Google Trends, and any other hashtags and comparables you can find. It's a lot of work, and we might just miss something along the way. This is where AI comes in. AI could provide a tool that gives you a detailed picture of your potential readers and analyses your target market. One of the tools that already exists is People Pattern. Although still not perfect, it runs on the same AI principles: aggregate data drawn from big data (such as real people having real conversations), normalise it, then analyse it. It then develops in-depth audience intelligence; not just age and gender, but things like digital consumption, brand sentiment, even life stage.
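
To make the "aggregate, normalise, analyse" loop concrete, here is a toy sketch of audience segmentation; the engagement numbers are invented, and a service like People Pattern works on far richer and messier data than three columns.

```python
# Hypothetical sketch: clustering aggregated audience data into rough reader personas.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# columns: site visits/month, social mentions/month, newsletter opens/month
audience = np.array([
    [40, 12, 8], [35, 15, 9], [5, 1, 0], [4, 0, 1], [60, 3, 20], [55, 2, 18],
])

# normalise, then analyse: three clusters as candidate personas
scaled = StandardScaler().fit_transform(audience)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

for persona in range(3):
    members = audience[labels == persona]
    print(f"Persona {persona}: average profile {members.mean(axis=0).round(1)}")
```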

Another thing AI can do is maximise audience engagement to provide better customer service. The answer here is chatbots. FastBot, for example, is a chatbot that can engage in 'real' conversation based on specific keywords. Such a chatbot could be used by readers to find out more about characters in a book and their backstories. Readers can also take knowledge quizzes to find out how well they know the characters. Using NLP and guided elements, a chatbot can also answer readers' queries to the authors and chat with multiple readers without adding to the author's time or commitment. A chatbot can also integrate video, audio, images, emoji, and GIFs to enhance readers' experience. However, the implementation is still lacking because it does not recognise variants of the keywords the way Siri or Alexa do, but it is a start. Someone can, and will, improve it, and it could be life-changing for both publishers and readers.
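
A bare-bones sketch of that keyword-matching approach is below; it also shows the limitation noted above, since an unrecognised variant simply falls through to a default reply. The book, characters, and replies are invented, and this is not how FastBot itself is implemented.

```python
# Hypothetical sketch: a keyword-triggered reader chatbot for an invented title.
# Exact keyword matching illustrates the limitation discussed above: a variant
# like "antagonist" (or a typo) falls through to the default reply.
REPLIES = {
    "heroine": "Mara grew up on the coast and trained as a cartographer.",
    "villain": "Captain Voss first appears in chapter three, hunting old maps.",
    "quiz": "Quiz time! In which city does Mara find the torn map? (a) Lisbon (b) Porto",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in REPLIES.items():
        if keyword in text:
            return answer
    return "I'm not sure yet. Try asking about the heroine, the villain, or a quiz."

print(reply("Tell me about the heroine's backstory"))
print(reply("Who is the antagonist?"))   # unrecognised variant: default reply
```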

Finally, despite all the controversies surrounding AI, it is very beneficial for publishers. Not only could it help publishers better understand their audience, it could also enhance readers' experience and engagement. The implications of this technology will certainly shift publishing norms, but for the better. What do you think about it?