Just Ask them….

We live immersed in a world of tracking, measurement and analytics. Whether you have a Facebook, Google or similar account, or even if you play hide and seek with the zillions of data-collecting bots lurking in cyberspace, chances are you are being tracked for at least a good part of your day.

Like it or not, we are being tracked. The heinous world depicted by Orwell in 1984 is becoming a reality, and just as in Huxley's Brave New World, the people around us embrace the surveillance and think it's for the best, be it for security, for getting a deal, or for giving businesses the information they need to deliver "exactly" what they need.

Publishing books is a different matter, though. The historic evolution of the field has led to an interesting mix: a romantic attachment to the touch, smell and feel of the pages and a yearning for old printing techniques, alongside the excitement of high-tech printing and the virtually eternal lifespan of e-books.

The publishing industry also has trouble collecting and processing information about readers' tastes and reasons to purchase. A novel, for example, must first be discovered, and must then convince the person who came across it of the benefits of reading it compared to the thousands of titles around, some of which enjoy huge media support and placement.

For centuries, publishers have relied on their instincts and experience to predict the most successful route for a book to reach its audience. But what is this "instinct and experience" (also called "gut") if not a very complex collection of data turned into information by years of practice, stored in the gestalt consciousness of the profession as well as in each individual's life story? How can this "gut" be fuelled with the type of data that digital gathering systems generate?

When publishing a book, my major interest is: who and where is its public, and how do I deliver the book to them? I mean not only how to make them aware of its existence, but also the best way for them to consume it. Is there a community with similar interests, a social club, a Facebook page or forum? Do they read printed materials, digital editions, audiobooks, something else? I need to establish contact with them, or guide the writer to do so. This is where I find data useful: knowing what readers like, what they think, and how they read or consume knowledge and entertainment lets me create real expectations and prepare for a big show.

It is generally agreed that word of mouth is the most successful way to promote a book, because it relies on a social web with heavily established bonds and protocols. In fact, it could be assumed that most other marketing channels aim to position a book in the word-of-mouth channel at some point.

So talking to the readers is key. Publishing is about establishing relations: connecting writers and audiences, editors and publics. You cannot lurk in the shadows with a dataset, measuring people from a distance and expecting to surprise them with a product their Gaussian distribution says they would like, but of which they have never heard. As in all great businesses, direct communication is key, and a simple prompt, sample or question can work wonders compared to the most detailed dataset, because in essence we are getting exactly the data we want to know.

How to find the right audience… well, that is another matter.

Reader Data By Readers


To capture the best data about readers' impressions of the books they read, I think it is important to get as much information as possible from the readers themselves. Assumptions should not be made about what attracts each individual to a book, as the differences between readers can be vast. I might be attracted to a book by its cover, whereas another reader would want to read the same book because of the author and couldn't care less about the cover image.

I would develop a survey-like app that consists of questions and sliding scales. The data could then be taken from the app and analyzed. The survey would include questions like "Would you recommend the book to a friend?", "How much did the cover design attract your attention or make you curious about the book?", "How dynamic did you find the main character?", etc. It would be important to try to find out why the reader was attracted to the book in the first place, what kept them reading, and how satisfied they were with the ending. I would also try to find out why they stopped reading if they did not make it to the end. Bonus questions could cover the price point of the book and whether they received the book as a gift, borrowed it from a library, or bought it from a bookstore (and if so, a new or used bookstore).

The sliding scale would be put in place so that, unless the user wanted to (by clicking an "add more" type of button), they would not have to type out answers, which could be a lengthy process and deter some people from reviewing at all. A tappable sliding scale would save the user much more time and encourage them to review the book quickly after reading it. Users could also be encouraged to review books through a point system with sponsor or partner companies. For example, each review could be worth 5 points, and with 1,000 points the reader could receive a $10 gift card to Indigo.
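As a minimal sketch, the rewards arithmetic described above (5 points per review, 1,000 points per $10 gift card) could be implemented as follows; the function and field names are invented for illustration, not part of any real app:

```python
# Hypothetical review-rewards ledger: each completed review earns 5
# points, and every 1,000 points can be redeemed for a $10 gift card.

POINTS_PER_REVIEW = 5
POINTS_PER_GIFT_CARD = 1000
GIFT_CARD_VALUE = 10  # dollars

def reward_balance(reviews_completed: int) -> dict:
    """Return earned points, redeemable gift cards, and leftover points."""
    points = reviews_completed * POINTS_PER_REVIEW
    cards, remainder = divmod(points, POINTS_PER_GIFT_CARD)
    return {
        "points": points,
        "gift_cards": cards,
        "gift_card_value": cards * GIFT_CARD_VALUE,
        "points_toward_next": remainder,
    }

# A reader with 230 reviews has 1,150 points: one $10 card, 150 left over.
print(reward_balance(230))
```

Showing the "points toward the next card" remainder is a small design choice that gives the reader a visible reason to keep reviewing.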

In addition, after each review the app could generate an "overall rating" score (e.g. "8.5 out of 10") and then suggest 3-5 books the reader may be interested in, based on their feedback, likes, and dislikes.
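One hedged way to sketch that rating-and-suggestion step: average the 0-10 slider answers into the overall score, then rank a catalogue by closeness to the reader's feedback profile. The tiny catalogue, the attribute names, and the absolute-difference distance are all illustrative stand-ins for a real recommender:

```python
def overall_rating(slider_answers):
    """Average the 0-10 slider answers into one score."""
    return round(sum(slider_answers) / len(slider_answers), 1)

def suggest_books(reader_profile, catalogue, n=3):
    """Rank catalogue titles by closeness to the reader's likes/dislikes.

    Profiles are dicts of attribute -> 0-10 weight; distance is the sum
    of absolute differences (a deliberately simple similarity measure).
    """
    def distance(book_profile):
        return sum(abs(reader_profile[k] - book_profile.get(k, 5))
                   for k in reader_profile)
    ranked = sorted(catalogue.items(), key=lambda kv: distance(kv[1]))
    return [title for title, _ in ranked[:n]]

catalogue = {
    "Sea Story": {"plot": 9, "cover": 4},
    "Cover Art Annual": {"plot": 2, "cover": 10},
    "Quiet Novel": {"plot": 6, "cover": 6},
}
print(overall_rating([9, 8, 8, 9]))                       # 8.5
print(suggest_books({"plot": 9, "cover": 5}, catalogue, n=2))
```

A production version would replace the hand-built profiles with attributes mined from the survey answers themselves.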

By collecting this type of data, publishers (and specifically marketers) could determine better ways to target and market to their audience, as well as determine what elements of a book work for certain readers and do not work for others. The information gathered could help publishers decide which books to take risks on in the future, if similar books have been well received.

“Books smaller than natural books, books omnipotent, illustrated, and magical”

The place to capture our readers' interests is in their social media accounts. The obvious social media service here is Goodreads, but I think there is much more to be discovered by analyzing an audience's likes, dislikes, and preferences as they portray them on various other social media venues as well. Sure, people gush or complain on these sites about the book they just read, and that is absolutely valuable data, but I think we can take it further. In order to put "The Perfect Book™" into our readers' hands, we need to look not only to their reading interests, but to their lifestyle interests as well.

In contemplating the content of my blog post, I did some quick research on companies that already exist to help us maximize an audience's experience with our products. I stumbled upon Crimson Hexagon, a website that provides its members with "AI-Powered Consumer Insights," including audience, brand, campaign, and trend analyses. What apparently sets Crimson Hexagon apart from similar services is its adept analysis of "conversations" on Facebook, Instagram, Twitter, Tumblr, blogs, reviews, forums, news, and more. In fact, their archive is close to surpassing a trillion social media posts; they have an interesting page giving some insight into what is possible with data from a trillion posts, which answers a bunch of questions I didn't even know I had. My main takeaway from learning about this website, however, is the story behind its name. They say:

In Jorge Luis Borges’ short story The Library of Babel, an infinite expanse of hexagonal rooms filled with books contained every possible arrangement of letters. For every important, beautiful, or useful book in this library there existed endless volumes of gibberish.

The only way to navigate this vast sea of meaningless information was to locate the Crimson Hexagon, the one room that contained a log of every other book in the library—a guide to extracting meaning from all the unstructured information.

I think Crimson Hexagon found a beautiful way of explaining their approach to data analysis, and I think it is incredibly relevant to how we as publishers should look at it too. Going deeper into The Library of Babel reference (you bet I found a PDF of it to read), we can compare the infinite books in the Library to our audience's minds, interests, and data, and if we reach the Crimson Hexagon, we will be able to sell them "The Perfect Book™": the one even they don't know they need. In order to find the Crimson Hexagon, we have to sift through indefinite numbers of rooms with indefinite numbers of books. Perhaps an AI-driven service such as Crimson Hexagon can help with that. We all talk about our interests on the Internet, and this website decided to capture that data and help its members turn it into something useful for their brands. It is not outside the realm of possibility that we can harness this data as well and use it to create an optimized reading experience.

Our readers are infinitely complex, like The Library of Babel, but we are getting closer to being able to give them what they need from their books. We, like the librarians of Borges’ short story, are “spurred on by the holy zeal to reach—someday, through unrelenting effort—the books of the Crimson Hexagon.”

Works Cited:

Borges, Jorge Luis. "The Library of Babel." Collected Fictions. Trans. Andrew Hurley. New York: Penguin, 1998. https://libraryofbabel.info/Borges/libraryofbabel.pdf

Crimson Hexagon. 2018. https://www.crimsonhexagon.com/

Data – Giving Black readers what they want?

In 2014, Jason Kint boldly declared that data tracking was not at all beneficial for the publishing industry because it was damaging the trust relationships among consumers, publishers and marketers. Four years later, it is apparent that consumers are becoming more and more aware that their information is being used or tapped into, sometimes without their consent. The number of "FBI is watching me" memes and posts among my friends on social media alone has increased significantly, and the humour in these posts is making way for a grim reality. It is of utmost importance for me as a publisher to recognise that data tracking may help me make beneficial business decisions, but that the "trust relationship" between myself and my readers matters more. Therefore, if I ever need to mine data in the future, I will try to make sure that it is done with the full knowledge and consent of the readers I am trying to reach, and that the tracking is for their eventual benefit.

This is especially because my goal is to publish books for Black readers, and I would like to enrich their reading experience. I have been toying with the idea of an algorithm that helps me decide which format a book would work best in before it is published widely, especially from an engagement point of view, i.e. which format draws readers in to fully enjoy the book and get out of it what they were expecting when they chose to read it. Whether they finish the book or not can be seen as an obvious indicator of "engagement," but I want an algorithm that is even more detailed than that. For example, one that tells me that when reading in eBook form, the reader did not refer the book to anyone else afterwards, but when it was read as an audiobook, they referred it to five of their friends and engaged in wider discussions about the themes in the book.

It is important to point out that "Blackness" is multilayered and that Black readers are not a homogeneous group. The data set would have to be geographically diverse. For example, Black people on the Continent (Africa) have different tastes from Black people in the diaspora. As much as art has been a unifying factor among Black communities worldwide, there are still nuances among the different groups. Black British people, Black Canadians and Continental Black people will agree that Toni Morrison's books are for all of us, or that Chimamanda Ngozi Adichie's books speak to us as a wider community, but the question still remains: what formats would they prefer to read these books in? This will differ based on geographical location.

On the continent, our cultures have for the most part been oral, with stories passed down from generation to generation through oral storytelling. And as much as we enjoy reading print books, it is my personal belief that audiobooks would serve us better. I would need an algorithm to corroborate this belief, because audiobook production is expensive and a rather large investment. Data that showed whether Black readers engaged with entire audio chapters and finished entire books would be helpful in determining which books I would publish in this format. Data on the kinds of voices Black people responded to in audiobooks would also be beneficial. There are different accents and intonations widely associated with Black people and global Black culture, and knowing what kind of voice actor readers respond better to is something data would help me with.
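The format-engagement comparison described above could start as simply as rolling up per-read records by format, so that referral and completion behaviour for audiobooks versus eBooks becomes visible. The records and field names below are invented for illustration; a real system would draw them from app or retailer telemetry:

```python
from collections import defaultdict

def engagement_by_format(records):
    """Aggregate completion rate and average referrals per format."""
    totals = defaultdict(lambda: {"reads": 0, "finished": 0, "referrals": 0})
    for r in records:
        t = totals[r["format"]]
        t["reads"] += 1
        t["finished"] += int(r["finished"])
        t["referrals"] += r["referrals"]
    return {
        fmt: {
            "completion_rate": t["finished"] / t["reads"],
            "avg_referrals": t["referrals"] / t["reads"],
        }
        for fmt, t in totals.items()
    }

records = [
    {"format": "ebook", "finished": True, "referrals": 0},
    {"format": "ebook", "finished": False, "referrals": 0},
    {"format": "audiobook", "finished": True, "referrals": 5},
    {"format": "audiobook", "finished": True, "referrals": 3},
]
print(engagement_by_format(records))
```

Even this toy rollup surfaces the pattern described in the text: the audiobook rows show both full completion and active referral, while the eBook rows show neither.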

I want Black literature to be valued for what it is, and I will use data tracking only to see this through. The format of a book is of utmost importance in determining reader engagement, and I would ultimately use data to bring about a cohesive relationship between the two.

Discoverability problem: the Bookish case

To answer the question of what data I would want to collect about readers' impressions of the books I publish in future, I would say it would have to deal with how they discover and buy their books. I think book discoverability is still a huge problem, and I would want to know where the majority of my readers purchase their books so that I can improve my marketing efforts on the other avenues, while still prioritizing sales via the main point of purchase. The failure, or rather the ineffectiveness, of a site like Bookish demonstrates that discoverability is still a blind spot for publishers. Bookish was launched in 2013 by Penguin (before it merged with Random House), Simon & Schuster and Hachette as a site that would expand discoverability, connect with readers and generate prepublication buzz for books. The site's mission, as stated on its 'About' page, is to 'Help readers discover their next favorite book'. It was meant to foster a "direct digital customer relationship" and connect readers with books and authors through proprietary content and exclusive deals. Had Bookish served its purpose, we would probably be bemoaning the decline of book sales a little less, and not mulling over why discoverability is still a thorn in the publisher's side.

Instead of building a community of book readers, Bookish is a marketing tool for publishers. The list of publishers participating in Bookish might have increased, but it's still a one-way street, with content and information mainly coming from the operators of the site and not from the people using it. Book recommendation currently seems to be its main raison d'être, with listicles upon listicles curated by Bookish for their readers. There is no option for a reader to recommend books or make their own listicle. What's worse, there is no "social" aspect to the site at all: nowhere for the reader to make an account and build a virtual shelf à la Goodreads.
If a reader wants to avail of any social features, they need to visit Bookish's sister site, Bookish First. The conceit of Bookish First is that readers get to read a book before it is published. For this, they need to sign up, participate in contests and stand a chance to win a book. But to stand a better chance of winning, the reader has to promote the offered book on their social media. I'm not sure whether the chance of reading a book before its pub date is incentive enough for a reader to essentially do marketing for Bookish and its publishers. Not all books are met with the fan anticipation we witnessed before the launch of every Harry Potter. Getting the next Harry Potter in hand before its launch could have given you legit bragging rights. But the next book by Kelly Loy Gilbert or K. J. Howe? Um, not so much. We might perhaps witness it again just before the launch of George R. R. Martin's highly anticipated The Winds of Winter, but publishing phenomena like Harry Potter or Game of Thrones are the exception, not the rule. For Bookish to dedicate an entire site to contests, dangling the carrot of free pre-pub-date books while making the readers do some legwork (figuratively speaking) for it, seems like a rather ill-conceived idea. They do not have a large user base: only 45K Facebook users, for instance, as opposed to Goodreads's 1.25 million.

Going into the industry, it is worrisome to me that a project launched by three of the Big 5 as their discoverability platform is not living up to its potential. It perpetuates the idea that publishers live in a closed ecosystem, where communication is one-way, and where they think they know what readers want without actually listening to them. Publishers seem disconnected from what's happening today, where everyone is mining user data to create and curate the exact products and content people want. With a platform like Bookish, publishers had the opportunity for direct, two-way communication with the reader. Which is why I am really surprised that, five years after its launch, that is not the case, especially since the publishers participating in Bookish ostensibly set out to establish a "direct digital customer relationship" with the reader. As an aspiring publisher, I hope I can make a dent in the problems concerning discoverability. I would hope that my impression of what the reader wanted mirrored the reader's actual impressions and expectations.

But do we really give a folk?

When we ask what kind of data we want to collect about readers’ impressions, what we’re really asking is how we would encourage a folksonomy; there’s no other way to garner impressions than autonomous, organic input from readers. Impressions are thoughts and feelings. To ask for impressions would be leading at best and coercive at worst. Sure, there’s lots of other data about readers that can be collected and still be helpful to publishers: print or digital, paperback or hardcover, point of sale, location, et cetera. But in order for a publisher to gather impressions, that publisher would have to create a social media platform for their readers. The only publisher to have semi-successfully achieved this is Amazon with their website Goodreads.

One problem with a future in which publishers collect their own data via social platforms with functional folksonomies is that once there is one really good platform, no one is going to be very receptive to others. In fact, it will feel prohibitive to readers to have to go to the Penguin Random House social platform for some books and the Simon & Schuster social platform for others. To an extent, it would also disrupt the folksonomy of the users. Perhaps the compromise would be for all publishers to have an investment in Goodreads… but that gives a lot of power to Amazon. Ideally, the social network would be completely neutral and devoid of any vested commercial interests.

I'm a little biased against the idea that publishers should be finding new ways to capture data about readers' impressions at all. A lot of the data (about everything but readers' impressions) is already out there: sales data, POS systems, demographics, et cetera. And as for how to get data from a reader folksonomy of your books — that's already out there too, if publishers are willing to dig for it.

Take, for instance, two pretty well known fandom platforms: Tumblr and AO3.

Tumblr has been the go-to place for fangirls and fanboys since 2007. It has a decade-long evolution of tag-building and micro-communities that thrive around even the smallest of fandoms. While this is, to an extent, only relevant for fiction, it's exactly the kind of natural, organic folksonomy that publishers could gauge impressions from.

The Organization for Transformative Works' "Archive of Our Own" has a similar tagging culture to Tumblr, though it is more organized, literature-centric, and robust. AO3 is a goldmine of rich data about reader culture. If publishers want to know what readers are loving about their books, how fans are subverting the book's themes, and how deep the fanbase is, AO3 has that information. Though from the user's perspective the website's tagging system would be considered less a folksonomy and more a metadatabase, from a publisher's perspective it's an organically built pool of readers' taxonomical reactions to a given book or series.

For non-fiction, literary fiction, and other types of books that don’t lend themselves well to fandom culture, there are other ways to gauge reader interaction. For scholarly books, impressions are pretty explicitly explained in citations of others’ works. For literary fiction, you’re more likely to see readers interact on Goodreads…to which we’ve come full circle.

The argument I'm making is that I don't believe there's any more data to collect from readers' impressions than what is already available. Perhaps the current data isn't being mined correctly, but that doesn't mean it's not out there. Given an AI system like Booxby's, a publisher may be able to unravel patterns in readers' behavior, but that is by definition inorganic, and more about determining the next book than reactions to the last one.

The only way I can see the situation being any different is if, in a world where ebooks are the dominant form of literature consumption, books have become completely social network-capable; each book is its own interface for readers to react and interact. Though this tech is undoubtedly possible, and might even be the future, how long it would take to transition readers to accept that as a norm is yet to be seen.

TL;DR: The data is already available if you just take the time to look for it; readers' impressions aren't any use if they aren't organic; and we've got lots of data already that we maybe aren't even using.

The Engineers' A.I.-Driven Future of Publishing

Potentially, AIs can be used to cover, more or less successfully, the whole range of activities leading to the selection, creation and distribution of books and other printed materials: from manuscript draft, to substantive and copy editing, to layout and cover design, to printing (or encoding for e-books), and even distribution of the published works.

One possible future, and a likely one, is the "engineers'" approach to implementing AIs in publishing. Engineers are problem solvers who optimize things, so it's natural that the whole process will be driven, like many other fields in technology, by this vision.

The process would take several specialized AIs to do the task, but they will no doubt accomplish "something." What makes the difference is the approach we take to using them, and I mean WE, because as future professionals, decision makers and leaders of this industry, we must be very wary of how we want this to happen.

I have also included "The Boss's" perspective on these outcomes (find them in red) to show how these technologies appeal to people at the managerial level, how they shape decisions there, and their impact on the workforce.

Scenario I: The engineers' approach

This process would evolve systematically, starting with manuscript selection handled by a machine learning project called "Gutemberg" (named uncreatively after a long struggle with copyright holders… engineers, after all). "Gut" starts learning from the actions of a human editor; then, combining the data gathered from the choices of several editors, it gathers enough information to start making its own choices. These would probably be corrected again by those editors, who would think it's wonderful to have some time for anything else, or just to increase their "productivity" by focusing on "editing" twenty books at a time instead of ten.

What about the boss? The boss is happy to have invested in this promising technology, which may save the company a lot in unnecessary human and material resources. It is a very competitive world, and the ones with the best tools will win the battle (or so the boss thinks).

With the new data set as input, "Gut" would optimize and start making more accurate decisions. "Productivity" would increase to 50 books per editor, then 100, the process being refined with each successive iteration. Finally, the "editor" would only have to set parameters to filter the manuscripts the AI had selected and focus on making "high level" choices.

At this point, the boss is considering reducing the workforce at the editorial level; the savings are huge, and they will allow for investment in other projects. After all, the mandate states the company must give voice to as many people as possible. The dream of "serving the community" seems to be coming true.

A side effect: with each successive iteration, the "editors" doing the job become experts in data selection. No more reading required, no need to understand; the primary requirement is competence in evaluating the numbers. Not far from today, this "editor" will effectively become a data analyst with publishing insights. The same process would apply to substantive and copy editing, probably discarding the latter job position before any other.

In the big office: the boss is very happy to have saved so much on a "not always reliable" workforce. Some new positions had to be created, of course, like the AI Tech Specialist, who monitors and maintains the correct working of the AI. It's a major expense, but "Gut" can do the work of dozens of people in the same amount of time. Not only that: they have already developed version 26.11, which even has a simulated but stimulating sense-of-humor module to make "meetings" with it more pleasant.

In essence, this boss has a five-figure salary, and his troubles have been reduced to dealing with his "chief editors": a big name for people who evaluate the numbers and read the one-page, bullet-point summaries the AI delivers to them, so that they are at least informed of what a book is about and the major points of the plot.

Design and layout also seem simple to create artificially: just provide a set of proven templates, use machine learning to teach the AI how to correct widows, hyphens and the like, and don't worry about the rest. By the time this occurs, people will have already re-learned to read on those (horrible) screen readers with accessibility features, zoom, and convenient storage capacity.

Printed books don't fare better. Even today, publishers have sacrificed all the use and meaning of margins and blanks to maximize the use of space and increase their profit margin. This is no surprise, but it is deplorable, since even a set of margins as short as half an inch on each side of a 5×8" book means only 70% of the page is used for text; add leading to the equation and that usage may drop to as low as 50%.
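The margin arithmetic above checks out: on a 5×8" trim with half-inch margins on every side, the text block is 4×7" = 28 of 40 square inches. A few lines make the calculation explicit:

```python
# Fraction of a page occupied by the text block, given a uniform margin.
def text_area_fraction(width, height, margin):
    text_w = width - 2 * margin   # margin on left and right
    text_h = height - 2 * margin  # margin on top and bottom
    return (text_w * text_h) / (width * height)

# 5x8" trim, 0.5" margins: (4 * 7) / (5 * 8) = 0.7, i.e. 70% of the page.
print(round(text_area_fraction(5, 8, 0.5), 2))  # 0.7
```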

For the boss, one of the happiest things brought by AI is a different system called "Minuzio," in honor of the famous Italian typographer and printer. He finally got rid of those pesky freelancers who tried over and over to get a cover done when all that was needed was "more red." Fortunately, "Minuzio" is very obliging, so you only have to tell it what style you want and it will deliver tens or hundreds of options, all appealing and optimized for visual impact.

In the accounting, financing and administrative departments, editors will long since have been relieved of the pain of doing numbers and dealing with P&Ls. Why bother? The new system linked to "Gutemberg," called "MIDAS," analyzes market trends and predicts, with 95% accuracy, the best possible date within a time frame for a new product to be released. It also organizes and tracks orders and delivers prompt shipping to points of sale, not to mention handling the e-commerce site where e-books are ordered and tracking sales across Amazon and other regional platforms. Additionally, it can do your tax reports.

MIDAS has saved the boss the pain of dealing with faulty logistics; the AI is everything they promised, and more. He saves time, money and resources, and now only decides on the best course of action for the company to invest in. The logistics feature means each book may have as few as a couple dozen copies in print and probably double that number in e-book sales, but they are a steady market and return rates are below 5%!

The end result: the boss only has to deal with AIs. They work 24/7, meaning no more delays, no more missed deadlines, just a stream of finished works. With so many projects managed by "Gutemberg" and designed by "Minuzio," sales are like a videogame where you invest your resources in one project or another. If only writers could write faster; but then, that will be solved when they release "Cervantes," the writing-author AI everyone is expecting. Then books will be a matter of inputting a number of parameters and dragging a project into the publishing console to produce.

5 years later: the advancement of AI systems allows the total disposal of unnecessary personnel. At most, a company now has a CEO, an Executive Editor and an Executive Manager, who are required to maintain a certain level of humanity behind the scenes of an otherwise automated process.

After a hard struggle, Open Access supporters finally release "improved" versions (mostly copies and rip-offs) of the different AIs with various, sometimes flamboyant names. Some of these specialize in certain genres; others try to emulate the protocols of Gutemberg or Minuzio. Many are free but mediocre; most are paid per upgrade or feature.

Whatever the angle, this leads to a sudden burst of "single man/woman" publishers managing hundreds of projects at a time, who seem to be good at becoming celebrities and influencers. Self-publishing is possible, but if you want to "write" something that does not stall at the dozen-sales mark, you need those guys to become your "publishers."

Grant systems for publishing, where applicable, collapse under the pressure of tens of thousands of applications. Sometimes the grant is so low it barely covers the cost of the site's domain, or the price of a cup of "Hyper Cetacean milk coffee" (which uses no cetacean milk, by the way, just the brand name; it has no sugar and no actual coffee, just the flavor. It's very popular by then).

Widespread publishing is a reality: anyone can write, or give an idea to a "Cervantes" replica, have the book written, then process it and publish "a book." Mission accomplished, everyone can publish now. But with so many works and everyone writing, nobody reads anyone else.

10 years after total implementation of AI in publishing: with so many published failures from "Cervantes" and its clones, people start working to actually write something appealing to humans. Technically, the AIs' works are brilliant, but for some reason people do not like the ending, or the story: it was too good, too sad, too real. Something was lacking. Perhaps a touch of imperfection?

15 years after total implementation: book publishing could be considered at its peak since the invention of writing. Almost every person on the planet has "written" a book at some point or turned their life experiences into one; AIs registering people's travels or daily experiences can now turn them into movies, blogs and, of course, books.

30 years after total implementation of AIs in publishing: no one reads any longer. The new ODID (organic data and information input device) works marvels to provide people with the knowledge and experience they need. Books are obsolete, and reading is a skill that must be taught separately, because not even ODID can "install" such a complex process in one's brain. Besides, nobody cares about this elaborate system of symbols, meanings and references required to provide basic understanding of topics or to evoke elemental imagery in the mind. Those who read are either old enough to have been taught, or learn out of pure historical interest.

 50 years later… internet unplugged…

27,000 years later… On its way to a red star (formerly AC +79 3888), a primitive space artifact is discovered. There is great expectation, as it may be the one sent by the former inhabitants of planet Earth thousands of cycles ago. Within it comes a rich description of a world the meta-humans do not know about. When the “Archorologist” finds some unusual markings on it, it uses the primitive code of a technosentient being trapped in a terminal to scan the drawings; the holo-projector replies: I REGAYOV.


Sorry about the length of this work; I was driven by the topic.



Hey Siri, What Should I Read Next?

The topic of AI, as I am beginning to appreciate, is a Pandora’s box. Once opened, it cannot be contained. And although AI promises to simplify complex things, it inadvertently adds complexity to our ‘once simple life’.

To imagine the next possible confluence of AI and publishing, we first need to evaluate the most urgent need for publishers. What is their most persistent need?

The publishing industry is going through a big shift, and the fight has moved beyond two key parameters: content and availability. The age-old cornerstone of publishing was to find great content and make it available to as many readers as possible, usually through an extensive distribution network. Earlier, a book had to compete for shelf space; the possible field was limited to bookstores and newsstands. But the market is different now. With the innovation in eCommerce and Amazon’s hold over the market, the concept of shelf space has disappeared. Every book fends for itself now. Distribution is one of the strongest assets of the publishing industry, but with Amazon in the picture, it’s no longer a unique advantage.

The publishers still hold the advantage over content, but not for long. Amazon has single-handedly revolutionized self-publishing, breaking one of the strongest barriers to entry: a publisher’s stamp. Anyone can publish now. That isn’t necessarily a bad thing for publishers. Some really promising writers have emerged through the cacophony of indiscriminate self-publishing, and there’s a low-risk opportunity for publishers in that.

But going forward, the fight has moved to discoverability: it is all about reach now. And that’s where AI can really benefit publishers. The market can no longer be limited to geographical boundaries, or demographics for that matter. With machine learning and NLP, it’s becoming increasingly possible to track not only what people are buying, but also why they are buying it. This deeper, non-linear understanding of human behaviour is leading the way to behavioural marketing. With the use of AI, publishers can expand their reach with better, more focused marketing.

Publishers can benefit a lot from AI. From content curation to SEO, user-generated data (reviews, ratings, categories), email marketing and social media reach, these tools can not only make publishers’ lives easier but make them better at their jobs. The optimization of processes and faster turnaround times not only yield better results for businesses, but also keep publishers relevant for consumers, leading to better-informed buying decisions and higher conversion rates.

AI has already had a tremendous impact on the way users conduct online searches and discover books. This, in turn, is changing the way marketers create and optimize content. Innovations like the Amazon Echo, Google Home, Apple’s Siri, and Microsoft’s Cortana make it easier for people to conduct searches with just the press of a button and a voice command. That means the terms they’re searching for are evolving too. Publishers need to observe this user behaviour closely. How people search for books is important for ascertaining how buying decisions are made and where the actual buying takes place. With the help of AI, publishers can re-establish a more efficient purchase funnel for readers.

I think publishers need to be smart here. The industry is going through a disruption right now, with the driving force in the hands of tech giants who can’t necessarily be identified as publishers. For all the waves Amazon is making, it couldn’t have gotten where it is today without the groundwork of traditional publishing. To me it seems quite clear that publishers need to embrace AI, because it is bound to reach them anyway. It makes sense to stay on top of the game rather than play catch-up all the time. If there is even the remotest possibility of publishers regaining the ground lost to Amazon, it is through AI. It is the only thing that will level the playing field once again.

Anumeha Gokhale

Say hello to the very efficient and very effective “racist robots”

“AI has a disconcertingly human habit of amplifying stereotypes. The data they rely on – arrest records, postcodes, social affiliations, income – can reflect, and further ingrain, human prejudice.”

It pleases me to say that talk about “diversity and inclusivity” in Western publishing has become so commonplace it is beginning to sound like a broken record, a vital one but repetitive nonetheless. Just two decades ago, entire publishing conferences would not have been dedicated to people of colour, queer people or people from different socioeconomic backgrounds. We are working in an industry that is slowly trying to open its doors and move away from the systemic imbalance that has for decades governed it.

Because the industry is overwhelmingly white, there is a dominant monolithic voice that determines which books are acquired and which ones subsequently make it onto the shelves, digital or otherwise. More editors of colour are needed to begin to see any change in this regard. If what Professor Juan Alperin said to the cohort on Monday, the 5th of March, is true, and acquisitions is the role in publishing that will be the easiest for machines to replace, then in my opinion this will be a blow for diversity.

A report on the future effects of AI on the publishing industry stated that “machine learning can help editors move beyond gut feeling when making content decisions”. But in my opinion, it is this very “gut feeling” that makes the human process of acquiring new books indispensable. Taking this opportunity from editors of colour and shifting straight into the machine learning era is a disservice to representation.

It is not difficult to imagine a future run by AI, as the medical and retail industries have begun to show how helpful it can be in increasing efficiency. In publishing, however, the incorporation of machine learning especially in the acquisitions sector can set the industry back decades unless the machines are multidimensional and taught to value diverse content.

Imagine the scenario below:

Doubleday, June 2030

After the annual general meeting on Monday, it was decided that psychological thrillers with a literary angle are what readers are looking for. Roco, the machine that asks consumers exactly what they want to read every time they open their Amazon search engine, has told publishers everywhere to buckle down and give readers a mixture of the now legacy book Gone Girl and the vintage classic The Catcher in the Rye. Doubleday, which is overseen by the one-woman publisher Em Kay, has also fully implemented Sirex, the acquisitions software that has taken the United States by storm. Sirex has been programmed to acquire books that are “full of imagistic thrill and visual realism”.

This scenario was not difficult to conjure up: a world where entire publishing teams are made up of one human being because the rest of the editorial processes have been assigned to machines in the name of “operational efficiency”. In this scenario, the machine is taught to look for books similar to what has worked in the mainstream past. We all know that this has for the most part been white; the machine is therefore propagating the biases already in existence and amplifying them too. I picked the term “visual realism” because, as I argued in my undergraduate dissertation, these are the types of words literary bodies such as the Swedish Academy prize, and they are Eurocentric in their very nature. Unless data sets are increased significantly to include people of all ethnicities, sexualities and more, machine learning might be the weapon that keeps the publishing landscape as “comfortable” and closed off as it is now.

I will go as far as to say that what this industry needs first is the “gut instincts” of human beings of colour, not machines.

The Publishing Process Needs People


To take a more lighthearted approach to the possibility of Artificial Intelligence integrating into publishing, I think it is entirely possible that real books could and would be written entirely through predictive text. It has already been done with “Harry Potter and the Portrait of What Looked Like a Large Pile of Ash,” and while it may serve a more comical use, I would not be surprised to see predictive text used outside of fan fiction. That being said, humans must still play a big role in the production of such a book. From editors and designers to the person in acquisitions deciding whether the result is even worthwhile, humans will remain necessary in the publishing process.
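For readers curious what “writing with predictive text” amounts to mechanically, here is a toy sketch: a bigram model that suggests the most frequent next word, which is the crudest version of what phone keyboards (and the tooling behind the Botnik chapter) do. The corpus and suggestions here are invented for illustration.

```python
from collections import defaultdict

# Toy illustration only: real predictive-text tools are far more
# sophisticated than this bigram model. The corpus is invented.
def build_model(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def suggest(model, word):
    """Return the most frequent follower of `word`, like a keyboard's top suggestion."""
    followers = model.get(word)
    if not followers:
        return None
    return max(set(followers), key=followers.count)

corpus = "the wizard raised his wand and the wizard smiled at the castle"
model = build_model(corpus)
print(suggest(model, "the"))  # "wizard" follows "the" twice, "castle" once
```

Note that even here the model only proposes; the Botnik chapter was produced with richer source text and heavy human curation, a person choosing among suggestions, which is exactly the human role argued for above.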

The alarmists of the world may fear that writing with predictive text is a slippery slope leading to a future that does away with authors altogether, replacing them with computers that can spit out thousands of books a day, written solely with predictive-text-like technology. Realistically, that is unlikely. At worst, AI may require copy editors, substantive editors, and designers to up their games and refine their skills. A computer may be able to scan a document and fact-check it against Google, but real eyes will still be required to ensure that a text flows, is logical, and is emotional to the correct degrees.

As AI continues to develop, and the machine translation technology behind apps like Google Translate evolves and improves, it is possible that, in order to save money, publishers will turn to apps to translate books into other languages. This would save time and money, but it is important to remember that any quality company must not rely on technology alone when it comes to translation. The eyes of human native speakers (or very experienced human translators) must still have the opportunity to review each document. Relying solely on a computer runs the risk of mistranslations, misunderstandings, and ultimately a bad experience for the reader.

Overall, Artificial Intelligence may speed up some publishing processes (e.g. using voice commands in design projects), and may provide humorous results for others (e.g. stories resulting from only using predictive text), but the human touch is something that simply cannot be replaced by wires, data, or CPUs.


Predictive Analytics and Publishing

Artificial intelligence and its various applications like Machine Learning, Natural Language Processing, Deep Learning, etc. have made inroads into almost every area of human interest. And publishing is no exception. One doesn’t have to imagine the ways in which AI will be integrated into publishing, because it’s already happening. Predictive analytics, evolution in the search and discovery of books, targeted advertising, and dynamic pricing are just some of the ways in which AI is impacting the publishing industry. For the purposes of this essay, I will focus on predictive analytics, a branch of machine learning.

People have been predicting the success of books practically ever since trade publishing began. Years of experience in the industry and observing the performance of books over seasons has no doubt given some insiders the ability to gauge the market value and potential of a book. Lately, however, machines have been drafted for these very purposes. In 2016, Jodie Archer and Matthew L. Jockers wrote a book called The Bestseller Code: Anatomy of the Blockbuster Novel, in which they posited that an algorithm they had created could examine literary elements in a book and assess its bestseller potential. They claimed that their algorithm could, with 97% certainty, predict a New York Times bestseller in fiction. But publishing industry guru Mike Shatzkin is doubtful of technologies like these. He feels that bestsellers are made up of several complex moving parts, and analyzing just the content (or text) of a book to predict its success is reductive and plain wrong, as it does not consider the “consumer analysis, branding, or the marketing effort” required to make a book successful. As a comparison, Shatzkin talks about how Google uses search algorithms to predict the success or failure of movies. While doing so, Google’s algorithm takes into account various parameters like “search volume for the movie, YouTube views, genre, seasonality, franchise status, star power, competition”, etc. Nowhere does the algorithm read the scripts of movies to predict their success, because that alone would not be enough to guarantee the success of a movie. Shatzkin is not entirely dismissive of the application of algorithms in publishing. He has helped develop OptiQly, an application that uses algorithms to generate scores that can help publishers optimize a book for discovery and sale and guide them regarding “the extent to which author-focused marketing [can] contribute to discovery and sale.”
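The distinction Shatzkin draws can be made concrete with a toy sketch: a signals-based predictor scores market data (search volume, franchise status, and so on) without ever reading the text itself. All weights and numbers below are invented for illustration; this is not Google’s actual system.

```python
# Hypothetical illustration of a signals-based predictor: a weighted
# score over normalized (0-1) market signals, never the manuscript itself.
# Weights and example numbers are invented.
def success_score(signals, weights):
    """Weighted sum of normalized market signals."""
    return sum(weights[k] * signals.get(k, 0.0) for k in weights)

weights = {"search_volume": 0.4, "youtube_views": 0.2, "franchise": 0.25, "star_power": 0.15}
book_a  = {"search_volume": 0.9, "youtube_views": 0.7, "franchise": 1.0, "star_power": 0.5}
book_b  = {"search_volume": 0.2, "youtube_views": 0.1, "franchise": 0.0, "star_power": 0.3}

print(success_score(book_a, weights) > success_score(book_b, weights))  # True
```

The point of the sketch is that none of the inputs come from the text, which is precisely Shatzkin’s objection to text-only predictors like The Bestseller Code’s.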

Neil Balthaser, founder of the SaaS platform Intellogo, which analyzes content for its clients, believes machine learning can predict a bestseller. According to him, machine learning can “identify similar tones, moods, topics and writing styles to books that are topping bestseller lists … and, in this way, better understand the reading audiences’ desires.” It is possible that if a machine were fed data and programmed to analyze bestsellers that were indicative of audience interests, it could analyze a book and recommend certain areas where the publisher could “focus its marketing efforts”. In this way, “machine learning can remove the gut feeling or personal bias inherent in business decision making.” Balthaser sees machine learning as an indispensable tool for publishers because he thinks it can provide publishers with “real-time information about their readers, figure out what is working in the marketplace, and, perhaps, make the bestseller lists more of an accurate depiction of what readers want to read, not simply what is available.” Market research is extremely important for publishers, and very soon, instead of relying solely on focus groups, ARCs and opinion polls, machine learning will be able to study much larger and more complex data, including audience research, social media, market trends and popular searches, to forecast the reaction a certain book will receive.
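A minimal sketch of the “identify similar tones, topics and styles” idea is cosine similarity over simple word counts. Intellogo’s actual analysis is far richer (mood, style, audience signals); this only illustrates how “closeness to what is topping the lists” can be computed at all, with invented example texts.

```python
import math
from collections import Counter

# Toy sketch: similarity between texts as cosine similarity of word counts.
# Real systems use much richer features; example texts are invented.
def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

bestseller = vectorize("a gripping thriller about a missing girl and a dark secret")
manuscript = vectorize("a dark thriller about a girl hiding a secret")
unrelated  = vectorize("a practical guide to vegetable gardening in small spaces")

print(cosine(bestseller, manuscript) > cosine(bestseller, unrelated))  # True
```

In a real pipeline, the “vectors” would encode tone and mood rather than raw words, but the comparison step, measuring a manuscript against what is currently selling, works the same way.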

The advantages of AI and machine learning in publishing are enormous, but the people making and offering this technology are not from the publishing industry. It’s companies like Apple, Google, Facebook and Amazon that are investing heavily in machine learning and exploiting its capabilities. Bar a startup like OptiQly, which was created by people in the industry, publishers have pretty much been spectators, not participants, in AI. And that’s something to think about. Cliff Guren, founder of Syntopical, a publishing consultancy company, feels that machine learning can quickly evolve from a forecasting tool into an authoring tool, in that it can formulate “machine-authored responses that synthesize information from a wide variety of sources.” Publishers then need to decide the extent to which they’re okay with third-party systems making crucial decisions for them. One way to mitigate this issue, according to Guren, would be for publishers to make their own investment in AI so that they use it to develop, not dictate, the industry.

Artificial Intelligence is a complex field and there is bound to be some friction when this 21st century technology meets a 500-year-old industry like publishing. That AI is here to stay is clear; publishers just need to be cognizant of its capabilities and neither entirely dismiss it nor blindly embrace it.

We’ve got a Data Kink

For publishers, AI is this huuuuuuge concept that I think is hard for us to wrap our heads around (hard for anyone to wrap their head around, honestly, but perhaps especially for us creative-driven humans), and the ways that it will affect our industry as we move ever more into digital publishing are many and complex. I think one of the most obvious ways that we can anticipate artificial intelligence in publishing is through data mining, which Holly Lynn Payne does a good job of introducing in her efforts to get us to buy into her company, Booxby. While Payne’s motivations for getting people into the idea of a data-driven, artificially intelligent book selection app were mostly self-promotion, the technology she’s using has great potential to become an industry standard. The context is already there, to an extent. Users are at least already used to algorithmic recommendations, even if they’re not always trusted. And, more importantly, publishers have been looking to data for the answers to their problems increasingly these last two decades, as evidenced by the creation of BookNet Canada.

Data-driven marketing and acquisition is already a reality for book publishers, of course. Though it is a relatively recent development in Canadian publishing, BookNet members have access to sales data from all over the country, and most use it to their advantage when it comes to curating their year’s list. What AI would do is transform sales data into full consumer analysis. An AI system that isn’t only tracking what books are bought but by whom, and why, and what parts of which books are working, could drastically impact how publishers curate books. It wouldn’t, then, be just about how we get books into the right hands, but about whether certain plot lines, certain character names, certain prose rhythms appeal to people. I see this as being the next step for publishers, though I can’t say whether it will all be through companies like Booxby, or if major publishers will create their own, or even if BookNet will begin to incorporate and release this type of data as a part of membership.

Of course, this has ethical implications for everyone involved: the reader, the author, and the publisher. The reader, though they’ll give consent by default when they purchase a book, will have all their interests turned into aggregated data and sold back to them. The author may eventually have to change what they write to fit the expectations of publishers. And as AI tech starts to integrate itself into publishing, publishers will have less and less of a reason to exist. What I’ve talked about today will definitely affect marketing and editorial, but to some extent everything about publishing has the potential to be replaced or modified by AI. When we talk about digital publishing platforms, we often talk about how permanent positions in publishing are swiftly becoming freelance positions, but with the introduction of AI we risk losing those positions altogether, as companies look to automate certain aspects of the publishing process. I’ll refer again to an article called “The Ethic of Expediency,” written many years ago: the ethics of expediency are tricky at best, but it’s pertinent for us to keep in mind the sacrifice we make when pushing for convenience.

I’ll also just leave here something daunting I found during my sleuthing: apparently, six years ago, someone already mastered AI book creation-on-demand.

The Self-Driving Manuscript: How A.I. Will Revolutionize Book Acquisitions

It has always been the case that data about reader preferences heavily influences marketing, and that successful or unsuccessful marketing in turn influences later book acquisition decisions. AI has massively increased the amount of reader data available, and the degree to which it can be analyzed. But a more direct application of AI to acquisitions is also taking root, and I predict that artificial intelligence will become integrated into manuscript acquisition.

For a concrete example, we can look at Booxby, an app that “reads” and analyzes texts, and offers “A.I. generated data to the four stages of book discovery: manuscript development, acquisition, marketing, and consumer discovery.” Founder Holly Lynn Payne explains that AI can help solve the discovery problem; it can “provide meaningful analytics to inform acquisition decisions, understand a book’s full market potential, and create an effective mechanism to connect books to the readers who would most enjoy them.”

However, as my MPub colleague Taylor McGrath reminded us in a comment in our Hypothes.is group, readers tend to choose books based on personal recommendations, making an AI-driven, Netflix-like service for books unlikely to take hold. I agree, and that’s why I can’t see what it would look like if we used AI to create a “mechanism to connect books to the readers who would most enjoy them.” Payne is overstating the problem. (In fact, I think many readers, myself included, actually enjoy the process of looking for books, in a way that we do not necessarily enjoy shopping for other goods.)

I do think Payne gets it right when she says that AI can “provide meaningful analytics to inform acquisitions decisions” and “understand a book’s full market potential.” It’s acquisitions editors, not readers, who want help choosing books, and that’s where Booxby will shine. On top of providing comps for the books it processes, Booxby also offers “patent-pending Experiential Language Tags that quantify the reader experience.” I have no idea what those are, but if they’re anything like the applications of AI that I’ve been learning about lately, it sounds like a probably imperfect but very powerful tool.

For example, in one of next week’s b-side readings, “A Publisher’s Job is to Provide a Good API for Books,” Hugh McGuire explains how easy it is to use “semantic tagging” to build a smart index for your ebook. Like a conventional index, a smart index can tell you where all instances of John Smith appear in the book; but it can also tell you “where all people appear; where all instances of people named John appear; where all instances of people named Smith appear; that ‘my dear Granny Smith’ is a person and ‘my delicious Granny Smith’ is an apple.” A smart index is what McGuire calls a “semantic map” of the book. (Small Demons is a great illustration of what this might look like.)
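A toy sketch of what querying such a smart index might look like, assuming the text has already been hand-tagged (a real pipeline would derive the tags from semantic markup in the ebook, as McGuire describes, and the positions here are invented):

```python
# Toy "smart index": entity occurrences hand-tagged with a type and,
# for people, name parts. Positions and entities are invented examples.
tags = [
    {"pos": 3,  "text": "John Smith",   "type": "person", "first": "John",   "last": "Smith"},
    {"pos": 58, "text": "Granny Smith", "type": "person", "first": "Granny", "last": "Smith"},
    {"pos": 91, "text": "Granny Smith", "type": "apple"},  # "my delicious Granny Smith"
]

def where(predicate):
    """Positions of every tagged entity satisfying the query."""
    return [t["pos"] for t in tags if predicate(t)]

print(where(lambda t: t["type"] == "person"))        # all people: [3, 58]
print(where(lambda t: t.get("last") == "Smith"))     # people named Smith: [3, 58]
print(where(lambda t: t["text"] == "Granny Smith"))  # person AND apple: [58, 91]
```

The last query shows why the tagging matters: the same string “Granny Smith” resolves to a person in one place and an apple in another, which a conventional string-matching index cannot distinguish.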

Semantic mapping is impressive to me in three different ways, which I will explain in order of increasing impressiveness. First, it’s easy to see how this process of semantic mapping is a revolutionary tool for research. Such a tool could let you find a particular person or concept, or even all people or all concepts, in a particular book or collection of books (provided they are appropriately tagged and made available). You could also identify all books (that have been tagged and made available) that contain a reference to a particular event or person or concept. I can’t tell you how this would work, but semantic mapping could help you do all of these things at the speed of search.

After semantically mapping many books, this sort of AI application could create categories of these maps, outside of the narrow genres with which humans currently approach books. I don’t know what the categories that emerged would look like, but I’m sure they would be illuminating. We might find a long-neglected category of books that humans had never attended to as such; or to put it another way: we might find a category of book that humans don’t know how to market, which is the exact experience Payne had with her book that led her to create Booxby. The point is, it would definitely be interesting to see books sorted into categories, or genres, based on the way their semantic maps look to an AI application. (I bet it would look a lot like the weirdly specific, yet creepily accurate categories that Netflix recommends to me.)

Now, imagine this process coupled with an AI application that collects data on reader-reported experiences of each of these categories. This data could be measures of sensibilities and emotions that, from the semantic map alone, an algorithm would not know to expect from a particular book (because AI doesn’t have emotions, that we know of, yet). These experiential measurements could be straightforward, like the ones taken by the Whichbook application Jesse Savage brought to our attention (happy or sad, beautiful or disturbing). Or they might be more obscure, asking readers to what degree they felt “mournful” at the end of a particular book, how much it reminded them of themselves when they were children, etc.

Of course, we’ve always been able to get this kind of human feedback on particular books, or particular genres of books; or more recently, on books that contain particular content, such as a high frequency of words indicating bodies of water. All of that allowed us to associate certain kinds of reader experiences with certain genres, or certain kinds of content. But this AI application could associate certain kinds of reader experiences with certain kinds of semantic map. This means it could find two books that were likely to make you feel mournful, even if they had absolutely no content or human-created genre in common.

We would then have as data the content of the book, the semantic map of the book, and the experiential map of the book. Add to that the avalanche of consumer behaviour data that is already revolutionizing book discovery, and this would definitely yield some actionable results for acquisitions editors.

They could map their own collections, and make associations between certain kinds of semantic maps and available sales data. They could also map a submitted manuscript to get an idea of the reader experience. They might learn that even though the manuscript seems like a great example of what is selling right now in terms of content or genre, it actually is likely to produce an unpopular reader experience. They might find that a reader experience they thought would be undesirable is doing quite well in terms of sales. Or they could search the slushpile to find the weirdly specific, yet creepily accurate combination of content, genre, author profile, and reader experience they’re looking for. They could semantically map the ocean of self-published manuscripts (whose books were tagged in this manner, and made available) and treat it as a gigantic slushpile. And they could do all this without cracking a single manuscript, without having to summarize the content or squint through a badly edited first draft.

[Edited to add: I’m not saying that I think this is necessarily a good way for acquisitions to be decided; I have doubts, for the same reasons I don’t think a book should be chosen based simply on an author’s sales record.] These are the ways I imagine a combination of AI, tagging, and data-driven marketing will affect acquisitions. My understanding of all of these things is quite limited, but it was a fun experiment, and I’d like to know whether any of you think it sounds useful, dangerous, completely implausible, or utterly obvious.

AI for Audience

Imagine and explain one way in which AI (Machine Learning, Natural Language Processing, or another application of AI) will be integrated into publishing. You can go as near or far into the future as you like. You can also explore the ethical implications of this technology becoming a publishing norm.

AI comes with a lot of advantages, and although some of us are scared of it taking over humans’ jobs, we simply can’t deny its existence. It is becoming more and more mainstream. As publishers, we can’t simply turn a blind eye to this innovation, especially when it could solve one of publishers’ biggest problems: audience.

We all know that there is plenty of good content out there. However, is that content reaching its potential audience? You can create the best content in the world, and it will be useless if your ideal readers don’t know about it. If you do not have a clear understanding of who your audience is, you are less likely to produce, or get your hands on, the content that they want.

AI could provide near-precise audience analysis and engagement. It is already known to and used by many marketers at big companies, and publishers could definitely benefit from it as well. Say you want to understand your audience and reach them better: you analyse what content they are visiting on your website, what topics they are talking about on social media, and through what platforms you can reach them best. You also build a persona to better understand your ideal readers. You aggregate data from Google Analytics and Google Trends and any other hashtags and comparables that you can find. It’s a lot of work, and we might just miss something along the way. This is where AI comes in. AI could provide a tool that gives you a detailed picture of your potential readers and analyses your target market. One tool that already exists is People Pattern. Although still not perfect, it runs on the same AI principles: aggregate data from big data sources (such as real people having real conversations), normalise it, then analyse it. It then develops in-depth audience intelligence; not just age and gender, but things like digital consumption, brand sentiment, even life stage.
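The aggregation step described above can be sketched very simply. The data and category names below are invented, and tools like People Pattern do this at vastly greater scale and depth; the sketch only shows the pooling of signals from different sources into one profile.

```python
from collections import Counter

# Hypothetical sketch: pooling topic signals from site analytics and
# social media into a single interest profile. All data is invented.
page_topics    = ["sci-fi", "sci-fi", "memoir", "sci-fi", "poetry"]  # from site analytics
hashtag_topics = ["poetry", "sci-fi", "travel"]                      # from social media

profile = Counter(page_topics)
profile.update(hashtag_topics)

# Lead interest for a simple reader persona.
top_interest, mentions = profile.most_common(1)[0]
print(top_interest, mentions)  # sci-fi 4
```

A real system would also normalise the signals (an analytics page view and a hashtag are not equally strong evidence) before aggregating, which is exactly the step the essay notes is easy for humans to get wrong.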

Another thing AI can do is maximise audience engagement to provide better customer service. The answer here is chatbots. FastBot, for example, is a chatbot that is able to engage in ‘real’ conversation based on specific keywords. This chatbot could be used by readers to find out more about characters in books and their backstories. They can also take knowledge quizzes to find out how well they know the characters in a book. Using NLP and guided elements, a chatbot can also answer readers’ queries to authors and chat with multiple readers without adding to the author’s time commitment. A chatbot can also integrate video, audio, images, emoji and GIFs to enhance readers’ experience. However, the current implementation is still lacking because, unlike Siri or Alexa, it does not recognise variants of the keywords; but it is a start. Someone can, and will, improve it, and it could be life-changing for both publishers and readers.
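A minimal keyword-matching bot of the kind described is only a few lines. The keywords and canned answers below are invented, and the sketch exhibits exactly the limitation noted above: it fails on any variant of its keywords it was not given.

```python
# Minimal keyword-matching chatbot, in the spirit of the FastBot example.
# Triggers and answers are invented for illustration.
RESPONSES = {
    "backstory": "Elena grew up in a lighthouse before chapter one begins.",
    "quiz": "Question 1: What is the name of Elena's ship?",
    "author": "Leave your question here; the author's replies are posted weekly.",
}

def reply(message):
    """Return the first canned answer whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "Sorry, I don't understand yet. Try asking about a backstory or a quiz."

print(reply("Tell me Elena's backstory"))
print(reply("tell me the back-story"))  # variant not recognised: falls through
```

Asking for “the back-story” falls through to the apology, while Siri or Alexa would resolve the variant; closing that gap is the NLP improvement the paragraph anticipates.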

Finally, despite all the controversies regarding AI, it is very beneficial for publishers. Not only could it help publishers better understand their audience, it could also enhance readers’ experience and engagement. The implications of this very technology will certainly shift publishing norms, but for the better. What do you think about it?


Leave the middleman alone…

The readings reviewed last week make it appear that open access is winning (or should win) the battle against the mainstream publishing industry, removing barriers to accessing content and publishing platforms (not only books, but also media, videogames, video, art, etc.) in order to make them accessible to “consumers”, under the banner of freedom and the promise of reaching widespread audiences.

The middleman, as publishers (of all sorts) are commonly called, is often seen as an evil and abusive factor in the chain of production: an obstruction, an elitist judge who filters and decides who and what gets published and who and what does not. But we have to be careful with this assumption. The fact that an author is rejected by one or more publishers (as happened to many great authors published today) does not mean the work is bad (enter J.K. Rowling), but it also does not mean the middleman is wrong.

The thing is, you don’t need to beat the middleman in order to propose your own model.

For the revolutionary-inclined, it seems that justice is finally being done through the availability of free (or very accessible) publishing tools and platforms for people who wish to share and create communities with common goals, thus picturing a world where every effort is aimed towards the advancement of the human race, whatever that means to each advocate of this movement.

While I love the idea of being able to, for example, write a book, code a videogame or film a video on the same topic, share them with people who may be interested, and then create a community and develop it further, I am always suspicious about what lies behind the veil of generosity of most of these platforms, and what they get in return; in other words, “What’s the catch?” After all, examples abound of technology companies offering free services which later turned into beasts that, despite still offering their services for “free”, profit from items of more value than one could ever have imagined; take Facebook, for example.

However, free/accessible publishing and content services do not seem to be trying to become the next IT giant. As far as we can see, they even seem too disorganized, or focused on different and sometimes contradictory goals. Thus, even if an “Uber of publishing” becomes a reality, it looks like it would be little more than a nuisance to established publishers’ role; yet it could become a serious threat to the existence of published works themselves.


That is because facilitating widespread publishing would definitely increase the supply of works, and while this seems like a good thing, the fact is that the industry and its open access counterpart are not lacking titles but rather suffering from an oversupply of them. Meanwhile, a lack of interest and a change in the way people consume information have made the whole industry more elitist, less original and oriented towards specific topics.

For example, there is a lot of Harry Potter fan-fiction (love it or hate it), but as we reviewed last term, everything counts; thus, if such fan-fiction were to become a canonical part of the story, we would have to resort to some kind of “multiverse”, like those used by comic book publishers to accommodate the whole spectrum.

In essence, after a few iterations, “Harry Potter” would lose its meaning, its purpose, its identity, and all the values associated with it and given to it by its author. On the other hand, even if all fan-fiction strictly adhered to a set of rules respecting the base form of the novels, then each fan-work would be constrained and limited by those rules.

What do publishers have to offer, then? Order. The publishing, design and distribution services can surely be replaced and even automated; someday, an AI will be capable of writing you a book based on a plot, characters and storyline you provide. However, what publishers do, and have been doing for centuries, is something most valuable: offering order, in the form of curating potentially successful stories based on their knowledge of the readers (or the market, if you wish); the editing process through which a writer turns an idea into a successful story, or even a great story into a widespread success; distribution planning; the events and media that bring writers and readers closer; and, finally, protecting the integrity of those works by enforcing IP law.

Those services, proper to the middleman, are now being devalued in favor of an apparently egalitarian discourse that, in fact, proposes to create an “Uber of publishing” or similar, forgetting that “Uber” is a company that actually profits from the effort and resources of others without taking risks or offering them any security.

We have to be careful what we wish for.

“Intelligence is the Ability to Adapt to Change”

“Intelligence is the ability to adapt to change.” – Stephen Hawking

While trying to think of an adjective for a business model that favours the customer, I came up with “consumer-focused” and immediately stopped and thought to myself, “wait, aren’t we, as publishers, nothing if not consumer-focused?” That adjective made me reflect on how, no matter the model, the goal is always to put a product into the world that people will like, be it through traditional publishing, self-publishing, ebook publishing, or some hybrid of those. With that in mind, I realized that, although the popular model may be shifting, we as publishers don’t have to be upset about it, as long as we keep up.

Many of the individual jobs done by traditional publishers can be done by freelancers, and a self-publishing author is already free to hire these freelancers herself. Although self-publishing is on the rise, I think the process of hiring these freelancers is perhaps what is keeping it from overpowering traditional publishers altogether. Authors are noticing that it takes work to publish a book: work they may not want to do all by themselves. In her article “For me, traditional publishing means poverty. But self-publish? No way”, Ros Barber says, “If you self-publish your book, you are not going to be writing for a living. You are going to be marketing for a living.” From this point of view, traditional publishing looks pretty good! Publishers will take care of all the tricky stuff while you get to focus on writing. Even though there are technologies that allow you to format your book’s interior to meet the requirements of an ebook, for example, there are a lot of pieces that go into the publishing puzzle, and so far, traditional publishing is the best place to get them in one neat package.

While I don’t think traditional publishing is going away just yet, I think the industry could stand to learn a little more from those who decide not to use this system. Currently, the “gatekeepers” of publishing (publishers, book prize committee members, etc.) are largely privileged people (male, white, straight, etc.), which means those getting published are people like them. I believe that some of the people choosing not to use traditional publishing are those who have traditionally had their voices quieted: a phenomenon still happening in traditional publishing today. Minorities are underrepresented in the industry, although I am optimistic about seeing more and more representation in the years to come. If the current gatekeepers fall to more consumer-accessible business models, I predict far more minorities will flood the system, allowing more voices to be heard and more opinions to be brought to light. Even if everyone eventually decides that they need publishers and we end up restructuring again sometime down the road, we will have at least diluted the privilege pool a little more than it is now.

Traditional publishing isn’t going away, but I don’t think new publishing models are going to back down either. The most likely scenario is that publishers will adapt to fit into these new models, and make these new models fit into them. The industry is always growing, changing, and adapting. There is no reason that publishers shouldn’t continue adapting right along with it.