Please Bot Responsibly: A Compendium of Audience Building Bot-haviors

By Erik Hanson

Building an audience on social media can be a laborious, time-intensive endeavor, so it only makes sense to use technological affordances where they are available. One such technology, the social bot, can make audience building easier by automating many of its most time-intensive tasks. A common home for audience-building bots is Twitter, with which this paper will concern itself. It will argue that although bots, through increasingly sophisticated automation, have the capacity to aid greatly in building an active and engaged audience, one must exercise caution when designing and implementing a bot in order to avoid being labeled a spam bot, to manage users’ expectations of bot interactions, and to embrace the possibilities of semi-autonomous bots.

[Image: Twitter bots are here to stay. Source: adweek.com]

Spambots

What are bots? What is spam?

Twitter is mired in both spam and bots, but what exactly is a bot and how does it spam? At its simplest, a Twitter bot is a computer program that can tweet by itself, without interference or interaction from a human controlling it (Dubbin, 2013). This can entail malicious behavior, but doesn’t necessarily require it. Some bots, such as @RealHumanPraise, mash up reviews from Rotten Tomatoes, replacing the names of actors and films with those of Fox News anchors, and post the results every two minutes. Another, @tofu_product, when engaged, responds with a sometimes nonsensical string of your own frequently used words and phrases. These and other bots are known as social bots and can interact in comical or malicious ways, for entertainment value or to gain followers. Social bots have their roots in chatterbots, which would attempt to converse with a human participant and, ideally, convince (or deceive) the human into thinking the chatterbot was also human (Wald, Khoshgoftaar, Napolitano, & Sumner, 2013, p. 6).
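How a remixing bot like @tofu_product works is not published, but bots of this kind are commonly built on simple Markov chains over a user’s recent tweets. The following is a minimal Python sketch of that general technique; the sample tweets and the order-1 chain are illustrative assumptions, not @tofu_product’s actual implementation.

```python
import random
from collections import defaultdict

def build_markov_chain(tweets):
    """Map each word to the words observed to follow it in a user's tweets."""
    chain = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
    return chain

def generate_reply(chain, max_words=12):
    """Walk the chain from a random start word, producing a remixed phrase."""
    word = random.choice(list(chain.keys()))
    reply = [word]
    for _ in range(max_words - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        reply.append(word)
    return " ".join(reply)

tweets = ["the tofu is good today", "today the product shipped", "is the product good"]
print(generate_reply(build_markov_chain(tweets)))
```

Because the chain only knows which word tends to follow which, the output is built entirely from the user’s own vocabulary yet frequently lands on the nonsensical phrases the bot is known for.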

[Image: The @RealHumanPraise bot. Source: twitter.com]

Not all bots have such innocuous goals as merely entertaining humans, though. Others try to pass as humans in order to push a malicious agenda or simply to be invasive. Twitter as a company defines spam in a very broad sense, leaving room for the definition to change as the acts of “spamming” evolve. A few of the more common points include following or unfollowing large numbers of people in a short period of time, repeatedly following and unfollowing people, posting duplicate posts repeatedly, and posting mostly links without personal updates (“The Twitter Rules,” n.d.). Doing any of these manually, one at a time, still constitutes spamming, but it is automating these behaviors via bots that makes the spamming particularly effective and invasive. In addition to its API guidelines for building bots and interacting with its API, Twitter also provides automation rules and best practices, which discourage, when not outright banning, the automation of large-scale actions such as following and unfollowing or automated replies and mentions (“Automation rules and best practices,” n.d.). Automated retweeting must be done with care and must actively provide a community benefit, while automated favoriting is forbidden.

Spam and detection

Twitter does provide rules that govern its spam policy, but its definition of spam is loose and broad because spammers are constantly evolving and adapting to Twitter’s efforts to police them. In order to police spammers, one must first be able to detect them. In a study of spam accounts (most of which were automated bots), A. H. Wang (2010) identified several hallmarks, including following behavior, use of links, mentions, and duplicate tweets, all of which were detectable in a machine-readable, automated way. Using these features, he was able to predict with 89% accuracy whether an account was a spam account (Wang, 2010). McCord & Chuah (2011) improved on Wang’s research, detecting spam with 95.7% precision. Both studies found that roughly 3% of all accounts are spam accounts.
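Both papers treat detection as supervised classification: each account is reduced to a feature vector describing its following behavior, link usage, mentions, and duplicates, and a classifier is trained on labeled examples. The sketch below illustrates that general approach with scikit-learn; the specific feature layout, toy numbers, and labels are illustrative assumptions, and the Random Forest here stands in for the tree-based classifiers McCord & Chuah found most precise.

```python
from sklearn.ensemble import RandomForestClassifier

# Features per account: [following/follower ratio, fraction of tweets with
# URLs, fraction with @mentions, duplicate-tweet count] -- illustrative only.
accounts = [
    [12.0, 0.90, 0.8, 40],  # aggressive following, mostly links: spam-like
    [0.8,  0.10, 0.3,  0],  # balanced graph, few links: human-like
    [25.0, 0.95, 0.9, 55],
    [1.1,  0.20, 0.4,  1],
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = legitimate

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(accounts, labels)

suspect = [[18.0, 0.85, 0.7, 30]]
print(clf.predict(suspect))  # [1] -> flagged as spam-like
```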

In learning to detect spammers and their behavior, researchers also identified ways in which spammers circumvent some of Twitter’s spam-detection methods (McCord & Chuah, 2011; Wang, 2010). For example, Wang (2010) found that not all spam accounts followed a large number of users; only around 30% of identified spam accounts did (Wang, 2010, p. 7). Spammers do not need to follow users to draw attention to their spam tweets, because they can post tweets with mentions or replies. Others post duplicate tweets but include different shortened links, usernames, or hashtags. It is through these methods that they get around the existing Twitter guidelines on spam. While Twitter and other conscientious users are vigilant about preventing spam, spammers are just as dedicated to circumventing detection. This places the would-be Twitter bot creator in a precarious game of cat and mouse: disseminating content to reach a wide audience while not spamming.
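A simple countermeasure to the link-swapping tactic is to fingerprint tweets with the variable parts (URLs, mentions, hashtags) masked out, so near-duplicates that differ only in a shortened link collide. A minimal sketch, with illustrative spam text:

```python
import re
import hashlib

def normalized_fingerprint(tweet_text):
    """Hash a tweet with URLs, @mentions, and #hashtags stripped, so
    near-duplicates that swap only a shortened link collide."""
    text = re.sub(r"https?://\S+", "<url>", tweet_text.lower())
    text = re.sub(r"[@#]\w+", "<tag>", text)
    return hashlib.sha1(text.encode()).hexdigest()

a = normalized_fingerprint("Win a prize! http://bit.ly/abc #luck")
b = normalized_fingerprint("Win a prize! http://bit.ly/xyz #lucky")
print(a == b)  # True -> same underlying spam message
```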

Take-aways from spambots

Taking these findings on bots and spam together, a number of useful lessons can be learned. First, care must be taken with the volume of automated interactions a bot engages in; there is a fine and vaguely defined line between simply automating actions and spamming. As Wang (2010) and McCord & Chuah (2011) have shown, as people become more aware of bots on the web and spam-detection methods improve, spam bots become increasingly sophisticated. Despite users becoming more discerning, McCord & Chuah (2011) cited a previous study showing that 45% of users on a social network such as Twitter willingly clicked on any link posted by a friend in their friend list, even if they did not know that person in real life (p. 175). This portrays a landscape of sophisticated spam bots and wary, but susceptible, users. For the Twitter bot creator, it translates into the need for a highly directed and thoroughly articulated bot strategy.

Depending on the end goal of the bot or the audience to be built, having a directed and well-articulated strategy means identifying the ways in which the desired audience interacts with Twitter (whether through network analysis, use of hashtags, engagement around a particular topic, etc.). If the automated actions are well defined, they will (a) target only the audience the bot creator wants to reach and (b) keep the number of interactions completed by the bot below a spam threshold. In both Wang’s (2010) and McCord & Chuah’s (2011) research, the spammers used tactics to reach as broad an audience as easily and effectively as possible. If a bot creator is to make more than an entertaining but innocuous social bot, one that can interact and engage with an audience, while avoiding being a spammer, they would do well to understand whom they are trying to target before sending a tweet-happy bot into the Twittersphere. The reasoning is straightforward: if a bot intended to build an audience for a legitimate purpose gets shut down as a spam bot, it helps nobody.
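In code, a “directed strategy” can be as simple as an explicit definition of the target audience plus a hard cap on daily actions. A minimal sketch, where the hashtags and cap value are hypothetical placeholders a creator would tune to their own audience:

```python
DAILY_ACTION_CAP = 50  # hypothetical conservative budget, well below spam-like volume
TARGET_HASHTAGS = {"#digitalhumanities", "#botally"}  # hypothetical audience definition

def should_engage(tweet_hashtags, actions_today):
    """Engage only with the defined audience, and only while under the cap."""
    if actions_today >= DAILY_ACTION_CAP:
        return False
    return bool({tag.lower() for tag in tweet_hashtags} & TARGET_HASHTAGS)

print(should_engage(["#BotAlly", "#nlp"], actions_today=12))  # True: in-audience
print(should_engage(["#crypto"], actions_today=12))           # False: off-target
```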

Bot or not?

Is it a bot?

Bots may be particularly efficient spammers because they are automated, but not all bots are spammers. Regardless of purpose, anyone employing a bot or other automated process to build an audience on Twitter should first understand the core features of what a Twitter bot does. In other words, the question to ask is: is it a bot or not? One key hallmark of a Twitter bot is its automation. Bots are able to proliferate on Twitter because Twitter does not impose strict limits on automation: it requires human authentication, in the form of a CAPTCHA, only when an account is created, and once a human has created the login credentials, a bot can make as many automated calls to the Twitter API as Twitter allows (around 150 requests per 15-minute window) (Chu, Gianvecchio, Wang, & Jajodia, 2012).
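In practice, staying inside those limits is usually delegated to the client library. A minimal sketch using the tweepy library (assuming tweepy 4.x; the credentials are placeholders a human must supply after creating the account):

```python
import tweepy

# Placeholder credentials: a human must create the account and API keys.
auth = tweepy.OAuth1UserHandler(
    "CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET"
)

# wait_on_rate_limit makes tweepy sleep until the window resets instead of
# erroring out, keeping the bot inside Twitter's per-window request budget.
api = tweepy.API(auth, wait_on_rate_limit=True)

# Paginate through followers; the Cursor issues as many requests as the
# rate limit allows and pauses automatically when the budget is exhausted.
for follower in tweepy.Cursor(api.get_followers).items(200):
    print(follower.screen_name)
```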

If automation is the quintessential identifier of a bot, then one must understand which behaviors can be automated easily and are automated frequently. In a paper titled “Detecting Automation of Twitter Accounts: Are You a Human, Bot, or Cyborg?” Chu, Gianvecchio, Wang, and Jajodia (2012) explored some of the identifying characteristics of bot automation, especially in the context of spam. Bots tend to have a disproportionate number of friends relative to followers, are more likely to send out external URLs, and are more likely to be active at times when a human user would not be (for example, sending tweets at regular intervals throughout the day and night). Chu et al. (2012) found that 10.5% of Twitter accounts were bots. Haustein, Bowman, Holmberg, Tsou, Sugimoto, and Larivière (2015), in a study on tweets as impact indicators and the implications of automated bot accounts, noted that 16% of accounts showed a high degree of automation. Haustein et al. referenced automated Twitter bots for arXiv.org that tweet out every new submission without human intervention. They also referenced Bot or Not?, a tool from Indiana University’s Truthy project designed to detect automation in social bots.
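Chu et al.’s detector combines several such signals, including the regularity of tweet timing, since automated accounts tend to post at near-clockwork intervals. The sketch below is a toy version of that idea; the thresholds and the two-of-three rule are illustrative assumptions, not the paper’s actual model.

```python
import math
from collections import Counter

def interval_entropy(timestamps, bin_seconds=3600):
    """Shannon entropy of inter-tweet intervals, binned to the hour.
    Near-clockwork posting yields low entropy, a classic automation signature."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    bins = Counter(int(i // bin_seconds) for i in intervals)
    total = sum(bins.values())
    if total == 0:
        return 0.0
    return -sum((n / total) * math.log2(n / total) for n in bins.values())

def looks_automated(friends, followers, url_fraction, timestamps):
    """Toy two-of-three rule over the cues Chu et al. describe."""
    ratio_flag = friends > 2 * max(followers, 1)      # many more friends than followers
    url_flag = url_fraction > 0.7                     # tweets are mostly external links
    timing_flag = interval_entropy(timestamps) < 1.0  # suspiciously regular posting
    return ratio_flag + url_flag + timing_flag >= 2

# An account tweeting exactly once an hour, mostly links, friends >> followers:
hourly = [i * 3600 for i in range(48)]
print(looks_automated(friends=2000, followers=50, url_fraction=0.9, timestamps=hourly))  # True
```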

[Image: Bot or Not? project by Truthy. Source: truthy.indiana.edu]

“You’re not my dad!” Taking orders from a bot.

Bots’ automation is an inescapable aspect of their existence, but the degree to which it is advertised to a potential audience is another question the bot creator must grapple with. One way to frame the issue of automation transparency is whether the account is labeled as a bot in its name or description. As an example, in a study titled “Botivist: Calling Volunteers to Action,” Savage, Monroy-Hernandez, and Hollerer (2016) used Twitter bots to start conversations around activism and recruit volunteers based on the hashtags and language people used. Their bots were clearly identified as such, and the results were illustrative of people’s attitudes toward bots. When a bot approached someone in a direct way (asking a straightforward question like “How do we solve corruption in our cities?”), it was more effective than when it approached them with an obvious agenda (for example, showing unprompted solidarity with a stranger on corruption). This is the opposite of what works in person: the most effective face-to-face tactics were the least effective for the bot.

Based on the Savage et al. (2016) study, people were suspicious of known bots with an obvious agenda. In a second example of identified bots interacting with the world, Wilkie, Michael, and Plummer-Fernandez (2015) created a number of bots to interact with the UK public on the subject of energy-demand reduction. One bot, @ErtBot, was identified as a bot in its name and targeted users who used the phrase ‘switched off the…’, replying with “(‘Username’) has switched off the (device) and this has helped to reduce our overall energy demand” (p. 86). Because the bot tweeted this indiscriminately, it could not account for outlier situations, such as someone being “switched off” from the internet because their mobile phone had been confiscated. That user responded to the bot’s pro-environmental message by stating “shut up its not good news for me” (Wilkie, Michael, & Plummer-Fernandez, 2015, p. 96).
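Wilkie et al. do not publish @ErtBot’s code, so the following is a speculative reconstruction of the pattern: a regular-expression trigger filling a fixed reply template. Its blindness to context, the exact failure described above, is visible in the sample run.

```python
import re

# Hypothetical reconstruction of the @ErtBot trigger, not its actual source.
PATTERN = re.compile(r"switched off the (\w+)", re.IGNORECASE)

def ert_style_reply(username, tweet_text):
    match = PATTERN.search(tweet_text)
    if not match:
        return None
    device = match.group(1)
    # The template cannot see context, which is exactly how the bot ended up
    # congratulating a user whose phone had been confiscated.
    return (f"{username} has switched off the {device} and this has "
            f"helped to reduce our overall energy demand")

print(ert_style_reply("@someone", "dad switched off the internet, ugh"))
```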

On the other hand, when bots were not trying to assert intrinsically human opinions, but rather asking questions or taking in information, being a bot did not harm their prospects. In a study titled “Bots in Our Midst,” Friedman et al. (2011) asked questions of users in an online virtual world via a chat client. While the response rate to humans was higher than to bots (66% vs. 35%), users responded more positively to the bots and gave the humans more negative answers. When interacting with the public as a bot, then, it is important to keep in perspective how much authority and legitimacy the audience perceives a bot to have in asking humans such questions.

The uncanny valley and lessons to be learned

Part of the problem with bots expressing opinions is that once a bot starts trying to act like a human (expressing opinions, emotion, etc.), it runs the risk of doing so poorly, or at least not as convincingly as a human respondent might wish. This places the Twitter bot at risk of falling into the uncanny valley, where a robot’s almost-but-not-quite human appearance or behavior prompts a sense of unease or revulsion. Pensky (2014), in an article about bots and the new uncanny valley, noted that Twitter bots were framed in contrast to “real” accounts. Their inability to grasp the nuances of language made them funny, but when they start trying to act more human-like (for example, expressing a value judgment about corruption), they enter the uncanny valley, lose their funny or entertaining status, and become eerie. As Savage said in an interview with MIT’s Technology Review, “People actually started questioning whether bots should be involved in this kind of initiative and stopped participating” (“How Twitter Bots Turn Tweeters into Activists,” n.d.).

How much automation is too much, and how much faux-humanity is too much, are key questions when using a bot to build an audience on Twitter. As evident from Savage et al.’s (2016) findings, people disengage when a robot approaches them and makes statements that are too human. On the other hand, a lighter, more robotic touch can also backfire, as it did for Wilkie et al. (2015). To automate a bot optimally for a given purpose, the creator must understand the goals they are trying to accomplish in automating their audience building and examine whether a bot’s touch is appropriate at all. If it is not, how could a bot be used and automated in a way that accomplishes the creator’s audience-building goals without alienating the very audience they are attempting to build?

Semi-autonomy and future

What is a cyborg bot?

After considering what spam looks like on Twitter and how automated bots behave, it is worthwhile to consider some future implications and applications of Twitter bots. One such consideration is the intersection between humans and bots, namely the cyborg, which Chu et al. (2012) call either a bot-assisted human or a human-assisted bot, depending on one’s orientation in the equation (p. 811). A cyborg on Twitter has many advantages. Many of the issues mentioned in the previous section, such as Wilkie et al.’s (2015) bot construing someone’s phone being taken away as a win for the environment, or Savage et al.’s (2016) complications with bots showing solidarity with activists, can be minimized when a bot creator or Twitter user takes the cyborg approach. Each actor brings different strengths: humans contribute nuance, an understanding of language, and situational awareness, while bots contribute machine speed and tirelessness at repetitive tasks. By taking advantage of each, someone building an audience can harness the automation of a bot while still injecting the Twitter account with at least some humanity.
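Architecturally, the cyborg pattern can be as simple as a queue between the two halves: the bot drafts candidate tweets, and a human approves, edits, or discards each one before anything is posted. A minimal sketch of that division of labor; the draft template and review loop are illustrative.

```python
from queue import Queue

# The "cyborg" pattern: the bot drafts replies automatically, but a human
# reviews each one before it is posted.
drafts = Queue()

def bot_draft(tweet):
    """Automated half: generate a candidate reply for a matching tweet."""
    drafts.put(f"Thanks for mentioning #energysaving, @{tweet['user']}!")

def human_review(post_fn):
    """Human half: approve, edit, or discard each draft before posting."""
    while not drafts.empty():
        draft = drafts.get()
        decision = input(f"Post? [y/n/edit] {draft!r} > ")
        if decision == "y":
            post_fn(draft)
        elif decision.startswith("e"):
            post_fn(input("Edited reply: "))

bot_draft({"user": "example_user"})
human_review(post_fn=print)  # swap print for a real API call
```

The human never has to compose from scratch, and the bot never posts unsupervised; the tedium is automated while the judgment stays human.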

Any time automation is introduced, complications can arise, and this is as true of cyborg bots as of fully automated ones. Woolley et al. (2016) explore bots at large in their botifesto, “How to Think About Bots.” In discussing the use of bots to build an audience for engagement in online political activism, Woolley et al. (2016) suggest thinking of most social bots as semi-automated actors: automated actions and tasks sprinkled with human input and direction. This semi-automation comes with its own complications. Bots are created by humans, who encode their values into the bots they create, but at the same time, as Woolley et al. (2016) write, “They also live on—and perform—on an unpredictable internet of nearly limitless input and output.” Bots carry out their automated actions on their own, yet they remain inextricably linked to their creators.

[Image: Self-aware social bots? Source: motherboard.vice.com]

Bots and intellectual property rights

The paradoxical nature of bots’ lives extends to who is responsible for their behavior. If a bot does something, is the bot or the creator responsible? Is it both? In 2015, the question arose with a Twitter bot created by Jeffry van der Goot, which, in its string of random and somewhat nonsensical phrases, issued a death threat to another user on Twitter. Following a police investigation in the Netherlands, van der Goot said, “apparently I’m responsible for what the bot says, since it’s under my name and based on my words” (Singleton, 2015). In another example, the Random Darknet Shopper, a bot that made random purchases for a Swiss art exhibit, bought ecstasy and had it mailed to the exhibitors. Though art oriented toward the public interest is legal in Switzerland, the group behind the bot has said they ultimately take responsibility for what it does (Woolley et al., 2016). From a legal standpoint, where criminal actions are concerned, it seems that, at least for now, a bot’s creators are responsible for the bot’s actions.

The legal ramifications of bots are not limited to criminal proceedings like drug buys or death threats. There is also the matter of who owns the intellectual property of what a bot creates. If the end goal of a bot creator is simply to build an audience with preexisting, human-authored content, the intellectual property of the text is clear, but what about van der Goot’s bot’s death threats? Van der Goot may have been culpable in the act of issuing the death threat, but is the text of the death threat his? He may not want to own a death threat, but in an audience-building context, ownership of the IP may matter. For example, if a piece of text created by an audience-building bot went viral and came to have monetary value, who would own the text? These questions are beyond the scope of this paper but are worth exploring if one is to use Twitter for building an audience, especially if the ultimate hope is to monetize it; once money is involved, the stakes for who owns what become much higher.

Future implications

A Twitter bot creator will need to consider the intellectual property rights of bots in the future, but a discussion of the future of bots does not end with intellectual property or criminal proceedings. It also extends beyond Twitter. For example, Lunchbot, which eventually became Howdy, is a bot that lives inside Slack and will schedule meetings, delegate tasks, or even order you lunch (Newton, 2016). It performs a clear function within organizations and does so in a very bot-like manner: it takes a predetermined set of rules and applies them with a measure of human input (such as Phil being sick of deli sandwiches day in, day out).
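As a toy illustration of that “predetermined rules plus human input” pattern (the rules and names here are hypothetical, not Howdy’s actual logic):

```python
# Fixed rules: everyone's default lunch order.
DEFAULT_ORDER = {"phil": "deli sandwich", "ana": "salad"}
# Human input: Phil is sick of deli sandwiches day in, day out.
OVERRIDES = {"phil": "ramen"}

def todays_orders():
    """Apply the predetermined rules, letting human overrides win."""
    return {person: OVERRIDES.get(person, default)
            for person, default in DEFAULT_ORDER.items()}

print(todays_orders())  # {'phil': 'ramen', 'ana': 'salad'}
```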

Newton’s article and Woolley et al.’s (2016) botifesto paint a picture of an increasingly bot-filled world. Bots and their anonymity need to be approached with caution wherever the potential for automated abuse, coercion, and deceit exists, but all this must be done while preserving what makes bots so fun and useful. Bots carry the potential for malice, for example through spam on Twitter, but the world would not be the same place without them. Woolley et al. (2016) sum it up nicely: “Wholesale elimination of bots on social media, would, after all, also get rid of bots doing important work in journalism and silence the variety of bots appreciated for the comedy and ‘botness.’”

Conclusion

The use of bots carries a fair number of challenges and pitfalls, especially when they are deployed to build a community of humans. Using a bot on Twitter to build an audience is subject to the same pitfalls as the studies and projects outlined in this paper. Paying attention to spamming, to a bot’s identity, and to the future of social bots is important when building an audience by automated means, especially since the actions carried out by bots often happen without continual human input or oversight. This paper serves as a jumping-off point for exploring the possibilities and ramifications of using bots in social contexts. There is often a fine line between just right and all wrong, both in avoiding spam and in being an approachable, interesting bot. Both semi-automated and fully automated bots have the potential to help steward audience building, but they ultimately require careful human input and consideration.



References

Automation rules and best practices. (n.d.). Retrieved March 9, 2016, from https://support.twitter.com/articles/76915

Chu, Z., Gianvecchio, S., Wang, H., & Jajodia, S. (2012). Detecting Automation of Twitter Accounts: Are You a Human, Bot, or Cyborg? IEEE Transactions on Dependable and Secure Computing, 9(6), 811–824.

Dubbin, R. (2013, November 14). The rise of Twitter bots. The New Yorker. Retrieved March 9, 2016, from http://www.newyorker.com/tech/elements/the-rise-of-twitter-bots

Friedman, D., Doron, F., Beatrice, H., Anat, B., & Peleg, T. (2011). Bots in Our Midst: Communicating with Automated Agents in Online Virtual Worlds. In Lecture Notes in Computer Science (pp. 441–442).

Haustein, S., Bowman, T. D., Holmberg, K., Tsou, A., Sugimoto, C. R., & Larivière, V. (2015). Tweets as impact indicators: Examining the implications of automated “bot” accounts on Twitter. Journal of the Association for Information Science and Technology, 67(1), 232–238.

How Twitter Bots Turn Tweeters into Activists. (n.d.). Retrieved March 13, 2016, from https://www.technologyreview.com/s/544851/how-twitter-bots-turn-tweeters-into-activists/

McCord, M., & Chuah, M. (2011). Spam Detection on Twitter Using Traditional Classifiers. In Autonomic and Trusted Computing (pp. 175–186). Springer Berlin Heidelberg.

Newton, C. (2016, January 6). Bots are here, they’re learning — and in 2016, they might eat the web. Retrieved March 13, 2016, from http://www.theverge.com/2016/1/6/10718282/internet-bots-messaging-slack-facebook-m

Pensky, N. (2014, July 2). Twitter bots and the uncanny valley. Retrieved March 13, 2016, from http://www.dailydot.com/technology/twitter-bots-uncanny-valley/

Savage, S., Monroy-Hernandez, A., & Hollerer, T. (2016). Botivist: Calling volunteers to action using online bots. Paper presented at the ACM Conference on Computer-Supported Cooperative Work and Social Computing. ACM.

Singleton, M. (2015, February 12). Man questioned by police after his Twitter bot makes death threats. Retrieved March 14, 2016, from http://www.theverge.com/2015/2/12/8025475/twitter-bot-police-death-threats

The Twitter Rules. (n.d.). Retrieved March 9, 2016, from https://support.twitter.com/articles/18311#

Wald, R., Khoshgoftaar, T. M., Napolitano, A., & Sumner, C. (2013). Predicting susceptibility to social bots on Twitter. In Information Reuse and Integration (IRI), 2013 IEEE 14th International Conference on (pp. 6–13). IEEE.

Wang, A. H. (2010). Don’t follow me: Spam detection in Twitter. In Security and Cryptography (SECRYPT), Proceedings of the 2010 International Conference on (pp. 1–10). IEEE.

Wilkie, A., Michael, M., & Plummer-Fernandez, M. (2015). Speculative method and Twitter: Bots, energy and three conceptual characters. The Sociological Review, 63(1), 79–101.

Woolley, S., Boyd, D., Broussard, M., Elish, M., Fader, L., Hwang, T., … Shorey, S. (2016, February 23). How to Think About Bots. Retrieved March 13, 2016, from http://motherboard.vice.com/read/how-to-think-about-bots

2 Replies to “Please Bot Responsibly”

  1. Please do not bot at all

    One day in February 2006, Glass, Dorsey, and a German contract developer, Florian Weber, presented Jack’s idea to the rest of the company. It was a system where you could send a text to one number and it would be broadcast out to all of your friends: Twttr. (Carlson, 2011)

    These were four friends who wanted to get together and create a podcasting platform, which they had named Odeo. But that idea was rendered moot when Apple announced iTunes would include a podcasting platform built into Apple devices. So Odeo turned into Twttr, later spelled Twitter, but the core idea of shareability remained.

    This raises the question: why bots?

    Erik’s essay is well-researched and deals with the implications of Twitter bots while throwing light on the extensive studies exploring the different types of bots. However, my question is: why create a bot at all? If shareability is the main point here, shouldn’t humans be better at it? Or more importantly, didn’t humans create the platform to share what they felt? A bot, as Erik has pointed out in his essay, cannot feel, think for itself, or show emotions such as anger or joy. So why use a bot to deliver @RealHumanPraise? What is the point? Why inflate Twitter followers just to follow a piece of software? Where is the human element in that? “…social bots could be damaging because they are becoming more sophisticated and harder to detect.” (Morrison, 2014)

    So, why have social bots at all? Let humans do the socializing!

    According to a study by scientists from Indiana University, “These bots mislead, exploit and manipulate social media discourse with rumours, spam, malware, misinformation, political astroturf, slander or even just noise. This results in several levels of societal harm. For example, bots used in political astroturf artificially inflate support for a candidate; their activity can endanger democracy by influencing the outcome of elections.” (Morrison, 2014)

    I find it really hard to empathize with the so-called beneficial aspect of a Twitter bot, as I am not a Twitter user at all, but even converts have started to agree that bots are just a nuisance. Nobody wants to find out they were following a piece of software rather than an actual human, or, worse still, that their 1,000-plus followers are made up of bots! That’s a downer. So remove the “imposed” beneficial element and all we have is spam, no different from a malicious virus, and it should actually be stopped.

    “Bots are also easily used for malicious purposes, and as the paper speculates, any organization with sufficient resources and motivations could dominate a topic or conversation. Bots aren’t just bad for business, they’re bad for society.” (Morrison, 2014)

    Sources cited:

    Morrison, K. (2014). Social bots on Twitter are more than a minor nuisance. SocialTimes. Accessed April 13, 2016, from http://www.adweek.com/socialtimes/social-bots-twitter-minor-nuisance/202548

    Carlson, N. (2011). The real history of Twitter. Business Insider. Retrieved from http://www.businessinsider.com/how-twitter-was-founded-2011-4

  2. This essay is well-researched, and gets increasingly interesting as it dives deeper into the more philosophical aspects of bots. It should, in my opinion, be flipped. It should start where it ends, on those unclear, controversial, and open questions about bots, and speak to those with a peppering of the more technical aspects that were used to introduce the essay. The same content, just organized differently, would have greatly strengthened the work.
