Social media networks, especially Twitter and Tumblr, have proven to be hospitable environments for functional and artistic bots because they operate as publishing and development platforms that encourage automation and produce massive data streams.
Social bot accounts (Sybils) have become more sophisticated and deceptive in their efforts to replicate the behaviors of normal accounts. The term “Sybil” comes from the subject of the book Sybil (a woman diagnosed with dissociative identity disorder).
Bots can be designed with good intentions. They can protect the anonymity of members, as mentioned in related work, or automate tasks and perform them much faster than humans: automatically pushing news or weather updates, adding a template to all Wikipedia pages in a specific category, or sending a courtesy thank-you message to new followers.
They can also be extremely sophisticated, capable of (i) generating pseudo-posts that look human-written in order to interact with humans on a social network, (ii) reposting the posts, photographs, or statuses of others, (iii) adding comments or likes to posts, and (iv) building connections with other accounts.
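As a toy illustration of the benign side, here is a minimal sketch of the "thank new followers" task mentioned above. It is pure Python with no real social-network API; the handles and the `thank_new_followers` function are hypothetical, and in a real bot the follower sets would come from the platform's API:

```python
def thank_new_followers(previous, current):
    """Return a courtesy message for each follower new since the last check.

    `previous` and `current` are sets of follower handles; fetching them
    from an actual social network is out of scope for this sketch.
    """
    new = sorted(current - previous)
    return [f"Thanks for the follow, @{handle}!" for handle in new]

# Example run with made-up handles:
messages = thank_new_followers({"alice", "bob"}, {"alice", "bob", "carol"})
print(messages)  # ['Thanks for the follow, @carol!']
```

A real bot would run this on a schedule and post each message through the platform's API, subject to its automation rules.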
But like many technological inventions, they also have darker applications. Malicious social bots can disseminate misinformation, serve as a convenient vehicle for propaganda, or be leveraged to produce fake ratings and reviews; so-called influence bots exist for exactly these purposes. A simple search on any search engine also turns up many web pages that sell fake followers and likes, some even for free.
Social networks are powerful tools that connect millions of people around the world, which makes them attractive targets for social bots as well. The harm social bots can cause, including identity theft, astroturfing, content pollution, follower fraud, and misinformation dissemination, should not be underestimated.

Is there greater cause for worry further up the literary food chain, given the circulating rumor that GPT-2 bots can also write books?
No doubt Amazon would lick its lips at the prospect of being able to sell completely computer-generated books (you don’t need to pay royalties to an algorithm). Up to now, though, these are just giant automated plagiarism machines that mash together bits of stories written by human beings rendered invisible by AI rhetoric. The major GPT-2 glitch is that it is sometimes prone to what its developers call “world-modeling failures”, e.g. “the model sometimes writes about fires happening underwater”.
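The "mashing together bits of human-written text" idea can be made concrete with a toy sketch. GPT-2 itself is a neural language model, not a Markov chain, but a bigram model shows the general principle of statistical recombination: every generated word is lifted from some human-written source. The corpus string here is invented for illustration:

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Map each word to the list of words that follow it in the source text."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length=8, seed=0):
    """Walk the bigram table, recombining fragments of the source text."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the lion could speak and the lion could roar"
model = build_bigram_model(corpus)
print(generate(model, "the"))
```

Scaled up by orders of magnitude and replaced with learned neural representations, this is still recombination of human text, which is why the output can be fluent while the "world model" behind it fails.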
A more realistic hope for a text-only program such as GPT-2, meanwhile, is simply as a kind of automated amanuensis, generating the elusive raw material that human writers can then edit and polish. But until robots have rich inner lives and understand the world around them, they won’t be able to tell their own stories. And even then.
Maybe there is no better way to conclude this post than by quoting Wittgenstein: “If a lion could speak, we could not understand him”.