GPT-2 can generate entire articles, blog posts, and other text samples of unprecedented quality, often indistinguishable from text written by humans. This has the potential not only to automate content creation for online media but also to generate human-like abusive messages at a scale never before achieved, with broad implications for influencing public opinion via social media and for the proliferation of fake news.
OpenAI detailed its findings in a new article and research paper, and decided not to release the fully trained model "due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale." A smaller version of GPT-2 has been released along with sample code, but researchers broadly agree that it is only a matter of time before this technology falls into the wrong hands. More research is needed now to defend against this emerging threat.
Source: OpenAI