A new AI text generator called GPT-2, a language model built by the non-profit artificial intelligence research company OpenAI, is so eerily good that its creators are worried about releasing it to the world.
OpenAI was founded in 2015 with $1bn backing from Elon Musk and others.
The predictive text system can take a piece of writing and generate many more paragraphs in the same vein. It was trained on a dataset of 8 million web pages and is so good at mimicking the style and tone of a piece of writing that it has been described as a text version of deepfakes.
Researchers said the software has difficulty with “highly technical or esoteric types of content” but otherwise produces “reasonable samples” just over 50 percent of the time.
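GPT-2 itself predicts the next token with a large neural network trained on those 8 million web pages, but the basic idea of predictive text — extending a passage by repeatedly guessing what comes next — can be sketched with a toy word-level Markov chain. This is a deliberate simplification for illustration, not OpenAI's actual method:

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Map each sequence of `order` words to the words observed after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, seed, n_words=30, order=2):
    """Extend `seed` by repeatedly sampling a plausible next word."""
    out = seed.split()
    for _ in range(n_words):
        key = tuple(out[-order:])
        candidates = model.get(key)
        if not candidates:  # no continuation was seen in training text
            break
        out.append(random.choice(candidates))
    return " ".join(out)
```

Where this toy uses a lookup table of word sequences, GPT-2 uses a Transformer network with hundreds of millions of parameters, which is what lets it carry style, tone, and topic across whole paragraphs rather than a few words.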
Journalists at The Guardian were also alarmed by its power. Hannah Jane Parkinson wrote:
Fed one of my articles, the OpenAI computer wrote an extension of it that was a perfect act of journalistic ventriloquism. This AI has the potential to be absolutely devastating. It could exacerbate the already massive problem of fake news and extend the sort of abuse and bigotry that bots have already become capable of doling out on social media.
And OpenAI said in a blog post:
Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights. This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas.
Jack Clark, Policy Director at OpenAI, tweeted:
The decision not to make GPT-2 publicly available was not about hyping the research. The main thing for us here is not enabling malicious or abusive uses of the technology. We’ve published a research paper and a small model. A very tough balancing act for us.
Ultimately, it will likely be a question of whether the good outweighs the bad.