OpenAI Won’t Release AI Text Generator, Branding it Too Dangerous


Researchers who created an artificial intelligence writer are withholding the technology on the grounds that it could be used for ‘malicious’ purposes. 

A team of researchers at OpenAI, a San Francisco-based research institute, has shared new research on using machine learning to create an artificial intelligence (AI) system capable of producing natural language: GPT-2, a successor to GPT.

The team claims that its new AI writing system can produce coherent articles from only a brief prompt, without supervision, meaning it does not have to be retrained to write about different topics. It also has the potential to improve AI writing assistants, create more capable dialogue agents, enhance unsupervised translation between languages and provide better speech recognition.

Backed by Silicon Valley bigwigs such as Elon Musk and Peter Thiel, OpenAI’s researchers have expressed concern that the new system is too realistic and could be used for the mass production of convincing fake news, the generation of spam and phishing content, and the impersonation of others online. Due to these worries, the team has said it will release only a much smaller version of GPT-2, along with sampling code.
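For readers curious what such ‘sampling code’ looks like in practice, below is a minimal sketch of drawing a continuation from the smaller, publicly released GPT-2 model. It assumes the Hugging Face transformers library and its "gpt2" checkpoint name, which are not part of OpenAI’s own release, and the prompt text is purely illustrative.

    # A minimal sketch of sampling from the smaller, publicly released GPT-2
    # checkpoint. Assumes the Hugging Face `transformers` package and its
    # "gpt2" model name; this is a third-party wrapper, not OpenAI's own
    # sampling code, and the prompt below is purely illustrative.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "Scientists announced today that"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Generate a continuation token by token, sampling from the model's
    # predicted distribution rather than always taking the most likely word.
    output = model.generate(
        **inputs,
        max_length=80,
        do_sample=True,
        top_k=40,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(output[0], skip_special_tokens=True))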


Instead of scraping data indiscriminately from the web, the AI text generator was trained only on content from pages linked to from the popular link-sharing website Reddit.

The system then narrows that content down further to links with a ‘karma’ score of three or more, meaning at least three users had up-voted the link, thereby marking it as valuable. Using this data, the AI then generates a new story word by word, which results in text that is highly coherent but largely untruthful.
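As a rough illustration of that filtering step, the sketch below keeps only links whose karma score is at least three before the linked pages would be scraped for training text. The record structure and names here are hypothetical; OpenAI’s actual data pipeline has not been released.

    # Illustrative sketch of the karma filter described above: keep only links
    # posted to Reddit that earned a karma score of at least 3, as a rough
    # proxy for "a human found this page worth reading". The example records
    # are made up; OpenAI's real data pipeline is not public.
    MIN_KARMA = 3

    posted_links = [
        {"url": "https://example.com/article-a", "karma": 5},
        {"url": "https://example.com/article-b", "karma": 1},
        {"url": "https://example.com/article-c", "karma": 3},
    ]

    def filter_by_karma(links, threshold=MIN_KARMA):
        """Return the URLs of links that received at least `threshold` karma."""
        return [link["url"] for link in links if link["karma"] >= threshold]

    training_urls = filter_by_karma(posted_links)
    print(training_urls)  # the pages at these URLs would then be scraped for text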

The AI is not infallible, though, and will sometimes generate passages of nonsensical text containing blatant inaccuracies. In one demonstration given to the BBC, the AI wrote that a protest march had been organised by a man named “Paddy Power” – a widely recognisable bookmaker in the UK.

Researchers noted various ‘failure modes’ in the system, which resulted in repetitive text, world-modelling failures and unnatural topic switching.

Other independent AI researchers have expressed doubt over the claims made by the OpenAI team, with some highlighting that the announcement had not been peer-reviewed. Benjamin Recht, associate professor of computer science at UC Berkeley, said: “They have a lot of money, and they produce a lot of parlour tricks.”

OpenAI said that it wanted its system to spark a debate on how much AI should be used and controlled, adding: “We think governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems.”

Brandie Nonnecke, director of Berkeley’s CITRIS Policy Lab and a specialist in the societal impact of technology, told the BBC that such misinformation was inevitable. Nonnecke said that the debate around this topic should focus more on the platforms, such as Facebook, that disseminate this information.

“It’s not a matter of whether nefarious actors will utilise AI to create convincing fake news articles and deepfakes, they will,” she told the BBC.

“Platforms must recognise their role in mitigating its reach and impact. The era of platforms claiming immunity from liability over the distribution of content is over. Platforms must engage in evaluations of how their systems will be manipulated and build in transparent and accountable mechanisms for identifying and mitigating the spread of maliciously fake content.”

Alex Croucher, director at VKY Intelligent Automation, said that the technology itself was not the main issue, and that simply blocking its release would not solve the problem. Instead, he said, the problem lay with the credibility given to fake news.

Croucher told DIGIT: “The threat arises when fake stories are afforded credibility by so-called ‘influencers’ sharing the questionable content with impunity. This is followed by fake news stories being syndicated around the world through genuine news sites, which makes these fake stories appear ‘real enough’ to believe.

“This can be compounded by state-sponsored social media trolls and bots engaging in ‘us and them’ discussion to polarise opinion on the story and related topics.

“In the age of the internet, individuals must take responsibility for the trust they place in news stories, just as they would when hearing a tall tale from a human. Influencers and media outlets must be held to account by regulators to change the culture that allows blatant propaganda to be disseminated.”


This new software from OpenAI has come at a time of heated debate around the ethical use of AI, and global concern over the spread of disinformation.

European regulators are taking a stronger stance against tech firms that fail to take proper measures to marshal their platforms more effectively against fake news and harmful content.

Today, a UK Commons Committee called for greater regulation of how information is shared on social media platforms, saying big tech companies are failing in the duty of care they owe their users to act against harmful content. This approach suggests that government officials believe it is the platforms, rather than the technology, that pose the real risk to the public.

The AI debate itself has shifted focus from whether the technology will work and how it can be applied to questions of ethics, explainability and fairness. Its potential to benefit humankind is evident; however, clearer regulations may be required if serious pitfalls in its application are to be avoided.


