This story may have been written by a real journalist, but reader beware: the same cannot be said for everything you read online.

Researchers have demonstrated how AI - artificial intelligence - could create its own fake news with very little human input.

Releasing such a system publicly, however, raises concerns about misuse.

OpenAI, whose backers include Elon Musk and Microsoft, has decided not to release the full version of the AI system, known as GPT-2, over concerns it could be abused to produce convincing fake news on a large scale.

Instead, it released a much smaller version of the model.

The team working on the project developed a way for the AI to continue writing stories on its own, drawing on a dataset of eight million web pages, after being given just a few human-written lines as a prompt.
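GPT-2 itself is a large transformer language model, far beyond anything shown here. Purely as a toy illustration of the prompt-then-continue idea - a word is sampled, appended, and used to pick the next word - the following bigram (Markov-chain) sketch captures the loop; it is emphatically not OpenAI's method, and the tiny corpus is invented:

```python
import random
from collections import defaultdict

# Toy bigram language model illustrating prompt continuation.
# GPT-2 is a transformer trained on millions of web pages; this
# Markov-chain sketch only demonstrates the prompt -> continuation loop.

def train_bigrams(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def continue_prompt(follows, prompt, length=10, seed=0):
    """Extend a prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # dead end: no word ever followed this one
        out.append(rng.choice(candidates))
    return " ".join(out)

# Invented miniature "corpus" and prompt for demonstration only.
corpus = "the model writes the story and the story continues"
model = train_bigrams(corpus)
print(continue_prompt(model, "the model", length=5))
```

A real system replaces the bigram table with a neural network that scores every possible next token given the entire preceding text, which is what lets it stay coherent over whole paragraphs.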

It specifically used text from outbound links posted on the social news aggregation site Reddit that had received at least three karma, treating that score as an indicator of quality and value.
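The quality filter described above amounts to keeping only links whose posts cleared a karma threshold. A minimal sketch of that idea, with invented records and hypothetical field names (this is not OpenAI's actual data pipeline):

```python
# Hypothetical sketch of the karma-threshold filter described above:
# keep only outbound Reddit links whose posts received at least 3 karma.
# Record structure and field names are illustrative, not OpenAI's code.

MIN_KARMA = 3

def filter_links(links):
    """Return URLs from posts with at least MIN_KARMA karma."""
    return [link["url"] for link in links if link["karma"] >= MIN_KARMA]

sample = [
    {"url": "https://example.com/a", "karma": 5},
    {"url": "https://example.com/b", "karma": 1},
    {"url": "https://example.com/c", "karma": 3},
]

print(filter_links(sample))  # drops the 1-karma link
```

Using human upvotes as a cheap proxy for page quality is the design choice of note: it avoids hand-curating eight million pages while still screening out most low-value content.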

Several examples published by the researchers show the AI writing false claims, such as recycling being bad for the planet, in an authentic-sounding tone.

OpenAI said that while this kind of AI technology can have beneficial uses, it can also be put to malicious ones.

"The public at large will need to become more sceptical of text they find online," the company said, "just as the 'deep fakes' phenomenon calls for more scepticism about images."

The researchers warned that such AI systems could be used not only to create misleading news articles, but also to impersonate others online, automate the production of abusive or fake social media posts, and automate the creation of spam or phishing content.

The group said further research is needed to build "better technical and non-technical countermeasures" against "as-yet-unanticipated capabilities for these actors", and urged governments to monitor the impact of such technologies.