Last month, the Guardian published an article that it said had been written from scratch by an artificial intelligence (AI) robot, GPT-3.
The piece went viral, with many online readers sharing mixed reactions, from impressed to disturbed to downright scared (with the most terrified understandably being journalists).
If efficiency in news reporting becomes even more important, what's to stop AI writers becoming the main source of content, especially if they're able to write up news pieces/features quicker than human writers? (2/2)
— Jane Corscadden (@janeinator) September 8, 2020
The powerful new language-generating model was developed by OpenAI and trained on one of the biggest data sets ever assembled; essentially, a vast swathe of the Internet.
However, human journalists should not worry just yet about their roles being whisked out from under them by their android counterparts any time soon.
That’s according to Austin Tanney, the head of AI at Belfast-based software firm, Kainos.
RELATED: How is AI helping throughout the Covid-19 pandemic?
He told Sync NI that it’s important to realise, “the article was not written exclusively by AI.”
“The Guardian got three or four different articles and spliced them together. They also gave it the opening paragraph. It doesn’t belittle what it is. It was written by AI, but there was human editorial on that,” he said.
“Is this going to impact the future of journalism? Potentially. What does it really mean?
“It’s basically trained on a generalised model but also it predicts, well what would people usually say next or what would this particular person say?
“That’s the way these models tend to work. It’s trained on a massive data set and it looks for patterns. If you trained it on kids’ fairy stories and you said the word ‘once’ it would suggest ‘upon a time’.
“What it’s doing is replicating things that have been said lots before. The thing with GPT-3 that is different, is that it’s not figuring out what the rest of the sentence is. It’s extrapolating in longer form and coming up with the next sentence, and the next sentence, and so on.”
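The pattern-matching Austin describes can be illustrated with a deliberately simple sketch. The toy bigram model below (an assumption for illustration only; GPT-3 itself is a vastly larger neural network, not a word-count table) learns “what usually comes next” from a handful of fairy-story sentences, so that the word ‘once’ leads it to suggest ‘upon’:

```python
from collections import Counter, defaultdict

# Toy training text, standing in for the "kids' fairy stories" example.
corpus = (
    "once upon a time there was a princess . "
    "once upon a time there lived a dragon . "
    "once upon a midnight dreary ."
).split()

# Count how often each word follows each other word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("once"))  # -> upon
print(predict_next("upon"))  # -> a
```

Chaining such predictions one step at a time, as GPT-3 does at far greater scale and sophistication, is how a model extrapolates a next sentence, and the one after that.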
I think automated journalism will become more popular but there's no reason to panic yet. The GPT 3 article became a bit contradictory at times but it will surely improve with time. When it comes to reporting and questioning on key issues we still need journalists.
— Mark McKillen (@McKillenReport) September 8, 2020
Austin doesn’t deny that GPT-3 is impressive, and agrees that it could affect the future of journalism, but noted that when we build artificial intelligence tools, we build them, in many ways, to supplement human intelligence, not substitute for it.
“It doesn’t replace human ability but if you could use a tool like this to write a pile of copy for you that you then spliced down to get the message across, it might be a way of generating content more easily,” he continued. “But I still think it’s a long way from getting AI to write you an article on what happened in Syria today (for example), and it coming up with something comprehensive.”
One Twitter user stated: “What’s most terrifying is that the efficiency of AI journalism will be exploited for nefarious purposes, and fake news, albeit highly persuasive, will be churned out at a rate that’s impossible for humans to rebut or correct in real-time.”
Austin responded that of course, any tech in the wrong hands is dangerous, but pointed out that fake news and dubious content are already being written by humans, without the use of bots.
RELATED: COVID-19: Why do people create fake news and why do others want to believe it so badly?
“Can AI be used to generate the content more quickly and easily?” he pondered. “Yes, but they did it without the bots anyway. It doesn’t take much to pay a young, junior writer and say ‘we’d like you to write an article about X, Y and Z’.”
Austin isn’t wrong. Many journalists have been approached by organisations with fake social media accounts associated with ‘front’ organisations. One such reporter was Jack Delaney, who was approached by a seemingly left-wing publication and asked to write a column for it.
It turned out that the media outlet, ‘PeaceData’, was potentially part of a Russian disinformation campaign.
Delaney was left feeling “confused, embarrassed, and frankly angry” at the possibility of a big writing break clouding his judgement.
But he isn’t the only one to fall foul of such trickery.
We know now from the 2016 US election, the Cambridge Analytica scandal and so on, that social media and fake news, pumped with money and meddling, go hand in hand.
This is why Austin Tanney doesn’t fear that “probably the most advanced technology in natural language processing that’s ever existed” will have much more of a negative impact on this process.
“I don’t see it having a massive impact from that perspective because humans are doing this anyway.”