
How can AI stop 'DeepFake' videos that undermine elections?

  • This article was written by the team at Liopa, a Belfast-based AI firm that develops technology to understand speech from lip movements alone, augmenting audio speech recognition and supporting biometric applications.

    You may have seen this video, a “DeepFake”, publicised on the BBC. It puts under threat the very foundations of democracy.

    In the fake video, Labour leader Jeremy Corbyn is seen to endorse Conservative leader Boris Johnson, and vice versa – each politician telling viewers to vote for the other in the 12 December Parliamentary election.

    Obviously, it’s easy to see that this particular video is a fake. It was created as an academic exercise, to show how falsified videos can dangerously undermine the democratic process. In this case, the video was produced by the research organisation Future Advocacy and UK artist Bill Posters.

    What can be done about DeepFakes?

    We believe that we may have one solution for detecting DeepFakes – but first, it’s worth examining why they are so dangerous in today’s political climate.

    Undermining democracy

    It’s easy to see how false information could sway voters. At a time of relative political unrest, voters are sensitive to the information fed to them through their social media feeds. Voters may also have the sense that whatever appears in their feed has been handpicked just for them. That sense of “personal communication” can make an ad more powerful than the same message seen on TV, for instance. Perhaps it’s no surprise that polls seem to shift on a daily basis – and in any case, polls no longer seem to be accurate predictors of election outcomes.


    (Image: (c) BBC News)

    The rising pace of propaganda

    It’s important to remember that propaganda is hardly a new concept – only the vehicle through which it reaches voters has changed. In the age of social media, propaganda can be created quickly and disseminated to millions of people within hours.

    In the United States, so-called “fake news” was blamed, in part, for Trump’s surprise victory over Hillary Clinton in the 2016 Presidential election. Misinformation about Clinton’s health, among other topics, spread virally over social networks like Twitter and Facebook. Closer to home, highly targeted advertisements were blamed for helping sway voters towards ‘Leave’ in the Brexit referendum on 23rd June 2016. Facebook later released those ads to regulators who were seeking to determine whether the targeting had infringed campaign law.

    In both the UK and the US, this information was released mere days before the vote – sometimes the day before – when undecided voters may still have been making up their minds.

    All of this adds up to one conclusion: DeepFakes are just one of many ways that voters can be swayed.

    In November, Twitter made the bold move of banning all political advertisements. CEO Jack Dorsey tweeted the following: “Internet political ads present entirely new challenges to civic discourse: machine learning-based optimization of messaging and micro-targeting, unchecked misleading information, and deep fakes. All at increasing velocity, sophistication, and overwhelming scale.”

    RELATED: Twitter and Google are both tackling political advertising, but Facebook isn't

    This week, Google followed suit, indicating that it will no longer allow “micro-targeting” and that it will take further measures to ban DeepFakes. To date, Facebook has not taken a stance on how it will curb political advertising.

    So, how can DeepFake videos be stopped?

    Clearly, there is a need for automated solutions that can stop DeepFakes. The first step is separating them from legitimate videos – and no human review process could scale to that task, even for a company the size of Facebook. Each day, some 95 million photos and videos are shared on Instagram – and that is only one social network.

    This scale makes one thing clear: the only way to fight an AI-created problem is with AI.

    RELATED: AI vs. AI

    Liopa is developing LipRead, an automated lipreading technology based on deep learning algorithms. We are currently evaluating whether it could serve as a detection tool for DeepFake videos.

    In LipRead, AI algorithms recognise speech from lip movements alone – they can tell what a person on camera is saying from the video, without any audio feed. In a DeepFake video there are significant discrepancies between the lip movements and the audio, because even though the fake is made with powerful AI algorithms, they will never get the lip movements 100% accurate. (At least, not with the computing power available today.)
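    To make this concrete, here is a minimal sketch of how such an audio/visual cross-check might work. It is a sketch under stated assumptions, not Liopa’s actual method: the transcripts are supplied as plain strings (producing them would require a visual speech recogniser such as LipRead, whose API is not public, plus an audio speech recogniser), and the similarity threshold is purely illustrative.

    ```python
    # Sketch: flag a video as a possible DeepFake when the transcript
    # recovered from lip movements disagrees with the transcript taken
    # from the audio track. The transcripts are plain strings here;
    # producing them would require a visual speech recogniser and an
    # audio ASR engine, neither of which is shown.
    from difflib import SequenceMatcher

    SIMILARITY_THRESHOLD = 0.8  # illustrative cut-off, not a tuned value


    def transcript_similarity(visual_text: str, audio_text: str) -> float:
        """Return a rough 0..1 similarity ratio between two transcripts."""
        return SequenceMatcher(None, visual_text.lower(), audio_text.lower()).ratio()


    def looks_like_deepfake(visual_text: str, audio_text: str) -> bool:
        """In a genuine video the two transcripts should largely agree;
        in a DeepFake the synthesised lip movements drift from the audio."""
        return transcript_similarity(visual_text, audio_text) < SIMILARITY_THRESHOLD


    if __name__ == "__main__":
        # Made-up transcripts for a suspect clip:
        visual = "vote for boris johnson this december"  # what the lips appear to say
        audio = "vote for jeremy corbyn this december"   # what the audio says
        print("DeepFake suspected:", looks_like_deepfake(visual, audio))
    ```

    A real system would likely compare time-aligned phonemes or visemes rather than whole transcripts, but the principle is the same: when what the lips say and what the audio says diverge sharply, the video is flagged for review.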

    We believe that LipRead could be one solution to the enormous problem facing today’s democratic process.

    Click here to find out more about how LipRead works.

    About the author

    An article attributed to the Sync NI Team has either involved multiple authors, been written by a contributor, or draws its main body of content from a press release.
