YouTube plans to use AI to enhance child protection

  • Tech giant Google has announced plans to use algorithms and AI to tackle the growing child-safety problem on its online video platform YouTube.

    Investigations into child safety on YouTube began around 2017, when journalists uncovered disturbing copycat cartoons aimed at children, featuring characters such as Elsa and Spider-Man engaging in inappropriate behaviour. This content was even making its way into the YouTube Kids app, which was supposed to be safe for children to view, and reports on the issue led to major advertisers cancelling their contracts with YouTube.

    The most recent revelation comes from journalists investigating the comments on videos featuring children, who found that many of these videos carried sexually explicit comments aimed at the children in them. The fact that such comments aren't being detected and deleted, despite Google's best efforts, has once again hit the company's bottom line as major advertisers have pulled out over the controversy.

    People now routinely upload videos of their children to YouTube in order to share them with friends and family, and families are increasingly starting YouTube channels and vlogs as a way to express themselves online. Sending sexually explicit messages to minors is a crime and has been a growing problem in Northern Ireland recently, with the number of cases reported doubling in the past year.

    The UK Home Office and other groups around the world have put pressure on Google to better curate its platform and help ensure the safety of children using the service, and Google has now made a radical decision: all videos featuring children will have comments automatically disabled. Selected partners will be able to enable comments on videos containing minors, but will have to demonstrate that they are rigorously moderating those comments themselves.

    The technical problem for Google here is one of scale. Hundreds of hours of new video content and countless comments are uploaded to YouTube every minute, so it would be infeasible to have human editors review all of it for suitability. The company relies on algorithms to detect and remove offensive comments and to flag potentially harmful content, but harmful material is still slipping through the net.
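    To illustrate the kind of layered approach automated moderation systems typically take, here is a minimal toy sketch of a comment filter. The blocked terms, the two-tier threshold, and the action names are all invented for this example; they are not Google's actual system, which would rely on machine-learned classifiers rather than keyword lists.

    ```python
    # Toy comment-moderation sketch: one rule-based layer of a larger
    # pipeline. Terms, thresholds, and actions are hypothetical.

    BLOCKED_TERMS = {"predatory", "explicit"}  # invented flag list

    def moderate(comment: str) -> str:
        """Return an action for a comment: 'remove', 'flag', or 'allow'."""
        words = set(comment.lower().split())
        hits = words & BLOCKED_TERMS
        if len(hits) >= 2:
            return "remove"   # high confidence: delete automatically
        if hits:
            return "flag"     # borderline: queue for human review
        return "allow"

    print(moderate("great video!"))  # allow
    ```

    The two-tier design reflects the trade-off the article describes: fully automatic removal only when confidence is high, with borderline cases escalated to the (comparatively tiny) pool of human reviewers.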

    Google now plans to use algorithms and artificial intelligence to identify videos containing minors and automatically disable their comments, though critics have pointed to the unreliability of these techniques given the broad range of possible videos, and to the fact that Google's current algorithmic approach clearly isn't working. It's hoped that better detection methods can be developed to help identify offending content and improve YouTube's child safety.
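    The policy itself, as described above, reduces to a simple decision rule layered on top of the (hard) detection problem. This sketch assumes a hypothetical classifier score `p_minor` and an invented partner allow-list; both are illustrative, not Google's implementation.

    ```python
    # Toy sketch of the comment-disabling policy: comments are switched
    # off when a video likely features a minor, unless the channel is a
    # vetted partner that moderates comments itself. All names invented.

    TRUSTED_PARTNERS = {"family_channel_42"}  # hypothetical allow-list

    def comments_enabled(channel_id: str, p_minor: float,
                         threshold: float = 0.5) -> bool:
        """Decide whether comments stay enabled on a video."""
        if p_minor < threshold:
            return True  # no minor detected: comments stay on
        # Video likely features a minor: only vetted partners keep comments
        return channel_id in TRUSTED_PARTNERS

    print(comments_enabled("random_channel", p_minor=0.9))  # False
    ```

    Note that the policy is only as good as the classifier feeding it: the critics' point in the paragraph above is precisely that an unreliable `p_minor` produces both missed minors and wrongly silenced creators.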

    About the author

    An article attributed to the Sync NI Team has either involved multiple authors, been written by a contributor, or drawn the main body of its content from a press release.
