Police have concerns over potential 'bias' when using AI, study shows

  • There are increasing concerns from police officers about using potentially “biased” artificial intelligence (AI) tools, according to a new study by the Royal United Services Institute (Rusi).

    The study suggests AI technology could “amplify” prejudices, meaning some societal groups and minorities may be more susceptible to being stopped and searched.

    Officers also revealed in the study that they worry about becoming too reliant on automation, and that clearer guidelines are needed for the use of facial recognition.

    Rusi is one of the UK government’s advisory bodies. It interviewed approximately 50 participants for the study, including academics, government officials, legal experts and senior police officers in England and Wales who wished to remain anonymous.

    “Any potential benefits of these technologies may be lost because police forces’ risk aversion may lead them not to try to develop or implement these tools for fear of legal repercussions,” Rusi’s Alexander Babuta told BBC News.

    A main concern among police officers was the use of existing police records to train machine-learning tools, as these records might already be skewed by the original arresting officers’ prejudices. One officer explained: “Young black men are more likely to be stopped and searched than young white men, and that's purely down to human bias.

    "That human bias is then introduced into the datasets and bias is then generated in the outcomes of the application of those datasets."

    Additionally, people from less privileged backgrounds were more likely to use public services frequently, which would generate more data about them that could in turn make them more likely to be flagged as a risk, according to the study.

    Mr Babuta commented: “There are ways that you can scan and analyse the data for bias and then eliminate it - we need clearer processes to ensure that those safeguards are applied consistently.”

    The National Police Chiefs' Council responded, saying UK police always seek to strike a balance between keeping people safe and protecting their rights.

    "For many years police forces have looked to be innovative in their use of technology to protect the public and prevent harm and we continue to explore new approaches to achieve these aims," Assistant Chief Constable Jonathan Drake said.

    "But our values mean we police by consent, so anytime we use new technology we consult with interested parties to ensure any new tactics are fair, ethical and producing the best results for the public."

    The study was commissioned by the Centre for Data Ethics and Innovation, which plans to draw up a code of practice covering the police's use of data analytics next year.

    UK police began implementing AI tech heavily throughout 2018, with plans for more investment in the systems in 2020. Sync NI wrote a piece late last year discussing how a lack of understanding of the technology has led to an unusual situation in which the police aren't communicating effectively with the public about how they're using artificial intelligence in the field. Wired UK predicted that 2019 would be the year that ethics would have to catch up with the technology.


    Source: BBC News
