Trust me, I’m a robot - Explainable AI in Financial Services

    By PwC's Leo Donnachie, Tom Boydell and Conor Macmanus

    In Douglas Adams’ ‘Hitchhiker’s Guide to the Galaxy’, it takes the supercomputer Deep Thought over seven million years to work out the answer to the meaning of life, and an even more powerful computer to interpret that answer. While AI is some way off reaching this milestone, accounting for and understanding how and why it makes decisions is a key area of focus for both regulators and firms.

    As the take-up of algorithmic and machine learning systems within financial services increases, it is incumbent on firms to ensure they are able to clearly articulate how decisions are reached (“explainability”) and the extent of AI involvement.

    The Financial Conduct Authority (FCA) has signalled that firms should focus on achieving “sufficient interpretability”, essentially a compromise between AI functionality and the ability to clearly explain its decisions to stakeholders. While sensible, the definition itself raises a number of questions firms need to tackle.

    First, the level of explainability that will suffice is unclear. Decisions reached with the help of AI may be explainable to a firm’s Chief Digital or Data Officer, but would a retail customer understand the implications?

    The need to ensure sufficient explainability of AI may also create a trade-off between human oversight of the decisions taken by AI and the ability to generate rapid and cost-effective predictions.

    The right approach to this trade-off may depend on risk appetite and the impact on customers, and may even vary by product. Clearly it would not be practical for a human to check the decisions made by a high-frequency algorithm trading equities or foreign exchange in less than a second. But for robo-advice, an area that has come under regulatory scrutiny in this respect, firms will want to apply significantly more human oversight.

    While this may sacrifice some of the predictive accuracy afforded by a fully automated solution, such a compromise would be more explainable, and therefore more likely to address the regulator’s concerns.
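
    To make the trade-off concrete, the sketch below (in Python, using scikit-learn on purely synthetic data) compares a simple, interpretable logistic regression with a higher-capacity gradient boosting model. The data, model choices and metric are illustrative assumptions rather than anything prescribed by the FCA; the point is simply that the gap in accuracy between the two models is the price a firm might pay for greater explainability.

        # Illustrative only: synthetic data and hypothetical model choices.
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.linear_model import LogisticRegression
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.metrics import accuracy_score

        # Synthetic stand-in for a book of credit or underwriting decisions
        X, y = make_classification(n_samples=5000, n_features=20,
                                   n_informative=8, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.25, random_state=0)

        # Interpretable baseline: coefficients can be read and explained to stakeholders
        glass_box = LogisticRegression(max_iter=1000).fit(X_train, y_train)

        # Higher-capacity 'black box': often more accurate, much harder to explain
        black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

        for name, model in [("logistic regression", glass_box),
                            ("gradient boosting", black_box)]:
            acc = accuracy_score(y_test, model.predict(X_test))
            print(f"{name}: accuracy = {acc:.3f}")

    Whether that gap is worth paying will depend on the use case: negligible for a sub-second trading algorithm, potentially decisive for robo-advice given to a retail customer.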

    That being said, regulators do acknowledge that AI explainability and responsible use must be balanced against its value to consumers and commercial viability. In a recent speech, Chris Woolard, the FCA’s Executive Director of Strategy and Competition, noted that while firms should be aware of the risks associated with new technologies, this awareness should not serve as a “barrier to innovation in the interests of consumers”.

    For regulators, this means finding a balance between outcomes-focussed regulation and formal requirements for fair and ethical approaches. Meanwhile, firms must consider how to balance consumer needs and outcomes with AI functionality, governance and explainability. This is a central element of PwC’s Responsible AI framework, an approach to help establish a transparent AI strategy and deployment process that is geared towards fair consumer outcomes.

    When it comes to embedding responsible AI frameworks, it is clear that there will be no ‘one size fits all’ answer for firms. As noted in Chris Woolard’s speech, the risks created by adopting AI will differ depending on the context in which it is deployed.

    Of course, there are many contexts where AI deployment may help to increase operational effectiveness or reduce risk. For example, technology assisted solutions can help identify high-risk clients during KYC/AML checks through the analysis of news stories, financial information and company registers in multiple languages.
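
    As a highly simplified illustration of the kind of screening involved, the hypothetical Python sketch below matches client names against an assumed high-risk entity list using fuzzy string matching from the standard library. Real KYC/AML tooling works across multiple languages and data sources and is far more sophisticated; the names, threshold and watch list here are invented for the example.

        # Illustrative toy example only; the watch list and threshold are invented.
        from difflib import SequenceMatcher

        HIGH_RISK_ENTITIES = ["Acme Shell Holdings", "Global Offshore Ventures"]  # assumed list

        def screen_client(client_name, threshold=0.85):
            """Return watch-list entries whose names closely resemble the client's."""
            matches = []
            for entity in HIGH_RISK_ENTITIES:
                similarity = SequenceMatcher(None, client_name.lower(), entity.lower()).ratio()
                if similarity >= threshold:
                    matches.append(entity)
            return matches

        print(screen_client("ACME Shell Holdings Ltd"))  # flags the near-match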

    However, where AI is deployed incorrectly or with scant regard for transparency, new and highly complicated risks are likely to emerge. Poor inputs will create poor outputs, and ‘implicit bias’ within AI models is a critical area in which firms need to evidence responsibility. For example, such bias could manifest in an underwriting model employed by an insurer.

    If the model draws on data sets that are discriminatory in nature, it will reproduce, and may even amplify, those biases when determining whether a policy should be accepted, with potentially damaging consequences. Firms will need to scrutinise the data being fed into models, as well as establish proportionate controls and escalation processes to ensure that risk-mitigating systems don’t end up becoming risks themselves.
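
    One simple, hypothetical control of the sort described above is to monitor acceptance rates across groups defined by a protected characteristic and escalate material disparities. The column names, sample data and the 0.8 threshold in the Python sketch below are illustrative assumptions, not a regulatory standard.

        # Illustrative only: column names, sample data and threshold are assumptions.
        import pandas as pd

        def acceptance_rate_by_group(decisions, group_col="protected_group",
                                     outcome_col="policy_accepted"):
            """Acceptance rate per group for decisions the model has already made."""
            return decisions.groupby(group_col)[outcome_col].mean()

        # Hypothetical model outputs joined with a protected attribute held out of training
        decisions = pd.DataFrame({
            "protected_group": ["A", "A", "A", "A", "B", "B", "B", "B"],
            "policy_accepted": [1, 1, 1, 0, 1, 0, 0, 0],
        })

        rates = acceptance_rate_by_group(decisions)
        ratio = rates.min() / rates.max()
        print(rates)
        if ratio < 0.8:  # assumed escalation threshold for illustration
            print("Flag for review: acceptance rates differ materially between groups.")

    A check like this only catches one narrow form of bias; features that act as proxies for protected characteristics require deeper scrutiny of the input data itself.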

    As discussed in PwC's last blog, establishing appropriate accountability for different applications of AI within a robust governance framework will be essential in demonstrating responsible oversight and preventing the crystallisation of risks. Firms will need to review their governance models, particularly in light of the Senior Managers and Certification Regime, to ensure a proportionate, outcomes-based approach to new technology is in place. This approach should be tailored to specific use cases and overseen by accountable individuals - individuals who can articulate how technology is used and how risks will be mitigated.

    Getting the balance right will not be easy. Some firms may feel that such an approach is bordering on overkill. After all, it is unlikely that we will be witnessing the rise of the fictional AI systems Skynet or HAL anytime soon. Regardless, it is becoming clear that as firms continue to invest in AI, they cannot implicitly trust the ‘black box’. A responsible and explainable framework for AI usage will be essential if firms wish to avoid regulatory censure in the future.
