Views and announcements

AI's influence on the tech landscape and job market

  • Interview with Steve Ellwood-Thompson, Head of Technology - Data and AI at Kainos. 

    There’s a lot going on in ‘AI’ at the moment. Can you break it down for us?

    Sure. Although it’s hard to escape the hype.

    There’s a class of Machine Learning, called Generative AI, which, as the name suggests, can generate content based on its training data. It has been around for some time but, through some insightful investment to leverage recent technical advancements, the capability has reached new levels: models orders of magnitude bigger and better than anything that has been possible before.

    More importantly, though, the companies behind these models then released them as services to the public, opening the door for all of us to use these powerful tools to create code, writing, images and even video content using nothing more than natural language and a bit of imagination.

    How much of a game changer is it really?

    It’s massive. We’ve seen these new tools being adopted by the big players, and early signs of them being incorporated into tooling we use every day, such as Microsoft Outlook and Word, coding tools and even products like Photoshop. This is likely to have a material impact not just on the work that we do as a society, but also on how we do the work itself.

    Can you give some examples?

    Sure. Code generation tools, for example, are allowing us to create programs more quickly, make code as efficient as possible, and even rapidly convert code from legacy programming languages to more modern equivalents. Language generation tools can not only draft letters, emails and reports, but also help us to summarise complex documents, compare text from different sources or assess sentiment.

    As well as improving our own productivity, we’re seeing companies use this for handling customer interactions, understanding social media posts, generating correspondence and summarising complex casework.
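
    As a purely illustrative aside, the summarisation and sentiment work described above usually amounts to little more than sending a well-worded prompt to a hosted model. The Python sketch below shows that general shape only; call_language_model is a hypothetical placeholder for whichever provider’s API is actually used, not a real SDK call or any specific Kainos implementation.

        # Minimal, illustrative sketch only. call_language_model() is a
        # hypothetical stand-in for whichever hosted generative AI service
        # is chosen -- it is not a real vendor SDK call.

        def call_language_model(prompt: str) -> str:
            """Send `prompt` to the chosen language model service and return its reply."""
            raise NotImplementedError("Wire this up to your provider's client library")

        def summarise(document: str, max_sentences: int = 3) -> str:
            # Summarisation is just a carefully phrased request to the model.
            prompt = (f"Summarise the following document in at most "
                      f"{max_sentences} sentences:\n\n{document}")
            return call_language_model(prompt)

        def assess_sentiment(text: str) -> str:
            # Sentiment assessment becomes a constrained classification question.
            prompt = ("Classify the sentiment of the following text as positive, "
                      f"negative or neutral. Answer with one word only:\n\n{text}")
            return call_language_model(prompt).strip().lower()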

    Should we be worried? Some of the big names in AI have highlighted an existential risk.

    No, well, not yet anyway.

    The technologies ‘in the wild’ right now are not sophisticated enough for that, but the ‘call to arms’ from the recent open letters is both timely and important.

    There is a real need, however, to ensure that the right guard rails are put in place globally now, as more and more investment is being made in these technologies every day. This won’t be a door that can be easily closed once the horse has bolted.

    There is some activity: there’s emergent European law and an open UK Government consultation in this space, for example. It will be interesting to see how other parts of the world react.

    This isn’t a new question, though; the potential impact of the irresponsible use of AI at a societal, if not global, level has been a real concern for many of us for some time now. Here at Kainos I work closely with our dedicated Ethicists to ensure that we consider this in all that we do, to minimise the risks and potential harms.

    Such as?

    One big risk we see currently is unintended bias. These services are trained on vast swathes of content from the internet (which I think we can all agree isn’t the most unbiased place) and, while the suppliers have done a lot to minimise the risk, the models remain far from perfect.

    When we make use of any AI service (or build our own), it’s critical that we take steps to be sure that any result (or any decision based on that result) is transparent and explainable and, above all, that it doesn’t treat any community unfairly.

    Another immediate concern is misuse. These services are very convincing, and it’s easy to forget that they are just using their training data to produce words that make semantic and syntactic sense. They have little concept of whether what they are saying is correct. Keeping this in the back of our minds when we interact with these services helps us avoid being misinformed.

    So, it won’t kill us (yet) but might still take our jobs, right?

    Many jobs will change, that’s inevitable, and some jobs might not exist in the future.

    Nobody can be sure, of course, but with the right strategy, organisations can now think about how these tools can support their workforce to do the simple things quickly, so that people can spend time on the more complex, and hopefully more interesting, parts of their jobs.

    One example could be handing off basic customer queries to a natural language service (with the right controls in place to keep it on track, of course), allowing staff to concentrate on more complex casework.
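
    To make the ‘right controls’ point a little more concrete, one simple pattern is to screen each query before it ever reaches the model and route anything sensitive straight to a person. The sketch below is an assumed, simplified illustration of that idea only; answer_with_language_model and the trigger list are hypothetical, not a description of any particular implementation.

        # Illustrative guardrail sketch: route risky or sensitive queries to a
        # human agent, and only let routine questions reach the language model.
        # answer_with_language_model() is a hypothetical placeholder, not a real API.

        ESCALATION_TRIGGERS = ("complaint", "refund", "legal", "vulnerable", "medical")

        def answer_with_language_model(query: str) -> str:
            raise NotImplementedError("Call your chosen generative AI service here")

        def handle_customer_query(query: str) -> str:
            lowered = query.lower()
            if any(trigger in lowered for trigger in ESCALATION_TRIGGERS):
                # Keep a human in the loop for anything that needs judgement.
                return "escalated to a human agent"
            return answer_with_language_model(query)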

    There’s also the chance of new types of jobs being created because of these products. We already see adverts in the market for Generative AI Engineers; who knows what others might be created in the coming years.

    What advice would you give to someone starting out in tech then?

    I’d offer three things to think about:

    1. Whether you are targeting a specialism or aiming to be a generalist, being able to pivot quickly and adapt has always been, and always will be, one of the keys to success in technology.

    2. You can still be at the forefront of using these tools responsibly. Things have changed so quickly that we’re all on a similar journey. There’s no reason why you can’t ride this wave too.

    3. Nurturing your creative side can only help. Even if (like me) art is not your forte, no machine will ever replace the human mind for coming up with and driving truly disruptive and novel ideas. There will always be value in being creative (STEM became STEAM for all the right reasons IMHO).
