
A Menacing Shadow: Testing the boundaries of AI and ChatGPT

David Collins, CEO at First Derivative, discusses how AI will change the way humans and businesses operate with the assistance of these emerging technologies.

Something has happened in the world of AI. The emergence of ChatGPT marks a watershed moment in computer science, or a crossing of the Rubicon if you prefer more drama: ChatGPT has passed the Turing Test.

We could argue that mastering natural language will do for working in words what calculators and spreadsheets did for working in numbers. To test that, I asked ChatGPT to help me write this article. At the first attempt, it produced a fairly bland, ‘me-too’ block of text with no interesting insights or angles. I gave it some direction and it wrote a second version dramatising the risks, giving it the title ‘The Menacing Shadow’, but it still didn’t come up with anything I hadn’t heard before.

If I had been hoping for a ‘Move 37’ moment (the famously creative move AlphaGo played against Lee Sedol in 2016), I was destined for disappointment. Perhaps that is not surprising: firstly because my test was very narrow and hardly scientific in its approach, and secondly because that isn’t what ChatGPT was trained to do. ChatGPT is a specific application of AI, broad in its understanding of language but limited to a particular use case.

One thing to be aware of is the range of algorithmic techniques that AI uses. These run from simple logic trees, which are to AI what sarcasm is to wit, through to the more mysterious neural networks that do produce ‘Move 37’ moments when applied to certain types of scenario. Not all techniques are applicable to all use cases, and the more sophisticated AI systems will often nest different approaches together.
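To make that contrast concrete, here is a minimal sketch in Python that puts a hand-written logic tree beside a tiny neural network learning XOR from data. The trade-approval rule is a hypothetical example of mine, not anything from the article; the point is simply that the first behaviour is authored line by line, while the second emerges from training.

```python
# A sketch of the two ends of the spectrum. Part 1 is a hand-written
# logic tree: every branch is authored and fully explainable. Part 2 is
# a tiny neural network whose behaviour emerges from data rather than
# explicit rules. The trade-approval rule is a hypothetical example.
import numpy as np

# 1. Logic tree: simple, transparent, no surprises.
def approve_trade(amount: float, counterparty_trusted: bool) -> bool:
    if not counterparty_trusted:
        return False
    return amount <= 1_000_000

# 2. Neural network: one hidden layer learning XOR, a function that no
#    single hand-written threshold can express.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(5000):                      # plain gradient descent
    h = sigmoid(X @ W1 + b1)               # hidden layer activations
    out = sigmoid(h @ W2 + b2)             # network's prediction
    d_out = (out - y) * out * (1 - out)    # backpropagate the error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0)

print(approve_trade(500_000, True))        # True: a rule we wrote ourselves
print(out.round(2).ravel())                # typically ~[0, 1, 1, 0]: learned
```

Nesting approaches, as the more sophisticated systems do, often amounts to routing a request through transparent rules first and falling back to a learned model for the cases the rules cannot express.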

What they do have in common is that they require two fuel sources: compute power and data. Training any algorithm requires a lot of data, which needs to be collected (or created where there is insufficient access), cleaned, modelled and processed. The more sophisticated the ask, the more supporting data is required: we need to understand the context and, where humans are involved, something about the emotional state that produced the actions. These tasks are continuous; as we deploy the AI we gather information, learn, enhance the data set and retrain. Each retraining cycle demands yet more data and more compute power.
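The loop described above can be sketched as code. Everything below is a hypothetical stand-in of mine for real data infrastructure; it shows only the shape of the cycle: collect, clean, train, deploy, gather feedback, retrain on a growing data set.

```python
# A sketch of the continuous fuel cycle: collect, clean, train, deploy,
# gather feedback, retrain on the enlarged data set. Every function is a
# hypothetical stand-in for real data infrastructure.

def collect(sources: list[str]) -> list[dict]:
    # Pull raw records from each source (stubbed as synthetic rows here).
    return [{"source": s, "value": i} for i, s in enumerate(sources)]

def clean(records: list[dict]) -> list[dict]:
    # Drop incomplete rows; real pipelines also deduplicate and normalise.
    return [r for r in records if r.get("value") is not None]

def train(records: list[dict]) -> dict:
    # Stand-in for a training run; the "model" just remembers data size.
    return {"trained_on": len(records)}

def deploy_and_gather_feedback(model: dict) -> list[dict]:
    # Deployment produces new observations that feed the next cycle.
    return [{"source": "production", "value": model["trained_on"]}]

dataset: list[dict] = []
for cycle in range(3):          # each pass needs more data and more compute
    dataset += clean(collect(["logs", "forms", "sensors"]))
    model = train(dataset)
    dataset += deploy_and_gather_feedback(model)
    print(f"cycle {cycle}: trained on {model['trained_on']} records")
```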

The use of AI to undertake tasks that previously required human interaction will bring the technology revolution to more parts of the economy. It will inevitably make certain roles redundant, but I don’t share the fear of mass unemployment; the AI revolution will certainly create new roles for data analysts, data scientists and data engineers. The new demographic paradigm is crying out for machines to help in a world where we will not have enough workers. Indeed, I could argue that we seriously need these tools to face a future where the workforce is declining.

Having said all of that, the ‘menacing shadow’ article produced by ChatGPT has some merit. It is inevitable that this technology will be weaponised, if it hasn’t been already, and if you combine that with the technological singularity (the point where AI becomes capable of recursive self-improvement and surpasses human intelligence), we could get closer to that Skynet moment.

I think we are a long way from that point, and the more immediate issue is our own self-worth as we become more attached to our generative tools, from Stable Diffusion for image manipulation through to ChatGPT. These are augmentation tools; used well they make us more productive, but they create a dependency that removes some of our creativity. How do we really feel about AI producing art, be it photographs, paintings or music? Can we celebrate the creativity programmed into a machine to produce images in the style of Ansel Adams in the same way as we did the original? As smart as it is to build a neural network trained on Adams’s back catalogue, it is a long way from the sense of a journey through Yosemite, shots taken on large-format film and the manual development process.

Perhaps we need to revisit the original question Turing posed in his 1950 paper “Computing Machinery and Intelligence”. He asked “can machines think?”, but given the lack of a definition of the word ‘think’ he sidestepped the question, proposing instead his natural language test. Would we define ‘Move 37’ as thinking, or was it random brilliance?

When faced with a scenario that requires a decision, we could code a machine to make that decision from a series of inputs, relying on more than deterministic decision trees, in situations where the outcome is uncertain, i.e. we only know retrospectively whether it was the right decision. I suspect the computer could be at least as right as any human. It is possible, then, to understand why Alan Turing selected the natural language test as the best example of thinking.
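As a hedged illustration of that retrospective test, here is a toy simulation of mine (the scenario and numbers are invented): the machine aggregates several unreliable inputs, commits to a choice, and is only scored once the true outcome is revealed.

```python
# A toy simulation of a decision whose quality is only known in
# hindsight. The machine aggregates five unreliable signals and commits;
# scoring happens after the truth is revealed. Scenario and numbers are
# invented for illustration.
import random

random.seed(42)

def noisy_signal(truth: bool, reliability: float = 0.7) -> float:
    # Reports the truth with probability `reliability`, otherwise lies.
    report = truth if random.random() < reliability else not truth
    return 1.0 if report else 0.0

def machine_decides(signals: list[float]) -> bool:
    # Weigh the evidence; no deterministic rule guarantees the answer.
    return sum(signals) / len(signals) > 0.5

trials, correct = 10_000, 0
for _ in range(trials):
    truth = random.random() < 0.5            # hidden state of the world
    signals = [noisy_signal(truth) for _ in range(5)]
    decision = machine_decides(signals)
    correct += decision == truth             # judged only retrospectively

print(f"retrospective accuracy: {correct / trials:.1%}")  # roughly 84%
```

Aggregating five 70%-reliable inputs beats any one of them, which is the sense in which the computer could be at least as right as any human.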

Perhaps we need to propose a new test of human-like behaviour based on Philip K. Dick’s dystopian novel ‘Do Androids Dream of Electric Sheep?’. How would we feel then if we discovered a machine dreaming?

This article appears in the summer edition of Sync NI magazine.
