Views and announcements

Is your AI Pilot going anywhere?

  • Back in 2019, Industry 4.0 was supposed to be the next industrial revolution. IoT, 5G, smart factories. AI was barely mentioned. Turns out the industrial revolution did arrive, just not the one anyone in those rooms was designing for. And it's following a familiar pattern. Take how data strategy evolved over the years. First it was a grand strategy, with data teams responsible for it. That's still true, but every part of the business is now data literate, and data is part of the company strategy rather than a standalone piece. AI is going down the same path, only much faster, which tells you that AI needs to be far more pervasive than a strategy document and a pilot by now. And yet, for a lot of organisations, that pilot is still where AI lives.

    Moving beyond the pilot means dealing with the hard problems. Who’s liable when agents get it wrong? How do you control what they can touch? These look like down-the-road problems, but now, more than ever, you can’t afford to defer them, and honestly, they are getting more and more solvable.

    If you haven't been paying close attention to what AI can do now, some of it will come as a surprise. The cost of building software has collapsed. Problems that once needed a team can now be tackled by two people with the right approach. Markets that couldn’t justify bespoke tools two years ago can have something purpose-built in days. These builds are happening today, and a lot of businesses in this region are only beginning to notice.

    But consider: if you can build something real in days, what’s to stop someone else doing the same? This is a question of defensibility, and the defence is no longer the build itself when a competitor can have the same thing built in the time it takes you to win your first customers. So the question has switched from “can we build this?” to “what makes our product worth sticking with?” And the answer is usually the same: data nobody else has access to, deep integration into how a client already works, accreditations that take years to earn. Those are hard to copy on a weekend.

    One thing that catches organisations out early is assuming that because an AI has access to something, it should have it. Once you start to get the gains of AI doing things automatically - sending emails, pulling from databases, interacting with other systems - the question of what it’s allowed to access becomes important fast. Getting that wrong is a surefire way to create a liability, but locking it down so far that a user has to approve every choice will kill your AI adoption. So don’t defer getting it right.

    Making the most of this opportunity when you're working with a mature, established product rather than building from scratch presents a different kind of challenge. Take an SME or enterprise: if you’re in this space, you’ll likely already have had an AI pilot, or you’ve got tooling across your business helping productivity and assisting the software build process. But actually having AI as part of your product suite, or realising the full potential of AI-assisted development, are both unsolved problems in a lot of places.

    I’ve been a frustrated practitioner working to get this right. Take AI-assisted development as an example: engineers push back for reasons that usually sound technical - too much legacy code, or a super-niche domain. In practice the friction tends to be elsewhere. It’s not having protected time to learn how to work with AI effectively: writing good specs, iterating on plans, and making use of integrations that keep it aware of the latest patterns. At the team level, shared AI context can be weak, specs are inconsistent, and there's no agreed line between what AI is supposed to do and what engineers are supposed to own.

    At Hypership, the way we help other software businesses overcome this is by taking a real feature through to production using those more AI-native methods, with the client's own engineers in the build rather than receiving a handoff at the end. The focus is on getting that feature actually live, not on a demo. That distinction matters: it keeps the exercise from being performative, and it builds an understanding of how to work around the gotchas in a mature product.

    Do that once and you'll have engineers who've built this way rather than heard about it. A guild model can follow from there, but only if engineers have protected time to participate. Sold to leadership as an efficiency initiative and dropped on engineers as an extra obligation, it becomes an inactive Slack channel within a few months. Protected time and a real mandate are what make the difference.

    The further AI gets into your operations, the more the question of access and control matters. When an AI can take actions without prompting the user, knowing exactly what it can touch and what it can do - and having a record of it - are non-negotiable. The enterprise offerings of the commercial AI platforms get you quite far, but you’ll need access policies and risk controls in place over and above this to operate with confidence at scale.

    Across the island we've met technological inflection points for decades. Through the rise of cyber, data, fintech and IoT, we've built credible, lasting organisations in the process. The difference with AI is that the window is shorter. The businesses and builders who lean into the hard problems early, on defensibility, on access, on how to actually ship rather than pilot, are the ones who'll have the most to show for this era. The tools are there. The problems are more solvable than they look. The main thing left is to decide to go at them.

    Source: Written by Shannon Holgate, Co-founder, Hypership


    Read the Spring 2026 edition free online →

