Engineering manager Cavan Fyans explains how Apex Fintech Solutions UK is helping businesses make the most of data and AI
Q. For those who are unaware of Apex Fintech Solutions, can you give us an insight into the history of the business and the services it provides?
Apex Fintech Solutions is a fintech powerhouse enabling seamless access and frictionless investing. Apex’s omni-suite of scalable solutions fuels innovation and evolution for hundreds of today’s market leaders, challengers, change makers, and visionaries. The company’s digital ecosystem creates an environment where clients with the biggest ideas are empowered to change the world. Apex works to ensure its clients succeed on the frontlines of the industry via bespoke custody & clearing, advisory, institutional, digital assets, and SaaS solutions through its Apex Clearing™, Apex Advisor Solutions™, Apex Silver™, and Apex CODA Markets™ brands.
Q. Big Data is often defined in terms of Volume, Variety, Veracity, and Velocity – how does Apex optimise these dimensions for its clients in the real world?
At Apex, the data layer is not just about the challenges that come with “bigness”; it’s about managing the data, infrastructure, and services that make it easier and more efficient for our clients to interact with, and extract meaningful insights from, our diverse data sources. We, the data team, are enablers, helping clients build value on our data while we uphold tenets like accuracy, speed, and security.
The Apex data platform is relatively new to the cloud data space, and we are moving fast to build out and optimise for the fundamental features of big data while we respond to our clients’ data needs. The core of our cloud data platform, built within GCP, operates as a central data lake with a surrounding data mesh architecture for consumption and access. We have both internal consumers (service teams) and external consumers (clients), and they pose differing challenges, as we must operate both as an internal services team and as a client product team.
I say it slightly carefully, but the bread and butter of the big data world is a set of somewhat solved problems (that’s perhaps the first two V’s: central storage, distribution, access, and so on). We do have to be attentive to how we architect our central data solutions, but the challenges are more often in the outer layers: how we optimise against the change and movement of data at scale, and how we find the right architectures, services, and mediums to drive the quality, privacy, security, and efficiency that will deliver fast routes to value for the data products built on top of our central data platform.
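To make that concrete, here is a minimal, purely illustrative sketch of the lake-plus-mesh pattern described above, assuming BigQuery on GCP; the project, dataset, and table names are hypothetical stand-ins rather than Apex’s actual platform:

    # Illustrative sketch only: a central lake dataset owned by the platform team,
    # with domain-aligned consumption datasets exposing curated views.
    # All project/dataset/table names below are hypothetical.
    from google.cloud import bigquery

    client = bigquery.Client(project="apex-data-platform")  # hypothetical project id

    # Central lake dataset holds the raw and curated tables.
    lake = bigquery.Dataset("apex-data-platform.central_lake")
    lake.location = "EU"
    client.create_dataset(lake, exists_ok=True)

    # Each consuming domain gets its own dataset of curated views,
    # rather than direct access to the underlying lake tables.
    for domain in ["clearing", "advisory", "digital_assets"]:
        ds = bigquery.Dataset(f"apex-data-platform.{domain}_mart")
        ds.location = "EU"
        client.create_dataset(ds, exists_ok=True)

        view = bigquery.Table(f"apex-data-platform.{domain}_mart.trades_v")
        view.view_query = f"""
            SELECT trade_id, account_id, symbol, quantity, executed_at
            FROM `apex-data-platform.central_lake.trades`
            WHERE domain = '{domain}'
        """
        client.create_table(view, exists_ok=True)

In a layout like this, consuming domains work against their own curated views while storage stays central, which is one simple way to keep tooling centralised and data ownership decentralised.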
Q. Big data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and process within a tolerable elapsed time – how do AI and machine learning overcome these problems?
AI and machine learning are pivotal in addressing the challenges posed by big data. These technologies excel at handling large volumes of structured data that have traditionally been difficult to work with at scale. The convergence of structured data, AI/ML, and modern parallel and distributed computing (… cloud-everything) makes it possible to process and sift through vast amounts of data quickly. AI/ML systems and tools are very good at identifying patterns, trends, and anomalies, as well as modelling, summarising, and so on, at a scale that would be impractical for humans or individual systems. This enables efficient AI/ML-powered automation of big data tasks like data preparation, cleaning, and identification.
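As a small, hypothetical illustration of that kind of automation (not a description of Apex’s pipeline), an off-the-shelf model can flag suspect records during data preparation; the columns and values here are invented:

    # Illustrative sketch only: flag anomalous rows before loading them downstream.
    # Column names and values are hypothetical.
    import pandas as pd
    from sklearn.ensemble import IsolationForest

    df = pd.DataFrame({
        "amount":     [120.0, 98.5, 101.2, 15000.0, 110.3, 95.0],
        "latency_ms": [12, 15, 11, 480, 13, 14],
    })

    # Fit an isolation forest and mark likely outliers (-1) versus normal rows (1).
    model = IsolationForest(contamination=0.2, random_state=42)
    df["anomaly"] = model.fit_predict(df[["amount", "latency_ms"]])

    print(df[df["anomaly"] == -1])  # rows worth a closer look before loading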
But, perhaps more important than just making us more efficient, these tools also allow us to redefine the ways we can work with data. Take, for example, the way data users can explore or identify data: by leveraging GPT/LLM-style tools, we can enable people to interact with, query, and search data in a conversational manner, without the need for data-query language skills or data catalogue abstractions. This ease of use can expand the accessibility and visibility of data, allow more people (different stakeholders) to explore and understand our data, and ultimately to understand how to derive value from it.
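As a rough sketch of what that conversational access could look like (illustrative only, and not Apex’s implementation), an LLM can translate a plain-English question into a query over a known schema; the OpenAI client is used here purely as an example, and the schema and table are hypothetical:

    # Illustrative sketch only: natural-language question -> SQL over a known schema.
    # Assumes OPENAI_API_KEY is set; schema and table names are hypothetical.
    from openai import OpenAI

    client = OpenAI()

    SCHEMA = """
    Table trades(trade_id STRING, account_id STRING, symbol STRING,
                 quantity INT64, executed_at TIMESTAMP)
    """

    def question_to_sql(question: str) -> str:
        """Ask the model to translate a plain-English question into a single SQL query."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Translate the user's question into a single SQL query "
                            "against this schema, and return only the SQL:\n" + SCHEMA},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(question_to_sql("How many AAPL trades did we execute yesterday?"))

In practice a layer like this would sit behind the same access controls and catalogue as any other query path; the point is simply that the user never has to write the SQL themselves.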
Q. Is Cloud and Serverless the future or is there still a place for traditional data warehouses?
There’s no doubt that the landscape of data storage, processing, and access is undergoing continual transformation, and there will always be some debate between cloud-everything and traditional data warehouse approaches to data platforms. In most cases, though, there is a use case for each. Engineers should be looking for the right tool for the job to build value for their client or business, rather than chasing the latest trend for the sake of buzzwords.
At Apex, we are inherently cloud-focused. We leverage the efficiency, performance, and availability of cloud-based data systems to provide the data surface and tools that our consumers require. Our live ML models are a prime example of performance and scalability requirements that are only practical in the cloud. To run these models, we have to process the large volumes of activity data used to train them, maintain highly performant availability of the online feature data (used to feed the live model on request), and serve the models themselves. All of this needs to scale rapidly for both data volume (relatively known) and bursty API usage (relatively unknown) to ensure we can guarantee healthy data and fast responses from the live model.
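A minimal sketch of that serving pattern, assuming a key-value store such as Redis standing in for the online feature store and a pre-trained scikit-learn-style classifier; the feature names, keys, and model file are hypothetical:

    # Illustrative sketch only: look up online features for an account and score
    # them with a pre-trained model on request. Names below are hypothetical.
    import json
    import pickle

    import redis  # stand-in for an online feature store

    feature_store = redis.Redis(host="localhost", port=6379)

    with open("model.pkl", "rb") as f:  # hypothetical pre-trained classifier
        model = pickle.load(f)

    def score_account(account_id: str) -> float:
        """Fetch the latest online features for an account and return the live model's score."""
        raw = feature_store.get(f"features:{account_id}")
        if raw is None:
            raise KeyError(f"no online features for account {account_id}")
        features = json.loads(raw)
        row = [[features["trade_count_24h"], features["avg_trade_size_7d"]]]
        return float(model.predict_proba(row)[0][1])

The two halves scale differently, which is the point made above: the offline training path handles large but fairly predictable data volumes, while the online lookup-and-score path has to absorb bursty, unpredictable request traffic.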
Q. What makes Apex an exciting place to work, and what qualities do you look for when building a successful team in Belfast?
At Apex Fintech Solutions, the recent transition to cloud computing and cloud data platforms has opened up exciting new possibilities. We’re not bound by years of legacy code or stuck in a traditional way of working. Instead, as a data team, we are building in fresh cloud pastures. We actively encourage our engineers to shape this future: to innovate, adapt, learn, and define our direction as we move forward.
For me, this is very exciting. As engineers, we have the scope and autonomy to work closely with the other business verticals and define our path forward. In our data platform, we maintain centralised tooling and services but decentralise data ownership. In this way, our data team are no longer really “data engineers” in the traditional sense, and we’re not really SQL experts; I prefer to define us as cloud engineers who sit next to the data. We’re building cloud services, so we inherently need to be good engineers first, maybe with a data appreciation and focus second.
The qualities I look for in engineers are collaboration, innovation, and adaptability: engineers who have a product mindset and want to understand the why as well as the what and the how. For me, these qualities are much more important than specific skills in languages or technologies. In the Apex data team in Belfast, we work concurrently across the breadth of our data platform (cloud (micro)services, analytics engineering, MLOps and services, APIs and backing services), so there is a lot of scope for different skills, focuses, and futures.
This article appears in the Big Data edition of Sync NI magazine.