
    AI – The Hero or the Villain?

    Published: July 31, 2025

    Sarah Coyle
    Principal Business Analyst
    First Derivative

    AI is everywhere — driving trades, flagging risks and accelerating decisions. But in a world where trust, speed, and compliance are non-negotiable, Capital Markets face a unique question: can we truly rely on machines? This article, written by Sarah Coyle, explores why fear often follows innovation, how machine error is judged more harshly than human error, and what it will take to build AI systems that are ethical, explainable, and regulation-ready. With the EU AI Act looming and client expectations rising, now is the moment to lead with confidence — or risk being left behind.

    Innovation always meets resistance

    History shows us that the first reaction to change is rarely acceptance!

    Across the ages, transformative change has constantly been met with fear or resistance. Socrates famously warned against the invention of writing, claiming it would “create forgetfulness in the learners’ souls, because they will not use their memories”. When the industrial revolution introduced “iron horses,” more widely known as steam-powered trains, society was gripped by the fear that the human body could not withstand such unnatural speeds. Every era of innovation brings out the sceptics. The internet was once dismissed as a passing fad. More recently, when cloud computing first emerged, Oracle CEO Larry Ellison dismissed it as “complete gibberish” before later pivoting and launching Oracle’s own cloud services.

    The Familiar Resistance to a New Force

    In the case of AI, the resistance feels familiar yet uniquely urgent. In Capital Markets, where speed, trust, and regulation are tightly interwoven, the stakes are even higher. Here, change doesn’t just disrupt workflows. It shakes the foundation of value itself.

    Why We Trust Humans More Than Machines

    Imagine a financial trader makes a poor judgement call. We understand that poor judgement can be the result of stress, fatigue, or simple human error. Now imagine a machine does the same. In finance, this might mean a trading algorithm making an error that leads to real financial losses. Suddenly, it’s a scandal. Why is human error forgivable, but machine error intolerable? The answer lies in expectations. We expect machines to be flawless, but that has never been the promise of AI. The irony is that most AI systems are less prone to fatigue, bias, or emotional decision-making than humans. Yet trust remains low. Building trust in AI means recalibrating what we expect and designing systems that are transparent, explainable, and auditable.

    If you look around, AI is already deeply embedded in our personal and professional lives. It powers the facial recognition that unlocks your mobile phone and decides the routes you drive and the advertisements you see. In Capital Markets, high-frequency trading and market prediction are powered by some form of AI.

    Separating Science Fiction from Real-World AI

    Yet, not all “AI” is created equal. Much of what is being deployed today, from predictive models to intelligent automation, is not the generalised AI of science fiction. It is machine learning, sophisticated pattern recognition built on statistical training. Recognising this distinction is crucial. When people fear “AI taking over,” they often conflate narrow machine learning tools with the distant concept of conscious machines. For now, AI’s real power lies in accelerating specific tasks, not replacing human judgment entirely.

    The Cost of Flawed Data

    AI is only as powerful as the data it consumes. LLMs, like ChatGPT and Copilot, are trained on vast amounts of text, but when the underlying data is exhausted or of poor quality, performance degrades quickly. The age-old argument of “garbage in, garbage out” still applies. It is paramount that our Capital Markets clients invest in data governance and infrastructure, because a model trained on weak data will produce bad decisions at scale.

    Beyond performance, biased data introduces significant ethical and reputational risks. One notable example occurred in the US auto finance industry, where geographical and surname data were used to estimate a borrower’s race, a method known as Bayesian Improved Surname Geocoding (BISG). The technique led to inaccurate assumptions and overestimates of minority status, which contributed to flawed and potentially discriminatory decisions. Consequently, minority borrowers were charged higher interest rates on their auto loans. In response, the Consumer Financial Protection Bureau (CFPB) and the Department of Justice (DOJ) ordered Ally Financial to pay $80 million to consumers harmed by discriminatory auto loan pricing. (“CFPB and DOJ Order Ally to Pay $80 Million to Consumers Harmed by Discriminatory Auto Loan Pricing.” Consumer Financial Protection Bureau, 20 Dec. 2013)
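    To see why proxy methods like BISG are fragile, consider a minimal sketch of the underlying idea: combining a surname-based probability with a geography-based one via a naive Bayes update. The function name and all probability tables below are hypothetical toy numbers, not real census data or the actual BISG implementation.

```python
def bisg_posterior(p_race_given_surname, p_race_given_geo, p_race):
    """Naive-Bayes combination: P(r | surname, geo) is proportional to
    P(r | surname) * P(r | geo) / P(r), assuming surname and geography
    are conditionally independent given race."""
    unnorm = {r: p_race_given_surname[r] * p_race_given_geo[r] / p_race[r]
              for r in p_race}
    total = sum(unnorm.values())
    return {r: v / total for r, v in unnorm.items()}

# Toy, made-up probabilities for two groups (NOT real figures):
surname = {"group_a": 0.7, "group_b": 0.3}   # P(race | surname)
geo     = {"group_a": 0.8, "group_b": 0.2}   # P(race | neighbourhood)
prior   = {"group_a": 0.5, "group_b": 0.5}   # baseline P(race)
posterior = bisg_posterior(surname, geo, prior)
```

    Even a statistically sound update like this only ever produces a probabilistic guess about an individual; treating that guess as fact, and pricing loans on it, is precisely how the discriminatory outcomes in the Ally case arose.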

    This case highlights how proxy data, even when well-intentioned, can reinforce societal biases when embedded in algorithms. For companies in Capital Markets, where trust and compliance are critical, these kinds of risks are unacceptable and must be proactively managed. Ethical AI isn’t just about what a model can do; it is about ensuring the data it learns from doesn’t inherit systemic bias.

    One Size Doesn’t Fit All: Understanding AI Readiness

    This leads on to the importance of understanding an organisation’s AI maturity. Firms often launch AI initiatives without assessing their readiness. An AI maturity scale provides a view of how far along a firm is in its AI journey, from initial experimentation to full-scale, enterprise-wide integration. AI adoption is not uniform across the financial landscape, and different clients have markedly different risk appetites. Some aggressively embrace AI innovation, seeking a competitive edge. Others take a cautious, compliance-driven approach, wary of operational, reputational, and regulatory risks. Understanding a firm’s AI maturity alongside its risk appetite will help shape conversations with future buyers and uncover missed market opportunities.

    Meeting the Moment: Aligning with the EU AI Act

    Regulation is catching up fast. The EU AI Act, the world’s first major legal framework for AI, has far-reaching implications, especially for high-risk sectors like Capital Markets. It classifies AI systems by the risk they pose, from minimal to unacceptable, and imposes strict obligations on those deemed high-risk. These obligations include requirements for transparency, human oversight, robust risk management, record-keeping, and, for certain AI uses, public disclosure.

    Penalties for non-compliance with the EU AI Act can reach up to €35 million or 7% of global annual turnover in the previous financial year, whichever is higher. For Capital Markets firms, this is not a theoretical concern; it is an immediate operational, reputational, and financial risk.
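    The “whichever is higher” mechanics can be made concrete with a short sketch. The helper name is hypothetical, and the rate is a parameter because the Act sets different fine tiers for different categories of violation; the 7% figure used below is the top tier.

```python
def penalty_cap(turnover_eur: float,
                fixed_cap_eur: float = 35_000_000,
                turnover_rate: float = 0.07) -> float:
    """Maximum fine: the higher of a fixed amount or a percentage of
    worldwide annual turnover (tiers vary by violation category)."""
    return max(fixed_cap_eur, turnover_rate * turnover_eur)

# A firm with EUR 1bn turnover: 7% = EUR 70m, which exceeds EUR 35m.
print(penalty_cap(1_000_000_000))   # 70000000.0
# A firm with EUR 100m turnover: 7% = EUR 7m, so the fixed cap applies.
print(penalty_cap(100_000_000))     # 35000000
```

    The turnover-linked cap is what makes the exposure scale with firm size: for any large Capital Markets institution, the percentage tier, not the fixed amount, is the binding number.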

    Why Proactive AI Governance Builds Trust

    With the EU AI Act now in force and phased implementation underway, Capital Markets firms are navigating a new landscape of accountability. Clients and regulators alike are actively seeking assurance that AI systems are transparent, explainable, and compliant.

    Organisations that move early and demonstrate robust AI governance will position themselves as trusted, future-ready partners. In contrast, those who delay may face not just compliance risks, but reputational and commercial consequences.

    To stay ahead, firms should:

    • Design AI systems that are transparent, traceable, and aligned with the EU AI Act.
    • Invest in team training on AI ethics, governance, and regulatory requirements.
    • Proactively engage stakeholders to build trust through responsible AI deployment.

    The next chapter of AI in Capital Markets will be shaped by those who combine innovation with trust, governance, and expertise.

    Let’s lead that future.

    AI cannot operate in a vacuum. It needs to be integrated with existing platforms, workflows, and human expertise. In Capital Markets, this means syncing AI tools with trading desks, compliance systems, and research teams. The firms that succeed will treat AI not as an add-on solution but as an evolving layer of intelligence that enhances decision-making.

    The environmental impact of AI is increasingly under scrutiny. Training large AI models consumes vast amounts of energy. This raises ethical and practical concerns, especially for industries already under pressure to decarbonise. Capital Markets firms have a growing focus on ESG (Environmental, Social, Governance) metrics, and yet the irony is that some AI tools used for ESG reporting may themselves carry a heavy carbon footprint. To address this, many firms are moving AI infrastructure to colder regions such as Scandinavia and Canada, where natural cooling reduces energy usage and supports greener data centre operations.

    This geographical shift is not just about cost efficiency; it is a strategic move toward aligning technological growth with climate goals. Clients will soon expect sustainability as much as they expect performance. Balancing innovation with sustainability is no longer optional. It is becoming a defining feature of long-term competitiveness.

    AI Is Not the Threat. It’s the Advantage

    So, is AI the Hero or the Villain? As with every paradigm-shifting technology, like fire or electricity, the answer lies not in the technology itself, but in how we choose to use it. With the right guardrails, governance, and intent, AI becomes one of the most powerful tools we have ever had to drive smarter decisions, reduce bias, and unlock new value.

    In Capital Markets, those who combine innovation with governance, sustainability, and, most importantly, the human touch won’t just survive; they will lead. Let’s work to be one of the leaders.

    Explore how our cutting-edge AI can revolutionize your firm’s market oversight.

    Contact us today
