Will AI Replace Accountants and Finance Professionals? The Real Threat, and the Real Opportunity
- 10 min read
- Authored & Reviewed by: CLFI Faculty
Something unusual is happening in finance hiring. AI startup Mercor, fresh off a $350 million Series C round that roughly quintupled its value, pushed its valuation to $10 billion and joined our Top $10B+ list of privately valued AI companies. Now it is flooding job boards with openings for finance professionals. The listings sound familiar at first: financial analysts, junior investment bankers, corporate finance experts. But the work isn't what it seems. These aren't roles inside banks or companies; they're short-term contracts to help train artificial-intelligence models to understand accounting, valuation, and deal-making. In plain terms: finance experts are being hired to teach machines how to do their jobs.
Mercor isn’t the only one. Another company, micro1, recently raised $35 million to build what it calls an “AI platform for human intelligence.” Its projects also rely on professionals — analysts, accountants, investment specialists — who review, label, and correct data so AI systems can learn to think more like people. The more human judgment they capture, the faster those systems can perform real analytical work without supervision.
It’s an odd twist. The very professionals who once worried that AI might replace them are now being paid to make that future happen faster. These projects are well funded, highly organised, and growing.
The same quiet transformation is happening higher up the financial food chain. OpenAI, according to recent reports, has been quietly running an internal project called Mercury — a programme that pays around 100 former investment bankers from firms like JPMorgan, Morgan Stanley and Goldman Sachs up to $150 an hour to help train its own financial models. These ex-bankers aren’t advising clients or closing deals; they’re crafting prompts, building valuation models, and testing transaction scenarios so the system can learn to perform the routine analysis once handled by junior associates. In effect, OpenAI is reconstructing an entire analyst floor inside its model-training pipeline. It’s the same pattern seen at Mercor and micro1, only with far deeper pockets: finance talent is being redeployed not to compete with AI, but to transfer its skills into it.
The goal is simple: collect enough expert feedback to automate as much of finance’s repetitive and judgment-based work as possible. Each correction, each rewritten formula, and each explanation of “why this forecast makes sense” becomes new training material. The more data the platforms gather, the less human input they’ll need next time.
AI Unicorns — Valuation Leaderboard
Table: leading AI companies by private valuation, listed by rank, company, valuation, country, and specialisation.
Every week brings a new headline about automation in accounting, analytics, or investment research, and the story is often framed as a countdown to replacement. It is understandable that professionals in finance, from junior analysts to controllers and CFOs, read these stories and imagine a rapid erosion of roles. The better question is not whether artificial intelligence will replace finance jobs across the board, but how it will reshape the work itself, which tasks will be absorbed by machines first, and what that means for careers, capability building, and governance. Whether AI can replace accountants is largely a question of whether models can be developed, trained, and fine-tuned to interpret and apply complex financial reporting and tax rules accurately, rules that vary significantly by country (e.g., IFRS vs. US GAAP) and by industry.
Table of Contents
- The Fear of Replacement
- What AI Can Do Today in Finance
- What AI Cannot Replace
- Skills Finance Professionals Need to Stay Relevant
- AI Adoption in Companies
- From “Software Eats the World” to “Software Eats Labour”
The Fear of Replacement
Fear grows fastest when things are unclear, and the past two years have been exactly that. New AI systems can now recognise patterns, write fluently, and summarise vast amounts of information in seconds. It’s easy to assume that these abilities mean human jobs are next. The concern has deepened as graduate recruitment data shows many large employers have paused or scaled back entry-level hiring while they experiment with automating routine tasks.
Source:
Financial Times — “The Jobpocalypse: How AI Is Changing the Future of Work,” Oct 2025.
A report by Isabel Berwick of the Financial Times and Chris Eldridge of Robert Walters noted that job vacancies for new graduates in the US and UK dropped by approximately 33%, as large employers paused or reduced early-career hiring while assessing the impact of AI on entry-level roles.
In some industries, leaders have even discussed cutting trainee cohorts, which creates a further impression that professional ladders are being pulled up at precisely the moment a generation is ready to climb them. Recent data from Indeed suggests that UK graduates are entering the toughest job market since 2018, with advertised graduate roles reportedly down by a third year-on-year. The platform attributes much of this slowdown to employers pausing hiring and turning to AI to cut costs. Yet, it may be too early to conclude that AI is solely to blame. Such statistics often serve a marketing purpose for data providers keen to highlight their analytical relevance.
What is true is that generative AI has streamlined many middle layers of work, reducing the time professionals spend seeking information or routing tasks through intermediaries. But this doesn’t equate to large-scale substitution. Persistent hallucination risks make generative models unreliable for unsupervised decision-making, and data privacy and security constraints prevent most companies from deploying AI broadly across sensitive functions. In practice, firms remain cautious: AI can assist, accelerate, and simplify, but it cannot yet replace the human workforce that ensures accuracy, accountability, and confidentiality.
What AI Can Do Today in Finance
Venture capitalists, investors, and AI companies paint a very confident picture of what’s already possible. They say artificial intelligence, combined with modern data systems, can now handle a wide range of financial tasks with remarkable speed and accuracy. Transaction matching across bank feeds, supplier ledgers, and ERP systems supposedly happens in seconds, not hours. Variance reports that once took a day to prepare can be generated automatically, with every line linked back to its source. The time analysts spend cleaning data—often a third of the week—can now be scripted and supervised instead of done manually. Forecasting models can learn from historical patterns, produce multiple what-if scenarios instantly, and allow finance teams to focus on decisions rather than spreadsheets. These same tools detect anomalies in payments, highlight suspicious expenses, and flag revenue recognition risks without human intervention.
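To make the idea of "scripted and supervised" concrete, here is a minimal sketch of what automated transaction matching with a human review step could look like. It is purely illustrative: the column names, sample figures, and tolerance threshold are assumptions, not a description of any particular vendor's tool.

```python
import pandas as pd

# Hypothetical bank feed and ledger extract; column names are illustrative assumptions.
bank = pd.DataFrame({
    "ref": ["INV-101", "INV-102", "INV-103"],
    "bank_amount": [1200.00, 540.50, 980.00],
})
ledger = pd.DataFrame({
    "ref": ["INV-101", "INV-102", "INV-104"],
    "ledger_amount": [1200.00, 545.00, 310.00],
})

TOLERANCE = 1.00  # assumed materiality threshold in the reporting currency

# Match on the shared reference and keep unmatched items from both sides.
matched = bank.merge(ledger, on="ref", how="outer", indicator=True)
matched["variance"] = matched["bank_amount"] - matched["ledger_amount"]

# Flag what a human should review: missing counterparts or variances above tolerance.
matched["needs_review"] = (matched["_merge"] != "both") | (matched["variance"].abs() > TOLERANCE)

print(matched[["ref", "bank_amount", "ledger_amount", "variance", "needs_review"]])
```

The point is the division of labour: the script handles matching at scale, while anything missing a counterpart or breaching the tolerance still lands on an analyst's desk for judgment.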
But there is still a gap between that narrative and reality.
Definition:
Automation vs Augmentation
Automation delegates a repeatable, rules-based task to a system so that it runs with minimal human effort. Augmentation uses the system to accelerate or improve a task while keeping the human’s judgement at the centre. Most effective finance use cases today are augmentation, with automation operating inside well-defined controls and review steps.
Valuefinex, a consulting business operating in the CFO services and finance transformation sector, said: “Business intelligence platforms, machine learning and SaaS applications have, in truth, been automating much of this for more than a decade. Dashboards, self-service analytics, and integrated reporting suites already eliminated huge layers of manual work long before AI became the headline. The real difference is that few CEOs or CFOs were paying close attention, and they did not implement these tools or give them any priority. You rarely heard the term business intelligence in quarterly calls, investor briefings, or industry conferences. Today, by contrast, AI is the word that surfaces in almost every discussion and is mentioned five times before lunch.”
Generative AI has made advanced support tools available to anyone who can frame a question clearly. Not long ago, implementing or troubleshooting an ERP system meant logging tickets, waiting on the function owner, or digging through outdated documentation, and even writing complex Excel functions could feel intimidating. Now, you drop a screenshot and a question into a chat, and nine times out of ten the guidance is spot on. It cuts through layers of dependency, removes the middlemen, and finally gives end users the autonomy that years of digital transformation projects promised but rarely delivered.
A working paper called “GPTs are GPTs” by OpenAI and University of Pennsylvania researchers looked at how large language models (LLMs) like ChatGPT could affect the U.S. labour market.
Their key findings:
- About 80% of workers could see at least 10% of their daily tasks affected by LLMs.
- Around 19% of workers could have half or more of their tasks influenced or sped up by AI tools.
- If we include new AI-powered software built on top of LLMs, as much as 47–56% of all tasks could be done significantly faster, not necessarily replaced, but transformed.
The authors call these systems “general-purpose technologies”, like electricity or the computer, meaning their influence will eventually reach almost every industry. Roles in finance, accounting, consulting, and data-driven management are among the most exposed, not because they disappear, but because much of the work is structured, text-based, and rules-driven.
What GenAI Changes First in Finance
Examples of tasks that LLMs can already perform or accelerate include the ones described above: transaction matching, variance commentary, data cleaning, scenario forecasting, and anomaly flagging.
Important: “Exposure” means tasks can be accelerated or reshaped — not that roles vanish. The value shifts to verification, judgment, and decision framing.
What AI Cannot Replace
A model can calculate, predict, and summarise with extraordinary breadth, yet it does not understand why a forecast that fits the data may still be wrong in context. It does not know that a new competitor has changed the price architecture of a category, or that a regulatory consultation due next quarter could alter a bank’s capital allocation preferences, or that a manufacturing site’s maintenance window is likely to slip because a key supplier has been acquired. A board needs a narrative that connects the numbers to the business reality and sets out the viable choices. The person who frames those choices must be credible with stakeholders and accountable for outcomes, and that remains a human skillset. Negotiation, trust building, and the ethics of trade-offs sit outside the reach of pattern recognition.
Bankers such as Jamie Dimon of JPMorgan are speaking up more often than they used to, making statements like: “There is not one job that will be untouched by AI, yet the task is to plan and retrain so that change becomes an advantage rather than a shock to the system.” The bank has already invested more than $2 billion in building its own artificial intelligence stack and, by its own estimates, has generated a similar amount in annual savings and new efficiencies.
JPMorgan Chase has emerged as the financial sector’s most aggressive adopter of artificial intelligence, applying it across everything from legal reviews to developer workflows. Its early project, COiN (Contract Intelligence), uses natural language processing to scan thousands of contracts in seconds, saving an estimated 360,000 legal hours per year. Within its technology division, engineers now rely on AI-powered coding assistants such as GitHub Copilot and an in-house tool called PRBuddy to automate pull requests, documentation, and code reviews. The bank’s broader LLM Suite, a secured internal version of ChatGPT that integrates with models from OpenAI and Anthropic, is now accessible to more than 250,000 employees, assisting with everything from performance review drafting to investment banking pitch decks. Chief analytics officer Derek Waldron describes this as the foundation of a “fully AI-connected enterprise” where each employee will eventually work alongside personal AI agents.
Still, behind the ambition, the numbers warrant scrutiny. JPMorgan’s claim of $2 billion in AI-related value remains a projection rather than realised profit, with most benefits described as “expected efficiency gains.” The rollout of LLM Suite to 140,000 staff may showcase scale, but the use cases, such as automating internal reports, generating PowerPoints, or writing self-assessments, are still administrative rather than transformational. Even the most impressive demonstrations, like an AI model producing an investment banking deck in 30 seconds, remain heavily supervised outputs requiring verification before client use.
The Deloitte case shows why AI-generated work still needs careful human checking. The firm delivered a $440,000 report to the Australian government that was later found to contain fabricated references and even a misquoted court judgment, with parts of it produced by a generative AI tool. Deloitte said humans reviewed the draft, but the mistakes still slipped through. As a result, the government asked for a refund and plans to tighten future contracts to control how AI can be used.
It’s a reminder that tools like ChatGPT can write quickly, but they don’t always know what’s true. They sometimes “hallucinate,” which means making up facts or sources that sound real but aren’t. That’s why AI-generated documents, especially in areas like government, law, or finance, must always be reviewed by people before being shared or published. Speed is valuable, but accuracy and accountability matter more.
Definition:
Hallucination — In artificial intelligence, this term describes when a generative model (like ChatGPT or similar large language models) produces information that sounds factual but is actually incorrect, fabricated, or entirely invented.
For example, we asked ChatGPT to give us a reference to a Financial Times report on AI taking over jobs, and it produced a link that matched the FT’s style and URL format. The link appeared genuine, but in reality the page does not exist. This illustrates how AI can generate convincing but false information, underscoring the need for human verification and source checking before publication or professional use.
Skills Finance Professionals Need to Stay Relevant
The most resilient finance careers combine strong fundamentals with fluency in modern tools. Technical breadth still starts with accounting standards, valuation methods, and corporate finance. It now extends to data handling through Power Query, SQL, and model governance basics, since professionals who can shape clean inputs will always generate better outputs. Visual analytics and structured storytelling matter because boards and executive teams act on narratives that connect metrics to choices rather than on dashboards alone.
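To illustrate what “shaping clean inputs” can mean in practice, here is a minimal, hypothetical sketch of the kind of data handling referred to above; the same steps could equally be done in Power Query or SQL, and all column names and sample values are assumptions.

```python
import pandas as pd  # requires pandas 2.0+ for format="mixed"

# Hypothetical raw general-ledger extract with typical data-quality issues.
raw = pd.DataFrame({
    "account": [" 4000 ", "4000", "5100 "],
    "posting_date": ["2025-01-31", "31/01/2025", "2025-02-28"],
    "amount": ["1,200.00", "1,200.00", "(350.00)"],
})

clean = raw.copy()
clean["account"] = clean["account"].str.strip()

# Parse mixed date formats; anything unparseable becomes NaT for later review.
clean["posting_date"] = pd.to_datetime(clean["posting_date"], format="mixed", errors="coerce")

# Convert accounting-style amounts: drop thousands separators, treat parentheses as negatives.
clean["amount"] = (
    clean["amount"]
    .str.replace(",", "", regex=False)
    .str.replace("(", "-", regex=False)
    .str.replace(")", "", regex=False)
    .astype(float)
)

clean = clean.drop_duplicates()  # the first two rows collapse into one once standardised
print(clean)
```

Whatever the tool, the habit is the same: standardise and validate the inputs before analysing them, because clean, consistently typed data is what makes downstream dashboards, forecasts, and AI tools trustworthy.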
As automation and AI continue to take over much of the reporting and analytical workload, what distinguishes senior finance talent today is not technical speed but strategic depth. The most in-demand roles consistently centre on five core domains — Corporate Finance, Business Valuation, Corporate Governance, Private Equity, and Mergers & Acquisitions. These aren’t optional extras; they form the skill base required to lead in modern, PE-backed enterprises that are reshaping capital markets and redefining the role of finance leadership. The Executive Certificate in Corporate Finance, Valuation & Governance offered by CLFI is built around these exact competencies — developing the analytical rigour, commercial judgment, and governance fluency that help finance professionals preserve their value as AI automates manual and repeatable tasks.
Programme Content Overview
The Executive Certificate in Corporate Finance, Valuation & Governance delivers a full business-school-standard curriculum through flexible, self-paced modules. It covers five integrated courses — Corporate Finance, Business Valuation, Corporate Governance, Private Equity, and Mergers & Acquisitions — each contributing a defined share of the overall learning experience, combining academic depth with practical application.
Chart: Percentage weighting of each core course within the CLFI Executive Certificate curriculum.
Grow expertise. Lead strategy.
Build a better future with the Executive Certificate in Corporate Finance, Valuation & Governance.
AI Adoption in Companies
In an October 2025 interview with Bloomberg, Bob Sternfels, McKinsey’s Global Managing Partner, said: “The role of a CEO is never to be comfortable. You have to play offense and defense at the same time, adopting new technologies while building resilience for the shocks that will inevitably come.”
His comments reflect a clear shift: artificial intelligence is now a CEO-level priority. The discussion has moved from potential to urgency, pushing leaders to integrate AI into strategy, operations, and decision-making to stay competitive.
AI is already being deployed across functions, from marketing and customer support to forecasting and supply chain management. The challenge is no longer identifying where it fits, but ensuring it delivers measurable results.
Leaders succeeding with AI focus less on building models and more on embedding them. They automate service, improve forecasting, and personalise digital experiences, always linking performance back to financial outcomes like margins, retention, and efficiency.
The key question is no longer “How advanced is our AI?” but “How well does it convert computing power into cash flow?” Metrics such as revenue per unit of compute, gross margin per API call, and workflow penetration now define real value.
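As a back-of-the-envelope illustration of those metrics, the sketch below computes two of them from purely hypothetical figures; the numbers and variable names are assumptions for illustration, not benchmarks.

```python
# Hypothetical monthly figures for an AI-enabled product line (illustrative only).
revenue = 500_000.0          # revenue attributable to AI features, USD
compute_cost = 120_000.0     # GPU / inference spend, USD
cost_of_revenue = 180_000.0  # compute plus other direct costs, USD
api_calls = 2_000_000        # billable API calls served

revenue_per_compute_dollar = revenue / compute_cost                   # about 4.17
gross_margin_per_api_call = (revenue - cost_of_revenue) / api_calls   # about $0.16

print(f"Revenue per $1 of compute: {revenue_per_compute_dollar:.2f}")
print(f"Gross margin per API call: ${gross_margin_per_api_call:.4f}")
```

However the inputs are defined, the discipline is the same: tie AI spending to cash-flow outcomes rather than to model sophistication.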
As Sternfels noted, volatility is here to stay. The companies that thrive will be those applying AI with discipline, combining innovation with governance and treating each deployment as both a technological and financial decision.
Gen AI adoption by function — Overall (% of respondents)
| Business function | Overall (%) |
|---|---|
| Marketing and sales | 42 |
| Product and/or service development | 28 |
| IT | 23 |
| Service operations | 22 |
| Knowledge management | 21 |
| Software engineering | 18 |
| Human resources | 11 |
| Risk, legal, and compliance | 11 |
| Strategy and corporate finance | 11 |
| Supply chain / inventory management | 8 |
| Manufacturing | 2 |
| Using Gen AI in at least 1 function | 71 |
Note: “Overall” represents the cross-industry share of respondents regularly using Gen AI in each function.
Data recreated from The State of AI: How organizations are rewiring to capture value by McKinsey.
From “Software Eats the World” to “Software Eats Labour”
Venture capital firms, major AI developers, and chipmakers share a clear message: automation is the next industrial revolution. What began as the idea that “software will eat the world” has evolved; Andreessen Horowitz (a16z) now argues that software will eat labour. The reasoning is simple: if U.S. labour is worth around $13 trillion a year and the global software market about $300 billion, then turning capital into computing power that performs human work could reshape entire economies.
Sequoia Capital calls this moment a “cognitive revolution,” estimating a $10 trillion opportunity as AI expands from software into services, automating roles once filled by lawyers, analysts, engineers, and finance professionals. They describe this shift through a new measure, “flops per knowledge worker,” the computing power used per employee as AI tools integrate into daily work. This expectation fuels the race for chips, data centres, and energy capacity.
Yet investor optimism rarely reflects the lived reality of workers. The same capital funding AI breakthroughs can also widen inequality between those building systems and those adapting to them. Investors see efficiency and profit; employees face uncertainty. The real question is whether AI will be used only to cut costs or to enhance human capability and strategic judgment.
Every financial cycle starts with a bold story and ends with a correction. AI’s story, that intelligence itself can be industrialised, may be the boldest yet. Whether the current record-high valuations of private and public companies become lasting transformation or another speculative bubble will depend on discipline. The winners will be those creating real returns, durable productivity, and genuine human value beyond automation.
The question isn’t whether we’ll use AI; it’s whether we can stop ourselves from asking.
Further Reading:
Are We Witnessing One of the Historical Bubbles?
Since the release of ChatGPT in November 2022, Nvidia’s market capitalisation has multiplied several-fold — transforming GPUs into the new oil of the digital economy. Read the full analysis in our CLFI Insight article on AI Circular Financing.