Your Team Is Already Using AI. Two Regulators Just Published the Rulebook.
Somewhere in your finance org right now, an analyst is pasting actuals into ChatGPT to draft variance commentary. An FP&A lead is using Claude to pressure-test a board narrative. A controller is running Gemini against a rec that won’t tie out. None of it is documented. None of it shows up in your control matrix. And until two weeks ago, there wasn’t a finance-specific governance framework designed to help you categorize and manage any of it.
That changed fast.
On February 19, the U.S. Treasury released the Financial Services AI Risk Management Framework, a practical governance toolkit built specifically for finance teams, including 230 control objectives mapped by deployment stage. Five days later, ECB Banking Supervision published a speech signaling it will tighten scrutiny on generative AI in banks, with particular focus on third-party concentration risk: the fact that most GenAI tools trace back to just a handful of foundation model providers.
Two regulators. One week. One message: document what you’re running, who owns it, and what happens when it fails.
The timing is uncomfortable. Only 14% of CFOs in a recent RGP survey said they’ve seen clear, measurable ROI from their AI investments, and 86% cited legacy tools as a significant barrier to enterprise AI adoption. Meanwhile, 53% of investors expect AI projects to deliver returns within six months, a timeline only 16% of CEOs consider realistic, according to Teneo’s Vision 2026 survey. CFOs are absorbing pressure from both directions at once, and neither reflexive response works: moving fast without documentation creates exactly the gap regulators just named, while freezing everything until the framework is perfect forfeits the returns investors expect.
The CFOs Getting This Right Built Governance First
The finance leaders with actual AI ROI share one structural habit. They built the control layer before they scaled the use cases, then used that control layer to accelerate the next deployment. The Treasury framework makes this concrete. Its stage-based control matrix means a team in early-pilot mode applies a different control set than a team running AI across the full close cycle. The accompanying AI Lexicon gives finance and IT a shared vocabulary, which shortens approval cycles for new tools.
The World Economic Forum estimates roughly one-third of all work in capital markets, insurance, and banking has high potential for full AI automation. That’s not a future-state projection. It’s a description of the work surface that governance frameworks need to cover right now.
If You’re the One Actually Using AI Right Now, Read This
This section is for the analysts, accountants, and FP&A staff who are already using AI tools to get work done faster. You’re not waiting for a governance framework. You’re drafting variance commentary in ChatGPT, reconciling data with Copilot, or using Claude to stress-test assumptions before a board review. That’s not a problem. That’s the starting point for something your entire department needs.
But how you use these tools right now matters more than you think. Here’s how to protect yourself and make it easier for your org to formalize what’s already working.
Know what tier you’re on. Enterprise versions of ChatGPT, Claude, and Gemini are SOC 2 Type II compliant and don’t train on your inputs. Free and personal-tier accounts don’t carry those same protections. If you’re using a personal login to process anything with GL data, customer information, or pre-release financials, you’re creating an exposure that sits entirely outside your company’s control framework. Check with IT. If your company hasn’t provisioned enterprise licenses yet, that’s a conversation worth starting.
Keep a simple log of what you’re using and for what. You don’t need a formal inventory tool. A running note will do: which tool, which workflow, what data goes in, what output you rely on. When your department eventually runs the AI inventory (and the Treasury framework just made that a near-certainty), your log becomes the raw material. You’ll go from “person who was using unsanctioned tools” to “person who documented the use cases that are now part of the official rollout.”
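A running note really is enough, but if you want something slightly more structured, here is a minimal sketch, assuming a plain CSV file; the field names and helper function are illustrative, not a prescribed format:

```python
import csv
import os
from dataclasses import dataclass, asdict, fields

# Illustrative fields for a personal AI-usage log; nothing here is a
# mandated schema, just the four facts the paragraph suggests capturing.
@dataclass
class UsageEntry:
    logged: str       # date the entry was recorded
    tool: str         # e.g. "ChatGPT Enterprise", "Claude"
    workflow: str     # which task the tool supports
    data_in: str      # what data goes in, and how sensitive it is
    output_use: str   # what output you rely on downstream

def append_entry(path: str, entry: UsageEntry) -> None:
    """Append one entry to a CSV log, writing the header on first use."""
    is_new = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=[fld.name for fld in fields(UsageEntry)]
        )
        if is_new:
            writer.writeheader()
        writer.writerow(asdict(entry))
```

When the department inventory happens, a file like this exports straight into whatever template governance hands you.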
Never let an AI output go straight into a deliverable without checking it against source data. This is the single most important habit. Variance narratives, rec explanations, forecast assumptions: every AI-generated output needs a human checkpoint before it touches anything reportable. This isn’t about trust. It’s about evidence. If an auditor asks how a number was produced, “the AI drafted it and I verified it against the trial balance” is a defensible answer. “The AI drafted it” is not.
Document what’s working and share it up. If you’ve found a workflow where AI saves real time or catches errors your manual process missed, write it down. Time saved per cycle, error rate before and after, specific steps you follow. That’s the raw material for the operating-metric value cases your CFO needs to justify broader investment. The analysts who surface these proof points become the internal champions who shape how the department scales AI adoption.
You’re not doing anything wrong by using these tools. But undocumented, ungoverned usage is the gap that every new framework is designed to close. The fastest way to make sure AI stays in your toolkit is to make it visible, documented, and defensible.
Three Things to Do Before Your Next Quarter Close
Run the AI inventory before someone else does. The ECB’s concentration risk signal means “which AI tools are your analysts actually using” is now a supervisory question. That includes direct-access tools like ChatGPT, Claude, and Gemini. It also includes AI capabilities embedded in your existing stack: Copilot for Finance in Excel, Joule in SAP, AI features in BlackLine or Workday. The control response is different for each category. The inventory surfaces which category you’re exposed to.
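The two-category split can be encoded directly in the inventory. A minimal sketch, where the tool lists are just the examples from this section, not an exhaustive registry:

```python
# Illustrative two-bucket classification; the control response differs
# by bucket, so the inventory should record which one each tool is in.
DIRECT_ACCESS = {"ChatGPT", "Claude", "Gemini"}
EMBEDDED = {"Copilot for Finance", "Joule", "BlackLine AI", "Workday AI"}

def classify(tool: str) -> str:
    if tool in DIRECT_ACCESS:
        return "direct-access"   # needs usage policy and tier controls
    if tool in EMBEDDED:
        return "embedded"        # needs vendor/config review in the stack
    return "unclassified"        # flagged for follow-up by the inventory

inventory = ["ChatGPT", "Joule", "SomeNicheTool"]
print({tool: classify(tool) for tool in inventory})
```

Anything that lands in "unclassified" is exactly what the inventory exists to surface.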
Map your existing SOX controls before adding new ones. The Treasury’s FS AI RMF extends the NIST framework into finance-specific territory. Teams already running SOX 404 and ICFR documentation have most of the scaffolding. Each AI use case that touches close, forecasting, or reporting needs three elements in the control matrix: a named owner, a human review checkpoint, and an evidence trail. If those exist for a given workflow, the governance work is largely done.
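The three-element check is simple enough to run mechanically over a workflow list. A sketch, assuming the control matrix is held as a dict; the key names and example workflows are illustrative, not Treasury framework terminology:

```python
# The three elements each AI use case needs in the control matrix.
REQUIRED = ("owner", "review_checkpoint", "evidence_trail")

def governance_gaps(control_matrix):
    """Return, per AI workflow, which required elements are missing or empty."""
    gaps = {}
    for workflow, controls in control_matrix.items():
        missing = [key for key in REQUIRED if not controls.get(key)]
        if missing:
            gaps[workflow] = missing
    return gaps

matrix = {
    "variance_commentary": {
        "owner": "FP&A lead",
        "review_checkpoint": "analyst ties narrative to trial balance",
        "evidence_trail": "reviewed draft filed in the close binder",
    },
    "forecast_assumptions": {
        "owner": "Controller",
        "review_checkpoint": "",   # no documented human review yet
        "evidence_trail": "",
    },
}
```

Workflows that come back clean are, per the section above, already most of the way through the governance work.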
Tie your AI value case to an operating metric. Fintech Chime reported AI helped reduce cost-to-serve by roughly 30% over three years. The significance is the framing. Cost-to-serve runs through the P&L. It’s auditable, comparable across periods, and doesn’t require assumptions about how analysts spent recovered hours. The framework for building this kind of value case has four steps: identify the metric, establish a baseline with variance, define the causal link, and monitor through the change. Each step is designed to produce evidence that holds up under board and audit scrutiny.
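Steps two and four of that framework can be sketched in a few lines. The numbers below are invented for illustration, and the two-standard-deviation threshold is an assumption, not a mandated test:

```python
from statistics import mean, stdev

# Step 2: baseline with variance (cost-to-serve per quarter, pre-AI).
baseline = [12.4, 12.1, 12.9, 12.6, 12.3]
# Step 4: the same metric monitored through the change (post-rollout).
post = [10.8, 10.5, 10.9]

mu, sigma = mean(baseline), stdev(baseline)

# Did every post-change period land below normal baseline noise?
improved = all(x < mu - 2 * sigma for x in post)

print(f"baseline {mu:.2f} (sd {sigma:.2f}); sustained improvement: {improved}")
```

A movement that clears the baseline's own variance, period after period, is the kind of evidence that survives the board and audit scrutiny the section describes; a single good quarter inside the noise band is not.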
The governance floor just got poured. The regulators handed you the vocabulary, the control structure, and the staging logic. Use it as the foundation and you’ll deploy faster with less risk. Ignore it and you’ll be explaining the gap when an auditor asks what your analysts have been running for the last eighteen months.
Unlock the full model & templates
Pro subscribers get: the FS AI RMF mapped to SOX/ICFR control categories, a Finance AI Inventory Template with bottom-up and top-down classification, an Operating-Metric Value Case Builder, and a Five-Prompt Governance Documentation Pack.