
Treasury Just Said the Quiet Part Out Loud: Not Adopting AI Is the Risk
I've been watching federal AI policy for a while now, and most of the time it follows the same script: new technology appears, regulators say "hold on," they write some guidance about risks, everyone complains it's too slow, and nothing changes for two years.
This is different.
On March 23, the Treasury Department's Office of the Financial Stability Oversight Council (FSOC) and the new AI Transformation Office (AITO) launched what they're calling the AI Innovation Series. Four roundtables. Financial institutions, tech companies, regulators, and domain experts all in the same room. The stated goal is accelerating AI adoption in the financial sector while keeping the system safe.
That by itself would be mildly interesting. Government launches initiative, news at eleven. But read the actual language coming out of Treasury, and you'll see why I think this matters.
"Failure to adopt" is the new risk
Treasury Secretary Scott Bessent said something that I can't remember a sitting cabinet member saying before: "We are optimizing regulation to support growth for both Main Street and Wall Street: moving from a posture focused on constraint toward one that recognizes failure to adopt productivity-enhancing technology as its own risk."
Read that last part again. Failure to adopt is its own risk. That's not a tech CEO talking. That's the Secretary of the Treasury.

For years, the regulatory posture on AI in financial services was basically: prove it's safe, then maybe we'll let you use it. The burden was on the institutions to demonstrate that AI wouldn't blow things up. And that made sense for a while. Banks are not the place you want to move fast and break things.
But the framing has shifted. Deputy Assistant Secretary Christina Skinner put it bluntly: "When institutions cannot deploy tools that improve fraud detection, credit allocation, and operational resilience, the system becomes less efficient and less secure." The argument isn't just that AI is useful. It's that not having it makes you vulnerable.
I find this genuinely interesting because it maps onto what I've been seeing in the private sector for the last year. Banks that got serious about AI-driven fraud detection are catching stuff that rule-based systems miss. Credit underwriting models that incorporate broader data are making better decisions. And the institutions still running everything through legacy processes are falling behind in ways that create real systemic risk, not just competitive disadvantage.
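As a toy illustration of the gap described above (not any bank's actual system, and far simpler than production fraud models), here's the basic failure mode: a fixed-threshold rule can't see that a transaction is wildly atypical *for that account*, while even a crude per-account statistical check can. All names and numbers here are made up for the sketch.

```python
# Illustrative sketch only: contrasting a static rule with a simple
# per-account anomaly score. Real fraud systems use far richer features.
from statistics import mean, stdev

def rule_based_flag(amount, limit=10_000):
    """Legacy-style rule: flag any transaction over a fixed dollar limit."""
    return amount > limit

def zscore_flag(amount, history, threshold=3.0):
    """Flag transactions that deviate sharply from this account's own history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

# Hypothetical account whose typical spend is around $50.
history = [42, 55, 38, 61, 47, 50, 44, 58]
suspicious = 900  # well under the fixed limit, but wildly atypical here

print(rule_based_flag(suspicious))          # False: the static rule misses it
print(zscore_flag(suspicious, history))     # True: the per-account check catches it
```

The point isn't that a z-score is state of the art (it isn't); it's that any model conditioned on behavioral context dominates a one-size-fits-all threshold, which is exactly the kind of gap Treasury's "less efficient and less secure" framing is gesturing at.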
What the initiative actually includes
The AI Innovation Series itself is four roundtables (no dates announced yet) bringing together the usual suspects: banks, fintechs, cloud providers, AI companies, and regulators. The format is meant to be collaborative rather than adversarial, which is a departure from the typical regulator-industry dynamic.
But the roundtables are just the visible part. In February, Treasury quietly released a stack of resources that got less attention than they deserved:
- An AI Lexicon defining key AI terms for financial regulators. This sounds boring, but it matters. Half the regulatory confusion around AI comes from people using the same words to mean different things.
- A Financial Services AI Risk Management Framework. Again, not glamorous, but exactly the kind of infrastructure you need before you can write coherent rules.
- Six additional resources covering governance, data practices, transparency, fraud, and digital identity.

Chief AI Officer Paras Malik framed it in operational terms: "AI is moving from experimentation to enterprise-wide integration, and disciplined implementation will determine its impact. The priority now is on operationalization, embedding AI into core workflows in ways that measurably enhance risk management and resilience."
That last sentence is doing a lot of work. "Measurably enhance" means they want metrics. "Operationalization" means they're past the proof-of-concept phase. Treasury isn't talking about whether AI should be in financial services. They're talking about how to do it properly.
Why this is a bigger deal than it looks
There's a pattern in how government policy shifts. It doesn't happen overnight. What happens is that the language changes first, then the frameworks follow, then the rules get rewritten. We're in the language phase right now.
When the Treasury Secretary says that not adopting AI is a risk to financial stability, he's laying the groundwork for a regulatory environment where AI adoption isn't just permitted but expected. Where a bank's failure to implement AI-driven fraud detection could be treated as a supervisory concern, the same way we treat inadequate cybersecurity programs today.
That's a big change. And it's happening faster than I expected.
I also think there's a geopolitical angle here that none of the official statements address directly. China's latest five-year plan puts AI at the center of its financial system modernization. If US financial institutions are sitting on the sidelines because regulations are unclear, that's a competitive problem that becomes a national security problem.
What I'm watching
The roundtables themselves will be interesting, but the real signal will be what comes after. Specifically:
Does this translate into actual regulatory relief? Easing guidance on model explainability requirements, updating fair lending frameworks to account for AI-driven credit decisions, clarifying liability when an AI system makes an error. Those are the hard problems, and roundtables don't solve hard problems by themselves.
The other thing I'm watching is the AITO itself. Treasury stood up a dedicated AI Transformation Office, which means they're putting institutional weight behind this. Government offices can be either engines of change or places where good ideas go to generate reports. We'll see which one this turns out to be.
For now, the fact that Treasury is treating AI adoption as a financial stability issue rather than purely a risk management problem is worth paying attention to. The regulatory winds are shifting, and if you're building anything in fintech or financial services AI, this is the most concrete signal yet that Washington wants you to move faster, not slower.