Material Interest | Issue #2 — March 24, 2026

The intelligence brief for AI in regional and community banking

The Big Picture

Treasury Hands Community Banks Their First AI Governance Playbook

For months, the practical question facing every community bank board considering AI has been the same: we know we need governance, but what does that actually look like for a bank our size?

As of March 1, there's an answer. The U.S. Treasury published the Financial Services AI Risk Management Framework (FS AI RMF) alongside a companion AI Lexicon — the first concrete, usable governance toolkit from a federal body that community banks can pick up and implement without hiring a team of consultants to translate it.

This is not another set of high-level principles. The framework includes 230 control objectives organized across four functions borrowed from the National Institute of Standards and Technology: govern, map, measure, and manage. It comes with a self-assessment questionnaire that lets a bank evaluate where it stands in the AI adoption lifecycle, a user guidebook that walks through implementation, and a control objective reference guide that details what “good” looks like at each stage. The companion AI Lexicon establishes common definitions for key AI concepts — addressing a real source of confusion across regulatory, technical, legal, and business teams that have been using the same terms to mean different things.

What makes the FS AI RMF especially relevant for community banks is its design philosophy. According to Treasury’s press release, the framework is “scalable and flexible, supporting adoption by institutions of varying size and complexity.” Translation: a $500 million community bank is not expected to implement the same controls as JPMorgan Chase. You scale the controls to your actual AI footprint and risk profile.

The framework is the first deliverable in a broader six-part Treasury initiative focused on AI cybersecurity, governance, and operational resilience for financial services. It was developed through the Artificial Intelligence Executive Oversight Group, a public-private partnership led by Treasury in coordination with the Financial Services Sector Coordinating Council and the Financial and Banking Information Infrastructure Committee. That public-private pedigree matters — this wasn’t built in a regulatory vacuum. Industry practitioners had input.

The timing is significant. Colorado’s AI Act takes effect June 30. The Fed is deploying AI system-wide across its own regional banks. Over 80% of banking executives report their AI initiatives are stuck in the pilot phase, according to a recent Capgemini study. The gap between “we should do something about AI” and “we have a structured, defensible approach to AI governance” has been the primary blocker for many community banks. Treasury just closed that gap.

What to do about it: Download the self-assessment questionnaire and the user guidebook from Treasury’s website this week. Run through the questionnaire with your senior leadership team — it will take less than an hour and will give your board a clear picture of where your institution stands. Then use the 230 control objectives as a checklist to build or validate your AI governance framework. Even if you haven’t deployed any AI tools yet, having a Treasury-endorsed governance structure in place positions you well for examiner conversations and gives you a defensible foundation before your first implementation. This is a “print it out and bring it to your next board meeting” resource.
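For institutions that want to track their self-assessment in something more durable than a printout, the checklist exercise above lends itself to a simple spreadsheet or script. The sketch below is a minimal, hypothetical Python example of tracking control objectives by NIST function and status — the sample objectives are placeholders we made up for illustration, not Treasury's actual control text, which you would transcribe from the reference guide.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class ControlObjective:
    function: str   # one of the four NIST-aligned functions: govern, map, measure, manage
    objective: str  # transcribed from Treasury's control objective reference guide
    status: str = "not_assessed"  # "met", "partial", "gap", or "not_assessed"

def summarize(objectives):
    """Count statuses per function — a quick gap summary to brief the board."""
    summary = defaultdict(lambda: defaultdict(int))
    for o in objectives:
        summary[o.function][o.status] += 1
    return {fn: dict(counts) for fn, counts in summary.items()}

# Hypothetical sample entries, not Treasury's wording:
checklist = [
    ControlObjective("govern", "Board-approved AI policy exists", "met"),
    ControlObjective("map", "Inventory of AI use cases maintained", "gap"),
    ControlObjective("measure", "Model performance metrics tracked", "partial"),
    ControlObjective("manage", "Vendor AI risk reviewed annually"),
]

print(summarize(checklist))
```

The payoff is a per-function gap count you can refresh each quarter as controls move from "gap" to "met" — the same view an examiner is likely to ask for.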

Use Cases to Watch

Where AI Is Actually Being Deployed This Week

AI-powered lending goes cooperative. Commonwealth Credit Union and Zest AI launched the CU Lending Collective, a new credit union service organization (CUSO) designed to help small credit unions access enterprise-grade AI lending tools they could never afford individually. Zest AI will develop custom AI-powered scoring models for more accurate credit risk assessment across auto loans, personal loans, and credit cards. The structure is what makes this interesting: by pooling resources through a CUSO, small institutions share the cost and complexity of AI adoption while each benefits from sophisticated underwriting models. Zest AI, which recently secured a $200 million growth investment, is also expanding its partnership with nCino to bring AI-automated underwriting further into mainstream lending workflows. Community bankers should watch this model closely. If cooperative AI lending works for credit unions, a similar shared-services approach — convened through state bankers’ associations or ICBA — could be the most realistic path to AI-powered lending for banks under $1 billion in assets. The economics are compelling: instead of each small institution bearing the full cost of building or buying custom AI models, a cooperative structure distributes both the expense and the technical overhead across dozens of participants.

Consumers are using AI to shop for their next bank. A growing number of consumers are turning to ChatGPT, Perplexity, and other AI tools to compare credit card rewards, mortgage rates, and financial products, according to reporting from The Financial Brand. But here’s the nuance: only 29% of consumers completely or highly trust AI output when making banking decisions. Fifty-two percent say AI “lacks human context,” and half don’t trust AI when they can’t identify the source. The silver lining for community banks? Consumers trust their primary bank twice as much as they trust tech companies for financial advice, per an Oliver Wyman analysis. That trust advantage is a competitive moat — but only if your bank is visible to these AI recommendation engines in the first place. If a consumer asks ChatGPT “what’s the best bank for a small business loan in [your market]?” and your institution doesn’t surface, the trust advantage doesn’t help you. This is an emerging competitive dynamic that deserves a place on your strategic planning agenda: AI discoverability is becoming as important as your website’s SEO was a decade ago.

Upstart wants to become a bank. On March 10, AI lending platform Upstart Holdings announced it will apply to the OCC and FDIC to establish Upstart Bank, N.A. — a branchless, digitally native national bank based in Delaware, operating in all 50 states. Upstart’s stated rationale is reducing “operational, regulatory, and financial costs and complexity” by accessing deposit funding and lending directly with a single rate-and-fee structure, per a company press release. Upstart is currently a major AI lending partner for many community banks. If it becomes a chartered bank, the dynamic shifts: a company that was your partner becomes a potential competitor with its own deposit base. This is part of a broader wave — at least 18 banking charter applications were filed with the OCC last year, including from crypto firms like Circle and Ripple. If your bank uses Upstart’s AI underwriting, now is the time to evaluate your vendor concentration risk and consider diversifying your AI lending relationships.

Regulatory Radar


What’s Moving and What It Means for You

Treasury’s FS AI Risk Management Framework. We covered this in depth above, but the regulatory angle bears repeating. The framework’s 230 control objectives, organized across NIST-aligned functions (govern, map, measure, manage), give community banks a structured, regulator-endorsed approach to AI governance. The self-assessment questionnaire lets you evaluate your institution’s AI adoption stage in concrete terms. Even if your AI footprint is limited to a single fraud detection tool, running through the assessment establishes a governance baseline. The practical implication: if an examiner asks about your AI risk management approach, pointing to a Treasury-endorsed framework is significantly more defensible than an internally drafted policy document. Download the full framework from Treasury’s website and review it with your compliance team this quarter.

Colorado AI Act enforcement delayed to June 30 — but the clock is still ticking. Governor Polis signed a special session bill pushing the Colorado Artificial Intelligence Act (CAIA) enforcement date from February 1 to June 30, 2026. Lawmakers are expected to revisit the framework during the regular session, so amendments are possible. But don’t mistake a delay for a reprieve. CAIA remains the first U.S. state law specifically governing high-risk AI systems in financial services. It covers AI systems making “consequential decisions” about consumers — explicitly including lending — and requires impact assessments, bias audits, vendor accountability, and consumer disclosures. There is a potential safe harbor for banks subject to federal prudential regulators, but it’s conditional and not guaranteed. If your bank operates in Colorado or lends to Colorado residents, three months is not a lot of time. Start documenting your AI use cases and vendor relationships now. And even if you’re not in Colorado, these requirements — bias audits, impact assessments, consumer disclosures — are likely to become the national standard over time.

Fed Governor Waller signals AI is a core operational priority. In a February 24 speech at the Boston Fed’s Technology-Enabled Disruption Conference, Federal Reserve Governor Christopher Waller said he has “never seen a technological revolution like AI” in his lifetime — among the strongest language on AI from any Fed official to date. Waller described the Fed’s plan to move toward system-wide AI deployment across all regional banks, ending the era of individual bank-by-bank decision making on AI adoption. He rejected “doom and gloom” narratives about AI replacing workers but acknowledged the transition will be “unsettling.” The plain-English implication for community banks: the Fed itself is going all-in on AI, and examiners who use AI-powered tools will likely expect examined institutions to at least understand AI fundamentals. This speech also signals the Fed is unlikely to take an overly restrictive approach to bank AI adoption — they want the financial system to benefit from AI, not avoid it.

Vendor Spotlight


Jack Henry: Can a Cloud-Native Core Unlock AI for Community Banks?

Jack Henry & Associates is one of the “Big Three” core banking technology providers, alongside Fiserv and FIS. Together, they serve approximately 70% of U.S. banks. For community banks, Jack Henry has historically been the most community-bank-focused of the three — and its next move could be the most consequential infrastructure development in the sector this year.

Jack Henry says it is on track for a first-half 2026 launch of its public cloud-native consumer and commercial deposit-only core. Fifteen core components are now live, with some already deployed internally and externally, according to CCG Catalyst. The company has adopted what it calls a “human-centric AI” strategy — emphasizing data strategy, ethical governance, and clear business objectives before deploying technology. That’s a notably measured approach compared to vendors racing to bolt AI features onto legacy platforms.

Why this matters now: Capgemini’s research finding that 43% of bank IT budgets go to legacy system maintenance, leaving only 29% for transformative technology, describes the exact problem a modern, cloud-native core is designed to solve. You cannot run meaningful AI on a platform built in the 1990s. If Jack Henry delivers a genuinely modern, API-first core, it directly addresses the infrastructure barrier that keeps most community banks stuck in AI pilot purgatory.

The limitations: “On track for H1 2026” is vendor language, and large-scale core migrations are notoriously complex. The deposit-only scope means lending and other functions will come later. Migration paths from existing Jack Henry cores (SilverLake, CIF 20/20, Symitar for credit unions) to the new platform remain unclear for most customers. And a cloud-native core is table stakes — Fiserv and FIS are pursuing similar modernization strategies, so Jack Henry’s competitive advantage depends on execution speed and community-bank-specific design.

The bottom line: If you’re a Jack Henry customer, ask your relationship manager for a timeline on the cloud-native core migration path and what AI capabilities it will enable. Specifically, push for clarity on data access — a modern core is only as valuable as the data layer it exposes, and AI-powered analytics, lending models, and fraud tools all depend on clean, accessible, real-time data. If you’re evaluating cores more broadly, the next six months will reveal whether the Big Three can actually deliver on their modernization promises — or whether smaller, born-in-the-cloud core providers gain an opening.

Until next week — Material Interest

Next week we’re watching: the Colorado legislature’s regular session moves on AI Act amendments, and we’ll have a closer look at how community banks are building internal AI use-case inventories — the practical first step most governance frameworks require.

© 2026 Material Interest. All rights reserved.

Material Interest is published weekly. We track AI adoption, regulatory developments, and vendor activity across America’s regional and community banks.
