Material Interest | Issue #3 — March 31, 2026
The intelligence brief for AI in regional and community banking

The Big Picture
The White House Wants One Set of AI Rules — Not Fifty. Here's What That Means for Your Bank.
On March 20, the White House released its National AI Legislative Framework — a seven-pillar policy document that represents the most consequential AI governance development since the Treasury Department's risk management framework earlier this month. The headline for community bankers: the administration is calling on Congress to preempt state AI laws and replace them with a single, "minimally burdensome national standard."
That phrase — "minimally burdensome" — is doing a lot of work. The framework explicitly states that Congress should preempt state AI laws "that impose undue burdens" and that states "should not be permitted to regulate AI development." It's a direct shot at laws like the Colorado AI Act, which covers high-risk AI systems in financial services and lending, and which is still set to take effect on June 30 of this year.
For the 4,500+ community and regional banks operating across state lines, the prospect of one federal rulebook instead of fifty state ones sounds like relief. But the details matter — and the framework is a wish list, not law. Congress still has to act, and the timeline is uncertain.
Here's what's actually in the framework and why it matters for your institution.
Sector-specific oversight, not a new AI agency. Rather than creating a standalone federal AI regulator, the framework routes AI oversight through existing agencies. For banking, that means the OCC, FDIC, and Federal Reserve would shape AI rules for financial institutions — not some new bureaucracy staffed by technologists unfamiliar with bank examination cycles. This is broadly good news for community banks. Your examiners already understand your business model. The question is whether those agencies will layer new AI-specific requirements on top of existing safety-and-soundness frameworks, or keep the current principles-based approach that gives smaller institutions flexibility.
State preemption with carve-outs. The framework preserves states' ability to enforce general laws that "protect children, prevent fraud, and protect consumers." But it draws a hard line against state-level AI-specific regulation. If Congress follows this playbook, laws like the Colorado AI Act — which requires impact assessments, bias audits, and consumer disclosures for AI used in lending decisions — could be preempted before they ever take full effect. That said, at least 15 states have introduced AI-related legislation, and the political fight over preemption will be significant, according to reporting from Roll Call and CNBC.
What hasn't changed. The framework supports law enforcement efforts to combat AI-enabled fraud and scams — a growing concern for community banks whose customers are increasingly targeted by AI-generated phishing and voice cloning attacks. It also signals continued emphasis on American AI competitiveness, with a pillar dedicated to "Enabling Innovation and Ensuring American AI Dominance." The broader message: the administration wants banks to adopt AI, and it wants to remove regulatory friction that slows adoption.
What to do about it. Don't wait for Congress. The framework's direction is clear, but the legislative timeline is not — and the Colorado AI Act's June 30 effective date arrives well before any federal preemption bill could pass. The practical move: keep building your AI governance program using the Treasury AI risk management framework we covered earlier this month. That framework is regulator-endorsed, principles-based, and will serve your institution well regardless of whether federal preemption happens this year or next. If your bank operates in Colorado or lends to Colorado borrowers, treat the Colorado AI Act requirements as your compliance floor — not as something that might go away. And at your next board meeting, brief directors on the preemption debate. They need to understand that the regulatory landscape is shifting, even if the destination isn't yet certain.
Use Cases to Watch
Where AI Is Getting Real — and Where It's Getting Stuck
Back-office automation and digital transformation. A new Capgemini report puts hard numbers on a frustration many community bankers already feel: 43% of bank IT budgets are consumed by maintaining legacy systems, leaving just 29% for transformative technologies like AI. The result? More than 80% of bank executives say their AI initiatives aren't boosting revenue as expected, and 51% report that new AI products didn't deliver anticipated cost savings. Yet 42% of U.S. financial companies plan to increase AI investment by more than 50% this year anyway. The disconnect is instructive. Banks that bolted AI onto aging core systems saw the weakest returns. Those that invested in data infrastructure modernization alongside AI — cleaning data, building integration layers, updating core platforms — were the ones seeing measurable results. For community banks with limited IT budgets, this argues strongly for a sequenced approach: modernize the foundation first, then layer AI on top. A $50,000 data cleanup project might deliver more ROI than a $200,000 AI tool sitting on top of dirty data.
Compliance and lending. Freddie Mac's AI governance requirements took effect on March 3 — making them the first concrete, deadline-driven AI compliance mandate to hit community bank mortgage operations. The requirements, outlined in Bulletins 2025-16 and 2025-17, apply to any Seller/Servicer using AI or machine learning anywhere in the mortgage process. That includes vendor-embedded AI in underwriting, document processing, chatbots, fraud detection, income calculation, and borrower outreach. The critical principle: responsibility doesn't transfer to your vendors. If your loan origination system uses AI to calculate income or flag fraud, your bank is accountable for how that AI performs — not the vendor. Banks need to be able to answer four questions right now: which of our vendors use AI, how is it deployed, what data feeds it, and who at our institution is accountable for oversight? If you can't answer all four, you have a compliance gap that needs immediate attention.
Risk management and vendor oversight. An American Banker analysis warns that AI risk is "sneaking up on community and regional banks" through vendor platforms — and the framing is worth taking seriously. In most community banks, AI wasn't adopted through a deliberate strategic decision. It arrived embedded in vendor platforms: loan origination systems with predictive underwriting, fraud engines that auto-score transactions, marketing systems that determine customer targeting. The difference between AI and prior technology waves, the analysis argues, is that AI "embeds itself in decision authority." It doesn't just process data — it makes or shapes decisions that carry regulatory and legal consequences. The gap between where AI risk forms (inside vendor platforms) and where oversight resides (your board and management team) is the governance challenge of 2026. Pair this with Freddie Mac's "you're still accountable" principle and the ABA's proposed "nutrition label" for AI vendors (see Regulatory Radar below), and a clear theme emerges: vendor AI oversight is the single most important AI governance priority for community banks this year.
Regulatory Radar

Three Developments That Should Be on Your Compliance Team's Desk
White House National AI Legislative Framework. As detailed in The Big Picture above, the administration's March 20 framework calls for federal preemption of state AI laws and sector-specific oversight through existing regulators. The practical implication for community banks: if Congress follows this roadmap, you'd face one set of federal AI rules rather than a patchwork of state requirements. That's generally favorable for smaller, multi-state institutions. But it also means your existing federal regulators — the OCC, FDIC, and Fed — will be writing the playbook. Their current approach is principles-based and relatively flexible for community banks. Watch whether that changes as they absorb formal AI oversight authority. (Sources: White House, ABA Banking Journal, Axios)
Colorado AI Act — delayed to June 30, but still very much alive. Governor Polis signed legislation last August pushing the Colorado AI Act's effective date from February 1, 2026 to June 30, 2026. The law remains the first U.S. state statute specifically governing high-risk AI systems in financial services. It requires impact assessments, bias audits, vendor accountability measures, and consumer disclosures for AI used in "consequential decisions" — expressly including lending. Banks and credit unions under "substantially equivalent" federal oversight may qualify for a narrow exemption, but the criteria are restrictive and untested. The White House framework's preemption language targets exactly this kind of state law, but Congressional action won't arrive before the June 30 deadline. If you lend in Colorado or to Colorado borrowers, prepare now. Even if federal preemption eventually passes, the Colorado requirements — impact assessments, bias audits, vendor accountability — represent good governance practice that will serve your institution regardless of the legal outcome. (Sources: Colorado General Assembly, IAPP, Anaptyss)
ABA and BPI propose "nutrition label" for AI vendor transparency. The American Bankers Association and Bank Policy Institute filed a joint letter to NIST's Center for AI Standards and Innovation in March, urging voluntary, consensus-based standards for agentic AI — the emerging class of AI systems that take autonomous actions, not just generate text. Their standout proposal: a "risk-scaled controlled-sharing profile" that would function like a nutrition label for AI products, giving banks a standardized way to evaluate what an AI system does, what data it consumes, and what risks it carries. Over 930 organizations submitted comments to NIST's request for information, signaling broad industry interest. For community banks, this is the rare lobbying story with immediate practical value. You don't build AI — you buy it from vendors. And today, there's no standardized disclosure framework for assessing what you're getting. Even before NIST finalizes standards, the ABA/BPI framework gives your vendor management team a vocabulary for asking better questions: What decisions does this AI make? What data does it train on? How is it monitored for bias and drift? Start asking those questions now. (Sources: ABA Banking Journal, Cybersecurity Dive)
Vendor Spotlight

interface.ai — AI-Powered Customer Experience Built for Community Banks
interface.ai is an AI platform purpose-built for credit unions and community banks, offering voice, chat, and operational automation tools designed for sub-$10B institutions. Unlike many fintech vendors that build for enterprise banks and then try to scale down, interface.ai started with community-scale institutions as its core market.
The company's most recent product launch is worth attention. In February 2026, interface.ai introduced Smart Collections — a multi-channel, agentic collections agent that works across voice calls, SMS, and email to engage delinquent borrowers earlier in the collections cycle, before accounts roll to later stages where recovery costs rise sharply. The timing is relevant: credit union total delinquency reached 95 basis points in Q3 2025, with 30-59 day delinquencies climbing to 1.13% in September 2025 — approaching pre-pandemic levels, according to PYMNTS.
Smart Collections features automated outbound voice calls with identity verification, two-way SMS with STOP/opt-out compliance, and email with deliverability safeguards. It's a concrete example of what "agentic AI" looks like in community banking practice — autonomous AI taking real-world actions (making calls, sending messages) rather than just analyzing data or generating reports.
Strengths. interface.ai's focus on community-scale institutions means its products are designed for the staffing, budget, and regulatory realities of sub-$10B banks — not retrofitted from enterprise tools. The multi-channel approach to collections addresses a real operational pain point, and the compliance guardrails (opt-out handling, identity verification) reflect an understanding of the regulatory environment community banks operate in.
Limitations. As with any agentic AI product that takes autonomous actions on behalf of your institution, the oversight burden is real. Smart Collections is making outbound contact with your borrowers — your bank is responsible for how those interactions go, what's communicated, and whether the system complies with FDCPA, TCPA, and state-level collections regulations. The vendor's compliance features are a starting point, not a substitute for your own oversight framework. Pricing is not publicly available and typically requires a discovery call. Community banks evaluating interface.ai should also consider how the platform integrates with their existing core banking system — integration complexity varies by core provider.
The bottom line. interface.ai is worth evaluating if your institution is struggling with collections capacity, customer service volume, or call center staffing — particularly if you've been burned by vendors whose products were designed for banks ten times your size. But as the ABA/BPI "nutrition label" proposal reminds us, ask the hard questions about data usage, model transparency, and accountability before signing. (Note: This is not sponsored content. Material Interest has no financial relationship with interface.ai.)
Until next week — Material Interest
Next week, we're watching for Congressional response to the White House AI framework — and whether any draft preemption legislation surfaces before the Colorado AI Act's June 30 deadline tightens the clock further.
Material Interest is published weekly. We track AI adoption, regulatory developments, and vendor activity across America's regional and state-chartered banks. Have a tip, a story, or feedback? Reply to this email.
© 2026 Material Interest. All rights reserved.