
By Ralph Carnicer, Head of Tax Engineering, Filed
In conversations with hundreds of firms this year, from solo practitioners and mid-size shops to Top 100 groups, the discussion often goes the same way. Leaders understand what AI is supposed to do. They’ve sat through demos and have a handle on the concept, but what they don’t have is trust. Here’s the thing: they shouldn’t trust it yet, at least not blindly.
These firms aren’t inherently afraid of AI. They’re trying to protect their clients (and their licenses), and most have been burned before by something that looked great in a sales pitch but created more problems than it solved. One CPA in Miami told me he’d stopped counting how many “AI-powered” features had just added steps to his workflow. He called them “unhelpful tools,” and that’s exactly what keeps happening. Recent research from Thomson Reuters found that 47% of tax firms want AI but fear implementation. That gap between interest and adoption is the trust problem right there.
AI Hasn’t Earned Trust from the Market Yet
CPAs have seen what happens when AI gets it wrong: hallucinated tax positions, incorrect filings and disciplinary actions. While we’re still early in understanding the full scope of AI-related litigation and malpractice claims, the risk is real and likely growing. When a firm asks me whether adopting AI exposes them to risk, what they’re really asking is whether this thing will know when to stop and ask for help, or if it’s going to walk them straight into a malpractice claim.
I think about it like hiring someone new. You don’t hand them a complicated multi-state return on day one. You start them on straightforward returns, watch how they handle edge cases and see if they escalate when something’s unclear instead of just pushing it through. Trust builds over time when you see consistent judgment.
Firms want that same behavior from AI: visibility into what it understood, what it couldn’t figure out and where it needed human input. Without that transparency, you’re asking someone to sign off on work they can’t explain, and no CPA is going to do that.
What Actually Changes Minds
I sit in on a lot of demos with prospects, walking through how the system interprets source documents, what it pulls from each form and where it makes decisions. One call stands out. We took a W-2, copied it by hand and deliberately introduced a small typo, the kind that happens during manual entry when someone’s rushing. We ran it through the platform and it flagged the inconsistency, showed the evidence trail and explained why the number didn’t match.
The firm got it right away. It wasn’t the speed that impressed them but the fact that the system caught something a tired preparer might miss at 11 p.m. during busy season. That’s what firms are looking for: not a shortcut but a safeguard, something that behaves like your most reliable senior associate who’s thorough, willing to admit when something’s unclear and shows you why they did what they did.
The Pressure Looks Different by Firm Size
Smaller firms are drowning. Solo practitioners are retiring and can’t sell their practices, so their clients go to neighboring firms that are already stretched too thin. These firms keep asking how they can handle more work without hiring people they can’t find or afford. For them, AI needs to take the grunt work off the table: data entry, document sorting and first-pass prep.
Larger firms face different challenges. Leaders are trying to figure out where AI fits, whether that’s a firmwide rollout, a pilot in one service line or a test on specific workflows. A Top 25 firm told me their biggest pain point was document intake, where staff members receive emails with attachments but have no automatic way to track what came in, what it applies to or whether they’re still waiting on something. They wanted the system to recognize documents, map them to the right return and push status updates into their CRM rather than require a complete overhaul.
Different situations, but the same underlying need: measurable benefit without breaking what’s already working.
How Firms Are Building Trust for 2026
The firms making real progress as they plan for 2026 share common practices. They’re starting with parallel testing, taking finished returns from last season, running them through an AI platform and comparing the output line by line with what their preparers did. What’s interesting is that this process exposes errors on both sides. I’ve seen firms find mistakes in their own work when they compare it against the AI output, and instead of getting defensive, they use it to strengthen their internal processes.
These firms are also asking vendors harder questions during demos. They’re pushing for evidence trails, wanting to see exactly how the AI arrived at each conclusion and where it flagged uncertainty. They’re testing edge cases and unusual scenarios rather than just watching clean examples, and they’re prioritizing tools that integrate with existing workflows rather than requiring wholesale process changes.
That’s how trust gets built: one return at a time, with full visibility into every decision.
What the Profession Actually Needs
CPAs are dealing with talent shortages, growing workloads, clients who expect more and compliance that keeps getting more complex. AI won’t solve all of that, but it can take the repetitive, high-volume work off people’s plates so they can focus on what actually requires professional judgment.
The profession doesn’t need another tool claiming to be intelligent. It needs systems that behave reliably, communicate clearly and earn trust the way a good employee does: through consistency, transparency and the willingness to say, “I’m not sure about this, can you look at it?”
Firms aren’t asking for a revolution. They want steady improvement, tools that show their work, flag uncertainty and support the reviewer instead of trying to replace them. When AI operates that way, adoption isn’t about hype. It’s about proof, which is what firms have been waiting for.
