SMB AI Ethics Series: Post 2

December 15, 2025 · 24 min read

Where SMBs Go Wrong (Without Realizing It)

Most AI implementation failures don’t start with bad intentions. They start with reasonable-sounding assumptions that turn out to be dangerously wrong. Here are the four mistakes we see repeatedly—and why they’re more costly than you think.

Mistake 1: “We’re too small for this to matter”

The assumption: AI regulations target tech giants, not small businesses.

The reality: Colorado’s SB24-205 applies to any organization deploying high-risk AI systems in the state. There’s a narrow exemption for companies under 50 employees, but only if you don’t train AI on your own data. Use a third-party AI recruiting tool that you’ve customized with your hiring data? You’re in scope. Deploy AI for credit decisions, tenant screening, or employee evaluations? You’re in scope.

Size doesn’t exempt you from discrimination laws, privacy regulations, or liability. It just means you have fewer resources to deal with the consequences when something goes wrong.

Mistake 2: “The vendor handles all that”

This is the most dangerous assumption we encounter. You purchase an AI-powered tool from a reputable vendor, and you assume they’ve handled the compliance, ethics, and governance pieces. After all, they’re the AI experts, right?

Here’s the problem: your responsibility doesn’t end at the purchase order. You own the outcomes.

Delta Air Lines learned this the hard way in 2025. They partnered with a third-party AI provider (Fetcherr) for pricing optimization. When Senate investigators came asking questions about discriminatory pricing, Delta couldn’t simply point to their vendor. The airline faced direct accountability—to regulators, to the public, to their customers—regardless of who built the underlying technology.

When your AI vendor’s model produces problematic results in your business context, you’re the one answering to regulators, customers, and potentially juries. Your vendor agreement won’t protect you from compliance violations or discrimination lawsuits.

Mistake 3: “We’ll deal with ethics after we get it working”

The technical debt of unethical AI is real, and it compounds fast. Biased training data doesn’t fix itself. Opaque decision-making becomes harder to explain the longer it runs. Ungoverned model drift creates liability with every passing day.
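If "ungoverned model drift" sounds abstract, here is a minimal sketch of what governed monitoring can look like, assuming you kept a snapshot of the data your model was validated on. The file names, columns, and significance threshold below are illustrative, not a prescription:

```python
# Minimal drift check: compare recent inputs against a validation-time snapshot.
# Uses a two-sample Kolmogorov-Smirnov test on each numeric feature.
import pandas as pd
from scipy.stats import ks_2samp

def drifted_features(baseline: pd.DataFrame, recent: pd.DataFrame, alpha: float = 0.01) -> list[str]:
    """Return numeric columns whose recent distribution differs significantly from the baseline."""
    flagged = []
    for col in baseline.select_dtypes("number").columns:
        _, p_value = ks_2samp(baseline[col].dropna(), recent[col].dropna())
        if p_value < alpha:
            flagged.append(col)
    return flagged

baseline = pd.read_csv("validation_snapshot.csv")   # hypothetical export
recent = pd.read_csv("last_30_days_inputs.csv")     # hypothetical export
print("Features showing drift:", drifted_features(baseline, recent))
```

The point isn't the statistics. A check like this only works if someone decided, before deployment, to keep the snapshot and schedule the comparison.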

Colorado’s law requires impact assessments, documentation, and governance frameworks before deployment. The EU AI Act demands the same. Trying to retrofit compliance onto a running system isn’t just harder—it’s often technically impossible without starting over.

And here’s the timeline pressure: Colorado’s requirements take effect June 30, 2026. The EU AI Act reaches full applicability August 2, 2026. If you’re planning to “worry about it later,” later is already here. Companies building systems now without ethical foundations will face a choice within the next several months: scramble to achieve basic compliance or shut down non-compliant systems.

Mistake 4: “Our people know not to be biased”

Human oversight doesn’t fix algorithmic bias—it often amplifies it through a phenomenon researchers call “automation bias.” People trust AI outputs even when they shouldn’t, especially when the system seems sophisticated or when they’re under time pressure.

Northeastern University experts studying airline AI pricing found that “the ‘black box’ nature of AI models can undermine transparency and consumer awareness and enable price discrimination.” The problem isn’t that humans can’t recognize bias—it’s that they can’t see inside the AI to know it’s happening.

“Trust but verify” fails when you lack the technical capability to verify anything. Without proper validation frameworks, testing protocols, and bias detection tools, human oversight becomes rubber-stamping.
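To make "bias detection tools" concrete, here is a minimal sketch of one common check, the four-fifths (disparate impact) rule, assuming you can export your AI tool's decisions with a protected-group column and an outcome column. The file and column names are hypothetical:

```python
# Disparate impact check: compare favorable-outcome rates across groups.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Lowest group's favorable-outcome rate divided by the highest group's.
    Values below 0.8 are a common red flag under the four-fifths rule."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

decisions = pd.read_csv("ai_screening_decisions.csv")  # hypothetical export
ratio = disparate_impact_ratio(decisions, group_col="gender", outcome_col="advanced")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below the four-fifths threshold: review before trusting this tool's output.")
```

A check this simple won't prove a system is fair, but it turns "trust but verify" from a slogan into something a reviewer can actually run.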


The Pattern: All four mistakes share a common thread—they delay ethical considerations until problems emerge. But as we’ve seen repeatedly in 2024 and 2025, by the time problems become visible, the damage is done.

“Most businesses treat AI ethics like they treat backup systems—something they’ll get to right after the crisis proves they needed it. But unlike backups, you can’t restore ethical AI after the damage is done.” — Gary Whitsett

The companies that succeed with AI aren’t moving faster. They’re moving smarter—building responsibility in from day one rather than trying to retrofit it later.
