AI Counseling & AI Governance  ·  Counsel & Strategy

AI governance isn't
an IT problem.
It's a boardroom one.

Companies building AI products face a regulatory environment that is fragmented, accelerating, and unforgiving of retroactive compliance. Navigating it well doesn't require a governance specialist. It requires senior counsel who understands your business, tracks the landscape, and integrates that awareness into every dimension of legal work — from contracts and financing to product decisions and board communications.

The core problem

Federal directives, state innovation, and international standards are now in direct conflict. This is not a future concern. It's a current operating reality.

The timing risk

The standard of care shifts before consensus forms. Capability becomes expectation before anyone formally declares it. Companies that shape the standard win. Companies that react to it lose.

The advantage

Governance built proactively becomes a competitive surface — with investors, enterprise customers, and regulators. Built reactively, it becomes a liability.

Point of View

Why this moment
is different.

For most of the last decade, AI governance was treated as a theoretical concern — something companies noted in risk disclosures and handled as a compliance checkbox. That is no longer an accurate description of the landscape.

The EU AI Act is no longer a future concern. Key provisions are active. New York's RAISE Act and California's TFAIA are creating overlapping and sometimes contradictory obligations at the state level. The FTC is actively recalibrating its enforcement theory, moving away from treating capability itself as misconduct and toward demonstrated deceptive use, while leaving a live battleground over the line between product intent and user misuse. Federal and state priorities are pulling in different directions simultaneously.

Meanwhile, at the product level, AI is no longer a feature. As CES 2026 made clear, AI is now integrated infrastructure — embedded in cars, devices, operating systems, and enterprise workflows. Infrastructure gets held to infrastructure standards. Courts won't distinguish between "AI-powered features" and "how the product works." They will ask: did it work as you said it would? Marketing copy becomes contractual language. Claims that a system "predicts" or "optimizes" become testable.

And in the legal and compliance workflow specifically, the primary AI risk has shifted. It's no longer accuracy — it's reliability under scrutiny. AI systems optimize for plausibility and fluency, not semantic consistency. When outputs are reviewed in discovery six months after they were generated, subtle shifts in emphasis and framing become significant. Legal teams that haven't built documentation and testing protocols around this are carrying risk they haven't priced.

The companies doing this well aren't waiting for federal clarity. They're treating governance as a strategic discipline, building the unglamorous infrastructure — testing protocols, performance logs, incident escalation frameworks — that will determine how they're treated by regulators, investors, and courts when something eventually goes wrong.

Risk Framework

Four ways the right counsel
accelerates what
you're already building.

01
Know when to lead, not just comply

In health tech especially, AI capability becomes the expected standard before anyone formally declares it. Companies that understand this curve early can shape what "best practice" means in their category, rather than scrambling to meet a standard someone else defined. The goal is to be the company that set the bar, not the one catching up to it.

02
Turn regulatory complexity into a moat

Federal, state, and international AI frameworks are moving in different directions simultaneously. Companies that navigate this strategically — not just reactively — build a structural advantage with enterprise customers, sophisticated investors, and regulators. The complexity is real; so is the opportunity for companies that get ahead of it.

03
Say what you mean. Mean what you build.

As AI becomes embedded infrastructure, what your company claims about its capabilities becomes testable. Getting this alignment right — between what you build, how you describe it, and what you can defend — is a source of market credibility, not just legal protection. Companies that communicate clearly about their AI build more durable trust with customers and partners.

04
Use AI in your workflows — with eyes open

AI tools in legal, compliance, and operational workflows deliver real value. The companies using them well are the ones who understand where the outputs are reliable, where they need human review, and how to document decisions so they hold up later. That's not a reason to avoid AI — it's a reason to use it thoughtfully, with the right protocols in place from the start.

"AI governance is often treated like a compliance checklist, when it is more like a strategic risk on a collision course."

— Michael Nichols, on AI governance at the board level

The Test

Three questions every AI company
should be able to answer.

01
Can you explain what it actually does?

If your AI-enabled product fails, can your team explain — to regulators, in court, or to customers — how it was supposed to work? "Machine learning algorithms" won't be sufficient. You need documentation that traces what the system does, what data it uses, and what happens when it produces an unexpected output.

02
Can you back up what your company claims?

Marketing copy can become contractual language. Claims that a system "predicts," "optimizes," or "detects" are increasingly read as testable commitments by regulators, enterprise customers, and plaintiffs. The question isn't what you're allowed to say. It's what you're willing to defend if something goes wrong after the system is deployed.

03
Do you know what breaks when it fails?

When AI is integrated into critical workflows or products, one failure can trigger cascading failures. A misread in a diagnostic system, an error in a compliance summary, a bias in an underwriting model: each carries downstream consequences. Boards should map failure modes before they're discovered by someone else.

How This Works

AI governance isn't a
standalone project.
It's part of the work.

From Michael Nichols

I approach AI governance differently than a typical specialist practice does. What follows reflects my own perspective on how this work should be structured, and why that difference matters.

A governance framework review or a regulatory readiness assessment is not something that can be done well in isolation. It requires understanding your product, your data architecture, your commercial relationships, your team, and your growth trajectory. That context doesn't come from a one-time engagement.

This is work that happens inside a deeper engagement — as part of fractional GC counsel or an ongoing advisory relationship where I'm already familiar with your company. The governance questions worth asking are the ones that emerge from knowing your business, not from a checklist applied from the outside.

AI governance is also an evolving discipline. The regulatory landscape is moving faster than any single practitioner can fully map. What I bring is senior judgment about where the real risks are, how to track what's coming, and when to bring in additional expertise, not false certainty about a landscape that is still being written.

01
AI awareness as part of full-stack counsel

Governance, regulatory exposure, and product liability risk are woven into every area of legal work for AI companies — contracts, financing, employment, compliance. It's not a separate track.

02
Ongoing vigilance, not one-time assessment

The EU AI Act, RAISE Act, TFAIA, and FTC enforcement theory are all moving simultaneously. This requires a counsel relationship where staying current is part of the job — not a quarterly deliverable.

03
Knowing when to bring in specialists

Good general counsel knows the perimeter of their expertise. When specialized technical, regulatory, or jurisdictional work is needed, the value is recognizing it early and coordinating the right resources — not doing everything in-house.

04
Building before it's urgent

The companies that navigate regulatory scrutiny well didn't start preparing when the inquiry arrived. They built the documentation, policies, and incident protocols as a natural part of operating — because their counsel made it part of the conversation early.

Start the Conversation

Working with companies
building in AI.

If you're a funded company in health tech, fintech, or AI and you need senior counsel who tracks this space as part of the work — let's talk about what an engagement looks like.

Schedule a Consultation

About Michael Nichols