Writing on AI governance, health tech regulation, and startup legal strategy — for founders, executives, and in-house teams navigating fast-moving legal landscapes without slowing down.
Articles and posts published here and on LinkedIn reflect the author's analysis of legal and regulatory developments. They are for informational purposes only, do not constitute legal advice, and do not create an attorney-client relationship. Readers should not act on this content without seeking counsel from a qualified attorney familiar with their specific circumstances.
The most consequential AI healthcare question isn't accuracy. It's when using AI becomes the standard of care — and what that means for companies building before the standard is formally declared.
California's Delete Act gives residents unprecedented control over data broker records — and signals how individual data rights may evolve nationally.
Snap and TikTok settled on the eve of trial over claims that app developers deliberately engineer compulsive behavior. The questions now in play: is engagement architecture a product defect, and will boards start treating it as a governance risk?
While Davos debates frameworks, in-house legal teams face a three-way tug-of-war between federal directives, state innovation, and international standards. AI governance isn't an IT issue — it belongs in the boardroom.
AI no longer shows up as a feature — it's integrated infrastructure. When AI shifts from innovation to infrastructure, it gets held to infrastructure standards. Three questions every company should be able to answer now.
As AI tools move deeper into legal and compliance workflows, the risk profile is shifting. Hallucinations are well understood. The harder problem: outputs that sound coherent but shift in meaning under scrutiny or in discovery.
The foundational questions of AI copyright — training data, output ownership, liability — remain unsettled. Companies building AI products are making consequential decisions without the benefit of clear law.
The FTC vacated its own consent order against Rytr — one of the first instances of an enforcement order being reversed based on shifting AI policy. Mere capability is no longer misconduct. A new line of defense is emerging.
New York's RAISE Act makes it the second state, after California, to adopt a comprehensive AI safety framework. Together, New York and California are setting the benchmarks for frontier AI governance without waiting for federal policy.
Reach out directly. Happy to discuss how any of these issues apply to your company.
Get in Touch