I help legal teams move from manual contract review to AI-assisted review they can trust, at scale. 40+ clause detection models. 50,000 contracts processed. 95%+ accuracy, every time.
"Most people doing this work are either lawyers who don't trust the model or engineers who've never read a contract. I've done both — and that changes everything about how I build."
Most AI implementations in legal fail for one of two reasons: engineers who don't understand contracts, or lawyers who don't trust AI outputs. I've spent years on both sides of that problem.
With 6 years as a certified paralegal and 1+ year building enterprise contract intelligence systems, I translate clause-level legal risk into model requirements — then build, evaluate, and deploy those models to a 95%+ production standard.
I've worked across financial services, energy, and law firm portfolios. I know how clause language varies by industry, how to design extraction pipelines that hold up under attorney scrutiny, and how to present model performance to stakeholders who have never seen a precision-recall curve.
That combination is rare. It's why teams bring me in when the stakes are high.
Clause detection modeling, extraction pipeline design, and structured dataset creation from unstructured legal language — built to hold up in production.
End-to-end delivery with governance structures attorneys can defend: confidence thresholds, escalation paths, human-in-the-loop workflows, and audit trail requirements.
I present AI performance findings to attorneys, legal ops directors, and executives — bridging the communication gap most technical teams can't close.
A financial services client needed to extract and analyze key clause provisions across a massive contract portfolio. Manual review at that volume was cost-prohibitive and slow.
Designed the full clause taxonomy and extraction architecture. Built and evaluated detection models for indemnification, limitation of liability, termination, assignment, and payment obligations using prompt engineering on Relativity's AI platform.
Ran precision, recall, and F1 evaluation at every iteration. Built reviewer playbooks and labeling guidelines so attorneys could audit and trust model outputs before any result entered production.
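The per-clause evaluation loop described above can be sketched in a few lines. This is a minimal illustration, not code from any client system; the function name and the contract IDs are hypothetical, and it treats each clause type as a binary detection task over a set of contracts.

```python
def clause_metrics(gold, predicted):
    """Precision, recall, and F1 for one clause type, given sets of
    contract IDs where the clause was labeled present (gold) versus
    flagged by the model (predicted)."""
    tp = len(gold & predicted)  # true positives: flagged and labeled
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical run: indemnification detection across six contracts.
gold = {"c1", "c2", "c3", "c4"}       # attorney-labeled positives
predicted = {"c1", "c2", "c3", "c5"}  # model-flagged positives
print(clause_metrics(gold, predicted))  # precision, recall, F1 all 0.75
```

Tracking these three numbers per clause type, per iteration, is what makes "the model improved" a defensible claim rather than a gut feeling.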
Rather than starting from zero on each engagement, I developed reusable clause libraries, prompt templates, and reviewer playbooks that could be adapted across financial services, energy, and law firm clients — compressing onboarding time and raising the quality floor on every new matter.
Before building AI models, I was the person doing the work they would eventually replace — reviewing 200+ municipal contracts annually across procurement, public works, and intergovernmental agreements. That hands-on clause analysis became the foundation of how I design detection models today.
The 95%+ standard isn't there because a client asked for it; it's there because anything less isn't defensible in a legal context. If a model can't meet that bar, I iterate on prompt design and labeling guidelines until it does. Attorney trust is too expensive to lose on a bad output.
AI in legal doesn't replace attorney judgment — it has to earn it. Every model I deploy includes confidence thresholds, escalation paths, and audit trail design. Legal ops teams need to be able to explain what the AI did and why.
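The threshold-and-escalation pattern is simple to state in code. This is an illustrative sketch only: the function name, the 0.9 threshold, and the audit fields are assumptions for the example, not a production design.

```python
def route_extraction(clause_type, confidence, threshold=0.9):
    """Route one model extraction by confidence score.
    At or above the threshold it is accepted (still logged for audit);
    below it, it escalates to attorney review."""
    decision = "auto_accept" if confidence >= threshold else "attorney_review"
    # Every decision is logged, so legal ops can explain
    # what the AI did and why, after the fact.
    audit_entry = {
        "clause_type": clause_type,
        "confidence": round(confidence, 3),
        "threshold": threshold,
        "decision": decision,
    }
    return decision, audit_entry

decision, entry = route_extraction("limitation_of_liability", 0.72)
print(decision)  # below threshold, so this escalates to a human
```

The point isn't the three lines of logic; it's that the threshold, the escalation path, and the audit record are explicit artifacts attorneys can inspect and sign off on.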
Gut-checking outputs isn't evaluation. I run rigorous statistical validation, track error patterns across clause types, and document what breaks and why. That's what separates a model that works in a demo from one that holds up on 50,000 contracts.
When attorneys ask "why did the model flag this?" I can answer in legal terms. When engineers ask "what should the model extract?" I can give them a structured requirement. That's where most AI legal implementations break down.
Indemnification language in a financial services agreement reads nothing like indemnification in a municipal contract. My 6 years of hands-on legal review means I build models that handle real-world variation — not just the clean examples that make demos look easy.
Not every AI use case in legal is worth building yet. Part of my consulting work is helping clients figure out where AI actually saves time versus where the error rate makes it a liability. Sometimes saying "not yet" is the highest-value thing I can offer.
If you're scaling AI-assisted contract review and need someone who understands both the legal risk and the technical execution, let's talk.