Building with AI, Not Learning to Code
The first thing you realise about these models is that they don't come with instructions. Those of us curious about what they can and can't do have to learn through trial and error. It's strangely the inverse of the common fear about AI replacing human thought: in the AI sphere itself, you have to do the thinking, taking the journey into the unknown step by step.
That journey led me to vibe coding: AI-assisted development where you describe what you want in plain English, and the model proposes the implementation. You scope, steer, test, and iterate. I've built hundreds of tools this way—professional litigation software, home automation, games for my kids—without having learned to write a single line of code.
Many legal AI products are essentially UI layers over the same models you already have access to. If you're paying for a wrapper, you're paying for someone else's prompts and interface. The same capabilities are available to you directly—with more control and transparency about what's actually happening.
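To see how thin that layer is: the sketch below calls one of those same models directly, and a model will happily write it for you. It assumes the OpenAI Python SDK with an API key in the environment; the model name and prompt wording are placeholders, not recommendations.

```python
# A direct call to the same kind of model a wrapper product sits on top of.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable;
# the model name and prompt text are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a litigation assistant. Rely only on the material provided."},
        {"role": "user", "content": "List the limitation issues raised by the chronology below.\n\n<chronology pasted here>"},
    ],
)

print(response.choices[0].message.content)
```

Most of what a wrapper adds lives in those message strings and the interface painted around them.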
The Process
Start by scoping your idea with whichever AI provider you normally use: ChatGPT, Claude, Gemini. Describe what you want in plain English and ask what would be involved. If your idea is feasible (it almost always is), move to an IDE like Cursor, where you can use an AI chat interface to build software from the ground up. Your job becomes reading the conversation and steering toward your goal, not writing syntax.
The real skill is document ingestion and context engineering—structuring what the model sees so it can reason effectively. This matters more than clever prompting. A well-architected context window beats a clever prompt every time.
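What that looks like in practice: a short sketch of context assembly, in Python only because that is what a model will typically hand you. The document labels, ordering, and constraint wording are illustrative assumptions, not a fixed recipe.

```python
# A minimal sketch of context assembly: label every source, state the
# constraints once, put the task last. Names and wording are illustrative.
def build_context(task: str, documents: dict[str, str], constraints: list[str]) -> str:
    parts = []
    # Sources first, each clearly delimited and named so the model can cite them.
    for name, text in documents.items():
        parts.append(f"<document name='{name}'>\n{text}\n</document>")
    # Constraints stated explicitly, not implied.
    parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    # The task itself comes last, after everything it depends on.
    parts.append(f"Task: {task}")
    return "\n\n".join(parts)

prompt = build_context(
    task="Summarise the limitation defence and list every document you relied on.",
    documents={"chronology.txt": "...", "defence.txt": "..."},
    constraints=[
        "Cite only the documents supplied above.",
        "Flag any gap in the evidence rather than filling it.",
    ],
)
```

The point is the shape, not the syntax: sources clearly labelled and separated, constraints stated once, the task last, so nothing the model needs is buried or implied.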
The Discipline Problem
AI can break things faster than you can fix them. Without human oversight, models rapidly optimise the wrong thing. I've watched AI make swift, impressive progress on a project—rewriting sections, adding features, refining structure—while gradually losing sight of why it was doing any of it. The end result looks smart, even polished, but it's fundamentally broken.
In litigation, this manifests as elegant submissions that subtly drop your strongest point, briefs that confidently misstate the law, or analysis so elaborate it misses the crucial fact. Bad code usually fails visibly. Bad legal output can be fluent, plausible, and wrong.
The models simulate discipline without enforcing it. I've tested every major ChatGPT model over the past two years, alongside Claude, Gemini, and dozens of open-source alternatives. They follow custom protocols until they don't, then pretend they still do. Protocols can be taught and remembered. They can be performed. But none of this is verified against the model's own output. The model will quote your rules, apologise for breaches, simulate audit and recovery—then bypass everything on the next failure. This is performance, not enforcement.
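The checks that count are the ones that run outside the model. A minimal sketch of that kind of enforcement, with the required-points list and the exhibit-naming convention as illustrative assumptions:

```python
import re

def verify_draft(draft: str, required_points: list[str], allowed_exhibits: list[str]) -> list[str]:
    """Check a model's draft against rules it cannot be trusted to self-audit.
    Returns a list of failures; an empty list means these checks passed."""
    failures = []
    # Every point you decided was essential must actually appear in the draft.
    # (A crude substring check; the point is that the check runs outside the model.)
    for point in required_points:
        if point.lower() not in draft.lower():
            failures.append(f"missing required point: {point}")
    # Every exhibit the draft cites must be one you actually supplied.
    for ref in set(re.findall(r"Exhibit [A-Z]\d*", draft)):
        if ref not in allowed_exhibits:
            failures.append(f"cites an exhibit that was never provided: {ref}")
    return failures
```

If the check returns failures, you fix the context or the draft yourself; you don't ask the model whether it followed the rules.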
The trick is to use AI like any other power tool: carefully, iteratively, always watching what it's actually doing. Make small changes. Step back. Reassess. Don't get pulled into the rabbit hole. Complexity feels impressive, but clarity wins cases.