ChatGPT is not a lawyer.

But it plays one on TV.

In the mid-’80s, Vicks launched cough-syrup commercials with a simple hook: actors who played doctors on soap operas endorsed the product. “I’m not a doctor, but I play one on TV” became an instantly quotable catchphrase.

Recently, I got an email from a candidate objecting to Oceans’ employment contract. And because I take these things seriously, I sat down to see whether there was anything we could do to improve it.

From the email’s first bullet point, it became clear what happened: the candidate had loaded the contract into ChatGPT and then sent me over the pasted output. To confirm, I tried it; ChatGPT gave me more or less the same arguments and even offered to produce a redlined version.

In some respects, I love this. Most folks don’t know how to access a good lawyer and having any legal advice can be better than none at all.

But the Vicks line worked precisely because everyone understood that listening to actors with no medical training is foolish; just because someone sounds like an expert doesn’t mean they are. The commercial trusted adults to know the difference between the performance of expertise and the real thing, and to take the medication that real doctors recommended for them.

When you give ChatGPT a legal document, by default it answers in legalese, the distinctive dialect of lawyers. And because it sounds like a lawyer (social psychologists would say the advice has face validity), it’s easy to accept it as credible.

But ChatGPT is essentially an actor: it delivers its lines in a way that signals expertise without genuinely understanding them. Of the eight points the candidate sent over, seven were clear misreadings of the contract that no actual lawyer would make; only one was a genuine disagreement in principle, and a relatively minor one at that.

This may lead the candidate to turn down a significant opportunity over illusory concerns. I’m going to reach out to clarify, but I am absolutely not a lawyer, don’t play one on TV, and won’t speak in legalese. Can being empathetic and direct beat out legalese? We’ll see.

It would be relatively easy for OpenAI to detect legal documents and either decline to engage with them or shift its tone, giving advice that sounds more like your neighbor and less like an actual lawyer.

But in the current system, they have no incentive to do that: they benefit from the inferred expertise of their presentation. It makes their product look powerful and worth subscribing to.

And this is why we need to enforce the laws we have. If you lead someone to believe you’re offering qualified legal advice without being an actual lawyer, you can be both civilly and criminally liable; AI-generated advice is no different, except that it’s the company, not the AI, that is held responsible. I’m not a particularly litigious person, but without real economic and legal consequences, this won’t change.

So let’s see the lawsuits. Maybe ChatGPT will choose to self-represent in court.
