Nippon Life Insurance sues OpenAI for invalid legal advice


A US company has sued OpenAI for allegedly letting ChatGPT act as a solicitor. This landmark lawsuit provides another example of how using AI for legal advice has become an issue that businesses, especially law firms, cannot ignore.

Nippon Life Insurance v OpenAI: what happened?

ChatGPT recommends reopening legal case

Dela Torre sued her employer, Nippon Life, in December 2022. The claim was resolved in 2024 by a settlement agreement that prevented her from taking further legal action related to it. Three years later, Dela Torre asked her lawyer about reopening the case and was advised that this would be unlikely to succeed. Unsatisfied with that answer, she turned to ChatGPT for advice.

ChatGPT determined that the lawyer’s response “invalidated Dela’s feelings, dismissed her perspective, and deflected responsibility for her dissatisfaction”. Dela Torre then dismissed her lawyer and used ChatGPT to prepare filings against Nippon Life, thereby reopening the case.

ChatGPT acted as an unlicensed solicitor

Nippon Life was faced with AI-generated filings that cost the company significant time, money, and resources to defend against.

In March 2026, Nippon Life Insurance filed a lawsuit accusing OpenAI (the company behind ChatGPT) of interference with a legally-binding contract (the settlement agreement) and the unlicensed practice of law. The company alleged that ChatGPT provided invalid legal advice, including to reopen a case that had already been dismissed with prejudice. In other words, ChatGPT acted as an unpaid and unlicensed solicitor, and OpenAI should be held accountable.

Using AI for legal advice: a real and growing risk

Although OpenAI’s usage policy prohibits using ChatGPT for “tailored advice that requires a licence, such as legal or medical advice, without appropriate involvement by a licensed professional,” we have yet to see this enforced in the UK.

AI tools like ChatGPT are increasingly being used for work that would traditionally be done by a lawyer. None of this is appropriate without a qualified lawyer involved, and for multiple reasons. Read more about why you shouldn’t use AI for legal advice in our blog.

How does this affect you?

You might be thinking: “This happened to an insurance company in America, what does it have to do with me?”

This isn’t just a US problem; it’s a universal one, and it will only grow as AI continues to advance. We’re sure it won’t be long before cases with similar characteristics start to show up in England and Wales.

In fact, in a recent case here in the UK (UK v Secretary of State for the Home Department [2026]), the court highlighted the risks of legal professionals using AI for court documents, particularly uploading confidential documents into publicly accessible AI tools such as ChatGPT. The court emphasised the importance of a qualified legal professional checking documents:

“The point is that the qualified legal professional with conduct of the matter is expected to ensure that such documents are checked, that errors are identified, and that only accurate documents are sent to the tribunal. To fail to conduct such checks is wasteful of the tribunal’s time.”

The courts are already taking a dim view of AI-produced documents being filed without appropriate checking, and they are not afraid to sanction the party in question where AI has produced inaccurate documentation.

It’s interesting that, in the US case, it is OpenAI itself that is being targeted as a defendant over the AI’s output. The UK case above highlights the risks of AI use without appropriate checks and balances in place and shows how even legal professionals are being criticised for its inappropriate use.

“AI told me to” is not a defence

“AI told me to” is not a legal defence in any court. If you use ChatGPT or a similar tool to draft a contract for your business, the result is likely to be poorly drafted or incorrect, and any resulting loss falls on you, not the AI company.

Even when AI-generated legal advice goes wrong, there may be no one to make a claim against and no remedy available. As a business owner, you simply absorb the loss.

A qualified lawyer carries professional indemnity insurance, is subject to regulatory oversight, and owes you a duty of care. If something goes wrong, there are clear avenues for redress.

That accountability gap is precisely why professional legal advice cannot be replaced by AI, regardless of how convincing it may look.

How to protect your business

Whether or not this specific case succeeds, it has exposed the risks of relying on AI for legal matters.

Here are our top tips:

  1. Establish an AI policy that prohibits using AI tools as a substitute for legal advice on contracts, disputes, employment matters, or regulatory compliance.
  2. Train your team on where AI assistance ends and professional guidance begins.
  3. Involve a solicitor from the outset, particularly in settlement negotiations, contract drafting, or any situation involving a potential dispute.

Further information

Whether the Nippon lawsuit succeeds or not, it has already drawn a very public line between AI-generated legal content and proper legal advice.

The message is that AI is a technological marvel, in the same way as a fighter jet, but access to one doesn’t make you a qualified pilot. You still need the services of a legal professional to ensure that any AI inputs and outputs comply with the relevant law and procedural requirements.

If you have questions about AI use in your business, or want to review your current agreements and policies, we can help. Call us on 0117 325 2929 or fill out our enquiry form.
