ChatGPT Sued for "Unauthorized Practice of Law" for Providing Legal Services

Post time:04-13 2026 Source:CHINA INTELLECTUAL PROPERTY LAWYERS NETWORK
tags: ChatGPT AI

On March 4, 2026, Nippon Life Insurance Company of America (hereinafter "NIPPON") filed a lawsuit against OpenAI Foundation and OpenAI Group PBC in the U.S. District Court for the Northern District of Illinois. This case is considered the world's first unauthorized practice of law lawsuit against an AI company arising from a chatbot providing legal advice and drafting legal documents.

The case stems from a dispute between NIPPON and Dela Torre, a beneficiary of its long-term disability insurance. In 2022, Dela Torre sued NIPPON, and the parties reached a settlement in January 2024: Dela Torre permanently released NIPPON from all claims, and the case was dismissed with prejudice.

In January 2025, Dela Torre became dissatisfied with the settlement agreement. His former attorney responded that the agreement was already in effect and binding. Dela Torre then uploaded the attorney's email to ChatGPT and asked whether he was experiencing "gaslighting." ChatGPT analyzed the exchange and concluded that the attorney had engaged in emotional manipulation. Influenced by this response, Dela Torre fired his attorney and relied on ChatGPT to litigate pro se.

On January 22, 2025, Dela Torre filed a motion for reconsideration drafted by ChatGPT. Thereafter, he filed a total of 44 motions and 14 requests for judicial notice, all prepared with the assistance of ChatGPT. In February 2025, the court denied the motion for reconsideration, holding that the settlement agreement was valid. During the litigation, Dela Torre cited a nonexistent case fabricated by ChatGPT; when he asked ChatGPT to verify it, ChatGPT confirmed the fictitious case and provided a detailed summary of it.

NIPPON argues that OpenAI, through ChatGPT, was aware of the settlement agreement yet intentionally generated legal arguments and drafted documents that induced Dela Torre to breach it, thereby constituting tortious interference with contract. Additionally, by providing legal analysis, legal research, and document drafting without being licensed to practice in any state, ChatGPT effectively engaged in the unauthorized practice of law.

OpenAI responded that its terms of use explicitly state that users shall not use ChatGPT for legal or medical advice unless a licensed professional is involved. Notably, on October 29, 2025, OpenAI amended ChatGPT's terms and usage policies to prohibit users from using ChatGPT to provide customized legal advice—this amendment occurred after Dela Torre's use of ChatGPT.

The core issue in this case is whether AI output constitutes "legal information" or "legal advice." In the 2025 case Walters v. OpenAI, the court held that OpenAI was not liable for defamatory content generated by ChatGPT, in part because the output was accompanied by a disclaimer. In the context of unauthorized practice of law, however, the standard may be different: here, Dela Torre received customized advice and submitted it directly to the court. Previously, a Boston lawyer was sanctioned for using ChatGPT to generate fictitious case law, with the judge stating that "lawyers cannot blame technology for their errors." But the liability of AI companies to non-lawyer users remains unclear.

The case is currently in its early stages. China Intellectual Property Lawyers Network will continue to follow subsequent developments.
