W47Y23 Weekly Review: Nvidia, Trump, and OpenAI 

+++ German Valeo GmbH Sues Nvidia Over Employee's Stolen Trade Secrets
+++ Trump's Truth Social Sues 20 News Organizations For Defamation, Seeking $1.5 Billion


German Valeo GmbH Sues Nvidia Over Employee's Stolen Trade Secrets
Nvidia is facing a lawsuit from car technology firm Valeo, which alleges that a senior Nvidia staff member, Mohammad Moniruzzaman, inadvertently revealed stolen tech secrets during a video call. Moniruzzaman, who previously worked for Valeo, allegedly displayed a file containing source code for Valeo's parking and driving assistance software. Valeo claims he took gigabytes of data when he left the company to join Nvidia. German authorities convicted Moniruzzaman in September 2023, leading to Valeo's lawsuit against Nvidia for benefiting from "stolen trade secrets." The lawsuit seeks significant damages and an injunction against Nvidia's use of Valeo's code. Nvidia says it was unaware of the stolen data until May 2022 and asserts it took prompt steps to protect Valeo's rights.

Read the full report on BBC.
Read the full report on Fortune.
Read the case Valeo Schalter und Sensoren GmbH v. Nvidia Corporation, U.S. District Court for the Northern District of California, No. 5:23-cv-05721-VKD 


Former President Donald Trump's Truth Social Sues 20 News Organizations For Defamation, Seeking $1.5 Billion
Trump Media & Technology Group, the company behind Truth Social, is seeking $1.5 billion in damages from 20 news organizations for erroneously reporting that the social media platform had lost $73 million. The company filed a lawsuit in a Florida state court, claiming the reported figure was an "utter fabrication" and accusing the outlets, including the Guardian and Reuters, of a "deliberate, malicious, and coordinated attack" against Truth Social. Several of the outlets, Reuters among them, later corrected their stories, attributing the mistake to miscounting a $50.5 million profit as a loss. Trump Media & Technology Group nevertheless alleges a coordinated campaign, while Reuters maintains its commitment to fair and accurate reporting. Truth Social, launched last year as Donald Trump's alternative social network, gained prominence after his suspension from Twitter and Facebook.

Read the full report on Bloomberg Law News.
Read the case Trump Media and Technology Group v. Guardian, Hollywood Reporter, Reuters, Rolling Stone, Forbes, Axios, G/O Media, CNBC, et alia, 12th Judicial Circuit Court in and for Sarasota County, Filing # 186553510 E-Filed 11/20/2023 07:42:40 PM

More Headlines

  • AI: Are insurers using tech to automate claims denials? (via ModernHealthcare)
  • Antitrust: Amazon.com sued by tech startup after web-traffic deal sputters (via Reuters)
  • Copyright: New Lawsuit Ropes Microsoft Into OpenAI’s Legal Battle With Authors Over Training Data (via The Hollywood Reporter)
  • Free Speech: Are social media giants silencing online content? (via Guardian)
  • Privacy: Merck Unit Faces Online Privacy Case That Tests New Legal Theory (via Bloomberg Law News)
  • Social Media: I was addicted to social media – now I’m suing Big Tech (via BBC)

This post originated from my publication Codifying Chaos.

On The Importance Of Teaching Dissent To Legal Large Language Models

Machine learning from legal precedent requires curating a dataset of court decisions, judicial analysis, and legal briefs in a particular field, which is then used to train an algorithm to apply the essence of those decisions to real-world scenarios. This process must include dissenting opinions, minority views, and asymmetrical rulings to achieve near-human legal reasoning and just outcomes.

tl;dr
The use of machine learning continues to extend the capabilities of AI systems in the legal field. Training data is the cornerstone for producing usable machine learning results. Unfortunately, when it comes to judicial decisions, the AI is at times fed only the majority opinions and not given the dissenting views (or is ill-prepared to handle both). We should neither want nor tolerate AI legal reasoning that is shaped so one-sidedly.

Make sure to read the full paper titled Significance Of Dissenting Court Opinions For AI Machine Learning In The Law by Dr. Lance B. Eliot at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3998250


When AI researchers and developers conceive of legal large language models that are expected to produce legal outcomes, it is crucial to include conflicting data and dissenting opinions. The author argues for a balanced, comprehensive training dataset inclusive of judicial majority and minority views. Current court opinions tend to highlight the outcome, or the views of the majority, and neglect close examination of dissenting opinions and minority views. This can result in unjust outcomes, missed legal nuances, or bland judicial arguments. His main argument centers on a simple observation: justice is fundamentally born through a process of cognitive complexity. In other words, a straightforward ruling with unanimous views contributes little to learning or evolving an area of the law, whereas considering trade-offs and carefully weighing different ideas and values against each other does.
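To make the idea of a balanced corpus concrete, here is a minimal sketch, in Python, of what a single curated training record could look like when dissents are kept alongside the controlling opinion of the same case. The schema, field names, and labels are my own illustrative assumptions, not something taken from the paper or from any existing dataset.

from dataclasses import dataclass

# Hypothetical schema for one curated training example; field names
# and label values are illustrative assumptions, not from the paper.
@dataclass
class OpinionRecord:
    case_id: str       # docket number or citation
    court: str         # issuing court
    opinion_type: str  # "majority", "concurrence", or "dissent"
    text: str          # full opinion text
    outcome: str       # e.g. "affirmed", "reversed", "remanded"

# A balanced corpus keeps the dissent next to the controlling opinion
# of the same case, so the model sees both lines of reasoning.
corpus = [
    OpinionRecord("22-0001", "Court of Appeals", "majority",
                  "…majority reasoning…", "affirmed"),
    OpinionRecord("22-0001", "Court of Appeals", "dissent",
                  "…dissenting reasoning…", "affirmed"),
]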

This open-source legal large language model with an integrated external knowledge base exemplifies two key considerations representative of the status quo: (1) training data is compiled by crawling and scraping legally relevant information and key judicial text that spans more than one specialty area and is not limited to supporting views; (2) because the training data is compiled at scale and holistically, it can be argued that majority views are likely to be overrepresented in the model input, since minority views often receive less attention, discussion, or reflection beyond an initial post-decision period. In addition, there might be complex circumstances in which a judge is split on a specific legal outcome. These often quiet moments of legal reasoning rooted in cognitive complexity hardly ever make it into a written majority or minority opinion, and are therefore unlikely to be used for training purposes.
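If dissents really are underrepresented in a corpus scraped at scale, one simple mitigation is to upsample or reweight them before training. The snippet below is a rough sketch of that idea, using plain dictionaries in place of the hypothetical record format above; the target share and the sampling strategy are illustrative choices on my part, not recommendations from the paper.

import random
from collections import Counter

def rebalance(records, target_share=0.3, seed=0):
    # Upsample dissenting opinions until they make up roughly
    # target_share of the corpus. Purely illustrative: real curation
    # would also deduplicate and stratify by court and area of law.
    random.seed(seed)
    dissents = [r for r in records if r["opinion_type"] == "dissent"]
    others = [r for r in records if r["opinion_type"] != "dissent"]
    if not dissents:
        return records
    needed = int(target_share * len(others) / (1 - target_share))
    upsampled = dissents + [random.choice(dissents)
                            for _ in range(max(0, needed - len(dissents)))]
    balanced = others + upsampled
    random.shuffle(balanced)
    return balanced

sample = [{"opinion_type": "majority"}] * 7 + [{"opinion_type": "dissent"}]
print(Counter(r["opinion_type"] for r in rebalance(sample)))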

Another interesting consideration is access to dissenting opinions and minority views. While this type of judicial writing may be publicly available at the highest court levels, a dissenting view in a less prominent case at a lower level might not be as accessible. Gatekeepers such as Westlaw restrict access to these documents and their interpretations. Arguments for a fair learning exemption for large language models are arising in various corners of the legal profession and are currently being litigated by the trailblazers of the AI boom.

A recent and insightful essay written by Seán Fobbes cautions against excitement when it comes to legal large language models and their capability to produce legally and ethically accurate, as well as just, outcomes. From my cursory review, getting there will require much more fine-tuning and quality review than merely ensuring dissenting opinions and minority views are incorporated. Food for thought that I shall devour in a follow-up post.

W46Y23 Weekly Review: BardAI, DoNotPay, and Legal Practice

+++ Google Sues Vietnamese Scammers Over BardAI Malware 
+++ AI-Powered Legal Service “DoNotPay” Wins Lawsuit Over Practice Without A License


Google Sues Vietnamese Scammers Over BardAI Malware 
Google is taking legal action against two groups of scammers. The first group spread malware by misleading users interested in Google's generative AI tools. The second group abused the Digital Millennium Copyright Act (DMCA) to harm business competitors with fraudulent copyright notices. The lawsuits aim to stop these activities, set legal precedents, and raise awareness of the harm fraudulent takedowns cause small businesses. Google emphasizes its commitment to protecting users and promoting a safer internet through legal action against scams and fraud.

Read the full press release on Google.
Read the case Google v. Does 1-3, U.S. District Court for the Northern District of California, No. 5:23-cv-05823-VKD


AI-Powered Legal Service “DoNotPay” Wins Lawsuit Over Practice Without A License
A federal judge has dismissed a lawsuit by an Illinois law firm, MillerKing, against the artificial intelligence company DoNotPay. The law firm accused DoNotPay of engaging in the unauthorized practice of law, but the judge ruled that MillerKing’s claims did not establish legal standing for the lawsuit. MillerKing had alleged that DoNotPay, which uses AI to assist consumers in legal matters, advertised and provided legal services without a proper license. The judge stated that MillerKing failed to show how it was harmed and allowed the law firm to amend its complaint. DoNotPay’s CEO expressed satisfaction with the decision, emphasizing the absence of concrete harm. Another lawsuit against DoNotPay, alleging unauthorized practice of law, is still pending.

Read the full report on Reuters.
Read the case MillerKing LLC v. DoNotPay Inc, U.S. District Court for the Southern District of Illinois, No. 3:23-CV-00863

More Headlines

  • AI: AI chatbot can pass national lawyer ethics exam, study finds (via Reuters)
  • AI: A lawyer fired after citing ChatGPT-generated fake cases is sticking with AI tools: ‘There’s no point in being a naysayer’ (via Fortune)
  • AI: ChatGPT Parent Company Fires CEO Sam Altman (via THR)
  • AI: Lawyers learn too late that chatbots aren’t built to be accurate; how are judges and bars responding? (via ABA Journal)
  • Copyright: AI Legal Protections May Not Save You From Getting Sued (via Bloomberg)
  • Privacy: T-Mobile sued after employee stole nude images from customer phone during trade-in (via CNBC)

This post originated from my publication Codifying Chaos.

Forecasting Legal Outcomes With Generative AI

Imagine a futuristic society where lawsuits are adjudicated within minutes. Accurately predicting the outcome of a legal action would change the way we adhere to rules and regulations.

tl;dr
Lawyers are steeped in making predictions. A closely studied area of the law, known as Legal Judgment Prediction (LJP), entails using computer models to aid in making legal-oriented predictions. These capabilities will be fueled and amplified by the advent of AI in the law.

Make sure to read the full paper titled Legal Judgment Predictions and AI by Dr. Lance B. Eliot at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3954615


We are in Mega-City One in the year 2099 AD. The judiciary and law enforcement are one unit. Legal violations, disputes, and infringements of social norms are handled by street judges with a mandate to summarily arrest, convict, sentence, and execute criminals. Of course, this is the premise of Judge Dredd, but the technology of the year 2023 AD is already on its way to making this dystopian vision a reality.

Forecasting the legal outcome of a proceeding is a matter of data analytics, access to information, and the absence of process-disrupting events. In our current time, this is a job for counsel and legal professionals. As officers of the court, lawyers are experts at reading a situation and introducing some predictability to it by adopting a clear legal strategy. Ambiguity and human error, however, make this process hardly repeatable – let alone reliable for future legal action.

Recent developments in the field of computer science, specifically around large language models (LLMs), natural language processing (NLP), retrieval-augmented generation (RAG), and reinforcement learning from human feedback (RLHF), have introduced technical capabilities that increase the quality of forecasting legal outcomes. These developments can be summarized as generative artificial intelligence (genAI). Cross-functional efforts between computer science and legal academia have coined this area of study "Legal Judgment Prediction" (LJP).

The litigation analytics platform "Pre/Dicta" exemplifies the progress of LJP by achieving a prediction accuracy of about 86%. In other words, the platform can forecast the decision of a judge in nearly 9 out of 10 cases. As impressive as this result is, the author points out that sentient behavior remains a far-fetched prospect for current technologies, which are largely statistical models with access to vast amounts of data. The quality of the data, the methods used to train the model, and the application determine the accuracy and quality of the prediction. Moreover, the author makes a case for incorporating forecasting milestones and focusing on those, rather than attempting to predict the final result of a judicial proceeding, which depends on factors that are challenging to quantify in statistical models. For example, research from 2011 established the "Hungry Judge Effect", which in essence found that a judge's ruling tends to be more conservative if it is made before the judge has had a meal (on an empty stomach near the end of a court session), whereas the same case would tend to see a more favorable verdict if the decision were made after the judge's hunger had been satisfied and mental fatigue mitigated.
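For readers who want a feel for what a bare-bones LJP model looks like, here is a minimal sketch that treats prediction as plain text classification over case facts. The dataset file and its columns are hypothetical placeholders, and none of this reflects how Pre/Dicta or any other commercial platform actually works; real systems add judge-level features, calibration, and far richer data.

# Minimal, illustrative legal-judgment-prediction baseline:
# TF-IDF features over case facts, logistic regression over outcomes.
# "cases.csv" and its columns are hypothetical placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

df = pd.read_csv("cases.csv")          # columns: "facts", "outcome"
X_train, X_test, y_train, y_test = train_test_split(
    df["facts"], df["outcome"], test_size=0.2, random_state=42)

model = make_pipeline(
    TfidfVectorizer(max_features=50_000, ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

# Headline accuracy is only one milestone; motions, appeals, and
# sentencing would each need their own labels and metrics.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))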

Other factors that pose pitfalls for achieving near-100% prediction accuracy include semantic alignment on what counts as a "legal outcome". In other words, what specifically is being forecasted? The verdict of the district judge? The verdict of a district judge that will be challenged on appeal? Or perhaps the verdict and the sentencing procedure? Or something adjacent to the actual court proceedings altogether? It might seem pedantic, but clarity around "what success looks like" is paramount when it comes to legal forecasting.
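One lightweight way to force that clarity is to make the prediction target an explicit, enumerated choice rather than an implicit assumption. The target names and column names below are purely illustrative, not drawn from any real system.

from enum import Enum

# Each target implies different labels, different training data, and
# a different definition of "accuracy"; the names are hypothetical.
class PredictionTarget(Enum):
    DISTRICT_VERDICT = "outcome at the trial court"
    APPELLATE_OUTCOME = "affirmed / reversed / remanded on appeal"
    SENTENCE_RANGE = "sentencing band, given a conviction"
    SETTLEMENT_LIKELIHOOD = "probability the matter settles pre-trial"

def label_column(target: PredictionTarget) -> str:
    # Hypothetical mapping from prediction target to a dataset column.
    return {
        PredictionTarget.DISTRICT_VERDICT: "trial_outcome",
        PredictionTarget.APPELLATE_OUTCOME: "appeal_outcome",
        PredictionTarget.SENTENCE_RANGE: "sentence_band",
        PredictionTarget.SETTLEMENT_LIKELIHOOD: "settled",
    }[target]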

While Mega-City One might still be a futuristic vision, our current technology is inching closer and closer to a "Minority Report" type of scenario, where powerful technologies, sentient or not, churn through vast amounts of intelligence and behavioral data to forecast and supplement human decision making. The two real questions for us as a human collective beyond borders will be: (1) how much control are we willing to delegate to machines? and (2) how do we rectify injustices once we lose control over the judiciary?

W45Y23 Weekly Review: Fortnite, Top Gun: Maverick, Hollywood, and Amazon 

+++ Choreography Copyright Confirmed in Lawsuit Against Fortnite Maker Epic Games
+++ Movie Studio Seeks Dismissal In Copyright Lawsuit Over “Top Gun: Maverick” Movie 
+++ Hollywood Views Copyright Law Sufficient To Address AI-Infringement And Beyond
+++ Amazon Sued For Failing To Issue Refunds


Choreography Copyright Confirmed in Lawsuit Against Fortnite Maker Epic Games
Choreographer Kyle Hanagami's lawsuit against Fortnite maker Epic Games, accusing it of stealing his dance choreography for the in-game emote "It's Complicated," has been reinstated by the Ninth Circuit U.S. Court of Appeals. The lower court had dismissed the case, but the appeals court disagreed, finding that the choreography was a substantial component of Hanagami's work. The case now returns to the lower court. This mirrors previous lawsuits against Epic for allegedly stealing dance moves, which were dropped in 2019 due to a Supreme Court ruling.

Read the full report on TechCrunch.
Read the case Kyle Hanagami v. EPIC Games Inc, U.S. Court of Appeals for the Ninth Circuit, No. 22-55890


Movie Studio Seeks Dismissal In Copyright Lawsuit Over “Top Gun: Maverick” Movie 
Paramount Pictures has asked a California federal court to dismiss a lawsuit alleging that "Top Gun: Maverick" violated the copyright of reporter Ehud Yonay's heirs, who claim the film is a derivative work of Yonay's article "Top Guns." Paramount argues that the film and the article are dissimilar, sharing only the Top Gun subject matter, to which the heirs have no special right. The Yonays counter that "Maverick" infringes on their copyright, claiming Paramount ignores the significant similarities and the creative choices made in crafting the original article. The legal dispute centers on the exclusive movie rights Paramount obtained for Yonay's article.

Read the full report on Reuters.
Read the case Shosh and Yuval Yonay v. Paramount Pictures et al, U.S. District Court for the Central District of California, No. 2:22-cv-03846-PA-GJS


Hollywood Views Copyright Law Sufficient To Address AI-Infringement And Beyond
Hollywood, typically an advocate for expanding copyright laws, surprisingly agrees with the view that existing copyright doctrines are sufficient to address AI-related questions. The Motion Picture Association (MPA) suggests that current laws provide the necessary tools for handling AI issues within copyright frameworks. This stance may be influenced by ongoing strikes in the entertainment industry, where AI plays a central role. The MPA is cautious about potential limitations on AI use if copyright laws are revisited. However, the article criticizes the MPA’s sweeping generalizations on fair use and emphasizes the need for nuanced considerations. The unusual alignment of interests in the AI space is noted, with internet properties opposing copyright expansion while some in Hollywood express concerns. The article emphasizes the importance of taking principled stands for the internet, people, and innovation.

Read the full report on Techdirt.
Read the matter Artificial Intelligence and Copyright, U.S. Copyright Office, No. USCO 2023-6


Amazon Sued For Failing To Issue Refunds
Amazon is being sued in a class action lawsuit in Seattle for allegedly failing to issue refunds for returned products, violating its own policies, and engaging in a systematic scheme that deceived customers through unfair trade practices. This follows previous legal action, including an antitrust complaint against Amazon by the Federal Trade Commission. The new case, brought by Holly Jones Clark, claims widespread issues with refunds and cites instances where customers were not reimbursed after returns.

Read the full report on GeekWire.
Read the case Holly Jones Clarke v. Amazon.com Inc, U.S. District Court for the Western District of Washington at Seattle, No. 2:23-cv-01702 

More Headlines

  • AI: OpenAI To Pay Legal Fees Of Business Users Hit With Copyright Lawsuits (via Forbes)
  • Antitrust: What to know about Fortnite maker Epic Games' antitrust battle with Google, starting today (via TechCrunch)
  • Antitrust: Fight for Your Right . . . To Fight? Breaking Down the UFC’s Antitrust Lawsuit (via Romano Law)
  • Copyright: Lil Wayne, Birdman Sued Over Copyright (via Essence)
  • Finance: Bitwise co-founders face federal charges alleging $100-million fraud scheme (via Los Angeles Times)
  • Finance: EU business crowdfunding is now bound by bloc-wide regulations (via TechCrunch)
  • Privacy: Amazon Prime privacy lawsuit dismissed (via IAPP)
  • Privacy: Your car can keep collecting your data after a judge dismissed a privacy lawsuit (via The Verge)
  • Privacy: YouTube’s Ad Blocker Detection Believed to Break EU Privacy Law (via WIRED)
  • Privacy: Prince Harry Can Proceed With Privacy Lawsuit Against Daily Mail Publisher, U.K. Judge Rules (via Variety)
  • Privacy: FTC brings updated complaint against data broker Kochava (via IAPP)
  • Social Media: Lawsuit claims Mark Zuckerberg ignored warnings about Instagram, mental health (via ABC)
  • Social Media: Video chat site Omegle shuts down after 14 years — and an abuse victim’s lawsuit (via NPR)

In Other News (or publications you should read)

This post originated from my publication Codifying Chaos.