W39Y23 Weekly Review: FTC v. Amazon, Copyright Limitations, and H&R Block Data Racketeering

+++ FTC Sues Amazon Over Monopoly 
+++ SCOTUS To Hear Copyright Statute Of Limitations Case Against Rapper Flo Rida
+++ H&R Block, Google, Meta Hit With RICO Class-Action Lawsuit Over Illegal Scraping Of Taxpayer Data

FTC Sues Amazon Over Monopoly 
The Federal Trade Commission and 17 state attorneys general have filed a lawsuit against Amazon, accusing the e-commerce behemoth of illegally using unfair tactics to maintain its monopoly. The complaint alleges that Amazon’s conduct stops rivals and sellers from lowering prices, degrades product quality for shoppers, overcharges sellers, stifles innovation, and prevents fair competition. These allegedly exclusionary practices affect billions of dollars in retail sales, a vast range of products, and millions of shoppers. By seeking a permanent injunction, the lawsuit aims to hold Amazon accountable for these monopolistic practices and restore fair competition.

Read the full press release by the FTC.
Read the full report in the Washington Post.
Read the full academic paper by Lina M. Khan titled “Amazon’s Antitrust Paradox”.
Read the case FTC et al v. Amazon, U.S. District Court, Western District of Washington, No. 2:23-cv-01495. 

SCOTUS To Hear Copyright Statute Of Limitations Case Against Rapper Flo Rida
The U.S. Supreme Court has agreed to clarify the time period during which plaintiffs can seek damages in copyright claims. The case involves Miami music producer Sherman Nealy, who sued Warner Music’s Atlantic Records label over the use of a 1980s song in a 2008 Flo Rida track. Nealy argues that his record label owns the rights to the song and that the licenses given to the defendants were invalid because they were granted without his permission while he was incarcerated. SCOTUS will resolve the conflicting rulings of lower courts on the time limit for seeking damages in copyright cases. The case is set to be heard in the upcoming court term.

Read the full report on The Hollywood Reporter.
Read the full docket Warner Chappell Music v. Sherman Nealy, U.S. Supreme Court, No. 22-1078.

H&R Block, Google, Meta Hit With RICO Class-Action Lawsuit Over Illegal Scraping Of Taxpayer Data
Trial lawyer R. Brent Wisner is suing H&R Block, alleging the tax firm collaborated with Meta and Google to embed “spyware” on its website and profit from scraped taxpayer data. The class action is filed under the Racketeer Influenced and Corrupt Organizations Act (RICO), claiming a pattern of racketeering akin to organized crime. It argues that these companies failed to inform consumers about data sharing and engaged in deceptive practices. A congressional report revealed that H&R Block shared data through tracking pixels, potentially violating data privacy laws. The lawsuit cites tax code limitations on data use, and the FTC recently warned H&R Block about using data without consent.

Read the full report on Gizmodo.
Read the case Justin Hunt v. Meta Platforms, Google, H&R Block, U.S. District Court for the Northern District of California, No. 3:23-cv-04953.

More Headlines

  • Digital Services Act: “EU Court Says Amazon Is Not a ‘Very Large Online Platform,’ for Now” (by Gizmodo)
  • Digital Services Act: “Elon Musk’s X headed for ‘rule of law’ clash with EU, warns Twitter’s former head of trust & safety” (by TechCrunch)
  • Free Speech: “SCOTUS to decide if Florida and Texas social media laws violate 1st Amendment” (by Ars Technica)
  • Free Speech: “New York’s Hate Speech Law Violates the First Amendment” (by Cato)
  • Copyright Law: “Jay-Z, Timbaland, Ginuwine Win Years-Long Copyright Infringement Suit Over ‘Paper Chase’ and ‘Toe 2 Toe’” (by American Songwriter)
  • Copyright Law: “Ed Sheeran Wins ‘Thinking Out Loud’ Copyright Trial” (by Rolling Stone)
  • Copyright Law: “Yung Gravy and Rick Astley Settle Vocal Impersonation Lawsuit” (by Billboard)
  • Data Privacy: “IBM and Johnson & Johnson Health Care Systems Sued Over August 2023 Data Breach” (by The HIPAA Journal)
  • Data Privacy: “Enforcement of California’s Age-Appropriate Design Code Act Is Put on Ice — for Now” (by JD Supra)
  • Data Privacy: “Revealed: US collects more data on migrants than previously known” (by Guardian)
  • Data Privacy: “Honeywell facing multiple lawsuits over data breach” (by WCNC)
  • Data Privacy: “Donald Trump Sues Former British Spy in London Data Lawsuit” (by Bloomberg)

In Other News (or publications you should read)

This post originated from my publication Codifying Chaos.

The Problem With Too Much Privacy

The debate around data protection and privacy is often portrayed as a race towards complete secrecy. The author of this research paper argues that instead, we need to strike a balance between protection against harmful surveillance and doxing on one side and safety, health, access, and freedom of expression on the other side.

Privacy rights are fundamental rights to protect individuals against harmful surveillance and public disclosure of personal information. We rightfully fear surveillance when it is designed to use our personal information in harmful ways. Yet a default assumption that data collection is harmful is simply misguided. Moreover, privacy—and its pervasive offshoot, the NDA—has also at times evolved to shield the powerful and rich against the public’s right to know. Law and policy should focus on regulating misuse and uneven collection and data sharing rather than wholesale bans on collection. Privacy is just one of our democratic society’s many values, and prohibiting safe and equitable data collection can conflict with other equally valuable social goals. While we have always faced difficult choices between competing values—safety, health, access, freedom of expression and equality—advances in technology may also include pathways to better balance individual interests with the public good. Privileging privacy, instead of openly acknowledging the need to balance privacy with fuller and representative data collection, obscures the many ways in which data is a public good. Too much privacy—just like too little privacy—can undermine the ways we can use information for progressive change. Even now, with regard to the right to abortion, the legal debates around reproductive justice reveal privacy’s weakness. A more positive discourse about equality, health, bodily integrity, economic rights, and self-determination would move us beyond the limited and sometimes distorted debates about how technological advances threaten individual privacy rights.

Make sure to read the full paper titled The Problem With Too Much Data Privacy by Orly Lobel at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4578023

The United States has historically shied away from enacting privacy laws or recognizing individual privacy rights. Arguably, this allowed the United States to innovate ruthlessly and advance society relentlessly. The Fourth Amendment and landmark Supreme Court cases shaped contemporary privacy rights in the U.S. over the past half-century. In the aftermath of the September 11, 2001 terrorist attacks, however, privacy rights were hollowed out when H.R. 3162, aka the Patriot Act, was passed, drastically expanding the government’s surveillance authority. In 2013, whistleblower Edward Snowden released top-secret NSA documents to raise public awareness of the scope of surveillance conducted against American citizens and, by and large, people around the world. In 2016, the European Union adopted Regulation (EU) 2016/679, aka the General Data Protection Regulation (GDPR). Academic experts who participated in the formulation of the GDPR wrote that the law “is the most consequential regulatory development in information policy in a generation. The GDPR brings personal data into a complex and protective regulatory regime.” This kickstarted a mass adoption of privacy laws across U.S. states, from the California Consumer Privacy Act of 2018 (CCPA) to Virginia’s Consumer Data Protection Act of 2021 (VCDPA).

History, with all its legislative back-and-forth evolutions, illustrates the struggle around balancing data privacy with data access. Against this backdrop, the author argues that data is information and information is a public good. Too much privacy restricts, hampers, and harms access to information and therefore innovation. And, while society has always faced difficult choices between competing values, modern technology has the capability to effectively anonymize and securely process data, which can uphold individual privacy rights while supporting progressive change.   

W38Y23 Weekly Review: More OpenAI Legal Trouble, Meta Trademark, Tiger King 

+++ Authors Guild Sues OpenAI Over Systematic Copyright Infringement
+++ Metabyte Sues Meta Platforms FKA Facebook Over “Meta” Trademark
+++ Family Sues Alphabet After Man Died Following Google Maps Directions
+++ Court Dismisses Tiger King Tattoo Lawsuit Against Netflix Over Alleged Copyright Infringement 

Authors Guild Sues OpenAI Over Systematic Copyright Infringement
The Authors Guild, representing prominent authors like George R.R. Martin, Jonathan Franzen, and John Grisham, has sued OpenAI for using their novels and original work to train ChatGPT without their permission or license. The authors claim that OpenAI has created infringing works that can summarize, analyze, and generate derivative works based on their novels, competing with their original works. The lawsuit is the third one against OpenAI over this issue, following similar suits by authors Paul Tremblay and Sarah Silverman. The authors argue that OpenAI obtained the novels from illegal online libraries and that ChatGPT facilitates the creation of infringing fan fiction by businesses and users. The authors seek an injunction and damages from OpenAI. 

Read the full report on The New York Times.
Read the full report on The Hollywood Reporter.
Read the case Jonathan Franzen, John Grisham, George R.R. Martin, Jodi Picoult, George Saunders et alia v. OpenAI, U.S. District Court, Southern District of New York, No. 1:23-cv-08292. 

Metabyte Sues Meta Platforms FKA Facebook Over “Meta” Trademark
Metabyte, a California-based company that provides staffing and tech services, has filed a trademark lawsuit against Meta Platforms, the new name of Facebook. Metabyte claims that it has been using its name since 1993 and has registered trademarks for it since 2014. It argues that Meta Platforms’ name change and rebranding will confuse consumers, as both companies offer related services and cover overlapping geographic areas. Metabyte seeks to stop Meta Platforms from using the name Meta and asks for damages and profits from the alleged infringement. The lawsuit comes after the two companies failed to reach an agreement on coexisting with their respective names.

Read the full report on Reuters.
Read the case Metabyte Inc v. Meta Platforms Inc, U.S. District Court for the Northern District of California, No. 4:23-cv-04862.

Family Sues Alphabet After Man Died Following Google Maps Directions
The family of a North Carolina man who tragically drove off a collapsed bridge while following Google Maps directions is suing Google for negligence. The bridge had collapsed nine years earlier. Philip Paxson died in September 2022 after his car plunged 20 feet into a washed-out creek. His family claims Google was aware of the bridge’s condition but failed to update its navigation system. Despite numerous warnings from the public, Google allegedly did not take action to correct the route information. The lawsuit names several private property management companies as responsible for the bridge, which had not been maintained or properly barricaded for years.

Read the full report on Associated Press.
Read the full report on Ars Technica.
Read the case Paxson v. Google, Superior Court of the State of North Carolina for the County of Wake, No. 23CV026335-910.  

Court Dismisses Tiger King Tattoo Lawsuit Against Netflix Over Alleged Copyright Infringement 
A federal judge has ruled that Netflix did not infringe the copyright of a tattoo artist who claimed that the streaming service used his photo of a tiger tattoo without his permission in the documentary series “Tiger King”. The judge found that Netflix had a fair use defense because the photo was used for a transformative purpose and did not harm the market value of the original work. The judge also dismissed the artist’s claims of false designation of origin and unfair competition. 

Read the full report on Law.com.
Read the case Cramer v. Netflix et al, U.S. District Court for the Western District of Pennsylvania, No. 3:22-cv-00131-SLH.

More Headlines

  • Copyright Law: “Musicians are eyeing a legal shortcut to fight AI voice clones” (by The Verge)
  • Data Privacy: “Papa John’s Defeats Suit Over Session Replay Software on Website” (by Bloomberg Law)
  • Data Privacy: “Hunter Biden files lawsuit against IRS alleging privacy violations” (by CBS News)
  • Data Privacy: “Poland investigates OpenAI over privacy concerns” (by Reuters)
  • Legal Practice: “Break the Law or Leave No Record, California Courts Face Dilemma” (by Bloomberg Law)
  • Antitrust Law: “Microsoft, Google and Antitrust: Similar Legal Theories in a Different Era” (by New York Times)

In Other News (or publications you should read)

This post originated from my publication Codifying Chaos.

Structuring Technology Law

Techlaw, which studies how law and technology interact, needs an overarching framework that can address the common challenges posed by novel technologies. Generative artificial intelligence is one such novel technology, introducing a plethora of legal uncertainties. This chapter excerpt examines the legal response options available to lawmakers, legislators, and other legal actors facing techlaw uncertainties, and it inspires a structured approach to creating an overarching framework.

By creating new items, empowering new actors, and enabling new activities or rendering them newly easy, technological development upends legal assumptions and raises a host of questions. How do legal systems resolve them? This chapter reviews the two main approaches to resolving techlaw uncertainties. The first is looking back and using analogy to stretch existing law to new situations; the second is looking forward and crafting new laws or reassessing the regulatory regime.

Make sure to read the full paper titled Legal Responses to Techlaw Uncertainties by Rebecca Crootof and BJ Ard at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4545013

(Source: Tierney/Adobe Stock Photography via law.com)

New technologies often expand human interactions and transactions beyond previously codified regimes, creating dissonance between historic legal decision-making and the present or future adjudication of conflict. The authors argue that technology challenges law in three distinct ways: (1) application, i.e. whether and how existing laws apply; (2) normative, i.e. producing an undesired result or legal loophole; and (3) institutional, i.e. which body should regulate and oversee a specific technology. To illustrate with a simplified, practical example: with ChatGPT, OpenAI released a program that creates content based on the depth of its access to training data and its level of quality control. Does generating content from training data that originates from thousands of human creators constitute a copyright violation? Is copyright law applicable at all? If so, would a judicial order contradict the purpose of intellectual property rights? Or, to take it a step further, should property rights apply to artificial intelligence in the first place?

The authors offer a two-pronged approach to these challenges: (1) a “looking back” and (2) a “looking forward” mindset for interpreting and resolving legal uncertainties. They present these approaches as binary to emphasize the distinctions between them, but in practice they exist on a continuum. Looking back means using analogies to extend existing law to new situations; looking forward means creating new laws or reevaluating the regulatory framework. They argue that technology law needs a shared methodology and overarching framework that can address the common challenges posed by novel technologies.

Setting aside the backwards-looking approach, which is commonly taught in law school, consider future-proofing new laws. Lawmakers represent a cross-section of society with all its traditional and modern challenges. They have to balance the ease of amending a law against its scope, and for a number of reasons they often prefer stability over flexibility, and flexibility over precision. Actual decision-making comes down to passing tech-neutral or tech-specific laws. Tech-neutral laws offer a broad, adaptable set of rules applicable to various technologies, providing flexibility and reducing the need for frequent updates when new tech emerges. However, they can be vague and overly inclusive, potentially interfering with desirable behavior and complicating enforcement. Tech-specific laws, on the other hand, are typically clear in language and tailored to specific issues, making compliance easier while still promoting innovation. Yet they may become outdated and create legal loopholes or grey areas if not regularly updated, and crafting them requires technical expertise. That expertise in particular is hard to convey to an ever-aging body of political representatives and lawmakers.

Structuring technology law seems to favor a high level of flexibility and adaptability over system stability. However, the nuances and intricacies of technology and its impact on society can’t be quantified or summarized in a brief chapter. This excerpt builds upon content Crootof and Ard originally published in 2021. Read the full paper titled “Structuring Techlaw” at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3664124 to gain a full perspective on lawmakers’ legal response options to techlaw uncertainties.

W37Y23 Weekly Review: DPC v. TikTok, Chabon v. Meta, and California v. Google

+++TikTok Hit with $368 Million Fine by Irish Data Protection Commission
+++Meta Sued by Pulitzer Prize Winner Over Llama-AI Training Data
+++California Attorney General Sues Google for $93 Million Over Location History Data

TikTok Hit with $368 Million Fine by Irish Data Protection Commission
Ireland’s Data Protection Commission (DPC) fined TikTok €345 million ($368 million) for violating the GDPR in relation to children and underage users. The DPC found that TikTok’s sign-up process and family pairing feature exposed children’s data to public and parental access. The European Data Protection Board (EDPB) intervened and directed the DPC to amend its draft decision to include a new finding of infringement of the principle of fairness. TikTok disagreed with the decision and said it had made changes before the investigation. The Irish regulator is still investigating TikTok’s procedures around transferring European user data to China.

Read the full press release by the Irish Data Protection Commission.
Read the full report on Associated Press.
Read the decision in the matter of TikTok Technology Limited made pursuant to Section 111 of the Data Protection Act, 2018 and Articles 60 and 65 of the General Data Protection Regulation, DPC IN-21-9-1

Meta Sued by Pulitzer Prize Winner Over Llama-AI Training Data
A group of writers, including Pulitzer Prize winner Michael Chabon, has sued Meta Platforms, accusing the tech company of using their writings, including pirated versions, to train its Llama AI software. This lawsuit follows a similar case against OpenAI. Authors argue that their works are valuable for AI language training yet Meta has failed to seek permission or pay compensation.

Read the full report on Reuters.
Read the case Michael Chabon et alia v. Meta Platforms Inc. et alia, U.S. District Court, Northern District of California, No. 3:23-cv-04663.

California Attorney General Sues Google for $93 Million Over Location History Data
The California AG has sued Google for $93 million, accusing it of misleading users about how their location data was used and shared with third parties, in violation of the state’s laws on unfair competition and false advertising. The complaint says that Google deceived users about their location data options, such as Location History, Web & App Activity, and ad personalization. Under the proposed settlement, Google will pay $93 million to the state and agree to new restrictions on its location services and its communications about them.

Read the full report on TechCrunch.
Read the case California v. Google, Superior Court of the State of California for the County of Santa Clara, No. 23CV422424.

More Headlines

  • Copyright Law: “Trump asks court to trim ‘Electric Avenue’ copyright lawsuit” (by Reuters)
  • Copyright Law: “Japanese YouTuber convicted of copyright violation after uploading Let’s Play videos” (by The Verge)
  • Copyright Law: “Four large US publishers sue ‘shadow library’ for alleged copyright infringement” (by Guardian)
  • Antitrust Law: “In the Google antitrust trial, defaults are everything and nobody likes Bing” (by The Verge)
  • Data Privacy: “Dutch Groups Launch Major Privacy Lawsuit Against Google” (by Forbes)
  • Data Privacy: “Indiana AG Todd Rokita sues IU Health for disagreeing on patient privacy in Caitlin Bernard case” (by Indiana Capital Chronicle)
  • Data Privacy: “The Technology Facebook and Google Didn’t Dare Release” (by New York Times)
  • Data Privacy: “Clean data must be as much of a right as clean water” (by Financial Times)
  • Watch: “Senators host tech leaders for closed-door AI summit” (by MSNBC)

In Other News (or publications you should read)

This post originated from my publication Codifying Chaos.

AI in Legal Research and Decision Making

Traditionally, legal research and judicial decisions are performed by legally certified, skilled humans. Artificial intelligence is supporting and enhancing these processes by introducing text analysis tools, case comparison, document review at scale, and accurate case predictions among other things. Where are the technical and ethical boundaries of machine-enhanced judicial decision-making? And, how long until large language models interpret and explain laws and societal norms better than legally certified, skilled humans do? 


This paper examines the evolving role of Artificial Intelligence (AI) technology in the field of law, specifically focusing on legal research and decision-making. AI has emerged as a transformative tool in various industries, and the legal profession is no exception. The paper explores the potential benefits of AI technology in legal research, such as enhanced efficiency and comprehensive results. It also highlights the role of AI in document analysis, predictive analytics, and legal decision-making, emphasizing the need for human oversight. However, the paper also acknowledges the challenges and ethical considerations associated with AI implementation, including transparency, bias, and privacy concerns. By understanding these dynamics, the legal profession can leverage AI technology effectively while ensuring responsible and ethical use.

Make sure to read the full paper titled The Role of AI Technology for Legal Research and Decision Making by Md Shahin Kabir and Mohammad Nazmul Alam at https://www.researchgate.net/publication/372790308_The_Role_of_AI_Technology_for_Legal_Research_and_Decision_Making

I want to limit this post to the two most interesting facets of this paper: (1) machine learning as a means to conduct legal research and (2) expert systems to execute judicial decisions.

The first part refers to the umbrella term machine learning (ML), which in the legal profession mostly comes down to predictive or statistical analysis. In other words, ML is a method to ingest vast amounts of legal and regulatory language and to analyze, classify, and label it against a set of signals. For example, consider all the laws and court decisions concerning defamation ever handed down. Train your ML system on the statistical patterns in that corpus and deploy it against a standard intake of text to identify (legally) critical language. Of course, this is an exaggerated example, but perhaps not as far-fetched as it seems.
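To make the idea concrete, here is a minimal sketch in Python of that kind of keyword flagging. The terms, weights, and threshold below are invented for illustration; a real ML system would learn them from labeled rulings rather than hard-code them.

```python
# Toy sketch: score text for (hypothetical) legally critical language
# with a hand-weighted bag of words. The terms and weights below are
# illustrative assumptions; a trained model would learn them from
# labeled court decisions instead of hard-coding them.

CRITICAL_TERMS = {
    "defamatory": 3.0,
    "false statement": 2.5,
    "reputation": 1.5,
    "malice": 2.0,
    "published": 1.0,
}

def risk_score(text: str) -> float:
    """Sum the weights of all critical terms present in the text."""
    lowered = text.lower()
    return sum(w for term, w in CRITICAL_TERMS.items() if term in lowered)

def flag(text: str, threshold: float = 3.0) -> bool:
    """Flag text whose cumulative score meets the threshold."""
    return risk_score(text) >= threshold

sample = "The article published a false statement made with actual malice."
print(risk_score(sample))  # 5.5 (published + false statement + malice)
print(flag(sample))        # True
```

A production system would replace the hand-picked dictionary with weights estimated from thousands of labeled decisions, but the intake-and-score pipeline is structurally the same.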

The second part refers to the creation of decision support systems, which – as far as we understand the authors’ intent here – are designed to build on the aforementioned ML output, tailored to the situation at hand and, ideally, executed autonomously. Such systems help humans identify potential legal risks and shorten the time required to review an entire, complex case. If configured and deployed accurately, these decision support systems could become automated ticketing systems upholding the rule of law. That is a big if.

One of the challenges for this legal technology is algorithmic hallucination, or simply put, a rogue response. Hallucinations appear to occur without warning or noticeable correlation. They are system errors that can magnify identity or cultural biases, raising ethical questions and liability for machine mistakes. They also raise questions of accountability and the longevity of agreed-upon social norms. Will a democratic society allow its norms, judicial review, and decision-making to be delegated to algorithms?

For some reason, this paper is labeled August 2023 when in fact it was first published in 2018. I only discovered this after I started writing. ROSS Intelligence has been out of business since 2021. Their farewell post “Enough” illustrates another challenging aspect of AI, legal research, and decision-making: access.     

W36Y23 Weekly Review: X Corp. v. California, Maryland v. Instagram/TikTok, and Government Takedown Requests

+++X Corporation Challenges California Law for Transparency in Content Moderation 
+++Maryland School District sues Instagram, TikTok, YouTube and others over Mental Health
+++Appeals Court Limits Government Power to Censor Social Media Content
+++California Lawmakers Wrestle with Social Media Companies over Youth Protection Laws

X Corporation Challenges California Law for Transparency in Content Moderation 

California’s AB 587 law, which demands that social media platforms reveal how they moderate content related to hate speech, racism, extremism, disinformation, harassment, and foreign political interference, is being challenged by X, the company that runs Twitter. X says that the law infringes on its constitutional right to free speech by making it use politically charged terms and express opinions on controversial issues. The lawsuit is part of a larger conflict between California and the tech industry over privacy, consumer protection, and regulation.

Read the full report on TechCrunch.
Read the full text of Assembly Bill 587.
Read the case X Corporation v. Robert A. Bonta, Attorney General of California, U.S. District Court, Eastern District of California, No. 2:23-at-00903.

Maryland School District sues Instagram, TikTok, YouTube and others over Mental Health

A school district in Anne Arundel County, Maryland is taking legal action against major social media companies, including Meta, Google, Snapchat, YouTube, and TikTok. The district accuses these companies of causing a mental health crisis among young people by using algorithms that keep them hooked on their platforms, exposing young users to harmful content and excessive screen time. The suit demands that the platforms change their algorithms and practices to safeguard children’s well-being, and it seeks to recover the money the district has spent on addressing student mental health issues.

Read the full report on WBALTV.
Read the case Board of Education of Anne Arundel County v. Meta Platforms Inc. et alia, U.S. District Court, Maryland, No. 1:23-cv-2327.

Appeals Court Limits Government Power to Censor Social Media Content

A federal appeals court has narrowed a previous court order that limited the Biden administration’s engagement with social media companies regarding contentious content. The original order, issued by a Louisiana judge on July 4th, prevented various government agencies and officials from communicating with platforms like Facebook and X (formerly Twitter) to encourage the removal of content considered problematic by the government. The appeals court found the initial order too broad and vague, upholding only the part preventing the administration from threatening social media platforms with antitrust action or changes to liability protection for user-generated content. Some agencies were also removed from the order. The Biden administration can seek a Supreme Court review within ten days. 

Read the full report on the Associated Press.
Read the case Missouri v. Biden, U.S. District Court for the Western District of Louisiana, No. 3:22-CV-1213.

California Lawmakers Wrestle with Social Media Companies over Youth Protection Laws

A bill to make social media platforms responsible for harmful content died in a California committee. Sen. Nancy Skinner (D-Berkeley) authored SB 680, which targeted content related to eating disorders, self-harm, and drugs. Tech companies, including Meta, Snap, and TikTok, opposed the bill, saying it violated federal law and the First Amendment. Lawmakers said social media platforms could do more to prevent harm. Another bill, AB 1394, which deals with child sexual abuse material, passed to the Senate floor. It would require platforms to let California users report such material, with fines for non-compliance.

Read the full report on the Los Angeles Times.
Read the full text of Senate Bill 680.
Read the full text of Assembly Bill 1394.

More Headlines

  • Copyright Law: “Sam Smith Beats Copyright Lawsuit Over ‘Dancing With a Stranger’” (by Bloomberg Law)
  • Copyright Law: “Copyright Office Denies Registration to Award-Winning Work Made with Midjourney” (by IP Watchdog)
  • Cryptocurrency: “Who’s Afraid Of (Suing) DeFi Entities?” (by Forbes)
  • Privacy: “Meta Platforms must face medical privacy class action” (by Reuters)
  • Social Media: “Meta-Backed Diversity Program Accused of Anti-White Hiring Bias” (by Bloomberg)
  • Personal Injury: “New York man was killed ‘instantly’ by Peloton bike, his family says in lawsuit” (by CNBC)
  • Social Media: “Fired Twitter employee says he’s owed millions in lawsuit” (by SF Examiner)
  • Social Media: “Georgetown County School District joining lawsuit against Meta, TikTok, Big Tech” (by Post and Courier)
  • Defamation: “Elon Musk to sue ADL for accusing him, X of antisemitism” (by TechCrunch)

In-Depth Reads

  • Surveillance Capitalism: “A Radical Proposal for Protecting Privacy: Halt Industry’s Use of ‘Non-Content’” (via Lawfare)

In Other News (or publications you should read)

This post originated from my publication Codifying Chaos.

Machine Learning from Legal Precedent

When training a machine learning (ML) model on court decisions and judicial opinions, the outcomes of those rulings become the training data used to optimize the algorithm that determines a result. As lawyers, we take the result of these rulings as final. In some cases, however, the law requires change when rulings become antiquated or conflict with a shift in regulations. This cursory report explores the level of detail needed when training an ML model with court decisions and judicial opinions.


Much of the time, attorneys know that the law is relatively stable and predictable. This makes things easier for all concerned. At the same time, attorneys also know and anticipate that cases will be overturned. What would happen if we trained AI but failed to point out that rulings are at times overruled? That’s the mess that some using machine learning are starting to appreciate.

Make sure to read the full paper, “Overturned Legal Rulings Are Pivotal In Using Machine Learning And The Law” by Dr. Lance B. Eliot, at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3998249.

(Source: Mapendo 2022)

Fail, fail fast, and learn from the failure. That could be an accurate summary of a computational system. In law, judicial principles demand a slower pace of change. Under the common law principle of stare decisis, courts are bound by precedent, which can operate as either a vertical or a horizontal rule. Vertical stare decisis means that lower courts are bound by the rulings of higher courts, whereas horizontal stare decisis means that an appellate court’s decision becomes a guiding ruling, but only for similar or related cases at the same level. In essence, stare decisis is meant to instill respect for prior rulings and thereby ensure legal consistency and predictability.

At the same time, the judicial process would grind to a halt if prior decisions could never be overturned and judges could never deviate from the dogma of stare decisis to interpret a case afresh. Needless to say, overturning precedent is the exception rather than the rule. According to a case study of 25,544 rulings of the U.S. Supreme Court from 1789 to 2020, the Court overturned itself in only about 145 instances, or roughly 0.56%. While this share might seem marginal, each reversal has a trickle-down effect on future court rulings at lower levels.
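The overturn rate cited above is easy to verify. A quick back-of-the-envelope check in Python, using the counts from the case study (the study’s exact rounding to 0.56% presumably reflects its precise underlying figures):

```python
# Sanity check of the overturn rate cited above: roughly 145 reversals
# out of 25,544 U.S. Supreme Court rulings between 1789 and 2020.
total_rulings = 25_544
overturned = 145
rate = overturned / total_rulings
print(f"Overturn rate: {rate:.4f} (just over half a percent)")
```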

A high-level description of current ML training procedures might include curating a dataset of court decisions, analyses, and legal briefs in a particular field, which is then used to train an algorithm to apply the essence of those decisions to a real-world scenario. On its face, one could argue for excluding overturned, outdated, or dissenting rulings. That becomes increasingly difficult for legal precedent that is no longer fully applicable yet is still recognized by parts of the judiciary. Exclusion, moreover, would yield a patchwork of curated data that is neither robust nor capable of high-quality legal reasoning. Without considering erroneous or overturned decisions, neither a judge nor an ML system can develop a signal for pattern recognition and sufficiently adjudicate cases. On the other hand, mindlessly training an ML model on everything available could lead the algorithm to amplify erroneous aspects while ranking current precedent lower in a controversial case.
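The curation trade-off described above can be sketched in code. The following is a minimal illustration, not drawn from the paper: the schema (the `Ruling` class, the `status` labels, and the weight values) is entirely hypothetical, but it shows how overturned or dissenting material might be retained in the corpus with a reduced training weight rather than excluded outright:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Ruling:
    """One court decision in the training corpus (hypothetical schema)."""
    case_id: str
    text: str
    year: int
    status: str = "good_law"                 # "good_law", "overturned", "dissent"
    overturned_by: Optional[str] = None      # case_id of the overruling decision

def training_weight(r: Ruling) -> float:
    """Down-weight superseded precedent instead of dropping it, so the model
    still sees the pattern but learns it is no longer controlling."""
    weights = {"good_law": 1.0, "overturned": 0.25, "dissent": 0.1}
    return weights.get(r.status, 0.0)

def curate(corpus: List[Ruling]) -> List[Tuple[str, float]]:
    """Keep every ruling, pairing its text with a status-aware weight."""
    return [(r.text, training_weight(r)) for r in corpus]

corpus = [
    Ruling("plessy-1896", "separate but equal ...", 1896,
           status="overturned", overturned_by="brown-1954"),
    Ruling("brown-1954", "separate is inherently unequal ...", 1954),
]
print(curate(corpus))  # overturned ruling kept, but at reduced weight
```

Down-weighting is only one possible design choice; the broader point is that overturned material stays in the corpus, annotated, so the model can learn both the pattern and its current legal status.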

This paper offers a number of insightful takeaways for anyone building an ML legal reasoning model. Most notably, there is a need for active curation of legal precedent that includes overturned, historic content. Court decisions and judicial opinions must be analyzed for the intellectual footprint that explains the rationale of each decision. Once this rationale is identified, it must be parsed against possible conflicts and dissents to create a robust and just system.

W35Y23 Weekly Review: Google Job Search, AI Hallucinations, and Texas Age Verification Law

+++Danish Media Association sues Google over Job Search Results
+++OpenAI sued over Failed Subject Access Request under the EU’s General Data Protection Regulation (GDPR) 
+++Radio Host sues OpenAI for Defamation over ChatGPT Misinformation 
+++Texas Court orders Injunction of Age Verification Law to Comply with First Amendment

Danish Media Association sues Google over Job Search Results

The Danish Media Association has sued Alphabet (Google) on behalf of Jobindex, a Danish job-search platform, alleging copyright infringement. Jobindex claims that Google copied job ads to its platform without obtaining permission. The lawsuit is significant as the first filed under the EU’s new copyright rules, specifically Article 17 of the EU Copyright Directive, which took effect in 2021. Jobindex calls for fair competition and equal terms. Google, on the other hand, maintains that its Jobs function in Google Search simplifies job searches and respects the choices of job providers, whether big or small.

Read the full report on reuters.com.

OpenAI sued over Failed Subject Access Request under the EU’s General Data Protection Regulation (GDPR) 

OpenAI’s compliance with European privacy regulations is challenged in a lawsuit by Polish cybersecurity researcher Lukasz Olejnik. Olejnik alleges that OpenAI’s ChatGPT language model violates various provisions of the EU’s GDPR, including transparency, fairness, data access rights, and privacy by design. His complaint stems from his discovery that a biography ChatGPT generated about him contained errors, and that when he requested his data under the GDPR, significant information was missing. The lawsuit follows earlier concerns about ChatGPT’s GDPR compliance, including a temporary ban in Italy and investigations in Germany, France, Spain, and Canada. In the U.S., authors have also sued OpenAI over training data issues. The case was filed by law firm GB Partners, with the aim of ensuring that OpenAI demonstrates compliance with the GDPR.

Read the full report on forbes.com.

Radio Host sues OpenAI for Defamation over ChatGPT Misinformation 

American talk radio host Mark Walters filed a defamation lawsuit against OpenAI in the Superior Court of Gwinnett County, Georgia. A journalist had used OpenAI’s ChatGPT to research the proceedings in Second Amendment Foundation (SAF) v. Washington State Attorney General Robert Ferguson, when ChatGPT disregarded the prompts and returned a fabricated summary stating that Mark Walters had been the SAF’s Chief Financial Officer and had embezzled funds while holding that position. Such errors are often referred to as “hallucinations” and usually stem from limitations in an AI model’s training data, its architecture, or the data it was exposed to during the learning process.

Read the full report on theverge.com.
Read the case Walters v. OpenAI LLC, Superior Court, Gwinnett County, State of Georgia, No. 23-A-04860-2.

Texas Court orders Injunction of Age Verification Law to Comply with First Amendment

A Texas judge has issued an injunction to delay the enforcement of an online age verification bill, HB 1181. The bill would have required adult websites to verify users’ ages and display a public health warning about the potential consequences of accessing explicit material. The Free Speech Coalition and adult video sites like Pornhub challenged the bill, arguing it violated the First Amendment and Section 230 rights. The judge agreed, stating that while protecting children from explicit material is essential, the law must align with established First Amendment doctrine. Several states have enacted similar laws, but privacy concerns and enforcement challenges have arisen. This decision in Texas differs from precedents set in other states, but similar bills are under consideration elsewhere.

Read the full report on techcrunch.com.
Read the full text of House Bill 1181.
Read the full text of the Kids Online Safety Act (KOSA). 
Read the injunction Free Speech Coalition, Inc. v. Colmenero, U.S. District Court, Western District of Texas, No. 1:23-cv-00917.

More Headlines

  • Digital Services Act: “The EU’s Digital Services Act goes into effect today: here’s what that means” (by The Verge)
  • Antitrust Law: “Google escapes Play Store class action after finding more persuasive expert” (by Ars Technica)
  • Copyright Law: “Copyright Law and Generative AI: What a mess” (by ABA Journal)
  • Copyright Law: “The US Copyright Office just took a big step toward new rules for generative AI” (by Business Insider)
  • LegalTech: “The legal issues presented by generative AI” (by MIT Sloan)

In Other News (or publications you should read)

This post originated from my publication Codifying Chaos.