W43Y23 Weekly Review: More Meta Legal Woes, YouTube, and AI Chat Bubble

+++ 41 U.S. States Sue Meta Alleging Instagram & Facebook Are Addictive And Harm Kids
+++ YouTube Successfully Defends Its Copyright Repeat Infringer Policy In Court
+++ New York AG Sues Gemini, Genesis, and Digital Currency Group Over $1 Billion Crypto Fraud


41 U.S. States Sue Meta Alleging Instagram & Facebook Are Addictive And Harm Kids
Forty-one U.S. states and D.C. are suing Meta, alleging that Instagram and Facebook’s addictive features harm children’s mental health. The lawsuits stem from a 2021 investigation and accuse Meta of misleading children about safety features, violating privacy laws, and prioritizing profit over well-being. This reflects bipartisan concern about social media’s impact on kids. The lawsuits seek penalties, changes to business practices, and restitution. The legal actions followed revelations that Instagram negatively affected teen girls’ body image. While research on social media’s effect on mental health is inconclusive, these lawsuits show states taking action. Meta has made some safety changes, but it faces continued scrutiny and legal challenges.

Read the full report in the Washington Post.
Read the full report in the New York Times.
Read the case States of Arizona, California, Colorado, Connecticut, Delaware, Georgia, Hawai’i, Idaho, Illinois, Indiana, Kansas, Kentucky, Louisiana, Maine, Michigan, Minnesota, Missouri, Nebraska, New Jersey, New York, North Carolina, North Dakota, Ohio, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Virginia, Washington, West Virginia, Wisconsin v. Meta Platforms Inc., U.S. District Court for the Northern District of California, No. 4:23-cv-05448 


YouTube Successfully Defends Its Copyright Repeat Infringer Policy In Court
Business Casual, a media company, filed copyright infringement lawsuits against RT and YouTube. The case against RT involved RT’s alleged use of Business Casual’s videos, modified with “parallax” technology. RT ignored the case, citing financial constraints due to sanctions. Business Casual’s case against YouTube argued that YouTube infringed by allowing RT to infringe. The court rejected this, and Business Casual appealed. The Second Circuit upheld the lower court’s decision, stating that Business Casual’s claims were without merit and that YouTube was not liable for copyright infringement. The court also clarified that YouTube’s repeat infringer policy does not give rise to a separate cause of action under the DMCA.

Read the full report on Techdirt.
Read the case Business Casual Holdings, LLC v. YouTube, LLC et alia, U.S. Court of Appeals for the Second Circuit, No. 22-3007-cv 


New York AG Sues Gemini, Genesis, and Digital Currency Group Over $1 Billion Crypto Fraud
New York’s Attorney General, Letitia James, is suing cryptocurrency companies Gemini, Genesis, and Digital Currency Group, alleging they misled investors and caused over $1 billion in losses. Gemini marketed a high-yield program with Genesis but allegedly failed to disclose the risks. James seeks to ban these firms from the investment industry in New York and obtain damages. This legal action follows previous lawsuits against these companies for issues like customer protection and selling unregistered securities.

Read the full report on The Verge
Read the case The People of the State of New York v. Gemini/Genesis/Digital Currency Group et al, Supreme Court of the State of New York

More Headlines

  • AI: Are we being led into yet another AI chatbot bubble? (by FastCompany)
  • AI: Why AI Lies (by Psychology Today)
  • AI: Biden to sign executive order expanding capabilities for government to monitor AI risks (by The Hill)
  • Copyright: An AI engine scans a book. Is that copyright infringement or fair use? (by Columbia Journalism Review)
  • Free Speech: Harvard professor Lawrence Lessig on why AI and social media are causing a free speech crisis for the internet (by The Verge)
  • Healthcare: Is AI ready to be integrated into healthcare? (by Silicon Republic)
  • Insurance: Let’s “chat” about A.I. and insurance (by Reuters)
  • Privacy: Artists Allege Meta’s AI Data Deletion Request Process Is a ‘Fake PR Stunt’ (by WIRED)
  • Social Media: Old laws open up a new legal front against Meta and TikTok (by Politico)
  • Social Media: The UK’s controversial Online Safety Bill finally becomes law (by The Verge)

In Other News (or publications you should read)

This post originated from my publication Codifying Chaos.

About Black-Box Medicine

Healthcare in the United States is a complex and controversial subject. Approximately 30 million Americans are uninsured and at risk of financial ruin if they become ill or injured. Advanced science and technology could ease some of the challenges around access, diagnosis, and treatment – if legal and policy frameworks allow innovation while still protecting patients.

tl;dr
Artificial intelligence (AI) is rapidly moving to change the healthcare system. Driven by the juxtaposition of big data and powerful machine learning techniques, innovators have begun to develop tools to improve the process of clinical care, to advance medical research, and to improve efficiency. These tools rely on algorithms, programs created from health-care data that can make predictions or recommendations. However, the algorithms themselves are often too complex for their reasoning to be understood or even stated explicitly. Such algorithms may be best described as “black-box.” This article briefly describes the concept of AI in medicine, including several possible applications, then considers its legal implications in four areas of law: regulation, tort, intellectual property, and privacy.

Make sure to read the full article titled Artificial Intelligence in Health Care: Applications and Legal Issues by William Nicholson Price II, JD/PhD at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3078704


When you describe your well-being to ChatGPT, it takes about 10 seconds for the machine to present a list of possible conditions you could suffer from, common causes, and treatments. It offers unprecedented access to medical knowledge. When you ask how ChatGPT arrived at its assessment, the machine is less responsive. In fact, the algorithms that power ChatGPT and other AI assistants derive their knowledge from complex neural networks trained on billions of data points. This information – some publicly available, some licensed and specifically trained into the model’s predictive capabilities – is rarely analyzed for accuracy and applicability to the individual circumstances of each user at the time of their request. OpenAI, the current market leader for this type of technology, uses reinforcement learning from human feedback (RLHF) and proximal policy optimization (PPO) to achieve a level of accuracy that has the potential to upend modern medicine by making healthcare assessments available to those who cannot afford them.

Interestingly, the assessment is something of a black box for both medical professionals and patients. Transparency efforts and insights into the algorithmic structure of the machine learning models that power these chat interfaces still seem insufficient to explain how a specific recommendation came to be and whether the prediction is tailored to the user’s medical needs or merely derived from statistical patterns. The author paints a vivid picture by breaking down the current state of medicine, healthcare, and artificial intelligence and characterizing it with the “three V’s”:

  1. Volume: large quantities of data – both public and personally identifiable (health) information – used to train ever-voracious large language models. Never before in history has mankind collected more health-related data through personal fitness trackers, doctor appointments, and treatment plans than it does today.
  2. Variety: heterogeneity of data and access beyond identity, borders, languages, or cultural references. Our health data comes from a wealth of different sources: while wearables track specific aspects of our well-being, location and travel data may indicate our actual well-being.
  3. Velocity: fast access to data – in some instances, medical data that would otherwise have taken weeks to process is available within seconds. Arguably, we have come a long way since WebMD broke down the velocity barrier.

The “three V’s” allow for quick results, but usually lack the why and how of a conclusion. The author coined the term “Black-Box Medicine” for this. While this creates some uncertainty, it also creates many opportunities for ancillary medical functions, e.g. prognostics, diagnostics, image analysis, and treatment recommendations. Furthermore, it raises interesting legal questions: how does society ensure black-box medicine is safe and effective, and how can it protect patients and patient privacy throughout the process?

Almost immediately, the question of oversight comes to mind. The Food and Drug Administration (FDA) does not regulate “the practice of medicine” but could be tasked with overseeing the deployment of medical devices. Is an algorithm that is trained with patient and healthcare data a medical device? Perhaps the U.S. Department of Health and Human Services or state medical boards can claim oversight, but the author argues disputes will certainly arise over this point. Assuming the FDA were to oversee algorithms and subject them to traditional methods of testing medical devices, it would likely require clinical trials that could not produce meaningful scientific results, because an adaptive algorithm, by design, changes over time and adjusts to new patient circumstances. Hence the author sees innovation at risk of slowing down if the healthcare industry is not quick to adopt “sandbox environments” that allow safe testing of the technology without compromising progress.

Another interesting question is who is responsible when things go wrong. Medical malpractice commonly traces back to the doctor or medical professional in charge of the treatment. If medical assessment is reduced to a user and a keyboard, will the software engineer who manages the codebase be held liable for ill-conceived advice? Perhaps the company that employs the engineers? Or the owner of the model and training data? If a doctor leverages artificial intelligence for image analysis, does that impose a stricter duty of care on the doctor? The author doesn’t provide a conclusive answer, and courts have yet to develop case law on this emerging topic in healthcare.

While this article was first published in 2017, I find it to be accurate and relevant today as it raises intriguing questions about governance, liability, privacy, and intellectual property rights concerning healthcare in the context of artificial intelligence and medical devices in particular. The author leaves it to the reader to answer the question: “Does entity-centered privacy regulation make sense in a world where giant data agglomerations are necessary and useful?”   

W42Y23 Weekly Review: UMG v. Anthropic AI, Bad Bunny’s Fish Market, Costco and Meta Pixel Lawsuit, Greer v. Kiwi Farms, and Google Data Scraping

+++ Universal Sues Anthropic Over Unlicensed, AI-Generated Music Lyrics
+++ Bad Bunny, J Balvin Face Copyright Lawsuit Over “Fish Market” Drum Pattern
+++ Costco Sued Over Sharing Consumer Data With Facebook Through Meta Pixel Tracking
+++ Book Author Beats Platform In Copyright Infringement Case, Setting A Concerning Precedent
+++ Google Attempts To Dismiss Lawsuit Over Scraping User Data To Train Its Bard AI


Universal Sues Anthropic Over Unlicensed, AI-Generated Music Lyrics
Record label Universal Music Group and other music publishers have filed a lawsuit against AI company Anthropic for distributing copyrighted lyrics through its AI model Claude 2. The complaint claims that Claude 2 can generate lyrics almost identical to popular songs without proper licensing, including songs by Katy Perry, Gloria Gaynor, and the Rolling Stones. The complaint also alleges that Anthropic used copyrighted lyrics to train its AI models. The music publishers argue that while sharing lyrics online is common, many platforms pay to license them; Anthropic, by contrast, omits critical copyright information. Universal Music Group accuses Anthropic of copyright infringement and emphasizes the need to protect the rights of creators in the context of generative AI.

Read the full report on Axios
Read the full report on ArsTechnica.
Read the case Universal Music Group et alia v. Anthropic PBC, U.S. District Court for the Middle District of Tennessee, Nashville Division, No. 3:23-cv-01092 


Bad Bunny, J Balvin Face Copyright Lawsuit Over “Fish Market” Drum Pattern
Influential reggaeton artists Bad Bunny and J Balvin face copyright infringement claims for allegedly copying or sampling an instrumental percussion track called “Fish Market,” created by Jamaican producers Steely & Clevie in 1989. The lawsuit alleges that the distinctive drum pattern from “Fish Market” has been used by reggaeton artists without proper licensing and has become the foundation of the reggaeton genre. The artists’ defense centers on the scope of the lawsuit: it is unclear which works the plaintiffs own and whether a rhythm like “Fish Market” is entitled to protection under U.S. copyright law. The judge expressed concerns about the impact of copyright protection on creative activity in music, specifically the reggaeton genre.

Read the full report on Rolling Stone
Read the case Cleveland Browne/ Estate of Wycliffe Johnson et al v. Benito Martínez Ocasio (aka Bad Bunny) and José Álvaro Osorio Balvin (aka J Balvin) et alia, U.S. District Court for the Central District of California, No. 2:21-cv-02840-AB-AFM 


Costco Sued Over Sharing Consumer Data With Facebook Through Meta Pixel Tracking
A lawsuit filed against Costco alleges that the company shared users’ private communications and health information with Meta (Facebook’s parent company) without their consent. The lawsuit claims that Costco used Meta Pixel, which allows companies to track website visitor activity, in the healthcare section of its website. Users’ personal and health information was made accessible to Meta through this tracking pixel. The lawsuit alleges that using tracking services like Pixel violates privacy. It further alleges that Meta could monetize the information by selling ads to insurance companies looking to market to tracked customers. 

Read the full report in The Seattle Times
Read the case Castillo, Knowles, Rodriguez, Throlson v. Costco Wholesale Corporation, U.S. District Court for the Western District of Washington at Seattle, No. 2:23-cv-01548


Book Author Beats Platform In Copyright Infringement Case, Setting A Concerning Precedent
Self-published author Russell Greer successfully sued Kiwi Farms, a website known for cyberattacks and harassment campaigns, for copyright infringement of his book “Why I Sued Taylor Swift and How I Became Falsely Known as Frivolous, Litigious and Crazy”. Greer’s book was copied and distributed on Kiwi Farms without his permission. Greer sent a DMCA takedown notice, but Kiwi Farms didn’t comply and instead posted the notice. Greer then sued Kiwi Farms for contributory copyright infringement. The lower court found Greer’s claims unsubstantiated, but on appeal the Tenth Circuit held he had plausibly alleged direct copyright infringement, Kiwi Farms’ knowledge of it, and its material contribution to it.

Read the full report on Ars Technica.
Read the full blog post by Eric Goldman
Read the appellate case Russell Greer v. Joshua Moon & Kiwi Farms, United States Court of Appeals for the Tenth Circuit, No. 21-4128


Google Attempts To Dismiss Lawsuit Over Scraping User Data To Train Its Bard AI
Google is seeking the dismissal of a proposed class action lawsuit in California that alleges the company violated people’s privacy and property rights by scraping data for training AI systems. Google argues that using public data is essential for training generative AI systems and that the lawsuit would be detrimental to the development of AI. The lawsuit was filed by eight unnamed individuals who claim Google misused content from social media and its platforms for AI training. Google’s general counsel called the lawsuit “baseless” and stated that US law allows the use of public information for beneficial purposes. Google also contended that it followed fair use copyright law in using certain content for AI training.

Read the full report on Reuters
Read the case J.L. v. Alphabet Inc, U.S. District Court for the Northern District of California, No. 3:23-cv-03440.
Read the motion to dismiss J.L. v. Alphabet Inc, U.S. District Court for the Northern District of California, No. 3:23-cv-03440-AMO.

More Headlines

  • AI: A.I. May Not Get a Chance to Kill Us if This Kills It First (by Slate)
  • AI: Religious authors allege tech companies used their books to train AI without permission (by The Hill)
  • AI: Rising AI Use Paired With Layoffs Invites Age Bias Litigation (by Bloomberg Law)
  • Antitrust: Amazon Offered to Settle California Antitrust Suit in 2022 (by Bloomberg)
  • CDA230: Big Tech’s Favorite Legal Shield Takes a Hit (by The Hollywood Reporter)
  • Crypto: Cryptocurrency firms sued over ‘$1bn investor fraud’ by New York state (by Guardian)
  • Cybersecurity: Who’s Afraid of Products Liability? Cybersecurity and the Defect Model (by Lawfare)
  • Data Privacy: The new EU-US data agreement is facing familiar privacy challenges (by The Hill)
  • Data Privacy: Meta shareholder lawsuit over user privacy revived by appeals court (by Reuters)
  • Data Privacy: US senator asks 23andMe for details after reported data for sale online (by Reuters)
  • Net Neutrality: FCC moves ahead with Title II net neutrality rules in 3-2 party-line vote (by Ars Technica)
  • Social Media: Elon Musk’s X fined $380K over “serious” child safety concerns, watchdog says (by Ars Technica)
  • Social Media: TikTok’s Trials and Tribulations Mount (by Tech Policy Press)
  • Social Media: Is ‘Xexit’ nigh? Elon Musk denies talking about pulling X from the EU, but he may not have a choice (by Fortune)

In Other News (or publications you should read)

This post originated from my publication Codifying Chaos.

Playing With Power 

A short book review of Tim Higgins’s controversial book “Power Play – Tesla, Elon Musk, and the Bet of the Century.”


“Power Play” by Tim Higgins chronicles the inception and scaling of Tesla Motors. The book explores the history and challenges faced by Tesla, its growth as a company, and the impact it has had on the automotive and clean energy sectors. It delves into the business strategies, controversies, and innovations that have defined Tesla’s journey, as well as the broader implications of its success for the future of transportation and sustainable energy. 

A strong focus of the author is on the leadership decisions, behavior, and actions of Elon Musk. He is depicted as a complex, stubborn, and erratic micromanager who is also a visionary entrepreneur and romantic futurist. Musk’s relentless drive to perform ultra hardcore all the time, and his demand that his employees do the same, is a common theme throughout the book. This mindset seemingly allows Musk to deliver on promises that traditional automakers thought impossible. But it also creates an adversarial environment between him and his employees and supporters, testing the depth and longevity of those relationships and – unsurprisingly – churning through most of them. The complex order of historical events, combined with the author’s style of jumping from crisis to peacetime to crisis, creates unwanted contradictions about Elon Musk, making it harder to follow the events as they unfold.

Personally, I found it an intriguing account with a neutral depiction of a leader who attempts the impossible. As an entrepreneur or startup founder, I can relate to the everlasting moments of despair Musk must have experienced. The book is inspiring to the extent that it conveys a sense of urgency and survival for Tesla and its leaders in an environment that is rooting against them. 

The hardcover book is currently priced at $30 on Bookshop. While a very large online platform offers a discounted price on the book, I found $30 too expensive for a subject so very much in the public eye of our times. If you Musk (see the pun there), start with Ashlee Vance’s account of Elon Musk titled “Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future.” I read it a while back and still fondly remember how energized I felt after reading it.  

Initial criticism of the book called out Higgins’s depiction of Elon Musk as a business leader as unsubstantiated, since he had no access to internal communications – or to the man himself. However, works of non-fiction are often written with information derived from former employees and accounts close to the business. His notes indicate he relied heavily on interviews but do not disclose with whom or when the interviews took place. It’s not a negative sign, but a possible canary in the coal mine that this book, too, is somewhat riding on the media interest around Elon Musk. In a note from the author, Higgins relays a response from Elon Musk: “Most, but not all, of what you read in this book is nonsense.” Elon Musk later posted on X about the book: “Higgins managed to make his book both false *and* boring 🤣🤣”

W41Y23 Weekly Review: Utah v. TikTok, FCC v. Dish, and Killing a Yoga App in the Metaverse

+++Utah Sues TikTok Over Its “Addictive Nature” That Harms Children
+++FCC Fines Dish Television Network Over Satellite Space Debris
+++Yoga App Developer Sues Meta Over Collusion To Kill VR App 
+++EU Requests X’s Compliance With Digital Services Act In Light Of Israel-Hamas Misinformation

Utah Sues TikTok Over Its “Addictive Nature” That Harms Children
Utah’s Division of Consumer Protection is suing TikTok, alleging the app’s addictive nature harms children and that it hides its connection to its Chinese parent company, ByteDance. The lawsuit claims TikTok violates the Utah Consumer Sales Practices Act, demands a jury trial, and seeks an injunction, legal fees, restitution, damages exceeding $300,000, and $300,000 in civil penalties. 

Read the full report on The Verge
Read the case Utah Division of Consumer Protection v. TikTok, District Court for the County of Salt Lake, Utah. 


FCC Fines Dish Television Network Over Satellite Space Debris
The FCC imposed a $150,000 fine on Dish for failing to move a satellite to a safe orbit, marking a significant step in addressing space debris concerns. The fine affected Dish’s reputation, causing a 4% drop in its share price, serving as a warning to other companies. This action may boost the market for commercial space debris removal services and encourage other countries to take similar measures. With the growing number of satellites in orbit, managing space debris is essential to prevent collisions. Dish admitted liability, and the FCC could impose higher fines in the future. This reflects a trend of enforcing compliance with licensing requirements to address space debris issues.

Read the full report on MIT Technology Review
Read the order FCC v. Dish Operating LLC, No. DA 23-888


Yoga App Developer Sues Meta Over Collusion To Kill VR App 
Meta is facing a lawsuit from a software developer, Andre Elijah, who alleges that the company canceled his virtual reality yoga app, AEI Fitness, just before its launch upon learning of his discussions with Apple and ByteDance. Elijah claims that Meta’s actions cost him potentially tens of millions in the growing VR fitness app market. The lawsuit accuses Meta of trying to control the VR headset and app distribution market, which could limit innovation and consumer choice. Elijah is seeking $3.2 million and substantial lost revenue.

Read the full report on Fortune.
Read the case Andre Elijah Immersive Inc. v. Meta Platforms Technologies LLC, U.S. District Court for the Northern District of California, No. 5:23-cv-05159.


EU Requests X’s Compliance With Digital Services Act In Light Of Israel-Hamas Misinformation
The European Union has escalated scrutiny of Elon Musk’s company, X (formerly Twitter), after reports of illegal content and disinformation related to the Israel-Hamas conflict circulated on the platform. The EU issued a formal request for more information, which may lead to a formal investigation under the Digital Services Act (DSA). Non-compliance could result in fines of up to 6% of annual turnover and service blocking. X’s compliance with DSA rules, including content moderation, complaint handling, risk assessment, and enforcement, is under review. The company has until October 18 to provide information, and the EU will assess the next steps based on X’s response. 

Read the full report on TechCrunch
Read the press release by the European Commission requesting X to comply with the Digital Services Act. 

More Headlines

  • Right To Repair Act: Right-to-repair is now the law in California (by The Verge)
  • Data Privacy: Apple AirTags stalking led to ruin and murders, lawsuit says (by ArsTechnica)
  • AI: This Prolific LA Eviction Law Firm Was Caught Faking Cases In Court. Did They Misuse AI? (by LAist)
  • Online News Act: Canada news industry body backs Google’s concerns about online news law (by Reuters)
  • Data Privacy: What we know about the 23andMe data breach (by San Francisco Business Times)
  • Data Privacy: Delete Act signed by Gavin Newsom will enable residents to request all data brokers in the state remove their information (by Guardian)

In-Depth Reads

  • How a billionaire-backed network of AI advisers took over Washington (by Politico)
  • The AI regulatory toolbox: How governments can discover algorithmic harms (by Brookings)

In Other News (or publications you should read)

This post originated from my publication Codifying Chaos.