W43Y23 Weekly Review: More Meta Legal Woes, YouTube, and AI Chat Bubble

+++ 41 U.S. States Sue Meta Alleging Instagram & Facebook Are Addictive To And Harm Kids
+++ YouTube Successfully Defends Its Copyright Repeat Infringer Policy In Court
+++ New York AG Sues Gemini, Genesis, and Digital Currency Group Over $1 Billion Crypto Fraud

41 U.S. States Sue Meta Alleging Instagram & Facebook Are Addictive To And Harm Kids
Forty-one U.S. states and D.C. are suing Meta, alleging that Instagram and Facebook’s addictive features harm children’s mental health. The lawsuits stem from a 2021 investigation, accusing Meta of misleading children about safety features, violating privacy laws, and prioritizing profit over well-being. This reflects bipartisan concern about social media’s impact on kids. The lawsuits seek penalties, business practice changes, and restitution. The legal actions followed revelations that Instagram negatively affected teen girls’ body image. While research on social media’s effect on mental health is inconclusive, these lawsuits show states taking action. Meta has made some safety changes, but it faces continued scrutiny and legal challenges.

Read the full report in the Washington Post
Read the full report in the New York Times.
Read the case States of Arizona, California, Colorado, Connecticut, Delaware, Georgia, Hawai’i, Idaho, Illinois, Indiana, Kansas, Kentucky, Louisiana, Maine, Michigan, Minnesota, Missouri, Nebraska, New Jersey, New York, North Carolina, North Dakota, Ohio, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Virginia, Washington, West Virginia, Wisconsin v. Meta Platforms Inc., U.S. District Court for the Northern District of California, No. 4:23-cv-05448 

YouTube Successfully Defends Its Copyright Repeat Infringer Policy In Court
Business Casual, a media company, filed copyright infringement lawsuits against RT and YouTube. The case against RT involved the alleged use of Business Casual’s videos, modified with “parallax” technology. RT ignored the case, citing financial constraints due to sanctions. Business Casual’s case against YouTube argued that YouTube was itself liable for allowing RT to infringe. The court rejected this theory, and Business Casual appealed. The U.S. Court of Appeals for the Second Circuit upheld the lower court’s decision, stating that Business Casual’s claims were without merit and that YouTube was not liable for copyright infringement. The court also clarified that YouTube’s repeat infringer policy does not give rise to a separate cause of action under the DMCA.

Read the full report on Techdirt.
Read the case Business Casual Holdings, LLC v. YouTube, LLC et alia, U.S. Court of Appeals for the Second Circuit, No. 22-3007-cv 

New York AG Sues Gemini, Genesis, and Digital Currency Group Over $1 Billion Crypto Fraud
New York’s Attorney General, Letitia James, is suing cryptocurrency companies Gemini, Genesis, and Digital Currency Group, alleging they misled investors and caused over $1 billion in losses. Gemini marketed a high-yield program with Genesis but allegedly failed to disclose the risks. James seeks to ban these firms from the investment industry in New York and obtain damages. This legal action follows previous lawsuits against these companies for issues like customer protection and selling unregistered securities.

Read the full report on The Verge
Read the case The People of the State of New York v. Gemini/Genesis/Digital Currency Group et al, Supreme Court of the State of New York

More Headlines

  • AI: Are we being led into yet another AI chatbot bubble? (by FastCompany)
  • AI: Why AI Lies (by Psychology Today)
  • AI: Biden to sign executive order expanding capabilities for government to monitor AI risks (by The Hill)
  • Copyright: An AI engine scans a book. Is that copyright infringement or fair use? (by Columbia Journalism Review)
  • Free Speech: Harvard professor Lawrence Lessig on why AI and social media are causing a free speech crisis for the internet (by The Verge)
  • Healthcare: Is AI ready to be integrated into healthcare? (by Silicon Republic)
  • Insurance: Let’s “chat” about A.I. and insurance (by Reuters)
  • Privacy: Artists Allege Meta’s AI Data Deletion Request Process Is a ‘Fake PR Stunt’ (by WIRED)
  • Social Media: Old laws open up a new legal front against Meta and TikTok (by Politico)
  • Social Media: The UK’s controversial Online Safety Bill finally becomes law (by The Verge)

In Other News (or publications you should read)

This post originated from my publication Codifying Chaos.

About Black-Box Medicine

Healthcare in the United States is a complex and controversial subject. Approximately 30 million Americans are uninsured and at risk of financial ruin if they become ill or injured. Advanced science and technology could ease some of the challenges around access, diagnosis, and treatment if legal and policy frameworks balance patient protection with medical innovation.

Artificial intelligence (AI) is rapidly moving to change the healthcare system. Driven by the juxtaposition of big data and powerful machine learning techniques, innovators have begun to develop tools to improve the process of clinical care, to advance medical research, and to improve efficiency. These tools rely on algorithms, programs created from health-care data that can make predictions or recommendations. However, the algorithms themselves are often too complex for their reasoning to be understood or even stated explicitly. Such algorithms may be best described as “black-box.” This article briefly describes the concept of AI in medicine, including several possible applications, then considers its legal implications in four areas of law: regulation, tort, intellectual property, and privacy.

Make sure to read the full article titled Artificial Intelligence in Health Care: Applications and Legal Issues by William Nicholson Price II, JD/PhD at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3078704

When you describe your well-being to ChatGPT, it takes about 10 seconds for the machine to present a list of possible conditions you could suffer from, common causes, and treatments. It offers unprecedented access to medical knowledge. Ask how ChatGPT arrived at its assessment, however, and the machine is less forthcoming. The algorithms that power ChatGPT and other AI assistants derive their knowledge from complex neural networks trained on billions of data points – some publicly available, others licensed and trained into the model’s predictive capabilities – that are rarely analyzed for accuracy or applicability to the individual circumstances of each user at the time of their request. OpenAI, the current market leader for this type of technology, uses reinforcement learning from human feedback and proximal policy optimization to achieve a level of accuracy that has the potential to upend modern medicine by making healthcare assessments available to those who cannot afford them.

Interestingly, the assessment is something of a black box for both medical professionals and patients. Transparency efforts and insights into the algorithmic structure of the machine learning models that power these chat interfaces still seem insufficient to explain how a specific recommendation came to be and whether the prediction is tailored to the user’s medical needs or merely derived from statistical patterns. The author paints a vivid picture by breaking down the current state of medicine/healthcare and artificial intelligence and characterizing it with the “three V’s”:

  1. Volume: large quantities of data – both public and personally identifiable (health) information – used to train ever-voracious large language models. Never before in history has mankind collected more health-related data through personal fitness trackers, doctor appointments, and treatment plans than it does today.
  2. Variety: heterogeneity of data and access beyond identity, borders, languages, or cultural references. Our health data comes from a wealth of different sources: wearables track specific vitals, while location and travel data may indicate our actual well-being.
  3. Velocity: fast access to data – in some instances processing in seconds medical data that would otherwise have taken weeks to analyze. Arguably, we have come a long way since WebMD broke down the velocity barrier.

The “three V’s” allow for quick results, but those results usually lack the why and how of the conclusion reached. The author coined the term “Black-Box Medicine” for this. While it creates some uncertainty, it also creates many opportunities for ancillary medical functions, e.g. prognostics, diagnostics, image analysis, and treatment recommendations. Furthermore, it raises interesting legal questions: how does society ensure black-box medicine is safe and effective, and how can it protect patients and patient privacy throughout the process?

Almost immediately, the question of oversight comes to mind. The Food and Drug Administration (FDA) does not regulate “the practice of medicine” but could be tasked to oversee the deployment of medical devices. Is an algorithm that is trained with patient and healthcare data a medical device? Perhaps the U.S. Department of Health and Human Services or state medical boards can claim oversight, but the author argues disputes will certainly arise over this point. Assuming the FDA did oversee algorithms and subjected them to traditional methods of testing medical devices, it would likely require clinical trials that couldn’t produce reliable scientific results, because an adaptive algorithm, by design, changes over time and adjusts to new patient circumstances. Hence the author sees innovation at risk of slowing down if the healthcare industry is not quick to adopt “sandbox environments” that allow safe testing of the technology without compromising progress.

Another interesting question is who is responsible when things go wrong. Medical malpractice commonly traces back to the doctor or medical professional in charge of the treatment. If medical assessment is reduced to a user and a keyboard, will the software engineer who manages the codebase be held liable for ill-conceived advice? Perhaps the company that employs the engineer(s)? Or the owner of the model and training data? If a doctor leverages artificial intelligence for image analysis, does that impose a stricter duty of care on the doctor? The author doesn’t provide a conclusive answer, and courts have yet to develop case law on this emerging topic in healthcare.

While this article was first published in 2017, I find it accurate and relevant today, as it raises intriguing questions about governance, liability, privacy, and intellectual property rights concerning healthcare in the context of artificial intelligence, and medical devices in particular. The author leaves it to the reader to answer the question: “Does entity-centered privacy regulation make sense in a world where giant data agglomerations are necessary and useful?”

W42Y23 Weekly Review: UMG v. Anthropic AI, Bad Bunny’s Fish Market, Costco and Meta Pixel Lawsuit, Greer v. Kiwi Farms, and Google Data Scraping

+++ Universal Sues Anthropic Over Unlicensed, AI-Generated Music Lyrics
+++ Bad Bunny, J Balvin Face Copyright Lawsuit Over “Fish Market” Drum Pattern
+++ Costco Sued Over Sharing Consumer Data With Facebook Through Meta Pixel Tracking
+++ Book Author Beats Copyright Infringement Platform; Setting Concerning Precedent
+++ Google Attempts To Dismiss Lawsuit Over Scraping User Data To Train Its BARD AI

Universal Sues Anthropic Over Unlicensed, AI-Generated Music Lyrics
Record label Universal Music Group and other music publishers have filed a lawsuit against AI company Anthropic for distributing copyrighted lyrics through its AI model Claude 2. The complaint claims that Claude 2 can generate lyrics almost identical to popular songs without proper licensing, including songs by Katy Perry, Gloria Gaynor, and the Rolling Stones. The complaint also alleges that Anthropic used copyrighted lyrics to train its AI models. The music publishers argue that while sharing lyrics online is common, many platforms pay to license them, while Anthropic omits critical copyright information. Universal Music Group accuses Anthropic of copyright infringement and emphasizes the need to protect the rights of creators in the context of generative AI. 

Read the full report on Axios
Read the full report on ArsTechnica.
Read the case Universal Music Group et alia v. Anthropic PBC, U.S. District Court for the Middle District of Tennessee, Nashville Division, No. 3:23-cv-01092 

Bad Bunny, J Balvin Face Copyright Lawsuit Over “Fish Market” Drum Pattern
Influential reggaeton artists Bad Bunny and J Balvin face copyright infringement claims for illegally copying or sampling an instrumental percussion track called “Fish Market,” created by Jamaican producers Steely & Clevie in 1989. The lawsuit alleges that the distinctive drum pattern from “Fish Market” has been used by reggaeton artists without proper licensing and has become the foundation of the reggaeton genre. The artists’ defense centers on the scope of the lawsuit: it is unclear what works the plaintiffs own and whether a rhythm like “Fish Market” is entitled to protection under U.S. copyright law. The judge expressed concerns about the impact of copyright protection on creative activity in music, specifically the reggaeton genre.

Read the full report on Rolling Stone
Read the case Cleveland Browne/ Estate of Wycliffe Johnson et al v. Benito Martínez Ocasio (aka Bad Bunny) and José Álvaro Osorio Balvin (aka J Balvin) et alia, U.S. District Court for the Central District of California, No. 2:21-cv-02840-AB-AFM 

Costco Sued Over Sharing Consumer Data With Facebook Through Meta Pixel Tracking
A lawsuit filed against Costco alleges that the company shared users’ private communications and health information with Meta (Facebook’s parent company) without their consent. The lawsuit claims that Costco used Meta Pixel, which allows companies to track website visitor activity, in the healthcare section of its website. Users’ personal and health information was made accessible to Meta through this tracking pixel. The lawsuit alleges that using tracking services like Pixel violates privacy. It further alleges that Meta could monetize the information by selling ads to insurance companies looking to market to tracked customers. 
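For readers unfamiliar with how a tracking pixel leaks context: the pixel is a small script or image request the visitor’s browser sends to a third-party server, and the request URL itself carries the address of the page being viewed. The sketch below is purely illustrative – the endpoint, parameter names, and pixel ID are hypothetical stand-ins, not Meta Pixel’s actual API:

```python
from urllib.parse import urlencode

def build_pixel_url(pixel_id: str, page_url: str, event: str) -> str:
    """Assemble the kind of GET request a tracking pixel might fire.

    All names here (endpoint, parameters) are hypothetical,
    not Meta Pixel's real implementation.
    """
    params = urlencode({
        "id": pixel_id,   # the site's tracking identifier
        "ev": event,      # event name, e.g. a page view
        "dl": page_url,   # full URL of the page the visitor is on
    })
    return f"https://tracker.example.com/tr?{params}"

# If the pixel fires on a pharmacy page, the page URL alone can reveal
# health-related context to the third party receiving the request:
url = build_pixel_url("12345", "https://shop.example.com/pharmacy/refill?rx=987", "PageView")
print(url)
```

Because the browser sends this request directly to the third party, the tracker typically also receives its own cookies and the visitor’s IP address, which is how on-site activity can be tied back to an identifiable account.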

Read the full report in The Seattle Times
Read the case Castillo, Knowles, Rodriguez, Throlson v. Costco Wholesale Corporation, U.S. District Court for the Western District of Washington at Seattle, No. 2:23-cv-01548

Book Author Beats Copyright Infringement Platform; Setting Concerning Precedent
Self-published author Russell Greer prevailed on appeal against Kiwi Farms, a website known for harassment campaigns, over copyright infringement of his book “Why I Sued Taylor Swift and How I Became Falsely Known as Frivolous, Litigious and Crazy”. Greer’s book was copied and distributed on Kiwi Farms without his permission. Greer sent a DMCA takedown notice, but Kiwi Farms didn’t comply and instead posted the notice publicly. Greer sued Kiwi Farms for contributory copyright infringement. The lower court found Greer’s claims unsubstantiated, but on appeal the court held that Greer had plausibly alleged direct infringement, Kiwi Farms’ knowledge of it, and its material contribution to it.

Read the full report on Ars Technica.
Read the full blog post by Eric Goldman
Read the appellate case Russell Greer v. Joshua Moon & Kiwi Farms, United States Court of Appeals for the Tenth Circuit, No. 21-4128

Google Attempts To Dismiss Lawsuit Over Scraping User Data To Train Its BARD AI
Google is seeking the dismissal of a proposed class action lawsuit in California that alleges the company violated people’s privacy and property rights by scraping data for training AI systems. Google argues that using public data is essential for training generative AI systems and that the lawsuit would be detrimental to the development of AI. The lawsuit was filed by eight unnamed individuals who claim Google misused content from social media and its platforms for AI training. Google’s general counsel called the lawsuit “baseless” and stated that US law allows the use of public information for beneficial purposes. Google also contended that it followed fair use copyright law in using certain content for AI training.

Read the full report on Reuters
Read the case J.L. v. Alphabet Inc, U.S. District Court for the Northern District of California, No. 3:23-cv-03440.

Read the motion to dismiss J.L. v. Alphabet Inc, U.S. District Court for the Northern District of California, No. 3:23-cv-03440-AMO. 

More Headlines

  • AI: A.I. May Not Get a Chance to Kill Us if This Kills It First (by Slate)
  • AI: Religious authors allege tech companies used their books to train AI without permission (by The Hill)
  • AI: Rising AI Use Paired With Layoffs Invites Age Bias Litigation (by Bloomberg Law)
  • Antitrust: Amazon Offered to Settle California Antitrust Suit in 2022 (by Bloomberg)
  • CDA230: Big Tech’s Favorite Legal Shield Takes a Hit (by The Hollywood Reporter)
  • Crypto: Cryptocurrency firms sued over ‘$1bn investor fraud’ by New York state (by Guardian)
  • Cybersecurity: Who’s Afraid of Products Liability? Cybersecurity and the Defect Model (by Lawfare)
  • Data Privacy: The new EU-US data agreement is facing familiar privacy challenges (by The Hill)
  • Data Privacy: Meta shareholder lawsuit over user privacy revived by appeals court (by Reuters)
  • Data Privacy: US senator asks 23andMe for details after reported data for sale online (by Reuters)
  • Net Neutrality: FCC moves ahead with Title II net neutrality rules in 3-2 party-line vote (by Ars Technica)
  • Social Media: Elon Musk’s X fined $380K over “serious” child safety concerns, watchdog says (by Ars Technica)
  • Social Media: TikTok’s Trials and Tribulations Mount (by Tech Policy Press)
  • Social Media: Is ‘Xexit’ nigh? Elon Musk denies talking about pulling X from the EU, but he may not have a choice (by Fortune)

In Other News (or publications you should read)

This post originated from my publication Codifying Chaos.

Playing With Power 

A short book review of Tim Higgins’s controversial book “Power Play – Tesla, Elon Musk, and the Bet of the Century.”

“Power Play” by Tim Higgins chronicles the inception and scaling of Tesla Motors. The book explores the history and challenges faced by Tesla, its growth as a company, and the impact it has had on the automotive and clean energy sectors. It delves into the business strategies, controversies, and innovations that have defined Tesla’s journey, as well as the broader implications of its success for the future of transportation and sustainable energy. 

A strong focus of the author is on the leadership decisions, behavior, and actions of Elon Musk. He is depicted as a complex, stubborn, and erratic micromanager who is also a visionary entrepreneur and romantic futurist. Musk’s relentless drive to perform “ultra hardcore” at all times, and his demand that employees do the same, is a common theme throughout the book. This mindset seemingly allows Musk to deliver on promises that traditional automakers thought impossible. But it also creates an adversarial environment between him and his employees and supporters, testing the depth and longevity of those relationships and – unsurprisingly – churning through most of them. The complex order of historical events, in conjunction with the author’s writing style of jumping from crisis to peacetime to crisis, creates unwanted contradictions about Elon Musk and makes it harder to follow the events as they unfold.

Personally, I found it an intriguing account with a neutral depiction of a leader who attempts the impossible. As an entrepreneur or startup founder, I can relate to the everlasting moments of despair Musk must have experienced. The book is inspiring to the extent that it conveys a sense of urgency and survival for Tesla and its leaders in an environment that is rooting against them. 

The hardcover book is currently priced at $30 on Bookshop. While a very large online platform offers a discounted price on the book, I found $30 too expensive for a subject so very much in the public eye of our times. If you Musk (see the pun there), start with Ashlee Vance’s account titled “Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future.” I read it a while back and still fondly remember how energized I felt after reading it.

Initial criticism of the book claimed that Higgins’s depiction of Elon Musk as a business leader is unsubstantiated, since he had no access to internal communications – or to the man himself. However, works of non-fiction are often written with information derived from former employees and accounts close to the business. His notes indicate he relied heavily on interviews but do not disclose with whom or when the interviews took place. That is not a negative sign in itself, but a possible canary in the coal mine that this book, too, is somewhat riding on the media interest around Elon Musk. In a note from the author, Higgins relays a response from Elon Musk: “Most, but not all, of what you read in this book is nonsense.” Elon Musk later posted on X about the book: “Higgins managed to make his book both false *and* boring 🤣🤣”

W41Y23 Weekly Review: Utah v. TikTok, FCC v. Dish, and Killing a Yoga App in the Metaverse

+++Utah Sues TikTok Over Its “Addictive Nature” That Harms Children
+++FCC Fines Dish Television Network Over Satellite Space Debris
+++Yoga App Developer Sues Meta Over Collusion To Kill VR App 
+++EU Requests X’s Compliance With Digital Services Act In Light Of Israel-Hamas Misinformation

Utah Sues TikTok Over Its “Addictive Nature” That Harms Children
Utah’s Division of Consumer Protection is suing TikTok, alleging the app’s addictive nature harms children and that it hides its connection to its Chinese parent company, ByteDance. The lawsuit claims TikTok violates the Utah Consumer Sales Practices Act, demands a jury trial, and seeks an injunction, legal fees, restitution, damages exceeding $300,000, and $300,000 in civil penalties. 

Read the full report on The Verge
Read the case Utah Division of Consumer Protection v. TikTok, District Court of Salt Lake County, Utah. 

FCC Fines Dish Television Network Over Satellite Space Debris
The FCC imposed a $150,000 fine on Dish for failing to move a satellite to a safe orbit, marking a significant step in addressing space debris concerns. The fine affected Dish’s reputation, causing a 4% drop in its share price, serving as a warning to other companies. This action may boost the market for commercial space debris removal services and encourage other countries to take similar measures. With the growing number of satellites in orbit, managing space debris is essential to prevent collisions. Dish admitted liability, and the FCC could impose higher fines in the future. This reflects a trend of enforcing compliance with licensing requirements to address space debris issues.

Read the full report on MIT Technology Review
Read the order FCC v. Dish Operating LLC, No. DA 23-888

Yoga App Developer Sues Meta Over Collusion To Kill VR App 
Meta is facing a lawsuit from a software developer, Andre Elijah, who alleges that the company canceled his virtual reality yoga app, AEI Fitness, just before its launch upon learning of his discussions with Apple and ByteDance. Elijah claims that Meta’s actions cost him potentially tens of millions in the growing VR fitness app market. The lawsuit accuses Meta of trying to control the VR headset and app distribution market, which could limit innovation and consumer choice. Elijah is seeking $3.2 million and substantial lost revenue.

Read the full report on Fortune.
Read the case Andre Elijah Immersive Inc. v. Meta Platforms Technologies LLC, U.S. District Court for the Northern District of California, No. 5:23-cv-05159.

EU Requests X’s Compliance With Digital Services Act In Light Of Israel-Hamas Misinformation
The European Union has escalated scrutiny of Elon Musk’s company, X (formerly Twitter), after reports of illegal content and disinformation related to the Israel-Hamas conflict circulated on the platform. The EU issued a formal request for more information, which may lead to a formal investigation under the Digital Services Act (DSA). Non-compliance could result in fines of up to 6% of annual turnover and service blocking. X’s compliance with DSA rules, including content moderation, complaint handling, risk assessment, and enforcement, is under review. The company has until October 18 to provide information, and the EU will assess the next steps based on X’s response. 

Read the full report on TechCrunch
Read the press release by the European Commission requesting X to comply with the Digital Services Act. 

More Headlines

  • Right To Repair Act: Right-to-repair is now the law in California (by The Verge)
  • Data Privacy: Apple AirTags stalking led to ruin and murders, lawsuit says (by ArsTechnica)
  • AI: This Prolific LA Eviction Law Firm Was Caught Faking Cases In Court. Did They Misuse AI? (by LAist)
  • Online News Act: Canada news industry body backs Google’s concerns about online news law (by Reuters)
  • Data Privacy: What we know about the 23andMe data breach (by San Francisco Business Times)
  • Data Privacy: Delete Act signed by Gavin Newsom will enable residents to request all data brokers in the state remove their information (by Guardian)

In-Depth Reads

  • How a billionaire-backed network of AI advisers took over Washington (by Politico)
  • The AI regulatory toolbox: How governments can discover algorithmic harms (by Brookings)

In Other News (or publications you should read)

This post originated from my publication Codifying Chaos.

Can Elon Musk Turn “X” Into Humanity’s Collective Consciousness? 

What is the end goal of “X”, formerly known as Twitter? A recent article about a cryptic tweet by Elon Musk tries to make a case for a platform that centralizes mankind’s shared cultural beliefs and values – and the authors argue that it will not be “X”.

On August 18th, 2023, a thought-provoking tweet by Elon Musk, the visionary entrepreneur and owner of “X” (formerly known as Twitter), set the stage for public contemplation and attention. That tweet forms the basis of the article examined here and the captivating ideas that have sprung from that fateful Friday post.

Make sure to read the full article titled Does X Truly Represent Humanity’s Collective Consciousness? by Obinnaya Agbo, Dara Ita, and Temitope Akinsanmi at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4558476


The authors focus the article on a post by Elon Musk that reads: “𝕏 as humanity’s collective consciousness”. They begin by defining the term humanity’s collective consciousness with a historical review of the works of French sociologist Emile Durkheim and Swiss psychiatrist and psychoanalyst Carl Jung. Those works responded to industrialization, which influenced contemporary viewpoints and connected collective consciousness to labor. The authors define it as “shared beliefs, values, attitudes, ideas, and knowledge that exist within a particular group or society. It is the sum of an individual’s thoughts, emotions, and experiences of people within a group, which combine to create a common understanding of the world, social norms, and cultural identity. It is also the idea that individuals within a society are not only influenced by their own thoughts and experiences but also by broader cultural and societal trends.”

The authors continue by reviewing a possible motive for Elon Musk. They refer to the name change earlier this year from Twitter to “X, the everything app”. Elon Musk defended the decision by laying out the scope planned for “X”. He stated Twitter was a means for bidirectional communication in 140 characters or less – and nothing more. “X”, on the other hand, allows different types of content at varying levels of length, and it plans to let users “conduct your entire financial world” on “X”, implying features similar to WeChat. The authors interpret Elon Musk’s statements as “X” becoming a mirror for the world’s thoughts, beliefs, and values at any given point in time.

The authors then review comments and reactions from users, concluding that humanity’s collective consciousness must be free from censorship and oppression. Moreover, it requires the digitization of human content, which in and of itself is a challenge considering the influence of artificial intelligence over human beliefs and values. This leads the authors to explore spiritual and religious motives, asking “Does Elon Musk intend X to play the role of God?” They then ask the true question, “Can X truly influence cultural norms and traditions?”, but conclude that “X” is a mere means to an end of humanity’s collective consciousness.


At first glance, this article is missing a crucial comparison to other platforms. The elephant in the room is, of course, Facebook with more than 3 billion monthly active users. WhatsApp is believed to be used by more than 2.7 billion monthly active users. And Instagram is home to approximately 1.35 billion users. This makes their owner and operator, Meta Platforms, the host of more than 7 billion users (assuming the unlikely scenario that each platform has unique users). “X”, by contrast, is host to around 500 million monthly active users. Any exploration of whether a social network or platform could become, or aims to be, humanity’s collective consciousness must draw such a comparison.

The authors do conduct a historical comparison between “X’s” role in shaping social movements, revolutions, and cultural shifts and that of the Enlightenment era and the Civil Rights Movement. They correctly identify modern communication as more fluid and shaped by dynamic technologies that allow users to form collective identities based on shared interests, beliefs, or experiences. Arguably, the Enlightenment era and the Civil Rights Movement were driven by a few select groups. In contrast, modern movements experience crossover identities supporting movements across the globe, independent of cultural identity, as demonstrated in the Arab Spring of 2011, the Gezi Park protests of 2013, and Black Lives Matter. One can conclude that humanity’s collective consciousness is indeed influenced by social networks, but the critical miss, again, is the direct connection to “X”. Twitter did assume an influential role during the aforementioned movements. But would they have played out the way they did – solely on Twitter – without Facebook, WhatsApp, and other social networks?

The authors make a point about “X’s” real-time relevance arguing information spreads on “X” like wildfire often breaking news stories before traditional media outlets. However, the changes to the “X” recommendation algorithm, the introduction of paid premium subscriptions, and some controversial reinstatements of accounts that were found to spread misinformation and hate speech have made “X” bleed critical users, specifically journalists, reporters, and media enthusiasts. 

Lastly, the authors conclude that “X” has evolved from a microblogging platform into an everything app. They state it has become a central place for humanity’s collective consciousness. Nothing could be further from the truth. To date, “X” has yet to introduce products and features to manage finances, search the internet, or plan and book travel – or to simply maintain uptime and mitigate bugs. Users can’t buy products on “X”, nor manage their health, public services, and utilities there. WeChat offers these products and features, and it doesn’t claim to be humanity’s collective consciousness.


A far more interesting question around social networks and collective consciousness is the impact of generative artificial intelligence on humanity. While the authors of this article believed a (single) social network could become humanity’s collective consciousness, it is more likely that the compounding effect of information created and curated by algorithms is already becoming, if not overriding, humanity’s collective consciousness. Will it reach a point at which machine intelligence becomes self-aware, independent of its human creators, and actively influences humanity’s collective consciousness to achieve (technological) singularity?

W40Y23 Weekly Review: SEC v. Elon Musk, ROSS Intelligence AI, and More Elon Musk Lawsuits

+++ SEC Sues Elon Musk Over Twitter Purchase 
+++ College Graduate Sues Elon Musk For Defamation
+++ Update: Thomson Reuters’ Copyright Lawsuit Against Ross Intelligence Over Westlaw

SEC Sues Elon Musk Over Twitter Purchase 
The Securities and Exchange Commission (SEC) is seeking a court order to compel Elon Musk to comply with an administrative subpoena. Musk had been scheduled to testify as part of an SEC investigation into his 2022 purchases of Twitter stock and his related statements and filings. However, he failed to appear for his scheduled testimony, citing various objections, including the location of the testimony. 

Read the full report on The Hill
Read the case SEC v. Elon Musk, U.S. District Court for the Northern District of California, No. 3:23-mc-80253-LB.

College Graduate Sues Elon Musk For Defamation
Benjamin Brody is suing Elon Musk for falsely accusing him of being involved with a neo-Nazi group during a brawl between right-wing extremist groups at a Pride event in Portland, Oregon. The lawsuit claims that Musk has a pattern of making reckless false statements that harm innocent third parties and promote disinformation. The incident involves a viral video of the street fight, where Brody was wrongly identified as one of the participants. Musk’s tweets amplified these false accusations, causing panic, fear, and depression for Brody. He is seeking damages of at least $1 million, a jury trial, and a judgment to clear his name.

Read the full report on NPR.
Read the case Benjamin Brody v. Elon Musk, District Court of Travis County.

Update: Thomson Reuters’ Copyright Lawsuit Against Ross Intelligence Over Westlaw
Thomson Reuters accuses the now-defunct AI startup Ross Intelligence of unlawfully copying content from its legal research platform, Westlaw, to train a competing AI-based platform. The case is moving forward to a jury to determine, among other allegations, whether scraping Westlaw’s “headnotes” – curated legal case summaries – without proper licensing or permission constituted a misuse of Westlaw.

Read the full report on Reuters.
Read the case Thomson Reuters and West Publishing v Ross Intelligence, U.S. District Court for the District of Delaware, No. 1:20-cv-00613-SB.

More Headlines

  • Data Privacy: “UK Information Commissioner issues preliminary enforcement notice against Snap” (by ICO)
  • Data Privacy: “Australian federal police officers’ details leaked on dark web after law firm hack” (by Guardian)
  • Data Privacy: “Child online safety laws will actually hurt kids, critics say. Why child online safety is so complicated” (by MIT Technology Review)
  • Data Privacy: “The TikTok ban isn’t Montana’s only new tech privacy law” (by MTFP)
  • Social Media: “What’s at stake in the Supreme Court’s landmark social media case” (by TechCrunch)
  • Deepfakes: “Lawmakers question Meta and X on how they’ll police AI-generated political deepfakes” (by LA Times)
  • AI: “Governments race to regulate AI tools” (by Reuters)
  • AI: “Spotify boss urges UK to enact tougher regulation of tech gatekeepers” (by Financial Times)
  • AI: “Disney Has No Comment on Microsoft’s AI Generating Pictures of Mickey Mouse Doing 9/11.” (by Futurism)

In Other News (or publications you should read)

This post originated from my publication Codifying Chaos.

Legislative Considerations for Generative Artificial Intelligence and Copyright Law

Who, if anyone, may claim copyright ownership of new content generated by a technology without direct human input? Who is or should be liable if content created with generative artificial intelligence infringes existing copyrights?

Innovations in artificial intelligence (AI) are raising new questions about how copyright law principles such as authorship, infringement, and fair use will apply to content created or used by AI. So-called “generative AI” computer programs—such as OpenAI’s DALL-E and ChatGPT programs, Stability AI’s Stable Diffusion program, and Midjourney’s self-titled program—are able to generate new images, texts, and other content (or “outputs”) in response to a user’s textual prompts (or “inputs”). These generative AI programs are trained to generate such outputs partly by exposing them to large quantities of existing works such as writings, photos, paintings, and other artworks. This Legal Sidebar explores questions that courts and the U.S. Copyright Office have begun to confront regarding whether generative AI outputs may be copyrighted and how generative AI might infringe copyrights in other works.

Make sure to read the full paper titled Generative Artificial Intelligence and Copyright Law by Christopher T. Zirpoli for the Congressional Research Service at https://crsreports.congress.gov/product/pdf/LSB/LSB10922/5

The increasing use of generative AI challenges existing legal frameworks around content creation, ownership, and attribution. It reminds me of the time streaming began to challenge the then-common practice of downloading copyrighted and user-generated content. How should legislators and lawmakers view generative AI when passing new regulations?

Copyright, in simple terms, is a type of legal monopoly afforded to the creator or author. It is designed to allow creators to monetize their original works of authorship so they can sustain a living and continue to create, on the assumption that original works of authorship further society and expand our cultural knowledge. The current text of the Copyright Act does not explicitly define who or what can be an author. However, both the U.S. Copyright Office and the judiciary have afforded copyright only to original works created by a human being. In line with this narrow interpretation of the legislative background, courts have denied copyright for selfie photos taken by a monkey, arguing that only humans need copyright as a creative incentive.

This argument does imply that human creativity is linked to the possibility of reaping economic benefits. In an excellent paper titled “The Concept of Authorship in Comparative Copyright Law”, the faculty director of Columbia’s Kernochan Center for Law, Media, and the Arts, Jane C. Ginsburg, refutes this position as a mere byproduct of necessity. Arguably, a legislative scope centered on compensation for creating original works of authorship fails to incentivize creators who, for example, seek intellectual freedom and cultural liberty instead. This leaves us with the conclusion that the creator or author of a copyrightable work can only be a human.

Perhaps generative AI could be considered a collaborative partner used to create original works through an iterative process, with the resulting work of authorship copyrightable by the human prompting the machine. However, such cases would still fall outside current copyright law and not be eligible for protection. The crucial argument is that the expressive elements of a creative work must be determined and generated by a human, not an algorithm. In other words, merely coming up with clever prompts for a generative AI to act on, iterating on the result with more clever prompts, and claiming copyright for the end result has no legal basis, because the expressive elements were within the control of the generative AI model rather than the human. The interpretation of control over the expressive elements of a creative work, in the context of machine learning and autonomous, generative AI, is an ongoing debate and will likely see further clarification from the legislative and judicial systems.

To further play out this “Gedankenexperiment” (thought experiment) on authorship of content created by generative AI: who would (or should) own such rights? Is the individual writing the prompts, who essentially defines and limits the parameters within which the generative AI system performs its task, eligible to claim copyright for the generated result? Is the software engineer overseeing the underlying algorithm eligible to claim copyright? Is the company that owns the code as a work product eligible to claim copyright? Based on the earlier view about expressive elements, it is feasible to see mere “prompting” as an action ineligible to support a copyright claim. Likewise, an engineer writing software code performs a specific task to solve a technical problem; here, the algorithm leverages training data to create similar, new works. The engineer is not involved in an individual’s use of the product to an extent that would allow the engineer to exert creative control over the result. Companies may be able to clarify copyright ownership through their terms of service or contractual agreements. However, a lack of judicial and legislative commentary on this specific issue leaves it unresolved, with little clear guidance, as of October 2023.

The most contentious element of generative AI and copyrighted works is the liability around infringements. OpenAI is facing multiple class-action lawsuits over its allegedly unlicensed use of copyrighted works to train its generative models. Meta Platforms, the owner of Facebook, Instagram, and WhatsApp, is facing multiple class-action lawsuits over the training data used for its large-language model “LLaMA”. Much like the author of this paper, I couldn’t possibly shed light on this complex issue with a simple blog post, but lawmakers can take meaningful action. 

Considerations and takeaways for lawmakers and professionals overseeing company policies that govern generative AI and creative works are: (1) clearly define whether generative AI can create copyrightable works, (2) exercise clarity over authorship and ownership of the generated result, and (3) outline the requirements for licensing, if any, of proprietary training data used for neural networks and generative models.

The author looks at one example in particular: the viral AI song “Heart On My Sleeve”, published by TikTok user ghostwriter977. The song uses generative AI to emulate the style, sound, and likeness of pop stars Drake and The Weeknd to appear real and authentic. The music industry is understandably on guard against revenue-generating content created within seconds. I couldn’t make up my mind about it, so here, you can listen for yourself.