Can Elon Musk Turn “X” Into Humanity’s Collective Consciousness? 

What is the end goal of “X”, formerly known as Twitter? A recent article about a cryptic tweet by Elon Musk tries to make a case for a platform that centralizes mankind’s shared cultural beliefs and values, and the authors argue that it will not be “X”.

tl;dr
On August 18th, 2023, a thought-provoking tweet by the visionary entrepreneur Elon Musk – owner of “X” (formerly known as Twitter) – set the stage for public contemplation and attention. That tweet forms the basis of this article, which examines the captivating ideas that have sprung from that fateful Friday tweet.

Make sure to read the full article titled Does X Truly Represent Humanity’s Collective Consciousness? by Obinnaya Agbo, Dara Ita, and Temitope Akinsanmi at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4558476

Overview

The authors focus the article on a post by Elon Musk that reads: “𝕏 as humanity’s collective consciousness”. They begin by defining the term humanity’s collective consciousness with a historical review of the works of French sociologist Emile Durkheim and Swiss psychiatrist and psychoanalyst Carl Jung. Those works responded to industrialization, which influenced contemporary viewpoints and connected collective consciousness to labor. The authors define it as “shared beliefs, values, attitudes, ideas, and knowledge that exist within a particular group or society. It is the sum of an individual’s thoughts, emotions, and experiences of people within a group, which combine to create a common understanding of the world, social norms, and cultural identity. It is also the idea that individuals within a society are not only influenced by their own thoughts and experiences but also by broader cultural and societal trends.”

The authors then review a possible motive for Elon Musk. They refer to the name change earlier this year from Twitter to “X, the everything app”. Elon Musk defended the decision by laying out the scope planned for “X”. He stated that Twitter was a means for bidirectional communication in 140 characters or less – and nothing more. “X”, on the other hand, allows different types of content at varying lengths, and it plans to allow users “to conduct your entire financial world” on “X”, implying features similar to WeChat’s. The authors interpret Elon Musk’s statements as “X” becoming a mirror of the world’s thoughts, beliefs, and values at any given point in time. They continue by reviewing comments and reactions from users, concluding that humanity’s collective consciousness must be free from censorship and oppression. Moreover, it requires the digitization of human content, which in and of itself is a challenge considering the influence of artificial intelligence over human beliefs and values. This leads the authors to explore spiritual and religious motives, asking, “Does Elon Musk intend X to play the role of God?” They then ask the real question, whether “X” can truly influence cultural norms and traditions, but conclude that it is merely a means to the end of humanity’s collective consciousness.

Evaluation

At first glance, this article is missing a crucial comparison to other platforms. The elephant in the room is, of course, Facebook with more than 3 billion monthly active users. WhatsApp is believed to be used by more than 2.7 billion monthly active users, and Instagram is home to approximately 1.35 billion users. This makes their owner and operator, Meta Platforms, the host of more than 7 billion users (assuming the unlikely scenario that each platform’s users are unique). “X”, by contrast, hosts around 500 million monthly active users. Any exploration of whether a social network or platform could become, or aims to be, humanity’s collective consciousness must draw such a comparison.

The authors do conduct a historical comparison between “X’s” role in shaping social movements, revolutions, and cultural shifts and earlier periods such as the Enlightenment and the Civil Rights Movement. They correctly identify modern communication as more fluid and shaped by dynamic technologies that allow users to form collective identities based on shared interests, beliefs, or experiences. Arguably, the Enlightenment and the Civil Rights Movement were driven by a few select groups. In contrast, modern movements experience crossover identities, supporting movements across the globe independent of cultural identity, as demonstrated in the Arab Spring of 2011, the Gezi Park protests of 2013, or Black Lives Matter. One can conclude that humanity’s collective consciousness is indeed influenced by social networks, but the critical miss, again, is the direct connection to “X”. Twitter did assume an influential role during the aforementioned movements. But would they have played out the way they did – solely on Twitter – without Facebook, WhatsApp, and other social networks?

The authors make a point about “X’s” real-time relevance, arguing that information spreads on “X” like wildfire, often breaking news stories before traditional media outlets do. However, the changes to the “X” recommendation algorithm, the introduction of paid premium subscriptions, and some controversial reinstatements of accounts that were found to spread misinformation and hate speech have made “X” bleed critical users, specifically journalists, reporters, and media enthusiasts.

Lastly, the authors conclude that “X” has evolved from a microblogging platform into an everything app and state that it has become a central place for humanity’s collective consciousness. Nothing could be further from the truth. To date, “X” has yet to introduce products and features to manage finances, search the internet, plan and book travel, or simply maintain uptime and mitigate bugs. Users can’t buy products on “X”, nor manage their health, public services, and utilities. WeChat offers these products and features, and it doesn’t claim to be humanity’s collective consciousness.

Outlook

A far more interesting question around social networks and collective consciousness is the impact of generative artificial intelligence on humanity. While the authors of this article believe a (single) social network could become humanity’s collective consciousness, it is more likely that the compounding effect of information created and curated by algorithms is already shaping, if not overriding, humanity’s collective consciousness. Will it reach a point at which machine intelligence becomes self-aware, independent of its human creators, and actively influences humanity’s collective consciousness to achieve (technological) singularity?

Legislative Considerations for Generative Artificial Intelligence and Copyright Law

Who, if anyone, may claim copyright ownership of new content generated by a technology without direct human input? Who is or should be liable if content created with generative artificial intelligence infringes existing copyrights?

tl;dr
Innovations in artificial intelligence (AI) are raising new questions about how copyright law principles such as authorship, infringement, and fair use will apply to content created or used by AI. So-called “generative AI” computer programs—such as OpenAI’s DALL-E and ChatGPT programs, Stability AI’s Stable Diffusion program, and Midjourney’s self-titled program—are able to generate new images, texts, and other content (or “outputs”) in response to a user’s textual prompts (or “inputs”). These generative AI programs are trained to generate such outputs partly by exposing them to large quantities of existing works such as writings, photos, paintings, and other artworks. This Legal Sidebar explores questions that courts and the U.S. Copyright Office have begun to confront regarding whether generative AI outputs may be copyrighted and how generative AI might infringe copyrights in other works.

Make sure to read the full paper titled Generative Artificial Intelligence and Copyright Law by Christopher T. Zirpoli for the Congressional Research Service at https://crsreports.congress.gov/product/pdf/LSB/LSB10922/5


The increasing use of generative AI challenges existing legal frameworks around content creation, ownership, and attribution. It reminds me of the time when streaming began to challenge the then-common practice of downloading copyrighted and user-generated content. How should legislators and lawmakers view generative AI when passing new regulations?

Copyright, in simple terms, is a type of legal monopoly afforded to the creator or author. It is designed to allow creators to monetize their original works of authorship, sustain a living, and continue to create, because it is assumed that original works of authorship further society and expand the knowledge of our culture. The current text of the Copyright Act does not explicitly define who or what can be an author. However, both the U.S. Copyright Office and the judiciary have afforded copyright only to original works created by a human being. In line with this narrow interpretation of the legislative background, courts have denied copyright for selfie photos created by a monkey, arguing that only humans need copyright as a creative incentive.


This argument does imply that human creativity is linked to the possibility of reaping economic benefits. In an excellent paper titled “The Concept of Authorship in Comparative Copyright Law”, the faculty director of Columbia’s Kernochan Center for Law, Media, and the Arts, Jane C. Ginsburg, refutes this position as a mere byproduct of necessity. Arguably, a legislative scope centered on compensation for creating original works of authorship fails to incentivize creators who, for example, seek intellectual freedom and cultural liberty instead. This leaves us with the conclusion that the creator or author of a copyrightable work can only be a human.

Perhaps generative AI could be considered a collaborative partner used to create original works through an iterative process, with the resulting work of authorship copyrighted by the human prompting the machine. Under current copyright law, however, such cases would still fall outside protection. The crucial argument is that the expressive elements of a creative work must be determined and generated by a human, not an algorithm. In other words, merely coming up with clever prompts for a generative AI, iterating the result with more clever prompts, and claiming copyright for the end result has no legal basis, as the expressive elements were within the control of the generative AI model rather than the human. How control over the expressive elements of a creative work is interpreted in the context of machine learning and autonomous generative AI remains an ongoing debate and will likely see further clarification from the legislative and judicial systems.

To further play out this “Gedankenexperiment” of authorship of content created by generative AI: who would (or should) own such rights? Is the individual who writes the prompts, essentially defining and limiting the parameters for the generative AI system to perform the task, eligible to claim copyright for the generated result? Is the software engineer overseeing the underlying algorithm eligible to claim copyright? Is the company that owns the code eligible to claim copyright? Based on the earlier view about expressive elements, it is feasible to see mere “prompting” as an action ineligible for a copyright claim. Likewise, an engineer writing software code performs a specific task to solve a technical problem; here, an algorithm leverages training data to create similar, new works. The engineer is neither involved in, nor attributable to, the result an individual produces with the product to an extent that would allow the engineer to exert creative control. Companies may be able to clarify copyright ownership through their terms of service or contractual agreements. However, a lack of judicial and legislative commentary on this specific issue leaves it unresolved, with little clear guidance, as of October 2023.

The most contentious element of generative AI and copyrighted works is liability for infringement. OpenAI is facing multiple class-action lawsuits over its allegedly unlicensed use of copyrighted works to train its generative models. Meta Platforms, the owner of Facebook, Instagram, and WhatsApp, is facing multiple class-action lawsuits over the training data used for its large-language model “LLaMA”. Much like the author of this paper, I couldn’t possibly shed light on this complex issue with a simple blog post, but lawmakers can take meaningful action.

Considerations and takeaways for lawmakers and professionals overseeing the company policies that govern generative AI and creative works are: (1) clearly define whether generative AI can create copyrightable works, (2) provide clarity over authorship and ownership of the generated result, and (3) outline the requirements of licensing, if any, for proprietary training data used for neural networks and generative models.

The author looked at one example in particular, which concerns the viral AI song “Heart On My Sleeve” published by TikTok user ghostwriter977. The song uses generative AI to emulate the style, sound, and likeness of pop stars Drake and The Weeknd to appear real and authentic. The music industry is understandably on guard, with revenue-generating content now created within seconds. I couldn’t make up my mind about it, so listen for yourself.

Zuckerberg’s Ugly Truth Isn’t So Ugly

A review of the 2021 book “An Ugly Truth: Inside Facebook’s Battle for Domination” by Sheera Frenkel and Cecilia Kang. The truth is far more complex.

Writing this review didn’t come easy. I spent five years helping to mitigate and solve Facebook’s most thorny problems. When the book was published, I perceived it to be an attack on Facebook orchestrated by the New York Times, a stock-listed company and direct competitor in the attention and advertising market. Today, I know that my perception then was compromised by Meta’s relentless, internal corporate propaganda.

Similar to Chaos Monkeys, An Ugly Truth tells a story that is limited to the information available at the time. The book claims unprecedented access to internal executive leadership reporting directly to Mark Zuckerberg and Sheryl Sandberg. It focuses on the period roughly between 2015 and 2020, arguably Facebook’s most challenging time. Despite a constant flow of news reporting about Facebook’s shortcomings, the book, for the most part, remains focused on the executive leadership decisions that got the company into hot water in the first place. Across 14 well-structured and perfectly written chapters, the authors build a case of desperation: in an increasingly competitive market environment, Facebook needs to innovate and grow its user numbers to beat earnings expectations and satisfy shareholders. Yet the pursuit of significance clouded the better judgment of Facebook’s executive leadership team and eventually drowned out the rational, protective, and concerned voices of genuine leaders in favor of the self-serving voices of staff interested only in advancing at any cost.

To illustrate this point, the authors tell the story of former Chief Security Officer Alex Stamos, who persistently called out data privacy and security shortcomings:

Worst of all, Stamos told them (Zuckerberg and Sandberg), was that despite firing dozens of employees over the last eighteen months for abusing their access, Facebook was doing nothing to solve or prevent what was clearly a systemic problem. In a chart, Stamos highlighted how nearly every month, engineers had exploited the tools designed to give them easy access to data for building new products to violate the privacy of Facebook users and infiltrate their lives. If the public knew about these transgressions, they would be outraged […]

His calls, however, often went unanswered or, worse, invited other executives threatened by Stamos’s findings to take hostile measures.

By December, Stamos, losing patience, drafted a memo suggesting that Facebook reorganize its security team so that instead of sitting on their own, members were embedded across the various parts of the company. […] Facebook had decided to take his advice, but rather than organizing the new security team under Stamos, Facebook’s longtime vice president of engineering, Pedro Canahuati, was assuming control of all security functions. […] The decision felt spiteful to Stamos: he advised Zuckerberg to cut engineers off from access to user data. No team had been more affected by the decision than Canahuati’s, and as a result, the vice president of engineering told colleagues that he harbored a grudge against Stamos. Now he would be taking control of an expanded department at Stamos’s expense.

Many more of those stories would never be told. Engineers and other employees, much smaller fish than Stamos, who raised ethical concerns about security and integrity were routinely silenced, ignored, and “managed out” – Facebook’s preferred method of dealing with staff refusing to drink the Kool-Aid and toe the line. Throughout the book, the authors maintain a neutral voice, yet it becomes very clear how difficult the decisions were for executive leadership. It seems as though leading Facebook is the real-world equivalent of the Kobayashi Maru – an everyday, no-win scenario. Certainly, I can sympathize with the pressure Mark, Sheryl, and others must have felt during those times.

Take the case of Donald John Trump, the 45th President of the United States. His Facebook Page has a reach of 34 million followers (at the time of this writing). On January 6, 2021, his account actively instigated his millions of followers to view Vice President Mike Pence as the reason for his lost bid for reelection. History went on to witness the attack on the United States Capitol. Democracy and our liberties were under attack that day. And how did Mark Zuckerberg and Sheryl Sandberg respond on behalf of Facebook? First, silence. Second, indecision. Should Trump remain on the platform? Are we going to suspend his account temporarily? Indefinitely? Eventually, Facebook’s leadership punted the decision to the puppet regime of the Oversight Board, which returned the decision, citing a lack of existing policies that would govern such a situation. When everybody was avoiding the headlights, Facebook’s executive leadership acted like a deer. Yes, Zuckerberg’s philosophy on speech has evolved over time. Trump challenged this evolution.

Throughout Facebook’s seventeen-year history, the social network’s massive gains have repeatedly come at the expense of consumer privacy and safety and the integrity of democratic systems. […] And the platform is built upon a fundamental, possibly irreconcilable dichotomy: its purported mission is to advance society by connecting people while also profiting off them. It is Facebook’s dilemma and its ugly truth.

The book contains many more interesting stories. There was a wealth of internal leaks, desperate attempts to return Facebook’s leadership to its original course. There were the infamous Brett Kavanaugh hearings, which highlighted the political affiliations and ideologies of Facebook executive Joel Kaplan, who stood by Brett Kavanaugh through the sexual assault allegations brought by Christine Blasey Ford, despite the outrage of Facebook’s female employees. Myanmar saw horrific human rights abuses enabled by and perpetrated through the platform. The Speaker of the U.S. House of Representatives and Bay Area representative since 1987, Nancy Pelosi, was humiliated when Facebook fumbled the removal of a video of one of her speeches that had been manipulated to make her sound slurred. And the list goes on and on and on and on.

The book is worth reading. The detail and minutiae afforded to accurate and convincing reporting make for a rich, slow-burning read. That being said, Facebook has been dying since 2015. Users are leaving the platform and deleting Facebook. While Instagram and WhatsApp carry the company’s advertising revenue for the time being with stronger performances abroad, it is clear that the five years of Facebook’s executive leadership covered in this book point toward an undeniable conclusion: it failed.

NPR’s Terry Gross interviewed the authors Sheera Frenkel and Cecilia Kang on Fresh Air. The interview further demonstrates the dichotomy of writing about the leadership of one of the most influential and controversial corporations in the world. You can listen to the full episode here.

Meaning Is The New Money

This provocative new book on religion and work in the technology sector will make you see life in a different light.

According to 4 U.S.C. §4, the United States is one Nation under God. H.R. 619 (84th Congress), passed and approved by President Dwight D. Eisenhower, mandates that the official motto of the United States, “In God We Trust”, appear on all currency issued by the Federal Government. Without a doubt, religion and spirituality are deeply rooted in this country. Hence it comes as no surprise when Carolyn Chen, Associate Professor of Ethnic Studies at the University of California, Berkeley, posits that “Work Becomes Religion in Silicon Valley” in her new book Work Pray Code.

“Today, companies are not just economic institutions. They’ve become meaning-making institutions that offer a gospel of fulfillment and divine purpose in a capitalist cosmos.”

The colorful, borderline-sacred language of this statement illustrates Chen’s ambition to base the premise of the book on the idea that the workplace is replacing religion. At its core, Chen’s argument is that companies create a meaningful work experience by emulating religious themes while omitting the spiritual or discriminating aspects of faith, and that this experience is becoming a substitute for exercising religion outside of work and within a community.

“Religions and companies are collective enterprises. They are ‘faith communities’, communities that support the act of faith. On one level, faith communities do this by articulating the articles of faith– the doctrines, creeds, and sacred texts and teachings. For most companies, and many other organizations, these articles of faith are their mission statement and statement of core values.”

Taken at face value, Chen makes it appear that companies’ mission statements emulate or are synonymous with religious beliefs. However, a closer look reveals that the mission statements of Google, Meta (née Facebook), and Microsoft do not purport to be articles of faith. Taking it a step further, if Chen defines religions as collective enterprises, I’d argue companies may just as well be independent organizations, each governed by unique financial and economic goals and limited by available budget and human resources. A number of technology companies operating out of Silicon Valley engage in eco-friendly sustainability efforts to power data centers and other parts of the organization, but is the water supply division of Meta truly vested in the intricacies of reviewing Python code to rein in inauthentic and other automated malicious behavior on Instagram? Could each division link the other’s efforts back to the mission statement? Which division would shut down first to protect the integrity of the mission statement? I have doubts.

“In the Silicon Valley workplace, work and life are no longer separate and opposing spheres because life happens at work. In fighting the notion that work and life occupy distinct spaces and times, tech companies are reviving a much older way of organizing society. In agrarian societies, work and life were integrated for both women and men. The farm was both home– where people ate, slept, and played– and workplace– where people labored and participated in the economic system. Industrialization began to impose stark boundaries between work and life, particularly for men. Work became confined to a particular space, time, and logic– the factory, with its rhythm governed by the values of efficiency and productivity. Life– defined as activities that don’t contribute to production– happened outside of the factory in the home, church, neighborhood, bowling alley, baseball diamond, saloon, hair salon, and so on. […] Today’s tech company is returning to the undifferentiated spheres of its preindustrial predecessor, however, by making life a part of work.”

This paragraph resonated with me for its accuracy and insight. Coming from a farmer’s family, I experienced some variation of an undifferentiated sphere where work and life all took place at the same time. Somewhere along the road, it all separated into standalone parts of our day. For a technology company, the unrelenting global competition for highly skilled talent, along with the push to deliver products directly to consumers in real time, is an incentive to maximize productivity and workforce utilization by ensuring that a highly skilled employee is 100% focused on their division’s roadmap and driving its execution.

I cannot make up my mind about this book. On one hand, Chen makes a valid point in stating that technology companies emulate religious characteristics in order to meet their employees’ spiritual needs. Moreover, I subscribe to the general argument that mindfulness in conjunction with corporate materialism appears to create an industrial-technology complex that emanates virtues and exhibits characteristics of religion. On the other hand, I fail to see how a technology company’s use of methods and characteristics developed to further religious beliefs results in the replacement theory that Chen appears to offer in her introduction. I view these efforts as motivated by raw capitalism: they benefit the workforce and increase productivity, utilization, and retention as a side effect. Furthermore, her focus is exclusively on technology companies located in Silicon Valley. In reality, however, technology companies are located all over the United States, with varying numbers of full-time employees. Limiting her research to the technology sector alone also seems a flimsy basis for a solid argument. For example, 3M, General Motors, Kraft Heinz, and even Exxon Mobil have a history of wide-ranging benefits similar to Silicon Valley’s. Setting aside economic motives, Chen missed out on exploring other sectors, including academia, which is known for its fraternal, cult-esque exclusivity, and the almighty military, which is known for strict indoctrination and behavioral codes.

Altogether, I learned a lot about the perception of and correlation between religion and Silicon Valley. Whether it applies to the modern workplace as Carolyn Chen weaves it together remains to be discovered by the reader. Perhaps concluding with more critique than praise for Work Pray Code is a good thing, for it forced me to reflect on some preconceived notions about religion. Chen devoted an entire chapter to the art of reflection, and I found Lin Chi’s call to question everything the perfect note to end on: “If you meet the Buddha, kill him.” But before you do, read this book.

Becoming Boss

Do you have what it takes to be a leader? Probably not. But that’s all right. In her mid-twenties, Julie Zhuo answered the call for leadership when she became a manager at Facebook. In her book, she compiled her mistakes, lessons, and strategies to lead people and create better organizations – so you can learn to become a leader.

What do you do when everyone looks to you for guidance and leadership? Some thrive in the spotlight. Others crumble and fail. Julie Zhuo went from being the first intern “at this website called Facebook” to becoming a Vice President of Product Design over her 13.5 years at the social network. Hers is not the career of an outlier, but of a results-driven, hard-working individual. Managing people is no different. Managers are made, not born.

The Making of a Manager is a field guide for growth. First, I read it cover to cover. Then I realized how powerful each chapter is by itself and started keeping it near my desk to calibrate my thinking against experiences at work. Zhuo describes her growth through a forward-leaning approach to people management. Most notably, she stress-tests her own leadership protocol until it fails, only to give herself a chance to improve it. It’s hard work. Dedication. And (my personal favorite) thoughtful questions directed at peers, partners, and reports, but perhaps most importantly at herself. After all, leadership starts with managing yourself.

Any entrepreneur will benefit from her early experience at a company that would grow to redefine how people connect with one another. Any employee in a large organization will relate to her tactful yet challenging questions during individual and group meetings. Zhuo’s relatable and empathetic writing style reels in any reader contemplating a career in people management. That being said, the market for business books is quite saturated with leadership and self-improvement titles, and to some, her experience might be too far removed from reality given her unique circumstances coming up at Facebook. To this day, I truly enjoy reading her posts and notes, and the general public can do so too on her blog The Looking Glass, or on her website at https://www.juliezhuo.com/

Find A Behavioral Solution To Your Product Design Problem

Our actions are (very much) predictable and can be influenced.

Humans are complicated. Humans are different. Humans are irrational, unpredictable, and emotional. In DECODING the WHY – How Behavioral Science is Driving the Next Generation of Product Design, author Nate Andorsky embraces all these idiosyncrasies by answering two underlying questions: what makes us do what we do, and how can product designers learn from these behavioral patterns to build better products?

Andorsky takes the reader on a story-driven adventure into behavioral science. Decoding the Why lives in a constant tension between the evolution of product design and human behavior. It describes psychological concepts and how they influence product design. It provides practical guidance on how to meet the consumer’s cognitive state before intent is formed and how to use behavioral science to nudge the consumer toward action. For example, in the part about ‘Meeting Our Future Selves’, Andorsky reviews Matthew McConaughey’s iconic acceptance speech after winning the Oscar for his performance in Dallas Buyers Club.

“When I was 15 years old I had a very important person in my life come to me and say, ‘Who’s your hero?’ I said, ‘I don’t know, I gotta think about that, give me a couple of weeks.’

This Person comes back two weeks later and says, ‘Who’s your hero?’ I replied, ‘You know what, I thought about it and it’s me in ten years.’

So I turn twenty-five. Ten years later, that same person comes to me and says, ‘So are you a hero?’ I replied, ‘No, no, no, not even close.’ ‘Why?’ she said. ‘Cause my hero is me at thirty-five,’ I said.

See, every day, every week, every month, every year of my life, my hero is always ten years away. I’m never going to meet my hero, I am never going to obtain that, and that’s totally fine because it gives me somebody to keep on chasing.”

If humans were rational, we’d all pursue the rational thing to maximize our time and energy. However, we are not rational. All too often we give in to the instant gratification of the moment by putting off the thing that helps us tomorrow. This concept is known as hyperbolic discounting. Andorsky walks the reader through the obstacles that keep us from meeting our future selves by reviewing methods such as reward systems, gamification models, commitment devices, and goal setting, all of which are used to inform product design.
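For readers who prefer to see the idea in numbers, here is a minimal sketch of the textbook hyperbolic discounting model – my own illustration, not taken from Andorsky’s book – in which the perceived value of a reward A delayed by D time units falls off as V = A / (1 + kD). The discount rate k and the dollar amounts are hypothetical values chosen for illustration; the point is the preference reversal the curve produces.

# A minimal sketch (not from the book) of hyperbolic discounting:
# the perceived present value of a reward `amount` delayed by `delay`
# time units is amount / (1 + k * delay), where k is the discount rate.

def hyperbolic_value(amount: float, delay: float, k: float = 0.1) -> float:
    """Perceived present value of a delayed reward under hyperbolic discounting."""
    return amount / (1 + k * delay)

# Preference reversal, the hallmark of hyperbolic discounting:
# $50 now feels better than $100 in 30 days...
print(hyperbolic_value(50, 0), hyperbolic_value(100, 30))      # 50.0 vs 25.0
# ...but push both options roughly a year out and the larger reward wins.
print(hyperbolic_value(50, 365), hyperbolic_value(100, 395))   # ~1.35 vs ~2.47

Seen this way, the product mechanisms Andorsky reviews – reward systems, commitment devices, goal setting – are attempts to shrink the delay or inflate the immediate value of the choice that serves our future selves.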

If I ever write a book, I will likely attempt to create a similar structure and flow. Andorsky does an excellent job of breaking down the content into easily digestible parts. Each part tells a captivating story and concludes with an engaging question for the reader. While the subject matter could easily have been told in jargon and psychology terminology, the author consistently uses clear, non-academic language to explain a variety of behavioral and psychological concepts and theories. Altogether, this makes for an accessible page-turner offering a wide range of practical applications.

Taking a bird’s-eye view of Decoding the Why, I could come to two conclusions that could not be further apart: (1) Andorsky answers the eternal question of what makes us do what we do and how product designers can learn from these behavioral patterns to build better products, or (2) Andorsky provides ammunition to weaponize psychology in order to calibrate intrusive technology that can be used to manipulate and exploit human behavior. Whatever your position on using behavioral science to influence user behavior, this book is a gateway to exploring psychological concepts, and it is an important read for changemakers. It can be used for good, or it can be used to inform better public policy. I’d rank Decoding the Why as a must-read for product designers, product managers, and anyone working to improve user experiences in technology.

One Flew Over The Bitcoin Mine

Rarely have I found myself more confused about technology than after reading George Gilder’s “Life After Google – The Fall of Big Data and the Rise of the Blockchain Economy.” A book, supposedly, on the very technology of big data and the blockchain.

In twenty-five chapters across 276 pages, the author attempts to show off, but not discuss, how the internet as we know it came into our daily lives. Gilder uses a wealth of buzzwords without ever defining them for the reader. The compounding effect of broad terminology and out-of-place analogies that disrupt the storytelling makes this book a dense and frustrating read, even for the tech-savvy. He moves from monetary theory to artificial intelligence to Silicon Valley startup culture without skipping a beat. Until the underwhelming end of the book, I failed to understand the author’s rage against Google and new, emerging technology companies. In the absence of a clear theme, I theorized that the author set out to warn against Google’s free products and to predict the end of the free-product business model as the economy moves toward cryptographic ledgers, most notably blockchain technology and decentralized cryptocurrency. However, Gilder then compares bitcoin to gold and points out the flaws of a scarce resource becoming a stable currency in an economy. How this all ties together, or even argues for a future with a decreased need for big-data processing, remains unclear. Why he chose not to discuss cybersecurity as the most potent threat to fiduciaries within a digitized, capitalistic system remains unclear as well. This book is incoherent and overly focused on ideological aspects. It would have served readers better to restrict the discussion to the actual technology.

With all that in mind, I feel this book has some minuscule merit for a philosophical audience without much need for technical detail. Gilder delivers an entry-level overview for future exploration of blockchain technology, large-scale computing, and their implementation within an economic system supported by for-profit corporations. But beyond that, I am left more confused than enlightened about the interplay between data processing in financial markets, artificial intelligence deployed to equalize market barriers, and blockchain as a technology that would enable a seismic shift toward decentralized currencies.

Threat Assessment: Chinese Technology Platforms

The American University Washington College of Law and the Hoover Institution at Stanford University created a working group to understand and assess the risks posed by Chinese technology companies in the United States. They propose a framework to better assess and evaluate these risks by focusing on the interconnectivity of threats posed by China to the US economy, national security and civil liberties.

tl;dr

The Trump administration took various steps to effectively ban TikTok, WeChat, and other Chinese-owned apps from operating in the United States, at least in their current forms. The primary justification for doing so was national security. Yet the presence of these apps and related internet platforms presents a range of risks not traditionally associated with national security, including data privacy, freedom of speech, and economic competitiveness, and potential responses raise multiple considerations. This report offers a framework for both assessing and responding to the challenges of Chinese-owned platforms operating in the United States.

Make sure to read the full report titled Chinese Technology Platforms Operating In The United States by Gary P. Corn, Jennifer Daskal, Jack Goldsmith, John C. Inglis, Paul Rosenzweig, Samm Sacks, Bruce Schneier, Alex Stamos, Vincent Stewart at https://www.hoover.org/research/chinese-technology-platforms-operating-united-states 


China has experienced consistent growth since opening its economy in the late 1970s. With its economy roughly fourteen times larger today, China’s growth trajectory dwarfs that of the US economy, which roughly doubled over the same period, with the S&P 500 as its most rewarding driver at about a fivefold increase. Alongside economic power comes a thirst for global expansion far beyond the Asia-Pacific region. China’s foreign policy seeks to advance the Chinese one-party model of authoritarian capitalism, which could pose a threat to human rights, democracy, and the basic rule of law. US political leaders see these developments as a threat to the US foreign policy of primacy but, perhaps more importantly, as a threat to the Western ideology deeply rooted in individual liberties. Needless to say, over the years every administration, independent of political affiliation, has put the screws on China. The most recent example is the presidential executive order addressing the threat posed by the social media video app TikTok. Given the authoritarian model of governance and the Chinese government’s sphere of control over Chinese companies, their expansion into the US market raises concerns about access to critical data, data protection, and cyber-enabled attacks on critical US infrastructure, among a wide range of other threats to national security. For example:

Internet Governance: China is pursuing regulation to shift the internet from open to closed and from decentralized to centralized control. The US government has failed to adequately engage international stakeholders in order to maintain an open internet and has instead authorized large data collection programs that emulate Chinese surveillance.

Privacy, Cybersecurity and National Security: The internet’s continued democratization encourages more social media and e-commerce platforms to integrate and connect features for users, enabling multi-surface products. Mass data collection, weak product cybersecurity, and the absence of broader data protection regulations can be exploited to collect data on domestic users, their behavior, and their travel patterns abroad. They can also be exploited to influence or control members of government agencies through targeted intelligence or espionage. The key consideration here is aggregated data, which even in the absence of identifiable actors can be used to create viable intelligence. China has ramped up its offensive cyber operations beyond cyber-enabled theft of trade secrets and intellectual property, and it possesses the capabilities and cyber weaponry to destabilize national security in the United States.

Necessity And Proportionality 

When considering mitigating the threat to national security by taking action against Chinese-owned or -controlled communications technology, including tech products manufactured in China, the working group suggests an individual, case-based analysis. They attempt to address the challenge of accurately identifying the specific risk in an ever-changing digital environment with a framework of necessity and proportionality. Technology standards change at a breathtaking pace. Data processing reaches new levels of intimacy due to the use of artificial intelligence and machine learning. Thoroughly assessing, vetting, and weighing the tolerance for specific risks is at the core of this framework, in order to calibrate the chosen response and avoid potential collateral consequences.

The working group’s framework of necessity and proportionality reminded me of a classic Lean Six Sigma structure with a strong focus on understanding the threat to national security. Naturally, as a first step they suggest accurately identifying the threat’s nature, credibility, imminence, and the chances of the threat becoming a reality. I found this first step incredibly important because a failure to identify a threat will likely lead to false attribution and undermine every subsequent step. In the context of technology companies, the obvious challenge lies in data collection, data integrity, and the detection systems needed to tell the difference; a Chinese actor may, for instance, deploy a cyber ruse in concert with the Chinese government to obfuscate its intentions.

Following the principle of proportionality, step two looks into the potential collateral consequences to the United States, its strategic partners, and most importantly its citizens. Policymakers must be aware of the unintended path a policy decision may take once a powerful adversary like China starts its propaganda machine. This step therefore requires policymakers to define thresholds for when the collateral consequences of a measure to mitigate a threat to national security outweigh the need to act. In particular, inalienable rights such as freedom of expression, freedom of the press, and freedom of assembly must be upheld at all times, as they are fundamental American values. To quote the immortal Molly Ivins: “Many a time freedom has been rolled back – and always for the same sorry reason: fear.”

The third and final step concerns mitigation measures. In other words: what are we going to do about it? The working group landed on two critical factors: data and compliance. The former might be restricted, redirected, or recoded to adhere to national security standards. The latter might be audited not only to identify vulnerabilities but also to instill built-in cybersecurity and foster an amicable working relationship.

The Biden administration faces the daunting challenge of reviewing and developing appropriate cyber policies that address the growing threat from Chinese technology companies in a coherent manner consistent with American values. Only a broad policy response that is tailored to specific threats and focused on stronger cybersecurity and stronger data protection will yield equitable results. International alliances, alongside increased collaboration to develop better privacy and cybersecurity measures, will lead to success. However, the US must focus on its own strengths first and leverage its massive private sector to identify specific product capabilities, and therefore threats and attack vectors, before taking short-sighted, irreversible actions.