Find A Behavioral Solution To Your Product Design Problem

Our actions are (very much) predictable and can be influenced.

Humans are complicated. Humans are different. Humans are irrational, unpredictable, and emotional. In Decoding the Why – How Behavioral Science is Driving the Next Generation of Product Design, author Nate Andorsky embraces all these idiosyncrasies by answering two underlying questions: What makes us do what we do, and how can product designers learn from these behavioral patterns to build better products?

Andorsky takes the reader on a story-driven adventure into behavioral science. Decoding the Why lives in a constant tension between the evolution of product design and human behavior. It describes psychological concepts and how they influence product design. It provides practical guidance on how to meet the consumer’s cognitive state before intent is formed and how to use behavioral science to nudge the consumer towards action. For example, in the part about ‘Meeting Our Future Selves’, Andorsky reviews Matthew McConaughey’s iconic acceptance speech after winning the Oscar for his performance in Dallas Buyers Club.

“When I was 15 years old I had a very important person in my life come to me and say, ‘Who’s your hero?’ I said, ‘I don’t know, I gotta think about that, give me a couple of weeks.’

This person comes back two weeks later and says, ‘Who’s your hero?’ I replied, ‘You know what, I thought about it and it’s me in ten years.’

So I turn twenty-five. Ten years later, that same person comes to me and says, ‘So are you a hero?’ I replied, ‘No, no, no, not even close.’ ‘Why?’ she said. ‘Cause my hero is me at thirty-five,’ I said.

See, every day, every week, every month, every year of my life, my hero is always ten years away. I’m never going to meet my hero, I am never going to obtain that, and that’s totally fine because it gives me somebody to keep on chasing.”

If humans were rational, we’d all do the thing that maximizes our time and energy. However, we are not rational. All too often we give in to the instant gratification of the moment by putting off the thing that helps us tomorrow. This concept is known as hyperbolic discounting. Andorsky walks the reader through the obstacles that keep us from meeting our future selves by reviewing methods such as reward systems, gamification models, commitment devices, and goal setting, all of which are used to inform product design.
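The pull of instant gratification can be made concrete with the standard hyperbolic discounting formula, V = A / (1 + kD): a reward A received after a delay D is perceived at a discounted value V. Here is a minimal sketch; the discount rate k and the dollar amounts are illustrative assumptions, not values from the book.

```python
# Hyperbolic discounting: the perceived value V of a reward A
# received after a delay of D days is V = A / (1 + k * D).
# k is an individual discount rate (an illustrative 0.1/day here).

def hyperbolic_value(amount: float, delay_days: float, k: float = 0.1) -> float:
    """Perceived present value of a delayed reward."""
    return amount / (1 + k * delay_days)

# A $100 reward loses perceived value quickly as the delay grows,
# which is why a smaller reward *today* often beats a larger one later.
today = hyperbolic_value(50, 0)         # 50.0
next_month = hyperbolic_value(100, 30)  # 100 / (1 + 3) = 25.0
print(today > next_month)               # True: the immediate reward wins
```

The steep early drop-off of this curve (versus the constant-rate exponential curve a "rational" agent would use) is what the reward systems and commitment devices above are designed to counteract.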

If I ever write a book, I will likely attempt to create a similar structure and flow. Andorsky did an excellent job by breaking down the content into easily digestible parts. Each part tells a captivating story concluding in an engaging question for the reader. While the subject matter could have easily been told with jargon and psychology terminology, the author consistently uses clear and non-academic language to explain a variety of behavioral and psychological concepts and theories. Altogether this makes for an accessible page-turner offering a wide range of practical applications. 

Taking a bird’s-eye view of Decoding the Why, I feel I could come to two conclusions that could not be further apart: (1) Andorsky answers the eternal question of what makes us do what we do and how product designers can learn from these behavioral patterns to build better products, or (2) Andorsky provides ammunition to weaponize psychology in order to calibrate intrusive technology that can be used to manipulate and exploit human behavior. Whatever your position on using behavioral science to influence user behavior, this book is a gateway to explore psychological concepts, and it is an important read for changemakers. It can be used for good, or it can be used to inform better public policy. I’d rank Decoding the Why as a must-read for product designers, product managers, and anyone working to improve user experiences in technology.

Redefining The Media

LiveLeak was a video-sharing website for uncensored depictions of violence, human rights violations, and other world events – often shocking content. Earlier this year, the website reportedly shut down. Surprisingly, there is little detailed academic research on LiveLeak. While available research centers on internet spectatorship of violent content, this 2014 research paper discusses the communication structures created by LiveLeak, arguably redefining social media as we know it.

tl;dr

This research examines a video-sharing website called LiveLeak to be able to analyze the possibilities of democratic and horizontal social mobilization via Internet technology. In this sense, we take into consideration the Gilles Deleuze and Félix Guattari’s philosophical conceptualization of “rhizome” which provides a new approach for activities of online communities. In the light of this concept and its anti-hierarchical approach, we tried to discuss the potentials of network communication models such as LiveLeak in terms of emancipating the use of media and democratic communication. By analyzing the contextual traffic on the LiveLeak for a randomly chosen one week (first week of December 2013) we argue that this video-sharing website shows a rhizomatic characteristic.

Make sure to read the full paper titled An Alternative Media Experience: LiveLeak by Fatih Çömlekçi and Serhat Güney at https://www.researchgate.net/publication/300453215_An_Alternative_Media_Experience_LiveLeak

(Source: Olhar Digital)

LiveLeak has often been referred to as the dark side of YouTube. LiveLeak had similar features to engage its existing userbase and attract new users, among them Recent Items, Channels, and Forums, as well as immersive features such as Yoursay, Must See, or Entertainment. Its central difference from mainstream video-sharing websites was the absence of content moderation. A few exceptions, however, were made for illegal content, racist remarks, doxing, or obvious propaganda of terrorist organizations. LiveLeak first attained notoriety when it shared a pixelated cellphone video of the execution of former dictator Saddam Hussein.

Gilles Deleuze and Félix Guattari described arborescence, a tree model of thinking and information sharing whereby a seed idea grows into a concept that can be traced back to that seed, as the fundamental mode of Western logic and philosophy. In our postmodern world, they argue, arborescence no longer works; instead, they offer the concept of rhizomatic structures. In essence, a rhizomatic structure is a decentralized social network with no single point of entry, no particular core, and no particular form of unity. Fatih Çömlekçi and Serhat Güney describe rhizomes as a

“swarm of ants moving along in an endless plateau by lines. These lines can be destroyed by an external intervention at one point but they will continue marching in an alternative and newly formed way/route”

Most social media networks are built around arborescence: a user creates an account, connects with friends, and all interactions can be traced back to a single point of entry. LiveLeak resembled rhizomes. Content that circulated on its platform did not necessarily have a single point of entry. It was detached from the uploader and often shared with little context. Therefore it was able to trigger social mobilization around particular content from all kinds of users; some with their real-life personas, most anonymously, but none connected in an arborescent way. Another interesting feature of LiveLeak was its reversal of the flow of information. Western media outlets define the news. LiveLeak disrupted this power structure by, for example, leaking unredacted, uncensored footage of atrocities committed by the Assad regime during the Syrian Civil War. Moreover, users in third-world countries were able to share footage from local news channels that was not visible in the mainstream media. Taken together, LiveLeak enabled social movements such as the Arab Spring or the Ukraine Revolution. Arguably, the video-sharing platform influenced public opinion about police brutality in the United States, fueling the Black Lives Matter movement. Undoubtedly, its features contributed to a less whitewashed depiction of reality. LiveLeak played a seminal role in establishing our modern approach to content moderation on social media networks.

Learn To Discern: How To Take Ownership Of Your News Diet

I am tired of keeping up with the news these days. The sheer volume of information is intimidating. It is a challenge to filter relevant news from political noise, only to then begin analyzing the information for its integrity and accuracy. I certainly struggle to identify subtle misinformation when faced with it. That’s why I became interested in the psychological triggers woven into the news, to better understand my own decision-making and conclusions. Pennycook and Rand wrote an excellent research paper on the human psychology of fake news.

tl;dr

We synthesize a burgeoning literature investigating why people believe and share false or highly misleading news online. Contrary to a common narrative whereby politics drives susceptibility to fake news, people are ‘better’ at discerning truth from falsehood (despite greater overall belief) when evaluating politically concordant news. Instead, poor truth discernment is associated with lack of careful reasoning and relevant knowledge, and the use of heuristics such as familiarity. Furthermore, there is a substantial disconnect between what people believe and what they share on social media. This dissociation is largely driven by inattention, more so than by purposeful sharing of misinformation. Thus, interventions can successfully nudge social media users to focus more on accuracy. Crowdsourced veracity ratings can also be leveraged to improve social media ranking algorithms.


Make sure to read the full paper titled The Psychology of Fake News by Gordon Pennycook and David G. Rand at https://www.sciencedirect.com/science/article/pii/S1364661321000516

This recent research paper by psychologists at the University of Regina and the Sloan School of Management at the Massachusetts Institute of Technology takes a closer look at the sources of political polarization, hyperpartisan news, and the underlying psychology that influences our decision-making on whether news is accurate or misinformation. It answers the question of why people fall for misinformation on social media. Lessons drawn from this research will help build effective tools to intercept and mitigate misinformation online. It further advances our understanding of the underlying human psychology of interacting with information on social media. And while the topic could fill entire libraries, the authors limited their scope to individual examples of misinformation rather than organized, coordinated campaigns of inauthentic behavior, thereby excluding the spread of disinformation.

So, Why Do People Fall For Fake News?

There are two fundamental concepts that explain the psychological dynamics at play when we face misinformation. Truth discernment aims to establish a belief in the relative accuracy of news that is greater than for known-to-be-false information on the same event. Basically, this concept is rooted in active recognition and critical analysis of the information to capture people’s overall beliefs. The second concept is truth acceptance. Here the accuracy of news is not a factor, only the overall belief in it. Instead of critically analyzing the information, people choose to average or combine all available information, true or false, to form an opinion about the veracity of the news. This commonly results in a biased perception of news.

Other concepts related to this question look at motives. Political motivation can influence people’s willingness to reason based on their partisan, political identity. In other words, when faced with news that is consistent with their political beliefs, the information is regarded as true; when faced with news that is inconsistent with their political beliefs, the information is regarded as false. Loyalty to a political ideology can become so strong that it overrides an apparent falsehood for the sake of party loyalty. Interestingly, the researchers found that political partisanship carries much less weight than the actual veracity of news when people assess information. Misinformation that is in harmony with people’s political beliefs is rated as less trustworthy than accurate information that goes against their political beliefs. They also discovered that people tend to be better at analyzing information that is in harmony with their political beliefs, which helps them discern truth from falsehood. But if people hardly fall for misinformation consistent with their political beliefs, which characteristics make people fall for misinformation?

“People who are more reflective are less likely to believe false news content – and are better at discerning between truth and falsehood – regardless of whether the news is consistent or inconsistent with their partisanship”

Well, this brings us back to truth discernment. Belief in misinformation is commonly associated with overconfidence, lack of reflection, zealotry, delusionality, or overclaiming, where an individual acts on completely fabricated information as a self-proclaimed expert. All of these factors indicate a lack of analytical thinking. On the opposite side of the spectrum, people determine the veracity of information through cognitive reflection and by tapping into relevant existing knowledge. This can be general political knowledge, a basic understanding of established scientific theories, or simple online media literacy.

“Thus, when it comes to the role of reasoning, it seems that people fail to discern truth from falsehood because they do not stop to reflect sufficiently on their prior knowledge (or have insufficient or inaccurate prior knowledge) – and not because their reasoning abilities are hijacked by political motivations.” 

The researchers found that the truth has little impact on sharing intentions. They describe three types of information-sharing on social media:

  • Confusion-based sharing: a genuine belief in the veracity of the information shared (even though the person is mistaken)
  • Preference-based sharing: placing political ideology, or related motives such as virtue signaling, above the truth of the information shared, accepting misinformation as collateral damage
  • Inattention-based sharing: people intend to share only accurate information but are distracted by the social media environment

Steps To Own What You Know

If prior knowledge is a critical factor in identifying misinformation, then familiarity with accurate information goes a long way. An awareness of familiar information is critical to determine whether the information presented is the information you already know or a slightly manipulated version of it. Be familiar with social media products. What does virality look like on platform XYZ? Is the uploader a verified actor? What is the source of the news? In general, sources are a critical signal for determining veracity. The more credible and established a source, the likelier the information is well-researched and accurate. Finally, red flags for misinformation are emotional headlines, provocative captions, and shocking images.
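The signals above could be caricatured as a scoring heuristic. To be clear, this is an illustrative toy, not a model from the paper; the fields and weights are my own assumptions.

```python
# Illustrative only: a toy checklist that scores a news item on the
# veracity signals discussed above. Fields and weights are assumptions
# for demonstration, not a validated model.

def credibility_score(item: dict) -> int:
    score = 0
    # Positive signals: verified uploader, established source,
    # agreement with prior knowledge.
    if item.get("verified_uploader"):
        score += 1
    if item.get("established_source"):
        score += 1
    if item.get("matches_prior_knowledge"):
        score += 1
    # Red flags subtract from the score.
    if item.get("emotional_headline"):
        score -= 1
    if item.get("shocking_imagery"):
        score -= 1
    return score

post = {"verified_uploader": True, "established_source": False,
        "matches_prior_knowledge": True, "emotional_headline": True}
print(credibility_score(post))  # 1
```

The point is not the specific numbers but the habit: consciously tallying sources and red flags before believing or sharing.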

Challenges To Identify Misinformation

Truth is not a binary metric. To determine the veracity of news, a piece of information may be partly falsified or laced with inaccuracies, and it must be compared against established, known information. Therefore the accuracy and precision, or overall quality, of a machine-learning classifier for misinformation hinges on the quality of its training data and the depth of its exposure to the platform where it will be deployed. Another challenge is the ever-changing landscape of misinformation. Misinformation is rapidly evolving, morphing into conspiracy theories that may be (mistakenly) supported by established influencers and institutions. This makes it harder to discern the elements of a news story, which undermines the chances of determining accuracy. Inoculation (deliberate exposure to misinformation to improve recognition abilities) is in part ineffective because people fail to stop, reflect, and consider the accuracy of the information at all. Therefore, successful interventions to minimize misinformation may start with efforts to slow down interactions on social media. This can be achieved by changing the user interface to introduce friction and prompts that induce active reflection. Lastly, human fact-checking is not scalable, for many reasons: time, accuracy, integrity, etc. Leveraging a community-based (crowd-sourced) fact-checking model might be an alternative until a more automated solution is ready. Twitter has recently introduced experiments with these types of crowd-sourced products; its platform is called Birdwatch.
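The crowd-sourced model can be sketched in its simplest form: aggregate community ratings and act only once enough raters have weighed in. The thresholds and logic below are illustrative assumptions; Birdwatch's actual algorithm is considerably more sophisticated.

```python
# A minimal sketch of crowd-sourced veracity rating: average the
# community's ratings and only surface a verdict once enough raters
# have participated. Thresholds are illustrative assumptions, not
# Birdwatch's actual rules.

from statistics import mean

def community_verdict(ratings: list[float],
                      min_raters: int = 5,
                      helpful_threshold: float = 0.7) -> str:
    """ratings: values in [0, 1], where 1 = 'note is helpful/accurate'."""
    if len(ratings) < min_raters:
        return "needs more ratings"
    return "show note" if mean(ratings) >= helpful_threshold else "hide note"

print(community_verdict([1, 1, 0.8, 0.9, 1]))  # show note
print(community_verdict([1, 0]))               # needs more ratings
```

Even this toy version illustrates the scalability appeal: the marginal cost of rating one more post is spread across many volunteers rather than a small pool of professional fact-checkers.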

This research paper didn’t unearth breakthrough findings or new material. Rather, it helped me learn more about the dynamics of human psychology when exposed to a set of information. Looking at the individual concepts people use to determine the accuracy of information, the underlying motives that drive our attention, and the dynamics of when we decide to share news made this paper a worthwhile read. Its concluding remarks on improving the technical environment by leveraging technology to facilitate a more reflective, conscious experience of news on social media leave me optimistic for better products to come.

Why You Can’t Quit Social Media

What is the fuel of our social media habits? To answer that question, researchers from the University of Southern California in Los Angeles analyzed user behavior across established social media platforms. They offer insights into user habit formation, but also explain the dynamics and technology that prevent users from gaining control over their daily-use habits on social media.

tl;dr

If platforms such as Facebook, Instagram, and Twitter are the engines of social media use, what is the gasoline? The answer can be found in the psychological dynamics behind consumer habit formation and performance. In fact, the financial success of different social media sites is closely tied to the daily-use habits they create among users. We explain how the rewards of social media sites motivate user habit formation, how social media design provides cues that automatically activate habits and nudge continued use, and how strong habits hinder quitting social media. Demonstrating that use habits are tied to cues, we report a novel test of a 2008 change in Facebook design, showing that it impeded posting only of frequent, habitual users, suggesting that the change disrupted habit automaticity. Finally, we offer predictions about the future of social media sites, highlighting the features most likely to promote user habits.

Make sure to read the full paper titled Habits and the electronic herd: The psychology behind social media’s successes and failures by Ian A. Anderson and Wendy Wood at https://onlinelibrary.wiley.com/doi/abs/10.1002/arcp.1063

(Source: Getty Images/iStockphoto)


Social media platforms serve our communities in a variety of functions. Anybody can participate, share stories, or become a community leader by creating user-generated content that is available to a specific group of people or the entire public. Connecting with people is human, but the frequency, means, and reach, as well as the how and who of our connections, are not. The paper discusses in particular the dichotomy between conflicting social interests and user habits, explaining at a high level social media platforms’ fundamental need to draw on user habits and how these habits are cultivated by sophisticated technology. In fact, social media platforms are designed to encourage habit formation through repeat use. This is demonstrated by their ever-expanding options to find new people to connect and share content with, new entertainment products, and means to build larger online communities. The goal is to generate consistent revenue through effective, targeted marketing to users.

One aspect of the paper explores whether frequent use of social media is habitual use and whether overuse is tantamount to an addiction. What are the contributing factors that make people form a habit of frequently checking their social media profiles? How can you manage these habits more effectively? What does it take to rewire them? The researchers found that users who post more frequently also reported increased automaticity of their actions. In other words, these users logged onto Facebook or Twitter and posted about something without deliberately thinking about the act of posting itself. Some of the factors that contribute to forming a habit are the repeated steps it takes to participate on social media: for example, the login process, posting original content, exploring new content from others, and liking, sharing, or discussing content. In psychology this phenomenon is called an ideomotor response, wherein a user unconsciously completes a sequence of steps to perform a process. Of course, the formation of a habit is due not only to repetition but also to the rewards of continuous use. Likes, shares, and general interaction with people on social media are a double-edged sword, for they bring us closer together while also appealing to our subconscious need for affirmation. The former helps us build positive attitudes linked with the particular platform, whereas the latter often remains unrecognized until the habit is already established in one’s daily routine. Initial rewards subside fast, however, as these motivations are replaced by habitual use linked to a specific gain arising from a certain community engagement. These habits, once formed and established, are hard to overcome, as demonstrated by an experiment with well-known sugared beverages:

“In an experiment demonstrating habit persistence despite conflicting attitudes, consumers continued to choose their habitual sugared beverages for a taste test even after reading a persuasive message that convinced them of the health risks of sugar”. 

It must be noted that social media use is not the same as drinking soda pop, smoking cigarettes, or snorting cocaine. Social media use is also not a mindless, repetitive action. Rather, it is a composition of different, highly individualized behaviors, attitudes, and motivations that compound depending on the particular use case. For example, a community organizer who uses Facebook Groups to bring together and coordinate high-school students across a county to play pickup ultimate frisbee will establish different habitual behaviors from someone using social media purely to connect online with a closed circle of family and friends. The researchers found that active engagement on social media is linked to positive subjective experiences of well-being. Users who are more passive and only scroll and read reported lower levels of life satisfaction. Scrolling introduces an element of uncertainty for the user; thus it is among the top rewards that don’t require active engagement. Unexpected posts tend to surprise users with sometimes highly emotional content, such as misinformation or community nostalgia. Needless to say, controversial content tends to spread fast and far, increasing the reward for engagement. Moreover, it entrenches habitual use as users come back to discover more emotional content.

To put this into perspective: social media habits form because the platform highlights signals that make us feel good and keep us engaged. Preexisting emotional and social needs are captured by an easy-to-use platform. Notifications, likes, comments, and shares create participatory experiences that emulate real-world communities. Reciprocity between family, friends, and others, as well as elements of uncertainty, are adjusted through tailored content delivery by sophisticated algorithms. These lines of code ensure that once a user establishes a footprint on the platform, enough incentives are created to encourage and facilitate repeat use, thereby further ingraining the platform in our daily lives and daily-use habits.

Maybe We Should Take A Break

With my thought-provoking headline I challenge the notion that it is impossible to reduce or quit social media altogether. Note that I wouldn’t want anybody to reduce or quit social media if it adds value to their life. Facebook is invaluable for connecting with family and friends. YouTube and TikTok offer some of my favorite pastimes. And Twitter has become the newsstand of the 21st century. Nevertheless, I believe this research paper is an important contribution to raising awareness of our daily habits, our time management, and how we consume information. I would be remiss not to contemplate options to improve my social media diet. In psychology research, the term for the intent to quit a habit is ‘discontinuance intention’. Forming an intent to cease social media use is a decision process at times overshadowed by feelings of regret, a lack of alternative means to communicate across our social graph, and general societal inertia (take the Google search queries pictured below as an indicator of the impact of societal inertia). If you find yourself wanting to change your social media diet, then be on the lookout for these factors:

  • Familiarity Breeds Inaction: the longer a user has been with a social media platform, the more likely feelings of familiarity and a sense of control prevent actions to reduce time spent on the platform
  • Habits Trump Intentions: everyday signals manifested in our phones, computers, or environment trigger ideomotor responses to use social media despite social norms, resolutions, etc. Remember, the old saying “the road to hell is paved with good intentions” holds true for managing our social media habits
(Source: Interest over time on Google Trends for delete tiktok, delete facebook, delete twitter, delete instagram, delete snapchat – United States, Past 12 months)


Straightforward self-control has been found to be an effective strategy to reduce the use of social media. Discipline to use social media with a specific intent and for a specific purpose equals freedom from habitual, time-consuming use. However, the researchers found that self-control is hard to maintain and that a more effective strategy is changing the cues upon which we use social media. For example, leveraging silent or airplane mode on our phones, turning off push notifications, or unsubscribing from notification emails helps dig a moat between a healthy daily routine and mindless use of social media. Interestingly, the researchers found that short-term absences from social media, i.e. only a few days, are less effective than breaks of an entire week or longer. What works will depend on an individual’s preferences, needs, and benefits, which must be carefully balanced against the inherent costs of social media use. Of course, all of this is highly subjective. I recommend reading this well-written research paper as a start. It helps to formulate a balanced strategy for social media use and online habit management.

Why Are We Sharing Political Misinformation?

Democracy is built upon making informed decisions by the rule of the majority. As a society, we can’t make informed decisions if the majority is confused by fake news: false information distributed and labeled as real news. It has the potential to erode trust in democratic institutions, stir up social conflict, and facilitate voter suppression. This paper by researchers from New York University and the University of Cambridge examines the psychological drivers of sharing political misinformation and provides solutions to reduce the proliferation of misinformation online.

tl;dr

The spread of misinformation, including “fake news,” disinformation, and conspiracy theories, represents a serious threat to society, as it has the potential to alter beliefs, behavior, and policy. Research is beginning to disentangle how and why misinformation is spread and identify processes that contribute to this social problem. This paper reviews the social and political psychology that underlies the dissemination of misinformation and highlights strategies that might be effective in mitigating this problem. However, the spread of misinformation is also a rapidly growing and evolving problem; thus, scholars also need to identify and test novel solutions, and simultaneously work with policy makers to evaluate and deploy these solutions. Hence, this paper provides a roadmap for future research to identify where scholars should invest their energy in order to have the greatest overall impact.

Make sure to read the full paper titled Political psychology in the digital (mis)information age by Jay J. Van Bavel, Elizabeth Harris, Philip Pärnamets, Steve Rathje, Kimberly C. Doell, Joshua A. Tucker at https://psyarxiv.com/u5yts/ 

It’s no surprise that misinformation spreads significantly faster than the truth. The illusory truth effect describes this phenomenon: misinformation that people have heard before is more likely to be believed. We have all heard a juicy rumor in the office before learning whether it is even remotely true or made up altogether. Political misinformation takes the dissemination rate to the next level. It has far greater rates of sharing due to its polarizing nature, driven by partisan beliefs and personal values. Even simple measures seemingly beneficial to all of society face an onslaught of misinformation. For example, California Proposition 15, designed to close corporate tax loopholes, was opposed by conservative groups resorting to spreading misinformation about the reach of the law. They conflated corporations with individuals, making it a family affair to elicit an emotional response from the electorate. It’s a prime example of a dangerous cycle in which political positions drive misinformation, which in turn facilitates political division and obstructs the truth needed to make informed decisions. Misinformation was found to be shared more willingly, more quickly, and despite contradicting facts if it was in line with the sharer’s political identity and sought to derogate the opposition. In the example above, misinformation about Proposition 15 was largely shared if it (a) contained information in line with partisan beliefs and (b) sought to undercut the opponents of the measure. As described in the paper, the more polarized a topic is (e.g. climate change, immigration, pandemic response, taxation of the rich, police brutality, etc.), the more likely misinformation will be shared by its political in-groups to be used against their political out-groups without further review of its factual truth.
This predisposed ‘need for chaos’ is hard to mitigate because the feeling of being marginalized is a complex, societal problem that no one administration can resolve. Further, political misinformation tends to be novel and to trigger more extreme emotions of fear and disgust. It conflates the idea of being better off with being better than another political out-group.

Potential solutions to limit the spread of political misinformation can already be observed across social media:

  1. Third-party fact-checking is a second review by a dedicated, independent fact-checker committed to neutrality in reporting information. Fact-checking does reduce belief in misinformation but is less effective for political misinformation. Ideological commitments and exposure to partisan information foster a different reality that, in rare extreme cases, can create skepticism of the fact-checks themselves, leading to increased sharing of political misinformation, the so-called backfire effect.
  2. Investing in media literacy to drive efforts at ‘pre-bunking’ false information before it gains traction, including offering tips or engaging readers in critical reflection on certain information, is likely to produce optimal long-term results. Though it might be problematic to implement effectively for political information, as media literacy depends on the provider and bipartisan efforts are likely to be opposed by their respective extreme counterparts.
  3. Disincentivizing viral content by changing the monetization structure to a blend of views, ratings, and civic benefit would be a potent deterrent to creating and sharing political misinformation. However, this measure would likely conflict with the growth objectives of social media platforms in a shareholder-centric economy.
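The blended monetization idea in the third point can be illustrated with a toy payout score that weights views, ratings, and a civic-benefit signal instead of rewarding views alone. All weights, caps, and inputs here are hypothetical, purely to make the mechanism concrete.

```python
# Illustrative sketch of blended monetization: payout weights views,
# community ratings, and a civic-benefit signal rather than views
# alone. All weights and inputs are hypothetical.

def payout_score(views: int, avg_rating: float, civic_benefit: float,
                 w_views: float = 0.2, w_rating: float = 0.4,
                 w_civic: float = 0.4) -> float:
    """avg_rating and civic_benefit are normalized to [0, 1]."""
    norm_views = min(views / 1_000_000, 1.0)  # cap the reach component
    return w_views * norm_views + w_rating * avg_rating + w_civic * civic_benefit

# A viral but low-quality, low-benefit post scores below a modestly
# viewed, well-rated, civically useful one.
viral_junk = payout_score(5_000_000, 0.2, 0.1)   # ~0.32
useful_post = payout_score(50_000, 0.9, 0.8)     # ~0.69
print(useful_post > viral_junk)                  # True
```

Capping the reach component is the key design choice: once views stop dominating the score, outrage-driven virality stops being the most profitable strategy.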

This paper is an important contribution to the current landscape of behavioral psychology. Future research will need to focus on developing a more comprehensive theory of why we believe and share political misinformation, but also on how political psychology correlates with incentives to create it. It will be interesting to learn how the underlying psychology can be leveraged to alter the lifecycle of political information on different platforms, in different media, and through new channels.