Investigations vol. 4

An Overload of Information

How big data and cognitive dissonance are shaping the age of information 

By Jevin Nishioka


“My phone is listening to me!” At this point the sentiment is so common that you may have even heard your grandma say it. The underlying idea is easy to grasp: if an application delivers you a specific advertisement, then your personal engagement drove it. Curiously, while most users accept that advertisements are targeted, many do not extend that logic to the opinions, political “news” and other information in their feeds. That willingness to share “fake news” is an important conversation, one that sits at the forefront of regulation and scrutiny of tech giants. The conversation is also more nuanced than people simply posting incorrect information, as machine-learning algorithms, cognitive biases and online echo chambers are all part of the bigger picture.

Machine learning algorithms (big data)

Machine learning algorithms and big data have long been treated as a “black box” of sorts, but it is important to understand, at least on the surface, how they can reinforce human bias. The clearest example is TikTok’s “For You” page: an infinite scroll of short-form videos carefully selected by TikTok’s algorithm to enhance the user’s experience. The algorithm incorporates signals such as trending hashtags and sounds, the popularity of the video, and user interactions with related content in the form of shares, comments, likes and views, along with factors TikTok does not disclose. This is the “big data” part: to provide this feed, TikTok must harvest enormous numbers of data points on each user to reference when curating it. Multiply this by the reported 656 million users and the term “big data” makes sense. The “machine learning” part refers to how TikTok takes this mass of data, labels videos, labels user interests, and finally curates a feed of videos whose labels align with those interests. In short, TikTok knows what your interests are to the degree to which you display them.
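To make the “labels” idea concrete, here is a minimal sketch in Python of how a label-based feed might be curated. It is purely illustrative: TikTok’s actual system is proprietary, and the interaction weights, label names and popularity nudge below are assumptions chosen for readability.

```python
# Minimal, purely illustrative sketch of label-based feed curation.
# TikTok's real system is proprietary; these weights and labels are assumptions.
from collections import Counter

# How strongly each interaction type counts toward a user's interest profile.
INTERACTION_WEIGHTS = {"view": 1, "like": 3, "comment": 4, "share": 5}

def build_interest_profile(interactions):
    """interactions: list of (video_labels, interaction_type) pairs."""
    profile = Counter()
    for labels, kind in interactions:
        for label in labels:
            profile[label] += INTERACTION_WEIGHTS.get(kind, 0)
    return profile

def score_video(video, profile):
    """Match a candidate's labels against the profile, with a small popularity
    nudge standing in for trending sounds and hashtags."""
    label_match = sum(profile.get(label, 0) for label in video["labels"])
    return label_match + 0.001 * video["popularity"]

def curate_feed(candidates, interactions, n=3):
    profile = build_interest_profile(interactions)
    return sorted(candidates, key=lambda v: score_video(v, profile), reverse=True)[:n]

# A user who likes and shares cooking clips sees cooking ranked first.
history = [({"cooking", "baking"}, "like"), ({"cooking"}, "share"), ({"politics"}, "view")]
videos = [
    {"id": 1, "labels": {"cooking"}, "popularity": 50},
    {"id": 2, "labels": {"politics"}, "popularity": 400},
    {"id": 3, "labels": {"dance"}, "popularity": 900},
]
print([v["id"] for v in curate_feed(videos, history)])  # [1, 2, 3]
```

The point of the sketch is only the feedback loop: every interaction feeds the profile, and the profile decides what is shown next.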

A lack of awareness

A study by Pega found that while only 33% of users thought they were using AI in their everyday lives, 77% actually were. Intuitively, it is hard to ask users to be aware of the bias they present to AI when they are not even aware they are using it. The problem also extends beyond a lack of technical savvy: users may be aware of how they spread information but not necessarily where it came from.

Fake news

At this point the term “fake news” has been thrown around endlessly in social discourse. It has been loosely defined, however, and is more usefully broken into two categories: misinformation and disinformation. According to UW Bothell, misinformation is “false information that is spread, regardless of whether there was intent to mislead,” while disinformation is “deliberately misleading or biased information; manipulated narrative or facts; propaganda.” In the context of this conversation the distinction matters. As fake news relates to echo chambers, machine-learning bias and cognitive bias, misinformation is the more interesting category. Plenty of sources, trolls and bots spread disinformation to pollute the internet, but that is largely a security problem for the social media giants. Misinformation, on the other hand, can be perpetuated by everyday users, on personal accounts, without negative intent and for a variety of reasons.

A University of Michigan student was asked how they most often come across misinformation: “My uncle regularly sends Facebook memes that just aren’t based on anything true in our family group chats and I am not sure he knows.”

Too much to handle

It is no secret that we live in the age of information, with an information-sharing economy. According to Pew Research Center, up to 68% of social media users rely on those platforms for news consumption. With tens of thousands of pieces of news circulating daily as articles, videos, snippets and more, the volume of information is overwhelming and difficult to filter. For any one person to have complete knowledge of everything shared with them is unlikely; for every person to achieve it, impossible. While it may seem that more information should lead to better-informed decisions, this is only true up to a point. Past that point, users suffer from information overload, whereby additional pieces of information cloud decision-making rather than enhance it. This overload can be quantified with the “velocity” of news information, which increases over time and is a major contributing factor to the willing consumption and spread of misinformation.
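One way to picture the overload is with a toy model in which each additional news item adds diminishing knowledge but a roughly constant processing cost. The parameters below are arbitrary assumptions, not empirical values, but they show the rise-then-fall shape described above.

```python
# Toy model of information overload (illustrative assumptions, not empirical values):
# each extra item adds diminishing knowledge but a roughly constant processing cost.
import math

def decision_quality(items_seen, gain=1.0, cost_per_item=0.05):
    knowledge = gain * math.log(1 + items_seen)   # diminishing returns on new information
    overload = cost_per_item * items_seen         # cognitive cost grows with every item
    return knowledge - overload

for n in (5, 20, 50, 200, 500):
    print(n, round(decision_quality(n), 2))
# Quality peaks around 20 items with these made-up parameters, then erodes as volume grows.
```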

A high velocity of news information puts users in a position of uncertainty, since having adequate background knowledge for every news item is difficult. In this situation, humans tend to rely on internal information, using the technology as an external validator to reduce their uncertainty. That reliance becomes problematic in the face of misinformation, because humans bridge gaps in their knowledge in two ways: by selectively seeking information consistent with their beliefs, and by interpreting ambiguous information in ways that reinforce confidence in those prior beliefs.

As we are flooded with this content daily, the cycle repeats.

Kingsley Amadi, a recent graduate of Michigan State and a frequent TikTok user, reports usage of up to two hours per day. “The content is super addictive. I can definitely tell how the For You Page gives me specific content. If I send something to a few people then I end up getting similar stuff in the future.” TikTok reports an average use of 52 minutes per day. Given the nature of TikTok’s short-form content, the average user is interacting with hundreds if not thousands of posts per day.

The cost of validation

On paper, the fight against misinformation is already won. With tools such as Google a search away, misinformation should not stand a chance. In practice, this is not the case. Every time we seek to validate information it comes at a mental cost. Simply processing a high velocity of information is costly, and validating it is an additional cognitive expense, whereas leaning on previous partial knowledge is easier.

When a student from the University of Michigan was asked how he validates what he sees, he said, “I do not. When I am watching TikTok I do not want to have to do research on the side. It is tough to even validate this stuff because when you Google it you have no idea how to wade through what you are reading.” This illuminates the problem of misinformation: although everyday users may not intend to consume or share it, failing to validate information opens the floodgates for it to spread.

Importance of validation

A recent example of the harm fake news causes comes from Twitter. When Elon Musk bought the platform, he laid out a plan to sell verification check marks for $8. The verification check previously signaled to users that information was coming from a verified source. Selling the once-meaningful check mark resulted in people impersonating large companies such as Eli Lilly and Raytheon. An impersonator of Eli Lilly tweeted that insulin was now free, and the company lost around $15 billion in market cap. These tweets garnered tens of thousands of shares and had very real consequences; without a surefire way to validate news on Twitter, even large investment firms were tricked and billions were lost.

The role of technology

As previously mentioned, our society relies heavily on social media outlets for news. News, in the most general sense of the word, should be impartial, factually based, presented with context and not misleading. The problem is that as we become more dependent on technology for news, and technology becomes more reliant on machine learning and artificial intelligence to curate content, impartial news becomes unlikely.

A student from the University of Michigan examined his TikTok For You page to see whether there was a pattern in his content. The user is a self-proclaimed right-leaning individual. In a small sample of just 10 posts, we found two tied to politics, both short clips of President Joe Biden: one of the President falling off a bike, and one of him posing for a photo with a young girl on stage, where the original poster implied he was a creep. The same exercise was done with a student who identified as left-leaning, with similar results; this student’s stream featured a short clip of Republican Tudor Dixon speaking against abortion. Both students reported seeing similar content daily and believed their interactions were a likely cause. While these micro-examples did not necessarily contain blatant misinformation, they show how previous interactions can produce more one-sided information in the future.

A slippery slope

It is well understood by now that TikTok’s For You page curates information around users’ interests and even their opinions. This has propagated echo chambers. Echo chambers are not always bad; a page dedicated to Michigan Wolverines football is an echo chamber. They become more problematic when the ideas center on opposition to another group of people. The most recent and relevant example is the “Alt Right” pipeline. In terms of the “labels” discussed earlier, this pipeline is a metaphorical rabbit hole of internet content that machine-learning algorithms have labeled “Alt Right.” The ideas perpetuated by this echo chamber focus on individualism, conspiracy theories, and often a disdain for women. The pipeline is especially attractive to young men, particularly those who feel ostracized by society.

The cycle is pretty simple. Young, broken men seek an answer to why they feel ostracized by society, and that search attracts them to videos like “Feminist gets owned.” Confirmation bias makes these young men feel validated, so they continue down the pipeline. Videos of Ben Shapiro speaking at a university can quickly spiral into dangerous conspiracies and sexist rhetoric because the algorithm labels them as similar.
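The feedback loop can be sketched in a few lines: if engaging with one label makes similar content more likely, a feed narrows quickly. The labels, weights and “boost” factor below are hypothetical; this is an illustration of the dynamic, not any platform’s real recommender.

```python
# Illustrative sketch of the echo-chamber feedback loop described above.
# The labels, weights and boost factor are hypothetical, not any platform's real system.
import random

random.seed(0)
LABELS = ["sports", "comedy", "political_rage"]

def recommend(prefs, k=10):
    """Sample a feed in proportion to the user's current label preferences."""
    total = sum(prefs.values())
    return random.choices(LABELS, weights=[prefs[l] / total for l in LABELS], k=k)

def engage(label, prefs, boost=1.5):
    """Confirmation bias: engaging with a label makes similar content more likely."""
    prefs[label] *= boost

prefs = {"sports": 1.0, "comedy": 1.0, "political_rage": 1.0}
# Suppose the user reliably engages with one label; the feed narrows around it.
for _ in range(5):
    for label in recommend(prefs):
        if label == "political_rage":
            engage(label, prefs)

share = prefs["political_rage"] / sum(prefs.values())
print(f"weight on that label after 5 rounds: {share:.0%}")
```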

As we navigate our information economy, it becomes increasingly important to hold ourselves accountable for how we interact with content. As tech giants continue to struggle to limit misinformation, fake news appears inevitable. As responsible users, it falls on us to be aware of our biases, to place value on validating information, and to be thoughtful about what we share. The ability to get worldwide news instantaneously from our smartphones is powerful; to use it to set back intellectual discourse by spreading fake news is to squander it.

 

Photo by Charles Deluvio via Unsplash