Bad Robots: “Filter Bubbles” Control The Information We See Online & Drive Us Apart

Bad Robot Outcome:
Because social media channels and online search engines prioritize content that they believe will be engaging to us, we are often stuck within ideological and intellectual echo chambers that serve to reinforce our own biases. These “filter bubbles” are widening the gaps between us and even creating dangerous political instability.

The Story

If you and I both search for the exact same thing online, we might get two different results. This is because our digital lives exist largely within filter bubbles – the algorithmic prioritization of content that providers believe will be most engaging to us.

The effect of a filter bubble can be a state of intellectual or ideological isolation resulting from being constantly fed content that we presumably already agree with. This can happen both when using search engines and when consuming news via social media.

Social media usage actually combines two types of “personalization” that work together to create filter bubbles within our timelines or news feeds. First, there is the self-selected personalization associated with who we choose to follow, which social media platforms we use, and so on. It’s no secret that people choose to surround themselves with like-minded others.

Pre-selected personalization, the second type of personalization, happens on an algorithmic level. Posts that may appear to be random on Twitter or Facebook, for example, are actually carefully curated based on what such platforms already know about us from our self-selected personalization preferences (who and what we follow, “like”, etc.).

The result is that we actually have much less control over what we see (and, perhaps more importantly, what we don’t see) within our social media feeds. Instead of a diverse array of content that reflects the vastness of potential information available, we are forced into digital echo chambers that reinforce our own beliefs and biases.
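To make the mechanism concrete, here is a deliberately simplified sketch of engagement-based ranking. It is purely illustrative, not any platform’s actual algorithm: the topic names, the scoring rule, and the `rank_feed` function are all hypothetical. The point is just that ranking candidate posts by a user’s past engagement naturally narrows the feed toward what they already like.

```python
from collections import Counter

def rank_feed(posts, engagement_history, top_n=3):
    """Toy engagement-based ranker (hypothetical, for illustration only).

    Scores each candidate post by how often the user has previously
    engaged with its topic, so the feed drifts toward familiar topics.
    """
    topic_affinity = Counter(engagement_history)  # topic -> past engagement count
    scored = sorted(posts, key=lambda p: topic_affinity[p["topic"]], reverse=True)
    return scored[:top_n]

# A user who mostly engages with one viewpoint...
history = ["left_politics", "left_politics", "left_politics", "sports"]
candidates = [
    {"id": 1, "topic": "left_politics"},
    {"id": 2, "topic": "right_politics"},
    {"id": 3, "topic": "sports"},
    {"id": 4, "topic": "left_politics"},
]

feed = rank_feed(candidates, history)
# ...is shown mostly that viewpoint again; dissenting posts never surface.
print([p["topic"] for p in feed])  # → ['left_politics', 'left_politics', 'sports']
```

Note that nothing in this toy optimizes for diversity of viewpoints; maximizing predicted engagement alone is enough to produce the bubble.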

However, the problem does not exist solely within the realm of social media. Filter bubbles are present even when performing searches through engines like Google. When searching for news through Google, it’s possible that its algorithm will feed you content based on your past usage, search history, and so on. Google’s interest is to serve content that you will find engaging (and hopefully click on), so as to maximize the advertising revenue it is able to generate.

The term “filter bubble” was coined by internet activist Eli Pariser, who wrote a book on the topic back in 2011. Bill Gates warned us in 2017 of the detrimental effects on news consumption that had been, and would continue to be, caused by filter bubbles. This is not a new phenomenon.

The Fall-Out

Despite knowing of the perils associated with filter bubbles for almost a decade, the effects continue to be significant and are, perhaps, even worsening over time.

If you need to be reminded of just how dangerous the situation has become, look no further than the 2020 United States presidential election. American voters, who now more than ever get their news from social media, entrenched themselves in their own filter bubbles, surrounded by familiar and comfortable viewpoints while vehemently shouting down those that did not align with their own.

The resulting challenges are far more dire than getting into a Facebook argument with a distant uncle who doesn’t support your candidate. In fact, a recent piece written by a sociologist who formerly ran the CIA’s State Failure Task Force noted that current American political instability is comparable to where it stood not long before the country’s bloody, multi-year civil war.

Our view

Popping these dangerous filter bubbles will require private and public coordination.

It will necessitate the thoughtfulness of governments, like we’ve seen with the Filter Bubble Transparency Act in the United States. This proposed piece of legislation would require companies that collect the personal information of over one million individuals and make over $50M per year both to inform their users that algorithms determine which content is shown to them, and to give those users the ability to opt out of the content curation.

However, bursting filter bubbles cannot be achieved solely at the governmental level. It will also require private action, both from tech companies and individuals themselves. In positive news, there does seem to be a real movement aimed at addressing these challenges.

For example, in the EU, nearly 20 major media outlets have launched Europe Talks – a platform through which Europeans with differing views can engage and discuss ideas. On each of the outlets’ websites, users are presented with a popup window that asks a policy question (e.g., should all European cities be car-free?). Once you have answered the question, you are paired with another user who answered it differently, first for an email introduction and then, eventually, a video chat. Thoughtful and creative solutions like these can hopefully help get us out of our own filter bubbles.

However, you don’t necessarily have to seek out third parties like Europe Talks to help burst your own individual bubble. There are things you can do within your own social media feeds to get things started. One simple place to start is by resisting the urge to delete or “unfriend” those with differing opinions.

The internet (and, yes, even social media) can be a wonderful place to learn new information and engage with a diverse set of worldviews. Now, it’s up to all of us to make sure we don’t miss out on such a wonderful opportunity to do so.

Written by:

Andy Dalton