The setup. In this post I want to discuss the topic of filter-bubbles / echo-chambers. I think it’s dangerous to define too specifically what a filter bubble is (for reasons which will become apparent below), but the intuition is some kind of algorithmic sorting that exposes specific groups of people to content that resonates with them (so democrats see democrat-stuff, republicans see republican-stuff, etc).
The two camps: When I look at the literature, there seem to be two camps.
- On one hand, there is amazing research [1,2,3] (explained with exemplary clarity in my recent interview with Piotr Sapiezynsky) which a) clearly reveals the mechanisms behind algorithmically sorted content on Facebook and b) explains how this sorted content leads to filter bubbles along partisan/racial/etc lines.
- On the other hand, there is another stream of research arguing that the problem with social media is a kind of breaking of filter bubbles – exposure to content from outside the bubbles (to the extent that such bubbles exist in the first place). This one is illustrated below
(Actually, it was the tweet above from my friend and collaborator Michael that made me think about this whole thing.)
The paradox. So what’s going on?! It’s a paradox: there’s incontrovertible evidence of filter bubbles, yet the problem is supposedly that social media exposes us to opposing viewpoints (from outside the bubbles). That made me think: is there a way that both of these views can be true simultaneously?
And upon reflection, I think there is.
One of the things that came up in my podcast talk with Piotr is that it’s difficult for democrats to target content to republicans, for a surprising reason. It is difficult because it is so painful for republicans to see Biden portrayed in a positive light (e.g. in political ads) that republican users actually stop browsing Facebook sooner when they see such ads (too much cognitive dissonance, perhaps). This leads to lost revenue from other advertisers. That’s the essence of how the filter bubble works, in fact. (And it’s the same for democrats and Trump, obviously.)
But the research in Michael’s tweet above suggests that there’s a different kind of “democrat content” that does not have that effect on republicans (and vice versa).
In fact, my hypothesis is that there is content from the “other side” which does not compel people to leave the platform – stuff that does not result in cognitive dissonance. One candidate for such content is statements that are particularly outrage-generating (and conforming to our negative image of the other side, hence no cognitive dissonance).
Stated differently, perhaps there’s not just partisan and non-partisan content. It’s highly plausible to me that there are some types of partisan content that drive people away from their social platform of choice (displaying reasonable aspects of political opponents, discussing them in a positive light) and other types of partisan content (the more outrageous stuff) that keep people engaged.
Resolving the paradox. This resolves the apparent paradox. There is filtering, but the filter bubbles do not filter all partisan content. They separate out the reasonable stuff and send only the crazy stuff across political divides.
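The mechanism I’m proposing can be sketched as a toy simulation. To be clear, this is entirely my own construction, not taken from any of the studies above: the engagement function and the 0.5 dissonance threshold are invented for illustration. Each post has a party label and an “outrage” score; the platform shows a post only if predicted engagement is positive. In-group posts always engage, while cross-party posts engage only when outrageous:

```python
import random

random.seed(42)

def engagement(user_party, post_party, outrage):
    """Hypothetical engagement score for one user seeing one post."""
    if post_party == user_party:
        return 0.5 + 0.5 * outrage   # in-group content always engages
    # Cross-party content engages only if it is outrageous enough to
    # confirm the negative image of the other side; reasonable cross-party
    # content causes dissonance (negative score) and gets filtered out.
    return outrage - 0.5

# A pool of posts: (party, outrage score in [0, 1]).
posts = [(random.choice(["D", "R"]), random.random()) for _ in range(10_000)]

# What a republican-leaning user actually gets to see:
feed = [(party, outrage) for party, outrage in posts
        if engagement("R", party, outrage) > 0]

shown_cross = [o for party, o in feed if party == "D"]
pool_cross  = [o for party, o in posts if party == "D"]

print(f"avg outrage of D posts in pool: {sum(pool_cross)/len(pool_cross):.2f}")
print(f"avg outrage of D posts shown:   {sum(shown_cross)/len(shown_cross):.2f}")
print(f"min outrage of D posts shown:   {min(shown_cross):.2f}")
```

In this toy world the feed still contains reasonable in-group posts of every outrage level, but the only cross-party posts that survive are the high-outrage ones – exactly the “crazy stuff crosses the divide, the reasonable stuff doesn’t” pattern described above.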
Hence the title of this piece, post versus person. This sorting of content is only possible because social media is about posts rather than people. Posts are bite-sized opinions, factoids, etc. People are complex, nuanced, good & bad.
In the off-line world, I am presented with the full person, not just highly selected parts of them. And (in my experience) even people I strongly disagree with politically are usually pretty reasonable in most of their opinions and behavior. Maybe the person I’ve known as a fun and friendly coffee-machine acquaintance for years will say something pretty outrageous one day, and I’ll think “wow, that was weird and unexpected” … and move on.
But in the online world, I don’t get the whole person. I don’t get the reasonable and boring stuff. I don’t get the history and context of the person. I just get the single crazy utterance, the disembodied single post, because that’s what’ll keep me staring at the screen longer.
What do you say? My hypothesis about which types of content make it across the partisan divide is totally testable. I hope someone will go and investigate!
(Also, I haven’t really done a literature study, so perhaps this idea isn’t new at all. If it isn’t, let me know!)
We’ve long known that social media rewards only extreme snippets with attention, but I think this filter-bubble aspect is new: there is some partisan content that reinforces existing views in a righteous, non-cognitive-dissonance-inducing way (that’s the stuff that makes it across the divide), and then there’s the more nuanced stuff that causes us to see the other side in a different light – a list of the good deeds done by my enemy. That stuff we never see.
Further, by only exposing us to the most extreme views of our neighbors, social media is making us forget all the mundane stuff we have in common with all people.
By the way, realizing this issue even shows the outline of a solution: as we connect people online, one path towards a better system is one that establishes mechanisms that help us remember to see other people as whole beings – not just hot-takes.
Let me know what you think!
- This view also explains why the old non-algorithmic Twitter somehow felt less insane. You were exposed to all tweets from entire people. There was still selection – people want likes, which pushes toward more extreme content – but at least you got the full person. With algorithmic sorting of maximally engaging content, we lose all the reasonable stuff that doesn’t reinforce our existing beliefs.