Walking the Line

Over the past decade, a major storm has been brewing. As the online platforms we use every day have grown, so have the misinformation and echo chambers they host. In the past this issue has seemed distant and intangible, but earlier this month we got a glimpse of what our future could look like if nothing changes.

Each platform has a legal right to moderate and make decisions about what is allowed on its service. But what is the best way to approach moderating this content? Should social media companies like Twitter host harmful content, or should they filter it out and leave it to alternative providers?

Should social media companies pull content?

The core of this issue is whether these ideas are better off on mainstream platforms or pushed to the fringes. Should we allow them on Twitter, where they can be seen and spread by more people but also monitored, or should we lock them down to smaller corners of the web?

I thought about this for a while, and at first I was conflicted. I do understand the concern that denying certain users and viewpoints access to social networks that already tend to be monopolies is counter to the idea of free speech. However, when I considered how these behaviors would be treated in the real world, the issue started to become clearer.

The ability for a person to post an idea or opinion that can be seen by millions of others within hours is still new to this century. The idea of free speech is that, no matter your opinion, you have the right to say something and the government can't punish you for saying it. It doesn't mean that there are no consequences for what you say, or that a service is required to give you a platform to say it to millions of others.

At its core, the internet is a bunch of interconnected computers and networks. There isn't a single organization or group that oversees what is allowed and what isn't. However, since the internet is made of computers, the servers hosting content have to be physically located somewhere, and the content allowed on those servers is generally subject to the laws of the place where they are hosted. Although the internet seems like an invisible web of information, that information is in fact physically living under someone's ownership somewhere.
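To make that physicality concrete, here is a minimal sketch (in Python, using only the standard library; the domain is just an illustrative placeholder) that resolves a name to the address of an actual machine:

```python
import socket

# A domain name is just a label; resolving it yields the IP address of a
# real machine in a real datacenter, subject to the laws of wherever it sits.
hostname = "example.com"  # placeholder domain for illustration
ip_address = socket.gethostbyname(hostname)
print(f"{hostname} is served from {ip_address}")
```

From there, tools like whois or an IP geolocation database can often tell you who owns that machine and roughly where it sits, which is exactly why online content is never truly jurisdiction-free.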

Although there have always been smaller platforms like Gab and 8chan that have spread conspiracy theories and promoted violence, we've never seen one go mainstream. Parler's success in bringing a network of misinformation and conspiracy theories into the mainstream, paired with almost no accountability or moderation, is a major threat not only to its users but to our democracy and country. We've only recently started to see what effects this alternative reality can have on populations, and we should not support or allow these platforms to grow at scale.

While Twitter's and Facebook's moderation policies are far from perfect, the important part is that they are doing some moderation. Companies should consider free speech when deciding what is allowed on their platforms, but where it makes sense for their business model, it should be their responsibility to ensure, as much as possible, that their users are arguing in good faith with valid information.

Meeting in the middle

While our social media networks and platforms should allow reasonable free speech, they should not and cannot be responsible for enabling an environment that puts our democracy and country at risk.

Often I see the 'slippery slope' argument come up: that as soon as we do a tiny bit of moderation, we lose our freedom of speech. But what I keep coming back to is that, as a society, we can't give misinformation and conspiracies a breeding ground to spread. We've never had platforms before where this content can travel so far and so fast without intervention, and we need to seriously consider where to draw the line. While the government can't force you to stop saying the election was stolen, Facebook and Amazon shouldn't give those voices a platform.