The Death of Freedom by Software
By Justin Albano
How software is responsible for eroding freedom of speech and what we can do as software engineers to stop this trend before it’s too late.
There have always been concerns and debates about free speech and when and where it is appropriate to censor speech, but the pace at which we have moved towards suppression of speech in the last two months is startling. Even as mere observers, it is frightening to see the direction public opinion about speech has turned and where this path will lead us.1 As software engineers and developers, we are not afforded the luxury of being simple observers; unlike speech suppression of the past, we are the ones on the frontlines. It is no longer some obscure entity that censors speech from on high; it is us creating the systems that are cracking down on offensive speech. We are no longer passive onlookers, but rather, active participants.
In the last few months, 1984 — a dystopian novel written in 1949 — has become one of the best-selling books on both Amazon and Barnes & Noble,2 and it begs the question: Why has a 72-year-old piece of fiction become one of the most popular books of 2020 and early 2021? Many of us have an intuitive understanding that there is something eerie about what has happened over the last few months. While there has been major political upheaval in the United States over the last several months — from a hotly contested Federal election to an insurrectionist breach of the Capitol Building — these events should be leading us to histories of internal conflict and Civil War literature, not dystopian novels about the authoritarian Big Brother.
While this domestic unease is concerning, many in the United States (and around the world) have also become acutely concerned about another disturbing movement: The suppression of free speech by “Big Tech.” In the last six weeks alone, we have seen the sitting President of the United States permanently removed from a majority of the major social media platforms3 and a social media competitor (Parler) have its Platform as a Service (PaaS) provider refuse to support the application and its mobile app removed from both Google Play and the Apple App Store.
In this article, we will take a hard, challenging look at the direction software companies and software developers are taking with regard to free speech. First, I will make a case for free speech and argue that suppression of speech is not a moral and righteous cause. Then I will delve into where we are today and what software companies have already done to move us in a disturbing direction, as well as where this can lead us in the future. Lastly, I will look at what we as software developers can do to combat this sanctimonious drift into control. I am not the first to present the case for free speech, and there are numerous counterarguments (some more valid than others) that I have encountered anecdotally, as well as in my research. Addressing each one comprehensively in this article would be infeasible; instead, I have addressed the most common ones in Appendix A, found at the end of this article. Additionally, as the cases of speech suppression continue to grow, a list of some of the most egregious examples can be found in Appendix B.
A Case for Free Speech
Free speech — the right to express any opinions without censorship or restraint — is a difficult proposition because it challenges our core beliefs and requires a demanding level of maturity. It means we will inevitably hear speech that offends our sensibilities or that we consider to be undeniably wrong. Worse, others may spread this misinformation, giving further credence to these falsehoods (or even overt lies). It is a natural urge to want this speech stopped in its tracks and even have the speaker silenced so that he or she cannot continue to spread false information. As natural as this tendency is, it is wrong, and it stems from a position of conceited self-supremacy.
Assuming we reject the false concept of moral relativism,4 there are universal truths that all people across the globe agree upon. For example, murder, rape, and stealing are evil acts in the absolute sense. While different cultures and peoples may disagree about the nuance of what constitutes a murder and the justifications therein (that said, the differences across cultures are surprisingly minor),5 we all agree that there exists a concept of wrongful or unjustified killing.6 In essence, we all agree that murder is bad. These truths are easy to defend because we all agree that they apply to all people and cultures, and they are rarely debated. This absoluteness provides a moral foundation for our lives.
This is not the case for all actions and thoughts, though. Many statements are subjective and can be interpreted differently by different people. For example, one person may strongly believe one food tastes better than another, while another person may believe a particular car looks best in blue. These opinions are indefinitely debatable because they stem from a subjective claim.
This leaves us with universally accepted, objective truths at one end and debatable, subjective claims at the other. With the former, there are few arguments and therefore, little contention; with the latter, there is constant argument, but an understanding that no single opinion is objectively true (few mature adults would sever ties with another adult over their dislike of a certain movie or food). The most hotly contested arena is the space between these two poles: Objective truths that are fiercely debated. In essence, these are claims we individually consider to be absolute truths, but are vehemently disagreed upon. This is where religion and politics lie. Since we consider them absolute, and therefore foundational in our lives, a challenge to them cuts to our core.
This is also where the most intense debate materializes; unlike with subjective claims, we consider an ideological opponent to be wrong in the absolute sense. It is not simply a matter of taste or a subtle difference in opinion, but a complete disagreement about truth (as offensive as someone claiming that 2 + 2 does not equal 4). To each of us, we genuinely believe we are in agreement with the absolute, universal truth, but others (who hold that same genuine belief) do not, and this disagreement challenges us. It forces us to ask, “how could someone be so confident, yet so wrong?”
It requires a difficult level of maturity on our part to fervently disagree with someone, but still concede they have every right to speak their mind. Unless the speech is directly harming another person, such as specifically threatening violence or besmirching a person’s character by hiding or distorting facts about that person (i.e., defaming a person’s name), we have no moral authority to stop it. We know our only method of combating these falsehoods is with open, rational debate. Essentially, we gladly take on the task of battling these falsehoods in the marketplace of ideas.
This is not the case with suppression of speech. To suppress speech, we must assume our judgment of speech is inherently superior to another person’s judgment. Thus, we baselessly claim we have the authority to determine which speech is permissible and which speech is not. No longer do we battle out ideas and concepts in the open, satisfied that our strongest beliefs will stand the test and our weakest beliefs will be shattered and replaced with ideas closer to the truth. Instead, we deem ourselves arbiters of what others can or cannot say.
This superiority hinges on the belief that we hold some trait that another does not which provides us with this moral authority. It cannot be a trait common to all humans, lest our subjugated person lay the same claim against us and stifle our speech.7 Instead, it must be something only we possess. Usually this is based on education (such as a sufficient level of college education) or identity. The fallacy in any of these claims is: Who determines what is a sufficient level of education or which identity (or intersectional identities) is sufficient or superior? For example, is it one year of graduate education that makes us knowledgeable to be the moral arbiters, or a doctorate degree? Is it blacks or whites that are morally superior to the other?
No matter the measuring stick, we are left with a basis that falls apart to infinite regress: What inherent trait allows us to determine this metric for moral superiority? And once we make that determination about the trait, what gives us the moral authority to determine that trait in the first place? We can follow this journey ad infinitum. In the end, we always come back to the same basis: We are the ones sufficiently endowed to determine permissible speech solely because we think we are right. The other person, whose speech we have deemed impermissible, is simply wrong and we are simply right. And thus, we are left where we started; we think ourselves right and we think others wrong in the absolute sense.
The difference with suppression of speech, though, is that we take on the infantile and arrogant assertion that we have the self-endowed authority to silence another person. Those in favor of free speech do not think themselves any less right than those in favor of speech suppression. Those in favor of speech suppression simply crown themselves — using arbitrary rules that favor themselves, of course — with moral authority over another person and determine that, since that person is wrong, he or she should not be given the right to speak. They fear that the speaker’s falsehoods may spread and infect others — whom they also deem too fragile or weak-minded to make their own determination of the truth. Since they have crowned themselves with a mantle of moral authority, it is not only their right, but also their moral duty, to silence that speech. They not only claim to have the authority to suppress speech; they hold that forgoing this claim would itself be immoral.
Thus, speech suppression is one of the most arrogant forms of thought. It is the self-fulfilling claim that one has the authority to hinder another’s speech simply because one has determined that he or she is right, and worse, that one has the moral obligation to suppress impermissible speech wherever possible.
Certainly, there are cases when specific types of speech can be reasonably suppressed. These valid claims occur in the realm of universally-agreed-upon truths. For example, nearly all would agree that speech that threatens death or severe bodily harm should be curtailed in order to preserve the life of the would-be victim. But this is not the speech that is most often suppressed today. The speech suppressors of today cast a much wider net. Instead, the speech suppressed today concerns those truths we deem absolute but that are hotly contested (i.e., the speech between the poles). In reality, these are the only thoughts the suppressors consider worth suppressing: Opinions do not matter because they are subjective, and universal truths are not contested in the first place. This leaves them with the challenging, difficult speech to suppress.
Given the irrational and immature basis for speech suppression, free speech is the only feasible, coherent route. It does not don the crown of self-appointed moral authority, but instead bases itself on the idea that all humans inherently have an equal right to speak. It presupposes that because each man and woman has this universal right, we cannot suppress his or her speech; instead, we must form cogent and convincing arguments, challenging our own beliefs along the way. Unlike speech suppression, free speech forgoes arrogance and requires only maturity and deference. It forces us to let others speak, no matter how vehement our disagreements may be. Only in this arena can we seek the truth, hearing as many voices as possible and determining what is right and what is wrong to the best of our ability. In the long history of the world, the dichotomy has been clearly laid before us: Freedom to speak has always led to the light (despite the bumpy road along the way), while suppression has always led to darkness.8
The Dangerous Road
My contention is that good men (not bad men) consistently acting upon that position [imposing “the good”] would act as cruelly and unjustly as the greatest tyrants. They might in some respects act even worse. Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron’s cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience. They may be more likely to go to Heaven yet at the same time likelier to make a Hell of earth. This very kindness stings with intolerable insult. To be “cured” against one’s will and cured of states which we may not regard as disease is to be put on a level of those who have not yet reached the age of reason or those who never will; to be classed with infants, imbeciles, and domestic animals.
— C.S. Lewis9
The danger in speech suppression is not that it is some cheap trick used by its practitioners here or there to gain a quick dollar, but that it is treated as a moral imperative. Those who suppress speech do so because they believe they are doing the right thing. As observed by C.S. Lewis, this is the greatest of all tyrannies: That one person harms another thinking it is for the victim’s own good. Unlike outright immorality (like robbery), this type of tyranny does not sleep, because the perpetrator believes that he or she is making the world a better place.
Had these “moral busybodies” of today been an insignificant minority, there would be no need to write this article, but unfortunately, speech suppression is not a onesie-twosie occurrence. Instead, it has become a systemic, coordinated type of tyranny. And doubly unfortunate, it is we — software engineers — who have become the creators of this suppression machine. In the past, suppressing speech would require going door-to-door like Bradbury’s firemen to find banned books, threatening publishers to cease printing certain materials, or burning down a business to make a point. But in our modern age, suppression has become simultaneously more insidious and more comprehensive.
Most of us believe speaking on the internet is equivalent to speaking in a public forum, where participants come and go without any gatekeepers to obstruct the flow of ideas. This is simply not the case. Internet communication is more akin to a road with checkpoints and junctions. In order to speak (“post”) on the internet, we must first access the internet through an Internet Service Provider (ISP), interact with a server owned by a company, and store our posts in a database hosted by a company (possibly the same company or a different one entirely). If we wish to create a forum ourselves, we need to host it on a service (such as a PaaS, SaaS, etc.) created by another company and register its domain name with yet another company. For participants to access our posts or forum, they must navigate through a browser, and possibly a search engine, to see our speech. Adding an additional layer of complexity, we must pay for all of these services through financial institutions that gatekeep where and when our money can be spent.
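To make this road metaphor concrete, here is a minimal sketch (in Python) of a post passing through a chain of gatekeepers before it reaches any reader. The checkpoint names and policies are invented for illustration; the point is only that blocking the post at any single stop is enough to keep the speech from ever being heard.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Checkpoint:
    """One gatekeeper on the road: ISP, host, app store, payment processor, etc."""
    name: str
    allows: Callable[[str], bool]  # each gatekeeper applies its own policy to the post

def post_reaches_readers(post: str, route: List[Checkpoint]) -> bool:
    """A post is visible only if every checkpoint on the route lets it through."""
    for checkpoint in route:
        if not checkpoint.allows(post):
            print(f"Blocked at: {checkpoint.name}")
            return False
    return True

if __name__ == "__main__":
    # Hypothetical route and policies, invented for the example.
    route = [
        Checkpoint("ISP", lambda p: True),
        Checkpoint("PaaS host", lambda p: "impermissible-topic" not in p),
        Checkpoint("App store", lambda p: True),
        Checkpoint("Payment processor", lambda p: True),
    ]
    print(post_reaches_readers("an ordinary opinion", route))            # True
    print(post_reaches_readers("an impermissible-topic opinion", route)) # False
```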
When each stop on this road lets traffic flow freely, the stops seem transparent: The road looks wide open. But when the stops on this road believe they have a moral impetus to play an active role in regulating the road, they become very apparent. Instead of simply acting as neutral, transparent parties (platforms rather than publishers),10 many of these companies — including Google, Amazon, Apple, Facebook, Twitter, and Snap — have taken on a more active role. Just like the speech suppressors of the past, these companies do not feel they merely hold a crown of moral authority to suppress speech when the need arises; instead, they believe they have a moral mandate to suppress speech they deem impermissible wherever it may be. Failure to do so is not only negligence, but a moral failure as well.
Suppression in Action
Some have written off claims of widespread speech suppression as conspiratorial, but such dismissals are simply false. While small acts of suppression have been occurring for years, the last six months have put on full display that many of the largest software companies in the world feel they have a moral imperative to be the arbiters of permissible speech on the internet. Some of the most egregious examples include:
In mid-October 2020, Twitter suspended the NY Post’s account and restricted users from posting a NY Post article about alleged corruption in the Biden family found on the hard drive of the son of then-presidential candidate Joe Biden.11 Facebook likewise targeted this article on its platform, reducing its spread until its “third-party fact checker partners” could verify the story.12 After nearly one month, Twitter CEO Jack Dorsey admitted, “…we recognize it as a mistake that we made, both in terms of the intention of the policy and also the enforcement action of not allowing people to share it publicly or privately.”13 Dorsey also claimed the policy was reversed within “24 hours,”14 but the NY Post’s Twitter account remained locked unless it deleted its original Tweet linking to the Hunter Biden story.15 Nearly two weeks later, Twitter eventually reinstated the NY Post account.16
On January 7, 2021, YouTube (a Google-owned company) stated, in regard to videos about election fraud in the 2020 US Presidential election, that, “*any* channels posting new videos with false claims in violation of our policies will now receive a strike.”17 YouTube continued, “Channels that receive a strike are temporarily suspended from posting or live streaming. Channels that receive three strikes in the same 90-day period will be permanently removed from YouTube. We apply our policies and penalties consistently, regardless of who uploads it.”18 The platform also claimed, “Over the last month, we’ve removed thousands of videos which spread misinformation claiming widespread voter fraud changed the result of the 2020 election, including several videos President Trump posted to his channel.”19 In a practical sense, YouTube deemed disputes of fraud in the 2020 US election to be misinformation and officially took action to suppress the spread of this impermissible speech.
On January 8, 2021, Google removed Parler from its Play Store20 after linking the social media app to the US Capitol riots on January 6th. Apple followed suit shortly after, removing Parler from its App Store.21 Google justified its actions with the following statement: “In order to protect user safety on Google Play, our longstanding policies require that apps displaying user-generated content have moderation policies and enforcement that removes egregious content like posts that incite violence.”20 Apple offered a similar rationale, claiming, “the processes Parler has put in place to moderate or prevent the spread of dangerous and illegal content have proved insufficient.”21 In essence, both platforms removed the app because they claimed Parler did not provide sufficient mechanisms for controlling speech on its platform, a sentiment that was echoed by Facebook COO Sheryl Sandberg: “I think [the Capitol riots] were largely organized on platforms that don’t have [Facebook’s] abilities to stop hate, and don’t have our standards, and don’t have our transparency.” Ironically, the US Justice Department — in documents about the arrests of 223 perpetrators of the Capitol riots — referenced posts and messages made on Facebook 73 times and Instagram (a Facebook-owned company) 20 times.22 Parler content was referenced only 8 times.23 Amazon also contributed to the removal of Parler by removing it from its Amazon Web Services (AWS) hosting.24
These are just a few of the many examples of software companies suppressing speech they have deemed incorrect or “misinformation.”25 Regardless of whether we agree or disagree with the content of the speech being banned, its banning is starting to lead us down a dark road. Although they have been making their way around the internet lately, and have become an eye-rolling cliché for some, the words of Martin Niemoller should act as a staunch warning for us as we head down this road. Niemoller, a Lutheran pastor who lived through Nazi oppression in Germany, recounted after World War II how citizens remained silent as the authoritarian regime rose to power:
First they came for the socialists, and I did not speak out—because I was not a socialist.
Then they came for the trade unionists, and I did not speak out—because I was not a trade unionist.
Then they came for the Jews, and I did not speak out—because I was not a Jew.
Then they came for me—and there was no one left to speak for me.
Niemoller’s words are a great reminder that we often act too late because a problem does not directly affect us. We may think what one particular YouTube user said was wrong, so why should we act in his or her defense? We may think a social media platform like Parler is the wild west of the internet, so why should we stake our reputation on defending an organization we disagree with? The answer to those questions lies in the final line of Niemoller’s quote: Soon we will be too far down this road and there will be no one left to stand up for us. At some point, as we let these exits pass us by on this road, we will run out of exits. At some point, when we finally are placed in the crosshairs, all of the voices that would have come to our rescue will have already been silenced.
Where This Leads
While the cases of existing speech suppression are startling — who would have guessed that the sitting President of the United States, regardless of political affiliation, would have been permanently banned from one of the most popular methods of speech? — they pale in comparison to what is to come if we continue down this road. I do not question the intentions of those that suppress speech — they do so because they truly think it is the right thing to do — but intentions alone are not enough. As wisdom teaches us, the road to hell is paved with such intentions.
We are heading down a dark road, and although it does not appear that a single celebrity being banned from Twitter or a video being demonetized on YouTube will lead us there, I see no reason to think the moral imperative felt by speech suppressors will rest anytime soon — and why should it, if they believe they are doing the right thing? We already have the capability to guess the thoughts and beliefs of average citizens to a degree that is awe-inspiring and frightening, and my fear is that this capability will soon be used to aid in the suppression of speech.
For example, a simple look at the items Amazon suggests for us provides a startling amount of insight into our lives. Based solely on our previous purchases, Amazon can quickly tell what we are likely to buy in the future. Using this as a launch point, Amazon can easily build a profile of us (a toy sketch of such profiling follows the list below), gauging metrics such as:
- Age
- Occupation
- Socio-economic status
- Relationship status
- The car we likely drive
- The authors we likely read
- The food we likely eat
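As a rough illustration of how little machinery this kind of profiling requires, here is a deliberately naive sketch. The purchase categories and inference rules are invented for the example and do not describe any real recommender system, which would rely on far richer signals and statistical models.

```python
from typing import Dict, List

def infer_profile(purchases: List[str]) -> Dict[str, str]:
    """Guess coarse traits from a purchase history using hand-written rules."""
    profile: Dict[str, str] = {}
    if "diapers" in purchases or "stroller" in purchases:
        profile["household"] = "young children likely present"
    if "calculus textbook" in purchases:
        profile["occupation"] = "student or academic"
    if any(item.startswith("luxury") for item in purchases):
        profile["socio-economic status"] = "higher disposable income"
    return profile

if __name__ == "__main__":
    # Invented purchase history for demonstration.
    print(infer_profile(["diapers", "calculus textbook", "luxury watch"]))
```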
Even a cursory level of situational awareness can provide a great deal of insight about a person or neighborhood, even from a simple trip to the parking lot of a local Walmart.26 Amazingly, companies like Amazon do not need to guess: We provide this information ourselves. This is made even easier on social media platforms, like Twitter and Facebook, because we directly provide personal information, such as our age, occupation, relationship status, sex, network of friends, etc. From this web of knowledge and relationships, it is easy for a marketing branch or casual observer to guess our beliefs. For example, if Facebook determines that a certain piece of misinformation has been spread mainly by four accounts, and another user follows three of those four accounts, it is likely that that user believes the false information — or at least has some inclination to believe it.
From a technical perspective, this amounts to tracking the spread of a disease or virus, where the contaminant is misinformation. Even if companies do not outright ban the accounts the misinformation has spread to, those accounts can now be put under stricter watch. If most accounts are judged on a three-strike policy, contaminated accounts may already have a strike by association. Without slipping down the false slope of predeterminism, we can still form a good guess about certain characteristics or behaviors of people (i.e., if you are a baseball fan, someone offers you free tickets to tonight’s game, and you have no other plans, isn’t it hard to resist going?).
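To make the “strike by association” idea concrete, here is a minimal sketch of the kind of follower-graph heuristic described above. The threshold, account names, and follow graph are all invented; a real system would be far more sophisticated, but the underlying logic need not be.

```python
from typing import Dict, Set

def at_risk_users(follows: Dict[str, Set[str]],
                  spreaders: Set[str],
                  threshold: float = 0.75) -> Set[str]:
    """Flag users whose followed accounts overlap heavily with known spreaders."""
    flagged: Set[str] = set()
    if not spreaders:
        return flagged
    for user, followed in follows.items():
        overlap = len(followed & spreaders) / len(spreaders)
        if overlap >= threshold:
            flagged.add(user)
    return flagged

if __name__ == "__main__":
    # Invented follow graph: alice follows three of the four known spreaders.
    follows = {
        "alice": {"acct1", "acct2", "acct3", "news"},
        "bob": {"news", "sports"},
    }
    spreaders = {"acct1", "acct2", "acct3", "acct4"}
    print(at_risk_users(follows, spreaders))  # {'alice'}
```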
This may seem far-fetched, or even conspiratorial, but it is reality. Many of these software companies have already proven they do not have a strict policy of free speech — as we have seen, most have some policy that allows “misinformation” to be curtailed. What’s more, they view the suppression of certain speech as a good thing (a moral imperative). Combine this with the predictive analytics they are already capable of and we have a recipe for disaster. Even beyond the recipe or possibility for disaster, it begs the question: Why shouldn’t they? If they have a moral imperative to suppress speech and the means to do it, why shouldn’t they? Wouldn’t refraining simply be immoral on their part? Suppression is simply the logical conclusion of the beliefs of these companies (or at least the beliefs of their management).
But that is not what scares me. What scares me is where it will inevitably lead: A Minority Report digital world, where speech is banned before it is even circulated. At some level, we have to ask the question: Why not ban an account that is highly susceptible to sharing what a company deems misinformation before it spreads that misinformation? In the current state, we are simply trying to contain the spread of a virus by quarantining people after they have caught the virus and spread it to others. By that logic, the way to stop the spread of a virus would be to contain those who are most likely to catch it and spread it to other people. In essence, why not have a pre-misinformation department that can stop misinformation before it happens, in the same manner that John Anderton (Tom Cruise) is responsible for stopping crimes before they happen in the Pre-Crime Department?
The only objections that can be found today are: (1) preemptive banning is a step too far and (2) what if the predictions are wrong? The first objection is illogical at best, because it draws an arbitrary line in the sand. Why is banning a person for speaking misinformation acceptable (or even moral) while banning them before they speak it (if they are deemed very likely to speak it) is not? In most cases, the answer is either that the former does not give us the same uneasy feeling as the latter (a completely subjective distinction), or that we worry the predictions are incorrect and we may ban someone who is innocent.27 This leads us to the second objection.
If the predictions are wrong, we ban someone for speaking non-misinformation (not necessarily true speech, but at least not false for the reason given for the banning). But these predictive-analytic systems are not some new, untested technological devices. These are systems that have been tuned and trained to the point where most of the “Big Tech” companies are wholly dependent (financially) on them functioning accurately. In practice, these companies have already put their existential trust in these systems. And this is only the infancy of the Artificial Intelligence (AI) age, which will continue to usher in more and more astounding advancements in predictive analytics. These systems are fallible, but they are surprisingly accurate.
But this technology is a double-edged sword, and one that must be wielded with an acute sense of responsibility. Unfortunately for us, we have already seen that many of the largest software companies do not have a principled stance on free speech. Instead, they arbitrarily decide who to ban based on who they agree with and who they disagree with.28
This leaves us with companies who see themselves as arbiters of free speech, with not only the moral authority to decide who should speak and what should be spoken, but also the moral impetus to suppress “misinformation” whenever possible. Based on how the stage is set, I do not doubt how this play will unfold and eventually lead to the logical conclusion of their premise: Systematic suppression of speech — through software — on a scale that would make the authoritarians of the past seethe with jealousy.
The Exit Off This Road
All this is not to say these software companies — and others who support speech suppression — are inherently evil, nor are their employees and software developers inherently evil in any sense of the word. Even those within these companies who support speech suppression,29 I believe, are doing so with genuine intentions. This is where there is hope, especially with regard to the software developers within these companies.
At their heart, companies are simply collections of people, and the software they produce is simply an extension of the creativity and ingenuity of the developers that make up the company. For all their genius, applications like Facebook and Google are all the more impressive because they come from the minds of fellow men and women. And therein lies the solution to the problem: The men and women who create these software systems. Fortunately and unfortunately, we are the ones responsible for creating the systems that are curtailing freedom of speech, and thus we are the ones on the frontlines who can stop them.
The solution does not come from some open rebellion within these companies against existing software systems — such as engineers sabotaging existing systems — but rather, from a change in perspective and an adherence to principles. With an understanding of the importance of free speech, we must look for a constructive, not destructive, means to solve this problem. As developers, this means we must:
Enable free speech in our personal lives. Change on the outside happens only after change on the inside, and likewise, change in large groups only starts with change in small groups. If we wish to enable free speech in our companies and across the internet, we must first preserve it within our personal circles.
Create software that enables, not suppresses, free speech. I cannot determine what a developer should do if he or she currently works on software deemed to be used to curtail speech. That is a matter for him or her to decide and to wrestle with in his or her conscience. Regardless, the solution is constructive. It does not come from tearing down existing software, but from creating alternative software. We must work towards creating platforms and services that enable more voices, not fewer. Although it may sound naive, I have faith that if there exist platforms that support free speech, people will vote with their dollar and move towards those systems. It does not require that we tear down and leave destruction in our path, but that we give people an alternative to which they can flock.
There is no perfect solution to this problem, since it has existed since one person first disagreed with another. But in the age of software, developers have been foisted onto the frontlines of this battle, and it is we who share a large portion of the burden to promote and enable more voices, not fewer.
Appendix A: Common Rebuttals
The topic of free speech is a divisive one, especially as it pertains to political speech and speech on the internet. I have gathered some of the most common arguments I have heard in defense of speech suppression and provided some counter-arguments for them.
Are these companies not free to do what they want as private companies?
These companies are private enterprises — in the sense that they are not government entities and are therefore not subject to government constitutions — and thus, they have the right to support and enable whom they see fit. Even assuming that discrimination laws are not as nuanced as they are in actuality and these companies can discriminate at will, this does not mean they are correct in suppressing speech in the moral sense. They may be absolved from repercussions in the legal sense, but more importantly, this does not absolve them of moral responsibility (which is more important than legal responsibility). Just because a behavior is not restricted by the law does not mean we are morally free to practice it. Doing so is to have no moral compass at all, swaying with the dictates of the law.30 In short, just because these companies are legally capable of restricting speech does not mean they should, and furthermore, it does not absolve them of their responsibility in leading us down this dark road.

Can we introduce regulation to enforce free speech?
Regulation may sound like an enticing prospect on its face — it ostensibly ensures companies cannot do what they please in censoring speech — but rarely does regulation reach this goal without unintended, more dramatic consequences. By introducing regulation, we are inviting governments around the world to get a say in what types of speech are permissible in private companies. In this sense, companies lose their autonomy and become public-private partnerships. Additionally, the call for regulation means the social and moral battle for free speech has already been lost. In essence, we have failed to convince people at a societal level that free speech is a worthy cause, and therefore we must try (again, ostensibly) to force companies to abide by free speech regulations against their will.

It is also concerning that many within the technology industry (“Big Tech”) have called for regulation themselves. It begs two questions: (1) if these companies are calling for change on themselves, why not implement the change voluntarily, and (2) what advantage do these companies gain by the regulation? The second question becomes more concerning when we include the fact that most of the Big Tech companies are publicly traded and have a fiduciary responsibility to make decisions that make the business more profitable. Thus, combining these questions, we are left asking: What actions could Big Tech be refusing to take voluntarily that would be more beneficial for them through regulation?
Can we fact-check posts in order to determine misinformation?
The concept of a fact checker is a spurious one because it presumes that there exists some entity that is unbiased and should be the determining factor in deciding whether a story or post is true or not. This is simply not the case. Every person and organization is biased to some degree. There does not exist a wholly unbiased entity this side of Heaven. We all approach a story or post with a set of preconceived notions, past experiences, and things we want to be true that shape how we view information. That is not to say there is no objective fact or no way to find objective fact, but a single entity should not be responsible for determining which information can be seen by others.

It is one thing to make a statement about a post and claim it is either true or untrue, but it is another to stop its circulation based on that determination. The most effective means of finding the truth is having it scrutinized by as many people as possible, challenging it, and seeing if it stands the test of time. In essence, letting people decide for themselves what they believe to be true. If they use one of these fact checkers, then they are entitled to do so, but using the determination of this fact checker to suppress the circulation of information is a curtailing of speech.
Isn’t banning offensive speech accountability?
This argument suffers from the same fatal flaw as speech suppression itself: It presupposes that one person has an inherent authority over another. To claim that we are holding a person accountable for speaking is to assume that person is responsible for providing us with an account of their actions. By definition, accountability means being “subject to giving an account or explanation.”31 Essentially, we are asserting our authority over the speaker and instructing them to explain or justify their speech. Just as with speech suppression in the first place, this requires a wild level of arrogance and self-ordained power.

We can challenge people on their beliefs, but it must be with the understanding that we have no authority over them and that they are not responsible to us to justify why their speech should be heard in the first place.
Appendix B: Examples of Speech Suppression
The original draft of this article contained these examples in a footnote, but even in the days it took to write this article, more and more examples continued to pour in, and the footnote grew to the point that an entire appendix was warranted. This appendix contains examples of some of the most overtly biased instances of suppression, where software companies non-uniformly applied their Terms of Service (ToS) (i.e., considered one account “misinformation” or contrary to the ToS, while another account with the opposite political affiliation was not) and banned (or suspended) accounts.
Twitter banning MyPillow CEO, Mike Lindell
YouTube’s (Google) suppression of Tulsi Gabbard’s campaign in the 2019 Democratic primary race — The dismissal of Gabbard’s lawsuit against Google does not claim that Google did not suppress Gabbard’s campaign, but instead, claims that Google’s suppression does not fall under the restrictions against speech suppression by the US government protected in the First Amendment to the US Constitution (i.e., it claims that suppression did occur, but the damages Gabbard claimed are not justifiable under the First Amendment, since Google is a private company, not a government entity).
YouTube banning anti-Chinese Communist Party phrases
Discord banning r/WallStreetBets’s channel — Discord justified the ban in accordance with their stated Community Guidelines, which protects against, “hate speech, glorifying violence, and spreading misinformation” (emphasis added).
Amazon removing Ryan T. Anderson’s book When Harry Became Sally — Neither Anderson nor his publisher was notified that Amazon had removed the book from its website, and (at the time of writing) no explanation has been given as to why the book was removed. The book was also temporarily removed from Apple Books, but has since been re-added.32 The cover of the book has also been flagged on Twitter as “potentially sensitive content.”33 According to Anderson, “It’s not about how you say it, or how rigorously you argue it, or how charitably you present it. It’s about whether you affirm or dissent from the new orthodoxy of gender ideology.”34
Twitter suspending the account of Steven Crowder over claims of election fraud — Although Twitter has deemed that there was no election fraud in the 2020 US Presidential election (at least to an extent that altered the outcome of the election), Crowder obtained the publicly-available voter rolls and verified which addresses were deliverable according to the United Parcel Service (UPS), and then physically visited the addresses that were not. Since this research — which according to Crowder, proves beyond a reasonable doubt that there is some voter fraud — is contrary to the determination made by Twitter that claims of election fraud are “disputed,” Twitter locked Crowder’s account35 and restricted commenting, liking, and retweeting the original post.36
In addition to the speech suppression committed by software companies today, governments around the world have started to put pressure on these companies to crack down on speech. This is even more frightening than suppression by companies. While companies such as Facebook and Google hold power unknown to companies of the past, they are still unable to damage the life of citizens to the extent that misguided governments can (i.e., arresting, jailing, or oppressively fining a citizen).
In a chilling inquiry sent by US Rep. Anna G. Eshoo (D-CA 18th district) and US Rep. Jerry McNerney (D-CA 9th district) on Feb. 22, 2021, both representatives demanded that twelve companies – AT&T, Verizon, Roku, Amazon, Apple, Comcast, Charter Communications, Dish Network, Cox Communication, Altice, Alphabet (parent company of Google), and Hulu – answer seven questions about “Right-wing media outlets, like Newsmax, One America News Network (OANN), and Fox News.” These questions included:
How many of your subscribers tuned in to Fox News on [your service] for each of the four weeks preceding the November 3, 2020 elections and the January 6, 2021 attacks on the Capitol? Please specify the number of subscribers that tuned in to each channel.
Have you ever taken any actions against a channel for using your platform to disseminate any disinformation? If yes, please describe each action and when it was taken.
While these questions are frightening in their heavy-handedness — in the same way questions at the Salem Witch Trials or during McCarthyism were — their justification is disturbingly shallow. According to the letters of inquiry sent by Reps. Eshoo and McNerney, “Experts have noted that the right-wing media ecosystem is ‘much more susceptible…to disinformation, lies, and half-truths’” (emphasis added). Following the footnote used as a source for this quote (and subsequently as proof of “expert” determination), the inquiry lists a single book, Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. This single source is used as justification for representatives of the US Federal Government to strongarm companies into answering for airing news stations associated with one political affiliation in the United States. This is a frightening move towards authoritarianism that echoes the path taken by many of the software companies in the United States: If a single entity – such as Facebook, Twitter, Google, or a book – declares that a debatable thought is fact, any contrary opinion must not only be silenced, but crushed.
Footnotes
- See Partisans in the U.S. increasingly divided on whether offensive content online is taken seriously enough. About 53% of all US adults polled believe that feeling safe and welcomed online is more important than being able to speak your mind freely.
- See ‘On Tyranny,’ ‘1984’ Boosted Book Sales Last Week and George Orwell’s “1984” is topping Amazon’s best sellers
- See Permanent suspension of @realDonaldTrump for Twitter ban; See Trump banned from Facebook indefinitely, CEO Mark Zuckerberg says for Facebook ban. It is important to note that Twitter made its decision to ban the former President due to, “specifically how [his words] are being received and interpreted on and off Twitter.” This statement shows that Twitter in particular considered how speech is “received and interpreted on and off Twitter” — an immensely subjective metric — when determining what speech is permissible.
- Relativism states that truths are not universal (or true for all people at all times) but are instead contingent upon the standpoint of each individual and the context to which truths are applied. This proposition is self-defeating because relativism assumes a priori that relativism is universally true while at the same time concluding that there is no universal truth. Thus, the conclusion of relativism disproves its premise. In essence, relativism provides itself special privilege (a logical fallacy), claiming that it is exempt from its own conclusion. Therefore, relativism is illogical and we must accept that there are universal truths, despite our disagreements about particular universal truths.
- See Lewis, C. S. (2001). The Abolition of Man. Grand Rapids, MI: Zondervan. Pp. 83-101. In Appendix: Illustrations of the Tao, Lewis delves into some of the universal ideas of humanity, including punishment for “sin,” duty to parents, and duty to children, and how vastly different cultures share a common understanding of these universal principles. Despite broad differences between cultures around the world, there is surprisingly little difference between many of the basic moral tenets of each culture.
- What we know as “murder.”
- The baselessness of this claim of moral authority can be seen if we play out the claim to its logical conclusion. If we assume all people have the moral authority to suppress speech and person A suppresses the speech of person B, then there is nothing that stops person B from suppressing the speech of A in turn. This argument devolves into absurdity when person B claims that the call for suppression by person A should be suppressed, thus removing the ground on which person A stood to suppress person B’s speech in the first place. Therefore, the trait that allows us to claim the moral authority to suppress another person’s speech must be one that we alone (or a group to which we belong) holds, and is not held by the person whose speech we are suppressing (or the group to which that person belongs). In essence, we arbitrarily claim that we belong to some privileged group that the suppressed person does not belong to, lest the suppressed person turn around and use the weapon of suppression against us.
- Free speech does not always lead to the light in the immediate sense. Sometimes free speech can lead the masses astray (i.e., the bumpy road). The difference between free speech and speech suppression is that the road of free speech can sometimes be bumpy while the road of speech suppression will always be bumpy. Likewise, free speech does not guarantee an immediate happy ending, but it allows for us to eventually reach the light. Speech suppression, on the other hand, inevitably leads to darkness.
- God in the Dock
- Section 230 of the Communications Decency Act (officially known as 47 U.S.C. § 230, a provision of the Communications Decency Act, but colloquially referred to as “Section 230”) states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In practice, this law provides exemption for social media platforms from litigation for the content that is posted on their application. Thus, Facebook cannot be sued for illicit content posted by a user on Facebook in the same way that a newspaper or online publication (a publisher) can be sued for publishing illicit content. There is some debate about whether social media companies should be afforded these protections in light of the speech suppression they have committed, but we will not address this further, as this article is focused on the moral argument against censorship rather than the legal actions that can be taken against these companies.
- See Facebook and Twitter are restricting a disputed New York Post story about Joe Biden’s son.
- See this tweet by Facebook’s Policy Communications Director, Andy Stone.
- See Twitter: Censoring NY Post’s Hunter Biden Story a ‘Mistake’.
- See Twitter Reverses Course on Hunter Biden Stories, Says It Won’t Block Hacked Content. The full quote, from Vijaya Gadde, Twitter’s Legal, Policy and Trust & Safety Lead, is: “Over the last 24 hours, we’ve received significant feedback (from critical to supportive) about how we enforced our Hacked Materials Policy yesterday. After reflecting on this feedback, we have decided to make changes to the policy and how we enforce it.”
- See Jack Dorsey says the New York Post Twitter account will remain locked until it deletes the original tweet featuring its Hunter Biden story. The full quote from Dorsey is: “They have to log in to their account, which they can do right this minute, delete the original tweet, which fell under our original enforcement action, and they can tweet the exact same material…and it will go through.”
- See Twitter Unlocks New York Post Account After Two-Week Standoff.
- See this tweet by the official @YouTubeInsider Twitter account (Twitter account responsible for official updates about YouTube to the press or media).
- Ibid.
- Ibid.
- See Parler removed from Google Play store as Apple App Store suspension reportedly looms.
- See Apple removes Parler from the App Store.
- See Sheryl Sandberg Downplayed Facebook’s Role In The Capitol Hill Siege—Justice Department Files Tell A Very Different Story.
- Ibid.
- See Parler’s de-platforming shows the exceptional power of cloud providers like Amazon.
- See Appendix B.
- See Tim Kennedy Teaches Fundamentals of Situational Awareness! | Sheepdog Response (warning: Some adult language may be included)
- See the previous example of Twitter’s admission that it should not have handled the NY Post situation the way it did.
- The removal of “misinformation” speech appears to be arbitrary on many social media platforms. For example, Twitter and Facebook commonly fact check (and sometimes outright suppress) posts for containing misinformation, but still allow content such as the historically inaccurate 1619 Project from the New York Times (and the account of the Project’s creator, Nikole Hannah-Jones) to remain on their platforms. Actions such as these seem to indicate the Terms of Service and “misinformation” policies of these platforms are applied at the discretion of the platform, rather than universally.
- See Spotify Employees Threaten to Strike If Joe Rogan Podcasts Aren’t Edited or Removed, where employees of Spotify threatened to walk out if some of the language included in the Joe Rogan Experience (JRE) podcast was not curtailed.
- And to which law? The law of our town? The law of our state or country? Global law? What happens when one of these laws comes into conflict with one another? How do we decide, apart from our moral law, which should take precedence over another?
- See Merriam-Webster’s definition.
- See Digital Book-Burning: Amazon Bans Scholar Ryan T. Anderson’s Book On Transgenderism.
- Ibid. See also Amazon Removes Well-Known Book on Transgenderism.
- See When Amazon Erased My Book.
- See Steven Crowder’s tweet.
- The original tweet can be found here. The tweet included a warning that stated, “This claim of election fraud is disputed, and this Tweet can’t be replied to, Retweeted, or liked due to a risk of violence.” The comment, like, and share buttons were all disabled. Clicking any of these buttons brought up a warning that stated, “We try to prevent a Tweet like this that otherwise breaks the Twitter Rules from reaching more people, so we have disabled most of the ways to engage with it. If you want to talk about it, you can still Quote Tweet.” Only the option to quote tweet the original tweet was available.