
Freedom of Expression

Every time I start typing a search phrase into the Google search engine, two things happen. First, Google’s autocompletion function provides me with suggestions on how to complete the search, encouraging me to use pre-selected phrases. Second, when I press enter, Google provides me with results for my search in order of “relevance” as determined by Google’s trademarked ranking algorithm, PageRank. Content that is not indexed or ranked highly by the Google search engine is less likely to reach a large audience, or to be seen at all.
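To illustrate the basic idea (and not Google’s actual, far more complex system), the following minimal sketch ranks a toy three-page web using the core PageRank intuition: a page is ranked highly when it is linked to by other highly ranked pages. The link graph, damping factor and iteration count here are illustrative assumptions only.

```python
# Minimal PageRank sketch computed by power iteration (illustrative only).
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outgoing in links.items():
            if not outgoing:  # a page with no outgoing links spreads its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A toy three-page "web": search results would be listed in descending rank order.
toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(sorted(pagerank(toy_web).items(), key=lambda item: -item[1]))
```

Whatever the details of the real system, the point is that an automated ranking of this kind, and not the user, decides which content appears first.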

When I log onto Facebook, I do not see the newsfeed posts of my 5,000 Facebook friends in the order in which they were posted, nor do I see adverts placed at random on the page. Instead, a Facebook algorithm predicts my preferences and uses those predictions not only to select which advertisements I see, but also to dictate how my social media feed, including my newsfeed, is arranged.
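The following sketch shows, in deliberately simplified form, what such preference-based ordering amounts to. The interest scores, post topics and scoring rule are invented assumptions, not Facebook’s actual model; the point is only that the order of the feed is driven by predicted engagement rather than by the time of posting.

```python
# Illustrative feed ranking: posts are ordered by predicted interest, not recency.
predicted_interests = {"politics": 0.9, "sport": 0.1, "cooking": 0.4}  # assumed scores

posts = [
    {"id": 1, "topic": "sport",    "minutes_ago": 5},
    {"id": 2, "topic": "politics", "minutes_ago": 120},
    {"id": 3, "topic": "cooking",  "minutes_ago": 30},
]

def score(post):
    # Predicted interest dominates; recency only breaks near-ties.
    return predicted_interests.get(post["topic"], 0.0) - 0.001 * post["minutes_ago"]

feed = sorted(posts, key=score, reverse=True)
print([post["id"] for post in feed])  # [2, 3, 1]: the two-hour-old politics post comes first
```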

Moreover, if I were to post something on my Facebook page that offends Facebook’s community standards, that post may be removed. Content removal on social media platforms often takes place through semi-automated or fully automated processes, with algorithms widely used to filter and remove content. While large platforms like Google and Facebook have frequently claimed that all content removal is carried out by human beings, a European Union report claims that large parts of the process are automated.
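To see why automated removal raises freedom of expression concerns, consider the following deliberately crude sketch of rule-based filtering. The blocked terms are hypothetical examples, and real platforms combine machine-learning classifiers, hash-matching and human review; but the sketch shows how easily an automated rule can sweep up legitimate speech, such as a news report, along with the content it targets.

```python
import re

# Hypothetical blocked terms, for illustration only.
BLOCKED_TERMS = ["buy illegal weapons", "terrorist recruitment"]

def review_post(text):
    """Return 'remove' if any blocked term appears in the post, otherwise 'allow'."""
    lowered = text.lower()
    for term in BLOCKED_TERMS:
        if re.search(re.escape(term), lowered):
            return "remove"
    return "allow"

print(review_post("A news report on terrorist recruitment networks"))  # 'remove'
print(review_post("Pictures from my holiday in the Karoo"))            # 'allow'
```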

These examples all help to demonstrate that algorithms used by large companies like Google and Facebook play a pivotal role in curating information on the internet. This raises serious questions about the impact of such algorithms on the rights of individuals to receive and impart information.

This is so even where companies like Google and Facebook act in an honourable manner: even when they do not themselves manipulate these algorithms, and even when they take steps to prevent them from being manipulated by outside parties (as the Russian government allegedly did in 2016 in an attempt to influence the United States presidential election).

Because of the size of platforms such as Google or Facebook, their centrality in the creation and maintenance of the internet as a quasi-public sphere and their ability to massively amplify certain voices, companies like Google and Facebook are pivotal to ensuring the equal enjoyment of the right to freedom of expression.

Moreover, the personalisation of the information that users receive, based on the predictions made by algorithms, can create “filter bubbles” and may substantially compromise the right to freedom of expression, which includes the right to receive information.

An even more pressing concern is that a search algorithm might be biased towards certain types of content or content providers, leading to an indirect “censorship” of information without the general public being aware of it. This concern remains even if we leave aside the question of whether the algorithms used by Google, Facebook or other large internet platforms do in fact produce biased results.

Search engines like Google, and social media sites like Facebook, are privately owned, and in most countries they are either lightly regulated or not regulated at all. Yet these search engines and social media platforms act as crucial gatekeepers for anyone who wishes to seek, receive or impart information, raising serious freedom of expression concerns.

Traditional conceptions of the right to freedom of expression do not adequately address the potential impact of algorithmic technology on the enjoyment of that right. The traditional conception is based on the assumption that freedom of expression is infringed when the state takes action to limit the ability of individuals to impart and receive information.

On this view, the right to freedom of expression is a negative right enforced against public bodies to prevent them from controlling or limiting the free flow of information, and there is no positive duty on the state to regulate private institutions that play such an outsized role in mediating the imparting and receiving of information. This view takes seriously the inherent threat to freedom of expression posed by powerful state actors, but not the threat posed by private institutions like Google and Facebook.

However, the Constitution of the Republic of South Africa contains powerful provisions that allow us to rethink the contours of the right to freedom of expression. One of the pivotal provisions in this regard is section 7(2) of the Bill of Rights, which states that: “The state must respect, protect, promote and fulfil the rights in the Bill of Rights.”

While the duty to “respect” rights imposes a negative duty on the state not to interfere with the enjoyment of a right, the duties to “protect, promote and fulfil” impose a positive duty on the state to ensure the equal enjoyment of all rights in the Bill of Rights. South Africa’s Constitutional Court has confirmed in several judgments that this positive obligation applies not only to social and economic rights (like the right of access to housing, health care and water) but also to traditional civil and political rights (such as the right to freedom of expression).

An argument could, therefore, be made that the state (in the form of the national legislature) may have a duty to take positive steps to enhance the ability of individuals to impart and receive information. This would include not only a responsibility to make access to the internet cheaper and more readily available to all, but also a duty to ensure that where private companies employ algorithms to select and order the information we access on the internet, this is done in a fair and relatively transparent manner.

A second pivotal provision is contained in section 8(2) of the Bill of Rights which states that: “A provision of the Bill of Rights binds a natural or a juristic person if, and to the extent that, it is applicable, taking into account the nature of the right and the nature of any duty imposed by the right.” This section confirms that rights do not only apply vertically (binding the state) but in some instances also apply horizontally (binding private parties and legal entities).

This provision is important because it implies that, even where the state fails to take positive action to ensure fair access to information on the internet, private companies themselves could be hauled before the courts whenever evidence emerges that algorithms selecting, ordering or assisting in deleting information have been manipulated to make it more difficult for individuals to access controversial, unpopular or politically sensitive information on the internet.

Reconceptualising the right to freedom of expression requires careful thought. The dangers inherent in state regulation of access to information are apparent. Any intervention by either the legislature or the courts will have to continue to take seriously the rights of individuals and businesses to impart and receive information, and must guard against unnecessary interference that would limit rather than enhance the substantive enjoyment of the right to freedom of expression.

In my masterclass, I will explore in more detail how this balance could be struck.

– Pierre de Vos

Professor Pierre de Vos is the current Claude Leon Foundation Chair in Constitutional Governance and Deputy Dean at the University of Cape Town, where he teaches Constitutional Law.

Pierre has studied the law extensively, having obtained a B Comm (Law), LLB and LLM (cum laude) from the University of Stellenbosch, an LLM from Columbia University in New York, and an LLD from the University of the Western Cape.

#freedomofexpression #pierredevos
