Artificial intelligence, which is built on algorithms, plays a significant role in augmenting human judgment. We interact with algorithms daily in many aspects of our lives, especially when selecting which information on the internet is most relevant to us. Search engines use algorithms to help internet users navigate the enormous volumes of information in online databases (Gillespie, 2014). Algorithmic functions are pervasive in social media, where they are used to tailor messages for specific feeds. By determining the preferences and tastes of social media users, this artificial intelligence decides which messages to deliver to a specific user's feed. Similarly, algorithms determine whose posts are seen by which people and at what rate, a dynamic captured by the concept of media visibility. Under this regime of visibility, algorithms elevate certain users into 'clear winners' (Gillespie, 2014). Recently, developments in artificial intelligence have produced algorithms that enable machines to perform dexterous tasks, further displacing humans in certain job sectors. This is seen in a case like Tesla's factories, which use robots instead of humans for assembly (Markoff, 2012). Because artificial intelligence has grown to perform functions that were previously done by human beings, there are fears that this technology will completely erase human creativity in the areas AI now performs. Will machines eventually eliminate the human workforce? This essay argues that algorithms and humans can work together productively, but machines can sometimes be too intrusive in users' decision-making processes.
First, when scholars discuss the social impact of replacing human judgment with some form of machinic judgment, they mean that algorithms can sometimes be more intrusive than is desirable. Humans are accustomed to reaching decisions about their livelihoods through dialogue and social consensus. In many instances, however, artificial intelligence challenges this culture by being manipulative or misleading. According to Crawford (2016), algorithms such as Facebook's EdgeRank are "described as autocratic—making decisions without our knowledge" (p. 84). The way EdgeRank filters information and delivers feeds without the user's request is a demonstration of this autocracy: one is made to read and interact with content without their consent. It analyzes the user's past browsing history and posts to predict what the user might be interested in in the future. By relying on this faulty generalization of preferences, the algorithm keeps sending feeds that narrow the user's worldview so that they are not exposed to rival messages. Such feeds may be accurate, or they may mislead. There is, therefore, an imminent threat of remaining caged in a world of consumerism without suspecting it. Algorithms narrow feeds to a specific worldview in order to concentrate users' attention on certain products or cultures and so maximize certain companies' profits. The bigger picture is that the person will rarely encounter rival companies' products.
Socially, algorithms digitally mediate public knowledge, online discourse, and decision-making. One way they do this is by analyzing information already in an online database, relating it to the user's inputs, and producing a final output in the form of feeds or results, thereby determining which voices are taken seriously and which are taken lightly. In these discussions, participants are stratified into powerful voices on the one hand and listeners or receivers of the message on the other. Some people may speak yet attract only a small audience, while others speak and gain an exceptionally large one; some people are simply more visible in such discourses than others. According to Bucher (2012), "the regime of visibility constructed imposes a perceived 'threat of invisibility' on the part of the participatory subject" (p. 1165). Visibility ensures that only the posts ranked highly by social media algorithms attract large viewership. When other posts are not promoted for better visibility, their authors face the threat of invisibility. For example, when a prominent person like President Obama posts about abortion on Facebook, the post will be visible to many people, including those who do not follow him, and the resulting discussion will be correspondingly wide. When an unknown person in the developing world posts on the same issue, it might attract far less attention, because EdgeRank will have filtered it and ranked it as deserving a smaller audience.
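The ranking logic described above can be made concrete with a toy sketch. Facebook publicly described EdgeRank as summing, over a post's interactions ("edges"), the product of viewer–author affinity, interaction weight, and time decay; the function, the decay formula, and the sample numbers below are illustrative assumptions, not Facebook's actual code.

```python
# Toy illustration of EdgeRank-style feed ranking (NOT Facebook's real code).
# Publicly described idea: score = sum over edges of affinity * weight * decay.

def edgerank_score(edges):
    """Each edge is (affinity, weight, age_hours): viewer-author affinity,
    interaction-type weight (e.g., a comment outweighs a like), and the
    edge's age. Older edges contribute less via a simple decay (assumed)."""
    return sum(affinity * weight / (1.0 + age_hours)
               for affinity, weight, age_hours in edges)

# Hypothetical data: a prominent account accumulates many high-affinity
# interactions; an unknown account gets one weak, stale interaction.
prominent = [(0.9, 3.0, 1), (0.8, 2.0, 2), (0.7, 3.0, 3)]
unknown = [(0.2, 1.0, 5)]

posts = {"prominent": prominent, "unknown": unknown}
ranked = sorted(posts, key=lambda p: edgerank_score(posts[p]), reverse=True)
print(ranked)  # the prominent account's post ranks first
```

Because the score aggregates past engagement, accounts that are already visible keep winning the ranking, which is the self-reinforcing dynamic the essay describes.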
Still on the issue of online discourse, algorithms stifle democracy by making some people more visible than others, causing frustration and self-doubt. As much as algorithms enhance the exchange of information, there is a degree of pain associated with this gain. Savolainen (2022) found that "users struggle, desperately, to form expectations" about their posts, which are never guaranteed visibility (p. 1105). Algorithms promote the interests of specific people, creating a hegemony in online discourse. Prominent people, whose feeds are heavily marketed and promoted, garner more attention than ordinary people. When an ordinary person makes a post and fails to attract the attention they hoped for, frustration and self-doubt follow: their interests are not promoted, for reasons known only to the designers of the algorithmic commands. Self-doubt deepens when one makes several desperate attempts to become visible on a platform to no avail, and it is worse still when one's peers enjoy ample visibility. In a nutshell, capitalists have weaponized algorithms to reward the users who dance to their tune. This is plausible, considering that one can pay to have visibility enhanced through structures like EdgeRank; the platforms profit from the very frustration that the promise of paid visibility creates.
Fortunately, algorithms provide quicker access to information than a manual search for the same knowledge. A good example of how much easier access to information has become is the justice system. In this field, AI can rapidly gather information from different databases and combine the related common-law precedents applicable to a given case to support an accurate judgment. According to Masuhara (2017), "an AI program named CaseCruncher Alpha won a challenge against 100 commercial London lawyers" (p. 7). CaseCruncher Alpha could comfortably access the legal database and provide conclusions coherent with the judgments of human jurists, which suggests that the information-filtering capabilities of algorithms exceed those of ordinary humans. CaseCruncher was designed to read through the law just as lawyers do and produce a judgment on a case. When it was pitted against trained lawyers on Ombudsman cases, CaseCruncher scored 86.6 percent accuracy against the lawyers' 62.3 percent (Masuhara, 2017). Statistics like these suggest that AI can outperform humans at this kind of logical decision-making. Therefore, if lawyers incorporated AI into their decisions, the dispensation of justice could be more accurate and satisfying. AI also speeds decision-making by saving the time otherwise spent combing through mounds of old files and encyclopedias for critical legal decisions.
In conclusion, it is evident that, as much as algorithms and humans can work together productively, machines can sometimes be too intrusive in users' decision-making processes. Facebook's EdgeRank has been accused of feeding people tailored information, constantly influencing their choices and the amount of information they are exposed to. This means that, at times, algorithms feed us information they deem relevant to us even when it is not. Further, algorithms mediate online discourse by determining which people are heard with more respect and which are ignored. Through EdgeRank, a Facebook post can attract wide viewership or only a handful of views, and those the algorithm deems insignificant face the threat of invisibility. However, there is a general consensus that algorithms can help humans access and use information better. Lawyers, especially, can benefit from these technologies to decide cases without having to comb through archives of old files for days.
Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164-1180. DOI: 10.1177/1461444812440159
Crawford, K. (2016). Can an algorithm be agonistic? Ten scenes from life in calculated publics. Science, Technology, & Human Values, 41(1), 77-92. DOI: 10.1177/0162243915589635
Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167-194). MIT Press.
Markoff, J. (2012, August 18). Skilled work, without the worker. The New York Times.
Masuhara, D. M. (2017). Artificial intelligence and adjudication: Some perspectives. Amicus Curiae, 111, 2.
Savolainen, L. (2022). The shadow banning controversy: Perceived governance and algorithmic folklore. Media, Culture & Society. DOI: 10.1177/01634437221077174