“Those who profess to favor freedom, and yet deprecate agitation, are men who want crops without plowing up the ground.”
Writer, statesman and abolitionist leader Frederick Douglass delivered these words in his 1857 “West India Emancipation” speech.
Frederick Douglass agitated the status quo in an effort to procure freedom for enslaved peoples and independence from the grip of imperialism and colonialism. As time progressed and degrees of freedom were achieved, efforts to disrupt the rule of white supremacy and colonialism continued through the work of leaders such as Malcolm X, Martin Luther King Jr. and Marcus Garvey.
Yet, more than 160 years after Douglass gave his speech, and almost 70 years after the start of the Civil Rights Movement, the world continues to be wracked by the remnants of these systems.
Nelson Maldonado-Torres captures the pervasive nature of colonialism with the term “coloniality”: the long-lasting patterns of colonial power that have defined and influenced our culture, our patterns of labor and, more recently, our technology. AI, or artificial intelligence, has become an especially potent platform through which colonial and white supremacist attitudes are upheld, due to the implicit biases of the white majority that creates these technologies.
Generally, the proposed solution is to diversify tech spaces, dismantle biases and amend the technologies, but what would happen if we were to dismantle AI altogether?
With the foundations set by Douglass, Sojourner Truth and other abolitionists of the 19th century, we are going to set upon an exploration of abolition and whether or not it has a place as a viable solution to problems posed by coloniality-informed AI.
What is “abolition”?
“Abolition” is defined simply as the termination of a system, institution or practice. Abolitionist movements in the United States have always aimed to end oppressive systems, particularly those linked to colonialist institutions.
From the mid to late 1800s, the movement aimed to end chattel slavery, or the ownership of another human being. The abolitionist movement was first conceived with religious undertones, as it saw slavery as an abomination. Over time, it would adopt a political, controversial face.
In hindsight, the demands of abolitionists to end slavery were more than warranted, but at the time, the majority stood in opposition. In a similar way, one can aptly argue that the abolitionist movements of the 21st century could meet the same fate– an initial reaction of opposition that, in years to come, would be viewed with much more reverence.
Abolitionist movements of 2022 still aim to sever ties with systems linked to colonialist efforts, with the most prominent targets being the prison industrial complex and policing.
But the agents of colonialist thought take on a number of faces, especially in the digital age.
I spoke with Nabil Hassein, a PhD student at NYU in the Media Culture and Communications program and the author of “Against Black Inclusion in Facial Recognition”, about his thoughts on the prospect of abolition in tech and at large. Listen below.
The Transcendence of Coloniality Through AI
As stated previously, coloniality is the means by which colonialism remains a pervasive aspect of modern society. Where colonialism provides the structure through which one nation can exert political and economic control over and exploit another, coloniality refers to the long-lasting impact, ensuring that the system is maintained over time. In particular, the “coloniality of being” refers to the tangible, lived experiences of those who are colonized, along with their conceptualization of the world around them.
Ultimately, coloniality survives the act of colonization, and its remnants are seen in the development of modern technologies. AI serves as a prime example of how this ideology persists.
The term “artificial intelligence” was first coined in 1956; according to Oracle, it refers to “systems or machines that mimic human intelligence to perform tasks and can iteratively improve themselves based on the information they collect”. The advances that have taken place within AI research, particularly over the last 20 or so years, have made it an integral part of everyday life, including the political, economic and cultural climate.
Digital assistants such as Siri and Alexa are able to both decipher voice commands and sort through expansive databases in order to provide users with information; they can even control other forms of technology within the home, such as light fixtures and security systems. Streaming apps such as Netflix and HBOMax feed data to machine learning algorithms in order to suggest content for users to watch. Social media apps use a similar methodology, collecting customer data and behaviors to create algorithms that suggest content to users. Large companies also use data collected from their customers to personalize their marketing campaigns, which drives more engagement with their products. These are only a few of the many ways in which artificial intelligence is embedded in seemingly mundane tasks.
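The suggestion logic behind these platforms can be sketched in miniature. The following is a hypothetical illustration only– the titles, ratings and similarity measure are invented for this example and do not reflect any real platform's system– of one common approach, collaborative filtering: recommend items that a user with a similar history rated highly.

```python
import math

# Toy watch-history ratings: user -> {title: rating}.
# All names and numbers here are illustrative, not real platform data.
ratings = {
    "ana":   {"drama_a": 5, "comedy_b": 1, "doc_c": 4},
    "ben":   {"drama_a": 4, "comedy_b": 2, "doc_c": 5, "thriller_d": 5},
    "carol": {"comedy_b": 5, "thriller_d": 1},
}

def cosine(u, v):
    """Cosine similarity between two users' rating vectors (unrated = 0)."""
    titles = set(u) | set(v)
    dot = sum(u.get(t, 0) * v.get(t, 0) for t in titles)
    nu = math.sqrt(sum(r * r for r in u.values()))
    nv = math.sqrt(sum(r * r for r in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user):
    """Suggest titles the most similar other user liked but `user` hasn't seen."""
    others = [(cosine(ratings[user], ratings[o]), o)
              for o in ratings if o != user]
    _, nearest = max(others)
    seen = set(ratings[user])
    return [t for t, r in ratings[nearest].items() if t not in seen and r >= 4]

print(recommend("ana"))  # → ['thriller_d']
```

The point of the sketch is the dependency it exposes: every suggestion is a function of accumulated behavioral data, which is why the same machinery that personalizes content also concentrates knowledge about consumers.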
Although AI is presented as a technology that supplies consumers with the content, products and services that they want or need, it also serves as a tool through which power can be maintained. As philosopher Carissa Véliz has articulated, tech companies can gain power through digital surveillance; the more these companies know about their consumers' habits, the easier it is for them to anticipate, and at times manipulate, what those consumers do.
Not only can AI be used to impact the decisions of consumers, but it can also inform the decisions that are made on behalf of consumers. Particularly, legal and political decision makers– such as judges and lawmakers– can use AI to determine the way in which they deal with potential offenders, constituents or residents of a particular area.
For example, in 2016, ProPublica released a report on the way in which a recidivism algorithm produced biased “risk assessment” scores. These risk assessments, which are often used in courtrooms to assign bond amounts or even charges, are determined by an algorithm. This algorithm relies on statistical measures along with “theory-driven” assumptions, and when estimating the likelihood of offenders re-offending, it often assigned higher scores to Black people and lower scores to white people. The most pressing issue posed by this algorithm is that it was often inaccurate; in many cases, white offenders re-offended despite being assigned lower risk scores. Yet, there are also a number of systemic disadvantages that could potentially come with these score assignments.
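The kind of disparity ProPublica reported can be made concrete with a minimal audit sketch. The records below are invented for illustration– they are not real case data and the numbers do not reproduce ProPublica's findings– but the metric is the relevant one: the false positive rate, i.e. how often people who did not re-offend were nonetheless labeled high risk, broken out by group.

```python
# Toy audit of risk-score labels, in the spirit of an error-rate analysis.
# Records are invented for illustration: (group, labeled_high_risk, re_offended).
records = [
    ("black", True,  False),
    ("black", True,  True),
    ("black", True,  False),
    ("black", False, False),
    ("white", False, True),   # labeled low risk, yet re-offended
    ("white", False, True),
    ("white", True,  True),
    ("white", False, False),
]

def false_positive_rate(group):
    """Share of non-re-offenders in `group` who were labeled high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for g in ("black", "white"):
    print(g, round(false_positive_rate(g), 2))
```

In this toy data, the false positive rate for the Black group is far higher than for the white group even though the labels could look "accurate" in aggregate– which is why an overall accuracy figure can hide exactly the disparity at issue.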
Black people, an already marginalized group, face even greater disadvantages once they enter the prison industrial complex. According to the American Bar Association, African-Americans are incarcerated at five times the rate of white individuals in state prisons. They also make up 47% of wrongful conviction exonerations and 35% of those who are given the death penalty, despite making up less than 20% of the US population.
The American justice system is already flawed and skewed against Black people and other marginalized populations; therefore, an algorithm that both assists in this process and could be potentially informed by biased data, does much more harm than good.
The recidivism algorithm is an extreme case of discriminatory or biased AI, but it serves as a perfect example of the potential for AI; if adopted throughout more aspects of life, it could contribute to the disenfranchisement of already-disenfranchised groups, working in a manner similar to that of colonists years ago.
Popular Proposed Solutions to Problems Caused by AI
With disparities such as the above in mind, there are a number of proposed solutions for remedying faulty or biased AI.
Shakir Mohamed, Marie-Therese Png and William Isaac propose decolonial theory: asserting that current ethical frameworks surrounding the soundness of AI exist within a US- or Euro-centric lens, the scholars believe that decolonial theory could provide the analytic tools needed to evaluate global and colonial power relations.
Arguing that algorithmic coloniality takes the forms of policing, recidivism, exploitation and centralized power, the three propose tactics that are centered around community engagement and development, stating that giving the community a stake in how AI is developed could help to resist coloniality.
Intelligence Explosion proposes an action-based strategy, calling on the consumer to persuade AI researchers and those in positions of power to take AI ethics and safety more seriously.
But one of the most common proposals for problems caused by AI is the “diversity and inclusion” approach.
The 2021 Artificial Intelligence Index Report, conducted by Stanford University, found that female AI PhD graduates made up only 18.3% of all AI PhD graduates within the last 10 years. A 2019 Computing Research Association (CRA) survey found that among new US AI PhD recipients, 45% were white, 22.4% were Asian, 3.2% were Hispanic and 2.4% were Black. These figures have not changed much within the past three years.
The demographics of those who program technology, and especially AI, matter greatly to the way in which the technology functions or is applied. All human beings, regardless of their profession, are informed by implicit bias. Therefore, in a society shaped by coloniality, the embedding of discriminatory bias in our technologies is inevitable; if those who program these technologies bring diverse identities and experiences, it is more likely that disenfranchisement can be prevented.
But are these proposed solutions enough? Could integrating the community, or programmers of diverse identities, be enough to remedy faulty AI, along with the deeply-ingrained ideologies that are programmed into them?
Could Abolition Be The Answer?
Black queer womanist writer Audre Lorde once said that the master’s tools could never be used to dismantle the master’s house. The “master’s house” in this analogy is the present-day manifestation of coloniality, such as the over-policing and underfunding of impoverished communities, or the rate at which African-Americans are jailed in comparison to their white counterparts. The capabilities of technology, and particularly AI, are the “tools”, as those in power can use them as a means to establish dominance over systematically disadvantaged groups.
The heart of the problems posed by AI extends past implicit biases or faulty programming. Even with the implementation of cultural competency training, diverse programmers or community engagement, there is no guarantee that those who use AI will operate in an ethical manner– and if these technologies are made more accurate, they will only become more reliable tools for enacting disenfranchisement.
With that in mind, getting rid of AI altogether would not eliminate the root problems that produce inaccurate labels or biased algorithms– but it would certainly ensure that those in power have one less tool at their disposal.
As AI continues to evolve, the prospect of abolishing it altogether would not necessarily be something that happens soon– if at all– but the purpose of this analysis is not to favor one solution over another.
Rather, it is important that in conversations surrounding radical change, radical solutions are considered.
Abolitionism is not perfect, but it allows individuals to envision a world in which those placed at the top of the societal hierarchy cannot continue to oppress their perceived subordinates. And as society continues to adopt new technologies, it is imperative that we consider the potential ramifications for marginalized groups.
Change is constant, change is inevitable, and at times, change is slow– but change is needed.
And if programmers, activists and the like profess to favor freedom, they must be willing to at least consider agitating the status quo.
Otherwise, what’s the point?