
This month, the U.S. Department of Justice released upwards of 240,000 pages of FBI surveillance records on Dr. Martin Luther King Jr. The sheer quantity demonstrates the lengths to which the federal government went to disparage and discredit the civil rights leader during the 1960s. This unexpected disclosure has helped spur discourse about the intersection of surveillance and civil liberties.
The rapid technological innovation of the past decade has expanded the capabilities of digital surveillance tools such as AI-powered facial recognition and predictive policing models. While these tools are portrayed as efficient public safety measures, in practice they carry an insidious cost: digital surveillance technology is prone to disproportionately targeting marginalized communities. This leaves large swaths of the population vulnerable to racial profiling and over-policing, infringing on their civil and political rights.
Surveillance has long been used as a tool to oppress marginalized communities. Long before the development of AI technology, human cognitive biases permeated surveillance apparatuses, which were used to suppress and intimidate people of color. For instance, as mentioned above, the FBI's COINTELPRO program unfairly targeted Black civil rights leaders like MLK. Another example is the stop-and-frisk policies that discriminatorily targeted African American and Latino people.
Now, because people are taken out of the equation, the scale and speed at which surveillance can occur are unprecedented: the technology is ever present. This poses a distinct problem for communities of color, who are, because of this technology, at greater risk of having their civil liberties violated. And while modern surveillance tools claim to be objective, studies have shown that they are in fact biased, as they are trained on data that reflects society's biases. As a result, marginalized communities are not only subject to discriminatory surveillance; that surveillance is now constant. This disturbing reality has resulted in people of color being disproportionately targeted, arrested, and misidentified. It creates a feedback loop in which wrongful arrests based on flawed technology encourage more stringent surveillance, resulting in even more unjust arrests. It goes without saying that when communities are unfairly persecuted, social mobility and stability become a distant reality.
Major cities like San Francisco and Boston have attempted to address these concerns by banning or heavily regulating the use of facial recognition in policing. However, cities like New York City and Los Angeles continue to deploy these technologies in harmful ways. With even more technologies bound to be incorporated into surveillance, it is important for cities to start recognizing the bias in current tools and work toward reforming or limiting their use. If things are left as they are, pervasive racial profiling will become a bleak reality. The only way to change that reality is to demand change with our voices.