
The rapid integration of artificial intelligence (AI) across economic, social, political and defence systems has caused significant harm within the structures that govern technology. For those of us committed to an intersectional feminist vision of the internet, it is clear that AI, as it exists today, does not simply reflect the world’s inequalities but exacerbates them. Far from being neutral, AI is built upon, shaped by and reinforces the same structures of racism, sexism, colonialism, ableism and economic injustice that intersectional feminist advocacy seeks to dismantle.
Digital technologies have always mirrored societal power imbalances. AI intensifies this reality by replicating biases at scale and embedding them seamlessly into decision-making processes that profoundly impact marginalised communities. As Dr. Joy Buolamwini and Dr. Timnit Gebru’s research, Gender Shades, demonstrates, facial recognition systems have far higher error rates for women with darker skin tones than for lighter-skinned men, a disparity that is not incidental but reflective of who is included and excluded in training datasets and design processes. Safiya Umoja Noble, in her book Algorithms of Oppression, further discusses how search engines themselves operationalise racism and sexism through the ways they organise information. She highlights that data discrimination against people of colour, notably Black women, is inherent to search engines and is therefore carried into the datasets used to train AI tools and systems.
Continue reading at GenderIT.org.