Decoding Coded Bias for a more equitable artificial intelligence

Coded Bias, by Shalini Kantayya. Source: FIFDH.

During her first semester at MIT, researcher Joy Buolamwini discovered that computer-vision software would not recognise her face until she put on a white mask. That moment launched her fight against bias in technology, and it is the story behind the documentary Coded Bias, presented at the Geneva International Film Festival and Forum on Human Rights (FIFDH) in March.

American filmmaker Shalini Kantayya’s Coded Bias is now available on Netflix, an opportunity to reflect on the questions it raises: how does artificial intelligence affect our freedoms? What sexist and racist biases does it carry? And can we, as individuals, push back against these harms?

To get a better understanding of these issues, Sharada Mohanty, a Geneva-based expert and co-founder and CEO of AI Crowd, gives his view on the documentary and explores the role of researchers in addressing bias.

“It is important to work both ways, by talking to the public and the researchers. Unless we talk to the public, the researchers won’t take it seriously. With efforts like Coded Bias and other movements, you’re starting to ask the right questions about why there is bias in artificial intelligence. There is a pressure on researchers.”

In this new episode of the Geneva Solutions podcast, Mohanty joins the discussion launched by other experts at the FIFDH to imagine a just and equal algorithmic future.

The Geneva Solutions Podcast · #7 Coded Bias

The GS news podcast series Conversations on resilience is produced in partnership with Open Geneva and the University of Geneva. Interviews are also available on our SoundCloud page.
