Addressing AI Bias – Four Critical Questions
By Hayley Pike
As AI becomes more integrated into business, so does the risk of AI bias.
On February 2, 2023, Microsoft released a statement from Vice Chair & President Brad Smith about responsible AI. In the wake of the newfound influence of ChatGPT and Stable Diffusion, considering the history of racial bias in AI technologies is more important than ever.
The discussion around racial bias in AI has been going on for years, and with it, there have been signs of trouble. Google fired two of its researchers, Dr. Timnit Gebru and Dr. Margaret Mitchell, after they published research papers outlining how Google’s language and facial recognition AI were biased against women of color. And speech recognition software from Amazon, Microsoft, Apple, Google, and IBM misidentified speech from Black people at an error rate of 35%, compared with 19% for White people.
In more recent news, DEI tech startup Textio analyzed ChatGPT’s output and found that it skewed toward writing job postings for younger, White, male candidates, and that the bias increased for prompts describing more specific jobs.
If you are working on an AI product or project, you should take steps to address AI bias. Here are four important questions to help make your AI more inclusive:
- Have we incorporated ethical AI assessments into the production workflow from the beginning of the project? Microsoft’s Responsible AI resources include a project assessment guide.
- Are we ready to disclose our data sources’ strengths and limitations? Artificial intelligence is only as unbiased as the data sources it draws from. The project should disclose whom the data prioritizes and whom it excludes.
- Is our AI production team diverse? How have you accounted for the perspectives of people who will use your AI product but who are not represented in the project team or the tech industry?
- Have we listened to diverse AI experts? Dr. Joy Buolamwini and Dr. Inioluwa Deborah Raji, two Black female researchers affiliated with the MIT Media Lab, are pioneers in the study of racial bias in AI.
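Disclosing whom the data prioritizes and whom it excludes can start with something as simple as a representation audit before training. A minimal sketch of that idea, assuming a list of records with a demographic field (the field name and data here are hypothetical):

```python
# Minimal sketch: audit demographic representation in a training dataset.
# The "speaker_race" field and the sample rows are hypothetical.
from collections import Counter

def representation_report(records, field):
    """Return each group's share of the dataset for a demographic field."""
    counts = Counter(record[field] for record in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical speech-dataset metadata rows
data = [
    {"speaker_race": "White"}, {"speaker_race": "White"},
    {"speaker_race": "White"}, {"speaker_race": "Black"},
]
print(representation_report(data, "speaker_race"))
# {'White': 0.75, 'Black': 0.25}
```

A report like this doesn’t remove bias by itself, but it makes the skew visible so the team can document it and rebalance or supplement the data.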
Rediet Abebe is a computer scientist and co-founder of Black in AI. Abebe sums it up like this:
“AI research must also acknowledge that the problems we would like to solve are not purely technical, but rather interact with a complex world full of structural challenges and inequalities. It is therefore crucial that AI researchers collaborate closely with individuals who possess diverse training and domain expertise.”