A new report from New York University’s AI Now Institute titled Discriminating Systems: Gender, Race and Power in AI highlights the diversity crisis in the AI sector and its effect on the development of AI systems with gender and racial biases.
The lack of diversity in the AI sector and academia spans both gender and race. Recent studies show that women comprise only 15 percent of AI research staff at Facebook and 10 percent at Google. Women make up just 18 percent of authors at leading AI conferences, while more than 80 percent of AI professors are men. Representation of other minorities is also sparse: only 2.5 percent of Google’s workforce is black, while the figure is 4 percent at both Facebook and Microsoft.
According to the researchers, AI’s lack of diversity extends beyond the underrepresentation of women and other minority groups to power structures and to the creation and use of AI systems themselves. Above all, the report argues that historical discrimination in the AI sector needs to be addressed in tandem with the biases found in AI systems.
In addition to documenting the diversity crisis in the AI sector across race and gender, the report’s other key findings include:
- The AI sector needs to change how it addresses the current diversity crisis. This includes admitting that previous methods have failed and recognizing the connection between bias in AI systems and historical patterns of discrimination.
- Focusing on “women in tech” alone is too narrow to address the range of experiences in AI, particularly at the intersection of race, gender and other identities.
- Fixing the corporate pipeline won’t fix AI’s diversity problems. Other issues need to be addressed, such as workplace culture, power asymmetries, harassment in the workplace, exclusionary hiring practices, unfair compensation and tokenization.
- The use of AI systems for the classification, detection and prediction of race and gender needs to be reevaluated, as such systems reinforce preexisting patterns of racial and gender bias.
The report also provides recommendations for improving workplace diversity and for addressing bias and discrimination in AI systems. The former, it suggests, can be addressed by publishing compensation levels and ending pay and opportunity inequality; changing hiring practices to maximize transparency and diversity; increasing representation of underrepresented groups; and tying executive incentive structures to diversity goals.
Addressing bias and discrimination in AI systems, the report says, will require making those systems as transparent as possible, including tracking what the tools are used for and who benefits from them; conducting rigorous testing throughout the life cycle of AI systems; broadening research to include a wider range of disciplinary expertise; and determining whether AI systems should be deployed at all through risk assessment.
Read the full report here.