Toward an inclusive AI future for women


Stanford University researcher Andrew Ng describes artificial intelligence as “the new electricity”: a general-purpose technology reshaping business and societal landscapes. As AI-related technologies penetrate ever more aspects of society, managing AI will be a societal challenge as well as a technical one.

Because AI systems are designed by human beings with their own biases, algorithms applied to social and economic problems can perpetuate racism, sexism, ableism and other harmful forms of discrimination.

When flawed AI is substituted for human decision-making, algorithmic biases manifest as real-world harms, especially for marginalized communities, in areas such as health care and economic opportunity.

Gender biases in the use of AI

AI technologies reflect human decision-making and its limitations, learning from our merits and flaws alike. A machine's ability to process and analyze large volumes of data may compensate for our finite capacity to do so, but if that data is laden with stereotypical notions of gender, the results will perpetuate the bias.

So if artificial intelligence is trained on androcentric data, or on datasets with more male profiles than female ones, the results can marginalize women.

Reuters reported in 2018 that Amazon shelved its AI recruiting tool because it did not rate candidates for software-developer jobs and other technical posts in a gender-neutral way. The résumés it was trained on came largely from men, and as a result it downgraded résumés containing the word “women's,” as in “women's chess club captain.” This raises a serious question: How do we build inclusive, gender-neutral AI technologies?
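
The mechanism behind such a failure is easy to reproduce. Below is a minimal, hypothetical sketch in Python, not Amazon's actual system, of how a naive scorer trained on historical, male-dominated hiring data can learn to penalize any résumé containing the word “women's.” All résumé snippets, labels and scores are invented for illustration.

```python
from collections import Counter

# Invented training data: (résumé text, 1 = hired / 0 = rejected).
# "women's" never appears in a positive example, mirroring a history
# of male-dominated hiring.
training = [
    ("java developer chess club captain", 1),
    ("python engineer robotics team lead", 1),
    ("java developer women's chess club captain", 0),
    ("python engineer women's coding society", 0),
]

hired = Counter()  # how often each token appears in hired résumés
seen = Counter()   # how often each token appears overall
for text, label in training:
    for token in set(text.split()):
        seen[token] += 1
        hired[token] += label

def token_score(token):
    # Fraction of résumés containing this token that led to a hire;
    # tokens never seen in training get a neutral 0.5.
    return hired[token] / seen[token] if seen[token] else 0.5

def resume_score(text):
    tokens = text.split()
    return sum(token_score(t) for t in tokens) / len(tokens)

print(resume_score("java developer chess club captain"))          # 0.50
print(resume_score("java developer women's chess club captain"))  # ~0.42
```

Because “women's” never co-occurs with a positive outcome in this invented history, the scorer gives it the lowest possible weight, dragging down an otherwise identical résumé. No one programmed a rule to penalize women; the skewed data did.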

While the gender gap in data is not always life-threatening, the design and use of AI in sectors such as health care, finance and education can be a detriment to women's lives. Women's World Banking found that credit-scoring AI systems commissioned by global financial-service providers are likely to discriminate against women, excluding them from loans and other financial services.

The general belief is that good data, data that is neither incomplete nor skewed, can help close gender gaps. Concerns persist, however, that if the right questions are not asked in the data-collection process, and if women are not among those asking them, gender gaps can widen.

Inclusive development of AI

The real challenge of building human-centered AI goes far beyond algorithm design or computing resources; it lies in access to accurate, reliable and timely data, and in ensuring that this access is continuous.

Data availability determines what problems are worked on and what populations are served. Leaders driving social change, as well as leaders at organizations developing AI technologies, have the responsibility to advance gender equity.

One illustration of how inclusivity at the decision-making stage makes a difference is the Lacuna Fund. A multi-stakeholder collaborative comprising technical experts, thought leaders and end users, the Lacuna Fund aims to make AI more equitable by creating representative training datasets, thereby reducing bias.

It is designed for, and by, the communities it will serve, giving scientists, researchers and social leaders a more accurate picture of low- and middle-income perspectives.

Long-term solutions

According to the World Economic Forum, only 22% of AI professionals globally are female, compared with 78% who are male. A diverse workforce is better able than a homogeneous one to identify its own biases, ask the right questions and resolve issues when interpreting data, testing solutions or making decisions.

Women need to be included in the decision-making process, which remains dominated by technical specialists. We need to invite more women and girls to sit at the table when insights are gathered, and we need economists, sociologists, teachers and social workers there too.

Leaders in technology and those driving social change will succeed in increasing women's participation in AI-related fields only if more girls are given the opportunity to pursue STEM (science, technology, engineering and mathematics) education.

Globally, women are under-represented in the sciences, so to achieve gender equity in the workforce, educators must intentionally and consistently work to break gender norms in the classroom.

Another step toward mitigating gender bias is to set up gender-sensitive governance structures and policies for responsible AI. Human oversight needs to extend beyond those operating AI systems to those leading the organizations behind them.

Even as diverse groups of developers scrutinize data for biases and experts from various quarters of society take part in decision-making, the responsibility for developing equitable AI must trickle down from the boardroom.

Eliminating bias in AI and closing the gender divide are challenging goals, but not impossible ones. The Algorithmic Justice League and the first genderless voice for virtual assistants are notable initiatives advancing the goal of making AI less biased.

At the heart of AI are humans and their behavior, and it will take humans working together to solve its problems. It is every stakeholder's responsibility, from those who design these systems to those who deploy them, to ensure that AI remains fair and equitable to all.

Deepali Khanna is the managing director of the Rockefeller Foundation's Asia Regional Office.
