Major corporations teamed up to fight AI bias

American corporations have teamed up to form the Data & Trust Alliance, which has developed a software assessment system to fight AI bias.

When hiring, HR departments increasingly turn to artificial intelligence (AI) technologies: AI tools screen CVs, conduct video interviews and assess the psychological state of applicants.

Major American corporations have decided to join forces to prevent these technologies from producing biased results that could perpetuate or even exacerbate discrimination.

According to The New York Times, last week they launched the Data & Trust Alliance, whose members come from a range of industries and include CVS Health, Deloitte, General Motors, Humana, IBM, Mastercard, Meta (the parent company of Facebook), Nike and Walmart.

The organization does not engage in lobbying or research. With the participation of corporate and third-party experts, the Data & Trust Alliance has developed a system for evaluating software that uses AI. The system consists of 55 questions covering 13 topics, and its purpose is to combat algorithmic bias.

“This is not just an acceptance of principles, but a real implementation of concrete measures,” said Kenneth Chenault, former American Express CEO and co-chairman of the Data & Trust Alliance.

Today’s AI software is data driven, so it matters what data is used and how it is used. If the data used to train an algorithm mostly describes white men, the results it produces are likely to discriminate against Black people and women. If, for example, a model that predicts success at a company is trained on the records of employees who performed well in the past, its output may simply reinforce pre-existing bias.

Even seemingly neutral datasets, when combined with others, can lead to results that discriminate against applicants by race, gender or age.
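To make this concrete, here is a minimal, purely illustrative Python sketch: a classifier trained on synthetic “historical” hiring decisions that favored one group reproduces that skew when scoring applicants. All data, features and thresholds below are invented for the example and have nothing to do with any vendor’s actual software.

```python
# Illustrative only: a model trained on skewed historical hiring data
# reproduces that skew. Every number here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: a qualification score and a group label (0 or 1).
score = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)

# Historical decisions: equally qualified members of group 1 were hired
# less often -- the bias we pretend lives in the training data.
hired = (score + 0.8 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

# Train on the biased history, with group available as an input feature.
X = np.column_stack([score, group])
model = LogisticRegression().fit(X, hired)

# The model's predictions inherit the historical disparity.
pred = model.predict(X)
for g in (0, 1):
    print(f"predicted hire rate, group {g}: {pred[group == g].mean():.2f}")
```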

Internal research by Data & Trust Alliance member companies has shown that their HR departments use AI-based software that is often sourced from third-party vendors. Enterprise users generally do not know what data a vendor used to train its AI models, or how those models work.

To develop the solution, the alliance drew on member companies’ employees from HR, data analytics, legal and procurement departments, as well as software vendors and outside experts. The collaboration produced a system for detecting, measuring and mitigating bias in how training data is handled and how HR software is developed.
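For a sense of what “measuring” bias can look like in practice, below is a small, hypothetical Python sketch of one widely used HR-analytics check, the four-fifths (disparate impact) rule, which compares selection rates across groups. It is only an illustration of the general idea; the alliance’s own system is a questionnaire that scores vendors’ practices, not a piece of code.

```python
# Illustrative only: the "four-fifths rule" disparate impact check.
# This is NOT the Data & Trust Alliance's questionnaire.
from collections import Counter

def disparate_impact(decisions):
    """decisions: iterable of (group, selected) pairs.
    Returns the min/max selection-rate ratio and per-group rates."""
    selected, total = Counter(), Counter()
    for group, picked in decisions:
        total[group] += 1
        selected[group] += int(picked)
    rates = {g: selected[g] / total[g] for g in total}
    return min(rates.values()) / max(rates.values()), rates

ratio, rates = disparate_impact([
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
])
print(rates)                   # per-group selection rates
print(f"ratio = {ratio:.2f}")  # below 0.8 is the usual red flag
```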

Let me remind you that I also wrote that a scientist discovered a vulnerability in the universal Turing machine.

By Vladimir Krasnogolovy
