Topics: artificial intelligence (AI), social biases, recruitment, biases in AI
Dr Dana McKay, Senior Lecturer in Innovative Interactive Technologies
“AI algorithms are mirrors, reflecting the data they are fed. Given most data reflects social biases, many AI algorithms automatically present biased results.
“This can have significant negative consequences, especially when it comes to recruitment.
“A few years ago, Google was found to be showing ads for high-paying jobs to men but not women, because fewer women held those jobs.
“This reinforced the very bias the underlying data represented: women couldn’t apply for the jobs they didn’t see.
“Similarly, a sentencing algorithm used in the US was supposed to remove judicial bias by assigning sentences automatically. Because the algorithm was developed from existing judicial decisions, it assigned African Americans longer sentences.
“These biases are particularly insidious because we often don’t know how an AI-generated recommendation was reached, and because computers are widely believed to be unbiased.
“With many employers now using some form of AI in recruiting and hiring, there’s a question of whether candidates are being rejected solely because they don’t match a biased pattern in the data.
“The law is catching up with these problems though, and a recent landmark case in the US made companies legally responsible for using biased recruiting software, even if it is off the shelf.
“Similar laws could apply here in Australia; Victoria’s ‘positive duty’ law, for example, requires employers to eliminate discrimination.
“Ultimately, we always need to remember that AI algorithms are only as good as the data they are based on.”
Dr Dana McKay studies the intersection of people, technology and information, and her focus is on ensuring advances in information technology benefit society.
***
General media enquiries: RMIT External Affairs and Media, 0439 704 077 or news@rmit.edu.au