There are no ‘AI workers’, only human ones

The idea of an ‘AI workforce’ and ‘AI workers’ has been gaining traction, with a recent example being HR software company Lattice integrating AI into its org charts before reversing course amid backlash. An RMIT expert comments on how to mitigate the risks to workers as AI is integrated into workforces.

Dr Emmanuelle Walkowiak, Vice-Chancellor’s Senior Research Fellow

“GenAI represents a profound transformation in human-machine interaction and collaboration, as machines and workers can ‘communicate’ through a common language.

“This technology can drive significant productivity gains in the workforce.

“However, optimising these productivity gains sustainably depends on protecting workers' fundamental rights like fair wages, working conditions and job security as AI becomes more widely deployed.

“Our research at RMIT shows that with GenAI, productivity and risks are inseparable. These risks include privacy breaches, cybersecurity, breaches of professional standards, bias, misinformation, accountability, and intellectual property risks.

“Our findings are clear: workers are highly exposed to AI risks.

“Importantly, AI will never be a ‘workforce’ and we will not have ‘AI workers’. The idea of reporting ‘digital workers’ (i.e. technology) the way we report human employees is nonsensical.

“If you naively consider AI as an autonomous agent at work, you should audit your AI risks.

“Language like this promotes the well-known narrative of job displacement by technology and creates incentives to embed that narrative in workplace organisational processes.

“It dangerously dehumanises work and obscures the potential of digital transformation to complement workers.

“It is true that we need to design new AI-resource management practices to mitigate AI risks and ensure a safe and ethical deployment of AI.

“The role of HR management should be to upskill workers to use AI, improve job quality with AI, ensure that the distributional effects of AI are shared, and support collective bargaining for the deployment of AI.

“We must upskill computer scientists to underpin this technology with ethical parameters and new procedures that protect data privacy, prevent worker surveillance through AI monitoring, guard against wage theft, and establish accountability rules for AI decisions affecting workers.

“This is how we optimise the return on investments in AI at work.”

Emmanuelle Walkowiak is Vice-Chancellor’s Senior Research Fellow at RMIT in the School of Economics, Finance and Marketing. She leads the FLOW-GenAI initiative (Future of Labour, Organisation and Work with GenAI).

***

General media enquiries: RMIT External Affairs and Media, 0439 704 077 or news@rmit.edu.au

25 July 2024


