UTS Human Technology Institute report reveals AI's risks in Australia
Organisations across Australia are routinely deploying artificial intelligence (AI) in their operations without clear strategies or adequate safeguards, undermining AI’s potential benefits.
Around two-thirds of Australian organisations are already using, or actively planning to use, AI systems to support a wide variety of functions. Corporate leaders need to rapidly appreciate their existing legal duties and emerging responsibilities regarding AI, according to a new report from the Human Technology Institute (HTI) at the University of Technology Sydney (UTS).
Led by HTI’s Lauren Solomon and Professor Nicholas Davis, The State of AI Governance in Australia provides a timely overview of how organisations are approaching the governance of AI in Australia today. Its findings are based on surveys, structured interviews, and workshops engaging more than 300 Australian company directors and executives, as well as expert legal analysis and extensive desk research.
“AI systems are rapidly becoming essential to creating value across all sectors, but existing governance systems are not managing the commercial, regulatory and reputational risks that AI systems pose,” Professor Davis said. “Directors and senior executives are putting their organisations – and the broader community – at risk.”
The report reveals that corporate leaders are largely unaware of how existing laws govern the use of AI systems in Australia.
“AI systems can cause real harm to people, both to individuals and society more broadly,” said lead author Lauren Solomon. “Threats to safety, discrimination, loss of personal information, and manipulation need to be addressed by organisations using AI systems to ensure our lives are improved by this innovation.
“While reform is undoubtedly needed, AI systems are not operating in a ‘regulatory Wild West’. AI systems are subject to privacy, consumer protection, anti-discrimination, negligence, cyber security, and work, health and safety obligations, as well as industry-specific laws.”
The report finds that both company directors and senior executives see huge opportunities for AI systems to improve productivity, process efficiencies, and customer service. But investment in AI systems and technical skills has not been matched by investment in AI system management and governance. Furthermore, corporate leaders report that they lack the awareness, skills, knowledge and frameworks to use AI systems effectively and responsibly.
“AI systems can add huge value to organisations, but they introduce new risks and exacerbate existing ones,” Professor Davis argues. “AI systems tend to be used in more complex contexts, rely on more sensitive data and are often less transparent than traditional IT systems. They require special attention from boards and senior executives.”
“This report serves as a wakeup call for corporate Australia. Action can and should be taken now to address governance failures to improve outcomes for organisations using AI and the broader community,” said Lauren Solomon.
The report identifies four areas where corporate leaders should take urgent action to improve the governance of AI in their organisations: developing suitable strategies, establishing governance systems, building new forms of expertise, and fostering human-centred cultures around AI.
The State of AI Governance in Australia forms part of HTI’s Artificial Intelligence Corporate Governance Program (AICGP). With the support of philanthropic partner Minderoo Foundation, and project advisory partners KPMG, Gilbert + Tobin and Atlassian, the AICGP aims to identify the governance strategies that can support investment in accurate and effective AI systems, while ensuring safe and inclusive outcomes.