UTS Leads Dialogue on Technology, Ethics, and AI Responsibility
The intersection of technology, ethics and responsible AI development, and their impact on our shared future was the focus of the 2024 UTS Vice-Chancellor’s Democracy Forum with guest speaker Meredith Whittaker, President of the Signal Foundation.
The Signal Foundation is an American non-profit organisation founded in 2018 with the mission to “protect free expression and enable secure global communication.” It has developed the encrypted messaging app Signal, which is popular worldwide due to its privacy protections.
In her keynote, Whittaker laid bare the “toxic surveillance business model” of big tech monopolies, and how they rely on collecting enormous amounts of data to “know the customer” and sell stuff.
This mountain of data is the foundation of the current explosion in Artificial Intelligence (AI) applications, she said, such as facial recognition technology.
“They’re selling the derivatives of the toxic surveillance business model as the product of scientific innovation,” Whittaker said.
“And they’re working to convince us that probabilistic systems that recognise statistical patterns in massive amounts of data are objective, intelligent and sophisticated tools,” she said. And the idea that we should step aside and trust these AI tools is “incredibly dangerous”.
“The metastatic shareholder capitalism driven pursuit of endless growth and revenue that ultimately propels these massive corporations frequently diverges from the path toward a liveable future.”
Whittaker should know. She worked at Google for 13 years, and campaigned to change Google’s culture, helping to organise the “Google Walkouts” and draw attention to claims of sexual harassment, gender inequality and systemic racism.
She said the danger is that people are given no real choice about the technology they must use in order to participate meaningfully in economic and social life.
“We are conscripted to use technologies that ultimately accrue benefit to these corporations and their business model.”
However, she did offer a ray of hope. She said it was possible to set the terms, and to rebuild and reimagine technology in a way that serves real social needs and reflects the world we want to live in.
“Signal’s massive success demonstrates that tech that prioritises privacy, rejects the surveillance business model, and is accountable to the people who use it, is not only possible but can flourish and thrive as a non-profit supported by the people who rely on it.”
Following her keynote, Meredith was joined by moderator Professor Ed Santow, Director of Policy and Governance at the UTS Human Technology Institute, and panel guests UTS Dean of Engineering Professor Peta Wyeth and UTS Associate Professor of Law Ramona Vijeyarasa.
Their conversation centred around the need for a balanced approach to regulation that prioritises individual rights while still enabling the development of cutting-edge technologies.
“Australia is at the brink of deciding how we’re going to regulate the use of AI in ways that affect our lives,” said Associate Professor Vijeyarasa, who developed the Gender Legislative Index, an innovative tool that incorporates machine learning to measure how well legislation advances gender equality.
She said that rather than looking to the UK, which takes a soft approach and has voluntary guidelines around the use of AI, we should instead look to places like Canada and Brazil.
“If we did, we could learn what a human-rights centred model would look like. The Brazilian draft bill makes it very clear that AI driven technologies have a disproportionate impact on certain people based on gender, race, class or disability.
“If the bill passes, a person impacted by an AI driven decision can appeal it, ask for a human to be involved, and they can ask for their data to be anonymised or deleted,” she said.
Professor Wyeth, an expert in the field of human-computer interaction and a member of the Government’s AI expert working group, highlighted the complexities around holding companies, particularly big tech, to account.
“When we’re thinking about our regulatory environment, it’s not just the act of creating it. It’s then how do we monitor? How do we hold those developers accountable? What recourse is there if they are found to be in breach?”
Whittaker said that while these technologies are governed in the boardrooms of a handful of corporations whose objective function is and always has been profit and growth, the public good will never be the overriding principle.
“It’s always going to be the good of the shareholders, and the good of the large corporations, and governments that have the capital to license these capital-intensive systems.”