Stellenbosch University seminar focuses on ChatGPT

The big buzzword of 2023, ChatGPT, came under scrutiny in the first instalment of a brand-new lecture series presented on the Stellenbosch University (SU) main campus this week. Themed “The Future of Intelligence: ChatGPT and its Implications”, this first lecture in the Media Futures Seminar Series, hosted by the Department of Journalism and the Faculty of Arts and Social Sciences, led to fascinating insights.

The ChatGPT phenomenon was introduced by Prof Herman Wasserman, newly appointed Chair of the Department of Journalism at SU, who touched on some of the ethical dilemmas the popular new artificial intelligence (AI) technology has introduced to the media and academic landscape. Thorny questions relating to plagiarism, privacy, censorship, fairness and AI’s impact on inequality were also raised by participants.

“Although it’s somewhat of a cliche that we’re educating students for jobs that don’t yet exist, it is uncannily true when we consider that the journalism students sitting here this afternoon, maybe next year, when they enter the world of work, will have to contend with press releases churned out by a chatbot, which did not exist this time last year,” Wasserman noted in his introduction.

Rector and Vice-Chancellor Prof Wim de Villiers praised the organisers for tackling the topic. “Consumers are faced with ‘information disorder’,” De Villiers said. “We have to navigate vast amounts of fake news, disinformation, hate speech and conspiracy theories circulating online.” He referred to the fear ChatGPT has created in academic circles and the radical impact digital technologies have on different disciplines. “I’m very pleased to see the Department of Journalism taking the lead on such a topical issue and setting the tone for future transdisciplinary research and social impact.”

The ins and outs of the artificial intelligence behind ChatGPT were explained by Prof Bruce Watson of the Centre for AI Research at the School for Data Science and Computational Thinking, who called himself “an AI optimist”. It is difficult to overstate the impact ChatGPT will have on our daily lives, Watson said. “But it’s important to remember it’s not sentient and has no hope of being so. It’s just a huge model for predicting what the next word should be, based on what it has learned. But it has no feelings, actual thoughts or proper feedback. Still, the implications are enormous. AI is already exceptionally good at writing short pieces, creating working programs and malicious computer viruses.”

AI is here to stay and students will increasingly use it as it gets more powerful, confirmed Dr Antoinette van der Merwe, Senior Director: Learning and Teaching Enhancement at SU. She described the “fight or flight” reaction many academic institutions have had in response to ChatGPT. Some universities have already banned the technology, she noted. “Others will try to outsmart it or catch it out and try to implement better detection software. I think it’s a natural response to this cat among the pigeons,” Van der Merwe said.

“Students today need to be prepared for a future in which writing with AI is already becoming essential. Just as word processor functions such as spelling and grammar checks have become accepted and integrated into writing practices, so too will text generators. So, I would argue we need to reimagine, rethink and refocus teaching, learning and assessment so we can equip our graduates with the necessary skills to work responsibly with AI,” Van der Merwe said.

Instead of focusing on the tool, it is essential to reflect on the factors that distinguish humans from technology, such as academic integrity, creativity, critical thinking, common sense and social connections, Van der Merwe argued. Socially responsible engagement with AI text generators needs to be both creative and critical. At academic institutions, ChatGPT has the potential to support more personalised and interactive experiences for students as well as more efficient and effective approaches for lecturers.

“Students should be encouraged to use ChatGPT for peer learning and feedback as a means to ask good questions while also critically evaluating the answers it provides so they can formulate their own opinions. We need to partner with the technology. But we have to remain critical and very self-aware,” she cautioned. Van der Merwe ended her presentation with an AI-generated quote: “The potential uses of ChatGPT are still relatively new and untested and further research is needed to fully understand their potential benefits and limitations.”

Meaningful conversations with machines are now a reality, and this has far-reaching implications, said Dr Fanie van Rooyen, editor of Quest magazine. While it makes some journalistic tasks such as news gathering and data sifting much easier, ChatGPT could make discerning the truth more difficult. Van Rooyen warned that AI like ChatGPT will influence public perception and could entrench existing societal biases. He stressed the importance of human involvement in content creation. “While AI language models can aid journalism, it’s important for journalists to remain involved in the writing, editing and verification processes to ensure accuracy and ethical reporting. In that sense, AI makes journalism more important than it has been. Reputable, trustworthy, non-sensationalist news outlets will become more valuable,” he said. Van Rooyen quoted the American journalist and media entrepreneur Steven Brill, who predicted that in the future AI will be able to produce better writing and analysis than most professional reporters.

Two researchers from Research ICT Africa, Dr Scott Timcke and Zara Schroeder, gave a joint presentation on the implications of AI for a country such as South Africa where inequality and economic exclusion are still major obstacles. “No technological product is neutral,” Timcke noted. While AI can potentially be a tool to provide affordable educational, medical and legal information and advice to the poor, it is likely that profit maximisation will trump social protections, he said. The flip side of such technological advancement is that in the future only rich people might have access to human specialists while the poor will have to rely on machine-generated expertise.

Schroeder warned that since its release in November 2022, ChatGPT has been flagged as having misogynistic and racist biases. “AI technologies encode not only explicit hegemonic social attitudes but the implicit logic of the society in which these technologies are based. It can amplify toxic attitudes such as racism, sexism, violence and hate speech,” she said. “We will need to be thoughtful about how we use these systems and how they are connected with the politics of who belongs and who are excluded.” She also mentioned that it is a cause for concern that most AI technology is currently built on English and excludes people who cannot interact with it in their own language.

There was broad consensus among the experts taking part in the first Media Futures Seminar that ChatGPT can be a tool for both good and bad. The message was clear: As our lives are increasingly consumed and mediated by new technologies, we should approach AI with cautious optimism. “This increases the imperative for all of us to equip ourselves with the skills to consume, curate and co-create media in ethical, critical and responsible ways,” Wasserman concluded.