AI tools present an existential threat to diversity and editorial independence in journalism

The journalism industry must ensure strict guidelines are in place in newsrooms to combat the threat that Generative AI tools such as ChatGPT, Bard, and DALL-E pose to diversity and editorial independence, three media experts from Birmingham City University have cautioned.

Leading academic and industry figures Diane Kemp, Marcus Ryder, and Paul Bradshaw, from the Sir Lenny Henry Centre for Media Diversity at Birmingham City University, have produced an advisory document to help media professionals avoid propagating in-built bias and amplifying diversity problems already present in journalism when using Artificial Intelligence apps and services.

According to a survey by the World Association of News Publishers, half of all newsrooms currently use Generative AI tools, yet only a fifth have guidelines in place, and it is unclear whether any of those guidelines explicitly address diversity and inclusion.

Professor Paul Bradshaw, Course Leader for the MA in Data Journalism at Birmingham City University, said: “As journalists start to experiment with different ways to incorporate generative AI tools such as ChatGPT into their workflow, it’s vital that we think about how editorial independence is maintained.

“Generative AI has enormous potential to aid our reporting and spark different ideas, but as it learns from the voices already heard and the issues already explored, it will reinforce existing biases and blind spots if we don’t understand the most effective ways to use it.

“A central part of our role as journalists is giving a voice to the voiceless and shining a spotlight on important issues.

“These principles are a first step towards establishing those best practices.”

Under the six guiding principles, which were peer-reviewed by colleagues in journalism and academia, journalists using Generative AI are urged to report mistakes and biases, build diversity and transparency into their prompts, and view generated text or copy with a healthy scepticism.

The guidelines suggest that a lack of plurality in the ownership of media outlets, and a lack of diversity and representation in original source material, will continue to cause inherent imbalances and inaccuracies across the generative AI industry unless they are rebalanced.

Visiting Professor Marcus Ryder, Head of External Consultancies at the Sir Lenny Henry Centre for Media Diversity, said: “Journalists urgently need a set of guidelines on how to work with ChatGPT and generative AI programmes responsibly, in a way that does not exacerbate existing diversity issues.

“The Sir Lenny Henry Centre for Media Diversity has been disappointed that, in all the recent debate around ChatGPT and generative AI, there has been little if any acknowledgement of how it could affect diversity in society in general – and in journalism in particular.

“Diversity problems in IT have been well documented for years, and report after report has highlighted the issues journalism has with regard to diversity. Journalistic use of AI potentially creates a ‘perfect storm’ in which diversity and underrepresented voices are the victims.

“This new set of guidelines is not only an important contribution to the current debate around the use of ChatGPT and generative AI, but also represents a clarion call for diversity and inclusion to be central to all policies, regulations and best practice that emerge in the future around the use of generative AI.”

The guidelines were reviewed by, and incorporate feedback from, leading UK media practitioners and academics.