Exploring Collaborative and Creative AI Futures: New Research Initiative Launched
A £2.4 million initiative has been launched to help organisations develop solutions to pressing questions around the responsible use of artificial intelligence (AI).
Researchers will address a range of AI-related challenges in industry, public organisations and the third sector through a series of fellowships.
The Fellows, appointed from universities across the UK, will apply research expertise from the arts and humanities, including data ethics, copyright law, digital design and qualitative analysis, to address questions around the responsible use of AI.
The Bridging Responsible AI Divides (BRAID) Fellowships are part of the BRAID programme. BRAID is led by the University of Edinburgh in partnership with the Ada Lovelace Institute and the BBC. The £15.9 million, six-year programme is funded by the Arts and Humanities Research Council (AHRC), part of UK Research & Innovation (UKRI).
Each of the 17 Fellows will partner with an organisation from the public, private or third sector, uniting expertise to tackle existing, new or emerging AI challenges.
Partners from the technology sector include Adobe, Datamind Audio, Diverse AI, Mozilla Foundation and Microsoft.
Project partners from regulatory and public organisations include Ada Lovelace Institute, The Alan Turing Institute, BBC, Institute for the Future of Work and the Public Media Alliance.
Other Fellows will work with arts and cultural institutions, including Arts Council England, the Edinburgh International Book Festival, the Serpentine Galleries, and the Royal Botanic Gardens, Kew.
The collaborative projects will address questions including approaches to the use of generative AI in the media, the societal and ethical factors shaping the adoption of AI in medical settings, the development of a responsible AI innovation framework for the arts and culture sector, and the needs of creatives using AI.
Further collaborations will research the complex issue of copyright and generative AI in the creative and cultural industries, including the impact of generative AI on novel writing, the creation and ownership of AI-generated sounds, and the effects of generative AI on publishing.
The impact of AI can already be felt in many areas of our lives. It will transform our jobs and livelihoods, and affect areas as diverse as education, policing and the creative industries. It is vital that we ensure its responsible development and use. The BRAID Fellowships announced today will play an invaluable role in informing the practice and tools crucial to ensuring this transformative technology is used responsibly, for the benefit of all of society.
Project leads at the University of Edinburgh said the Fellowships will support the creation of an AI ecosystem that enables researchers, industry and public sector leaders to develop a deeper understanding of AI and its challenges and opportunities.
The 17 Fellowships offer opportunities for deeper relationships and joint impact, moving towards a genuine embedding of arts and humanities knowledge within how we think about, develop and deploy AI in practice and in the world. It is our hope that with these connections, and working towards common challenges across sectors and diverse communities, we will take substantial strides towards a more responsible AI ecosystem.
We are reaching a critical point in society where businesses and the public sector recognise that deploying AI systems safely and responsibly requires new kinds of knowledge and expertise, which can be challenging to access. The BRAID Fellowships aim to bring researchers together with industry and the public sector to help bridge the divide between technical capability and the knowledge of how to use it wisely and well, ensuring that the benefits of AI are realised for the good of us all.
The recipients of the BRAID Fellowships are:
- Professor Nick Bryan-Kinns – University of the Arts London – project partner, BBC R&D
Explainable Generative AI in the BBC
Developing explainable AI approaches for creative practice within the BBC and beyond.
- Professor Mercedes Bunz – King’s College London – project partner, Serpentine Galleries
AI art beyond the gallery: exploring the capacity of cultural institutions to impact tech policy.
- Ms Clementine Collett – University of Cambridge – project partner, Institute for the Future of Work
Co-designing responsible technology and policy for the impact of generative AI on the writing and publishing of the novel.
- Dr Bahareh Heravi – University of Surrey – project partner, BBC R&D
Enhancing Responsible AI Literacy at BBC and beyond.
- Dr Federica Lucivero – University of Oxford – project partner, Ada Lovelace Institute
Anticipating Today: Co-creating techno-moral tools for responsible AI governance.
- Dr Caterina Moruzzi – The University of Edinburgh – project partner, Adobe
CREA-TEC: Cultivating Responsible Engagement with AI Technology to Empower Creatives.
- Dr Oonagh Murphy – Goldsmiths, University of London – project partner, Arts Council England
Developing a Responsible AI Innovation Framework for the subsidised arts and culture sector, with Arts Council England.
- Dr Martin Parker – University of Edinburgh – project partner, Datamind Audio
Machining Sonic Identities
Exploring the issue of digital sound identity, including how AI creates sound, and questions of provenance and ownership.
- Dr Kyrill Potapov – University College London – project partner, Microsoft Research
Human-Centred AI for the Equitable Smart Energy Grid
- Dr Sanjay Sharma – University of Warwick – project partner, Diverse AI
Inclusive Futures: Radical Ethics and Transformative Justice for Responsible AI.
- Dr Anna-Maria Sichani – University of London – project partner, The Alan Turing Institute
Responsible data, models and workflows: Responsible AI digital skills provision for the cultural heritage community.
- Ms Caroline Sinders – University of the Arts London – project partner, Mozilla Foundation
Centering Creativity and Responsibility for AI Tools for Artists, Creatives and Makers.
- Dr Alex Taylor – University of Edinburgh – project partner, Microsoft Research
Muted registers: A feminist intersectional (re)figuring of red-teaming.
- Dr Pip Thornton – University of Edinburgh – project partner, Edinburgh International Book Festival
Writing the Wrongs of AI: LLMs, copyright and creativity in the age of Generative AI.
- Dr Beverley Townsend – University of York – project partner, Microsoft Research
Regulatory guidelines informing the societal and ethical factors shaping medical AI adoption.
- Dr Paula Westenberger – Brunel University London – project partner, Royal Botanic Gardens Kew
Responsible AI for Heritage: copyright and human rights perspectives.
- Dr Kate Wright – University of Edinburgh – project partner, Public Media Alliance
Responsible AI in International Public Service Media.
BRAID is dedicated to integrating arts and humanities research more fully into the responsible AI ecosystem, as well as bridging the divides between academic, industry, policy and regulatory work on responsible AI. The project is led from Edinburgh Futures Institute, University of Edinburgh, and supported by Edinburgh Innovations, the University’s commercialisation service.