University of Exeter: AI set to transform science and engineering in Canada

The panel, which included Professor Sabina Leonelli from the University of Exeter, found that AI has the potential to spur innovation and advance scientific understanding beyond the limits of human abilities, but could also deepen existing inequities, perpetuate human biases, and even create new ones.

The National Research Council of Canada asked the CCA to examine the legal, regulatory, ethical, social, and policy challenges associated with deploying AI technologies to enable scientific and engineering research design and discovery.

The report, “Leaps and Boundaries”, identifies the actors whose decisions will determine how the challenges will be addressed and how various fields and sectors could potentially integrate AI into their practices.

Professor Leonelli said: “The report is the result of a year-long effort, brilliantly fostered by the Council of Canadian Academies, to document cutting-edge developments and prospects for the use of AI within research. It clearly signals the need and the potential for AI to help address discrimination and bias in the production and use of scientific research, rather than sweep serious concerns under the carpet in the name of swift innovation. I hope that these findings will be widely read and discussed by all those engaged in AI development.”

Teresa Scassa, SJD, Chair of the Expert Panel, said: “The cross-cutting nature of AI means that no field will remain untouched by this technology. To maximize its benefits, it will be critical that the social and ethical implications of AI are addressed at the earliest stages of development, through to application, and with greater collaboration among researchers across disciplines and sectors.”

The report says Canada could also risk losing its competitive advantage in AI unless it takes decisive steps to move beyond its existing strengths. To date, growth in AI has been focused heavily on research and talent, but there is a pressing need to integrate knowledge and skills across multiple disciplines more broadly to support the responsible development and use of the technology. AI is already used for a range of tasks in science and engineering, such as analysing and interpreting data.

It is anticipated that, in the future, AI will develop novel scientific hypotheses and experiments and create new engineering design processes with minimal human involvement. This rapid pace of technological development has created various legal and regulatory hurdles, including issues related to data governance, intellectual property, and the management of acceptable levels of societal risk.

“AI can lead to significant advances in science and engineering, but not without recognizing potential pitfalls,” said Eric M. Meslin, PhD, FRSC, FCAHS, President and CEO of the CCA. “Realizing the promise and potential benefits of AI will require addressing possible biases, from the people who build it, the institutions and governments whose policies are intended to regulate it, and the organizations that use it.”
