University of Calgary research team investigates ethical use of AI


Sounds convincing and attention-grabbing, right? The only thing you need to know about the first paragraph of this story is that it wasn’t written by a human.

Instead, it was written by a machine learning tool called ChatGPT, the latest AI innovation from tech company OpenAI. The only human touch on the paragraph was a prompt asking the machine to write a news story about academic integrity being compromised by artificial intelligence.

That sentiment doesn’t tell the full story, though: a transdisciplinary team of University of Calgary scholars is investigating how AI writing tools can be used ethically for teaching, learning and assessments.

Does using AI equal academic cheating?
“We don’t start from the point that the use of AI automatically constitutes academic cheating,” says Dr. Sarah Elaine Eaton, PhD’09, an associate professor in the Werklund School of Education and principal investigator on the team.

“In fact, we want to resist knee-jerk reactions and instead explore how we in higher education can really support and advance learning and ethical teaching by using these technologies in proactive, ethical ways.”

With a 2022 University of Calgary Teaching and Learning grant from the Taylor Institute for Teaching and Learning, Eaton and her team aim to better understand the capabilities and ethical implications of AI technologies, and to help educators develop ethical, accessible teaching practices that benefit student learning outcomes.

“We know the tech is here, we know we can’t stop it, we know we can’t control it, we know that students are already using it, and, in this case, we know the tech is ahead of the educators.”

AI tools currently more like Wikipedia
Eaton says professors may already be dealing with this at the end of the semester, as students hand in assignments that have been AI-assisted and professors may be tempted to automatically label that as misconduct.

“The reality is our academic misconduct policy and every academic misconduct policy in the world right now has no mechanisms to address this as academic cheating,” she says.

Eaton says it would take a very nuanced interpretation of current policy to make the case that AI assistance is academic cheating, though some professors, unfamiliar with the technology, may feel compelled to try.

A counterargument is that we don’t tell students they’re cheating when they use dictionaries, so why should we tell them they’re cheating when they use AI-assisted tools?

As currently constituted, AI tools like ChatGPT draw on massive banks of training data, but the model itself isn’t connected to the internet, so it can’t give details about current events or specific people.

In her exploration of ChatGPT, Eaton found its formal writing sits at around a Grade 9 level, so it doesn’t produce the work that would be expected of a university student.

Additionally, these tools can’t synthesize information and they make mistakes, so they can only give students answers in the same way Wikipedia can.

“If we’re only asking our students to do summaries of things, they can probably already copy that from the internet or they can use a fancy new tool and have it generated, but it’s not really helping students learn in exciting and creative ways,” says Eaton.

She says these tools are novel and exciting right now, but their mid- to long-term impacts are yet to be known. There is a sense that the tools could soon be commercialized, so the landscape a year from now could look very different if they become subscription-based.

“As soon as companies figure out how to monetize this, they will,” Eaton says.

Exciting opportunities ahead
The team will start collecting data in January, but their literature reviews have already shown how important it is for professors to become aware of the tools and to weigh in on how they can be used in their own disciplines. The team itself is transdisciplinary because these AI tools can be used in different ways across disciplines, from the humanities and fine arts to computer science.

Eaton says these tools present exciting opportunities for educators to think about how they design assessments: they can either encourage the use of AI or rethink the assessment so students don’t feel pressure to use it.

These tools may also prompt educators to rethink whether their assessments are fit for purpose and aligned with the learning outcomes of the course.

“If the learning outcome of the course does not explicitly say ‘One of the outcomes of this course is that by the end of this course students will be able to write an essay,’ why are we assigning essays?” says Eaton.

For Eaton, the use of AI is just the next evolution in teaching and learning tools.

“We would never say, ‘Is it unethical for students to use the internet?’ That question is absurd,” she says.

“Give it a couple of years, and the question about giving students access to AI will be equally absurd.”

Project team
The University of Calgary research project includes:

Dr. Sarah Elaine Eaton, PhD, principal investigator
Dr. Robert (Bob) Brennan, PhD’97, co-investigator
Dr. Jason Wiens, PhD, co-investigator
Dr. Brenda McDermott, PhD, co-investigator
Helen Pethrick, research associate and project manager
Beatriz Moya, research assistant
Jonathan Lessage, research assistant