Brock-Led Research Provides Guidelines for High Schools on Teaching Responsible Use of AI
A Brock-led research team has mapped out a strategy to help high school teachers guide their students on the responsible use of artificial intelligence (AI) tools such as ChatGPT.
“I would much prefer students write an essay from scratch without leaning on these tools, but these tools are everywhere,” says Brock Professor of Education Governance and Policy Analysis Louis Volante. “We have a responsibility to teach students how to approach the use of AI tools in an ethically defensible manner, because even in the world of work, they’re going to encounter these types of applications.”
Volante is lead author of the Aug. 28 paper, “Leveraging AI to enhance learning,” co-authored with Queen’s University Professor of Educational Assessment Christopher DeLuca and Murdoch University Professor of Assessment and Measurement Don Klinger.
The researchers recommend teachers follow the Ideas-Connections-Extensions (ICE) education model when instructing students on interacting with AI writing programs.
In this model, students begin by grasping foundational ideas and their related terms and facts, then move on to connecting these ideas with experiences and knowledge they’ve already gained. From this fusion, students can generate innovative ideas that can be applied to solve problems in new ways.
Following this model, the team puts forth three steps:
Understanding ideas: Students learn how to fact-check AI-generated text by gathering information from a number of credible sources and comparing it to what is being presented as ‘fact.’ After completing these exercises, students share their experiences with each other through creative group activities.
Making connections: Students examine words and sentence lengths in AI-generated text and assign scores evaluating the complexity of words and whether there’s a mix of short and long sentences. Students then make the text livelier and more engaging, and connect ideas to their personal environments and experiences.
Creating extensions: Students take the text to a new level “in ways that demonstrate critical, creative and higher-order thinking” by evaluating the limits of arguments presented in the AI-generated text, brainstorming alternatives and suggesting a new way forward that comes from the student’s own thinking.
“This last step separates human work from AI-generated content, and it is where secondary teachers should increasingly focus their instruction,” says the paper. “In many respects, AI makes the need for authentic assessment more evident than ever and can therefore push us to make education more human, not less.”
In addition to connecting ideas to their personal contexts, students can develop “deep thinking” in other ways. These include outlining actions they plan to take to address a specific challenge such as climate change; giving an oral presentation to their class on a particular topic and answering questions in real time; or being involved in artistic and community projects.
“The extension requirement and assessment criteria should be available from the outset, so students know that generating and refining AI content is an insufficient demonstration of learning,” says the paper.
Secondary and post-secondary educators are increasingly concerned that students who turn to ChatGPT and other text-writing programs are not developing skills in original research, critical thinking and writing, Volante says.
In post-secondary education, it becomes even more difficult for faculty to accurately assess whether students are violating academic integrity standards, such as plagiarizing, and whether they have truly mastered what is being taught, he adds.
This can have serious implications for highly specialized fields like medicine or engineering, where lack of competence or misinformation perpetuated by text-writing algorithms can lead to dire consequences, Volante says.
He notes AI tools are becoming more sophisticated at a rapid rate, making it harder for educators to determine whether assignments were written by the student or an AI tool. He points to GPT-4’s ability to score highly, in some cases in the 90th percentile, on national entrance and professional exams in the U.S.
“Ultimately, it is incumbent on educators at all levels, both compulsory and within higher-education settings, to explicitly address the opportunities and challenges presented by AI, and ensure their assessment methods reflect authentic learning,” says Volante.