ChatGPT Is in Classrooms. How Should Educators Now Assess Student Learning?
Our recent qualitative study with 28 educators across Canadian universities and colleges - from librarians to engineering professors - suggests that we have entered a watershed moment in education.
We must grapple with the question: What exactly should be assessed when human cognition can be augmented or simulated by an algorithm?
Research about AI and academic integrity
In our review of 15 years of research examining how AI affects cheating in education, we found that AI is a double-edged sword for schools.
On one hand, AI tools like online translators and text generators have become so advanced that they can write just like humans. This makes it difficult for teachers to detect cheating. Additionally, these tools can sometimes present fake news as facts or repeat unfair social biases, such as racism and sexism, found in the data used to train them.
Read more: I used AI chatbots as a source of news for a month, and they were unreliable and erroneous
On the other hand, the studies we reviewed showed AI can be a legitimate assistant that can make learning more inclusive. For instance, AI can provide support for students with disabilities or help those who are learning an additional language.
Because it's nearly impossible to block every AI tool, schools should not just focus on catching cheaters. Instead, schools and post-secondary institutions can update their policies and provide better training for both students and teachers. This helps everyone learn how to use technology responsibly while maintaining a high standard of academic integrity.
Participants in our study positioned themselves not as enforcers, but as stewards of learning with integrity.
Their focus was on distinguishing between assistance that supports learning and assistance that substitutes for it. They identified three skill areas where assessment boundaries currently fall: prompting, critical thinking and writing.
Prompting: A legitimate and assessable skill
Participants widely viewed prompting - the ability to formulate clear and purposeful instructions for a chatbot - as a skill they could assess. Effective prompting requires students to break down tasks, understand concepts and communicate precisely.
Several noted that unclear prompts often produce poor outputs, forcing students to reflect on what they are really asking.
Prompting was considered ethical only when used transparently and when it drew on the student's own foundational knowledge. Without these conditions, educators feared prompting could drift into over-reliance on, or uncritical use of, AI.
Critical thinking
Educators saw strong potential for AI to support assessing critical thinking. Because chatbots can generate text that sounds plausible but may contain errors, omissions or fabrications, students must evaluate accuracy, coherence and credibility. Participants reported using AI-generated summaries or arguments as prompts for critique, asking students to identify weaknesses or misleading claims.
These activities align with a broader need to prepare students for work in a future where assessing algorithmic information will be a routine task. Several educators argued it would be unethical not to teach students how to interrogate AI-generated content.
Writing: Where boundaries tighten
Writing was the most contested domain. Educators distinguished sharply between brainstorming, editing and composition:
- Brainstorming with AI was acceptable when used as a starting point, as long as students expressed their own ideas and did not substitute AI suggestions for their own thinking.
- Editing with AI (for example, grammar correction) was considered acceptable only after students had produced original text and could evaluate whether AI-generated revisions were appropriate. Although some see AI as a legitimate support for linguistic diversity, as well as helping to level the field for those with disabilities or those who speak English as an additional language, others fear a future of language standardization where the unique, authentic voice of the student is smoothed over by an algorithm.
- Having chatbots draft arguments or prose was implicitly rejected. Participants treated the generative phase of writing as a uniquely human cognitive process that needs to be done by students, not machines.
Educators also cautioned that heavy reliance on AI could tempt students to bypass the “productive struggle” inherent in writing, a struggle that is central to developing original thought.
Read more: What are the key purposes of human writing? How we name AI-generated text confuses things
Our research participants recognized that in a hybrid cognitive future, AI-related skills, together with critical thinking, are essential for students to be ready for the workforce after graduation.
Living in the post-plagiarism era
The idea of co-writing with GenAI brings us into a post-plagiarism era where AI is integrated into teaching, learning and communication in a way that challenges us to reconsider our assumptions about authorship and originality.
This does not mean that educators no longer care about plagiarism or academic integrity. Honesty will always be important. Rather, in a post-plagiarism context, we consider that humans co-writing and co-creating with AI does not automatically equate to plagiarism.
Today, AI is disrupting education and although we don't yet have all the answers, it's certain that AI is here to stay. Teaching students to co-create with AI is part of learning in a post-plagiarism world.
Design for a socially just future
Valid assessment in the age of AI requires clearly delineating which cognitive processes must remain human and which can legitimately be cognitively offloaded. To ensure higher education remains a space for ethical decision-making, especially in teaching, learning and assessment, we propose five design principles based on our research:
1. Explicit expectations: The educator is responsible for making clear if and how GenAI can be used in a particular assignment. Students must know exactly when and how AI is a partner in their work. Ambiguity can lead to unintentional misconduct, as well as a breakdown in the student-educator relationship.
2. Process over product: By evaluating drafts, annotations and reflections, educators can assess the learning process, rather than just the output, or the product.
3. Design assessment tasks that require human judgment: Tasks requiring high-level evaluation, synthesis and critique of localized contexts are areas where human agency is still important.
4. Developing evaluative judgment: Educators must teach students to be critical consumers of GenAI, capable of identifying its limitations and biases.
5. Preserving student voice: Assessments should foreground how students know what they know, rather than what they know.
Preparing students for a hybrid cognitive future
Educators in this study sought ethical, practical ways to integrate GenAI into assessment. They argued that students must understand both the capabilities and the limitations of GenAI, particularly its tendency to generate errors, oversimplifications or misleading summaries.
In this sense, post-plagiarism is not about crisis, but about rethinking what it means to learn and demonstrate knowledge in a world where human cognition routinely interacts with digital systems.
Universities and colleges now face a choice. They can treat AI as a threat to be managed, or they can treat it as a catalyst for strengthening assessment, integrity and learning. The educators in our study favour the latter.