While many aren’t concerned yet about robot overlords due to the rise of artificial intelligence, trust in AI continues to be a hot topic.
A new pilot program at Ontario Tech University is underway during the winter and spring 2026 terms, allowing students in 22 undergraduate and graduate courses from all faculties to test an AI Learning Agent: a unique course-based AI system built with trust, accountability, and academic integrity in mind.
As discussions around artificial intelligence grow within higher education and various workplaces, Ontario Tech is concentrating on a more fundamental issue: how can we prepare students to interact with technologies that need to be trusted, managed, and accountable?
Rather than just using standard AI solutions, the university aims to transform everyday learning into practical experiences while exploring how technology should enhance judgment, integrity, and human decision-making.
“Canada’s AI sector is vital for job creation and boosts productivity, innovation, GDP and economic growth. Universities have a duty to teach students how to create and utilize AI systems that benefit humanity,” said Dr. Steven Murphy, president and vice-chancellor of the school. “As a tech-forward university, it’s essential for us to lead by developing solutions that equip our future workforce with the skills for responsible AI design.”
The distinguishing feature of this initiative lies in its design and governance. Students at Ontario Tech actively engage with the tool, providing feedback that shapes system improvements, governance choices, and plans for future implementation. This involvement gives them direct insight into how ethical constraints influence AI systems while reinforcing principled decision-making alongside innovation.
The use of AI in higher education isn’t just on the horizon; it’s already prevalent. Recent data indicates that approximately 86 to 90 percent of college students are utilizing AI tools for their studies, highlighting its impact on learning as well as the necessity for careful integration led by institutions.
“This isn’t simply about picking up the newest tool,” explained Manny Kandola, Chief Technology Officer at Ontario Tech. “It’s about instructing students on how technology should be crafted, evaluated and governed. By including them directly in this process, we’re equipping graduates to work responsibly with AI in careers yet unseen while fostering digital trust from day one.”
Differentiating itself from public AI tools, the Learning Agent draws solely on instructor-approved course content and is designed to facilitate reasoning rather than deliver answers outright. For instance, instead of presenting solutions directly, it prompts students with questions aligned with course objectives that encourage critical thinking. Available outside classroom hours as well, the tool supplements teaching while keeping faculty-led discussions central.
Piloting this system within live courses allows faculty members real-time access to common student inquiries and learning hurdles. This promotes more focused teaching methods along with timely updates on lectures and tutorials while ensuring instructors maintain full control over system access and content.
“Additionally,” said Kandola, “it transforms everyday learning into a real-world testbed for responsible AI design.”
Integrated safeguards aim to reduce bias and misuse while ensuring human oversight remains paramount, creating a learning tool that fosters inquiry without sacrificing academic standards.
Ontario Tech University has recently introduced an interdisciplinary artificial intelligence hub named the Mindful Artificial Intelligence Research Institute (MAIRI), placing Canada at the forefront of ethical approaches toward people-first AI.
“One of the key stories emerging from this decade will surely be the rapid evolution and application of AI technology along with its significant effects on humanity,” stated MAIRI Director Dr. Peter Lewis who also serves as an associate professor at the university as well as being recognized globally as an authority in Trustworthy AI.
“To ensure that AI becomes an empowering resource, we must prioritize values like dignity, inclusivity and intellectual curiosity throughout its development. Through MAIRI, we advocate for thoughtful, collaborative, interdisciplinary research.”
Lewis has consistently warned against assuming that information provided by AIs is always trustworthy or accurate.
Can we rely on AIs to admit when they lack knowledge?
“With current AI systems, unfortunately not,” Lewis remarked. “We humans know we’re not infallible, so we expect humility from each other – shouldn’t we expect the same from AIs if they’re going to be deemed trustworthy? Perhaps due to media illiteracy, or what psychologists term ‘automation bias,’ many people assume everything generated by AIs must be correct.”
A team of more than 50 researchers, representing all faculties at Ontario Tech and a wide range of fields, participates in the project, along with external partners such as the Canadian National Institute for the Blind, Lakeridge Health, Meta and Ontario Shores Centre for Mental Health Sciences.