As the use of artificial intelligence technology reaches deeper into our daily lives, Bow Valley College has joined a growing number of organizations making a public commitment to AI safety, transparency, privacy and ethical accountability.
The “AI Trustworthy Pledge” is a set of principles laid out by the Cloud Security Alliance, an international digital security and ethics organization, to guide acceptable applications of the rapidly expanding technology.
“It's essentially the public line in the sand that this is how we're running our AI program,” says James Cairns, Chief Information Security Officer for Bow Valley College. “It's also the impetus for us to go back and reassess and say, ‘hey, you know, this isn't as we stated. Let's deal with the issue.’”
Last year, Bow Valley College became the first post-secondary institution in the world to achieve the Trusted Cloud Provider designation from the Cloud Security Alliance. At the time, just over 100 companies had met the rigorous security standard.
Cairns says users should have a clear sense of whether they are dealing with a human or an AI interface. Things such as biometrics or using technology to read people’s intentions are strictly off limits.
He gives the example of a pop machine.
“The machine could look at your face, remember it and say: ‘James bought five colas today, he maybe needs to lay off. This is going to give him a heart attack,’ right?” Cairns says. “They can look at your pupils and say, ‘oh, man, did you know your heart rate is at 148 beats per minute?’”
“It might be well intentioned, but it’s unacceptably intrusive.”
Post-secondary institutions have wrestled with a steady stream of challenging questions around academic integrity and privacy, and around the point at which a human must step in to review decisions in an agentic AI workflow. Are there biases emerging in, or built into, the models? Are real-world outcomes for humans being decided by unsupervised code?
In January 2025, Bow Valley College released its AI governance policy, which includes frameworks for spotting and protecting against potentially unfair or unethical consequences arising from the use of the technology.
Cairns says that, at their heart, the AI Trustworthy Pledge and the college’s guiding policy are not really about technical issues.
“We have 13 people on our AI governance committee. Only two are technical. So the reason for that is, sure there are technical issues, but it's actually more a technical placement of all these different social aspects.”
“It's always going to be (a review of) that risk reward balance. That's just the way we've tried to look at it. If something's in a high-risk area it's not to say ‘no,’ but maybe there's some things that we need to do that puts human oversight in the loop.”