The Future of Work and Learning Brief
Issue #52 | April 2025

This special edition of the Future of Work and Learning brief, focused on AI integration in the workplace, was written by CWF’s winter semester intern, Tristan Hall. Tristan is a Loran Scholar and has just completed his first year in the Bachelor of Engineering – Mechatronics, Robotics and Automation Engineering program at the University of Waterloo.

We’re incredibly grateful to Tristan for spending the past few months with us. His curiosity, insight and forward-thinking perspective have helped us explore how we, as a research institution, can responsibly and thoughtfully incorporate AI into our own work.

The AI revolution is already here

Recent years have brought rapid advances in the field of AI, and the technology continues to reshape (some might say encroach upon) the modern workforce. More than a third of Americans report that their jobs have been profoundly changed by AI and that they use AI tools regularly. Almost twice as many believe that AI will fundamentally transform their daily lives within the next few years. In the Canadian workforce, meanwhile, we fail to recognize AI systems more than two-thirds of the time and have difficulty defining their specific capabilities: a critical gap to bridge if we want to control AI's effects on our lives.

The human element at the centre of the AI revolution

Canadians overwhelmingly see AI as a positive development; however, they also recognize its potential risks. In fact, 70 per cent believe AI could harm society. Importantly, though, they agree that AI remains a human-led innovation.

While AI represents a significant opportunity, it's also our responsibility as Canadians, free thinkers and individuals to ensure that AI empowers us rather than renders us obsolete. We should first understand the strengths and weaknesses of these new technologies, and carefully consider AI's role in our everyday lives now and in the future.

Where it all began

According to IBM, AI can be defined as: “technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.” Under that definition, you’ve probably been using AI for years. Spam filters, search engines, navigation apps, music recommendations and voice assistants like Siri are all rule-based AI systems that:

  • Rely on structured data
  • Follow predefined processes
  • Match queries to existing information in their database
  • Provide outputs within programmed parameters

For example, when you ask Siri where the nearest Tim Hortons is, it:

  1. Converts your voice to text
  2. Analyzes intent using its knowledge base
  3. Searches locations using GPS and map data
  4. Delivers a response following preset patterns

While these AI models may be effective at finding an emergency caffeine fix, they merely retrieve and organize existing information—they can’t generate new insights beyond their programming.
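The rule-based pipeline described above can be sketched in a few lines of Python. This is a toy illustration, not how Siri actually works: the intents, keywords and responses below are all invented. The point is that the system only matches queries against predefined rules and returns canned answers from its database.

```python
# Toy rule-based "assistant": match a query against predefined intents,
# then answer from a fixed response table. No learning is involved --
# every keyword and response here is hand-written for illustration.

INTENTS = {
    "find_coffee": ["coffee", "tim hortons", "cafe"],
    "get_weather": ["weather", "forecast", "rain"],
}

RESPONSES = {
    "find_coffee": "The nearest coffee shop is 400 m north.",
    "get_weather": "Today's forecast: sunny, 18 C.",
}

def answer(query: str) -> str:
    """Match the query to an intent by keyword, then return a canned response."""
    text = query.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return RESPONSES[intent]
    # Anything outside the programmed parameters simply fails.
    return "Sorry, I can't help with that."

print(answer("Where is the nearest Tim Hortons?"))
```

Ask it anything its rules don't cover and it has nothing to offer: unlike a generative model, it can never produce an answer that wasn't written in advance.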

To understand why, we need to take a closer look at the types of models we use (see diagram). As a rule of thumb, the more advanced the AI is, the more complex its structure becomes and the longer it takes to train.

Source: CWF based on accepted industry standards, see IBM

In addition to these subsets, AI models also learn in different ways. The table below contains some ways that AI is commonly trained.

Learning Type   Description                                        Example
Supervised      Trains on data labelled by humans                  Emails marked as spam or not spam
Unsupervised    Finds patterns in unlabelled data                  Clustering customers for targeted marketing
Reinforcement   Learns through trial and error, "rewarded"         AlphaZero, the chess-playing AI
                for good decisions

Source: CWF based on accepted industry standards, see PECAN
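The "supervised" row in the table above can be illustrated with a toy nearest-centroid classifier: average the labelled training points for each class, then assign a new point to the closest average. This is a minimal sketch for illustration only; the feature counts and labels are invented, and real spam filters use far richer models and data.

```python
# Toy supervised learner: a nearest-centroid classifier.
# Each training example is (features, label); features here are the
# invented counts (exclamation marks, links) in an email.

def train(examples):
    """Return a mapping of label -> centroid (the average feature vector)."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            s[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in s] for label, s in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

training = [((5, 3), "spam"), ((4, 2), "spam"), ((0, 0), "ham"), ((1, 0), "ham")]
model = train(training)
print(predict(model, (6, 2)))  # resembles the labelled spam examples
```

The human labels do the teaching: change the labels in the training data and the same code learns a different rule, which is exactly what "trains on data labelled by humans" means.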

What’s different today?

The ability to absorb exponentially more training data, new algorithmic architectures and the use of Reinforcement Learning from Human Feedback (RLHF) have allowed AI technology to make significant leaps forward. While these changes seemingly occurred overnight, they are the culmination of decades of research and investment.

These breakthroughs have brought interactive systems to the masses, allowing people to communicate and create with these networks in ways that were not possible before. Advanced AI assistants like Claude and ChatGPT can write anything from complex code to Shakespearean poetry. AI video generation tools such as Fauna & Flora AI models can create a video from a few lines of text. And that's before touching on all the real-time voice synthesis, music composition and AI art tools that have emerged.

By developing user-friendly generative AI technologies that produce results that are both tangible and easy to interact with, what was once the domain of specialized researchers and big tech companies is now in the hands of the everyday user.

But just how “intelligent” are models like ChatGPT?

Modern Large Language Models (LLMs) like ChatGPT have a strong command of grammar and sentence structure. They can also sound intelligent and confident on almost any topic; however, they don't actually 'understand' the information in a human sense. The LLM:

  • Breaks the input information into manageable data chunks
  • Runs these chunks through algorithms trained on vast internet text repositories
  • Selects statistically likely answers
  • Uses language models to deliver polished responses

LLMs are expert pattern predictors, not true reasoners. They don't "understand" or "reason" like humans; rather, they predict text patterns. They always present their constructed output in a plausible manner, but there is no real guarantee of factual quality.
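The prediction idea in the bullets above can be shown in miniature with a toy bigram model: count which word most often follows each word in a tiny corpus, then always "predict" the most frequent successor. Real LLMs operate at an incomparably larger scale with far more context, but this sketch is one way to picture what "selecting statistically likely answers" means: pure statistics, no understanding.

```python
# Toy next-word predictor: count word-pair frequencies in a tiny corpus,
# then predict the statistically most likely next word. The corpus is
# invented for illustration.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Build a bigram table: for each word, count what follows it.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent successor of the given word."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- the word that most often follows "the"
```

Notice that the model will confidently complete any word it has seen, whether or not the continuation is true of the world; fluency comes from the statistics, not from knowledge.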

In fact, models are prone to "hallucinations": presenting false information to the user in a plausible-seeming way. The onus is on the user to ensure that AI-generated content is accurate. Intentional manipulation of AI for misinformation raises different concerns entirely, presenting important questions about human responsibility, defence and cybersecurity.

The scope of current AI

Every AI mentioned so far (in fact, every AI known to exist) falls into the category of Narrow AI. That means it has task-specific training, no genuine reasoning capabilities and a limited ability to apply knowledge across domains. No system has yet truly imitated a human, something loosely referred to as Artificial General Intelligence (AGI). Companies are attempting to achieve it, whether by adding symbolic reasoning to models like ChatGPT or by creating "World AI" models that digitally recreate aspects of the real world. But so far, humanity hasn't reached such comprehensive AI capabilities.

AI applications in everyday work

Despite these risks, workplaces across sectors are adopting AI because it is an incredibly versatile tool. Now that you understand the basics, ask yourself: how can AI enhance my work, and how can I mitigate the known risks?

You’re in the best position to answer that.

Think about your role. What routine tasks prevent you from focusing on important work? Can you spend less time on them? That’s what AI should be accomplishing for you.

If you’re still stuck, consider these immediate applications:

To ensure that your work remains ethical and error-free, it's important to do your due diligence with the AI tools you use. Your workplace may already have procedures in place for you to follow, but consider the following:

  • Don’t share sensitive or personal information
  • Be aware of possible copyright infringement
  • Use AI to find evidence and sources rather than have it draw conclusions
  • Fact-check any data and information provided
  • Based on the data they were trained on, models may have pre-existing biases
  • Due to the nature of its training, AI struggles to identify ethical issues in its work
  • Remember that AI can “hallucinate,” and you may have to challenge its findings when necessary

This requires critical thinking and reflection. Look for opportunities, through online training or mentorship, to learn how to maximize the benefits of AI while reducing the risks for your job or sector. The beauty of tools like ChatGPT is that you can literally ask them to teach you how to use them. And it's not just generative AI that's being used: many other AI tools can assist you; they just tend to be overshadowed by the better-known models. The truth is, once you start, you'll soon find applications across the board.

Start small. Pick one repetitive task and see how AI can help.

Build from there.

The future of our work is powered by AI interaction and human-machine cooperation.


The Future of Work & Learning Brief is compiled by Jeff Griffiths, Stephany Laverty and Tristan Hall. Through this monthly brief, keep on top of developments in the workforce and how education and training are changing today to build the skills and competencies needed for the future.