Visual concept by KlueIQ, created in collaboration with Perplexity.ai.

November 27, 2025.

Social media spent a decade teaching people two bad habits:

  1. confuse having an account with having authority, and
  2. treat untested feelings as untouchable facts.

The result is narcissistic scaffolding: follower counts as moral proof, hot takes as identity, outrage as a personality. The whole structure collapses the moment a technology appears that does not care how “special” you feel.

AI is that technology. It doesn’t check your blue checkmark before it helps someone else write a grant application, organize their thoughts, or draft a game script. It just works. And for people who built their status on being the only ones who could do those things, this feels less like progress and more like an insult.

So instead of learning, they perform alarm. They recycle borrowed talking points. They demand applause for “warning humanity,” while quietly admitting they have never actually tried to understand how AI works, who it helps, or what literacy would look like.

This isn’t a debate. It’s a defence mechanism.

Canada: Early AI Pioneer, Late AI Learner

Canada helped birth some of the core ideas behind modern AI. Yet in 2025, a global KPMG–University of Melbourne study ranked Canada 44th out of 47 countries for AI training and literacy, and 28th out of 30 advanced economies.
Only 24% of Canadians surveyed reported receiving any AI training, compared with 39% globally, and Canada also sits near the bottom on trust and confidence in using AI tools.

In other words: the country that helped invent the technology never bothered to teach its people how to understand it.

UNESCO’s mapping of K‑12 AI curricula shows at least 11 countries, including China, India, South Korea, Portugal, Qatar, and the UAE, have already developed and endorsed national AI curricula for primary and secondary students, with several more in development.

Canada is not on that list. At home, policy voices are now openly calling for a national AI literacy strategy precisely because this gap is starting to look like a competitiveness problem, not an academic one.

Put those two facts together (low literacy, high volume of opinion) and you don’t get thoughtful caution. You get a very loud, very Western pseudo‑debate driven by people whose confidence is inversely proportional to their contact with reality.

How AI Undercuts Narcissistic Pecking Orders

Social media built invisible pecking orders: who gets quoted, whose threads trend, whose “brand” is treated as canon. Over time, that creates what can only be called narcissistic scaffolding: a structure of likes, shares, and in‑group affirmation that props up fragile self‑importance.

AI quietly kicks out that scaffolding:

  • It lets someone with no platform write a clear policy brief, grant proposal, or op‑ed that reads as polished as anything from a legacy insider.
  • It gives dyslexic, ADHD, and other neurodivergent thinkers tools to have text read aloud, restructure messy notes into outlines, and offload executive function, turning what used to be daily friction into usable flow.
  • It helps non‑native speakers express themselves in confident, nuanced written language instead of being dismissed as “unprofessional” because of grammar or style.

This is what narcissists cannot stand: AI does not ask for pedigree. It does not care how many years you dined out on being “the smart one” in the room. It notices patterns, then helps anyone who shows up with a question.

When those benefits surface (people saving hours of drudgery, finally organizing their thoughts, building something they could never afford to hire for), the narcissistic response is not curiosity. It is denial. “That doesn’t count.” “That’s slop.” “That’s dangerous.” Anything to avoid admitting that their exclusivity was always built on uneven access, not innate superiority.

Ignorance Isn’t Caution. It’s a Brake on Progress.

A country can’t navigate AI with people who are afraid to even touch it.

Canadian research warns that low AI literacy will cost citizens financially, slow adoption, and drag down productivity if education and training don’t catch up.

UNESCO and others now describe AI literacy as “the basic grammar of our century”: a baseline skill for understanding what AI can and cannot do, when it helps, and when it must be questioned.

Yet much of the Canadian (and broader Western) AI discourse still comes from people who:

  • have never used AI beyond a cursory test,
  • cannot explain how training, prompting, or safeguards work, and
  • treat their discomfort as evidence rather than a data point.

That isn’t principled scepticism. It is cognitive stagnation disguised as virtue. It keeps classrooms, workplaces, and creative industries frozen in older patterns while other countries are busy teaching ten‑year‑olds how to collaborate with the tools adults here are still afraid to open.

Where KlueIQ Stands

KlueIQ was built on a different premise: that AI is not a thief of meaning, but a multiplier of access.

  • In our games, AI is not just a gimmick; it’s the infrastructure that lets true crime, history, and psychology become interactive, investigative experiences instead of static text locked in a spine.
  • For players who were once on the edges of the classroom, the newsroom, or the industry (neurodivergent thinkers, late bloomers, outsiders without the “right” connections), AI becomes a co‑pilot: organizing clues, testing hypotheses, and giving them the same analytical leverage insiders take for granted.
  • For communities historically kept far from the levers of storytelling, AI‑assisted tools turn “I wish I could” into “I just did,” without waiting for permission from a publisher, studio, or gatekeeper.

Where some see a threat to their paper crowns, KlueIQ sees an opportunity: to democratize investigation, narrative, and insight so that people who have lived on the fringes of society can document, analyze, and re‑invent their worlds, too.

Canada can keep arguing in circles about whether AI should exist. Or it can do the harder, humbler work of becoming AI‑literate enough to steer it well.

At KlueIQ, the choice is already made. The plough is not stopping for performative doom. It is moving forward, making room for more hands on the handle.