Can Canada be as innovative in regulating AI as we are in developing it?
As I walked home after last night’s mock trial, “On Trial: Can Big Tech Be Trusted with AI?” at the Rotman School of Management, I couldn’t help but feel invigorated. Evenings like these remind me why I was excited to return to school.
So, can we trust Big Tech to self-regulate its use and training of artificial intelligence? Will Canada’s inaction push us to the back of the pack? Is there a balanced path between regulation and innovation, distinct from the extreme approaches seen in the tech landscapes of the U.S. and EU? Do regulations even matter in a world where some nations relentlessly race ahead in military and technological competition? These and other questions were cross-examined during the trial, leading to a consensus: there is work to be done in befriending AI, and Canada should not miss its opportunity to lead the charge.
Recently, I’ve grown more amenable to the notion that “strict parents create sneaky kids” and truly believe that this concept extends beyond parent-child relationships; it applies to any recently popularized innovation, including AI. By applying overly panicked and strict regulations (ones that will also have a hard time keeping up with the rapidly evolving industry), we risk AI and its trainers becoming “sneaky” in their approach to meeting the regulatory criteria while continuing to pursue their interests (most often rooted in profit). We also risk bad actors continuing to move miles ahead of our development in leveraging and training AI while we bicker over details.
But before we delve any further, let’s redefine “Big Tech” to encompass not only the FAANG companies (Meta, formerly Facebook; Amazon; Apple; Netflix; and Alphabet, formerly Google), but also entities in health monitoring, pharmaceuticals, manufacturing, power, finance, and law. Virtually every industry in our modern market is influenced by tech or Big Tech in some capacity.
Now, let’s reframe “regulation.” Consumer-product regulation spans a spectrum from minimal oversight to heavy control. Crafting specific policies or laws for each product and its myriad variations is impractical; more general guidelines and requirements are needed. Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems is a positive step in this direction.
Currently, Big Tech companies hold a near-monopoly on knowledge regarding AI developments, implications, and applications. And what do monopolies do? They work to protect themselves, not necessarily to improve the product.
To address this, the cross-examination proposed “guardrails” as an alternative to conventional regulations, which often rely on outdated twentieth-century models of technology. The concept involves registering significant AI projects with the government to increase awareness of, and transparency in, Big Tech’s extensive use of AI. Rather than endorsing unrestricted experimentation or stifling innovation, this approach promotes disclosure of an AI project’s purpose and benefits. It doesn’t necessarily mean revealing innovations before launch or inviting unwarranted government interference. It also means that policies can evolve and adapt, allowing for ongoing evaluation and updates to stay relevant, much as in the nuclear sector. In short, it establishes a system of checks and balances.
Implementing these guardrails would help Canada maintain a leadership position in AI advancement. It does so by embracing diversity and fostering an environment conducive to healthy competition. This approach also enables a more equitable distribution of profits for those involved while preventing the historical trend of innovation being driven out of Canada by overburdensome regulation, a slow “death by a thousand cuts.”
The need for establishing guardrails in AI learning and utilization extends beyond local, regional, or national boundaries; it is a global imperative. Instead of debating whether Big Tech or Government should exclusively regulate AI, let’s consider the idea of collaboration between the two that transcends power struggles and profit-chasing. This collaboration should be rooted in understanding and empathy for collective safety, driven by self-interest. In my humble opinion, Canada has an opportunity to lead the world in embracing AI for the greater good while maintaining vigilant oversight to prevent its misuse. Failing to seize this opportunity would be a significant loss.