Canada’s Voluntary AI Code of Conduct: Debates and Divergent Opinions


Some businesses fear that regulations will impede innovation and diminish Canada’s competitive edge.

Companies working with AI in Canada will be presented with a new voluntary code of conduct governing the development and use of advanced generative artificial intelligence in this country.


And while there is already support from the business community, there are also concerns that it could impede innovation and Canada’s ability to compete with foreign companies.

Advanced generative artificial intelligence typically refers to content-generating AI. ChatGPT is a well-known example, but any system that generates audio, video, or text would qualify.

Companies that sign the code agree to a number of principles, including that their AI systems be transparent about where and how the information they gather is used, and that there be methods for addressing any potential bias in a system.

They also accept human oversight of AI systems, and developers of generative AI systems intended for public use must build ways to detect content generated by their systems.

“I believe that if you ask people on the street, they want us to take action immediately to ensure that we have specific measures that companies can take immediately to build trust in their AI products,” said Industry Minister Francois-Philippe Champagne at a Montreal AI-focused conference on Wednesday.

Parliament is still considering legislation such as Bill C-27, which would update privacy legislation and add rules governing artificial intelligence.

In the meantime, the voluntary code gives the federal government another way to lay out rules that help companies make products people can trust before they use them, or when deciding whether to use them at all.

BlackBerry and Telus are among the signers

BlackBerry, a Canadian technology company that employs generative AI in its cybersecurity products, is the first signatory to the voluntary code.

According to the company’s chief technology officer, the goal is to instill trust in an AI product prior to its use, which represents a cultural transition for some.

Charles Egan told CBC News in an interview, “People always deploy mobile phones, computers, and networks, and then we try to apply trust after the fact.”

“I believe AI, particularly generative AI, has tremendous potential… Therefore, if we put in place some guidelines, we can enjoy the benefits and reduce some of the potential pitfalls of this explosion of generative AI that we’re all experiencing,” said Egan.

Egan noted that he and his company view the Canadian code of conduct as advantageous because it imposes most of its requirements on AI developers. According to Egan, this means that consumers who wish to purchase or use generative AI technology face far fewer restrictions.

“If there were no signs and traffic signals on the highway, there would be complete pandemonium. And I believe that’s how BlackBerry and I see it in terms of bringing trust to the AI world,” said Egan.


Code of conduct is a ‘step’

Even though the code is voluntary, Canadian lawyer Carole Piovesan says it is part of a growing ecosystem of legal and regulatory measures.

“This is the first step in introducing more enforceable measures,” said Piovesan, who explained that there are “immediate concerns” as generative AI such as ChatGPT and image generators grow in popularity.

According to Piovesan, the federal government is utilizing the voluntary code to supplement and bridge between mandatory rules that are still being drafted or enacted.

Piovesan believes Canada’s actions will mirror those of the United States and the European Union.

“What Canada is doing to regulate artificial intelligence is consistent with other jurisdictions, such as the EU and the United States. The EU is very close to passing the EU Artificial Intelligence Act, a fairly prescriptive law,” she said.

Concerns about ‘stifling’ industry growth

Although the code is voluntary, other businesses in Canada have voiced concerns about it.

The chief executive officer of Shopify criticized the government’s initiative on X, formerly Twitter.

Tobi Lütke stated in the post that he will not endorse the code of conduct.

“Canada does not need more referees. We need more constructors. Let other nations regulate, while we take the more courageous route and say, ‘Build here.’”

The company did not respond to a request for comment from CBC News regarding Lütke’s post.

Others in the Canadian industry have had mixed reactions.

“Is it something that should be included, particularly in relation to consumer data, privacy, and cybersecurity? Yes,” said XAgency AI co-founder Jeff MacPherson.

MacPherson told CBC News: “However, it also has the potential to stifle industry growth.”

XAgency AI develops proprietary generative AI technologies for applications such as business automation and marketing. The team has not yet adopted the code of conduct; according to MacPherson, it is waiting to see how the code plays out and how the industry evolves with it in place.

One of his concerns is that different or more stringent rules in Canada could make it harder to compete; he cited European tech regulations in other, non-AI sectors that have led companies to decide not to offer their services there.

“It can disadvantage Canadians,” he stated. “There are many of these large technology corporations, and when these regulations are implemented, the technologies cannot be used within the country.”