Canadian Privacy Regulators Find OpenAI Broke the Law Building ChatGPT

OpenAI violated Canadian privacy law when it scraped vast quantities of personal information from the open internet to train ChatGPT, a joint investigation by the federal privacy commissioner and three provincial counterparts has concluded. The findings, released earlier in May, mark one of the most significant regulatory determinations to date against a major foundation-model developer and could shape how generative artificial intelligence products are introduced into the Canadian market.
The investigation, conducted jointly by the Office of the Privacy Commissioner of Canada and its counterparts in Quebec, British Columbia and Alberta, found that OpenAI overcollected personal information, failed to obtain valid consent, lacked transparency and accountability, and made it difficult for Canadians to access, correct or delete personal data the company had captured about them.
What the investigation found
According to the joint findings document released by the Privacy Commissioner of Canada, OpenAI gathered enormous volumes of personal data when building ChatGPT, including information about Canadians scraped from publicly accessible web pages, social media, forums, news sites and other sources. Regulators concluded that publicly accessible information is not exempt from Canadian privacy law and that the company needed a valid legal basis to collect, use and disclose such data.
The investigation found that OpenAI did not seek meaningful consent from individuals whose data was used to train its models, did not adequately inform users about how prompts and chat content might be processed and retained, and could not always identify whether personal information about a specific individual was contained in its training data. The regulators also flagged the factual inaccuracies the model can produce about real people, which may cause reputational harm and are themselves a privacy issue under Canadian law.
The commissioners concluded that the complaint, originally filed in 2023, was well-founded. The matter was described as conditionally resolved, meaning OpenAI has agreed to implement or maintain a series of remedial measures over a defined period.
What OpenAI agreed to do
According to the regulators, OpenAI cooperated with the investigation and implemented a number of changes during its course. The company has limited the categories of personal and sensitive information used to train new ChatGPT models, expanded user-facing controls allowing people to opt out of having their data used for model training, and improved its transparency disclosures about data handling.
OpenAI also committed to additional measures going forward, including enhanced procedures for handling access, correction and deletion requests from Canadian users, refinements to its consent and notification practices, and ongoing engagement with Canadian regulators on how its products evolve. The Privacy Commissioner of Canada said those measures, taken together, address the concerns identified in the investigation, although the regulators reserved the right to take further action if commitments are not met.
Why it matters for Canadians
Tens of millions of Canadians have used ChatGPT since its November 2022 launch, and the application now sits at the centre of how many students, workers and professionals interact with written information. The findings establish that Canadian privacy law applies to the construction of such systems, not only to their day-to-day use, and that users of these products have rights of access, correction and deletion over their personal information.
It also reinforces the position of Canadian regulators that publicly accessible information remains personal information for the purposes of the Personal Information Protection and Electronic Documents Act. That interpretation has been resisted by some technology companies but has been consistently advanced by Canadian privacy commissioners in recent years.
For businesses building or deploying generative AI tools in Canada, the findings clarify expectations. Companies are expected to map and document their training data sources, ensure they have a valid legal basis for processing personal information, provide meaningful notice to individuals and ensure that downstream uses of model outputs do not generate or amplify inaccurate personal information.
Reaction
The Privacy Commissioner of Canada called the joint investigation a milestone in the regulation of artificial intelligence, noting that it is the first major coordinated regulatory action by Canadian privacy authorities against a generative AI developer. Provincial commissioners in Quebec, British Columbia and Alberta echoed the message, framing the findings as a template for how privacy oversight can extend to large language models.
OpenAI, in a statement quoted by Canadian news outlets, said it appreciates the constructive engagement with Canadian regulators and is committed to ongoing compliance with applicable law. The company noted that it has already made changes to address the concerns identified during the investigation.
Civil society groups working on digital rights, including the Canadian Internet Policy and Public Interest Clinic, welcomed the findings but said enforcement tools available to Canadian regulators remain weak compared to those in the European Union, where authorities can impose multi-million-dollar penalties. The findings are not accompanied by financial penalties, although the Privacy Commissioner of Canada has long argued that PIPEDA needs to be modernised to include order-making and penalty powers.
Legislative implications
The Carney government has signalled that it intends to revisit the federal privacy law reform agenda that stalled in the previous Parliament. The Artificial Intelligence and Data Act, originally bundled with broader privacy reforms in Bill C-27, did not pass before the spring 2025 election and was not reintroduced in its earlier form.
Federal officials say a new package, focused on artificial intelligence governance and on aligning Canadian privacy law with international standards, is in development. The OpenAI findings are expected to inform that legislative work, particularly with respect to training data, user rights and the responsibilities of providers of general-purpose AI systems.
Quebec's framework is somewhat further along. The province's Law 25, which came fully into force in 2023, gives Quebec's data protection authority order-making powers and the ability to impose significant administrative monetary penalties for serious violations. British Columbia and Alberta are reviewing their own private-sector privacy laws.
Education, work and consumer impact
For ordinary users, the practical changes flowing from the findings are likely to be incremental. OpenAI already offers controls allowing users to opt out of having their data used to train new models. Canadian users may see clearer in-product notices and more straightforward processes for exercising their privacy rights, including requests to access, correct or delete personal data held by the company.
For schools, hospitals, law firms and other institutions deploying ChatGPT and similar tools, the findings reinforce the importance of vendor due diligence, contractual safeguards and internal policies governing the input of personal information into AI systems. Several Canadian universities have already issued guidance discouraging staff and students from pasting confidential or personal data into general-purpose AI tools.
Broader international context
Canada's findings echo concerns raised by regulators in Italy, France, Germany and elsewhere. Italy briefly blocked ChatGPT in 2023 over similar concerns, prompting OpenAI to introduce changes that have since been refined in markets around the world. The European Union's AI Act, which entered into force in 2024 and is being phased in, imposes additional obligations on providers of general-purpose AI systems.
The Canadian findings add to a growing global consensus that the construction of large language models is itself a regulated activity, not merely the deployment of finished products. For Canadian users, that consensus is increasingly translating into more visible rights and protections, even as the underlying technology continues to evolve at speed.
The road ahead for AI regulation in Canada
The OpenAI findings sit within a broader and rapidly evolving regulatory landscape for artificial intelligence. Federal and provincial governments are weighing how best to balance the benefits of AI development with the risks of harm to privacy, equity, employment and democratic processes.
The Carney government has signalled an interest in advancing AI governance legislation, with details expected to emerge as the federal cabinet finalises its priorities for the fall sitting of Parliament. International coordination, particularly with the European Union, the United Kingdom and like-minded democracies, has been a consistent theme of Canadian engagement on AI policy.
Provincial governments are also active. Quebec's Law 25, Ontario's Trustworthy AI framework, which has been under discussion, and British Columbia's and Alberta's reviews of their private-sector privacy laws all contribute to a complex but increasingly coherent regulatory environment for AI in Canada.
Industry, civil society and academic institutions are participating actively in the policy conversation. Canadian AI research and commercialisation, anchored by institutes including Mila in Montreal, the Vector Institute in Toronto and Amii in Edmonton, represents a significant national asset that the government has worked to protect and grow even as it advances regulation.
What's next
OpenAI's compliance with the agreed remedial measures will be monitored by the Canadian regulators, with periodic reporting expected over the coming months. The commissioners said they will be ready to take further enforcement steps if those commitments are not met.
The findings also strengthen the hand of regulators in dealing with other generative AI providers. Officials said the principles articulated in the OpenAI case will be applied to any company offering similar products in Canada, regardless of where the company is headquartered.
For Canadians, the practical bottom line is straightforward. Personal information used to build the AI tools that increasingly shape daily life is subject to Canadian law, and the people whose data is captured have the right to ask questions about how it has been used.