CNIL Says ‘Privacy-Friendly’ AI Systems Are a Must
The French data protection authority on Tuesday signaled heightened concern over the privacy impact of generative artificial intelligence, saying practices such as data scraping raise data protection questions.
The French National Commission on Informatics and Liberty – known as CNIL – said it has a four-pronged action plan intended to create “clear rules protecting the personal data of European citizens in order to contribute to the development of privacy-friendly AI systems.”
The agency has already initiated an investigation into ChatGPT. French Minister for Digital Transition and Telecommunications Jean-Noël Barrot said in April that Europe’s main concern is whether ChatGPT processes data for training its algorithms in a way that violates privacy laws (see: European Scrutiny of ChatGPT Grows as Probes Increase).
The agency intends to publish a guide on the rules for the sharing and reuse of data, including data scraped from the internet.
During the coming year, the agency said, it will pay “particular attention” to whether artificial intelligence companies have completed a privacy impact assessment and made provisions for individuals to exercise their rights over their data.
Data scraping by artificial intelligence companies to train algorithms is a flashpoint in the technology’s rollout. Facial recognition company Clearview AI has for years been dogged by accusations that it acted illegally in downloading billions of facial images. Earlier this month, CNIL imposed a 5-million-euro penalty on the company after finding that it had failed to comply with an earlier order requiring it to delete the data of French residents and pay a 20-million-euro fine.
Regulators have shown increased willingness to target the fruits of illegal data downloads. The U.S. Federal Trade Commission in March 2022 ordered Weight Watchers to delete personal information collected from children younger than 13 and to destroy algorithms derived from that data.