Alphabet CEO Sundar Pichai committed to an “AI Pact” and discussed disinformation around elections and the Russian war in Ukraine in meetings with top European Union officials on Wednesday.
In a meeting with Thierry Breton, the European commissioner for internal market, Pichai said Alphabet-owned Google would collaborate with other companies on self-regulation to ensure that AI products and services are developed responsibly.
“Agreed with Google CEO @SundarPichai to work together with all major European and non-European #AI actors to already develop an “AI Pact” on a voluntary basis ahead of the legal deadline of the AI regulation,” Breton said in a tweet Wednesday afternoon.
“We expect technology in Europe to respect all of our rules, on data protection, online safety, and artificial intelligence. In Europe, it’s not pick and choose. I am pleased that @SundarPichai recognises this, and that he is committed to complying with all EU rules.”
The development hints at how top technology bosses are seeking to assuage politicians and get ahead of looming regulations. The European Parliament earlier this month greenlighted a groundbreaking package of rules for AI, including provisions to ensure the training data for tools like ChatGPT doesn’t violate copyright laws.
The rules take a risk-based approach to regulating AI: applications of the technology deemed "high risk," such as facial recognition, would face a ban, while applications posing limited risk would be subject to tough transparency requirements.
Regulators are growing increasingly concerned about the risks surrounding AI, with tech industry leaders, politicians and academics raising alarm over recent advances in the technology, such as generative AI tools and the large language models that power them.
These tools allow users to generate new content — such as a poem in the style of William Wordsworth or a polished essay — simply by giving them prompts describing what to produce.
They have raised concern not least due to the potential for disruption in the labor market and their ability to produce disinformation.
ChatGPT, the most popular generative AI tool, has amassed more than 100 million users since it launched in November. Google released Bard, its alternative to ChatGPT, in March, and unveiled an advanced new language model known as PaLM 2 earlier this month.
During a separate meeting with Vera Jourova, a vice president of the European Commission, Pichai committed to ensuring Google's AI products are developed with safety in mind.
Both Pichai and Jourova “agreed AI could have an impact on disinformation tools, and that everyone should be prepared for a new wave of AI generated threats,” according to a readout of the meeting that was shared with CNBC.
“Part of the efforts could go into marking or making transparent AI generated content. Mr. Pichai stressed that Google’s AI models already include safeguards, and that the company continues investing in this space to ensure a safe rollout of the new products.”
Tackling Russian propaganda
Pichai’s meeting with Jourova also focused on disinformation around Russia’s war on Ukraine and elections, according to a statement.
Jourova “shared her concern about the spread of pro-Kremlin war propaganda and disinformation, also on Google’s products and services,” according to a readout of the meeting. The EU official also discussed access to information in Russia.
Jourova asked Pichai to take “swift action” on the issues faced by Russian independent media that can’t monetize their content in Russia on YouTube. Pichai agreed to follow up on the issue, according to the readout.
In addition, Jourova “highlighted risks of disinformation for electoral processes in the EU and its Member States.”
The next elections for European Parliament will take place in 2024. There are also regional and national elections across the region this year and next.
Jourova praised Google's "engagement" with the bloc's Code of Practice on Disinformation, a self-regulatory framework released in 2018 and since revised, aimed at spurring online platforms to tackle false information. She added, though, that "more work is needed to improve reporting" under the framework.
Signatories of the code are required to report how they have implemented measures to tackle disinformation.