Google AI Faces Scrutiny Over Deepfake Terrorism Complaints

Google has told the Australian eSafety Commission that it received more than 250 complaints worldwide alleging its AI software was used to create deepfake terrorism content, along with dozens of reports that its Gemini program was used to generate child abuse material. The report covers the company's AI harm-minimisation efforts from April 2023 to February 2024.

Devdiscourse News Desk | Updated: 05-03-2025 18:31 IST | Created: 05-03-2025 18:31 IST

Google has disclosed to Australian authorities that, over nearly a year, it received more than 250 complaints globally alleging that its artificial intelligence software was misused to create deepfake terrorism material. The revelation comes amid increasing scrutiny of AI technology's societal impacts.

The Alphabet-owned company also reported dozens of warnings that its AI program, Gemini, was allegedly used to generate child abuse material. Under Australian law, tech companies must report their harm-minimisation efforts to the eSafety Commission or face fines. The reporting period spans April 2023 to February 2024.

Since the launch of OpenAI's ChatGPT, regulators worldwide have called for robust safeguards to prevent AI from enabling harmful activities such as terrorism and fraud. The Australian eSafety Commission praised Google's report as a "world-first insight" into how AI can be exploited, stressing the urgent need for effective protective measures. The regulator noted, however, that Google did not verify how many of the complaints it received were genuine.

Google uses a hash-matching system to detect and remove Gemini-generated child abuse material but has not applied the same method to terrorist content. Previous fines against Telegram and Twitter (now X) highlight ongoing compliance challenges in the sector.

(With inputs from agencies.)
