
Four million users of Telegram AI can create deepfake nudes of anyone: Report

A recent investigation has uncovered that AI-powered chatbots on Telegram are being used by millions to create nude and explicit images of real people. According to a report in Wired, up to 4 million users turn to these chatbots each month to produce deepfakes that strip clothing from photos or fabricate sexual activity. The trend has alarmed experts, who warn that the tools pose particular harm to young girls and women.

Deepfake Technology and Its Risks

Henry Ajder, a deepfake expert who first uncovered such chatbots four years ago, highlighted the dangers of the technology. He described the situation as “nightmarish,” pointing to how easily accessible the tools are and how many users actively engage with them. “It’s really concerning that these tools — which are really ruining lives and creating a very nightmarish scenario primarily for young girls and for women — are still so easy to access and to find on the surface web, on one of the biggest apps in the world,” he stated.

Targeting Vulnerable Populations

While celebrities such as Taylor Swift and Jenna Ortega have previously been targeted by deepfakes, reports of teenage girls being victimized are particularly alarming. The images have fueled incidents of sextortion, in which fabricated pictures are used for blackmail or ongoing abuse. One survey found that 40 per cent of US students have encountered deepfakes circulating in their schools.

Telegram’s Role and Response

Telegram, best known for its messaging service, has faced repeated allegations that its platform is being misused. In August, several reports suggested the Indian government had opened an investigation into the app, which could lead to a ban depending on its findings.

In a related development earlier this year, Telegram CEO Pavel Durov faced criminal charges over allegations that the platform facilitated the distribution of child sexual abuse material. Despite these challenges, little has changed in how the platform operates. “Building technology is hard enough as it is. No innovator will ever build new tools if they know they can be personally held responsible for potential abuse of those tools,” Durov remarked.



As the situation develops, the need for stringent measures to protect vulnerable individuals from the misuse of deepfake technology continues to grow.
