AI-powered detection tech is here to spot GenAI-generated audio deepfakes

In a bid to tackle deepfakes created with generative artificial intelligence (AI) tools, global cyber-security company McAfee on Monday announced an AI-powered deepfake audio detection technology at the CES 2024 event.


Called Project Mockingbird, the technology uses advanced AI-detection capabilities to help consumers spot what’s real and what’s fake at a time of malicious and misleading AI-generated content.


According to the company, the AI-powered technology is over 90 per cent accurate at detecting and exposing maliciously altered audio in videos.


“Much like a weather forecast indicating a 70 per cent chance of rain helps you plan your day, our technology equips you with insights to make educated decisions about whether content is what it appears to be,” said Steve Grobman, Chief Technology Officer, McAfee.
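
McAfee has not published how Project Mockingbird presents its scores; the “chance of rain” analogy only suggests consumers see a probability rather than a flat verdict. The sketch below, with a hypothetical describe_confidence helper and made-up thresholds, is merely an illustration of how such a likelihood score could be translated into plain-language guidance.

```python
# Hypothetical illustration only: McAfee has not disclosed Project Mockingbird's
# internals. This sketch shows how a 0-1 likelihood that audio is AI-generated
# could be turned into plain-language guidance, in the spirit of the
# "70 per cent chance of rain" analogy. Thresholds are made up.

def describe_confidence(p_fake: float) -> str:
    """Map a likelihood score to a readable verdict for the consumer."""
    if p_fake >= 0.9:
        return f"{p_fake:.0%} likely AI-generated: treat this audio as fake"
    if p_fake >= 0.5:
        return f"{p_fake:.0%} likely AI-generated: verify before trusting it"
    return f"{p_fake:.0%} likely AI-generated: no strong signs of manipulation"


print(describe_confidence(0.7))  # -> "70% likely AI-generated: verify before trusting it"
```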


Increasingly sophisticated and accessible generative AI tools have made it easier for cybercriminals to create highly convincing scams, such as using voice cloning to impersonate a family member in distress asking for money.


Other scams, often called “cheapfakes,” may involve manipulating authentic videos, such as newscasts or celebrity interviews, by splicing in fake audio to change the words coming out of someone’s mouth.


The ‘Project Mockingbird’ technology uses a combination of AI-powered contextual, behavioural, and categorical detection models to identify whether the audio in a video is likely AI-generated.
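
McAfee has not detailed how these detection models are combined. The following sketch is only a hypothetical illustration of blending contextual, behavioural, and categorical detector scores into a single likelihood; the class, weights, and sample values are assumptions, not McAfee's implementation.

```python
# A minimal sketch, not McAfee's implementation: Project Mockingbird's models and
# weighting are proprietary. This only illustrates the general idea of blending
# several detector scores into one likelihood that a video's audio is AI-generated.
from dataclasses import dataclass


@dataclass
class DetectionScores:
    contextual: float   # does the audio fit the surrounding content and context?
    behavioural: float  # does the speech pattern behave like a synthetic voice?
    categorical: float  # does the signal match known classes of AI-generated audio?


def combined_likelihood(scores: DetectionScores) -> float:
    """Blend per-model scores (each 0-1) into a single 0-1 likelihood; weights are hypothetical."""
    return (
        0.3 * scores.contextual
        + 0.3 * scores.behavioural
        + 0.4 * scores.categorical
    )


sample = DetectionScores(contextual=0.8, behavioural=0.7, categorical=0.95)
print(f"Likelihood the audio is AI-generated: {combined_likelihood(sample):.0%}")  # 83%
```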


“The use cases for this AI detection technology are far-ranging and will prove invaluable to consumers amidst a rise in AI-generated scams and disinformation,” said Grobman.