TECHNOLOGY

AI Also Tricks Us With Voices: The Human Ability To Detect Audio “Deep Fakes” Is Unreliable

The Risks Of Audio Deep Fakes

Audio deepfakes, generated through artificial intelligence, represent a growing threat in the digital world. These fakes can imitate a real person's voice or even generate entirely new voices, posing serious challenges for cybersecurity and copyright protection.

A Dangerous Deception

Criminals have already used audio deepfake tools to deceive bank employees into authorizing fraudulent money transfers. In one shocking case, a fraudster tricked a bank manager into transferring $35 million using an imitated voice in a phone call.

The Proliferation Of Audio ‘Deep Fakes’

Unlike video or image deepfakes, audio deepfakes have largely gone unnoticed. However, their potential to destroy reputations and enable cyberattacks is just as worrying. These deepfakes are generated by machine learning models that analyze samples of a person's recorded speech, meaning anyone with enough audio of a target can create a convincing imitation of that person's voice.

A Revealing Study

A study conducted by University College London examined people's ability to detect voice fakes. The results were striking: participants correctly identified deepfake speech only 73% of the time. Even participants who were given examples of voice imitations as training did not improve their detection ability.

The Challenge Of Discerning Reality From Fiction

Cybersecurity experts warn that discerning fact from fiction will become increasingly difficult as audio deepfake techniques improve. Rather than training people to detect these fakes, they argue, it is more effective to focus efforts on developing more sophisticated automatic detectors.
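To give a flavor of what an automatic detector looks at, the toy sketch below computes spectral flatness, one of many signal statistics that can differ between natural and synthesized audio. This is an illustrative simplification, not any detector named in the article: real systems rely on learned models trained on large corpora of genuine and synthetic speech, and no single hand-picked feature is reliable on its own. The signals here are synthetic stand-ins (noise vs. a pure tone) chosen only to show the metric behaving differently on spectrally different inputs.

```python
import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    """Ratio of geometric to arithmetic mean of the power spectrum.
    Values near 1 mean a flat, noise-like spectrum; values near 0
    mean energy concentrated in a few frequencies (tonal)."""
    power = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12  # avoid log(0)
    geometric_mean = np.exp(np.mean(np.log(power)))
    arithmetic_mean = np.mean(power)
    return float(geometric_mean / arithmetic_mean)

rng = np.random.default_rng(0)
n, sample_rate = 4096, 16000

# Noise-like signal: spectrally flat, flatness near 1.
noise = rng.standard_normal(n)

# Pure 440 Hz tone: energy in one frequency bin, flatness near 0.
tone = np.sin(2 * np.pi * 440 * np.arange(n) / sample_rate)

print(f"noise flatness: {spectral_flatness(noise):.3f}")
print(f"tone flatness:  {spectral_flatness(tone):.3f}")
```

A real detector would extract many such features (or learn them directly from spectrograms) and feed them to a trained classifier; the cat-and-mouse dynamic discussed below arises because generators can be tuned to mimic exactly these statistics.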

The Game Of Cat And Mouse

Some experts express skepticism about the effectiveness of pitting AI against AI. They argue that competition will always exist to develop the most advanced technology. However, others emphasize the importance of investing resources in improving audio deepfake detectors.

Attacks Directed At Public Figures

While ordinary users are also vulnerable to audio deepfakes, the most sophisticated attacks target public figures, whose voices are widely available in public recordings. It is crucial that everyone be aware of these auditory fakes and exercise caution when consuming digital content.
