
Research finds PVC pipes can hack voice identification systems

These systems, which are already in place for phone banking and other applications, are adept at detecting digitally-manipulated attempts to impersonate a user's voice.

Security researchers are locked in a race with hackers to prevent data theft. Among their standard defences are multifactor authentication systems, fingerprint technology, and retinal scanning. Automatic speaker identification, which utilises a person’s voice as a passcode, is a type of security system that is gaining popularity.

These systems, which are already in place for phone banking and other applications, are adept at detecting digitally manipulated attempts to impersonate a user’s voice. Digital security engineers at the University of Wisconsin–Madison have found, however, that the systems are not so fail-safe against an innovative analogue attack: speaking through customised PVC pipes — the kind commonly found in hardware stores — can fool the machine learning algorithms that underpin automatic speaker recognition.

The team, led by doctoral student Shimaa Ahmed and professor Kassem Fawaz of electrical and computer engineering, presented their findings at the USENIX Security Symposium in Anaheim, California, on August 9.


PVC pipes can hack voice identification systems

The dangers presented by analogue security flaws could be extensive. Ahmed notes that numerous commercial companies are already selling the technology to financial institutions as early adopters. Additionally, the technology is used for AI-based personal assistants such as Apple’s Siri.

“The systems are currently marketed as being as secure as a fingerprint, but this is inaccurate,” says Ahmed. “All of these are susceptible to speaker identification attacks. The attack we devised is very inexpensive; all you need is a tube from a supply store to alter your voice.”

The project began when the team started searching for vulnerabilities in automatic speaker identification systems. When the researchers spoke clearly, the models behaved as advertised; when they spoke through their hands or into a box, however, the models did not perform as expected.

Expert Analysis

Ahmed examined whether it was possible to manipulate the resonance, or characteristic frequency vibrations, of a voice in order to circumvent the security system. Because her work started while she was quarantined at home due to COVID-19, Ahmed first tested the concept by speaking through a paper towel tube. Later, after returning to the lab, the team recruited Yash Wani, then an undergraduate and now a PhD candidate, to help modify PVC pipes at the UW Makerspace. Ahmed, Wani, and their colleagues adjusted the length and diameter of pipes purchased from a local hardware store until the pipes produced the same resonance as the voice they were attempting to imitate.
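To make the acoustics concrete, the resonances the team was tuning can be approximated with the textbook formula for a cylindrical pipe open at both ends. The short Python sketch below is illustrative only and is not the researchers’ model; it assumes an ideal open-open pipe, a speed of sound of 343 m/s, and the standard end correction of roughly 0.6 times the pipe radius at each open end.

```python
# Rough acoustic sketch: resonant frequencies of an ideal cylindrical pipe
# open at both ends, with an end correction of ~0.6 * radius per open end.
# Illustrative only; not the researchers' model.

SPEED_OF_SOUND = 343.0  # m/s, dry air at roughly 20 C (assumed)

def pipe_resonances(length_m: float, diameter_m: float, n_modes: int = 3) -> list[float]:
    """Return the first few resonant frequencies (Hz) of an open-open pipe."""
    radius = diameter_m / 2.0
    effective_length = length_m + 2 * 0.6 * radius  # end correction at both ends
    return [n * SPEED_OF_SOUND / (2 * effective_length) for n in range(1, n_modes + 1)]

# Example: a 30 cm length of roughly 5 cm (2-inch) PVC pipe
print(pipe_resonances(0.30, 0.05))  # approximately [520, 1040, 1560] Hz
```

In this simple model, lengthening the pipe lowers the resonant frequencies, while a wider pipe adds a larger end correction, which is why adjusting both dimensions lets an attacker steer the resonance toward a target voice.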

Eventually, the team devised an algorithm capable of calculating the PVC pipe dimensions required to transform the resonance of virtually any voice into an imitation of another. In a test set of 91 voices, the researchers were able to deceive the security systems with the PVC tube attack 60% of the time, whereas unaltered human impersonators were only able to fool the systems 6% of the time.
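The paper’s dimension-finding algorithm is not reproduced here, but as a toy stand-in one can invert the same open-pipe formula to pick a length whose fundamental resonance lands on a chosen target frequency. The function name and parameters below are hypothetical.

```python
# Toy inversion of the open-pipe formula: choose a pipe length whose
# fundamental resonance lands on a target frequency. A stand-in illustration,
# not the dimension-search algorithm described in the paper.

SPEED_OF_SOUND = 343.0  # m/s (assumed)

def length_for_target(f_target_hz: float, diameter_m: float) -> float:
    """Pipe length (m) whose first open-open resonance is f_target_hz."""
    radius = diameter_m / 2.0
    effective_length = SPEED_OF_SOUND / (2 * f_target_hz)
    return effective_length - 2 * 0.6 * radius  # subtract the end correction

# Example: aim the fundamental at 500 Hz with a ~5 cm diameter pipe
print(round(length_for_target(500.0, 0.05), 3))  # ~0.313 m
```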

The spoofing attack is effective for two reasons. First, because the sound is analogue, it bypasses the voice authentication system’s digital attack filters. Second, the tube does not transform one voice into another; rather, it imitates the resonance of the target voice, which is enough to fool the machine learning algorithm into misclassifying the attacking voice.
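For context on that second point, many speaker identification systems compare an embedding of the incoming voice against an enrolled voiceprint and accept anything that scores above a similarity threshold. The sketch below is a generic illustration of that accept/reject logic, not the commercial systems that were tested; the embeddings, dimensionality, and threshold are all made up.

```python
# Generic illustration of embedding-based speaker verification: a voice is
# accepted if its embedding is similar enough to the enrolled voiceprint.
# The embeddings and threshold here are hypothetical, not the systems tested.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(enrolled: np.ndarray, probe: np.ndarray, threshold: float = 0.7) -> bool:
    """Accept the probe voice if it scores above the similarity threshold."""
    return cosine_similarity(enrolled, probe) >= threshold

rng = np.random.default_rng(0)
enrolled = rng.normal(size=192)                      # stand-in for the target's voiceprint
probe = enrolled + rng.normal(scale=0.5, size=192)   # resonance-shifted attacking voice
print(verify(enrolled, probe))
```

In this framing, an attacker whose resonances have been shifted toward the target’s does not need a perfect copy of the voice, only a score that clears the threshold.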

Fawaz says that part of the reason for the initiative is to alert the security community that voice identification is not as secure as many people believe, although he says that many researchers are already aware of the technology’s vulnerabilities.

“We’re trying to say something more fundamental,” Fawaz says. “All machine learning applications that analyse speech signals assume that the voice originates from a speaker and travels through the air to a microphone. However, you should not assume that the voice is what you anticipate it to be. There are numerous possible physical world transformations for this speech signal. If this violates the system’s underlying assumptions, the system will behave improperly.”

Sweta Bharti

Sweta Bharti is pursuing a bachelor’s degree in medicine. She is keen on writing about trending topics.
