DolphinAttack: what is it and how can it affect speech recognition systems?

Six security researchers from Zhejiang University have published an interesting study on speech recognition systems such as Siri, Google Now, Samsung S Voice, Huawei HiVoice, Cortana, and Alexa. Controlling devices via voice commands has become convenient for users, yet hardly anyone asks whether these systems are properly secured.

Voice-controllable systems have been the focus of multiple analyses, and some works have demonstrated the possibility of inaudible voice commands. These commands cannot be heard by humans but are readily recognized by machines. It therefore becomes evident that speech recognition systems could be manipulated by hackers.

What is a DolphinAttack?

The six researchers have provided evidence that the speech recognition systems mentioned above can indeed fall victim to inaudible voice commands. To carry out the attack, hackers modulate voice commands onto ultrasonic carriers. As a result, the commands are inaudible to people, yet speech recognition systems are perfectly capable of picking them up. The researchers have demonstrated several proof-of-concept attacks, documenting the exact results of DolphinAttacks.
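To illustrate the mechanics, the sketch below shows how a baseband voice signal could be amplitude-modulated onto an ultrasonic carrier, which is the general idea behind the attack. The sample rate, carrier frequency, and modulation depth used here are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def modulate_ultrasonic(voice, fs=192_000, carrier_hz=25_000, depth=1.0):
    """Amplitude-modulate a baseband voice signal onto an ultrasonic carrier.

    voice      -- 1-D array of voice samples, normalized to [-1, 1]
    fs         -- sample rate in Hz (must be well above twice the carrier)
    carrier_hz -- carrier above the human hearing limit (~20 kHz); illustrative
    depth      -- modulation depth; illustrative value
    """
    t = np.arange(len(voice)) / fs
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Standard AM: the envelope carries the voice. The carrier itself is
    # inaudible to humans, but nonlinearities in a microphone can recover
    # the envelope, so the recognizer "hears" the hidden command.
    return (1 + depth * voice) * carrier

# Example: a 1 kHz test tone standing in for a recorded voice command.
fs = 192_000
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 1_000 * t)
attack_signal = modulate_ultrasonic(tone, fs=fs)
```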

Voice-controllable systems typically pass through three main steps: first they capture the voice, then they recognize the stated command, and finally they execute the assigned task. While a DolphinAttack might not be easy to accomplish, it has been proven possible. The full report of the study shows that, if successful, this attack could give a disturbing amount of power to hackers. The researchers have demonstrated that attackers could initiate FaceTime calls on iPhones or make Google Now switch a phone to airplane mode. Probably the most alarming example, however, is that the attack could let hackers interfere with the navigation system of an Audi car.
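To make the three-step pipeline above concrete, here is a minimal sketch of how such a system could be structured. All function names are hypothetical placeholders and do not correspond to any vendor's actual API.

```python
# Hypothetical sketch of the three stages of a voice-controllable system.

def capture_audio() -> bytes:
    """Stage 1: record raw audio from the microphone."""
    ...

def recognize_command(audio: bytes) -> str:
    """Stage 2: run speech recognition on the captured audio."""
    ...

def execute_command(command: str) -> None:
    """Stage 3: carry out the recognized command."""
    ...

def assistant_loop():
    audio = capture_audio()
    command = recognize_command(audio)
    # A DolphinAttack slips through because nothing in this pipeline
    # checks whether the captured audio was audible to the human owner.
    execute_command(command)
```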

In addition to the possibilities mentioned above, a DolphinAttack could open malicious websites and silently install malware. Furthermore, users might be unknowingly monitored via voice or video calls. In other cases, users might be denied full use of their devices, as some functions could be put into a denial-of-service state. The researchers focused mainly on Amazon Echo, Google Nexus devices, Apple iPhones, and automobiles, but they believe that many other devices that use speech recognition should be considered potential victims of DolphinAttacks.

In their study, the researchers from China have also described techniques capable of reducing the likelihood of DolphinAttacks. They mostly focus on hardware-based defense strategies: microphone enhancement and baseband cancellation. Despite the technical-sounding name, microphone enhancement simply means modifying microphones so that they no longer pick up ultrasounds.
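As a rough software analogue of that filtering idea, one could flag recordings whose spectrum contains suspicious energy above the audible band, assuming the hardware samples fast enough to capture it. The 20 kHz cutoff and the energy-ratio threshold below are illustrative assumptions, not parameters from the study.

```python
import numpy as np

def has_ultrasonic_energy(samples, fs, cutoff_hz=20_000, ratio_threshold=0.05):
    """Flag recordings with suspicious energy above the audible band.

    samples         -- 1-D array of audio samples
    fs              -- sample rate in Hz (must exceed 2 * cutoff_hz)
    cutoff_hz       -- boundary of the audible band; illustrative
    ratio_threshold -- fraction of total energy above the cutoff that
                       triggers rejection; illustrative
    """
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1 / fs)
    total = spectrum.sum()
    if total == 0:
        return False
    high = spectrum[freqs >= cutoff_hz].sum()
    return (high / total) > ratio_threshold
```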

Source: assets.documentcloud.org.
