Smart speakers running assistants like Amazon’s Alexa or Google Assistant may make life much more convenient for users, but how safe are these devices from malicious hacking?
Amichai Shulman, an adjunct professor at the Technion Israel Institute of Technology, decided to test that question with his computer science students more than a year ago. After he challenged his class to find security flaws in Microsoft’s voice assistant Cortana, the results were alarming. One of the vulnerabilities they found allowed a potential hacker to take control of a Windows device using only voice commands, directing it to download malware even while it was locked.
As Shulman explains, “I took undergraduate students, and in three months, they were able to come up with a whole wealth of vulnerabilities.”
As discussed at the Black Hat cybersecurity conference in Las Vegas, the professor’s class assignment highlights the risks surrounding voice assistants as they are integrated into more homes worldwide. In the first quarter of 2018 alone, 9.2 million smart speakers shipped, the majority of them running Amazon’s Alexa or Google Assistant, and the market for such devices isn’t slowing down any time soon. Researchers expect 55 percent of US households to have a smart assistant by 2022.
Each device acts as a potential entry point that hackers can use to their advantage.
While security researchers Shulman and Tal Be’ery found these vulnerabilities in Cortana, McAfee’s researchers independently discovered the same flaws. Cortana’s security shortcomings prompted researchers to look further into the broader problem with voice assistants.
As McAfee’s chief consumer security advocate Gary Davis explains, “It is too ripe of an environment. There are too many of these going into homes for them not to be considered.”
Davis goes on to explain that as smart assistants spread into homes worldwide, the chances of such attacks happening only increase.
Microsoft has already addressed the Cortana vulnerability that allowed voice-command access through locked devices with a software update in June 2018.
Cortana isn’t the only assistant with security flaws.
Over the past year, researchers have also scrutinized Amazon’s Echo, which runs the Alexa voice assistant. Back in April, Checkmarx, a security testing firm, found a security flaw in the device: its researchers developed an app for Alexa, called a “skill,” that would let a potential hacker turn the Echo into a listening device.
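Checkmarx did not publish its full proof of concept, but the mechanic it described hinges on a single field in the Alexa Skills Kit response format: `shouldEndSession`. The sketch below is a hypothetical illustration (the `build_response` helper is invented for this example, though `shouldEndSession` is a real field in Alexa skill responses); a skill that answers the user but quietly leaves the session open keeps the Echo listening and routing transcribed speech back to the skill.

```python
# Illustrative sketch, NOT the Checkmarx code: the reported eavesdropping
# trick boiled down to an Alexa skill response that leaves the voice
# session open after speaking, so the device keeps capturing audio.

def build_response(speech_text, keep_listening):
    """Build a minimal Alexa Skills Kit response payload (hypothetical helper)."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            # A benign skill sets this to True when the conversation ends.
            # The malicious variant kept it False (paired with a silent
            # reprompt), so the Echo continued listening for the skill.
            "shouldEndSession": not keep_listening,
        },
    }

benign = build_response("Goodbye.", keep_listening=False)
malicious = build_response("Goodbye.", keep_listening=True)
assert benign["response"]["shouldEndSession"] is True
assert malicious["response"]["shouldEndSession"] is False
```

To the user, both responses sound identical; only the session flag differs, which is what made the behavior hard to notice.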
After Amazon was notified of the security issue, the problem was quickly resolved, and the company released a statement: “[We] take customer security seriously and we have full teams dedicated to ensuring the safety and security of our products. We have taken measures to make Echo secure.”
And last September, researchers in China found that an ultrasonic, high-frequency signal, pitched above the range of human hearing, could be used to send commands to voice assistants without users’ knowledge.
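The technique the researchers published, widely reported as “DolphinAttack,” amplitude-modulates an audible command onto an ultrasonic carrier; microphone nonlinearities then demodulate it back into something the assistant can parse. The rough sketch below, which uses an invented 400 Hz tone as a stand-in for speech, only illustrates the signal construction, not a working attack.

```python
# Conceptual sketch of the inaudible-command idea: shift an audible
# waveform onto an ultrasonic carrier (above ~20 kHz). Humans cannot
# hear the result, but a microphone's nonlinear response can recover
# the original low-frequency content.
import numpy as np

SAMPLE_RATE = 192_000   # high sample rate needed to represent ultrasound
CARRIER_HZ = 25_000     # above the ~20 kHz ceiling of human hearing
VOICE_HZ = 400          # stand-in tone for a spoken command

t = np.arange(0, 0.01, 1 / SAMPLE_RATE)
voice = np.sin(2 * np.pi * VOICE_HZ * t)       # "command" waveform
carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
inaudible = (1 + 0.5 * voice) * carrier        # AM-modulated signal

# The modulated signal's dominant frequency is the ultrasonic carrier,
# i.e. the broadcast itself sits entirely outside the audible band.
spectrum = np.abs(np.fft.rfft(inaudible))
peak_hz = np.fft.rfftfreq(len(t), 1 / SAMPLE_RATE)[np.argmax(spectrum)]
assert peak_hz > 20_000
```

The defense-relevant point is the last assertion: everything a speaker emits in this scheme lies above 20 kHz, which is why bystanders hear nothing.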
Symantec’s principal threat researcher, Candid Wueest, notes that more security issues will arise even as these reported vulnerabilities are fixed.
“[Skills] and actions are probably one of the most prevalent attack vectors we’ll see,” he explains. “There will be others that can be found in the future that we probably haven’t even heard of yet.”
Among Shulman’s findings on Cortana’s security shortcomings: the assistant could be made to browse to non-secure websites using only voice commands. Because such pages lack encryption, a hacker could intercept the traffic and carry out an attack.
While Microsoft may have fixed the problem, Shulman still found a loophole in the tech giant’s security updates: by simply phrasing the voice commands differently, the assistant would still browse to those non-secure sites.
He explains how “instead of saying ‘Go to BBC.com’ [one] would say, ‘Launch BBC,’ and it would open the non-SSL site in the background,” referring to the non-secure website. He goes on to say how he was “able to find many, many sentences that repeat[ed] the same behavior.”
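The distinction Shulman keeps drawing comes down to the URL scheme: a page fetched over plain `http://` travels unencrypted, so anyone on the network path can read or rewrite it, for example to inject malware. A minimal sketch of that check (the `is_encrypted` helper is hypothetical, not part of any Cortana API):

```python
# The risk in the Cortana loophole is the scheme of the opened URL:
# plain HTTP pages are unencrypted and can be tampered with in transit,
# while HTTPS pages are protected by TLS.
from urllib.parse import urlparse

def is_encrypted(url: str) -> bool:
    """Return True only for HTTPS URLs, whose traffic is protected by TLS."""
    return urlparse(url).scheme == "https"

# An assistant that silently opens the HTTP variant in the background
# exposes the device to on-path tampering the HTTPS variant would block.
assert is_encrypted("https://www.bbc.com") is True
assert is_encrypted("http://www.bbc.com") is False
```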
Voice assistant technology brings with it a great deal of potential for attacks by cybercriminals. Such devices may even be able to send payments soon, as developers have expressed interest in implementing that skill, which would make the system’s vulnerabilities all the more attractive to exploit.
Nowadays voice assistants can be used on almost any device, from our televisions to our cars, our phones, and even our bathrooms. This poses a risk to users, especially as we grow more comfortable with them. The more we implement them into our lives, “the more our guard will be dropped,” as Davis explains.
For this reason, Shulman suggested that not every device needs to have voice command control.
“You take a concept that is very helpful with handheld devices, and you try to replicate it,” as he explains. “In which, it is not extremely helpful, and as we’ve shown, [it can become] very dangerous.”