Hard as it is to believe, what many might think is the last bastion of total privacy, the human mind, is quickly becoming just as vulnerable as the rest of our lives, thanks to mind-reading headsets and other ways to “hack” the mind.
Now security researchers from the University of California, Berkeley, the University of Oxford, and the University of Geneva have created a custom program that interfaces with brain-computer interface (BCI) devices to steal personal information from unsuspecting victims.
The researchers targeted consumer-grade BCI devices because they are quickly gaining popularity in a wide variety of applications, including hands-free computer interfacing, video games, and biometric feedback programs.
Furthermore, there are now application marketplaces, similar to those popularized by Apple and the Android platform, which rely on an API to collect data from the BCI device.
Unfortunately, all new technology brings new risks, and until now, “The security risks involved in using consumer-grade BCI devices have never been studied and the impact of malicious software with access to the device is unexplored,” according to a press release.
The individuals involved in this project, which resulted in a research paper entitled “On the Feasibility of Side-Channel Attacks with Brain-Computer Interfaces,” include Ivan Martinovic and Tomas Ros of the Universities of Oxford and Geneva, respectively, along with Doug Davies, Mario Frank, Daniele Perito, and Dawn Song, all of the University of California, Berkeley.
The findings of these innovative researchers are nothing short of disturbing. They found “that this upcoming technology could be turned against users to reveal their private and secret information.”
The information that can be gleaned through these attacks is incredibly sensitive, including “bank cards, PIN numbers, area of living, the knowledge of the known persons.”
Most troubling is the fact that this represents “the first attempt to study the security implications of consumer-grade BCI devices,” which makes the success of the attacks that much more disconcerting.
The researchers tested their custom program on 28 participants who, while obviously aware that they were cooperating in a study, did not know that they were being “brain-hacked,” as it were.
Unfortunately, or fortunately, depending on your perspective, the researchers found “that the entropy of the private information is decreased on the average by approximately 15%–40% compared to random guessing attacks.”
Or as Sebastian Anthony put it in writing for ExtremeTech, “in general the experiments had a 10 to 40% chance of success of obtaining useful information.”
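To put those entropy figures in perspective, here is a toy calculation of my own; it is an illustration of what an entropy reduction means for an attacker, not a computation from the paper. Reducing the entropy of a secret shrinks the effective number of equally likely guesses the attacker must sift through:

```python
import math

def effective_candidates(n_outcomes: int, entropy_reduction: float) -> float:
    """Effective number of equally likely candidates that remain after an
    attack reduces the guessing entropy by the given fraction."""
    full_entropy = math.log2(n_outcomes)              # bits under random guessing
    remaining_bits = full_entropy * (1 - entropy_reduction)
    return 2 ** remaining_bits

# A 4-digit PIN has 10,000 possibilities, i.e. about 13.3 bits of entropy.
for reduction in (0.15, 0.40):
    print(f"{reduction:.0%} reduction -> "
          f"~{effective_candidates(10_000, reduction):,.0f} candidates")
```

Read this way, a 40% entropy reduction on a 4-digit PIN leaves on the order of 250 plausible candidates instead of 10,000. The exact PIN is not recovered outright, but the search space narrows dramatically.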
The researchers leveraged a distinctive EEG signal pattern known as the P300 response, a brainwave pattern that typically occurs when the subject recognizes something meaningful, such as a friend’s face or a tool needed to complete a given task.
Armed with this knowledge of the P300 response, the researchers created a program that employs what those familiar with hacking might call a “brute force” method.
The comparison is only loose, however, since this brute force attack is waged against the human mind.
The researchers did this by flashing pictures of maps, banks, PINs, and the like while monitoring the subject’s EEG for P300 responses.
After collecting enough data from the subject, they could compare the recordings to see which images triggered a P300 response.
This allowed the researchers to determine, with surprising accuracy, which bank the subject uses, where they live, and other potentially highly sensitive information.
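That flash-and-compare step can be sketched in a few lines. The following is a simplified illustration of the general idea of averaging event-related potentials, not the classifier from the paper; the sampling rate, time window, and function names are my own assumptions:

```python
import numpy as np

FS = 256                    # assumed EEG sampling rate in Hz
P300_WINDOW = (0.25, 0.50)  # the P300 peaks roughly 250-500 ms after the stimulus

def p300_score(epochs: np.ndarray) -> float:
    """Average all epochs recorded for one stimulus and return the mean
    amplitude inside the P300 window. Background EEG is uncorrelated with
    stimulus onset, so it averages toward zero, while a genuine P300 adds up."""
    erp = epochs.mean(axis=0)                        # event-related potential
    lo, hi = (int(t * FS) for t in P300_WINDOW)
    return float(erp[lo:hi].mean())

def best_guess(epochs_by_stimulus: dict[str, np.ndarray]) -> str:
    """Pick the stimulus (e.g. a bank logo) whose averaged response looks
    most like a P300 -- the image the subject most likely recognized."""
    return max(epochs_by_stimulus, key=lambda s: p300_score(epochs_by_stimulus[s]))
```

In this sketch, each stimulus is flashed many times, every repetition yields one “epoch” of EEG samples, and the stimulus whose averaged epochs show the strongest positive deflection in the P300 window is taken as the attacker’s best guess.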
The key to capturing this information seems to be keeping the subject unaware that they are being attacked, either through specially crafted “games” designed to steal personal information from the mind of the target or through a false sense of security engendered by social engineering techniques.
Personally, I find it quite troubling that people could have their personal information stolen simply by playing what they think is a normal game controlled by a BCI device, when in reality it is a carefully engineered piece of software designed to pull private data from the target’s mind.
However, Anthony incorrectly states, “Really, your only defense is to not think about the topic,” when in reality the P300 response can occur without consciously “thinking” about the topic.
Besides, if the target is already consciously on the defensive, the hacker has failed at the task of remaining in the shadows and carrying out the attack without the target’s knowledge.
That being said, if such programs are crafted cleverly enough, I seriously doubt that most people would be able to tell they were being actively attacked for their most private and sensitive information.