Having known people who actually do use mindspeak to communicate, I thought this article might be of interest. After all, science is surely tackling everything these days. What my friends did came naturally, and their affectionate, communicative home could function well for days without audible speech. They told me it was largely an inherited gift, one they helped their children develop to a high degree. They all had other amazing skills, but I digress.
I particularly liked the part about having your mind read by others. One of them used to tease me by picking the exact words out of my skull and mirroring them back to me in a sentence. I trusted him, but would I trust another to get into my skull and read my thoughts? Not a chance!
By Adam Piore
July 22, 2011
On a cold, blustery afternoon the week before Halloween, an assortment of spiritual mediums, animal communicators, and astrologists have set up tables in the concourse beneath the Empire State Plaza in Albany, New York. The cavernous hall of shops that connects the buildings in this 98-acre complex is a popular venue for autumnal events: Oktoberfest, the Maple Harvest Festival, and today’s “Mystic Fair.”
Traffic is heavy as bureaucrats with ID badges dangling from their necks stroll by during their lunch breaks. Next to the Albany Paranormal Research Society table, a middle-aged woman is solemnly explaining the workings of an electromagnetic sensor that can, she asserts, detect the presence of ghosts. Nearby, a “clairvoyant” ushers a government worker in a suit into her canvas tent. A line has formed at the table of a popular tarot card reader.
Amid all the bustle and transparent hustles, few of the dabblers at the Mystic Fair are aware that there is a genuine mind reader in the building, sitting in an office several floors below the concourse. This mind reader is not able to pluck a childhood memory or the name of a loved one out of your head, at least not yet. But give him time. He is applying hard science to an aspiration that was once relegated to clairvoyants, and unlike his predecessors, he can point to some hard results.
The mind reader is Gerwin Schalk, a 39-year-old biomedical scientist and a leading expert on brain-computer interfaces at the New York State Department of Health’s Wadsworth Center at Albany Medical College. The Austrian-born Schalk, along with a handful of other researchers, is part of a $6.3 million U.S. Army project to establish the basic science required to build a thought helmet: a device that can detect and transmit the unspoken speech of soldiers, allowing them to communicate with one another silently.
As improbable as it sounds, synthetic telepathy, as the technology is called, is getting closer to battlefield reality. Within a decade Special Forces could creep into the caves of Tora Bora to snatch Al Qaeda operatives, communicating and coordinating without hand signals or whispered words. Or a platoon of infantrymen could telepathically call in a helicopter to whisk away their wounded in the midst of a deafening firefight, where intelligible speech would be impossible above the din of explosions.
For a look at the early stages of the technology, I pay a visit to a different sort of cave, Schalk’s bunkerlike office. Finding it is a workout. I hop in an elevator within shouting distance of the paranormal hubbub, then pass through a long, linoleum-floored hallway guarded by a pair of stern-faced sentries, and finally descend a cement stairwell to a subterranean warren of laboratories and offices.
Schalk is sitting in front of an oversize computer screen, surrounded by empty metal bookshelves and white cinder-block walls, bare except for a single photograph of his young family and a poster of the human brain. The fluorescent lighting flickers as he hunches over a desk to click on a computer file. A volunteer from one of his recent mind-reading experiments appears in a video facing a screen of her own. She is concentrating, Schalk explains, silently thinking of one of two vowel sounds, aah or ooh.
The volunteer is clearly no ordinary research subject. She is draped in a hospital gown and propped up in a motorized bed, her head swathed in a plasterlike mold of bandages secured under the chin. Jumbles of wires protrude from an opening at the top of her skull, snaking down to her left shoulder in stringy black tangles. Those wires are connected to 64 electrodes that a neurosurgeon has placed directly on the surface of her naked cortex after surgically removing the top of her skull. “This woman has epilepsy and probably has seizures several times a week,” Schalk says, revealing a slight Germanic accent.
The main goal of this technique, known as electrocorticography, or ECOG, is to identify the exact area of the brain responsible for her seizures, so surgeons can attempt to remove the damaged areas without affecting healthy ones. But there is a huge added benefit: The seizure patients who volunteer for Schalk’s experiments prior to surgery have allowed him and his collaborator, neurosurgeon Eric C. Leuthardt of Washington University School of Medicine in St. Louis, to collect what they claim are among the most detailed pictures ever recorded of what happens in the brain when we imagine speaking words aloud.
Those pictures are a central part of the project funded by the Army’s multi-university research grant and the latest twist on science’s long-held ambition to read what goes on inside the mind. Researchers have been experimenting with ways to understand and harness signals in the areas of the brain that control muscle movement since the early 2000s, and they have developed methods to detect imagined muscle movement, vocalizations, and even the speed with which a subject wants to move a limb.
At Duke University Medical Center in North Carolina, researchers have surgically implanted electrodes in the brains of monkeys and trained them to move robotic arms at MIT, hundreds of miles away, just by thinking.
At Brown University, scientists are working on a similar implant they hope will allow paralyzed human subjects to control artificial limbs.
And workers at Neural Signals Inc., outside Atlanta, have been able to extract vowels from the motor cortex of a paralyzed patient who lost the ability to talk by sinking electrodes into the area of his brain that controls his vocal cords.
But the Army’s thought-helmet project is the first large-scale effort to “really attack” the much broader challenge of synthetic telepathy, Schalk says. The Army wants practical applications for healthy people, “and we are making progress,” he adds.
Schalk is now attempting to make silent speech a reality by using sensors and computers to explore the regions of the brain responsible for storing and processing thoughts. The goal is to build a helmet embedded with brain-scanning technologies that can target specific brain waves, translate them into words, and transmit those words wirelessly to a radio speaker or an earpiece worn by other soldiers.
As Schalk explains his vast ambitions, I’m mesmerized by the eerie video of the bandaged patient on the computer screen. White bars cover her eyes to preserve her anonymity. She is lying stock-still, giving the impression that she might be asleep or comatose, but she is very much engaged.
Schalk points with his pen at a large rectangular field on the side of the screen depicting a region of her brain abuzz with electrical activity. Hundreds of yellow and white brain waves dance across a black backdrop, each representing the oscillating electrical pulses picked up by one of the 64 electrodes attached to her cortex as clusters of brain cells fire.
Somewhere in those squiggles lie patterns that Schalk is training his computer to recognize and decode.
“To make sense of this is very difficult,” he says. “For each second there are 1,200 variables from each electrode location. It’s a lot of numbers.”
Schalk gestures again toward the video. Above the volunteer’s head is a black bar that extends right or left depending on the computer’s ability to guess which vowel the volunteer has been instructed to imagine: right for “aah,” left for “ooh.” The volunteer imagines “ooh,” and I watch the black bar inch to the left. The volunteer thinks “aah,” and sure enough, the bar extends right, proof that the computer’s analysis of those hundreds of squiggling lines in the black rectangle is correct. In fact, the computer gets it right “close to 100 percent of the time,” Schalk says.
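The article doesn’t disclose Schalk’s actual decoding pipeline, but the generic recipe for a two-class experiment like this is well established: extract band-power features from each electrode and feed them to a linear classifier. Below is a minimal sketch on synthetic data; the electrode count matches the 64-electrode grid mentioned above, while the sampling rate (1,200 samples per second per electrode, chosen to match the “1,200 variables” Schalk cites), the high-gamma band, and everything else are assumptions, not Wadsworth’s settings.

```python
# Minimal sketch of two-class "imagined vowel" decoding from ECOG-style
# signals, on synthetic data. Illustrative only; not Schalk's algorithm.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

FS = 1200          # assumed samples/s per electrode ("1,200 variables")
N_ELECTRODES = 64  # grid size mentioned in the article
N_TRIALS = 200     # hypothetical number of one-second imagery trials

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, N_TRIALS)              # 0 = "ooh", 1 = "aah"
trials = rng.standard_normal((N_TRIALS, N_ELECTRODES, FS))

# Pretend a few electrodes carry extra high-gamma (70-110 Hz) power for
# one vowel -- the kind of difference a real decoder must find.
t = np.arange(FS) / FS
for i in np.where(labels == 1)[0]:
    trials[i, :8] += 0.4 * np.sin(2 * np.pi * 90 * t)

def band_power(trial, lo=70, hi=110):
    """Mean spectral power in [lo, hi] Hz for each electrode."""
    freqs, psd = welch(trial, fs=FS, nperseg=256, axis=-1)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[:, band].mean(axis=-1)

X = np.log(np.array([band_power(tr) for tr in trials]))  # log-power features
clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())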
He admits that he is a long way from decoding full, complex imagined sentences with multiple words and meaning. But even extracting two simple vowels from deep within the brain is a big advance. Schalk has no doubt about where his work is leading.
“This is the first step toward mind reading,” he tells me.
The motivating force behind the thought helmet project is a retired Army colonel with a Ph.D. in the physiology of vision and advanced belts in karate, judo, aikido, and Japanese sword fighting. Elmar Schmeisser, a lanky, bespectacled scientist with a receding hairline and a neck the width of a small tree, joined the Army Research Office as a program manager in 2002. He had spent his 30-year career up to that point working in academia and at various military research facilities, exhaustively investigating eyewear to protect soldiers against laser exposure, among other technologies.
Schmeisser had been fascinated by the concept of a thought helmet ever since he read about it in E. E. “Doc” Smith’s 1946 science fiction classic, Skylark of Space, back in the eighth grade. But it was not until 2006, while Schmeisser was attending a conference on advanced prosthetics in Irvine, California, that it really hit him: Science had finally caught up to his boyhood vision. He was listening to a young researcher expound on the virtues of extracting signals from the surface of the brain. The young researcher was Gerwin Schalk.
Schalk’s lecture was causing a stir. Many neuroscientists had long believed that the only way to extract data from the brain specific enough to control an external device was to penetrate the cortex and sink electrodes into the gray matter, where the electrodes could record the firing of individual neurons. By claiming that he could pry information from the brain without drilling deep inside it (information that could allow a subject to move a computer cursor, play computer games, and even move a prosthetic limb), Schalk was taking on “a very strong existing dogma in the field that the only way to know about how the brain works is by recording individual neurons,” Schmeisser vividly recalls.
Many of those present dismissed Schalk’s findings as blasphemy and stood up to attack them. But for Schmeisser it was a magical moment. If he could take Schalk’s idea one step further and find a way to extract verbal thoughts from the brain without surgery, the technology could dramatically benefit not only disabled people but the healthy as well. “Everything,” he says, “all of a sudden became possible.”
The next year, Schmeisser marched into a large conference room at Army Research Office headquarters in Research Triangle Park, North Carolina, to pitch a research project to investigate synthetic telepathy for soldiers. He took his place at a podium facing a large, U-shaped table fronting rows of chairs, where a committee of some 30 senior scientists and colleagues (division chiefs, directorate heads, mathematicians, particle physicists, chemists, computer scientists, and Pentagon brass in civilian dress) waited for him to begin.
Schmeisser had 10 minutes and six PowerPoint slides to address four major questions: Where was the field at the moment? How might his idea prove important? What would the Army get out of it? And was there reason to believe that it was doable?
The first three questions were simple. It was that last one that tripped him up. “Does this really work?” Schmeisser remembers the committee asking him. “Show us the evidence that this could really work, that you are not just hallucinating it.”
The committee rejected Schmeisser’s proposal but authorized him to collect more data over the following year to bolster his case. For assistance he turned to Schalk, the man who had gotten him thinking about a thought helmet in the first place.
Schalk and Leuthardt had been conducting mind-reading experiments for several years, exploring their patients’ ability to play video games, move cursors, and type by means of brain waves picked up by electrodes resting on the cortex. The two men were eager to push their research further and expand into areas of the brain thought to be associated with language, so when Schmeisser offered them a $450,000 grant to prove the feasibility of a thought helmet, they seized the opportunity.
Schalk and Leuthardt quickly recruited 12 epilepsy patients as volunteers for their first set of experiments. As I had seen in the video in Schalk’s office, each patient had the top of his skull removed and electrodes affixed to the surface of the cortex. The researchers then set up a computer screen and speakers in front of the patients’ beds.
The patients were presented with 36 words that had a relatively simple consonant-vowel-consonant structure, such as bet, bat, beat, and boot. They were asked to say the words out loud and then to simply imagine saying them. The instructions were conveyed once visually (the word written on a computer screen, with no audio) and once aurally (the word spoken through the speakers, with nothing on the screen). The electrodes provided a precise map of the resulting neural activity.
Schalk was intrigued by the results. As one might expect, when the subjects vocalized a word, the data indicated activity in the areas of the motor cortex associated with the muscles that produce speech. The auditory cortex and an area in its vicinity long believed to be associated with speech, called Wernicke’s area, were also active.
When the subjects imagined words, the motor cortex went silent while the auditory cortex and Wernicke’s area remained active. Although it was unclear why those areas were active, what they were doing, and what it meant, the raw results were an important start. The next step was obvious: Reach inside the brain and try to pluck out enough data to determine, at least roughly, what the subjects were thinking.
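A hedged sketch of how such a contrast can be quantified: compare per-electrode activity between the spoken and imagined conditions and flag the electrodes that go quiet during imagery, the way the motor cortex did in Schalk’s data. The simulation below is illustrative only; none of its numbers come from the actual experiments.

```python
# Hypothetical overt-vs-imagined contrast: electrodes whose activity
# drops to baseline in the imagined condition behave like motor cortex;
# those that stay elevated behave like auditory cortex / Wernicke's area.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_trials, n_elec = 60, 64
baseline = rng.normal(1.0, 0.1, (n_trials, n_elec))

overt = baseline.copy()
overt[:, :10] += 0.5      # "motor" electrodes: active only when speaking aloud
overt[:, 10:20] += 0.5    # "auditory/Wernicke" electrodes: active in both
imagined = baseline.copy()
imagined[:, 10:20] += 0.5

t, p = ttest_ind(overt, imagined, axis=0)
silent_when_imagined = np.where(p < 0.01)[0]
print("electrodes active only during overt speech:", silent_when_imagined)
```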
Schmeisser presented Schalk’s data to the Army committee the following year and asked it to fund a formal project to develop a real mind-reading helmet. As he conceived it, the helmet would function as a wearable interface between mind and machine. When activated, sensors inside would scan the thousands of brain waves oscillating in a soldier’s head; a microprocessor would apply pattern recognition software to decode those waves and translate them into specific sentences or words, and a radio would transmit the message. Schmeisser also proposed adding a second capability to the helmet to detect the direction in which a soldier was focusing his attention. The function could be used to steer thoughts to a specific comrade or squad, just by looking in their direction.
The words or sentences would reach a receiver that would then “speak” the words into a comrade’s earpiece or play them from a speaker, perhaps at a distant command post. The possibilities were easy to imagine:
“Look out! Enemy on the right!”
“We need a medical evacuation now!”
“The enemy is standing on the ridge. Fire!”
Any of those phrases could be life-saving.
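In software terms, the helmet Schmeisser describes reduces to a sense-decode-transmit loop. The toy sketch below is purely illustrative; every interface in it is hypothetical, nothing corresponds to real Army hardware or code, and the hard part, the decode step, is stubbed out.

```python
# Toy rendering of the helmet pipeline: sensors -> pattern recognition ->
# radio -> earpiece. All interfaces are hypothetical stand-ins.
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class ThoughtHelmet:
    sample: Callable[[], np.ndarray]        # EEG snapshot from the cap
    decode: Callable[[np.ndarray], str]     # pattern-recognition model
    transmit: Callable[[str, str], None]    # radio link to a comrade

    def tick(self, target: str) -> None:
        """One sense-decode-transmit cycle, addressed by gaze direction."""
        eeg = self.sample()
        phrase = self.decode(eeg)
        if phrase:                          # only send recognized commands
            self.transmit(target, phrase)

# Stubs standing in for the hard parts:
helmet = ThoughtHelmet(
    sample=lambda: np.zeros((128, 256)),                  # 128 channels
    decode=lambda eeg: "medevac now",                     # fixed vocabulary
    transmit=lambda who, msg: print(f"to {who}: {msg}"),
)
helmet.tick(target="Eagle")   # -> to Eagle: medevac now
```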
This time the committee signed off.
Grant applications started piling up in Schmeisser’s office. To maximize the chance of success, he decided to split the Army funding between two university teams that were taking complementary approaches to the telepathy problem.
The first team, directed by Schalk, was pursuing the more invasive ECOG approach, attaching electrodes beneath the skull.
The second group, led by Mike D’Zmura, a cognitive scientist at the University of California, Irvine, planned to use electroencephalography (EEG), a noninvasive brain-scanning technique that was far better suited for an actual thought helmet. Like ECOG, EEG relies on brain signals picked up by an array of electrodes that are sensitive to the subtle voltage oscillations caused by the firing of groups of neurons. Unlike ECOG, EEG requires no surgery; the electrodes attach painlessly to the scalp.
For Schmeisser, this practicality was critical. He ultimately wanted answers to the big neuroscience questions that would allow researchers to capture complicated thoughts and ideas, yet he also knew that demonstrating even a rudimentary thought helmet capable of discerning simple commands would be a valuable achievement. After all, soldiers often use formulaic and reduced vocabulary to communicate. Calling in a helicopter for a medical evacuation, for instance, requires only a handful of specific words.
“We could start there,” Schmeisser says. “We could start below that.” He noted, for instance, that it does not require a terribly complicated message to call for an air strike or a missile launch: “That would be a very nice operational capability.”
The relative ease with which EEG can be applied comes at a price, however. The exact location of neural activity is far more difficult to discern via EEG than with many other, more invasive methods because the skull, scalp, and cerebral fluid surrounding the brain scatter its electric signals before they reach the electrodes. That blurring also makes the signals harder to detect at all. The EEG data can be so messy, in fact, that some of the researchers who signed on to the project harbored private doubts about whether it could really be used to extract the signals associated with unspoken thoughts.
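That messiness is why EEG pipelines lean so heavily on filtering before any decoding is attempted. A minimal cleanup pass might look like the following sketch; the sampling rate, band edges, and notch frequency are assumptions, not the Irvine team’s actual settings.

```python
# Minimal EEG cleanup (assumed parameters): band-pass to the
# physiological range, then notch out 60 Hz mains hum.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 256                                          # assumed sampling rate, Hz
raw = np.random.standard_normal((128, FS * 10))   # 128 channels, 10 s

b, a = butter(4, [1, 40], btype="bandpass", fs=FS)   # keep 1-40 Hz
clean = filtfilt(b, a, raw, axis=-1)

b_n, a_n = iirnotch(60, Q=30, fs=FS)              # remove 60 Hz line noise
clean = filtfilt(b_n, a_n, clean, axis=-1)
```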
In the initial months of the project, back in 2008, one of D’Zmura’s key collaborators, renowned neuroscientist David Poeppel, sat in his office on the second floor of the New York University psychology building and realized he was unsure even where to begin. With his research partner Greg Hickok, an expert on the neuroscience of language, he had developed a detailed model of audible speech systems, parts of which were widely cited in textbooks. But there was nothing in that model to suggest how to measure something imagined.
For more than 100 years, Poeppel reflected, speech experimentation had followed a simple plan: Ask a subject to listen to a specific word or phrase, measure the subject’s response to that word (for instance, how long it takes him to repeat it aloud), and then demonstrate how that response is connected to activity in the brain. Trying to measure imagined speech was much more complicated; a random thought could throw off the whole experiment. In fact, it was still unclear where in the brain researchers should even look for the relevant signals.
Solving this problem would call for a new experimental method, Poeppel realized. He and a postdoctoral student, Xing Tian, decided to take advantage of a powerful imaging technique called magnetoencephalography, or MEG, to do their reconnaissance work. MEG can provide roughly the same level of spatial detail as ECOG but without the need to remove part of a subject’s skull, and it is far more accurate than EEG.
Poeppel and Tian would guide subjects into a three-ton, beige-paneled room constructed of a special alloy and copper to shield against passing electromagnetic fields. At the center of the room sat a one-ton, six-foot-tall machine resembling a huge hair dryer that contained scanners capable of recording the minute magnetic fields produced by the firing of neurons. After guiding subjects into the device, the researchers would ask them to imagine speaking words like athlete, musician, and lunch. Next they asked them to imagine hearing the words.
When Poeppel sat down to analyze the results, he noticed something unusual. As a subject imagined hearing words, his auditory cortex lit up the screen in a characteristic pattern of reds and greens. That part was no surprise; previous studies had linked the auditory cortex to imagined sounds. However, when a subject was asked to imagine speaking a word rather than hearing it, the auditory cortex flashed an almost identical red and green pattern.
Poeppel was initially stumped by the results. “That is really bizarre,” he recalls thinking. “Why should there be an auditory pattern when the subjects didn’t speak and no one around them spoke?” Over time he arrived at an explanation. Scientists had long been aware of an error-correction mechanism in the brain associated with motor commands.
When the brain sends a command to the motor cortex to, for instance, reach out and grab a cup of water, it also creates an internal impression, known as an efference copy, of what the resulting movement will look and feel like. That way, the brain can check the muscle output against the intended action and make any necessary corrections.
Poeppel believed he was looking at an efference copy of speech in the auditory cortex. “When you plan to speak, you activate the hearing part of your brain before you say the word,” he explains. “Your brain is predicting what it will sound like.”
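One simple, hedged way to express what Poeppel saw: treat each condition’s average sensor readings as a pattern and correlate them. A correlation near 1 between “imagine hearing” and “imagine speaking” is the toy-model signature of an efference copy; the real MEG analysis was far more elaborate, and the sensor count below is an assumption.

```python
# Toy efference-copy signature: near-identical sensor patterns for
# imagined hearing and imagined speaking of the same word.
import numpy as np

rng = np.random.default_rng(2)
n_sensors = 157                            # hypothetical MEG sensor count
template = rng.standard_normal(n_sensors)  # shared "auditory" pattern

imagine_hearing = template + 0.2 * rng.standard_normal(n_sensors)
imagine_speaking = template + 0.2 * rng.standard_normal(n_sensors)

r = np.corrcoef(imagine_hearing, imagine_speaking)[0, 1]
print(f"pattern similarity r = {r:.2f}")   # high r: near-identical patterns
```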
The potential significance of this finding was not lost on Poeppel. If the brain held on to a copy of what an imagined thought would sound like if vocalized, it might be possible to capture that neurological record and translate it into intelligible words. As happens so often in this field of research, though, each discovery brought with it a wave of new challenges. Building a thought helmet would require not only identifying that efference copy but also finding a way to isolate it from a mass of brain waves.
D’Zmura and his team at UC Irvine have spent the past two years taking baby steps in that direction by teaching pattern recognition programs to search for and recognize specific phrases and words. The sheer size of a MEG machine would obviously be impractical in a military setting, so the team is testing its techniques using lightweight EEG caps that could eventually be built into a practical thought helmet.
The caps are comfortable enough that Tom Lappas, a graduate student working with D’Zmura, often volunteers to be a research subject. During one experiment last November, Lappas sat in front of a computer wearing flip-flops, shorts, and a latex EEG cap with 128 gel-soaked electrodes attached to it. Lappas’s face was a mask of determined focus as he stared silently at a screen while military commands blared out of a nearby speaker.
“Ready Baron go to red now,” a recorded voice intoned, then paused. “Ready Eagle go to red now…Ready Tiger go to green now…” As Lappas concentrated, a computer recorded hundreds of squiggly lines representing Lappas’s brain activity as it was picked up from the surface of his scalp. Somewhere in that mass of data, Lappas hoped, were patterns unique enough to distinguish the sentences from one another.
With so much information, the problem would not be finding patterns but filtering out the patterns that were irrelevant. Something as simple as the blink of an eye creates a tremendous number of squiggles and lines that might throw off the recognition program. To make matters more challenging, Lappas decided at this early stage in the experiment to search for patterns not only in the auditory cortex but in other areas of the brain as well.
That expanded search added to the data his computer had to crunch through. In the end, the software was able to identify the sentence a test subject was imagining speaking only about 45 percent of the time. The result was hardly up to military standards; an error rate of 55 percent would be disastrous on the battlefield.
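The 45 percent figure is easier to judge against chance. If the command set contained, say, six distinct sentences (an assumption; the article doesn’t give the exact number), random guessing would score about 17 percent, so 45 percent reflects real signal even if it is far from fieldable. Here is a sketch of the standard cross-validated evaluation, on synthetic features:

```python
# Cross-validated multi-class decoding on synthetic features, to show
# how accuracy compares with chance. Six classes is an assumption.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_classes, n_trials, n_features = 6, 300, 40
y = rng.integers(0, n_classes, n_trials)
X = rng.standard_normal((n_trials, n_features))
X[np.arange(n_trials), y] += 0.8     # weak class-specific signal

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.0%} (chance = {1/n_classes:.0%})")
```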
Schmeisser is not distressed by that high error rate. He is confident that synthetic telepathy can and will rapidly improve to the point where it will be useful in combat. “When we first started this, we didn’t know if it could be done,” he says. “That we have gotten this far is wonderful.” Poeppel agrees. “The fact that they could find anything just blows me away, frankly,” he says.
Schmeisser notes that D’Zmura has already shown that test subjects can type in Morse code by thinking specific vowels as the dots and dashes. Although this exercise is not actual language, subjects have achieved an accuracy of close to 100 percent.
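Once a decoder can label each imagined vowel, the rest of the Morse experiment is ordinary lookup. The sketch below assumes “aah” maps to dot and “ooh” to dash; the article doesn’t say which vowel played which role.

```python
# Downstream half of the Morse experiment: decoded vowel labels ->
# Morse symbols -> letters. The vowel-to-symbol mapping is an assumption.
MORSE = {".-": "A", "-...": "B", "-.-.": "C", "...": "S", "---": "O"}
SYMBOL = {"aah": ".", "ooh": "-"}

def decode(vowel_stream):
    """vowel_stream: list of decoded vowels, None marking a letter gap."""
    letters, buf = [], ""
    for v in vowel_stream + [None]:            # flush the final letter
        if v is None:
            if buf:
                letters.append(MORSE.get(buf, "?"))
            buf = ""
        else:
            buf += SYMBOL[v]
    return "".join(letters)

# "... --- ..." -> SOS
stream = ["aah"] * 3 + [None] + ["ooh"] * 3 + [None] + ["aah"] * 3
print(decode(stream))   # SOS
```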
The next steps in getting a thought helmet to work with actual language will be improving the accuracy of the pattern-recognition programs used by Schalk’s and D’Zmura’s teams and then adding, little by little, to the library of words that these programs can discern.
“Whether we can get to fully free-flowing, civilian-type speech, I don’t know. It would be nice. We’re pushing the limits of what we can get, opening the vocabulary as much as we can,” Schmeisser says.
For some concerned citizens, this research is pushing too far. Among the more paranoid set, the mere fact that the military is trying to create a thought helmet is proof of a conspiracy to subject the masses to mind control. More grounded critics consider the project ethically questionable.
Since the Army’s thought helmet project became publicly known, Schmeisser has been deluged with Freedom of Information Act requests from individuals and organizations concerned about privacy issues. Those requests for documentation have required countless hours and continue to this day.
Schalk, for his part, has resolved to keep a low profile. From his experience working with more invasive techniques, he had seen his fair share of controversy in the field, and he anticipated that this project might attract close scrutiny.
“All you need to do is say, ‘The U.S. Army funds studies to implant people for mind reading,’” he says. “That’s all it takes, and then you’re going to have to do damage control.”
D’Zmura and the rest of his team, perhaps to their regret, granted interviews about their preliminary research after it was announced in a UC Irvine press release. The negative reaction was immediate. Bizarre e-mail messages began appearing in D’Zmura’s in-box from individuals ranting against the government or expressing concern that the authorities were already monitoring their thoughts. One afternoon, a woman appeared outside D’Zmura’s office complaining of voices in her head and asking for assistance to remove them.
Should synthetic telepathy make significant progress, the worried voices will surely grow louder. Says Emory University bioethicist Paul Root Wolpe, a leading voice in the field of neuroethics:
“Once we cross these barriers, we are doing something that has never before been done in human history, which is to get information directly from the brain. I don’t have a problem with sticking this helmet on the head of a pilot to allow him to send commands on a plane. The problem comes when you try to get detailed information about what someone is either thinking or saying nonverbally. That’s something else altogether. The skull should remain a realm of absolute privacy. If the right to privacy means anything, it means the right to the contents of my thoughts.”
Schmeisser says he has been reflecting on this kind of concern “from the beginning.” He dismisses the most extreme type of worry out of hand. “The very nature of the technology and of the human brain,” he maintains, “would prevent any Big Brother type of use.” Even the most sophisticated existing speech-recognition programs achieve only 95 percent accuracy, and that is after being calibrated and trained by a user to compensate for accent, intonation, and phrasing. Brain waves are “much harder” to get right, Schmeisser notes, because every brain is anatomically different and uniquely shaped by experience.
Merely calibrating a program to recognize a simple sentence from brain waves would take hours. “If your thoughts wander for just an instant, the computer is completely lost,” Schmeisser says. “So the method is completely ethical. There is no way to coerce users into training the machine if they don’t want to. Any attempt to apply coercion will result in more brain wave disorganization, from stress if nothing else, and produce even worse computer performance.”
Despite the easy analogies, synthetic telepathy bears little resemblance to mystical notions of mind reading and mind control. The bottom line, Schmeisser insists, “is that I see no risks whatsoever. Only benefits.”
Nor does he feel any unease that his funding comes from a military agency eager to put synthetic telepathy to use on the battlefield. The way he sees it, the potential payoff is simply too great.
“This project is attempting to make the scientific breakthrough that will have application for many things,” Schmeisser says. “If we can get at the black box we call the brain with the reduced dimensionality of speech, then we will have made a beginning to solving fundamental challenges in understanding how the brain works and, with that, of understanding individuality.”