Scientists find how brain encodes speech, moving closer to a brain-speech machine interface

Source: Xinhua| 2018-09-27 02:34:47|Editor: yan

WASHINGTON, Sept. 26 (Xinhua) -- American researchers have unlocked new information about how the brain encodes speech, moving closer to developing a brain-machine interface for speech that can decode the commands the brain is sending to the tongue, palate, lips and larynx.

The study published on Wednesday in the Journal of Neuroscience revealed that the brain controls speech production in a similar manner to how it controls the production of arm and hand movements.

Northwestern University researchers recorded signals from two parts of the brain and decoded what these signals represented.

They found that the brain represented both the goals of what we are trying to say (speech sounds like "pa" and "ba") and the individual movements that we use to achieve those goals (how we move our lips, palate, tongue and larynx). The different representations occur in two different parts of the brain.

The discovery could potentially help people like the late Stephen Hawking communicate more intuitively through an effective brain-machine interface (BMI), as well as people with speech disorders such as apraxia of speech.

"This can help us build better speech decoders for BMIs, which will move us closer to our goal of helping people who are locked in speak again," said the paper's lead author Marc Slutzky, associate professor of neurology and of physiology at Northwestern.

Speech is composed of individual sounds, called phonemes, which are produced by coordinated movements of the lips, tongue, palate and larynx. However, scientists didn't know exactly how these movements, called articulatory gestures, are planned by the brain.

Slutzky and his colleagues found speech motor areas of the brain had a similar organization to arm motor areas of the brain.

Scientists recorded brain signals from the cortical surface using electrodes placed in patients undergoing brain surgery to remove brain tumors, keeping the patients awake during surgery and asking them to read words.

After the surgery, scientists marked the times when the patients produced phonemes and gestures. Then they used the recorded brain signals from each cortical area to decode which phonemes and gestures had been produced, and measured the decoding accuracy.

The brain signals in the precentral cortex were more accurate at decoding gestures than phonemes, while those in the inferior frontal cortex, a higher level speech area, were equally good at decoding both phonemes and gestures, according to the study.
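The decoding step described above can be illustrated with a minimal sketch. The study's actual decoding method is not specified here, so the sketch below assumes a simple nearest-centroid classifier on synthetic "cortical" features; the class labels, channel counts, and separability are all invented for illustration. It shows the general idea of training a decoder on labeled trials and measuring how accurately held-out trials are classified:

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_accuracy(X_train, y_train, X_test, y_test):
    """Nearest-centroid decoder: classify each held-out trial by the
    closest class-mean feature vector, then report the fraction correct."""
    classes = np.unique(y_train)
    centroids = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    predictions = classes[np.argmin(dists, axis=1)]
    return (predictions == y_test).mean()

# Synthetic stand-in for cortical recordings: 4 gesture classes,
# 50 trials per class, 16 recording channels per trial.
n_per_class, n_channels = 50, 16
labels = np.repeat(np.arange(4), n_per_class)
signals = rng.normal(size=(labels.size, n_channels)) + labels[:, None] * 1.5

# Split alternating trials into train and test sets, then decode.
acc = decode_accuracy(signals[::2], labels[::2], signals[1::2], labels[1::2])
```

Comparing such accuracy scores across recording sites is what lets researchers say one cortical area represents gestures better than phonemes.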

This finding helps support linguistic models of speech production and could guide engineers in designing brain-machine interfaces to decode speech from these brain areas.

The next step for the research is to develop an algorithm for brain machine interfaces that would not only decode gestures but also combine those decoded gestures to form words.
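That gesture-to-word step could, at its simplest, amount to mapping decoded articulatory gestures onto phonemes and concatenating them. The mapping below is purely hypothetical (the gesture names and pairings are invented for illustration, not taken from the study), but it sketches the combining stage the researchers describe:

```python
# Hypothetical lookup from decoded articulatory gestures to phonemes.
# E.g. closing the lips with the larynx off vs. on distinguishes "p" from "b".
gesture_to_phoneme = {
    ("lips_close", "larynx_off"): "p",
    ("lips_close", "larynx_on"): "b",
    ("tongue_low", "larynx_on"): "a",
}

def gestures_to_word(gesture_seq):
    """Combine a sequence of decoded gestures into a word by mapping
    each gesture pair to its phoneme and joining them in order."""
    return "".join(gesture_to_phoneme[g] for g in gesture_seq)

# Lips close without voicing, then an open voiced vowel gesture.
word = gestures_to_word([("lips_close", "larynx_off"),
                         ("tongue_low", "larynx_on")])
```

A real BMI algorithm would of course need to handle noisy, probabilistic gesture estimates and a full language model rather than a fixed table.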
