University of Bremen: Breakthrough Research: Speech by Imagining

Great research successes require international collaboration: the Cognitive Systems Lab (CSL) at the University of Bremen, the Department of Neurosurgery at Maastricht University in the Netherlands, and the ASPEN Lab at Virginia Commonwealth University (USA) have been working on a neural speech prosthesis for several years. The goal: to convert speech-related neural processes in the brain directly into audible speech. This goal has now been achieved: “We have succeeded in making our test subjects hear themselves speak, even though they only imagine speaking,” says a delighted Professor Tanja Schultz, head of the CSL. “The brainwave signals of volunteers who imagine themselves speaking are converted directly into audible output by our speech neuroprosthesis – in real time, with no noticeable delay!”

The innovative neural speech prosthesis is based on a closed-loop system that combines technologies from modern speech synthesis with brain-computer interfaces. The system was developed by Miguel Angrick at the CSL. As input, it receives the neural signals of users who imagine themselves speaking. Using machine-learning methods, it transforms these signals into speech practically simultaneously and plays the result back audibly as feedback to the users. “This closes the loop for them between imagining speech and hearing their own speech,” says Angrick.
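The article does not publish any code, so the following is only an illustrative sketch of what such a closed-loop pipeline looks like in principle: neural frames arrive one at a time, a decoder (here a stand-in linear map, not the actual model from the study) converts each frame into audio samples, and the audio would be played back to the user immediately. All names, channel counts, and frame sizes are assumptions.

```python
import numpy as np

N_CHANNELS = 64      # assumed number of intracranial electrode channels
FRAME_SAMPLES = 160  # assumed audio samples synthesized per neural frame

rng = np.random.default_rng(0)
# A random linear map stands in for the trained machine-learning decoder.
decoder_weights = rng.standard_normal((FRAME_SAMPLES, N_CHANNELS)) * 0.01

def decode_frame(neural_frame):
    """Map one frame of neural features to one frame of audio samples."""
    return decoder_weights @ neural_frame

def closed_loop(neural_stream):
    """Process neural frames one by one, as a real-time loop would,
    collecting the synthesized audio that would be played back."""
    audio = []
    for frame in neural_stream:
        samples = decode_frame(frame)  # synthesis step
        # In the real system, these samples would be sent to the speakers
        # here, closing the loop from imagined speech to audible feedback.
        audio.append(samples)
    return np.concatenate(audio)

# Simulate a short stream of 50 neural frames and run the loop.
stream = rng.standard_normal((50, N_CHANNELS))
out = closed_loop(stream)
print(out.shape)  # one contiguous buffer of synthesized samples
```

The key design point the quote describes is the per-frame feedback: each frame is synthesized and output before the next one arrives, rather than decoding a whole utterance offline.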

Study with volunteer epilepsy patient
The work, published in Nature Communications Biology, is based on a study with a volunteer epilepsy patient who had been implanted with deep electrodes for medical examination and was in hospital for clinical monitoring. In a first step, the patient read texts aloud, from which the closed-loop system learned the correspondence between speech and neural activity using machine-learning methods. “In a second step, this learning process was repeated with whispered and with imagined speech,” explains Miguel Angrick. “The closed-loop system generated synthesized speech. Although the system had learned the correspondence exclusively on audible speech, an audible output is also generated for whispered and imagined speech.” This suggests that the underlying speech processes in the brain are comparable for audible, whispered, and imagined speech.
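The two-step procedure above can be sketched as follows. This is a hedged toy example, not the study's actual method: ordinary least squares stands in for the machine-learning model, the data are synthetic, and all dimensions are assumptions. The point is the structure: fit a neural-to-acoustic mapping on paired recordings of audible speech, then apply it to new neural data (e.g. from whispered speech) for which no audio exists.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_channels, n_audio_feats = 500, 64, 20  # assumed dimensions

# Step 1 training data: neural features recorded while the patient reads
# aloud, paired with acoustic features of that audible speech. Here the
# pairing is simulated via a hidden linear relationship plus noise.
true_map = rng.standard_normal((n_channels, n_audio_feats))
neural_train = rng.standard_normal((n_frames, n_channels))
audio_train = neural_train @ true_map + 0.01 * rng.standard_normal(
    (n_frames, n_audio_feats)
)

# Fit the neural-to-acoustic mapping (ordinary least squares as a stand-in
# for the actual machine-learning method).
learned_map, *_ = np.linalg.lstsq(neural_train, audio_train, rcond=None)

# Step 2: apply the trained mapping to unseen neural activity, e.g. frames
# recorded during whispered or imagined speech, to synthesize acoustics.
neural_test = rng.standard_normal((10, n_channels))
audio_pred = neural_test @ learned_map
print(audio_pred.shape)
```

Because the mapping is learned only from audible speech, generating sensible output from whispered or imagined neural activity is exactly the generalization the study reports.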

Important role of the Bremen Cognitive Systems Lab
“Speech neuroprosthetics aims to offer people who cannot speak due to physical or neurological impairments a natural channel of communication,” says Professor Tanja Schultz, explaining the background to the intensive research activities in this field, in which the Cognitive Systems Lab at the University of Bremen plays an important role worldwide. “The real-time synthesis of acoustic speech directly from measured neural activity could enable natural conversations and significantly improve the quality of life of people whose communication options are severely limited.”

The groundbreaking innovation is the result of a long-term cooperation financed jointly by the German Federal Ministry of Education and Research (BMBF) and the US National Science Foundation (NSF) as part of the research program “Multilateral Cooperation in Computational Neuroscience”. This cooperation with Professor Dean Krusienski (ASPEN Lab, Virginia Commonwealth University) was initiated together with former CSL employee Dr. Christian Herff as part of the successful RESPONSE (REvealing SPONtaneous Speech processes in Electrocorticography) project. It is currently being continued with CSL employee Miguel Angrick in the ADSPEED (ADaptive Low-Latency SPEEch Decoding and synthesis using intracranial signals) project. Dr. Christian Herff is now an assistant professor at Maastricht University.
