The work at the University of California, Berkeley opens up the possibility of a system that can transcribe the imagined speech of such patients who cannot talk.
The team enlisted 15 patients who were already due to undergo open brain surgery. They placed up to 256 electrodes on the surface of the temporal lobe – the seat of the auditory system.
Patients were then played 5-10-minute recordings of conversation. Using the electrode data captured during this time, the team was able to reconstruct and play back the sounds the patients had heard.
This was possible because there is evidence that the brain breaks sound down into its component acoustic frequencies – spanning roughly a low of 1 hertz (one cycle per second) to a high of 8,000 hertz – the range that matters for speech sounds.
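The idea of breaking a sound into its component frequencies can be illustrated with a Fourier transform. The sketch below is a minimal illustration, not the study's actual analysis pipeline; the sample rate and the synthetic two-tone "speech" signal are assumptions for the example.

```python
import numpy as np

fs = 16_000                       # sample rate in Hz (assumed for this sketch)
t = np.arange(fs) / fs            # one second of audio
# Synthetic stand-in for speech: two tones, at 200 Hz and 3,000 Hz.
sound = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)

# Break the sound into its component frequencies with a Fourier transform.
freqs = np.fft.rfftfreq(len(sound), d=1 / fs)
power = np.abs(np.fft.rfft(sound)) ** 2

# The two strongest components recover the tones we put in.
top = sorted(freqs[np.argsort(power)[-2:]])
print(top)                        # → [200.0, 3000.0]
```

A real speech spectrogram would apply this transform over short sliding windows, giving frequency content over time rather than for the whole clip at once.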
The team then tested two different computational models to match spoken sounds to the pattern of activity in the electrodes. Patients then heard a single word, and the models were able to predict the word from the electrode recordings.
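One simple way such a decoder can work is nearest-template matching: store the average electrode pattern evoked by each candidate word, then pick the stored pattern that best correlates with a new recording. This is a hedged sketch of that general idea, not the team's actual models; the word list and the random "electrode" vectors are invented stand-ins for real neural data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-word electrode templates (stand-ins for neural data).
templates = {
    "structure": rng.normal(size=64),
    "doubt":     rng.normal(size=64),
    "property":  rng.normal(size=64),
}

def predict_word(activity):
    """Return the word whose stored template correlates best with the
    observed electrode activity (a simple nearest-template decoder)."""
    return max(templates,
               key=lambda w: np.corrcoef(activity, templates[w])[0, 1])

# A noisy recording of "doubt" should still decode as "doubt".
heard = templates["doubt"] + 0.3 * rng.normal(size=64)
print(predict_word(heard))        # → doubt
```

The models in the study were considerably more sophisticated, reconstructing a spectrogram of the heard sound rather than classifying among a fixed word list, but the matching principle is similar.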
‘With neuroprosthetics, people have shown that it’s possible to control movement with brain activity,’ said project collaborator Dr Robert Knight of UC Berkeley. ‘But that work, while not easy, is relatively simple compared to reconstructing language. This experiment takes that earlier work to a whole new level.’