A research project funded by Facebook is exploring the use of AI to create a non-invasive brain-computer interface.
Researchers at the University of California, San Francisco have developed a device that they claim can translate brain activity into speech. The device used machine-learning algorithms to process data from electrodes placed on patients' brains as they answered a set of questions, and then attempted to identify both the question and the answer.
The algorithm identified the question correctly 75% of the time and the answer 61% of the time. The result offers hope to people living with conditions such as speech paralysis, suggesting that the neural activity underlying speech could one day be read and used to reconstruct speech.
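To make the decoding step concrete, here is a minimal sketch of the general technique: training a classifier to predict which of a fixed set of questions a trial corresponds to, using feature vectors derived from neural recordings. Everything in it is an assumption for illustration. The synthetic features, the feature dimension, the number of question classes, and the choice of logistic regression stand in for the study's actual recordings and model, which the article does not detail.

```python
# Minimal sketch of decoding a question class from neural feature vectors.
# All data is synthetic; dimensions and class counts are assumptions,
# not figures from the study.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_questions = 9    # assumed number of distinct questions
n_trials = 300     # assumed number of recorded trials
n_features = 64    # assumed feature dimension (e.g. band power per electrode)

# Synthetic stand-in for features extracted from electrode recordings:
# each question class gets its own mean activity pattern plus noise.
class_means = rng.normal(size=(n_questions, n_features))
labels = rng.integers(0, n_questions, size=n_trials)
features = class_means[labels] + rng.normal(scale=1.5, size=(n_trials, n_features))

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0
)

# A simple linear classifier stands in for the study's decoding model.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"question classification accuracy: {clf.score(X_test, y_test):.0%}")
```

Under these assumptions, chance performance for nine question classes would be roughly 11%, which is the kind of baseline against which reported accuracies like 75% and 61% are typically judged.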
The electrodes in this study were already in place as part of a procedure to diagnose the cause of severe epileptic seizures, but invasive devices like this are not practical for general use. Facebook hopes to refine the technology to work with electrodes placed on the scalp instead, creating a non-invasive brain-computer interface.
This interface concept is similar to work by Northern Irish firm NeuroCONCISE, which has won several awards for its own AI-based brain-computer interface. Its wearable device has been used to assess brain function in patients in vegetative states and to let them control computer systems, allowing them to communicate with loved ones again. The new research differs in that it attempts to reconstruct speech directly.
Source: BBC News