Introducing AI: Bridging Gaps in Healthcare
The integration of Artificial Intelligence (AI) in healthcare has become a topic of intense interest in recent years. Its applications are vast and varied, and for many patients it offers a genuine ray of hope. One area where AI is making a significant impact is the treatment of aphasia, a condition that impairs the ability to speak, including in people who have suffered traumatic brain injuries or strokes.
AI and Brain Mapping: A New Voice for Aphasia Patients
Edward Chang, MD, a specialist in advanced brain mapping methods at the University of California, San Francisco (UCSF) Weill Institute for Neurosciences, leads a team that utilises AI to give a new voice to patients who have lost the ability to speak. The technology uses AI to interpret the brain signals associated with speech-related sensory and motor processes. This radical use of AI in healthcare aims to “restore people to who they are”, according to Dr Chang. With its help, patients can once again express themselves and communicate fully, assisted by digital avatars and personalised voices.
The Functionality of AI in Speech Restoration
But how does this technology work? The process begins with an electrode array resting on the surface of the brain, connected to a port that streams brain activity to a computer. The computer then uses AI to translate that activity into the specific words or speech sounds the patient intends. The AI also controls a digital avatar, a visual representation of the patient, which mimics the facial movements associated with speech. Engineering is regarded as the most vital part of this technology, and a thorough command of machine learning (ML) is the secret to its success: translating brainwave data into words using recurrent neural networks is where Dr Chang’s team spends most of its research time.
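The decoding step described above can be sketched in highly simplified form: a recurrent network consumes a sequence of neural feature vectors (one per time step) and emits a probability distribution over speech sounds at each step. The sketch below is an illustrative toy, not the team’s actual pipeline; the channel count, hidden size, phoneme inventory, and random weights are all invented stand-ins for a trained decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (invented for illustration, not the real system's)
N_FEATURES = 16   # channels of neural activity per time step
N_HIDDEN = 32     # size of the recurrent state
N_PHONEMES = 8    # size of the toy speech-sound inventory

# Random weights stand in for a decoder learned from training data
W_in = rng.normal(scale=0.1, size=(N_HIDDEN, N_FEATURES))
W_rec = rng.normal(scale=0.1, size=(N_HIDDEN, N_HIDDEN))
W_out = rng.normal(scale=0.1, size=(N_PHONEMES, N_HIDDEN))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def decode(neural_frames):
    """Run a simple RNN over a (time, features) array of neural
    activity; return one phoneme probability vector per frame."""
    h = np.zeros(N_HIDDEN)
    probs = []
    for frame in neural_frames:
        h = np.tanh(W_in @ frame + W_rec @ h)  # update recurrent state
        probs.append(softmax(W_out @ h))       # classify this frame
    return np.array(probs)

# Simulated recording: 50 time steps of 16-channel activity
frames = rng.normal(size=(50, N_FEATURES))
probs = decode(frames)
print(probs.shape)  # one distribution per time step: (50, 8)
```

In a real system the per-frame distributions would feed a language model that assembles likely words, and the same hidden state could drive the avatar’s facial animation.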
Results: AI and Healthcare Advance
The results of this technology have been promising. The most recent participant in the study achieved a speech rate of about 70 words per minute (the average rate of normal speech is approximately 150 words per minute), with a vocabulary of over a thousand words. Two other participants had suffered strokes 15 years earlier, leaving them unable to speak and completely paralysed in the arms and legs, which left the team wondering whether the relevant part of the brain was still functioning. What the team learned was that those regions were still intact and, with training, could potentially be restored.
The Future of AI in Healthcare: Wireless Technology and Bilingual Decoders
The possibilities for this technology continue to expand as performance improves and algorithms become more powerful. Efforts are underway to make the device fully wireless, increasing its real-world applicability. The team is also exploring bilingual decoders, which would let patients switch between languages seamlessly and extend the technology to people whose first language is not English.
Ethical Considerations: Ensuring Privacy and Volition
As with any integration of AI technology, ethical considerations are paramount. The team has designed the system to decode only the brain activity the individual intends to express: it reads signals from the region that controls the vocal muscles, not the areas where everyday thoughts occur. The individual’s privacy remains protected, and they own the data that is generated.
Conclusion
The integration of AI in healthcare marks a significant milestone, especially for patients with aphasia. This technology not only restores their ability to speak but also enables them to express themselves fully. Further advancements could personalise these tools to train on a patient’s own voice and expressions through their avatar. AI-enhanced speech restoration does more than restore patients’ voices. It gives back a piece of their humanity.