Projects
Neural Ultrasound Implant (MUSIC)
Salvaging recoverable tissue following an acute spinal cord injury (SCI) remains a challenging aspect of patient care, primarily due to limited therapeutic options and the difficulty of continuously monitoring the physiological changes that contribute to deficits [1]. In response, and as a step toward a neural tissue-sparing human–computer interface, we have developed MUSIC, a multifunctional ultrasound-based platform that offers a comprehensive approach to SCI care and multi-modality functional imaging of the spinal cord. This thin, flexible device is designed for placement along the spinal cord following standard decompression surgery, in which bone is removed to alleviate pressure.
Focused Ultrasound Therapy
Imagine harnessing the power of sound waves to precisely target and treat areas of the body without ever making an incision. Focused Ultrasound (FUS) is revolutionizing medicine by offering a non-invasive alternative to surgery—no radiation, fewer complications, and faster recovery. We’ve taken it further with the AMPLITUDE characterization platform [2], pushing the boundaries of what’s possible. In addition, we’ve pioneered an in vitro neuromodulation platform [3] and conducted MRI-guided FUS experiments in rats, alongside ultrasound-guided trials in pigs.
Robot/Computer-Assisted Surgery (Vision Sensing, Digital Twins, and Augmented Reality (AR))
With the growing adoption of sensors in operating rooms, surgeons are now better equipped to make precise, agile decisions during procedures. This is particularly crucial in neurosurgeries, where the level of accuracy required, especially for neural implants, makes these advanced tools not just beneficial, but essential. The integration of such technology ensures safer outcomes and helps navigate the complexities of modern surgery with greater confidence.
Spontaneous and Natural Conversation Agent (Speech AI)
Spontaneous and natural conversations are essential for more human-like interactions with AI. Our project focuses on developing advanced speech AI that can understand and participate in real, casual conversations. By training AI on natural dialogue, we aim to create systems that are more intuitive, engaging, and capable of meaningful interactions across various contexts. Stay tuned for more updates; we are aiming for an Interspeech submission in 2025.
Multi-modal Deep Learning
My research on multi-modal deep learning integrates diverse data types to enhance AI systems’ ability to understand and respond to complex, real-world environments. In robot-assisted surgery, 3D vision sensing plays a crucial role by providing precise, real-time visual feedback, improving surgical accuracy and outcomes. Similarly, speech AI leverages deep learning to model natural language interactions, enabling AI systems to engage in spontaneous and intuitive conversations with users. Another innovative area combines visual and semantic information with fMRI data, bridging the gap between visual perception and neural activity. Together, these fields demonstrate the power of multi-modal learning in advancing technology across vision, speech and audio, natural language, and cognitive neuroscience.
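To illustrate the general idea of combining modalities (this is a minimal late-fusion sketch with made-up dimensions and random stand-in data, not any of the actual models above): each modality is embedded separately into a shared-size space, and the embeddings are concatenated into one joint representation for downstream tasks.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, w):
    """Project a raw feature vector into an embedding space (linear + tanh stub)."""
    return np.tanh(w @ x)

# Hypothetical feature sizes: 3D-vision features, audio features, fMRI voxels.
d_vision, d_audio, d_fmri, d_emb = 128, 64, 256, 32

# Random projection weights stand in for learned encoders.
w_vision = rng.normal(size=(d_emb, d_vision)) * 0.1
w_audio = rng.normal(size=(d_emb, d_audio)) * 0.1
w_fmri = rng.normal(size=(d_emb, d_fmri)) * 0.1

# One sample per modality (random placeholders for real data).
vision = rng.normal(size=d_vision)
audio = rng.normal(size=d_audio)
fmri = rng.normal(size=d_fmri)

# Late fusion: embed each modality, then concatenate into one joint vector.
fused = np.concatenate([embed(vision, w_vision),
                        embed(audio, w_audio),
                        embed(fmri, w_fmri)])
print(fused.shape)  # (96,) — a 3 * 32-dim joint representation
```

In practice the per-modality encoders would be deep networks trained jointly, and fusion can also happen earlier (feature-level) or via cross-attention, but the concatenate-then-predict pattern is the simplest starting point.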
Chat with me
Jan 2023: I have set up an online coffee time. Welcome to chat with me!
1. Lorach, Henri, et al. "Walking naturally after spinal cord injury using a brain–spine interface." Nature 618.7963 (2023): 126–133.
2. Liang, Ruixing, et al. "Designing an Accurate Benchtop Characterization Device: An Acoustic Measurement Platform for Localizing and Implementing Therapeutic Ultrasound Devices and Equipment (Amplitude)." Frontiers in Biomedical Devices. Vol. 84815. American Society of Mechanical Engineers, 2022.
3. Liang, Ruixing, et al. "Focused Ultrasound Neuromodulation of Human In Vitro Neural Cultures in Multi-Well Microelectrode Arrays." JoVE (Journal of Visualized Experiments) 207 (2024): e65115.