We strive to build an interdisciplinary team working at the intersection of neuroscience, machine learning, and human-computer interaction. Neuroscientists, computer scientists, machine learning researchers, and software engineers are all welcome.
We value scientific rigor: grounding our research on solid scientific foundations and conducting sound experiments that yield definitive, repeatable findings. Our approach is to use algorithms, mathematical models, strong theoretical backgrounds, and strong coding skills to advance the field of brain-computer interfaces.
Project 1: Neuroscience + DL
We work on the applied aspects of brain-computer interfaces (BCIs), particularly spellers and emotion recognition, as well as computational aspects, namely deep learning and signal filtering. Our goals are to increase the speed and accuracy of BCIs, to build subject-independent models, and to improve their practicality and reliability for real-life use. On the application side:
- Real-time BCI speller - our lab focuses on spellers for locked-in patients (paralyzed but with an intact mind). BCI spellers have been studied for over two decades (since the early 1990s) and remain one of the key research areas in BCI due to fundamental challenges shared with other BCI applications.
- Real-time emotion/cognition recognition - another application we focus on is real-time recognition of users' cognitive/emotional states, e.g., stress, depression, engagement, cognitive load, and pain. Over the last decade, we have been very successful in understanding emotion/cognition by analyzing users' neural oscillations (alpha, beta, gamma, etc.). Nevertheless, we seek to integrate different physiological modalities (e.g., EEG, heart rate, facial expressions) to yield an even more granular, accurate real-time system.
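To make the oscillation-based analysis above concrete, here is a minimal sketch (not our actual pipeline) that estimates alpha/beta/gamma band power from a synthetic signal using Welch's method; the sampling rate, band edges, and signal are all illustrative assumptions:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Approximate band power: integrate the Welch PSD over [lo, hi] Hz."""
    freqs, psd = welch(x, fs=fs, nperseg=fs)   # 1 Hz frequency resolution
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

# Synthetic "EEG": a dominant 10 Hz alpha rhythm buried in noise.
fs = 256
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)

bands = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
powers = {name: band_power(eeg, fs, lo, hi) for name, (lo, hi) in bands.items()}
print(powers)  # alpha dominates for this synthetic signal
```

In a real system, features like these would be computed on sliding windows and fed to a classifier in real time.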
For each application, we drill down further; we are specifically interested in the following perspectives:
- Transfer Learning - a method of exploiting information from existing subjects to infer the statistical distributions of unseen subjects. A key challenge in BCI is subject variability and the lengthy calibration (training) process; transfer learning is one promising approach toward zero-training BCIs.
- Generative Adversarial Networks (GANs) - a method that trains a generator against a discriminator to model complex data distributions and generate similar data. GANs have proven useful for artifact removal, improved classification, transfer learning, and the reconstruction of EEG.
- Unsupervised Learning (e.g., Autoencoders) - a method of deriving structure from unlabeled data. It is particularly useful for classification in BCI since it eliminates the need for labeled training data, addresses the challenge of subject variability, and is fast enough for real-time applications.
- Artifact Removal (e.g., Blind Source Separation) - to truly use BCI in daily life, users inevitably produce artifacts, including those from large muscle movements, head movements, and speech. Thus, effective pre-processing techniques are needed. We are exploring contemporary techniques such as deep learning, autoencoders, Riemannian geometry, and a class of techniques called blind source separation.
- Interface Design - the design of the interface directly impacts BCI performance, not merely because of usability but because of the brain signals it evokes. An effective user interface allows the BCI to record strong, clean evoked signals, which are then fed into machine learning algorithms. Hence, rigorous experiments are needed to understand the different design parameters. For example, in a BCI speller, one may ask how the letters should be colored, shaped, or presented - all of which, surprisingly, affect the evoked brain signals.
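The transfer-learning idea above can be sketched as a toy warm-start experiment: pretrain a classifier on pooled "source subject" trials, then fine-tune on a handful of calibration trials from a new subject. Everything here - the Gaussian data, the logistic-regression model, and the distribution shift - is a synthetic placeholder, not an actual BCI pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

def make_subject(shift, n):
    """Two Gaussian classes in 4-D; `shift` mimics subject-specific drift.
    A constant column of ones is appended as a bias feature."""
    X0 = rng.normal(-1 + shift, 1.0, (n // 2, 4))
    X1 = rng.normal(+1 + shift, 1.0, (n // 2, 4))
    X = np.vstack([X0, X1])
    y = np.r_[np.zeros(n // 2), np.ones(n // 2)]
    return np.hstack([X, np.ones((n, 1))]), y

def train_logreg(X, y, w=None, lr=0.1, epochs=200):
    """Batch gradient descent for logistic regression; `w` allows warm starts."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-np.clip(X @ w, -30, 30)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# 1) Pretrain on pooled trials from existing "source" subjects.
X_src, y_src = make_subject(shift=0.0, n=600)
w = train_logreg(X_src, y_src)

# 2) Fine-tune on a handful of calibration trials from a new subject.
X_cal, y_cal = make_subject(shift=0.5, n=20)
w = train_logreg(X_cal, y_cal, w=w, lr=0.05, epochs=50)

# 3) Evaluate on held-out trials from the same new subject.
X_te, y_te = make_subject(shift=0.5, n=400)
acc = np.mean(((X_te @ w) > 0) == y_te)
print(f"accuracy on the unseen subject: {acc:.2f}")
```

The warm start is the key move: the 20 calibration trials would be far too few to train from scratch, but they are enough to nudge a pretrained model toward the new subject's distribution.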
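Similarly, the compress-and-reconstruct structure mentioned under unsupervised learning can be shown with a minimal pure-NumPy autoencoder; the synthetic data lying near a low-dimensional subspace is an illustrative stand-in for EEG features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8-D observations generated from a hidden 2-D subspace plus noise.
latent = rng.standard_normal((300, 2))
mix = rng.standard_normal((2, 8))
X = latent @ mix + 0.05 * rng.standard_normal((300, 8))

# One-hidden-layer autoencoder: 8 -> 2 -> 8, tanh encoder, linear decoder.
W1 = 0.1 * rng.standard_normal((8, 2)); b1 = np.zeros(2)
W2 = 0.1 * rng.standard_normal((2, 8)); b2 = np.zeros(8)

lr, n = 0.05, len(X)
losses = []
for _ in range(1000):
    H = np.tanh(X @ W1 + b1)        # encoder: compress to 2 dims
    R = H @ W2 + b2                 # decoder: reconstruct the input
    err = R - X
    losses.append(np.mean(err ** 2))
    # Backpropagate the reconstruction error through both layers.
    gW2 = H.T @ err / n
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)
    gW1 = X.T @ dH / n
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"reconstruction MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The 2-D bottleneck activations `H` are the learned unsupervised features; no labels were used anywhere in training.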
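And for blind source separation, the classic illustration is unmixing linearly mixed sources with ICA. This sketch uses scikit-learn's FastICA on two invented sources (a neural-like rhythm and a blink-like artifact); the mixing matrix and signals are purely illustrative:

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)

# Two hidden "sources": a 10 Hz rhythm and a sparse blink-like artifact.
s_neural = np.sin(2 * np.pi * 10 * t)
s_blink = (np.sin(2 * np.pi * 0.5 * t) > 0.95) * 3.0
S = np.c_[s_neural, s_blink]

# Linear mixing mimics how sources superimpose at scalp electrodes.
A = np.array([[1.0, 0.8],
              [0.6, 1.0]])
X = S @ A.T  # observed "channels": 2000 samples x 2 electrodes

# Unmix with FastICA; in practice the artifact component would be
# identified and zeroed out before reconstructing the channels.
ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)
print(S_hat.shape)  # (2000, 2)
```

Real EEG involves many more channels and nonstationary mixing, which is why we also explore deep-learning and Riemannian alternatives.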
Project 2: Deep Learning
It is no surprise that our lab has strongly overlapping skills and a passion for deep learning. Thus, we have talented students working on domains outside the brain that share a similar problem space, such as deep learning applied to finance (e.g., the Langevin equation, geometric Brownian motion) and NLP (e.g., Transformers), as well as theoretical deep learning.
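As one hedged illustration of the finance direction, geometric Brownian motion (GBM) models a price S_t satisfying dS = μS dt + σS dW_t, with exact solution S_t = S_0 exp((μ − σ²/2)t + σW_t); the sketch below simulates sample paths with arbitrary placeholder parameters:

```python
import numpy as np

def simulate_gbm(s0, mu, sigma, T, steps, n_paths, seed=0):
    """Exact-discretization GBM paths: S_t = s0 * exp((mu - sigma^2/2) t + sigma W_t)."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    dW = rng.standard_normal((n_paths, steps)) * np.sqrt(dt)
    log_increments = (mu - 0.5 * sigma**2) * dt + sigma * dW
    log_paths = np.cumsum(log_increments, axis=1)
    # Prepend t = 0 so every path starts at s0.
    return s0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))

paths = simulate_gbm(s0=100.0, mu=0.05, sigma=0.2, T=1.0, steps=252, n_paths=1000)
print(paths.shape)          # (1000, 253)
print(paths[:, -1].mean())  # close to 100 * exp(0.05) ≈ 105.1 in expectation
```

Because the log-increments are sampled exactly, this scheme has no discretization bias, unlike a naive Euler step on the price itself.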
Here are some of the projects we are working on:
- Natural Language Processing: Using contemporary models for text processing and prediction, e.g., using news information to predict financial movements or infection cases, and performing topic modeling and sentiment analysis
- Computer Vision: Developing various computer vision applications for agriculture, health, geography, etc.
- Image and Text Generation: Using GANs and their variants to generate text and images
- Style transfer: Using GANs and their variants to transfer style from one domain to another, for example, converting images to icons
- Image to Text, Video to Text: Using seq2seq models or similar to transform images or videos into text, and of course, vice versa.
- Deep Learning Theories: Understanding how to develop a generalized, robust, and efficient neural network.
- Neural Architecture Search: Finding optimal micro- and macro-architectures for neural networks