We strive to build an interdisciplinary team working in computational neuroscience.  Our core values are:

  • Experimental:  We value scientific rigor, grounding our research in sound scientific principles and conducting well-designed experiments that yield definitive, repeatable findings.  
  • Computational:  Our work draws on algorithms, mathematical models, a strong theoretical background, and strong coding skills.

Focus Area

Our lab focuses on modeling human brain function using advanced signal/image processing (e.g., blind source separation, Riemannian geometry, wavelet neural networks) and deep learning (e.g., self-supervised learning, transfer learning, GANs, attention mechanisms, graphs, meta-learning). 

Applications
  • Brain signal processing - using EEG/MEG/fNIRS to model brain states and conditions such as epilepsy, stress, depression, engagement, drowsiness, cognitive load, and pain.
  • Brain image processing - using deep learning techniques to model and reconstruct brain images for diagnosis and classification, e.g., of Alzheimer's disease.
  • Real-time BCI speller - a speller driven by EEG alone.  We aim to increase the speed and accuracy of BCIs, build subject-independent models, and improve their practicality and reliability for real-life use.
  • Real-time emotion recognizer - an emotion recognizer driven by EEG alone.  We aim to make it reliable; as with the speller, speed, accuracy, and subject independence are key to practical, reliable use in real life.
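Several of these EEG applications (drowsiness, cognitive load, engagement) build on spectral features.  As a minimal, illustrative sketch - not our lab's actual pipeline - band power in canonical EEG rhythms can be estimated from Welch's power spectral density with NumPy and SciPy; the `band_power` helper and the synthetic signal below are our own assumptions, not real recordings:

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band):
    """Average spectral power of `signal` within `band` (Hz),
    estimated from Welch's power spectral density."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Synthetic 10 Hz "alpha" oscillation plus noise, standing in
# for a single EEG channel (hypothetical data).
fs = 256                          # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)      # 10 seconds of samples
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

alpha = band_power(eeg, fs, (8, 13))    # band containing the 10 Hz peak
beta = band_power(eeg, fs, (13, 30))    # noise floor only
```

Ratios of such band powers (e.g., alpha/beta) are common inputs to drowsiness and cognitive-load classifiers.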

Methods
  • Cross-Modal Learning - exploiting information from different modalities (e.g., text, images, sound) for modeling.  For example, one of our projects focuses on EEG image reconstruction, in which brain signals are decoded back into the visual image the user sees.
  • Transfer Learning - exploiting information from existing subjects to infer the statistical distributions of unseen subjects.  Two key challenges in EEG are subject variability and the lengthy calibration (training) process; transfer learning is one promising approach toward zero-training BCIs.
  • Generative Adversarial Networks - pitting a generator against a discriminator to learn complex data distributions and generate similar data.  GANs have proven useful for artifact removal, improved classification, transfer learning, and EEG reconstruction.
  • Unsupervised Learning (e.g., Autoencoders) - deriving structure from unlabeled data.  It is particularly useful for BCI classification since it reduces the need for labeled calibration data, addresses subject variability, and is fast enough for real-time applications.  
  • Artifact Removal (e.g., Blind Source Separation) - to truly use BCIs in daily life, one must cope with the many artifacts users produce, including large muscle movements, head movements, and speech.  Effective pre-processing techniques are therefore needed.  We are exploring contemporary techniques such as deep learning, autoencoders, Riemannian geometry, and the family of blind source separation methods.
  • Generalized Models - we are interested in using more generic models such as attention mechanisms and graph neural networks to advance EEG modeling, as compared with assumption-based models such as Conv1D, Conv2D, LSTM, and GRU.
  • Domain-specific Models - of course, we are also interested in EEG-specific models, such as wavelet neural networks, for the same tasks.


It is no surprise that some of my students are very interested in applying machine learning/deep learning to real-world problems beyond those related to the brain.  Common topics among students include NLP (e.g., analyzing news/social media), financial analysis (e.g., short-term stock market prediction), and human-computer interaction (e.g., building intelligent systems).