We strive to build an interdisciplinary team working in the domain of health, using deep learning, NLP, and neuroscience. Our core values are:

  • Experimental:  We value scientific rigor, grounding our research in sound scientific principles and conducting careful experiments that yield definitive, repeatable findings.
  • Computational:  We approach science through algorithms, mathematical models, a strong theoretical background, and strong coding skills.

Lab Entry

We welcome students from all disciplines, provided they have a strong passion for research.  All Master's/Ph.D. students are required to publish at top-tier conferences/journals:  HCI (CHI, UIST),  NLP (ACL, EMNLP),  Brain (Neurocomputing, Journal of Neural Engineering).


Research Topics

Our topics mostly revolve around the intersection of health, language, neuroscience, and deep learning.

1. Non-invasive glucose monitoring via Raman spectroscopy - measuring blood glucose without skin puncture using Raman spectroscopy. The ultimate goal is to develop a reliable, practical wearable for measuring glucose.

2. BCI for spellers, motor imagery, and diagnosis - exploiting EEG for BCI spellers for locked-in patients, motor imagery for people with motor disabilities, and diagnosis of diseases such as epileptic seizures and Alzheimer's disease.

3. Research writing assistant - utilizing traditional NLU and modern NLP to develop a research writing assistant (similar to Grammarly, but for research writing).

4. Few-shot explainable neuroimaging - assisting medical doctors by providing explainable diagnoses via deep learning, few-shot learning, explainable AI, and medical question answering.

5. Real-time emotion/cognition recognition via multimodal sensors - utilizing multimodal sensors such as EEG, ECG, EOG, and cameras for real-time emotion/cognition recognition.  The ultimate goal is to develop tools that empower human activity.

Focus Area

Although these topics differ, our lab views them through these shared research challenges:

  • Few-shot learning - building models that can learn quickly from limited data.  Common approaches include transfer learning, meta-learning, and multi-task learning.
  • Reinforcement learning - using reinforcement learning for more effective training.  In backpropagation-based learning, one has to define a differentiable loss function, but a model can also be optimized through behavioral rewards.  For example, in text generation, the ROUGE score is non-differentiable, so it may be wise to optimize it with reinforcement learning instead.
  • Cross-modal learning - learning alignments or mapping functions between different modalities, for example converting fMRI images to face images, EEG signals to stimulus images, or text to images, and vice versa.  Such models typically rely on generative adversarial networks or diffusion models.
  • Explainable AI - understanding where knowledge resides in a neural network.  The question is partly scientific, but it also has practical stakes: when a model makes mistakes, we want to know how to fix it, and understanding why a network suggests a particular decision can help practitioners in domains such as medicine or business trust and act on it.
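
The few-shot idea above can be sketched with a prototypical-network-style classifier: average a handful of labeled support embeddings into per-class prototypes, then label each query by its nearest prototype. This is a minimal NumPy sketch on toy 2-D "embeddings", not any of our lab's actual models:

```python
import numpy as np

def prototypes(support_emb, support_lab):
    """Compute one mean embedding (prototype) per class."""
    classes = np.unique(support_lab)
    protos = np.stack([support_emb[support_lab == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(query_emb, classes, protos):
    """Assign each query to the class of its nearest prototype (Euclidean)."""
    d = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]

# Toy 2-way 2-shot episode: class 0 clusters near (0,0), class 1 near (5,5).
support = np.array([[0.1, 0.0], [0.0, 0.2], [5.1, 4.9], [4.8, 5.2]])
labels = np.array([0, 0, 1, 1])
classes, protos = prototypes(support, labels)

queries = np.array([[0.2, 0.1], [5.0, 5.0]])
pred = classify(queries, classes, protos)
print(pred)  # [0 1]
```

In a real few-shot pipeline the embeddings would come from a network meta-trained over many such episodes; the nearest-prototype step stays exactly this simple.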
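
The reinforcement-learning point above - optimizing a non-differentiable score - can be illustrated with the REINFORCE gradient estimator on a toy two-action policy. The `reward` function below is a hypothetical stand-in for a black-box metric such as ROUGE:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reward(action):
    # Stand-in for a non-differentiable score (e.g. ROUGE): a black box
    # that only returns a number, never a gradient.
    return 1.0 if action == 1 else 0.0

theta = np.zeros(2)  # logits of a 2-action policy
lr = 0.5
for _ in range(200):
    p = softmax(theta)
    a = rng.choice(2, p=p)
    # REINFORCE: grad of log pi(a) is one_hot(a) - p; scale it by the reward.
    grad = (np.eye(2)[a] - p) * reward(a)
    theta += lr * grad

print(softmax(theta))  # probability mass shifts toward the rewarded action
```

No derivative of `reward` is ever taken - the policy gradient needs only sampled actions and their scores, which is exactly why this trick works for metrics like ROUGE.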
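
For cross-modal learning, a GAN or diffusion model is the realistic tool, but the underlying idea of a mapping function between modalities can be shown in its simplest form: a least-squares linear map fitted on paired embeddings. All data below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Paired embeddings from two modalities (think: EEG features and image
# features for the same stimuli). Here modality B is secretly a linear
# transform of modality A plus a little noise.
A = rng.normal(size=(100, 4))
true_map = rng.normal(size=(4, 3))
B = A @ true_map + 0.01 * rng.normal(size=(100, 3))

# Least-squares estimate of the A -> B mapping: the simplest alignment model.
W, *_ = np.linalg.lstsq(A, B, rcond=None)

# The learned map should translate an unseen A embedding into B space.
a_new = rng.normal(size=(1, 4))
err = np.linalg.norm(a_new @ W - a_new @ true_map)
print(err)  # small: the mapping generalizes to new inputs
```

Generative models replace this linear map with a deep network (and an adversarial or denoising objective), but the task is the same: learn a function from one modality's representation space to another's.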
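
One common probe for the explainability question above is gradient saliency: the gradient of a network's output score with respect to its input ranks which input features drive the decision. The tiny fixed two-layer network below is illustrative only:

```python
import numpy as np

# Tiny fixed 2-layer network: h = relu(W1 @ x), score = w2 . h
W1 = np.array([[2.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])
w2 = np.array([1.0, 0.5])

def score(x):
    return w2 @ np.maximum(W1 @ x, 0.0)

def saliency(x):
    """|d(score)/dx|: how much each input feature influences the score."""
    mask = (W1 @ x > 0).astype(float)  # ReLU gate: 1 where the unit is active
    return np.abs((w2 * mask) @ W1)    # chain rule back to the input

x = np.array([1.0, 1.0, 1.0])
print(saliency(x))  # [2.  0.5 0. ] - feature 2 never reaches the score
```

A saliency of zero for the third feature tells us the network ignores it entirely - the kind of evidence a medical practitioner would want before trusting a model's diagnosis.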