Culture

We strive to build an interdisciplinary team working at the intersection of deep learning, NLP, and computational neuroscience. Our ultimate goal is to model humans. Our core values are:

  • Experimental:  We value scientific rigor, grounding our research in sound, well-designed experiments that yield definitive and repeatable findings.
  • Computational:  We approach problems with algorithms, mathematical models, solid theoretical foundations, and strong coding skills.

Lab Entry

We welcome students from all disciplines, provided they have a strong passion for research.  All Master's/Ph.D. students are required to publish at top-tier conferences/journals:  HCI (CHI, UIST),  NLP (ACL, EMNLP),  CV (CVPR, ICCV),  Brain (Journal of Neural Engineering).

Current Projects

  1. Large language models - contributes to research on how to utilize large language models and improve their efficiency, reasoning, and factuality.
     
  2. Multimodal models - contributes to modeling the relationship between vision and text.
     
  3. BCI speller - contributes to the development of BCI spellers for locked-in patients using EEG paradigms such as P300, SSVEP, hybrid P300-SSVEP, and motor imagery; a minimal detection sketch on synthetic data follows this list.
     
  4. Medical (visual) question answering - contributes to the development of models that answer medical questions, including questions posed over medical images.
     
  5. Legal question answering - contributes to the development of models that answer legal questions, helping ordinary people understand and exercise their legal rights.
     
  6. Informal-formal paraphraser - contributes to the development of models that turn informal text into formal text.
     
  7. Blood glucose monitoring - contributes to the use of Raman spectroscopy and the development of Raman wearables for real-time blood glucose monitoring.
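
To make the BCI speller item concrete: P300-based spellers detect an event-related potential that appears roughly 300 ms after a target stimulus flashes, typically by classifying short EEG epochs. The sketch below is a minimal, illustrative baseline on synthetic data; the channel count, epoch length, and injected "P300" bump are assumptions for demonstration, not our lab's recordings or pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(seed=0)

# Synthetic EEG epochs: (n_epochs, n_channels, n_samples) -- illustrative only.
n_epochs, n_channels, n_samples = 200, 8, 128
X = rng.normal(size=(n_epochs, n_channels, n_samples))
y = rng.integers(0, 2, size=n_epochs)  # 1 = target flash, 0 = non-target
# Inject a small positive deflection into target epochs to mimic the P300.
X[y == 1, :, 60:80] += 0.5

# Flatten each epoch into one feature vector; LDA is a common P300 baseline.
X_flat = X.reshape(n_epochs, -1)
clf = LinearDiscriminantAnalysis()
clf.fit(X_flat[:150], y[:150])
print("held-out accuracy:", clf.score(X_flat[150:], y[150:]))
```

In a real speller, the classifier's per-flash scores are aggregated over repeated stimulations to decide which character the user is attending to.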

Focus Areas

Although these topics differ, our lab approaches them through a set of shared research challenges:

  1. Few-shot / semi-supervised learning - contributes to the development of models that learn quickly when labels are limited or entirely unavailable.
     
  2. Low compute and efficiency - contributes to improving the efficiency of large models through distillation, pruning, quantization, parameter-efficient tuning, or token merging; see the distillation sketch after this list.
     
  3. Robustness - contributes to the use of adversarial training and data augmentation for robust performance.
     
  4. Explainable AI - contributes to the development of tools, techniques, and model analyses that yield a better understanding of model behavior.
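
As one concrete instance of the efficiency challenge, knowledge distillation trains a compact student model to match a larger teacher's softened output distribution while still fitting the ground-truth labels. The PyTorch sketch below is a minimal illustration on random logits; the temperature `T` and mixing weight `alpha` are arbitrary illustrative choices, not settings from our projects.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-softened distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients to account for the 1/T^2 softening
    # Hard targets: standard cross-entropy on ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random logits for a 10-class problem.
student = torch.randn(8, 10)
teacher = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student, teacher, labels))
```

In practice the teacher's logits come from a frozen, pretrained model, and only the student is updated with this loss.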