Culture

We strive to build an interdisciplinary team working on deep learning, especially at the intersection of brain and language, ultimately in the service of human health and well-being. Our core values are:

  • Experimental:  We value scientific rigor, grounding our research in strong scientific foundations and conducting sound experiments that yield definitive, repeatable findings.
  • Computational:  Our work is computational by nature, relying on algorithms, mathematical models, a strong theoretical background, and strong coding skills.

Focus Areas

Our lab's focus is threefold, modeling: 1) neuroimages (e.g., fMRI) (nilearn + nipype + PyTorch), 2) EEG signals (MNE + PyTorch), and 3) text (torchtext + PyTorch).
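To make the toolchains concrete, here is a minimal, purely illustrative sketch (not a lab pipeline) of the EEG path: a recording is prepared with MNE and handed to a PyTorch model. The channel names, sampling rate, filter band, and toy classifier are all assumptions made for the example.

```python
import numpy as np
import mne
import torch
import torch.nn as nn

# Synthetic 8-channel EEG, 10 seconds at 250 Hz (placeholder data).
sfreq, n_channels, n_samples = 250, 8, 2500
info = mne.create_info(ch_names=[f"EEG{i}" for i in range(n_channels)],
                       sfreq=sfreq, ch_types="eeg")
raw = mne.io.RawArray(np.random.randn(n_channels, n_samples), info)

# Band-pass filter with MNE, then convert to a (batch, channels, time) tensor.
raw.filter(l_freq=1.0, h_freq=40.0, verbose=False)
x = torch.from_numpy(raw.get_data()).float().unsqueeze(0)

# A toy 1-D convolutional classifier standing in for a real model.
model = nn.Sequential(
    nn.Conv1d(n_channels, 16, kernel_size=64), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 2))
print(model(x).shape)  # torch.Size([1, 2])
```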

Although these focus areas differ in their data structures, our lab approaches them through a set of shared research challenges:

  • Multi-Task Learning - contributes to the development of models that generalize across tasks (e.g., question answering, summarization, dialogue) and across subjects.  This challenge goes by many names that share a similar goal, e.g., transfer learning, meta-learning, few-shot learning, and zero-shot learning.  (See the multi-task sketch after this list.)
     
  • Reinforcement learning - contributes to the use of reinforcement learning for more effective training.  In backpropagation-based learning, one has to define a differentiable loss function; however, a model can also be optimized through behavioral rewards.  For example, in text generation the ROUGE score is non-differentiable, so reinforcement learning can be used to optimize it directly.  (See the REINFORCE sketch after this list.)
     
  • Cross-modal learning - contributes to learning the alignment or mapping function between different modalities, for example converting fMRI images to face images, EEG signals to stimulus images, or text to images, and vice versa.  Such models typically build on generative adversarial networks.  (See the cross-modal GAN sketch after this list.)
     
  • Understanding the robustness of the model - contributes to testing and improving the robustness and generalizability of a model through adversarial attacks (e.g., perturbing the input or adding noise) or through specialized test suites.  (See the adversarial-perturbation sketch after this list.)
     
  • Understanding how the model works - contributes to the empirical investigation of how a model succeeds or fails at modeling the data.  This often involves examining hidden states, cell states, attention heads, etc.  (See the inspection sketch after this list.)
     
  • Neural Architecture Search - contributes to automatically finding optimal micro- and macro-architectures for a neural network through pruning and reinforcement learning.  (See the pruning sketch after this list.)
     
  • Applications - contributes to the creative use and comparison of existing techniques on unsolved industrial and practical real-world problems.  Examples:  EEG (e.g., cognitive enhancement, motor imagery, emotion/cognition recognition, BCI spellers), text (e.g., summarization, depression identification, social media analysis), fMRI (e.g., diagnosis).
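
Multi-task sketch.  A minimal illustration of the multi-task learning challenge above: one shared encoder with a small head per task (hard parameter sharing).  All dimensions and task names are placeholder assumptions, not a lab model.

```python
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Hard parameter sharing: one shared encoder, one head per task."""
    def __init__(self, in_dim=128, hidden=64, task_dims=None):
        super().__init__()
        task_dims = task_dims or {"qa": 2, "summarization": 10, "dialogue": 5}
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, out_dim) for task, out_dim in task_dims.items()})

    def forward(self, x, task):
        # The encoder is shared across tasks; only the selected head differs.
        return self.heads[task](self.encoder(x))

model = MultiTaskModel()
x = torch.randn(4, 128)            # dummy batch of input features
print(model(x, "qa").shape)        # torch.Size([4, 2])
print(model(x, "dialogue").shape)  # torch.Size([4, 5])
```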
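REINFORCE sketch.  The reinforcement-learning bullet can be illustrated with the REINFORCE trick for non-differentiable rewards: sample an output, score it with a reward (here a toy stand-in for ROUGE, not a real metric), and scale the log-probabilities by that reward.  The tiny policy, dummy reward, and shapes are assumptions for illustration only.

```python
import torch
import torch.nn as nn

vocab_size, hidden = 20, 32
policy = nn.Linear(hidden, vocab_size)          # toy token policy
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def dummy_reward(tokens):
    # Stand-in for a non-differentiable metric such as ROUGE:
    # here we simply reward occurrences of the arbitrary token id 3.
    return (tokens == 3).float().mean()

state = torch.randn(1, hidden)                  # placeholder decoder state
dist = torch.distributions.Categorical(logits=policy(state))
tokens = dist.sample((5,))                      # sample a short "sequence"
reward = dummy_reward(tokens)

# REINFORCE: raise the log-probability of sampled tokens, scaled by the reward.
loss = -(dist.log_prob(tokens).sum() * reward)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```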
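Cross-modal GAN sketch.  For the cross-modal learning bullet, the adversarial recipe in its simplest form: a generator maps features from one modality (say, a 64-dimensional EEG embedding) to another (a 128-dimensional image embedding), while a discriminator tries to tell generated features from real ones.  The dimensions, random data, and two-layer networks are assumptions; real pipelines operate on actual signals and images rather than random vectors.

```python
import torch
import torch.nn as nn

src_dim, tgt_dim, batch = 64, 128, 16          # illustrative sizes only
G = nn.Sequential(nn.Linear(src_dim, 256), nn.ReLU(), nn.Linear(256, tgt_dim))
D = nn.Sequential(nn.Linear(tgt_dim, 256), nn.ReLU(), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

eeg = torch.randn(batch, src_dim)              # dummy source-modality features
real_img = torch.randn(batch, tgt_dim)         # dummy target-modality features

# Discriminator step: real target features vs. generated (detached) ones.
fake_img = G(eeg).detach()
d_loss = bce(D(real_img), torch.ones(batch, 1)) + bce(D(fake_img), torch.zeros(batch, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: produce features the discriminator labels as real.
g_loss = bce(D(G(eeg)), torch.ones(batch, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```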
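Adversarial-perturbation sketch.  The robustness bullet often boils down to perturbing inputs along the gradient of the loss and checking whether predictions change, in the spirit of the fast gradient sign method.  The model, data, and perturbation size below are placeholders.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 32, requires_grad=True)   # dummy inputs
y = torch.randint(0, 2, (8,))                # dummy labels

# Nudge each input in the direction that increases the loss (FGSM-style).
loss_fn(model(x), y).backward()
x_adv = x + 0.1 * x.grad.sign()

clean_pred = model(x).argmax(dim=1)
adv_pred = model(x_adv).argmax(dim=1)
print((clean_pred != adv_pred).float().mean())  # fraction of predictions that flipped
```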
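Inspection sketch.  For the bullet on understanding how a model works, two common starting points are reading attention weights and hooking intermediate activations.  The toy layers and shapes below are assumptions purely for illustration.

```python
import torch
import torch.nn as nn

# Attention weights show which positions attend to which.
attn = nn.MultiheadAttention(embed_dim=16, num_heads=2, batch_first=True)
x = torch.randn(1, 5, 16)                          # (batch, seq, dim) dummy input
out, attn_weights = attn(x, x, x, need_weights=True)
print(attn_weights.shape)                          # torch.Size([1, 5, 5]), averaged over heads

# Forward hooks capture the hidden states of intermediate layers.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
captured = {}
model[1].register_forward_hook(lambda mod, inp, outp: captured.update(hidden=outp))
model(x.mean(dim=1))
print(captured["hidden"].shape)                    # torch.Size([1, 8])
```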
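Pruning sketch.  The neural architecture search bullet mentions pruning; as a small illustrative fragment (not a full search loop), PyTorch's pruning utilities can zero out the smallest-magnitude weights of a layer, after which one would retrain and evaluate the slimmer architecture.  The layer size and pruning fraction are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(64, 32)                                 # toy layer
prune.l1_unstructured(layer, name="weight", amount=0.3)   # zero the 30% smallest weights

sparsity = (layer.weight == 0).float().mean()
print(f"weight sparsity: {sparsity:.2f}")                 # roughly 0.30

prune.remove(layer, "weight")                             # make the pruning permanent
```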