Setting up a Python environment for OpenBCI

Our lab uses mne-python for offline analysis and signal processing; psychopy, tkinter, and pygame for running the experiments; and pyOpenBCI and other OpenBCI Python libraries for streaming data from OpenBCI boards.  For the backend, scikit-learn, pytorch/keras, pyriemann, pywavelets, and mne-python are used.

For setting up your Python environment, follow the steps below (tested on Ubuntu 18.04; also tried on macOS Catalina, using brew instead of apt-get. The Mac setup is a bit more involved, e.g., install tkinter with brew install tcl-tk and make sure to export the compiler flags before creating the venv; this guide is one of the cleanest, so I highly recommend it).

  1. Make sure you have Python installed; if not, install it as follows. (Check the brew guide for macOS. Do not use the default version of Python on a Mac, and do not install Python from the website or via Anaconda; use only brew, and you will have one clean Python distribution.)
    • sudo apt update
    • sudo apt install python3-pip python3-dev
    • sudo apt install build-essential libssl-dev libffi-dev libgtk-3-dev
  2. You can check whether Python is installed
    • python3 --version  #should return python version 3.X.X
  3. Upgrade pip
    • sudo -H pip3 install --upgrade pip
  4. Install the Python virtual environment package (optional, but it will make your life easier by keeping your dependencies isolated)
    • sudo apt install -y python3-venv
  5. You can create a directory to hold all your projects (optional)
    • mkdir ~/bci_project
    • cd ~/bci_project
  6. Then, within the project dir, create your first virtual environment, which will hold all Python dependencies for your BCI project
    • python3 -m venv bci_env
  7. Then activate the virtual env (type "deactivate" when you want to leave it)
    • source bci_env/bin/activate  

    (If you do not want to type this every time, check out how to put alias bci="source /path/to/bin/activate" inside your ~/.bashrc or ~/.bash_profile.)

  8. Once activated (you will see the environment name in parentheses in front of your prompt), install all the Python libraries for your BCI project. As a starter:
    1. You can simply use "python" and "pip" instead of "python3" and "pip3" because the environment is already Python 3. Confirm by typing "python -V" and "pip -V".
    2. For pyOpenBCI to work
      • pip install pyserial bitstring xmltodict requests
      • pip install bluepy (skip this for Mac)
      • pip install pyOpenBCI 
    3. For LabStreamingLayer
      • pip install pylsl
    4. For basic python data analytics
      • pip install pandas numpy scikit-learn matplotlib seaborn scipy
      • pip install torch  #doing deep learning
      • pip install tensorflow
      • pip install keras
      • pip install -U mne #python-mne
      • pip install pyqt5==5.14.0 #downgrade pyqt5 to 5.14 for psychopy to work, since newer versions ship manylinux2014 wheels
      • pip install psychopy #for making experiments
      • pip install https://extras.wxpython.org/wxPython4/extras/linux/gtk3/ubuntu-18.04/wxPython-4.0.7.post2-cp36-cp36m-linux_x86_64.whl
         #in case you are not using ubuntu 18.04, try others https://extras.wxpython.org/wxPython4/extras/linux/gtk3/
      • pip install braindecode #for deep learning in EEG; built on top of pytorch

      • pip install pygame #for making experiments

      • pip install pyriemann #for riemannian geometry

      • pip install jupyter #for running the notebook
      • sudo apt-get install python3-tk  #tkinter for making UIs in Python (use brew install tcl-tk for macOS; also export flags, then create the venv)
  9. Before dealing with Python, install OpenBCI_GUI and check the EEG signal quality
    1. See the picture below. Note that the typical uVrms shown on the right is one or two digits. If it is around 1000, something is wrong.
    2. If you get a lot of railed data, it means you did not connect the SRB1 (reference) and noise-cancelling electrodes. Both should be placed at the earlobe.
    3. Note that I am using Cz, P3, Pz, and P4 for channels 1, 2, 3, and 4 (you do not need to strictly follow the OpenBCI GUI head-plot mapping; as long as you know the ordering, it's fine). Channel 1 maps to the N1 pin, that is.
  10. Then write your first Python file, which will stream LSL data to any listeners
    1. Run this file in the background, e.g., "python lsl_stream.py". (Here I am using a Cyton board with 8 channels running at 250Hz. Note that even though I use only 4 channels, the stream has to be defined according to the board specification; we can delete the unused channels later on.)
    2. If you get a permission error, run "sudo usermod -a -G tty ${USER}" to add your current user to the tty group (and do the same for dialout). Log out and back in to activate the group permissions; then simply type "groups" in your terminal and you should see that you belong to both the tty and dialout groups.
  11. Then write your first Python file to view this real-time LSL data

    lsl_viewer plot

    1. While lsl_stream.py is running in another tab, type "python lsl_viewer.py" to view the data streamed in real time by lsl_stream.py. (Credits: Neurotech Berkeley)
    2. Note that NP1 maps to the first channel, NP2 maps to the 2nd channel, and so on.  NP1 is the pin name on the Cyton board. 
    3. Note that the voltage should normally be two digits. If it is more than that (for example, my third channel here), something is wrong. Try changing the electrode or checking the wires.
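A minimal viewer in the same spirit might look like the sketch below. The window length, channel count, and stream-resolution details are assumptions; the Neurotech Berkeley script is more polished.

```python
# lsl_viewer.py -- minimal sketch: pull EEG samples from an LSL stream and
# plot a rolling window per channel. Assumes pylsl and matplotlib are
# installed and lsl_stream.py is already running in another terminal.
from collections import deque

def make_buffers(n_channels, n_samples):
    """One fixed-length rolling buffer per channel, pre-filled with zeros."""
    return [deque([0.0] * n_samples, maxlen=n_samples)
            for _ in range(n_channels)]

def main(n_channels=8, window=500):
    from pylsl import StreamInlet, resolve_byprop
    import matplotlib.pyplot as plt

    streams = resolve_byprop('type', 'EEG', timeout=5)  # find the EEG stream
    inlet = StreamInlet(streams[0])
    buffers = make_buffers(n_channels, window)

    plt.ion()
    fig, axes = plt.subplots(n_channels, 1, sharex=True)
    while True:
        chunk, _ = inlet.pull_chunk(timeout=0.1)  # grab whatever has arrived
        for sample in chunk:
            for ch in range(n_channels):
                buffers[ch].append(sample[ch])  # old samples fall off the left
        for ch, ax in enumerate(axes):
            ax.clear()
            ax.plot(buffers[ch])
        plt.pause(0.01)

if __name__ == '__main__':
    main()
```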
  12. If you would like to store this data in a CSV for later analysis, you can replace lsl-viewer.py with lsl-record.py

    mkdir data
    chmod 777 data

    1. create a "data" directory at the same place as lsl-record.py
    2. then run "pyhon lsl-record.py" Cltr-C when you are finished

A typical process includes

  1. (psychopy/pygame) data acquisition through some stimulus presentation.
  2. (mne.raw) pre-process the signal data: a band-pass filter so you keep only the frequencies you need (respecting the Nyquist theorem), a notch filter to cut the 50/60 Hz mains noise emitted by electronic devices, and ICA to find the source of eye blinks, remove it, and reconstruct the signal
  3. (mne.raw) feature extraction and selection.  For extraction: time domain, frequency domain (PSD via the Fourier transform), or wavelet transform.  For selection, one of the most common methods is the Common Spatial Pattern (CSP)
  4. (sklearn, pytorch) feed the features to machine learning models to classify the signals.
  5. (pygame, tkinter, javascript) turn it into a BCI application
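Steps 2-4 above can be sketched with mne and scikit-learn roughly as below. This is illustrative only: it assumes the recording has already been loaded (preloaded) into an mne.io.Raw object with an events array, and every parameter value (band edges, component counts, epoch window) is a placeholder to tune for your experiment.

```python
# Sketch of the preprocess -> extract -> classify pipeline with mne + sklearn.
import numpy as np

def notch_freqs(line_freq, sfreq):
    """Mains-noise harmonics that fit below the Nyquist frequency (sfreq / 2)."""
    return np.arange(line_freq, sfreq / 2, line_freq)

def run_pipeline(raw, events, event_id, sfreq=250):
    import mne
    from mne.decoding import CSP
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline

    # 1. Band-pass to the band of interest (edges must stay below Nyquist).
    raw.filter(l_freq=7., h_freq=30.)
    # 2. Notch out 50 Hz mains noise (use 60 for the US) and its harmonics.
    raw.notch_filter(freqs=notch_freqs(50, sfreq))
    # 3. ICA: fit, mark the eye-blink components by inspection, reconstruct.
    ica = mne.preprocessing.ICA(n_components=4, random_state=0)
    ica.fit(raw)
    # ica.exclude = [...]  # indices of blink components, chosen by eye
    raw = ica.apply(raw)
    # 4. Epoch around the events, then CSP features + an LDA classifier.
    epochs = mne.Epochs(raw, events, event_id, tmin=0., tmax=2., baseline=None)
    X, y = epochs.get_data(), epochs.events[:, -1]
    clf = make_pipeline(CSP(n_components=4), LinearDiscriminantAnalysis())
    clf.fit(X, y)  # needs at least two event classes in y
    return clf
```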