D Mzurikwao

Research student


D Mzurikwao is a research student in the School of Engineering and Digital Arts.


Book section

  • Mzurikwao, D., Williams Samuel, O., Grace Asogbon, M., Li, X., Li, G., Yeo, W., Efstratiou, C. and Siang Ang, C. (2019). A Channel Selection Approach Based on Convolutional Neural Network for Multi-channel EEG Motor Imagery Decoding. In: 2019 IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering (AIKE). New York, USA: IEEE, pp. 195-202. Available at: https://doi.org/10.1109/AIKE.2019.00042.
    For many disabled people, a brain-computer interface (BCI) may be the only way to communicate with others and to control things around them. Using the motor imagery paradigm, one can decode an individual's intention from their brainwaves to help them interact with their environment without making any physical movement. For decades, machine learning models trained on features extracted from acquired electroencephalogram (EEG) signals have been used to decode motor imagery activities. This method has several limitations and constraints, especially during feature extraction. The large number of channels on current EEG devices makes them hard to use in real life, as they are bulky, uncomfortable to wear, and take a long time to set up. In this paper, we introduce a technique that uses a convolutional neural network (CNN) to perform channel selection and to decode multiple classes of motor imagery intentions from four participants who are amputees. A CNN model trained on EEG data from 64 channels achieved a mean classification accuracy of 99.7% over five classes. Channel selection based on weights extracted from the trained model was then performed, with subsequent models trained on eight selected channels achieving a reasonable accuracy of 91.5%. Training the model in the time domain and the frequency domain was also compared, and different window sizes were tested to explore the possibility of real-time application. Our channel selection method was then evaluated on a publicly available motor imagery EEG dataset.
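The core idea described in the abstract, ranking EEG channels by the weights a trained CNN assigns to them and keeping only the top-ranked channels, can be sketched as follows. This is a minimal illustration, not the authors' code: the weight tensor shape (filters, channels, kernel length) and the aggregation rule (sum of absolute weights per channel) are assumptions for demonstration purposes.

```python
import numpy as np

def select_channels(first_layer_weights: np.ndarray, k: int) -> np.ndarray:
    """Rank EEG channels by total absolute weight in a trained conv layer.

    first_layer_weights: assumed shape (n_filters, n_channels, kernel_len).
    Returns the indices of the k channels with the largest scores,
    in descending order of importance.
    """
    # Aggregate over filters (axis 0) and kernel taps (axis 2),
    # leaving one importance score per channel.
    importance = np.abs(first_layer_weights).sum(axis=(0, 2))
    return np.argsort(importance)[::-1][:k]

# Toy example: 2 filters, 3 channels, kernel length 4.
# Channel 1 is given the largest weights, channel 0 the next largest.
w = np.zeros((2, 3, 4))
w[:, 1, :] = 2.0
w[:, 0, :] = 1.0
print(select_channels(w, 2))  # channel 1 ranks first, then channel 0
```

In practice the scores would come from the first convolutional layer of the trained EEG decoder, and the subsequent model would be retrained from scratch on only the selected channels, as the abstract describes.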

Conference or workshop item

  • Mzurikwao, D., Ang, C., Samuel, O., Asogbon, M., Li, X. and Li, G. (2018). Efficient Channel Selection Approach for Motor Imaginary Classification based on Convolutional Neural Network. In: IEEE International Conference on Cyborg and Bionic Systems (CBS). IEEE, pp. 418-421. Available at: https://doi.org/10.1109/CBS.2018.8612157.
    A brain-computer interface (BCI) may be the only means of communication and control for disabled people. A person's intention can be decoded from their brainwaves during motor imagery, and this can be used to help them control their environment without making any physical movement. To decode intention from brainwaves during motor imagery activities, machine learning models trained on features extracted from the acquired EEG signals have been used. Although this technique has been successful, it has several limitations and difficulties, especially during feature extraction. Moreover, many current BCI systems rely on a large number of channels (e.g. 64) to capture the spatial information necessary for training a machine learning model. In this study, a convolutional neural network (CNN) is used to decode five motor imagery intentions from EEG signals obtained from four subjects using a 64-channel EEG device. A CNN model trained on raw EEG data achieved a mean classification accuracy of 99.7%. Channel selection based on learned weights extracted from the trained CNN model was then performed, with subsequent models trained on only the two channels with the highest weights attaining a high accuracy (average of 98%) for three of the four participants.