ABSTRACT

Recent advances in technologies such as artificial intelligence have given rise to powerful new tools, and understanding the imagery that forms inside the human brain has long fascinated researchers. Brain signals encode information generated in response to various stimuli to the human body, and decoding them has become an active research area in recent years. These signals are recorded using non-invasive procedures such as functional magnetic resonance imaging, magnetoencephalography, and electroencephalography. Recent advances in deep learning have shown promising results in the healthcare domain, particularly in decoding brain signals. Electroencephalography signals are electrophysiological recordings with high temporal resolution and carry significant information about thoughts and visual stimuli. Recent research has shown that features extracted from brain signals can be used to generate the images being visualized by a subject, and conditional generative adversarial network-based approaches in brain-computer interface applications have succeeded in generating realistic natural images from brain signals. This work generates images corresponding to a subject's thoughts from recorded electroencephalography signals. To accomplish this, a feature encoder and a generative adversarial network are designed using convolutional neural networks: features are first extracted from the electroencephalography signals, and images are then generated from those features by the proposed generative adversarial network. The experiments focus on digit and character datasets. The results demonstrate that the proposed method achieves improved accuracy in classifying electroencephalography signals and generates realistic images with the generative adversarial network.
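To make the described pipeline concrete, the following is a minimal PyTorch sketch of the two components the abstract names: a convolutional feature encoder that maps an electroencephalography window to a feature vector (with an auxiliary classification head), and a conditional generator that produces an image from that feature vector and a noise vector. All layer sizes, channel counts, and names (EEGFeatureEncoder, ConditionalGenerator) are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class EEGFeatureEncoder(nn.Module):
    """1-D CNN over (channels, time) EEG windows -> fixed-length feature vector."""
    def __init__(self, n_channels=14, n_classes=10, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm1d(32), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2),
            nn.BatchNorm1d(64), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),              # global pooling over the time axis
        )
        self.fc_feat = nn.Linear(64, feat_dim)
        self.fc_cls = nn.Linear(feat_dim, n_classes)  # auxiliary classifier head

    def forward(self, x):                          # x: (batch, n_channels, time)
        h = self.conv(x).squeeze(-1)
        feat = self.fc_feat(h)
        return feat, self.fc_cls(feat)

class ConditionalGenerator(nn.Module):
    """Maps noise + EEG feature vector to a 28x28 grayscale image."""
    def __init__(self, noise_dim=100, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + feat_dim, 128 * 7 * 7), nn.ReLU(),
            nn.Unflatten(1, (128, 7, 7)),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # 7x7 -> 14x14
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),    # 14x14 -> 28x28
            nn.Tanh(),
        )

    def forward(self, z, feat):
        return self.net(torch.cat([z, feat], dim=1))

if __name__ == "__main__":
    eeg = torch.randn(4, 14, 256)                  # 4 windows, 14 channels, 256 samples
    encoder, generator = EEGFeatureEncoder(), ConditionalGenerator()
    feat, logits = encoder(eeg)
    imgs = generator(torch.randn(4, 100), feat)
    print(imgs.shape)                              # torch.Size([4, 1, 28, 28])
```

In such a setup, the encoder would typically be trained first on signal classification, after which its features condition the adversarial training of the generator against a discriminator (omitted here for brevity).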