Brain Computer Interface with Artificial Intelligence and Reinforcement Learning

Jeff Coleman
5 min read · May 4, 2018


A brain–machine interface (BMI) is a device that translates neuronal information into commands capable of controlling external software or hardware. The dream gadget that lets humans read each other’s thoughts and communicate with brain waves may be moving closer to reality. Brain–computer interfaces combine knowledge and techniques from neuroscience, signal processing and machine learning. Functional near-infrared spectroscopy (fNIRS) is a non-invasive technique commonly used to record brain signals.

Reinforcement learning is an interactive learning method in which a system learns to obtain reward by interacting with its environment; adaptation is built into the algorithm itself through a feedback signal.

Top 10 Current Research Works On Brain Computer Interface with AI:

Brain–computer interfaces (BCIs) could change the way people think, soldiers fight and Alzheimer’s is treated. With the availability of IoT and open-source technologies, BCI and AI research is advancing rapidly in academia, in industry and in other research labs. Here are my top 10 most interesting current works in the field of brain–computer interfaces with artificial intelligence:

The notable AI scientist and spiritual master Sri Amit Ray explained the brain–computer interface for compassionate AI in his seminal book Compassionate Artificial Superintelligence AI 5.0. Many researchers think that his research and contribution on compassionate artificial intelligence will have considerable impact on future artificial superintelligence research.

MIT’s Brain and Cognitive Sciences research lab: a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University is creating a feedback system that lets people correct robot mistakes instantly with nothing more than their brains.

The Max Planck Institute for Intelligent Systems, Germany, is working on the neuro-physiological causes of performance variations in brain–computer interfaces.

Johns Hopkins University’s Neuroengineering Lab focuses on spino-cerebellar ataxia (SCA) sufferers, who are able to perform motor movements but, due to selective degeneration of the cerebellum, are deficient in precise control of those movements.

The Biomedical Functional Imaging and Neuroengineering Laboratory of Carnegie Mellon University focuses on brain–computer interface research on robotic arms and thought decoders.

Neuralink is an American neurotechnology company founded by Elon Musk and eight others, reported to be developing implantable brain–computer interfaces (BCIs). The company’s headquarters are in San Francisco.

The Life Science Center of TARA at the University of Tokyo, Japan, received the Annual BCI Award 2016 for “A learning-based approach to artificial sensory feedback: intracortical microstimulation replaces and augments vision.”

The Annual BCI Research Award 2017 went to the Center for Sensory-Motor Interaction, Department of Health Science and Technology, Aalborg University, Denmark, for their contribution on wrist and finger movements in a human with quadriplegia.

The Tsinghua Biomedical Engineering team, China, is doing considerable research on enabling a pure brain-based communication channel and sensors for interacting with the environment.

Kernel, a startup created by Braintree co-founder Bryan Johnson, is also trying to enhance human cognition.

Reinforcement learning and Brain computer interface:

Reinforcement learning (RL) algorithms can generally be divided into two categories: model-free and model-based. In model-free learning, the agent directly learns a policy or value function, relying only on trial-and-error experience for action selection. In model-based learning, the agent learns a dynamics model of the environment and exploits previously learned experience by planning in that model. Reinforcement learning is used for closed-loop brain-controlled interfaces. The schematic diagram of the RL-based BCI is as follows:

[Figure: schematic of a brain–computer interface with artificial intelligence]

Power and Limitations of Model-Free Algorithms

While model-free deep reinforcement learning algorithms can learn a wide range of tasks, they typically suffer from very high sample complexity, often requiring millions of samples to achieve good performance, and can usually learn only a single task at a time.
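To make the model-free idea concrete, here is a minimal sketch of tabular Q-learning on a hypothetical 5-state corridor task (this toy environment is my own illustration, not from the original article). The agent never builds a model of the environment; it updates action values directly from sampled transitions.

```python
import random

# Hypothetical corridor: states 0..4, start at 0, reward +1 for reaching state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left / move right

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action index]
alpha, gamma = 0.5, 0.9

random.seed(0)
for _ in range(300):
    s, done = 0, False
    while not done:
        a = random.randrange(2)  # uniform exploration; Q-learning is off-policy
        s2, r, done = step(s, ACTIONS[a])
        # Model-free update: only the sampled transition is used, no world model.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The greedy policy learned from Q moves right (action index 1) in every
# non-terminal state.
policy = [max(range(2), key=lambda i: Q[s][i]) for s in range(GOAL)]
```

Note the sample cost even here: hundreds of episodes for a five-state problem, which is why model-free methods scale so poorly in samples.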

Power and Limitations of Model-Based Algorithms

Model-based RL uses experience to construct an internal model of the transitions and immediate outcomes in the environment. Appropriate actions are then chosen by searching or planning in this world model.

This is a statistically efficient way to use experience: each morsel of information from the environment can be stored in a statistically faithful and computationally manipulable way. Provided that constant replanning is possible, action selection can readily adapt to changes in the transition contingencies and the utilities of the outcomes. This flexibility makes model-based RL suitable for supporting goal-directed actions.
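The two steps described above can be sketched in code, again on a hypothetical 5-state corridor task of my own invention: fit a tabular transition/reward model from experience, then choose actions by planning (value iteration) in that learned model rather than in the real environment.

```python
import random
from collections import defaultdict

# Hypothetical corridor: states 0..4, reward +1 for reaching state 4.
N_STATES, GOAL, ACTIONS = 5, 4, [-1, +1]

def step(s, a):
    nxt = min(max(s + a, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0)

# 1) Gather experience and fit a tabular model.
trans = defaultdict(lambda: defaultdict(int))  # (s, a) -> {next state: count}
rew = {}                                       # (s, a) -> reward (deterministic here)
random.seed(1)
for _ in range(2000):
    s, ai = random.randrange(N_STATES), random.randrange(2)
    s2, r = step(s, ACTIONS[ai])
    trans[(s, ai)][s2] += 1
    rew[(s, ai)] = r

def model_value(s, ai, V, gamma=0.9):
    # Expected return of (s, a) under the *learned* model, not the real env.
    total = sum(trans[(s, ai)].values())
    return rew[(s, ai)] + gamma * sum(c * V[s2] for s2, c in trans[(s, ai)].items()) / total

# 2) Plan with value iteration: no further environment samples are needed,
# and replanning after a model update adapts the policy immediately.
V = [0.0] * N_STATES
for _ in range(100):
    for s in range(GOAL):  # state 4 is terminal, V = 0
        V[s] = max(model_value(s, ai, V) for ai in range(2))

policy = [max(range(2), key=lambda ai: model_value(s, ai, V)) for s in range(GOAL)]
```

If the reward location changed, only `rew` would need re-estimating; replanning alone would produce the new policy, which is the flexibility the paragraph describes.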

Implementation of RL and BCI:

Brain signals can be transformed from the time domain into the frequency domain, e.g., with the discrete Fourier transform (DFT). They can then be divided into different frequency bands: delta waves (0.5–3 Hz), theta waves (4–7 Hz), alpha/mu waves (8–13 Hz), beta waves (14–30 Hz) and gamma waves (> 30 Hz).
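As a sketch of this step, the snippet below builds a synthetic “EEG” trace (a 10 Hz alpha-band sinusoid plus noise — an assumption for illustration, not real data), takes its DFT with NumPy, and sums the spectral power falling into each of the bands listed above.

```python
import numpy as np

# Synthetic 2 s trace sampled at 250 Hz: a 10 Hz (alpha-band) sinusoid + noise.
fs = 250
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)

# DFT of the real-valued signal; rfftfreq gives the frequency of each bin.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)

bands = {"delta": (0.5, 3), "theta": (4, 7), "alpha": (8, 13),
         "beta": (14, 30), "gamma": (30, fs / 2)}
power = {name: spectrum[(freqs >= lo) & (freqs <= hi)].sum()
         for name, (lo, hi) in bands.items()}

# The alpha band dominates, matching the 10 Hz component we injected.
dominant = max(power, key=power.get)
```

The per-band powers computed this way are exactly the kind of features a BCI classifier is trained on.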

Filtering Noise:

The raw signals are noisy, non-stationary, complex and high-dimensional. Hence, the signals first need to go through different pre-processing steps. After pre-processing, features are extracted and used to train a classifier that assigns classes to the sets of features encoding a person’s imagined movement. Spatial filters like the common average reference (CAR) or the Laplacian filter can be used to subtract common noise and enhance local activity.

Delta waves are an indicator for deep sleep or deep unconsciousness; theta waves indicate the transition between deep sleep and wakefulness; alpha waves indicate inactive wakefulness and relaxation; beta waves indicate active wakefulness; gamma waves indicate strong concentration and learning.

Reinforcement learning

In reinforcement learning, an agent interacts with an uncertain environment with the goal of maximizing a numerical long-term reward. A reinforcement learning task that satisfies the Markov property is called a Markov decision process (MDP). This property makes it possible to predict the next state from the current state and action alone, without considering the full history of states and actions. Through the learned policy, the RL agent knows which command to execute in every state.
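The “numerical long-term reward” the agent maximizes is the discounted return, which a couple of lines make concrete (the reward sequence here is an arbitrary example, not from the article):

```python
# Discounted return: G = r_0 + gamma*r_1 + gamma^2*r_2 + ...
# Example: the agent receives reward only on the third step.
gamma = 0.9
rewards = [0.0, 0.0, 1.0]
G = sum(gamma ** k * r for k, r in enumerate(rewards))
# G = 0.9**2 * 1.0 = 0.81: the same reward is worth less the later it arrives.
```

The discount factor gamma < 1 is what makes the agent prefer reaching its goal sooner, and it is the same gamma that appears in the Q-learning and value-iteration updates above.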

Summing up:

Recent advances in artificial intelligence and reinforcement learning, together with neural interfacing technology and various signal processing methodologies, have enabled us to better understand and then utilize brain activity for interacting with computers and other devices. Here, we discussed the basic elements of the emerging BCI, AI and RL technologies, as well as the top ten research institutes and initiatives in this emerging field.

Thanks!


Jeff Coleman

Technology and AI consultant. Loves to write about personal growth, leadership, inspiration and management. A loving father.