rate-limited by the slowest component. This had no impact in the virtual environment, where simulation time had no bearing on real time, but it did affect the real robot, where the object identification process proved slowest due to the naïve implementation of color matching applied to the relatively large input image. Many other robot platforms, including Tekkotsu and MoBeE (Frank et al., 2012), use threaded, finite state machine architectures, which can achieve real-time performance and take advantage of concurrent and distributed processing of information. This avoids the rate-limiting problem of the serial architecture at the cost of increased system complexity. However, with the computational power inherent in modern laptops, like the one mounted on the Calliope, CoCoRo’s simplistic structure did not interfere with the ability of the robot to complete tasks effectively. The Calliope operated at an average rate of 10Hz during task execution, which was sufficiently fast to adjust motor commands as needed for the tasks undertaken, albeit with the maximum wheel and joint velocities artificially reduced. Wheel velocities were capped at 300 mm/s and arm joint velocities were capped at ±1.5 rad/s. The simulation step time in Webots was set at the default value of 32ms. As all sensor and motor component control steps must be a multiple of this simulation step, 96ms was chosen to offer a comparable decision performance rate.
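The choice of 96ms can be sketched as picking the integer multiple of the simulation step closest to the target control period; a minimal illustration (the function name is ours, not part of CoCoRo or Webots):

```python
def control_step_ms(sim_step_ms, target_period_ms):
    """Return the integer multiple of the simulation step closest to the
    desired control period (every control step must be such a multiple)."""
    n = max(1, round(target_period_ms / sim_step_ms))
    return n * sim_step_ms

# A 10 Hz target (100 ms period) with the default 32 ms simulation step
# gives 3 * 32 = 96 ms, the value used in the study.
```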
Since BCIs bypass the normal output pathways of peripheral nerves and muscles, they were originally directed at people with severe neuromuscular disorders, such as amyotrophic lateral sclerosis (ALS), brainstem stroke, and spinal cord injury (Wolpaw et al., 2002). A BCI can provide such individuals, who may be completely paralyzed, with basic communication capabilities and enable them to use assistive technologies. Common applications are spelling devices, neuroprosthetics, and control of wheelchairs. In recent years there has been significant effort to develop BCIs aimed at able-bodied users as well. BCI-based controllers directed at practical use have already been adopted in computer gaming (Marshall et al., 2013) and have been used experimentally in many other real-world applications, for instance robotic control. However,
The combination of BMI and computer vision-based grasping creates a system that can allow people without use of their arms to control a robotic prosthetic to perform functional tasks in cases where neither technology would be sufficient on its own. The BMI provides the user with high-level control of the pace and goals of the arm movements. The computer vision system helps with the details of the movement, ensuring a secure grasp in the presented cases, but also identifying how to act on a specific object based on its shape. Balancing the control between the user and the automated system will provide high performance while ensuring that the user feels the device is reliable and responsive to their commands in a variety of situations. As both technologies continue to improve, robotic prosthetic control will become both easier and more useful for the people who need it.
A Brain-Computer Interface (BCI), often called a Mind-Machine Interface (MMI) or sometimes a Direct Neural Interface (DNI), Synthetic Telepathy Interface (STI), or Brain-Machine Interface (BMI), is a direct communication pathway between the brain and an external device. BCIs are often directed at assisting, augmenting, or repairing human cognitive or sensory-motor functions. Figure 1 shows the basic blocks in the BCI interface. The human brain is filled with neurons, individual nerve cells connected to one another by dendrites and axons. Every action, whether thinking, moving, feeling, or remembering something, puts neurons to work. That work is carried out by small electric signals that zip from neuron to neuron as fast as 250 mph. The signals are generated by differences in electric potential carried by ions on the membrane of each neuron. These signals can then be detected, interpreted, and used to direct a device for some purpose.
Since active participation of patients in the therapies is known to be crucial for motor recovery, inferring the subject’s level of mental stress, condition, or emotions from EEG signals provides valuable information for “assist-as-needed” protocols.  proposes an approach to incorporate the user’s attention state into game control by computing the short-window energy of the EEG signals, which contrasts between attention conditions, in which the subjects were asked to perform Stroop tasks, and inattentiveness conditions, in which they were instructed to relax. [51, 52] present a study to find a correlation between emotions and chronic mental stress levels measured by the Perceived Stress Scale 14 (PSS-14) and EEG signals.  proposes a fast emotion-detection approach from EEG, showing neutral, positive, and negative video clips to the subjects. Immediately after each video, the subjects reported the emotions induced while watching the clip. However, these proposed approaches are very specific to the tasks executed in the experiments, strongly patient-dependent, or not suitable for real-time adaptation of robotic rehabilitation systems.
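The short-window energy feature mentioned above can be computed as the mean squared amplitude over consecutive windows; a minimal sketch, where the window length and function name are our assumptions rather than the cited study's parameters:

```python
def short_window_energy(eeg, fs, window_s=1.0):
    """Mean squared amplitude of an EEG channel in consecutive,
    non-overlapping windows of window_s seconds at sampling rate fs."""
    n = int(fs * window_s)
    return [sum(x * x for x in eeg[i:i + n]) / n
            for i in range(0, len(eeg) - n + 1, n)]
```

The resulting energy series would then be contrasted between the attention (Stroop task) and inattentiveness (relaxation) conditions to drive game control.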
The scalp EEG is a measure of mean electrical activity from large populations of cortical neurons. Thus, the causal relationship between EEG activity and movement is not well understood [4]. In order to decode a user’s intended movement, signal features correlated to activity are extracted; however, finding optimal, task-specific features is an ongoing focus of BCI research [2, 5]. Given the complexity of the underlying neural data and the large number of potentially useful features, a more generalized approach to feature extraction will be valuable. The two key methodologies applied to transform neural data for BCIs are classification and regression. The work in the current study is focused on the classification problem, whereby features are used to distinguish between discrete classes of control signal. The optimal feature choice is highly dependent on the specific task [6]. For instance, features used for motor imagery-based tasks, such as alpha and beta event-related desynchronization, may provide variable accuracy for different movement tasks [7]. Furthermore, it is unclear if particular features will generalize well across different users performing the same task. For example, it has been shown that motor imagery-specific features provide variable accuracy for BCI control, with some users unable to reliably produce distinguishable oscillations [8, 9]. Therefore, the development of a subject-specific BCI that does not rely on an a priori choice of features will represent a significant advance in the field.
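As an illustration of the kind of hand-picked feature the passage argues against relying on a priori, alpha- or beta-band power can be extracted with a plain DFT. This is an O(N²) sketch for clarity, not the method of the cited studies:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Total spectral power between f_lo and f_hi Hz, computed with a
    plain DFT over the positive-frequency bins (illustrative, O(N^2))."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n)
                     for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n)
                     for t in range(n))
            power += (re * re + im * im) / n
    return power
```

A motor imagery classifier might then compare alpha-band (8-13 Hz) power against a rest baseline; the user-to-user variability noted in [8, 9] lies precisely in how separable these band powers are.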
The BMI training to control the robotic hand was performed as a randomized crossover trial consisting of two training sessions on different days. Each training session was performed with a different decoder to control the robotic hand: a real decoder or a sham decoder. Using the z-scored MEG sensor signals of the offline tasks to move the right hand, we constructed a decoder to infer hand movements at an arbitrary time, in order to control the robotic hand in real time (Fukuma et al., 2015). Each experiment was performed after more than 2 weeks had passed since the previous experiment. For the experiments with the real decoder and sham decoder, the order of the experiments was randomly assigned to the subjects. The experimenter was not blinded to the group allocation.
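The z-scoring step can be sketched as standardizing online sensor samples against baseline statistics from the offline task; the linear read-out below is our illustrative assumption, not the actual decoder of Fukuma et al. (2015), which is more involved:

```python
def z_score(samples, baseline_mean, baseline_std):
    """Standardize sensor samples against offline baseline statistics."""
    return [(s - baseline_mean) / baseline_std for s in samples]

def decode_grasp(features, weights, bias=0.0):
    """Hypothetical linear read-out: a positive score commands a grasp
    of the robotic hand, otherwise the hand opens."""
    score = bias + sum(w * f for w, f in zip(weights, features))
    return "grasp" if score > 0 else "open"
```

A sham decoder in this scheme would simply use weights unrelated to the subject's own offline data.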
In the present study, participants had no previous experience with the use of EOG signals for BNCI control and familiarized themselves with EOG control only during the calibration procedure at the beginning of the session. While eye movements are often left intact in patient populations with severe motor disabilities, e.g. stroke or spinal cord injury (SCI), the validity of these results and their dependence on various factors, e.g. cognitive capacity, attention span, or alertness, should be investigated in future studies. Also, it is conceivable that hybrid EEG/EOG BNCI control can be improved beyond the level demonstrated in this study if effective training protocols [14,16] or advanced decoding algorithms, e.g. based on Riemannian geometry, discriminative models, or machine learning, are applied.
called a Mind-Machine Interface (MMI), is a direct communication pathway between an external device and the human brain. A BMI is an association between a brain and a device that allows brain signals to direct some external action, such as control of a wheelchair. For instance, in the case of cursor control, the signal is conveyed directly from the brain to the process directing the cursor, instead of taking the normal path from the brain through the person's neuromuscular system to the finger on a mouse. Here we have surveyed the work of people who have developed different BCI applications, bringing it together in a single paper.
Six subjects participated in this experiment. EEG was recorded with one channel over the occipital cortex at a sampling rate of 1 kHz, filtered by a 0.15 Hz high-pass filter and a 150 Hz low-pass filter. The resistances between the skin and the sensor were all below 10 kΩ. The distance between the CRT and a subject was 40 cm. We examined stimuli of 6 Hz and 12 Hz, and recorded the SSVEPs of “two points”, a circle, an “8”-shaped trajectory, and a black-white flashing box as the control stimulus. This test session was repeated three times. In each recording session, the subject was told to look at the stimulus for 10 seconds and close their eyes for a rest period of a random duration between 10 and 20 seconds. Recordings with significant muscle-movement artifacts were discarded and repeated. Figure 5.2 shows the SSVEP spectra of the above four stimuli.
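Deciding which stimulus frequency dominates the occipital channel can be sketched with a single-bin DFT at each candidate frequency; an illustration under our own assumptions, since the study's actual analysis pipeline is not specified here:

```python
import math

def ssvep_power(eeg, fs, freq):
    """Spectral power at one stimulus frequency (single-bin DFT)."""
    n = len(eeg)
    re = sum(eeg[t] * math.cos(2 * math.pi * freq * t / fs) for t in range(n))
    im = sum(eeg[t] * math.sin(2 * math.pi * freq * t / fs) for t in range(n))
    return (re * re + im * im) / n

def detect_stimulus(eeg, fs, candidates=(6.0, 12.0)):
    """Pick the candidate stimulus frequency with the largest power."""
    return max(candidates, key=lambda f: ssvep_power(eeg, fs, f))
```

In practice harmonics of the stimulus frequency are often summed as well, since SSVEP responses appear at multiples of the flicker rate.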
Several publications have shown improvements in upper limb motor function after rehabilitation therapies based on robotic devices [7, 8] and FES [9, 10]. Furthermore, the combined use of both technologies has shown promising results in terms of motor recovery after stroke [11, 12]. The main advantage of the hybrid approach is that individual limitations are overcome, generating a more robust system. Robotic devices generally apply external mechanical forces to drive joint movements, while FES-based therapy facilitates exercise execution led by the participant's own muscles. The latter approach yields several benefits for motor recovery, such as increased muscle strength and cortical excitability. Further, these advantages are still present even when the stroke participant does not contribute to voluntary movement. However, the use of FES elicits the fast onset of muscle fatigue due to the non-physiological (unnatural) recruitment of the motor units. Muscle fatigue decreases the efficacy of therapy and entails other drawbacks, which is why efforts are always directed at delaying the onset of its effects. Moreover, the nonlinear and time-variant behavior of the muscles during FES generates a less accurate motor control response. This problem can be addressed by using an exoskeleton to cooperatively aid the movements. Including a robotic device avoids stimulating arm muscles merely to overcome gravity, sparing patients the discomfort generated when arm muscles are constantly stimulated for this purpose. So, the main idea behind the hybrid approach to reaching-movement rehabilitation is that the exoskeleton compensates against gravity while FES assists the patient in executing the movements.
From EGS’ neurons we recorded goal and trajectory signals for imagined movements (17). He controlled both a cursor on a computer screen and a robotic limb. As predicted from monkey work, neurons were active for imagined reach of either limb. Neurons were also found that were extremely specialized for specific behavioral actions; for example, we found units that became active for imagined movements of the hand to the mouth but would not become active for similar movements such as movement of the hand to the cheek or forehead. Such neural encoding of behaviorally meaningful actions opens the possibility that the high-level intent of the subject can be decoded and integrated with smart robotics (49–51) to perform complex movements that may otherwise require attentionally demanding moment-to-moment control of a robotic limb. To demonstrate this concept for PPC, we showed that participant EGS was able to grasp an object and move it to a new location with a robotic limb, which combined his timing of the intended movements with machine vision and smart robotic algorithms (50). These studies confirmed the earlier monkey studies showing that PPC is a good candidate source of signals for neuroprosthetic control.
Once the BCI has predicted the user’s mental task, it sends the corresponding command to the computer, which performs the corresponding action. The user observes this response as feedback, completing the BCI cycle as shown in Figure 2.1. Possible applications include brain-controlled motorized wheelchairs, remotely-controlled assistive robots that can navigate a building [14, 15], improved rehabilitation methods, and even prosthetic limbs that respond to neural signals like a biological limb. Additionally, BCIs could provide an intuitive control method for able-bodied users teleoperating a robot in a remote location. This would potentially provide faster and more intuitive control, either alone or as an enhancement to traditional interfaces such as joysticks, voice control, or typing commands into a computer terminal.
Stroke and road traffic injuries may severely affect lower-limb movements in humans, and consequently locomotion, which plays an important role in daily activities and quality of life. Robotic exoskeletons are an emerging alternative, which may be used on patients with motor deficits in the lower extremities to provide motor rehabilitation and gait assistance. However, the effectiveness of robotic exoskeletons may be reduced by the autonomous ability of the robot to complete the movement without the patient's involvement. Consequently, electroencephalography (EEG) signals have been used to design brain-computer interfaces (BCIs) that provide a communication pathway for patients to exert direct control over the exoskeleton through motor intention, and thus increase their participation during rehabilitation. In particular, activations related to motor planning may help to improve the closed loop between user and exoskeleton, enhancing cortical neuroplasticity. Motor planning begins before movement onset; thus, the training stage of BCIs may be affected by the intuitive labeling process, as it is not possible to use reference signals, such as a goniometer or footswitch, to select those time periods genuinely related to motor planning. Therefore, gait planning recognition is a challenge, due to the high uncertainty of the selected patterns. However, few BCIs based on unsupervised methods to recognize gait planning/stopping have been explored.
Early work in this area, i.e., algorithms to reconstruct movements from the motor cortex neurons that control movement, was carried out in the 1970s. The first intracortical BCI was built by implanting electrodes into monkeys. After conducting initial studies in rats during the 1990s, researchers developed brain-computer interfaces that decoded brain activity in monkeys and used the devices to reproduce monkey movements in robotic arms.
The tongue-robot interface developed in this study was based on the inductive tongue control system, which has previously been developed for control of computers [11, 12, 17], powered wheelchairs [18, 19], and prosthetic devices. A commercially available version of this system, iTongue (Fig. 1a, b), was modified to provide continuous character input to a computer, which executed software (Fig. 1d) to transform these characters into command signals for an assistive robotic arm, JACO [3, 4], with a three-finger gripper (Fig. 1c). Two versions of tongue-based robotic control schemes were implemented: direct actuator control, to demonstrate the complete volitional control of the robot provided by the tongue-robot interface, and Cartesian endpoint control, to demonstrate a more clinical and intuitive application of the system.
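The character-to-command translation layer can be sketched as a lookup table; the characters and command names below are illustrative assumptions, not the actual iTongue protocol or JACO API:

```python
# Hypothetical mapping from tongue-interface characters to arm commands,
# expressed as (command, direction) pairs for a Cartesian endpoint scheme.
COMMANDS = {
    "u": ("endpoint_z", +1),  # move end effector up
    "d": ("endpoint_z", -1),  # move end effector down
    "g": ("gripper", +1),     # close the three-finger gripper
    "o": ("gripper", -1),     # open the gripper
}

def decode_stream(chars):
    """Translate a stream of interface characters into commands,
    ignoring any character outside the command set."""
    return [COMMANDS[c] for c in chars if c in COMMANDS]
```

A direct actuator control scheme would use the same table shape but map characters to individual joint velocities instead of Cartesian endpoint moves.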
Three different types of EEG-based BMI are currently in use, namely the slow cortical potential (SCP)-BMI, the sensorimotor rhythm (SMR)-BMI, and the P300-BMI. Based on the detailed comparison of these three EEG-based BMI signatures reported by Birbaumer, it was concluded that in ALS patients with functioning vision and eye control, SMR-BMI and P300-BMI show the most promising results. SCP-BMIs need more extensive training than other BMIs but may have the best stability, and they are more independent of the sensory, motor, and cognitive functioning necessary for application in LIS and CLIS patients. The patients described earlier had high success rates with SCP-BMI training, but only after many sessions. It has been postulated that cognitive impairment and changes in EEG signatures in late-stage ALS may contribute to the lack of success using EEG-BMI technology when it is introduced after the participants have become “locked-in” [5,9]. Kuebler and Birbaumer have shown that patients in CLIS do not reach sufficient BMI control for communication with EEG parameters, and speculated that extinction of goal-directed thinking may prohibit operant learning of brain communication. The most successful application for communication has occurred in people at the beginning stages of the disease [11–13]. Hence there is a need to find an alternative neuroimaging technique to design a more effective BMI to help ALS patients in CLIS with communication.
Passive control signals: Among evoked signals the most used are Steady-State Evoked Potentials (SSEPs) and the P300. SSEPs are brain signals generated during the presentation of a periodic stimulus, either visual, e.g. a flickering image in Steady-State Visual Evoked Potentials (SSVEPs) (Müller-Putz et al., 2005b), auditory, e.g. a modulated sound in auditory SSEPs (Nijboer et al., 2008), or somatosensory, as with vibrations (Breitwieser et al., 2012). When a person is subjected to a high-frequency periodic stimulation, the power of the EEG signals in the brain area related to the sensory process involved tends to peak at the stimulus frequency. This phenomenon can be exploited in SSEP-based BCIs to generate a control signal. One of the most common applications is based on SSVEPs, and consists of a Graphical User Interface (GUI) with some buttons, each of them flickering at a certain frequency (typically between 6-30 Hz). When the subject focuses on one of the buttons, the corresponding frequency can be detected in EEG signals recorded over the occipital cortex and used to select the button (Zhu et al., 2010). The P300 evoked brain potential is an EEG signal occurring ~300 ms after the subject is exposed to an infrequent or surprising stimulus. When the person recognizes one rare relevant stimulus among a random sequence of stimuli, this triggers the P300 EEG signal (Polich, 2007). The P300 paradigm is the most used in BCI spelling applications: patients select a letter from a matrix of transiently illuminated rows and columns by focusing on it. The illumination of the desired letter elicits a P300 that triggers the letter’s selection. Evoked signals do not require any training from the subject. However, repetitive stimuli and the passive nature of the control can be tiring and uncomfortable for the subject.
Furthermore, P300- and SSVEP-based BCIs require intact vision and attention capabilities, which are often compromised in patients with severe neurological disorders at a late stage.
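The P300 described above is typically exposed by averaging stimulus-locked EEG epochs, which suppresses background activity not time-locked to the stimulus; a minimal sketch, where the sampling rate, window length, and names are our assumptions:

```python
def average_epochs(eeg, onsets, fs, window_s=0.6):
    """Average stimulus-locked epochs: components time-locked to the
    stimulus (such as the P300 at ~300 ms) survive averaging, while
    background EEG tends to cancel out."""
    n = int(fs * window_s)
    epochs = [eeg[o:o + n] for o in onsets if o + n <= len(eeg)]
    return [sum(e[i] for e in epochs) / len(epochs) for i in range(n)]
```

A P300 speller would compare the averaged epochs following each row/column flash and select the letter whose flashes show the largest deflection around 300 ms.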
The design of the ADC0804 has been optimized by incorporating the most desirable aspects of several A/D conversion techniques. The device offers high speed, high accuracy, minimal temperature dependence, excellent long-term accuracy and repeatability, and consumes minimal power. These features make it ideally suited for applications ranging from process and machine control to consumer and automotive use.
This concept was explored in the authors’ prior work [2-4] in which subjects (both able-bodied and SCI) used an electroencephalogram (EEG) based BCI to control the ambulation of an avatar within a virtual reality environment. In these studies, subjects utilized idling and walking kinesthetic motor imagery (KMI) to complete a goal-oriented task of walking the avatar along a linear path and making stops at 10 designated points. In addition, two out of five subjects with SCI achieved BCI control that approached that of a manually controlled joystick. While these results suggest that accurate BCI control of ambulation is possible after SCI, the translation of this technology from virtual reality to a physical prosthesis has not been achieved. In this study, the authors report on the first case of integrating an EEG-based BCI system with a robotic gait orthosis (RoGO), and its successful operation by both able-bodied and SCI subjects.