Brain-Controlled Robots
Japan Science and Technology Agency ICORP, Computational Brain Project,
and ATR Computational Neuroscience Laboratories
Hikaridai 2-2-2, Seika-cho, Soraku-gun, Kyoto, 619-0288, Japan
In January 2008, Duke University and the Japan Science and Technology Agency (JST) announced the successful brain-machine-interface control of a humanoid robot by a monkey brain across the Pacific Ocean. The activities of a few hundred neurons were recorded from the motor cortex of a monkey in Miguel Nicolelis’s lab at Duke University, and kinematic features of the monkey’s locomotion on a treadmill were decoded from neural firing rates in real time. The decoded information was sent to the humanoid robot CB-i at ATR Computational Neuroscience Laboratories in Kyoto, Japan, which was developed by the JST International Collaborative Research Project (ICORP) “Computational Brain Project”. CB-i’s locomotion-like movement was video-recorded and projected on a screen in front of the monkey. Although the bidirectional communication used an ordinary internet connection, its delay was only a fraction of a second, partly due to a video-streaming technique, and it encouraged the monkey’s voluntary locomotion and influenced its brain activity. In this commentary, the background and future directions of this brain-controlled robot experiment are introduced.
Recent computational studies on how the brain generates behaviors are progressing rapidly. In parallel, the development of humanoid robots that act like humans has become a focus of robotics research. The Japan Science and Technology Agency (JST) has succeeded in making a humanoid robot execute locomotion-like movement from cortical brain activity transmitted in real time through an internet interface between the U.S.A. and Japan. In our projects (ERATO web page, ICORP web page), we have developed information-processing models of the brain and verified these models on real robots in order to better understand how the human brain produces behaviors. In addition, we aim to develop humanoid robots that behave like humans to facilitate our daily life. This experiment is epoch-making both from a computational-neuroscience viewpoint and for the further development of brain-machine interfaces. In this commentary, I will explain the background and future directions of brain-controlled robots.
Ten years have passed since the Japanese “Century of the Brain” was promoted, and its most notable objective, the unique “Creating the Brain” approach, has led us to use a humanoid robot as a neuroscience tool (Kawato, 2008). Our aim is to understand the brain to the extent that we can make humanoid robots solve tasks typically solved by the human brain, using essentially the same principles. In my opinion, this “Understanding the Brain by Creating the Brain” approach is the only way to fully understand neural mechanisms in a rigorous sense. Even if we could create an artificial brain, we could not investigate its functions, such as vision or motor control, if we just let it float in incubation fluid in a jar. The brain must be connected to sensors and a motor apparatus so that it can interact with its environment. A humanoid robot controlled by an artificial brain, implemented as software based on computational models of brain functions, seems to be the most plausible approach for this purpose, given currently available technology. With the slogan “Understanding the Brain by Creating the Brain”, we started to use robots for brain research in the mid-1980s (Miyamoto et al., 1988), and about 10 different kinds of robots have been used by our group at Osaka University’s Department of Biophysical Engineering, ATR Laboratories, the ERATO Kawato Dynamic Brain Project (ERATO 1996-2001, ERATO web page), and the ICORP Kawato Computational Brain Project (ICORP 2004-2009, ICORP web page).
A computational theory that is optimal for one type of body may not be optimal for other types of bodies. Thus, if a humanoid robot is used to explore and examine neuroscience theories rather than for engineering, it should be as close as possible to a human body. Within the ERATO project, in collaboration with the SARCOS research company led by Professor Stephen C. Jacobsen of the University of Utah, Dr. Stefan Schaal as robot group leader and his colleagues developed a humanoid robot called DB (Dynamic Brain) (Fig. 1) with the aim of replicating a human body as closely as the robotics technology of 1996 allowed. DB possessed 30 degrees-of-freedom and human-like size and weight. From the mechanical point of view, DB behaves like a human body: it is mechanically compliant, unlike most electric-motor-driven and highly geared humanoid robots, because SARCOS’s hydraulic actuators are powerful enough to eliminate the need for reduction mechanisms at the joints. Within its head, DB is equipped with an artificial vestibular organ (gyro sensor), which measures head velocity, and four cameras with vertical and horizontal degrees-of-freedom. Two of the cameras have telescopic lenses corresponding to foveal vision, while the other two have wide-angle lenses corresponding to peripheral vision. SARCOS developed the hardware and low-level analog feedback loops, while the ERATO project developed high-level digital feedback loops and all of the sensory-motor coordination software.
The photographs in Fig. 1 introduce 14 of the more than 30 different tasks that can be performed by DB (Atkeson et al., 2000). Most of the algorithms used for these task demonstrations are based roughly on principles of information processing in the brain, and many of them contain some or all of the three learning elements: imitation learning (Miyamoto et al., 1996; Schaal, 1999; Ude and Atkeson, 2003; Ude et al., 2004; Nakanishi et al., 2004), reinforcement learning, and supervised learning. Imitation learning (“Learning by Watching”, “Learning by Mimicking” or “Teaching by Demonstration”) was involved in Okinawan folk dance “Katya-shi” (Riley et al., 2000) (A), three-ball juggling (Atkeson et al., 2000) (B), devil-sticking (C), air-hockey (Bentivegna et al., 2004a; Bentivegna et al., 2004b) (D), pole balancing (E), sticky-hands interaction with a human (Hale and Pollick, 2005) (L), tumbling a box (Pollard et al., 2002) (M), and a tennis swing (Ijspeert et al., 2002) (N). The air-hockey demonstration (Bentivegna et al., 2004a; Bentivegna et al., 2004b) (D) utilizes not only imitation learning but also a reinforcement-learning algorithm with reward (a puck enters the opponent’s goal) and penalty (a puck enters the robot’s goal) and skill learning (a kind of supervised learning). Demonstrations of pole-balancing (E) and visually guided arm reaching toward a target (F) utilized a supervised learning scheme (Schaal and Atkeson, 1998), which was motivated by our approach to cerebellar internal model learning.
Demonstrations of adaptation of the vestibulo-ocular reflex (Shibata and Schaal, 2001) (G), adaptation of smooth pursuit eye movement (H), and simultaneous realization of these two kinds of eye movements together with saccadic eye movements (I) were based on computational models of eye movements and their learning (Shibata et al., 2005). Demonstrations of drumming (J), paddling a ball (K), and a tennis swing (N) were based on central pattern generators. Central pattern generators (CPGs) are neural circuits that can spontaneously generate spatiotemporal movement patterns even when afferent inputs are absent and descending commands to the generators are temporally constant. CPG concepts were formed in the 1960s through neurobiological studies of invertebrate movements, and they are key to understanding most rhythmic movements and essential for the biological realization of biped locomotion, as described below.
The ICORP Computational Brain Project (2004-2009), an international collaboration with Prof. Chris Atkeson of Carnegie Mellon University, follows the ERATO Dynamic Brain Project under the slogans “Understanding the Brain by Creating the Brain” and “Humanoid Robots as a Tool for Neuroscience”. Again in collaboration with SARCOS, at the beginning of 2007 Dr. Gordon Cheng as group leader and his colleagues developed a new humanoid robot called CB-i (Computational Brain Interface), shown in Fig. 2 (Cheng et al., 2007b). CB-i is even closer to a human body than DB. To improve the mechanical compliance of the body, CB-i also uses hydraulic actuators rather than electric motors. The biggest improvement of CB-i over DB is its autonomy. DB was mounted at the pelvis because it needed to be powered by an external hydraulic pump through oil hoses arranged around the mount, and its computer system was also connected to it by wires; thus, DB could not function autonomously. In contrast, CB-i carries both onboard power supplies (electric and hydraulic) and a computing system on its back, and can therefore function fully autonomously. CB-i was designed for full-body autonomous interaction, for walking, and for simple manipulation. It is equipped with a total of 51 degrees-of-freedom (DOF): 2x7 DOF legs, 2x7 DOF arms, 2x2 DOF eyes, 3 DOF neck/head, 1 DOF mouth, 3 DOF torso, and 2x6 DOF hands. CB-i is designed to have configurations, range of motion, power, and strength similar to a human body, allowing it to better reproduce natural human-like movements, in particular for locomotion and object manipulation.
Within the ICORP Project, biologically inspired control algorithms for locomotion have been studied using three different humanoid robots (DB-chan (Nakanishi et al., 2004), the Fujitsu Automation HOAP-2 (Matsubara et al., 2006), and CB-i (Morimoto et al., 2006)) as well as the SONY small-size humanoid robot QRIO (Endo et al., 2005) as test beds. Successful locomotion algorithms utilize various aspects of biological control systems, such as neural networks for CPGs, phase resetting by various sensory feedbacks with adaptive gains, and hierarchical reinforcement-learning algorithms. In the demonstration of robust locomotion by DB-chan, three biologically important aspects of control algorithms were utilized: imitation learning, a nonlinear dynamical system as a central pattern generator, and its phase resetting by a foot-ground-contact signal (Nakanishi et al., 2004). First, a neural network model developed by Schaal et al. (2003) quickly learned locomotion trajectories demonstrated by humans or other robots. In order to synchronize this limit-cycle oscillator (central pattern generator) with the mechanical oscillator realized by the robot body and the environment, the neural oscillator is phase-reset at foot-ground contact. This guarantees stable synchronization of the neural and mechanical oscillators with respect to phase and frequency. The achieved locomotion is quite robust against surfaces with various frictions and slopes, and it is human-like in the sense that the center of gravity of the robot body is high and the knee is almost fully extended at foot contact.
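The phase-resetting mechanism described above can be illustrated with a minimal sketch (all parameters are hypothetical and not those of the actual DB-chan controller): a phase oscillator advances at its natural frequency and is reset to a reference phase whenever a foot-ground-contact event arrives, which entrains the neural oscillator to the mechanical one.

```python
import numpy as np

def simulate_cpg(omega=2.0 * np.pi,               # natural frequency (rad/s), hypothetical
                 dt=0.001, t_end=3.0,
                 contact_times=(0.9, 1.7, 2.6),   # foot-ground-contact events (s), hypothetical
                 reset_phase=0.0):
    """Phase-oscillator CPG whose phase is reset at each foot contact."""
    steps = int(round(t_end / dt))
    contact_steps = {int(round(t / dt)) for t in contact_times}
    phases = np.empty(steps)
    phi = 0.0
    for i in range(steps):
        if i in contact_steps:
            phi = reset_phase          # entrain the neural oscillator to the mechanical one
        phases[i] = phi
        phi = (phi + omega * dt) % (2.0 * np.pi)
    return phases

phases = simulate_cpg()
# The CPG output (e.g. a desired joint angle) would be some function of the phase,
# such as np.sin(phases), fed to the robot's low-level controller.
```

Without the resets, the neural and mechanical oscillators would drift apart whenever their frequencies differ; resetting at contact locks them in both phase and frequency.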
This is in sharp contrast to the engineering realization of locomotion by zero-moment-point control, a traditional control method for biped robots, which was proposed by Vukobratovic 35 years ago, was later successfully implemented in Ichiro Kato’s, Honda’s, and Sony’s humanoid robots, and usually induces a low center of gravity and bent knees. Of particular importance for the BMI experiment, Jun Morimoto succeeded in CB-i locomotion based on the CPG models (Morimoto et al., 2006).
A brain-machine interface (BMI) can be defined as artificial electrical and computational neural circuits that compensate for, reconstruct, cure, or even enhance brain functions, ranging from sensory and central domains to motor control. BMI is already more than a science-fiction fantasy in the domains of sensory reconstruction and central cure, as exemplified by cochlear implants and deep brain stimulation. In the reconstruction of motor-control capabilities for paralyzed patients as well, much progress has been made in the last 15 years (Nicolelis, 2001), and chronic implantation of BMIs in human patients began in 2004; thus, large-scale cures are expected to take off dramatically in the near future.
Any successful BMI relies on at least one, and in most cases all, of the following three essential elements: brain plasticity through user training, neural decoding by machine-learning algorithms, and neuroscience knowledge. A sensory or motor BMI is a new kind of tool for the brain. Unlike ordinary tools such as screwdrivers, chopsticks, bicycles, and automobiles, which are connected to the brain via sensory and motor organs, a BMI is connected directly to the brain via electrical and computer circuits. Still, a BMI reads out neural information from the brain and feeds information back to it, so a closed loop is formed between the brain and the BMI, just as with ordinary tools. If the delays associated with the BMI closed loop are below a fraction of a second, they are within the temporal window of spike-timing-dependent plasticity of neurons, hence learning to better utilize the BMI can take place in the brain. Thus, based on the synaptic plasticity of the brain, BMI users can learn how to better control the BMI. This process can be regarded as operant conditioning, and is reminiscent of “biofeedback”. Eberhard Fetz is the pioneer of this first element of BMI (Fetz, 1969). Most BMI systems based on electroencephalography, often called brain-computer interfaces, depend heavily on this first element, user training.
The second element is neural decoding by machine-learning techniques. In the example of the Duke-JST BMI-controlled robot (Fig. 3), the activities of a few hundred motor cortical neurons and the three-dimensional positions of the monkey’s legs were recorded simultaneously. Linear regression models were trained to predict the kinematic parameters from neural firing rates (Nicolelis, 2001), and these were used to decode leg position from brain activity in real time (Cheng et al., 2007a). Generally speaking, any machine-learning technique can be used to reconstruct physical variables, such as the position, velocity, or acceleration of a motor apparatus, or different kinds of movements, from brain activity, be it the firing of many neurons or non-invasive brain signals such as the electroencephalogram. Typically, the training and test data sets consist of pairs (X, Y) of neural activity X and some target variable Y. A machine-learning algorithm is used to determine an optimal function F that predicts Y from X, Y = F(X), using only the training data set. A machine-learning algorithm is considered successful if it generalizes well to the unseen test data set, that is, if F(X) predicts Y well not only for the training set but also for the test set. For example, the Honda Research Institute of Japan, in collaboration with ATR Computational Neuroscience Laboratories (ATR-CNS), demonstrated real-time control of a robot hand by decoding three motor primitives (rock-paper-scissors, as in the children’s game) from fMRI data of a subject’s motor cortex activity (press release 2006). This was based on the machine-learning algorithm called the support vector machine, previously utilized by Kamitani and Tong (2005, 2006) for decoding the attributes of visual stimuli from fMRI data.
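The train/test decoding pipeline sketched above can be written in a few lines. The following is only an illustration with synthetic data standing in for real recordings: firing rates X are generated from a hidden (invented) linear mapping to a 3-D leg position Y, a least-squares decoder is fit on the training pairs alone, and generalization is then measured on held-out test pairs. The dimensions and noise level are assumptions for the sketch, not values from the experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for recordings: firing rates X (samples x neurons)
# linearly related to a 3-D leg position Y, plus observation noise.
n_train, n_test, n_neurons = 500, 100, 50
W_true = rng.normal(size=(n_neurons, 3))                       # hidden linear mapping
X_train = rng.poisson(5.0, (n_train, n_neurons)).astype(float)
X_test = rng.poisson(5.0, (n_test, n_neurons)).astype(float)
Y_train = X_train @ W_true + 0.1 * rng.normal(size=(n_train, 3))
Y_test = X_test @ W_true + 0.1 * rng.normal(size=(n_test, 3))

# Fit the decoder F, here ordinary least squares (Y ~= X @ W_hat),
# using ONLY the training pairs (X, Y).
W_hat, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)

# Generalization: mean squared error on the unseen test set.
test_mse = float(np.mean((X_test @ W_hat - Y_test) ** 2))
```

In a real-time BMI the fitted `W_hat` would then be applied to each new window of firing rates to produce the decoded kinematics streamed to the robot.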
The third element is neuroscience knowledge. In the case of the Duke-JST BMI-controlled robot, neural recordings were made in the primary motor cortex, which has long been known in neuroscience as the motor-control center. Instantaneous neural firing rates (pulses per millisecond) were utilized as regressors to estimate the kinematic parameters, since firing rates are believed to be the most important information carriers in the brain. Likewise, fMRI signals in visual cortical areas were used by Kamitani and Tong (2005, 2006) for decoding visual attributes. This third element is further elaborated in the following sections.
From a computational point of view, our understanding of the neural mechanisms of sensory-motor coordination has not yet been fully utilized in current BMI design. For example, the population-coding hypothesis of movement directions by ensembles of motor cortical neurons (Georgopoulos et al., 1982) was advocated as the basis of some BMI designs (Taylor et al., 2002), but the hypothesis itself is still controversial (Todorov, 2000). In most motor BMIs, cursor positions or arm postures are determined directly from neural decoding, and no computational models of sensory-motor integration have been seriously incorporated (with a small number of exceptions, such as Koike et al., 2006). However, it is obvious that the simple approach of decoding the three-dimensional position of hands or legs and giving it to a simple position controller as a desired trajectory cannot deal with practical control problems such as object manipulation, locomotion, or posture control. All of these control problems involve instability of the mechanical dynamics, and thus require intelligent and autonomous control algorithms on the robot side, such as CPGs, internal models, and force control with passive dynamics. To be more specific, let us take locomotion as an example. If joint torques or joint angles during monkey locomotion are decoded from monkey brain activity and simply fed directly into a torque or joint-angle controller of CB-i, CB-i cannot achieve stable locomotion: its body is different from a monkey’s body, so the same dynamic or kinematic trajectories lead to falling down (Figure 4). CB-i should possess an autonomous and stable locomotion controller, such as CPGs, on its controller side. A simple trajectory-control approach can work only for the simplest control problems, such as visually guided arm reaching or cursor control, which have been the main tasks investigated in the BMI literature.
We definitely need some autonomous control capability on the robot side to deal with real-world sensory-motor integration problems. The Duke-JST BMI experiment is very important in highlighting this requirement for future BMI research.
Masa-aki Sato and his colleagues at ATR-CNS have been developing a “brain-network interface (BNI)” based on a hierarchical, variational Bayesian technique that combines information from fMRI and magnetoencephalography (Sato et al., 2004). They have succeeded in estimating brain activity with spatial resolution of a few millimeters and millisecond-level temporal resolution in various domains, such as visual perception, visual feature attention, and voluntary finger movements. In collaboration with the Shimadzu Corporation, we aim to develop within 10 years a portable, wireless, combined EEG/NIRS (electroencephalography/near-infrared spectroscopy) Bayesian estimator with millimeter and millisecond accuracy. “Brain-network interface” is a term we coined for this project, by analogy with brain-machine interface and brain-computer interface. A BNI non-invasively estimates brain activity by solving the inverse problem, and from the estimated neural activities it reconstructs the represented information. Accordingly, it is not a brain-machine interface, because it is non-invasive, and it is not a brain-computer interface, because it decodes information and thus does not require extensive user training. We have already succeeded, for example, in estimating the velocity of wrist movements from single-trial data without subject training (Toda et al., 2007).
The brain utilizes its hierarchical structure in solving the most difficult optimal-control problems in sensory-motor integration. This is because a simple, randomly connected, uniform neural network is not powerful enough to solve complicated optimal real-time control problems when the controlled objects have many degrees of freedom and strong nonlinearity and large time delays are associated with the feedback loops (Kawato and Samejima, 2007). Thus, different brain areas contribute by solving different sub-problems: the cerebellum for internal models (Kawato, 2008), the premotor cortex for trajectory planning, and the basal ganglia for reward prediction in reinforcement learning (Kawato and Samejima, 2007). To tackle the real-world sensory-motor control problems that any practical BMI-controlled robot faces, we definitely need to introduce such hierarchy and modularity into the robot controllers. Those controllers should be as close as possible to the real movement controllers in the brain. We need to decode different neural representations at different levels of the hierarchy of brain controllers, and then provide these decoded representations to the corresponding levels of the robot controller. BNI could be an ideal framework for simultaneously estimating hierarchically arranged neural representations from the brain in a non-invasive manner. In the example of locomotion, self-motion could be estimated from MST; the decision to move, stay, or turn left or right from the prefrontal cortex; the plan to start motion from the premotor cortex; joint angles and torques from the primary motor cortex; and predictions and estimates of the current states and motor commands from the cerebellum. This hierarchically arranged list of neural representations could be maximally beneficial if the locomotion controller of the robot has a similar hierarchical and modular structure.
Estimating cortical electrical currents at thousands of lattice points on the cortical surface from the electrical or magnetic signals measured by hundreds of electroencephalography or magnetoencephalography sensors is called the inverse problem, since it is the inverse of the forward process modeled by the electromagnetic equations of physics; it is mathematically ill-posed and is the most difficult part of BNI. The current realization of BNI utilizes somewhat ad-hoc sparseness and spatial-continuity assumptions in this inverse problem (Sato et al., 2004; Toda et al., 2007), but in the future, to attain a better BNI, we must incorporate dynamical models of brain activity into the solution of the inverse problem. We believe there should exist a mathematical duality between the models used in this observation process and the models used for control described above. Both kinds of models should possess hierarchy and modularity and should be mathematically dual to each other. This is an interesting future mathematical issue associated with BNI-controlled robots.
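As a deliberately simplified illustration of why this inverse problem is ill-posed, the sketch below estimates many source currents from far fewer sensors using a regularized minimum-norm inverse. The lead-field matrix and current distribution are random stand-ins; the actual BNI uses a hierarchical variational Bayesian method with sparseness and spatial-continuity priors (Sato et al., 2004), not this simple quadratic regularizer.

```python
import numpy as np

rng = np.random.default_rng(1)

n_sensors, n_sources = 100, 1000               # far fewer sensors than cortical lattice points
L = rng.normal(size=(n_sensors, n_sources))    # lead-field (forward) matrix, assumed known

# Sparse ground-truth cortical current distribution (arbitrary illustrative values).
J_true = np.zeros(n_sources)
J_true[[10, 400, 777]] = [2.0, -1.5, 1.0]
B = L @ J_true + 0.01 * rng.normal(size=n_sensors)   # noisy sensor measurements

# Regularized minimum-norm inverse: J_hat = L^T (L L^T + lam I)^{-1} B.
# The regularizer lam makes the underdetermined problem solvable but smears the
# estimate; this is exactly where better priors (sparseness, dynamics) improve a BNI.
lam = 1.0
J_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), B)

# The forward model is reproduced well even though J_hat need not equal J_true:
# infinitely many current patterns explain the same measurements.
rel_residual = float(np.linalg.norm(L @ J_hat - B) / np.linalg.norm(B))
```

That the residual is tiny while the estimated current pattern differs from the true one is the ill-posedness in miniature: the measurements alone cannot single out the true sources, so prior knowledge must.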
Humanoid robots that can be controlled at will by natural brain activity could be regarded as a second or third body for humans, opening up a wide range of applications. They could be utilized as nursing robots for disabled people, serving as a second body to help them at will. If exoskeletons or power suits replace humanoid robots, movement reconstruction could become possible for paralyzed people. We will avoid any military applications of this technology by all means. From a telecommunications viewpoint, BNI-controlled robots can be envisioned as future cellular phones that possess all the capabilities of the human body, such as movement execution and tactile sensing, whereas current cellular phones mimic only the visual and auditory senses and speech motor control (Figure 5). Face-to-face bodily communication between temporally and spatially distant locations may become possible in the future with BNI-controlled humanoid robots. This can no longer be called mere science-fiction fantasy, considering the Duke-JST BMI-controlled robot.
Atkeson, CG, Hale, J, Pollick, F, Riley, M, Kotosaka, S, Schaal, S, Shibata, T, Tevatia, G, Vijayakumar, S, Ude, A, and Kawato, M (2000). “Using humanoid robots to study human behavior.” IEEE Intelligent Systems 15, 46-56.
Bentivegna, DC, Atkeson, CG, and Cheng, G (2004a). “Learning tasks from observation and practice.” Robotics and Autonomous Systems 47, 163-169.
Bentivegna, DC, Atkeson, CG, Ude, A, and Cheng, G (2004b). “Learning to act from observation and practice.” International Journal of Humanoid Robotics 1, 585-611.
Cheng, G, Fitzsimmons, NA, Morimoto, J, Lebedev, MA, Kawato, M, and Nicolelis, MAL (2007a). “Bipedal locomotion with a humanoid robot controlled by cortical ensemble activity.” Society for Neuroscience Annual Meeting 2007, San Diego, CA, USA.
Cheng, G, Hyon, S, Morimoto, J, Ude, A, Hale, JG, Colvin, G, Scroggin, W, and Jacobsen, SC (2007b). “CB: A humanoid research platform for exploring neuroscience.” Journal of Advanced Robotics 21, 1097-1114.
Endo, G, Morimoto, J, Matsubara, T, Nakanishi, J, and Cheng, G (2005). “Learning CPG sensory feedback with policy gradient for biped locomotion for a full body humanoid.” The Twentieth National Conference on Artificial Intelligence (AAAI-05) Proceedings, 1237-1273, Pittsburgh, USA, July 9-13.
ERATO (1996-2001) web page, http://www.kawato.jst.go.jp/
Fetz, EE (1969). “Operant conditioning of cortical unit activity.” Science 163, 955-958.
Georgopoulos, AP, Kalaska, JF, Caminiti, R, and Massey, JT (1982). “On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex.” Journal of Neuroscience 2, 1527-1537.
Hale, JG, and Pollick, FE (2005). “'Sticky hands': Learning and generalisation for cooperative physical interactions with a humanoid robot.” IEEE Transactions on Systems, Man, and Cybernetics, Part C 35, 512-521.
ICORP (2004-2009) web page, http://www.cns.atr.jp/hrcn/ICORP/project.html
Ijspeert, AJ, Nakanishi J, and Schaal, S (2002). “Movement imitation with nonlinear dynamical systems in humanoid robots.” Proceedings of the IEEE International Conference on Robotics and Automation (ICRA2002), 1398-1403, Washington, USA, May 11-15.
Kamitani, Y, and Tong, F (2005). “Decoding the visual and subjective contents of the human brain.” Nature Neuroscience 8, 679-685.
Kamitani, Y. and Tong, F (2006). “Decoding seen and attended motion directions from activity in the human visual cortex.” Current Biology 16, 1096-1102.
Kawato, M (2008). “From “Understanding the brain by creating the brain” toward manipulative neuroscience.” Philosophical Transactions of the Royal Society B, in press.
Kawato, M, and Samejima, K (2007). “Efficient reinforcement learning: computational theories, neuroscience and robotics.” Current Opinion in Neurobiology 17, 205-212.
Koike, Y, Hirose, H, Sakurai, Y, and Iijima, T (2006). “Prediction of arm trajectory from a small number of neuron activities in the primary motor cortex.” Neuroscience Research 55, 146-153.
Matsubara, T, Morimoto, J, Nakanishi, J, Sato, M, and Doya, K (2006). “Learning CPG-based biped locomotion with a policy gradient method.” Robotics and Autonomous Systems 54, 911-920.
Miyamoto, H, Kawato, M, Setoyama, T, and Suzuki, R (1988). “Feedback-error-learning neural network for trajectory control of a robotic manipulator.” Neural Networks 1, 251-265.
Miyamoto, H, Schaal, S, Gandolfo, F, Gomi, H, Koike, Y, Osu, R, Nakano, E, Wada, Y, and Kawato, M (1996). “A Kendama learning robot based on dynamic optimization theory.” Neural Networks 9, 1281-1302.
Morimoto, J, Endo, G, Nakanishi, J, Hyon, S, Cheng, G, Bentivegna, DC, and Atkeson, CG (2006). “Modulation of simple sinusoidal patterns by a coupled oscillator model for biped walking.” IEEE International Conference on Robotics and Automation (ICRA2006) Proceedings, 1579-1584, Orlando, USA, May 15-19.
Nakanishi, J, Morimoto, J, Endo, G, Cheng, G, Schaal, S, and Kawato, M (2004). “Learning from demonstration and adaptation of biped locomotion.” Robotics and Autonomous Systems 47, 79-91.
Nicolelis, MA (2001). “Actions from thoughts.” Nature 409, 403-407.
Pollard, NS, Hodgins, JK, Riley, M J, and Atkeson, CG (2002). “Adapting human motion for the control of a humanoid robot.” Proceedings of the IEEE International Conference on Robotics and Automation (ICRA2002), 1390-1397, Washington, USA, May 11-15.
Press Release 2006 (http://www.atr.jp/html/topics/press_060526_e.html)
Riley, M, Ude A, and Atkeson, CG (2000). “Methods for motion generation and interaction with a humanoid robot: case studies of dancing and catching.” Proceeding of 2000 Workshop on Interactive Robotics and Entertainment (WIRE-2000), 35-42, Pittsburgh, USA, April 30 -May 1.
Sato, M, Yoshioka, T, Kajiwara, S, Toyama, K., Goda, N, Doya, K, and Kawato, M (2004). “Hierarchical Bayesian estimation for MEG inverse problem.” NeuroImage 23, 806-826.
Schaal, S (1999) “Is imitation learning the route to humanoid robots?” Trends in Cognitive Science 3, 233-242.
Schaal, S, and Atkeson, CG (1998). “Constructive incremental learning from only local information.” Neural Computation 10, 2047-2084.
Schaal, S, Peters, J, Nakanishi, J, and Ijspeert, A (2003). “Control, planning, learning and imitation with dynamic movement primitives.” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2003) Workshop on Bilateral Paradigms of Human and Humanoid, 39-58, Las Vegas, USA, October 27-31.
Shibata, T, and Schaal, S (2001). “Biomimetic gaze stabilization based on feedback-error learning with nonparametric regression networks.” Neural Networks 14, 201-216.
Shibata, T, Tabata, H, Schaal, S, and Kawato, M (2005). “A model of smooth pursuit based on learning of the target dynamics using only retinal signals.” Neural Networks 18, 213-225.
Taylor, DM, Tillery, SI, and Schwartz, AB (2002). “Direct cortical control of 3D neuroprosthetic devices.” Science 296, 1829-1832.
Toda, A, Imamizu, H, Sato, M, Wada, Y, and Kawato, M (2007). “Reconstruction of temporal movement from single-trial non-invasive brain activity: A hierarchical Bayesian method.” Proceedings of ICONIP 2007, WED-4.
Todorov, E (2000). “Direct cortical control of muscle activation in voluntary arm movements: a model” Nature Neuroscience 3, 391-398.
Ude, A, and Atkeson, CG (2003). “Online tracking and mimicking of human movements by a humanoid robot.” Journal of Advanced Robotics 17, 165-178.
Ude, A, Atkeson, CG, and Riley, M (2004). “Programming full-body movements for humanoid robots by observation.” Robotics and Autonomous Systems 47, 93-108.
Figure 1 Demonstrations of 14 different tasks by the ERATO humanoid robot DB
Figure 2 The new humanoid robot CB-i (Computational Brain Interface)
Figure 3 Experimental overview of brain controlled robot
Walking-related information was decoded from a monkey’s brain activity while it walked on a treadmill, and these data were relayed in real time from Duke University in the USA to the Advanced Telecommunications Research Institute (ATR) in Japan. Our humanoid robot in Japan was then controlled to execute locomotion-like movements in a manner similar to the monkey’s (with visual feedback of the robot presented to the monkey).
Figure 4 Brain controlled humanoid robot
Figure 5 BNI controlled humanoid robots as a future telecommunication interface
Let us assume that a husband and wife who enjoy playing tennis are living apart because the wife lives in Japan and the husband has been stationed in the U.S. for work. Nevertheless, the two want more than anything to be able to play tennis together. In order to actually play tennis (to experience the physical sensations), there would have to be an “agent” robot of the husband near the wife, and an “agent” robot of the wife near the husband, with the two playing tennis in Japan and the U.S. at the same time. The greatest obstacle to enabling these two people to play simultaneously is overcoming the time delays that accompany communications, and BNI together with quantitative brain models seems to be the sole solution to this most difficult obstacle.