Visual Odometry. Unlike wheeled robots, biped robots cannot easily implement classic odometry techniques based on sensing actuator encoders. In the RoboCup field environment, only two elements can be taken into account as beacons: the goals and the lines. In many cases the field lines are the only element present in the images, because the goals are out of view for long periods. To handle those situations, visual odometry can be combined with field-line information. Visual odometry uses the known field geometry to locate the robot in field space and compares the last estimated position with the new one. Visual odometry can also be used as a generalized solution, as in , where the perception system is a monocular camera, as in Nao's case. Campbell's article shows how camera coordinates can be easily mapped onto the ground plane when the intrinsic and extrinsic camera parameters are known. Visual odometry fails when the robot moves while the image is being acquired; several articles have proposed solutions to this problem, based on detecting features that are invariant to motion blur. These solutions cannot be applied to a real-time problem because they are computationally expensive. NaoVi visual odometry will use field cross-lines from two consecutive images to detect the motion of the robot. Error estimation has been done, but motion correction is not yet used by NaoMo.
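The mapping of camera coordinates onto the ground plane mentioned above can be sketched as a ray-plane intersection. The function below is an illustrative pinhole-camera sketch (the frame conventions and names are assumptions, not Campbell's actual formulation): a pixel is back-projected through the inverse intrinsics, rotated into the world frame, and scaled until the ray reaches the ground plane.

```python
import numpy as np

def pixel_to_ground(u, v, K, R, cam_height):
    """Project a pixel onto the ground plane (z = 0 in the world frame).

    K: 3x3 intrinsic matrix, R: camera-to-world rotation,
    cam_height: height of the camera origin above the ground.
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-projected ray, camera frame
    ray_world = R @ ray_cam                              # rotate into the world frame
    if ray_world[2] >= 0:                                # ray must point downward
        raise ValueError("ray does not intersect the ground plane")
    t = cam_height / -ray_world[2]                       # scale to reach z = 0
    return t * ray_world[0], t * ray_world[1]
```

For a camera looking straight down, the principal point maps to the ground point directly beneath the camera, and off-center pixels map to offsets proportional to the camera height.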
As for research on humanoid robots, foreign technological capability is clearly ahead of domestic work. In humanoid-robot soccer, the robot called Nao, from France, appears as the standard platform for the game. In the game, the robot's ability to make judgments is based on fast and precise localization of the target object, which is directly affected by image processing steps such as denoising, color segmentation, and feature extraction.
Nao is an autonomous humanoid robot developed by Aldebaran Robotics. It is programmable and is mainly used for educational purposes. Nao has 25 degrees of freedom, each of which can be individually controlled through an actuator and an electric motor. The user can program and control the robot over Ethernet or wireless networking. Nao features an inertial measurement unit with an accelerometer and a gyrometer, plus four ultrasonic sensors, which provide it with stability and positioning within space. The OS powering the robot's multimedia system drives two HD cameras (computer vision, including facial and shape recognition), four microphones (voice recognition and sound localization), and two speakers (multilingual text-to-speech synthesis). The software suite that comes with Nao includes Choregraphe, a graphical programming tool. The robot runs a specialized Linux-based operating system and is controlled through NAOqi, the programming framework used to control all of the robot's hardware components. Software such as Gostai URBI Studio, Cyberbotics Webots, and Microsoft Robotics Studio is furthermore compatible with Nao.
The interest in assistance and personal robots is constantly growing; therefore robots need new, sophisticated interaction abilities. Human-like interaction seems to be an appropriate way to increase the quality of human-robot communication. Psychologists point out that most human-human interaction is conducted nonverbally. For that reason, researchers try to enable humanoid robots to produce nonverbal communication signals. This paper presents a compact, lightweight, and low-cost arm and hand design that enables the generation of gestures as nonverbal interaction signals for humanoid robots. Human ranges of motion and size have been investigated and used as guiding principles for the construction of the arms and hands.
This paper describes the different types of humanoid robots. The interest in assistance and personal robots is constantly growing; therefore robots need new, sophisticated interaction abilities. Human-like interaction seems to be an appropriate way to increase the quality of human-robot communication, and psychologists point out that most human-human interaction is conducted nonverbally. Driven by the demand for efficient humanoid design, the mechanisms, the specifications, and many kinds of humanoid robots are also introduced.
connected systems. In each machine the events are serialised into standard JSON data packets and sent over TCP/IP. Using this method, it is also possible to connect modules implemented in other programming languages (Fig. 12). The architecture includes three standalone layers fully interconnected via a TCP/IP network (Fig. 11). Each layer has a number of modules that process either the sensory data captured by sensors/hardware or the high-level information that is distributed to the network as “events” in the standard JSON data packets. The layers and modules are fully interconnected and can send and receive “events” via the Broker over the network (Fig. 13). Thanks to the architecture's modularity and network structure, the system can run on multiple devices, which facilitates handling of the overall processing cycles for real-time applications, if required. One of the primary benefits of this architecture is its potential for scalability, allowing us to easily extend it by adding new sensors/hardware devices and new modules to the system. The architecture operates by collecting the sensory data, extracting high-level information, and streaming the corresponding “events” as JSON data packets to the network (Sense layer). The central layer receives the JSON packets, evaluates which reactive behaviour is the most appropriate for the current situation, taking into account the interaction status and the high-level information, and then streams an action “event” (the behaviour name) to the network, asking the robot to display that behaviour (Think layer). The Act layer receives the action event from the network, displays the behaviour with the operator's permission, and returns a feedback/monitor “event” to the network to confirm that the action has been completed (see [47] for further details).
Since the architecture communicates events in JSON packets, it is well suited to real-time HRI, where data communication must be extremely quick with minimal lag. Although the Sense-Think-Act architecture is fully interconnected, meaning that every module has the capacity to receive all the events distributed over the network, each layer can reduce its computational cost by subscribing only to the events it needs and dismissing all the others (Fig. 13, black arrows).
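The event mechanism described above can be illustrated with a short sketch: events are serialised as JSON packets, and a layer keeps only the event names it has subscribed to. The field names and event names here are hypothetical, not the system's actual schema.

```python
import json

def make_event(name, payload):
    """Serialise an event as a standard JSON data packet (illustrative schema)."""
    return json.dumps({"event": name, "data": payload})

class Layer:
    """A layer subscribes only to the event names it needs and
    dismisses everything else, reducing computational cost."""
    def __init__(self, subscriptions):
        self.subscriptions = set(subscriptions)
        self.received = []

    def on_packet(self, packet):
        event = json.loads(packet)
        if event["event"] in self.subscriptions:  # keep subscribed events...
            self.received.append(event)           # ...and dismiss the rest

# The Think layer only cares about high-level perception events.
think = Layer(subscriptions={"face_detected", "speech_heard"})
think.on_packet(make_event("face_detected", {"id": 3}))
think.on_packet(make_event("battery_level", {"pct": 80}))  # dismissed
```

In the real system the packets would travel over TCP/IP via the Broker; here they are handed over directly to keep the sketch self-contained.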
Flexible skills are one of the fundamental building blocks of truly autonomous robots. When solving control problems, single policies can be learned, but they may fail if the tasks at hand vary or if the agent has to face novel, unknown contexts. Furthermore, learning a single policy for each possible variation of a task or context is infeasible. To deal with these problems, we extend previous research and demonstrate a new sample-efficient method capable of constructing reusable, parameterized skills on a physical robot. Parameterized skills are flexible behaviors that can be used to tackle any instance of a family of related control tasks, given only a parameterized description of the task. Once such a skill is learned, it can be applied to novel variations of a task without having to learn context-specific policies from scratch. Parameterized skills are also useful for dealing with high-dimensional control problems, since they can be treated as adjustable primitive actions by higher-level planning processes, thus abstracting away the details of low-level control.
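A minimal sketch of the idea (not the paper's actual method): each solved task instance pairs a task parameter with the policy parameters that solved it, and a regression over these pairs yields a skill that predicts policy parameters for novel instances. All numeric values here are illustrative.

```python
import numpy as np

# Training set: task parameters tau (e.g. a target distance) and the
# policy parameters theta that solved each instance (hypothetical values).
tau   = np.array([[0.2], [0.4], [0.6], [0.8]])
theta = np.array([[1.1, 0.4], [2.1, 0.8], [3.0, 1.2], [4.1, 1.6]])

# Fit theta = W @ [tau, 1] by least squares: this mapping IS the
# parameterized skill -- one regressor instead of one policy per task.
X = np.hstack([tau, np.ones((len(tau), 1))])
W, *_ = np.linalg.lstsq(X, theta, rcond=None)

def skill(task_param):
    """Predict policy parameters for a novel task instance."""
    return np.array([task_param, 1.0]) @ W
```

A novel task description, e.g. `skill(0.5)`, then yields policy parameters without any context-specific learning from scratch; richer methods replace the linear regressor with nonlinear models.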
The paper is organized as follows. In the first section, we recall the previous work done in , with its main components, and highlight the particular improvements that allow the new demonstration. The following section introduces the SLAM (Simultaneous Localisation and Mapping) system that has been integrated into the demonstration, and the control laws that have been devised to use its output as a guideline for the robot's walking trajectory. We then present the integration of the SLAM system into the BCI user interface to allow the target-oriented navigation introduced in the previous section. Finally, we discuss the results of trial experiments performed with the system and future improvements to the demonstration.
The most common approach to gait planning for humanoid robots is the ZMP (Zero Moment Point) criterion proposed by Vukobratovic, together with the COP (Center of Pressure), FRI (Foot-Rotation Indicator), and GZMP (Generalized Zero Moment Point) criteria for judging the stability of gait movement. In gait-planning research, planning based on geometrical constraints is widely used, and it succeeds in actual walking when the planned walking speed is not very fast. This paper uses the MF robot as an experimental platform: starting from the basic goal of achieving an essential gait, it varies the arm swing of the humanoid robot and studies its effect on stability.
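For a point-mass (cart-table) approximation, the ZMP criterion mentioned above reduces to a simple formula: the ZMP is the CoM ground projection shifted by an acceleration-dependent term, and the gait counts as stable while the ZMP stays inside the support polygon. A one-dimensional sketch (an illustration of the criterion, not this paper's planner):

```python
def zmp_x(com_x, com_acc_x, com_height, g=9.81):
    """ZMP x-coordinate under the cart-table / point-mass model:
    p_x = x - (z_c / g) * x_ddot."""
    return com_x - (com_height / g) * com_acc_x

def stable(p, support_min, support_max):
    """Stable while the ZMP stays inside the support polygon
    (here reduced to a 1-D interval along the walking direction)."""
    return support_min <= p <= support_max
```

A stationary CoM puts the ZMP directly under it; a forward CoM acceleration shifts the ZMP backward, which is why aggressive accelerations can push the ZMP out of the foot's support area.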
The EKF-based visual SLAM is also implemented on the small-size humanoid robot. The procedures, including the image-feature detection and tracking method, feature initialization, the system startup procedure, and EKF-based state estimation, are integrated. In this experiment, the robot moves from the left to the right side of the field, as shown in Figure 7, and the estimated state and image features are depicted as a 3D map in Figure 8. In the plot, the dots indicate the landmarks obtained from the initialized image features, and the asterisks represent the state of the camera mounted on the robot. The small-size humanoid robot thus performs the self-localization and mapping procedures simultaneously.
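The EKF-based state estimation step can be sketched as the standard predict/update cycle below. Linear models are used for brevity; in the visual SLAM above, F and H would be Jacobians of the motion and camera-observation models. The matrix names are the usual textbook ones, not the authors' implementation.

```python
import numpy as np

def ekf_step(x, P, u, z, F, B, H, Q, Rm):
    """One EKF predict/update cycle.

    x, P: state estimate and covariance; u: control input;
    z: measurement (e.g. an image feature); Q, Rm: process and
    measurement noise covariances.
    """
    # Predict: propagate the state through the motion model.
    x_pred = F @ x + B @ u
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement.
    y = z - H @ x_pred                      # innovation
    S = H @ P_pred @ H.T + Rm               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Each cycle pulls the predicted state toward the measurement and shrinks the covariance, which is what lets the robot refine both its own pose and the landmark map over time.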
The eyes are very important for a human to recognize and detect objects, even in a dark room. Without eyes, accidents may happen, such as bumping into obstacles, mishaps, or becoming lost in an unfamiliar environment. Edelman pointed out that there are several theories from researchers and scientists explaining human perception of objects. Some promote the importance of multiple model views, while others postulate viewpoint invariants in the form of shape primitives (geons), as stated by Tarr et al. and Biederman. However, across all these theories the practical conclusion is that vision systems detecting objects in a human-like manner should use locally perceived features as the fundamental tool for matching scene content to models of known objects. In a nutshell, this study can be applied to a humanoid robot to give it the ability to recognize an object and its position, and thus detect any anomalies in front of it.
For imitation of full-body human motion, the main focus of existing methods is balance maintenance. Two stability criteria used in these studies are the Zero Moment Point (ZMP) and the Center of Mass (CoM). The general balance-maintenance pipeline involves designing a ZMP trajectory for the humanoid robot, computing a reference CoM trajectory from the ZMP trajectory, and constraining the robot to follow the reference CoM trajectory. Kim et al. proposed a method for imitating full-body dance movements: the robot's ZMP trajectory was generated from the support region and used to compute a reference CoM trajectory by recursive equations, and the robot's pelvis was forced to follow the reference CoM to maintain balance. Hu et al. used human walking data to allow walking replication by a humanoid robot: the robot's ZMP trajectory was designed by projecting the pelvis position according to the support area, the reference CoM trajectory was then obtained by a preview controller, and closed-loop inverse kinematics was applied to follow the human end-effector positions. Koenemann and Bennewitz performed whole-body motion imitation by finding valid foot positions and applying inverse kinematics to compute the lower-body joint angles; however, their method was validated on a one-leg standing motion rather than a walking motion, and it considered only static stability, not dynamic stability. Boutin et al. imitated human walking: the robot's ZMP trajectory was derived from the foot trajectories, the CoM trajectory was again generated by a preview controller, and an optimization-based inverse kinematics algorithm was employed to find joint angles satisfying the constraints on the swing foot.
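The closed-loop inverse kinematics mentioned above can be illustrated with a toy planar two-link chain: the joint angles are repeatedly corrected with a damped least-squares step until the end-effector error vanishes. This is a generic sketch with made-up link lengths, not any of the cited authors' implementations.

```python
import numpy as np

L1, L2 = 0.3, 0.3  # illustrative link lengths (m)

def fk(q):
    """Forward kinematics of a planar 2-link chain."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ik(target, q0, iters=100, damp=1e-3):
    """Closed-loop (iterative, damped least-squares) inverse kinematics:
    repeatedly move the joints to cancel the end-effector error."""
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        err = target - fk(q)
        J = jacobian(q)
        # Damped least-squares step: dq = J^T (J J^T + lambda I)^-1 err
        q = q + J.T @ np.linalg.solve(J @ J.T + damp * np.eye(2), err)
    return q
```

The damping term keeps the step well-behaved near singular configurations (e.g. a fully extended knee), which is why this closed-loop form is preferred over a plain Jacobian inverse when tracking human end-effector trajectories.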
We have presented an integrated solution that allows a humanoid robot to shoot arrows with a bow and learn on its own how to hit the center of the target. We have proposed a local regression algorithm called ARCHER for learning this particular skill and have compared it against the state-of-the-art PoWER algorithm. The simulation experiments show a significant improvement in the speed of convergence of the proposed learning algorithm, due to the use of a multi-dimensional reward and prior knowledge about the optimum reward that can be reached. We have also proposed a method for extracting the task-specific visual information from the image, relying on color segmentation and using a probabilistic framework to model the objects. The experiments conducted on the physical iCub robot confirm the feasibility of the proposed integrated solution.
To overcome these limitations, we have chosen to design and develop our own humanoid-robot prototype. KUBO is a servo-driven humanoid robot with 20 DOF (Figure 1). Its structure consists of complex aluminum links manufactured with ultra-high precision to reduce backlash and misalignment and to achieve precise motion. KUBO is taller than commercially available robots (60 cm) and includes an on-board video camera, ample on-board computing power, and a number of sensors that allow the development of control strategies for standing balance and walking.
In the visual and auditory RRC-Humanoid systems, experiential intelligence is obtained by performing behavioral programming on the processed raw data coming from the video-visual recording monitor and the auditory recording monitor. The raw data are processed in an Interface Circuit, input to the RRC, and then behaviorally programmed to reach human-like levels of AI. Behavioral programming reaches the level of experiential human-like intelligence when the RRC-Humanoid Robot demonstrates behaviorally that it has identified, recognized, visualized, and comprehended the signals coming from the visual or auditory sensors in the same manner as a human does. The processing of the video-visual raw data was described in previous publications. The following sections describe the processing of the auditory raw data in the Interface Circuit and the behavioral programming of the processed data within the RRC-Humanoid Robot. On completion of behavioral programming, the RRC-Humanoid Robot behaviorally demonstrates human-like levels of AI for the identification, recognition, visualization, or comprehension of the processed raw data. Note that behavioral programming of the auditory RRC-Humanoid Robot generates an operational definition of the “identification”, “recognition”, and “comprehension” levels of AI: a human need merely verbally ask the RRC-Robot to identify, recognize, comprehend, or visualize any color or 3D object in the FOV in order to obtain a verbal response that is indistinguishable from the response of another human.
Gait planning based on the linear inverted pendulum model (LIPM) on a structured road surface can be generated quickly because of the simple model and its definite physical meaning. However, over-simplification of the model, and the failure to satisfy the zero-velocity and zero-acceleration boundary conditions when the robot starts and stops walking, lead to an obvious difference between the model and the real robot. In this paper, a parameterized gait is planned, and the smoothness of each joint-angle and centroid trajectory is ensured using 3-D LIPM theory. A static walking method is used to satisfy the zero-velocity and zero-acceleration boundary conditions. In addition, a multi-link model is built to validate stability. Simulation experiments show that, despite some deviation from the theoretical solution, the actual zero-moment point (ZMP) remains within the support polygon and the robot walks steadily. Consequently, the rationality and validity of the LIPM model simplification are demonstrated.
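One standard way to satisfy the zero-velocity and zero-acceleration boundary conditions mentioned above is a quintic blend between gait key poses. The sketch below is an illustration of that technique, not the paper's exact parameterization: the quintic polynomial has zero velocity and zero acceleration at both endpoints by construction.

```python
def quintic(q0, q1, t, T):
    """Quintic blend from q0 to q1 over duration T, with zero velocity
    and zero acceleration at both boundaries (minimum-jerk profile)."""
    s = t / T
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5
    return q0 + (q1 - q0) * blend

def quintic_vel(q0, q1, t, T):
    """Velocity of the quintic blend; vanishes at t = 0 and t = T."""
    s = t / T
    return (q1 - q0) * (30 * s**2 - 60 * s**3 + 30 * s**4) / T
```

Planning each joint angle (or the centroid coordinate) through such blends guarantees smooth start and stop phases, avoiding the jerk at gait initiation that a raw LIPM solution would produce.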
To reduce the damage to a humanoid robot caused by falling and impact, we first establish a general dynamics model for the humanoid robot, analyze the characteristics of falling motion, and establish a simplified, multi-level inverted-pendulum model for the falling motion. Second, we adopt the optimal control method and convert the falling-motion control problem of the humanoid robot into a nonlinear control problem with inequality constraints. Finally, addressing the shortcomings of the classical minimum principle and the classical SQP algorithm in optimization quality, time, and convergence rate, we introduce an enhanced parameter-optimization technique to control the falling process optimally, and adopt configuration-state-set screening and EPSQP solution methods to improve the optimal control. Computer simulation shows that the improved parameter-optimization method (EPSQP) can minimize the angular momentum during the fall of the humanoid robot; each joint trajectory changes smoothly, the barycenter acceleration is mild, the optimization time is relatively short, and the optimization effect is good.
Tests were conducted to validate the effectiveness and robustness of the PID controller, which maintains the setpoint of the upper body. Furthermore, it was shown that the controller's performance is generally smoother when the upper-body pitch-angle estimate is taken from the combination of the rotary encoder at the hip joint and the IMU located close to the axis of rotation in the body. Since the system is statically stable, no actuator effort was required to maintain static stability (although, depending on the required pose, actuation may be required to counter inertial effects) during periods of ‘downtime’ when the robot is not actively engaged in a task. The robot can therefore be said to be inherently more energy-efficient than comparable designs that use statically unstable morphologies, where continuous actuation is required to avoid falling over. As the control requirements of statically stable robots are inherently lower than those of statically unstable ones, the control and computational requirements are also significantly reduced. It is clear from the test results that the controller succeeds in balancing the upper body during locomotion and during ‘bending over’ phases. Despite this success, however, the testing revealed several mechanical design improvements that would lead to a better-performing robot. The first would be to increase the gearing ratio of the knee, hip, and stabiliser joints: with the current design, the motors cannot supply enough torque to extend the knee or stabiliser because of the large torques acting at these joints, which made physical testing of the controller for maintaining the SSM impossible with the current prototype. The rotation of the hip was also limited to approximately 30 degrees, as the joint torques required for recovery at greater angles became prohibitive. It was also observed during the initial testing phase that the robot tended to oscillate about its setpoint.
As the hip joint utilises low-friction rotary bearings, these oscillations were attributed to insufficient damping at the joint. The oscillations were reduced significantly by the addition of an elastic rubber coupling mechanism which, when installed, formed an additional connection between the upper leg and the body.
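The upper-body balance loop discussed above can be illustrated with a textbook discrete PID controller acting on a toy first-order plant. The gains and plant model below are illustrative assumptions, not the prototype's identified dynamics.

```python
class PID:
    """Discrete PID controller for holding a pitch angle at a setpoint."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt               # accumulate steady-state error
        deriv = (err - self.prev_err) / self.dt      # damp oscillation
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy plant: upper-body pitch driven toward zero by the controller output.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
pitch = 0.3                            # initial pitch disturbance (rad)
for _ in range(2000):
    u = pid.update(0.0, pitch)         # setpoint: upright (0 rad)
    pitch += (u - 0.5 * pitch) * 0.01  # simple damped integrator dynamics
```

In a loop like this, too little derivative gain (or, mechanically, too little joint damping) leaves a lightly damped oscillation about the setpoint, which matches the behaviour observed before the rubber coupling was added.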
One of the most contested issues in humanoid-robot research is the control mechanism, modularity, and inter-platform operability of the operating system. To address this issue, the Robot Operating System (ROS) architecture, which is simple to navigate and manage in a prototyping context, has been employed as the software architecture to support both real-time hardware and simulation within this project. A BeagleBone Black, an open-source Linux-based board, is used to deploy a human-like robot with 18 degrees of freedom. ROS is implemented on the BeagleBone Black controller to increase the interactivity of the robot. An ultrasonic sensor is attached to the robot arm to detect the presence of an object. The main objective of this research is to present a method flexible enough to be potentially suitable for different scenarios of object perception and handling.
In this scene, it was confirmed that interaction and/or transaction occurred not only between Pepper-CPGE and the older adult with dementia, but also with other older adults nearby who were attracted to the conversation with Pepper-CPGE. These conversations drew a bigger response than the usual dialogue between patients, or between nurses and patients, which seemed to be caused by the strong interest in the dialogue between the older adults and the robot. In addition, older adult A, who was watching the dialogue between Pepper-CPGE and the other resident, recognized Pepper-CPGE as a conversation partner and, as a result, joined the dialogue. Unfortunately, Pepper-CPGE can only hold a one-on-one conversation with the target whose face it has recognized. If Pepper-CPGE had the ability to talk with multiple people, it could respond to the surrounding reactions, conversations could spread and deepen further, and the older adults' opportunities to speak would expand. It is important to develop dialogue patterns for humanoid robots that sympathize with older adults.