In this paper, we employ impedance control and develop force tracking control to achieve human-robot motion synchronization, subject to uncertain human limb dynamics. The mass-damping-stiffness model developed under the equilibrium point control hypothesis will be used to describe the human limb dynamics [20, 21]. This model suggests that the interaction force results from the error between the actual trajectory and the equilibrium position of the human limb. Therefore, the equilibrium position is regarded as the desired trajectory/motion intention of the human partner, which is planned in the central nervous system (CNS) of the human partner and unknown to the robot. Since this desired trajectory is generally time-varying and uncertain due to modeling error and external disturbance, it differs from the constant rest position studied in [14, 15, 16, 17]. Three typical cases will be discussed under the same framework: adaptive control will be developed to deal with point-to-point movement, while learning control and neural network (NN) control will generate periodic and arbitrary continuous trajectories, respectively. Stability and tracking performance of the closed-loop system will be shown to be guaranteed, even in the presence of active force feedback.
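The equilibrium-point idea above can be illustrated with a minimal sketch. Here the gains `K`, `D` and the helper names are illustrative assumptions, not the paper's identified parameters: the human force is modeled as stiffness and damping acting on the error to the CNS-planned equilibrium, and the robot can invert that model to estimate the unknown intended position.

```python
def human_limb_force(x, dx, x_eq, dx_eq=0.0, K=200.0, D=15.0):
    """Mass-damping-stiffness (equilibrium-point) model: the human applies a
    force proportional to the error between the limb's equilibrium position
    x_eq (planned in the CNS, unknown to the robot) and the actual position x.
    Gains K, D are illustrative values."""
    return K * (x_eq - x) + D * (dx_eq - dx)

def estimate_equilibrium(x, dx, f_meas, K=200.0, D=15.0):
    """Invert the model to recover the human's intended position from the
    measured interaction force -- the quantity the robot must track."""
    return x + (f_meas + D * dx) / K
```

In this simplified form the estimate is exact whenever the assumed gains match the true ones; the adaptive, learning, and NN controllers discussed in the paper exist precisely because they do not.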
Since pHRC can be seen as an exchange of force and motion signals over contact points, an energy- and port-based approach provides a very powerful tool for both modeling and control design. While several researchers have already used the concept of energy and power for robot collision detection and safe reaction control, a systematic energy-based modeling and control approach to pHRC tasks involving continuous contact and a varying number and location of contact points has been missing so far. The Port-Hamiltonian (PH) formalism, which is a domain-independent concept, has proven to be very successful for modeling complex systems. It provides a framework to describe a system in terms of energy variables and the interconnection of sub-systems by means of power ports. So far, it has been adopted to model the dynamical behavior of robots with rigid or flexible links, hybrid hopping robots, underactuated aerial vehicles, and soft finger manipulation, as well as for the control of bipedal walking robots.
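To make the PH formalism concrete, a minimal sketch (illustrative, not from the cited works) is a damped mass-spring system written as dx/dt = (J − R) ∂H/∂x + g u, with state x = (q, p): J is the skew-symmetric, power-preserving interconnection, R the positive semi-definite dissipation, and u enters through the power port g.

```python
import numpy as np

m, k, d = 1.0, 4.0, 0.3                     # illustrative mass, stiffness, damping

J = np.array([[0.0, 1.0], [-1.0, 0.0]])     # skew-symmetric interconnection
R = np.array([[0.0, 0.0], [0.0, d]])        # positive semi-definite dissipation
g = np.array([0.0, 1.0])                    # input (power-port) map

def H(x):
    q, p = x
    return p**2 / (2 * m) + k * q**2 / 2    # stored energy (Hamiltonian)

def grad_H(x):
    q, p = x
    return np.array([k * q, p / m])

def step(x, u, dt=1e-3):
    """One explicit-Euler step of dx/dt = (J - R) grad_H(x) + g u."""
    return x + dt * ((J - R) @ grad_H(x) + g * u)

# With zero input, the dissipative port structure implies passivity:
# the stored energy cannot grow.
x = np.array([1.0, 0.0])
E0 = H(x)
for _ in range(5000):
    x = step(x, u=0.0)
```

The same (J − R, g, H) decomposition is what makes interconnecting human, robot, and contact ports energy-consistent in the pHRC setting described above.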
pHRC has been interpreted in the literature from various perspectives. The major body of this research in human-robot shared manipulation focuses on control. In particular, these works usually assume the manipulated objects to be already stably grasped, solving for the necessary stiffness of the robot manipulator joints under external disturbances applied to the objects (Abi-Farraj et al., 2017; Rozo et al., 2016). The early work by Kosuge & Kazamura (1997) presented and compared several control methods, such as damping control and impedance control, for the motion generation problem of a robot supporting and moving an object in cooperation with a human under the effect of external disturbances such as friction. In another work (Kosuge et al., 1998), the authors proposed a controller for a multiple-robot system assisting humans in flexible object handling. The robot system played the two roles of supporting and deforming a flexible object by adjusting internal forces on the object, so that the human could easily handle the object by applying only intentional forces. Most control schemes in this line of research fall into some form of impedance control or the related admittance control (Hogan,
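The admittance scheme most of these controllers reduce to can be sketched as follows (parameter values and names are illustrative): the measured human force f_h drives a virtual mass-damper, M_d·a + D_d·v = f_h, and the resulting velocity is forwarded to the robot's inner motion controller.

```python
M_d, D_d = 2.0, 8.0   # virtual mass and damping (illustrative)

def admittance_step(v, f_h, dt=0.002):
    """Integrate the virtual dynamics one step; returns the commanded velocity."""
    a = (f_h - D_d * v) / M_d
    return v + dt * a

# Under a constant push, the commanded velocity settles at f_h / D_d,
# so a lighter virtual damping yields a more compliant robot.
v = 0.0
for _ in range(20000):
    v = admittance_step(v, f_h=4.0)
```

Tuning M_d and D_d (online or from demonstrations) is exactly the degree of freedom the works surveyed here exploit.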
considered the future of controlling humanoid robots. The second application was using the Kinect sensor in robot-assisted balance training (RABT), a rehabilitation program that was developed and built by Dr. Seok Hun Kim, PhD, and Dr. Kyle Reed, PhD, at the University of South Florida. Physical therapy and rehabilitation can be very slow and expensive, especially for stroke patients who suffer from walking disabilities and balance problems. The RABT trains patients to adapt their balance by applying an external pulling force to their waists. This method introduces real-time visual feedback to patients, which has shown significantly improved results. The purpose of using the Kinect sensor with RABT is to enhance the visual feedback and improve patients' ability to balance during the training.
As reviewed above, there exists an optimality principle behind impedance shaping based on statistical heuristics. A more formal treatment in optimal control was presented in (Medina et al., 2013). The authors characterized trajectory consistency with a quadratic cost function, whose weighting matrix was inversely proportional to the motion variance. Moreover, a risk-aware formulation was introduced, with the disturbance measured as the deviation from a recorded force profile. Such a formulation provides a unified way to deal with the conflict between position tracking and force yielding. Specifically, when the robot is risk-sensitive, it resists the external force disturbance, hence adopting an increased stiffness to eliminate tracking errors. On the other hand, a risk-seeking behavior tends to increase the importance of force regulation and yield to the external disturbance. (Rozo et al., 2013) used a task-parameterized GMM (TPGMM) to fit the demonstration trajectories and estimated the impedance parameters separately. In the task reproduction, the impedance for each trajectory segment depended on its similarity to each GMM component. TPGMM was also explored in (Calinon et al., 2014; Rozo et al., 2015, 2016). In these works, a quadratic cost function was parameterized by the regression mean and covariance, and an impedance controller was then resolved from a finite-horizon LQR. (Lee et al., 2015) applied a Bayesian estimation to the covariance matrix. Only a diagonal positive-definite matrix was allowed due to the constraint from the prior, so the impedance controllers of the joint DOFs were independent. Similar covariances were also learned, although in a different manner, in (Rückert et al., 2013). The proposed method featured a quadratic cost function and a suboptimal control system, which together were termed a planning movement primitive in the paper.
The quadratic cost function and a linear system were fit based on rollouts scored on an extrinsic signal. Finally, research has been done to facilitate the cost design through inverse optimal control approaches. Relevant work on robot motion synthesis (Kalakrishnan et al., 2013) used a sampling-based method to estimate the linear cost parameters. Impedance parameters were explicitly transferred in (Howard et al., 2013), where apprenticeship learning was used to estimate, again, a linearly parameterized cost function.
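The variance-weighted finite-horizon LQR idea running through these works can be sketched as follows. The double-integrator system and all numbers are illustrative assumptions, not taken from the cited papers: the position-error weight Q_t is made inversely proportional to the observed motion variance, so inconsistent (high-variance) trajectory segments receive low stiffness.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q_list, R, T):
    """Backward Riccati recursion; Q_list has T+1 entries (last = terminal cost).
    Returns the time-varying feedback gains K_0 ... K_{T-1}."""
    P = Q_list[-1]
    gains = []
    for t in reversed(range(T)):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q_list[t] + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]

dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])     # double integrator: position, velocity
B = np.array([[0.0], [dt]])
R = np.array([[1.0]])
T = 50
variance = np.linspace(0.01, 1.0, T + 1)  # motion variance along the trajectory
Q_list = [np.diag([1.0 / v, 0.0]) for v in variance]
gains = finite_horizon_lqr(A, B, Q_list, R, T)
```

Early segments (low variance, high Q) come out stiff, while late segments (high variance) come out compliant, mirroring the tracking-versus-yielding trade-off discussed above.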
During the last two decades, the control of robot-environment interaction (also known as compliant motion) has emerged as one of the most attractive and fruitful research areas in robotics. In all of these applications, the position and interaction force of the manipulator end-effector must be controlled simultaneously. The initial investigations in the field were motivated by practical needs for automating complex tasks mainly performed by humans, such as assembly, deburring, etc. The control of physical robot interaction is still a challenging research issue, recently addressing the emerging fields of human-robot interaction systems, human augmentation and enhancement, haptic rendering, rehabilitation robotics, etc. In the hybrid force/position control strategy, either a position or a force is controlled along each task space direction, through the use of proper selection matrices [2, 3]. Considering parametrically uncertain robot manipulators, several robust control strategies [4, 5, 6] provide asymptotic motion tracking and an ultimately bounded force error. In addition, Kiguchi  and Song
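The selection-matrix idea can be sketched directly (gains and the diagonal task frame are illustrative assumptions): position is controlled along the directions where S has a 1, force along the complementary directions selected by I − S.

```python
import numpy as np

S = np.diag([1.0, 1.0, 0.0])    # x, y: position-controlled; z: force-controlled

def hybrid_control(x, x_d, f, f_d, Kp=100.0, Kf=0.5):
    """Hybrid force/position law: position feedback projected by S plus
    force feedback projected by (I - S)."""
    I = np.eye(3)
    u_pos = Kp * (x_d - x)      # proportional position error term
    u_force = Kf * (f_d - f)    # proportional force error term
    return S @ u_pos + (I - S) @ u_force
```

Along z the command depends only on the force error, so a position offset normal to the surface (e.g. contact deformation) does not fight the force regulation, which is the point of the decomposition.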
The learning behaviour of the human brain has inspired further developments in control engineering. Due to the complex nature of this biological system, only a concise selection of its key aspects has been retained for the development of control concepts. To learn action-outcome relations and achieve goal-based recall of these actions, (Baldassarre, 2013) presented a system-level model. The authors incorporated processes related to intrinsic motivations, i.e. motivations not directly stimulated by extrinsic rewards, as well as to the role of repetition bias, i.e. the natural tendency to repeat recent actions. (Frank, 2014)'s work focussed on reinforcement learning, allowing an agent to learn a policy with the goal of maximizing a reward signal. The authors combined a low-level, reactive controller with a high-level curious agent. Artificial curiosity contributes to the learning process by guiding exploration to areas where the agent can learn efficiently. The work was validated by a real-time motion planning task on a humanoid robot. (Merrick, 2012) implemented a goal lifecycle and introspection for reinforcement learning. The aim was to make the system aware of when to learn what, as well as of which acquired skills to keep active, ignore, or erase. (Franchi, 2016) simulated the structure of and interaction among the amygdala, cortex and thalamus, i.e. three parts of the brain which play a key role in human cognitive development. In the field of developmental robotics, the authors designed a network for autonomous learning with the ability to generate new goals. The suggested method was validated on an active sensing activity.
Abstract- In this research, a differential drive wheeled mobile robot which can automatically track trajectories was designed using a fuzzy logic controller. The motion task of the mobile robot is motion through given points. When the user provides the desired point's position (x, y, θ) from a PC, the position error and orientation error are derived. The position error and orientation error are sent as inputs to the fuzzy controller, and the required velocity is then given to the motors depending on the position and orientation angle between the start point and the desired point. On the electrical side, a PIC16F887 microcontroller, a motor driver, a DC motor with magnetic encoder, and an LCD are mainly used. The purpose of the driver is to control the rotation of the motors, forward or backward; the VNH2SP30 driver is used in this system for this purpose. The magnetic encoder measures wheel revolutions, which can be translated into linear displacement of the wheel. The design procedure consists of the kinematic structure of the robot, hardware and software implementation, and microcontroller programming. A simulation model of the control system is developed with a MATLAB/SIMULINK block diagram. Simulation results of the proposed system are given to verify the effectiveness of the system.
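The two error signals fed to the fuzzy controller can be sketched as follows (function and variable names are illustrative): given the robot pose and a goal point, the position error is the Euclidean distance and the orientation error is the wrapped heading difference.

```python
import math

def pose_errors(x, y, theta, x_g, y_g):
    """Distance and heading errors from the robot pose (x, y, theta)
    to the goal point (x_g, y_g)."""
    dx, dy = x_g - x, y_g - y
    rho = math.hypot(dx, dy)                              # position error
    alpha = math.atan2(dy, dx) - theta                    # orientation error
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to [-pi, pi]
    return rho, alpha
```

The wrapping step matters in practice: without it, a goal just behind the robot produces an error near 2π instead of near ±π, and the controller spins the wrong way.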
ABSTRACT: This paper is mainly about the detection of humans behind walls during natural disasters such as earthquakes. The advent of new high-speed technology and growing computer capacity have provided realistic opportunities for new robot controls and the realization of new methods of control theory. This technical improvement, together with the need for high-performance robots, has created faster, more accurate and more intelligent robots using new robotic control devices, new drives and advanced control algorithms. This project proposes a robot that moves through a disaster-prone area to detect living beings behind walls in natural calamity environments, helping to identify live people and assist rescue operations.
Radical prostatectomy (RP) is the gold standard for treatment of localized prostate cancer (PCa). Recently, the emergence of minimally invasive techniques such as Laparoscopic Radical Prostatectomy (LRP) and Robot-Assisted Laparoscopic Radical Prostatectomy (RARP) has improved the outcomes of prostatectomy. However, it remains difficult for surgeons to make informed decisions regarding resection margins and nerve sparing, since the location of the tumour within the organ is not usually visible in a laparoscopic view. While MRI enables visualization of salient structures and cancer foci, its efficacy in LRP is reduced unless it is fused into a stereoscopic view such that homologous structures overlap. Registration of the MR image and the peri-operative ultrasound image, either via manual visual alignment or using a fully automated registration, can potentially be exploited to bring the pre-operative information into alignment with the patient coordinate system at the beginning of the procedure. While doing so, prostate motion needs to be compensated in real-time to synchronize the stereoscopic view with the pre-operative MRI during the prostatectomy procedure. In this thesis, two tracking methods are proposed to assess rigid prostate rotation and translation for prostatectomy. The first method presents a 2D-to-3D point-to-line registration algorithm to measure prostate rotation and translation with respect to an initial 3D TRUS image. The second method investigates a point-based stereoscopic tracking technique to compensate for rigid prostate motion so that the same motion can be applied to the pre-operative images.
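A point-based rigid tracker of the kind described needs, at its core, a least-squares estimate of rotation and translation between corresponding point sets. A generic SVD-based (Kabsch-style) sketch is shown below; this is an illustration of that core step under the rigid-motion assumption, not the thesis' exact algorithm.

```python
import numpy as np

def rigid_fit(P, Q):
    """Find R, t minimizing ||(R @ P + t) - Q|| for corresponding 3xN point sets."""
    cp = P.mean(axis=1, keepdims=True)               # centroids
    cq = Q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((Q - cq) @ (P - cp).T)  # cross-covariance SVD
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # guard against reflections
    R = U @ D @ Vt
    t = cq - R @ cp
    return R, t
```

With fiducials tracked in the stereoscopic view, the recovered (R, t) is exactly the rigid motion to be applied to the pre-operative images.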
current will generate lines of flux around the armature and affect the lines of flux in the air gap between the two coils, generating two magnetic fields. The interaction between these two magnetic fields (which attract and repel one another) within the DC motor results in a torque that tends to rotate the rotor (the rotor is the rotating member of the motor). As the rotor turns, the current in the windings is commutated to produce a continuous torque output, resulting in motion.
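The torque production described above reduces, for a permanent-magnet DC motor, to torque proportional to armature current, with rotation inducing a back-EMF that opposes the supply. The constants below are illustrative, not measured values.

```python
Kt = 0.05   # torque constant [N*m/A]
Ke = 0.05   # back-EMF constant [V*s/rad]
Ra = 1.2    # armature resistance [ohm]

def motor_torque(V, w):
    """Steady-state torque for supply voltage V and shaft speed w:
    back-EMF Ke*w reduces the armature current, and torque is Kt*i."""
    i = (V - Ke * w) / Ra
    return Kt * i
```

This reproduces the familiar linear speed-torque curve: maximum (stall) torque at zero speed, falling to zero at the no-load speed V/Ke.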
technology because of its less accurate results. This system overcomes issues such as lighting variation and hand orientation. It calculates real-time object data using X, Y and Z coordinates. The revolutionary Leap Motion controller is used to achieve synchronization between human and robot, because the Leap Motion has a small, compact size and gives better accuracy than other gesture control sensors. This system can be used in the research field and in military areas. Due to the popularity of robots, a simple interactive robotic controller can be used to control the robot. Hence, gesture control will be a perfect and revolutionary product in the future.
In human-robot collaboration, humans interact with robots or manipulators in a shared workspace without protective guards. Manipulators have been used in a large range of applications, mainly in the production industry, but their application is limited in scenarios which need a very large working space, such as aerospace manufacturing, ship building, and wind turbine manufacturing. Mobile manipulation is a solution to overcome these limitations and counts as a key technology not only for the production industry but also for professional service robotics such as intralogistics.
e.g., radar, lidars, radio, wifi) are susceptible to large attenuation and are thus unusable for underwater applications. Sonar is predominantly used in many underwater vehicles, particularly for localization, long-range sensing, and low-bandwidth communication, but does not provide the bandwidth and richness necessary for AUVs to track targets in real-time. Furthermore, active sensors can be intrusive to marine species and have detrimental effects on their well-being. With recent advances in deep machine learning, particularly in convolutional and recurrent neural networks, generative adversarial networks, and deep reinforcement learning, machine vision has shown some promising results in underwater applications. In particular, robot convoying, image enhancement, and gesture-based programming have been shown to work well in real-world settings. High accuracy with deep object detectors, and the availability of embedded, power-efficient hardware that can efficiently run deep models in real-time, have encouraged robotics researchers to delve into the 'tracking-by-detection' approach. However, while these methods are able to robustly detect objects of interest in a scene (divers in our case), they have not been able to distinguish between them unless there is a high degree of 'in-class' feature diversity. In other words, individual detected objects, while belonging to the same class, must exhibit differences in features to be distinguished robustly. In the case of divers in underwater scenes, such feature diversity is often absent. In addition, data scarcity is an issue that prevents deep methods from reliably identifying individual divers. Due to the very nature of deep learning methods, little control can be asserted over the feature selection process, which makes them a somewhat less desirable choice.
presented. An omnidirectional mobile robot is a type of holonomic robot. It can move simultaneously and independently in translation and rotation. The RoboCup small-size league, a robotic soccer competition, is chosen as the research platform in this paper. The first part of this research is to design and implement an embedded system that can communicate with a remote server over a wireless link and execute received commands. Second, a fuzzy-tuned proportional-integral (PI) path planner and a related low-level controller are proposed to attain optimal input for driving a linear discrete dynamic model of the omnidirectional mobile robot. To fit the planning requirements and avoid slippage, velocity and acceleration filters are also employed. In particular, low-level optimal controllers, such as a linear quadratic regulator (LQR) for multiple-input multiple-output acceleration and deceleration of velocity, are investigated, where an LQR controller runs on the robot with feedback from motor encoders or sensors. Simultaneously, a fuzzy adaptive PI is used as a high-level controller for position monitoring, where an appropriate vision system is used as a source of position feedback. A key contribution presented in this research is an improvement of the combined fuzzy-PI LQR controller over a traditional PI controller. Moreover, the efficiency of the proposed approach and the PI controller are also discussed. Simulation and experimental evaluations are conducted with and without external disturbance. An optimal result that decreases the variance between the target trajectory and the actual output is delivered by the onboard regulator controller. The modeling and experimental results confirm the claim that utilizing the new approach in trajectory-planning controllers results in more precise motion of four-wheeled omnidirectional mobile robots.
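The holonomic property above means the body twist (vx, vy, ω) can be commanded independently; a standard inverse-kinematics sketch for a four-wheeled omnidirectional base is shown below. The wheel mounting angles and geometry values are illustrative assumptions, not the paper's actual platform.

```python
import math

angles = [math.pi / 4 + i * math.pi / 2 for i in range(4)]  # wheel headings
L = 0.09    # distance from robot center to each wheel [m]
r = 0.027   # wheel radius [m]

def wheel_speeds(vx, vy, w):
    """Angular speed of each wheel realizing body velocity (vx, vy, w):
    project the twist onto each wheel's drive direction, divide by radius."""
    return [(-math.sin(a) * vx + math.cos(a) * vy + L * w) / r for a in angles]
```

A quick sanity check of the geometry: pure rotation drives all four wheels at the same speed L·ω/r, and a pure translation produces wheel speeds that cancel in sum for this symmetric layout.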
Laser scanners and light curtains monitor planes in defined zones; the challenge with these sensors arises when the operator or another worker obstructs the beams while carrying out their daily work. On the other hand, 3D cameras such as SafetyEYE have been studied to protect against the access of operators or objects by 3D scanning of danger zones. The advantage is that this three-dimensional sensor is capable of scanning different zones simultaneously. However, there are limitations to the use of this sensor: the zones defined for a workstation are not dynamic, and reconfiguring a zone requires redoing the whole process from the beginning. Another issue is the location of the sensor: when the workstation changes, the sensor must be relocated and the zones recalculated, and environmental elements such as light affect the precision of its camera. A further investigation considers a projector-based safety system, where the workspace zone is projected onto a surface and the presence of a human or object is detected with a depth camera. The challenge for this system is that in industrial applications there can be disturbances to the surface projection, such as tables, fences, components, etc. In addition, the resolution of the depth camera is low for such a large work area. Some of these studies are summarized in Table 1.
In our experiment, we employ a network emulator (NIST Net) instead of the network of Figure 1. A metal rod is fixed at the tip of the force sensor, which is attached to the surface of the flange of the robot arm. Four fixtures are set on top of the rack, which is used to hold the industrial robot arm and fix objects, as shown in Figure 3. We generate a constant delay (called the additional delay in this paper) for each packet transmitted between the master and slave terminals. We use two types of work (work 1 and work 2) to investigate the effect of the proposed method. In work 1, each subject is asked to push a ball by operating the robot arm while watching video, and in work 2, the subject pushes a ball by operating the robot arm and identifies what kind of ball is pushed according to its softness, without watching video. In both types of work, the subject pushes the ball with the metal rod fixed to the robot arm.
The genetic algorithm system was used to auto-tune a gain-scheduled controller. The gain-scheduled controller had 11 gain values for the angle error and 7 gain values for the distance error. The angle error gains are for the ranges π to 3π/4, 3π/4 to π/2, π/2 to π/4, π/4 to π/8, π/8 to π/16, π/16 to -π/16, -π/16 to -π/8, -π/8 to -π/4, -π/4 to -π/2, -π/2 to -3π/4, and -3π/4 to -π. The distance error gains are for the ranges ∞ to 50, 50 to 35, 35 to 25, 25 to 15, 15 to 10, 10 to 5, and 5 to 0. It is expected that the angle gains will be symmetric about zero, but the controller is free to evolve nonsymmetric gains if these produce an optimal controller over the training set of data. The controller provides the required velocity set point to the left and right wheels via the control formulae given in equation 2.
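The schedule lookup itself can be sketched as follows: the evolved gains are stored per bin, and the controller picks the bin whose range contains the current error. The bin edges follow the text; the lookup function and any gain values stored against these indices are illustrative.

```python
import math

ANGLE_EDGES = [math.pi, 3 * math.pi / 4, math.pi / 2, math.pi / 4,
               math.pi / 8, math.pi / 16, -math.pi / 16, -math.pi / 8,
               -math.pi / 4, -math.pi / 2, -3 * math.pi / 4, -math.pi]  # 11 bins
DIST_EDGES = [math.inf, 50, 35, 25, 15, 10, 5, 0]                       # 7 bins

def bin_index(value, edges):
    """Index of the (upper, lower] range containing value, scanning the
    descending edge list; values at or below the last edge use the last bin."""
    for i in range(len(edges) - 1):
        if edges[i] >= value > edges[i + 1]:
            return i
    return len(edges) - 2
```

The GA then only has to evolve an 11-element and a 7-element gain table indexed by `bin_index`, which keeps the search space small.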
vehicles alignment, as in car parking and airplane landing. Robots equipped with vision also find application in surgery, where an instrument has to be guided to an organ to operate on it, and in dangerous environments such as nuclear stations and space missions, where humans have to be replaced. Depending on the number and position of the cameras, and on how the information provided by these cameras is exploited by the system, several configurations of robot control based on artificial vision can be obtained. For instance, the cameras can be mounted on the robot end-effector in the so-called eye-in-hand configuration (also known as hand-eye), or can be positioned somewhere in the scene separately from the robot in the so-called eye-to-hand configuration (also known as static-eye). Then, each camera may be used to inspect a different region of the scene (monocular vision), or all cameras may be used to observe a common set of objects (multi-camera vision, known as stereo vision in the case of two cameras). Lastly, the images acquired by the cameras can be used by the control system to define the goal only at the beginning of the task (open-loop control) or exploited during the robot motion in order to progressively update the goal (closed-loop control).