MARVIN’s 2014 redevelopment replaced its original robotic development environment (RDE), Microsoft’s Robotics Developer Studio (MRDS), with Robot Operating System (ROS). ROS is an open-source RDE that runs on Linux. It uses a graph-based messaging network to communicate between subsystems, called nodes. Each node can subscribe and publish to message topics. This standardises and abstracts inter-node communication, making it transparent: nodes can be written in different styles and languages without having to redevelop the communication methods between them, which reduces development time. ROS handles the processing of each node, allowing multiple nodes to run simultaneously. This provides distributed processing for each of the robot’s subsystems on the same machine, which leads to more efficient and robust control. ROS also has a highly active development community, which provides support and a vast library of packages for performing standard robotic tasks, further reducing development time. For these reasons, ROS will continue to be used throughout this project.
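The publish/subscribe pattern at the heart of ROS can be sketched with a toy message bus in plain Python. This is an illustration of the concept only, not the actual rospy/rclpy API; the names here (`TopicBus`, `/scan`) are invented for the example.

```python
# Toy illustration of ROS-style topic-based messaging (NOT the real rospy API):
# nodes communicate only through named topics, never directly with each other.
class TopicBus:
    def __init__(self):
        self.subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        """Register a node's callback on a topic (like rospy.Subscriber)."""
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        """Deliver a message to every subscriber of the topic."""
        for callback in self.subscribers.get(topic, []):
            callback(message)


bus = TopicBus()
received = []

# A "planner" node subscribes to /scan; a "driver" node publishes to it.
# Neither node knows the other exists -- only the topic name is shared.
bus.subscribe("/scan", lambda msg: received.append(msg))
bus.publish("/scan", {"ranges": [1.2, 0.8]})
```

Because the only shared contract is the topic name and message shape, either side could be rewritten in another language without touching the other, which is the decoupling the paragraph above describes.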
Mobile robotics is a research area concerned with the control of autonomous and semi-autonomous vehicles. One of its most challenging topics is locomotion in complex, large-scale environments that change dynamically and contain both static and mobile obstacles. To operate in this type of environment, the robot must be able to acquire and use knowledge about the environment, estimate its position within it, recognize obstacles, and respond in real time to the situations that can occur. Moreover, all of these functionalities must operate together. Perceiving the environment, self-localizing within it, and moving across it are fundamental problems in the study of autonomous mobile robots.
Mobile robots endowed with cognitive skills are able to help and support humans in an indoor environment, providing increased availability, awareness and access compared to static systems. Thus, a robot can act not only as an assistant in the context of robot-assisted living, but can also offer social and entertaining interaction experiences between humans and robots. For that, the robot needs to be able to understand human behaviours, distinguishing human daily routine from potential risk situations in order to react accordingly. In this work, we focus our attention on the domain of human-centered robot applications, more precisely on monitoring tasks, where a robot can recognize daily activities and unusual behaviours and react according to the situation. In this context, a robot that can recognize human activities will be useful for assisted care, such as human-robot or child-robot interaction, and also for monitoring elderly and disabled people with regard to strange or unusual behaviours. We use a robot with an RGB-D sensor (Microsoft Kinect) on-board to detect and track the human skeleton in order to extract skeleton-based features.
This area of robotics deals with the control of robots over long distances using wireless connections. A tele-operator is a device which is controlled remotely by a human; a tele-robot is a device which can perform work autonomously. Such robots can enter places that humans cannot, such as earthquake-affected sites. The application described here is developed in the Java programming language and consists of predefined and instant actions. Merging robotics with a communication medium is the basis of successful robot operation, and the application is very easy for the user at the receiver side to handle. In terms of velocity, range and coverage, this Java-based system gives successful results, and it can help disabled and mentally challenged people. The working of the robot depends on the information that we pass to it. The mobile robot is helpful in unknown and poorly structured surroundings: the device communicates with the human operator through video, using the capabilities of the human operator to function well in such environments. Since the robot is far away from the web camera, care must be taken in cases where interference from various obstructions could occur. Force feedback decreases operator stress and possible errors, and also enhances the efficiency of the mobile robot.
As its name suggests, the main task of a service robot is to help humans by reducing their workload. The service robot falls under the assistive social robot classification, which is one of the branches of assistive robots, as shown in Figure 1.2. An assistive robot becomes an assistive social robot when it is equipped with a sensor or screen through which it can interact with humans. This interaction between robot and human through sensors and screens is referred to as Human-Robot Interaction.
In recent years, the effort of providing resources useful for the automatic understanding of robot commands has yielded the definition of corpora for Natural Language HRI, such as (Kuhlmann et al., 2004; Tellex et al., 2011; Dukes, 2013). However, these corpora are highly domain- or system-dependent. The basic idea of this work is to build a corpus containing information that is still oriented to a specific application domain, e.g. house service robotics, but at the same time inspired by sound linguistic theories, which are by definition decoupled from such a domain. The aim is to offer a level of abstraction that is totally independent of the platform, yet motivated by supported theories. We exploited three different situations to start gathering user utterances representing possible commands given to a robot in a house environment. Two groups of utterances were recorded during the Speaky for Robots project, while a third one was collected by interviewing members of the teams participating in the RoboCup 2013 competition. At the end of the gathering, three datasets had been collected, each representing a different working condition. Each dataset is mainly characterized by: the complexity of the language used by the user, in terms of the degree of variability of syntactic structures and lexicon; background noise conditions; and the device used for recording. Each utterance is coupled with its correct transcription, inserted directly by the user under an operator’s control. Utterances were pronounced by different users, so that multiple spoken versions of the same sentence are included. In the following, the gathering process is discussed for each of the three datasets.
For trial testing, we created a very simple algorithm: at the start the rover rotates on the spot and aligns itself along the direction of the global path, then it tries to follow the path, steering when either the direction of the curves changes too much or the rover moves too far away from the plan. Once the destination is reached, a signal is sent to the navigator nodes, which may provide another goal or halt the rover. The planner uses five states of operation, with appropriate commands sent to the drive motors for the duration of each state: Start, Forward, Backward, Pause, and Rotation. A flowchart of the process the planner uses when moving from the current movement state to the next is given in Figure 7.
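The transition logic described above might be sketched as follows. The state names come from the text; the threshold values, function signature, and error inputs are assumptions made for illustration, not the planner's actual implementation.

```python
# Hypothetical sketch of the five-state planner's transition rule.
# Thresholds are invented for the example; the real planner's values are not given.
HEADING_TOL = 0.2      # rad: steer when the path direction changes more than this
CROSS_TRACK_TOL = 0.5  # m: steer when the rover drifts this far off the plan

def next_state(state, heading_error, cross_track_error, at_goal):
    """Pick the next movement state from the current state and path errors."""
    if at_goal:
        return "Pause"      # destination reached: signal navigator, await new goal
    if state == "Start":
        return "Rotation"   # first align with the direction of the global path
    if abs(heading_error) > HEADING_TOL or abs(cross_track_error) > CROSS_TRACK_TOL:
        return "Rotation"   # curve changed too much, or rover too far off the plan
    return "Forward"        # on track: keep driving along the path


state = next_state("Start", 0.0, 0.0, False)  # -> "Rotation"
```

The Backward state is listed by the planner but its trigger is not described in the text, so it is left out of this sketch.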
Interest in the application of robotic automation to agriculture has grown quickly in recent years, as robotic technologies have reached a degree of maturity that allows autonomous experimentation in agricultural fields. Mobile robotic platforms have traditionally been very expensive in terms of production costs, able to perform only simple tasks such as moving and recording images, and have been focused on data gathering and repetitive work in applications such as space exploration or factory mass production. Agricultural tasks are more challenging: in a non-structured environment, gathering information from the environment, harvesting randomly distributed crops, and simply navigating continuously for a long time away from a power source require a higher level of autonomous intelligence. It is only recently that improvements in sensor and actuator technologies, and progress in data-processing speed and energy efficiency, have enabled robots to process enough information to reliably and affordably perform basic navigational tasks autonomously in an agricultural environment. The development and testing of navigation methods suitable for an autonomous agricultural data-gathering robot is the focus of this paper.
In social interactions between humans, two types of communication – verbal and non-verbal – are used to deliver information, emotion and intention [26, 27]. Usually, verbal communication is used to deliver the purpose and detailed information of the interaction. In contrast, non-verbal communication is used to support the verbal channel, often conveying the information in a more subtle manner; its messages are communicated via gestures and facial expressions [26, 27]. Several previous studies have indicated the powerful effects of non-verbal communication in human-human interactions [28-30]. For example, Tojo et al. found that non-verbal communication, including facial expressions, played a crucial role in interactions with others, and that suitable non-verbal feedback also played an important role in improving the performance of communication.
When human agents are performing a joint action together, they typically have a sense of being committed to remaining engaged until the shared goal is achieved or until all participants have agreed to interrupt the joint action. Thus, to borrow Margaret Gilbert’s famous example, if two people are taking a walk together, they are likely to take themselves to be obligated to walk at a pace the other can keep up with, to refrain from suddenly veering off into the woods without giving any explanation, and to wait for the other if she has to tie her shoe. Correspondingly, each of them will typically expect that the other will also fulfill these obligations, and will otherwise take herself to be entitled to reprimand her. In the canonical case, this is because each has given the other an assurance that they will remain committed (e.g. they have explicitly agreed to take a walk together). But even in the absence of explicit assurances, a more or less implicit sense of commitment can often be generated through bodily cues, subtle nuances of verbal communication or situational affordances, and may have very similar effects and be experienced in very similar ways. We therefore find it useful to distinguish sharply between commitment, on the one hand, and the sense of commitment on the other. We will first offer a few remarks about commitment, and then turn our attention to the sense of commitment.
A common technique in robotics for proximity detection is time-of-flight (TOF) measurement. This is often accomplished using the Senscomp 6500 ultrasonic ranging module and an appropriate transducer. Other options exist, but the Senscomp modules are very popular because of their ease of interfacing and relatively low cost. Sonar ranging is one of the most common forms of distance measurement used in mobile robotics and a variety of other applications. The principle is simple: a speaker (the transducer) emits a short burst of sound (the ping). The sound wave travels through the air and reflects off a target back to the transducer (the echo). By measuring the time of flight between the ping and the detection of the echo, one can calculate the distance between the target and the transducer.
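The ping-to-echo calculation is a one-liner: distance is the speed of sound multiplied by the time of flight, halved because the wave travels to the target and back. A minimal sketch, assuming air at roughly 20 °C:

```python
SPEED_OF_SOUND = 343.0  # m/s in dry air at ~20 degrees C (varies with temperature)

def range_from_echo(time_of_flight_s):
    """Distance to the target in metres: halve the round-trip path."""
    return SPEED_OF_SOUND * time_of_flight_s / 2.0

# An echo detected 10 ms after the ping puts the target ~1.7 m away.
d = range_from_echo(0.010)
```

In practice the speed of sound should be corrected for air temperature, since a fixed 343 m/s introduces a range error of roughly 0.2% per degree away from 20 °C.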
The motivation of this work can be expressed by an analogy: imagine that a person needs to take an object and give it to another person. The environment is dark, so they may not always see the object properly (in fact, vision systems are often unstable, and a robot sees things much as we see in darkness), and the objects are unknown (heavy or not, which is important for robot control). Yet in a normal object exchange, the two people are reactive and the manipulation is natural and robust: the giver predicts where the receiver’s hands will be, and releases the object when he detects that it is firmly grasped. Building on a motion planner developed by colleagues in our team, this thesis sets out to solve this task. Three major parts are proposed: force sensor fusion, continuous classification for event detection, and a trajectory controller. An example scenario is given in the figures.
This service robot was designed to interact with people with mild dementia. It was equipped with telepresence software, which would allow remote medical consultation, and with a speech recognition system which would allow the user to communicate with the robot. The robot navigated an indoor environment by first creating a map using learning techniques and then moving to an arbitrary location using this map. ‘CompanionAble’, developed in 2011 by Gross et al. [11], was designed to assist the elderly who suffer from mild cognitive impairment, in home environments. This project was geared towards developing home robots with telepresence and with the capability to detect hazardous events such as falls, using telepresence to allow the patient to communicate with caregivers. With a growing fraction of the elderly living alone in the US [12], such robots are placed to fill a void in the care afforded to this section of the population. ‘Hein II’, developed in 2011 by Tani et al. [13], was designed as a person follower for home oxygen therapy patients. Such patients need to tether themselves to an oxygen supply tank, which can be physically exhausting. A large number of people in Japan, where this robot was designed, are dependent on home oxygen therapy [14], and such a robotic follower would provide an improvement to their quality of life. Thus service robots, or ‘socially assistive robots’ as they have also been called [15], are gradually maturing into a useful technology.
stochastic processes such as the Dynamic Bayesian Network formalism. It has been shown that this technique can be used to learn models for behaviors with controllable parameters, such as reactive navigation (Infantes, Ingrand, and Ghallab 2006). Even though stochastic methods allow robots to take into account environment uncertainty during the learning process, we worked with deterministic techniques, because we are currently using deterministic planning and dealing with non-determinism during execution by monitoring and replanning. Classical versions of both regression and decision trees have been used for action modelling in autonomous robots in ROGUE (Haigh and Veloso 1999). ROGUE used decision trees to acquire rules that prioritize its activities according to the values of its sensors. It learned rules that are compiled into heuristics (in the form of control rules) used when planning, whereas in our work we change the domain model. ROGUE used feature-based ML, while we use relational ML instead. Notice that PELEA will be able to integrate both learning approaches.
Human-Robot Interaction and human activity recognition are not specific to service robots. In fact, they are being used across all robotics fields, even in industrial robots, with the aim of direct interaction and easier programming. If industrial robots were provided with a cognitive system, they would be able to understand different situations and to learn new tasks from users. Consequently, industrial robots would become more flexible in use, reducing time and cost for small and medium-sized enterprises (SMEs) with small lot sizes.
The same processor will be used for both design teams 07A and 07B; therefore, the processor chosen must be capable of performing the necessary tasks for each team independently. The full combat robot system of both teams requires 3 UART modules (to communicate between the boards and with the LiDAR), an I2C module (to communicate with the accelerometer/gyroscope), 8 PWM capture/compare modules (for the RC receiver and to drive the LiDAR motion), preferably USB capability (for troubleshooting), and at least 102,400 bytes of RAM/flash memory (to store the sensor data). This is because the LiDAR needs to store 3200 bytes per scan and we would like to store at least 4 samples in order to plan a path. The 3200 bytes are calculated from 800 bytes of distance data plus 800 bytes of angle data per scan, multiplied by two to hold additional data arrays of processed data, which will enable the robot to locate the enemy. This results in 3200 bytes, or 25,600 bits, per scan.
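The buffer arithmetic above can be checked directly; all figures come from the text itself.

```python
# Per-scan LiDAR buffer budget, recomputed from the figures in the text.
distance_bytes = 800                         # raw distance data per scan
angle_bytes = 800                            # raw angle data per scan
raw_scan = distance_bytes + angle_bytes      # 1600 bytes of raw data per scan
per_scan_total = raw_scan * 2                # x2 for the processed-data arrays
bits = per_scan_total * 8                    # same figure expressed in bits
samples_buffered = 4                         # scans kept for path planning
buffer_bytes = per_scan_total * samples_buffered
```

Note that four buffered scans come to 12,800 bytes, so the 102,400-byte memory requirement stated in the text must also cover code and other working data, not the scan buffer alone.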
The complexity behind modeling emotions on the face of the robot lies in the absence of joints equivalent to some FACS descriptors (e.g., cheek raiser, nose wrinkler). Therefore, we experimentally imposed some additional body gestures in order to reduce the negative effect of the missing joints and so enhance the expressed emotion. These additional gestures do not normally include any head gesture (i.e., neck rotation) or arm-hand gestures, which are generated by the metaphoric gestures generator explained earlier (Section II-A). However, the combination of neck rotation (i.e., turning the head aside) and raising front-bent arms proved helpful for better expressing the “disgust” emotion (consequently, they were considered additional supportive gestures for this emotion). This helps give the interacting human, even to some extent, the impression that the robot did not like the context of interaction and considered it disgusting. Similarly, the “fear”, “anger”, and “sadness” emotions have been attributed an additional mouth-guard hand gesture, downward head-shaking, and a bowed head with a covering-eyes hand gesture, respectively, in order to help emphasize their meanings, as indicated in Figure 3. On the other hand, the main role of the additional supportive left-smile and right-smile face joints of the “fear” emotion is to slightly depress the corners of the open mouth to better reflect the emotion, even though they have no equivalent FACS descriptor representing the “fear” emotion, as indicated in Table II.
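The emotion-to-supportive-gesture assignments described above can be summarized as a lookup table. The gesture identifiers below are invented for illustration; they are not the robot's actual joint or gesture API names.

```python
# Supportive body gestures per emotion, as described in the text.
# Identifier strings are illustrative, not the robot's real gesture names.
SUPPORTIVE_GESTURES = {
    "disgust": ["neck_rotation_aside", "raised_front_bent_arms"],
    "fear":    ["mouth_guard_hand"],            # plus left/right smile face joints
    "anger":   ["downward_head_shaking"],
    "sadness": ["bowing_head", "covering_eyes_hand"],
}

def supportive_gestures(emotion):
    """Return the extra gestures compensating for missing FACS joints, if any."""
    return SUPPORTIVE_GESTURES.get(emotion, [])
```

A table like this keeps the compensation policy declarative, so adding a new emotion or gesture does not require touching the gesture-generation code.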
The learning activity is most appropriate for seven-year-old children because it addresses topics covered in the UK National Curriculum for Lower Key Stage Two (ages seven to nine). We anticipate that the supportive social interaction and learning activity provided will scaffold children’s understanding of the relationship between exercise and energy. Given that the interaction is part of a public exhibit and open to all ages, a wide age range is anticipated in our data collection. Children’s age will be included as a covariate in the analysis to account for children’s prior education on the subject of exercise and energy.
Persistent Influence via Leaders: In cases where more precise control over a swarm’s operation is needed, or when a desired emergent behavior cannot be generated autonomously without significant human influence, continuous inputs may be given by a human operator. These continuous inputs have a persistent influence on selected leaders and, indirectly, on the swarm; such situations require significantly more training and attention on the part of the operator. In its basic form, persistent influence is akin to teleoperation: it generally involves some notion of the state of the system being fed back to the operator, who can then modify the inputs accordingly. Such control usually requires a tight feedback loop with low latency and a representation of the system state that is interpretable for the operator. Proximal interactions are also conducive to continuous control, since the robots can sense the human continuously and be directed by her much like by a leader robot; thus any movement of the operator is potentially an input to the swarm. In Section III-C we briefly discussed the difficulties of estimating and visualizing the state of a swarm. For controlling the motion of single- and multi-robot systems, visual and haptic feedback have been used predominantly, but these do not easily translate to swarms. The selection of swarm leaders, however, can enable such control. In this case, the control of a single leader or a group of leaders is similar to single-robot or multi-robot teleoperation; the key difference is the influence of the leaders’ motion on the remaining swarm, which has to be taken into account.