Humans use their sensors, brains, and responders to do things. Better performance is typically associated with faster or more accurate behavior, and this leads to a fundamental property of human performance: humans position themselves on a speed–accuracy trade-off in a manner that is both comfortable and consistent with their goals. With human performance, we begin to see complexities and challenges in human interaction with technology that are absent in traditional sciences such as physics and chemistry. Humans bring diversity and variability, and these characteristics bring imprecision and uncertainty.
Some of the relevant literature in this field is briefly reviewed below to give an idea of the state of haptics research. The studies discussed cover simulation, mathematical modeling of existing devices, human interaction with the devices and its evaluation, the study of surface variability and its effects on haptic feedback, and the physical design and modeling of the devices:
We propose a web-application-based security system. When a user interacts with a computing system to enter a secret password, shoulder-surfing attacks are of great concern; this system overcomes that problem. A previous system proposed a methodology in which the user has to remember all the events performed, which limits its usability. Our novel approach enhances shoulder-surfing security through human interaction; indeed, shoulder surfing can break the well-known PIN entry method previously evaluated to be secure against it. To overcome the problem, we design a multi-color number panel. This interface provides the user a higher level of security, in that the shoulder surfer cannot follow the process the user undergoes. The color pattern in the number panel changes periodically, so that each user is presented with a different pattern.
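As a hedged sketch of how such a color-panel scheme might work (the color set, grouping, and function names below are illustrative assumptions, not the paper's actual design): each digit is randomly assigned one of a few colors, the user responds with the color of each PIN digit rather than the digit itself, and because several digits can share a color, a single observation leaves the shoulder surfer with an ambiguous candidate set.

```python
import random

COLORS = ["red", "green", "blue", "yellow"]

def make_panel(rng=None):
    """Randomly color the digits 0-9; regenerated periodically per user."""
    rng = rng or random.Random()
    return {d: rng.choice(COLORS) for d in range(10)}

def pin_to_colors(pin, panel):
    """What the user actually enters: the color of each PIN digit."""
    return [panel[int(d)] for d in pin]

def observer_candidates(colors, panel):
    """Digits consistent with what a shoulder surfer could observe."""
    return [[d for d, c in panel.items() if c == col] for col in colors]

panel = make_panel(random.Random(7))
entered = pin_to_colors("2580", panel)
# Each observed color maps back to a set of possible digits, not one.
candidates = observer_candidates(entered, panel)
```

Since ten digits share only four colors, at least one color must cover several digits, so repeated observations are needed before the attacker can narrow down the PIN.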
There is a rich literature on human activity recognition from video sequences. Recently, Aggarwal and Ryoo conducted a comprehensive review of human activity analysis , in which various spatial and temporal features are used. Ke et al.  proposed models for action recognition in which the input video sequence is treated as a 3D volume and local volumetric features are extracted. In , , local interest point descriptors are detected and used for human activity recognition. In , , the involved activity agents, such as persons, are first detected and their relations are then modeled to recognize the underlying human activities. Many models have also been developed to describe the identified features and agents for human activity recognition. For example, prior work has used the Hidden Markov Model (HMM) to describe and distinguish the dynamics underlying different human activities . In , Bayesian networks, together with a Markov chain Monte Carlo algorithm, are used to recognize bicycle-related activities. In , a hierarchical probabilistic latent model is developed to represent behavior patterns. In , probabilistic analysis, such as stochastic context-free grammars, is designed for modeling human activities in a hierarchical way. Most of these methods focus on recognizing a small set of human actions or activities, whereas in this paper our goal is to detect distant human interaction without being limited to a few specific actions.
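As an illustrative sketch (not any particular cited model), the way an HMM scores competing activity hypotheses can be shown with the standard forward algorithm: each trained activity model assigns a likelihood to an observed feature sequence, and the highest-scoring model wins. The states, observations, and probabilities below are hypothetical.

```python
def forward_likelihood(obs, start, trans, emit):
    """Standard HMM forward algorithm: P(observation sequence | model).

    start[s]    -- initial probability of state s
    trans[r][s] -- transition probability r -> s
    emit[s][o]  -- probability that state s emits observation o
    """
    # Initialize with the first observation.
    alpha = {s: start[s] * emit[s][obs[0]] for s in start}
    # Propagate the forward variable through the rest of the sequence.
    for o in obs[1:]:
        alpha = {s: emit[s][o] * sum(alpha[r] * trans[r][s] for r in alpha)
                 for s in start}
    return sum(alpha.values())

# Hypothetical two-state "walking" model scoring a short feature sequence.
start = {"slow": 0.5, "fast": 0.5}
trans = {"slow": {"slow": 0.8, "fast": 0.2},
         "fast": {"slow": 0.3, "fast": 0.7}}
emit = {"slow": {"low": 0.9, "high": 0.1},
        "fast": {"low": 0.2, "high": 0.8}}
score = forward_likelihood(["low", "low", "high"], start, trans, emit)
```

In a recognizer, one such model would be trained per activity class and the observed sequence assigned to the model with the highest likelihood.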
Persistent Influence via Leaders: In cases where more precise control over a swarm's operation is needed, or when a desired emergent behavior cannot be generated autonomously without significant human influence, continuous inputs may be given by a human operator. These continuous inputs have a persistent influence on selected leaders and indirectly on the swarm, and such situations require significantly more training and attention on the part of the operator. In its basic form, persistent influence is akin to teleoperation. It generally involves some notion of the state of the system being fed back to the operator, who can then modify the inputs accordingly. Such control usually requires a tight feedback loop with low latency and a representation of the system state that is interpretable by the operator. Proximal interactions are also conducive to continuous control, since the human can be sensed by the robots continuously and can direct them much like a leader robot; thus any movement of the operator is potentially an input to the swarm. In Section III-C we briefly discussed the difficulties of estimating and visualizing the state of a swarm. For controlling the motion of single- and multi-robot systems, visual and haptic feedback have been used predominantly, and these do not easily translate to swarms. The selection of swarm leaders, however, can enable such control. In this case, the control of a single leader or a group of leaders is similar to single-robot or multi-robot teleoperation. The key difference is the influence of the leaders' motion on the remaining swarm, which has to be taken into account.
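A minimal sketch of this idea, under the simplifying assumptions of single-integrator robots and a fully connected neighborhood (the update rule, gains, and names are illustrative, not from any cited system): the operator's persistent input drives the leader directly, and the rest of the swarm follows indirectly through a consensus term.

```python
def step(positions, leader_idx, operator_velocity, dt=0.1, gain=0.5):
    """One update of the swarm.

    The leader integrates the operator's velocity input directly;
    every other robot moves toward the swarm centroid, so the
    operator's influence propagates indirectly to the whole swarm.
    """
    n = len(positions)
    cx = sum(p[0] for p in positions) / n
    cy = sum(p[1] for p in positions) / n
    new = []
    for i, (x, y) in enumerate(positions):
        if i == leader_idx:
            new.append((x + operator_velocity[0] * dt,
                        y + operator_velocity[1] * dt))
        else:
            new.append((x + gain * (cx - x) * dt,
                        y + gain * (cy - y) * dt))
    return new

swarm = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
for _ in range(50):
    swarm = step(swarm, leader_idx=0, operator_velocity=(1.0, 0.0))
# After many steps the whole swarm has drifted in the commanded direction.
```

The tight-feedback-loop requirement shows up even here: the operator only sees the effect of an input after it has filtered through the followers' dynamics.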
In general, none of the participants wanted to wear the Neurotiq Social, but in specific social contexts, such as education, it might be useful. Both groups came up with the same use case: using the Neurotiq Social to monitor the attention span of a class of students, so that the teacher could easily monitor the class. However, due to the nature of the EEG bands, the teacher would not be able to distinguish between someone who is concentrating on the lecture and someone who is focused on a computer game. Another situation mentioned by both groups was a therapy setting, where a psychologist or psychiatrist could use the Neurotiq Social as a tool to monitor the client, and the client could use it to become more aware of his or her state of mind. The social context of meeting friends was also discussed by both groups and produced different opinions: some participants thought that the Neurotiq Social or an ISD could benefit social interaction by providing an extra source of information about the people they would be interacting with, while other participants thought that an ISD would distract them from the conversation or make them judge a person based on the ISD instead of on what the person was saying. Overall, the participants doubted the usefulness of the Neurotiq Social in its current state in a social context. However, with the changes suggested in the previous section, and in social contexts where there is a specific use, ISDs could benefit society and could be accepted more and more, until there is a society in which everybody wears an ISD and the ISD is an integrated part of social interaction.
The Association for Computing Machinery (ACM) defines human–computer interaction as "a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them". An important facet of HCI is the securing of user satisfaction (or simply End User Computing Satisfaction). "Because human–computer interaction studies a human and a machine in communication, it draws from supporting knowledge on both the machine and the human side. On the machine side, techniques in computer graphics, operating systems, programming languages, and development environments are relevant. On the human side, communication theory, graphic and industrial design disciplines, linguistics, social sciences, cognitive psychology, social psychology, and human factors such as computer user satisfaction are relevant. And, of course, engineering and design methods are relevant." Due to the multidisciplinary nature of HCI, people with different backgrounds contribute to its success. HCI is also sometimes termed human–machine interaction (HMI), man–machine interaction (MMI) or computer–human interaction (CHI).
Specialization domains exist within the ergonomics area that represent essential competencies in specific human attributes or characteristics of human interaction. These include physical, cognitive, and organizational ergonomics. Tool design plays an important role in development, helping to avoid physical work-related problems in the hand and forearm. People have learned that certain jobs can be done faster and more efficiently with tools. By improving the ergonomic properties of hand tools, the health of users and their job satisfaction might be positively affected.
Of course, enchantment is not the only user experience of interest to interaction designers. However, as discourse on enchantment features enriched experience, affective attachment, and engagement of the whole person, it should have an important place among the variety of experiences to be explored in interaction design. But making the case for enchantment was only one of our objectives in this paper. The other was to demonstrate the value of the particular approach to user experience that we pursued, which, as we indicated in the previous paragraph, involves a close cultural analysis of a variety of experiences. With some exceptions, e.g. [19, 20], such analyses are lacking in HCI, though they may become more prominent with the emergence of interest in literature and art-related approaches in HCI.
While considerably less common, there is growing support for movements that aim to encourage users of technologies to reconsider how they engage with technology that is often demanding, "invisible", time-centric and increasingly integrated into commonplace experiences. Interactive technologies designed to make people more "efficient" now encompass a broad spectrum of activities (such as assistive, social and entertainment roles) and are "…more or less continuously present as part of a designed environment" (p162, Hallnäs and Redström, 2001). In response, Slow Technology advocates a readdressing of traditional interaction paradigms to encourage technology designed "…in a way that encourages people to reflect and think about it" (p169, Hallnäs and Redström, 2001). As noted by Grosse-Hering et al. (2013), this does not specifically mean slowing interactions (in terms of time taken) but opening up interactions to promote 'slowness' in aspects of interaction that may be more meaningful for users, and in doing so "…can be used to create more 'Mindful' interactions that stimulate positive user involvement" (p3431). Thus, the goals of slow technology encourage "reflection and moments of mental rest rather than efficiency in performance" (p161, Hallnäs and Redström, 2001), and, as shall be highlighted throughout this thesis, are highly sympathetic to Mindfulness. In a similar vein to the Slow Technology movement are positions that consider the felt experience of the user as a central motivation of action and design. Such concepts hold that interactive experiences focusing on functionality (understood in traditional quantifiable metrics of efficiency) only activate limited capacities of the user's experience.
One such example is found within experience-centred design (Wright, Wallace and McCarthy, 2008; Wright and McCarthy, 2010; Hassenzahl, 2011), which proposes that while the functional attributes of interactive systems are of great importance, they should be supported by an understanding of the emotional values that people construct through interactions with other people and technology.
A first thing to note is that the various kinds of activity are not mutually exclusive, as they can be carried out together. For example, it is possible for someone to give instructions while conversing or navigate an environment while browsing. However, each has different properties and suggests different ways of being developed at the interface. The first one is based on the idea of letting the user issue instructions to the system when performing tasks. This can be done in various interaction styles: typing in commands, selecting options from menus in a windows environment or on a touch screen, speaking aloud commands, pressing buttons, or using a combination of function keys. The second one is based on the user conversing with the system as though talking to someone else. Users speak to the system or type in questions to which the system replies via text or speech output. The third type is based on allowing users to manipulate and navigate their way through an environment of virtual objects. It assumes that the virtual environment shares some of the properties of the physical world, allowing users to use their knowledge of how physical objects behave when interacting with virtual objects. The fourth kind is based on the system providing information that is structured in such a way as to allow users to find out or learn things, without having to formulate specific questions to the system.
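The instruction style in particular is straightforward to prototype. A minimal sketch of a typed-command dispatcher (the command names and behaviors are hypothetical, for illustration only):

```python
def make_dispatcher():
    """Map command words to handler functions, in the
    'issuing instructions' style of interaction."""
    commands = {}

    def register(name):
        def decorator(fn):
            commands[name] = fn
            return fn
        return decorator

    def dispatch(line):
        # First word selects the command; the rest are its arguments.
        parts = line.strip().split()
        if not parts or parts[0] not in commands:
            return "unknown command"
        return commands[parts[0]](*parts[1:])

    return register, dispatch

register, dispatch = make_dispatcher()

@register("open")
def open_item(name):
    return f"opening {name}"

@register("quit")
def quit_app():
    return "goodbye"
```

A menu or speech front end could feed the same `dispatch` function, which is one reason the interaction styles are not mutually exclusive.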
In this chapter the results of the project are discussed. The robot is shown in Figure 4.1. The toolkit has been developed in parallel with the robot. A Raspberry Pi, which runs ROS, has been integrated into the robot head, which greatly increased the portability of the robot. An Ubuntu computer is used as the development system and is also used for visualization and testing. The toolkit has also been applied to a new robot, to validate its effectiveness as a toolkit for human–robot interaction. In addition, a new user has performed a show with the system in order to generate user feedback. Several demonstrations have been given, including one for national television, large university shows with over 400 visitors, open lab days and other events. The toolkit has been connected to the dedicated social interaction software AsapRealizer, made by the HMI group. The visualizer has been used successfully by the HMI group.
The generation procedure of this grammar is presented in (Carlucci Aiello et al., 2013). The generated sentences were then pronounced by three speakers and recorded using a push-to-talk microphone. The acquisition process took place inside a small room, thus with low background noise. Moreover, the push-to-talk mechanism helped in precisely segmenting the audio stream, further reducing the noise. Due to its constrained nature, the language represented here is free of colloquial forms of interaction. The S4R Experiment (S4R) dataset was gathered in two distinct phases of the Speaky for Robots project experiment. In the first phase, the users were asked to give commands to a real robot operating in rooms set up as a real home, thus capturing all the interference generated by people talking or the sounds of other devices working nearby. The users were aware of the robot's capabilities in terms of the actions it could perform, and of the rooms and all the objects the robot was able to recognize. The same device used for the GG dataset was employed here for the interaction. In a second phase, the users could access the Web portal shown in Figure 1 to record other commands. General situations involved in an interaction were described in the portal by displaying text and images, and each user was asked to give a command relevant to the depicted situation. This time the internal microphone of the PC running the portal was used for recording. Since the users were only partially constrained (they had knowledge only of the capabilities of the robot and the lexicon handled by the Speech Recognition Engine), the language represented in this dataset is characterized by features closer to free spoken English than those of the GG dataset, including richer syntactic structures.
The Robocup (RC) dataset was collected during the Robocup@Home (Wisspeintner et al., 2009) competition held in 2013, in the context of the RoCKIn project. The same Web portal used for the S4R dataset was employed, and the recording took place directly in the competition venues or in a cafeteria, thus with varying levels of background noise. Again, the internal microphone of the PC running the web portal was employed. Expressions uttered here exhibit large flexibility in lexical choices
Prejudice is inevitable and indispensable; it is part of being human. Prejudice is the specific manifestation of the historical existence of humans, because history does not belong to them; rather, humans belong to historical tradition. Long before humans understood themselves through retrospection, they understood themselves in a natural way in the family and in society. This belonging to history means that prejudice, far more than humankind's own judgments, is the reality of their being. That is, humans enter the world as children, and upbringing and socialization simply lay layer after layer of prejudice on top of each other. Prejudices are indispensable, because mutual understanding rests on the prejudices humans also use in interaction: 1) in the situation where they have common prejudices with respect to a problem or a phenomenon. These prejudices create no problems, as humans can immediately understand another person or come to terms with them. 2) In the situation where we are confronted with something new and unfamiliar. When we wonder, we do so by virtue of our prejudices; the thing is new to us precisely because we have no prejudices about it. In this situation a process of understanding can thus be started, with the aim of understanding the new things. Prejudices are thus opening and confirming to understanding, i.e. the precondition of all knowledge and cognition is the preconceived and preliminary meaning of the question. Prejudices may be said to have a threefold character of time: 1) they have come to people from tradition and history (before); 2) they are constitutive of what people are now and are about to be (now); and 3) they are expectant, being open to future testing and change (future). The epistemologically fundamental question is thus not how people get rid of prejudices in order to find a safe foundation of cognition, but how to distinguish fruitful prejudices from unfruitful ones.
There has also been a great deal of research in recent years on what has been called 'signaling', i.e. a form of characteristically cooperative movement dynamics in joint action [12,19,39,40]. For example, one study found that participants tended to sacrifice efficiency of movement in order to make their movements more easily and quickly predictable for their partners. This type of signaling constitutes an investment in the joint action and demonstrates a willingness to coordinate with one's partner, as well as an expectation that one's partner will remain engaged. As a result, it could enhance the partner's sense of being committed until the goal is reached. If so, it is plausible that signaling could also have such effects in human–robot interactions. After all, a robot need not have the same bodily shape as a human in order to adapt its movements to make them more predictable, as these results indicate. Indeed, other work has shown that it is possible to identify bodily cues that correlate with trust in dyadic interactions, and to design robots to exhibit and to identify such cues.