The goal of this thesis is to develop a voice-controlled human-robot interface (HRI) which allows a person to control and communicate with a robot. Dragon NaturallySpeaking, a commercially available automatic speech recognition engine, was chosen for the development of the proposed HRI. To achieve this goal, the Dragon software is used to create custom commands (or macros) which must satisfy three tasks: (a) directly controlling the robot with voice, (b) writing a robot program with voice, and (c) developing an HRI which allows the human and robot to communicate with each other using speech. The key is to generate keystrokes upon recognizing speech; three types of macro were examined: step-by-step, macro recorder, and advanced scripting. Experiments were conducted in three phases to test the functionality of the developed macros in accomplishing all three tasks. The results showed that the advanced scripting macro is the only type of macro that works. It is also the most suitable for the task because it is quick and easy to create and can be used to develop flexible and natural voice commands. Since the output of a macro is a series of keystrokes, which forms the syntax of the robot program, macros developed with the Dragon software can be used to communicate with virtually any robot by adjusting the output keystrokes.
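The core mechanism described above is a table from recognized phrases to output keystrokes forming robot-program syntax. A minimal sketch of that mapping is shown below; the phrases, the robot-program syntax, and the function name are invented for illustration and are not taken from the thesis or from Dragon's scripting API.

```cpp
#include <map>
#include <string>

// Hypothetical phrase-to-keystroke table: each recognized voice
// command emits the keystrokes that form one line of a (made-up)
// robot program. Retargeting the interface to another robot only
// requires changing the right-hand sides of this table.
std::string keystrokes_for(const std::string& phrase) {
    static const std::map<std::string, std::string> macros = {
        {"move forward", "MOVE 100,0\n"},
        {"turn left",    "TURN -90\n"},
        {"open gripper", "GRIP OPEN\n"},
    };
    auto it = macros.find(phrase);
    return it == macros.end() ? "" : it->second;  // empty if unknown
}
```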
created for people with no profound knowledge of electronics. It includes a code editor with features such as syntax highlighting, brace matching, cutting-pasting and searching-replacing text, and automatic indenting, and provides a simple one-click mechanism to compile and upload programs to an Arduino board. It also contains a message area, a text console, a toolbar with buttons for common functions, and a series of menus. A program written with the IDE for Arduino is called a "sketch". Sketches are saved on the development computer as files with the file extension .ino; Arduino Software (IDE) pre-1.0 saved sketches with the extension .pde. The Arduino IDE supports the languages C and C++ using special rules to organize code. The Arduino IDE supplies a software library from the Wiring project, which provides many common input and output procedures. User-written code only requires two functions, for starting the sketch and for the main program loop, which are compiled and linked with a program stub main() into an executable cyclic executive program with the GNU toolchain, also included with the IDE distribution. The Arduino IDE employs the program avrdude to convert the executable code into a text file in hexadecimal coding that is loaded into the Arduino board by a loader program in the board's firmware.
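The two required functions and the linked-in stub can be sketched as follows. setup() and loop() mirror the names the Arduino core expects; the iteration counter and run_sketch() are our own host-side additions so the cyclic-executive flow is observable (the real stub calls hardware init first and loops forever).

```cpp
// Minimal illustration of an Arduino sketch's structure.
static int iterations = 0;

void setup() {
    iterations = 0;  // one-time initialisation, runs once
}

void loop() {
    ++iterations;    // body of the cyclic executive, runs repeatedly
}

// Roughly what the IDE's program stub main() does, simplified and
// bounded so it terminates on a host build.
int run_sketch(int cycles) {
    setup();
    for (int i = 0; i < cycles; ++i) loop();
    return iterations;
}
```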
Face detection is a technology used in a variety of applications which can identify or verify a person from a digital image. It detects only the facial features and ignores the background, such as buildings, trees, and other objects. Face recognition algorithms mainly focus on the detection of frontal human faces. Nowadays, face recognition technology is seeing increased usage across the world as a safer and more reliable security technology.
When a command for the robot is recognized, the voice module sends a command message to the robot's microcontroller. The microcontroller analyzes the message and takes appropriate action. The objective is to design a walking robot controlled by servo motors. When commands are given at the transmitter, the EasyVR module takes the voice commands and converts them into digital signals. These digital signals are then transmitted via a ZigBee module to the robot. On the receiver side, the other ZigBee module receives the command from the transmitter and performs the respective operation. The hardware development board used here is the ATmega2560 development board, which provides the 15 PWM channels needed to drive the servo motors. In addition, a camera mounted on the head of the robot gives live transmission and recording of the area. The speech-recognition circuit functions independently of the robot's main intelligence (the central processing unit, or CPU). This is an advantage because word recognition does not consume any of the robot's main CPU processing power; the CPU must merely poll the speech circuit's recognition lines occasionally to check whether a command has been issued to the robot. The software is written in the Arduino IDE using Embedded C.
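Hobby servos like those driven by the ATmega2560's PWM channels are positioned by pulse width. A minimal sketch of the angle-to-pulse conversion is below; the 1000-2000 microsecond endpoints are a common convention and an assumption here, not figures from the paper (exact limits vary by servo).

```cpp
// Convert a servo angle (0-180 degrees) to a PWM pulse width in
// microseconds, clamping out-of-range inputs. Same arithmetic as
// Arduino's map(angle, 0, 180, 1000, 2000).
long servo_pulse_us(long angle_deg) {
    if (angle_deg < 0)   angle_deg = 0;
    if (angle_deg > 180) angle_deg = 180;
    return 1000 + (angle_deg * (2000 - 1000)) / 180;
}
```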
"The robot is going in reverse". Thus, robot will talk with each guidance the client will give. The sound will be pre-recorded human voices and put away to a miniaturized scale SD card associated with the microcontroller unit utilizing a SDcard module.
2. HARDWARE AND SOFTWARE
Embedded systems are designed to perform some specific task, rather than to be a general-purpose computer for multiple tasks. Some also have real-time performance constraints that must be met, for reasons such as safety and usability; others may have low or no performance requirements, allowing the system hardware to be simplified to reduce costs. Embedded systems are not always standalone devices. Many embedded systems consist of small, computerized parts within a larger device that serves a more general purpose. For example, the Gibson Robot Guitar features an embedded system for tuning the strings, but the overall purpose of the Robot Guitar is, of course, to play music. Similarly, an embedded system in a car provides a specific function as a subsystem of the vehicle itself. The program instructions written for embedded systems are referred to as firmware, and are stored in read-only memory or flash memory chips. They run with limited computer hardware resources: little memory, and a small or non-existent keyboard or screen.
are implemented for video presentation, and audio encoding selects GSM and G.723 over RTP. A performance test of the developed real-time video stream has been done to obtain live video feedback for monitoring the state of the aged or disabled in a campus network. The video server runs on Windows 2000 Professional (Pentium IV, CPU 1.9GHz), and the video client runs on Windows XP (IV, CPU 2.4GHz). The average frame rate is about 19.5 fps. Experiments in which users transfer the robot control token via the video/audio conference system have also been done. After entering the multipoint conference, users (e.g., doctor, caregivers) can select media for presentation by double-clicking the users' names in the middle right of the chat client panel. After a discussion, the robot manipulation token is transferred to an appropriate user. The experiment was successfully done in a campus network. Many options for different kinds of live feedback images have also been provided, such as the state of the mobile robot cooperating with the manipulator, or the rooms of the disabled or aged. In addition, "auto" and "step" control modes of the live image feedback are provided, allowing users to see either continuous live feedback images or "step" image feedback, which refreshes the image once each time the button is pushed.
Our world is currently facing global warming, whereby the average temperature of the Earth's atmosphere and oceans is increasing year by year. Studies show that the Earth's mean surface temperature has increased by about 0.8 °C, with about two-thirds of the increase occurring since 1980. Global warming may lead to more forest fires and fire disasters, as everything becomes more flammable at the higher temperatures of the Earth's atmosphere. Therefore, a fire-extinguishing robot is needed to reduce the damage caused by natural or man-made fire disasters. The project aims at designing an intelligent, voice-operated fire-extinguishing robotic vehicle with live video feedback which can be controlled wirelessly.
The robot did not exceed the speed limit.
Testing the haptic behavior of a master and slave system is not a trivial task, as there is no simple way to express how good the performance is in the eyes of the user. What can be measured, however, is how the robot reacts when pressing against an obstacle, under two different circumstances. First, a human operator induces the step through the Phantom and the force feedback is applied, i.e., the full system is run. Second, an automatic step in the desired position is induced in the impedance controller, without listening to the Phantom or sending any force feedback to it. Figure A.1 shows the first case and Figure 10.4 the second. The clear difference in behavior between the two figures is explained by the user's inability to hold the Phantom perfectly still, and thereby keep the desired position constant, when the force feedback kicks in.
Universiti Teknikal Malaysia Melaka, Malaysia
a,* email@example.com, b firstname.lastname@example.org, c email@example.com,
Abstract – The advanced design and development of robotic technology for multi-task applications is increasing. This paper presents the design and development of a mobile robot model that can be controlled through a Graphical User Interface (GUI) via a wireless protocol. The paper focuses on controlling the mobile robot using the GUI for navigation, while the user views images and real-time video in Visual Basic software. To address the limitations of wired control, an XBee wireless communication circuit is used to command the mobile robot from a computer. The mobile robot consists of a chassis, a graphical user interface (GUI), an XBee module, DC gear motors, a camera, track wheels, and a PIC18F4550 microcontroller. A differential driving method using an L298 circuit is used to control the movement of the robot. In the mechanical design, wheel tracks are used instead of conventional wheels to enable the robot to travel over different types of surfaces and rough terrain. In addition, a wireless camera is attached to the robot to provide a monitoring function.
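Differential driving with an L298 amounts to setting two direction inputs and one PWM enable per motor channel. The sketch below models this for a two-track robot; the IN1/IN2/EN names follow the L298's pin naming, but the command letters and the struct layout are our own convention for illustration, not from the paper.

```cpp
// Hypothetical pin-state model for one L298 channel: IN1/IN2 select
// direction, EN carries the PWM duty (0-255).
struct Motor { bool in1, in2; int en; };
struct Drive { Motor left, right; };

// Differential driving: to turn in place, run the tracks in
// opposite directions.
Drive drive_command(char cmd, int duty) {
    Motor fwd  {true,  false, duty};
    Motor rev  {false, true,  duty};
    Motor halt {false, false, 0};
    switch (cmd) {
        case 'F': return {fwd,  fwd};    // both tracks forward
        case 'B': return {rev,  rev};    // both tracks reverse
        case 'L': return {rev,  fwd};    // spin left in place
        case 'R': return {fwd,  rev};    // spin right in place
        default:  return {halt, halt};   // stop on unknown command
    }
}
```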
In operation tests, the object-handling task is realized by a combination of the following three functions: environment information representation using the robot-mounted camera image, known-object recognition using 2D visual markers, and a dialog-based teleoperation user interface. Figure 2 shows a robot controller with a dialog-based HMI. This HMI assumes operation using a touch panel: the surrounding environment of the robot is shown using the head-mounted camera image, robot actions are selected from buttons on the right side of the screen, and for each action a different operation interface is called. Object recognition is realized by putting 2D visual markers in the environment. The problems of this system are the following three points: 1) Using only the 2D color image obtained from the 3D camera, it is difficult to understand the environment around the robot, because the camera has a narrower view than a human's, and occlusion is sometimes caused by the robot's own body. 2) Object recognition using 2D visual markers limits which objects the robot can use. There is an initial cost to placing markers in the environment; markers may also be difficult to recognize due to their size and distance from the robot, and some objects cannot carry markers. 3) The dialog-based user interface can call only predefined tasks and cannot adjust detailed robot motions from the interface.
Experiments with new users show a correct classification in four out of five cases against realistic, moderately complex backgrounds. Our previous system reached about 86% correct recognition for 10 different postures in front of highly complex backgrounds, as depicted in Figure 9, but there the lighting was more tightly controlled. By using fewer allowed postures and coarser model graphs, we managed to reduce the previous recognition time of 16 seconds significantly. The Gabor transformation of the images takes 2.95 seconds on a conventional Sun UltraSPARC workstation; matching the model graphs adds another 1.88 seconds.
If such factors are ignored, a body part may be kept under pressure, causing pressure sores. In the present system, this aspect was carefully considered while integrating the components. The camera was adjusted so that the user does not need to put any extra effort into looking into the camera. Users can easily drive the wheelchair via eye or voice command while remaining in a comfortable position, thus avoiding potential tiredness. Overall, the designed system is proficient, feasible, comfortable, and safe to use. However, the system is not without limitations. Although the image processing technique used has relative superiority in processing, such techniques sometimes malfunction in the dark due to variations in illumination. In the existing setup, a 12-V LED is incorporated to compensate for this problem to some extent; in the near future, however, FPGA (field-programmable gate array) systems may be used to improve the processing speed and make the system better synchronized with environmental variations and user needs.
Communication is the foundation of all interaction between human beings; the bridge that leads to the exchange of information. The importance of communication was foreseen by many legendary leaders thousands of years ago. These include King Sejong, the creator of the Korean alphabet, and Qin Shi Huang, the founder of the Qin dynasty, who is well known for his masterpiece, the Great Wall of China. During Qin Shi Huang's reign, he instructed everyone to write only in Chinese characters regardless of their spoken languages and dialects, for the purpose of unity. To this day, the written languages of many countries use some Chinese characters; to some extent, a Chinese reader nowadays can understand a little of a Japanese-written product's title and description. This reflects how language has evolved and become a necessary tool in our daily life. Humans once communicated only with other humans, but now humans instruct machines through speech. Speech recognition technology has reached an extent where not only do we tell the machine, but the machine tells us too.
Key Words: voice command, obstacle, Bluetooth module.
1.1 History of Robotics:
Robots are combinations of electrical, mechanical, and automated systems used to perform specific and complex tasks assigned by humans. Robots' growth from scratch has been tremendous over the years. The concept of developing a robot originated when people began to think that their work had to be done in a given period of time without any human help. Turning ideas into reality, they first developed remotely operated robots with wired systems, and later developed wireless robots using antennas that cover only a certain distance. Around the 10th century BC, a mechanical automated robot was built that could sing and dance. It was built by an artisan named Yan Shi, and the machine had lifelike organs such as muscles, joints, and bones. The ancient Chinese also built clock towers that automatically rang the bell every hour.
The home monitoring system has become one of the basic infrastructures installed in almost every residential compound in the modern world, while closed-circuit television has become the trend, replacing the security guard in watching over the house 24 hours a day. However, closed-circuit television systems that use non-mobile video cameras and wired connections impose some limitations, such as the limited rotation angle of the camera, which creates blind spots, and heavy use of wiring. Thus, in this project we develop a remotely controlled home monitoring mobile robot system with obstacle avoidance, which provides more flexibility and mobility than the existing home monitoring system. Furthermore, a modern local area network (LAN) is applied to the home monitoring mobile robot, enabling the user to control the robot wirelessly from a long distance. To obtain more optimal motor speed control, the Pulse Width Modulation (PWM) technique is used due to its simple operation.
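With PWM speed control, the motor effectively sees the supply voltage scaled by the duty cycle. The helpers below sketch the usual mapping of a speed percentage onto an 8-bit duty value (the 0-255 range matches Arduino-style analogWrite); the function names are ours, added for illustration.

```cpp
// Map a speed percentage (0-100) to an 8-bit PWM duty value (0-255),
// clamping out-of-range inputs.
int pwm_duty(int speed_percent) {
    if (speed_percent < 0)   speed_percent = 0;
    if (speed_percent > 100) speed_percent = 100;
    return (speed_percent * 255) / 100;
}

// Average voltage the motor sees at a given duty, for a given supply
// (e.g. a 12 V battery at 50% duty delivers about 6 V on average).
double avg_voltage(double supply_v, int duty_0_255) {
    return supply_v * duty_0_255 / 255.0;
}
```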
2.2.4 Eye gaze estimation
One of the driving forces behind the development of face tracking systems is the goal of allowing machines to sense where a user is looking. It is not sufficient to consider the head pose only; the orientation of the eyes must also be measured. The vision-based methods that measure the eye gaze direction of a person can be classified into two main groups: use of corneal reflections, and localising the iris with respect to the centre of the eyeball. Systems of both categories have evolved from restrictive laboratory-based setups to completely unintrusive and unrestrictive systems. Early systems required people to put their heads into fixation frames to suppress head motions [Spindler and Chaumette 1997] and [Klingspohr et al. 1997]. The camera(s) were mounted to point directly at the person's eye(s). With the miniaturisation of camera technology it has become feasible for a person to wear the frame and the camera. Most head-mounted systems contain a magnetic motion tracker to determine
2. Proposed Methodology
The proposed methodology for the experimentation is shown in Fig. 1. The design presented in this paper helps in acquiring data by mounting suitable sensors on partially paralyzed patients, whose eye movements and brains are assumed to be working normally. The acquired data is suitably preprocessed, with spurious signals removed. The data is then transmitted to enable the user to interact with interfaced devices, such as a graphical user interface or any other interactive device, and to assist themselves without the help of the people on whom they depended all the time prior to the development of such an interactive brain-machine interface. Voice annunciation can also be implemented to command the interactive device, such as directing a robot, if interfaced.
(Scales: Amoral – Ethical; Stupid – Intelligent)
Table 2: Strengths and Weaknesses of Humans and Computers 
The glovebox environment similarly presents great challenges to the development of appropriate sensory feedback. Cameras are generally unsuitable for use in high radiation fields due to the spurious activation of pixel detectors and the introduction of opaque defects into lenses by ionizing radiation. Placing cameras on a glovebox teleoperator with the goal of being close to the task unfortunately exposes them to high levels of radiation, as intensity is proportional to the inverse square of distance. Cameras used inside the glovebox must therefore be hardened, and can be expected to have relatively short service lives. It is likely preferable to install additional, more complex cameras that sit outside the glovebox and peer through existing windows; however, gloveboxes tend to be very crowded, and visual occlusion can be expected to be a significant problem. Borescope or flexible fiberscope ports to the exterior of the glovebox would expose only relatively inexpensive and easily replaceable optical elements to radiation, and would allow operators to change the visual magnification and the orientation of their point of view, making the task as easy to see as possible.
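The inverse-square relationship mentioned above can be made concrete: moving a camera from distance d0 to d scales the received intensity by (d0/d) squared, so halving the standoff distance quadruples the dose. The numbers in the sketch below are illustrative only.

```cpp
// Inverse-square scaling of radiation intensity with distance:
// I(d) = I0 * (d0 / d)^2, where I0 is the intensity measured at
// reference distance d0. Units cancel, so any consistent pair works.
double intensity_at(double i0, double d0, double d) {
    double ratio = d0 / d;
    return i0 * ratio * ratio;
}
```

For example, a camera mounted at twice the reference distance receives one quarter of the reference intensity, which is why cameras kept outside the glovebox, peering through windows, last far longer than those mounted near the task.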
Comparison
Comparing both modes, the semi-autonomous mode seems to offer more advantages, especially under rough circumstances. All interfaces described for this mode offer easy and intuitive control of the vehicle. Nevertheless, not all interface solutions are suitable for military or space missions. During combat, it is unfavorable for a soldier to use a touchscreen device in order to navigate a vehicle. The same applies to astronauts, who might be limited in their mobility by their spacesuits. Therefore, solutions based on voice, gestures, or the ability to follow a human should be preferred. Gesture- and voice-based interfaces provide an optimal solution for commanding a vehicle. In contrast, a human-following ability offers full navigation control, while the user can still focus on other tasks. Thus, an ideal solution can be achieved by combining an easy-to-use interface with a human-following ability (see Figure 3.1). In this case, gestures, voice commands, touchscreen devices, as well as other simple button-based input devices can be used to turn the human-following interface on or off.
2 Associate Professor, Electronics and Communication Engineering, Saveetha School of Engineering, Kuthamakkam, Chennai.
ABSTRACT - In this paper, an Arduino-based speech-controlled robot is presented that can operate in manufacturing line processes, in automobiles, and for physically disabled persons. Today many electronic devices are available to reduce the mechanical work of humans; the speech-controlled robot (SCR) is one of them. The main aim is to recognize the human voice and command the robot accordingly. Speech recognition software is used to recognize the voice. The project is of two types: one is a smartphone application type and the other is a hardware implementation type. The smartphone application is developed on smartphones with a controller. The hardware implementation involves components such as an Arduino UNO, a Bluetooth module, DC motors, batteries, etc. The system is programmed using the Embedded C language for the control mechanism of the robot.