Depush aims to be a comprehensive provider of innovative engineering education. Until now, its products have mainly focused on microcontrollers and motor control. As computer vision is expanding rapidly into many fields and is increasingly taught in engineering curricula, Depush has become interested in developing computer vision products. OpenCV is widely used to process the images sent by cameras, and obstacle avoidance is one of the most important applications of computer vision; our core work, OpenCV-based obstacle avoidance, is therefore quite significant for Depush in the market. In addition, as the first users of Depush's newly developed teaching board, we can provide experience and formal instructions on how to use it.
Obstacle-avoiding robots are a great boon to society because of their versatility and easy maneuverability. These are advanced robots which can detect obstacles through sensors and move without colliding. The objective of the project is to target mobile robots; the problems covered include how to make (teams of) wheeled ground robots avoid collisions while reaching target locations, using an AVR-programmed microcontroller. This paper treats the navigation problem of mobile robots avoiding obstacles based on vision information. In the present method, obstacles in front of the mobile robot are first detected by calculating the optical flow. Then, based on the area of detection, the optimal trajectory for the robot is decided. Sensor data are used to support the vision system. To find the optimal trajectory, an evaluation function based on the distance between the mobile robot and the obstacle is computed. The AVR-programmed microcontroller on the board can sense the environment by receiving input from a variety of sensors and can affect its surroundings by controlling lights, motors, and other actuators.
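The flow-based trajectory decision described above can be sketched as follows. This is a minimal illustration, not the system's actual implementation: it assumes a pre-computed grid of optical-flow magnitudes and steers away from the image half with the stronger flow, which corresponds to the nearer obstacle. The function name and threshold are hypothetical.

```python
# Illustrative sketch of flow-balance steering: sum the optical-flow
# magnitudes in the left and right halves of the image and turn away from
# the side with the stronger flow (i.e., the closer obstacle).

def steer_from_flow(flow_mag, threshold=1.0):
    """Decide a steering command from a 2D grid of optical-flow magnitudes.

    flow_mag: list of rows, each a list of per-pixel flow magnitudes.
    Returns 'left', 'right', or 'straight'.
    """
    mid = len(flow_mag[0]) // 2
    left = sum(sum(row[:mid]) for row in flow_mag)
    right = sum(sum(row[mid:]) for row in flow_mag)
    if max(left, right) < threshold:            # no significant flow: no obstacle
        return "straight"
    return "right" if left > right else "left"  # turn away from the stronger flow
```

In practice the grid would come from a dense optical-flow routine run on consecutive camera frames; the decision step itself reduces to this comparison.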
The global focus on terrorism and security intensified following the 9/11 attacks in the USA. The risk of terrorist attack can perhaps never be eliminated, but sensible steps can be taken to reduce it. The word “Robot” was first used in a 1921 play titled R.U.R.: Rossum’s Universal Robots, by the Czechoslovakian writer Karel Capek; “robot” is a Czech word meaning “worker.” Merriam-Webster defines a robot as “a machine that looks like a human being and performs various complex acts; a device that automatically performs complicated, often repetitive tasks; a mechanism guided by automatic controls.” ISO describes a robot as “an automatically controlled, reprogrammable, multipurpose manipulator programmable in three or more axes, which may be either fixed in place or mobile for use in industrial automation applications.” Together, these definitions give us a rough idea of what comprises a robot: a machine that must sense the outside world and act accordingly. There are motors, pulleys, gears, gearboxes, levers, chains, and many more mechanical systems enabling locomotion. There are sound, light, magnetic-field, and other sensors, as well as microphones, that help the robot collect information about its environment. And there are processors, powered by capable software, that help the robot make sense of the captured environmental data and tell it what to do next.
Nowadays, robots are among the most important machines serving humans. One of the best-known types is the obstacle avoidance robot, which detects objects surrounding it and avoids colliding with them, protecting both the robot's structure and its internal components from damage. To ensure the robot is not damaged by collisions, vision-based obstacle avoidance algorithms have been developed. Range sensors are the most common choice, but a vision camera provides much richer information and is increasingly used thanks to its advancing technology and capability. Because of this characteristic, the camera has taken robot perception to another level and has become a highly attractive visual sensor.
The main motive in designing this robot is to help disabled people drive their wheelchairs without having to touch the wheels. It can also reduce the complexity of operating remote-control-based robots; military robots, industrial robotics, and construction vehicles on the civil side fall into this category. Commands are sent to the robot by a Chronos watch [1, 2] using either tilt control or touch control. Once the commands are received by the receiver on the robot, it processes them in order to change position or speed. The system also performs real-time obstacle detection and obstacle avoidance for autonomous navigation of mobile robots using IR sensors in an unstructured environment. The process of robot control includes:
• Sense — collects information about the environment.
• Process — uses and processes the collected information.
• Act — follows instructions to perform actions.
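The sense-process-act cycle above can be sketched as a single control step. This is an illustrative sketch only; the IR encoding (truthy = path clear on that side) and the speed values are assumptions, not taken from the Chronos-watch system.

```python
# One iteration of the process step: map two IR readings to differential
# wheel speeds. Speeds and the sensor convention are illustrative.

def control_step(ir_left, ir_right):
    """Map two IR readings (truthy = path clear on that side) to a
    (left_speed, right_speed) motor command."""
    if not ir_left and not ir_right:
        return (-50, -50)   # blocked on both sides: back up
    if not ir_left:
        return (100, 30)    # obstacle on the left: faster left wheel, turn right
    if not ir_right:
        return (30, 100)    # obstacle on the right: turn left
    return (100, 100)       # path clear: drive straight
```

On the real hardware this function would sit inside a loop that reads the sensors (sense), calls the function (process), and writes the returned speeds to the motor driver (act).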
In order to navigate in unknown environments, the robot has to gather information, process it, and provide an optimal solution for the task. One particular issue in reaching the target position is avoiding the obstacles encountered in the robot's path. A great deal of research has been carried out in this field, but we focus on implementing a fuzzy inference system that has high accuracy and adapts well to our requirements. The fuzzy algorithm used for obstacle avoidance is a Mamdani inference system. In the fuzzification stage, the system maps the inputs from two ultrasonic sensors to values ranging from 0 to 1 using a set of input membership functions. The two ultrasonic sensors are placed on Nao's chest. Each sensor is equipped with a transmitter and a receiver, with a distance of 7 cm between the two sonar transmitters. The sonars' effective cone is 60°, the frequency is 40 kHz, and the detection range varies between 0.25 m and 2.55 m, with a resolution of 1 cm.
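A toy version of the Mamdani pipeline described above can be sketched as follows, under stated assumptions: triangular membership functions for "near" and "far" on each sonar reading (in metres), min as the rule AND, and a weighted average of rule-output centroids standing in for full centroid defuzzification. The membership shapes, rule base, and output angles are illustrative, not those of the actual controller.

```python
# Toy Mamdani-style step: fuzzify two sonar distances, fire four rules with
# min as AND, defuzzify via a weighted average of rule-output centroids.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def turn_command(d_left, d_right):
    """Fuzzy turn angle in degrees: positive = turn right (away from a left obstacle)."""
    near_l = tri(d_left, 0.0, 0.25, 1.0)   # "near" peaks at the 0.25 m range floor
    near_r = tri(d_right, 0.0, 0.25, 1.0)
    far_l, far_r = 1.0 - near_l, 1.0 - near_r
    rules = [                               # (firing strength, output centroid in deg)
        (min(near_l, far_r),  45.0),        # obstacle left only  -> turn right
        (min(far_l, near_r), -45.0),        # obstacle right only -> turn left
        (min(far_l, far_r),    0.0),        # clear ahead         -> straight
        (min(near_l, near_r), 90.0),        # blocked both sides  -> sharp turn
    ]
    total = sum(w for w, _ in rules)
    return sum(w * c for w, c in rules) / total if total else 0.0
```

A full Mamdani system would clip or scale the output membership functions and take the centroid of their union; the weighted-average shortcut keeps the sketch short while preserving the fuzzify / infer / defuzzify structure.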
The method we chose to investigate is based on the looming method described in  and , which relies on the change of various image properties over time. These two sources contain most of the key information relating to this method. The work of  demonstrates the use of projected object area, using the increasing area of approaching objects in a controlled environment as a basis for avoidance. This work is supported by the lengthy paper of , where other cues, such as texture and irradiance, are discussed. The work of  also discusses the use of optical blur in some detail, although no results are presented for this approach. Additional sources for this method are rare; a typical example is the work of , which makes reference to the looming approach but prefers a method augmented by other sensors.
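The area-based looming cue itself reduces to the relative growth of an object's projected image area between frames. A minimal sketch, with an illustrative threshold:

```python
# Looming via projected area: a large fractional growth of an object's image
# area between consecutive frames signals an approaching obstacle.

def is_looming(prev_area, curr_area, threshold=0.2):
    """True when the projected area grew by more than `threshold` (fractional)."""
    if prev_area <= 0:
        return False            # no prior measurement: cannot judge growth
    return (curr_area - prev_area) / prev_area > threshold
```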
The track which the Autobot needs to follow should be pre-fed into the built-in microcontroller, i.e., the distance and the angle of the final point from the initial point. If the Autobot needs to travel along a curved path, the equation of the required curve must be provided. Once the equation is determined, the instantaneous coordinates of the center of the Autobot are fed into the equation of the curve over which the Autobot has to travel. The microcontroller calculates the position of the center of the Autobot from the information provided by the left and right wheel encoders and feeds these instantaneous coordinates into the desired equation of travel; this makes it possible to monitor the position of the center of the robot. A PID controller can be used to keep the Autobot moving along the desired equation. If the Autobot deviates from the curve it is supposed to follow, an error signal is generated, and the constants Kp, Ki, and Kd are applied by the microcontroller. If the Autobot drifts to the right of its desired path, the PID controller tries to minimize the error by steering the Autobot to the left. In this case the microcontroller varies the speeds of the left and right motors so that the Autobot drifts left until its center again follows the required equation of travel.
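The correction loop described above can be sketched as follows. The error is taken as the signed cross-track distance of the Autobot's center from the desired curve (positive meaning a drift to the right); the gains, base speed, and the differential-drive mapping are illustrative assumptions.

```python
# Minimal PID correction for the cross-track error, plus the mapping from the
# PID output to differential wheel speeds. All constants are illustrative.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        """One control step: error = signed distance from the desired curve."""
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def wheel_speeds(correction, base=100.0):
    """Positive correction (drifted right) speeds up the right wheel and slows
    the left one, so the Autobot steers back to the left."""
    return base - correction, base + correction
```

Each control cycle the microcontroller would compute the center position from the wheel encoders, evaluate the error against the curve equation, and feed `pid.update(error, dt)` into `wheel_speeds`.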
Figure 6a shows an image with two regions on the floor which have little texture. The first is a circular piece of white paper which can be driven over, and the second is a small cardboard box, which cannot. Figure 6b shows the extracted boundaries and the FOE used to cast a ray in order to match intersections between corresponding boundaries. The cross-ratio construct to measure affine height is applied to the correspondences, thus allowing a height profile to be extracted as we “walk around” the closed contours associated with the two low-texture regions. If the height profile remains close to zero, then the region can be classified as belonging to the ground plane, as in Figure 7a. Otherwise it is classified as an obstacle, as in Figure 7b. The final image in Figure 8 shows two frames of the extracted ground region where the textured carpet has been classified on a pixel-by-pixel basis, and the textureless white paper region has been included by virtue of the height profile of its boundary. Obviously, this could have been done by determining whether the contour motions in reciprocal-polar space lay close to the extracted sinusoid defining the homography, but this does not give any quantitative information about height, which may be necessary if we wanted to allow the robot to drive over obstacles of small height compared to the robot wheel diameter.
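The cross-ratio in question is the standard projective invariant of four collinear points; a minimal sketch computing it from 1-D coordinates along the cast ray (the coordinates used in the example are illustrative):

```python
# Cross-ratio of four collinear points, given as 1-D coordinates along a ray.
# It is invariant under projective maps of the line, which is what makes it
# usable as a height measure from image correspondences.

def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) = (AC * BD) / (BC * AD)."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))
```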
The characteristics of a robot are mobility, autonomy, and perception ability. Robots can perform well in the absence of obstacles, but the scenario in their presence is not as easy: to avoid obstacles, robots require additional information. Here, an obstacle avoidance robot is developed without complicated circuitry or programming.
The distance between nearby objects and the robot is used for situation recognition. Distant objects should be ignored, because whether they are approaching is uncertain. If an object is too close, a reaction, for example an avoiding maneuver, should be performed. Hence, only objects within a certain distance window are relevant for learning. We mapped this window onto image regions, as can be seen in Figure 6. The upper part of the image is ignored during processing, based on the assumption that far objects are located there; close objects are found in the bottom part. Therefore, the robot stops as soon as the object contours overlap this lower part. However, the first assumption sometimes fails, for example for nearby tall objects as depicted in Figure 6 on the right. In this case, available object information is thrown away.
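The mapping of the distance window onto image regions can be sketched as a simple band test: an object triggers a stop only when its contour reaches into the lower part of the image. The band fraction is an assumption for illustration.

```python
# Distance window as an image-region test: stop when an object's contour
# reaches into the bottom band of the image (image rows grow downward).

def too_close(bbox_bottom, image_height, lower_band=0.25):
    """bbox_bottom: lowest pixel row of the object contour (0 = top row).
    True when the contour overlaps the bottom `lower_band` fraction."""
    return bbox_bottom >= image_height * (1.0 - lower_band)
```

The failure case from Figure 6 (right) falls out of this model naturally: a nearby tall object whose contour lies entirely in the upper part of the image never enters the band and so is wrongly ignored.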
This system focuses on an intelligent line-follower robot serving health-care management. A line-following robot carrying medicine has been designed to provide medicine to patients whenever they need it. A switch with an IR sensor, connected to the robot, has been fitted near the patient. When the patient presses the switch, a flag bit is set in the microcontroller; the line-following robot then follows the line, reaches the patient, and dispenses the medicine with the help of a DC motor. Along with the line follower, the authors used proximity sensors to stop the robot when any object comes into its path. In our project we have inherited this obstacle-detection trait, but industrial-grade proximity sensors are much more costly, and the IR beam must be properly aligned along the line of sight. Instead, we have included an ultrasonic sensor, which is much more economical and remains effective even in harsh conditions.
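Part of the ultrasonic sensor's appeal is how directly distance follows from the echo round-trip time. A minimal sketch, assuming a speed of sound of 343 m/s at room temperature:

```python
# Ultrasonic ranging: the sensor emits a pulse and measures the time until
# the echo returns; halving the round trip gives the one-way distance.

def echo_to_distance_cm(echo_time_s, speed_of_sound=343.0):
    """Convert a round-trip echo time (seconds) to a one-way distance in cm."""
    return (echo_time_s * speed_of_sound / 2.0) * 100.0
```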
The visual tracking system installed on the follower enables it to follow the leader robustly in a variety of environments. Despite this advantage, there are quite a number of disadvantages to using a vision system. One of the main ones is that it is costly to build, owing to the high computational power required to process the raw data obtained from the vision sensor. This makes the visual tracking system less preferable.
Robotics is a leading branch of engineering which demands knowledge of hardware, sensors, actuators, and programming. The result is a system which can be made to do many different things. However, developing such a system is expensive and difficult, so we have come up with a plan to build an autonomous mobile robot which is less expensive. A robot has three main parts: perceptors, processors, and actuators. The perceptors are the sensors which provide the robot with information about its surrounding environment. Much work has been done on object detection; those robots serve their purpose accurately, but they are very costly. So we move in a direction where we can have a robot that is cost-effective and good enough in its accuracy.

1.1 Robot Navigation and Sensor Fusion

Robot navigation algorithms are classified as global or local, depending on the surrounding environment. In global navigation, the environment surrounding the robot is known and a path which avoids the obstacles is selected. In local navigation, the environment surrounding the robot is unknown, and sensors are used to detect obstacles and avoid collisions. For global navigation, an INS (Inertial Navigation System) or an odometric system can be used. INS uses the velocity, orientation, and direction of the robot to calculate its location relative to a starting position. In a global environment, where the starting position, the goal, and the obstacles are known, INS can lead a robot to its goal. But a major problem of INS is that it suffers from integration drift: small errors in the measurements accumulate into a larger error in position. It is like letting a blindfolded man navigate from point X to point Y in a known environment. He knows the way, but he cannot see; he has to guess his location and decide the direction to move, and every error he makes is accumulated. By the time he thinks he has reached Y, his actual position may have drifted quite far from Y.
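The integration-drift problem can be made concrete with a toy dead-reckoning simulation: integrating a velocity measurement that carries a small constant bias makes the position error grow linearly with time (zero-mean random noise would instead produce a random-walk error). All values are illustrative.

```python
# Dead reckoning with a biased velocity measurement: the estimated position
# drifts from the true position by bias * elapsed_time.

def dead_reckon(true_speed, bias, dt, steps):
    """Return (true_position, estimated_position) after `steps` updates."""
    true_pos = est_pos = 0.0
    for _ in range(steps):
        true_pos += true_speed * dt
        est_pos += (true_speed + bias) * dt   # the sensor reads speed + bias
    return true_pos, est_pos
```

After 100 seconds at 1 m/s with a 0.01 m/s bias, the estimate is a full metre off, which is exactly the blindfolded-man effect described above.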
Nowadays, robots are built to perform multiple tasks with different levels of complexity. Some situations require multiple robots to perform a single task; in such situations, the robots must cooperate with each other. One key element of multiple robots working together is that a robot needs to be able to follow another robot or a human. This element leads to the study of leader-follower behavior.
This study proposed an obstacle-avoiding path-planning method based on an improved Dubins path, which effectively improved search speed by using a genetic algorithm. A new Dubins-based turning strategy suited to UAV maneuverability was also obtained, which reduced the number of speed changes during the UAV turning process. Simulation analysis showed that, compared with the spraying path in an obstacle-free environment, the proposed method increases path length by only 1.9% while providing obstacle avoidance. In the same obstacle environment, the path length is slightly larger than that of the traditional obstacle-avoiding algorithm, but the area of overlapped and skipped spray during UAV obstacle avoidance is significantly reduced. Therefore, the proposed algorithm can effectively improve the UAV spraying operation in obstacle environments and ensure the safety of flight operations.
The second set of benchmarks is intended to model a robot navigating in changing terrain, for example a robot moving in a parking lot among other vehicles. In these benchmarks a fixed percentage of vertices are initially blocked and, on succeeding rounds, each blocked vertex moves to some adjacent vertex with probability 0.5, the particular target vertex being chosen randomly from all available adjacent vertices. The experiments are done in the same way as the rock-and-garden experiments, except that we also plot, in Figure 7, the effect of changing the percentage of blocked vertices for a fixed sensor radius. Figures 5 to 6, left, show that ID* Lite uses fewer heap operations to compute a path from v_s to v_g. In the right plot
Abstract— The paper presents a human-following and obstacle-avoiding algorithm for a Bot that provides a service to a marathoner while training. To do its job, the Bot should be able to follow a human and dynamically avoid obstacles at varying positions in unlevelled outdoor surroundings. The Bot detects a human via a transceiver model; its speed is controlled using PWM signals and its direction is controlled using sensors. To avoid moving obstacles while following a running person, a definite avoidance radius is defined for each obstacle using the relative velocity between the Bot and the obstacle. To avoid obstacles easily and without collision, a dynamic obstacle-avoiding algorithm is implemented which directly employs the real-time relative position of the Bot and the obstacle and follows the shortest path around the obstacle. We verified the feasibility of these algorithms through experiments in different outdoor environments.
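The velocity-dependent obstacle radius can be sketched as follows. The linear scaling of the safety radius with closing speed, and the constants used, are assumptions for illustration, not the paper's exact rule.

```python
# Velocity-dependent safety radius: the faster an obstacle closes on the Bot,
# the larger the radius the avoidance path must keep around it.

def avoidance_radius(closing_speed, base_radius=0.5, k=0.4):
    """closing_speed: relative speed toward the Bot in m/s (<= 0 means receding).
    Returns the safety radius in metres."""
    return base_radius + k * max(0.0, closing_speed)
```

A receding obstacle keeps the base radius, so the shortest-path detour around it stays tight; a fast-approaching one inflates the radius and pushes the planned path further out.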