Abstract: This paper discusses the development and implementation of a vision-based obstacle avoidance system for a mobile robot using optical flow. The project comprises five stages: image pre-processing, optical flow processing, filtering, object distance measurement and obstacle avoidance. The optical flow process consists of image resizing, parameter setting, colour-to-grayscale conversion, the Horn-Schunck method and conversion of the grayscale image to a binary image. The next stage is filtering, performed with a smoothing filter, after which the image centre is defined. The maximum distance of an object from the camera has been set to 20 cm. The robot's decision to move left or right is therefore based on the direction of the optical flow. This avoidance algorithm allows the mobile robot to avoid obstacles of different shapes, whether square or rectangular. A friendly graphical user interface (GUI) was used to monitor the activity of the mobile robot while running the system.
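The stages named in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Horn-Schunck solver is a bare-bones NumPy version, and the threshold, iteration count and smoothness weight are assumed values.

```python
import numpy as np

def horn_schunck(im0, im1, alpha=1.0, n_iter=50):
    """Minimal Horn-Schunck dense optical flow between two grayscale frames."""
    im0 = im0.astype(np.float64)
    im1 = im1.astype(np.float64)
    # Spatial gradients averaged over both frames, plus the temporal difference
    Ix = (np.gradient(im0, axis=1) + np.gradient(im1, axis=1)) / 2
    Iy = (np.gradient(im0, axis=0) + np.gradient(im1, axis=0)) / 2
    It = im1 - im0
    u = np.zeros_like(im0)
    v = np.zeros_like(im0)
    for _ in range(n_iter):
        # 4-neighbour means implement the smoothness constraint
        u_avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                 + np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4
        v_avg = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                 + np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4
        d = (Ix * u_avg + Iy * v_avg + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_avg - Ix * d
        v = v_avg - Iy * d
    return u, v

def avoidance_direction(prev_gray, gray, mag_thresh=0.1):
    """Threshold flow magnitude to a binary map, split at the image centre,
    and turn away from the half containing more apparent motion."""
    u, v = horn_schunck(prev_gray, gray)
    binary = np.hypot(u, v) > mag_thresh
    mid = binary.shape[1] // 2
    left, right = binary[:, :mid].sum(), binary[:, mid:].sum()
    return "right" if left > right else "left"
```

Feeding two frames in which an object on the left side of the view has shifted makes `avoidance_direction` steer right, matching the decision rule described in the abstract.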
able to implement this algorithm with a less powerful CPU. Since only 10% of the CPU time is spent on processing data, a CPU running at 20 MHz would be sufficient. So instead of the Gumstix computer we could use a micro-controller such as a Brainstem or a BASIC STAMP for both image processing and motion control without any decrease in performance. The memory usage is nearly 1 MB, which is rather large. We did not try to optimise memory usage while implementing the code, so improvements could be made. We plan to implement this control algorithm on a micro-controller instead of the Gumstix. This change will greatly reduce the cost and power usage of the robot. To the best of our knowledge, there has not been a mobile robot that can perform reliable obstacle avoidance in unconstrained environments using such low-resolution vision and such a slow microprocessor. A robot with only obstacle-avoidance behaviour might not be very useful apart from applications such as exploration or surveillance. However, given that our algorithm is very computationally efficient and requires low-resolution images, it can be incorporated into a more complex system at a small price and leaves plenty of resources for higher-level behaviours.
The vision-based obstacle avoidance technique is an appearance-based technique which classifies the input colour image from a monocular camera into defined classes such as ground, walls and obstacles. The robot is able to use the information from these images to localize itself in the environment. Vision sensors provide detailed information about the environment which can be used for mobile robot navigation, to follow a path and avoid obstacles.
Abstract— Mobile robot applications have attracted interest in various fields of life, and the navigation method of a mobile robot depends mainly on the application in which the robot is used. Any navigation method should account for the robot's safe movement, avoiding collisions with other objects that may introduce errors into the navigation path or prevent the robot from reaching its targets properly.
Dead reckoning is the estimation of position and can also be referred to as self-localization or position tracking. Odometry is the estimation of speed and distance. In the case of ground robots, sensors are usually attached to the wheels, and the collected data are analyzed to estimate the motion of the robot. The most popular and widely used sensors are absolute and incremental rotary optical encoders. Unfortunately, these sensors generate unbounded errors, especially when paths are not straight. Inertial sensors are also used for dead-reckoning and odometry applications, but they suffer from the same type of errors. One factor behind the lack of precision of rotary encoders is that when a wheel-based robot slips, the wheels do not spin, leading to zero data collected by the sensors. This is the major handicap for indoor and outdoor robots; therefore, sensors based solely on optical flow computation must be adopted in robotics to solve the problems of inaccuracy and unreliability of traditional sensors. Among all the optical flow sensors, the use of an array of high-speed optical flow mice has been proposed as a solution for dead reckoning and odometry issues.
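The encoder-based dead reckoning described above can be sketched for a differential-drive robot. This is a generic textbook update step, not taken from the cited work; the wheel-travel inputs are assumed to come from encoder ticks multiplied by a metres-per-tick calibration, and slip would silently corrupt them, which is exactly the unbounded-error problem the paragraph raises.

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """One dead-reckoning step for a differential-drive robot.

    d_left / d_right: distance (m) each wheel rolled since the last update,
    e.g. encoder_ticks * metres_per_tick. wheel_base: distance between wheels.
    """
    d_center = (d_left + d_right) / 2.0          # forward travel of the body
    d_theta = (d_right - d_left) / wheel_base    # change in heading (rad)
    # Integrate along the heading at the midpoint of the turn (midpoint rule)
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta
```

A straight segment (equal wheel travel) advances the pose along the heading; opposite wheel travel rotates in place, with no position change.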
In this paper, a safety analysis of collision avoidance systems is presented. The optimisation-based verification method is applied to collision avoidance algorithms for a unicycle robot. First, the kinematic and dynamic equations of the unicycle robot are introduced and the controller is presented based on these equations. An inner-outer-loop control architecture is used, where the inner-loop controller is a proportional speed controller. A local planner in the outer loop is developed using the artificial potential field method. An optimisation-based approach is then developed to find the worst cases, defined by the minimum distance to the obstacle in the presence of all described variations. Mass and inertia variations are considered in this case study. The local optimisation method does not give a unique solution; it clearly shows that the optimal solutions do not converge to the global minimum. Different worst cases are identified when the optimisation starts from different initial conditions. Therefore, local optimisation is not suitable for verifying collision avoidance algorithms in this case study. As demonstrated by Fig. 8 and Fig. 9, this is a non-convex nonlinear optimisation problem and it is possible to miss the worst case. To overcome this problem, global optimisation algorithms are required.
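The artificial potential field planner used in the outer loop can be sketched as below. This is the classic Khatib formulation, not the paper's exact controller; the gains `k_att`, `k_rep` and influence radius `d0` are assumed values. Note how an obstacle directly between robot and goal can cancel the attractive force, creating the kind of local minimum that makes worst-case search non-trivial.

```python
import numpy as np

def apf_velocity(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
    """One artificial-potential-field step: attractive pull toward the goal
    plus a repulsive push from each obstacle within influence radius d0."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)  # gradient of a quadratic attractive well
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 0 < d < d0:
            # Repulsive gradient: grows as 1/d^2 near the obstacle boundary
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    return force
```

With an obstacle placed on the straight line to the goal, the repulsion opposes the attraction head-on and the commanded velocity can drop toward zero, which is the failure mode a verification procedure must probe for.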
Our main contribution is that the presented method provides both velocity and distance estimates, while still being computationally efficient enough to run close to the frame rate on a very limited embedded processor. As such, the method enables unstable MAVs such as tiny quadcopters to perform fully autonomous flights in unknown environments. The EdgeFlow and EdgeStereo methods are explained in more detail in Section II. Off-line results for velocity estimates on a set of images are shown in Section III. From there, the algorithm is embedded on the lightweight stereo camera and placed on a 40 g pocket drone for velocity estimation (Section IV-B). Finally, the velocity estimate is used together with EdgeStereo-based obstacle detection to perform fully autonomous navigation in an environment with obstacles (Section IV-C). This is followed by some concluding remarks.
Abstract— The Robot Operating System (ROS) is a collection of tools, libraries, and conventions that focus on simplifying the task of creating complex and advanced robotics systems. Its standard framework can be shared with other robotics systems built on a similar platform, which makes it suitable as an educational tool in robotics. However, the robot platforms currently available on the market are expensive and encapsulated, so the development of an open-source robot platform is encouraged. Therefore, this research was carried out to design and develop a ROS-based obstacle avoidance system for an existing differential-wheeled mobile robot. ROS was installed under Ubuntu 14.04 on a BeagleBone Black embedded computer. ROS was then used together with the obstacle avoidance system to establish communication between program nodes. The mobile robot was then designed and developed to examine the obstacle avoidance application. A debugging process was carried out to check the obstacle avoidance application based on the communication between nodes; this process is important for examining message publishing and subscribing across all nodes. The obstacle avoidance mobile robot was successfully tested, with the communication between nodes running without any problem.
In order to reduce the dependence of a UAV on human operators, autonomous control is required. In navigating a desired path, an autonomous UAV must take into account static and dynamic obstacles such as buildings, terrain, other vehicles, and people. The successful avoidance of such obstacles depends on accurate obstacle detection logic. There are a number of sensing schemes that may provide the needed environment data, the most commonly implemented of which are computer vision, LIDAR and sonar schemes. Ground-based robots have developed far enough to use suites of combined sensor types, though their relatively constant proximity to the ground enables the use of novel approaches not available to air-based platforms. Instead, most research in UAV obstacle detection in recent years has taken advantage of the relatively low cost and light payload weight of vision-based systems. These systems are also flexible in their design: different numbers of sensors may be used, with different obstacle detection algorithms in each situation. Most commonly, either one or two cameras are used to provide environment data. In the case of a single camera, features such as edges or corners are identified and tracked from one sensor frame to the next in a process known as optical flow. In dual-camera implementations, the disparity of pixels between the left and right sensors is used to compute a depth mapping. It has also been shown that a combination of these methods is possible, using cameras of different frame rates to perform both stereo correspondence and feature tracking. These computer vision approaches are made easier by the availability of software packages built for the implementation of computer vision algorithms. One such package, OpenCV, was selected for use in this project due to the completeness of its feature set and its implementation in both C++ and Python.
In this section we compare and evaluate the performance of both the Mamdani and Sugeno FLC in terms of the path travelled, the smoothness and the efficiency of the FLC for a single robot and for multiple robots. For the integration of the FLS for the obstacle avoidance behaviour of the mobile robot, the simulation begins upon successful configuration of the remote application program interface (API) between V-REP and MATLAB. If the distance between the target and the robot position is more than 0.1 m, the robot is far from the target location, so the robot will start moving towards the target under three conditions. The second condition is that if the angle is larger than 0.1 radian the robot rotates left, whereas the third condition is that if the angle is smaller than 0.1 radian the robot rotates right. While executing these three conditions, if any of the robot's proximity sensors detects an obstacle, the FLC, either Mamdani or Sugeno, controls the left and right velocities of the robot based on the 15 rules defined. This process keeps repeating until the distance between the robot and the target location is no more than 0.1 m.
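The target-seeking thresholds described above can be sketched as a plain decision function. This is only the outer go-to-goal logic with the 0.1 m and 0.1 rad thresholds from the text; the angle is interpreted here as a signed heading error (an assumption), and the 15-rule Mamdani/Sugeno FLC that takes over on sensor detection is not reproduced.

```python
import math

def nav_step(robot_xy, robot_heading, target_xy, dist_tol=0.1, ang_tol=0.1):
    """Return 'stop', 'rotate_left', 'rotate_right' or 'forward' using the
    distance and angle thresholds quoted in the text."""
    dx = target_xy[0] - robot_xy[0]
    dy = target_xy[1] - robot_xy[1]
    if math.hypot(dx, dy) <= dist_tol:
        return "stop"                        # within 0.1 m of the target
    # Signed heading error, wrapped to (-pi, pi]
    err = math.atan2(dy, dx) - robot_heading
    err = math.atan2(math.sin(err), math.cos(err))
    if err > ang_tol:
        return "rotate_left"                 # target lies to the left
    if err < -ang_tol:
        return "rotate_right"                # target lies to the right
    return "forward"
```

In the full system this function would run each simulation step, with the FLC overriding the wheel velocities whenever a proximity sensor fires.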
The visual tracking system installed on the follower enables it to follow the leader robustly in a variety of environments. Despite this advantage, there are quite a number of disadvantages to using a vision system. One of the main disadvantages is that it is costly to build, due to the high computational power required to process the raw data obtained from the vision sensor. This makes the vision tracking system less preferable.
Abstract: Due to payload restrictions on micro aerial vehicles (MAVs), vision-based approaches have been widely studied for their light weight and cost effectiveness. In particular, optical flow-based obstacle avoidance has proven to be one of the most efficient methods in terms of obstacle avoidance capability and computational load; however, existing approaches do not consider complex 3-D environments. In addition, most approaches are unable to deal with situations where there are wall-like frontal obstacles. Although some algorithms consider wall-like frontal obstacles, they cause jitter or unnecessary motion. To address these limitations, this paper proposes a vision-based obstacle avoidance algorithm for MAVs using optical flow in 3-D textured environments. The image obtained from a monocular camera is first split into horizontal and vertical half-planes. The desired heading direction and climb rate are then determined by comparing the sum of optical flow between the half-planes horizontally and vertically, respectively, for obstacle avoidance in 3-D environments. Moreover, the proposed approach is capable of avoiding wall-like frontal obstacles by considering the divergence of the optical flow at the focus of expansion, and of navigating to the goal position using a sigmoid weighting function. The performance of the proposed algorithm was validated through numerical simulations and indoor flight experiments in various situations.
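The half-plane comparison at the core of this algorithm can be sketched as follows. This is only an illustration of the balancing idea, not the paper's controller; the sign convention (positive yaw command steers away from the left half, positive climb command steers away from the lower half) and the normalisation are assumptions.

```python
import numpy as np

def flow_commands(u, v, gain=1.0):
    """Compare summed optical-flow magnitude between image half-planes.

    u, v: dense flow components (2-D arrays). More flow on one side means
    nearer structure there, so the commands steer toward the calmer half."""
    mag = np.hypot(u, v)
    h, w = mag.shape
    left, right = mag[:, :w // 2].sum(), mag[:, w // 2:].sum()
    top, bottom = mag[:h // 2, :].sum(), mag[h // 2:, :].sum()
    # Normalised imbalances in [-1, 1]; epsilon guards a zero-flow frame
    yaw_cmd = gain * (left - right) / (left + right + 1e-9)
    climb_cmd = gain * (bottom - top) / (top + bottom + 1e-9)
    return yaw_cmd, climb_cmd
```

A wall-like frontal obstacle produces balanced halves, which is why the paper adds the flow-divergence test at the focus of expansion on top of this comparison.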
[6, 7] propose specific systems for exception handling. The approach described in  introduces the concept of the supervisor, which plays the role of a handler for a group of the system's agents. It defines two types of exceptions: internal exceptions, which are treated by the agent itself, and external ones, handled by the supervisor, which has global access to the agents it supervises. Hagg  proposes a strategy for exception handling using sentinels. Sentinels are guardian agents which protect the multi-agent system from falling into undesirable states. They have the authority to monitor communications in order to react to faults. This approach is costly in terms of computation and communication, and it introduces a point of failure since the sentinels themselves are also subject to faults. These two works are based on the idea of a controlling agent, that is, the use of a generic agent that controls the execution process and handles exceptions when they occur.
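The internal/external split described above can be sketched in miniature. The class names and the choice of which exception types count as "internal" are illustrative assumptions, not taken from the cited systems.

```python
class Supervisor:
    """Global handler for a group of agents (the supervisor role above)."""
    def handle(self, agent_name, exc):
        # A real supervisor could restart, reassign or quarantine the agent
        return f"supervisor handled {type(exc).__name__} from {agent_name}"

class Agent:
    """Agent that treats its internal exceptions itself and escalates
    external ones to its supervisor."""
    def __init__(self, name, supervisor):
        self.name = name
        self.supervisor = supervisor

    def run(self, task):
        try:
            return task()
        except ValueError:
            # Internal exception (assumed type): the agent recovers locally
            return f"{self.name}: recovered locally"
        except Exception as exc:
            # External exception: escalate to the supervising handler
            return self.supervisor.handle(self.name, exc)
```

The single supervisor here also makes the point-of-failure criticism concrete: if `Supervisor.handle` itself fails, every escalated exception is lost.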
The averaging approach to blur estimation has disadvantages: it is simplistic and assumes that all objects in the area lie on the same plane and orientation relative to the camera. However, for the controlled test environment this was considered an acceptable assumption, and the averaging approach produced consistent results (see below). Although such a method could not be used in a real mobile application (the assumption that the area contains one flat object would almost certainly be violated), the overall method of looming-through-blur could be implemented by substituting a simple object tracking method for this area averaging technique. The critical point is to obtain the previous-frame history of the blur radius r for a given object. In the experiments we were able to assume that the area of interest contained the same object in each frame. The advantages of this experimental set-up are that, while it is not real-time, it is controlled and produces clear data, illustrating the efficacy of looming-through-blur. While modifications would be needed to implement this concept in real-time avoidance for mobile robots, they were considered relatively minor.
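The use of the per-frame blur-radius history can be sketched as follows. This is an assumed formulation, not the paper's: it treats the relative frame-to-frame growth of the area-averaged blur radius r as the looming signal, so a positive rate indicates an approaching object.

```python
import numpy as np

def looming_rate(r_history):
    """Mean relative expansion rate of the blur radius over a frame history.

    r_history: per-frame area-averaged blur radii for the tracked object.
    Positive result: the object is looming (growing between frames)."""
    r = np.asarray(r_history, float)
    dr = np.diff(r)                 # frame-to-frame change in blur radius
    return float(np.mean(dr / r[:-1]))
```

The reciprocal of this rate gives a rough frames-to-contact figure, which is the quantity an avoidance trigger would threshold.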
While far more efficacious than any other edge detection method, the edge tracing algorithm was not selected for final use in the finished program for a number of reasons. By the time it came to the author's attention, writing of the final segmentation algorithm (see below) was already nearing completion, and it was felt that the additional expenditure of time could not be justified, especially as virtually all existing work would have to be rewritten for a method that was bound to have unforeseen and time-consuming side issues. Secondly, it was believed that the edge-based approach suffered from a serious flaw: by definition, only the outlines of objects are considered. While this requires far less computational power than other methods (only a tiny fraction of the image is under consideration (Billingsley, 26th May 2011)), it renders most of the image a blank map with no information about those pixels. If the machine were to approach a blank wall, an edge detection device could only extract information about the line where the floor meets the wall.
In past years a substantial amount of work has been done in the field of navigating mobile robots securely through known or unknown environments. Ground-operating robots, or slow-moving and heavy UAVs, typically use light detection and ranging (LiDAR) technology to perceive the surrounding environment (Bachrach et al., 2012). Scherer et al. (2008) used a heavy helicopter able to carry a maximum payload of 29 kg and fly at speeds of up to 10 m/s. Its LiDAR system has a range of up to 150 m, which is suitable given the speed and mass of the system. However, for fast reactions in dynamic or cluttered scenes, heavy LiDAR systems are not practical. They consume significantly more power than a camera-based system, thus requiring a corresponding power supply, making them heavier and more sluggish. Due to higher speeds and more agile movements, UAVs need fast, light and energy-efficient solutions for robust obstacle avoidance. Systems relying on a single camera typically use optical flow information, computed between two consecutive frames, to perceive the structure of the environment. The main advantage of such structure-from-motion approaches is that they are lightweight and energy-efficient and can therefore be used for small UAVs. Merrell and Lee (2005) proposed a probabilistic method of computing the optical flow for more robust obstacle detection. Ross et al. (2012) used a data-driven optical flow approach, learning the flight behaviour from a human expert. Nonetheless, since the information gained by a single-camera system is limited, such systems are typically infeasible for robot navigation.
This article was written jointly by Shiao Ying Shing, Yang Jui Liang, and Su Ding Tsair from the Department of Electrical Engineering at National Chang-hua University of Education, Taiwan. The article provides an alternative approach to vision-based navigation. The control objective for the vision-based tracking problem is to manipulate the WMR to follow the desired trajectory. The system is composed of a ceiling-mounted fixed camera whose outputs are connected to a host computer, and a WMR carrying two round marks of different colours on top to differentiate the front and rear of the WMR, as shown in Figure 2.4.
time, constantly sends messages to the SONAR module. The ultrasonic sensors of the SONAR module respond with an acknowledgement whenever an obstacle is detected. The SONAR module also calculates the distance to the object and sends it to the master. When the distance is within the reach of the robot arm, a message is sent by the master to the base unit to apply the brakes and stop the unit. With the help of three sonar sensors, the base is aligned to be exactly in front of the object, ensuring that the webcam is properly focused to capture the image of the obstacle accurately. A frame is then captured and the image is analyzed to identify the shape and the colour of the object. If it matches the specification of our object, the arm unit is activated to pick up the ball; otherwise the robot moves around the obstacle and continues on its path.
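The base unit's braking and alignment decision can be sketched as a single function. The 25 cm arm reach and the rule of centring on the nearest-reading sensor are assumed values for illustration; the actual message protocol between master, SONAR and base modules is not reproduced.

```python
def base_command(distances_cm, arm_reach_cm=25.0):
    """Decide the base unit's next move from three front sonar readings
    (left, centre, right), in centimetres."""
    left, centre, right = distances_cm
    if min(distances_cm) > arm_reach_cm:
        return "forward"          # nothing within the arm's reach yet
    # Within reach: align so the object faces the centre sensor
    if left < centre and left < right:
        return "turn_left"        # object is offset toward the left sensor
    if right < centre and right < left:
        return "turn_right"       # object is offset toward the right sensor
    return "brake"                # centred: stop and hand over to arm/vision
```

Once `"brake"` is returned, the webcam frame would be captured and classified as the paragraph describes.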
noise during the walk, as the Nao walk is unstable. A neural approach for robot navigation based on cognitive map learning is presented in . That paper presents a neural network architecture by which a robot learns new navigation behaviour by observing a human's movement in a room. There are also many works that study the control of autonomous robots using neural networks, fuzzy logic, PSO and ANFIS . In  a fuzzy logic technique for Romeo autonomous vehicle navigation is proposed, an on-line navigation technique for a wheeled mobile robot in an unknown dynamic environment using fuzzy logic was developed in , a new behaviour-based fuzzy control method for Voyager II mobile robot navigation is presented in , and a fuzzy discrete event systems framework for the control of sampled-data systems that have to fulfil multiple objectives has been introduced by Schmidt K.W. et al. in . Nao and humanoid robot control have also been the object of other research works by the authors, such as Melinte O. et al. in  for haptic robots, Wang X. et al. for rescue robots, Feng Y. in , and Wang H. for rehabilitation robots in , and those results are applied in this paper.
2. NAO robot control