Vision systems are gradually becoming more important. As computing technology advances, they have been widely utilized in many industrial and service sectors. One of the critical applications of a vision system is to navigate a mobile robot safely, which requires several technological elements. This article reviews recent research on intelligent vision-based navigation systems for mobile robots, including the utilization of mobile robots in sectors such as manufacturing, warehousing, agriculture, outdoor navigation and other services. Multiple intelligent algorithms used in developing robot vision systems are also reviewed.
This article was written jointly by Shiao Ying Shing, Yang Jui Liang, and Su Ding Tsair from the Department of Electrical Engineering, National Chang-hua University of Education, Taiwan. It provides an alternative vision-based approach to navigation. The control objective of the vision-based tracking problem is to manipulate the WMR so that it follows the desired trajectory. The system is composed of a ceiling-mounted fixed camera whose outputs are connected to a host computer, and a WMR carrying two round marks of different colors on top to distinguish its front from its rear, as shown in Figure 2.4.
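As an illustration of how the two differently colored marks yield the robot's planar pose from the ceiling camera, a minimal sketch follows. The function name, argument layout, and image-coordinate convention are assumptions for illustration, not taken from the article.

```python
import math

def wmr_pose(front, rear):
    """Estimate the WMR's planar pose from the two colored marks seen by
    the ceiling camera.  `front` and `rear` are (x, y) image coordinates
    of the front and rear marks (hypothetical names/convention)."""
    cx = (front[0] + rear[0]) / 2.0   # robot center: midpoint of the marks
    cy = (front[1] + rear[1]) / 2.0
    # heading: direction from the rear mark toward the front mark
    theta = math.atan2(front[1] - rear[1], front[0] - rear[0])
    return cx, cy, theta
```

With the marks at (2, 0) and (0, 0), the estimated center is (1, 0) with zero heading; the two distinct colors are what resolve the 180-degree heading ambiguity a single mark would leave.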
SLAM is considered an integral part of autonomous mobile robot navigation. It represents the ability to place an autonomous mobile robot at an unfamiliar location in an unknown environment, have it build a map using only relative observations of the environment, and then use this map simultaneously to navigate. The focal idea behind the SLAM technique is the observation of robot motion from an initially unknown posture or vicinity while maneuvering within an environment; furthermore, absolute feature locations are not accessible. It is proposed to adopt a linear evolution of motion, as this yields a synchronous discrete-time robot model in which the observations of landmarks are known. Robot motion and the observation of features (the landmarks) are nonlinear while navigating; however, using linear models does not reduce the accuracy of the approach. In this regard, within this paper we shall use a nonlinear robot model in addition to nonlinear observation models. This includes the system state, such as the position and orientation of the robot, in addition to the positions of the landmarks. Denote the state of the KSU-IMR robot as X(k). While maneuvering, the dynamical motion of the robot is modeled by a linear discrete-time state transition equation, Eq. (5):
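To make the discrete-time transition concrete, here is a minimal sketch of one SLAM prediction step, assuming a state vector X(k) of robot pose followed by static landmark positions and a unicycle motion model. The state layout and inputs are assumptions for illustration; the paper's exact Eq. (5) may differ.

```python
import numpy as np

def slam_predict(x, v, omega, dt):
    """One SLAM prediction step for a unicycle robot.
    State x = [xr, yr, theta, lx1, ly1, ...]: robot pose followed by
    landmark positions (a common layout; an assumption here).
    Landmarks are static, so only the pose evolves."""
    x = x.copy()
    x[0] += v * dt * np.cos(x[2])   # nonlinear motion model
    x[1] += v * dt * np.sin(x[2])
    x[2] += omega * dt              # heading update
    return x
```

Driving straight for one step (v = 1, omega = 0) advances only the pose entries; the landmark entries are untouched, which is exactly what lets the filter correlate pose error with map error over time.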
The research is motivated by the gap between currently available technology and new application demands. The industrial robots currently used in production and manufacturing lack flexibility, adaptability and autonomy: they typically perform pre-programmed sequences of operations, as pre-decided by the programmer, in highly constrained environments, and prove unable to function in new environments or when facing unexpected situations. Soft computing techniques such as fuzzy logic, which exploit the tolerance for imprecision inherent in most real-world systems to improve a system's performance iteratively, have become hugely popular methods for controller design in mobile robotics. These techniques are used for expressing the various subjective uncertainties in human-like behavior. Real-world problems cannot be defined in crisp (hard) logic and are characterized by uncertainties, so a fuzzy-based modeling scheme was considered appropriate for designing the robot's controller to deal with real-life data.
Received: 28 April 2020; Accepted: 23 May 2020; Published: 25 May 2020. Abstract: In this study, a semantic segmentation network is presented to develop an indoor navigation system for a mobile robot. Semantic segmentation can be applied by adopting different techniques, such as a convolutional neural network (CNN). In the present work, however, a residual neural network is implemented via ResNet-18 transfer learning to distinguish between the floor, which is the free navigation space, and the walls, which are the obstacles. After the learning process, the semantic segmentation floor mask is used to implement indoor navigation and motion calculations for the autonomous mobile robot. These motion calculations are based on how much the estimated path differs from the center vertical line: the highest point of the path is used to steer the motors toward that direction. In this way, the robot can move in a real scenario while avoiding different obstacles. Finally, the results are collected by analyzing the motor duty cycle and the neural network execution time to review the robot's performance. Moreover, a comparison of different networks is made to determine other architectures' reaction times and accuracy values.
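The steering rule described above, deviation of the highest floor point from the center vertical line, can be sketched as follows. The binary-mask input and the normalized output scaling are assumptions; the paper maps its deviation to motor duty cycles rather than a [-1, 1] command.

```python
import numpy as np

def steering_from_floor_mask(mask):
    """Given a binary floor mask (1 = free floor, 0 = obstacle), find the
    highest (farthest) floor point and return a steering command in
    [-1, 1] from its horizontal offset to the image center line."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return 0.0                       # no floor visible: neutral command
    top = rows.min()                     # highest (farthest) floor row
    x = cols[rows == top].mean()         # mean column at that row
    center = (mask.shape[1] - 1) / 2.0
    return float((x - center) / center)  # <0 steer left, >0 steer right
```

A mask whose farthest floor pixel sits at the right edge yields +1.0 (full right), while a mask whose farthest floor pixel is on the center line yields 0.0 (drive straight).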
In this article, we discuss the implementation phases of autonomous navigation for a mobile robotic system using SLAM-gathered data, relying on the features of learned navigation maps. The adopted SLAM-based learned maps relied entirely on active stereo vision for observing features of the navigation environment. We show the framework for the lower-level software coding that becomes necessary once vision is used for multiple purposes: distance measurement and obstacle discovery. In addition, the article describes the adopted upper level of system intelligence, a fuzzy-based decision system. The proposed map-based fuzzy autonomous navigation was trained from data patterns gathered during numerous navigation tasks, and was further validated and verified on a mobile robot platform.
mobile robot between urban buildings. GPS can be combined with LRF and dead reckoning for navigating vehicles on roads using beacons and landmarks, combined with camera vision for localizing a mobile robot on color-marked roads, combined with a 3D laser scanner for building compact maps to be used in miscellaneous mobile robot navigation applications, combined with an IMU, camera vision, and a sonar altimeter for navigating an unmanned helicopter through urban buildings, or combined with LRF and inertial sensors for leading the mobile robot ATRV-JR over paved and rugged terrain [11, 12]. GPS can be combined with INS and an odometry sensor to increase its positioning accuracy for vehicles. GPS is combined with odometry and GIS (geographical information system) for road-matching-based vehicle navigation. GPS can also be combined with LIDAR (light detection and ranging) and INS for leading a wheelchair through urban buildings by 3D map-based navigation. GPS is combined with a video camera and INS for navigating vehicles by lane detection. GPS is combined with INS for improving vehicle position estimation. GPS is combined with INS and odometry for localizing a mobile robot among urban buildings. A video camera with odometry is used for navigating a land vehicle, following roads by lane signs and avoiding obstacles. An omni-directional infrared vision system can be used for localizing a patrol mobile robot in an electrical station environment. 3D map building and urban scenes are used in mobile robot navigation by fusing stereo vision and LRF.
completed by humans due to limited abilities. Robots are known to have a higher ability to do repetitive work with constant performance, to work in dangerous areas that could endanger human life, and to finish jobs faster with less rest time. According to Dudek and Jenkin (2000), a mobile robot is an autonomous system with the intelligent function of traversing an environment with natural or artificial obstacles. Its chassis is provided with wheels or legs, and possibly a manipulator mounted on it for workpiece operations, tools or special systems. A variety of preplanned operations are performed based on a preprogrammed navigation strategy that takes into account the current status of the environment.
fundamental to both cooperation and coordination, and hence the central role of the networked system. Embedded computers and sensors are now ubiquitous in homes and factories, and increasingly wireless ad-hoc networks or plug-and-play wired networks are becoming commonplace. Robots function in environments while performing tasks that require them to coordinate with other robots, cooperate with humans, and act on information derived from multiple sensors. In many cases, these human users, robots and sensors are not collocated, and the coordination and communication happen through a network. Networked robots allow multiple robots and auxiliary entities to perform tasks that are well beyond the abilities of a single robot. Robots can automatically couple to perform locomotion and manipulation tasks that either a single robot cannot perform, or would require a larger special-purpose robot to perform. They can also coordinate to perform search and reconnaissance tasks, exploiting the efficiency inherent in parallelism. Further, they can perform independent tasks that need to be coordinated. Another advantage of networked robots is improved efficiency. Tasks like searching or mapping are, in principle, performed faster with an increase in the number of robots. A speed-up in manufacturing operations can be achieved by deploying multiple robots performing operations in parallel, but in a coordinated fashion. Perhaps the greatest advantage of using the network to connect robots is the ability to connect and harness physically removed assets.
Nowadays many objects and machines have built-in sensors to make things work more easily, smoothly and intelligently. An intelligent system controlled by a controller and supported by vision sensors is called a Vision Mobile Robot System (VMRS). Many researchers have applied vision sensors to mobile robot systems [1, 2]. This kind of robot can identify its surrounding environment, and its next movement can be determined so as to reach an intermediate or final goal. This is because vision sensors are low-cost sensors that provide a vast amount of information about the environment in which the robot moves. In addition, they are passive, so vision-based navigation systems do not suffer from the interferences often observed when using active sound- or light-based proximity sensors. The VMRS is mostly used in service robotics applications, where the robot is equipped with a camera and controlled from its visual perception, using visual data to control the motion of the robot.
In this research, a self-localization and mapping system for autonomous mobile robot navigation is used in a structured indoor environment. It is based on a visual detection process fused with ultrasonic sensing and encoder readings. Artificial landmarks were used for robot localization, with landmark distances and landmark recognition worked out using a camera and an ultrasonic sensor. By using artificial landmarks, the robot can reduce the uncertainty in the environment while estimating its position more accurately. In addition, the coordinates of each detected landmark were combined with the distance traveled by the robot, obtained from the wheel encoders, to estimate the robot's location. The system is based on a Kalman filter, which uses the previously observed position together with the detection of an artificial landmark to correct the position and orientation of the mobile robot. This method is applicable to mobile robot localization and has proved to achieve solutions with less computational time. The system only needs to focus on the area containing an artificial landmark to extract features while localizing. Thus, thanks to the sensor fusion and image processing techniques, the system is able to reduce cost and computational time by up to 15 minutes. In this research, a map-based self-localization and mapping system is used to be more efficient in the structured indoor environment.
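The Kalman correction step described above can be illustrated with a deliberately simplified scalar case: the robot's 1-D position is corrected by a measured range to a landmark at a known coordinate. The measurement model and all numbers are assumptions for illustration; the paper's filter operates on a full 2-D pose.

```python
def kf_update(x_pred, P_pred, z, R, landmark):
    """Scalar Kalman correction of a 1-D robot position using a measured
    distance `z` to a landmark at known coordinate `landmark`.
    Measurement model (an assumption): z = landmark - x, i.e. the robot
    lies to the left of the landmark."""
    z_hat = landmark - x_pred           # predicted measurement
    H = -1.0                            # dz/dx for this model
    S = H * P_pred * H + R              # innovation covariance
    K = P_pred * H / S                  # Kalman gain
    x = x_pred + K * (z - z_hat)        # corrected position
    P = (1.0 - K * H) * P_pred          # corrected covariance
    return x, P
```

With a nearly noise-free range (R very small), the corrected position snaps to the value implied by the landmark measurement, which is exactly the behaviour that lets landmark sightings cancel accumulated odometry error.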
Besides that, there is also a popular technique called the Noise Visibility Function (NVF). It works on the basis of the noise visibility of an image and was modeled specifically for watermarking; it can be used as a texture masking function. The idea is to extract all edges and texture from the high-frequency bands and choose the most prominent among them for surveillance and navigation. Visual and night-vision images are fused to provide a better description of the current natural appearance. This approach is considered to improve the visual effect of the image significantly compared with existing methods.
Another approach is to process the camera data with image processing methods [2,3,9,13,14]. The segmented Hough transform and Canny's algorithm are the most common methods used in the analysis of corridor environments. The line segment detector (LSD) is an upgraded version of these, building on the methods proposed by Burns et al. and Desolneux et al. Both use edge detection and line extraction to obtain the vanishing point of the corridor. LSD is faster than the previous methods at obtaining lines from an image, but it cannot detect short lines, so some of the data in the image are lost. Besides that, block-based image processing is also used to obtain data about the environment. This method uses the image's pixels to get the information: grouping identical pixels into larger blocks improves the response time, but it is only effective where color differences are large, such as on roads. Hence this method is more suitable for analyzing outdoor environments.
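The vanishing-point step that both line-based methods share reduces to intersecting extracted line segments. A minimal sketch, intersecting two segments in homogeneous coordinates via cross products (the function and its two-segment interface are illustrative; real pipelines intersect many segments robustly):

```python
import numpy as np

def vanishing_point(seg1, seg2):
    """Estimate the corridor vanishing point as the intersection of two
    detected line segments, each given as ((x1, y1), (x2, y2)).
    Lines are intersected in homogeneous coordinates."""
    def line(seg):
        p = np.array([*seg[0], 1.0])
        q = np.array([*seg[1], 1.0])
        return np.cross(p, q)           # homogeneous line through p and q
    v = np.cross(line(seg1), line(seg2))
    if abs(v[2]) < 1e-12:
        return None                     # parallel segments: no finite VP
    return v[0] / v[2], v[1] / v[2]
```

Two corridor edges extracted as y = x and y = 4 - x meet at (2, 2), the point toward which the robot would be steered.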
2) Absolute Localization: In the absolute localization method, the robot estimates its current position by determining the distance from predefined locations, without regard to previous location estimates. Therefore, errors in the localization measurements do not accumulate. This method usually employs landmarks to estimate the robot's location. Landmarks are classified into active and passive landmarks. The former can be satellites or other radio-transmitting objects, and they actively send out information about the location of the robot. This has the advantage that the robot does not require prior information about the environment. However, the active landmarks' signals might be disturbed before being received by the robot, which causes measurement errors. The Global Positioning System (GPS) is frequently used to measure the absolute position of robots that use active landmarks. Passive landmarks do not send signals as active landmarks do; they must be actively seen and recognized by the robot in order for it to determine its location. Landmark recognition depends on the type of sensors used.
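Absolute localization from ranges to known landmarks is, in its simplest form, a trilateration problem. A sketch using the standard linearization (subtracting the first range equation from the others gives a linear least-squares system); the function and beacon layout are illustrative, not a specific system from the text:

```python
import numpy as np

def trilaterate(beacons, dists):
    """Absolute 2-D position from known landmark (beacon) positions and
    measured ranges.  Subtracting the first range equation
    |p - b0|^2 = d0^2 from the others yields a linear system A p = b,
    solved here in least squares."""
    b0, d0 = np.asarray(beacons[0], float), dists[0]
    A, b = [], []
    for bi, di in zip(beacons[1:], dists[1:]):
        bi = np.asarray(bi, float)
        A.append(2.0 * (bi - b0))                       # row: 2 (bi - b0)
        b.append(d0**2 - di**2 + bi @ bi - b0 @ b0)     # constant term
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return p
```

Because each fix depends only on the current ranges, an error in one measurement does not propagate to the next estimate, which is exactly the property the passage attributes to absolute localization.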
After obtaining a reliable set of inliers, the relative transformation between the current and the past view can be calculated in closed form by singular value decomposition, from the 3D reconstructions of the corners (Haralick et al., 1989; Arun et al., 1987). Thereafter, the reprojection error or, for the sake of speed, the ellipsoid error (Matthies and Shafer, 1987) can be minimized by non-linear optimization. Finally, the error of the transformation can be calculated by error propagation (Stelzer et al., 2012). The transformation error estimate is important for the following processing steps. Pure incremental visual odometry suffers from drift, since small errors accumulate over time. We use a keyframe-based approach that computes the transformation of the current frame to a few keyframes. The transformation with the minimal error is used for further processing. Then, the current frame is inserted into the keyframe list, replacing a keyframe that is no longer visible or one with a high transformation error. In this way, visual odometry is drift-free while the system is standing still. Furthermore, the keyframe approach reduces drift if the system moves slowly relative to the frame rate.
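The closed-form SVD step cited above (Arun et al., 1987) can be sketched directly: given matched 3D inlier reconstructions from the two views, it recovers the least-squares rigid transformation. This is a generic sketch of that published method, not the authors' exact implementation.

```python
import numpy as np

def rigid_transform(P, Q):
    """Closed-form least-squares rigid transform (Arun et al., 1987):
    finds R, t minimizing sum ||R P_i + t - Q_i||^2 over matched 3-D
    point sets P, Q of shape (N, 3)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)           # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflection solutions
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t
```

Applying a known rotation and translation to a point cloud and feeding both clouds back in recovers that same motion, which is the per-frame increment that the keyframe scheme then de-drifts.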
Navigation of a mobile robot usually relies on sensors such as ultrasonic sensors, infrared sensors, and laser rangefinders to detect objects and physical changes in the environment. However, the transmitted signals of these sensors are interrupted when a disturbance occurs in the surrounding environment, causing inaccurate signals and data to be received by the controller. A vision sensor, which is a camera-based sensor, is therefore preferable and was selected as the vision system for mobile robot navigation: because the data obtained by the camera is in image form, disturbances are less likely to corrupt the data transfer. A Pixy color camera is installed on an Arduino mobile robot, and the vision system is developed through programming in the Arduino IDE to integrate the camera module with the robot module. The vision system performs navigation and obstacle avoidance via several algorithms, including image processing, data extraction, and computation. The integrated vision-guided mobile robot is able to follow a line to navigate and to switch paths to avoid obstacles. The time taken by the vision-guided mobile robot in line following is much shorter than that of a mobile robot using an infrared sensor on the same path. The vision system is precise in detecting color differences during path switching, and shows high consistency in repeating the same path.
For testing the stereo algorithm, we placed the robot in a 60-meter-long corridor and programmed it to go from one end to the other while avoiding walls and obstacles, always trying to keep its initial orientation. Every time it detected an obstacle, it moved away from it. The corridor had varying lighting conditions: dark in some parts, well lit in others by fluorescent lamps, which generate a lot of image noise, or by direct sunlight from the side. Along the corridor there were 7 obstacles of different sizes, shapes and colors, such as a table, a chair, and cardboard and styrofoam boxes. During autonomous robot navigation we randomly placed ourselves in front of the robot, and other people occasionally passed in front of it as well. In the middle of the corridor there was a wider region with two pillars that the robot also had to avoid. We rearranged the obstacles in three different setups (A, B and C) and made 20 complete runs for each setup. The results can be seen in Table 1.
In this paper, we propose and validate a framework for visual navigation with collision avoidance for a wheeled mobile robot. Visual navigation consists of following a path, represented as an ordered set of key images, which have been acquired by an on-board camera in a teaching phase. While following such a path, the robot is able to avoid obstacles which were not present during teaching, and which are sensed by an on-board range scanner. Our control scheme guarantees that obstacle avoidance and navigation are achieved simultaneously. In fact, in the presence of obstacles, the camera pan angle is actuated to maintain scene visibility while the robot circumnavigates the obstacle. The risk of collision and the ensuing avoidance behaviour are determined using a tentacle-based approach. The framework can also deal with unavoidable obstacles, which make the robot decelerate and eventually stop. Simulated and real experiments show that with our method, the vehicle can navigate along a visual path while avoiding collisions.
Two map representations are commonly used, topological and metric, although the trend is to combine both, leading to a vast class of hybrid maps; see the literature for a review. In topological maps the environment is stored as linked nodes. They contain distinctive places and the connections between them, without metric information in their basic form. Metric maps store unambiguous locations of objects, usually in a 2D frame, which allows them to be precisely positioned. Objects may be stored from different points of view: they can be considered as points, as sets of points recorded from a surface, as corners with an associated orientation, or as lines defining polygonal boundaries. Alternatively, maps can contain a free-space representation, i.e., the portion of the environment that is accessible to the robot. This is the main idea behind occupancy grids.
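The free-space idea behind occupancy grids can be sketched with a single range update: cells along the sensed beam are free, the cell at the endpoint is occupied. This is a deliberately minimal binary version; practical grids store log-odds occupancy probabilities and use an exact ray-tracing algorithm.

```python
import numpy as np

def mark_ray(grid, origin, hit, resolution=1.0):
    """Update an occupancy grid (-1 = unknown, 0 = free, 1 = occupied)
    from one range reading: cells along the ray from `origin` to the
    sensed `hit` point are cleared, the endpoint cell is marked occupied.
    Grid indices are (row, col); a simplifying assumption here."""
    o = np.asarray(origin, float) / resolution
    h = np.asarray(hit, float) / resolution
    n = int(np.ceil(np.linalg.norm(h - o))) + 1
    for s in np.linspace(0.0, 1.0, n, endpoint=False):
        i, j = np.round(o + s * (h - o)).astype(int)
        grid[i, j] = 0                  # free space along the beam
    i, j = np.round(h).astype(int)
    grid[i, j] = 1                      # obstacle at the beam endpoint
    return grid
```

Repeating this update over many beams from many poses carves the accessible free space out of the unknown grid, which is exactly the free-space representation the passage describes.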
The blimp robot not only creates a good opportunity to explore environments, but also increases the efficiency of the exploration, since it has many advantages over small airplane robots, such as long hovering time, much lower energy consumption, very low noise, and cost efficiency, which make blimps ideal for exploring areas without disturbing the environment. On the other hand, visual navigation, especially for humans and vehicles, is currently one of the most active research topics in computer vision. The increase in robot applications has made computer vision an important factor in this research area, not only to put cameras in place of human eyes, but also to accomplish the entire task as autonomously as possible. Computer vision has been demonstrated to be a powerful, non-intrusive and low-cost sensor, useful for many applications in robotics and control systems. Hence, many robots can carry a light camera and use the images it obtains in autonomous tasks. Perhaps the most common way to classify computer vision in robots is by the complexity of the application. The common process in a vision system is visual tracking: the analysis of sequential images to identify a reference pattern and follow a moving point of interest or defined object over time in the image. There are many tracking methods, with algorithms based on features, color and shape. Visual odometry analyzes images to extrapolate the robot's movement in space from the image motion, and then to estimate the position and orientation of the robot. The next process is visual navigation, which uses the visual data to determine object positions as well as a safe path. Visual navigation can be classified as a map-based system. In this way, the robot makes a self