Intelligent vision-based navigation system for mobile robot: A technological review

Vision systems are becoming increasingly important. As computing technology advances, they have been widely utilized in many industrial and service sectors. One critical application of vision systems is navigating mobile robots safely, which requires several technological elements. This article reviews recent research on intelligent vision-based navigation systems for mobile robots, including the utilization of mobile robots in sectors such as manufacturing, warehousing, agriculture, outdoor navigation, and other services. Multiple intelligent algorithms used in developing robot vision systems are also reviewed.

Vision-Based Mobile Robot

This article was written jointly by Shiao Ying Shing, Yang Jui Liang, and Su Ding Tsair from the Department of Electrical Engineering at National Chang-hua University of Education, Taiwan. It provides an alternative vision-based approach to navigation. The control objective for the vision-based tracking problem is to manipulate the WMR so that it follows the desired trajectory. The system is composed of a ceiling-mounted fixed camera, whose outputs are connected to a host computer, and a WMR bearing two round marks of different colors on top to differentiate its front and rear, as shown in Figure 2.4.

SLAM Based Autonomous Mobile Robot Navigation using Stereo Vision

SLAM is considered an integral part of autonomous mobile robot navigation. It represents the ability to place an autonomous mobile robot at an unfamiliar location in an unknown environment, have it build a map using only relative observations of the environment, and then use this map simultaneously to navigate. The focal idea behind the SLAM technique is the observation of robot motion from an unknown starting posture or vicinity while maneuvering within an environment; furthermore, absolute feature locations are not accessible. It has been proposed to adopt a linear evolution of motion, since this yields a synchronous discrete-time robot model in which the observations of landmarks are known. Robot motion and the observation of features (the landmarks) are in fact nonlinear during navigation; however, using linear models does not reduce the accuracy of the approach. In this regard, within this paper, we use a nonlinear robot model in addition to nonlinear observation models. The system state includes the position and orientation of the robot, in addition to the positions of the landmarks. Denote the state of the KSU-IMR robot as X_ij(k). While maneuvering, the dynamical motion of the robot is modeled by a linear discrete-time state transition equation, Eq. (5):
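As a hedged illustration of how such a discrete-time state transition is propagated, the sketch below implements a generic EKF-style prediction step for a unicycle-model robot. The state layout [px, py, theta], the motion model, and the noise matrix Q are illustrative assumptions, not the paper's Eq. (5):

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """EKF-style prediction for a unicycle robot state x = [px, py, theta]."""
    px, py, th = x
    # Nonlinear discrete-time motion model (Euler integration)
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + w * dt])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    # Covariance propagation: uncertainty grows with motion
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

# Drive 1 m/s straight ahead for one second from the origin
x_pred, P_pred = ekf_predict(np.zeros(3), np.eye(3), v=1.0, w=0.0, dt=1.0,
                             Q=0.01 * np.eye(3))
```

In a full EKF-SLAM implementation, the state vector would also carry the landmark positions, and the Jacobian would be padded with identity blocks for them.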

Intelligent Fuzzy Logic based controller scheme for a mobile robot navigation

The research is motivated by the gap between currently available technology and new application demands. Current industrial robots used in production and manufacturing lack flexibility, adaptability, and autonomy: they typically perform pre-programmed sequences of operations, decided in advance by the programmer, in highly constrained environments, and prove unable to function in new environments or face unexpected situations. Soft computing techniques like fuzzy logic, which exploit the tolerance for imprecision inherent in most real-world systems to improve performance iteratively, have become hugely popular methods for controller design in mobile robotics. These techniques are used for expressing the various subjective uncertainties of human-like behavior. Real-world problems cannot be defined in crisp (hard) logic and are characterized by uncertainties, so a fuzzy-based modeling scheme was considered appropriate for designing the robot's controller to deal with real-life data.
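As a hedged illustration of how a fuzzy controller of this kind can be structured, the sketch below uses triangular membership functions and a small, made-up rule base for obstacle-avoidance steering. The universes, rules, and crisp consequents are illustrative assumptions, not the paper's design:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steer(obstacle_dist, obstacle_angle):
    """Weighted-average (Sugeno-style) defuzzification over three rules.
    Rules (illustrative): obstacle NEAR and LEFT -> turn right;
    NEAR and RIGHT -> turn left; FAR -> go straight."""
    near = tri(obstacle_dist, -0.5, 0.0, 1.0)   # distance in metres
    far = tri(obstacle_dist, 0.5, 2.0, 3.5)
    left = tri(obstacle_angle, -90, -45, 0)     # angle in degrees
    right = tri(obstacle_angle, 0, 45, 90)
    # (rule firing strength, crisp steering consequent in degrees)
    rules = [(min(near, left), 30.0),
             (min(near, right), -30.0),
             (far, 0.0)]
    num = sum(w * c for w, c in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0
```

For example, an obstacle dead ahead-left at zero distance fires the first rule fully and commands a hard right turn, while a far obstacle leaves the robot going straight.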

Semantic Segmentation to Develop an Indoor Navigation System for an Autonomous Mobile Robot

Received: 28 April 2020; Accepted: 23 May 2020; Published: 25 May 2020

Abstract: In this study, a semantic segmentation network is presented to develop an indoor navigation system for a mobile robot. Semantic segmentation can be applied using different techniques, such as a convolutional neural network (CNN). In the present work, however, a residual neural network is implemented using ResNet-18 transfer learning to distinguish between the floor, which is the navigable free space, and the walls, which are the obstacles. After the learning process, the semantic segmentation floor mask is used to implement indoor navigation and motion calculations for the autonomous mobile robot. These motion calculations are based on how much the estimated path differs from the center vertical line; the highest point is used to steer the motors toward that direction. In this way, the robot can move in a real scenario while avoiding different obstacles. Finally, the results are collected by analyzing the motor duty cycle and the neural network execution time to review the robot's performance. Moreover, a comparison of different networks is made to determine other architectures' reaction times and accuracy values.
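The deviation-from-center calculation described above can be sketched as follows. This is a hedged illustration assuming the floor mask is a boolean array; the exact steering law in the paper may differ:

```python
import numpy as np

def steering_from_floor_mask(mask):
    """mask: HxW boolean array, True where pixels are classified as floor.
    Returns a normalised steering command in [-1, 1]:
    negative steers left, positive steers right, 0 goes straight."""
    h, w = mask.shape
    rows = np.where(mask.any(axis=1))[0]
    if rows.size == 0:
        return 0.0                      # no free space detected: go straight/stop
    top_row = rows.min()                # highest (farthest) row containing floor
    # centre of the floor pixels in that row is the steering target
    target_col = np.where(mask[top_row])[0].mean()
    centre = (w - 1) / 2.0
    # offset of the target from the centre vertical line, normalised
    return (target_col - centre) / centre
```

The command can then be mapped to left/right motor duty cycles, e.g. by adding and subtracting it from a base speed.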

Implementation of Fuzzy Decision Based Mobile Robot Navigation Using Stereo Vision

In this article, we discuss the implementation phases for autonomous navigation of a mobile robotic system using SLAM-gathered data, relying on the features of learned navigation maps. The adopted SLAM-based learned maps relied entirely on active stereo vision for observing features of the navigation environment. We show the framework for the lower-level software coding that becomes necessary once vision is used for multiple purposes, distance measurement, and obstacle discovery. In addition, the article describes the adopted upper level of system intelligence, a fuzzy-based decision system. The proposed map-based fuzzy autonomous navigation was trained on data patterns gathered during numerous navigation tasks. Autonomous navigation was further validated and verified on a mobile robot platform.

Online Mapping-Based Navigation System for Wheeled Mobile Robot in Road Following and Roundabout

mobile robot between urban buildings [6]. GPS can be combined with LRF and dead reckoning for navigating vehicles on roads using beacons and landmarks [7]; combined with camera vision for localizing a mobile robot on color-marked roads [8]; combined with a 3D laser scanner for building compact maps to be used in miscellaneous mobile robot navigation applications [9]; combined with an IMU, camera vision, and a sonar altimeter for navigating an unmanned helicopter through urban buildings [10]; or combined with LRF and inertial sensors for leading the mobile robot ATRV-JR over paved and rugged terrain [11, 12]. GPS can be combined with INS and an odometry sensor to increase its positioning accuracy for vehicles [13]. GPS is combined with odometry and GIS (geographical information system) for road-matching-based vehicle navigation [14]. GPS can also be combined with LIDAR (light detection and ranging) and INS for leading a wheelchair among urban buildings by 3D map-based navigation [15]. GPS is combined with a video camera and INS for navigating vehicles by lane detection on roads [16]. GPS is combined with INS to improve position estimation of vehicles [17]. GPS is combined with INS and odometry for localizing a mobile robot among urban buildings [18]. A video camera with odometry is used for navigating a land vehicle, following roads by lane signs and avoiding obstacles [19]. An omni-directional infrared vision system can be used for localizing a patrol mobile robot in an electrical-station environment [20]. 3D map building and urban scenes are used in mobile robot navigation by fusing stereo vision and LRF [21].

Analysis On Cornering Performance Of PLC Based Mobile Robot Navigation System

completed by humans due to limited abilities. Robots are known to have a greater ability to do repetitive work with constant performance, to work in dangerous areas that could endanger human life, and to do the job faster with less rest time. According to Dudek and Jenkin (2000), a mobile robot is an autonomous system with the intelligent function of traversing terrain with natural or artificial obstacles. The chassis is fitted with wheels or legs, and possibly a manipulator setup mounted on the chassis for workpiece operations, tools, or special systems. A variety of preplanned operations are performed based on a preprogrammed navigation strategy that takes into account the current status of the environment.

Sensor-Based Intelligent Mobile Robot Navigation in Unknown Environments, Prasad Layam, Dr. Vitushi Sarma

fundamental to both cooperation and coordination, and hence the central role of the networked system. Embedded computers and sensors are now ubiquitous in homes and factories, and wireless ad-hoc networks or plug-and-play wired networks are increasingly commonplace. Robots are functioning in environments while performing tasks that require them to coordinate with other robots, cooperate with humans, and act on information derived from multiple sensors. In many cases, these human users, robots, and sensors are not collocated, and the coordination and communication happen through a network. Networked robots allow multiple robots and auxiliary entities to perform tasks well beyond the abilities of a single robot. Robots can automatically couple to perform locomotion and manipulation tasks that a single robot either cannot perform or would require a larger special-purpose robot to perform. They can also coordinate to perform search and reconnaissance tasks, exploiting the efficiency inherent in parallelism. Further, they can perform independent tasks that need to be coordinated. Another advantage of networked robots is improved efficiency. Tasks like searching or mapping are, in principle, performed faster with an increase in the number of robots. A speed-up in manufacturing operations can be achieved by deploying multiple robots performing operations in parallel but in a coordinated fashion. Perhaps the greatest advantage of using the network to connect robots is the ability to connect and harness physically removed assets.

Vision mobile robot system with color optical sensor

Nowadays many objects and machines have built-in sensors to make them work more easily, smoothly, and intelligently. An object with an intelligent system that is controlled by a controller and supported by vision sensors is called a Vision Mobile Robot System (VMRS). Many researchers have applied vision sensors to mobile robot systems [1, 2]. This kind of robot can identify the surrounding environment, and its next movement can be determined immediately to reach an intermediate or final goal. This is because vision sensors are low-cost and provide a vast amount of information about the environment in which robots move [3]. In addition, they are passive, so vision-based navigation systems do not suffer from the interference often observed when using active sound- or light-based proximity sensors [4]. The VMRS is mostly used in service robotics applications, where the robot is equipped with a camera and controlled from its visual perception, using visual data to control the motion of the robot [5].

Vision-Based Mobile Robot Self-localization and Mapping System for Indoor Environment

In this research, a self-localization and mapping system for autonomous mobile robot navigation is used in a structured indoor environment. It is based on a visual detection process fused with ultrasonic sensing and wheel-encoder data. Artificial landmarks were used for robot localization, with landmark distances and landmark recognition worked out using the camera and the ultrasonic sensor. By using the artificial landmarks, the robot can reduce uncertainty in the environment while estimating its position more accurately. In addition, the coordinates of each detected landmark were combined with the distance traveled by the robot, obtained from the wheel encoders, to estimate the robot's location. The system is based on a Kalman filter, which uses the previously observed position together with the detection of an artificial landmark to correct the position and orientation of the mobile robot. This method is applicable to mobile robot localization and was shown to achieve solutions with shorter computation times. The system only needs to focus on the area containing an artificial landmark to extract features during localization, so it can reduce the cost and computation time by up to 15 minutes thanks to the sensor fusion and image processing techniques. In this research, a map-based self-localization and mapping system is used to be more efficient in the structured indoor environment.
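As a hedged sketch of the correction step described above, the following implements a generic linear Kalman filter update. The state layout and the identity measurement model are illustrative assumptions, not the paper's actual filter:

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Linear Kalman filter correction step.
    x, P : predicted state and covariance
    z    : measurement (e.g. a landmark-derived position fix)
    H, R : measurement matrix and measurement noise covariance"""
    y = z - H @ x                        # innovation (measurement residual)
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ y                    # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P # corrected covariance
    return x_new, P_new

# A direct position fix [2, 0] with unit noise pulls a [0, 0] prediction halfway
x_new, P_new = kf_update(np.zeros(2), np.eye(2), np.array([2.0, 0.0]),
                         np.eye(2), np.eye(2))
```

With equal prior and measurement uncertainty, the gain is 0.5, so the corrected estimate lands midway between prediction and measurement and the covariance halves.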

Development Of A Mobile Robot For Night Vision Assistive System

Besides that, there is also a popular technique called the Noise Visibility Function (NVF). It works based on the noise visibility of an image and was modeled specially for watermarking; it can be used as a texture masking function. The idea is to extract all edges and texture from the high-frequency bands and choose the most prominent among them for surveillance and navigation. Visual and night-vision images are fused to provide a better description of the current natural appearance. The attraction is that it improves the visual effect of the image significantly compared with existing methods [10].

Mobile Robot Navigation System Vision Based through Indoor Corridors

Another approach is to process the camera data with image processing methods [2,3,9,13,14]. The segmented Hough transform and Canny's algorithm are the most common methods used in analyzing corridor environments. The line segment detector (LSD) is an improved version of these, building on the methods proposed by Burns et al. and Desolneux et al. Both approaches use edge detection and line extraction to obtain the vanishing point of the corridor. LSD is faster than the previous methods at extracting lines from an image, but it cannot detect short lines [8], so some of the data in the image will be lost. Besides that, a block-based image processing method is also used to obtain data about the environment [6]. This method uses the pixels of the image to get the information: grouping identical pixels into a larger block shortens the response time, but it is only effective where color differences are large, such as on roads. Hence this method is more suitable for analyzing outdoor environments.
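The vanishing-point idea mentioned above can be sketched as a least-squares intersection of extracted line segments. This is a minimal illustration, not the LSD or Hough pipelines themselves; the segment endpoints are assumed to come from a prior line-extraction step:

```python
import numpy as np

def vanishing_point(lines):
    """Estimate a vanishing point as the least-squares intersection of 2D lines.
    lines: list of ((x1, y1), (x2, y2)) segment endpoints."""
    A, b = [], []
    for (x1, y1), (x2, y2) in lines:
        # Normal form n . p = n . p1, with n perpendicular to the direction
        nx, ny = y2 - y1, x1 - x2
        A.append([nx, ny])
        b.append(nx * x1 + ny * y1)
    # Least squares handles the overdetermined case of many noisy segments
    sol, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return sol

# Two corridor edges crossing at (1, 1)
vp = vanishing_point([((0.0, 0.0), (2.0, 2.0)), ((0.0, 2.0), (2.0, 0.0))])
```

In a corridor image, the horizontal offset of this point from the image center gives the heading correction.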

Review of Vision Based Robot Navigation System in Dynamic Environment

2) Absolute Localization: In the absolute localization method, the robot estimates its current position by determining its distance from predefined locations, without regard to previous location estimates. Therefore, errors in the localization measurement do not accumulate. This method usually employs landmarks to estimate the robot's location. Landmarks are classified into active and passive landmarks. The former can be satellites or other radio-transmitting objects, and they actively send out information about the location of the robot. This has the advantage that the robot does not require prior information about the environment. However, the active landmarks' signals might be disturbed before being received by the robot, causing errors in the measurement [4]. The Global Positioning System (GPS) is frequently used to measure the absolute position of robots that use active landmarks. Passive landmarks do not send signals as active landmarks do; instead, they must be actively seen and recognized by the robot for it to determine its location. Landmark recognition depends on the type of sensors used.
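As an illustration of absolute localization from active landmarks, the sketch below estimates a 2D position from measured distances to known landmark locations by linearizing the range equations. The landmark layout is a made-up example, not taken from the reviewed paper:

```python
import numpy as np

def trilaterate(landmarks, dists):
    """Least-squares 2D position from distances to known landmarks.
    Subtracting the first range equation from the others removes the
    quadratic terms and linearises the system."""
    L = np.asarray(landmarks, float)
    d = np.asarray(dists, float)
    # For landmark i: 2(Li - L0) . p = d0^2 - di^2 + |Li|^2 - |L0|^2
    A = 2.0 * (L[1:] - L[0])
    b = (d[0] ** 2 - d[1:] ** 2) + (L[1:] ** 2).sum(axis=1) - (L[0] ** 2).sum()
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three landmarks, robot actually at (1, 1)
pos = trilaterate([(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)],
                  [2 ** 0.5, 10 ** 0.5, 10 ** 0.5])
```

Because no previous estimate enters the computation, an error in one fix does not propagate to the next, which is the key property of absolute localization described above.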

Computer Vision for Mobile Robot Navigation

After obtaining a reliable set of inliers, the relative transformation between the current and the past view can be calculated in closed form by singular value decomposition, from the 3D reconstructions of the corners (Haralick et al., 1989; Arun et al., 1987). Thereafter, the reprojection error, or for the sake of speed the ellipsoid error (Matthies and Shafer, 1987), can be minimized by non-linear optimization. Finally, the error of the transformation can be calculated by error propagation (Stelzer et al., 2012). The transformation error estimate is important for the following processing steps. Pure incremental visual odometry suffers from drift, since small errors accumulate over time. We use a keyframe-based approach that computes the transformation of the current frame to a few keyframes. The transformation with the minimal error is used for further processing. The current frame is then inserted into the keyframe list, replacing a keyframe that is no longer visible or one with a high transformation error. In this way, visual odometry is drift-free while the system is standing still. Furthermore, the keyframe approach reduces drift when the system is moving slowly relative to the frame rate used.
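The closed-form SVD solution cited above (Arun et al., 1987) can be sketched as follows. This is a standard textbook implementation, assuming matched 3xN point sets rather than the paper's full corner-reconstruction pipeline:

```python
import numpy as np

def rigid_transform(P, Q):
    """Closed-form rotation R and translation t with Q ~ R @ P + t,
    via SVD of the cross-covariance (Arun et al., 1987).
    P, Q: 3xN matched 3D point sets."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T            # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Given inlier correspondences between the current and a past view, this yields the incremental pose that the non-linear refinement then polishes.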

Optimization Of Vision-Guided Mobile Robot Navigation

The navigation of mobile robots typically relies on sensors such as ultrasonic sensors, infrared sensors, and laser rangefinders to detect objects and physical changes in the environment. However, the transmitted signals of these sensors are interrupted when disturbances occur in the surrounding environment, causing inaccurate signals and data to be received by the controller. A vision sensor, i.e., a camera-based sensor, is therefore preferable and was selected here for developing the mobile robot's navigation vision system. This is because the data obtained by a camera sensor is in image form, so disturbances are less likely to corrupt the transferred data. A Pixy color camera is installed on an Arduino mobile robot, and the vision system is developed through programming in the Arduino IDE to integrate the camera module with the robot module. The vision system performs navigation and obstacle avoidance via several algorithms for image processing, data extraction, and computation. The integrated vision-guided mobile robot is able to follow a line to navigate and to switch paths to avoid obstacles. The time taken by the vision-guided mobile robot in line following is much shorter than that of a mobile robot using an infrared sensor on the same path. The vision system is precise in detecting color differences during path switching, and it repeats the same path with high consistency.

Biologically Inspired Vision for Indoor Robot Navigation

To test the stereo algorithm we placed the robot in a 60-meter-long corridor and programmed it to go from one end to the other while avoiding walls and obstacles, always trying to keep its initial orientation. Every time it detected an obstacle it moved away from it. The corridor had varying lighting conditions: dark in some parts, well lit in others with fluorescent lamps, which generate a lot of image noise, or with direct sunlight from the side. Along the corridor there were 7 obstacles of different sizes, shapes, and colors, such as a table, a chair, and cardboard and styrofoam boxes. During autonomous robot navigation we randomly placed ourselves in front of the robot; other people occasionally passed in front of it as well. In the middle of the corridor there was a wider region with two pillars that the robot also had to avoid. We rearranged the obstacles in three different setups (A, B and C) and made 20 entire runs for each setup. The results can be seen in Table 1.

Visual navigation of a mobile robot with laser-based collision avoidance

In this paper, we propose and validate a framework for visual navigation with collision avoidance for a wheeled mobile robot. Visual navigation consists of following a path, represented as an ordered set of key images, which have been acquired by an on-board camera in a teaching phase. While following such path, the robot is able to avoid obstacles which were not present during teaching, and which are sensed by an on-board range scanner. Our control scheme guarantees that obstacle avoidance and navigation are achieved simultaneously. In fact, in the presence of obstacles, the camera pan angle is actuated to maintain scene visibility while the robot circumnavigates the obstacle. The risk of collision and the eventual avoiding behaviour are determined using a tentacle-based approach. The framework can also deal with unavoidable obstacles, which make the robot decelerate and eventually stop. Simulated and real experiments show that with our method, the vehicle can navigate along a visual path while avoiding collisions.

GIS Map Based Mobile Robot Navigation in Urban Environments

Two map representations are commonly used, topological and metric, although the trend is to combine both, leading to a vast class of hybrid maps; see [6] for a review. In topological maps the environment is stored as linked nodes. They contain distinctive places [9] and the connections between them, without metric information in their basic form [17]. Metric maps store unambiguous locations of objects, usually in a 2D frame, which allows them to be precisely positioned. Objects may be stored from different points of view: they can be considered punctual [5], as different points recorded from a surface [16], as corners with an associated orientation [7], or as lines defining polygonal boundaries [8]. Alternatively, maps can contain a free-space representation, i.e., the portion of the environment that is accessible to the robot. This is the main idea behind occupancy grids [12], [13].
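As a minimal illustration of the occupancy-grid idea, the sketch below performs the standard log-odds update of a single grid cell. The inverse-sensor probabilities p_hit and p_miss are illustrative assumptions:

```python
import math

def update_cell(log_odds, hit, p_hit=0.7, p_miss=0.4):
    """Log-odds update of one occupancy-grid cell from a range-sensor reading.
    `hit` is True when the beam ended in this cell, False when it passed through."""
    p = p_hit if hit else p_miss
    return log_odds + math.log(p / (1.0 - p))

def probability(log_odds):
    """Recover the occupancy probability from the accumulated log-odds."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

# Three consecutive hits push a cell from unknown (p = 0.5) toward occupied
l = 0.0
for _ in range(3):
    l = update_cell(l, hit=True)
```

Working in log-odds turns the Bayesian update into a simple addition per cell, which is why occupancy grids scale to large free-space maps.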

Ground Robot Navigation Relies on Blimp Vision Information

[2]. The blimp robot not only creates a good opportunity to explore environments, but also increases the efficiency of exploration, since it has many advantages over small airplane robots, such as long hovering time, much lower energy consumption, very low noise, and cost efficiency, which make it ideal for exploring areas without disturbing the environment [3]. On the other hand, visual navigation, especially for humans and vehicles, is currently one of the most active research topics in computer vision. The increase in robot applications has made computer vision an important factor in this research area, not only to put cameras in place of human eyes, but also to accomplish the entire task as autonomously as possible. Computer vision has been demonstrated to be a powerful, non-intrusive, and low-cost sensor useful for many applications in robotics and control systems. Hence, many robots can carry a light camera and use the images it obtains in autonomous tasks. Perhaps the most common way to classify computer vision in robots is by the degree of complexity of the application. The common process in a vision system is visual tracking: analyzing sequential images to identify a reference pattern and follow a moving interest point or defined object over time in the image. There are many tracking methods, with algorithms based on features, color, and shape [4]. Visual odometry analyzes images to extrapolate the robot's spatial movement from image motion, and then estimates the position and orientation of the robot [5]. The next process is visual navigation, which uses the visual data to determine object positions as well as a safe path [6]. Visual navigation can be classified as a map-based system. In this way, the robot makes a self
