images are down-sampled to 320×240 resolution for use in the navigation algorithm. The camera field of view is 41 degrees, and the distortion coefficients were calculated in MATLAB using the Camera Calibration Toolbox. The vehicle receives power and communicates with the ground control station over a microfilament, which can currently be extended up to 50 m. The microfilament also serves as an analog video link. The main reason for the microfilament is to overcome the flight-endurance limitation (less than 8 min) imposed by state-of-the-art batteries; the power transferred through the cable allows the EASE to operate over an indefinite interval. Because this cable is already in place, the vision-aided processing was performed off-board using an analog video link bootstrapped onto it. Since the video link is analog, it induces no more delay in transmitting the images than if the data were processed onboard. The V-INS and the position control algorithms run on a computer on the ground, and actuator commands are sent to the vehicle. The vehicle itself runs a high-frequency (>600 Hz) inner-loop rate controller with a washout filter for attitude feedback. The computers on the GTMax, two single-core 1.6 GHz Pentium M machines, are comparable to the latest dual-core Atom processors, which could easily be carried by the EASE. Hence, the platform's ability to carry the required computational resources is not the limiting factor in implementing our algorithms onboard the EASE. Figure 1 shows a picture of the EASE. The baseline controller for the EASE consists of an adaptive attitude and velocity controller that is based on the GTMax adaptive flight controller (see (Johnson and Kannan, 2005)) and is implemented offboard. In addition, it features a high-bandwidth rate-damping loop implemented onboard. As on the GTMax, the EKF described in Section 4 processes vision data and makes vision-based updates at 20 Hz, and also features a sequential update.
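The attitude feedback mentioned above passes through a washout (high-pass) filter, which lets transients through to the rate-damping loop while removing steady-state offsets. A minimal discrete sketch is below; the 600 Hz sample rate follows the text, but the time constant and implementation are illustrative, not taken from the paper:

```python
def washout_filter(x, dt=1.0 / 600.0, tau=0.5):
    """Discrete first-order washout (high-pass) filter.

    Passes transient attitude changes through while washing out
    constant offsets. dt matches a >600 Hz loop; tau is an
    illustrative time constant, not a value from the paper.
    """
    a = tau / (tau + dt)  # filter coefficient, close to 1 for tau >> dt
    y_prev, x_prev = 0.0, 0.0
    out = []
    for xk in x:
        yk = a * (y_prev + xk - x_prev)
        out.append(yk)
        y_prev, x_prev = yk, xk
    return out
```

Feeding the filter a constant (e.g. a steady attitude offset) produces an output that starts near the step size and decays toward zero, which is exactly the washout behavior the controller relies on.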
Personal navigation requires technologies that are immune to signal obstructions and fading. One of the major challenges is obtaining a good heading solution in different environments and for different user positions without external absolute reference signals. Part of this challenge arises from the complexity and freedom of movement of a typical handheld user: heading observability degrades considerably during low-speed walking. However, for short periods, the relative attitude and heading information is quite reliable. Self-contained systems requiring minimal infrastructure, for example inertial measurement units (IMUs), stand as a viable option, since pedestrian navigation is focused not only on outdoor navigation but also on indoor navigation.
This paper has presented the design and implementation details of a low-cost UAS capable of semi-autonomous manoeuvre in extremely low-light indoor environments without the need for a GPS signal. It is also able to stream both visible and thermal video to the ground station in real time. The test data have shown that, with the present control and sensor implementation, the aerial platform can reliably manoeuvre in the testing environment with the user sending only velocity, altitude and heading commands, which greatly reduces the skill required of a human pilot. Moreover, the results have also demonstrated the effectiveness of the customised low-cost thermal camera at gathering heat signatures in such environments, which shows the great potential of deploying such a system in safety, security, and search-and-rescue scenarios.
However, for most practical safety, security, and search-and-rescue tasks, especially in indoor scenarios, sufficient lighting for either computer-vision-based autonomous control or vision-based data gathering is rarely available. The problem can be solved with a thermal imaging camera payload, but current off-the-shelf stabilised dual visible/thermal camera payloads cost from $4000 upwards. Therefore, to be realistically deployable in such hazard-rich environments, the system must be cost-effective so that the loss from any failure is limited.
The Global Positioning System (GPS) is used in multifarious systems as a position estimation sensor for vehicle navigation. Most inertial navigation systems (INS) depend on GPS to correct for drift errors that accumulate when processing inertial measurement unit (IMU) data (i.e., accelerometer and rate-gyro data). For many scenarios of interest, however, such as UAV missions in urban environments, indoor operations, or space exploration missions, GPS data may be intermittent, corrupted, or completely denied. As a result, there has been an increasing need for navigation solutions that do not depend on GPS. With advances in computer vision and control theory for autonomous navigation, monocular camera systems on unmanned vehicles have received growing interest as an alternative sensor to GPS. Vision-aided navigation systems therefore represent a potentially important enabling technology for autonomous vehicle development. The diverse variety of available image processing algorithms is a primary reason for developing, implementing and testing two completely different tracking algorithms for advanced navigation applications with unmanned systems.
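The drift referred to above can be illustrated with a toy dead-reckoning loop: a small constant accelerometer bias, integrated twice, grows quadratically into position error, which is why an external correction such as GPS (or vision) is needed. The bias value and rates below are illustrative:

```python
def dead_reckon_position_error(bias=0.01, dt=0.01, duration=60.0):
    """Integrate a constant accelerometer bias (m/s^2) twice.

    A stationary vehicle should report zero displacement, but the
    uncorrected bias produces a position error that grows roughly
    as 0.5 * bias * t^2. All numbers here are illustrative.
    """
    v = p = 0.0
    for _ in range(int(duration / dt)):
        v += bias * dt  # velocity drift from the bias
        p += v * dt     # position drift from the drifting velocity
    return p
```

With a modest 0.01 m/s² bias, the position estimate drifts by roughly 18 m after only one minute, illustrating why pure inertial navigation degrades so quickly without aiding.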
Table 4.4 shows the ATEs resulting from these HLVN variants on the three datasets. We find that the inclusion of more feature types helps reduce the ATEs, though the improvement may vary across scenarios. For example, the error reduction brought by more feature types is less significant on Bicocca than on HRBB4 or Malaga6; this conforms to the observation in Fig. 4.10(a) that key points dominate the overall cost of LBA throughout Bicocca. Unfortunately it is hard to judge the relative importance of the individual feature types in general, since it is essentially a scene-dependent problem. Nonetheless, the result implies that exploiting more feature types and geometric constraints, whenever available, effectively reduces the overall estimation error. This justifies our choice of fusing heterogeneous landmarks for visual navigation in man-made environments, despite the higher computational demands.
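The ATE metric used above is commonly computed as the RMSE of per-pose position differences between the estimated and ground-truth trajectories. A minimal sketch, assuming 2-D poses that are already time-synchronized and expressed in a common frame (a full evaluation would first apply a rigid-body or similarity alignment):

```python
import math

def ate_rmse(estimated, ground_truth):
    """Absolute trajectory error: RMSE of per-pose position differences.

    Assumes the two trajectories are time-synchronized and aligned;
    poses are (x, y) tuples. Function name and 2-D simplification are
    illustrative, not from the thesis.
    """
    assert len(estimated) == len(ground_truth)
    sq = 0.0
    for (xe, ye), (xg, yg) in zip(estimated, ground_truth):
        sq += (xe - xg) ** 2 + (ye - yg) ** 2
    return math.sqrt(sq / len(estimated))
```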
This section presents the various indoor positioning techniques that are appropriate for WiFi devices. The majority of the available research focuses on trilateration, using the RSSI to calculate distances, although several fairly recent articles have explored a cell-based approach. Research is also being done on fingerprinting techniques.
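The RSSI-based trilateration approach mentioned above can be sketched in two steps: convert each RSSI reading to a range using a log-distance path-loss model, then intersect the range circles. The parameters (reference power at 1 m, path-loss exponent) are illustrative and must be calibrated per environment:

```python
import math

def rssi_to_distance(rssi, p0=-40.0, n=2.0):
    """Log-distance path-loss model: range (m) from an RSSI reading (dBm).

    p0 is the RSSI at 1 m and n the path-loss exponent; both are
    environment-dependent illustrative values.
    """
    return 10 ** ((p0 - rssi) / (10.0 * n))

def trilaterate(anchors, dists):
    """Position from three anchors (x, y) and their ranges.

    Subtracting the first circle equation from the other two
    linearizes the problem into a 2x2 system, solved here with
    Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x1 - x2), 2 * (y1 - y2)
    b1 = d2**2 - d1**2 - x2**2 + x1**2 - y2**2 + y1**2
    a21, a22 = 2 * (x1 - x3), 2 * (y1 - y3)
    b2 = d3**2 - d1**2 - x3**2 + x1**2 - y3**2 + y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

In practice RSSI is noisy, so the ranges rarely intersect in a single point; real systems use more than three access points and a least-squares or filtered solution.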
Radio Frequency Identification (RFID) tags are seen as another approach to indoor navigation and are considered a promising technology for positioning systems. RFID tags use radio waves to capture and exchange information; they mainly identify and track tags attached to objects. RFID is an important tool for detecting objects that are occluded from satellite visibility. It has attracted strong research interest owing to its ability to provide low cost and high output capacity. RFID tags have a simple design with extremely low power consumption, thereby lowering the cost of the system.
Many types of sensors are used for robot navigation. Some are cheap, such as infrared and ultrasonic rangefinders, RFID [12,7] and GPS [10,17], but others can be quite expensive, like laser rangefinders. They allow robots to acquire information about the environment within certain ranges and under certain environmental conditions. However, such sensors are not always appropriate if we want to build a robot that can adapt to changes in the complex world. The use of cameras as sensors offers new possibilities. Vision can provide information about odometry and obstacles in the path of the robot, or find landmarks and objects along the path. A robot can also detect humans and interact with them by understanding their gestures. The advent of low-cost RGB-D sensors has spawned a lot of interest, since they provide reliable depth maps with far less effort than stereo vision [8,2]. However, such sensors also have their limitations, such as their limited range and the fact that they can only be used indoors.
In this report, we propose a navigation system for smartphones capable of guiding users accurately to their destinations in an unfamiliar indoor environment, without requiring any expensive alterations to the infrastructure or any prior knowledge of the site’s layout.
If determining latitude is a relatively easy task, as it is for humans, it seems reasonable to assume that this is also the case in animals. Indeed, a number of studies have indicated how animals can recognise latitudinal displacements. The best evidence for latitudinal cues used in navigation comes not from migrating birds but from newts, juvenile turtles and lobsters (Boles and Lohmann, 2003; Fischer et al., 2001; Lohmann et al., 2004); all three studies have indicated that changing the intensity and inclination of the magnetic field to a value far outside the natural one at the site of testing results in the animal perceiving its location as latitudinally displaced. Whether this represents the use of a gradient map to allow determination of precise latitudinal displacement or a simpler system in which the animal relies on a rule of thumb to the effect of ‘when the magnetic field is greater than the goal, orient southward until it matches the home value’ has not yet been determined as these have not been combined with longitudinal displacements. More curious is that the distances over which these animals would normally be required to home are in the region of 10–30km or less. Because of local variation in the magnetic field, it has been proposed that a magnetic map is inoperative or, at best, highly inaccurate over these distances (Bingman and Cheng, 2006; Phillips et al., 2006).
PERCEPT (Ganz et al., 2011) was proposed as a system to improve visually impaired people's perception of indoor environments. The system has several parts: passive RFID tags installed in the environment, a handheld Android device designed for this project, the PERCEPT glove, and the PERCEPT server. The Android device is essentially the client side of the system and has several distinct modules: a Bluetooth module, the PERCEPT application, a Wi-Fi module, and a text-to-speech engine. The server side of the system is responsible for generating and storing the building information and the RFID tag deployment. The maps are prepared using Quantum GIS. Based on its components, the system can be divided into three parts: the environment, the PERCEPT glove and Android client, and the PERCEPT server.
Since 2010, Quantitec GmbH has been developing an indoor localization system that unites two technologies from aviation and aerospace. Thanks to the fusion of inertial-sensor technology and radio-based time-of-flight measurement, IntraNav (Intralogistics Navigation) is presented as the world's most cost-effective, accurate and industrial-grade positioning system for indoor and outdoor areas.
but at the same time is cost-effective and easy to implement. The proposed navigation strategy uses the locations of three laser dots, generated by three laser beams fixed to the body of the vehicle, to identify the state of the UAV without requiring an optimization algorithm, a data-fusion estimator such as the extended Kalman filter, or additional GPS/IMU sensors. To capture the laser dots on the ground and analyze their coordinates, a computer vision technique is used, e.g., the Scale Invariant Feature Transform (SIFT) algorithm. The computer vision algorithm is needed only to identify the positions of the three laser dots; the required image processing is therefore simple and can be done on board. This in turn helps to reduce the computational cost of the algorithm and increase the level of autonomy of the vehicle. Furthermore, the coordinates of the laser dots can be identified by recognizing a single landmark in the image, which makes image quality less critical than in other vision-based navigation strategies. The proposed system is developed for indoor use; however, it can also be used outdoors where the UAV flies at low altitude above planar surfaces such as sports fields.
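To illustrate the geometry behind the laser-dot idea, the sketch below recovers altitude from the image separation of two parallel, downward-pointing beams under a pinhole camera model. This is a deliberate simplification of the three-dot scheme (which additionally recovers attitude), and all numeric parameters are hypothetical:

```python
def altitude_from_dot_separation(pixel_sep, focal_px, baseline_m):
    """Altitude above a flat surface from the image separation of two
    laser dots.

    Simplified two-beam illustration: beams parallel to the camera
    axis, separated by a known baseline B, mark dots whose ground
    separation also equals B. Under a pinhole model the pixel
    separation is d = f * B / h, so h = f * B / d. Focal length and
    baseline values are illustrative, not from the paper.
    """
    return focal_px * baseline_m / pixel_sep
```

For example, with a 500 px focal length and a 0.4 m baseline, dots observed 20 px apart correspond to an altitude of 10 m; the separation shrinks as the vehicle climbs.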
For many years, researchers have sought to develop a methodology for the automatic generation of indoor navigation models [4–6]. Two main types of methods have been proposed: geometric methods that support the generation of Geometric Network Models (GNM) and grid methods for developing Regular Grid Models (RGM). Geometric methods rely on the topological relations between neighboring structures [4,8], and they are referred to as Node-Relation Structure (NRS) models. A Node-Relation Structure is represented by graphs composed of nodes and edges. The nodes are usually centroids that identify structures, and the edges are lines connecting the centroids of neighboring structures. A similar representation is used in the grid model, where the elements of the grid constitute structures [9–11]. Structures are usually described with semantic data, such as type of use; a building can be composed of corridors, rooms and stairs. The NRS model features sub-graphs describing only the relations that are based on semantic data, such as corridor-room relations. Lee [4,8] developed a method for transforming the NRS model into a network with the use of the Medial Axis Transformation (MAT) algorithm, which has been extensively described in the literature [12–14]. The MAT algorithm has also been deployed in other solutions [15,7]. The results were generally satisfactory, but in open spaces (crossing paths), the generated lines were curved. Certain modifications, such as increasing the density of the initial model, are required to eliminate curved lines. However, this method generates lines that only approximate the real middle line. Numerous solutions have been proposed to deal with this problem [18,19].
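The NRS construction described above (nodes at structure centroids, edges between neighboring structures) can be sketched as follows; the structure names, centroids and adjacency relations are illustrative:

```python
import math

def build_nrs_graph(structures, adjacency):
    """Build a Node-Relation Structure graph for indoor navigation.

    structures: {name: (cx, cy)} centroids of rooms/corridors/stairs.
    adjacency: iterable of (name_a, name_b) pairs for neighboring
    structures (e.g. a corridor-room relation through a door).
    Returns edges as (a, b, length): straight centroid-to-centroid
    links, i.e. the geometric network model. Layout is illustrative.
    """
    edges = []
    for a, b in adjacency:
        (xa, ya), (xb, yb) = structures[a], structures[b]
        edges.append((a, b, math.hypot(xb - xa, yb - ya)))
    return edges

# Example floor: one corridor connected to two rooms.
rooms = {"corridor": (5.0, 1.0), "room101": (2.0, 4.0), "room102": (8.0, 5.0)}
graph = build_nrs_graph(rooms, [("corridor", "room101"), ("corridor", "room102")])
```

A semantic sub-graph (e.g. only corridor-room relations) falls out naturally by filtering the adjacency pairs before building the edges.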
to General Motors in 1960; while the Elmer robot (1949) could be considered the first mobile robot able to avoid obstacles and move to a charging station. Since then, robotics has made great strides, thanks to powerful processors, artificial intelligence, sensory systems and many other fields that have offered promising applications for mobile robotics in daily life. Nowadays, the world is at the door of the fourth industrial revolution, in which robotics is one of the fundamental pillars. That is why major companies and research institutes are investing intensively in this field. According to the International Federation of Robotics (IFR), the total number of service robots sold in 2014 was 24,207, which shows that robotics technology promises solutions in the near future. Many robotic systems are in commercial use, such as robotic arms, quadrotors, drones, indoor mobile robots, surgical robots and space robots. Fast progress in mechanical engineering, electrical engineering, computer systems and artificial intelligence will bring robots into many other fields, such as self-driving cars, the military, industry and life-science laboratories.
length of 4 or 5 bits, the GALILEO E1 OS interleaver cannot reduce the effect of the burst at all. Moreover, the short size of the GALILEO E1 OS signal words means that some error bursts are longer than the word itself, making it impossible to recover the word with any existing channel code, since the interleaving is applied only over individual words. Therefore, the difference in demodulation performance between the signals is increased compared to the AWGN case and to the 30 km/h case. This difference is further increased by the fact that the LDPC code implemented for the GPS L1C signal is much more robust to error bursts than the convolutional code implemented for the GALILEO E1 OS signal. Some representative values are 5.4 dB for a BER of 10^-5, 4.8 dB for a WER of 10^-3 and 5.7 dB for an EER of 10^-3. As in the AWGN case, these differences grow as the C/N0 increases. Additionally, the mobile channel results and the AWGN channel results confirm that the strategy of dividing the transmission of the ephemeris data set into 4 words does not work in either an AWGN channel or a mobile channel. Indeed, the difference between the signals at the same WER and EER values is increased. It was noted earlier that the demodulation performance results presented above were the best individual results that each signal could expect to obtain in the simulated scenario. Nevertheless, the average ephemeris data demodulation performance obtained by the GPS L1C signal was also analyzed in this chapter. The conclusions are presented below.
Throughout this study, random processes are modeled as first-order Gauss-Markov or additive white-noise processes. Furthermore, errors due to quantization, averaging, rounding of measured values and conversion between data types are ignored. The accuracy of the navigation solution is established with respect to the control points (CPs) defined for the test trajectory. These reference CPs are precisely determined by differential GPS.
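A first-order Gauss-Markov process, commonly used to model slowly varying inertial-sensor errors, can be simulated with the standard discretization x[k+1] = exp(-dt/tau)·x[k] + w[k], with the driving noise scaled so the stationary standard deviation is sigma. The time constant and sigma below are illustrative, not values from the study:

```python
import math
import random

def gauss_markov(n, dt=0.01, tau=1.0, sigma=0.05, seed=0):
    """Simulate a first-order Gauss-Markov process.

    Discrete form: x[k+1] = phi * x[k] + w[k], phi = exp(-dt/tau),
    with w[k] ~ N(0, sigma^2 * (1 - phi^2)) so the stationary
    standard deviation equals sigma. tau and sigma are illustrative.
    """
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)
    q = sigma * math.sqrt(1.0 - phi * phi)  # driving-noise std
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, q)
        out.append(x)
    return out
```

Unlike white noise, consecutive samples are correlated over roughly tau seconds, which is what makes this model suitable for slowly wandering sensor biases.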