The Tarot FY680 is a hexacopter frame with a 695 mm diameter and a 180 mm height. The hexarotor frame is intended to support a total body weight of approximately 6 kg at maximum. The DJI E800 propulsion package comprises six sets of a 1345 propeller, a 3510/350KV brushless direct current (BLDC) motor, and a 6S 20A electronic speed controller (ESC). The 1345 propeller has a 13-inch (33 cm) diameter and a pitch of 4.5 inches (11.43 cm) of travel per revolution. The BLDC motor has a stator size of 35 mm × 10 mm and a velocity constant of 350 rpm per volt. Unlike conventional brushed DC motors, BLDC motors replace brushes with a polarity-alternating stator and permanent magnets; they are preferred because they are more efficient and deliver more robust output than brushed DC motors. The BLDC motor is driven by the ESC, which expects a pulse width modulation (PWM) signal within a certain range; the ESC accepts input signal frequencies from 30 Hz to 450 Hz. For consistent control, synchronized ESC-motor performance is desired. The DJI E800 propulsion system is pre-tuned by the manufacturer so that the same PWM input is guaranteed to produce the same RPM.
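As an illustration of the throttle-to-PWM mapping and the KV relationship described above, a minimal sketch follows; the 1000–2000 µs pulse range, the helper names, and the 6S pack voltage of 22.2 V are illustrative assumptions, not manufacturer specifications:

```python
def throttle_to_pulse_us(throttle, min_us=1000, max_us=2000):
    """Map a normalized throttle command in [0, 1] to an ESC pulse width
    in microseconds.  1000-2000 us is the common hobby-ESC convention;
    the actual endpoints depend on ESC calibration."""
    throttle = max(0.0, min(1.0, throttle))
    return min_us + throttle * (max_us - min_us)

def no_load_rpm(kv, voltage):
    """Idealized no-load speed of a BLDC motor: KV (rpm/V) times supply
    voltage.  Loaded RPM under a propeller is lower."""
    return kv * voltage

pulse = throttle_to_pulse_us(0.5)   # half throttle -> 1500 us
rpm = no_load_rpm(350, 22.2)        # 6S pack at ~3.7 V/cell
```

The pre-tuned PWM-to-RPM guarantee stated above means this mapping is repeatable across the six motor/ESC sets, which is what makes open-loop thrust commands usable.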
However, for most practical safety, security, and search and rescue tasks, especially in indoor scenarios, sufficient lighting for either computer vision-based autonomous control or vision-based data gathering is rarely available. The problem can be solved with a thermal imaging camera payload, but current off-the-shelf stabilised dual visible/thermal camera payloads cost upwards of $4000. To be realistically deployable in such hazard-rich environments, the system must therefore be cost effective, so that the consequences of any failure are reduced.
This paper has presented the design and implementation of a low-cost UAS capable of semi-autonomous manoeuvre in extremely low-light indoor environments without the need for a GPS signal. It can also stream both visible and thermal video to a ground station in real time. The test data show that, with the present control and sensor implementation, the aerial platform can reliably manoeuvre in the testing environment with the user sending only velocity, altitude, and heading commands, which greatly simplifies the skill requirement for a human pilot. Moreover, the results also demonstrate the effectiveness of the customised low-cost thermal camera for gathering heat signatures in such environments, showing the great potential of deploying such a system in safety, security, and search and rescue scenarios.
The proposed filter architecture has been tested on real flight-test data and on board an autonomous UAV helicopter. The helicopter is based on a commercial Yamaha Rmax UAV helicopter (Figure 1). The total helicopter length is about 3.6 m. It is powered by a 21 hp two-stroke engine, and it has a maximum take-off weight of 95 kg. The avionic system was developed at the Department of Computer and Information Science at Linköping University and has been integrated in the Rmax helicopter. The platform developed is capable of fully autonomous flight from take-off to landing. The sensors used in this work consist of an inertial measurement unit (three accelerometers and three gyros), which provides the helicopter's acceleration and angular rate along the three body axes, a barometric altitude sensor, and a monocular CCD video camera mounted on a pan/tilt unit. The avionic system is based on three embedded computers. The primary flight computer is a PC104 Pentium III 700 MHz. It implements the low-level control system, which includes the control modes (take-off, hovering, path following, landing, etc.), sensor data acquisition, and communication with the helicopter platform. The second computer, also a PC104 Pentium III 700 MHz, implements the image processing functionalities and controls the camera pan/tilt unit. The third computer, a PC104 Pentium-M 1.4 GHz, implements high-level functionalities such as path planning and task planning. Network communication between computers is
navigation strategies. Some authors centre their research on detecting and tracking the ground plane across consecutive images and steering the robot towards free space. Pears and Liang (2001) use homographies to track ground-plane corners in indoor environments, with a new navigation algorithm called the H-based Tracker. Liang and Pears (2002) extend this work, also using homographies, to calculate the height of tracked features or obstacles above the ground plane during navigation. Accuracy is a strategic concern for aerial navigation, where speeds are high, processing time must be kept short, and tracking must be correspondingly more accurate. Ollero et al. (2004) maintain an estimate of the homography matrix to compensate for the UAV's motion and detect objects. Saeedi et al. (2006) use stereo vision to navigate in unstructured indoor/outdoor environments; detected corner features are positioned in 3D and tracked using normalized mean-squared differences and correlation measurements. Supporting vision information with GNSS data in outdoor environments is another way of increasing the reliability of position estimation. Mejias et al. (2005) combined a feature-tracking algorithm with GNSS positioning in the navigation system of the autonomous helicopter AVATAR. The vision process combined image segmentation and binarization to identify pre-defined features, such as house windows, with a Kalman filter-based algorithm to match and track these windows. The Scale Invariant Feature Transform (SIFT) method, developed by Lowe (1999), stands out among image feature and interest point detection techniques, and has become a method commonly used in landmark detection applications. SIFT-based methods extract features that are invariant to image scaling, rotation, and illumination or camera viewpoint changes.
During the UAV navigation process, SIFT features are used as landmarks to be tracked for navigation, global localization (Se et al., 2005) and vision-based SLAM (Se et al., 2002).
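The planar homography underlying the ground-plane tracking methods above can be estimated from point correspondences with the Direct Linear Transform (DLT). The following NumPy sketch is generic, not any cited author's implementation; the example transform is made up for the demonstration:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 planar homography H mapping src -> dst via the
    Direct Linear Transform.  src, dst: (N, 2) arrays, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of the constraint A h = 0.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The smallest right singular vector of A gives h (up to scale).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Four ground-plane corners and their images under a known transform.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
H_true = np.array([[1.2, 0.1, 3.0], [0.0, 0.9, -2.0], [0.001, 0.002, 1.0]])
pts = np.c_[src, np.ones(4)] @ H_true.T
dst = pts[:, :2] / pts[:, 2:]
H_est = estimate_homography(src, dst)
```

In practice the correspondences would come from tracked SIFT features and be filtered with RANSAC before the DLT step.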
therefore are very attractive for MAVs with limited payloads. Approaches have been proposed that use an optical flow-based velocity estimator, or a monocular SLAM framework [87, 113, 115], as the main vision processing pipeline. In conjunction with a loosely coupled filtering framework, these approaches successfully enable autonomous quadrotor flight via a downward-facing camera. However, because a monocular vision system provides no direct distance measurement, these approaches rely on the assumption of a slowly varying, or well-initialized, visual scale. This can be difficult to enforce during fast motion at low altitude in unknown environments with potentially rapid changes in the observed features. On the other hand, stereo vision-based state estimation approaches for autonomous MAVs, such as those proposed in [27, 89], have the advantage of direct scale observation. RGB-D sensors have even higher depth measurement accuracy than stereo-based systems, but are inoperable outdoors in strong sunlight. On the other end of the spectrum, Bills et al. propose a map-less navigation algorithm for a MAV that circumvents this challenge, but at the cost of not building a global representation of the environment.
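The direct scale observation of stereo rests on the disparity-to-depth relation Z = fB/d for a rectified pair, which monocular systems lack. A minimal sketch (the focal length, baseline, and disparity values are illustrative):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Metric depth from a rectified stereo pair: Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: camera separation in
    metres; disparity_px: horizontal pixel shift of the feature."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A feature with 20 px disparity seen by a 400 px focal, 10 cm baseline rig:
z = stereo_depth(400.0, 0.10, 20.0)
```

The relation also shows why depth accuracy degrades quadratically with range: a fixed disparity error produces an error in Z proportional to Z² / (fB).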
Algorithms for simultaneous estimation of vehicle altitude and the elevation of underlying surfaces, in addition to path planning and obstacle avoidance, have been examined. Customized hardware solutions addressing these design considerations have been explored, while most commercial drones used for research purposes are not easily modifiable for customized control. Algorithms for obstacle detection and avoidance include camera- and laser-based SLAM and CEO-based fuzzy logic control. Grzonka implements a 2-D SLAM algorithm in which a quadrotor platform, a derivative of the Mikrokopter, operates in a large class of indoor environments using efficient 2-D variants that work on dense grid maps. The navigation system is based on a modular architecture in which different modules communicate over the network using a publish-subscribe mechanism. In a separate study, a novel CEO-based Fuzzy Logic Controller (FLC) for a fail-safe UAV has been presented by the authors to expand its collision avoidance capabilities in GPS-denied environments using a monocular visual-inertial SLAM-based strategy. Obstacle avoidance has also been implemented using ultrasonic sensor technology with a state machine and PID controller, and with vision-based sensors combining camera tracking with blob detection methods.
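As a generic illustration of the ultrasonic-plus-PID style of obstacle avoidance mentioned above, a minimal discrete PID controller follows; the gains, the 1 m distance setpoint, and the class name are assumptions for illustration, not values from any cited system:

```python
class PID:
    """Minimal discrete PID controller holding a distance setpoint."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        # No derivative term on the very first sample.
        derivative = 0.0 if self.prev_error is None else (
            error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Hold 1.0 m from an obstacle using ultrasonic range readings at 20 Hz:
pid = PID(kp=0.8, ki=0.1, kd=0.2, setpoint=1.0)
cmd = pid.update(measurement=1.5, dt=0.05)
```

A state machine would typically sit above this loop, switching between cruise and avoidance modes depending on whether the range reading crosses a threshold.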
complex tasks autonomously and in real time. The proposed algorithms address the navigation problem for outdoor as well as indoor environments, mainly based on visual information captured by monocular cameras. In addition, this dissertation presents the advantages of using visual sensors as the main source of data, or as a complement to other sensors, in order to improve the accuracy and robustness of sensing.
The thesis begins with a literature review (Chapter 2) that explores and outlines the current state-of-the-art methods used by both traditional and visual positioning systems for unmanned aerial vehicles. It proceeds to review current positioning methods and other sensors commonly available on unmanned vehicles. The review then describes and discusses methods for a higher-level visual navigation system, using feature description and matching methods based on work in other fields. The literature review has two aims: the first is to demonstrate that current work in the field of visual positioning is focused on approaches distinct from the method proposed by this thesis. The second is to demonstrate that the algorithms surrounding the feature descriptor and matcher, such as landmark extraction and pose estimation, are well studied, and that the required data, such as geographical reference databases and efficient retrieval methods, are available and accessible. This allows the thesis to concentrate on the core task: the recognition problem. Next, the System Overview chapter (Chapter 3) outlines the theory of operation and the architecture of the proposed system. This includes a discussion of how the system operates and where it would fit among other systems onboard an autonomous vehicle. It also explains the proposed system architecture, including the reasons behind the need for modularity and the various sub-systems that are required.
This report presents a way of using autonomous drones to enhance search and rescue operations and takes the first steps in bringing the system to life. With autonomous drones, less experience is required of the rescue personnel, and dedicated drone specialists become redundant in this respect. Thanks to its autonomy, a drone can operate beyond a valid radio link: when the signal is lost, the craft can continue to search, buffer the information, and send it when the link becomes active again. Affordable drones also lower the threshold for deploying a unit in bad weather or on other missions where the feedback is more important than the drone's return.
The purpose of the project is to develop a positioning and control system for an AUAV (Autonomous Unmanned Aerial Vehicle). The positioning system shall be based on information from a GPS receiver, a 3-axis gyroscope, a 3-axis accelerometer, an electronic compass, and other necessary sensors. The processing of positioning data and the computation of control data should be done on a Linux-based single-chip computer.
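One common way to fuse gyroscope and accelerometer data on such a single-chip computer is a complementary filter: the gyro is integrated for short-term accuracy, while the accelerometer's gravity direction corrects long-term drift. The sketch below is a generic illustration, not the project's actual design; the blend factor and function names are assumptions:

```python
import math

def complementary_filter(angle, gyro_rate, ax, az, dt, alpha=0.98):
    """One update step of a complementary filter for pitch (radians).
    gyro_rate: pitch rate (rad/s); ax, az: body-frame accelerations."""
    accel_angle = math.atan2(ax, az)      # pitch implied by gravity
    gyro_angle = angle + gyro_rate * dt   # integrated gyro estimate
    # Blend: trust the gyro on short timescales, the accelerometer long-term.
    return alpha * gyro_angle + (1 - alpha) * accel_angle

# Level flight, no rotation: the estimate stays at zero pitch.
angle = complementary_filter(0.0, gyro_rate=0.0, ax=0.0, az=9.81, dt=0.01)
```

GPS and compass data would typically be fused at a slower rate on top of this attitude estimate, e.g. in a Kalman filter.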
The verification and validation processes are an essential and integral part of the software development process, and many Software Quality Assurance (SQA) standards have set guidelines for them. The software testing in this paper offers verification of the software under test and its validation at several levels. The complexity of software projects is increasing rapidly, and in turn both the cost and the time of the testing process have become a major proportion of the software development process. The functional requirements of the proposed AGCUAV model, in the context of the current research work, are: (1) Feature Detection: detecting the predominant features in an image, such as corners, edges and blobs, and generating a feature descriptor associated with each feature; (2) Feature Matching: matching the features between successive images and generating the correspondence set; (3) Motion Parameter Computation: computing the vehicle (or camera) motion from the correspondence set to generate the homography matrix containing the six state parameters (position and orientation) of the vehicle; (4) Prediction and Correction: predicting and correcting the next state parameters of the vehicle to steer it along the desired trajectory. The correction is essential because errors accumulate over the course of navigation from one waypoint to the next. The prediction and correction are achieved using a state estimator, or navigational filter, resulting in reaching the destination via (known) waypoints. These functional requirements are illustrated in detail in Figure 4, which represents the steps of the system workflow between the system actors: the control station, the system specialist, and the GPS & GIS specialist. The workflow of this system is illustrated as follows: the system owner finds conjuncture in any location in his work area when
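The predict/correct cycle of requirement (4) can be illustrated with a minimal one-dimensional constant-velocity Kalman filter; the noise covariances and the measurement sequence below are illustrative assumptions, not values from the AGCUAV model:

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])              # only position is measured
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.25]])                  # measurement noise covariance

x = np.array([[0.0], [0.0]])            # initial state estimate
P = np.eye(2)                           # initial covariance

def predict(x, P):
    """Propagate the state and its uncertainty one step forward."""
    return F @ x, F @ P @ F.T + Q

def correct(x, P, z):
    """Fold in a position measurement z via the Kalman gain."""
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

for z in [0.11, 0.23, 0.29, 0.42]:      # noisy waypoint-relative positions
    x, P = predict(x, P)
    x, P = correct(x, P, np.array([[z]]))
```

In the full system, the state would be the six pose parameters and the measurements would come from the homography of requirement (3).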
Visible Light Communication (VLC) has become promising for various wireless applications. VLC-assisted indoor positioning aims to provide guidance to blind people, owing to its electromagnetic interference immunity and accuracy compared with conventional wireless positioning systems that use radio-frequency equipment. Blind people can use the system to locate objects or places in an unknown indoor environment. A number of fixed transmitters and a moving receiver together carry out the positioning process. A transmitter is fixed at each object or place that the blind person needs to identify. The transmitter section, consisting of a mode switch, a microcontroller, and a Li-Fi transmitter, i.e., a number of photodiodes (PDs) that emit visible light continuously, sends the pre-defined information of the object or place when the moving receiver section, consisting of a Li-Fi reader, a microcontroller, and a speaker, comes into line of sight with the transmitter. The multiple PDs receive no help from other fixed receiving nodes with known coordinates. The blind person carries the receiver. When the transmitter and the receiver synchronize, the information stored in the Tx microcontroller is delivered to the reader, which in turn prompts the Rx microcontroller to trigger the appropriate voice output by switching on the voice-recorder circuit and playing it through the speaker, thereby helping the blind person identify the object or place ahead of them.
A UAV (Unmanned Aerial Vehicle) is a pilotless flying system, generally comprising control systems, navigation systems, communication systems, and a functional payload. As an autonomous robotic system, a UAV can undertake dull and dangerous tasks, providing aerial access to high-altitude and high-risk sites such as offshore wind turbine blades. Innovation in UAV technologies reduces the risk of inspection tasks that conventionally require inspectors to work at height. Their size and flexibility grant UAVs the freedom to access otherwise unreachable areas for NDT tasks, and they have seen use in power line inspections and in concrete crack detection in bridges. Current state-of-the-art research in UAV-based NDT thus focuses on the control system and on optimization of non-contact measurement processes such as photogrammetric and thermographic inspections.
This work presents a fuzzy-control, vision-based approach for the autonomous landing task. The 3D position of a VTOL aircraft is estimated using homographies of a known landmark or helipad. The fuzzy control approach works without any model of the system, managing the longitudinal and lateral speeds and the altitude of the helicopter. Homography estimation using Lucas-Kanade tracking and RANSAC gives good results despite occlusion of the detected landmark, making this method well suited for this specific task. The present fuzzy control approach handles the low rate of the vision control loop (8 Hz) and the vibration of the camera, successfully accomplishing real tests with a low RMSE value and without using any other sensor. The outline of this paper is organized as follows: Section 2 introduces the 3D position estimation based on homographies. Section 3 describes the longitudinal and lateral speed controllers and the altitude controller for the autonomous landing task. Section 4 presents the RC helicopter used, the tests of the 3D position estimation using homographies, and a real autonomous landing test. Conclusions and future work are presented in Section 5.
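The flavour of such a model-free fuzzy controller can be sketched with triangular membership functions and weighted-average defuzzification; the membership sets, rule outputs, and function names below are illustrative assumptions, not the controller reported in this work:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def lateral_speed(error_m):
    """Toy fuzzy rule base: lateral position error (m) -> lateral speed
    command (m/s), defuzzified by a weighted average of singletons."""
    left = tri(error_m, -2.0, -1.0, 0.0)    # error negative -> drift left
    zero = tri(error_m, -1.0, 0.0, 1.0)     # on the helipad centre
    right = tri(error_m, 0.0, 1.0, 2.0)     # error positive -> drift right
    weights = [left, zero, right]
    outputs = [-0.5, 0.0, 0.5]              # singleton consequents (m/s)
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0

cmd = lateral_speed(0.5)   # halfway between the "zero" and "right" sets
```

Because no plant model appears anywhere in the rules, this kind of controller tolerates a slow (8 Hz) and noisy measurement loop better than a model-based design tuned to exact dynamics.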
A wide variety of assisted wheelchair navigation systems have been developed over the past 30 years. Low-cost solutions are generally either limited and semi-autonomous, require an external localization system, or lack global localization and planning capabilities. Maya Burhanpurkar et al. (2017) presented a cost-effective and robust autonomous navigation system for existing PWs. Based on an inexpensive sensor suite (an RGB-D sensor and wheel odometry), the various modules of the system (Simultaneous Localization and Mapping (SLAM), navigation, and door traversal) functioned synergistically to enable reliable operation under real-world conditions. Zhengang Li et al. (2017) presented a wheelchair that adopts a two-wheel differential control structure and uses an RGB-D camera to perceive the environment. The position and orientation of the wheelchair were estimated by the Adaptive Monte Carlo Localization (AMCL) algorithm; the A* algorithm was applied for global path planning and the Dynamic Window Approach (DWA) algorithm for local path planning. Their experimental results were consistent with expectations.
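The A* global planner mentioned above can be sketched on a small occupancy grid as follows; the grid, the 4-connectivity, and the Manhattan heuristic are illustrative choices, not details of the cited system:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle) with a
    Manhattan-distance heuristic; returns the path as a list of cells."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f, g, cell, path)
    seen = set()
    while open_set:
        f, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cur[0] + dr, cur[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                heapq.heappush(open_set, (g + 1 + h((r, c)), g + 1,
                                          (r, c), path + [(r, c)]))
    return None   # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))   # routes around the obstacle row
```

The admissible Manhattan heuristic guarantees the returned path is optimal; DWA would then follow it locally while respecting the wheelchair's differential-drive dynamics.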
This chapter details the implementation of a behaviorally-flexible UAS development platform and an experimental methodology to validate the design in order to answer the research question. The chapter is divided into three sections. The first section addresses IQ2 by describing the design of the platform, starting with the major hardware components and then the implementation of the software components. The second section describes an experiment, conducted in simulation, which analyzed the effect of arbiter logic and organization on simulated agent performance on navigation-based tasks. This experiment addressed IQ3, since arbiter logic is derived from a corresponding reactive robotic paradigm, allowing inferences about the effectiveness of each paradigm to be drawn from agent performance. The final section concerns the flight testing of the platform. Using a build-up approach, simple agents were flown first to show that the system was safe, stable, and compliant with testing regulations, answering IQ4. Using simulated data from the previous experiment, a subset of stable and competent agents was selected for flight testing. Flight tests with these agents were used as the basis of a partial validation of the design. Agents were compared with respect to controller performance as well as qualitative agent behavior in order to answer IQ5.
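As a generic illustration of arbiter logic in a reactive architecture, a priority-based arbiter can be sketched as follows; the behaviors, their ordering, and the command strings are hypothetical, not the platform's actual design:

```python
# Each behavior inspects the world state and either returns a command
# or None; the highest-priority active behavior wins the arbitration.
def avoid(state):
    return "turn_away" if state.get("obstacle") else None

def seek_waypoint(state):
    return "fly_to_waypoint" if state.get("waypoint") else None

def loiter(state):
    return "hold_position"            # default behavior, always active

BEHAVIORS = [avoid, seek_waypoint, loiter]   # highest priority first

def arbitrate(state):
    """Return the command of the first (highest-priority) active behavior."""
    for behavior in BEHAVIORS:
        cmd = behavior(state)
        if cmd is not None:
            return cmd

cmd = arbitrate({"obstacle": False, "waypoint": (10, 5)})
```

Other arbiter organizations (e.g. vector summation of behavior outputs rather than strict priority) correspond to different reactive paradigms, which is precisely the design axis the simulated experiment varies.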
Figures 3, 4 and 5 show the relationship between the generalization performance of ELM and its network size for terrain modeling. As observed from these figures, the generalization performance of ELM is very stable over a wide range of hidden node counts. For DT, we iteratively increase the number of sample points selected to form the triangulation for each terrain size and compare the accuracy with that of ELM. Without loss of generality, we focus our attention on 50 hidden neurons. DT with N sample points requires storage of 13.5N floating-point values [19]. In the case of an ELM-trained NN, the N sample points are only required during training; thereafter, they can be discarded. The only data the network needs to store are the hidden node parameters, which depend on the dimension of the inputs and the number of hidden neurons. In our terrain modeling case, the input dimension is 2 and the number of hidden neurons is L, which is much smaller than the number of known sample points N, i.e. L ≪ N. With L hidden neurons, we have a total of 2L interconnections between the input and hidden layers, giving us 2L input weights and L hidden neuron biases. Together with L output weights, a total of 4L parameters is needed for the network to describe the same terrain model. A typical 50-node single-layer NN would thus need 200 parameters to model a terrain. For DT to maintain equivalent memory consumption, it would be restricted to a very low-resolution model using only 15 samples. Yeu et al. [3] have conducted a thorough comparison of the memory requirements of DT-based interpolation and ELM. It is clear from [3] that as the terrain size increases, DT requires many more data points to represent the terrain, whereas ELM maintains a much lower neuron count and hence requires much less memory.
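The ELM training procedure and the 4L parameter count can be sketched as follows; the toy terrain function, the seed, and the tanh activation are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, L):
    """Train a single-hidden-layer ELM: the hidden weights are random and
    fixed; only the output weights are solved, by least squares."""
    d = X.shape[1]
    W = rng.standard_normal((d, L))              # d*L random input weights
    b = rng.standard_normal(L)                   # L random biases
    H = np.tanh(X @ W + b)                       # hidden activations
    beta = np.linalg.lstsq(H, y, rcond=None)[0]  # L output weights
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy "terrain": elevation as a smooth function of (x, y) coordinates.
X = rng.uniform(-1, 1, (500, 2))
y = np.sin(2 * X[:, 0]) + 0.5 * np.cos(3 * X[:, 1])
model = elm_train(X, y, L=50)
err = np.mean((elm_predict(X, model) - y) ** 2)

# Stored model for d = 2 inputs and L = 50 hidden neurons: 4L parameters.
n_params = 2 * 50 + 50 + 50
```

Note that the 500 training samples are only needed inside `elm_train`; the returned model of 200 parameters is all that must be stored on board, which is the memory argument made above.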
The generated terrain model (within a sensory region based on the current location of the UAV) can be used to build a network by considering the elevation threshold of the topography. Once this network is available, navigation routes can be determined for different routing objectives (e.g., shortest path, safest path, traveling salesman problem) to aid UAVs to
energy problems caused by the use of internal combustion engine vehicles. Developing such vehicles to address environmental and energy problems is a compelling idea, and many researchers currently publish journal papers related to autonomous EVs. In these studies, the steering wheel, brake, and acceleration pedals are controlled by computers [8-10], so users and drivers have no direct contact with them. A touch panel installed in the EV provides a Graphical User Interface (GUI) through which users and drivers interact with the controlling devices. Unfortunately, current outcomes show that more effort is needed to ensure that an autonomous EV can move safely. Although mechanical mechanisms can address some safety and reliability issues of autonomous EVs, the computational approach is also very important; examples include the motor control algorithm, the capacity of the data transmission device, and image processing techniques [11-13]. Autonomous vehicles with intelligent driving control are developed to provide driver assistance as well as unmanned driving for road transport, logistics, and flexible manufacturing systems. Such a vehicle is an automatic guided vehicle able to move automatically along the road. This research focuses on the design and implementation of a sensor fusion system for the navigation and control of an autonomous electric vehicle, and also introduces intelligent vehicle tracing, obstacle avoidance, and speed control. In the first part, a vision system for the electric vehicle is developed using image processing techniques to recognize the circumstances surrounding the electric vehicle.
If an obstacle is detected in front, the mobile robot switches to fuzzy navigation based on the infrared sensors. During this period, it no longer uses fuzzy navigation based on the vision sensor; instead, the obstacle is avoided effectively using the infrared sensors. Afterwards, it resumes using the vision sensor so that the robot moves along the centre line of the original path. In the fuzzy navigation based on infrared sensors, each infrared sensor is scanned at eight angles, resulting in eight corresponding readings. These readings are divided into three groups, Group(l), l = 1, 2, 3, which effectively reduces the number of inputs. In each group, the maximum reading is selected as the final reading, because a reading reflects the distance to the obstacle: the smaller the reading, the farther the obstacle; conversely, the larger the reading, the closer the obstacle.
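The reduction of eight readings to three fuzzy inputs can be sketched as follows; the 3/3/2 split of the angles into left/front/right sectors is an assumption, since the partition is not specified above:

```python
def group_ir_readings(readings):
    """Reduce eight IR readings (one per scan angle) to three group
    inputs by taking the maximum in each angular sector: a larger
    reading indicates a closer obstacle, so the maximum is the most
    conservative summary of the sector."""
    assert len(readings) == 8
    sectors = [readings[0:3], readings[3:6], readings[6:8]]  # assumed split
    return [max(s) for s in sectors]

inputs = group_ir_readings([10, 40, 25, 80, 60, 75, 5, 15])
```

These three values would then feed the fuzzy rule base in place of the raw eight-dimensional input, shrinking the rule table considerably.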