The study of robotics combines many complex systems; this work could therefore be continued by improving each of these systems, as well as by adapting the algorithms for use in autonomous driving. Some of these ideas are discussed in the following sections.

6.2.1 Implementation of the path planning algorithms on an autonomous vehicle

The main objective of this project was to develop navigation algorithms for a mobile robot which could later be implemented on an autonomous vehicle. To achieve this, a good understanding of the mechanics of the vehicle used would be necessary. Unlike the mobile robot used in this project, which uses differential steering, four-wheeled vehicles use the more complex Ackermann steering geometry. Also, for mobile robots the control variables are the forward linear velocity and the angular velocity, whereas for a vehicle the elements to control would be the acceleration, the steering, the brakes and the gears.
The proposed reinforcement learning solution is model-based. Constructing online models is beneficial since they allow executing hypothetical planning experiences to update the action-values. This helps to propagate the effect of updating the value of any state-action pair to all connected state-action pairs, which accelerates the convergence of the Q-learning algorithm. This has been validated, since the model-based algorithms converged significantly faster than the model-free algorithms in both simulation and real time. Moreover, the hypothetical experiences are simulated around the shortest discovered trajectories, which was proven to accelerate the convergence. The constructed models are probabilistic; therefore, they simulate the transition to the most probable states learned from experience. This helps to relieve the effect of noisy rewards if a transition to a less probable state takes place. Consequently, the success rates of the model-based algorithms are higher than those of the model-free algorithms, since the model-based algorithms reduce the divergence from the discovered sub-optimal trajectories. One drawback of these models appears if the state space is very large: since the models are tabular, it is less feasible to construct them for large state spaces. Another drawback is related to extending the Dyna-Q platform with the robot kinematic model. The proposed method assumes an obstacle-free map while simulating applied actions to non-visited states; therefore, the predictability of the robot depends on the obstacle distribution in the map. Although the model accounts for the stochasticity generated by the skidding effects, it assumes that these stochastic properties are stationary. This assumption cannot be guaranteed; therefore, the model-based updates will not be accurate if the stochastic effects change with time. Moreover, implementing the reinforcement learning in real time suffered from local minima problems.
Thus, the minimum exploration factor of the algorithms was kept at a significantly large value, leading to ε-greedy policies. Accordingly, the agent keeps applying exploratory actions even after convergence, resulting in sub-optimal navigation paths being followed.
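The Dyna-Q planning loop discussed above can be illustrated with a minimal sketch. The toy corridor environment, hyper-parameter values and deterministic tabular model below are illustrative assumptions, not the actual setup of this work; the sketch only shows how planning steps drawn from the learned model are interleaved with direct Q-learning updates.

```python
import random
from collections import defaultdict

def dyna_q_train(n_states=5, episodes=30, n_planning=10,
                 alpha=0.5, gamma=0.95, epsilon=0.1, seed=0):
    # Toy 1-D corridor: states 0..n-1, actions move left/right,
    # reward 1 on reaching the rightmost (goal) state.
    rng = random.Random(seed)
    actions = (-1, +1)
    Q = defaultdict(float)     # (state, action) -> action-value
    model = {}                 # (state, action) -> (reward, next_state)

    def step(s, a):
        s2 = min(max(s + a, 0), n_states - 1)
        return (1.0 if s2 == n_states - 1 else 0.0), s2

    def greedy(s):             # greedy action, ties broken randomly
        best = max(Q[(s, a_)] for a_ in actions)
        return rng.choice([a_ for a_ in actions if Q[(s, a_)] == best])

    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            a = rng.choice(actions) if rng.random() < epsilon else greedy(s)
            r, s2 = step(s, a)
            # direct Q-learning update from the real transition
            target = r + gamma * max(Q[(s2, a_)] for a_ in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            model[(s, a)] = (r, s2)   # simple deterministic tabular model
            # planning: replay hypothetical experiences drawn from the model
            for _ in range(n_planning):
                (ps, pa), (pr, ps2) = rng.choice(list(model.items()))
                ptarget = pr + gamma * max(Q[(ps2, a_)] for a_ in actions)
                Q[(ps, pa)] += alpha * (ptarget - Q[(ps, pa)])
            s = s2
    return Q
```

The planning loop is what spreads the goal reward backwards through all recorded state-action pairs after each real step, which is the convergence-acceleration effect described in the text.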
desired output, for example, LIDAR-based lasers. Not every person with limited abilities can afford an expensive wheelchair. Moreover, it is often not sufficient to have an ambiguous technology in order to satisfy everyday needs. Therefore, it is important to find the right balance between cost and performance. Another challenge is that the algorithms required to run a mobile wheelchair are demanding, requiring a personal computer with high-performance characteristics. Moreover, the physical interface is poor on most wheelchairs, and there is no standard for the communication protocols required by a wheelchair's input devices and various modules. This problem can be solved by utilizing common robotic frameworks. Last but not least, such wheelchairs are currently not widely accepted by society and clinics. The technology remains immature, and is hardly desired or appreciated by disabled people.
This paper presents a novel simulator that is based on Matlab code and allows users to debug their navigation algorithms in Matlab, build an indoor environment, set the observation models with noise, and build robot models with different driving mechanisms, internal sensors and external sensors. The visual observation is also calculated by projective and zoom calculations based on the built environment view. This function is important for experiments with vision-based algorithms. To save time, the simulator also provides functional modules which implement navigation tasks such as obstacle avoidance and bumping reactions. All the above functions of the simulator have been tested in designed experiments, which show that the simulator is feasible, useful and accurate for 2D indoor robotic navigation.
Hybrid navigation is a group of algorithms that combine local navigation and global path planning methods. These algorithms aim to combine the advantages of both the local and global methods, and also to eliminate some of their weaknesses. One hybrid method blends the reactivity of local methods with the intelligence and optimality of global ones. The idea is to use global planning to determine only certain points (temporary goals) through which the robot needs to pass to reach the goal without getting stuck, and then to do the actual navigation with a local algorithm. Another proposed hybrid approach combines goal-directed navigation using the Equiangular Navigation Guidance (ENG) strategy with a local obstacle avoidance technique based on keeping constant the angle between the instantaneous moving direction of the robot and a reference point on the surface of the obstacle. Many other approaches to path planning and obstacle avoidance have been proposed, including fuzzy logic [21,22], neural networks, genetic algorithms, ant colony optimization (ACO) and Honey Bee Mating Optimization (HBMO).
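The temporary-goal idea can be sketched as follows. The waypoint list stands in for the output of a global planner, and a simple repulsive deflection stands in for the local avoidance layer; this is an illustrative construction, not the ENG strategy or any specific cited method, and all numeric parameters are assumptions.

```python
import math

def hybrid_navigate(start, waypoints, obstacles,
                    step=0.1, tol=0.2, influence=0.5, max_iters=2000):
    """Follow globally planned waypoints with a simple local reactive rule."""
    x, y = start
    path = [(x, y)]
    wp = 0
    for _ in range(max_iters):
        if wp == len(waypoints):          # all temporary goals reached
            break
        gx, gy = waypoints[wp]
        hx, hy = gx - x, gy - y           # attraction toward current waypoint
        d = math.hypot(hx, hy)
        if d < tol:                       # waypoint reached: switch to next
            wp += 1
            continue
        hx, hy = hx / d, hy / d
        # local layer: deflect away from obstacles inside the influence radius
        for ox, oy in obstacles:
            od = math.hypot(x - ox, y - oy)
            if 1e-9 < od < influence:
                gain = 2.0 * (influence - od)
                hx += (x - ox) / od * gain
                hy += (y - oy) / od * gain
        n = math.hypot(hx, hy) or 1.0
        x, y = x + step * hx / n, y + step * hy / n
        path.append((x, y))
    return path, wp == len(waypoints)
```

Because the global layer only supplies sparse temporary goals, the robot keeps the reactivity of the local method between them, which is exactly the trade-off the text describes.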
Robots are more and more autonomous, as they are involved in more and more complex environments and achieve more and more complex tasks. However, whatever their autonomy skills (from "simple" route-following capabilities to embedded complex AI planning algorithms), their autonomy always relies on a navigation architecture. This navigation architecture provides both localization information and locomotion capabilities.
SLAM is considered an important integral part of autonomous mobile robot navigation. It represents the ability to place an autonomous mobile robot at an unfamiliar location in an unknown environment, have it build a map using only relative observations of the environment, and then use this map simultaneously to navigate. The focal idea behind the SLAM technique is the observation of robot motion from a starting unknown posture or vicinity while maneuvering within an environment; furthermore, absolute feature locations are not accessible. It has been proposed to adopt a linear evolution of motion, as this results in a synchronous discrete-time robot model in which the observations of landmarks are known. It is known that robot motion and the observation of features, the landmarks, are nonlinear while navigating; however, using linear models does not reduce the accuracy of the approach. In this regard, within this paper we shall use a nonlinear robot model in addition to nonlinear observation models. This includes the system state, such as the position and orientation of the robot, in addition to the positions of the landmarks. Denote the state of the KSU-IMR robot as X_{ij}(k). While maneuvering, the dynamical motion of the robot is modeled by the linear discrete-time state transition equation, Eq. (5):
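Since Eq. (5) itself is not reproduced in this excerpt, the kind of nonlinear motion and observation models the text refers to can be sketched generically. The unicycle motion model and range-bearing landmark observation below are standard SLAM building blocks used here as an illustrative stand-in, not the paper's actual equations.

```python
import math

def predict_pose(x, y, theta, v, omega, dt):
    """One step of a nonlinear discrete-time unicycle motion model:
    the pose (x, y, theta) advances under linear velocity v and
    angular velocity omega over a time step dt."""
    x2 = x + v * dt * math.cos(theta)
    y2 = y + v * dt * math.sin(theta)
    # wrap the heading into (-pi, pi]
    theta2 = (theta + omega * dt + math.pi) % (2 * math.pi) - math.pi
    return x2, y2, theta2

def observe_landmark(pose, landmark):
    """Nonlinear range-bearing observation of a landmark, the usual
    measurement model in feature-based SLAM."""
    x, y, theta = pose
    lx, ly = landmark
    dx, dy = lx - x, ly - y
    rng = math.hypot(dx, dy)
    bearing = (math.atan2(dy, dx) - theta + math.pi) % (2 * math.pi) - math.pi
    return rng, bearing
```

Both functions are nonlinear in the state (through the trigonometric terms), which is why linearization or a nonlinear filter is needed when they are used inside a SLAM estimator.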
In the obtained image, a center point A is selected on the vertical center line, and the actual position of the mobile robot is represented by this center point. Taking A as the center, the left and right sides are scanned and the path width on each side of the mobile robot is determined by counting pixels. If the numbers of pixels on both sides are approximately the same, the mobile robot is located on the center line of the path; otherwise, if the number of pixels on the left (right) side is greater than on the right (left) side, the mobile robot is located on the right (left) side of the path. As a result, it is necessary to control the robot to move to the left (right) side of the path in order to allow it to travel along the center of the path.
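The pixel-counting rule above can be sketched for a single image row. The threshold value, the assumption that path pixels are dark, and the tolerance are all illustrative choices, not details from the original system.

```python
def centering_command(row, threshold=128, tol=2):
    """Count path pixels (here assumed darker than `threshold`) on each
    side of the centre pixel A and decide which way to steer:
    roughly equal counts -> the robot straddles the path centre;
    more path pixels on the left -> the robot sits right of centre,
    so steer left, and vice versa."""
    c = len(row) // 2                       # centre point A
    left = sum(1 for p in row[:c] if p < threshold)
    right = sum(1 for p in row[c + 1:] if p < threshold)
    if abs(left - right) <= tol:
        return "straight"
    return "left" if left > right else "right"
```

In a real controller this decision would be smoothed over several rows or frames rather than acted on per row.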
Figure 6 demonstrates the navigation results in the simulation environment, which is built using Visual C++ with a setting of 600 × 400 (grid representation). In each case, the environment is assumed to be completely unknown to the mobile robot except for the start and goal states. The robot has to explore the environment using onboard range sensors. It is shown in Figure 6 that the PFLS method navigates the robot more safely and effectively. We then further tested the reactive navigation performance on the real robot MT-R in our office building. As shown in Figure 7, the robot travels through a corridor with clustered obstacles quickly and safely. All these results also demonstrate the success and practicability of the proposed reactive navigation control approach.
In the framework of the research program "MARS-500" of the Institute of Biomedical Problems of the Russian Academy of Sciences and the European Space Agency, the mobile robot-explorer "Turist" was created with the purpose of expanding human capabilities in the exploration of aggressive environments, including other planets. Research revealed the disadvantages of the robot-explorer "Turist", as well as of other space-based mobile robots requiring manual remote control. The inability of the mobile robot to independently carry out the interpretation of events and the control operations necessary to reach the mission goal was caused by the lack of universal intelligence and of corresponding algorithms capable of taking into account all possible missions in advance and all possible situations emerging in the course of planetary missions. Solving this issue would only be possible with the implementation of the technology of autonomous remote modification of hardware.
We used OpenFABMAP as a method for feature-based place recognition. It requires a training phase before the actual localization can be done. Keypoints are detected on the training images and the corresponding descriptors are extracted. The descriptors are clustered to obtain a "vocabulary" (also called bag of words) of the desired size, each cluster center representing a "word" of this vocabulary. For SIFT or SURF features with their floating-point descriptors, simple k-means clustering can be used: First, cluster centers are selected randomly out of the descriptors. Second, each descriptor is associated with its nearest cluster center, based on Euclidean distance. Third, new cluster centers are computed by averaging all descriptors associated with each current center. The second and third steps are repeated until the cluster centers converge. Binary feature descriptors require a modified clustering, as they only contain ones and zeros as coordinates and rely on fast comparisons using the Hamming distance. The averaging step would result in fractional coordinates, eliminating this advantage. A majority-based clustering (k-majority) has been proposed in [34], which counts the number of zeros and ones for each coordinate over all cluster members and selects the dominant value (0 or 1) as the new value of the corresponding cluster center coordinate.
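The k-majority scheme described above can be sketched as follows. Descriptors are represented as bit lists for clarity; the initialization (first k descriptors as seeds) and the tie-breaking toward 1 are illustrative choices not specified in the excerpt.

```python
def hamming(a, b):
    # Hamming distance between two equal-length bit lists
    return sum(x != y for x, y in zip(a, b))

def k_majority(descriptors, k, iters=10):
    """Cluster binary descriptors: assign each to its nearest centre under
    Hamming distance, then set each centre bit to the majority value over
    the cluster's members (ties broken toward 1 here)."""
    centers = [list(d) for d in descriptors[:k]]   # simple seeding choice
    n_bits = len(descriptors[0])
    assignments = [0] * len(descriptors)
    for _ in range(iters):
        # assignment step: nearest centre under Hamming distance
        assignments = [min(range(k), key=lambda c: hamming(d, centers[c]))
                       for d in descriptors]
        # update step: per-bit majority vote over each cluster's members
        for c in range(k):
            members = [d for d, a in zip(descriptors, assignments) if a == c]
            if not members:
                continue                           # keep empty centre as-is
            centers[c] = [1 if 2 * sum(d[bit] for d in members) >= len(members)
                          else 0 for bit in range(n_bits)]
    return centers, assignments
```

Because the update only counts bits, the centers stay binary and the fast Hamming comparison is preserved, which is the point made in the text.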
In the early days of artificial intelligence and robotics research it was widely believed by researchers that computers could be capable of displaying 'intelligence' through the manipulation of symbols alone. This idea became known as the symbol-system hypothesis (SSH). Central to the SSH is the idea that intelligence can be realised internally, independent of a body. In accordance with the SSH, early autonomous robots realised 'intelligence' through the manipulation of internal models of the robot's local environment. The robot's model of the environment was continuously updated through sensor measurements, and motor control was based upon interpretation of this model. This design methodology relies fundamentally on the ability to accurately model the rules and laws of environmental interaction. While this method worked relatively effectively with certain robots that acted within deterministic environments, it fared terribly when tested in stochastic real-world scenarios. Not only did it turn out that the real world was far too complex and unpredictable to model in a computer simulation, but the computational effort required to make even the most basic of decisions was excessive.
This section describes robotic experiments concerned with on-line environment inspection using the novelty filter. For examples of applications of the novelty filter to non-robotic tasks, and comparisons with other novelty filters, see [12,11]. The experiments presented here focus on a simplified inspection task, where the robot learns an internal representation of sensor perceptions of an environment through exploration, and then signals deviations from that model when exploring further environments. The robot explores an environment, and uses the novelty filter to build a model of the perceptions that it receives in that environment. Once the model is complete, the robot explores another environment, highlighting perceptions that are found to be novel, i.e., that were not seen in the first environment. This task was performed using the Nomad 200 robot shown in Fig. 2. This is a fully autonomous robot equipped with 16 sonar sensors and 16 short-range infrared sensors.
Frontier-based exploration was initially developed by Yamauchi et al., who presented a technique for learning maps with a team of mobile robots and introduced the concept of frontiers between known and unknown areas in a grid map. Frontier-based exploration can be used to map indoor environments where walls and obstacles may be in arbitrary orientations. In this approach, the robots share perceptual information but maintain separate global maps and make independent decisions about where to explore. The approach groups adjacent cells into frontier regions. Each robot heads for the centroid of the closest frontier region, but there is no explicit coordination. As a result, the robots may end up covering the same area and may even physically interfere with one another.
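The frontier concept above can be sketched for an occupancy grid. The cell encoding ('.' free, '#' obstacle, '?' unknown) and the 4-connectivity choice are illustrative assumptions; grouping frontier cells into regions and computing centroids would follow the same pattern.

```python
def find_frontiers(grid):
    """Return the frontier cells of a grid map: known-free cells ('.')
    adjacent (4-connectivity) to at least one unknown cell ('?').
    Obstacles are '#'."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != '.':
                continue
            # a free cell bordering unknown space is a frontier cell
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == '?':
                    frontiers.append((r, c))
                    break
    return frontiers
```

Each robot would then navigate toward the centroid of its nearest group of adjacent frontier cells, which is where the lack of explicit coordination noted in the text can lead to overlap.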
The objective of this work is the cloud-oriented solution of a broad class of problems based on mathematical models with a complex topology. In addition, this class is characterized by a large volume of floating-point calculations with strict requirements on the reliability of the results. Such problems include, for example, the problem of finding a partial system of combinations of works on water-body recreation in an industrial region, the problem of designing a ventilation system in enclosed bounded spaces with a complex configuration, and many others. Problems of this kind require a responsible solution, because the obtained results may influence the volume of capital investment, life safety, or the rated quality of products. Similar problems were considered in the 1980s in the works of G.M. Ostrovsky [1-3] devoted to the decomposition of complex chemical-engineering schemes. There, the gain in computation time on uniprocessor computers was achieved by finding an optimal iterated set (OIS) and reducing the
Global distributed information systems are increasingly implemented using cloud and grid approaches. Ensuring a high level of information security in these systems raises many problems, because they operate on critical or confidential data, and the elements of these systems, located at different physical sites, communicate using open Internet standards and protocols. Existing distributed information systems are implemented on a variety of architectural and technology platforms, which usually do not meet the current challenges of ensuring a high level of information security. In addition, the integration of these systems with corporate information systems, and the security of the integration solutions used, remain open questions. The aim of this work was the systematization and analysis of proven technologies for building a highly reliable information security infrastructure for global distributed computing systems. As part of the work, approaches to the implementation of an information security infrastructure based on the technical standards Globus Toolkit, OGSA, UNICORE and gLite were identified. The features of security infrastructures based on these standards are described, as well as the possibility of interaction with external systems based on industry standards such as SOA and Web Services.
The aim of this research is to determine a person's emotion in a video scene. The video is decomposed into image frames, and from each image frame facial features (components) are extracted, including the mouth, eyes, nose, and forehead. The feature extraction is done by combining two methods, based on color and on the facial geometric figure. The state of each facial feature is related to the face's Action Units (AUs), which are used for emotion recognition. The states of the mouth and eyes are recognized from the feature elongation, while the states of the forehead and nose are determined from the wrinkle density. In the last part, emotion recognition is done with the certainty factor method to determine the quality of each emotion; the classification of the actor's emotion is determined by the emotion with the maximum quality level. The results showed that for emotion recognition in a single image the average accuracy was 77%, while for video the average accuracy was 76.6%.
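The certainty-factor step can be sketched with the classic MYCIN-style combination rule for non-negative evidence; the excerpt does not give the paper's exact CF formulation, so both the rule and the example emotion labels below are illustrative assumptions.

```python
def combine_cf(cfs):
    """Combine non-negative certainty factors from several pieces of
    evidence: CF_new = CF_old + CF_e * (1 - CF_old)."""
    cf = 0.0
    for c in cfs:
        cf = cf + c * (1.0 - cf)
    return cf

def classify_emotion(evidence):
    """Pick the emotion whose combined certainty factor is maximal.
    `evidence` maps an emotion label to the CFs of its matched AUs."""
    scores = {emotion: combine_cf(cfs) for emotion, cfs in evidence.items()}
    return max(scores, key=scores.get), scores
```

Under this rule each additional matched Action Unit pushes the emotion's certainty closer to 1 without exceeding it, and the final label is the emotion with the highest combined quality, mirroring the selection described in the text.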
Open Source Software (OSS) in the field of education has often been recommended by different researchers in the literature. Despite the fact that there is not yet evidence of OSS dominance over traditional methods, OSS use in education has been steadily increasing and expanding into new domains. The reputation of an OSS learning system matters a great deal to people interested in adopting a Learning Management System (LMS). In any case, choosing between starting a new application and adapting or modifying an existing one is an important decision. This study aims to produce basic guidance on the available OSS LMSs and their substitutes in the field of education. 23 different alternatives were picked from the existing active OSS projects on the basis of previously published papers. The study also aims to produce a summary of the available studies and guides and, finally, to bridge gaps in the current literature while proposing a taxonomy of OSS LMSs based on 56 papers taken from the literature.
three stages, namely encryption, steganography and decryption. Encryption and decryption are done with the DES (Data Encryption Standard) algorithm. The use of LSB can minimize the change in image quality, but the capacity of the messages that can be accommodated is limited by the size of the image. Kekre et al. conducted an LSB steganography study to increase the message capacity with a PVD (pixel value differencing) approach. The LSB insertion is based on a comparison of the MSB bit values: if the first 4 MSB bits are all "1", the message is embedded in the last 4 bits; if the first 3 MSB bits are "1", it is embedded in the last 3 bits; if the first 2 MSB bits are "1", it is embedded in the last 2 bits; if the value falls outside these criteria, the embedding is done in the last (least significant) bit.
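The variable-capacity rule described above can be sketched for 8-bit pixels. This is a reading of the rule as stated in the text, not code from the cited study; the helper names are hypothetical.

```python
def embed_capacity(pixel):
    """How many LSBs of an 8-bit pixel may carry message bits, per the
    rule above: a longer run of leading 1-bits in the MSBs allows more
    embedded bits (4, 3, 2, or 1)."""
    if pixel >> 4 == 0b1111:
        return 4
    if pixel >> 5 == 0b111:
        return 3
    if pixel >> 6 == 0b11:
        return 2
    return 1

def embed(pixels, bits):
    """Embed a list of message bits into a list of 8-bit pixel values
    using the variable-capacity LSB rule; returns the stego pixels and
    the number of bits actually embedded."""
    out, i = [], 0
    for p in pixels:
        n = min(embed_capacity(p), len(bits) - i)
        if n <= 0:
            out.append(p)            # message exhausted: copy pixel
            continue
        chunk = 0
        for b in bits[i:i + n]:
            chunk = (chunk << 1) | b
        out.append((p & ~((1 << n) - 1)) & 0xFF | chunk)
        i += n
    return out, i
```

The intuition is that pixels whose high bits are all 1 are bright enough that larger low-bit changes remain visually insignificant, so they can safely carry more message bits.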
In this paper we investigated path loss models with a real-time GSM cellular system at BSNL Vijayawada. The urban area of Vijayawada was chosen as the environment, characterized through the BTS specifications of that environment, using two propagation models: the Okumura-Hata model and the COST-231 model. The real-time measured signal strength is compared with the received signal strength calculated using the Hata and COST-231 models. Tables 2, 3 and 4 show the real-time data obtained from the test drive using TEMS (Test Mobile System), a special software measurement unit used for processing and analyzing the collected measurements and displaying them on maps, and Table 5 shows the data collected from the test mobile
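The two propagation models compared above have standard closed forms, sketched below for the urban case with the small/medium-city mobile-antenna correction. The parameter values in the usage note are illustrative, not measurements from this study.

```python
import math

def _a_hm(f_mhz, h_mobile):
    # Mobile-antenna height correction for a small/medium city
    return ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile
            - (1.56 * math.log10(f_mhz) - 0.8))

def hata_urban(f_mhz, h_base, h_mobile, d_km):
    """Okumura-Hata urban path loss in dB (valid roughly 150-1500 MHz;
    h_base and h_mobile in metres, d_km in kilometres)."""
    return (69.55 + 26.16 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base) - _a_hm(f_mhz, h_mobile)
            + (44.9 - 6.55 * math.log10(h_base)) * math.log10(d_km))

def cost231_urban(f_mhz, h_base, h_mobile, d_km, metropolitan=False):
    """COST-231 extension of the Hata model, in dB (valid roughly
    1500-2000 MHz); the correction C adds 3 dB in metropolitan centres."""
    c = 3.0 if metropolitan else 0.0
    return (46.3 + 33.9 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base) - _a_hm(f_mhz, h_mobile)
            + (44.9 - 6.55 * math.log10(h_base)) * math.log10(d_km) + c)
```

For example, `hata_urban(900, 30, 1.5, 2)` gives the predicted 900 MHz loss at 2 km for a 30 m BTS antenna and a 1.5 m mobile, the kind of prediction compared against drive-test signal strength in the tables.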