Multi-robot Task Allocation using Agglomerative Clustering

Learning-based approaches for task allocation have also been proposed [26]. This work frames the problem as sequential decision making in a partially observable environment, so Markov Decision Processes and reinforcement learning can be used to perform task allocation in a team of robots. It combines short-term planning with a value function to produce a dynamic scheduler for multi-robot task allocation. The problem is partially observable because tasks enter the system randomly: while planning for one set of tasks, the scheduler cannot know which tasks will arrive later. Since the agents are not aware of the complete system state, there is a notion of a belief space, the probability distribution over the current state of the system given the observation history. Because the state space can be high dimensional and the belief space is technically infinite, approximate methods are needed. The work also distinguishes short-term from long-term planning. Short-term planning assumes that the near future is predictable and can therefore be treated as static planning; in the long term, however, arrivals are unpredictable and a method is needed that caters to tasks entering the system at random. In the proposed hybrid planning-and-learning system, short-term planning ignores new tasks entering the system up to a particular time threshold.
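A minimal sketch of the short-term planning idea described above, assuming a simple greedy assignment over a fixed time horizon; the function, data layout, and example values are illustrative and are not the paper's MDP/reinforcement-learning formulation.

```python
# Illustrative only: greedy short-horizon scheduling that defers tasks arriving
# after the planning threshold, as a stand-in for the hybrid scheme's
# short-term planning step. Positions and task tuples are assumed structures.
def short_horizon_plan(robots, tasks, now, horizon):
    """robots: robot_id -> (x, y); tasks: list of (release_time, task_id, (x, y)).

    Tasks released within [now, now + horizon] are planned statically; tasks
    released later are deferred to the next planning cycle.
    """
    visible = [t for t in tasks if t[0] <= now + horizon]
    deferred = [t for t in tasks if t[0] > now + horizon]
    schedule = {r: [] for r in robots}

    def end_pos(r):
        # Position of the robot after its currently scheduled tasks.
        return schedule[r][-1][1] if schedule[r] else robots[r]

    for _, task_id, pos in sorted(visible):
        best = min(robots, key=lambda r: (end_pos(r)[0] - pos[0]) ** 2
                                         + (end_pos(r)[1] - pos[1]) ** 2)
        schedule[best].append((task_id, pos))
    return schedule, deferred

plan, later = short_horizon_plan({"r1": (0, 0), "r2": (5, 5)},
                                 [(0, "t1", (1, 1)), (2, "t2", (6, 5)),
                                  (30, "t3", (3, 3))], now=0, horizon=10)
```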

Research on Model and Algorithm of Task Allocation and Path Planning for Multi Robot

A factory uses m robots to monitor n monitoring points. Each fully charged robot can travel only a limited distance, and every monitoring point is monitored for the same amount of time. When the task is executed, each robot starts from the platform at the same time, tries to avoid collisions with the other robots while performing its tasks, and finally returns to the departure platform. Each robot performs only one task at a time, each task requires only one robot, each task is performed exactly once, and all tasks must be executed. The call cost and the operating cost per unit distance are known for each robot. The problem is to calculate the optimal number of robots and to plan the path of each robot so that the total cost of completing the monitoring tasks is minimised.
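A minimal sketch of the cost model described above, assuming Euclidean travel and a fixed call cost plus a per-distance operating cost for each robot used; the function names, range limit, and example data are assumptions, and the paper's actual model and solver may differ.

```python
# Hypothetical cost evaluation for a candidate plan: each robot used incurs a
# call cost plus an operating cost per unit distance over its closed tour from
# the platform and back; tours longer than a full charge are infeasible.
import math

def tour_length(platform, points):
    """Length of platform -> points (in order) -> platform."""
    route = [platform] + list(points) + [platform]
    return sum(math.dist(a, b) for a, b in zip(route, route[1:]))

def total_cost(platform, routes, call_cost, unit_cost, max_range):
    """routes: one point sequence per robot actually used.
    Returns None if any tour exceeds the continuous travel limit."""
    cost = 0.0
    for points in routes:
        length = tour_length(platform, points)
        if length > max_range:
            return None  # infeasible: exceeds a fully charged robot's range
        cost += call_cost + unit_cost * length
    return cost

# Example: compare using one robot vs. two for three monitoring points.
platform = (0.0, 0.0)
pts = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(total_cost(platform, [pts], call_cost=5.0, unit_cost=1.0, max_range=10.0))
print(total_cost(platform, [pts[:2], pts[2:]], 5.0, 1.0, 10.0))
```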

High Resolution Satellite Imagery Changes Detection using Agglomerative Fuzzy K Means Clustering Algorithm

The use of high-resolution commercial satellite imagery (HRCSI) has increased significantly over the last 5 years for a wide variety of applications, with growth in volume, frequency of acquisition, and spatial resolution. Satellite images contain many land-cover types, and large objects (e.g., buildings, bridges and roads) occupy relatively small regions. The task addressed is the detection and exploitation of change between multi-temporal high-resolution satellite and airborne images. Overlapping multi-temporal images are first organised into 256 m x 256 m tiles in a global grid reference system, and the tiles are then ranked by their change scores for retrieval, review, and exploitation in web-based applications. Automatically detecting regions or clusters of such widely varying sizes is a challenging task. In this paper we present an agglomerative fuzzy K-means clustering algorithm for change detection. The algorithm produces more consistent clustering results from different sets of initial cluster centres and determines the number of clusters in the data set, a well-known problem in K-means clustering.

An Approach for Document Clustering using Agglomerative Clustering and Hebbian type Neural Network

mining jobs, such as temporal document mining. In PLSA [23] a document is treated as a mixture of aspects, each represented by a multinomial distribution. To avoid overfitting in PLSA, Blei and co-authors proposed a generative aspect model called Latent Dirichlet Allocation (LDA), which can extract themes from documents. [8] In this paper we approach the software mining task with a combination of text mining and link analysis techniques; this primarily deals with the links between one occurrence and another. A. Hotho et al. [24] note that text clustering operates in a high-dimensional space, which is difficult in virtually all practical settings. They propose a new approach that applies background knowledge during pre-processing in order to improve clustering results and to allow a choice between results. To deal with this difficulty, multiple clustering results are calculated using k-means, and the outcomes can be distinguished and explained by the corresponding selection of concepts in the ontology. The problem of clustering high-dimensional data sets has been researched by Agrawal et al. [25], who present a clustering algorithm called CLIQUE that identifies dense clusters in subspaces of maximum dimensionality. Hinneburg & Keim [26] show how projections improve the effectiveness and efficiency of the clustering procedure; their paper shows that projections are very important for improving the performance of clustering algorithms.

Minimising Undesired Task Costs in Multi-robot Task Allocation Problems with In-Schedule Dependencies. Bradford Heap and Maurice Pagnucco

Calculating a task cost dispersion value ensures that tasks collectively undesired by all robots are allocated before tasks that are more strongly preferred. Empirical results show this lowers the tea…
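A hedged sketch of the ordering idea in this excerpt: score each task by how much its costs vary across the robots and allocate low-dispersion (collectively undesired) tasks first. The dispersion measure and allocation order used here are assumptions for illustration and may not match the measure Heap and Pagnucco actually use.

```python
# Illustrative only: order tasks so those whose costs vary least across robots
# (no robot is clearly well placed for them) are auctioned/allocated first.
import statistics

def allocation_order(costs):
    """costs[task][robot] -> cost of that robot performing that task."""
    dispersion = {t: statistics.pstdev(robot_costs.values())
                  for t, robot_costs in costs.items()}
    return sorted(costs, key=lambda t: dispersion[t])

costs = {"t1": {"r1": 9.0, "r2": 9.5, "r3": 9.2},   # undesired by every robot
         "t2": {"r1": 1.0, "r2": 8.0, "r3": 7.5}}   # strongly preferred by r1
print(allocation_order(costs))  # ['t1', 't2']
```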

Minimalist Multi-Robot Clustering of Square Objects: New Strategies, Experiments, and Analysis

Figure 7.1 shows the clustering performance of the basic strategy and the mixed strategy (4T6D). Compared to the 5-robot cases (see Figure 5.6), the task performance of both the basic and mixed strategies showed qualitatively similar tendencies. In the basic strategy, a few small clusters formed initially, but no central cluster emerged. In contrast, in the mixed strategy the clustering performance increased gradually with time and, in two of the three runs, formed a cluster containing 19 boxes at the end; the third run produced two central clusters but no boundary clusters. However, some interesting differences between the 5- and 10-robot experiments should be noted: the clustering task progressed faster. With 10 robots, central clusters in the basic strategy broke down more easily than in the 5-robot case, taking an average of 17 minutes (compared to 20 minutes) until all central clusters had disappeared. In the mixed strategy, on the other hand, less time was required to reach a single central cluster containing 16 boxes (80%) as the number of robots changed from 5 to 10: the average time decreased from 48 minutes to 33 minutes. Although the greater number of robots reduced the required time, it appeared to cause the performance to fluctuate more.

A study of human agent collaboration for multi UAV task allocation in dynamic environments

Supervisory control interfaces for autonomous systems have instead been an active area of research in the Human Factors (HF), Human-Robot Interaction (HRI), and Human-Computer Interaction (HCI) domains. A key issue they try to address is that plans computed by autonomous systems are typically brittle: they strictly conform to initially set design decisions and ignore the contextual decisions that humans need to make (e.g., weather conditions that may lead to UAVs dropping out, or the changing priorities of the mission) [Silverman, 1992; Smith et al., 1997]. For example, [Miller and Parasuraman, 2007] developed a ‘Playbook’ of tasks for automated agents to perform when faced with certain situations (upon request from human controllers). Moreover, [Lewis et al., 2009] developed interfaces to help an operator interact with large numbers of UAVs (hundreds). Bertuccelli et al. [Bertuccelli et al., 2010], instead, developed operator models for UAV control and studied, in simulation, the performance of their ‘human-in-the-loop’ algorithms when operators are unreliable detectors and the algorithm may not perform well in search tasks. While these approaches relate to our context, they do not specifically study how decentralised coordination algorithms can be embedded in such systems.

Multi robot task allocation using market based approach

The main issue in distributed multi-robot coordination is the multi-robot task allocation (MRTA) problem, which has recently become a key research topic. Task allocation is the problem of mapping tasks to robots such that the most suitable robot is selected to perform the most appropriate task, leading to all tasks being satisfactorily completed [5]. Ideally, in MRTA approaches, robots act as a team and allocate resources amongst themselves so as to accomplish their mission efficiently and reliably. Collaboration can lead to faster task completion, reduce the travelled distance, and allow the completion of tasks that are impossible for single robots. Robots should, whenever possible, cooperate strongly in order to maximize their overall task performance.
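As a generic illustration of market-based MRTA (the excerpt describes the problem rather than a specific mechanism), the sketch below runs a simple sequential single-item auction in which each task is awarded to the robot with the cheapest travel-distance bid; the robot names, coordinates, and bidding rule are all assumptions.

```python
# Illustrative auction round: every unallocated task is put up for bid, the
# cheapest (task, robot) pair wins, and the winning robot moves to that task.
import math

def auction(robots, tasks):
    """robots: robot_id -> (x, y); tasks: task_id -> (x, y)."""
    positions = dict(robots)
    remaining = dict(tasks)
    allocation = {}
    while remaining:
        bids = [(math.dist(positions[r], pos), r, t)
                for r in positions for t, pos in remaining.items()]
        _, winner, task = min(bids)          # lowest bid wins
        allocation[task] = winner
        positions[winner] = remaining.pop(task)
    return allocation

print(auction({"r1": (0, 0), "r2": (5, 5)},
              {"t1": (1, 0), "t2": (4, 5), "t3": (0, 2)}))
```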

Multi-robot Task Allocation Based on Ant Colony Algorithm

These approaches simulate the behaviour of insects to assign tasks to robots. Swarm intelligence methods [10], which include the threshold-value method and the ant colony algorithm, are mainly used for robot systems in unknown environments. Because cooperation among individuals is distributed, the failure of a few individuals does not prevent the entire task from being solved; swarm intelligence methods therefore have high robustness and scalability and are well suited to distributed multi-robot systems. In this paper, we propose a solution for MRTA based on the ant colony algorithm.
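The sketch below is a heavily simplified ant-colony assignment loop meant only to illustrate the pheromone-plus-heuristic construction rule this excerpt refers to; the parameters, the travel-distance cost function, and the absence of workload balancing are assumptions and do not reproduce the authors' algorithm.

```python
# Illustrative ACO for assigning tasks to robots: ants build assignments
# guided by pheromone (tau) and a distance heuristic (eta); the best solution
# found so far deposits pheromone after each iteration.
import math
import random

def aco_assign(robots, tasks, ants=20, iters=50, rho=0.1, alpha=1.0, beta=2.0):
    robot_ids, task_ids = list(robots), list(tasks)
    tau = {(r, t): 1.0 for r in robot_ids for t in task_ids}   # pheromone
    eta = {(r, t): 1.0 / (math.dist(robots[r], tasks[t]) + 1e-9)
           for r in robot_ids for t in task_ids}               # heuristic
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            assign, cost = {}, 0.0
            for t in task_ids:
                weights = [tau[(r, t)] ** alpha * eta[(r, t)] ** beta
                           for r in robot_ids]
                r = random.choices(robot_ids, weights=weights)[0]
                assign[t] = r
                cost += math.dist(robots[r], tasks[t])
            if cost < best_cost:
                best, best_cost = assign, cost
        # Evaporate, then reinforce the best assignment found so far.
        for key in tau:
            tau[key] *= (1.0 - rho)
        for t, r in best.items():
            tau[(r, t)] += 1.0 / best_cost
    return best, best_cost

print(aco_assign({"r1": (0, 0), "r2": (10, 0)},
                 {"t1": (1, 1), "t2": (9, 1), "t3": (5, 5)}))
```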

AHAC: Decision Tree Classification with Agglomerative Hierarchical Algorithm Clustering for Time Series Data Clustering

K-means is one of the simplest unsupervised learning algorithms that solve the well-known clustering problem. The procedure follows a simple and easy way to classify a given data set through a certain number of clusters (assume k clusters) fixed a priori. The main idea is to define k centroids, one for each cluster. These centroids should be placed carefully, because different locations lead to different results; the better choice is therefore to place them as far away from each other as possible. The next step is to take each point belonging to the data set and associate it with the nearest centroid. When no point is pending, the first step is complete and an early grouping is done. At this point the k new centroids are re-calculated as the centres of the clusters resulting from the previous step. With these k new centroids, a new binding is made between the data set points and the nearest new centroid. A loop has thus been generated, and as a result of this loop the k centroids change their location step by step until no more changes occur.
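A direct sketch of the loop just described (standard Lloyd-style k-means) written with NumPy; the random initialisation and the toy data are illustrative, and this is not tied to the AHAC paper's own implementation.

```python
import numpy as np

def kmeans(points, k, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initial centroids: k distinct points chosen at random (placing them far
    # apart, e.g. k-means++, usually works better, as the text notes).
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(max_iter):
        # Associate each point with its nearest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Re-calculate the k centroids as the centres of the resulting clusters.
        new_centroids = np.array([points[labels == j].mean(axis=0)
                                  if np.any(labels == j) else centroids[j]
                                  for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break  # centroids no longer move: the loop has converged
        centroids = new_centroids
    return labels, centroids

data = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
labels, centres = kmeans(data, k=2)
```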

Task management of Robot using Priority Job Scheduling

Had the calculation been done by the robot, Algorithm II would have wasted much more of the robot's energy and time than Algorithm I, and it could have led to starvation of a number of processes and possibly to deadlock; however, all the calculations are done by the cloud, and so higher time

K Mean Clustering based Task Allocation Model for Distributed Real Time System

Load sharing policies can be either static or dynamic. Static load sharing policies do not require system state information when making load distribution decisions [7]. Dynamic policies make their load distribution decisions based on the current system state and provide significant performance improvements compared to static policies [5]. This paper considers dynamic load sharing in heterogeneous distributed systems. Most models ignore programs' needs, real load conditions, and users' activities [16]. Task allocation is an NP-complete problem, and the task allocation algorithms proposed in [4,5,7,8,15,17,18] use a static load sharing policy.
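A toy sketch contrasting the two policies described above: a static round-robin allocator that ignores system state, versus a dynamic allocator that picks the currently least-loaded node. The node names and load values are placeholders, not from the paper.

```python
# Illustrative only: static vs. dynamic load-sharing decisions.
import itertools

nodes = ["n1", "n2", "n3"]
current_load = {"n1": 0.7, "n2": 0.2, "n3": 0.5}   # dynamic state information

static_rr = itertools.cycle(nodes)                 # static: ignores system state

def allocate_static():
    return next(static_rr)

def allocate_dynamic():
    # Dynamic: use the current system state (least-loaded node wins).
    return min(current_load, key=current_load.get)

print(allocate_static(), allocate_dynamic())
```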

An Agglomerative Hierarchical Clustering with Various Distance Measurements for Ground Level Ozone Clustering in Putrajaya, Malaysia

Abstract— Ground-level ozone is one of the common pollution issues that has a negative influence on human health. The key characteristic of ozone level analysis lies in the complex representation of such data, which can be expressed as time series. Clustering is a common technique for meteorological and environmental time series data, and the way a clustering technique groups similar sequences relies on a distance or similarity criterion. Several distance measures have been combined with various types of clustering techniques; however, identifying an appropriate distance measure for a particular field is a challenging task. Since hierarchical clustering has been considered the state of the art for meteorological and climate change data, this paper proposes an agglomerative hierarchical clustering for ozone level analysis in Putrajaya, Malaysia using three distance measures, i.e. Euclidean, Minkowski and Dynamic Time Warping. Results show that Dynamic Time Warping outperformed the other two distance measures.
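A sketch of the comparison the abstract describes, run with SciPy: agglomerative clustering of time series under Euclidean, Minkowski and DTW distances. The DTW here is a plain dynamic-programming implementation written for illustration, the data are random placeholders for hourly ozone readings, and the paper's exact linkage method and settings are not known.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

def dtw(a, b):
    """Simple O(n*m) Dynamic Time Warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

series = np.random.rand(10, 24)   # placeholder: 10 days of hourly readings

for name, dists in [
    ("euclidean", pdist(series, metric="euclidean")),
    ("minkowski", pdist(series, metric="minkowski", p=3)),
    ("dtw", pdist(series, metric=lambda u, v: dtw(u, v))),
]:
    Z = linkage(dists, method="average")
    print(name, fcluster(Z, t=3, criterion="maxclust"))
```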

Learning Behavior-Selection by Emotions and Cognition in a Multi-Goal Robot Task

perceptual path performs the “quick and dirty” processing usually associated with emotions. The cognitive path attempts to provide a more sophisticated evaluation using higher-level reasoning. These layers may have their own separate learning mechanisms for adapting their evaluations, but in the experiments the perceptual evaluation is usually implemented as innate fixed knowledge and the cognitive layer always learns from scratch. In the DARE model, the perceptual layer extracts relevant features and the cognitive layer’s task is to identify objects. Nevertheless, a feature of its implementations which it shares with ALEC is that the perceptual layer has a non-differentiated evaluation of events according to their main characteristics, while the cognitive layer accumulates a set of individual instances of events. The DARE model differs from ALEC in the kind of learning mechanisms used (in general consisting of stimulus matching using Euclidean distances, at two levels of complexity) and the type of problems it addresses (the learning abilities are generally applied to stimulus evaluation and the problems derived from the interaction of the agent with a real world are often ignored).

A Modified Hierarchical Agglomerative Approach for Efficient Document Clustering System

The existence of an abundance of information makes searching for information a tedious process for the average user. This has led to an enormous demand for efficient tools that turn data into valuable knowledge. Researchers from numerous technological areas, namely pattern recognition, machine learning, data mining, etc., have been searching for eminent approaches to fulfil this requirement, and document clustering plays a vital role towards this achievement. Document clustering is an unsupervised data mining approach: it groups similar documents into coherent clusters, while dissimilar documents are separated into different clusters. Document clustering has been studied intensively because of its wide applicability in areas such as web mining, search engines, information retrieval, and topological analysis [1].

Many algorithms are available in the literature for performing data clustering. Of these, the two major categories commonly used for document clustering are "partitioning" and "hierarchical". A partitioning clustering algorithm divides the documents into a fixed number of partitions, where each partition represents a cluster. The commonly used partitioning technique is the k-means algorithm, where k is the desired number of clusters; its disadvantage is that the number of clusters is fixed and it is very difficult to select a valid k for an unknown data set. Hierarchical clustering produces a hierarchical tree of clusters called a dendrogram. Hierarchical clustering techniques can be divided into two kinds, agglomerative and divisive. The Agglomerative Hierarchical Clustering (AHC) method starts with each data point as an individual cluster and, at each step, merges the most similar clusters until a given termination condition is satisfied. The Divisive Hierarchical Clustering (DHC) method starts with the whole set of data points as a single cluster and splits a cluster into smaller clusters at each step until a given termination condition is satisfied. The time complexity of most hierarchical clustering algorithms is quadratic, and the algorithm can never undo what was done previously [2].

This paper presents a modified algorithm based on an agglomerative hierarchical approach for a document clustering system. The paper is organised as follows: Section II presents a brief survey of the techniques used for document clustering so far. Section III describes the procedures and algorithms used in document clustering. Section IV explains the process of the proposed work. Experimental results and discussions are shown in Section V. Finally, the conclusion of this paper and future work are given in Section VI.
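As an illustration of the standard AHC baseline described above (not the modified algorithm the paper proposes), the sketch below clusters a toy TF-IDF document matrix with scikit-learn's agglomerative clustering; the corpus, the cosine/average-linkage choices, and the number of clusters are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

docs = [
    "robots allocate tasks with auctions",
    "market based task allocation for robot teams",
    "ozone levels measured in putrajaya",
    "air quality and ozone time series",
]

X = TfidfVectorizer().fit_transform(docs).toarray()
# Start from singleton clusters and repeatedly merge the most similar pair
# (cosine distance, average linkage) until two clusters remain.
# Note: scikit-learn versions before 1.2 name this parameter `affinity`.
model = AgglomerativeClustering(n_clusters=2, metric="cosine", linkage="average")
print(model.fit_predict(X))   # e.g. [0 0 1 1]
```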

Solving Task Allocation to the Worker Using Genetic Algorithm

When a worker performs a task repetitively, she/he requires less time to produce the succeeding units of the task because of his/her learning ability. Nevertheless, a constant production rate is commonly assumed when developing a task-worker assignment, since the learning period is small compared to the overall production run. In current industry, however, products are introduced to the market faster and production lot sizes are small. Because of these small lot sizes, task-worker assignments based on a constant-production-rate assumption may not be applicable [6]. As a result, learning ability must now be considered in the assignments. Here, while assigning tasks to workers, we consider both the skill level and the learning ability of each worker: a task is assigned to a worker only if the worker is capable of performing it. The processing time of each worker varies over the production period depending on the worker's learning ability. We focus on task-worker assignments where the tasks are ordered in a series and the number of tasks is greater than the number of workers [6]. Workers can perform multiple tasks, subject to limited restrictions. An optimization technique was proposed to find the best assignment.

Multi-task clustering for stock selection to enhance prediction performance over multi data

However, previous methods proposed training on the historical data of an individual stock, or of all stocks, to enhance prediction performance [17, 25, 26]. But stock markets are driven by the behaviour of various market participants, which is perhaps the most important of the many variables influencing price [27]. For these reasons, it is not reasonable to rely on such limited methods. To overcome this issue, studies have been conducted with financial news data [28–30]. There is also another way to clarify how stock markets move organically and affect each other: stock price movements are the result of multiple factors such as the macro-economy, the financial situation of a company, investors' sentiments, etc., and financial time series contain high noise. To predict stock price movement, features containing useful information are needed, so feature extraction and selection play significant roles in stock price movement prediction. By doing this, we are able to turn those datasets into features to train the proposed model. One novel method is clustering, which links each stock to its close neighbours. The aim is thus to investigate the correlation structure within the U.S. stock exchange (NYSE) and obtain hierarchical structures with dendrograms based on the correlations among individual stocks.
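A sketch of the correlation-based hierarchical structure mentioned at the end of the excerpt: turn pairwise return correlations into a distance and build a dendrogram with SciPy. The tickers and returns are synthetic placeholders, and the correlation-to-distance formula is one common choice rather than necessarily the paper's.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

tickers = ["AAA", "BBB", "CCC", "DDD"]
returns = np.random.randn(250, len(tickers))      # synthetic daily returns

corr = np.corrcoef(returns, rowvar=False)
dist = np.sqrt(2.0 * (1.0 - corr))                # common correlation distance
# Condensed upper triangle, as scipy's linkage expects.
condensed = dist[np.triu_indices(len(tickers), k=1)]
Z = linkage(condensed, method="average")
dendrogram(Z, labels=tickers)                     # drawing requires matplotlib
```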

A new agglomerative hierarchical clustering to model student activity in online learning

Research on patterns of behaviour in online learning is always interesting to review, because online learning is currently developing very rapidly. This paper discusses patterns of student behaviour, from their activity contributions to a comparison with their final grades in a course. Two main problems are discussed. The first is how to find the right number of clusters (K) as the optimal clustering solution, since the right grouping forms a strategy for understanding the interpersonality of students in online learning. The analysis in this paper shows that the SLG method can find the right number of clusters (K), and that, after evaluation, this new algorithm obtains the highest score compared to the other conventional Agglomerative Hierarchical Clustering methods (single, average, complete) using the CPCC validity index over the five batches of data. The second problem is how to generate detailed interpersonality models of students' active contributions in online learning and compare them with the students' final grades in a course; from the experiments conducted in this paper, a complete interpersonality model can be produced that schools and teachers can use as a reference for guiding students in online learning.
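A sketch of the evaluation step mentioned above: compare linkage strategies by their cophenetic correlation coefficient (CPCC) with SciPy. The activity matrix below is a synthetic stand-in for the paper's student log data, and the proposed SLG method itself is not reproduced here.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, cophenet

activity = np.random.rand(30, 5)     # e.g. 30 students x 5 activity features
d = pdist(activity)

for method in ["single", "average", "complete"]:
    Z = linkage(d, method=method)
    cpcc, _ = cophenet(Z, d)
    # Higher CPCC means the dendrogram preserves the original distances better.
    print(method, round(cpcc, 3))
```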

Implementing a Decision Support Module in Distributed Multi-Agent System for Task Allocation Using Granular Rough Model

information granulation. Missing or null values increase the uncertainty of an information system, and the induced rules cannot be trusted. These missing values are represented by the set of all possible values for the attribute; to indicate such a situation, a distinguished value, the so-called null value, is usually assigned to those attributes. Let S = (U, A) be the information system; for every a ∈ A there is a mapping a: U → V_a, where V_a is called the value set of a. If V_a contains a null value (denoted *) for at least one attribute a ∈ A, then S is called an incomplete information system (IIS); otherwise it is a complete information system. Recently, researchers have presented many ways to handle an IIS; these can be classified into two main approaches. (1) Indirectly handling the IIS (data reparation): this approach transforms an IIS into a complete system using estimation methods (probabilistic and statistical techniques). These estimation methods mainly estimate the null value based on the frequency with which other values appear for the same attribute. The value obtained by these methods may not achieve the best efficiency for the classification of the decision attribute, because it changes the original information and leads to lower support and confidence degrees. That is to say, this approach replaces unknown attribute values by either specific subsets of values or statistical values [28-30]. (2) Directly handling the IIS (model extension): this approach extends the concepts of RST on complete information systems to handle an IIS. It does not require changes to the original system and is still capable of reducing dispensable knowledge efficiently, either by using knowledge reduction that eliminates only the information that is not essential from the point of view of classification or decision making, or by relaxing the requirements of reflexivity, symmetry and transitivity on the indiscernibility relation, i.e., the indiscernibility relation is extended to relations that are not equivalence relations and can process IISs directly [31-35]. A Missing Value Estimator (MVE) algorithm is proposed, as shown in figure 7, to handle the IIS based on information granulation. Suppose that
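A minimal sketch of the indirect (data reparation) approach described above: replace each null value "*" with the most frequent observed value of the same attribute. This illustrates only the frequency-based estimation idea, not the proposed MVE algorithm, and the table contents are made up.

```python
# Illustrative frequency-based repair of an incomplete information system.
from collections import Counter

def repair(table):
    """table: list of rows (lists of attribute values); '*' marks a null value."""
    columns = list(zip(*table))
    fills = []
    for col in columns:
        observed = [v for v in col if v != "*"]
        # Most frequent observed value of this attribute (or '*' if none exist).
        fills.append(Counter(observed).most_common(1)[0][0] if observed else "*")
    return [[fills[j] if v == "*" else v for j, v in enumerate(row)]
            for row in table]

print(repair([["high", "*", "yes"],
              ["high", "low", "no"],
              ["*",    "low", "yes"]]))
```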

A Study on Ranking of Atomic web Services Recommendation with Agglomerative Hierarchical Clustering

PERFORMANCE: Performance is an important quality aspect of a web service and is measured using throughput and latency. Higher throughput and lower latency indicate a good web service. Throughput represents the number of web service requests served in a given time period; latency is the round-trip time between sending a request and receiving the response.
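A tiny illustration of the two metrics just defined, computed from a list of (request_sent, response_received) timestamps in seconds; the data and function name are purely illustrative.

```python
def latency_and_throughput(requests, window_seconds):
    """requests: list of (sent_time, received_time) pairs in seconds."""
    latencies = [recv - sent for sent, recv in requests]
    avg_latency = sum(latencies) / len(latencies)    # round-trip time
    throughput = len(requests) / window_seconds      # requests served per second
    return avg_latency, throughput

print(latency_and_throughput([(0.0, 0.2), (0.5, 0.9), (1.0, 1.1)],
                             window_seconds=2.0))
```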
