proposed scheduling algorithm is to minimize both the variance of deadline misses and the total number of context switches among tasks. To meet these objectives, this paper adopts the Adaptive Weight Approach (AWA), which uses information from the current population to readjust weights and thereby steer the search toward a positive ideal point. Several drawbacks of RM- and EDF-derived algorithms for soft real-time systems (i.e., low resource utilization and avoidable context-switching overhead) can be remedied by the proposed algorithm. We thus take advantage not only of traditional approaches but also of the strengths of GAs, such as high speed, parallel search, and high adaptability. The rest of the paper is organized as follows: Section 2 introduces the continuous-task scheduling problem in soft real-time systems. Section 3 formulates the problem mathematically. Section 4 introduces the GA methods and describes the implementations used for this problem. The experimental results are then illustrated and analysed in Section 5. Finally, Section 6 provides a discussion and suggestions for further work on this problem.
Other scheduling methods for continuous media are modifications of the stride scheduler, which is built on proportional-share mechanisms. Stride schedulers, designed for general tasks, guarantee fairness of resource allocation and predictability. A rate-regulating proportional-share scheduling algorithm based on the stride scheduler has been proposed for continuous media. In that work, a pair of parameters (P, C) is introduced to specify the timing requirements of continuous media: P denotes the task's period, and C the computation time needed in each period. The key concept of the rate-regulating proportional-share algorithm is the rate regulator, which prevents a task from receiving more than its share of the resource in a given period. However, this algorithm loses the fairness that is the strong point of the stride scheduler, and it does not degrade gracefully under overload.
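The mechanism described above can be sketched in a few lines. The following is an illustrative toy model, not the cited algorithm's implementation: the `Task` class, the `STRIDE1` constant, and the unit-quantum assumption are all ours; the rate regulator simply caps each task at C quanta of service per period P.

```python
STRIDE1 = 1 << 20  # large constant from which per-task strides are derived

class Task:
    def __init__(self, name, period, comp):
        self.name = name
        self.period = period        # P: length of each period
        self.comp = comp            # C: computation time needed per period
        # tickets proportional to the task's rate C/P
        self.tickets = max(1, STRIDE1 * comp // period)
        self.stride = STRIDE1 // self.tickets
        self.passval = 0            # stride-scheduler pass value
        self.served = 0             # quanta received in the current period

def schedule(tasks, quanta):
    """Run `quanta` unit-time slots; return the chosen task name per slot
    (None when the rate regulator leaves the processor idle)."""
    timeline = []
    for t in range(quanta):
        # rate regulator, part 1: refresh budgets at each period boundary
        for task in tasks:
            if t % task.period == 0:
                task.served = 0
        # rate regulator, part 2: only tasks under their C budget are eligible
        eligible = [task for task in tasks if task.served < task.comp]
        if not eligible:
            timeline.append(None)
            continue
        # stride scheduling: run the eligible task with the smallest pass
        chosen = min(eligible, key=lambda task: task.passval)
        chosen.passval += chosen.stride
        chosen.served += 1
        timeline.append(chosen.name)
    return timeline
```

With tasks A = (P=4, C=1) and B = (P=2, C=1), four quanta yield one slot for A, two for B, and one idle slot, illustrating both the proportional shares and the capped service.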
There is a large literature in the real-time community on scheduling tasks on multi-processor architectures. Sporadic and aperiodic real-time tasks are considered in  and  respectively, whereas energy-efficient scheduling is proposed in . In , QoS management is proposed, and  aims to minimize either the overall bandwidth consumption or the required number of cores. However, to our knowledge, schedulability analysis dealing with several latency constraints (as defined in this paper) has not been considered. In fact, among the constraints addressed in real-time scheduling, latency constraints are less studied than, for example, periodicity constraints . Nevertheless, latency is a major concern in several fields, such as embedded signal processing applications . In the literature, authors most often speak of an end-to-end deadline, which ensures that the time lapse between sensors and actuators does not exceed a certain value . The main difference between latency and end-to-end deadline constraints is that latency constraints can be as numerous as the system designer wants: they can be imposed between any pair of connected tasks in the system, not only between sensor and actuator tasks. In , a definition of this constraint is given and the existence of a link between deadlines and latency is proven. In addition, distributed architectures involve inter-processor communications whose cost must be taken into account accurately. Furthermore, concerning synchronization-cost reduction, the approach proposed in  is efficient in terms of finding a minimal set of inter-processor synchronizations; however, it assumes that some dependences can be removed even though data are exchanged. Moreover, it is not suitable for satisfying latency constraints, because it imposes a task schedule that does not exploit the potential task parallelism essential to minimizing total execution time.
Moreover, it has not been possible to exploit results from the parallelism community, essentially because precedence constraints are not taken into account there .
Cloud computing is a new computing mode. It is similar to utility computing in that it involves a large number of computers connected through a communication network. The trend in cloud computing is to provide resources as services, including hardware, software, and networking. Every service is provided over the network, which requires high network speed and persistent connections; services are distributed over the network according to the architecture and geo-location. Cloud computing is based on a pay-as-you-go model, i.e., pricing depends on metrics such as usage, durability, cost, and load, so that consumers do not need to buy any hardware or software themselves. The main goals of cloud computing are to achieve high throughput, availability, scalability, consistency guarantees, usability, and fault tolerance using distributed resources . Cloud computing resources should be able to solve large-scale computation problems. Cloud computing combines characteristics of the client–server model, grid computing, peer-to-peer systems, mainframe computing, and utility computing to provide services such as gaming, massive computation, message passing, and networking. Its advantage is delivering a flexible, high-performance, pay-as-you-go, on-demand service. Operators should give guarantees to subscribers and adhere to the Service Level Agreement. Google adopts the MapReduce scheduling mechanism, whose scheduling algorithms are relatively simple (first fit, etc.); FIFO, the default algorithm, performs poorly for short jobs. Besides, Facebook proposes a fair-share scheduler, and Yahoo a capacity scheduler. However, these scheduling algorithms cannot work out a better scheduling scheme. In fact, task scheduling in the cloud is an NP-complete problem with time limits; that is to say, it is generally impossible to find an optimal solution in polynomial time. To improve the performance of cloud computing, efficient task scheduling and resource management are required.
Abstract — In recent years, Grid computing systems have emerged as a way to realize distributed systems. A Grid system is a collection of computing resources and users scattered around the world. These systems are developing and spreading at an ever-increasing speed. The growth of Grids, the increase in the number of available resources, and the growing number of user requests to perform computing tasks at minimum cost and in the least possible time have made resource allocation and scheduling a challenge in such systems. Moreover, some user requests may have deadlines, which makes the scheduling problem more critical. In this paper, for the first time, we propose a method for task scheduling using reserve resources which, in addition to minimizing time and cost, also considers task deadlines. The performance evaluation is conducted in MATLAB and compared with the MinCTT method. We show that, in addition to improved performance for tasks with deadlines, lower time complexity is also obtained.
The Min-Min algorithm starts with the set U of all tasks and calculates the minimum completion time of each task Ti in U over all resources. The task with the overall minimum completion time is selected from this set and assigned to the corresponding resource. The assigned task is then removed from U, and the process is repeated until all tasks are scheduled (U becomes empty) . Like MCT, Min-Min is based on the minimum completion time; however, Min-Min considers all unscheduled tasks at each scheduling decision, whereas MCT considers only one task at a time. The Min-Min algorithm requires O(n²m) time for n tasks and m resources.
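The loop structure above translates directly into code. The following is a minimal sketch (dictionary-based data layout and function names are our own):

```python
def min_min(tasks, resources, exec_time):
    """Min-Min: repeatedly pick the task whose minimum completion time
    over all resources is smallest, and assign it to that resource.
    exec_time[t][r] is the execution time of task t on resource r."""
    ready = {r: 0.0 for r in resources}   # earliest free time of each resource
    unscheduled = set(tasks)
    assignment = {}
    while unscheduled:                    # n iterations ...
        best = None
        for t in unscheduled:             # ... each scanning O(n) tasks
            # minimum completion time of t, over all m resources
            ct, r = min((ready[r] + exec_time[t][r], r) for r in resources)
            if best is None or ct < best[0]:
                best = (ct, t, r)
        ct, t, r = best
        assignment[t] = r                 # schedule the overall-minimum task
        ready[r] = ct
        unscheduled.remove(t)
    return assignment, ready
```

Each of the n outer iterations scans all unscheduled tasks against all m resources, giving the O(n²m) bound stated above.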
A hybrid static and mobile grid computing system is proposed in , in which mobile and static computing devices and bio-sensing nodes are integrated and presented as one unified system. The bio-sensors collect vital signs of an individual, such as blood pressure, temperature, electrocardiogram, and oxygen saturation. The collected data is processed and analyzed on the mobile grid computing infrastructure to assess the individual's health. To deal with uncertainty, the idea of application waypoints is introduced: the service provider executing an application task reports to the broker an estimate of the residual task completion time. If the broker does not receive this estimate from the service provider at the specified waypoint, it marks the provider as failed and assigns additional resources to take over the incomplete tasks. A resource allocation algorithm to efficiently process telemedicine data in the grid is proposed in . In that algorithm, sensors attached to a patient's body collect and send health-related data to the grid through a mobile device, and a patient-management application deployed on the grid processes and analyzes the patient's data.
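The waypoint mechanism amounts to a deadline-based failure detector. The following toy broker is our own illustration of that idea (class and method names are assumptions, not the cited system's API): each report reschedules the next waypoint using the provider's residual-time estimate, and a missed waypoint marks the provider's task as failed.

```python
class Broker:
    """Illustrative sketch of application-waypoint failure detection."""
    def __init__(self):
        self.waypoints = {}   # task -> time by which the next report is due
        self.reports = {}     # task -> time of the last received report
        self.failed = set()   # tasks whose provider missed a waypoint

    def expect_report(self, task, deadline):
        self.waypoints[task] = deadline

    def receive_report(self, task, now, residual_estimate):
        self.reports[task] = now
        # next waypoint: check again after the estimated residual time
        self.waypoints[task] = now + residual_estimate

    def check(self, now):
        """Return tasks whose provider missed a waypoint; mark them failed
        so the caller can assign additional resources to take them over."""
        missed = [t for t, due in self.waypoints.items()
                  if now > due and t not in self.failed]
        self.failed.update(missed)
        return missed
```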
Euiseong Seo et al. addressed energy-efficient scheduling of real-time tasks on multicore processors in November 2008. They tackle the problem of reducing power consumption in a periodic real-time system using DVS on a multicore processor, under the assumption that all cores must run at the same performance level. To reduce power consumption in such a system, they suggest two algorithms: Dynamic Repartitioning and Dynamic Core Scaling. The former mainly reduces dynamic power consumption, and the latter reduces leakage power consumption. In the assumed environment, the lowest dynamic power consumption is obtained when all processor cores have the same performance demand; Dynamic Repartitioning therefore tries to keep the performance demands balanced by migrating tasks between cores during execution while still guaranteeing deadlines. Leakage power is more significant in multicore processors than in traditional unicore processors because of their vastly increased number of integrated circuits; indeed, a major weakness of multicore processors is their high leakage power under low loads. To relieve this problem, Dynamic Core Scaling deactivates excess cores by exporting their assigned tasks to the remaining active cores.
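The balancing idea behind Dynamic Repartitioning can be illustrated with a greedy sketch (our own simplification, not Seo et al.'s algorithm: the real method also preserves per-core EDF schedulability, which this toy omits). A task migrates from the busiest to the idlest core whenever that strictly shrinks the load gap between them.

```python
def repartition(cores):
    """Greedy utilization balancing in the spirit of Dynamic Repartitioning.
    `cores` maps a core id to a list of task utilizations (fractions of
    full speed). Returns the per-core load after balancing."""
    def load(c):
        return sum(cores[c])

    moved = True
    while moved:
        moved = False
        hi = max(cores, key=load)          # most-loaded core
        lo = min(cores, key=load)          # least-loaded core
        gap = load(hi) - load(lo)
        if gap <= 0:
            break
        # migrate the largest task that still strictly shrinks the gap
        for u in sorted(cores[hi], reverse=True):
            if u < gap:
                cores[hi].remove(u)
                cores[lo].append(u)
                moved = True
                break
    return {c: round(load(c), 6) for c in cores}
```

Starting from loads {1.0, 0.1}, one migration of a 0.5-utilization task already brings the cores to {0.5, 0.6}, after which no move can shrink the gap further.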
Lemma 3.1 states that if we can find a feasible flow in which all arc flows are integer, then we can construct a schedule for the original periodic task scheduling problem that satisfies both the Task and Processor constraints. However, it does not answer the question of whether such a flow exists. The next lemma addresses this issue.
Task scheduling in Cloud Computing is an efficient approach for enhancing system performance by scheduling tasks and maintaining system resources in an appropriate manner. The algorithm supports a de-duplication mechanism in which files with identical contents cannot be uploaded to the cloud twice, thereby avoiding duplication. It also supports multitasking on a single CPU, providing good quality of service. A main advantage of the task scheduling algorithm is user-authentication security: only registered users can access the data, which prevents unauthorized users such as intruders and attackers from accessing the original files.
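Content-based de-duplication of this kind is typically implemented by hashing file contents. The following sketch combines that with a toy registered-user check; the class, its names, and the SHA-256 choice are our assumptions, not details from the system described above.

```python
import hashlib

class CloudStore:
    """Sketch: registered-user gate plus content-hash de-duplication."""
    def __init__(self, registered_users):
        self.registered = set(registered_users)
        self.by_hash = {}    # content digest -> stored file name

    def upload(self, user, name, content: bytes):
        # authentication: only registered users may touch the store
        if user not in self.registered:
            raise PermissionError("unregistered user")
        digest = hashlib.sha256(content).hexdigest()
        if digest in self.by_hash:
            return False     # identical content already stored: rejected
        self.by_hash[digest] = name
        return True
```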
Columns of F with the value 0 define the ready-to-run or suspended state of the corresponding task. The elements of the last matrix row define the processor state, which is idle if the element value is 1 and active otherwise. Time slots corresponding to positive elements of the last row could be used to schedule aperiodic tasks. In this simple example, however, aperiodic tasks are not considered, although in modern ESs a multitasking design has to foresee situations in which aperiodic tasks arrive.
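Under one plausible reading of this convention (rows for tasks, a final processor row, a 1 in the processor row meaning idle; this encoding is our assumption, not necessarily the paper's exact one), the slots available to aperiodic tasks can be collected directly from the matrix:

```python
def idle_slots(schedule_matrix):
    """Return the time slots in which the processor row (last row of the
    schedule matrix) holds 1, i.e. the slots free for aperiodic tasks."""
    processor_row = schedule_matrix[-1]
    return [slot for slot, state in enumerate(processor_row) if state == 1]
```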
In this work we proposed a new branch-and-bound method for solving the multiprocessor scheduling problem of makespan minimization. We also presented a new approximate IIT (inserted idle time) algorithm. We found that the minimum-execution-time multiprocessor scheduling problem can be solved within reasonable time for moderate-size systems. As the number of tasks increases, the branch-and-bound method requires more time to obtain the optimal solution. Limiting the number of iterations appears to be a justified and promising way to obtain a good approximate solution, and our computational experiments confirmed the efficiency of the branch-and-bound method under this restriction.
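A minimal branch-and-bound search for this problem (assigning independent tasks to identical processors to minimize the maximum load) can be sketched as follows. This is our own illustrative code, not the paper's algorithm; the `max_nodes` cap mirrors the iteration limit discussed above, turning the exact search into an anytime approximation.

```python
def bb_makespan(task_times, n_procs, max_nodes=None):
    """Branch and bound for makespan minimization on identical processors.
    Returns (best makespan found, per-task processor assignment)."""
    tasks = sorted(task_times, reverse=True)   # branch on long tasks first
    best = [sum(tasks), None]                  # incumbent makespan, assignment
    explored = [0]

    def branch(i, loads, assign):
        if max_nodes is not None and explored[0] >= max_nodes:
            return                             # iteration limit reached
        explored[0] += 1
        if i == len(tasks):                    # leaf: complete assignment
            if max(loads) < best[0]:
                best[0], best[1] = max(loads), assign[:]
            return
        # lower bound: current max load vs. total work spread evenly
        remaining = sum(tasks[i:])
        lb = max(max(loads), (sum(loads) + remaining) / n_procs)
        if lb >= best[0]:
            return                             # prune this subtree
        tried = set()
        for p in range(n_procs):
            if loads[p] in tried:              # symmetric branch, skip
                continue
            tried.add(loads[p])
            loads[p] += tasks[i]
            assign.append(p)
            branch(i + 1, loads, assign)
            assign.pop()
            loads[p] -= tasks[i]

    branch(0, [0] * n_procs, [])
    return best[0], best[1]
```

For the instance [3, 3, 2, 2, 2] on two processors, the search finds the optimal makespan 6 (split 3+3 versus 2+2+2); with a small `max_nodes`, it instead returns the best solution found within the node budget.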
et al. proposed a centralized and a decentralized scheduling algorithm in the literature. To schedule concurrent bags of tasks, online and off-line scheduling algorithms are presented by Benoit et al. The literature also includes a decentralized scheduling algorithm that minimizes the maximum stretch among user-submitted tasks. Yang Y et al. take the constraints of time, cost, and security into consideration and design a scheduling algorithm for data-intensive tasks. Literature  investigated two problems: optimizing the makespan of the tasks under energy constraints, and minimizing energy consumption subject to a makespan bound; that work studied static resource allocation for robust stochastic optimization of makespan and energy for bag-of-tasks (BoT) applications on a heterogeneous computing system. A multi-objective optimization model that minimizes makespan and resource cost is also established in the literature, and a scheduling algorithm based on the ordinal optimization method is designed to solve it. However, that algorithm is inefficient when the number of tasks or processing nodes is large.
The remainder of Section III is organized as follows. In Section III-A we expand on the LP relaxation. In Section III-B we explain the branch-and-bound algorithm. In Section III-C we present the models for scheduling periodic real-time tasks. In Section III-D we show how to handle task chains with end-to-end deadlines. In Section III-E we expand on the FlexRay bus for global communication.
Equations (1) and (2) are the objective functions of this scheduling problem: (1) minimizes the total number of processors used, and (2) minimizes the total tardiness of the tasks. The constraints are given in (3) to (5). Equation (3) states that a task can start only after its earliest start time and must begin before its deadline. Equation (4) defines the earliest start time of a task based on the precedence constraints. Equation (5) is the non-negativity condition on the number of processors.
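The equations themselves are not reproduced in this excerpt. A plausible reconstruction of the formulation described, with s_i, c_i, e_i, d_i denoting the start time, computation time, earliest start time, and deadline of task i, pred(i) its predecessors, and m the number of processors (all symbols are our assumptions), is:

```latex
\begin{align}
\min\;& m \tag{1}\\
\min\;& \sum_{i=1}^{n} \max\bigl(0,\; s_i + c_i - d_i\bigr) \tag{2}\\
\text{s.t.}\;& e_i \le s_i \le d_i \quad \forall i \tag{3}\\
& e_i = \max_{j \in \mathrm{pred}(i)} \bigl(s_j + c_j\bigr) \quad \forall i \tag{4}\\
& m \ge 0 \tag{5}
\end{align}
```

Note that under this reading a task must begin by its deadline but may finish after it, which is what makes the tardiness objective (2) non-trivial.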
multiple transient faults. In the case of multiple faults, the feasibility rate is considerably higher. The approach dynamically selects a fault-recovery method to overcome faults occurring in the system, making use of the task utilization of critical and non-critical tasks. Regarding the paper on tolerance to multiple transient faults , it is noted that the methods proposed there sense only some special types of faults, so there is no suitable method for detecting arbitrary faults that occur in real-time systems.
Let us outline our four-phase approximation algorithm. Our approach begins by assigning tasks to the processors on which they are to be processed, in such a way that computing and communication time are balanced (suboptimally); then the actual schedules are constructed. The schedules generated by our algorithm consist of two parts: a communication schedule that specifies when all communications take place, and a computation schedule that specifies when all processing of the tasks takes place. We say that we approximate the problem by ``restriction'', since our initial approach performs all communications first and then, separately, all computation. But since the tasks are processed in the same order in which their data arrive on the in-channel associated with the processor, it may be possible to overlap at least portions of these schedules. Our approximation technique is thus ``restriction'' followed by a posteriori ``overlapping''. Our general approach is as follows.
A grid computing system can support the execution of computationally intensive parallel and distributed applications, and the main characteristics of grid computing and heterogeneous computing systems are similar. A novel scheduling algorithm, called NHEFT, is proposed in this paper to enhance the Heterogeneous Earliest-Finish-Time (HEFT) algorithm. NHEFT works with a bounded number of heterogeneous processors; its main objectives are high performance and fast scheduling. At each step, the algorithm selects tasks using a rank system, minimizing the earliest finish time together with the cost.
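The rank system HEFT uses (and NHEFT builds on) is the upward rank: rank_u(t) = w(t) + max over successors s of (c(t, s) + rank_u(s)), where w(t) is the task's average computation cost and c(t, s) the average communication cost of the edge. Tasks are then scheduled in decreasing rank order, which respects precedence. The sketch below computes these ranks; NHEFT's cost-aware refinements are not reproduced here, and the data layout is our own.

```python
def upward_ranks(avg_cost, succs, avg_comm):
    """Compute HEFT upward ranks for a task DAG.
    avg_cost[t]: average computation cost of task t;
    succs[t]: list of immediate successors of t;
    avg_comm[(t, s)]: average communication cost on edge t -> s."""
    memo = {}

    def rank(t):
        if t not in memo:
            # exit tasks (no successors) get rank equal to their own cost
            memo[t] = avg_cost[t] + max(
                (avg_comm.get((t, s), 0) + rank(s) for s in succs.get(t, [])),
                default=0)
        return memo[t]

    for t in avg_cost:
        rank(t)
    return memo
```

Sorting tasks by decreasing upward rank gives the priority list from which each task is mapped, in turn, to the processor minimizing its earliest finish time.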