International Journal of Engineering Technology and Computer Research (IJETCR) Available Online at www.ijetcr.org
Volume 5; Issue 5; September-October: 2017; Page No. 92-99 Journal Approved by UGC
Optimization of the Mixed Model Processor Scheduling
Dr. M. RAJA SEKAR
Professor, CSE Department, VNRVJIET, Hyderabad, India.
rajasekarm9@gmail.com
Received 14 Aug. 2017; Accepted 14 Sep. 2017
Abstract
Optimization of the mixed model processor scheduling (OMMPS) means allocating processor time to a number of processes using a mixed model of processor scheduling (MMPS) so as to minimize the average turnaround time of the processes. Round robin scheduling and priority scheduling are usually taken into account with multilevel queue scheduling; here a new scheduling concept is designed in which round robin scheduling incorporates shortest job first. In the first-level queue, deadline scheduling is implemented. All processes that are ready to execute must enter the ready queue. After entering the ready queue, round robin scheduling normally serves processes in FIFO order; instead of FIFO we have used priority scheduling in which the highest priority is given to the shortest processes, so the shortest processes enter the queue first. The nicety of this paper is that we have considered deadline scheduling in parallel with round robin (RR) scheduling in order to provide for the execution of real-time processes as well, without moving away from the round robin concept. In round robin scheduling the hard part is deciding what the length of the time slice should be. This difficulty can be overcome by assuming that the processes follow a normal distribution with mean µ and variance σ², and setting the time slice of each process to σ. The optimized mixed model consists of two queues, q1 and q2, where q1 is meant for real-time processes, executed using deadline scheduling, and q2 consists of the remaining processes, in which the shortest processes are executed first under round robin scheduling. The model was applied to various randomly chosen sets of processes, and it is shown that their average turnaround time is reduced compared with the usual pattern of round robin scheduling; the average waiting time is also reduced.
1. INTRODUCTION
In the optimization of the mixed model processor scheduling discipline, we classify the workload according to its characteristics and maintain separate queues serviced by different schedulers [1, 2, 3], which is multilevel queue scheduling. In this mixed model we have taken two queues. The first queue, q1, holds real-time/deadline jobs, which are executed using the deadline scheduling discipline. The second queue, q2, is used to load the remaining sorted jobs [4, 5, 6]. The job with the shortest execution time enters q2 first, and the process continues until all jobs are loaded into q2. When q2 is ready, its jobs are executed using the round robin scheduling discipline. We have thus mixed the round robin (RR) scheduling discipline with the shortest job first scheduling discipline, so as to minimize the average turnaround time as well as the average waiting time of the processes [7, 8, 9].
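The two-queue behaviour described above can be sketched in a few lines of Python. The function name, the job-tuple layout and the assumption that all jobs arrive at time 0 are mine; the paper gives no implementation, so treat this only as an illustrative sketch.

```python
from collections import deque

def run_mixed_model(realtime, others, quantum):
    """realtime: list of (name, burst, deadline); others: list of (name, burst).
    q1 is served earliest-deadline-first; q2 is sorted shortest-job-first
    and then served round robin with the given quantum.
    Returns {name: finish_time}, assuming all jobs arrive at time 0."""
    clock, finish = 0, {}

    # q1: deadline scheduling, each real-time job runs to completion.
    for name, burst, _dl in sorted(realtime, key=lambda j: j[2]):
        clock += burst
        finish[name] = clock

    # q2: shortest jobs enter first, then round robin on the sorted queue.
    q2 = deque(sorted(others, key=lambda j: j[1]))
    remaining = {name: burst for name, burst in q2}
    while q2:
        name, _ = q2.popleft()
        slice_ = min(quantum, remaining[name])
        clock += slice_
        remaining[name] -= slice_
        if remaining[name] == 0:
            finish[name] = clock          # job done: record completion
        else:
            q2.append((name, remaining[name]))   # preempted: back to tail
    return finish
```

With all arrivals at t = 0, the finish times are also the turnaround times, so averaging the returned values gives the average turnaround time directly.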
Each job has its own CPU burst time, which is the total amount of time that the process occupies the CPU to complete its execution. Common performance measures and optimization criteria for increasing system performance are processor utilization, throughput, turnaround time, waiting time and response time [10, 11, 12]. Processor utilization is the average fraction of time during which the processor is busy [13, 14, 15]. Throughput is the amount of work completed in a unit of time; one way to express throughput is as the number of user jobs executed in a unit of time [16, 17, 18].
Turnaround time of a process or job is defined as the time span between the submission of the process and its completion; it is the sum of the execution time and the waiting time. Waiting time is the penalty imposed for sharing resources with others; in other words, it is the time that a process spends waiting for resource allocation due to contention with others in a multiprogramming system. Response time is the time that elapses from the moment the last character of a command line launching a program or transaction is entered until the first result appears on the terminal [19, 20].
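These definitions can be checked with a tiny worked example. The numbers below are arbitrary; turnaround = completion − arrival and waiting = turnaround − burst follow directly from the text.

```python
def metrics(arrival, burst, completion):
    """Turnaround and waiting time of one job, per the definitions above."""
    turnaround = completion - arrival
    waiting = turnaround - burst
    return turnaround, waiting

# A job arriving at t=0 with a 24-unit burst that completes at t=30:
t, w = metrics(arrival=0, burst=24, completion=30)
# turnaround t = 30, waiting w = 6
```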
First come first served (FCFS) is the simplest scheduling discipline. The workload is processed in order of arrival, without any preemption. FCFS scheduling is quite straightforward, but its performance is very poor because of its excessive average waiting time; with no preemption, resource utilization may also be quite low [21, 22].
Round robin scheduling is designed to achieve good and relatively evenly distributed terminal response times. The performance of round robin scheduling is very sensitive to the choice of the time slice. For this reason, in our model we have considered that if the processes follow a normal distribution then the time slice can be taken as the standard deviation of that distribution. In this paper we clearly show that the average turnaround time and average waiting time of mixed model processor scheduling are less than those of round robin scheduling [23, 24, 25, 26, 27, 28].
2. ROUND-ROBIN SCHEDULING
Round robin is one of the oldest, simplest, fairest and most widely used scheduling algorithms, designed especially for time-sharing systems. A small unit of time, called a time slice or quantum, is defined. All runnable processes are kept in a circular queue. The CPU scheduler goes around this queue, allocating the CPU to each process for a time interval of one quantum. New processes are added to the tail of the queue.
The CPU scheduler picks the first process from the queue, sets a timer to interrupt after one quantum, and dispatches the process.
If the process is still running at the end of the quantum, the CPU is preempted and the process is added to the tail of the queue. If the process finishes before the end of the quantum, the process itself releases the CPU voluntarily. In either case, the CPU scheduler assigns the CPU to the next process in the ready queue. Every time a process is granted the CPU, a context switch occurs, which adds overhead to the process execution time. While other processes are waiting for the processor, no process can run for more than one time slice. If a process needs more time than the allocated time slice, it is preempted after one time slice and placed at the end of the ready queue to await its next allocation.
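The behaviour just described can be simulated directly. This is a minimal sketch assuming all processes arrive at time 0; the function name and data layout are mine, not the paper's.

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: cpu_burst}. Returns the average waiting time, where
    waiting = completion - burst for processes that all arrive at t=0."""
    queue = deque(bursts)               # FIFO order of process names
    remaining = dict(bursts)
    clock, completion = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            completion[name] = clock    # finished: release the CPU
        else:
            queue.append(name)          # preempted: back to the tail
    waits = [completion[n] - bursts[n] for n in bursts]
    return sum(waits) / len(waits)
```

For bursts of 24, 3 and 3 with a quantum of 4, this gives the familiar textbook average waiting time of 17/3 ≈ 5.67.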
Figure 1: Mixed model processor scheduling

The performance of round robin is dependent on the choice of time slice, which is usually a few milliseconds.
Because processor time is effectively allocated to processes on a rotating priority basis, the discipline is called round robin.
3. TIME QUANTUM SIZES
The size of the time quantum chosen influences how effective the optimization of the mixed model processor scheduling will be. If the quantum is too large, then the MMPS becomes too similar to SJF. If the quantum is very small, then there will be a lot of context switching. A general guideline is to make the time quantum a little larger than the time required for a typical interaction, but this may not work well. To overcome this difficulty, we assume that the process times follow a normal distribution with mean µ and variance σ² in order to fix the time slice, and we take the size of the time quantum to be σ.
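As a sketch of this rule, the quantum can be estimated from observed burst times; the burst values below are hypothetical, and statistics.pstdev computes the population standard deviation.

```python
from statistics import mean, pstdev

bursts = [8, 10, 12, 9, 11]      # hypothetical CPU burst times
mu, sigma = mean(bursts), pstdev(bursts)
quantum = sigma                  # time slice = standard deviation, per the text
```

With these bursts the quantum comes out to √2 ≈ 1.41 time units.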
4. THE NORMAL DISTRIBUTION
This distribution is extremely important in statistical applications because of the central limit theorem, which states that, under very general assumptions, the mean of a sample of n mutually independent random variables, each having a distribution with finite mean and variance, is normally distributed in the limit as n goes to infinity. The probability density function of the normal distribution is

f(x) = (1 / (σ√(2π))) exp(−(x − µ)² / (2σ²)),  −∞ < x < ∞,

where µ and σ² are the mean and variance of the distribution.
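The density can be evaluated numerically; Python's standard library also provides it as statistics.NormalDist, against which a hand-written version of the formula can be checked. The sample point below is arbitrary.

```python
import math
from statistics import NormalDist

def normal_pdf(x, mu, sigma):
    """Normal density written out from the formula above."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Cross-check against the standard library at x = 0.5 for N(0, 1):
value = normal_pdf(0.5, 0.0, 1.0)
reference = NormalDist(0, 1).pdf(0.5)
```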
4.1. MIXED MODEL PROCESSOR SCHEDULING
The optimization of the mixed model processor scheduling algorithm is designed specifically for time-sharing systems. This model is similar to round robin.

For mixed model processor scheduling the ready queue is treated as an SJF queue, but preemption is added.

Most systems involve parameters and variables which are random variables due to uncertainties.
Probabilistic methods are powerful in modeling such systems. Stochastic programming deals with situations where uncertainty is present in the data or model. In deterministic mathematical programming the data (coefficients) are known, whereas in stochastic programming the data are unknown and instead a probability distribution may be given. Such models are sometimes called chance-constrained linear programs: as the name implies, they are mathematical (linear, integer, mixed-integer or non-linear) programs with a stochastic element present in the data. Examples include modeling an investment portfolio so as to meet random liabilities, modeling strategic capacity investments, and modeling the operation of electrical power supply systems so as to meet customer demand for electricity [1, 2, 3, 4, 5, 6, 7].
This paper focuses mainly on linear stochastic fractional programming (LSFP) with joint probabilistic constraints, in which some of the data are random, with at least one deterministic constraint.
The objective is to maximize the ratio of two linear functions subject to a set of probabilistic and deterministic inequalities and non-negativity constraints on the variables. We shall introduce the elements of LSFP with joint probability constraint techniques. We shall also discuss the transformation of LSFP problems into non-linear programming problems and their solution through the SLP method and the branch and bound technique [8, 9, 10, 11].
The SLP method, also known as the cutting plane method, was originally presented by Cheney and Goldstein and by Kelley [12, 13, 14, 15]. The concept of solving a series of linear programming problems in order to obtain the solution of the original non-linear programming problem is known as sequential linear programming. Each linear programming problem is generated by approximating the non-linear objective and constraint functions using a first-order Taylor series.
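The first-order Taylor step underlying SLP can be sketched as follows. The constraint function, linearization point and the numerical (central-difference) gradient are my own illustrative choices, not the paper's.

```python
def grad(g, y, h=1e-6):
    """Central-difference gradient of g at the point y (a list of floats)."""
    out = []
    for i in range(len(y)):
        up = y[:]; up[i] += h
        dn = y[:]; dn[i] -= h
        out.append((g(up) - g(dn)) / (2 * h))
    return out

def linearize(g, yj):
    """Return L(y) = g(yj) + grad(g, yj) . (y - yj), the Taylor linearization."""
    g0, dg = g(yj), grad(g, yj)
    return lambda y: g0 + sum(d * (yi - yji) for d, yi, yji in zip(dg, y, yj))

# Example nonlinear constraint g(y) = y0^2 + y1^2 - 4 <= 0, linearized at (1, 1):
g = lambda y: y[0] ** 2 + y[1] ** 2 - 4
L = linearize(g, [1.0, 1.0])
```

At the linearization point L agrees with g exactly; away from it, L is the supporting (cutting) plane that the SLP iteration feeds to an LP solver.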
Though the LSFP model with joint probability constraints is non-linear, after adopting the chance-constraint concepts the model becomes partially linear, in the sense that the objective function turns out to be linear. One of our interests was converting the non-linear constraints into linear constraints, which was accomplished by the SLP method [16, 17, 18, 19].
Once a solution is obtained by the SLP method, the branch and bound (B&B) technique comes into the picture to obtain integer solutions. This method starts by considering the original solution with the complete feasible region, which is called the root node problem/solution. The lower-bounding and upper-bounding procedures are applied to the root problem; the algorithm is given in section 3.b. If the bound satisfies the constraints, then an integer solution is obtained and the procedure terminates; if the constraints are not satisfied, the corresponding branch terminates or the procedure continues recursively. Among the set of all integer solutions, the one that optimizes the objective function is taken as the optimal solution.
The linear fractional programming problem and the linear stochastic programming problem are discussed in the succeeding sections. The formulation of the linear stochastic fractional programming problem with joint probability constraints, for various cases of stochastic data, is given in the next section [21, 22].
5. METHODOLOGY
In real-world situations a decision maker faces the problem of optimizing a ratio. For example, in a manufacturing company one may have to optimize output/cost, profit/cost, etc. Optimizing the ratio of two linear functions subject to some constraints is called a linear fractional programming problem (LFPP).
Using this notation, the LFPP is stated as follows:

Maximize Z(x) = (c′x + α) / (d′x + β) subject to Tx ≤ b, x ≥ 0,

where T is an m×n matrix, b is an m×1 vector, c, d and x are n×1 vectors, and α, β are scalars. Let F be the set of feasible solutions of this problem.
Using the transformation y = tx, where t is a scalar, we have replaced the problem by the following two problems:
If (d′X+β) > 0 ∀ x∈F, the first problem applies, whereas if (d′X+β) < 0 ∀ x∈F, the second applies. In all practical cases the denominator (d′X+β) cannot change sign over F, so the two cases cannot both arise, and solving the LFPP ultimately amounts to solving only one of the two problems.
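A small brute-force illustration, with made-up data, shows an LFPP objective and the sign assumption on the denominator in action over a tiny discrete feasible set.

```python
from itertools import product

# Illustrative data: maximise (c.x + a) / (d.x + B) over {x >= 0, x1 + x2 <= 3}.
c, d, a, B = [2.0, 1.0], [1.0, 1.0], 0.0, 1.0

def ratio(x):
    return (sum(ci * xi for ci, xi in zip(c, x)) + a) / \
           (sum(di * xi for di, xi in zip(d, x)) + B)

# Integer points of the feasible region F:
F = [x for x in product(range(4), repeat=2) if x[0] + x[1] <= 3]

# The denominator keeps one sign over all of F, as assumed above:
assert all(sum(di * xi for di, xi in zip(d, x)) + B > 0 for x in F)

best = max(F, key=ratio)   # brute-force maximiser of the fractional objective
```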
5.1 Linear Model
The chance-constrained stochastic programming approach was first studied by Jonson [2]. The mathematical formulation can be stated as follows:

Optimize f(X) = c′X (3)

subject to P(Σj aij xj ≤ bi) ≥ 1 − pi, i = 1, …, m, X ≥ 0,

where 1 − pi is the least probability with which the ith constraint must be satisfied. Assume that the random variables are normally distributed with known parameters.
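For a single constraint whose right-hand side bi is N(µ, σ²), the standard deterministic equivalent of P(a·x ≤ bi) ≥ 1 − pi is a·x ≤ µ + σ·z(pi), where z is the standard normal quantile. A minimal sketch, with illustrative numbers:

```python
from statistics import NormalDist

def deterministic_rhs(mu, sigma, p):
    """Right-hand side of the deterministic equivalent of one chance constraint
    with b ~ N(mu, sigma^2) and required satisfaction probability 1 - p."""
    z = NormalDist().inv_cdf(p)      # standard normal quantile at probability p
    return mu + sigma * z

# With b ~ N(100, 10^2) and p = 0.05 the constraint tightens to roughly a.x <= 83.55,
# i.e. it holds with at least 95% probability.
rhs = deterministic_rhs(100, 10, 0.05)
```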
In real-life situations, there are many applications of stochastic programming.
5.2 Fractional Programming Problem
Let us consider the mathematical form of the LSFP as follows:

Maximise Z(x) = (c′x + α) / (d′x + β) subject to P(Tx ≤ b) ≥ 1 − p, x ≥ 0.
Thus the deterministic fractional non-linear programming problem equivalent to the given fractional stochastic linear programming problem is
Using the transformation y = tx, the above model reduces to a non-linear programming problem.
5.3 Proposed Algorithm
1. Start with an initial feasible point Yint and a scalar t.
2. Linearize the non-linear functions in constraints about the point Yj as
3. Formulate the approximating LSFP problem with joint probability constraints as
4. Solve the approximating LPP to obtain the solution vector Ynext and the scalar tnext.
5. Evaluate the original constraints at Ynext, i.e. compute gi(Ynext) for i = 1 to m. If gi(Ynext) < r for i = 1 to m, where r is a prescribed small positive tolerance, all the original constraints can be assumed to have been satisfied. Hence stop the procedure, taking Yopt = Ynext and topt = tnext.
6. Otherwise set next = int + 1 and go to step 2.
7. Solve the problem as if all the variables were real numbers, i.e. not integers. This solution gives an upper bound for the LSFP problem.
8. Choose one variable at a time that has a non-integer value, say xj, and branch that variable to the next higher integer value for one subproblem and to the next lower integer value for the other. The real-valued solution of variable j is denoted xj.
9. Now xj is an integer in either branch. Fix this integer value of xj for the following steps of branch and bound. Select the branch which yields the maximum objective function value with all constraints satisfied, then repeat the branching step on another variable.
10. Stop a particular branch if its solution does not satisfy the constraints of the original problem; otherwise stop the branch when all the desired integer variables have been obtained, that is, when every integer variable has taken an integer value.
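Steps 7-10 can be illustrated on a deliberately simple integer problem: a 0/1 knapsack solved by branch and bound with an LP-relaxation bound. The instance and the function are mine and stand in for the LSFP subproblems only schematically.

```python
def bnb_knapsack(values, weights, cap):
    """Maximise sum(values[i]*x[i]) for binary x with sum(weights[i]*x[i]) <= cap."""
    # Order items by value density, best first (also the greedy LP order).
    items = sorted(range(len(values)), key=lambda i: values[i] / weights[i],
                   reverse=True)

    def bound(k, v, w):
        # LP-relaxation upper bound: fill greedily, take the last item fractionally.
        for i in items[k:]:
            if w + weights[i] <= cap:
                v += values[i]; w += weights[i]
            else:
                return v + values[i] * (cap - w) / weights[i]
        return v

    best = 0
    def branch(k, v, w):
        nonlocal best
        if w > cap:                      # infeasible branch: prune (step 10)
            return
        if k == len(items):
            best = max(best, v)          # integer solution found
            return
        if bound(k, v, w) <= best:       # bound cannot beat incumbent: prune
            return
        i = items[k]
        branch(k + 1, v + values[i], w + weights[i])  # branch x_i = 1 (step 8)
        branch(k + 1, v, w)                            # branch x_i = 0
    branch(0, 0, 0)
    return best
```

On the classic instance with values (60, 100, 120), weights (10, 20, 30) and capacity 50, the search returns the optimal value 220.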
6. RESULTS AND CONCLUSION
A company manufactures n types of machines. Their production requires m processes. The processing times tij are known to be independently distributed normal variables with estimated means uij and standard deviations sij. We also know the average profit for each machine. The annual average fixed cost to manufacture the machines is α, and the mean capital invested is Rs. Ci per machine of type i. The tables below provide more details.

The combination of the SLP method and the B&B technique takes advantage of an exact method. The SLP method quickly reaches a non-integer solution that is close to the optimum, and the branch and bound technique guarantees the global optimum as well as an integer solution. Since a good approximation is obtained by the SLP method, it does not take many branches for the B&B to reach the optimal solution.
Figure 1: Multi processor Scheduling
Figure 2: Graph in between number of product and CMAX
Figure 3: LSFP Comparison
Figure 4: LSFP vs ILPP
Table 1: Data for manufacturing
Resources Machine types Availability
1 234 213 980 321 452
2 432 521 643 643 675
3 123 431 612 125 342
4 678 653 890 764 432
5 908 784 765 532 145
6 765 476 432 145 653
7 643 098 165 653 543
8 231 712 653 421 642
9 621 453 532 543 234
10 213 543 432 653 765
11 542 213 765 324 345
12 521 764 875 542 231
Table 2: Process timings
Process T1 T2 T3 T4 T5
P1 432 234 452 126 764
P2 543 654 786 234 564
P3 321 564 764 343 234
P4 654 125 675 432 765
P5 642 698 892 998 342
P6 453 678 436 654 621
P7 543 654 982 892 980
P8 543 432 564 643 987
P9 953 643 247 864 231
P10 982 126 875 321 421
P11 098 412 762 542 509
P12 412 864 986 721 986
P13 213 543 221 981 891
P14 986 213 653 981 965
P15 213 975 098 213 165
P16 981 543 128 098 254
In addition, the B&B technique generates many sets of solutions. These competitive alternatives provide management with several options and flexibility. This method can be applied to any optimization problem whose objective is to optimize the ratio of two linear cost functions subject to constraints with some stochastic variables. The B&B technique for LSFP with joint probability constraints successfully yields better results for the machine manufacturing problem discussed above.
7. REFERENCES
1. Aggarwal, K. K., Gupta, J. S., &Misra, K. B. (1975).
A new heuristic criterion for solving a redundancy optimization problem. IEEE Transactions on Reliability, 24(1), 86–87.
2. Aggarwal, K. K. (1976). Redundancy optimization in general system. IEEE Transactions on Reliability, 25(5), 330–332.
3. Aggarwal, K. K., & Gupta, J. S. (2005). Penalty function approach in heuristic algorithms for constrained redundancy reliability optimization.
IEEE Transactions on Reliability, 54(3), 549–558.
4. Ansari, S. I. (2011). Study of some developments in stochastic programming and their applications (Doctoral dissertation). Retrieved from http://hdl.handle.net/10603/28650
5. Beraldi, P., & Bruni, M. (2010). An exact approach for solving integer problem under probabilistic constraints with random technological matrix. Annals of Operations Research, 177(1), 127–137.
6. Birnbaum, Z. W., Esary, J. D., & Saunders, S. C.
(1961). Multi-component systems and structures and their reliability. Technometrics, 3(1), 55–77.
7. Bulfin, R. L., & Liu, C. Y. (1985). Optimal allocation of redundant components for large systems. IEEE Transactions on Reliability, 34(3), 241–247.
8. Charles, V., & Dutta, D. (2005). Linear stochastic fractional programming with sum-of probabilistic fractional objectives. Retrieved from http://www.optimization-online.org/DB_FILE/2005/06/1142.pdf
9. Charles, V., & Udhayakumar, A. (2012). Genetic algorithm for chance constrained reliability stochastic optimization problems. International Journal of Operational Research, 14(4), 417–432.
10. Charnes, A., & Cooper, W. W. (1963). Deterministic equivalents for optimizing and satisficing under chance constraints. Operations Research, 11(1), 18–39.
11. Charnes, A., Cooper, W. W., & Thompson, G. L.
(1964). Critical path analyses via chance constrained and stochastic programming.
Operations Research, 12(3), 460–470.
12. N. Baba and A. Morimoto. Stochastic approximation methods for solving the stochastic multi-objective programming problem. International Journal of Systems Sciences, 24, 789–796, 1993.
13. R. Caballero, E. Cerdá, M. M. Munoz, L. Rey, and I. M. Stancu-Minasian. Efficient solution concepts and their relations in stochastic multi-objective programming. Journal of Optimization Theory and Applications, 110, 53–74, 2001.
14. D. R. Carino, T. Kent, D. H. Meyers, C. Stacy, M. Sylvanus, A. L. Turner, K. Watanabe, and W. T. Ziemba. The Russell-Yasuda Kasai Model: an asset liability model for a Japanese insurance company using multistage stochastic programming. Interfaces, 24, 29–49, 1994.
15. V. Charles. E-model for Transportation Problem of Linear Stochastic Fractional Programming. Optimization Online, 2007. http://www.optimization-online.org/DB_FILE/2007/03/1607.pdf
16. M.RajaSekar, “Region classification using SVMs”, Journal of Geomatics, pp. 87–89, 2007.
17. M.RajaSekar, “Automatic Vehicle Identification”, Journal of Advanced Research in Computer Engineering, pp. 0974-4320, 2015.
18. M.RajaSekar “FER from Image sequence using SVMs “, Journal of Data Engineering and computer science, pp 80-89, 2016.
19. Burges, C. J. C., A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):pp 121–167, 2015.
20. Behrooz Kamgar-Parsi, Behzad Kamgar-Parsi, Jain, A., Dayhoff, J., “Aircraft Detection: A Case Study in Using Human Similarity Measure”, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 23, No. 12, 2001, pp. 1404–1414.
21. M.RajaSekar, “Implementation of recursive construction for building effective compression strategy”, JRRECS, P23-28, 2013.
22. M.RajaSekar, “An Effective Atlas-guided Brain image identification using X-rays”, IJSER, P23- 29,2016.
23. M.RajaSekar, “Mammogram Images Detection Using Support Vector Machines”, International Journal of Advanced Research in Computer Science, Volume 8, No. 7, July – August 2017.
24. M.RajaSekar, “Areas categorization by operating support vector machines”, ARPN Journal of Engineering and Applied Sciences vol. 12, no. 15, August 2017.
25. M.RajaSekar, “Diseases Identification by GA-SVMs”, International Journal of Innovative Research in Science, Engineering, Vol 6, Issue 8, August 2017.
26. M.RajaSekar, “Classification of Synthetic Aperture Radar Images using Fuzzy SVMs”, International Journal for Research in Applied Science & Engineering Technology (IJRASET), Volume 5 Issue 8, August 2017.
27. M.RajaSekar, “Breast Cancer Detection using Fuzzy SVMs”, International Journal for Research in Applied Science & Engineering Technology (IJRASET), Volume 5 Issue 8, September 2017.
28. M.RajaSekar, “Software Metrics in Fuzzy Environment”, International Journal of Computer & Mathematical Sciences (IJCMS), Volume 6, Issue 9, September 2017.