Chen et al. [35] introduced an improved load-balanced algorithm based on the **Min**-**Min** algorithm, called LBIMM, to reduce makespan and increase resource utilization. Cloud providers offer computing resources to users on a pay-per-use basis. To accommodate the demands of different users, they may offer different levels of service quality, and the cost per resource unit depends on the service level the user selects; in return, the user receives guarantees regarding the provided resources. To honor the promised guarantees, user priority was incorporated into the proposed PA-LBIMM so that users' demands are satisfied more completely. Finally, the algorithm was simulated using the MATLAB toolbox. The simulation results show that the improved algorithm leads to significant performance gains, achieving over 20% improvement in both VIP user satisfaction and resource utilization ratio.

So LBMM executes **Min**-**Min** in the first round. In the second round it selects tasks from heavily loaded resources and reassigns them to lightly loaded ones. LBMM identifies the heavily loaded resource as the one with the highest makespan in the schedule produced by **Min**-**Min**. It then considers the tasks assigned to that resource and chooses the task with the minimum execution time on it. The completion time of that task is calculated for all resources under the current schedule. The maximum of these completion times is then compared with the makespan produced by **Min**-**Min**. If it is less than the makespan, the task is rescheduled on the resource that produces it, and the ready times of both resources are updated. Otherwise the next-largest completion time of that task is selected and the steps are repeated. The process stops once all resources, and all tasks assigned to them, have been considered for rescheduling. In this way, tasks are rescheduled onto resources that are idle or lightly loaded.
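
The second-round reassignment described above can be sketched as follows. This is a minimal interpretation, not the authors' code; the names `exec_time`, `schedule`, and `ready`, and the assumption that tasks on a resource run back-to-back from time zero, are illustrative.

```python
def lbmm_rebalance(exec_time, schedule, ready):
    """One LBMM rebalancing step over a Min-Min schedule.

    exec_time[t][r]: execution time of task t on resource r
    schedule[r]:     list of tasks currently assigned to resource r
    ready[r]:        finish (ready) time of resource r
    """
    makespan = max(ready)
    heavy = ready.index(makespan)  # resource producing the makespan
    if not schedule[heavy]:
        return schedule, ready
    # Task with minimum execution time on the heavily loaded resource.
    t = min(schedule[heavy], key=lambda task: exec_time[task][heavy])
    # Try candidate resources from the largest completion time downward,
    # as in the description above.
    candidates = sorted(
        (r for r in range(len(ready)) if r != heavy),
        key=lambda r: ready[r] + exec_time[t][r],
        reverse=True,
    )
    for r in candidates:
        if ready[r] + exec_time[t][r] < makespan:
            schedule[heavy].remove(t)
            schedule[r].append(t)
            ready[r] += exec_time[t][r]
            # Simplification: tasks on the heavy resource run back-to-back,
            # so removing one shortens its ready time by that task's cost.
            ready[heavy] -= exec_time[t][heavy]
            break
    return schedule, ready
```

For example, if resource 0 holds three tasks of cost 2 each (`ready = [6, 0]`) and the idle resource would finish the moved task at time 5, the step moves one task and lowers the makespan from 6 to 5.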

Abstract. Cloud computing is a newly emerging area in academia and industry. In cloud computing, various service providers offer different services to customers for data storage and data processing. CloudAnalyst is a tool that is very useful for the development and simulation of a cloud environment before actual deployment in a real-world application. Within CloudAnalyst, various service broker policies and load balancing policies have been used to serve requests from different users. In this paper, a new hybrid approach is proposed for load balancing in cloud computing. This approach uses hybrid allocation of VMs to different user-base requests. Keywords: load balancing, cloud computing, Round-Robin, **min**-**min** algorithm, response time, data center response time, load.

The time-sliced and priority-based algorithm performs better than the round robin and equally spread current execution load balancing algorithms with respect to waiting time and turnaround time.

QoS Sufferage is a new task scheduling algorithm presented by E. Ullah Munir [15]. This algorithm considers network bandwidth and assigns tasks based on their bandwidth requirement, as the QoS guided **Min**-**min** does. It achieves a smaller makespan than the Max-**min**, **Min**-**min**, QoS guided **Min**-**min**, and QoS priority grouping algorithms. K. Etminani et al. provided a new algorithm that uses the Max-**min** and **Min**-**min** algorithms, selecting one of the two depending on the standard deviation of the expected completion times of the tasks on each of the resources [16]. Saeed Parsa et al. proposed a new task scheduling algorithm called RASA [2]. It takes advantage of both the Max-**min** and **Min**-**min** algorithms: RASA uses the **Min**-**min** strategy to execute small tasks before large ones, and applies the Max-**min** strategy to avoid delays in the execution of large tasks and to support concurrent execution of large and small tasks.

We have previously noted that the Max-**Min** algorithm outperforms the **Min**-**Min** algorithm. The larger the number of tasks, the more difficult it is to predict the results. When the number of resources is considerably larger than the number of tasks, some resources may be idle and some tasks may not be assigned as intended; in that case, the lost time and management time are almost the same for both algorithms. Our analysis of the final experimental results shows that the Max-**Min** algorithm is better than the **Min**-**Min** algorithm. G. Sharma et al. [11] present pseudocode for each of the algorithms (Sufferage, Max-**min**, **Min**-**min**) and compare them through multiple example scenarios. The first example, concerning the number of individual tasks, shows that Sufferage begins scheduling before Max-**Min** does. The remaining tasks are assigned to resources through one of two strategies: if the first task is assigned to a resource through the Max-**Min** strategy, the next task is assigned by Sufferage. In the next round, tasks are assigned using strategies different from those used in the previous round.

Scheduling is one of the most important problems in cloud computing, and it has a significant effect on cloud computing efficiency. The **min**-**min** algorithm offers effective scheduling for reducing execution time. The **Min**-**Min** algorithm first finds the minimum execution time of all tasks. Then it chooses the task with the least execution time among all the tasks. The algorithm proceeds by assigning that task to the resource that produces the minimum completion time. **Min**-**Min** repeats the same procedure until all tasks are scheduled. Figure 1 illustrates the **Min**-**Min** algorithm [8], [9]. The improved **min**-**min** algorithm, developed by Rajwinder Kaur et al. in 2013, is executed through the following stages. First, the **min**-**min** algorithm is executed, assigning each task T to the resource R with the shortest execution time. Then, resources are arranged based on execution time, the makespan is calculated, and the resource responsible for the makespan is selected. At the next stage, the tasks executed on the resource producing the makespan are identified. Then, the minimum completion time of those tasks and the resources capable of responding are found, and the following check is applied to every single task: if the new completion time of the task is shorter than the makespan and the new completion time of the machine is shorter than the makespan, the task is scheduled on the responding resource. Finally, the ready times of the affected resources are updated.
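
The core **Min**-**Min** loop just described can be sketched as follows (illustrative names, not the authors' implementation):

```python
def min_min(exec_time):
    """Min-Min scheduling.

    exec_time[t][r]: execution time of task t on resource r.
    Returns (schedule, ready): schedule[r] is the ordered task list of
    resource r, ready[r] its accumulated finish time.
    """
    n_tasks = len(exec_time)
    n_res = len(exec_time[0])
    unscheduled = list(range(n_tasks))
    schedule = {r: [] for r in range(n_res)}
    ready = [0.0] * n_res
    while unscheduled:
        # Over all unscheduled tasks and all resources, pick the
        # task/resource pair with the overall minimum completion time.
        t, r = min(
            ((t, r) for t in unscheduled for r in range(n_res)),
            key=lambda tr: ready[tr[1]] + exec_time[tr[0]][tr[1]],
        )
        ready[r] += exec_time[t][r]
        schedule[r].append(t)
        unscheduled.remove(t)
    return schedule, ready
```

With `exec_time = [[3, 5], [4, 4]]`, task 0 goes to resource 0 (completion time 3), after which task 1 completes earlier on resource 1 (4 versus 7), illustrating how the greedy choice spreads load.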

Sabriya Fisher, Aaron Freeman, and Anton Karl Ingason. The camaraderie cultivated among the phonetics labmates, Aletheia Cui, Wei Lai, Yong-cheol Lee, Nari Rhee, Jingjing Tan, Jia Tian, Ting Wang, and Hong Zhang, has continuously encouraged and motivated me during this process. My appreciation also goes to colleagues and friends in the department, including Amy Goodwin Davies, Duna Gylfadottir, Einar Freyr Sigurðsson, Betsy Sneller, and Robert Wilder. I’ve also benefited a great deal from my non-linguist friends and mentors, who have greatly enriched my grad school life in Philadelphia. I would like to acknowledge my Taiwanese fellows, Yi-Lin Chiang, Chia-Yu Chung, Yen Pei Huang, Yih-Chii Hwang, Chih-Yen Huang, Sandy Jan, Ji-Ying Lee, Hua-Min Shen, Eunice Tsao, and Yi-Ju Tseng, and my church friends and families, Yoonjung Byun, Ang Chen, Pastor Enrique Leal, Qianhui Lin, Joe Park, Lani Borgman Shade, and Ruoxin Wang.

Abstract. The paper presents a simple memetic algorithm for the solution of **min**-max problems. It will be shown how some of the heuristics provide mechanisms analogous to those of other evolutionary and non-evolutionary heuristics proposed in the literature. It will also be argued that some existing heuristics might not be sufficient to correctly solve the problem and to avoid the so-called red-queen effect.

After all the nodes in the graph have been merged and only one node is left, we construct the **min**-cut tree using information from the intermediate stages. We move from the last stage to the first, and at each stage we take the two nodes that were merged during that stage and separate the node with the smaller of the two upper-bound values from the other by an arc whose value equals that smaller upper-bound value. Since the two merged nodes are separated in the tree by an arc whose value equals the smaller of the two upper-bound values, the nodes must be considered during the merging process in increasing order of upper-bound values, so that the node with the smaller upper-bound value is merged first whenever possible.
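
As a sketch, the backward pass described above can be written as follows. The merge-log format (one `(node_a, node_b, ub_a, ub_b)` tuple per stage) and all names are assumptions for illustration; the **min**-cut value between two nodes is then the smallest arc on the tree path between them.

```python
from collections import defaultdict, deque

def build_cut_tree(stages):
    """Walk the merge log from the last stage to the first, separating each
    merged pair by an arc valued at the smaller of the two upper bounds."""
    return [(a, b, min(ub_a, ub_b)) for a, b, ub_a, ub_b in reversed(stages)]

def min_cut_value(tree_edges, u, v):
    """Smallest arc value on the tree path from u to v (BFS over the tree)."""
    adj = defaultdict(list)
    for a, b, w in tree_edges:
        adj[a].append((b, w))
        adj[b].append((a, w))
    best = {u: float("inf")}  # minimum arc weight on the path from u
    queue = deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            return best[x]
        for y, w in adj[x]:
            if y not in best:
                best[y] = min(best[x], w)
                queue.append(y)
    return None  # v not reachable (not in the same tree)
```

For a merge log `[(1, 2, 5, 7), (0, 1, 3, 5)]`, the tree arcs are `(0, 1, 3)` and `(1, 2, 5)`, so the approximate 0–2 cut value is 3.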

The fastest algorithm known so far for solving the max-flow problem between two specified vertices has complexity O(V³). Therefore a **Min**-Cut tree can be constructed in O(V⁴) using the algorithm proposed by Gomory and Hu [3]. In this paper we do not use a max-flow subroutine; instead we present an approximation algorithm in which we first calculate an upper bound for each vertex and repeatedly relax it until it becomes the minimum-cut value. This approximation algorithm has a significantly better running time than the fastest existing algorithm and gives surprisingly good results for dense graphs.

Suppose it was possible to merge Nj with some other node Nk such that the upper-bound value of the resulting node (call it val) would have been less than upper-bound(Ni). In this case, the resulting **min**-cut tree will not be correct and will give wrong **min**-cut values for some pairs of nodes. More precisely, it would give the value of the **min** Ni-Nj cut as upper-bound(Ni), whereas the correct value is val. We call such a pair of nodes a wrong pair to merge.

This paper works on a fuzzy **min**-max classifier neural network implementation. The fuzzy **min**-max classifier creates hyperboxes for classification. Each fuzzy-set hyperbox is an n-dimensional region of pattern space defined by a **min** point and a max point with a corresponding membership function. The fuzzy **min**-max algorithm is used to determine the **min**-max points of each hyperbox. The use of a fuzzy-set approach to pattern classification inherently provides degree-of-membership information that is extremely useful in higher-level decision making. A confidence factor is calculated for every FMM hyperbox, and a user-defined threshold value is used to prune the hyperboxes with low confidence factors. The paper also describes the relationships between fuzzy sets and pattern classification. Keywords: fuzzy **min**–max neural network, hyperbox, membership function, pattern classification, pruning.
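
A hyperbox membership function of this kind can be sketched as below. This is a simplified variant of the Simpson-style fuzzy **min**-max membership (not the paper's exact formula); the decay parameter `gamma` and all names are illustrative assumptions.

```python
def hyperbox_membership(x, v, w, gamma=4.0):
    """Membership of pattern x in the hyperbox with min point v and max
    point w. Returns 1.0 inside the box, decaying toward 0.0 with the
    pattern's distance outside the box, at a rate set by gamma."""
    m = 1.0
    for xi, vi, wi in zip(x, v, w):
        # Distance outside the box along this dimension (0 if inside).
        outside = max(vi - xi, xi - wi, 0.0)
        m = min(m, max(0.0, 1.0 - gamma * outside))
    return m
```

For the box with `v = [0.2, 0.2]` and `w = [0.6, 0.6]`, a pattern inside the box has membership 1.0, and membership falls off linearly as the pattern moves outside along any dimension.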

The unanimous and strategy-proof SCFs on the maximal single-peaked domain are known as **min**-max rules (Moulin (1980), Weymark (2011)). **Min**-max rules are quite popular for their desirable properties such as tops-onlyness, the Pareto property, and anonymity (for the subclass of **min**-max rules called median rules). Owing to these desirable properties, Barberà et al. (1999) characterize the maximal domains on which a given **min**-max rule is strategy-proof. Recently, Arribillaga and Massó (2016) provided necessary and sufficient conditions for the comparability of two **min**-max rules in terms of their vulnerability to manipulation. Motivated by the importance of **min**-max rules, we characterize all domains on which (i) every unanimous and strategy-proof social choice function is a **min**-max rule, and (ii) every **min**-max rule is strategy-proof. We call such a domain a **min**-max domain.
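
For reference, a **min**-max rule applied to a preference profile P with peaks τ(P_i) is usually written in a parametric form along the following lines (a sketch of the standard presentation in Moulin (1980) and Weymark (2011); the normalization of the parameters β_S at the extreme coalitions is elided here):

```latex
f(P) \;=\; \min_{\emptyset \neq S \subseteq N}\; \max\Bigl( \{\tau(P_i) : i \in S\} \cup \{\beta_S\} \Bigr),
```

where the parameters β_S are weakly decreasing with respect to set inclusion, i.e., S ⊆ T implies β_T ≤ β_S.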

Vickers hardness measurements were used to find the variation in hardness in the PDZ and chip region for an orthogonally cut sample machined at a cutting speed of 50 m/min and a feed rate of 0.35 mm per revolution. Using a load of 10 g, indentations were taken at or around the points of intersection of an imaginary 100 μm × 75 μm grid, carefully avoiding any second-phase particles. Additional measurements were taken around the grid points to further substantiate the hardness data. Hardness around the shear zone was estimated by placing the indentations very close to the shear cracks that separate two segments of the chips. Fig. 5.45 (a and b) shows the polished section of the chip used for this analysis; as can be noted from it, the shear-zone region contained a considerable area free of fractured second-phase particles, on which indentations were placed for hardness estimation of this zone.

other sphingolipids. Other sphingolipids are also involved in cardiac status regulation. Sphingosine appeared to be a cardioprotectant, and a physiological concentration of 0.4 microM sphingosine was as effective as 5 microM S1P [53]. When used in pre- or post-conditioning ex vivo rat heart models, both compounds provided more than 75% recovery of left ventricular developed pressure during reperfusion and a reduction in infarct size from 45% to less than 8% after 40-min ischemia. Combined application of both compounds enhanced the protective effect against ischemia for up to 90 min, and a combination of S1P, sphingosine, and a ramped ischemic postconditioning regimen gave an additional heart-protective effect [81]. Bielawska et al. [82] were the first to demonstrate in vitro and in vivo that ceramide signaling can be involved in ischemia/reperfusion-induced cardiomyocyte apoptosis. Using a left coronary artery occlusion model in rat hearts, they found ceramide to accumulate as ischemia progressed (155% and 330% of the baseline level after 30 and 210 min of ischemia, respectively). However, reperfusion was needed for the promotion of cardiomyocyte apoptosis [82]. The S1P increase under hypoxia correlated with a decrease in cellular ceramide, which might be explained by sphingomyelinase inhibition. Exogenous ceramide and ceramidase inhibition diminished hypoxia-induced cell growth [83]. In parallel to S1P, sphingosylphosphocholine (SPC) was identified as the plasma and serum factor responsible for activating the inwardly rectifying K+ channel, and it has been

The pH of the aqueous solution is an important parameter in the adsorption process. The effect of pH on heavy metal adsorption was studied by varying the medium pH from 1 to 11, and the results are shown in Table 6. Other parameters such as agitation time, adsorbent dosage, and initial ion concentration were kept constant at 45 min, 1.8 g/100 ml, and 8 ppm, respectively. Fig. 4 shows that the retention of copper by sawdust increases up to pH 6 and then decreases slightly in the range of 8–11. The maximum sorption capacity of 72% was observed at pH 6, due to the interaction of Cu²⁺, Cu(OH)⁺, and Cu(OH)₂ with the surface of the sawdust.

The ability of A-12 to influence cell membrane damage was assessed by changes in LDH release into the myocardial effluent before and after global ischemia (Table 3). During the 5-min period of 140 μM A-12 infusion prior to ischemia, LDH leakage did not differ significantly from that in the control. Therefore A-12 administration did not cause damage to the sarcolemma of nonischemic cardiomyocytes. During the 5-min period after ischemia, the release of LDH activity into the perfusate of the control group increased on average 2.6-fold compared with its pre-ischemic value, indicating I/R membrane damage. However, in the A-12-I group, postischemic LDH leakage was reduced by 40% compared with control. This finding suggests fewer membrane defects causing release of cytoplasmic LDH from the myocardium.

Table 4.5 Time comparison

| Task | Scanner | Robotic GPS |
| --- | --- | --- |
| Set up | 15 min each green | 10 min each green; 15 min initial base setup |
| Collection of data | 5 min per scan | 40 min, dependent |

LU: So if we’re serious about improving birth outcomes and reducing disparities, we’ve got to start taking care of women before pregnancy and not just talking about that one visit th[r]