This system emphasizes economic fairness over economic efficiency. A fair resource distribution is one in which each application obtains a share of system resources proportional to its share of the total system funding; an efficient resource distribution maximizes aggregate utility (social welfare). Spawn uses a second-price auction. Thus, at any given time, there is only one auction open on each machine: bidding for the next available timeslot. This scheme is simple but not combinatorial, i.e., each winning task, whose execution may span multiple timeslots, is guaranteed the next available timeslot but is not guaranteed future timeslots. Therefore, a situation can occur in which a task needs K timeslots to execute but is able to obtain only k < K of them before exhausting its funds. This is a major disadvantage of the second-price single-item auction compared to combinatorial auctions. Furthermore, a winning task has exclusive use of the machine during its timeslot; less demanding tasks may not use their entire reservations, causing low utilization.
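The per-timeslot mechanism described above can be sketched as a plain sealed-bid second-price (Vickrey) auction; the function name and bid values below are illustrative, not taken from Spawn itself:

```python
def second_price_auction(bids):
    """Sealed-bid second-price auction for one machine timeslot.

    bids: dict mapping task id -> bid amount.
    The highest bidder wins the next timeslot but pays only the
    second-highest bid (or 0 if it was the only bidder).
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

# Task A wins the next slot and pays B's bid of 7.
winner, price = second_price_auction({"A": 10, "B": 7, "C": 3})
```

Because only the next timeslot is ever auctioned, a task with K remaining slots must win K such auctions in a row, which is exactly where the non-combinatorial weakness noted above arises.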
Abstract—Virtual Environments (VEs) provide the opportunity to simulate a wide range of applications, from training to entertainment, in a safe and controlled manner. For applications that require realistic representations of real-world environments, the VEs need to provide multiple, physically accurate sensory stimuli. However, simulating all the senses that comprise the human sensory system (HSS) is a task that requires significant computational resources. Since it is intractable to deliver all senses at the highest quality, we propose a resource distribution scheme to achieve an optimal perceptual experience within a given computational budget. This paper investigates resource balancing for multi-modal scenarios composed of aural, visual and olfactory stimuli. Three experimental studies were conducted. The first experiment identified perceptual boundaries for olfactory computation. In the second experiment, participants (N = 25) were asked, across a fixed number of budgets (M = 5), to identify what they perceived to be the best visual, acoustic and olfactory stimulus quality for a given computational budget. Results demonstrate that participants tend to prioritize visual quality over the other sensory stimuli. However, as the budget increases, users prefer a balanced distribution of resources, with an increased preference for smell stimuli in the VE. Based on the collected data, a quality prediction model is proposed and its accuracy is validated against previously unused budgets and an untested scenario in a third and final experiment.
are conducted. We compare our bi-level resource allocation algorithm with IEEE 802.11e and with IEEE 802.11 DCF using a Round-Robin scheduling algorithm (hereinafter referred to as Round-Robin). The IEEE 802.11e protocol uses a high-priority access category (AC2) to serve video traffic, but provides only one queue for the multiple video streams and schedules the video packets with a FIFO algorithm. IEEE 802.11 DCF uses the same access parameters for both the AP and the background traffic users. Round-Robin is a simple and fair MAC-layer scheduling algorithm that serves the video queues in turn. The video traces used in the simulations are obtained from , and are encoded with the H.264 standard. We choose an encoding quantization parameter of 48, which means the video traces are encoded at very low average bit rates (24.095 to 31.993 Kbps). In the WLAN scenario there are 15 video users, each of which randomly selects one video file on the video server to download and play back. The number of background traffic users increases from 5 to 23 in steps of 2. We set the buffer-level target B_target to 5 s and the α_i (1 ≤ i ≤ n)
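The Round-Robin baseline, which serves the per-user video queues in turn, can be sketched as follows; the data structures are illustrative and not taken from the simulator:

```python
from collections import deque

def round_robin_schedule(queues, slots):
    """Serve per-user video queues in turn, one packet per visit.

    queues: dict user -> deque of packets.
    slots: number of transmission opportunities.
    Returns the (user, packet) pairs in the order they were served.
    """
    order = deque(queues)   # cyclic visiting order over the users
    served = []
    for _ in range(slots):
        for _ in range(len(order)):          # find the next non-empty queue
            user = order[0]
            order.rotate(-1)                 # move this user to the back
            if queues[user]:
                served.append((user, queues[user].popleft()))
                break
        else:
            break                            # every queue is empty
    return served
```

Users with empty queues are simply skipped, so the baseline is work-conserving but oblivious to per-stream buffer levels, which is the limitation the bi-level algorithm targets.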
ABSTRACT: Media streaming applications have assumed importance as they render multimedia services to subscribers. They have become popular of late with innovative technologies like cloud computing; the scalability and availability of cloud data centres have made these applications prosper. They are closely associated with Video on Demand (VoD) and have huge numbers of subscribers across the globe, with subscriptions increasing exponentially. This poses a resource allocation problem for cloud-based media streaming providers. When more resources are allocated, the streaming quality is good but resources are wasted; when fewer resources are allocated, quality may deteriorate, and subscribers may even be denied service during a sudden, unprecedented increase in subscriber numbers. The existing reservation-based resource allocation needs to be improved for efficiency. In this paper we propose and implement a time-series-based forecast algorithm that produces accurate predictions to help service providers. The algorithm helps optimize resource allocation to media streaming applications in the cloud. We built a prototype application that demonstrates the proof of concept. The empirical results reveal that the forecast is useful for efficient cloud resource allocation. Index Terms: Cloud computing, media streaming applications, time series analysis, and resource allocation
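The forecast-driven allocation idea can be illustrated with a deliberately simple time-series method; the paper's actual algorithm is not reproduced here, so exponential smoothing and the headroom factor below are stand-in assumptions:

```python
import math

def exp_smooth_forecast(demand, alpha=0.5):
    """One-step-ahead forecast of subscriber demand via exponential smoothing."""
    level = demand[0]
    for d in demand[1:]:
        level = alpha * d + (1 - alpha) * level
    return level

def provision(forecast, capacity_per_vm, headroom=1.2):
    """VMs to reserve for the forecast demand, with 20% headroom by default."""
    return math.ceil(forecast * headroom / capacity_per_vm)

# With ~100 concurrent streams forecast and 50 streams per VM,
# the provider reserves 3 VMs instead of a fixed worst-case pool.
vms = provision(exp_smooth_forecast([90, 105, 100]), 50)
```

Reserving against the forecast rather than a static peak is what avoids both the over-allocation waste and the under-allocation quality loss described above.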
In the future, we plan to: 1) inspect different constraint agreement approaches to address scalability issues present in large systems; 2) offer deployment hints that effectively handle the deployment of virtual infrastructures in the context of real large cloud installations; and 3) investigate the mentioned advance in a resource distribution setting. The conducted experiments demonstrate that in critical situations the estimated characteristics can help the service provider decide which users should be served and which can be discarded. This can raise user satisfaction as much as possible, leading to greater user loyalty and higher profit for the service provider.
Servers are one of the most power-hungry elements in data centers, with the CPU and memory as their main power consumers. The average power consumption of the CPU and memory is reported to be 33% and 23% of the server's total power consumption, respectively. Therefore, any improvement in processor- and memory-level power consumption directly reduces the total power consumption of the server, which in turn improves the energy efficiency of the data center. Dynamic voltage and frequency scaling (DVFS) is an effective system-level technique used for both memory and CPU in bare-metal environments, and it has been demonstrated to improve the power consumption of these two elements considerably. DVFS enables dynamic power management by varying the supply voltage or the operating frequency of the processor and/or memory.
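The savings DVFS can achieve follow from the standard switched-capacitance model of dynamic power, P = C·V²·f; the voltage/frequency operating points below are illustrative, not measurements:

```python
def dynamic_power(c_eff, voltage, freq):
    """Dynamic power under the switched-capacitance model: P = C * V^2 * f."""
    return c_eff * voltage ** 2 * freq

# Illustrative (not measured) operating points: scaling a core
# from 2.0 GHz at 1.2 V down to 1.0 GHz at 0.9 V.
p_high = dynamic_power(1.0, 1.2, 2.0)
p_low = dynamic_power(1.0, 0.9, 1.0)
saving = 1 - p_low / p_high   # ~72% of dynamic power saved
```

Because voltage enters quadratically, lowering frequency together with voltage saves far more than the proportional loss in speed, which is why DVFS is effective during low-utilization periods.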
Bursty workloads are often found in multi-tier architectures, large storage systems, and grid services , , . Internet flash-crowds and traffic surges are familiar examples of bursty traffic, where bursts of requests are aggressively clustered together during short periods and thus create spikes with extremely high arrival rates. We argue that the presence of burstiness can cause load imbalance in clouds and consequently degrade overall system performance. In cloud systems, many applications are no longer single-program, single-execution applications. These applications involve a large number of concurrent and dependent jobs, which can be executed either in parallel or sequentially. Simultaneously launching jobs from different applications during a short time period can immediately cause a significant arrival peak, which further aggravates resource competition and load imbalance among computing sites. Also, as the number of these applications has increased significantly in recent years, Internet flash-crowds and traffic surges have become more frequent. As a result, how to counteract burstiness and maintain high quality of service and system availability has become critically important, but also challenging, in clouds. However, conventional methods unfortunately neglect bursty arrivals and cannot capture the impact of burstiness on system performance.
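One common way to quantify the burstiness discussed above is the index of dispersion of per-interval arrival counts (close to 1 for Poisson traffic, much larger for bursty traffic); this is a standard illustrative metric, not necessarily the one used by the cited works:

```python
def index_of_dispersion(counts):
    """Index of dispersion I = variance / mean of per-interval arrival counts.

    I ~ 1 for Poisson arrivals; I >> 1 indicates bursty traffic where
    requests cluster into a few intervals instead of spreading evenly.
    """
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean

smooth = index_of_dispersion([10, 10, 10, 10, 10, 10])   # 0.0: perfectly even
bursty = index_of_dispersion([60, 0, 0, 0, 0, 0])        # 50.0: one big spike
```

Both traces carry the same total load (60 requests over 6 intervals), yet the second concentrates it into one spike, which is precisely the arrival-peak effect that aggravates load imbalance.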
Why is the collaborative aspect important? There are many reasons why group exploration enhances the way we use geospatial imagery. Earth observation images provide a complex picture of our world. Interpreting them can be difficult and time consuming, and usually requires specialist knowledge. In this context there is considerable value in having multiple analysts develop a collective interpretation of an image. Furthermore, when images are being used as part of a decision-making process, it is clearly preferable if all stakeholders can view the images at the same time, sharing observations and forming a consensus. For example, in a military context it may be valuable for officers in the field and trained image analysts to participate when command staff use reconnaissance data to make tactical decisions. There is also an economic argument in favour of collaboration. Accessing and processing large datasets incurs considerable costs in terms of network and computational resource consumption. These costs can be shared when more than one user views a dataset. Finally, there are technical arguments for considering collaboration in this dissertation. Firstly, the information shared for collaboration can also be used to improve application performance. Secondly, collaboration is not easily retrofitted to existing solutions and should be built in from the start. So collaborative data sharing is a fundamental requirement.
Successful formation and long-term stability of a cooperative venture are often linked to the perceived fairness of the associated cost or resource allocation. Whether a venture is a simple business partnership or a global collaboration, such as that which led to the Kyoto protocol, its effectiveness can be hampered by the lack of a consensus view on what basis should be used for gauging an allocation’s “fairness.” Consider, for instance, the classic airport problem in which multiple airlines share a common landing strip (Littlechild and Owen, 1973). Aside from the issue of how costs should be apportioned amongst the set of players is the more fundamental question of what represents the relevant basis over which principles of equity should be applied. Should concern be focused on the distribution of costs across the set of airlines, the set of flights, the set of passengers, the set of revenues, or perhaps some other basis? Although these various bases are intertwined, each offers a different perspective on notions of fair treatment. This multiplicity of logically compelling fairness bases is a feature that is common to many practical cost-sharing applications. For example, participants of international initiatives to mitigate global climate change must agree whether the burden of reducing greenhouse gases should be distributed according to a per capita, per unit of GDP, per unit of wealth, or some other basis (Ashton and Wang, 2003). What then leads to the selection of one basis over another in practice? Is this choice essentially arbitrary or can parameters of the cooperative environment predict the fairness basis that is embraced? Moreover, if such explanatory power exists, is it consistent with theoretical principles of collective behavior? This paper aims to shed light on these puzzles.
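For the airport problem cited above, the Shapley value of Littlechild and Owen (1973) gives one concrete fairness basis over airlines: each incremental segment of runway cost is split equally among the airlines that need it. A minimal sketch (the airline names and costs are illustrative):

```python
def airport_shapley(costs):
    """Shapley cost shares for the airport game (Littlechild & Owen, 1973).

    costs: dict airline -> cost of the runway length that airline needs.
    Each incremental cost segment is split equally among all airlines
    whose required runway includes that segment.
    """
    shares = {a: 0.0 for a in costs}
    ordered = sorted(costs, key=costs.get)   # shortest requirement first
    prev = 0.0
    for k, a in enumerate(ordered):
        segment = costs[a] - prev            # cost added beyond the previous need
        users = ordered[k:]                  # airlines still needing this segment
        for u in users:
            shares[u] += segment / len(users)
        prev = costs[a]
    return shares

# Three airlines needing runways costing 3, 6, and 9:
# A pays 1, B pays 1 + 1.5 = 2.5, C pays 1 + 1.5 + 3 = 5.5 (total 9).
shares = airport_shapley({"A": 3, "B": 6, "C": 9})
```

Note that this is fairness over the *airline* basis only; applying the same principle over flights or passengers, as the paragraph above discusses, would yield different shares.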
Cloud computing has emerged as a promising approach to rent a large IT infrastructure on a short-term, pay-per-usage basis. Operators of so-called Infrastructure-as-a-Service (IaaS) clouds, like Amazon EC2, let their customers allocate, access, and control a set of virtual machines (VMs) which run inside their data centers, charging them only for the period of time the machines are allocated. The proposed system, Nephele, is a new processing framework explicitly designed for cloud environments.
Video traffic is experiencing tremendous growth, fueled by the proliferation of online video content and the steady expansion of transmission bandwidths. The amount of video traffic is forecast to double annually over the next several years, and is expected to account for the dominant share of wireline as well as wireless Internet traffic soon. The hardware and technology available to consumers and service providers today allow for advanced multimedia services over IP-based networks. Hence, the popularity of video and audio streaming services, such as Video-on-Demand (VoD), advanced online gaming, and video chat and conferencing, is increasing, and the demand for resource efficiency and robustness in the network follows. The current commercially available data transfer technology for streaming media does not adjust well to the best-effort, heterogeneous Internet architecture, and QoS demands are impossible to guarantee. A central
Data centers have lately gained importance as cost-effective platforms for hosting large-scale service applications. However, large data centers are expensive to operate, incurring enormous energy costs for power delivery and cooling. For example, it has been reported that power-related expenses account for roughly 12 percent of overall data center expenditure . For large companies like Google, a 3 percent reduction in energy cost can translate into over a million dollars in savings . Recently, there has been substantial investment in improving data center energy efficiency. The goal of this line of work is to dynamically adjust data center capacity so as to reduce energy consumption while still meeting the service level objectives (SLOs) of the workload; this is the problem of dynamic capacity provisioning (DCP) and job scheduling in data centers . Scheduling delay is a main concern in data center environments for several reasons: (a) a customer may need to scale up a request immediately to meet a surge in demand and therefore requires the requested resources as early as possible; (b) even for lower-priority requests, prolonged scheduling delay can lead to starvation. Production data centers receive a huge number of diverse resource requests with differing resource requirements, durations, priorities, and performance objectives. In particular, resource requirements and durations have been reported to vary by several orders of magnitude. Even so, designing a heterogeneity-aware DCP strategy is challenging because it requires an accurate characterization of both workload and machine heterogeneities .
We propose Harmony, a Heterogeneity-Aware Resource MONitoring and management system capable of performing DCP in heterogeneous data centers. We provide a theoretical bound on the size of each task class that achieves an efficient tradeoff between scheduling delay and energy consumption, and we evaluate the effect of resource over-provisioning on solution quality. The DCP framework aims to achieve both high performance and energy savings . We also propose an algorithm for reducing risks from attackers in order to reduce delay and machine allocation time.
example, file transfer does not have a real-time constraint; audio/video streaming, on the other hand, does. The difference between the sensitivities of the human aural and visual systems indicates that audio and video should be handled differently when adverse conditions arise and affect the playback of media streams. It is known that the aural sense is more sensitive to disturbances than the visual one. Therefore, it is appropriate to assign higher priority to audio than to video: if data must be discarded when congestion occurs in the network, it is preferable to discard the video data first. For the video data, some applications may also use receiver information, e.g., delete backgrounds or transmit only high-priority objects when there is not enough bandwidth available, to improve the subjective quality. Other applications may transmit only the important layers, such as the base layer and lower enhancement layers, when bandwidth is limited. Therefore, selectively protecting the scene content according to the priority of its relevance and its application is very useful and important for the final subjective impact.
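The audio-over-video priority rule can be sketched as a simple congestion-time packet filter; the packet representation and capacity units below are assumptions for illustration:

```python
def drop_for_congestion(packets, capacity):
    """Keep higher-priority media packets first when capacity is exceeded.

    packets: list of (stream, size) tuples, stream being "audio" or "video".
    Audio outranks video, per the perceptual argument above; within a
    class, earlier packets are kept first (sorting is stable).
    """
    priority = {"audio": 0, "video": 1}
    kept, used = [], 0
    for stream, size in sorted(packets, key=lambda p: priority[p[0]]):
        if used + size <= capacity:
            kept.append((stream, size))
            used += size
    return kept

# With capacity for 5 units, all audio survives and the larger
# video packet is the one dropped.
kept = drop_for_congestion(
    [("video", 3), ("audio", 2), ("video", 2), ("audio", 1)], capacity=5)
```

A layered codec would refine this further by ranking a base layer above enhancement layers within the video class, matching the layer-selection strategy described above.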
The virtual network embedding problem across multiple domains has been considered in , where it was proposed to use iterated local search (ILS) to partition the virtual request. For this problem, ILS starts with a random clustering, after which a sequence of solutions is generated by randomly remapping some of the nodes to other clusters. Of these solutions, the one that improves most upon the current solution is kept, and the algorithm iterates until a stopping criterion is met. Despite the simplicity of this method, it is hard to guarantee the quality of the solution within a limited time. In a related study, a general procedure for resource allocation in distributed clouds was presented in . The objective was to select the data centers, racks, and processors with the minimum delay and communication costs, and then to partition the virtual nodes by mapping them onto the selected data centers and processors.
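The ILS-style partitioning described above can be sketched as follows. The cost function is left abstract, and this minimal first-improvement variant (keep a random remapping only if it improves the current solution) is a hypothetical sketch, not the cited authors' implementation:

```python
import random

def iterated_local_search(nodes, clusters, cost, iters=100, seed=0):
    """ILS-style sketch for partitioning virtual nodes into clusters.

    Starts from a random assignment, then repeatedly remaps one random
    node to a random cluster, keeping the change only if the caller's
    `cost` function (objective over {node: cluster} maps) improves.
    """
    rng = random.Random(seed)
    assign = {n: rng.randrange(clusters) for n in nodes}
    best = cost(assign)
    for _ in range(iters):
        n = rng.choice(nodes)
        old = assign[n]
        assign[n] = rng.randrange(clusters)
        c = cost(assign)
        if c < best:
            best = c                 # accept the improving remapping
        else:
            assign[n] = old          # revert the non-improving move
    return assign, best
```

As the paragraph notes, such a scheme is easy to implement but offers no guarantee on solution quality within a bounded number of iterations.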
Second, in inventory problems, the problem dynamics are simpler. The marginal holding and backorder cost can be stated in terms of the current state and exogenous future random variables, using closed-form expressions. In our problem, the incremental costs and benefits depend more intricately on the sequence of exogenous random variables that are realized in the future, including resource availability and resource constraints, reneging, and emergency arrivals. Thus, there is no easy way to express these costs and benefits. We must define them implicitly, in terms of the costs incurred by certain limit policies. The decisions of a limit policy are predefined, so that the expected cost of the entire policy can be calculated efficiently on every sample path. Because of this implicit definition, the analysis of these policies and the proof of the bounds are considerably more technical. The advantage of working with limit policies, however, is that they are a very rich class of policies and they provide a general method to generate potential bounds. We believe that this method of generating bounds will find application in many other settings.
Anticipatory optimization techniques are motivated by a series of seminal papers, such as [23, 26], which discuss the predictability of human mobility patterns and the link between mobility and communication. Shafiq et al.  studied mobile network traffic and its spatio-temporal correlation with mobility patterns. Similarly, Ahmed et al.  studied network user habits in terms of content: the study links content requests and user categories, aiming at their prediction. The predictability of network capacity and the achievable rate of mobile users have been extensively studied in the literature. These studies range from short-term prediction using filtering techniques [24, 25] to medium- and long-term forecasting solutions [12, 20] accounting for position and trajectory estimates. We contributed to the literature with a general model  for predicted rates in mobile networks accounting for prediction uncertainties, and we use the model to devise single-user optimal resource allocation policies . As concerns the state of the art on prediction-based network optimization, in what follows we review a few of the papers that are most closely related to our current work.
In this paper we address resource allocation for the wireless downlink of a cellular network when future knowledge of the achievable data rates is available. To simplify notation, we consider a system with a single base station to which all K users connect. We call the set of users U, our prediction horizon is T time units, and we refer to the set of time slots as T. In the following we consider a unitary time unit t = 1, so that data rates and download sizes can be used interchangeably. In the rest of the paper we use the following assumptions: 1) the future knowledge is perfect (this does not hold in practice, but the problem solution can be updated periodically; the present results can be considered an upper bound for real scenarios); 2) the average video bitrate is continuous between 0 and q_M (e.g.: by combining
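Under assumption 1 (perfect future knowledge), the benefit of lookahead can be illustrated for a single user by serving the demand in the highest-rate slots first. This greedy sketch shows why prediction helps, but it is not the paper's allocation algorithm; rates are in data units per slot under the unitary time unit t = 1:

```python
def prefetch_schedule(pred_rates, demand):
    """Single-user lookahead sketch: given perfectly predicted per-slot
    rates and a total download demand, fill the demand in the
    highest-rate slots first. Returns (plan, unserved): plan maps
    slot -> amount downloaded; unserved > 0 means the demand does not
    fit in the prediction horizon."""
    order = sorted(range(len(pred_rates)), key=lambda t: -pred_rates[t])
    plan, left = {}, demand
    for t in order:
        if left <= 0:
            break
        take = min(pred_rates[t], left)   # a slot can carry at most its rate
        plan[t] = take
        left -= take
    return plan, left

# Rates 2, 5, 3 over three slots and a demand of 6: download 5 units
# in the best slot (t=1) and 1 in the next best (t=2), skipping t=0.
plan, unserved = prefetch_schedule([2, 5, 3], demand=6)
```

A streaming-on-demand policy without lookahead would instead spend energy and airtime in the poor slot t=0, which is the inefficiency the anticipatory formulation removes.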
uninterrupted video streaming, devising a single-user optimal resource allocation algorithm. ,  devised solutions using a Mixed Integer Linear Program to maximize throughput and to minimize energy consumption, respectively. Finally,  among a few others relaxed the assumption of perfect knowledge by accounting for prediction reliability and errors. Following the same direction, we recently presented a general model for mobile user capacity prediction .