In this study, we investigate the value of information sharing in serial service operations. When services are offered sequentially in two stages, the demand information from the previous period can be used to assign servers at each stage. To explore the value of information shared between the two stages, this study compares an information-based policy with a basic policy for capacity allocation in serial service operations. Among several candidates for an information-based policy, we chose an assignment rule in which the number of servers at the second stage is determined by the number of customers served at the first stage in the previous period. The basic policy, by contrast, assigns the same constant number of servers to both stages in every period. Assuming independent and identically distributed Normal demands with various parameters, we conducted computational experiments to compute the cost savings from using the information-based policy instead of the basic policy. The cost of each policy comprises labor cost and waiting cost. The results show that the cost savings from information sharing are relatively low, and that the value of information sharing increases with demand variability and with unit waiting cost. These results offer managerial insights into capacity allocation in serial service operations.
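The comparison described above can be sketched in a small Monte Carlo simulation. Everything below (demand parameters, cost rates, and the one-customer-per-server-per-period service model) is an illustrative assumption, not the study's actual experimental setup:

```python
import random

def simulate(policy, periods=10_000, mean=45.0, sd=10.0,
             labor_cost=1.0, wait_cost=0.5, base_servers=50, seed=0):
    """Two-stage serial service system under a given staffing policy.

    Crude service model: each server finishes one customer per period;
    unserved customers wait and carry over to the next period.
      'basic' -> both stages always use base_servers
      'info'  -> stage-2 servers = customers served at stage 1 in the
                 previous period (the shared information)
    """
    rng = random.Random(seed)
    q1 = q2 = 0                      # queue lengths at the two stages
    served1_prev = base_servers      # initial guess for the info policy
    total_cost = 0.0
    for _ in range(periods):
        demand = max(0, round(rng.gauss(mean, sd)))
        s1 = base_servers
        s2 = base_servers if policy == 'basic' else served1_prev
        q1 += demand
        served1 = min(q1, s1)
        q1 -= served1
        q2 += served1
        served2 = min(q2, s2)
        q2 -= served2
        served1_prev = served1
        total_cost += labor_cost * (s1 + s2) + wait_cost * (q1 + q2)
    return total_cost / periods      # average cost per period

basic = simulate('basic')
info = simulate('info')
print(f"basic: {basic:.1f}  info: {info:.1f}  saving: {basic - info:.1f}")
```

Under this toy model, the information-based policy trades a little extra waiting at stage 2 for a lower labor bill whenever demand falls below the fixed staffing level.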
In this paper, we provide a capacity allocation algorithm for operators that own their optical networks. The model supports the decision of admitting the maximum number of new VPNs without affecting the QoS guarantees of the existing VPNs, and our objective is to maximize total revenue.
Nevertheless, one important issue with such a practice is the high variability in trucking capacity that results from fluctuations in the number of shipments requested by manufacturers and from the failure of the two parties to coordinate delivery planning. As a result, a carrier struggles between two extreme cases. At one extreme, it may allocate too many trucks, thereby under-utilizing its trucking capacity or missing opportunities to earn revenue from other shipments. At the other, it may have insufficient trucks, thereby incurring a contract penalty or hiring another carrier to fulfill the request. The economic trade-off between the opportunity cost of utilizing allocated trucks and the contract penalty resembles the Newsvendor problem, an inventory control model with stochastic demand, since trucking capacity can be viewed as a perishable, time-sensitive item. This article aims to explore this trade-off and to illustrate an economic model using transportation data from a manufacturer and a carrier. Before discussing the related literature and formally stating the problem, it is important to understand how a manufacturer selects a suitable carrier.
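The critical-fractile solution of the Newsvendor problem mentioned above can be sketched as follows; the demand parameters and cost figures are hypothetical, chosen only to illustrate the trade-off:

```python
from statistics import NormalDist

def optimal_trucks(mean, sd, underage, overage):
    """Newsvendor-style truck allocation under Normal(mean, sd) demand.

    underage: per-truck cost of allocating too few (contract penalty,
              or the cost of hiring a spot carrier)
    overage:  per-truck cost of allocating too many (idle capacity /
              lost revenue from other shipments)
    The optimum is the critical-fractile quantile F^-1(cu / (cu + co)).
    """
    critical_fractile = underage / (underage + overage)
    return NormalDist(mean, sd).inv_cdf(critical_fractile)

# Hypothetical figures: daily shipment demand ~ N(40, 8) trucks,
# 300 per missing truck vs. 100 per idle truck.
q = optimal_trucks(40, 8, underage=300, overage=100)
print(round(q))  # → 45: allocate above the mean, since underage dominates
```

The higher the contract penalty relative to the idle-truck cost, the further above mean demand the carrier should allocate.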
The Gaussian Approximation algorithm is the easiest to implement and is suitable for setting an upper bound, since it does not consider buffer size. The Courcoubetis Effective Bandwidth Allocation is another easy and promising algorithm, since it also takes the buffer into account. Neither of these two algorithms, however, is designed with long-range dependent traffic in mind. The Norros effective bandwidth and DRDWM algorithms are the only ones incorporating the Hurst parameter, and therefore address long-range dependent traffic. Although DRDWM is designed to alleviate the numerical overflows in the direct effective bandwidth allocation, that problem cannot be completely eliminated due to the structure of (1). Therefore, we picked the following three algorithms for further simulation analysis:
Unlike observable parameters such as the mean and variance, the space parameter s cannot be directly estimated from measurements. The space parameter is calculated using Large Deviations Theory (LDT) and the large buffer assumption (LBA). LDT deals with rare-event probabilities and applies naturally to the effective bandwidth problem, since the loss probability constraints to be satisfied are very small. The loss probability in a buffer of size B is approximated by the probability that the queue content exceeds threshold B in an infinite (or large) buffer (LBA). In large deviations analysis, the overflow probability is calculated from an asymptotically exponential decrease assumption (2.2), where s is the space parameter, a function of the server capacity C.
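Under the large buffer assumption, the exponential-decrease relation P(Q > B) ≈ exp(−sB) can be inverted to obtain s from a target loss probability. A minimal sketch (the numbers are illustrative; the text's own calculation of s via LDT may be more involved):

```python
import math

def space_parameter(loss_target, buffer_size):
    """Space parameter s from the large-buffer asymptotic
    P(Q > B) ≈ exp(-s * B), inverted as s = -ln(eps) / B."""
    return -math.log(loss_target) / buffer_size

# e.g. a loss-probability target of 1e-6 with a buffer of 1000 cells:
s = space_parameter(1e-6, 1000)
print(f"{s:.4f}")  # → 0.0138 (per cell)
```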
To adjust the capacity allocated to connections in a flexible way, the network must be able to change the size of the containers dynamically in a hitless manner, i.e. without affecting the service. The resizing of ODUflex containers can be accomplished using the Hitless Adjustment of ODUflex (HAO) protocol, while the members of a VCG can be added or removed through the Link Capacity Adjustment Scheme (LCAS). Both techniques have their own advantages and drawbacks. ODUflex is easier to implement and manage than VCAT and, since each signal is transported as a single entity, it does not require the differential delay compensation that the second technique needs. However, resizing operations are more complex for ODUflex paths than for VCAT ones, since they require the participation of all nodes in the path, whereas in VCAT only the ingress and egress nodes take part in the operation. Furthermore, when multipath routing is provided, the scheme based on Virtual Concatenation makes it possible to implement traffic engineering techniques, such as load balancing, guaranteeing more efficient use of network resources. In addition, LCAS can be employed for resilience purposes: it can automatically remove disrupted VCG members in the presence of link failures, ensuring that an unprotected ODU connection continues operating, albeit at a lower capacity; it can also be employed to activate backup VCG members of protected connections whenever necessary. The first scheme is particularly useful in data communications with unprotected connections, where a connection working at a lower bit rate with “degraded service” is preferable to no connection at all. The second scheme relies on backup VCG members, which are set up in advance over paths that are link-disjoint from the working ones, to protect the working members.
Literature reviews
A large number of articles has been written about operating room planning and scheduling. To obtain an understanding of the research conducted, we consulted three recent literature reviews that encompass operating room planning and scheduling, all published in the past two years. Each set of authors chooses a different method to structure their article. Cardoen et al. organise the literature using six descriptive fields: patient characteristics, performance measures, decision delineation, research methodology, uncertainty and applicability of the research. Cardoen et al. conclude with directions for further research. They emphasise the need for more research on the scheduling of non-elective patient types, the incorporation of uncertainty and stochasticity, and a better integration of operating room planning with downstream facilities and resources. However, they realise that this last recommendation widens the scope of the problem setting, making it more difficult to obtain reasonably fast results that are likely to be general. Guerriero and Guido classify the reviewed articles in terms of strategic, tactical and operational decision levels. Guerriero and Guido conclude with the following five objectives that operations research techniques pursue within operating theatres: increased patient throughput; increased satisfaction of patients, surgeons and staff; increased utilisation of OR resources; reduction of cancellations; and reduction of time lost to late starts and changeovers. The third review studied is by May et al., which categorises the reviewed articles by the planning horizon and the domain of the problem studied. May et al. distinguish six different planning horizons, ranging from very long term (12-60 months) to contemporaneous (on the same day).
Furthermore, they distinguish six different domain areas: capacity planning, process re-engineering, surgical services portfolio, procedure estimation, schedule construction, and schedule execution monitoring and control. May et al. conclude that the economic and project management aspects of the surgical scheduling process might be the most promising lines of research in the foreseeable future. They mention that many interesting models have been proposed, but that none appear to have had widespread impact on the actual practice of surgery scheduling.
Solving the congestion problem and maintaining the system within its security limits is a challenge for the independent system operator (ISO). Because players look for the cheapest electricity, no matter where it is located, large and hard-to-predict unscheduled power flows often appear in the interconnected network. Consequently, it comes as no surprise that international interconnections very often become congested, meaning that the transmission system cannot be operated securely under the requested pattern of generation and demand. Congestion can be mitigated by rescheduling the generators and simultaneously curtailing the electrical load. Various capacity allocation methods are used for congestion management, such as Pro-Rata rationing, Priority-Based Rules, Transfer Capacity Based Auctioning (explicit and implicit auctions), Market Splitting and Market Coupling. The increased number of hours in which market splitting has been observed since the inception of the power exchanges in 2008 clearly indicates the need for investment in the transmission infrastructure. Market Splitting is a better method for congestion management. In India, the total volume of electricity that could not be cleared due to congestion was 3143 MU, which accounted for about 10% of the unconstrained cleared volume of both power exchanges. A declining trend in congestion has been observed after the integration of the grid.
While previously considered primarily a tool of service operations, revenue management has considerable potential for manufacturing operations. MTO manufacturing firms share the environmental characteristics of companies in which yield management practices have been successfully employed, such as fixed capacity, perishable resources and uncertain demand (Barut and Sridharan). However, prior work on applying the revenue management concept to short-run capacity management in MTO manufacturing environments is limited. Sridharan provides a comprehensive contrast between the capacity allocation problem in manufacturing and the perishable asset revenue management (PARM) problem well developed in the service operations literature. Citing the example of the high-fashion apparel industry, Balakrishnan, Sridharan and Patterson propose a single-period rationing model when demand is stochastic. Focusing on the short-term capacity allocation problem faced by a class of make-to-order manufacturing firms
As illustrated in past research and studies by Trafikverket and Nash et al., differences between countries’ internalization degrees and differences between transport modes both tend to distort competition and create socioeconomic inefficiency. The improvement proposals offered by the interviewed respondents could influence long-term predictability and transparency among regulators, capacity evolution and capacity allocation. Many of the respondents emphasized current shortcomings related to the condition and maintenance of transport infrastructure. Furthermore, the current fee structure is often viewed as complex and difficult to grasp by many industry players. These respondents also pointed out that differences in freight fee levels between transport modes could distort competition. Proposals that emerged in the study are successive capacity allocation for railway capacity, public-private partnerships (PPP) for infrastructure investments, and a more quality-based and differentiated fee system. The bottom line is that more knowledge about transport policy, fee systems, and the external costs of transport is required for a better differentiated fee system and a higher degree of internalization of external costs.
As future work, we first intend to extend these mechanisms to deal with iterated allocations (i.e., ones in which new demand continuously appears), since in several of the cases we consider it is conceivable that agents can observe and learn about the behaviors of other agents in the system. Our centralized mechanism would still work in such situations if we consider myopic agents (i.e., agents that cannot strategize over more than one round of allocation), since such agents will not strategize across rounds. However, this assumption might be too restrictive in some settings. We also wish to further investigate the link between task allocation protocols that are efficient and those that are robust (i.e., protocols in which agents are highly likely to fulfill their assigned task despite being uncertain about their capabilities when revealing their type). This link has been revealed here via the penalty scheme, and the connection of the penalty scheme to a trust-based scheme has been discussed. However, a deeper study is required to formally establish the consequences of requiring robust mechanisms for the efficiency of the resulting mechanism. We believe that the hybrid approach combining trust and penalties would be a very interesting field to pursue. Finally, we aim to develop more sophisticated strategies for the decentralized mechanism in order to enhance the efficiency of the system, while ensuring that these sophisticated strategies derive higher profit than their simpler counterparts. This has been shown to be achievable in simple CDAs, and we believe it is also achievable in our modified CDA protocol. Such developments will enable us to more effectively find the set of agents who can perform the required task at the lowest cost (i.e., the efficiency will be increased).
In a global manufacturing enterprise, there are plants that each produce multiple parts and multiple assemblies serving multiple assembly plants in a year; alternatively, each assembly plant demands multiple parts from many different suppliers. Hence, such a global manufacturing enterprise can be formulated as a combined production-distribution network consisting of multiple suppliers and multiple destinations. In this paper, we consider a production-distribution network composed of a single manufacturer with multiple plants and multiple retailers. The retailers have a given annual demand for the product. To meet this annual demand, the manufacturer procures the materials, and its plants produce within their capacity. Each of the manufacturer's plants has its own production rate. The finished products are transferred to the common warehouse at the plants' transfer rate. Finally, the warehouse delivers ordered lots of a fixed size to the retailers periodically. The network is shown in Figure 1. The cost components considered include two parts; the first part is the ordering cost of raw materials, the pro-
These questions are answered in Chapter 5, where the results of the models and scenarios formulated in Chapter 4 are analyzed and discussed. First, we want to know what the optimal assignment would be. In addition, we want to investigate the impact of adding capacity at the optimal location and of decreased transportation time. Lastly, we want to know the best we could (hypothetically) do with the available nationwide capacity with regard to transportation time. In Chapter 6 we conclude our research and answer the main research question. In addition, we give our recommendations to the Neonatal Care Network, discuss the limitations of this research, and give suggestions for further research. While these sub-questions are answered chronologically in this report, the research process itself is not linear. Figure 1.1 shows the main research activity in each chapter and how these activities are related. Since formulating, verifying, and validating a model is iterative, it might require taking a step backwards in the chain. For example, a model formulation might require different data than initially obtained.
The most pressing issue and challenge in the current operation is the capacity of the transito flow: the cross-dock operation between the national and the regional DCs. There are frequently too few staging lanes to unload the shuttle rides between the sites, which again results in waiting time. Because the goods in the truck are meant to be combined with the other store orders produced in the regional DC, there is some time pressure. When the transito truck is unloaded too late, it is no longer possible to add load carriers to the shipment. This deadline is marked as the Latest Time on Staging Lane (LTS). The truck to the store then leaves without the load carriers from the national centre. Besides the time pressure of this process, the sorting of the load carriers also takes up a lot of space. Before the load carriers are brought to the outbound staging lanes, they are sorted on the transito staging lane. When the outbound lane has not yet opened (the current time is before the VTS, the earliest time to staging lane), the load carriers for that store remain on the transito lane. Only when all load carriers have left the staging lane is the lane released for a new transito shipment.
The optical communications applications at backbone as well as access networks dictate the need for closed-form expressions of the information capacity, in particular capacity expressions for the Poisson channels that model these applications. Therefore, in this paper, we take an information-theoretic approach to derive closed-form capacity expressions for: the SISO Poisson channel, already found by Kabanov and Davis; the parallel multiple-input multiple-output (MIMO) Poisson channels; and the MAC Poisson channel. Several contributions use information-theoretic approaches to derive the capacity of Poisson channels under constant and time-varying noise via martingale processes or via approximations using Bernoulli processes [1-5], to define upper and lower bounds for the capacity and the rate regions of different models [6-7], to define relations between information measures and estimation measures, and to derive the optimum power allocation for such channels. This paper, however, introduces a simple framework for deriving the capacity of Poisson channels under any model of consideration and, in addition, builds upon derivations of the optimal power allocation for the SISO, parallel, and MAC models, or any other Poisson channel model of consideration.
The concept of environmental carrying capacity has been studied and applied in the fields of ecology, tourism, urban planning, and environmental planning since the 1930s. Carrying capacity concepts are based on the assumption that there are certain environmental thresholds which, when exceeded, can cause serious and irreversible damage to the natural environment. In terms of urban planning, carrying capacity is the determined ability of the natural and artificial environments to support the demands of various uses. Carrying capacity is also defined as the ability of natural and human-made systems to absorb population growth or physical development without serious decline or damage. Carton (1987) suggested three types of sustainable loads based on environmental carrying capacity (see Fig. 1). In sum, the urban carrying capacity concept is defined as the level of human activities that can be sustained by the environment without causing degradation or irreversible damage.
In most cases, however, in addition to the interference constraint on the primary users' bands, there is a constraint on the maximum power the transmitter can emit. Therefore, in the present study, the constraint on the maximum transmittable power has been considered alongside the interference constraint. It will be shown that the optimal power allocation algorithm is highly complex, which makes it impractical. Therefore, a sub-optimal algorithm that can be implemented has been introduced for this scenario.
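As one concrete example of an implementable sub-optimal scheme for this setting (not necessarily the algorithm introduced in the study), water-filling can be combined with per-band power caps derived from the interference constraints, bisecting on the water level to meet the total power budget:

```python
def capped_waterfilling(gains, noise, caps, p_total, iters=60):
    """Water-filling with per-band power caps.

    gains:   channel power gain of each band
    noise:   noise power of each band
    caps:    per-band power ceilings (from the interference constraints)
    p_total: total power budget of the transmitter
    Bisects on the water level mu; band i receives
    min(cap_i, max(0, mu - noise_i / gain_i)).
    """
    inv_cnr = [n / g for n, g in zip(noise, gains)]
    lo, hi = 0.0, max(inv_cnr) + p_total + max(caps)
    for _ in range(iters):             # bisection on the water level
        mu = (lo + hi) / 2
        p = [min(c, max(0.0, mu - v)) for c, v in zip(caps, inv_cnr)]
        if sum(p) > p_total:
            hi = mu
        else:
            lo = mu
    return p

# Three hypothetical bands; the cap binds on the strongest band.
p = capped_waterfilling(gains=[1.0, 0.5, 0.25], noise=[1.0, 1.0, 1.0],
                        caps=[3.0, 4.0, 4.0], p_total=6.0)
print([round(x, 2) for x in p])  # → [3.0, 2.5, 0.5]
```

Capping inside the water-filling step is what makes the scheme sub-optimal in general, but it keeps the per-iteration cost linear in the number of bands.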
The results are plotted in Figure 9.8. The green line, which represents the upper bound of the 99% confidence interval for the maximum hour allocation, sometimes fluctuates heavily. The main reason for this, besides the fluctuation in allocation, is the appearance of new bookings. For shippers that change their booking, the behavior is assumed to be known after 7 days; up to that time, no oversell capacity is offered on their bookings, and their total individual booking is reserved for them for the next day. The same holds for day bookings. When a shipper makes a day booking on top of its month, quarter or year booking, its behavior is unknown, and its total booking, the original booking plus the day booking, is reserved. It is probably not realistic to assume that when a shipper makes a day booking on top of its original booking, its behavior over the original booking is the same as without the additional booking, because the original booking no longer suffices. The oversell capacity that can be offered with 1% risk in the forward direction is the difference between the red and the green line in the upper plot of Figure 9.8. In almost none of the hours of the last six months of 2014 was the total booking of all shippers at ’s-Gravenvoeren higher than 80% of the technical capacity. GTS could have offered 20% of the technical capacity for overselling on nearly all days, which is 3.36 · 10^6 kWh per hour. By
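The oversell computation described above (technical capacity minus a high quantile of observed hourly bookings) can be sketched as follows; the function and its numbers are illustrative, and it ignores the 7-day learning window and the day bookings discussed in the text:

```python
def oversell_capacity(hourly_bookings, technical_capacity, risk=0.01):
    """Oversell capacity at a given risk level: the gap between the
    technical capacity and the empirical (1 - risk) quantile of the
    observed hourly bookings."""
    data = sorted(hourly_bookings)
    idx = min(len(data) - 1, int((1 - risk) * len(data)))
    return max(0.0, technical_capacity - data[idx])

# Toy example: bookings never exceed 80% of a technical capacity of 100,
# so roughly 20 units per hour could be offered for overselling.
bookings = list(range(40, 81))
oversell = oversell_capacity(bookings, 100.0)
print(oversell)  # → 20.0
```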
Optimal power allocation for maximizing the sum capacity of multiple access channel (MAC) with quality-of-service (QoS) constraints is investigated in this paper. Majorization theory is the underlying mathematical theory on which our method hinges. It is shown that the optimal structure of the solution can be easily obtained via majorization theory. Furthermore, based on our new approach, an efficient searching method for power allocation is developed by restricting the attention to a new searching variable. Our new method requires less than half of the computational cost of the existing method in the worst case and is even much faster in general. Simulation results demonstrate the effectiveness of our method.
Third, to reduce interference, BSB (like any ICIC scheme) limits the number of transmission opportunities of the base stations. This objective conflicts with the goal of scheduling relays as often as possible to take advantage of their high channel qualities. In turn, the quantity of traffic that relay nodes can handle is mostly limited by the capacity of the D2D system. Therefore, rather than optimizing relay node activity and cellular scheduling independently, TOMRAN jointly solves the two problems and identifies whether the system bottleneck lies in the cellular capacity or in the D2D achievable rates. Specifically, to evaluate whether the traffic received by relay nodes can be retransmitted using D2D, TOMRAN uses a conservative estimation of the rates achievable over WiFi D2D links, and instructs the base station scheduler never to exceed such rates using the proportional fair optimization expressed by Problem (1). Therefore, TOMRAN ensures an efficient and fair utilization of the resources at the base stations and frees the largest quantity of resources, which makes it possible to serve more users while consuming much less energy than legacy systems. However, it is possible that some relay node could also achieve rates higher than the ones assigned