OPEN DATA CENTER ALLIANCE USAGE MODEL: Input/Output (I/O) Controls Rev. 2.1


Table of Contents

Legal Notice
Executive Summary
Purpose
Taxonomy
Methods of Controlling I/O
Provisioning Models
Provisioning Model for I/O Control
Technology Model for I/O Control
Usage Model Diagram
Usage Model Details
Usage Scenarios
Usage Requirements
RFP Requirements
Summary of Required Industry Actions

Contributors

Mustan Bharmal – T-Systems
Axel Knut Bethkenhagen – BMW
Sudip Chahal – Intel IT
Alan Clarke – SUSE
Ravi A. Giri – Intel IT
Bernd Henning – Fujitsu
Eric Kristoff – ODCA Infra Workgroup
Tobias Kunze – Red Hat

Ben MP Li – Deutsche Bank

Geoff Poskitt – Fujitsu

Peter Pruijssers – Atos

Erik Rudin – Science Logic

Avi Shvartz – Bank Leumi

Ryan Skipp – Deutsche Telekom

Catherine Spence – Intel IT

Mick Symonds – Atos

Arivou Tandabany – Telstra

Hans van de Koppel – Capgemini

Stephanie Woolson – Lockheed Martin


Legal Notice

© 2011-2013 Open Data Center Alliance, Inc. ALL RIGHTS RESERVED.

This “Open Data Center Alliance Usage Model: Input/Output (I/O) Controls” document is proprietary to the Open Data Center Alliance (the “Alliance”) and/or its successors and assigns.

NOTICE TO USERS WHO ARE NOT OPEN DATA CENTER ALLIANCE PARTICIPANTS: Non-Alliance Participants are only granted the right to review, and make reference to or cite this document. Any such references or citations to this document must give the Alliance full attribution and must acknowledge the Alliance’s copyright in this document. The proper copyright notice is as follows: “© 2011-2013 Open Data Center Alliance, Inc. ALL RIGHTS RESERVED.” Such users are not permitted to revise, alter, modify, make any derivatives of, or otherwise amend this document in any way without the prior express written permission of the Alliance.

NOTICE TO USERS WHO ARE OPEN DATA CENTER ALLIANCE PARTICIPANTS: Use of this document by Alliance Participants is subject to the Alliance’s bylaws and its other policies and procedures.

NOTICE TO USERS GENERALLY: Users of this document should not reference any initial or recommended methodology, metric, requirements, criteria, or other content that may be contained in this document or in any other document distributed by the Alliance (“Initial Models”) in any way that implies the user and/or its products or services are in compliance with, or have undergone any testing or certification to demonstrate compliance with, any of these Initial Models.

The contents of this document are intended for informational purposes only. Any proposals, recommendations or other content contained in this document, including, without limitation, the scope or content of any methodology, metric, requirements, or other criteria disclosed in this document (collectively, “Criteria”), does not constitute an endorsement or recommendation by Alliance of such Criteria and does not mean that the Alliance will in the future develop any certification or compliance or testing programs to verify any future implementation or compliance with any of the Criteria.

LEGAL DISCLAIMER: THIS DOCUMENT AND THE INFORMATION CONTAINED HEREIN IS PROVIDED ON AN “AS IS” BASIS. TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, THE ALLIANCE (ALONG WITH THE CONTRIBUTORS TO THIS DOCUMENT) HEREBY DISCLAIM ALL REPRESENTATIONS, WARRANTIES AND/OR COVENANTS, EITHER EXPRESS OR IMPLIED, STATUTORY OR AT COMMON LAW, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, VALIDITY, AND/OR NONINFRINGEMENT. THE INFORMATION CONTAINED IN THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND THE ALLIANCE MAKES NO REPRESENTATIONS, WARRANTIES AND/OR COVENANTS AS TO THE RESULTS THAT MAY BE OBTAINED FROM THE USE OF, OR RELIANCE ON, ANY INFORMATION SET FORTH IN THIS DOCUMENT, OR AS TO THE ACCURACY OR RELIABILITY OF SUCH INFORMATION.

EXCEPT AS OTHERWISE EXPRESSLY SET FORTH HEREIN, NOTHING CONTAINED IN THIS DOCUMENT SHALL BE DEEMED AS GRANTING YOU ANY KIND OF LICENSE IN THE DOCUMENT, OR ANY OF ITS CONTENTS, EITHER EXPRESSLY OR IMPLIEDLY, OR TO ANY INTELLECTUAL PROPERTY OWNED OR CONTROLLED BY THE ALLIANCE, INCLUDING, WITHOUT LIMITATION, ANY TRADEMARKS OF THE ALLIANCE.

TRADEMARKS: OPEN DATA CENTER ALLIANCE℠, ODCA℠, and the OPEN DATA CENTER ALLIANCE logo® are trade names, trademarks, and/or service marks (collectively “Marks”) owned by Open Data Center Alliance, Inc. and all rights are reserved therein. Unauthorized use is strictly prohibited. This document does not grant any user of this document any rights to use any of the ODCA’s Marks. All other service marks, trademarks, and trade names referenced herein are those of their respective owners.


Executive Summary

Cloud computing’s potential lies in providing IT services that remain elastic across a wide range of demand fluctuations. But cloud services are not yet immune to unanticipated downtime, which makes capacity management a top priority.

Capacity management in most enterprise IT environments has focused primarily on CPU, memory, and storage capacity, with little attention to I/O capacity. As host hardware becomes more powerful, virtual machine (VM) density can increase, creating potential bottlenecks in the I/O performance of the underlying subsystems and raising questions about multi-tenancy and about service-level agreements (SLAs) that adequately cover quality of service (QoS) in cloud environments. Properly monitoring and controlling network and storage I/O at the various levels of the environment will therefore become increasingly important.

The Open Data Center Alliance℠ (ODCA) recognizes the need for bandwidth control in order to remedy inefficient use of physical systems caused by I/O bottlenecks. This usage model is designed to assist organizations in creating and launching virtual machine (VM) workloads in an environment that meets their storage and network I/O performance requirements, and in managing those requirements effectively. At the same time, it is designed to help providers of cloud services achieve the technical capability to efficiently and effectively manage I/O demands from multiple running VM images.

This document serves a variety of audiences. Business decision makers looking for specific solutions, and enterprise IT groups involved in planning, operations, and procurement will find this document useful. Solution providers and technology vendors will benefit from its content to better understand customer needs and tailor service and product offerings. Standards organizations will find the information helpful in defining standards that are open and relevant to end users.


Purpose

As host hardware becomes increasingly powerful, the ability to increase virtual machine (VM) density also increases. However, resultant bottlenecks in the I/O performance of the system can make it hard to realize such gains. Part of the problem is that capacity management in most enterprise IT environments has not adequately considered I/O capacity. Furthermore, controls over I/O do not generally allow management of I/O on a per VM (or more granular) basis. This can result in uncontrolled contention between VMs (i.e., a noisy neighbor) and failure to meet quality of service (QoS) targets for an application or workload. Minimizing the consequence of decreased performance for applications running on a multi-tenant infrastructure requires seeing and controlling both potential and existing contentions.

Usage models fall into one of two major categories: functional or non-functional. If “functional” describes a technical system capability out of the box, and “non-functional” describes configurable and controllable behavior, then I/O control fits into the non-functional category. Assuming the functional elements exist, it addresses the non-functional control of those elements so that they sustainably meet defined service expectations and performance targets or commitments.

To encourage potential solutions to these I/O issues and drive implementation requirements, this usage model focuses on making network and storage resources fully manageable. This means that aspects such as guarantees, limits, and QoS must be manageable by the infrastructure or cloud service provider and must be exposed to the customer through appropriate controls and monitors. The intent is to partition bandwidth allocation by task, VM, or time of day, using priority scheduling and bandwidth throttling. The keys here are the development of performance targets by type of I/O and guarantees for meeting those targets. Managing this level of control requires an understanding of all potential thresholds or bottlenecks in the underlying infrastructure and, even more importantly, of the I/O requirements of the workload in idle and peak scenarios. Knowing application and workload requirements in relation to infrastructure capabilities helps ensure appropriate mapping at provisioning time and enables automated control with dynamic changes during the runtime of the workload. Note that many workloads require high I/O at peak times; bandwidth controls for I/O therefore help ensure that QoS is met for mission-critical and other important services.
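To make the idea of per-VM bandwidth throttling concrete, the following minimal sketch (in Python, with hypothetical names and limits; the usage model does not prescribe any implementation) applies a token-bucket cap to each VM’s I/O requests, which is one common way to realize the priority scheduling and throttling described above.

```python
import time

class TokenBucket:
    """Minimal per-VM bandwidth throttle (illustrative sketch only)."""

    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s       # sustained bandwidth limit
        self.capacity = burst_bytes        # maximum burst size
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def try_consume(self, nbytes: int) -> bool:
        """Return True if the I/O request may proceed now, False if it must wait."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

# One bucket per VM; the limits could vary by task, priority, or time of day.
limits = {"vm-finance": TokenBucket(200e6, 50e6),   # 200 MB/s sustained, 50 MB burst
          "vm-batch":   TokenBucket(50e6, 10e6)}

def admit_io(vm_id: str, nbytes: int) -> bool:
    return limits[vm_id].try_consume(nbytes)
```

A real hypervisor or fabric controller would enforce such limits in its I/O scheduler rather than in application code; the sketch only illustrates the arithmetic of a throttle.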

Taxonomy

Actor: Cloud Subscriber
Description: A person or organization that has been authenticated to a cloud and maintains a business relationship with a cloud.

Actor: Cloud Provider
Description: An organization providing network services and charging cloud subscribers. A (public) cloud provider provides services over the Internet.


Methods of Controlling I/O

Although the concept of controlling the performance and throughput of an infrastructure element is not new, it acquires new contexts and dimensions in cloud platforms because of the nature of the cloud: high sustained resource utilization shared between multiple workloads and business organizations. The ability to segregate and manage resources against service quality and performance commitments has not yet been fully achieved technologically. However, there are a number of approaches, ranging from direct to indirect, by which the committed service characteristic metrics could be achieved; the following lists some of them:

1. Well-designed provisioning models

2. Detailed infrastructure element technology control

For both of the mentioned models, transparency is required:

1. On the provisioning model side, transparency of workload and transaction activities is needed – e.g., what types of transactions exist, what type of load they create, and what number of transactions is forecast (current, short, and medium term).

2. For the technology element control model, it is necessary to create transparency on each element, with defined high and low watermarks for each element.

Typically, this is achieved by applying management and control capabilities to the elements of the infrastructure – e.g., storage, network, processors, memory, and so on. The monitoring data are then integrated into a control interface that enables active allocation or tuning/configuration of resources where necessary and directs transactions according to pre-defined frameworks and concepts. Ideally, a hybrid of both methods will yield the best long-term results and will enable the service orchestration layer and service catalog to provide the most useful management information and capability; however, such a hybrid is unusual in cloud provider/service consumer interactions today.
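As one hedged illustration of such a watermark-driven control pass, the sketch below (element names, probes, and actions are hypothetical; the document does not prescribe an interface) polls a utilization metric for each monitored element, compares it with defined high and low watermarks, and triggers an action when an element runs hot or idle.

```python
# Illustrative element names, loosely based on the usage model diagram later in this document.
WATERMARKS = {                      # element: (low, high) utilization fractions
    "hard-disk-spindles": (0.20, 0.80),
    "san-fabric":         (0.25, 0.75),
    "server-hba":         (0.20, 0.85),
    "network-fabric":     (0.25, 0.75),
}

def control_pass(read_utilization, throttle_or_migrate, reclaim_capacity):
    """One pass of a watermark-driven control loop over monitored elements (sketch only)."""
    for element, (low, high) in WATERMARKS.items():
        util = read_utilization(element)       # 0.0 .. 1.0, supplied by the element's probe
        if util > high:
            throttle_or_migrate(element)       # relieve a hot spot
        elif util < low:
            reclaim_capacity(element)          # consolidate under-used capacity
```

The callbacks stand in for whatever control interface the provider actually exposes; the point is that each element carries its own watermarks and the loop integrates their readings into actions.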

Provisioning Models

The ability to control and shape I/O by means of managing the provisioning model requires some insight into the workload. Through this insight, one is able to calculate the sum of potential resource requirements for the whole environment, and to then pre-determine which workloads should co-exist on which resources. An additional implication is that workloads may evolve over their lifecycle, and so some monitoring of defined high and low watermarks is needed to track trends, and potentially relocate workload as appropriate, from time to time.
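The following minimal sketch (with made-up figures; not part of the usage model) shows the kind of arithmetic this implies: sum each candidate workload’s peak I/O requirement and place it only on a resource pool whose remaining I/O capacity can absorb it.

```python
def place_workloads(workloads, pools):
    """Greedy placement by peak I/O demand (illustrative sketch, not a prescribed algorithm).

    workloads: dict of name -> peak I/O demand in MB/s
    pools:     dict of name -> total I/O capacity in MB/s
    """
    remaining = dict(pools)
    placement = {}
    # Place the largest consumers first to reduce fragmentation.
    for wl, demand in sorted(workloads.items(), key=lambda kv: -kv[1]):
        pool = next((p for p, cap in remaining.items() if cap >= demand), None)
        if pool is None:
            raise RuntimeError(f"no pool can absorb {wl} ({demand} MB/s)")
        placement[wl] = pool
        remaining[pool] -= demand
    return placement

# Example with hypothetical numbers:
print(place_workloads({"oltp": 400, "batch": 250, "web": 120},
                      {"pool-a": 600, "pool-b": 400}))
# -> {'oltp': 'pool-a', 'batch': 'pool-b', 'web': 'pool-a'}
```

In practice the demand figures would come from the monitored high/low watermarks mentioned above, and workloads would be re-evaluated and potentially relocated as they evolve.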

There are two macro use cases for considering I/O control:

1. Service performance management, to actively manage the achievement of planned or committed service thresholds

2. Optimization of infrastructure (in order that unnecessary infrastructure is not thrown at performance problems, thereby driving up infrastructure and operational costs)

To create sustainable performance, the following I/O control models exist:

Provisioning Model for I/O Control

For this model, the following must be considered:

1. Segregation of layers and elements, and locating services on them according to their reported/forecast workload capacities, based on one of the following three approaches:

– Like workload location (sum of similar workloads = total capacity)
– Dissimilar workload aggregation (low and high = total average)
– Dynamic workload allocation management

2. Alternatively, the following resource allocation methods can be applied in the context of the previous three workload groupings:

– Over provision capacity

– Queue workload demands


Technology Model for I/O Control

The more ideal, though at this stage less mature, method of ensuring that targeted service and performance metrics are achieved by the I/O layer is to actively manage and control the technical infrastructure and/or the transactions passing through it. The following techniques are typical ways of achieving this:

1. Separation of technology layers (leveraging virtualization and abstraction)

2. Provisioning spare capacity at the fabric level, so that there is always enough buffer to absorb peaks, and provisioning additional spare capacity as workloads grow

3. Protocol selection and management (e.g., InfiniBand)

4. Dynamic workload management (e.g., vMotion*)

The ideal configuration is a full hybrid model where both provisioning control and technology control are integrated to create active workload control and enablement.

The diagram below represents the different elements in a typical example system where I/O control is relevant, including both physical and virtual (abstracted) layers. Monitoring and management of I/O must be considered between each of the elements in the chain, and each of these I/O links is managed by different probes and control systems – for example, physical disk, logical LUN, physical controller, and logical fabric on the storage side, with equivalents on the network side. Hot spots or under-utilization may occur at any level in the chain represented by the graphic. Each point must therefore be considered when setting the high and low thresholds to be measured and the level of control to be applied there, in the context of its impact on the overall infrastructure and hosted service.

Usage Model Diagram

[Figure: the I/O chain from hard disk spindles and LUNs through the SAN storage processor (SP), SAN fabric, and server HBAs to application VMs on servers, and onward through the network fabric. Storage I/O and network I/O links connect each stage of the chain.]

Usage Model Details

Through the use of sufficient monitors, the cloud subscriber’s consumption of I/O and the cloud provider’s ability to provide I/O can be balanced appropriately. Ideally, the applications and workloads that a cloud subscriber submits to the cloud would be closely matched to the appropriate multi-tenant environment, where the impact of the workload would not cause issues for other tenants, and where other tenants’ workloads would not constrain the throughput and latency requirements of their cloud neighbors.


Usage Scenarios

Goals:

• Assist cloud subscribers with the capability to create and launch VM workloads that meet their storage and network I/O performance requirements.

• Assist the cloud provider so that it has the technical capability to manage I/O demands from multiple running VM images in a way that ensures that an individual VM workload’s I/O cannot adversely and unexpectedly impact service to another running VM workload.

Assumptions:

• Assumes the Allocate VM Instance Usage Model by the National Institute of Standards and Technology (NIST).¹

• Cloud providers may provide varying levels of assurance based on price, and would therefore provide the appropriate level of instrumentation to the cloud subscriber to ensure that the VM workloads are matched to the appropriate underlying infrastructure.

• The cloud provider must provide appropriate visibility (through monitors) that allows the cloud subscriber to see workload characteristics in idle and peak scenarios. This will enable both the cloud provider and cloud subscriber to improve long-term mapping of I/O requirements to infrastructure capabilities.

Success Scenario 1 (instrumented):

1. The cloud provider shall be able to provision fully instrumented VMs and the hosting infrastructure to obtain complete visibility of I/O activity on a per component (physical and virtual) basis.

2. The cloud provider is therefore able to monitor I/O consumption and take necessary steps (e.g., provision additional I/O capacity) to ensure that cloud subscriber requirements are met over the lifespan of their workloads.

3. The cloud subscriber is notified of the level of I/O performance assurance available and, when appropriate, informed of the need to upgrade to a higher level of assurance, based on the cloud subscriber’s workload characteristics.

4. The cloud subscriber is able to utilize on-demand monitors and reports to see its workload/application characteristics and understand the peaks and valleys of utilization of I/O.

Failure Condition 1:

1. The cloud provider cannot satisfy the cloud subscriber’s request to allocate a new VM instance due to an inability to identify the sources of I/O traffic of either the cloud subscriber or other tenants throughout the hosting infrastructure.

2. The cloud provider is unable to: 1) identify the sources of I/O traffic through the hosting infrastructure, and/or 2) quantify the volume and rate of I/O from each VM through the infrastructure.

3. The cloud subscriber is not able to acquire necessary I/O to ensure performance and throughput of the running workload/application. In a multi-tenant environment, this leads to constraints across the I/O paths in the entire infrastructure.

¹ www.nist.gov/itl/cloud/3_7.cfm


Success Scenario 2 (partial):

1. The cloud provider is able to provision fully instrumented VMs and the hosting infrastructure to provide complete visibility of I/O activity on a per component (physical and virtual) basis.

2. The cloud provider is able to assign to each VM image the relative weightings/performance shares that the hosting infrastructure can use to determine the share of available I/O that each cloud subscriber VM may use at any one time (a proportional-share sketch follows this scenario).

3. These shares are enforced deterministically by the hosting infrastructure to manage contention between individual VM I/O demands.

4. Correct provisioning is verified and confirmed to the cloud provider.

5. The cloud subscriber is notified of the level of I/O performance assurance available and the allocated performance shares relative to the overall capacity.

6. The cloud subscriber is able to utilize instrumentation to determine when the workload characteristics are impacted by throttles implemented through the performance shares.
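As a rough illustration of how such relative weightings could translate into bandwidth under contention (the usage model does not mandate any particular scheme), the sketch below divides the currently available I/O capacity among active VMs in proportion to their assigned shares.

```python
def allocate_by_shares(available_mb_s: float, shares: dict) -> dict:
    """Split available I/O capacity proportionally to per-VM shares (illustrative only)."""
    total = sum(shares.values())
    return {vm: available_mb_s * s / total for vm, s in shares.items()}

# Example: 1,000 MB/s of storage bandwidth shared between three tenant VMs.
print(allocate_by_shares(1000.0, {"vm-a": 4, "vm-b": 2, "vm-c": 2}))
# -> {'vm-a': 500.0, 'vm-b': 250.0, 'vm-c': 250.0}
```

Deterministic enforcement of these shares would sit in the hypervisor or fabric scheduler; the calculation above only shows what “relative weighting” means arithmetically.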

Failure Condition 2:

The same as Failure Condition 1, but in addition the infrastructure is unable to allocate available I/O capacity based on individual VM weightings/performance shares. VMs may then exceed the assigned relative performance constraints and negatively and non-deterministically impact the I/O capacity available for other VMs.

Success Scenario 3 (full):

1. The same as Success Scenario 2 (partial), but in addition the cloud provider is able to configure full end-to-end I/O QoS management, enabling storage and network-specific SLAs (e.g., target storage service times) with assured deterministic QoS to be assigned to cloud subscriber VMs (either individually or in predefined groupings).

2. Correct provisioning is verified and confirmed to the cloud provider.

3. The cloud subscriber is notified of the level of I/O performance assurance available and the allocated I/O characteristics.

Failure Condition 3:

The same as Failure Condition 2, but in addition the infrastructure is unable to allocate available specific I/O capacity to an individual VM. VMs may then exceed the desired performance constraints and negatively and non-deterministically impact the I/O capacity available for other VMs.

Failure Handling:

1. For all failure conditions, both the cloud provider and the cloud subscriber should be notified of the inability to provide I/O assurance.

2. Failure Condition 1 (the cloud provider does not meet the SLA and cannot sustain I/O for a period of time) should result in the cloud provider rejecting the Allocate VM Instance request if the cloud provider cannot assure the desired I/O traffic. The cloud subscriber may modify the I/O requirement for the VM and resubmit the request. Failure Condition 2 should result in an automatic attempt to achieve Success Scenario 1 (instrumented).

3. Failure Condition 3 should result in an automatic attempt to achieve Success Scenario 2 (partial), as sketched below.
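A minimal sketch of this degradation path (hypothetical function names; the usage model only describes the behavior) chains the scenarios so that each failure falls back to the next-lower level of assurance, and rejects the request only when none can be met:

```python
def provision_with_fallback(request, try_full, try_partial, try_instrumented, notify):
    """Attempt the highest I/O assurance level first, degrading on failure (illustrative sketch).

    try_full / try_partial / try_instrumented correspond to Success Scenarios 3, 2, and 1;
    each callable returns True on success. notify() informs both provider and subscriber.
    """
    for level, attempt in (("full QoS", try_full),
                           ("performance shares", try_partial),
                           ("instrumented only", try_instrumented)):
        if attempt(request):
            return level
        notify(f"I/O assurance level '{level}' unavailable for {request}")
    # No assurance level is possible: reject, so the subscriber can revise and resubmit.
    return None
```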


Usage Requirements

At a fundamental level it is expected that all usage requirements are multi-vendor and open. Key requirements need to be met across vendors and hypervisors.

A selection of important business metrics must be available within a controlled environment to enable service selection, service management, and service operations. Rolled up to a KPI level, these are as follows:

KPI: Response time experienced by users
Explanation: How long does a defined business transaction take, from triggering within the cloud to completion by the cloud, for the end user? This, plus the sum of all latency effects from the user’s PC, to the LAN, to the network interface, and back again to the user’s screen, provides the overall KPI result.

KPI: The number of potential users that a system or service can support
Explanation: Given a specific configured and allocated set of cloud system or service resources, what is the maximum number of users (against a defined workload baseline) that the system can carry before a resource change is required? This enables the business to plan its operations in line with resource costs.

KPI: The number of potential transactions that a system or service can support
Explanation: Given a specific configured and allocated set of cloud system or service resources, what is the maximum number of defined business transactions (against a defined transaction workload baseline) that the system can carry before a resource change is required? This enables the business to plan its business capability in line with resource costs.

A number of metrics are additionally required to quantify the infrastructure which forms part of the cloud system or service:

Monitoring
For the cloud provider:
• Monitor network and storage I/O at individual VMs
• Monitor network and storage I/O: throughput, latency
• Monitor latency and throughput at the individual component level and at the aggregate level
• Monitor aggregate network I/O capacity and bandwidth
• Monitor aggregate storage I/O capacity and bandwidth
For the cloud subscriber:
• Network and storage I/O reservations – per VM
• Aggregate workload I/O consumption – by hour, day, week

SLA Metrics
For the cloud subscriber (per VM):
• Average latency/time period, max latency/time period, min latency/time period
• Average throughput/time period, max throughput/time period, min throughput/time period

APIs
• Representational State Transfer (REST) and Web Service (WS) APIs for SLA definition, monitoring, reporting, and so on

Timeslice Monitoring and Control
• Time granularity over which reservations are met to obtain definable latency
• Throttling thresholds: 1-sigma, 2-sigma, and 3-sigma deviations from the mean (see the sketch after this table)

I/O Reservations
For the cloud subscriber, these reservation attributes:
• Min, max, average – network I/O shares per VM
• Min, max, average – storage I/O shares, I/O operations per second (IOPS) per VM
• Latency and throughput (see SLA Metrics above)
For the cloud provider:
• Definition of an I/O share that is independent of the machine
• Allocation of different I/O capacity to different VMs sharing the same pool of I/O resources
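To illustrate the sigma-based throttling thresholds listed above (the statistics and figures here are illustrative, not prescribed by the usage model), the sketch below derives 1-, 2-, and 3-sigma thresholds from a window of observed per-VM throughput samples and reports which threshold a burst exceeds:

```python
from statistics import mean, stdev

def sigma_thresholds(samples):
    """Return the 1-, 2-, and 3-sigma deviation thresholds above the mean (illustrative)."""
    mu, sigma = mean(samples), stdev(samples)
    return {k: mu + k * sigma for k in (1, 2, 3)}

# Example: per-VM throughput samples in MB/s over one monitoring timeslice.
observed = [180, 195, 210, 205, 190, 400]       # 400 MB/s is an outlier burst
thresholds = sigma_thresholds(observed)
burst = observed[-1]
level = max((k for k, t in thresholds.items() if burst > t), default=0)
print(f"burst of {burst} MB/s exceeds the {level}-sigma threshold" if level
      else "burst within thresholds")
```

A provider could throttle progressively harder as a VM’s observed I/O crosses each successive threshold.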


If the usage requirements are met, cloud providers will typically have the appropriate monitoring tools for viewing, both visually and through APIs, the levels of all component and aggregate I/O constraints. The monitoring tools would normally enable threshold views at each component level and at aggregate levels for the entire cloud platform, allowing proactive remediation and reactive monitoring, as well as exposing issues with the cloud platform that would lead to the cloud subscriber missing SLAs. The expected solution implementation would allow connection through APIs and standard connection methods to foster interoperability between existing manageability solutions and new standards-based solutions. The cloud provider could then set thresholds at each appropriate level of the cloud platform to automate monitoring and throttling, or to migrate cloud subscriber workloads so that the SLA is met.

Cloud subscribers will be able to view how the cloud provider is meeting their SLA requirements at an aggregate level for their landed compute, storage, and network capacity. Using tools provided by the cloud provider, cloud subscribers will be able to analyze their workloads and applications to find I/O issues (e.g., too much I/O for the workload), allowing them to tune their workloads and meet their performance needs at a potentially lower-cost tier of cloud platform.
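As one hedged example of what such subscriber-facing tooling could look like, the sketch below polls a hypothetical REST endpoint (the URL, response fields, and threshold are invented for illustration; the usage model only requires that REST/WS APIs exist) and flags VMs whose measured latency exceeds an SLA target:

```python
import json
from urllib.request import urlopen

SLA_MAX_LATENCY_MS = 20.0   # illustrative per-VM storage latency target

def check_io_sla(report_url: str):
    """Fetch per-VM I/O metrics from a hypothetical monitoring API and flag SLA misses."""
    with urlopen(report_url) as resp:        # e.g., https://cloud.example/api/io-metrics (hypothetical)
        metrics = json.load(resp)            # assumed shape: [{"vm": ..., "avg_latency_ms": ...}, ...]
    return [m["vm"] for m in metrics if m["avg_latency_ms"] > SLA_MAX_LATENCY_MS]
```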

RFP Requirements

Following are requirements that the Alliance suggests should be included in requests for proposal (RFP) to cloud providers so that proposed solutions deliver bandwidth control for I/O that ensures QoS requirements can best be met for mission-critical and other important services.

• ODCA Principle Requirement – Solution is open, works on multiple virtual and non-virtual infrastructure platforms, and is standards-based. Describe how the solution meets this principle and any limitations it has in meeting these goals.

• ODCA I/O Control Usage Model Rev 2.1 – Solution should provide the ability to control I/O for network traffic, Fibre Channel traffic, and converged fabric traffic. Please state whether the solution includes all or only some of these abilities, and detail any limitations.

• ODCA I/O Control Usage Model Rev 2.1 – Solution should be able to work on standalone systems, multi-tenant systems, clustered environments, and with network-teamed platforms. State any limitations of the solution with regard to these types of implementations.

• ODCA I/O Control Usage Model Rev 2.1 – Solution provides the ability to control variance in maximum peak usage, and the guaranteed minimum is quantifiable.

• ODCA I/O Control Usage Model Rev 2.1 – Solution is capable of independent I/O control for both input and output traffic.

• ODCA I/O Control Usage Model Rev 2.1 – Solution provides a quantifiable I/O control performance measure.

• ODCA I/O Control Usage Model Rev 2.1 – Solution is able to chart history of usage, by VM, by tenant, by time, and by traffic type.

Click here for the online Proposal Engine Assistant Tool (PEAT)² to help you detail your RFP requirements.

Summary of Required Industry Actions

To give guidance on how to create and deploy solutions that are open, multi-vendor, and interoperable, we have identified specific areas where the ODCA suggests there should be open specifications, formal or de facto standards, or common intellectual property-free (IP-free) implementations. Where the ODCA has a particular recommendation on the specification, standard, or open implementation, it is called out in this usage model. In other cases, we plan to work with the industry to evaluate and recommend specifications in future releases of this document.
