
An Introduction to System Sizing for Data Warehousing Workloads

An Example Sizing Method for Optimal System Performance

Tony Petrossian, pSeries Performance, IBM, Austin

Ann Matzou, Cluster Software Performance, IBM, Poughkeepsie
Kwai Wong, DB2 Performance, IBM, Toronto


Note

• © Copyright International Business Machines Corporation 2004. All rights reserved.

• TPC Benchmark, TPC-D, TPC-H, TPC-R, QppD, QppH, QppR, QthD, QthH, QthR and QphD, QphH, QphR are trademarks of the Transaction Processing Performance Council.

• The following terms are trademarks or registered trademarks of IBM in the United States and/or other countries: IBM, AIX, DB2, DB2 Universal Database, Power PC Architecture, POWER, eServer, pSeries.

• Performance results described in this paper were obtained under controlled conditions and may not be achievable under different conditions. All information is provided “AS IS” and no warranties or guarantees are expressed or implied by IBM. Actual system performance may vary and is dependent upon many factors including system hardware configuration and software design and configuration.

• All TPC-H results referenced are as of March 30, 2004.

Special Notices

This publication was produced in the United States. IBM may not offer the products, programs, services, or features discussed herein in other countries, and the information may be subject to change without notice. Consult your local IBM business contact for information on the products, programs, services, and features available in your area. Any reference to an IBM product, program, service, or feature is not intended to state or imply that only IBM's product, program, service, or feature may be used. Any functionally equivalent product, program, service, or feature that does not infringe any of IBM's intellectual property rights may be used instead of the IBM product, program, service or feature. Information in this presentation concerning non-IBM products was obtained from the suppliers of these products, published announcement material, or other publicly available sources.

Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. IBM may have patents or pending patent applications covering subject matter in this presentation. The furnishing of this presentation does not give you any license to these patents. Send license inquires, in writing, to IBM Director of Licensing, IBM Corporation, 500 Columbus Avenue, Thornwood, NY 10594 USA. All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Contact your IBM local Branch Office or IBM Authorized Reseller for the full text of a specific Statement of General Direction. The information contained in this presentation has not been submitted to any formal IBM test and is distributed AS IS. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere.

The use of this information or the implementation of any techniques described herein is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. Customers attempting to adapt these techniques to their own environments do so at their own risk. The information contained in this document represents the current views of IBM on the issues discussed as of the date of publication. IBM cannot guarantee the accuracy of any information presented after the date of publication. IBM products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply. Any performance data contained in this document was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Users of this presentation should verify the applicable data for their specific environment. UNIX is a registered trademark in the United States and other countries, licensed exclusively through X/Open Company, Limited. Other company, product and service names, which may be denoted by a double asterisk (**), may be trademarks or service marks of others. This publication was developed for products and services offered in the United States. IBM may not offer the products, services or features discussed in this document in other countries. Information is subject to change without notice. Consult your local IBM representative for information on offerings available in your area.


Contents

1 SIZING THE HARDWARE FOR A DATA WAREHOUSE
  1.1 Introduction
    1.1.1 The Common Misconception about Sizing
    1.1.2 Understanding the Sizing Problems for a Data Warehouse
    1.1.3 Accuracy Goal in Sizing Estimates
    1.1.4 Optimally Balanced System
    1.1.5 Overview of the Sizing Process
2 SIZING METHODOLOGY
  2.1 Choosing a System
  2.2 Understanding System Capabilities
    2.2.1 The eServer pSeries 655 System
    2.2.2 FAStT900 Storage Server
  2.3 Understanding the Workload
    2.3.1 Storage Space Requirements of the Workload
    2.3.2 Data Processing Characteristics of the Workload
    2.3.3 Operational Characteristics of the Workload
  2.4 Business Requirements
  2.5 Sizing the System for Effective Utilization
    2.5.1 Sizing the Storage
    2.5.2 Sizing For CPU and Memory
    2.5.3 Sizing for the Network
    2.5.4 The Overall System
  2.6 Conclusion
APPENDIX - 1 TPC-H BENCHMARK OVERVIEW
  1.1 TPC Council
  1.2 TPC-H Overview
  1.3 TPC-H Metrics
  1.4 Benchmark Evolution


1 Sizing the Hardware for a Data Warehouse

1.1 Introduction

Today’s successful enterprises have a significant interest in Business Intelligence (BI) and data warehousing (DW) because they rely on these tools to gain a better understanding of their business and to establish a competitive position within their marketplaces. When reviewing analyst reports and market research papers related to data warehousing and Business Intelligence applications, we can easily see the following common theme:

“Data volumes are growing at unprecedented rates”

To accommodate huge volumes of data, we must learn to build large data warehouses that can function successfully and provide an ever increasing return on investment. The need to effectively process large volumes of data requires abilities beyond just storing the data in a warehouse.

This paper introduces the reader to the various aspects of sizing a system for data warehousing. To bring awareness to the critical issues that are frequently ignored in sizing projects, a sizing example is provided that estimates the hardware requirements of a recently published data warehousing benchmark.

1.1.1 The Common Misconception about Sizing

One of the key elements contributing to the success of a Data warehouse project is the hardware that stores, processes, and facilitates the movement of the data. Obviously, a large warehouse requires a large storage capacity, but the challenges of building a successful data warehouse are not limited to amassing a huge storage complex.

Unfortunately, many system sizing exercises put too much emphasis on the capacity and function of the storage without considering the overall IO subsystem and the balance of system resources needed to make efficient use of the storage investment. The ability to attach a large storage complex to a system does not suggest that the system is appropriately equipped to process the large volumes of data within a reasonable window of time.

1.1.2 Understanding the Sizing Problems for a Data Warehouse

Sizing a system for a new data warehouse project without any experimental data can be a daunting task. Unlike more traditional OLTP workloads, only a small portion of common performance information can be used to size different data warehouse systems and applications. The majority of OLTP workloads tend to have well understood units of work per transaction. As a result, resource requirements can be scaled using transaction rates and number of users. In contrast, a unit of work in a data warehouse application is variable and mostly unrelated to the data size. This variability makes it difficult to compare resource utilizations of different DW applications for estimating system requirements.


Many existing DW installations have created a science of capacity planning and measuring the resource utilization of their workloads. Unfortunately, most of this information is unavailable or inapplicable to a new installation. The ad hoc nature of a DW workload makes it difficult to compare different systems.

Estimating CPU requirements for data processing in a data warehouse is a complex task. CPU requirements to process 100MB of data can vary depending on the complexity of the queries. In order to build a knowledge base from which to estimate the processing requirements for a specific warehouse workload, be prepared to experiment, use benchmarks, seek expert opinions, and even guess.

Understanding sizing problems can help to build flexible configurations for future refinements.

1.1.3 Accuracy Goal in Sizing Estimates

An overly complex sizing methodology that requires massive amounts of estimated input will most likely produce a false sense of accuracy without necessarily producing a better system sizing estimate.

The goal should be to produce a sizing estimate for a flexible system configuration with room for minor adjustments in the future. The system should have a good balance of resources that scale proportionally.

It is important to remember that the outcome of any sizing methodology is an estimate, and although its accuracy can be improved, it will never reach one hundred percent. It is critical to recognize the point of diminishing returns when going through a sizing process. Somewhere between knowing only the size of the data and understanding the resource requirements of every possible query lies the point at which a reasonable sizing estimate can be made.

Each sizing effort should include an accuracy goal and a margin of error based on the level of existing knowledge of the application. Like any other business decision, this task requires risk calculation and contingency planning.

An alternative to a sizing estimate is to run a custom benchmark using the specific application and data. Even when feasible, these efforts are usually very expensive. In most cases, an application is built after the hardware infrastructure is installed, so benchmarking the application before buying hardware is not possible.

1.1.4 Optimally Balanced System

Regardless of the methodology used to establish sizing estimates for data warehouse workloads, the outcome should always be a system with balanced resources that can be used efficiently. A well balanced configuration should be capable of maximizing one or more of the most expensive system resources at any time. Quite often, poorly configured systems leave expensive processing power idle due to an inadequate IO subsystem.

Data warehousing workloads present additional challenges that are not seen in traditional OLTP systems. The volume of data moved between storage and CPU for any given OLTP transaction is very small. The aggregate data movement for a large OLTP system is minuscule when compared with data warehousing systems.

The balance between system CPU power, storage capacity and the IO subsystem is critical when building data warehousing systems. The IO subsystem connects the storage to the CPU and accommodates the movement of data between the two components.

Figure 1: Data Movement in Systems (system storage connects to the IO subsystem through the storage interconnect, the IO subsystem connects to CPU and memory through the IO bus, and networks attach through the network interconnect)

Due to the high volume of data moving through data warehousing systems, special consideration should be given to the IO subsystem for this type of workload.

1.1.5 Overview of the Sizing Process

To effectively size a system for optimal performance, architecting a solution requires the following steps:

• Understand system performance capabilities and limits
• Understand the workload and resource usage characteristics
• Establish business requirements and system expectations
• Size the system for optimal use of its resources

With some analytical work a reasonable configuration that meets the business requirements can be estimated. As the quality of data increases, sizing estimates become more accurate.


2 Sizing Methodology

This section introduces the sizing methodology using a sample sizing effort. The workload used is a benchmark from the Transaction Processing Performance Council (TPC) for data warehousing. The TPC Benchmark™ H (TPC-H) is a well recognized data warehousing benchmark and its detailed description is publicly available. Appendix - 1 contains more information about the TPC and the TPC-H benchmark.

The following diagram illustrates the methodology used:

Figure 2: System Sizing Methodology (data collection of workload characteristics, business requirements, and product data feeds a sizing knowledge base of facts, assumptions, and requirements, which drives the sizing process and produces the sizing estimate)

This example characterizes the behavior of a data warehouse workload and sets specific performance goals for achieving the business objective. Performance data sheets for the IBM® eServer™ pSeries™ 655 (p655) and the IBM TotalStorage™ FAStT900 Storage Server were used to establish a system sizing estimate to meet the goals.

Although not part of this example, the data was used to run and publish a TPC-H benchmark that validated our work. More details on this benchmark result can be found on the TPC web site:

http://www.tpc.org/tpch/results/tpch_result_detail.asp?id=103120801

The following sections describe the steps required to collect data and size the system.

2.1 Choosing a System

Selecting a vendor, a product line, and a system for a new project is a complicated process beyond the scope of this paper. It should be noted that most selection processes are influenced by organizational preferences, historical purchasing patterns and other non-technical issues.


Regardless of the reasoning, the selection team is responsible for ensuring that a selected system is capable of meeting the technical requirements of the workload and providing the return on investment sought by the business.

For this project the following products were selected:

• Clustered configuration of IBM eServer pSeries 655 systems
• IBM FAStT900 Storage Server
• IBM DB2® UDB

The choice of products was influenced by the project requirements, as well as the desire to highlight these products.

2.2 Understanding System Capabilities

2.2.1 The eServer pSeries 655 System

The pSeries 655 Central Electronics Complex (CEC) (7039-651) is a 4U tall, 24-inch half drawer, rack-mounted device. It houses the system processors, memory, system support processor, I/O drawer connection capability, and associated components.

The p655 server includes the latest IBM POWER4+™ chip technology in a building-block approach to the requirements of high-performance, clustered technical and commercial computing. With the speed advantages provided by the powerful 1.7GHz POWER4+ processor and its associated system architecture, a fast system bus, extremely high memory bandwidth and robust input/output (I/O) subsystems, the pSeries 655 (p655) provides a versatile solution to the most demanding client requirements.

The following diagram shows the p655 system configuration.

Figure 3: p655 CEC (the MCM and four memory slots connect via GX buses to RIO-2 bridges; one RIO-2 link feeds a RIO-2 to PCI-X bridge for the internal PCI devices and PCI slots, and the other RIO-2 link connects to the external I/O drawer over PCI buses)

The following sections describe the major components of the p655. For general description and configuration information about the p655, refer to the following IBM web site:


http://www-1.ibm.com/servers/eserver/pseries/hardware/midrange/p655_desc.html

2.2.1.1 Processing Power

The p655 system is powered by a single multi-chip processor module. A Multi-Chip Module (MCM) has either four or eight 1.7GHz, POWER4+ processor cores. Each processor core contains 32 KB of data cache and 64 KB of instruction cache. Each processor chip has a 1.5MB L2 cache onboard that operates at chip frequency. On the 8-way MCM, the two cores on each processor chip share that chip’s L2 cache, while on the 4-way MCM each core has a dedicated L2 cache. A 32MB L3 cache is located between each processor chip and main memory and operates at one-third of the chip frequency. For more detailed information on the p655 configuration, refer to the following white paper:

http://www-1.ibm.com/servers/eserver/pseries/hardware/whitepapers/p655_hpc.pdf

2.2.1.2 Memory Configuration

The p655 System has four memory slots which allow from 4GB to 64GB of memory to be installed. Memory cards are available in 4GB, 8GB and 16GB sizes. The following table shows possible memory configurations.

Total memory   Slot 1   Slot 2   Slot 3   Slot 4
4GB            4GB      -        -        -
8GB            4GB      4GB      -        -
16GB           4GB      4GB      4GB      4GB
16GB           8GB      8GB      -        -
32GB           8GB      8GB      8GB      8GB
32GB           16GB     16GB     -        -
64GB           16GB     16GB     16GB     16GB

Table 1: System Memory Configuration Options

2.2.1.3 IO Subsystem

The p655 has two RIO-2 (Remote I/O) buses. The first RIO-2 bus supports the service processor, two Ethernet ports, an integrated SCSI adapter, and three hot-plug/blind-swap PCI-X slots on the system board (see Figure 3). The second RIO-2 bus can be connected to a 7040-61D I/O drawer for additional IO adapter slots and performance. The p655 supports a maximum of one I/O drawer with two RIO-2 ports. The IO drawer contains two PCI I/O planars. Each planar has three PCI Host Buses (PHB); the first PHB has four 64-bit (133MHz) PCI slots, and the second and third each have three 64-bit (133MHz) PCI slots. Figure 4 shows the detailed configuration of the I/O drawer connected to the RIO-2 bus.


Figure 4: IO Drawer Configuration (the 7040-61D IO drawer connects over RIO-2, with active and passive/failover links, to two RIO-2 to PCI-X bridges; each bridge drives three PCI Host Buses through PCI-PCI bridges. Labeled sustained bandwidths: 600MB/s per 64-bit PHB, and duplex rates of 1050MB/s and 2100MB/s on the RIO-2 links.)

2.2.2 FAStT900 Storage Server

The FAStT900 Storage Server is a member of the IBM FAStT family of disk storage products. The FAStT900 is an enterprise-class storage server designed to provide performance and flexibility for data-intensive computing environments.

The FAStT900 Storage Server has four host-side FC interfaces, which provide an extremely high IO bandwidth, and four drive-side interfaces, which accommodate a very large storage capacity. It offers up to 32 TB of Fibre Channel disk capacity using 18.2, 36.4, 73.4, and 146.8GB drives with EXP700 disk drive enclosures. Dual controllers with mirrored cache in the FAStT900 provide the RAID functions necessary to protect data from disk failures. A FAStT900 can be connected through SAN switches or attached directly to the host.

The FAStT900 Storage Server sustains an enormous IO rate with a mixture of read and write operations. When performing sequential IO operations, a FAStT900 can saturate the four 2Gb FC host interfaces and deliver more than 720MB per second of IO to the system.

For more information about FAStT900 features and performance, refer to the FAStT900 pages on the IBM web site.


2.3 Understanding the Workload

It should be mentioned that the authors have experience with the TPC-H workload based on previous projects. To characterize a workload for sizing a system, it is useful to have experience in data warehousing, with a good understanding of the specific database products and the targeted business environment.

There are two major workload related areas of concern in sizing a system for data warehousing projects:

• The storage of the data warehouse
• The processing of the data

Both storage and processing requirements have an impact on all system components. Each will be considered separately.

2.3.1 Storage Space Requirements of the Workload

Estimating the disk space to store data is the simpler aspect of system sizing so many sizing efforts put most of the emphasis on this task. For this example, the assumption was to have 1,000GB of raw data. Most DW projects can easily calculate the raw data size based on information provided by the data sources. For example, when the data is extracted from an existing transactional system, its size is either known or easy to estimate.

The various components requiring storage space are:

• Table data storage
• Index data storage
• Database log space
• Temporary space required by the database
• Staging space required to store raw data

Once the raw data size is established, it is necessary to estimate the database space requirement for storing the data in tables using the internal format of the database. For this, the schema, page size, and row density per page are required. A database administrator and a data architect must be involved in this process. Most database vendors provide accurate estimates once the schema and data size are known. For this example, the information provided in the “DB2 Administration Guide: Planning” document was used. This manual has a specific section that can help estimate table, index, log, and temporary space requirements for the schema.

2.3.1.1 Table Data Storage

For each table in the database, the following information was used to calculate the space requirements for the base table that holds the warehouse data:

• Raw data size
• Data row size
• Number of rows
• Page size
• Rows per page
  - Including page overhead
  - Free slots per page
• Free space for future page additions

Considering the above items, the space requirement for storing all the base tables was estimated to be 1,350GB. Most database vendors provide ample documentation to help calculate the database table space requirement for any given schema. The database product documentation should be consulted for information on estimating table space requirements.
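As an illustration of the kind of arithmetic involved, the sketch below estimates table space from row counts and page layout. It is a simplified approximation of the general approach described above, not DB2's exact formulas, and the row size, page size, overhead, and free-space values are hypothetical.

import math

def table_space_gb(row_count, row_bytes, page_bytes=4096,
                   page_overhead=100, free_space_pct=10):
    """Rough table-space estimate from row count, row size, and page layout."""
    usable = page_bytes - page_overhead            # bytes available per page
    rows_per_page = usable // row_bytes            # full rows that fit on a page
    pages = math.ceil(row_count / rows_per_page)   # pages to hold all rows
    pages = math.ceil(pages * (1 + free_space_pct / 100))  # headroom for growth
    return pages * page_bytes / 1024**3

# Hypothetical example: a 6-billion-row table with 120-byte rows
print(round(table_space_gb(6_000_000_000, 120), 1), "GB")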

2.3.1.2 Index Data Storage

For each index, the following information was used to calculate the space requirements for the indices in the schema:

• Index size per row of the table
• Page size
• Index page overhead
• Rows per page
  - Including the page overhead
  - Free slots per page
• Free space for future page additions

Considering the above information, the space requirement for storing all the indices was estimated to be 258GB. Once again, the database product documentation should be consulted for information on estimating index space requirements.

2.3.1.3 Database Log Space

Most data warehouses have infrequent update activity in comparison with OLTP workloads. The data warehouse maintenance strategy will generally dictate the database log requirements. Some data warehouses are loaded periodically from operational data and are never modified between loads; these configurations have insignificant logging requirements. Other strategies involve regular maintenance of data that requires inserts and deletes from the database. In this configuration, regular updates to the data had to be accommodated, but the volume of data being changed was only 0.1% of the total warehouse, which adds up to 1GB per update cycle (see the TPC-H specification overview in Appendix - 1 for details).

The following was taken into consideration:

• Data warehouse update and maintenance strategy
• Frequency of updates
• Volume of changing data per cycle
• Data recovery requirements
• Transactional characteristics of the workload

The log space requirements were estimated to be insignificant in size relative to the data size of the warehouse. 36GB of space was allocated to the database logs to satisfy logging needs for at least twenty update cycles between log backups. Consult the “DB2 Administration Guide: Planning” for more details on sizing log requirements.

2.3.1.4 Temporary Database Space

When databases are executing queries that process large join, sort, or aggregation operations, they require memory to hold intermediate or partially processed data. As the database reaches the limits of its allotted memory, it uses disk space to temporarily store the partially processed data and free up memory for further processing. Most data warehouse systems process more data than can be held in memory and therefore they need temporary database storage space. For example, a query that attempts to sort 300GB of data on a system with 16GB of memory will require significant temporary storage.

Estimating the temporary storage space requirements for a DW workload is difficult because several external factors such as system memory size and concurrency levels impact the need for space. It usually takes an experienced data warehousing architect with help from database vendors to estimate temporary space needs. Underestimating the temporary space requirements of a workload can prevent the proper execution of large queries or limit concurrency levels of query execution.

The following information was considered when estimating our temporary storage needs:

• Percentage of data used in large sort and join queries
• Number of concurrent queries that can be running at any one time
  - Number of concurrent query segments per query
• Previous experience with the workload memory usage
  - Expert guesses and rules of thumb available from product vendors
• Comparative estimates provided by the database vendor
• Future growth of data and increase of concurrency levels

Based on the above information, 140GB of temporary space was estimated for the worst case query and since seven query streams could run concurrently, about 1,000GB of temporary space was needed.

2.3.1.5 Staging Space

Most data warehouse projects require some space for staging raw data for loading or maintaining the warehouse. Depending on the operational procedures, the space requirement can vary drastically.

The following information was considered when estimating the staging space requirement:

• Data warehouse loading procedures
  - Location and storage needs for the raw data
• Data warehouse maintenance procedures
  - Location and storage needs for the maintenance data
• Future growth of data

Based on the operational needs to store update data and some load data it was estimated that 1,500GB of space was sufficient for the project.

2.3.1.6 Minimum Storage Space Requirement

The storage space estimate represents the minimum space requirement. Several factors impact the overall storage configuration, for instance:

• RAID requirements to protect the data
• Number of disks required to meet the performance requirements
• Number of disks needed to balance the IO performance

For example, suppose that the storage requirement can be satisfied with 11 disk drives, but if the system has two disk controllers it might be better to use six disks per controller to evenly distribute the IO on both controllers. Adjustments to the number of disks for performance reasons may result in having more space than the minimum required, but as always one must balance the performance needs and cost based on the project priorities.
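A minimal sketch of this bookkeeping, using the 11-disk example above and the component estimates from the preceding sections; the roughly 34GB of usable capacity per 36GB drive is the figure used later in section 2.5.1.

import math

def balance_disks(min_disks, controllers):
    """Round a disk count up so it divides evenly across the controllers."""
    return math.ceil(min_disks / controllers) * controllers

print(balance_disks(11, 2))                 # 12 disks, i.e. 6 per controller

# Component estimates from the preceding sections (GB)
total_gb = 1350 + 258 + 36 + 1000 + 1500    # 4,144GB minimum
usable_per_disk_gb = 34                     # ~34GB usable per 36GB drive
print(math.ceil(total_gb / usable_per_disk_gb))   # 122 disks minimum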

The following table shows the overall storage requirements for the configuration.

1,000GB TPC-H Warehouse Space Requirement

Data                          1,350GB
Index                           258GB
Database Log                     36GB
Database temporary storage    1,000GB
Staging space                 1,500GB
Total Storage Space           4,144GB

Table 2: Minimum Storage Space Requirements

2.3.2 Data Processing Characteristics of the Workload

The TPC-H workload consists of a number of queries executed on the data. Although an infinite number of queries can be formulated in an ad hoc environment, most of these queries can be put into a few general categories. For example, queries with a significant number of arithmetic operations per row of data are CPU bound, while other queries that require simple filtering operations on large volumes of data become IO bound. It is important to understand the resource needs of the different categories of queries and to size the system to accommodate as many categories as possible.

The three major system resources that are stressed by a data warehousing workload are:

• CPU resources
• Memory resources
• IO resources
  - Network
  - Disk
    - Sequential IO scans
    - Random IO scans

A well balanced data warehouse system maximizes one or more of the resources in the system. Unlike OLTP workloads, a single unit of DW work (a query) can be parallelized to maximize the utilization of the system and return results in a minimum amount of time. When additional concurrent queries are added, the database starts to distribute system resources among the active queries. The time to complete a query in a DW workload will vary from one execution to another depending on the mix of queries running at the same time.

Only an expert in data warehousing workloads, with help from the database vendor and a data architect, can analyze a schema and anticipate the resource needs of potential queries. Without the ability to experiment and run test queries, even experts can have a hard time gauging the CPU needs of a query.

The process of categorizing the various queries starts by careful examination of the database schema, data characteristics and queries. The goal is to determine the following:

1. Estimate the size of the data accessed by each group of queries
2. Categorize the queries based on the dominant resource needs
3. Select some larger queries to represent these categories
4. Use these queries as a guide for system sizing

2.3.2.1 Estimate the Size of Each Query

Based on the predicates used in a query and knowledge of the data, the minimum amount of data each query would need to produce the query result can be anticipated. For example, the following query should access the entire LINEITEM table (see Appendix 1 for details):

select
        sum(l_extendedprice * l_discount) as revenue
from
        lineitem

For this workload, the various queries were organized in three groups:

1. Small: less than 15% of the data is needed to produce results
2. Medium: between 15% and 50% of the data
3. Large: more than 50% of the data

In estimating the data set size for queries, it was assumed that the database can be optimized using indices and other schemes to minimize data access to what is necessary to calculate and produce the query results. Different databases and schema definitions may behave differently for the same queries. For example, a missing index may force the database to execute a full table scan and result in significantly more access to data.


Since it was intended to use the largest queries for this characterization work, there was less concern about the ability of the database to optimize data access. The following chart shows the approximate minimum data size required to complete the six largest queries in the workload:

Figure 5: Query Size Estimates (approximate minimum data size, in GB, required to complete Queries 21, 9, 17, 19, 18, and 1)

The details of the size categorization for all the queries are gathered in Table 3, in section 2.3.2.2.

2.3.2.2 Categorizing Queries

TPC-H queries are categorized with respect to the most dominant resource they use. To do this, the intensity of the processing being applied to the data being read for the query is estimated. For example, the following segment of SQL code shows a query with significant number of calculations for every row of data to be processed:

select
        l_returnflag,
        l_linestatus,
        sum(l_quantity) as sum_qty,
        sum(l_extendedprice) as sum_base_price,
        sum(l_extendedprice * (1 - l_discount)) as sum_disc_price,
        sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge,
        avg(l_quantity) as avg_qty,
        avg(l_extendedprice) as avg_price,
        avg(l_discount) as avg_disc
from
        lineitem
group by
        l_returnflag,
        l_linestatus

In contrast, the following query has modest processing requirements for each row being scanned. This query will run as fast as the data can be read by a database and therefore it is IO bound.


select
        sum(l_extendedprice * l_discount) as revenue
from
        lineitem

A few of the diverse queries were compared and contrasted with each other and, based on the team's experience with DB2 query processing, they were categorized in the following table:

Query   Data Set Size   Memory   CPU      Sequential IO   Random IO   Network
21      Large           High     High     Low             Low         Low
9       Large           High     High     Low             Medium      High
17      Large           Low      Low      High            None        Low
19      Large           Low      Low      High            None        Low
1       Large           Low      Medium   Medium          None        Low
18      Large           High     High     Low             Medium      Low
7       Medium          High     High     Medium          Medium      High
5       Medium          High     High     Low             None        High
13      Medium          Medium   High     Low             Low         Low
11      Small           Medium   Low      High            None        Low
6       Small           Low      Low      High            None        Low
2       Small           Low      Medium   Medium          None        Low
22      Small           Low      High     Low             Low         Medium
16      Small           High     High     Low             Low         Medium
14      Small           Low      High     Low             Low         High
15      Small           Medium   High     Low             Low         High

Table 3: Query Categorization

Once the characteristics of the various queries are established, it can be seen that most of the queries have high CPU or sequential IO requirements. To balance the system, sufficient IO performance is needed to keep CPU resources maximally utilized.

As can be seen in the above table, the top six largest queries can easily represent the entire query set with respect to resource needs. All the major system resource categories can be maximized by one or more of the six largest queries.

2.3.2.3 Selecting a Representative Set of Queries

For this characterization work, the assumption was that if the system is configured to meet the resource needs of the six largest queries it can also provide for the smaller queries with similar characteristics.

The following chart was built using data from the analysis of the six largest queries. The chart shows the relation between the CPU and IO requirements of each query. Each bubble on the chart represents a query. The size of the bubble is proportional to the volume of data required to obtain the query result. The location of the bubble on the chart shows the IO versus CPU requirements for the query. When a query is CPU intensive, its IO requirements are lower than those of a query that requires little CPU power to process the same amount of data.

Figure 6: Relative Resource Requirements (bubble chart titled “Relative Resource Needs of Queries” plotting relative CPU resource requirements against relative IO throughput requirements for Queries 1, 9, 17, 18, 19, and 21; bubble size is proportional to the data volume each query requires)

Figure 6 can be used to make system sizing decisions based on the relative information. For example, if a system is configured with just enough IO bandwidth to maximize the CPU utilization during Query 1, then all queries to the right of Query 1 will be IO bound. It can also be concluded that configuring the system with more IO than is necessary for Query 17 will not provide any benefit.

Since it is not possible to configure the system optimally for every possible query, this type of categorization can be used to optimize for the largest class of queries within a given budget.

2.3.2.4 The Reference Queries

The IO Sizing Query

Based on the information in section 2.3.2.1 and Figure 5, Query 17 reads 80% of the data to produce a result. Assuming this query is IO bound, the time to complete this query is equal to the time it takes to read the data. A system with 1GB per second of IO bandwidth would require 800 seconds to read the 800GB of data to complete this query. Based on the business requirement a reasonable response time for this class of queries can be set and the IO system can be configured accordingly.

To complete a given query in a fixed period of time, the system needs enough IO bandwidth to read the required data in that time and enough CPU power to keep up with the processing of the data.
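The arithmetic behind this reasoning is simple enough to capture in two helpers. The sketch below uses the 800GB and 1GB-per-second example from this section; the 200-second target is the scan-response requirement set in Table 4 (section 2.4).

def scan_seconds(data_gb, bandwidth_mb_s):
    """Time for an IO-bound query: data read divided by IO bandwidth."""
    return data_gb * 1000 / bandwidth_mb_s

def required_bandwidth_mb_s(data_gb, target_seconds):
    """IO bandwidth needed to read a data set within a response-time target."""
    return data_gb * 1000 / target_seconds

print(scan_seconds(800, 1000))             # 800.0 seconds at 1GB/s, as above
print(required_bandwidth_mb_s(800, 200))   # 4000.0 MB/s for a 200-second target
# The storage sizing in section 2.5.1 uses 5,000MB/s, derived from Query 17's
# full IO profile and the same 200-second limit.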


The point representing Query 17 on the far right edge of the chart in Figure 6 indicates that this query has an insatiable requirement to read data when compared to the other queries in the workload.

The CPU Sizing Query

Based on the assessments in section 2.3.2.2 and the information provided in Figure 6, it was determined that Query 18 is the most CPU intensive query. Using the information in Figure 6, if the IO requirements of Query 17 can be met, there will be no shortage of IO bandwidth for Query 18. Query 18 will be limited by CPU during its executions.

The point representing Query 18 on the top left edge of the chart in Figure 6 indicates that this query has massive requirements for data processing power relative to all other queries in the workload.

2.3.3 Operational Characteristics of the Workload

The TPC-H benchmark has two execution models, the Power Test and the Throughput Test. These two modes of operation have very different system resource requirements, so the needs of the two must be balanced. This multi-mode operational requirement is also common in many DW installations, where, depending on time of day or class of users, system requirements can be very different. When sizing a system all modes of operation must be considered and prioritized between conflicting requirements.

2.3.3.1 The Power Test (Single-user Batch Jobs)

For this test the goal was to optimize single-user query processing (the Power Test) because of the benchmark requirements. The decision to optimize for single-stream query processing makes sense depending on the business needs for the system. If users are scheduled to have dedicated system time with the expectation of the fastest possible response time to the queries they submit, then the system has to be optimized for single-stream processing. On the other hand, if the system is mostly used by multiple users who submit queries and are not sensitive to individual query response times, then single-stream performance is less critical.

2.3.3.2 The Throughput Test (Multi-user Operation)

Considering that the queries that access large amounts of data are CPU intensive and have low IO rate requirements (large bubbles near the top left of the chart in Figure 6), it can be assumed that running multiple queries at the same time would result in a CPU bound system. If a single stream of queries takes 1 CPU hour to complete, then 7 streams of similar queries could potentially take about 7 hours of CPU time. This rule of thumb provides reasonable estimates. However, sometimes simultaneously running query streams can benefit from sharing data caches, while at other times the reduced availability of memory results in resource conflicts. To be safe, a margin of error should be anticipated.
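A minimal sketch of this rule of thumb; the 15% margin is only an illustration of the safety margin mentioned above, not a figure from this project.

def multi_stream_hours(single_stream_hours, streams, margin=0.15):
    """Rule-of-thumb elapsed time for N similar CPU-bound streams, plus a margin."""
    return single_stream_hours * streams * (1 + margin)

# 1 CPU-hour per stream, 7 concurrent streams, assumed 15% margin
print(round(multi_stream_hours(1.0, 7), 2))   # about 8.05 hours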


If a system is mostly IO bound during the execution of single-stream queries, then more than one stream of queries is required to fully utilize the system resources. To get the best return on investment, the system should be configured to utilize CPU resources as fully as possible.

2.4 Business Requirements

Obviously, the primary purpose of the data warehouse infrastructure is to solve business problems; so it is critical to collect the business requirements applicable to the sizing effort and translate them to system requirements.

For example, a business unit may require that a warehouse be reloaded from transactional data once per week and the task must be completed in a six hour window of time. This requirement must be translated to the various read, write and process rates for the data to ensure the system has the appropriate capacity.

The following table shows the list of business requirements and expectations addressed in this sizing example. This list is by no means exhaustive, but it does capture the most critical elements for this purpose.

• Raw data size: 1,000GB of raw text data. The raw data size is only the base number for calculating storage space requirements.

• Annual growth rate of data: less than 2%. Normal growth must be accommodated without major architectural changes in the system.

• Tolerance for performance decline due to growth rate: less than 2%. Slower response time for any operation can be tolerated if the percentage of degradation is less than or equal to that of the growth in data.

• Service life expectancy: 3 years. The system is expected to operate for at least 3 years without major changes.

• Raw data load rate: 145MB/sec. The 145MB per second load rate is derived from the need to load the 1,000GB of data in less than two hours. At this rate, a DBA will have enough time to rebuild the warehouse from scratch and prepare it for query processing in less than 4 hours.

• Scan query response time based on total data: less than 200 seconds. This requirement is critical because a significant number of ad hoc queries in the workload perform scan and filter operations on all or part of the data. In addition, extract operations from the warehouse are bound by the scan rate of the system. The response time of several large and small queries with simple calculations will be impacted by the scan rate. Query 17 is our guide for this criterion.

• Reporting and computational query response time based on total data: less than 900 seconds. Our workload has several queries that frequently run to generate reports. These queries require several complicated computations and sort operations on all or parts of the data. The intent of this requirement is to ensure the worst case report will complete in what is considered a reasonable time based on the business requirement. We also intend to run multiple simultaneous queries and want to ensure the work can be completed in a reasonable time. We will use Query 18 as our guide for sizing the system to meet this requirement.

• Query concurrency rate: 7 query streams. The workload requires that at least 7 query streams can operate concurrently at any one time. In addition, query response times must not degrade by more than a factor of 7 to 8 when compared to a single-stream execution.

• Performance versus cost optimization priority: performance. In this exercise, reaching the specific performance targets had the highest priority. Although the overall hardware cost had a budget ceiling, price/performance was secondary to overall performance. Our goal was to achieve the performance target with an optimal configuration and price within the budget guidelines.

• Data protection: RAID level 5. Our workload requires RAID protection to prevent a single disk failure from disabling the system. RAID level 5 is most appropriate for us because it allows protection with a reasonable overhead. A 5+P RAID level 5 configuration requires 5 disks for data and one disk for parity; this adds a 17% disk overhead for RAID-5.

Table 4: Business Requirements

The above table maps the business requirements to a set of checklist items for the system. When sizing the system, this table should be used as a boundary guide to ensure the business needs are met.

2.5 Sizing the System for Effective Utilization

In data warehouse workloads, it is difficult to always maximize the utilization of CPU, IO or both resources. In many situations, factors such as limited memory or network resources can result in idle CPU or IO subsystems. In addition, all databases occasionally have execution paths that fail to maximize system utilization. The goal is to configure a system that meets the workload requirements and runs as close to the system utilization limit as possible. Saturating processing power and maximizing the IO subsystem during most of the operational hours of the system will provide the best return on investment.

2.5.1 Sizing the Storage

After collecting all the relevant data from workload characterization, business requirements, and system data sheets, the information can be compiled into a single set of guidelines for sizing the storage configuration.

• RAID level 5 configuration using 5+P settings
• 145MB/sec load rate
• 5,000MB/sec scan rate based on the Query 17 IO profile and the 200 second limit on scan query response time (see Table 4)
• Approximately 700MB/sec IO rate per FAStT900 (see section 2.2.2)
• 4 Fibre Channel interfaces per FAStT900
• 14 disk capacity per EXP700 expansion unit
• Disk size of our choice: 36GB
• Minimum storage space needed: 4,144GB
  - Minimum number of disks: 122 (4,144 / 34 ≈ 122)
• Number of FAStT900 needed to meet the 5,000MB/sec IO rate: 8 (5,000 / 700 ≈ 7.2, rounded up to 8)
• Number of FAStT900 needed to meet the load rate of 145MB/sec: 1
• Number of EXP700 needed to fully utilize all FAStT900 disk-side interfaces: 16 (2 per FAStT900)

Since 122 disks do not distribute evenly amongst 16 EXP700 enclosures, the number of disks per EXP700 is rounded up from 7.6 to 8. Additionally, to configure 5+P RAID level 5 arrays, the number of data disks in each EXP700 should be divisible by 5, so the number of disks in each EXP700 is rounded up again to 10. For every 5 data disks, a parity disk is added, so the total number of disks per EXP700 is 12. The following diagram shows the logical configuration of a FAStT900 with the enclosures and disks attached:

Figure 7: FAStT900 Disk Configuration (the host's PCI-X Fibre Channel HBAs connect to the FAStT900's controllers A and B, which attach to two EXP700 enclosures of 12 disks each, organized as 5+P RAID 5 arrays)

12 disks per EXP700, 2 EXP700 per FAStT900 and 8 FAStT900 brings the total number of disks to 192. Obviously 192 disks is significantly more than the initial requirement of 122 disks but this configuration provides a balanced, evenly distributed load with RAID level 5 protection that meets the performance requirement.
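The rounding steps above can be reproduced with a few lines of arithmetic; the inputs are the figures from the guideline list in this section.

import math

scan_rate_mb_s = 5000            # scan-rate target from the guidelines above
fastt900_mb_s = 700              # approximate IO rate per FAStT900
min_disks = 122                  # minimum disks for 4,144GB at ~34GB usable each

fastt900s = math.ceil(scan_rate_mb_s / fastt900_mb_s)   # 8 storage servers
exp700s = fastt900s * 2                                 # 16 enclosures, 2 per FAStT900

data_disks = math.ceil(min_disks / exp700s)             # 7.6 -> 8 data disks per EXP700
data_disks = math.ceil(data_disks / 5) * 5              # round up to 10 for 5+P arrays
disks_per_exp700 = data_disks + data_disks // 5         # one parity disk per array -> 12

print(fastt900s, exp700s, disks_per_exp700, disks_per_exp700 * exp700s)   # 8 16 12 192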

Random read and write requirements for this workload were relatively low when compared to the rest of the IO needs. Considering the number of disks and FAStT900 servers, the random IO requirements of the workload are easily met.

Although a single FAStT900 can easily accommodate the space needs of the workload, it cannot possibly meet the performance requirements. Space should never be the only determining factor for storage configuration.

Based on the storage sizing the system has to be able to accommodate 8 storage servers each with 4 Fibre Channel interfaces. A total of 32 Fibre Channel host bus adapters are required to match the performance of the storage subsystem.


Each p655 can easily provide for 1400MB/sec of IO (See section 2.2.1.3). From this, it can be determined that at least four p655 nodes are required to satisfy the 5000MB/sec IO needs of the workload. Depending on the CPU and memory needs for the workloads, the storage can be connected to 8 or 16 nodes to satisfy the IO bandwidth requirements.

Considering the flexibility of the FAStT900, the storage can be attached to any system that evenly distributes access to the 32 HBAs across all processors, providing 175MB/sec of bandwidth per HBA. This can be a single node with any number of CPUs or a multi-node cluster of systems with an aggregate IO bandwidth of 5000MB/sec.
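A short sketch of the host-side arithmetic, using the per-node IO bandwidth from section 2.2.1.3 and the HBA count from the storage sizing above:

import math

io_target_mb_s = 5000       # aggregate IO requirement
node_io_mb_s = 1400         # IO bandwidth per p655 node (section 2.2.1.3)
total_hbas = 32             # 8 FAStT900s x 4 host-side FC interfaces

min_nodes = math.ceil(io_target_mb_s / node_io_mb_s)    # 4 nodes
hbas_per_node = total_hbas // min_nodes                 # 8 HBAs per node
per_hba_mb_s = min_nodes * node_io_mb_s / total_hbas    # 175MB/s per HBA

print(min_nodes, hbas_per_node, per_hba_mb_s)           # 4 8 175.0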

The following table sums up the storage requirements:

Storage Configuration
FAStT900 Storage Servers                 8
EXP700 Enclosures                       16
36GB disk drives                       192
2Gb Fibre Channel host bus adapters     32

Table 5: Storage Configuration

2.5.2 Sizing For CPU and Memory

Sizing for CPU and memory requires extensive experience and significant knowledge of the workload. For this project, the experience of past testing with specific queries and the knowledge of the relative performance of the older systems compared to the targeted systems was beneficial.

In addition to past experiences, the sizing team needs to run experiments and make educated guesses. To estimate processor requirements for CPU intensive queries, a small one processor system can be set up with a fraction of the data to measure query performance. The test system can have any processors so long as their performance can be related to the targeted system. For example, as long as it is known that the test system processor is ten times slower than the targeted system processor, reasonable estimates can be made. The test system can be used to measure the time it takes to process a fixed amount of data by the worst case query.

If the test system completes a 1GB query in 50 seconds then it is known that a processor which is ten times faster can complete the work in 5 seconds. To process 1000GB in 500 seconds, 10 of the new processors are needed. This type of estimate can only be applied to simple compute intensive queries that are known to scale and that have stable query plans regardless of their size. Complex multi-table join queries are not good candidates for simple CPU testing because most databases try to apply query optimizations that may behave differently based on data size.
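The extrapolation described in this paragraph is straightforward proportional scaling; the sketch below restates it with the same example numbers (a hypothetical test system, not the actual project measurements).

test_seconds_per_gb = 50     # hypothetical test system: 1GB query in 50 seconds
speedup = 10                 # target processor known to be ~10x faster
data_gb = 1000               # full warehouse
target_seconds = 500         # desired completion time

target_seconds_per_gb = test_seconds_per_gb / speedup   # 5 seconds per GB per CPU
cpus_needed = data_gb * target_seconds_per_gb / target_seconds
print(cpus_needed)           # 10.0 processors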

The memory usage of the test system needs to be limited to a relative size. It would be unreasonable to allow the 1GB test query to consume 1GB of memory unless it was intended to configure the target system with 1000GB of memory. To simplify the estimates it is preferable to establish a fixed relation between memory size and data size during testing. For this testing, it was established that at least 100MB of database memory was needed for each 1GB of data.


Throughout the testing, a similar ratio was maintained. This ratio can be drastically different from one workload to another.

Workloads that concurrently run many large queries with multi-table join and sort operations require more memory than workloads that run mostly scan/filter queries and aggregate data. Alternatively, some sizing experts use the ratio of memory per CPU as a guide. A common rule of thumb for data warehouse workloads used to be 1GB of memory per CPU but as memory prices have gone down and processor speeds have gone up this ratio has increased to 4GB. Too little memory will result in increased use of database temporary storage which will require more IO operations and possibly more idle processing power. Too much memory will fail to provide performance improvements once processing power is saturated.

Adding memory to a production system is much simpler than adding IO or processing capacity so this area of the configuration can further be improved so long as some flexibility is built into the plan.

For this project, Query 18 was selected as a guide for sizing the system's processing needs. Query 1 was known to be a much simpler query to run tests with, and it scales for all data sizes. Query 1 was used for small scale testing and measurement, and the relative resource usage graph (Figure 6) was used to estimate the processing needs for Query 18.

Based on the business requirements it was known that the worst case query running on 1000GB of data needed to complete in 900 seconds. Assuming Query 18 was the worst case and knowing that it used about 80% of the data, it was estimated that it would take 720 seconds to complete a query similar to 18. Using the relative resource requirement chart (Figure 6) it was estimated that Query 1 used about 2.5 times less processing resources to process the same size data as Query 18 in a fixed period of time, and from this it was determined that enough CPU processing power was needed to complete Query 1 in approximately 300 seconds.

Using experience with older systems and a subset of the data, it was calculated that the 1.7GHz processor available for the p655 system could process Query 1 at the approximate rate of 180MB per second. For Query 1 to process 800GB of data in 300 seconds a processing rate of 2,730MB per second was needed. At a rate of 180MB per processor, at least 16 processors were needed to meet the performance target.
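Restated as code, the Query 1 calculation looks like this; the 180MB/s per-processor rate is the measured figure quoted above.

import math

data_gb = 800                # data Query 1 must process
target_seconds = 300         # response-time budget derived for Query 1
mb_per_s_per_cpu = 180       # measured Query 1 rate per 1.7GHz processor

required_mb_s = data_gb * 1024 / target_seconds      # ~2,730MB/s (1024MB per GB, as above)
cpus = math.ceil(required_mb_s / mb_per_s_per_cpu)   # 16 processors
print(round(required_mb_s), cpus)                    # 2731 16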

It was estimated that 16 1.7GHz p655 processors could complete 7 concurrently executing queries similar to Query 18 in less than 90 minutes. These timing estimates were well within the business requirements.

Based on these estimates it was assumed a configuration with 16 processors and approximately 128GB of memory would satisfy the requirements.

2.5.3 Sizing for the Network

Based on the workload characteristics, the data management schemes used by DB2 and previous experience, it was estimated that a single Gigabit Ethernet interface per node would be sufficient for the configuration. Considering that only a few queries in the workload required significant data movement between the nodes, the risk of under configuring the network was low. Having said that, the team was prepared to add a second interface if necessary. Relative to the total system cost, adding an additional network interface would have been insignificant. The network switch and IO subsystem were configured to accommodate additional network interfaces if needed.

Different databases have different network bandwidth requirements even when running the same workload. When sizing the network requirements for a clustered data warehouse installation, the database vendor should be consulted and their recommendation should be followed with care.

2.5.4 The Overall System

Based on the IO, CPU, and memory requirements, a 4-node cluster of p655 systems was needed to meet the workload requirements. This configuration provided balanced performance for the workload and maximized overall system utilization. The ability to provide more than 1400MB per second of IO bandwidth per node was a critical feature of the IBM eServer p655 that made it an attractive option; a huge IO bandwidth to feed the powerful processors and maximize their utilization helps achieve the return on the initial investment. The following is a general diagram of the system configuration:

Figure 8: Overall System Configuration (four IBM eServer p655 nodes connected by a Gigabit Ethernet switch, each node attached to two FAStT900 storage servers with four EXP700 enclosures)

The four node cluster was connected with a gigabit Ethernet switch for all inter-node communications. The overall configuration provided about 5,500MB per second of read bandwidth from disk to system memory. Each node was directly connected to 1/4th of the total storage configuration. This configuration was well balanced and flexible and could be easily extended for larger DB2 installations.

The project was completed by building the configuration based on the sizing estimate and executing the TPC-H benchmark. The sizing effort was a success since the minimum requirements for the workload were met. Some of the IO bound queries performed better than expected due to conservative performance estimates for various components, but the results were within 5-10 percent of expectations. The CPU bound queries performed within ±10 percent of expectations, with about an equal number on the plus and minus side. The benchmark was submitted to the TPC and published in December of 2003. For further detail, see the following TPC web site:

http://www.tpc.org/tpch/results/tpch_result_detail.asp?id=103120801

2.6 Conclusion

System sizing for a data warehouse is a complicated task that requires some expertise to accurately estimate the configuration that can meet the needs of a business. The quality of the sizing estimate depends on the accuracy of the data put into the process. With some analysis and workload characterization, it is possible to drastically improve a sizing estimate. In the absence of the workload characterization data, the expectation of an accurate system sizing estimate should be set appropriately.

To safeguard a project, build flexibility into the plan by configuring systems that are well balanced in their resources and extensible. For clustered configurations, ensure that the basic building block is well balanced and meets both the IO and CPU requirements for its subset of the workload. The reliability, availability, and serviceability of a cluster are only as good as those of its building blocks.


APPENDIX - 1 TPC-H BENCHMARK OVERVIEW

1.1 TPC Council

The Transaction Processing Performance Council™ (TPC) was founded in August 1988 by eight leading hardware and software companies as a non-profit organization with the objective to produce common benchmarks to measure database system performance. More than 20 companies are currently members of the council. There are four active benchmarks (TPC-C, TPC-H, TPC-R and TPC-W) for which results can be published. IBM® was the very first company to publish a TPC-H result at the 10,000GB scale on December 5, 2000.

Prior to publication, results must be reviewed and approved by designated auditors, and a full disclosure report documenting compliance is submitted to the TPC. Published benchmark results as well as the benchmark specifications are accessible on the TPC web site:

http://www.tpc.org

1.2 TPC-H Overview

The TPC-H benchmark models a decision support system by executing ad-hoc queries and concurrent updates against a standard database under controlled conditions. The purpose of the benchmark, according to the specification, is to "provide relevant, objective performance data to industry users," and all implementations of the benchmark, in addition to adhering to the specification, must be relevant to real-world (i.e., customer) implementations. TPC-H represents information analysis of an industry that must manage, sell, or distribute a product worldwide. The 22 queries answer questions in areas such as pricing and promotions, supply and demand management, profit and revenue management, customer satisfaction, market share, and shipping management. The refresh functions are not meant to represent concurrent on-line transaction processing (OLTP); they are meant to reflect the need to periodically update the database.

The TPC-H database size is determined by the scale factor (SF). A scale factor of 1 represents a database with 10,000 suppliers and corresponds approximately to 1GB of raw data. Only a subset of scale factors are permitted for publication: 1, 10, 30, 100, 300, 1000, 3000 and 10000. The database is populated with a TPC supplied data generation program, dbgen, which creates the synthetic data set. The set of rows to be inserted or deleted by each execution of the update functions is also generated by using dbgen. The database consists of eight tables:


TABLE NAME   CARDINALITY
REGION       5
NATION       25
SUPPLIER     SF * 10K
CUSTOMER     SF * 150K
PART         SF * 200K
PARTSUPP     SF * 800K
ORDER        SF * 1,500K
LINEITEM     SF * 6,000K (approximate)
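For reference, a small helper like the following reproduces the cardinality table above for any scale factor; the multipliers come directly from the table, and LINEITEM remains approximate by design:

    # Helper that reproduces the cardinality table above for any scale factor.
    # The multipliers come from the table; LINEITEM is approximate by design.

    SF_MULTIPLIER = {
        "SUPPLIER": 10_000,
        "CUSTOMER": 150_000,
        "PART":     200_000,
        "PARTSUPP": 800_000,
        "ORDER":    1_500_000,
        "LINEITEM": 6_000_000,   # approximate
    }
    FIXED_ROWS = {"REGION": 5, "NATION": 25}

    def tpch_cardinalities(sf):
        rows = dict(FIXED_ROWS)
        rows.update({table: mult * sf for table, mult in SF_MULTIPLIER.items()})
        return rows

    for table, rows in tpch_cardinalities(10_000).items():   # SF = 10,000, i.e. a 10TB database
        print(f"{table:9s} {rows:>15,d}")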

The chart below gives a more detailed view of the relationships between the tables and the number of rows per table when the scale factor is 10,000, i.e. for a 10,000GB (10TB) database.

[Figure 9: TPC-H Schema. The figure shows the eight tables with their column name prefixes and row counts at SF = 10,000: REGION (R_, 5 rows), NATION (N_, 25), SUPPLIER (S_, 100M), CUSTOMER (C_, 1,500M), PART (P_, 2,000M), PARTSUPP (PS_, 8,000M), ORDER (O_, 15,000M), and LINEITEM (L_, 60,000M).]

TPC-H enforces the ad-hoc model by severely restricting the implementation of auxiliary data structures such as indices and materialized query tables (sometimes known as automatic summary tables or materialized views). It also restricts how horizontal partitioning (by row) may be implemented. The partitioning column is constrained to primary keys, foreign keys, and date columns. If range partitioning is used, the ranges must be divided equally between the minimum and maximum value.
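As an illustration of the equal-range rule, the sketch below splits a date partitioning column into contiguous ranges of equal size between an assumed minimum and maximum date; the boundary dates and partition count are illustrative only:

    # Sketch of the "equal ranges between the minimum and maximum value" rule
    # applied to a date partitioning column. Date bounds and partition count are illustrative.

    from datetime import date, timedelta

    def equal_date_ranges(min_date, max_date, partitions):
        """Split [min_date, max_date] into contiguous ranges of (near-)equal size."""
        total_days = (max_date - min_date).days + 1
        base, extra = divmod(total_days, partitions)
        ranges, start = [], min_date
        for i in range(partitions):
            days = base + (1 if i < extra else 0)
            end = start + timedelta(days=days - 1)
            ranges.append((start, end))
            start = end + timedelta(days=1)
        return ranges

    # e.g. split a shipdate-like column into 12 equal ranges
    for lo, hi in equal_date_ranges(date(1992, 1, 1), date(1998, 12, 31), 12):
        print(lo, "..", hi)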


By imposing these restrictions, the TPC-H benchmark maintains the server platform as part of the performance equation and represents an ad-hoc environment. The TPC-R benchmark, which does not restrict auxiliary data structures, models a reporting environment.

The table below summarizes the differences between the TPC-H and TPC-R benchmarks:

Auxiliary data structures
  TPC-H (ad-hoc): restrictions on indices; no aggregates
  TPC-R (reporting): extensive indices and aggregates OK

Simulated environment
  TPC-H (ad-hoc): ad-hoc queries; heavy stress on the system
  TPC-R (reporting): preplanned, frequently asked queries; lots of tuning by the DBA

Side effects
  TPC-H (ad-hoc): average response times of several minutes; update function times similar to query times
  TPC-R (reporting): sub-second response times for several queries; much longer load time; update function times much longer than query times

The TPC-H benchmark exercises the following areas:

• 22 queries with the following characteristics:
  - Left outer joins
  - Very complex queries with nested subqueries
  - Aggregates with a "HAVING" clause
  - Queries with multiple "OR" predicates
  - A query combining "EXISTS" and "NOT EXISTS"
  - A query with multiple "SUBSTRING" operators
  - Large scans with multi-table joins
  - Aggregate operations with a large number of distinct values
  - Large numbers of aggregations and sorts
  - Queries relying on index access as well as table access
  - Long-running as well as short-running queries, exercising all aspects of query processing

• Database refresh functions that perform inserts and deletes on the database

The benchmark specification requires that the implementation chosen for the benchmark satisfy the ACID (Atomicity, Consistency, Isolation and Durability) properties. Specific tests are designed to show:

• that the system either performs individual operations on the data completely or ensures that no partially completed operations leave any effects on the data (A),
• that the execution of transactions takes the database from one consistent state to another (C),
• that concurrent database transactions are handled correctly (I), and
• that committed transactions and database consistency are preserved after recovery from hardware failures such as loss of power, communications, memory, data disks, and log disks (D).


The concurrent updates insert into and delete from the two large tables, LINEITEM and ORDER. Each of the refresh functions represents 0.1% of the initial population of these two tables, so each pair of refresh functions alters 0.2% of them. A single update pair must be run for the power test, and a set of update pairs, one per query stream, is run in the multi-user throughput test. The exact implementation of the refresh functions is left to the vendor, but certain rules must be followed. The TPC-H specification states that each refresh function (RF1 or RF2) can be decomposed into any number of database transactions as long as the following conditions are met:

• All ACID properties are satisfied.

• Each atomic transaction includes enough updates to maintain logical database consistency. For example, when adding or deleting an order, the LINEITEM and ORDER tables are both updated within the same transaction.

• An output message is sent when the last transaction of the update function has completed successfully.
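As an illustration of these rules, the sketch below shows how RF1 might be decomposed into batched transactions that always touch both the ORDER and LINEITEM tables before committing. The connection object, the "?" parameter style and the batch size are assumptions for the sketch, not the audited implementation used in the published result:

    # Illustrative sketch of RF1 (inserting new orders) honouring the rules above:
    # every transaction touches both the ORDER table (often created as ORDERS, since
    # ORDER is an SQL reserved word) and the LINEITEM table before committing.
    # `conn` stands for any Python DB-API connection (e.g. from ibm_db_dbi); the
    # "?" parameter style and batch size are assumptions for this sketch.

    def run_rf1(conn, new_orders, batch_size=1000):
        """new_orders yields (order_row, lineitem_rows) pairs produced by dbgen."""
        cur = conn.cursor()
        pending = 0
        for order_row, lineitem_rows in new_orders:
            cur.execute(
                "INSERT INTO orders VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)", order_row)
            cur.executemany(
                "INSERT INTO lineitem VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",
                lineitem_rows)
            pending += 1
            if pending == batch_size:
                conn.commit()        # one atomic transaction per batch of complete orders
                pending = 0
        if pending:
            conn.commit()
        print("RF1 complete")        # output message after the last transaction succeeds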

1.3 TPC-H Metrics

The benchmark specification provides details on how to report results, which consist of two performance metrics and one price/performance metric:

• Composite metric (QphH@Size™): QphH@Size = \sqrt{QppH@Size \times QthH@Size}. This is the primary performance metric and is composed of two pieces:

  - Power (QppH@Size™):

      QppH@Size = \frac{3600 \times SF}{\sqrt[24]{Q_1 \times Q_2 \times \cdots \times Q_{22} \times RF_1 \times RF_2}}

    where Q_1, Q_2, ..., Q_22 and RF_1, RF_2 are the timing intervals, in seconds, of the queries and update functions. The geometric mean of the queries and updates is used here to give equal "weighting" to all the queries even though some may be much longer running than others. The power metric is derived from a power run (single stream) in which all queries and update functions are run in a specified sequence.

  - Throughput (QthH@Size™):

      QthH@Size = \frac{S \times 24 \times 3600 \times SF}{T_s}

    where S is the number of query streams and T_s is the total elapsed time of the throughput run. Each stream is defined as a set of the 22 queries and 2 update functions in the predefined order, and the total elapsed time includes the timing interval for the completion of all query streams and the parallel update stream. The throughput metric must be derived from a throughput run (multi-stream).

• Price/Performance (Price-per-QphH@Size™):

    \text{Price-per-QphH@Size} = \frac{\$}{QphH@Size}

  where $ is the total hardware, software and 3-year maintenance cost for the system under test.

The size of the database (or scale factor) is explicitly stated in the metric names. The TPC believes that comparisons of TPC-H results measured against different database sizes are misleading and discourages such comparisons.
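To make the metric definitions concrete, the following worked example applies the formulas exactly as stated above to made-up timings. None of the numbers correspond to any published result, and the system cost used for the price/performance line is an arbitrary assumption:

    # Worked example of the three metrics, following the formulas as stated above.
    # All timings, the stream count and the system cost are made up for illustration;
    # they do not correspond to any published TPC-H result.

    from math import prod

    def qpph(sf, query_secs, rf_secs):
        """Power: 3600*SF divided by the geometric mean of the 24 timing intervals."""
        intervals = list(query_secs) + list(rf_secs)        # 22 queries + 2 refresh functions
        geo_mean = prod(intervals) ** (1.0 / len(intervals))
        return 3600.0 * sf / geo_mean

    def qthh(sf, streams, total_elapsed_secs):
        """Throughput, using the factor of 24 given in the text above."""
        return streams * 24 * 3600.0 * sf / total_elapsed_secs

    def qphh(power, throughput):
        """Composite: geometric mean of the power and throughput metrics."""
        return (power * throughput) ** 0.5

    sf = 10_000
    power = qpph(sf, query_secs=[300.0] * 22, rf_secs=[600.0, 400.0])
    throughput = qthh(sf, streams=8, total_elapsed_secs=60_000)
    composite = qphh(power, throughput)
    print(f"QppH@{sf}:  {power:,.0f}")
    print(f"QthH@{sf}:  {throughput:,.0f}")
    print(f"QphH@{sf}:  {composite:,.0f}")
    print(f"Price/QphH: ${12_000_000 / composite:,.2f}")    # assuming a hypothetical $12M total cost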


In addition to these TPC metrics, the number of streams is reported, which gives an indication of the amount of concurrency during the throughput run. Each scale factor has a required minimum number of streams that must be run for the throughput run, defined in the specifications. The database load time (defined as the total elapsed time to create the tables, load data, create indices, define and validate constraints, gather database statistics and configure the system under test) is also reported. Two consecutive runs must be executed. The metrics for the run with the lower QphH are reported.

1.4 Benchmark Evolution

The following table shows the evolution of the TPC Decision Support benchmarks:

Benchmark   Version     Date Released    Date Obsolete
TPC-H       2.*         November 2002    current
TPC-H       1.*         February 1999    November 2002
TPC-D       1.* & 2.*   May 1995         April 1999

Although the TPC-H benchmark evolved from TPC-D, the two are vastly different and cannot be compared in any way. TPC-H version 2 became effective in November of 2002; although the basic performance aspects of V1 and V2 are identical, the pricing methodology changed. The price/performance metric of TPC-H V2 is based on a 3-year cost, while V1 was based on a 5-year cost.

1.5 Performance Evolution

Since the inception of the TPC Decision Support benchmark, there have been some general trends in the industry standard benchmark results, which reflect the growth pattern of the business intelligence market.

Although the incompatibilities between the four different versions of the TPC Decision Support benchmarks make it impossible to chart a continuous trend line from 1995 to 2003, the following points are undisputed:

• Price/performance has improved steadily

• Scale factor sizes have increased, i.e., database sizes have increased

• Processing power has increased

• Memory and disk requirements have increased

The TPC-H benchmark measures the server's I/O, CPU and memory capabilities via various database operations such as full table scans, sorting, joins and aggregation. It also measures how well a DBMS performs these basic database operations, rewarding those with efficient code paths, advanced query optimizers and parallel technology.

1. http://www.tpc.org/tpch/results/tpch_result_detail.asp?id=103120801/ - TPC (Transaction Processing Performance Council)

2. http://www-1.ibm.com/servers/eserver/pseries/hardware/midrange/p655_desc.html - pSeries 655 Description

3. http://www-1.ibm.com/servers/eserver/pseries/hardware/midrange/p655_desc.html - IBM eServer pSeries 655 – Ultra-dense Cluster Server for High Performance Computing, Business Intelligence and Data Warehousing Applications, by Harry M. Mathis, John D. McCalpin, Jacob Thomas

4. http://www.storage.ibm.com/disk/fastt/fast900/index.html - FAStT900 Storage Server – Scalable, high-performance storage for on-demand computing environments

5. DB2 Administration Guide: Planning, SC09-4822-00

6. DB2 Administration Guide: Planning, SC09-4822-00

7. http://www.tpc.org/tpch/results/tpch_result_detail.asp?id=103120801 - TPC-H Result Highlights – IBM eServer p655 with DB2 UDB

