
Setting the Direction for Big Data Benchmark Standards 1

Chaitan Baru1, Milind Bhandarkar2, Raghunath Nambiar3, Meikel Poess4, Tilmann Rabl5

1 San Diego Supercomputer Center, UC San Diego, USA, baru@sdsc.edu
2 Greenplum, EMC, USA, Milind.Bhandarkar@emc.com
3 Cisco Systems, Inc., USA, rnambiar@cisco.com
4 Oracle Corporation, USA, meikel.poess@oracle.com
5 University of Toronto, tilmann.rabl@utoronto.ca

Abstract. We provide a summary of the outcomes from the Workshop on Big Data Benchmarking (WBDB2012) held on May 8-9, 2012 in San Jose, CA. The workshop discussed a number of issues related to big data benchmarking definitions and benchmark processes. It was attended by 60 invitees representing 45 different organizations from industry and academia. Attendees were chosen based on their experience and expertise in one or more areas of big data, database systems, performance benchmarking, and big data applications. The workshop participants concluded that there is both a need and an opportunity for defining benchmarks that capture the end-to-end aspects of big data applications. Benchmark metrics would need to include performance as well as price/performance, and the benchmark would need to consider several costs, including total system cost, setup cost, and energy costs. The next Big Data Benchmarking workshop is currently being planned for December 17-18, 2012 in Pune, India.

Keywords: Industry Standard Benchmarks, Big Data

1 Introduction

The world has been in the midst of an extraordinary information explosion over the past decade, punctuated by rapid growth in the use of the Internet and the number of connected devices worldwide. Today, we are seeing a rate of change faster than at any point in history, and both enterprise application data and machine-generated data, known as Big Data, continue to grow exponentially, challenging industry experts and researchers to develop new, innovative techniques to evaluate and benchmark hardware and software technologies and products.

1 The WBDB2012 workshop was funded by a grant from the National Science Foundation (Grant#


As illustrated in Fig. 1, studies have estimated that the total amount of enterprise data will grow from about 0.5 zettabytes in 2008 to 35 zettabytes in 2020.

As indicated in Fig. 2, the phenomenon is global in nature, with Asia rapidly emerging as a major source of data users as well as data generators. With increasing penetration of data-driven computing, Web and mobile technologies, and enterprise computing, the emerging markets have the potential to add further to the already rapid growth in data.

Fig. 2: Internet users in the world distributed by world regions - 2011

Confronting this data deluge and tackling big data applications requires the development of benchmarks that help quantify system architectures supporting such applications. The First Workshop on Big Data Benchmarking (WBDB) [15] is an important first step toward the development of benchmarks that provide objective measures of the effectiveness of hardware and software systems dealing with big data applications. These benchmarks would facilitate evaluation of alternative solutions and provide for comparisons among different solution approaches. The benchmarks need to characterize the new feature sets, enormous data sizes, large-scale and evolving system configurations, shifting loads, and heterogeneous technologies of big data and cloud platforms.


The objective of the First Workshop on Big Data Benchmarking (WBDB2012) [15] was to identify key issues and to launch an activity around the definition of reference benchmarks that can capture the essence of big data application scenarios. Attendees included academic and industry researchers and practitioners with backgrounds in big data, database systems, benchmarking and system performance, cloud storage and computing, and related areas. Each attendee was required to submit a two-page abstract on their vision for big data benchmarking. The program committee reviewed the abstracts and classified them into four categories, namely: data generation, benchmark properties, benchmark process, and hardware and software aspects for big data workloads.

1.1 Workshop Description

The workshop was held on May 8-9, 2012 in San Jose, CA, hosted by Brocade at their Executive Briefing Center. There were a total of about 60 attendees representing 45 different organizations (see the table in the Appendix for the list of attendees and their affiliations). The workshop structure was similar on both days of the meeting, with presentations in the morning sessions followed by discussions in the afternoon. Each day started with three 15-minute talks. On the first day, these talks provided background on industry benchmarking efforts and standards and discussed desirable attributes and properties of competitive industry benchmarks. The 15-minute talks on the second day focused on big data applications and the different genres of big data, such as genomic and geospatial data. On both days, these opening presentations were followed by 20 "lightning talks" of 5 minutes each by the invited attendees. The lightning talk format proved to be extremely effective for putting forward and sharing key technical ideas among a group of experts. The group of about 60 attendees was then arbitrarily divided into two equal groups for the afternoon discussion session. Both groups were given the same topic outline for discussion, and ideas from both groups were collated at the end of the day.

The workshop program committee (also listed in the appendix) was responsible for drawing up the list of invitees; issuing invitations; designing the workshop structure described above; grouping white papers into lightning talk sessions; and developing the outline to guide the afternoon discussions.

The rest of this paper is organized as follows. Section 2 provides background on the nature of big data, big data applications, and some benchmark-related issues; Section 3 discusses guiding principles for designing big data benchmarks; Section 4 discusses the objectives of big data benchmarking, and whether such benchmarks should be targeted specifically at encouraging technological innovation or primarily at vendor competition; Section 5 goes into some of the details related to big data benchmarks; Section 6 provides conclusions from the workshop discussions and points to next steps in the process. The paper also includes an appendix with the list of attendees and the workshop program and organizing committee.


2 Background

There was general agreement at the workshop that a big data benchmarking activity should begin at the level of the end application, by attempting to characterize the end-to-end needs and requirements of big data applications. While isolating individual steps of such an application, such as, say, sorting, is also of interest, this should still be done in the context of the broader application scenarios. Furthermore, a range of data genres needs to be considered for big data, including, for example, structured, semi-structured, and unstructured data; graphs; streams; genomic data; array-based data; and geospatial data. It is important to model the core set of operations relevant to each genre while also looking for similarities across genres. Interestingly, it may be possible to identify relevant application scenarios that involve a variety of data genres and require a range of big data processing capabilities. An example discussed at the workshop was that of data management at an Internet-scale business, say, for an enterprise like Facebook or Netflix. One could construct a plausible use case for such applications that requires big data capabilities for managing data streams (click streams), weblogs, text sorting and indexing, graph construction and traversals, as well as some geospatial data processing and structured data processing.

It is also possible that a single application scenario may not plausibly capture all the key aspects, for genres as well as operations on data, that may be broadly relevant to big data applications. Thus, there may be a need for multiple benchmark definitions, based on differing scenarios, which together would capture the full range of possibilities.

There are good examples of successful benchmarks from benchmark consortia, such as the Transaction Processing Performance Council (TPC) [12] and the Standard Performance Evaluation Corporation (SPEC) [9]; benchmarks from industry leaders, such as VMmark and Top500; as well as benchmarks like TeraSort and Graph500 for specific operations and data genres (social graphs), respectively. It is fair to ask whether a new big data benchmark could simply be built upon existing benchmark efforts, by extending the current benchmark definitions appropriately. While this may be possible, a number of issues need to be considered, including whether:

• the existing benchmark models relevant application scenarios;
• the existing benchmark can be naturally and easily scaled to the large data volumes that will be required for big data benchmarking;
• such benchmarks can be used more or less "as is", without requiring significant re-engineering to produce data and queries (operations) with the right set of characteristics for big data applications; and
• the benchmark has inherent restrictions or limitations, such as, say, a requirement to implement all queries to the system in SQL.

Several extant benchmarking efforts were mentioned and discussed, such as the Statistical Workload Injector for MapReduce (SWIM), developed at the University of California, Berkeley [10]; GridMix3, developed at Yahoo! [1]; YCSB++, developed at Carnegie Mellon University on top of Yahoo!'s YCSB [11]; and TPC-DS, the latest addition to the TPC's suite of decision support benchmarks [7][5][3].

3 Design Principles for Big Data Benchmarks

As mentioned, some benchmarks have gained acceptance and are commonly used in industry, including the ones from TPC (e.g., TPC-C [14], TPC-H [4]), SPEC, and Top500. Some of these benchmarks have demonstrated longevity: TPC-C is almost 25 years old, and the Top500 list is just celebrating 20 years. The workshop identified desirable features of benchmarks, some of which apply to these benchmarks and may well have contributed to their longevity.

In the case of the Top500, the metric is simple to understand: the result is a simple rank ordering according to a single performance number (FLOPS).

TPC-C, an On-Line Transaction Processing (OLTP) workload, possesses characteristics of all TPC benchmarks: it models an application domain; employs strict rules for disclosure and publication of results; uses third-party auditing of results; and publishes performance as well as price/performance metrics. A key desirable aspect of TPC-C is the "self-scaling" nature of the benchmark, which may be essential for big data benchmarks. TPC-C requires that as the performance number (viz. the transactions per minute) increases, the size of the database (i.e., the number of warehouses in the reference database) must also increase: a new warehouse must be introduced for every 12.5 tpmC. In this sense, TPC-C is a "smoothly scaling" benchmark with respect to database size.
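The warehouse rule above is easy to state as arithmetic. The following is a minimal sketch, not part of any TPC specification or kit, showing how the required warehouse count grows with a claimed tpmC; the example figure is hypothetical.

```python
import math

TPMC_PER_WAREHOUSE = 12.5  # scaling rule described above: one warehouse per 12.5 tpmC

def min_warehouses(claimed_tpmc: float) -> int:
    """Smallest warehouse count consistent with a claimed tpmC under the rule above."""
    return math.ceil(claimed_tpmc / TPMC_PER_WAREHOUSE)

# A hypothetical result of 1,000,000 tpmC would need a reference database of at
# least 80,000 warehouses, so the data grows in lock-step with reported performance.
print(min_warehouses(1_000_000))  # -> 80000
```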

Other benchmarks, such as, say, TPC-H, specify fixed, discrete "scale factors" at which benchmark runs are measured. The advantage of that approach is that there are multiple, directly comparable results at a given scale factor. A desirable feature of a big data benchmark would be comparability of results across scale factors, at least for neighboring values. It would also be beneficial if results from a given scale factor could be used to extrapolate performance to the next scale factor (up or down). While there is an argument to be made for simplicity of benchmark results, i.e., publishing only one or very few numbers as the overall result of the benchmark, the desirability of extrapolating results from one scale factor to the next may also mean that the benchmark run must produce an adequate number of reported metrics to allow such extrapolations to be done.
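As a hedged illustration of the kind of extrapolation discussed here, the sketch below fits a simple power-law model through two measured scale factors and projects the metric to a neighboring one. The model choice and all numbers are assumptions for illustration only; a real benchmark would have to specify which reported metrics support such projections.

```python
import math

def extrapolate(sf_a, metric_a, sf_b, metric_b, sf_target):
    """Assumed power-law model: fit metric = c * sf**k through two measured
    scale factors and project it to a neighboring scale factor."""
    k = math.log(metric_b / metric_a) / math.log(sf_b / sf_a)
    c = metric_a / (sf_a ** k)
    return c * (sf_target ** k)

# Illustrative numbers only: elapsed seconds measured at SF=1000 and SF=3000,
# projected to SF=10000.
print(round(extrapolate(1000, 3600, 3000, 12000, 10000)))
```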

One definition of "big data" is data at a scale and velocity at which "traditional" methods of data management and data processing are inadequate. Given the technology-centric nature of this definition, and the fact that there are increasingly many different approaches to large-scale data management, a big data benchmark should be careful to remain technology agnostic, as much as possible. Thus, the benchmark itself cannot presume the use of any specific technology, say, the use of a relational database and/or SQL as the query language. The applications, reference data, and workload have to be specified at a level of abstraction (e.g., English) that does not presuppose a particular technological approach.


Finally, there was discussion at the workshop about the cost of running benchmarks, especially big data benchmarks, which by definition may require large installations and a significant amount of preparation. The discussion converged quickly on the view that it would not be possible to extrapolate the performance of large systems running large data sets from performance numbers obtained on small systems running small data sets. To be successful, the benchmark should be simple to run. This includes the ability to easily generate the large benchmark datasets, robust scripts for running the tests, perhaps available implementations of the benchmark in alternative technologies (e.g., RDBMS and Hadoop), and an easy way to verify the correctness of benchmark results.
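To make the "simple to run" requirement concrete, here is a minimal sketch of what a benchmark driver could look like: generate data, load it, run the workload, and verify the results, recording per-phase timings. The script names and arguments are placeholders, not part of any published kit.

```python
import subprocess
import time

# Placeholder phase commands reflecting the goals above: easy data generation,
# robust run scripts, and automatic verification of results.
STEPS = [
    ("generate", ["./datagen.sh", "--scale", "1000", "--parallel", "16"]),
    ("load",     ["./load.sh", "--target", "cluster-under-test"]),
    ("run",      ["./workload.sh", "--queries", "all"]),
    ("verify",   ["./verify.sh", "--expected", "answers/"]),
]

def run_benchmark() -> dict:
    timings = {}
    for name, cmd in STEPS:
        start = time.time()
        subprocess.run(cmd, check=True)       # fail fast if any phase breaks
        timings[name] = time.time() - start   # per-phase timings for reporting
    return timings

if __name__ == "__main__":
    print(run_benchmark())
```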

4 Benchmarks for Innovation versus Competition

The nature of the benchmarking activity is influenced heavily by its ultimate objective, which can be characterized as technical versus marketing (or competitive). The goals of a technical benchmarking activity are primarily to test different technological alternatives for solving a given problem. Such benchmarks focus much more on collecting detailed technical information and using it, for example, to optimize system implementations. A competitive benchmark, in contrast, is focused on comparing performance as well as price/performance among competing products and may require an auditing process as part of the benchmark to ensure fair competition. The workshop discussion identified that the two are not mutually exclusive activities. The conclusion was that benchmarks need to be designed initially for competitive purposes, that is, to compare among alternative solutions. However, once such benchmarks become successful (such as, say, the TPC benchmarks), there will be an impetus within organizations to use those same benchmarks for innovation as well. Vendors will be interested in developing features in their products that perform well on such competitive benchmarks. There are numerous examples in the arena of database software where product features and improvements were motivated at least in part by the desire to perform well in a given benchmark competition. Since a well-designed benchmark suite reflects real-world needs, such product improvements end up serving the needs of real applications.

In sum, a big data benchmark would be used for both purposes, competition as well as innovation, though it should establish itself initially as a relevant competitive benchmark. The primary audience for big data benchmarks at this stage is the end customers who need guidance in deciding what types of systems to acquire to serve their big data needs. Acceptance of a big data benchmark for that purpose would then naturally lead to the use of the benchmark internally by vendors for innovation as well. Also, while industry participants are more interested in competitive benchmarking, participants from academia are more interested in benchmarking for technical innovation.

Also, a benchmark whose primary audience is the end customers must be specified in "lay" terms, i.e., in a manner that allows even non-technical audiences to grasp the essence of the benchmark and relate it to their real-world application scenarios.


At one end, the benchmark workload could be expressed in English, while at the other it could be completely encoded in a computer program (written in, say, Java or C++). Given the primary target audience, the preference is for the former (English) rather than the latter. The benchmark must appeal to a broad audience and thus must be expressible in English, not requiring knowledge of a programming language just to understand the benchmark specification. Using English provides the most flexibility and the broadest audience, though some parts of the specification may still employ a declarative language like SQL.

5 Benchmark Design

In this section, we summarize the outcome of the discussion sessions on the benchmark design.

5.1 Component vs. End-to-End Benchmarks

One can identify two types of benchmarks: ones that measure the performance of individual components, whether components of an application or components of a system, and others that measure end-to-end or full-system performance. We refer to the first type as Component Benchmarks and the second type as End-to-End Benchmarks. The term "system" in this context may refer to a physical item (e.g., a server) or a software product (e.g., a database management system).

Component benchmarks measure the performance of one (or a few) components of an entire system with respect to a certain workload. They tend to be relatively easy to specify and run, given their focused, limited scope. For components that expose standardized interfaces (APIs), the benchmarks can be specified in a standardized way and could be run as-is, i.e., using a benchmark kit. An example of a component benchmark is SPEC's CPU benchmark (the latest version is CPU2006 [8]), a CPU-intensive benchmark suite that exercises a system's processor, memory subsystem, and compiler. Another example of a component benchmark is TeraSort [16], since sorting is a common component operation in many end-to-end application scenarios. Several workshop attendees observed that TeraSort is a very useful benchmark and has served an extremely useful purpose in helping to exercise and tune large-scale systems. The strength of the TeraSort benchmark is its simplicity.
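As a toy illustration of why a component benchmark such as a sort is simple to specify, the single-node sketch below generates random keys and times a sort. It is not the official TeraSort (which is defined over 100-byte records, typically on distributed systems); the sizes and record format are our own assumptions, and it requires Python 3.9+ for randbytes.

```python
import random
import time

def generate_records(n: int, key_len: int = 10, seed: int = 42) -> list:
    """Toy stand-in for a sort-benchmark data generator: n random byte keys."""
    rng = random.Random(seed)
    return [rng.randbytes(key_len) for _ in range(n)]

def time_sort(records: list) -> float:
    """Time an in-place sort of the generated records."""
    start = time.perf_counter()
    records.sort()
    return time.perf_counter() - start

if __name__ == "__main__":
    recs = generate_records(1_000_000)
    elapsed = time_sort(recs)
    print(f"sorted {len(recs):,} records in {elapsed:.2f}s "
          f"({len(recs) / elapsed:,.0f} records/s)")
```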

End-to-end benchmarks measure the performance of entire systems, but it can be more difficult to specify them and to provide a benchmark kit that could be run as-is, due to various system dependencies and the intrinsic complexity of the benchmark itself. The TPC benchmarks in general are good examples of such end-to-end benchmarks, covering OLTP (TPC-C and TPC-E) and decision support (TPC-H and TPC-DS). TPC-DS, for instance, measures a system's ability to load a database and serve a variety of requests, including ad hoc queries, report generation, OLAP, and data mining queries, in the presence of continuous data integration activity, on a system that includes servers, IO subsystems, and staging areas for data integration.


5.2 Big Data Applications

Big data issues impinge upon a wide range of application domains, from scientific to commercial applications. Applications that generate very large amounts of data include scientific applications such as the Large Hadron Collider (LHC) and the Sloan Digital Sky Survey (SDSS), as well as social websites such as Facebook, Twitter, and LinkedIn, which are the often-quoted examples of big data. However, more "traditional" areas such as retail businesses, e.g., Amazon, eBay, and Walmart, have also reached a situation where they need to deal with big data. Thus, it may be difficult to find a single application that covers all extant flavors of big data processing.

There is also the question of whether a big data benchmark should attempt to model a concrete application or whether an abstracted model based on real applications, hence a generic benchmark, would be more desirable. The benefit of a concrete application is that one could use real-world examples as a blueprint for modeling the benchmark data and workload. This makes a detailed specification possible, which helps the reader understand the benchmark and its results. It also helps in relating real-world businesses and their workloads to the benchmark. A possible application for a big data benchmark could be a retailer model, such as that used in TPC-DS and many others, which has the additional advantage that it is very well understood and researched and provides for many real-world scenarios, especially in the area of semantic web data analysis. Another model application is a social website. Large social websites and related services, such as Facebook, Twitter, Netflix, and others, deal with all different kinds of big data and transformations. To find a suitable model, the key characteristics of big data applications have to be considered. Big data applications typically operate on data in several stages using a data processing pipeline. An abstract example of such a pipeline is shown in Fig. 3.

Fig. 3. Big Data Analytic Pipeline
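As a rough sketch of the staged processing idea behind Fig. 3, the toy pipeline below chains acquisition, cleaning, and aggregation steps over a few synthetic clickstream records. The stage names, record format, and operations are our own illustrative assumptions, not a reading of the figure; an end-to-end benchmark would time each stage as well as the whole chain.

```python
from typing import Iterable

def acquire() -> Iterable[str]:
    """Stand-in for data acquisition: a few raw clickstream lines."""
    return ["user=1 click=home", "user=2 click=cart", "user=1 click=checkout"]

def clean(records: Iterable[str]) -> Iterable[dict]:
    """Extraction/cleaning: parse key=value pairs into structured records."""
    return [dict(kv.split("=") for kv in r.split()) for r in records]

def aggregate(records: Iterable[dict]) -> dict:
    """Analysis: count events per click type."""
    counts = {}
    for r in records:
        counts[r["click"]] = counts.get(r["click"], 0) + 1
    return counts

if __name__ == "__main__":
    print(aggregate(clean(acquire())))  # {'home': 1, 'cart': 1, 'checkout': 1}
```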


5.3 Data Sources

A key issue to consider is the source of the benchmark data. Should the benchmark be based on "real" data taken from an actual, real-world application, or on synthetic data? The conclusion was to rely on synthetic data produced by parallel data generator programs (in order to efficiently generate very large datasets), while ensuring that the data have some real-world characteristics embedded in them. Use of reference datasets is not practical, since that would mean downloading and storing extremely large reference datasets from some location. Furthermore, real datasets would reflect only certain properties in the data and not others. But foremost, it is very difficult to scale reference datasets (see also the next section). A synthetic dataset can be designed to incorporate the necessary characteristics in the data. The different genres of big data will require corresponding data generators.
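A minimal sketch of the parallel, synthetic-generation approach is shown below: each worker produces its own partition deterministically from its partition id, with skewed distributions standing in for the embedded real-world characteristics. The schema, distributions, and sizes are illustrative assumptions only.

```python
import random
from multiprocessing import Pool

ROWS_PER_PARTITION = 250_000  # illustrative size, not a standard scale factor

def generate_partition(part_id: int) -> str:
    """Write one partition; seeding by partition id keeps the data reproducible
    and lets partitions be generated independently and in parallel."""
    rng = random.Random(part_id)
    path = f"part-{part_id:05d}.csv"
    with open(path, "w") as out:
        for i in range(ROWS_PER_PARTITION):
            customer = int(rng.paretovariate(1.5))       # skewed, heavy-tailed activity
            amount = round(rng.lognormvariate(3, 1), 2)  # heavy-tailed order amounts
            out.write(f"{part_id * ROWS_PER_PARTITION + i},{customer},{amount}\n")
    return path

if __name__ == "__main__":
    with Pool(8) as pool:
        print(pool.map(generate_partition, range(8)))
```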

5.4 Scaling

An important issue will be the ability to scale results up or down based on results obtained with the system under test (SUT). Assuming a big data benchmark allows for a scalable data set and system size, the very nature of big data may well lead to an arms race: in order to top a competitor's result, a vendor may double the hardware, which will in turn provoke the competitor to follow. Such a scenario is not desirable, as it may hide hardware and software performance improvements over time and force smaller competitors out of the game because of the huge financial burden that large configurations impose on the test sponsor.

Arguably, one of the characteristics of TPC benchmarking is that the SUT tends to be "over-specified" in terms of the hardware configuration employed. Vendors (aka benchmark sponsors) are willing to take an additional hit on total system cost in order to achieve better performance numbers without too much of a negative impact on the price/performance numbers. For example, the SUT may employ, say, 10x the amount of disk for a given benchmark database size, whereas real customer installations may only employ 3-4x the amount of disk. However, there is a possibility that with big data benchmarking the SUT may actually be smaller in size than the actual customer installation, given the expense of pulling together a system for big data applications. Thus, there might be a need to extrapolate (scale up) system performance based on the benchmark results. The set of measurements taken during benchmarking should therefore be adequate to enable such scaling.

Another scaling aspect that needs to be considered is the notion of "elasticity" in cloud computing. How well does the system perform when the data size is dynamically increased, and when more resources (processors, disk) are added to the system?

And how well does the system perform when some resources are removed as a consequence of a failure, e.g., node or disk failures? TPC benchmarks require a series of Atomicity, Consistency, Isolation, and Durability (ACID) tests to be performed on the SUT. For big data benchmarking, the notion would be to run such ACID tests, specific failure tests more suited to big data systems, or a combination of both, depending on the type of systems being targeted for the benchmark.
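One way such an elasticity or failure phase could be scripted is sketched below: measure steady-state throughput, remove a node mid-run, and measure again. The cluster-control and workload commands are placeholders for whatever the system under test provides; nothing here is prescribed by an existing benchmark.

```python
import subprocess
import time

def measure_throughput(seconds: int) -> float:
    """Run the (placeholder) workload script and assume it prints ops/sec."""
    done = subprocess.run(["./workload.sh", "--duration", str(seconds)],
                          capture_output=True, text=True, check=True)
    return float(done.stdout.strip())

def elasticity_test() -> dict:
    before = measure_throughput(300)
    subprocess.run(["./cluster.sh", "remove-node", "worker-07"], check=True)
    time.sleep(60)                      # allow rebalancing/recovery to start
    after = measure_throughput(300)
    return {"before": before, "after": after, "retained": after / before}

if __name__ == "__main__":
    print(elasticity_test())
```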

5.5 Metrics

Big data benchmark metrics should include performance metrics as well as cost-based metrics (price/performance). The TPC is the forerunner in setting the rules for pricing benchmark configurations. Over the years, the TPC has learned "the hard way" how difficult it is to specify the way hardware and software are priced for benchmark configurations (see [2] for a detailed discussion of this topic). The TPC settled on a canonical way to measure the price of benchmark configurations and defined a pricing specification that all TPC benchmarks are required to adhere to [6]. The TPC model can be adopted for pricing. Other costs that need attention are energy costs and system setup costs, since big data configurations may be very large in scale and setup may be a significant factor in the overall cost.
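The sketch below illustrates a price/performance calculation in the spirit of the discussion above: a total cost of ownership, including setup and energy, divided by a performance metric. The cost categories, figures, and metric are invented for illustration; a real pricing specification defines exactly what must be counted and over what period.

```python
# Illustrative cost model; every figure and category below is hypothetical.
costs = {
    "hardware":        1_250_000.0,
    "software":          400_000.0,
    "3yr_maintenance":   350_000.0,
    "setup_labor":        60_000.0,   # setup can be significant at big data scale
    "3yr_energy":         90_000.0,   # energy as an explicit cost component
}

performance = 85_000.0  # hypothetical composite queries-per-hour style metric

total_cost = sum(costs.values())
print(f"total cost: ${total_cost:,.0f}")
print(f"price/performance: ${total_cost / performance:,.2f} per unit of performance")
```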

6 Conclusions and Next Steps

The Workshop on Big Data Benchmarking held on May 8-9, 2012 in San Jose, CA discussed a number of issues related to big data benchmarking definitions and processes. The workshop concluded that there is both a need and an opportunity to define benchmarks for big data applications. Such a benchmark would model end-to-end application scenarios and consider a variety of costs, including total system cost, setup cost, and energy costs. Benchmark metrics would include performance metrics as well as cost metrics. Several next steps are underway. An informal "big data benchmarking community" has been formed, and biweekly phone conferences are being used to keep this group engaged and to share information among members. A few members of this community are beginning to work on simple examples of end-to-end benchmarks. Efforts are underway to obtain funding to support a pilot benchmarking activity.

The next Big Data Benchmarking workshop is already being planned, to be held on December 17-18, 2012 in Pune, India, hosted by Persistent Systems. A third workshop is being planned for July 2013 in China.

7 Bibliography

1. GridMix3: git://git.apache.org/hadoop-mapreduce.git/src/contrib/gridmix/
2. Karl Huppler: Price and the TPC. TPCTC 2010: 73-84
3. Meikel Pöss, Bryan Smith, Lubor Kollár, Per-Åke Larson: TPC-DS, Taking Decision Support Benchmarking to the Next Level. SIGMOD Conference 2002: 582-587
4. Meikel Pöss, Chris Floyd: New TPC Benchmarks for Decision Support and Web Commerce. SIGMOD Record 29(4): 64-71 (2000)
5. Meikel Pöss, Raghunath Othayoth Nambiar, David Walrath: Why You Should Run TPC-DS: A Workload Analysis. VLDB 2007: 1138-1149
6. Pricing Specification Version 1.7.0: http://www.tpc.org/pricing/spec/Price_V1.7.0.pdf
7. Raghunath Othayoth Nambiar, Meikel Poess: The Making of TPC-DS. VLDB 2006: 1049-1058
8. SPEC CPU2006: http://www.spec.org/cpu2006/
9. Standard Performance Evaluation Corporation: http://www.spec.org/
10. Statistical Workload Injector for MapReduce (SWIM): https://github.com/SWIMProjectUCB/SWIM/wiki
11. Swapnil Patil, Milo Polte, Kai Ren, Wittawat Tantisiriroj, Lin Xiao, Julio López, Garth Gibson, Adam Fuchs, Billie Rinaldi: YCSB++: Benchmarking and Performance Debugging Advanced Features in Scalable Table Stores. SOCC '11, October 27-28, 2011, Cascais, Portugal
12. Transaction Processing Performance Council: http://www.tpc.org/
13. Transaction Processing Performance Council: TPC Benchmark DS. Standard (2012)
14. Trish Hogan: Overview of TPC Benchmark E: The Next Generation of OLTP Benchmarks. TPCTC 2009: 84-98
15. Workshop on Big Data Benchmarking: http://clds.ucsd.edu/node/23
16. Jim Gray: Sort Benchmark Home Page. http://sortbenchmark.org/


Appendix A: Workshop Participants

1. Anderson, Dave, Seagate
2. Balazinska, Magda, U. Washington
3. Bond, Andrew, Red Hat
4. Borthakur, Dhruba, Facebook
5. Carey, Michael, UC Irvine
6. Chandar, Vinoth, LinkedIn
7. Chen, Yanpei, UC Berkeley/Cloudera
8. Cutforth, Craig, Seagate
9. Daniel, Stephen, NetApp
10. Dunning, Ted, MapR/Mahout
11. EmirFarinas, Hulya, Greenplum
12. Engelbrecht, Andries, HP
13. Ferber, Dan, WhamCloud
14. Foltyn, Marty, CLDS/SDSC
15. Galloway, John
16. Ghazal, Ahmad, Teradata Corporation
17. Gowda, Bhaskar, Intel
18. Graefe, Goetz, HP
19. Gupta, Amarnath, SDSC
20. Hawkins, Ron, CLDS/SDSC
21. Honavar, Vasant, NSF
22. Jackson, Jeff, Shell
23. Jacobsen, Arno, MSRG
24. Jost, Gabriele, AMD
25. Kelly, Mark, Convey Computer
26. Kent, Paul, SAS
27. Kocberber, Onur, EPFL
28. Kolcz, Alek, Twitter
29. Koren, Dan, Actian
30. Krishnan, Anu, Shell
31. Krneta, Paul, BMMsoft
32. Manegold, Stefan, CWI/Monet
33. Mankovskii, Serge, CA Labs
34. Miles, Casey, Brocade
35. O'Malley, Owen, Hortonworks
36. Raab, Francois, InfoSizing
37. Schork, Nicholas, Scripps Research Institute
38. Shainer, Gilad, Mellanox
39. Sheahan, Dennis, Netflix
40. Shekhar, Shashi, U Minnesota
41. Short, James, CLDS/SDSC
42. Skow, Dane, NSF
43. Szalay, Alex, Johns Hopkins
44. Taheri, Reza, VMware
45. Vanderbilt, Richard, NetApp/OpenSFS
46. Wakou, Nicholas, Dell
47. Wyatt, Len, Microsoft
48. Yoder, Alan, SNIA
49. Zhao, Jerry, Google


Appendix B: Program/Organizing Committee

1. Baru, Chaitan, CLDS/SDSC
2. Bhandarkar, Milind, Greenplum
3. Gutkind, Eyal, Mellanox
4. Hawkins, Ron, CLDS/SDSC
5. Nambiar, Raghunath, Cisco
6. Osterberg, Ken, Seagate
7. Pearson, Scott, Brocade
8. Plale, Beth, Indiana U / HathiTrust Research Foundation
9. Poess, Meikel, Oracle
