
Technical white paper

HP Vertica Concurrency and

Workload Management

Version 2.0

HP Big Data Platform Presales

January, 2015

Table of Contents

1. Introduction
2. Concurrency
   2.1 Concurrency vs. Parallelism
   2.2 Queue
   2.3 Linearity
   2.4 Short Query Bias
3. Workload Management
   3.1 Objective
   3.2 Resource Manager
   3.3 Workload Metric
   3.4 Cascading Pool
4. Performance Tuning
   4.1 Business Requirement
   4.2 A Design Example
   4.3 Query Classification Criteria
   4.4 Result Evaluation
5. A Customer Use Case
6. Conclusion
About the Author


1. Introduction

Analytic databases such as HP Vertica often need to process many different types of workloads. These workloads range from the simplest primary-key lookups to analytical queries that join several large tables. In addition, different types of load jobs (such as batch ETL and near real-time trickle load) must keep the data up to date in an enterprise data warehouse (EDW). Therefore, any enterprise-class database, such as HP Vertica, must have robust mixed-workload management capability that is simple to use.

The following sections of this paper discuss:

• Concepts such as concurrency, parallelism, linearity, queue and short query bias
• HP Vertica resource management (resource pool) and cascading pool
• The relationship among the three key workload management metrics: throughput, concurrency and performance
• How to gather requirements and best practices for HP Vertica performance tuning
• The HP Vertica query classification criteria
• How to profile an HP Vertica query
• A design example and a method to evaluate performance tuning results
• A detailed real-world customer use case that illustrates the key points
• A summary of important concepts and best practice recommendations

2. Concurrency

2.1 Concurrency vs. Parallelism

HP Vertica is an MPP columnar database and an inherently multi-threaded application. As such, it can use a multiple-CPU/core server architecture to process queries both concurrently and in parallel. These two approaches have many similarities that can cause confusion, so first you must understand what each term actually means.

Concurrency means having multiple jobs running in an overlapping time interval in a system, regardless of whether those jobs run at the same instant. The concept of concurrency is synonymous with multi-tasking, and it is therefore often confused with parallelism.

Parallelism means that two or more jobs are running at the exact same instant. The simplest example is a computer with a single CPU. On such a computer, you can, in theory, run multiple jobs by switching context between them. This approach gives the user the illusion that multiple jobs are running on the single CPU at the same time (known as virtual parallelism). However, if you take a snapshot at any given instant, you will find that only one job is running. In contrast, actual parallel processing requires multiple working units (for example, multiple CPUs/cores in a database server such as the HP DL380p).

2.2 Queue

Consider a thought experiment (Figure 1). Given two 1-hour tasks, you can either:

• Do them one by one (serially). Deliver one task in the first hour and the other in the second hour. Average runtime is 90 minutes.
• Do them concurrently (multi-tasking). Finish both tasks in 2 hours, and the average runtime is 120 minutes. This option assumes ideal sharing of resources with no context-switching cost.
• Add another worker and perform the two tasks in parallel. The two jobs are done in 1 hour, and the average runtime is 60 minutes.

Note that the average response time in the serial system (90 minutes) is actually shorter than the average response time in an ideal time-sharing system with no context-switching cost (120 minutes). In other words, in this experiment queuing actually improves system throughput (which is inversely related to average response time) under highly concurrent workloads. Having multiple CPUs/cores on an HP Vertica node is also optimal because it allows true parallel processing of multiple tasks at the same time.


The following figure shows processing at time t1.

Figure-1: Serial, concurrent, and parallel execution of two 1-hour tasks. Serial: avg runtime = (1+2)/2 = 1.5 hours; Concurrent (multi-tasking): avg runtime = (2+2)/2 = 2 hours; Parallel: avg runtime = (1+1)/2 = 1 hour.

• In the serial mode, there is only one task running (task 1). Because no information is available on the status of task 2, you can only assume that this second task is waiting in a queue.

• In the concurrent mode, even though task 1 is the only job running at t1, we also know that task 2 has made some progress. Being able to monitor the progress of task 2 translates into a better user experience.

HP Vertica uses lockless concurrency control on queries and is based on a shared-nothing MPP (Massively Parallel Processing) architecture. This approach provides excellent management of complex customer workloads to achieve both system performance (throughput) and response time (latency) objectives in a cluster.

2.3 Linearity

Linear concurrency means that increasing the number of outstanding requests increases the total response time proportionally. In the following discussion, assume that there are N tasks and that, when running standalone, the i-th task takes Ti amount of time to finish.

Perfectly Linear: Total Time = SUM(Ti)

In this case, perfect resource sharing is achieved. Each task is processed without wasted resources, and there is no apparent advantage of running multiple jobs concurrently.

Worse Than Linear: Total Time > SUM(Ti)

In this case, running multiple jobs takes more time than running each job one by one. Processing is obstructed for one of the following possible reasons:

• Lock conflicts on shared resources
• Context switching cost
• Skew in data distribution and/or execution

Better Than Linear: Total Time < SUM(Ti)

If running multiple requests together takes less time than serial execution, then better than linear concurrency is achieved because of optimal processing factors, such as:

• Cooperative caching
• Ability to work on encoded data
• Spare resources for each task



2.4 Short Query Bias

Many relational database management systems (RDBMS), including HP Vertica, implement a feature called Short Query Bias. In a mixed-workload environment, this feature prioritizes short tactical queries over long-running or more complex queries. In HP Vertica, users can tune two resource pool parameters, RUNTIMEPRIORITYTHRESHOLD and RUNTIMEPRIORITY, to give those short/tactical queries higher-priority access to resources.
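For illustration, here is a minimal sketch of how these two parameters might be tuned on the built-in general pool; the threshold and priority values are purely illustrative, so check the documentation for your HP Vertica version:

-- Queries start at high run-time priority; any query still running after
-- 2 seconds drops to the pool's RUNTIMEPRIORITY (MEDIUM), so short tactical
-- queries keep preferential access to resources.
ALTER RESOURCE POOL general RUNTIMEPRIORITY MEDIUM RUNTIMEPRIORITYTHRESHOLD 2;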

To better understand the advantages of Short Query Bias, consider another simple thought experiment (Figure-2).

Suppose you have two tasks, one that takes one hour to finish and another that takes two hours to finish. You can consider three different ways of accomplishing these two tasks:

• Process serially and in the order of short task first and then long task. Average runtime is 2 hours.

• Multitask or process concurrently. Assuming that there is no cost of context switching, the average runtime is 2 hours 15 min.

• Process serially but run the long task first and then the short task. Average runtime is 2 hours and 30 min.

Figure-2: Queuing and Short Query Bias. Serial (short to long): avg runtime = (1+3)/2 = 2 hours; Concurrent (multi-tasking): avg runtime = (1.5+3)/2 = 2.25 hours; Serial (long to short): avg runtime = (2+3)/2 = 2.5 hours.

The average response time, and therefore the system throughput (TP), depends on how the tasks are ordered. Prioritizing short jobs over long jobs improves TP, which is why the Short Query Bias feature has become popular. Task order is also an important consideration in tuning a mixed workload (discussed in later sections).

While queuing improves overall system performance in a busy system, intelligent queuing (prioritizing short queries over long-running ones) is an even better option.

3. Workload Management

3.1 Objective

Databases must provide robust mixed-workload management capability. Mixed workloads combine jobs with very different characteristics and requirements, for example:

• Batch and incremental load jobs
• Complex reports
• New forms of analytics
• Pre-built reports
• Ad-hoc queries

These workloads can include simple tactical queries as well as more complex queries. Extremely complex analytic queries may contain joins among multiple large tables, group-by operations, and sophisticated statistical calculations.



As a column-store database platform, HP Vertica is designed for workloads consisting mostly of read-only queries (OLAP), as opposed to OLTP workloads consisting of many small insert/update/delete operations.

To achieve effective workload management in an enterprise data warehouse, you must use the available system resources to meet:

• Specific business requirements
• Any service level agreement (SLA) in place

You manage workloads based on the relative priority/urgency of each request. In a mixed-workload environment, you must tune your HP Vertica database so that it provides:

• Fast response times for short/tactical queries
• Acceptable response times for long queries and batch ETL

Be careful that workload management tuning does not change the characteristics of a workload: small jobs should remain small, medium jobs medium, and large jobs large. In HP Vertica, the best indicator of query complexity is memory usage. See Section 4.3 for a set of query classification criteria and for a method to quickly profile an HP Vertica query and find its memory footprint.

3.2 Resource Manager

HP Vertica manages complex workloads using the Resource Manager. With this tool, you manage resource pools, which are predefined subsets of system resources with an associated queue. HP Vertica is preconfigured with a set of built-in resource pools that allocate resources to different request types. The default "catch-all" general pool allows for a certain concurrency level, based on the RAM and CPU cores in the machines.

The HP Vertica resource management scheme allows diverse, concurrent workloads to run efficiently in a distributed database. For basic operations, the default general pool is usually sufficient. However, you can customize this pool to handle specific workload requirements if necessary.

In more complex situations, you can also define custom resource pools configured to limit memory usage, concurrency, CPU affinity, and query priority. Optionally, to control memory consumption, you can restrict each database user's requests to a specific resource pool and put limits on that user's total memory, temp space, runtime, and so on.

A resource pool contains several parameters that you can tune to fit any specific customer requirement. For a detailed description of the meanings of these parameters, their typical usage patterns and example use cases and scenarios, refer to the HP Vertica documentation.
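As a hedged sketch of these capabilities (the pool name, user name, and parameter values below are hypothetical, and CPU affinity settings via CPUAFFINITYSET/CPUAFFINITYMODE are omitted for brevity), a custom pool and a user restricted to it might be defined as follows:

-- Hypothetical custom pool for a reporting workload
CREATE RESOURCE POOL report_pool
   MEMORYSIZE '4G'          -- memory reserved exclusively for this pool
   MAXMEMORYSIZE '8G'       -- hard cap; up to 4G more can be borrowed from the general pool
   PLANNEDCONCURRENCY 8     -- used to derive the per-query memory budget
   MAXCONCURRENCY 8         -- requests beyond 8 concurrent queries are queued
   PRIORITY 30;

-- Restrict a user to the pool and cap its total memory, temp space, and runtime
ALTER USER report_user RESOURCE POOL report_pool
   MEMORYCAP '2G' TEMPSPACECAP '20G' RUNTIMECAP '30 minutes';
GRANT USAGE ON RESOURCE POOL report_pool TO report_user;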

3.3 Workload Metric

For most customers, concurrency is not a direct requirement. Rather, they have a specific requirement to execute a certain workload in the database, governed by a set of system performance (throughput) and/or response time (latency) objectives:

Throughput (TP) represents the number of queries/jobs that a database can perform in a unit of time. It is the most commonly used metric for measuring a database's performance.

Response time is the sum of queuing time and execution time, and it depends on both:

• Concurrency - a determining factor for overall system load
• Query performance - how fast a query executes in the database

Throughput is inversely related to response time. The use of one term rather than the other is often a matter of customer preference, and it is also a function of the workload types under consideration (e.g., real-time interactive queries vs. batch ETL jobs).

For a given workload, the three metrics (throughput, concurrency, and performance) are related through the simple equation:

Throughput (TP) = Concurrency * Performance


If you know any two of these three metrics, you can derive the third. This relationship can be visually illustrated by the following Workload Management Metrics Triangle (Figure-3):

Figure-3: Workload Management Metrics Triangle (Throughput (TP) = Concurrency * Performance)

Often, concurrency is not a direct customer requirement because it depends on query performance and throughput SLA. Customer requirements are usually stated in the form of number of queries processed in a unit of time such as:

"We need to process 10000 queries in 1 hour...."

Thus, throughput (TP) is often the metric that interests the customer, and concurrency is a derived metric.

Consider a hypothetical customer proof-of-concept (POC) requirement for processing 1200 queries in 1 minute (or 20 queries per second). Assume that there are two competing systems, X and Y:

• On System X, the average query runtime is 2 s, so executing this workload requires a concurrency level of 40.
• On System Y, the average query response time is 100 ms, so executing the same workload requires a concurrency level of only 2 (worked out below).
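Rearranging the Workload Metrics Triangle equation (with performance expressed as 1/average response time) gives Concurrency = Throughput (TP) * Average response time, which is where the two concurrency levels come from:

Concurrency (System X) = 20 queries/sec * 2 sec = 40
Concurrency (System Y) = 20 queries/sec * 0.1 sec = 2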

What do these results mean for the customer? Clearly, System Y has better query processing capability than System X. It needs far less concurrency to satisfy the SLA than System X. Thus, from a technical perspective, System Y is a better platform.

For a given throughput (TP) SLA, the better the query/job performance, the less concurrency it needs. Less concurrency generally means less resource usage and better user experience. This improvement in user experience occurs because more system resources are available to process other workloads.

The goal of a performance tuning exercise is not to increase concurrency. Instead, the goal should be about minimizing a query’s resource usage and improving its performance. You can achieve this goal by applying the lowest possible concurrency level to satisfy a customer’s Service Level Agreement (SLA).

3.4 Cascading Pool

HP Vertica 7.1 introduces a new feature, the cascading pool, to meet customer requirements for ad-hoc queries. Prior to version 7.1, we recommended classifying queries and redirecting them to different sets of custom pools, but for truly ad-hoc queries this is an almost impossible task. To simplify things for the customer and to integrate better with third-party BI tools, HP Vertica introduced the cascading pool feature in release 7.1.

Here's how cascading pools work. Assume that there are two resource pools: R1 (a primary/starter pool) and R2 (a secondary/cascading pool). When a query's execution time exceeds the preset RUNTIMECAP in R1, it cascades to R2. When that happens, all of the query's resources are released from pool R1 and moved to pool R2 (from an accounting perspective), and the query continues to execute without interruption. This, of course, assumes that there are enough resources available in the secondary pool R2.
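As a minimal sketch of this setup (pool names follow the R1/R2 example above; the memory sizes, concurrency settings, and the 5-second RUNTIMECAP are illustrative only):

-- Secondary (cascading) pool: no reserved memory, may use up to 40% of system memory
CREATE RESOURCE POOL r2 MEMORYSIZE '0%' MAXMEMORYSIZE '40%'
   PLANNEDCONCURRENCY 4 MAXCONCURRENCY 4;

-- Primary (starter) pool: a query that runs longer than the RUNTIMECAP
-- cascades to r2 instead of being stopped
CREATE RESOURCE POOL r1 MEMORYSIZE '2G' MAXMEMORYSIZE '2G'
   PLANNEDCONCURRENCY 20 MAXCONCURRENCY 20
   RUNTIMECAP '5 seconds' CASCADE TO r2;

-- End users and BI tools only ever see the starter pool
ALTER USER bi_user RESOURCE POOL r1;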


How does this feature help an HP Vertica customer? A typical HP Vertica customer often has two or more different types of workloads in their production environments. Prior to HP Vertica 7.1, customers needed to figure out a way to properly classify a query based on certain criteria (see Section 4.3 for more details). Customers then had to use a program or script to direct the query to a resource pool. With cascading pools, customers can now route all queries through the starter pool R1 and let the queries cascade to the secondary pool R2 automatically.

Furthermore, users need not even be aware that the secondary cascading pools exist. After the secondary pools are configured, they work in the background; you don't even need to grant end users explicit access to them. So in some sense, one pool, the starter pool, is all that HP Vertica customers and third-party BI tools need.

4. Performance Tuning

4.1 Business Requirement

Before starting any HP Vertica workload management tuning or design exercise, you must first understand your customer’s business requirements. Ask your customer:

• What are the workload types?

• What is the maximum number of users?

• What is the expected SLA on throughput?

• What is the expected response time?

• What is the maximum allowable runtime for each type of job?

The next step is to map the answers to the preceding questions for each workload type to a proper design by using the HP Vertica built-in resource management scheme (resource pools).

The following table provides some high-level design considerations:

Business requirement          Vertica design
Workload types                Resource pool (mostly 1:1, but can be N:1)
Max # of users                MaxClientSessions
Concurrency (derived)         MAXCONCURRENCY
Memory usage (derived)        PLANNEDCONCURRENCY and query budget; other pools must also be considered
Max allowable runtime         RUNTIMECAP / QUEUETIMEOUT
SLA (TP or runtime)           All of the above; complex considerations
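As a brief illustration of two of these mappings (the values and the pool name are hypothetical), MaxClientSessions is a database configuration parameter, while RUNTIMECAP and QUEUETIMEOUT are set on a resource pool:

-- Raise the maximum number of client sessions per node (illustrative value)
SELECT SET_CONFIG_PARAMETER('MaxClientSessions', 200);

-- Cap query runtime and queue wait time (in seconds) for a hypothetical pool
ALTER RESOURCE POOL adhoc_pool RUNTIMECAP '10 minutes' QUEUETIMEOUT 300;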

4.2 A Design Example

Consider a simple example based on a hypothetical customer requirement: There are 1200 active users on a 6-node HP Vertica production cluster. The customer is running simple to medium complexity ad-hoc queries, and the SLA requires that no user should wait more than 1 min for a query response.

Begin by asking:

• What would be a good starting resource pool design?

• What would be the optimal maxconcurrency (MC) for such a pool?

To answer these questions, you need to know more about the customer's query workload and the average query runtime (performance). In this case, use the following assumptions:

• Any query requires < 100 MB of memory
• Average query runtime is < 1.5 sec


The target TP is 1200/60 = 20 queries/sec. Using the Workload Metrics Triangle equation (see Section 3.3), the required concurrency (MC) is 20 queries/sec * 1.5 sec = 30. The following pool would be a very good candidate:

CREATE RESOURCE POOL q_pool MEMORYSIZE '3G' MAXMEMORYSIZE '3G' PRIORITY 20 PLANNEDCONCURRENCY 30 MAXCONCURRENCY 30 EXECUTIONPARALLELISM 8;

This pool has a fixed memory size (MEMORYSIZE = MAXMEMORYSIZE). In HP Vertica, such a pool is considered standalone because it cannot borrow from the general pool, and hopefully it will not need to do so.

You can think of the EXECUTIONPARALLELISM (EP) of a resource pool as the number of CPU cores used on each HP Vertica node. These cores can be physical (if hyper-threading is disabled) or logical (if it is enabled). For simple-to-medium queries, not all cores are needed, so EP=8 is a reasonable starting value.

You budget memory in such a pool according to MEMORYSIZE / PLANNEDCONCURRENCY = 3 GB / 30 = 100 MB, which fits because, per the assumptions above, all the customer's queries require < 100 MB of memory to run.
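If you want to verify the resulting per-query budget, the RESOURCE_POOL_STATUS system table exposes it. The query below is a sketch under that assumption; the query_budget_kb column name is quoted from memory, so confirm it against the documentation for your version:

-- Expect roughly 100 MB (about 102,400 KB) per query on each node for q_pool
SELECT node_name, pool_name, query_budget_kb
FROM v_monitor.resource_pool_status
WHERE pool_name = 'q_pool';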

In this situation, there are tradeoffs to consider. You must choose between a low MC (longer queue times and underutilized system resources) and a high MC (higher context-switching costs and contention for system resources). For small-to-medium workloads, the optimal MC is somewhere between N and 2*N, where N is the total number of physical CPU cores per node.

4.3 Query Classification Criteria

Before you start thinking about resource pools and workload optimization in HP Vertica, you must develop a solid understanding of your customer’s workloads. Only then can you know how to properly classify them.

What should you use as the classification criteria?

You could deconstruct a complex query to determine a weighted score, counting the number of tables, joins, and aggregate functions, and the number and types of derived tables and analytical functions. Such an approach is both subjective and tedious, and hence is often not practical.

What if you use the standalone runtime of a query as the criterion? This method is also problematic. A query that runs in 1 minute while using up 80% of a system’s resources should not be classified in the same category as a query that runs in the same amount of time (1 minute) but uses < 1% of the available resources.

For HP Vertica, the best determination of query complexity is memory usage. As an MPP columnar database, HP Vertica is rarely, if ever, I/O bound. It is also less likely to encounter a CPU processing bottleneck because of the continuous increase in the power and speed of multi-core CPUs. The most common resource bottleneck in a production HP Vertica cluster running a complex mixed workload is memory.

Because of the importance of available memory, the HP Vertica Resource Manager attempts to allocate memory equitably among different workloads or resource pools. The goal is to make sure that no resource pool is starved of memory in the worst-case scenario under full system load.

If you can determine how much memory a query requires per node, then you can use that value to classify an HP Vertica query (or any other job). Based on real-world experience with many of the largest HP Vertica customers, HP Vertica recommends the following classification rules to simplify query classification:

Small      < 500 MB
Medium     500 MB to 2 GB
Large      > 2 GB

How can you quickly determine an HP Vertica query’s memory footprint? It turns out that HP Vertica has a convenient profiling option (similar to EXPLAIN). You can use the PROFILE statement to get the total memory required for the query (among other things). As a best practice, you should set up a small and dedicated profiling pool for this purpose, as shown in the following example:

CREATE RESOURCE POOL p_pool MEMORYSIZE '1K' PRIORITY 80 PLANNEDCONCURRENCY 4 MAXCONCURRENCY 4;

Creating a dedicated profiling pool forces a query to borrow from the default general pool for any extra memory that it needs to execute. If you profile against the general pool instead (a common mistake), then, depending on the detailed pool settings, the reserved memory may be more than the query actually needs. Under certain circumstances, HP Vertica can be "fooled" into reporting reserved memory rather than the memory actually allocated and used, which would skew your result.
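A minimal profiling sketch, assuming the p_pool defined above and a hypothetical table: PROFILE executes the statement and reports, in the notices it returns, the memory reserved for the query along with the transaction and statement IDs that identify the run in the monitoring tables.

-- Run the profiling session in the tiny dedicated pool so that the query
-- borrows only what it actually needs from the general pool
SET SESSION RESOURCE_POOL = p_pool;

-- Profile a (hypothetical) query; the notice output includes the memory
-- reserved for the query, which is the value used to classify it
PROFILE SELECT customer_id, SUM(amount) FROM sales_fact GROUP BY customer_id;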


4.4 Result Evaluation

Now, you have gathered your customer requirements and developed several resource pool designs. You have also performed the tests for each design on a given set of workloads. How do you judge these results? Which design is the best? What should be the criteria?

These are not easy questions because you can view design results from several different perspectives. What is considered to be best in one perspective may not be viewed as optimal in another. Thus, it is difficult or impossible to agree on a set of objective criteria that is applicable to all customer cases.

Based on real-world customer experiences, HP recommends the following method for evaluating complex mixed workload test results:

• For each test result, calculate a score by using the formula: Score = SUM(Weight * Relative TP Change)
• The lower the score, the better the result

As an example, consider the following scenario which has three different types of workload: small, medium, and large. For each workload, assign a relative weight factor that indicates its relative importance per customer requirement.

Type     Weight   Base (s)   Test-1 (s)   Test-2 (s)   Test-3 (s)
Small    10       1          4            1.5          2
Medium   5        5          10           12           10
Large    1        20         40           50           45

The Base column indicates the runtime for each workload when running by itself. After performing the scoring calculation, you get the following result:

Score-1 Score-2 Score-3

10.5 6.9 8.1
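For example, if "relative TP change" is read as the relative drop in each workload's throughput versus its standalone baseline (that is, the magnitude of base runtime / test runtime - 1), the Test-1 score works out as follows; the same reading reproduces the other two scores as well:

Score-1 = 10 * (1 - 1/4) + 5 * (1 - 5/10) + 1 * (1 - 20/40)
        = 10 * 0.75 + 5 * 0.5 + 1 * 0.5
        = 10.5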

Based on these rules, Test-2 has the lowest score, which makes Design-2 the most favorable. Intuitively, this result makes sense because the small workload is typically the most important one (as indicated by its having the largest relative weight). Thus, the effect on the small workload's performance in the presence of other workloads is often the dominant factor in the score. Design-2 has the least impact on the small workload's runtime and hence is the best design, even though it may have the longest overall running time among the three design candidates.

5. A Customer Use Case

Consider this complex mixed workload example from a hypothetical HP Vertica customer. This customer identified the following types of workloads (each with its own SLA) running in its HP Vertica database:

Type-1: 14 concurrent ETL/ELT jobs of varying complexity
Type-2: Large single-file load (five tables)
Type-3: Large single-file export (five tables)
Type-4: Three update/upsert jobs
Type-5: Complex multi-dimensional model analysis (ad-hoc queries)

The customer's HP Vertica cluster has 48 HP DL380p Gen8 nodes. Each node has two CPUs with 8 physical cores each, plus 128 GB of memory. For simplicity, assume that the queuing threshold for the general pool is ~120 GB; the rest of the memory is reserved for the Linux OS and the file system.

The customer requirements for a workload management solution are the following:

• Keep short jobs short, medium jobs medium, and long jobs long.
• Clearly demonstrate the effects of resource management.
• Consider both runtime and memory usage for each job.


First, run baseline benchmark tests of all these different types of workload. Classify these workloads into four categories based on their respective resource usage: small, medium, large and load (for data loading). Then, create a custom resource pool for each workload type. See the following table for design details:

Pool       Usage              Memory Size  Max Memory Size  Priority  Runtime Priority Threshold (s)  Planned Concurrency  Max Concurrency  Execution Parallelism
s_pool     tactical query     2 GB         2 GB             60        5                               32                   32               6
m_pool     export/upsert      0            50%              40        0                               15                   10               12
l_pool     ELT/complex query  0            50%              20        0                               4                    3                16
load_pool  load/copy          0            16 GB            10        0                               4                    4                16
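As a hedged sketch, the first and last rows of this table might translate into CREATE RESOURCE POOL statements like the following (the remaining pools follow the same pattern):

-- Standalone pool for short tactical queries (MEMORYSIZE = MAXMEMORYSIZE, so it never borrows)
CREATE RESOURCE POOL s_pool MEMORYSIZE '2G' MAXMEMORYSIZE '2G' PRIORITY 60
   RUNTIMEPRIORITYTHRESHOLD 5 PLANNEDCONCURRENCY 32 MAXCONCURRENCY 32 EXECUTIONPARALLELISM 6;

-- Load pool: no reserved memory, capped at 16 GB borrowed from the general pool
CREATE RESOURCE POOL load_pool MEMORYSIZE '0%' MAXMEMORYSIZE '16G' PRIORITY 10
   RUNTIMEPRIORITYTHRESHOLD 0 PLANNEDCONCURRENCY 4 MAXCONCURRENCY 4 EXECUTIONPARALLELISM 16;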

What are the key considerations in this design? As mentioned previously, HP Vertica resource management depends on memory usage. Thus, you must be sure that no resource pool runs out of memory under any circumstances. In particular, verify that:

• The total memory is sliced up carefully so that no pool is memory starved.

• There is enough memory for a query/job in any given pool in the worst-case scenario (such as a full system load).

Under full system load, the number of running jobs in each pool equals its maxconcurrency (MC) setting. In this case, the minimum amount of memory that a job is expected to get in each pool is as follows:

• s_pool is standalone and can use 2 GB. Each query is allotted 2000/32=62.5 MB.

• m_pool is allotted a targeted memory of (120 GB * 50% / 15) * 10 = 40 GB. Each job can use up to 4 GB.
• l_pool is allotted a targeted memory of (120 GB * 50% / 4) * 3 = 45 GB. Each job can use up to 15 GB.
• load_pool is allotted 16 GB when 4 jobs are running. Each job can use up to 4 GB.

Even under the full system load, the total memory taken up by all jobs in all the custom resource pools comes to an estimated (2+40+45+16)=103 GB. The leftover 17 GB remains in the general pool and can aid in certain memory-intensive operations such as hash join (HJ) and group by hash.

Figure-4 shows the results for three different workload management scenarios:

• Running standalone
• Using the general pool (default)
• Using the custom design with multiple pools

Figure-4: Test results for the three scenarios (Standalone, One pool, Multiple pools)



6. Conclusion

Concurrency is not the same as parallelism, and it is usually not a direct customer requirement. Rather, it is a derived metric and depends on throughput and performance, as defined in the following equation:

Throughput (TP) = Concurrency * Performance

HP Vertica provides a workload management scheme (resource pools) that prioritizes short-running queries over long-running queries. It enables multiple jobs to run concurrently and to use system resources efficiently, even under a changing mixed workload.

Attaining high concurrency should not be the goal of any workload performance tuning or design exercise. Instead, your goals should be:

• Optimizing the physical design
• Minimizing query resource usage
• Understanding a customer's specific service level agreements (SLAs)
• Applying the lowest possible concurrency level that satisfies the customer's SLA

Resource usage (memory footprint) is the best determining factor for query complexity in HP Vertica. To find the memory footprint of a query, it is important that you follow the best practices for profiling in HP Vertica.

It is a common practice to map one or multiple customer workloads to a custom resource pool in HP Vertica. When you design custom resource pools, consider the full system load scenario. Make sure that no resource pool will be starved out of memory under any circumstances.

About the Author

Po Hong, PhD is a senior solutions architect at HP Big Data Platform Corporate Presales. He has a broad range of experience in various relational databases such as HP Vertica, Neoview, Teradata, and Oracle. For comments and suggestions regarding this white paper, please email: po.hong@hp.com.

Acknowledgements

The author would like to thank Priya Arun, Yassine Faihe and the HP Big Data Platform EMEA SE team for their comments, suggestions and insights. Judith Plummer and Hochan Won have provided careful editorial help, comments and proofreading.

