
WP_ACA0209 © 2009 Azul Systems, Inc.

Azul Compute Appliances

Ultra-high Capacity Building Blocks for Scalable Compute Pools


WHITE PAPER: Azul Compute Appliances

EXECUTIVE SUMMARY

The rapid adoption of Java™ and J2EE™ platforms and other virtual machine-based applications has caused a proliferation of application host servers in the data center, presenting IT organizations with the increasingly complex challenge of cost-effectively provisioning and managing hundreds of servers without compromising system availability, utilization, and service levels.

To meet this challenge, Azul Systems introduces network attached processing, an innovative way to deliver massive amounts of compute power to enterprise Java applications. Network attached processing allows virtual machine-based applications that are initiated on a general-purpose application host server to tap the processing power of a specialized compute appliance within a compute pool. Instead of running on the host server, the virtual machine and the application are shifted to the compute appliance for execution, while all interfaces to clients, databases, and host operating system services remain on the host server. This capability is enabled by Azul Virtual Machine technology.

Compute pools are created with rack-mountable, ultra-high capacity compute appliances ranging in size from 108 processor cores and 48 GB of memory to 864 processor cores and 768 GB of memory. Compute pools provide virtually unlimited processor and memory resources that Java-based applications can tap into at any time.

Provisioning is simplified. Through the aggregation of processing onto a few high-performance compute appliances, management overhead is reduced. Compute pools provide the unique capability to create an application-processing infrastructure — a resource that is available to all applications as a consistent, predictable supply of compute power.

NETWORK ATTACHED PROCESSING

Using network attached processing, servers transparently redirect application processing tasks to an Azul compute pool. These pools of multiple ultra-high-capacity Azul appliances provide highly available application processing power. Relative to the requirements of a single application, the compute pool provides a virtually infinite, unbounded pool of compute resources.

Workload consolidation onto Azul compute pools is transparent to the application and the existing servers. Applications continue to be invoked by and run on the existing servers, each with its own separately configurable operating system and application server, but their raw processing power is augmented by the Azul compute pool.

When a Java application is launched on the application host, the VM Proxy ships it over the network to the compute appliance for execution. When the VM Engine starts on the appliance, it loads the required Java files and begins execution of the application. For each application, there is a VM Proxy on the initiating application host and a corresponding VM Engine on one of the appliances in the compute pool. Any application host can run multiple VM Proxies, and any compute appliance can run multiple VM Engines from one or more hosts. Although all Java processing is executed on the compute appliance, application calls for communications to clients, the host operating system, or the database are handled by the VM Proxy via the link between the VM Engine on the compute appliance and the VM Proxy on the host server. The application's security, OS, and host platform dependencies are preserved on the application host platform.
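The proxy/engine split described above can be sketched as a simple delegation pattern. This is a minimal illustration, not Azul's actual API: the class and method names (HostServices, VmProxy, VmEngine) are invented, and the real link between proxy and engine is a network connection rather than a direct object reference.

```java
import java.util.ArrayList;
import java.util.List;

// Calls that must stay on the application host (clients, OS, database).
interface HostServices {
    String readFile(String path);
    void sendToClient(String data);
}

// Runs on the application host; handles all I/O on the engine's behalf.
class VmProxy implements HostServices {
    final List<String> clientLog = new ArrayList<>();
    public String readFile(String path) { return "contents of " + path; }
    public void sendToClient(String data) { clientLog.add(data); }
}

// Runs on the compute appliance; all Java processing happens here.
class VmEngine {
    private final HostServices host; // in reality, a network link to the proxy
    VmEngine(HostServices host) { this.host = host; }

    String run(String path) {
        String in = host.readFile(path);  // I/O routed back to the host
        String out = in.toUpperCase();    // heavy processing stays on the appliance
        host.sendToClient(out);           // results returned via the proxy
        return out;
    }
}
```

The point of the pattern is that the engine never touches the client, OS, or database directly; every such call crosses back through the proxy, which is why host platform dependencies are preserved.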

Compute pools are monitored and managed as a cohesive resource by the integrated Compute Pool Manager (CPM). Administrators define policy-based rules that dictate each application's required resources and whether it requires redundant placement; the CPM uses these rules to select the best available compute appliance. If more than one compute appliance satisfies the policy rule, load balancing determines the best fit.
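The placement decision can be sketched as a filter-then-balance rule. All names below are invented, and the "most free cores" tie-break stands in for whatever load-balancing metric the CPM actually uses.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

class Appliance {
    final String name;
    final int freeCores;
    final int freeMemGb;
    Appliance(String name, int freeCores, int freeMemGb) {
        this.name = name; this.freeCores = freeCores; this.freeMemGb = freeMemGb;
    }
}

class ComputePoolManager {
    /** Keep only appliances satisfying the policy rule, then load-balance. */
    static Optional<Appliance> place(List<Appliance> pool, int needCores, int needMemGb) {
        return pool.stream()
                .filter(a -> a.freeCores >= needCores && a.freeMemGb >= needMemGb)
                .max(Comparator.comparingInt(a -> a.freeCores)); // tie-break: least loaded
    }
}
```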

Applications running on the compute appliance function exactly as they would on a host-based JVM. The only difference is in the amount of available resources and the improved processing efficiency that is achieved through network attached processing. With capacities of up to 864 processor cores and 768 GB of memory in a single, coherent shared memory system, compute appliances enable creation of an application processing infrastructure that can handle fifty or more applications and thousands of concurrent process threads.

ARCHITECTURE OVERVIEW

The Azul system architecture removes the limitations inherent in general-purpose servers and provides the ability to efficiently run virtual machine environments where hundreds of concurrent threads are common. The system is characterized as a highly scalable, symmetrical multi-processor design with uniform memory access, based on the Azul Systems Vega™ processor. The scalable design allows multiple products of varying capacity to be built from a common architecture.


• Azul Systems Vega processor: each Vega chip contains 54 fully independent processor cores and an integrated quad-channel memory controller.

— Each appliance contains up to 16 Vega chips, providing up to 864 total processor cores (96, 192, 384, and 768 core models are also available)

— Each processor core is a 64-bit RISC processor with optimizations for multi-threaded VM execution

— Three banks of four ECC memory modules are attached to each Vega chip, for a total of 192 memory modules in a 16-chip configuration

• Cache coherent, uniform memory access through a passive, non-blocking interconnect mesh

• 205 GBps aggregate memory bandwidth

• 544 GBps aggregate interconnect bandwidth

• Instruction-level support for concurrent, pauseless VM garbage collection

• Dual network processors for system control and I/O communications:

— Dual Gigabit Ethernet network links per network processor

— 4 Gbps peak I/O bandwidth (2 Gbps full duplex) per network processor, 8 Gbps total

— In-band or out-of-band management

— Built-in system monitoring and diagnostics

— Fail-over interconnect for dynamic fault resiliency

• RAS features:

— ECC and DRAM fault tolerance (Chipkill) on all system memory

— Memory scrubbing

— ECC protection on all system busses and interconnect paths

— ECC on L1 and L2 processor caches and translation look-aside buffers (TLBs)

— ECC on processor duplicate tags

— ECC on processor register files

— Predictive failure monitoring

— Automatic restart and configuration around failed system elements

— N+1, hot-pluggable power and fans

• Integrated management through Azul Systems Compute Pool Manager (CPM)

THE VEGA PROCESSOR

Powering the Azul Compute Appliance is the Vega processor (actually, many Vega processors), a new general-purpose processor designed for running virtual machines in highly concurrent environments. The Vega chip includes features not found in conventional processors, enabling a variety of optimizations that would otherwise be impossible. With support for features such as read and write barriers that help optimize garbage collection and object relocation, speculative locking to enable safe concurrent execution of code that would otherwise be serialized, and an instruction set designed for the needs of virtual machines, the Vega processor is designed to provide consistently high throughput to Java applications.

Hot-swappable power supplies are inserted from the front at the bottom. All I/O cabling is done via connections to the dual redundant network processor modules at the rear, located just above the AC power inlets. Hot-swappable fan assemblies providing N+1 fault-resilient fans are mounted in the rear with a pull orientation, providing front-to-back cooling airflow for all components.

POWER CONSUMPTION

Benefiting from the highly integrated Vega chip design, Azul compute appliances consume far less power than comparable large-scale general-purpose servers, and in a much smaller system footprint. The Model 7380, with 864 processor cores in a compact 14U rack mount package, consumes a maximum of 4.0 kilowatts of power and occupies only 8.1 cubic feet of rack space.

The Azul Systems compute appliance provides a dramatic reduction in power consumption and rack space usage for systems of this performance level. On average, Azul compute appliances are expected to consume 85% less power than equivalent performing general-purpose servers, and occupy a fraction of the data center footprint.
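As a rough arithmetic check of the figures quoted above (4.0 kW, 864 cores, 8.1 cubic feet for the Model 7380), the implied density works out to under 5 W per core and about half a kilowatt per cubic foot. The helper names below are my own, not part of any Azul tool.

```java
// Quick density arithmetic for the Model 7380 figures quoted in the text.
class PowerDensity {
    static double wattsPerCore(double kilowatts, int cores) {
        return kilowatts * 1000.0 / cores;   // 4.0 kW / 864 cores ≈ 4.6 W per core
    }
    static double kwPerCubicFoot(double kilowatts, double cubicFeet) {
        return kilowatts / cubicFeet;        // 4.0 kW / 8.1 cu ft ≈ 0.49 kW per cu ft
    }
}
```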

SYSTEM SOFTWARE

Azul compute appliances do not run traditional general-purpose operating systems. The integrated system software provides the appliance management environment and the execution environment for virtual machine engines. The Azul integrated system software was designed with three key objectives:

• Enable throughput scalability to hundreds of concurrent threads.

• Provide real-time response to rapid changes in resource demand to ensure consistent response times.

• Enable fine-grained allocation and control of application resources.

To achieve these objectives, the Azul system software was designed with many unique capabilities. Several of those capabilities are described below.

MULTI-STAGE SCHEDULER

To facilitate faster thread scheduling, the scheduler is implemented as a two-stage design using a process-level scheduler and a thread-level scheduler. The process scheduler assigns a group of processor cores to each application based on the commitments, maximums, and priority weights that are defined in Compute Pool Manager (CPM) policy rules. When scheduling processors for an application, the scheduler considers the requirements and priorities of all other applications (processes) in the compute appliance. This is done every 10 ms to guarantee immediate allocation of processing resources in response to demand. The thread scheduler, on the other hand, assigns individual processor cores to threads, but only works within the context of one application at a time. Whereas a general-purpose operating system simultaneously examines all threads in the system and incurs delays due to excessive context switching, the thread scheduler is able to limit its scheduling to a single application at a time, resulting in faster scheduling actions.
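The two-stage design can be illustrated with a toy model: a process-level pass that hands each application a block of cores from its commitment, and a thread-level pass that maps runnable threads onto cores within one application at a time. This is a sketch under invented names, not the real scheduler's data structures.

```java
import java.util.HashMap;
import java.util.Map;

class TwoStageScheduler {
    /** Stage 1: grant each application its committed cores, capped by what remains. */
    static Map<String, Integer> processStage(Map<String, Integer> committed, int totalCores) {
        Map<String, Integer> grant = new HashMap<>();
        int left = totalCores;
        for (Map.Entry<String, Integer> e : committed.entrySet()) {
            int g = Math.min(e.getValue(), left);   // never over-commit the appliance
            grant.put(e.getKey(), g);
            left -= g;
        }
        return grant;
    }

    /** Stage 2: within ONE application, runnable threads share that app's cores. */
    static int threadStage(int appCores, int runnableThreads) {
        return Math.min(appCores, runnableThreads); // cores actually put to work
    }
}
```

The key property the sketch preserves is that stage 2 never looks outside a single application, which is what keeps its scheduling decisions cheap.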

GUARANTEED RESOURCE ALLOCATION

A key requirement for managing multiple applications within the shared processor and memory space of a compute appliance is the ability to guarantee resource levels. Every application should be guaranteed a minimum level of processor and memory resources when it is activated, as well as a fair share of incremental processor resources when competing with other applications within the same system. The Azul system software is closely integrated with the Compute Pool Manager (CPM) to provide this capability.

Applications cannot start on a compute appliance unless that appliance can guarantee that the application will receive its committed level of processor and memory resources. When an application is started, the system software ensures that those resources are always available as required. For memory, the requested heap space is pre-allocated and protected from use by other applications. For processor resources, the application is guaranteed to have the committed number of cores allocated for its use whenever the demand is placed. When the application demand for processor cores is below the committed (guaranteed) level, those cores can be used by other applications. If the application requests additional processor cores, the remaining number up to the committed level are made available within approximately 10 milliseconds of the request.

When multiple applications simultaneously ask for incremental processor resources over and above their committed levels, their requests are granted from available free resources. In most cases, due to the large capacity of the compute appliances, there are sufficient resources to meet these requirements. However, if demand exceeds the available amount, resources are allocated to the applications according to processor priority weights that are defined in the application’s respective policy rules within the Compute Pool Manager.
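The contention rule can be sketched as a proportional split of the free cores by priority weight. The names, weight values, and floor-rounding below are my assumptions; the text only specifies that allocation follows the policy-defined weights.

```java
import java.util.LinkedHashMap;
import java.util.Map;

class WeightedAllocator {
    /** Split freeCores among contending apps in proportion to their priority weights. */
    static Map<String, Integer> allocate(Map<String, Integer> weights, int freeCores) {
        int totalWeight = weights.values().stream().mapToInt(Integer::intValue).sum();
        Map<String, Integer> out = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> e : weights.entrySet())
            out.put(e.getKey(), freeCores * e.getValue() / totalWeight); // floor division
        return out;
    }
}
```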

HETEROGENEOUS VM SUPPORT

Azul compute appliances are capable of simultaneously running VM engines originating from different application host platforms (e.g., Solaris and Linux) as well as VM engines from different J2SDK versions (e.g., 1.4.2 and 1.5). This capability enables a compute appliance to be used as a shared compute resource by a wide variety of application hosts. Additionally, VM engines running on the compute appliance are 64-bit and have heap space of up to 670 GB.

OPTIMISTIC THREAD CONCURRENCY

The system software incorporates an innovative lock management scheme to greatly improve transaction throughput. Rather than serializing processing when a Java lock is invoked, optimistic thread concurrency allows all threads to proceed instead of blocking. Prior to committing any changes, optimistic thread concurrency evaluates whether any write conflicts have occurred in the affected memory region. If there are no conflicts, as is most often the case, the writes are committed and the thread proceeds as expected. If a conflict occurs, the memory writes are rolled back and restarted after the lock contention is resolved. Because conflicts occur in only a very small number of cases, the overall effect is a significant improvement in application throughput.
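The commit/rollback cycle can be sketched with a plain version counter: do the work speculatively, commit only if no conflicting write intervened, and retry otherwise. Azul implements this with hardware support; this software analogue is purely illustrative.

```java
import java.util.concurrent.atomic.AtomicLong;

class VersionedCounter {
    private volatile long value = 0;
    private final AtomicLong version = new AtomicLong(0);

    /** Optimistically add delta; retry if a conflicting write is detected. */
    void add(long delta) {
        while (true) {
            long observed = version.get();
            long proposed = value + delta;        // speculative work, no lock held
            synchronized (this) {                 // short critical section: commit point
                if (version.get() == observed) {  // no conflict since we started
                    value = proposed;
                    version.incrementAndGet();
                    return;
                }
            }
            // conflict: discard the proposed write and retry from the top
        }
    }

    long get() { return value; }
}
```

When conflicts are rare, almost every pass through the loop commits on the first try, which is the source of the throughput gain the text describes.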

COOPERATIVE MEMORY MANAGEMENT

The Azul Compute Appliance software includes a unique system memory capability called cooperative memory management. This feature lets applications continue running even when they exceed their committed memory allocations, due to leaks or other causes. With this extra protection, called Grant memory, applications do not need to be individually over-provisioned, thus allowing higher system memory utilization.

Each application is launched with a Java command line parameter specifying how much memory the system must "commit" to be available for its use. The application is guaranteed to be able to increase its active memory up to that level. When the application's live object set size reaches the stated commitment level, instead of throwing an out-of-memory exception, the cooperative memory manager within the system software grants the incremental allocation if Grant memory is available. Even the underlying JVM can tap into Grant memory to circumvent scenarios that would ordinarily lead to unavoidable crashes, for example, thread stack overflows. Once Grant memory is allocated, a warning is generated and shown in activity graphs to alert administrators that the commitment threshold has been exceeded. Regardless of the reason for the request, this incremental allocation prevents the application from running out of heap space and crashing. If it is an indication of a memory leak, the administrator can wait until a more convenient time to restart the application and reduce its memory footprint to the starting level once again. If it turns out that there is no memory leak and the application has a legitimate requirement for incremental memory, the administrator uses that information to adjust the resource profile for that application and avoid the warnings in the future.

Grant memory is returned to the common pool for reuse if memory demand decreases. The memory savings from cooperative memory management are best realized when multiple applications whose memory demands tend to spike at different times run on the same appliance.
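Grant memory accounting can be modeled as a small shared pool: overflow requests are granted (with a warning) while the pool lasts, refused when it is exhausted, and returned when demand drops. The class name, sizes, and boolean return are invented for illustration.

```java
// Toy model of the Grant memory pool described above.
class GrantMemoryPool {
    private int grantFreeGb;

    GrantMemoryPool(int grantGb) { this.grantFreeGb = grantGb; }

    /** An app exceeded its commitment; cover the overflow if the pool allows. */
    boolean request(int overflowGb) {
        if (overflowGb > grantFreeGb) return false;  // pool exhausted: would be OOM
        grantFreeGb -= overflowGb;                   // grant it, but warn the admin
        System.out.println("WARNING: commitment exceeded by " + overflowGb + " GB");
        return true;
    }

    /** Demand dropped back below the commitment: return Grant memory to the pool. */
    void release(int gb) { grantFreeGb += gb; }

    int free() { return grantFreeGb; }
}
```

Because the pool is shared, no single application needs to be over-provisioned for its worst-case spike, which is where the utilization gain comes from.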


SYSTEM THROUGHPUT

The scalability and efficiency features engineered into Azul Compute Appliances result in a family of systems with enormous capacity for virtual machine workloads. The Model 7380, the largest compute appliance in the product family, has 864 processor cores and can simultaneously run as many as 200 concurrent applications while maintaining consistent response times for all users.

Figure 1 below characterizes projected throughput vs. response time curves for the Model 7380 in a network attached processing configuration compared to commonly used general-purpose servers. Each curve shows how response time rises as total transaction rate increases. The maximum throughput for a system is the point at which the response time crosses over the acceptable response time threshold appropriate for the transaction type. The throughput measure characterized in this projection is the mixed transaction workload measured by an industry standard benchmark for application servers.

[Figure 1: Response time vs. transaction throughput curves for a commodity server, large-scale general-purpose SMP servers, and the Azul Systems compute appliance (768 cores), each plotted against a maximum acceptable response time threshold.]
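The definition of maximum throughput used above (the highest throughput whose response time is still within the acceptable threshold) can be sketched directly; the curve samples in the test are invented.

```java
// Walk a measured (throughput, response time) curve and report the highest
// throughput whose response time still meets the acceptable threshold.
class ThroughputCurve {
    /** tp[i] and rt[i] are paired samples, with tp in ascending order. */
    static double maxThroughput(double[] tp, double[] rt, double maxRt) {
        double best = 0;
        for (int i = 0; i < tp.length; i++)
            if (rt[i] <= maxRt) best = tp[i];   // last point under the threshold wins
        return best;
    }
}
```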

Pauseless garbage collection prevents response-time delays while available memory is being optimized. Combined, these capabilities enable response times to remain consistent as the load increases.

Although the throughput capacity of each compute appliance is far greater than even the largest general-purpose SMP server, the real power and value of network attached processing is realized when multiple systems are grouped together to form a compute pool. Compute pools provide a scalable application-processing infrastructure that can be shared by all applications in the data center.

CONCLUSION

Compute appliances from Azul Systems are an innovative way to deliver massive amounts of compute power to commercial applications. The system architecture is a unique combination of hardware and system software that is designed specifically for the heavily multi-threaded environment of Java-based applications. When managed with the Compute Pool Manager, these systems enable IT organizations to dramatically consolidate application-processing resources and lower overall cost of ownership. Perhaps most importantly, they enable the creation of an application-processing infrastructure — a centrally managed, virtually unlimited compute resource for all applications in the data center.


1600 Plymouth Street, Mountain View, CA 94043 T 650.230.6500 | F 650.230.6600 | www.azulsystems.com

Copyright © 2009 Azul Systems, Inc. All rights reserved. Azul Systems and Azul are registered trademarks in the United States and other countries. The Azul arch logo, Compute Pool Manager, and Vega are trademarks of Azul Systems Inc. in the United States and other countries. Sun, Sun Microsystems, J2EE, J2SE, and Java are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries. Other marks are the property of their respective owners and are used here only for identification purposes. Products and specifications discussed in this document may reflect future versions and are subject to change by Azul Systems without notice. This document may not be used for commercial purposes.

RELATED WHITE PAPERS

• Delivering a Breakthrough Java Computing Experience

• Network Attached Processing: Scaling Enterprise Application Deployments

• Azul Compute Appliances: Ultra-High Capacity Building Blocks for Scalable Compute Pools

• Pauseless Garbage Collection: Improving Application Scalability and Predictability

• The Azul Virtual Machine: Realizing the Elegance of Java Technology

• Optimistic Thread Concurrency: Breaking the Scale Barrier

Visit www.azulsystems.com for additional whitepapers.

ABOUT AZUL SYSTEMS

Azul Systems is a global provider of enterprise server appliances that deliver virtualized compute and memory resources as a shared network service for transaction-intensive and QoS-sensitive applications built on the Java™ platform. Azul Compute Appliances enable Java-based applications to transparently achieve 5X to 50X improved performance by scaling and simplifying application integration. Our green-friendly compute infrastructure supports the business priorities of today's most demanding enterprise environments and delivers increased capabilities, capacity, and utilization at a fraction of the operating cost of traditional computing models.
