High-Performance Computing Clusters
What Is a Cluster?
There are several types of clusters, and the only constant is that cluster technology keeps changing. Here are some examples of common cluster types:
High-availability (HA) Clusters
High-availability Clusters (a.k.a. failover clusters) are used in industries where downtime is not an option. Toll roads, subways, financial institutions, and 911 call centers, to name a few, all have high-availability requirements. HA Clusters generally come in a redundant two-node configuration, with the redundant node taking over when the primary node fails. Seneca Data offers the NEC Express5800 to meet this need. For more details on high-availability / fault tolerant clusters, visit www.senecadata.com/products/server_nec.aspx.
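To make the failover concept concrete, the sketch below shows the kind of heartbeat-and-takeover logic an HA pair relies on. It is only an illustration in Python, not Seneca Data's or NEC's implementation (the Express5800 detects faults in hardware); the node address, port, and timeout values are assumptions.

import socket
import time

# Illustrative settings only - a hardware fault tolerant system such as the
# NEC Express5800 performs fault detection in hardware, not in a script.
PRIMARY_ADDR = ("primary-node", 9000)   # hypothetical heartbeat endpoint
HEARTBEAT_INTERVAL = 1.0                # seconds between probes
MISSED_LIMIT = 3                        # missed probes before failover

def primary_is_alive() -> bool:
    """Return True if the primary node answers a TCP heartbeat probe."""
    try:
        with socket.create_connection(PRIMARY_ADDR, timeout=1.0):
            return True
    except OSError:
        return False

def promote_to_primary() -> None:
    """Placeholder for taking over the service (claim the virtual IP, start apps)."""
    print("Primary unreachable - standby node taking over the workload.")

def standby_loop() -> None:
    missed = 0
    while True:
        if primary_is_alive():
            missed = 0
        else:
            missed += 1
            if missed >= MISSED_LIMIT:
                promote_to_primary()
                return
        time.sleep(HEARTBEAT_INTERVAL)

if __name__ == "__main__":
    standby_loop()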
Load-balancing Clusters
Load-balancing Clusters (a.k.a. server farms) operate by having the entire workload come through one or more load-balancing front ends, which then distribute computing tasks to a collection of back-end servers. A commonly used free software package for Linux-based load-balancing clusters is available at www.linuxvirtualserver.org. Most Seneca Data Nexlink Load-balancing Clusters use Linux.
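As a rough sketch of what the load-balancing front end does, the Python snippet below hands each incoming request to the next back-end server in round-robin order. The server addresses are illustrative; production packages such as Linux Virtual Server implement this (and smarter scheduling) in the kernel.

from itertools import cycle

# Hypothetical back-end web servers sitting behind a cluster virtual IP.
BACK_ENDS = ["111.111.111.1", "111.111.111.2", "111.111.111.3"]

# Round-robin scheduling: each new request goes to the next server in turn.
_next_server = cycle(BACK_ENDS)

def pick_back_end() -> str:
    """Choose the back-end server that should handle the next request."""
    return next(_next_server)

if __name__ == "__main__":
    # Ten requests spread evenly across the three servers.
    for request_id in range(10):
        print(f"request {request_id} -> {pick_back_end()}")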
[Figure: A conventional system vs. the NEC Express5800 fault tolerant system - dual module redundancy, multi-path I/O, and CPU lockstep with fault detection and isolation across mirrored I/O and processing subsystems; no single point of failure, zero switchover time, single software image]
[Figure: Network load-balancing cluster - a cluster virtual IP (111.111.111.10) distributing requests across web servers 111.111.111.1, .2, and .3, which serve HTML, ASP, ASP.NET, and COM content backed by a central database]
High-performance Computing (HPC) Clusters
HPC Clusters provide increased performance by splitting a computational task across many homogeneous nodes in a cluster and working on it in parallel. HPC Clusters are most commonly used in scientific computing, where end users write programs specifically designed to exploit the parallelism the cluster provides. Seneca Data works with a number of different government, higher education, and enterprise customers to design and build HPC Clusters.
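To show the programming model, here is a minimal sketch using the open-source mpi4py bindings (one common way to write MPI programs; nothing about it is specific to Seneca Data clusters). Each process computes a partial result on its own slice of the problem, and an MPI reduction combines the pieces.

# Launch with, for example:  mpiexec -n 4 python sum_squares.py   (file name is illustrative)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's index within the job
size = comm.Get_size()   # total number of parallel processes

N = 1_000_000

# Each rank sums the squares of its own slice of 0..N-1.
local_sum = sum(i * i for i in range(rank, N, size))

# Combine the partial sums on rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"sum of squares below {N}: {total}")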
Grid Computing Clusters
Grid Clusters are similar to HPC Clusters; the key difference between grids and traditional clusters is that grids connect collections of computers which do not fully trust each other, and hence operate more like a computing utility than like a single computer. In addition, grids typically support more heterogeneous collections of systems than are commonly supported in clusters.
Microsoft Compute Clusters
Windows Compute Cluster Server 2003 is a cluster of servers that includes a single head node and one or more compute nodes. The head node controls and mediates all access to the cluster's resources and is the single point of management, deployment, and job scheduling for the compute cluster. Windows Compute Cluster Server 2003 uses the existing corporate Active Directory infrastructure for security and account management, and supports overall operations management with tools such as Microsoft Operations Manager 2005 and Microsoft Systems Management Server 2003.
[Figure: Typical Windows Compute Cluster Server 2003 network - a head node and compute nodes on a private network with an MS-MPI interconnect, joined to the public (corporate) network alongside Active Directory, file, MOM, and mail servers and user workstations]
Configuring Windows Compute Cluster Server 2003 involves installing the operating system on the head node, joining it to an existing Active Directory domain, and then installing the Compute Cluster Pack. When the Compute Cluster Pack installation completes, it displays a To Do List page showing the remaining steps needed to finish configuring the cluster: defining the network topology, configuring Remote Installation Services (RIS) with the Configure RIS Wizard if you are using RIS to deploy compute nodes automatically, adding compute nodes to the cluster, and configuring cluster users and administrators.
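Once the cluster is configured, users hand work to the head node's scheduler. The sketch below wraps the Compute Cluster Pack's job command-line tool from Python; the exact flags and the application path are assumptions recalled from the CCS 2003 documentation and should be verified against it.

import subprocess

def submit_mpi_job(executable: str, processors: int) -> None:
    """Ask the head node's job scheduler to run an MPI program on the compute nodes."""
    # Assumed CCS 2003 syntax: "job submit /numprocessors:N mpiexec <app>".
    cmd = [
        "job", "submit",
        f"/numprocessors:{processors}",
        "mpiexec", executable,   # MS-MPI starts the program on the allocated nodes
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Hypothetical application path - replace with your own MPI executable.
    submit_mpi_job(r"\\headnode\apps\myapp.exe", processors=4)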
Nexlink High-performance Computing (HPC) Cluster Offering
Nexlink HPC Clusters, manufactured by Seneca Data, are tailor-made to customer requirements, offer maximum system performance, and are on the cutting edge of innovation and design.
As a custom HPC Cluster manufacturer, Seneca Data delivers solutions tailored for diverse market segments and industries by offering:
• Industrial cluster hardware engineering and manufacturing in our 40,000 sq ft, ISO 9001:2000 compliant facility
• Stylish marketing, branding, and labeling design
• Simplified ordering, build-up, testing, qualification, and staging
• Comprehensive delivery, setup, and post-purchase service options
Cluster Hardware Engineering
Seneca Data Sales Engineers can specify any/all hardware/software requirements, such as:
• Systems Hardware Components
  - Intel or AMD processors
  - Graphic adapter / GPU requirements
  - 1U to 8U system chassis
  - Interconnectivity: 1, 2, or 3 NIC ports; InfiniBand HCA
• Comprehensive System Software Requirements
  - Security considerations: port manipulation, services enabling/disabling
  - Node-naming conventions
  - Storage mounting
  - Network file system administration
  - Access considerations: user access control
• Storage Considerations
  - Storage requirements per head node (short term and long term)
  - Storage requirements per compute node
• Rack Hardware Configuration
  - Rack height requirements (30U to 46U racks)
  - Rack depth requirements
  - Rack power requirements (single, dual, triple, quad power)
  - Rack color – front, back, sides
  - Rack labeling – front, back, and sides
• Rack Components
  - Power distribution units per rack
  - Cover-plates needed
  - Cabling
• Rack Management Options
  - Out-of-band management (IPMI, SRENA)
  - Intelligent power
  - KVM/IP
Marketing / Branding / Labeling Design Specifications
Seneca Data can assist with:
• Marketing-related documents - training guides, user manuals, tech-specs, and white-papers
• Pre-sales collaboration
• Branding - boxes, back-plate, front-bezel, silk-screening, and documentation
Pre-order Build-up / Testing / Qualification and Staging of Clusters
By nature, HPC Clusters demand the latest advances in hardware to accommodate future expandability. Seneca Data helps mitigate technology concerns by pre-building, testing, and qualifying designated hardware. Upon completion, staging services are provided until the end user’s environment is prepared to receive the hardware.
These services include, but are not limited to:
• Operating system benchmarks on specific hardware
• Driver optimization on all hardware
• Management and utilities suite installation
• Customer login, testing, and pre-inspection prior to delivery
• Staging prior to delivery
• Engineer onsite during install
Typical Cluster Related Sales Engineering Considerations
Site Location Considerations
• Is site building built/completed
• Does site have adequate power
• Does site have adequate air conditioning
• Is cluster going into first floor of building
• Does building have an elevator
• Is elevator door wider than 48"
• Does site have a raised floor with easy access for cabling
• Can cluster be rolled to final location in building
• Will cluster be moved to another site later
• Will cluster be phased out after a certain date
• Will cluster be in a limited access space
• Will cluster be in a closed access space
Rack Cabinet Related Considerations
• Is there a height restriction on cabinet
• Is there a depth restriction on cabinet
• Should back door of cabinet be perforated (screen-holed)
• Should front door of cabinet be glass
• Should front door of cabinet be labeled
• Should back door of cabinet be labeled
• Does customer need additional open space per rack
• Should cabinet be a color other than black
System Hardware Related Considerations
• How many total compute nodes
• Does customer prefer Intel processors
• Does customer prefer AMD processors
• Does customer need a Graphics Processing Unit
• Does customer prefer SAS drives
• Does customer prefer SCSI drives
• Does customer prefer SATA drives
• Does customer need 1, 2, or 3 NICs (don't count InfiniBand)
Security Related Considerations
• Does customer want SSH disabled or enabled
• Does customer want Telnet disabled or enabled
• Does customer want ports disabled
• What naming convention is needed on nodes
Storage Related Considerations
• Does customer want a storage array with the cluster
Management Related Considerations
• Does customer want serial Out-of-Band Management
• Does customer want KVM/IP Management
• Does customer want Intelligent Power - cold reboot ability
Conclusion
Seneca Data is an established manufacturer of compute clusters. Our expertise in design, manufacturing, and logistics makes us an ideal partner for single-build projects or contract manufacturing engagements. For more information about Seneca Data and our cluster offering, visit us at www.senecadata.com or www.nexlink.com.
Sources:
Wikipedia, Microsoft® Compute Clusters, NEC® EXPRESS5800, ISTARUSA®, RED HAT®, Intel®