
WHITE PAPER

The Value of Smarter Datacenter Services

Sponsored by: IBM

Michelle Bailey, Rob Brothers, Katherine Broderick

May 2011

IDC OPINION

The next five years in IT will likely be some of the most exciting and demanding for datacenter managers and the office of the CIO. In this post-recession period, organizations will be setting in place strategies that will expand their core business while mining for new market opportunities. For many, this business transformation will include new product development, mergers and acquisitions, geographic expansion, cross-selling opportunities, and partnerships. Technology will be a critical enabler for these new initiatives, and a diverse and efficient datacenter strategy will be essential. While the emphasis has been on consolidation and cost reduction during the economic downturn, as IT organizations look to the future, success will be built on streamlining processes, reducing complexity, and improving time to market. IT organizations will be responsible for real IT transformation in the coming years and not simply the cost reduction of the past. They will have to strike a balance between integrating new applications and multiple infrastructure delivery models and continuing to support their already substantial IT portfolio.

The future datacenter will be a highly automated set of standardized infrastructure where applications and data will be deployed and provisioned on systems and in sites based on workload demand. New cloud-based technologies and methodologies will expand the options for IT organizations to source hosting or outsourcing providers for software, platforms, infrastructure, and datacenters at varying price points and locations. The backdrop to all of these choices is the physical backbone of the datacenter, otherwise known as facilities. Power and cooling will need to be flexible enough to keep up with automated, virtualized, dynamic IT while keeping in mind capacity limitations, efficiency, and budgets.

With these new demands being placed on IT and facilities, it is not surprising that in a recent IDC survey of over 250 IT managers, more than one in five reported that their IT staff is not skilled enough to implement a private cloud. In another IDC survey of over 400 IT decision makers, lack of in-house IT expertise was listed as a top challenge to virtualization by over 22% of respondents. These two data points indicate that for many IT organizations, the journey toward adding incremental value to the business will require external help. In addition, the large number of forthcoming sourcing options and technology decisions will challenge IT organizations to balance their need to maintain control against the risk of inhibiting innovation. As time to market becomes a differentiator in the economic recovery, speed of deployment must be balanced against security, availability, and service levels across the IT organization.


Achieving this equilibrium will require IT organizations to plan carefully, conduct ongoing monitoring and measurement, and draw on extensive experience.

IN THIS WHITE PAPER

This paper provides an overview of how IT organizations can optimize across the entire life cycle of their datacenters and build a strong foundation for future IT operations. Opportunities for optimization remain strong, including increasing efficiency on both the IT side and the facilities side, improving user support and protection, and increasing the flexibility of the datacenter to be more responsive to the business. With so many opportunities for CIOs and senior IT decision makers to focus on enabling business improvements, this paper also includes suggestions on where to start while keeping in mind a long-term vision.

SITUATION OVERVIEW

Today, the most significant challenge for IT organizations is meeting the needs of the business with their limited resources. As business goals change from cost cutting to innovation and growth, IT organizations will have to rethink their datacenter strategy. Many companies have already extracted significant cost reduction through extensive consolidation, virtualization, and standardization programs and in doing so have built credibility with the business. In addition, this improved architecture has laid the foundation for the next phase of the datacenter, which will place automation and new delivery models at the heart of supporting business change without incurring significant cost increases or lowering service levels.

These new delivery models speed time to market by decreasing deployment and procurement time while increasing availability. To achieve these results and maintain service levels, IT needs to increase predictability. IT needs to know the amount of resources available (both on-premise and off-premise) along with resource utilization for the past, present, and future. To obtain this predictability, many leading IT departments are using sensors, software, and hardware to gather information about their datacenter design and ongoing operations. This information alone, unfortunately, is not enough to make the ebbs and flows of a datacenter predictable. The next step is to perform analytics on this large set of disparate data and optimize the datacenter for efficiency, on both the IT side and the facilities side.
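As a minimal sketch of the first analytic step this implies (the hostnames, samples, and headroom threshold below are hypothetical, not any vendor's tooling), utilization telemetry can be rolled up per host and screened for shrinking capacity headroom:

```python
from statistics import mean

# Hypothetical utilization samples (percent CPU) collected from
# datacenter sensors/agents, keyed by host. A real deployment would
# pull these from a monitoring system, not a literal dict.
samples = {
    "host-01": [62, 71, 68, 74, 80],
    "host-02": [35, 33, 40, 38, 36],
    "host-03": [85, 88, 91, 90, 93],
}

HEADROOM_THRESHOLD = 20  # percent; an assumed planning policy

def capacity_report(samples):
    """Flag hosts whose average utilization leaves too little headroom."""
    report = []
    for host, series in samples.items():
        avg = mean(series)
        headroom = 100 - avg
        report.append((host, round(avg, 1), headroom < HEADROOM_THRESHOLD))
    return report

for host, avg, at_risk in capacity_report(samples):
    print(f"{host}: avg utilization {avg}% -> {'AT RISK' if at_risk else 'ok'}")
```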

For many CIOs and IT organizations, a lack of insight into this powerful information is a constraint they are not even aware of. Further, in IDC's experience, even where this information is available, many are unsure of how to leverage the data or are concerned about the risk of change, and so no decision is made. Inaction is often the by-product of a fear that a single change will cause waves across the datacenter floor, sometimes resulting in downtime. This combination of not knowing where to start and the fear of making waves causes many IT organizations to overlook the many opportunities present in today's datacenters for increased efficiency, improved reliability, and increased flexibility.


Opportunities for the IT Organization

Today's datacenters look very little like their predecessors, and this is attributable to how much CIOs and IT organizations have worked to attain efficient, dependable, agile datacenters for the business. The evolution from monolithic warehouses for IT to modular designs optimized for IT has been a long road worth taking. Datacenters of the future will evolve even further to address efficiency, on both the IT side and the facilities side, simplifying management while maintaining uptime and speeding time to market. To make smart, effective change in the datacenter, CIOs need to keep in mind multiple goals simultaneously. They need to balance the goals outlined in the following sections while moving forward because the datacenter today is so interconnected.

IT Infrastructure Resource Efficiencies

Many datacenter managers have already made significant inroads in increasing IT resource utilization. IT managers have virtualized, increased the number of logical servers per system administrator, and deployed tools in datacenters to manage the environment more effectively. According to IDC's recent virtualization survey, in 2011, one in five physical servers shipped will be virtualized. From the workload view, that equates to 65% of all workloads running on a virtualized physical host. The difference between the physical view and the workload view is due to the increased density that virtualization makes possible. In 2011, IT managers will deploy over seven virtual machines (VMs) per host.
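Those two figures are consistent under a simple model: if roughly 20% of shipped servers are virtualization hosts carrying about seven VMs each, and each remaining server runs a single workload (our simplifying assumption, not IDC's), the virtualized share of workloads comes out near 65%:

```python
# Reconciling "1 in 5 servers virtualized" with "65% of workloads
# virtualized," assuming each non-virtualized server runs one workload.
virtualized_share_of_servers = 0.20
vms_per_host = 7              # IDC: "over seven VMs per host" in 2011
workloads_per_bare_metal = 1  # our simplifying assumption

virtual_workloads = virtualized_share_of_servers * vms_per_host            # 1.4
physical_workloads = (1 - virtualized_share_of_servers) * workloads_per_bare_metal  # 0.8

share = virtual_workloads / (virtual_workloads + physical_workloads)
print(f"{share:.0%} of workloads run on virtualized hosts")  # ~64%
```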

This new, highly virtualized world requires changes not only to the servers but also to the storage, networking, process, and people sides of datacenter operations. As shown in Figure 1, virtualization has driven an increased need for management consolidation and cost control. Virtualization has decreased server spending and decreased the cost of power and cooling systems, but it has increased management costs greatly. This is largely because today's systems administrators handle virtual machines in much the same way they handle physical machines. The explosion of virtual machines depicted in Figure 1 by the red line exposes the virtualization management gap that has come about in many of today's datacenters. On the storage side, the explosion in virtualized computing requires smart, agile, highly utilized storage resources. Networking needs to be able to see into servers, not only at the physical level but also at the virtual level. This is essential for deploying policies for VMs over the network.

Power and cooling, historically a stagnant and inflexible part of the datacenter, needs to stretch and become malleable to keep up with today's virtual resources. As the number of VMs increases and decreases and as the machines move physically throughout the datacenter, it is essential for power and cooling to change airflows, power draws, and temperatures accordingly. The stagnancy on the facilities side and the increasing complexity on the IT side make the results of a recent IDC survey unsurprising. As mentioned earlier, in a survey of over 400 IT decision makers, lack of in-house IT expertise was listed as a top challenge to virtualization by over 22% of respondents. This means that about one in every five IT departments lacks internal expertise regarding virtualization. The need for external help with such a crucial technology for the future of the datacenter is real. External datacenter service providers can help IT departments make the most of their resources in terms of process improvement, better management through tools and software, more accurate analytics, and an objective point of view.

FIGURE 1

New Economic Model for the Datacenter

[Line chart, 1996-2013: worldwide spending on servers, power and cooling, and management/administration (customer spending, $B, $0-$300) overlaid with the physical and logical server installed base (M, 0-80). The spread between the logical and physical installed base marks the virtualization management gap.]

Source: IDC, 2011

To address these growing concerns around management, power, cooling, and IT infrastructure in the datacenter, many datacenter managers are charting a course toward cloud computing. This long-term process will not be completed summarily but will happen through a series of stages. Figure 2 depicts these stages and where most datacenters and IT organizations fit today along their journey toward the private cloud. The stages are:

• Pilot. Fifteen percent of datacenter managers are in this stage of testing virtualization. Less than 10% of their servers are virtualized, and they are not yet familiar with the virtualization management gap problem depicted in Figure 1.

• Consolidation. The majority of IT organizations are in the consolidation phase. These IT organizations have experience with virtualization now and are seeing savings in terms of physical server cost, power, cooling, and space. Although their production environment runs on virtual IT assets, only ad hoc policies are in place for management. These organizations are starting to see increased virtual machine deployments and increased management costs.


• Assured computing. One in four CIOs is in the assured computing stage. The problem of management and visibility has been recognized and is starting to be addressed. The IT processes and policies are partially integrated and standardized, and VMs are becoming more mobile and reliable. Production-level, mission-critical workloads are being run in this virtual environment.

• Private cloud. The virtualization management gap has been addressed in this stage. Processes, policies, and automation tools are in place to make administering a virtual server less cumbersome than managing a physical one. Only 5% of CIOs are in this position, but many are headed in this direction.

FIGURE 2

Virtualization Maturity

Source: IDC, 2011

The beginning stages of this maturity curve, the pilot stage and the consolidation stage, present hard cost savings in terms of physical IT infrastructure, power, and cooling. In the later stages, the savings are largely soft costs, such as management and downtime, and are measured in terms of total cost of ownership (TCO). Moving along this curve requires IT directors to focus not just on the singular goal of increasing IT utilization but also on balancing reliability and flexibility.

Figure 2 detail, by stage (Pilot | Consolidation | Assured Computing | Private Cloud):

Staff Skills: Little or no expertise | Hands-on expertise; some formal training | Formal training; certification desirable | Certification required
Technology & Tools: Simple static partitions | Simple mobility (manual and off-hours); matched application pairs | Portable applications (automated failover); CMDB implemented | Policy-based automation; service management; life-cycle management; self-service delivery
Financial Impact: No substantial financial impact | Measurable hard cost savings (consolidation, power/real estate) | Justified TCO savings (business continuity) | Variable costs recognized or chargeback models established
IT Process & Policies: Skunk works | Ad hoc | Partially integrated, partially standardized | Fully integrated, fully standardized
Line of Business: Hidden | Revealed | Transparent | Engaged in governance process
Application Usage: Test/development | Production: noncritical | Production: business critical | Production: service profiles & catalogs
% of Customers: 15% | 55% | 25% | 5%
Average VM Density: 4 | 6 | 10 | 35
Experience: 9-12 months | 9 months-2 years | 1.5-3 years | 3-5 years
% Virtualized Servers: <10% | 25% | 50% | 80%


As stated earlier, a recent IDC survey found that of 250 IT manager respondents, more than one in five believe their IT staff is not skilled enough to implement a private cloud. It is clear that additional, external help will be needed to move along this virtualization maturity curve. External datacenter service providers can help IT with an objective viewpoint, advanced analytics (to see what the real issues are versus perceived issues), and years of experience in multiple, diverse datacenter environments.

Improving Storage Efficiencies

IT organizations are being pulled in multiple directions simultaneously: on one side, pressure for increased efficiency, flexibility, and availability through evolved deployment models; on the other, the growing complexity of the IT environment and shrinking budgets in already extremely lean organizational structures. Nowhere is this juxtaposition more clear than in the world of storage. IDC expects storage capacity in enterprises to soar through 2014 (see Figure 3). This growth is not only in the more familiar structured data but also in unstructured data. Businesses are already growing reliant on mining and analyzing this structured and unstructured data for improved intelligence, competitiveness, and financial results. It is difficult to imagine how IT will keep this growing, complex mass of data available, let alone gain value from it.

FIGURE 3

Worldwide Enterprise Storage System Capacity Shipped, 2008-2014

[Bar chart: petabytes (PB) shipped annually, 2008-2014; y-axis 0 to 80,000 PB.]

Source: IDC, 2011

According to IDC's latest enterprise storage forecast (Worldwide Enterprise Storage Systems 2010-2014 Forecast Update: December 2010, IDC #226223, December 2010), the quantity of petabytes shipped is expected to increase at a compound annual growth rate (CAGR) of 50% over the course of the next four years (refer back to Figure 3 for additional detail).
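That growth rate compounds dramatically; a quick back-of-the-envelope calculation (a simple compounding formula, not IDC's model) shows shipped capacity roughly quintupling over the window:

```python
# Compound growth at a 50% CAGR over four years.
cagr, years = 0.50, 4
multiple = (1 + cagr) ** years
print(f"Capacity shipped grows ~{multiple:.1f}x over {years} years")  # ~5.1x
```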


The ongoing management of enterprise storage systems will always play a critical role in any IT environment. With businesses moving from standalone systems to virtualization, and from local applications to cloud, the task of maintaining these devices internally is becoming increasingly complex for IT staff. It is IDC's opinion that, because of the complexities and proprietary nature of storage subsystems, utilizing experts who work with these systems on a regular basis and who bring industry best practices for deploying and supporting these arrays is the best way to get the most value, performance, and reliability from these IT assets.

Datacenter Flexibility to Adapt to Changes in Demand

Datacenters of the future will be built with modularity in mind. These buildings will really be big computers that adapt and change their operations to respond to the needs of the business. This flexibility is achieved through predictable, repeatable designs that can be easily monitored and measured during operations.

To achieve these modular, adaptable designs, IT organizations will construct greenfield (new implementation) and retrofit datacenters. Both will achieve more efficient equipment, better standardization, more evolved processes, longer life cycles, and better overall TCO than datacenters of the past. To attack these issues and make the most of the IT organization's investment, CIOs need to consider their organization's innate abilities and possibly get external help. It is sometimes difficult to see the forest for the trees and identify the real sources of problems. This is an essential stage for choosing the priorities for new or retrofitted datacenters. With IT budgets not getting any larger, it is important for datacenter design teams to identify what is really important to future designs and operations. These priorities need to be identified, and their value needs to be quantified in terms of downtime, dollars, and people. This quantification increases the likelihood of these priorities surviving strict budgets. These strategic design choices are difficult, but they can set up the IT organization for success down the road.

In addition to achieving flexibility in the on-premise datacenter, IT organizations are increasingly looking to off-premise solutions for flexibility and the freedom to focus on critical workloads internally. These public cloud software-as-a-service (SaaS) solutions require IT organizations to prioritize what should be moved to the cloud and what should remain on-premise. In platform-as-a-service (PaaS) solutions, new frameworks for application development need to be worked out (to be easily portable to the cloud and back). In the case of infrastructure-as-a-service (IaaS) solutions, capacity planning needs to extend beyond the four walls of the datacenter onsite and out to the cloud.

The options for increasing savings, flexibility, and reliability abound. In fact, many CIOs find themselves with so many options that it's difficult to know where to begin. In many cases, the help of an outside partner is necessary to determine how to go about making a change and how to make sure the solution is effective. The use of datacenter services can be a wise choice for many datacenter managers at this fork in the road. This presents yet another choice, which is who to ask for help. There are quite a few factors to consider when evaluating who should be IT's partner in the datacenter.


FACTORS TO CONSIDER WHEN EVALUATING DATACENTER SERVICES

As enterprise IT departments struggle with the challenges of maximizing the performance of their IT landscape, many need help from external datacenter service providers to facilitate that process across the IT ecosystem. These services help datacenter managers identify opportunities where they can succeed today and set themselves up for more success tomorrow. After many years of covering the datacenter environment, IDC has identified the following best practices for IT departments that are evaluating high-quality services for optimizing the datacenter.

Key Potential Success Factors

Putting the Puzzle Pieces Together

Extensive knowledge across all aspects of the IT landscape — from facilities to IT to the customization required for specific solutions — is a critical consideration when selecting a datacenter service provider. Knowledge of all aspects of the datacenter has never been more pertinent. Today, the datacenter is interconnected, mobile, and dynamic, with VMs moving, tools automating, and power and cooling flows changing frequently. Breadth of knowledge is vital from storage to computer room air conditioners (CRACs). At the same time, datacenter managers should not give up depth of expertise for breadth.

System-level knowledge is just as important as understanding the connections between systems. Both systems and datacenter operations are potential opportunities for optimization, and both are potential sources of downtime. IT managers need a partner in the datacenter that understands not only how complicated these environments are but also how to simplify daily operations. The real value of help in today's datacenter lies in making operations appear automatic and simple while simultaneously keeping track of the physical backbone (infrastructure and facilities).

Thinking Globally

Datacenter service providers with experience across multiple geographies, multiple datacenter environments, and various stages along the virtualization maturity curve are invaluable. No two datacenters are alike, and IT organizations should be looking for a datacenter service provider that has seen it all.

Experience across products and geographies adds value to datacenter services in two ways. First, datacenter service providers with experience working in multiple environments understand the common dilemmas faced by datacenter managers and the solutions that are time-tested and work. Second, these service providers work with clients at all stages of the virtualization and cloud maturity curves. They can help the datacenter go from the earliest stages of adoption to the late stages of automation and cloud usage models. This can be done in one large project, one small project, or a series of smaller projects because of the large product lines available from datacenter service providers with experience and expertise.


Being Credible Quickly

The growing complexity of IT environments requires a detailed, coordinated approach to identify, diagnose, and resolve specific issues in the IT infrastructure. Too often, today's datacenter systems and operations are documented in disparate spreadsheets and workbooks, with little method in place for continuity or for replicating what works. Bringing in a datacenter service provider with systematic, time- and customer-tested strategies specific to a given datacenter environment will have positive effects in the long term. This approach also makes moving along the evolution curve that much easier because as the business and IT scale, systems, datacenter capacity, and operating procedures can scale as well.

Delivering Rapid Return with a Strategic Goal in Mind

Datacenter managers need to choose projects with quick ROIs while keeping in mind the 15- to 20-year life cycle of their brick-and-mortar datacenter. These projects need to have a quick but, more importantly, lasting payoff for IT in terms of efficiency and availability. These "quick wins" are great on their own and also in the beginning stages of larger projects. These up-front successes pave the way with business units and executives for further optimization. At an organizational level, they allow IT to demonstrate its relevance and ability to deliver for the business. The rapid returns of early projects are crucial to the long-term viability of budgets and approval for strategic shifts and initiatives.

Understanding the Importance of Analytics Throughout the Datacenter Life Cycle

Datacenter analytics is an emerging field within the IT organization. While customers have long had a surplus of information on their server, storage, and networking devices, as well as mechanical and electrical equipment, the ability to capture this data for meaningful analytics that provides a holistic view of the datacenter remains aspirational for many.

In IDC's experience, information capture on systems and facilities is the beginning of any IT transformation project and, until recently, has been an extremely manually intensive task. With the advent of virtualization comes a new wave of systems management tools that are enabling more automated data capture across the entire datacenter. This information is continuously captured, increasingly in real time, from a variety of sources, including statistics on utilization, deployment and provisioning tools, orchestration and governance practices, health monitoring systems, as well as failover and disaster recovery activities.

This large body of information opens the possibility of higher-level analytics that can intelligently provide insight across the entire life cycle of the datacenter, optimizing both day-to-day tasks and the ongoing operations of the entire facility. Predictive analytics makes it possible to take a wide variety of disparate data, sort out what is really relevant, and set in place accurate, long-range planning strategies. Imagine a datacenter where a site-based outage is predicted before it happens by understanding system, application, and power dependencies along with historical information on system, application, and utility performance. From such an incident, analytics could suggest a new architecture or blueprint for the datacenter.
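As a toy illustration of that kind of dependency-aware prediction (the components and availability figures below are invented, and failures are assumed independent), the chance an application site stays up can be estimated by multiplying the availability of everything it depends on:

```python
# Toy site-outage estimate: an application is up only if every
# component it depends on is up. Probabilities are illustrative.
availability = {
    "utility-power": 0.9995,
    "ups": 0.9990,
    "crac-unit": 0.9980,
    "host-cluster": 0.9992,
}

# The application (hypothetically) depends on all four components.
dependencies = ["utility-power", "ups", "crac-unit", "host-cluster"]

p_up = 1.0
for component in dependencies:
    p_up *= availability[component]  # assumes independent failures

print(f"Estimated application availability: {p_up:.4%}")
print(f"Implied downtime: {(1 - p_up) * 8760:.1f} hours/year")
```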


The real benefits of this type of analysis are twofold. The first benefit is improved day-to-day operations; the second, and more important, payoff is in using these analytics as part of a feedback loop for continuous improvement. Datacenter analytics not only can be part of a cycle of daily improvements but also can be recalibrated after each change to produce a new set of analytics, continuously driving an enhanced datacenter environment and a more predictable, repeatable service. This type of continuous improvement should of course be tied to business metrics so that the datacenter is fully aligned with the organization. These metrics include projected revenue growth, profit margins, customer service requirements, and new business or regional expansion.

These New Solutions Require Support

In many cases, enterprises assume that because they have virtualized their datacenter environment, they will not need support for the infrastructure and software. This most certainly is not the case. The main reason for this is that the complexity of these configurations can cause even the savviest end users to need help when things go wrong. Whether it be a hardware or software issue or a user error, when the servers are running mission-critical workloads, they require external support services.

Approach to Support and Deployment Needs to Change

Despite the fact that enterprises do need to support their environments, how they support the [virtualized] environment does need to be different from the traditional server support model. Because their mission-critical data will be on fewer servers, when something goes wrong, it can have a broad impact across many departments in the organization. The ability to contact a vendor that has intimate knowledge of the environment will be crucial. IDC interviewed a customer at an IDC virtualization forum that was in the process of "devirtualizing" its environment at a significant cost because it did not go through a robust planning process. As a result, the customer virtualized several applications that did not work well together on the same physical server, which led to significant application performance problems. The situation deteriorated to such an extent that the customer determined that the best remedy was to devirtualize and then start again. These issues and others are significant, and IDC believes that enterprises need to enlist organizations that have performed complex virtualization implementations.

Choice of Support Vendor

Virtualized datacenters require a vendor that can support the entire environment rather than just one technology asset. As a result, selecting a vendor that has a robust support portfolio and can look across all of the assets that are required to support the business processes becomes increasingly critical in a highly virtualized environment.

IBM SERVICES OFFERINGS

IBM has all of the factors that IT organizations should consider when choosing a partner for datacenter operations and design services: breadth of offerings, deep expertise, a systematic approach, an experienced team, strong support, and industry-leading analytics. IBM's particular strengths are the breadth of its offerings and the ability to deliver a holistic set of services that identify interdependencies across the IT portfolio and provide analytics that can optimize across the entire life cycle of the datacenter. IBM addresses the entire datacenter life cycle across IT and facilities. Key services include:

Extend

IBM extends the life of the datacenter and the life cycle of IT assets with server virtualization, storage automation, and middleware optimization. These services allow IT managers to defer constructing a new datacenter or procuring new IT equipment. At the same time, these services increase the efficiency of the systems already in place in terms of power, cooling, space, and personnel time.

• Server Optimization and Integration Services. In terms of virtualization, most datacenter managers have already virtualized the "easy workloads" and do not know how to take on more complicated virtualization projects in terms of time, resources, and skill sets while still delivering a strong ROI for the business. IBM services work with datacenter managers to virtualize complex Wintel workloads. IBM utilizes an outside tool and partner called CiRBA to automatically collect workload characteristics and interdependencies. IBM then runs profiling to determine which workloads are good virtualization candidates; the profiles break down into six patented workload scenarios. IBM virtualizes the appropriate workloads using a standard factory model for faster implementation and repeatability. This service leaves IT with efficiencies from consolidated physical servers, a quick virtualization plan, and lowered power and cooling costs. (A hypothetical sketch of this kind of profiling appears after this list.)

• Intelligent Storage Service Catalog. The rapid explosion of structured and unstructured data predicted by IDC will lead to a storage management conundrum for IT organizations. IBM can automate storage provisioning to speed time to market and decrease management costs. This IBM service also frees up storage architects so they can focus on adding incremental value to the business rather than maintaining and managing existing storage. The Intelligent Storage Service Catalog defines common application-based standards, maps the standards to the appropriate storage, and builds the corresponding catalogs and requests. The process is policy based for ease of repeatability. This service can increase storage utilization, decrease management time, and decrease the demand for tier 1 storage. (A sketch of policy-based tier mapping appears after this list.)

• Middleware Design and Strategy Services. The shift toward application rationalization is happening in many datacenters as IT organizations take a hard look at efficiencies, virtualization opportunities, and the amount of time they spend on management (not innovation). This process is important in the middleware environment as well. IBM combines its performance and relationship analysis multiple error diagnostic (ParaMedic) tool, which identifies abnormal performance bottlenecks from unusual CPU utilization, with performance and capacity evaluation services (PACES). PACES analyzes and optimizes workloads by looking at Web response times. This process enables IBM and the IT organization to model performance outcomes before the actual implementation. In the end, IBM uses these tools to speed middleware consolidation and optimization so that IT managers have what they need and are not managing what they do not.
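To make the profiling idea concrete: CiRBA's analysis and IBM's six patented workload scenarios are proprietary, but a hypothetical screen over collected workload characteristics might look like the following (all fields and thresholds are our assumptions):

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    peak_cpu_pct: float      # observed peak CPU utilization
    licensed_to_host: bool   # licensing tied to physical hardware?
    latency_sensitive: bool  # hard real-time requirements?

def is_virtualization_candidate(w: Workload) -> bool:
    """Hypothetical screening rules, not IBM's patented scenarios."""
    if w.licensed_to_host or w.latency_sensitive:
        return False
    return w.peak_cpu_pct < 60  # leave headroom for consolidation

inventory = [
    Workload("intranet-web", 25, False, False),
    Workload("trading-engine", 70, False, True),
    Workload("legacy-erp", 40, True, False),
]

for w in inventory:
    verdict = "candidate" if is_virtualization_candidate(w) else "keep physical"
    print(f"{w.name}: {verdict}")
```

Similarly, the catalog concept of mapping application requirements to storage tiers can be sketched as a declarative policy lookup (tier names and rules are again illustrative, not IBM's actual catalog):

```python
# Map application storage requests to tiers via declarative policy,
# so provisioning is repeatable rather than hand-crafted.
POLICIES = {
    # (needs_high_iops, needs_replication) -> storage tier
    (True,  True):  "tier1-replicated-ssd",
    (True,  False): "tier1-ssd",
    (False, True):  "tier2-replicated",
    (False, False): "tier3-capacity",
}

def provision(app: str, needs_high_iops: bool, needs_replication: bool) -> str:
    tier = POLICIES[(needs_high_iops, needs_replication)]
    print(f"{app}: provisioning on {tier}")
    return tier

provision("oltp-db", needs_high_iops=True, needs_replication=True)
provision("archive", needs_high_iops=False, needs_replication=False)
```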


Rationalize

IBM's services help datacenter managers perform a portfolio rationalization of their datacenter. Many environments today are reactionary and have not established what assets, management, and designs are necessary to be proactive for the business. IBM's rationalization services include:

• Datacenter Strategy. IBM's datacenter strategy services help businesses balance the goals of budget, availability, and expanding services. The necessity for tools is apparent in many IT organizations: this exercise is not undertaken with much regularity, and the risk of not taking a hard look at overall datacenter strategy is too great to ignore. IBM uses cash flow analysis, outage analysis, and capacity planning tools to set up the datacenter for success. In particular, the capacity planning and resiliency tools are patent-pending, leading-edge tools developed in collaboration with IBM Research. The capacity planning tool provides a new level of predictability that can be used to plan for the next 10-20 years. The tool empowers decision making and improved performance through the use of complex modeling and Monte Carlo simulations to determine the best way to meet the unpredictable demands of datacenter capacity in the future. The datacenter strategy service is useful for datacenter managers wondering where to start while keeping the delicate balance of the datacenter (budget, availability, and expansion) in check. (A generic sketch of the Monte Carlo technique appears after this list.)

• Datacenter Consolidation and Relocation. This patent-pending technology maps the dependencies of all IT assets up to the application level. Analytics for Logical Dependency Mapping (ALDM) is ideal for datacenter relocation or consolidation. ALDM allows datacenter managers to focus on application availability during datacenter moves and consolidation because what runs together unfortunately goes down together. With this technology, the risk is mitigated because the dependencies are known. In a world where the cost of moving a datacenter can sometimes equal or exceed the cost of building a new one, this capability is very valuable. (A sketch of dependency grouping appears after this list.)
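Neither the patent-pending tools nor their models are public, but the Monte Carlo technique the strategy service describes can be sketched generically: simulate many possible demand paths and size capacity to a chosen confidence level (the growth parameters below are invented):

```python
import random

# Monte Carlo sketch of future datacenter capacity demand.
# Assumed model: demand grows ~12% +/- 8% (normally distributed) per year.
CURRENT_KW, YEARS, TRIALS = 500.0, 10, 10_000

def simulate_demand() -> float:
    kw = CURRENT_KW
    for _ in range(YEARS):
        kw *= 1 + random.gauss(0.12, 0.08)
    return kw

outcomes = sorted(simulate_demand() for _ in range(TRIALS))
p50 = outcomes[len(outcomes) // 2]
p95 = outcomes[int(len(outcomes) * 0.95)]
print(f"Median 10-year demand: {p50:,.0f} kW")
print(f"Build to {p95:,.0f} kW to cover 95% of simulated futures")
```

The dependency-mapping idea behind ALDM can likewise be illustrated with a connected-components pass over a discovered dependency graph, grouping the assets that "go down together" (the hosts and edges here are hypothetical):

```python
from collections import defaultdict

# Hypothetical discovered dependencies: app -> things it relies on.
depends_on = {
    "crm-app": ["db-01", "lb-01"],
    "billing": ["db-01", "mq-01"],
    "wiki": ["db-02"],
}

# Build an undirected graph; everything reachable from an asset
# forms the group that must move (or fail) together.
graph = defaultdict(set)
for app, deps in depends_on.items():
    for dep in deps:
        graph[app].add(dep)
        graph[dep].add(app)

def move_group(start: str) -> set:
    """All assets that must move (or fail) together with `start`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return seen

print(sorted(move_group("crm-app")))  # crm-app and billing share db-01
```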

Design

As noted earlier, the physical backbone of the datacenter is often forgotten when these transitions and services come into play. With IBM, this is not the case; it has expertise in datacenter design, construction, and operation from its worldwide hosting and outsourcing businesses. This experience can be brought in to help datacenter managers figure out how to retrofit, expand, or build a new datacenter. The services that help IT organizations go down the path of datacenter capacity expansion include:

• Scalable Modular Datacenter (SMDC). This package is for new datacenter needs in small to midmarket companies that are experiencing capacity, availability, or flexibility limitations. The package includes a preintegrated enclosed rack with cooling, onsite services and consultation, and a power distribution unit (PDU). The greatest value-add here from IBM is the "single throat to choke." With small to midmarket companies sometimes lacking facilities knowledge or staff bandwidth, having a single point of contact to provide project management services and manage other vendors is invaluable. This solution is a great place to start a new datacenter footprint without going through a massive project.


• Portable Modular Datacenter (PMDC). This solution is a great way to add capacity to an existing site, create a new point of presence, increase disaster recovery capabilities, or gain capacity in remote areas. IBM offers these preintegrated datacenters in shipping containers (20 feet and 40 feet long) with facilities included. The specification includes cooling, uninterruptible power supply (UPS), fire suppression, batteries, and remote monitoring. IBM is vendor neutral for IT equipment, although of course it can populate the container with IBM IT systems as well.

• Enterprise Modular Datacenter (EMDC). This IBM service for enterprise clients supports modularity from the first stages of the datacenter build. By building modularity into the datacenter design from the ground up, enterprises avoid costly retrofits down the road and improve flexibility to meet changing business requirements. The EMDC is essentially a "shrink-wrapped," standardized datacenter between 5,000 and 20,000 square feet in size. This approach to enterprise-level datacenter construction provides just-in-time compute for the business without overprovisioning today for tomorrow's computing requirements.

Datacenter managers undertaking the building of a new datacenter have a plethora of choices, and making the correct decision will, in many cases, impact the datacenter for the next 15 to 20 years. This is a difficult position to be in, given the unpredictability of IT's future needs, lack of information, and lack of perspective. IBM datacenter life-cycle cost tools can help rightsize the trade-offs in terms of capital expenditure (capex) and operational expenditure (opex) for different types of cooling, one of the longest-term impact decisions in datacenter design; a stripped-down illustration follows. IBM uses these tools to design the modular datacenters mentioned in this section.
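A bare-bones version of that capex/opex trade-off, with invented dollar figures and no discounting, might compare two cooling designs over an assumed 15-year life:

```python
# Hypothetical life-cycle cost comparison of two cooling designs.
LIFETIME_YEARS = 15

options = {
    # name: (capex $, annual opex $)
    "traditional CRAC": (1_000_000, 400_000),
    "economizer-assisted": (1_600_000, 250_000),
}

for name, (capex, opex) in options.items():
    total = capex + opex * LIFETIME_YEARS
    print(f"{name}: ${total:,} over {LIFETIME_YEARS} years")
# Here the higher up-front capex wins: $1.6M + $3.75M < $1.0M + $6.0M
```

A real life-cycle model would discount future opex and factor in utility rates, climate, and load growth; the point is simply that a higher up-front investment can win over the life of the building.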

Manage

A common problem for datacenter managers today is making more time for their staff to focus on strategically critical projects rather than mundane day-to-day tasks. These day-to-day tasks need to be accomplished to keep the datacenter, IT, and the business running but are not adding incremental value to IT or the business. To solve the problems of today's datacenter and increase flexibility, efficiency, and reliability, IT needs to focus on incremental improvements rather than keeping the ship afloat. The problem is that there are a finite number of IT staff members, so IT managers need a datacenter service provider to accomplish maintenance and day-to-day chores, thereby freeing up internal IT staff to focus on helping the business. IBM's services to help manage the IT environment include:

• Managed Server Services. IBM's Enterprise Server Managed Services provide monitoring and management of the IT infrastructure, including servers, middleware, storage, and databases. IT organizations that utilize this service hand off an essential but not incrementally valuable task, freeing up administrators to innovate, create value-adding services for lines of business, and focus on more mission-critical work. This IBM service is available for System z and System i platforms with local service delivery where offshore delivery is noncompliant. Native language delivery and support are available for Japan, Korea, and China.


• Managed Storage and Data Services. Given the oncoming explosion of data and the storage capacity and management demands being placed on IT, IBM's Enterprise Managed Storage Services are rather timely. This service features flexible, scalable, resilient storage capacity on demand. Disk, archive, backup, and restore management services are available as part of a fully managed solution. These services include reporting, monitoring, management, and allocation-based pricing. Location options abound: an IBM service delivery center, a hosting center, or the customer's datacenter. In terms of connectivity, storage area networks (SANs) and local area networks (LANs) are available. This highly secure service from IBM cures the headache of many datacenter managers looking to offload some of the data onslaught and free up internal resources for more strategic initiatives.

• Tivoli Live Monitoring Services. For datacenter managers facing repeated instances of downtime and a deluge of alerts, IBM offers Tivoli Live Monitoring Services. This service allows datacenter managers to have greater visibility into the incidents from their infrastructure without installing management tools. IT organizations are constantly looking for better insight around availability, capacity, and energy efficiency. Tivoli Live Monitoring Services uses intelligent automation and policy-based alert monitoring to limit issues resulting in downtime. This ultimately frees up IT staff to focus on problems affecting business performance.

FUTURE OUTLOOK

The future outlook for external services in the datacenter is bright. Now that most of the easily virtualized workloads are consolidated, the next hurdles for many IT shops are to decrease management time and resources, increase availability, and expand to support the business. Many of these process and soft issues are difficult to address from within the organization. Bringing in an external point of view, both to help the IT director choose where to begin while balancing resource limitations, availability, and expansion and to see the forest for the trees, is in many cases a valuable endeavor.

IDC believes that the opportunity for IT managers to gain knowledge, strategic insight, and an improved IT environment from datacenter service providers will grow as more companies move along the virtualization management curve. For datacenter service providers, the keys to success are breadth and depth of expertise, proven strategic insight, global experience, and analytically driven actions.

CHALLENGES/OPPORTUNITIES

Challenges

• Datacenter managers are still focused on day-to-day survival instead of long-term excellence. IBM needs to get these IT organizations to think differently about their IT and facilities environments.


• Cloud computing and the rise of off-premise computing present a tempting proposition for some datacenter managers. Luckily, IBM has a strong hosting offering that complements its internal services offerings.

• Changing behaviors is difficult. To really increase datacenter efficiency, agility, and availability, IBM needs to effect change at an organizational level.

Opportunities

• With a large, ongoing post-recession buildout of enterprise-class datacenters to replenish an outdated and out-of-capacity supply, IBM's timing is impeccable.

• IBM has the ability to cross-leverage its other lines of business, including systems and hosting. IBM can either be a best-of-breed one-stop shop or work with other vendors in the datacenter to provide what the situation requires. This is a true comparative advantage in the market today.

• Virtualization is at the point in its adoption curve where complexity is really becoming the limiting factor. IBM can help datacenter managers optimize that last mile.

CONCLUSION

Today's datacenters have solved one problem (physical server sprawl) with virtualization and are now dealing with a resulting problem (management and virtual server sprawl). Many CIOs know they need help, but they do not know where best to begin optimizing for flexibility, reliability, and efficiency. Many times there are trade-offs between goals, and, unfortunately, stagnancy is not a viable option. IBM's datacenter services are meant to help. Datacenter managers have many options to choose from with the assistance of a reliable, proven partner. IBM's breadth and depth of analytically backed offerings are proven not only through measurement and analysis but also in its own outsourcing and customer datacenters. For datacenter managers on the road to optimization, or those looking for where to begin, IBM is an excellent place to start.

Copyright Notice

External Publication of IDC Information and Data — Any IDC information that is to be used in advertising, press releases, or promotional materials requires prior written approval from the appropriate IDC Vice President or Country Manager. A draft of the proposed document should accompany any such request. IDC reserves the right to deny approval of external usage for any reason.
