
WHITE PAPER

Driving Business Optimization with End-to-End Performance Testing

Sponsored by: HP
Melinda-Carol Ballou
September 2013

IDC OPINION

Dynamic business demand for accelerated mobile, cloud, and other cross-platform software deployments is driving the need to evolve and adopt end-to-end performance testing and management. IDC research indicates that this push to deploy has led to strong adoption of iterative, agile approaches to development, with a majority of organizations evolving and standardizing on agile processes.

In addition, IDC recently forecast an increase in annual mobile application downloads from 87.8 billion in 2013 to 187 billion in 2017. End-user revenue associated with IDC's mobile download projections is forecast to increase from $10.3 billion in 2013 to $25.2 billion in 2017.

With such high stakes and high user visibility, poor performance is simply not an option for customer-facing and other business-critical applications. Application deployment complexity across mobile and cloud environments where composite applications increasingly leverage social media and big data analytics also demands effective and adaptive approaches to software quality and performance validation. Managing a disparate array of application components continuously released across a range of platforms is costly, inefficient, and inherently risky because of the complexity, distributed deployment, and velocity of these applications.

To increase the immediate business benefit of software through release speed and focus, organizations are increasingly using agile development approaches. IDC research shows that nearly 65% of organizations today are using agile approaches and that revenue for agile software development products is expected to grow around 34% (on small numbers) for 2013. Current IDC analysis for the agile application life-cycle management (ALM) market projects a CAGR of around 33% for the 2013–2017 forecast period. Along with adopting agile, G2000 organizations are increasingly leveraging code from a vast array of sources — including internally built, open source and outsourced, commercially built, and customized packaged applications. Complex sourcing brings challenges in managing effective end-to-end performance, and these issues are often coupled with global releases, security and compliance needs, and churn with agile iterations. In addition, many organizations are also seeing a rise in the demand for embedded software to drive product innovation and competitive position. All of these dynamics drive a business need for high-quality software performance across hybrid environments throughout life-cycle phases.

Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com


Given this situation, IDC sees emerging end-to-end performance testing solutions being evaluated and adopted increasingly, with a number of core factors as context:

 With an economy that remains volatile, G2000 organizations are dealing with human and financial resource constraints. At the same time, they must maintain application performance and monitoring visibility to proactively help measure and execute applications consistently across domains. Appropriate prioritization and leverage of test and ALM resources with project and portfolio management (PPM) approaches — coupled with the visibility and metrics made possible by performance testing and monitoring — can enable effective execution, providing the control to prioritize and apply resources where they are most needed and to capture metrics as work is completed. Given resource constraints and volatility, coordinating these capabilities can be key for ongoing application performance and business success.

 Evolved, comprehensive, and unified performance testing approaches — spanning requirements, testing, and user experience and monitoring, with release management coordinated by effective project portfolio approaches — can support successful deployments as well as enable ongoing business and IT productivity. Savvy IT organizations are increasingly focusing on this area. In a diverse and hybrid-sourced landscape, connecting performance testing with core ALM phases such as requirements, test management, user experience and monitoring, and deployment — and bringing performance testing processes and results back into the life-cycle process — can increase application performance predictability and user satisfaction throughout the life of the application.

 Effective tools automation across ALM areas must be accompanied by strong process, cultural, and organizational approaches that encompass agile development and continuous integration and deployment. It's not enough just to provide the tools for automation. Testing and test automation must be integrated into agile and continuous integration processes in the organization to become more adaptive. Without this coordination, testing can continue to be a bottleneck, done too late in the process to be effective or done as an afterthought that can lead to software production defects.

 An "end to end" application performance testing strategy becomes particularly important given complex sourcing, including offshoring and outsourcing, open source usage, regulatory compliance, and emerging new development paradigms. In this context, there is a need to define best practices for conducting performance testing as part of ALM as a key part of the strategy for complex sourcing. How do organizations enable offshore testers to effectively test what matters even in the face of the type of rapid change exemplified by agile development? Automation in this context — coupled with effective, iterative processes supported by tools that capture analytics and insight — can enable collaboration and provide metrics and consistent data access.


IN THIS WHITE PAPER

With this background, we now turn to the focus of this paper: the role that performance testing can play in the context of overall ALM, security, resource, and demand management. As part of that objective, the document defines an application performance testing framework and discusses a performance test evolution path and the relationship between a successful approach to performance testing and related IT elements such as monitoring, security, ALM, and PPM.

SITUATION OVERVIEW

Market Trends and Evolution

Software drives business optimization now more than ever, and given the complexity, velocity, and dynamism of mobile and other delivery environments, old approaches to performance testing and overall quality are rapidly becoming inadequate. Organizations need a comprehensive, adaptive performance test strategy that enables them to face the challenges of complex, highly dynamic applications. Coordination between agile approaches to development and efforts to test and validate performance is becoming increasingly important. This coordination is especially significant in environments where complex application architecture, application dependencies, distributed development teams, and the need for visibility into end-user experience are the norm.

Developers and testers need a context for collaborative interaction, supported by integrated solutions and effective processes that provide common data and an opportunity to coordinate well and iteratively. The earlier problems are found in the application life cycle, the less costly they are to resolve. This early detection becomes particularly critical with both mobile and social software development, as the impact of failed quality is rapid and visible. Agile approaches demand up-front continuous testing, which can be better enabled via automation and test planning and management in conjunction with iterative processes (see Figure 1).

FIGURE 1

Closing the Loop: Leverage Skills and Tools for an Agile, End-to-End Approach

Life-cycle phases shown: Define, Design, Develop, Deploy & Monitor, Support

Source: IDC, 2013
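The up-front continuous testing that agile demands can start as small as a per-sprint performance unit test run by developers against the first components built. The sketch below is illustrative only; the `process_order` function, its workload, and the 5 ms budget are hypothetical stand-ins, not values from this paper:

```python
import time

def process_order(order):
    # Hypothetical unit under test: stands in for a real business function.
    return sum(item["qty"] * item["price"] for item in order["items"])

def test_process_order_latency(budget_ms=5.0, runs=1000):
    """Fail the sprint build early if the unit exceeds its latency budget."""
    order = {"items": [{"qty": 2, "price": 9.99}] * 50}
    start = time.perf_counter()
    for _ in range(runs):
        process_order(order)
    avg_ms = (time.perf_counter() - start) * 1000 / runs
    assert avg_ms < budget_ms, f"avg {avg_ms:.3f} ms exceeds {budget_ms} ms budget"
    return avg_ms

test_process_order_latency()
```

Run as part of continuous integration, a check like this surfaces performance regressions in the same feedback loop as functional failures, before other components are complete.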


Organizational pain points can be acute if teams try to deploy key applications without appropriate performance testing and/or if they try to test without coordination with effective end-to-end life-cycle processes and automation.

Narrow, shortsighted approaches to performance testing can lead to delays, late delivery, poor visibility, and lack of data about actual performance. Low-quality releases can sharply impact customer perception. Inept application performance strategies can result in sluggish production software with no time for tuning, in teams capturing risks rather than optimizing solutions, and in environment issues that block progress to delivery. Users in mobile environments, for instance, have little patience for poor performance, anemic experiences, and unappealing interfaces. Their reactions are immediate and public (i.e., immediate ranking of application experience and quality — "two stars versus four stars"), as are the transactional and reputation costs to businesses. The expense to the business and to corporate reputation of broken, ugly, poorly performing software is prohibitive.

A well-designed performance testing framework that offers support through the application life cycle is needed to take performance management to the level demanded by current environmental challenges for agile development as well as continuous integration and deployment. This framework should include performance testing techniques for leveraging requirements, performance testing during development, security, and code analysis as key aspects of quality and performance analytics. Moving through to release management, performance testing environments that leverage production realities using shared metadata and deployment data (i.e., DevOps) with effective monitoring and metrics can then feed back into software project and portfolio assessments and future planning.

This concept of a framework for performance testing becomes particularly important as mobile software dominates branding and competitive execution; dynamic, engaging app stores can enable — or decimate — audiences, prospects, and revenue. App store ratings and failures are public and unforgiving. With the combination of mobile and social media, software has moved from enabling systems of record to driving systems of engagement for business responsiveness. IDC survey research from 2Q 2013 shows that 69% of respondents have 50% or more of their mobile applications feeding systems of record (see Figure 2). Only 6% said that none of their mobile applications provided information for systems of record. The findings underscore the business criticality of these environments.


FIGURE 2

Mobile Application Updates to Systems of Record

Q. Are any of your organization's custom mobile applications used to update systems of record (e.g., financial transactions, reservation systems)?

Source: IDC Custom Survey, 2Q 2013

The ever-changing, chaotic mix of emerging mobile platforms and devices, along with the lack of standards, further complicates software challenges. Establishing effective processes and strategies for enabling high-performing mobile software creation from inception to deployment becomes a key element of an effective performance testing framework, combined with the need to test multimodal deployment across areas.

Given the need to evaluate and articulate the value of IT services, project optimization, value, risk, and cost across multiple business units, metrics and assessment also play an increasing role in a performance testing framework. Performance testing as part of the life cycle from planning to execution, supported by related software metrics, can help demonstrate to business units the quality of provided services. These performance metrics can potentially help track execution against plans in the context of combined PPM and analytics capabilities for current and future project decisions. In addition, these metrics can help justify continued investment in performance testing and validation at a time of resource constraints and ongoing economic volatility with demand for reinvestment in software and IT.

Performance Testing Technology Challenges/Requirements and Criteria

Successful and secure evolution to end-to-end performance testing within the enterprise can enable a key link between the business side and IT for successful application development and deployment. This approach can provide visibility into how well customer expectations have been met throughout the life of the application. But it requires a shift to more effective usage of automated tools with appropriate iterative processes and workflow. What are the cultural issues and barriers to making this happen? How do users struggle with and overcome disparate approaches and inadequate coordination between business requirements evolution, resource and financial constraints, and prioritization to successfully develop and launch IT projects and programs in conjunction with ongoing performance testing?

Figure 2 responses (n = 208): all of our custom mobile applications update systems of record, 16.0%; more than 50% do, 53.0%; less than 50% do, 25.0%; none do, 6.0%.


Elements of an End-to-End Performance Testing Framework

The following elements can help make up an effective performance testing framework. These are iterative, not linear, processes:

 Define business requirements for application performance, prioritize for the project planning and resource phase, and collect detailed requirements prior to development (application/product).

 Assess architecture, design, and security planning up front and determine how these areas will impact application performance and overall quality.

 Start early performance testing and continuous testing as part of an agile process. Contemporary agile methodologies require that you start performance testing with the first components built during the first sprint of development, before other components have been completed. In this phase, developers can also execute performance unit tests to identify performance issues in the code.

 Leverage service virtualization to start testing earlier and execute end-to-end testing in complex environments: During the early stages of development, when testers struggle to validate mostly incomplete application builds, virtualized internal and external components can help remove roadblocks to early testing.

 Coordinate testing with software change, configuration management, and version control.

 Execute performance testing:

 Translate your user requirements into load testing objectives: Understand the user behavior and application under test. A thorough evaluation of the requirements before beginning load testing can help provide realistic test goals and conditions.

 Create virtual user scripts: Capture business processes in test scripts.

 Define and configure user behavior.

 Create a load test scenario.

 Understand the network impact/behavior within the application under test.

 Run the load test scenario and monitor the performance.

 Analyze the results.

 Leverage performance testing scripts for production handoff and release management/deployment.

 Transfer other IP and testing assets (such as application diagnostics and monitoring templates/models) to monitoring teams to reuse these assets and leverage the investment in application performance assurance.

 Monitor applications and end-user experience and perform root cause analysis (can combine event, performance, stress/load, network monitoring).

 Coordinate service management and application changes, prioritize new requirements, and test performance.

 Employ static/dynamic analysis for software quality analysis and measurement and for security analysis (should be iterative throughout).

 Leverage production data to retest changed and modified applications that feed the application portfolio and that support future project portfolio planning.

 Channel application performance data gathered in production back to development and testing as an input and ingredient for improvement and as validation of how the application performs in production.

 Prioritize new requirements and project/program requests iteratively to feed new projects as part of the portfolio; to allocate resources appropriately, assess future initiatives with performance test and other metrics.


What are key pain points and benefits of such a differentiated strategy, and how does one increase maturity along an evolutionary scale?

The necessary process and organizational shifts to address corporate and IT needs for end-to-end performance involve a change in culture to coordinate across teams. These groups have typically been fractured from one another in the past on the business, development, and deployment/operations sides of the organization. End-to-end performance testing technology building blocks and solutions can help teams coordinate and collaborate with common data visibility, reporting, and analysis. In addition, a transition to integrated tools can be a catalyst to focus the organization — speeding the transition to effective performance management, monitoring, and portfolio and resource governance. The costs and challenges of maintaining stovepipes and brittle, monolithic systems are problematic and are also driving a push toward the adoption of more agile approaches that become coordinated with code analysis. All of these factors contribute to a portfolio process that can enable both governance and a transition toward coordinated yet agile performance testing. The Elements of an End-to-End Performance Testing Framework sidebar describes specific elements of an effective approach.


HP Performance Testing Solutions and Points of Differentiation

In the context of an integrated approach to performance testing, HP is in a differentiated position. The core life-cycle areas on which HP has focused include automated software quality (ASQ), requirements, security, project and portfolio management, and some other ALM areas, with development tool integrations and partnering for change management (Tasktop). The role of a combined toolset in enabling consistent data for a better articulated business/IT relationship and HP's commitment to a partner-focused organization (facilitating integration for users across disparate environments and other third-party tools) are differentiators.

HP's application portfolio management solutions — ALM, Agile Manager, Performance Testing, Application Performance Management (APM), and PPM released in 1H 2013 — help position HP for a combined approach to end-to-end performance testing.

HP's strengths are in the company's dominant ASQ revenue position, with 10,000 customers and 1 million seats deployed and as 2012 share leader by significant margins at around 37% (its next closest competitor has share of 12%); the ubiquity of its established enterprise testing solution; and the breadth and depth of its comprehensive ASQ suite. HP grew ASQ revenue incrementally — 2.9% — in 2012. With some new releases shipped and others forthcoming, HP is evolving its mobile testing, ASQ SaaS, and cloud testing strategy in conjunction with application performance management, requirements, and agile.

The sheer size of HP — in conjunction with its HP Enterprise Services arm — positions the company to continue to be a significant combined player moving into 2013/14. HP is seeking to take advantage of opportunities for strong end-to-end execution with DevOps and its service management side (with coordination of application monitoring and software deployment), and PPM for executive governance (with leverage of quality metrics).

HP's flexible delivery model with SaaS and on-premise offerings enables customers to load-test applications with a variety of options. The option for coordination with HP's ALM products and partners (including IT PPM potentially longer term, as well as requirements) and the company's security and service management solutions position HP well. Areas of focus for HP include broad and deep performance testing capabilities to support varied deployment environments, with integrated lab management, integrated developer tools (nUnit, jUnit, Selenium, Jenkins, Eclipse) to support continuous delivery, root cause analysis for performance issues (profiling), Dev/Ops capabilities to bring in metadata from production to define testing scenarios, analytics, and metrics that feed reporting as part of an executive scorecard.

HP's own services and partnerships with other third-party systems integrators complement the company's technology competencies in this arena. Enterprise ASQ adoption necessitates process and organizational change, which tends to be better implemented in conjunction with services support.

HP remains dominant overall at nearly 40% revenue share for ASQ and ubiquitous among organizations that use automated solutions for testing.


Seeking to Map HP's Product Portfolio for Performance

Primary performance testing framework capabilities for HP include traceability with HP's Quality Center, Performance Center, and ALM products from requirements through to development, with a common repository for requirements, tests, and defects "out of the box." HP has strengthened its requirements capabilities and coordination with testing, evolved coordination with its security solutions, and added traceability capabilities with support for users across those areas. HP also offers integration (via Tasktop and partnerships) to software change management and other life-cycle automation areas not already incorporated into the HP ALM platform. Built on the foundation of HP Quality Center and Performance Center, HP's ALM platform is extensible. Over 200 partners have solutions built on HP Quality Center, Performance Center, and ALM. HP ALM is a single platform for managing and executing quality across the life-cycle areas it supports for functionality, architecture standards, performance, and an emerging coordinated security focus.

HP enables integration between its PPM Center product and HP ALM. PPM can extend these capabilities to address IDC's criteria for IT financial management by enabling executive visibility into combined financial, end-to-end costing via integration between HP's IT PPM, quality, and operational details. PPM Center is beginning to have a role and gain visibility as a driver for business management for HP's performance management and overall software strategy during the current volatile and challenging worldwide economy. Some users have found the combined products to be beneficial in terms of cutting costs and enabling efficiency.

HP offers two options for companies to execute performance testing: HP LoadRunner and HP Performance Center. Both can be integrated with HP ALM for end-to-end life-cycle traceability. HP LoadRunner is in a revenue-leading position for project-focused load testing. HP Performance Center is built on top of LoadRunner. It can enable controller sharing across multiple projects and multiple users working concurrently on different projects from globally distributed locations. In addition to the functionality of LoadRunner, Performance Center also can allow customers to:

 Coordinate testing and collaboration to potentially achieve more performance testing in less time

 Manage and control performance testing projects, users, and resources in different locations from a centralized location

 Help streamline and standardize the testing process with a centralized performance testing practice using common resources and consistent procedures

 Help increase testing capacity by delivering global 24 x 7 access to testing resources with a Web-based interface, pooled infrastructure, and shared licensing model

 Help achieve consistent quality across applications by applying the same tools, expertise, and common practices across IT projects

 Audit use and bill lines of business for the time and resources consumed to help support business return on investment (ROI)


Performance testing teams and production monitoring teams both generally work toward a common goal: application performance. By collaborating with the production team, the testing team can deliver an application that will have fewer issues in production. Similarly, the production team can help reduce test cycle times by providing valuable production information for performance testing. Collaboration can help ensure the continuous delivery of effective application performance.

HP has sought to incorporate continuous performance capabilities into HP Performance Center. With this capability, performance engineers have the opportunity to incorporate production insight when planning and implementing performance testing of applications. A continuous approach to performance delivery is based on a series of steps, which together can lead to more reliable and valid performance testing.

HP is evolving its solutions to help solve the mobile testing problem with an enhanced solution for testing the performance of mobile applications. The solution is built on the existing capabilities of HP LoadRunner software and HP Performance Center software, including HP TruClient technology.

HP's mobile performance testing solution includes two new protocols:

HP Mobile TruClient. Built on top of HP's TruClient technology, HP Mobile TruClient helps customers record browser-based applications directly through the browser. It seeks to make scripting and testing of browser-based applications fast, easy, and adaptive.

HP Mobile Applications. Targeting native mobile applications, or other applications that can't be recorded using HP Mobile TruClient, the HP Mobile App protocol lets customers build Web scripts using agents on the device or through emulators.

HP also has a strong partnership with Perfecto Mobile as part of its mobile testing strategy. In addition, HP supports network emulation through integrations with Shunra Network Virtualization. Since network conditions are such a key element in mobile applications, HP LoadRunner and HP Performance Center include speed simulation to simulate various types of upstream and downstream bandwidth.
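Speed simulation of this kind can be approximated with a simple token-bucket throttle. The sketch below is purely illustrative of the concept and is not HP's or Shunra's implementation; the link rate and payload size are invented for the demo:

```python
import time

class BandwidthSimulator:
    """Token-bucket throttle approximating a constrained mobile link.

    Illustrative only: real network emulation tools also model latency,
    jitter, and packet loss, not just bandwidth.
    """
    def __init__(self, bytes_per_sec):
        self.rate = bytes_per_sec
        self.allowance = bytes_per_sec
        self.last = time.monotonic()

    def send(self, num_bytes):
        # Refill tokens based on elapsed time, then block until the
        # payload "fits" through the simulated link.
        now = time.monotonic()
        self.allowance = min(self.rate,
                             self.allowance + (now - self.last) * self.rate)
        self.last = now
        if num_bytes > self.allowance:
            time.sleep((num_bytes - self.allowance) / self.rate)
            self.allowance = 0
        else:
            self.allowance -= num_bytes

# Pushing 64 KB through a simulated 256 KB/s downstream link should take
# roughly a quarter of a second.
link = BandwidthSimulator(256 * 1024)
link.allowance = 0  # start with an empty bucket for a deterministic demo
start = time.monotonic()
link.send(64 * 1024)
elapsed = time.monotonic() - start
```

Wrapping a test client's send/receive path in a throttle like this lets a load script observe how transaction response times degrade under constrained upstream and downstream bandwidth.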

In addition to load testing and user virtualization, HP supports service virtualization by offering HP Service Virtualization, which can enable development and testing teams to access services in a simulated, virtualized environment. Clients can test the quality and performance of cloud or mobile applications without having to disrupt production business systems or build out costly and brittle stub programs for unavailable services. IDC sees service virtualization as a key element of an effective end-to-end performance testing framework to help address agility, continuous development and integration, and resource optimization in complex environments. This is why we have seen acquisitions and evolutions of products in this area by a variety of companies over the past 18 months.
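The idea behind service virtualization can be illustrated with a minimal HTTP stub that stands in for an unavailable dependency during early testing. This is a concept sketch, not HP Service Virtualization itself; the endpoint, payload, and 50 ms simulated latency are invented for illustration:

```python
import http.server
import json
import threading
import time
import urllib.request

# Hypothetical canned response for a downstream pricing service that is
# not yet available (or too disruptive) to hit during early testing.
CANNED = {"quote": 101.25, "source": "virtualized-pricing-service"}

class VirtualService(http.server.BaseHTTPRequestHandler):
    """Stands in for an unavailable downstream service."""
    def do_GET(self):
        time.sleep(0.05)  # simulate the real service's typical response time
        body = json.dumps(CANNED).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep demo output quiet

# Serve the stub on an ephemeral local port in a background thread.
server = http.server.HTTPServer(("127.0.0.1", 0), VirtualService)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The application (or test) under development calls the stub exactly as
# it would call the real service.
url = f"http://127.0.0.1:{server.server_address[1]}/price"
with urllib.request.urlopen(url) as resp:
    payload = json.loads(resp.read())
server.shutdown()
```

Because the stub's latency and payload are controllable, testers can validate end-to-end behavior, and even performance characteristics, long before the real dependency is complete.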


HP's Application Performance Management (APM) solution is a core component of HP's ALM solution and enables the use of a common APM toolset across aspects of the software development life cycle, including development, test, and production. HP's APM solution includes a Diagnostics Profiler for use by developers during testing. Also, both HP Diagnostics and SiteScope are integrated with Performance Center and LoadRunner data stores and user interfaces to enable performance testing teams to detect and diagnose application performance issues before they are exposed in production. Testing and operations teams can also share assets (i.e., scripts and monitoring templates) and access production data through additional integrations. HP's APM solution is also integrated with HP's Continuous Delivery Automation solution to support parallel design, configuration, and deployment of the application build and APM solution. HP's APM solution can help provide support for the continuous improvement of application quality by allowing operational monitoring data to be fed back to development and testing teams. This data can provide insight into user interaction and usage of the application once the application is deployed in production. HP's APM solutions are available as on-premise implementations or via HP SaaS.

HP seeks to support an end-to-end performance application life-cycle strategy through:

 Defining business prioritization, requirements, resources, and projects for application performance at the "planning phase" (PPM) — HP Application Lifecycle Management integrated with HP Project and Portfolio Management

 Collecting requirements for applications prior to development (application/product requirements) — HP Performance Center integrated with HP Application Lifecycle Management

 Starting early performance testing and iterative testing within Agile — integration with developer tools (nUnit, jUnit, Selenium, Eclipse, Jenkins)

 Leveraging service virtualization to start the test earlier and execute an end-to-end test in complex environments — HP Service Virtualization

 Executing performance testing — HP Performance Center for Enterprise, HP LoadRunner for project-based testing, HP Diagnostics for profiling, and SiteScope for monitoring the infrastructure components under test

 Leveraging the performance testing scripts for production — integration with HP Application Performance Management solutions such as Business Process Monitor, Diagnostics and SiteScope

 Monitoring — Application Performance Management, which includes coverage of enterprise (SAP, Oracle), cloud (EC2, Azure), SaaS (SFDC, AWS), and mobile (synthetic and native mobile) applications (HP's APM solutions include Business Process Monitoring, Real User Monitoring, and Application Diagnostics to proactively monitor the end-user experience and rapidly diagnose any application performance issues.)


 Addressing post-production application changes (new requirements for performance) — integration via HP Service Manager and HP Application Lifecycle Management; integration with pre-production software change and configuration management (SCM) tools only via Tasktop partnership for integration with third-party SCM tools (no direct HP offerings)

 Leveraging production data to retest changed/modified applications — HP Performance Center collecting data from Application Performance Management solutions

 Planning future project, program, and portfolio execution and leveraging metrics as part of executive dashboard — HP Application Lifecycle Management integrated with HP Project and Portfolio Management and HP Executive Scorecard

HP's portfolio is broad and deep in the application performance arena. Product capabilities span automated software quality, application performance management and monitoring, DevOps, service virtualization, and requirements, along with PPM and partnerships (for missing areas such as software change and configuration management) to augment HP's ALM strategy. In addition, HP has strong security capabilities and a longer-term security strategy, which it intends to evolve in combination with performance. Considering these capabilities in total, we expect to see HP continue to execute and drive in this area moving forward into 2014.

M A J O R H E A L T H C A R E P R O V I D E R B E N E F I T S F R O M P E R F O R M A N C E T E S T I N G

O r g a n i z a t i o n O v e r v i e w

A major healthcare organization was seeking to improve the performance of core business applications by moving from ad hoc, outsourced, project-by-project testing to in-sourced, automated performance testing. With 2,500 employees overall and 300-500 in IT (including 300 internal staff and 200-300 contractors), this company supplies insurance and healthcare services. Software enables both business execution and agility for this organization, so ensuring that the company's applications perform and function well is essential.

Before bringing in automated performance testing, the company had little to no visibility into performance problems in time to address them. As deployment environments and business complexity grew, ad hoc, single-project approaches to testing became too costly by 2006. With a multitier, multiplatform environment, debugging every layer of a six-tier architecture without automation became impractical. Performance in production was often poor and failed to meet appropriate healthcare standards. Software defects and the resulting performance and downtime issues had significant business impact, driving the company to explore test automation and a strategy to bring testing back in house.


C h a l l e n g e s a n d S o l u t i o n E v a l u a t i o n a n d D e p l o y m e n t

Given the challenges for production applications, the organization evaluated three vendors with performance test product offerings at the time. The company issued an RFP focused on areas such as security and infrastructure and ran a proof of concept with the vendors that scored best in 2006. The final choice came down to HP's LoadRunner. HP's dominant position in the marketplace, the breadth of protocols supported, and access to a broader pool of users experienced with HP tools drove the company's decision to standardize on HP's LoadRunner (LR) product, beginning with one controller license each for LR and Performance Center.

Deployment went smoothly for the company, because the HP experience of existing staff enabled it to execute well without external services support. The HP performance test products were purchased in November, and the company went live the following February. It took about a week to implement LoadRunner and about a month to bring in Performance Center (PC). The company purchased 250 LoadRunner Web licenses initially (with 1,750 in total currently in use) and has one contractor in Canada using Performance Center. The company follows documented processes, which it also provides to developers, to coordinate consistent approaches to performance testing.

I m m e d i a t e A p p l i c a b i l i t y a n d Q u i c k R O I

The company set a return on investment (ROI) timeframe of three years for the performance testing products but achieved payback much sooner. Within a month of installing the HP products, it hit problems with a business-critical system: the company's broker portal suffered serious performance degradation, and brokers had trouble getting quotes so they could sell their insurance products. The problems recurred daily for about a week until the teams resolved them by using LoadRunner to emulate the load and drill down to the root cause. The company's VP of IT at the time said that the product paid for itself within the first month of installation, given the cost of these performance problems to the business and the speed with which the teams addressed them using LoadRunner.
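The core technique in this anecdote — emulating many concurrent virtual users and measuring response times to localize a bottleneck — can be sketched in a few lines. The Python sketch below is purely illustrative (it is not HP LoadRunner code); `fetch_quote` is a hypothetical stand-in for the broker portal's quote service, with latency simulated rather than measured over the network.

```python
import random
import statistics
from concurrent.futures import ThreadPoolExecutor

def fetch_quote(user_id):
    """Hypothetical stand-in for a broker-portal quote request.
    A real load test would issue an HTTP call and time it; here the
    latency is simulated with a uniform draw between 50ms and 400ms."""
    return random.uniform(0.05, 0.40)

def run_load_test(virtual_users=50, requests_per_user=20):
    """Emulate concurrent virtual users and summarize response times."""
    latencies = []

    def user_session(uid):
        # Each virtual user issues a fixed number of requests.
        return [fetch_quote(uid) for _ in range(requests_per_user)]

    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        for session in pool.map(user_session, range(virtual_users)):
            latencies.extend(session)

    return {
        "requests": len(latencies),
        "mean_s": statistics.mean(latencies),
        # Last of 19 cut points for n=20 is the 95th percentile.
        "p95_s": statistics.quantiles(latencies, n=20)[-1],
    }

if __name__ == "__main__":
    print(run_load_test())
```

A tool like LoadRunner automates what this sketch only gestures at: realistic protocol-level traffic, ramp-up schedules, and correlation of client-side latencies with server-side diagnostics to identify the root cause.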

In terms of overall benefit, users found value in moving to automated scripting and testing; the increased speed of script execution enabled the company to meet aggressive project schedules and skyrocketing demand for application delivery. The functionality most important to the organization in solving its problems included performance and load testing, augmented to some extent by monitoring and limited root cause analysis. Correlated reports and log data on response times provided visibility into key issues. (The company also uses HP's SiteScope for monitoring alongside performance testing.)


C o m b i n i n g T o o l s w i t h P r o c e s s e s a n d E v o l v i n g t o A L M

The architect for the HP deployment and his team created testing processes that were incorporated into the company's Center of Excellence (COE) to improve adoption and reliable use. These processes include bringing in subject matter experts (SMEs) for project estimating before a project is budgeted and involving them in requirements and design reviews. Performance testing is now core to the organization's consistent approach to software development and quarterly releases. When code freezes occur before deployment, the company runs performance testing to address issues. The visibility of performance testing has increased significantly in the years since the HP testing tools were deployed, from developers, QA, and QC personnel through to executive-level engagement.

The company is also expanding its use of HP products to include HP's Application Lifecycle Management (ALM) products. It has begun bringing in requirements and defect tracking and eventually wants to move from logging defects in Performance Center to managing them in ALM. (Performance Center is helpful for scheduling testing across globally distributed development teams, such as those in India.) So far, the company has nearly 100 people deployed on ALM, mostly across testing teams, and is beginning to benefit from coordinating the ALM capabilities supported by HP, such as requirements and defect tracking, with testing.

Overall, the company has benefited greatly from significantly fewer performance problems and from visibility that lets it address issues before they affect the business. Team members have a common source of information for solving performance challenges. The company is evolving its mobile testing support further and expects to expand adoption once ALM has been deployed and as it staffs up on the testing side.

C H A L L E N G E S / O P P O R T U N I T I E S

Challenges for HP include continuing to build on the transition it has made as an organization in the wake of executive changes and shifts in leadership focus over the past few years. In the past year, we have seen a strong transition for the company in product suite and portfolio focus and a trend of building strong partnerships. Product portfolio cost, ease of use and adoption, and moving its legacy product set forward to address dynamic market change and emerging customer demand remain challenges for HP. We see HP seeking to address these issues with targeted product packaging and its evolving SaaS strategy. The sheer size of HP is both a differentiator and a challenge to innovation; we see HP moving to address this by accelerating the pace of product releases. HP has a synergistic, broad suite of products spanning operational asset management, monitoring, security, SOA registry, portfolio management, and application management. HP remains dominant as a highly competitive provider in the ASQ market, and we look to the new leadership at HP to leverage additional synergies from other areas of the HP services and tools portfolio as a core focus.


C O N C L U S I O N / R E C O M M E N D A T I O N

Organizations should evaluate, assess, and better understand end-to-end application performance management in the context of close optimization with ALM, security, and project and portfolio management. Because development and deployment environments are increasingly agile, mobile, and complex, pragmatic adoption and execution are core considerations. Companies should begin where pain points and stressors are highest to gain organizational traction. The benefits of optimized execution, increased productivity, improved control for regulatory compliance and offshoring/outsourcing, increased business value, and lessened risk of business exposure through successful software and IT project and program implementations are key success factors for business today, given the competitive position enabled by software in multimodal deployment environments.

C o p y r i g h t N o t i c e

External Publication of IDC Information and Data — Any IDC information that is to be used in advertising, press releases, or promotional materials requires prior written approval from the appropriate IDC Vice President or Country Manager. A draft of the proposed document should accompany any such request. IDC reserves the right to deny approval of external usage for any reason.
