
Risk-Based Testing. Paul Gerrard Technical Director, Systeme Evolutif Limited


Version 1.0 ©2002 Systeme Evolutif Ltd Slide 1

Risk-Based Testing

Paul Gerrard

Technical Director, Systeme Evolutif Limited

Systeme Evolutif Limited
3rd Floor, 9 Cavendish Place
London W1M 9DL, UK
Tel: +44 (0)20 7636 6060
Fax: +44 (0)20 7636 6072
email: paulg@evolutif.co.uk
http://www.evolutif.co.uk/

Agenda

l I Why Risk-Based Testing?
l II Introduction to Risk Management
l III Risk and Test Objectives
l IV Designing the Test Process
l V Illustrative Example from an RBT Prototype
l VI Closing Comments, Q&A

l Here’s the commercial bit:
– This material is based on: Risk-Based E-Business Testing, Gerrard and Thompson, Artech House, 2002


Paul Gerrard

Systeme Evolutif are a software testing consultancy specialising in E-Business testing, RAD, test process improvement and the selection and implementation of CAST tools. Evolutif are founder members of the DSDM (Dynamic Systems Development Method) consortium, which was set up to develop a non-proprietary Rapid Application Development method. DSDM has been taken up across the industry by many forward-looking organisations.

Paul is the Technical Director and a principal consultant for Systeme Evolutif. He has conducted consultancy and training assignments in all aspects of Software Testing and Quality Assurance. Previously, he has worked as a developer, designer, project manager and consultant for small and large developments. Paul has engineering degrees from the Universities of Oxford and London, is Co-Programme Chair for the BCS SIG in Software Testing, a member of the BCS Software Component Test Standard Committee and former Chair of the IS Examination Board (ISEB) Certification Board for a Tester Qualification, whose aim is to establish a certification scheme for testing professionals and training organisations. He is a regular speaker at seminars and conferences in Europe and the US, and won the ‘Best Presentation’ award at EuroSTAR ’95.

V-Model

[Diagram: baseline documents (Requirements, Functional Specification, Physical Design, Program Specification) paired with test stages (User Acceptance Test, System Test, Integration Test, Unit Test).]

Is there ever a one-to-one relationship between baseline documents and testing?

Where is the static testing (reviews, inspections, static analysis etc.)?

Traditional approach

[Diagram: a traditional sequence of test stages (Unit Test, System Test, Acceptance) derived from the methodology, considering schedule, environments, timescales etc.; the build-and-execute activities are annotated "Not focused", "Not done" and "Not again!".]


Problems with tradition

l Sequence of decisions
– Stages → responsibility → capability → objectives
l Guidance to developers and testers
– None, except generic, textbook mantras
– “demonstrate software meets requirements”
l Input of stakeholders
– Only when system/acceptance tests reveal problems
– Far too late!
l Decision making
– Timescale driven in early stages
– Crisis driven towards the end
– Unsatisfactory all round.

W-Model

[Diagram: each development activity (Write Requirements, Specify System, Design System, Write Code, Build Software, Build System, Install System) is paired with a test activity (Test the Requirements, Test the Specification, Test the Design, Unit Test, Integration Test, System Test, Acceptance Test).]


Risk-based testing

[Diagram: a Plan–Decide–Implement cycle — Plan: assess product risks, define test objectives, select test techniques and products to test, with stakeholder involvement; Decide: responsibility and estimation process; Implement: a schedule of focused test design and execution with risk-based test reporting.]

Risk-based test planning

l If every test aims to address a risk, tests can be prioritised by risk
l It’s always going to take too long so…
– Some tests are going to be dropped
– Some risks are going to be taken
l Proposal:
– The tester is responsible for making the project aware of the risks being taken
– Only if these risks are VISIBLE will management ever reconsider.
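The prioritisation proposed above can be sketched in a few lines of Python. This is purely illustrative and not from the talk: `TestCase`, `plan_within_budget` and the `budget` cap are assumed names for the sketch.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    exposure: int  # likelihood x loss of the risk this test addresses

def plan_within_budget(tests, budget):
    """Keep the highest-exposure tests that fit the budget; the remainder
    are the risks the project is knowingly taking."""
    ranked = sorted(tests, key=lambda t: t.exposure, reverse=True)
    return ranked[:budget], ranked[budget:]

tests = [TestCase("payments reconcile", 20),
         TestCase("help text spelling", 2),
         TestCase("login lockout", 12)]
in_scope, risks_taken = plan_within_budget(tests, budget=2)
# in_scope holds the two highest-exposure tests; risks_taken is what we
# must make VISIBLE to management.
```

The point of returning `risks_taken` rather than discarding it is exactly the slide's proposal: dropped tests are reported as risks taken, not silently lost.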


How much testing is enough?

l Enough testing has been planned when the stakeholders (user/customer, project manager, support, developers) approve:
l TESTS IN SCOPE
– They address risks of concern and/or give confidence
l THE TESTS THAT ARE OUT OF SCOPE
– Risk is low OR these tests would not give confidence
l The amount and rigour of testing is determined by CONSENSUS.


Some general statements about risk

l Risks only exist where there is uncertainty

l If the probability of a risk is zero or 100%, it is not a risk

l Unless there is the potential for loss, there is no risk (“nothing ventured, nothing gained”)

l There are risks associated with every project
l Software development is inherently risky.

Cardinal Objectives

l The fundamental objectives of the system to be built
l Benefits of undertaking the project
l Payoff(s) that underpin and justify the project
l Software risks are those that threaten the cardinal objectives.


Three types of software risk

Project Risk
– resource constraints, external interfaces, supplier relationships, contract restrictions
– Primarily a management responsibility

Process Risk
– variances in planning and estimation, shortfalls in staffing, failure to track progress, lack of quality assurance and configuration management
– Planning and the development process are the main issues here

Product Risk
– lack of requirements stability, complexity, design quality, coding quality, non-functional issues, test specifications
– Requirements risks are the most significant risks reported in risk assessments
– Testers are mainly concerned with Product Risk.

Process

l Risk identification
– what are the risks to be addressed?
l Risk analysis
– nature, probability, consequences, exposure
l Risk response planning
– pre-emptive or reactive risk reduction measures
l Risk resolution and monitoring


Assessing consequences (loss)

Severity    Description                                 Score
Critical    business objective cannot be accomplished   5
High        business objective undermined               4
Moderate    business objectives are affected            3
Low         slight effect on business                   2
Negligible  no noticeable effect                        1

Assessing probability (likelihood)

Probability  Description                              Score
>80%         almost certain, highly likely            5
61-80%       probable, likely, we believe             4
41-60%       we doubt, improbable, better than even   3
21-40%       unlikely, probably not                   2


Risk exposure

l Risks with the highest exposure are those of most concern
l Worst case scenarios drive concerns
l Risk EXPOSURE is calculated as the product of the PROBABILITY and CONSEQUENCE of the risk
l A simple notation is L²
– where L² = LIKELIHOOD x LOSS.
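The L² calculation is just the product of the two 1-5 scores from the tables above. A minimal sketch, where the `exposure` helper name is an assumption:

```python
def exposure(likelihood: int, loss: int) -> int:
    """Risk exposure (the slide's L2): LIKELIHOOD x LOSS, each scored 1-5."""
    if not (1 <= likelihood <= 5 and 1 <= loss <= 5):
        raise ValueError("scores must use the 1-5 scales above")
    return likelihood * loss

# A highly likely risk (5) that undermines a business objective (4):
print(exposure(5, 4))  # 20
```

Exposure therefore ranges from 1 (negligible and almost impossible) to 25 (critical and almost certain), which is what makes risks directly comparable.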

What do the numbers mean?

l Sometimes you can use numeric assessments
– We may have experience that tells us
» Likelihood is high (it always seems to happen)
» Loss is £50,000 (that’s what it cost us last time)
l But often, we are guessing
– Use of categories helps us to compare risks
– Subjective perceptions (never the same)
– E.g. developers may not agree with users on probability!
l Maybe you can only assign risk RAG ratings
– RED, AMBER, GREEN
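When only RAG categories are workable, one convention is to band the 1-25 exposure range. The thresholds below are assumptions for this sketch, not from the talk:

```python
def rag(exposure_score: int) -> str:
    """Map a 1-25 exposure score to a RAG rating.
    Band boundaries (>=15 RED, >=6 AMBER, else GREEN) are illustrative."""
    if exposure_score >= 15:
        return "RED"
    if exposure_score >= 6:
        return "AMBER"
    return "GREEN"

print(rag(20), rag(8), rag(3))  # RED AMBER GREEN
```

A team would calibrate the band boundaries by consensus, exactly as the scope of testing is agreed by consensus.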


The danger slope

[Diagram: a likelihood axis (Highly Unlikely, Unlikely, Improbable, Likely, Very Likely) plotted against a consequence axis (Critical, High, Moderate, Low, Negligible); the arrow shows where we want to move all risks — towards low likelihood and low consequence.]


Why use risks to define test objectives?

l If we focus on risks, we know that bugs relating to the selected mode of failure are bound to be important
l If we focus on particular bug types, we will probably be more effective at finding those bugs
l If testers provide evidence that certain failure modes do not occur in a range of test scenarios, we will become more confident that the system will work in production.

Defining a test objective from risk

l We ‘turn around’ the failure mode or risk
l Risk:
– a BAD thing happens and that’s a problem for us
l Test objective:
– demonstrate using a test that the system works without the BAD thing happening
l The test:
– execute important user tasks and verify the BAD things don’t happen in a range of scenarios.
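The 'turning around' step is mechanical enough to sketch as a simple transformation. `objective_from_risk` and its wording are illustrative assumptions, not the talk's method:

```python
def objective_from_risk(failure_mode: str) -> str:
    """'Turn around' a failure mode into a demonstration-style
    test objective, mirroring the Risk -> Test Objective pairs below."""
    return ("To demonstrate that the system works without the following "
            "failure occurring: " + failure_mode)

print(objective_from_risk(
    "statement details do not reconcile with back-end systems"))
```

In practice the objective would be reworded by hand, but the structure (risk stated negatively, objective stated as a demonstration) is the invariant.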


Risks and test objectives - examples

Risk: The web site fails to function correctly on the user’s client operating system and browser configuration.
Test Objective: To demonstrate that the application functions correctly on selected combinations of operating systems and browser versions.

Risk: Bank statement details presented in the client browser do not match records in the back-end legacy banking systems.
Test Objective: To demonstrate that statement details presented in the client browser reconcile with back-end legacy systems.

Risk: Vulnerabilities that hackers could exploit exist in the web site networking infrastructure.
Test Objective: To demonstrate through audit, scanning and ethical hacking that there are no security vulnerabilities in the web site networking infrastructure.

Risk-based test objectives are usually not enough

l Other test objectives relate to broader issues
– contractual obligations
– acceptability of a system to its users
– demonstrating that all or specified functional or non-functional requirements are met
– non-negotiable test objectives might relate to mandatory rules imposed by an industry regulatory authority, and so on
l Risk assessment might miss something, or de-scope something important
l Generic test objectives
– ‘catch all’ measure, e.g. all-requirements coverage
– complete the definition of your test stages.


Generic test objectives

Test Objective                                                                        Typical Test Stage
Demonstrate component meets requirements                                              Component Testing
Demonstrate component is ready for reuse in larger sub-system                         Component Testing
Demonstrate integrated components are correctly assembled/combined and collaborate    Integration Testing
Demonstrate system meets functional requirements                                      Functional System Testing
Demonstrate system meets non-functional requirements                                  Non-Functional System Testing
Demonstrate system meets industry regulation requirements                             System or Acceptance Testing
Demonstrate supplier meets contractual obligations                                    (Contract) Acceptance Testing
Validate system meets business or user requirements                                   (User) Acceptance Testing
Demonstrate system, processes and people meet business requirements                   (User) Acceptance Testing

Tests as demonstrations

l “Demonstrate” is the word most often used in test objectives
l Better than “prove”, which implies mathematical certainty (which is impossible)
l But is the word “demonstrate” too weak?
– it represents exactly what we will do
– we provide evidence for others to make a decision
– we can only run a tiny fraction of the tests that are possible
– so we really are only doing a demonstration with a small sample of tests.


But tests should aim to locate faults, shouldn't they?

l The tester’s goal: to locate faults
l We use boundary tests, extreme values, invalid data, exceptional conditions etc. to expose faults:
– if we find faults, these are fixed and re-tested
– we are left with tests that were designed to detect faults, some of which did detect faults, but do so no longer
l We are left with evidence that the feature works correctly and our test objective is met
l There is no conflict between:
– strategic risk-based test objectives, and
– the tactical goal of locating faults.

Testing and meeting requirements

l Risk-based test objectives do not change the methods of test design much
l Functional requirements
– We use formal or informal test design techniques as normal
l Non-functional requirements
– Test objectives are often detailed enough to derive specific tests.


Designing the Test Process

Risk Identification (Tester Activity)
• Consult business and technical staff
• Prepare a draft register of risks

Risk Analysis (Workshop)
• Discuss risks
• Assign probability and consequence scores
• Calculate exposure

Risk Response (Tester Activity)
• Formulate test objectives, select test techniques
• Document dependencies, requirements, costs and timescales for testing
• Assign Test Effectiveness score
• Nominate responsibilities

Test Scoping (Review and Decision)
• Agree scope of risks to be addressed by testing
• Agree responsibilities and budgets

Test Process Definition (Tester Activity)
• Draft the test process from the Test Process Worksheet
• Complete test stage definitions


Test process worksheet

The worksheet columns are: Failure Mode or Objective; Probability; Consequence; Test Effectiveness; RISK Number; one column per test stage (Prototyping, Infrastructure, Sub-System, Application System, Non-Functional Tests, User Acceptance, Acceptance (BTS), Operational, Live Confidence, Customer Live Trial); and Test Technique.

Client Platform
1 Which browsers, versions and O/S platforms will be supported (including non-frames, non-graphic browsers etc.)?
2 New platforms: Web TV, mobile phones, Palm Pilots etc.
3 Connection through commercial services, e.g. MSN, Compuserve, AOL
4 Browser HTML syntax checking
5 Browser compatibility HTML checking
6 Client configuration, e.g. unusable, local character sets being rejected by database etc.
7 Client configuration: client turns off graphics, rejects cookies, cookies time out, client doesn’t have required plug-ins etc.
8 Minimum supported client platform to be determined/validated

Component Functionality
9 Client component functionality
10 Client web-page object loading
11 Custom-built infrastructure component functionality
12 COTS component functionality
13 HTML page content checking - spelling, HTML validation

System/Application Functionality
14 End-to-end system functionality
15 Loss of context/persistence between transactions

Using the worksheet - risks

l Failure Mode or Objective column
– failures/risks
– requirements for demonstrations
– mandatory/regulatory/imposed requirements
l Probability of the problem occurring
l Consequence of failure
l Test Effectiveness - if we test, how likely would
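A row of the worksheet can be modelled as a record whose RISK number is the product of its probability and consequence scores, with the stage ticks as a set. The field names below are assumptions based on the column headings above:

```python
from dataclasses import dataclass, field

@dataclass
class WorksheetRow:
    failure_mode: str
    probability: int         # 1-5, per the probability table
    consequence: int         # 1-5, per the severity table
    test_effectiveness: int  # 1-5, how well testing can address the risk
    stages: set = field(default_factory=set)  # test stages ticked for this row

    @property
    def risk_number(self) -> int:
        """The worksheet's RISK Number column: probability x consequence."""
        return self.probability * self.consequence

row = WorksheetRow("Loss of context/persistence between transactions",
                   probability=4, consequence=4, test_effectiveness=3,
                   stages={"Application System", "Customer Live Trial"})
print(row.risk_number)  # 16
```

Sorting a list of such rows by `risk_number` descending reproduces the prioritised register the workshop agrees on.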


Creating the worksheet

l Create a template sheet with initial risks and objectives based on experience/checklists
l Cross-functional brainstorming
– stakeholders or technically qualified nominees
– might take all day, but worth completing in one session to retain momentum
l If you can’t get a meeting, use the specs, then get individuals to review.

Illustrative Example from an RBT Prototype


Test products through the lifecycle

[Diagram: initial risk assessment → test objectives → test stages → test process definition → master test planning → test plan/procedures → test specification → test log → test execution → test results analysis → release risk assessment. A chart shows residual risks falling from the start, through today, towards the planned end, as progress is made through the test plan.]

Risk detail

• This is the risk detail - it is one row of the test process worksheet
• An example screen from the RBT prototype application.


From risks to test objectives

[Screen from the RBT prototype: the initial risk assessment feeding the test objectives.]


Test stage - key attributes

Test Objectives: The objectives of this stage of testing, based on the risks to be addressed and the generic objectives for the test stage.
Component(s) under Test: The architectural components, documents or business processes to be subjected to the test.
Baseline: Document(s) defining the requirements to be met by the components under test (used to predict expected results).
Responsibility: Groups responsible for e.g. preparing tests, executing tests and performing analysis of test results.
Environment: Environment in which the test(s) will be performed.
Entry Criteria: Criteria that must be met before test execution may start.
Exit Criteria: Criteria to be met for the test stage to end.
Techniques/Tools: Special techniques or methods to be adopted; test harnesses, drivers or automated test tools to be used.
Deliverables: Inventory of deliverables from the test stage.
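The key attributes can be gathered into one record per test stage. This is a sketch only: the field names mirror the attribute list, and the example values are assumed:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestStageDefinition:
    # Field names mirror the key attributes above; values here are examples.
    name: str
    test_objectives: List[str]
    components_under_test: List[str]
    baseline: List[str]
    responsibility: str
    environment: str
    entry_criteria: List[str] = field(default_factory=list)
    exit_criteria: List[str] = field(default_factory=list)
    techniques_tools: List[str] = field(default_factory=list)
    deliverables: List[str] = field(default_factory=list)

stage = TestStageDefinition(
    name="System Test",
    test_objectives=["Demonstrate system meets functional requirements"],
    components_under_test=["Sample IT Application"],
    baseline=["Functional Specification"],
    responsibility="Test team",
    environment="ACCTEST",
)
```

One such record per stage, plus the worksheet rows that map risks to stages, is essentially what the master test plan assembles.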

Test stage definition

[Diagram: the lifecycle steps 'initial risk assessment', 'test objectives' and 'test stages' highlighted.]


Test stage definition 2


Test stage definition 4

The test stage definition includes the required items to create an IEEE 829 format test plan.

Master Test Planning

[Diagram: the lifecycle steps from initial risk assessment through test objectives, test stages and test process definition to master test planning.]


Master Test Plan glues it all together

Example test stage

Project: Sample IT Project
Test Stage: LSI - Large Scale Integration

Description: The system-tested application will be tested in conjunction with associated systems that it must integrate with.

Features To Be Tested: The Sample IT Application will be tested in conjunction with the Banking Interface and the Electronic Mail Interface. The following features are in scope:
- payment processing
- payments exceptions
- electronic mail notifications (all types).

Features Not To Be Tested: Integration with the legacy Financial Accounting system FTBS is out of scope for this project.

Item Pass/Fail Criteria: Messages are triggered as predicted. Data transfer is performed in a synchronised way. Data reconciles across integrated systems.

Suspend/Resume Criteria: Testing will be suspended if it is not possible to perform the basic data transfers:
- payments
- exceptions
- electronic mail notifications.

Test Deliverables: LSI Test Plan, LSI Test Procedures, LSI End of Phase Report.

Objects Under Test: Sample IT Application, Banking Interface, Electronic Mail Interface.

Test Objectives: Risk R07 - Demonstrate system and banking systems integrate.

Technique: System Integration Test
Organisation: Joint Customer and Supplier Activity
Environment: ACCTEST
Name: Evolutif Test
Estimate: 120 days
Dependencies: Assumes functional system testing is complete to the degree that the key billing functions operate with stubbed-out interfaces.

The test stage definition example shows: stage objectives, technique, organisation, object under test, environment, etc.


Ongoing risk assessment

[Diagram: the full lifecycle from initial risk assessment through to release risk assessment, with the residual-risk chart showing progress through the test plan from start, through today, to the planned end.]

Risk-based test reporting

[Chart: all risks are ‘open’ at the start; residual risks fall as tests close them; the residual risks of releasing TODAY are those still open, falling towards the planned end.]
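The reporting model described above - every risk open at the start, the residual risk of releasing today being whatever test evidence has not yet closed - can be sketched as follows. Names are illustrative:

```python
def residual_risks(register: dict) -> list:
    """register maps risk id -> True once test evidence has closed it.
    Returns the risks still open: the residual risk of releasing today."""
    return [risk for risk, closed in register.items() if not closed]

register = {"R01": True, "R02": False, "R03": False}
print(residual_risks(register))  # ['R02', 'R03']
```

Reporting this list daily, rather than test-case counts, is what keeps the risks being taken visible to management.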


Benefits of risk-based test reporting

l Risk of release is known:
– on the day you start and throughout the test phase
– on the day before testing is squeezed
l Progress through the test plan brings positive results - risks are checked off, benefits become available
l Pressure: to eliminate risks, and for testers to provide evidence that risks are gone
l We assume the system does not work until we have evidence - “guilty until proven innocent”
l Reporting is in the language that management and stakeholders understand.

Benefit & objectives based test reporting

[Diagram: benefits linked to test objectives, each blocked by open or closed risks; a benefit becomes available only when the risks blocking its objectives are closed.]


Benefits of benefit-based test reporting

l Risk(s) that block every benefit are known:
– on the day you start and throughout the test phase
– before testing is squeezed
l Progress through the test plan brings positive results - benefits are delivered
l Pressure: to eliminate risks, and for testers to provide evidence that benefits are delivered
l We assume that the system has no benefits to deliver until we have evidence
l Reporting is in the language that management and stakeholders understand.
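Under this model a benefit is deliverable only when every risk blocking it has been closed. A sketch under that assumption, with names and risk ids illustrative:

```python
def deliverable_benefits(benefits: dict, open_risks: set) -> list:
    """benefits maps benefit name -> set of risk ids that block it.
    A benefit is deliverable once none of its blockers remain open."""
    return [b for b, blockers in benefits.items()
            if not (blockers & open_risks)]

benefits = {"online payments": {"R07"},
            "e-mail notifications": {"R02", "R05"}}
print(deliverable_benefits(benefits, open_risks={"R05"}))  # ['online payments']
```

Inverting the question - which open risk blocks the most benefits? - gives testers an obvious way to order the remaining work.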

How good is our testing?

l Our testing is good if it provides:
– evidence of the benefits delivered
– evidence of the CURRENT risk of release
– at an acceptable cost
– in an acceptable timeframe
l Good testing is:
– knowing the status of benefits with confidence
– knowing the risk of release with confidence.


Closing Comments

Risk-based test approach: planning

l RBT approach helps stakeholders:
– they get more involved and buy in
– they have better visibility of the test process
l RBT approach helps testers:
– approval to test against risks in scope
– approval to not test against risks out of scope
– clearer test objectives upon which to design tests
l RBT approach helps developers:
– specifies their responsibility for testing in detail
– “no hiding place”.


Risk-based test approach: execution and reporting

l RBT approach helps stakeholders:
– they have better visibility of the benefits available and the risks that block benefits
l RBT approach helps management:
– to see progress in terms of risks addressed and benefits that are available for delivery
– to manage the risks that block acceptance
– to better make the release decision.

Risk-Based Testing

Close

Any Questions?

document templates can be found at
