Benefits of a Higher Quality Level of the Software Process: Two Organizations Compared

Daniel Galin, Ruppin Academic Center
Motti Avrahami


Software quality assurance (SQA) professionals believe that a higher quality level of the software development process yields higher quality performance, and they seek quantitative evidence based on empirical findings. The few available journal and conference papers that present quantitative findings use a methodology based on a comparison of “before-after” observations in the same organization. A limitation of this before-after methodology is the long observation period, during which intervening factors, such as changes in products and in the organization, may substantially affect the results. The authors’ study employed a methodology based on a comparison of observations in two organizations simultaneously (Alpha and Beta). Six quality performance metrics were employed: 1) error density, 2) productivity, 3) percentage of rework, 4) time required for an error correction, 5) percentage of recurrent repairs, and 6) error detection effectiveness.

Key words: CMM level effects, CMM level appraisal, software development performance metrics

SQP References

Sustaining Best Practices: How Real-World Software Organizations Improve Quality Processes, vol. 7, issue 3, Diana Mekelburg

Making Effective Use of the CMM in an Organization: Advice from a CMM Lead Assessor, vol. 2, issue 4, Pat O’Toole

INTRODUCTION

Software quality assurance (SQA) professionals believe that a higher quality level of the software development process yields higher quality performance, and they seek evidence that investments in SQA systems actually improve the performance of the software development process. Journal and conference papers provide such evidence by presenting studies that show SQA investments result in improved software development processes. Most of these studies are based on a comparison of “before-after” observations in the same organization. Only some of these papers quantify the performance improvement achieved by SQA system investments, presenting percentages of productivity improvement, percentages of reduction in defect density, and so on.

Of special interest are papers that quantify performance improvement and also measure software process quality level advancement. The Capability Maturity Model (CMM®) and CMM IntegrationSM (CMMI®) levels are the measurement tools common to all of these papers. According to this approach, the improvement of the quality level of the software process is measured by attaining a higher CMM (or CMMI) level in the organization. For example, Jung and Goldenson (2003) found that software maintenance projects from higher CMM-level organizations typically report fewer schedule deviations than those from organizations assessed at lower CMM levels.



For U.S. maintenance projects the results are:

• Mean deviation of 0.464 months for CMM level 1 organizations

• Mean deviation of 0.086 months for CMM level 2 organizations

• Mean deviation of 0.069 months for CMM level 3 organizations

A variety of metrics are applied to measure the resulting performance improvement of the software development process, relating mainly to quality, productivity, and schedule keeping. Results of this nature are presented by McGarry et al. 1999; Diaz and King 2002; Pitterman 2000; Blair 2001; Keeni 2000; Franke 1999; Goldenson and Gibson 2003; and Isaac, Rajendran, and Anantharaman 2004a; 2004b.

Galin and Avrahami (2005; 2006) performed an analysis of past studies (a meta-analysis) based on results presented in 19 published quantitative papers. Their results, which are statistically significant, show an average performance improvement, according to six metrics, ranging from 38 percent to 63 percent for an advancement of one CMM level. Another finding of this study is an average return on investment of 360 percent for investments in one CMM level advancement. They found similar results for CMMI level advancement, but the publications that present findings for CMMI studies do not provide statistically significant results.

Critics may claim that the picture portrayed by the published papers is biased by the tendency not to publish negative results. Even if one assumes some bias, the multitude of published results indicates that SQA improvement investments make a significant contribution to performance, even if the real effect is somewhat smaller than reported.

The papers mentioned in Galin and Avrahami’s study, which quantify performance improvement and rank software process quality level improvement, were formulated according to the before-after methodology. An important limitation of this before-after methodology is the long period of observations, during which intervening factors, such as changes in products, the organization, and interfacing requirements, may substantially affect the results. In addition, the gradual changes typical of the implementation of software process improvements cause performance to vary during the observation period, which may affect the study results and lead to inaccurate conclusions.

An alternative study methodology that minimizes these undesired effects is one based on comparing the performance of several organizations observed during the same period (the “comparison of organizations” methodology). The observation period, when applying this methodology, is much shorter, and the observed organization is not expected to undergo a change process during the observation period. As a result, the software process is relatively uniform during the observation period and the effects of uncontrolled software development environment changes are diminished. It is important to find out whether the results obtained by research applying the comparison of organizations methodology support findings of research that applied the before-after methodology in empirical studies.

Papers that report findings of studies that use the comparison of organizations methodology are rare. One example is Herbsleb et al. (1994), which presents comparative case study results for two projects with similar characteristics performed during the same period at Texas Instruments. One of the projects applied the “old software development methodology,” while the other used the “new (improved) software development methodology.” The authors report a reduction of the cost per software line of code by 65 percent. Another result was a substantial decrease in the defect density, from 6.9 to 2.0 defects per 1,000 lines of code. In addition, the average costs to fix a defect were reduced by 71 percent. The improved software development process was the product of intensive software process improvement (SPI) activities, and was characterized by an entirely different distribution of resources invested during the software development process. However, Herbsleb et al. (1994) provide no comparative details about the quality level of the software process, that is, no appraisal of the CMM level for the two projects.

The authors’ study applies the comparison of organizations methodology, which is based on empirical data of two software developing organizations (“developers”) with similar characteristics, collected in the same period. The empirical data that became available to the authors enabled them to process comparative results for each of the two developers, which include: 1) quantitative performance results according to several software process performance metrics; and 2) a CMM appraisal of the developer’s quality level of its software processes. In addition, the available data enable them to provide an explanation for the performance differences based on the differences in resource investment preferences during the software development phases.


THE CASE STUDY ORGANIZATIONS

The authors’ case study is based on records and observations of two software development organizations. The first organization, Alpha, is a startup firm that implements only basic software quality assurance practices. The second organization, Beta, is the software development department of an established electronics firm that performs a wide range of software quality assurance practices throughout the software development process. Both Alpha and Beta develop C++ real-time embedded software in the same development environment: Alpha’s software product serves the telecommunication security industry sector, while Beta’s software product serves the aviation security industry sector. Both organizations employ the waterfall methodology; however, during the study Alpha’s implementation was “crippled” because the resources invested in the analysis and design stages were negligible. While the Alpha team adopted no software development standard, Beta’s software development department was certified according to the ISO 9000-3 standard (ISO 1997) and according to the aviation industry software standard DO-178B, Software Considerations in Airborne Systems and Equipment Certification (RTCA 1997). The Federal Aviation Administration (FAA) accepts use of this standard as a means of certifying software in avionics. Neither software development organization was CMM certified.

During the study period Beta developed one software product, while Alpha developed two versions of the same software product. The software process and the SQA system of Beta were stable during the entire study period. The SQA system of Alpha, however, experienced some improvements during the study period that became effective for the development of the second version of its software product. The first and second parts of the study period, dedicated to the development of the two versions, lasted six and eight months, respectively.

A preliminary stage of the analysis was performed to test whether the improvements introduced in Alpha during the second part of the study period had a statistically significant effect on its performance.

The Research Hypotheses

The research hypotheses are:

• H1: Alpha’s software process performance metrics for its second product will be similar to those for its first product.

• H2: Beta, as the developer with the higher quality level software process, will achieve software process performance higher than Alpha’s according to all performance metrics.

• H3: The differences in performance achievements found by the comparison of organizations methodology will support the results of studies performed according to the before-after methodology.

METHODOLOGY

The authors’ comparative case study research was planned as a preliminary stage followed by a two-stage comparison:

• Preliminary stage: Comparison of software process performance for Alpha’s first and second products (first part of the study period vs. the second part).

• Stage one: Comparison of the software process performance of Alpha and Beta.

• Stage two: Comparison of the stage-one findings (of the comparison of organizations methodology) with the results of earlier research performed according to the before-after methodology.

The Empirical Data

The study was based on original records of software correction processes that the two developers made available to the study team. The records cover a period of about one year for each developer. The following six software process performance metrics (“performance metrics”) were calculated:

1. Error density (errors per 1,000 lines of code)
2. Productivity (lines of new code per working day)
3. Percentage of rework
4. Time required for an error correction (days)
5. Percentage of recurrent repairs
6. Error detection effectiveness

The detailed records enabled the authors to calculate these performance metrics for each developer. The first five performance metrics were calculated on a monthly basis; for the sixth metric, only a global value calculated for the entire period could be processed for each developer.
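
To make the metric definitions concrete, the following is a minimal Python sketch of how the six performance metrics could be computed from aggregated correction-record data. The function and parameter names are illustrative assumptions, not the authors’ actual record format or tooling.

    # Illustrative helpers for the six performance metrics; not the authors' code.

    def error_density(errors_found, new_kloc):
        """Errors per 1,000 new lines of code."""
        return errors_found / new_kloc

    def productivity(new_loc, working_days):
        """New lines of code written per working day."""
        return new_loc / working_days

    def rework_percentage(rework_days, total_days):
        """Share of total effort spent on rework, in percent."""
        return 100.0 * rework_days / total_days

    def mean_correction_time(correction_durations_days):
        """Average number of days needed to correct an error."""
        return sum(correction_durations_days) / len(correction_durations_days)

    def recurrent_repair_percentage(total_repairs, recurrent_repairs):
        """Share of repairs that had to be repeated, in percent."""
        return 100.0 * recurrent_repairs / total_repairs

    def error_detection_effectiveness(errors_before_delivery, errors_after_delivery):
        """Share of all errors detected before delivery, in percent."""
        total = errors_before_delivery + errors_after_delivery
        return 100.0 * errors_before_delivery / total

    # Example with the error counts reported in Table 1:
    # Alpha: 1,032 errors during development, 111 after delivery
    # Beta:  331 errors during development, 1 after delivery
    print(round(error_detection_effectiveness(1032, 111), 1))  # 90.3
    print(round(error_detection_effectiveness(331, 1), 1))     # 99.7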


Table 1 presents a comparison of the organization characteristics and a summary of the development activities of Alpha and Beta.

The CMM Appraisal

Since the studied organizations were not CMM certified, the authors used an official SEI publication, the maturity questionnaire for CMM-based appraisal for internal process improvement (CBA IPI) (Zubrow, Hayes, and Goldenson 1994), to prepare an appraisal of Alpha’s and Beta’s software process quality levels. The appraisal yielded the following: CMM level 1 for Alpha and CMM level 3 for Beta. A summary of the appraisal results for Alpha and Beta is presented in Table 2.

The Statistical Analysis

For five of the six performance metrics, the calculated monthly values for the two organizations were compared and statistically tested by applying a t-test procedure. For the sixth performance metric, error detection effectiveness, only one global value (calculated for the entire study period) was available for each organization: 90.3 percent for Alpha and 99.7 percent for Beta.
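
As a hedged sketch of this kind of comparison, the Python snippet below applies a two-sample t-test to monthly values of one metric for Alpha and Beta. The monthly series are placeholders, not the study’s data, and the sketch uses Welch’s variant of the t-test (unequal variances), which may differ from the exact procedure the authors applied.

    # Sketch of a two-sample t-test on monthly metric values; data are illustrative.
    from scipy import stats

    alpha_monthly_error_density = [17.2, 15.9, 18.4, 16.1, 14.8, 17.0, 16.5, 15.2]  # placeholder values
    beta_monthly_error_density = [5.3, 4.1, 6.0, 4.8, 5.5, 4.4, 5.1, 4.9]           # placeholder values

    t_stat, p_value = stats.ttest_ind(alpha_monthly_error_density,
                                      beta_monthly_error_density,
                                      equal_var=False)  # Welch's t-test

    # A p-value below 0.05 corresponds to the "Significant" entries in Tables 3 and 4.
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")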

THE FINDINGS

The Preliminary Stage

A comparison of Alpha’s performance metrics for the two parts of the study period is shown in Table 3.

Alpha’s performance results for the second study period show some improvement (compared with the results of the first study period) for all five performance metrics that were calculated on a monthly basis. However, the improvements were found to be statistically insignificant for four out of the five performance metrics. Only for one performance metric, the percentage of recurrent repairs, did the results show a significant improvement.

Accordingly, H1 was supported for four out of five performance metrics. H1 was rejected only for the metric of the percentage of recurrent repairs.

Stage 1: The Organization Comparison - Alpha vs. Beta

As Beta’s software process quality level was appraised to be much higher than that of Alpha, according to H2, the quality performance achievements of Beta were expected to be significantly higher than Alpha’s. The comparison of Alpha and Beta’s quality performance results is presented in Table 4.

The results of the statistical analysis show that for three out of the six performance metrics the performance of Beta is significantly better than that of Alpha. It should be noted that for the percentage of recurrent repairs, where Alpha demonstrated a significant performance improvement during the second part of the study period, Beta’s performance was significantly better than Alpha’s for each of the two parts of the study period.

Subject of comparison | Alpha | Beta

a) The organization characteristics
Type of software product | Real-time embedded C++ software | Real-time embedded C++ software
Industry sector | Telecommunication security | Aviation electronics
Certification according to software development quality standards | None | 1. ISO 9001; 2. DO-178B
CMM certification | None | None
CMM level appraisal | CMM level 1 | CMM level 3

b) Summary of development activities
Period of data collection | Jan. 2002 – Feb. 2003 | Aug. 2001 – July 2002
Team size | 14 | 12
Man-days invested | 2,824 | 2,315
New lines of code | 56K | 62K
Number of errors identified during development process | 1,032 | 331
Number of errors identified after delivery to customers | 111 | 1

table 1 Comparison of the organization characteristics and summary of development activities of Alpha and Beta


For the productivity metric, Beta’s results were 35 percent better than those of Alpha, but the difference was not statistically significant. Somewhat surprising results were found for the time required for an error correction, where the performance of Alpha was 14 percent better than Beta’s, but the difference was also statistically insignificant. The explanation for this finding probably lies in the much lower quality of Alpha’s software product. Alpha’s lower quality is demonstrated by its much higher percentages of recurrent repairs, which were found to be significantly higher than Beta’s: fivefold and threefold higher for the first and second parts of the study period, respectively. Alpha’s lower quality is especially evident in the sixth performance metric, error detection effectiveness. Although only global results for the entire study period are available, a clear inferiority of Alpha is revealed: the error detection effectiveness of Alpha is only 90.3 percent, compared with Beta’s 99.7 percent. In other words, 9.7 percent of Alpha’s errors were discovered by its customers, compared with only 0.3 percent of Beta’s errors.
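
The ratios and percentages quoted in this paragraph can be reproduced from the Table 4 means; the snippet below is only a worked-arithmetic sketch, not the authors’ analysis code.

    # Reproducing the quoted figures from the Table 4 means (illustrative arithmetic).
    alpha_productivity, beta_productivity = 19.7, 26.7          # lines of code per working day
    alpha_corr_time, beta_corr_time = 25.0, 29.0                # days per error correction
    alpha_recurrent_p1, alpha_recurrent_p2, beta_recurrent = 26.7, 13.8, 4.8  # percent

    print(round(100 * (beta_productivity - alpha_productivity) / alpha_productivity))  # 36 -> the roughly 35 percent productivity gap
    print(round(100 * (beta_corr_time - alpha_corr_time) / beta_corr_time))            # 14 -> Alpha's ~14 percent shorter correction time
    print(round(alpha_recurrent_p1 / beta_recurrent, 1))  # 5.6 -> "fivefold" (first part)
    print(round(alpha_recurrent_p2 / beta_recurrent, 1))  # 2.9 -> "threefold" (second part)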

To sum up stage 1, H2 was supported by statistically significant results for the following performance metrics: 1) error density, 2) percentage of rework, and 3) percentage of recurrent repairs. For an additional metric, error detection effectiveness, no statistical testing was possible, but the global results clearly indicate the performance superiority of Beta and thus support H2. For two metrics H2 was not supported statistically: for the productivity metric, the results show substantially better performance for Beta, and for the time required for an error correction, Alpha’s

No. | Key Process Area | Alpha grades | Beta grades
1. | Requirements management | 1.67 | 10
2. | Software project planning | 4.28 | 10
3. | Software project tracking and oversight | 5.74 | 8.57
4. | Software subcontract management | 6.25 | 10
5. | Software quality assurance (SQA) | 3.75 | 10
6. | Software configuration management (SCM) | 5 | 8.75
   | Level 2 Average | 4.45 | 9.55
7. | Organization process focus | 1.42 | 10
8. | Organization process definition | 0 | 8.33
9. | Training program | 4.28 | 7.14
10. | Integrated software management | 0 | 10
11. | Software product engineering | 1.67 | 10
12. | Intergroup coordination | 4.28 | 8.57
13. | Peer reviews | 0 | 8.33
   | Level 3 Average | 1.94 | 9.01
14. | Quantitative process management | 0 | 0
15. | Software quality management | 4.28 | 8.57
   | Level 4 Average | 2.14 | 4.29
16. | Defect prevention | 0 | 0
17. | Technology change management | 2.85 | 5.71
18. | Process change management | 1.42 | 4.28
   | Level 5 Average | 1.42 | 3.33

table 2 Summary of the maturity questionnaire detailed appraisal results for Alpha and Beta

SQA metrics | Alpha, first part of the study period (6 months): Mean, s.d. | Alpha, second part of the study period (8 months): Mean, s.d. | t (α=0.05) | Statistical significance of differences

1. Error density (errors per 1,000 lines of code) | 17.9, 3.8 | 15.8, 4.2 | t=0.964 | Not significant
2. Productivity (lines of code per working day) | 16.8, 11.8 | 21.9, 18.4 | t=-0.585 | Not significant
3. Percentage of rework | 35.4, 12.7 | 28.5, 19.8 | t=0.746 | Not significant
4. Time required for an error correction (days) | 35.9, 33.3 | 16.9, 8.4 | t=1.570 | Not significant
5. Percentage of recurrent repairs | 26.7, 11.8 | 13.8, 8.7 | t=2.200 | Significant
6. Error detection effectiveness (global performance metric for the entire study period) | 90.3% | 99.7% | — | Statistical testing is not possible

table 3 Alpha’s performance comparison for the two parts of the study period


results are a little better than Beta’s, with no statistical significance. To sum up, as the results for four of the performance metrics support H2 and no result rejects it, one may conclude that the results support H2. The authors note that these are typical case study results, in which a clearly supported hypothesis is accompanied by some inconclusive results.

Stage 2: Comparison of Methodologies—The Comparison of Organizations Methodology vs. the Before-After Methodology

In this stage the authors compared the results of the current case study, performed according to the comparison of organizations methodology, with results obtained using the commonly applied before-after methodology. For this purpose, the work of Galin and Avrahami (2005; 2006), which is based on a combined analysis of 19 past studies, serves as a suitable representative of results obtained by applying the before-after methodology.

The comparison is applicable to four software process performance metrics that are common to the current case study and to the combined analysis of past studies carried out by Galin and Avrahami. These common performance metrics are:

• Error density (errors per 1,000 lines of code)
• Productivity (lines of code per working day)
• Percentage of rework
• Error detection effectiveness

As Alpha’s and Beta’s SQA systems were appraised as similar to CMM levels 1 and 3, respectively, their quality performance gap is compared with Galin and Avrahami’s mean quality performance improvement for a CMM level 1 organization advancing to CMM level 3. The comparison for the four performance metrics is shown in Table 5.

The results of the comparison support hypothesis H3 for all four performance metrics. For two of the performance metrics (error density and percentage of rework) this support is based on statistically significant results from the current test case. For the productivity metric the support is based on a substantial productivity improvement that is not statistically significant. The comparison results for the four metrics reveal similarity in direction; differences in the size of the achievements are to be expected when comparing multiproject mean results with case study results.
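
The left-hand column of Table 5 can be derived directly from the Table 4 means and the customer-detected error percentages; the sketch below shows that arithmetic under the assumption that this is how the percentages were computed.

    # Deriving the "comparison of organizations" column of Table 5 from Table 4 (assumed arithmetic).
    def reduction(worse, better):
        """Percentage reduction achieved by the better (smaller) value."""
        return 100.0 * (worse - better) / worse

    def increase(base, improved):
        """Percentage increase achieved by the improved (larger) value."""
        return 100.0 * (improved - base) / base

    print(round(reduction(16.8, 5.0)))   # ~70% reduction in error density
    print(round(increase(19.7, 26.7)))   # ~36% increase in productivity
    print(round(reduction(31.4, 17.9)))  # ~43% reduction in percentage of rework
    print(round(reduction(9.7, 0.3)))    # ~97% reduction in errors discovered by customers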

To sum up, the results of the current test case performed according to the comparison of organizations methodology conform to the published results obtained by using the before-after methodology.

DISCUSSION

The reason for the substantial differences in software process performance achievement between Alpha and Beta is the main subject of this discussion. The two developers claimed to use the same methodology; the authors therefore assume that the substantial differences in software process performance result from differences in how the developers actually implemented it. To investigate the causes of the quality performance gap, the authors first examine the available data on the differences between Alpha’s and Beta’s distributions of error identification phases along the development process.

SQA metrics | Alpha (14 months): Mean, s.d. | Beta (12 months): Mean, s.d. | t (α=0.05) | Statistical significance of differences

1. Error density (errors per 1,000 lines of code) | 16.8, 4.0 | 5.0, 3.0 | 8.225 | Significant
2. Productivity (lines of code per working day) | 19.7, 15.5 | 26.7, 16.6 | -1.111 | Not significant
3. Percentage of rework | 31.4, 16.9 | 17.9, 8.0 | 2.532 | Significant
4. Time required for an error correction (days) | 25.0, 23.7 | 29.0, 15.6 | -0.497 | Not significant
5. Percentage of recurrent repairs – part 1 | 26.7, 11.8 | 4.8, 8.1 | 4.647 | Significant
   Percentage of recurrent repairs – part 2 | 13.8, 8.7 | 4.8, 8.1 | 2.239 | Significant
6. Error detection effectiveness - % discovered by the customer (global performance metric for the entire study period) | 9.7 | 0.3 | — | No statistical analysis was possible

table 4 Quality performance comparison—Alpha vs. Beta


Table 6 presents, for Alpha and Beta, the percentages of errors identified in the various development phases.

Table 6 reveals entirely different distributions of the error identification phases for Alpha and Beta. While Alpha identified only 11.5 percent of its errors in the requirement definition, analysis, and design phases, Beta managed to identify almost half of its total errors during the same phases. Another delay in error identification is noticeable in the unit testing phase: Alpha identified less than 4 percent of its total errors in unit testing, while Beta identified more than 20 percent of its total errors in the same phase (almost 40 percent of the errors it identified by testing). The delay in error identification by Alpha is again apparent when comparing the percentages of errors identified during the integration and system tests: 75 percent for Alpha compared with 35 percent for Beta. However, the most remarkable difference between Alpha and Beta is in the rate of errors detected by the customers: 9.7 percent for Alpha compared with only 0.3 percent for Beta. This enormous difference in error detection efficiency, as well as the remarkable difference in error density, is the main contributor to the higher quality level of Beta’s software process.
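
The “almost 40 percent of the errors it identified by testing” figure follows from the Table 6 phase percentages; the short sketch below reproduces it, assuming “testing” covers the unit, integration, and system testing phases.

    # Share of testing-phase errors that were caught already in unit testing (from Table 6).
    def unit_share_of_testing(unit_pct, integration_system_pct):
        return 100.0 * unit_pct / (unit_pct + integration_system_pct)

    print(round(unit_share_of_testing(3.8, 75.0)))   # ~5% for Alpha
    print(round(unit_share_of_testing(22.3, 34.6)))  # ~39% for Beta ("almost 40 percent")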

Further investigation of the causes of Beta’s higher quality performance leads to data on the distribution of resources along the development process. Table 7 presents the distribution of the development resources along the development process, indicating noteworthy differences between the developers.

Examination of the data presented in Table 7 reveals substantial differences in resource distribution between Alpha and Beta. While Beta’s team invested more than a third of its resources in the requirement definition, analysis, and design phases, Alpha’s investments in the same phases were negligible. Furthermore, while Alpha invested about half of its development resources in software testing and the consequent software corrections, Beta’s investments in these phases amounted to less than a quarter of the total project resources. It may be concluded that Alpha’s shift of resource investment “downstream” resulted in a parallel downstream shift in the distribution of error identification phases (see Table 6). Beta’s very low investment in correcting failures identified by customers, compared with Alpha’s investment in this phase, corresponds well to the differences in error identification distribution between the developers. It may be concluded that the enormous difference in error detection efficiency, as well as the

SQA metrics | Comparison of organizations methodology: Beta’s performance compared with Alpha’s (%) | Before-after methodology: mean performance improvement for a CMM level 1 advancement to CMM level 3 (%)*

1. Error density (errors per 1,000 lines of code) | 70% reduction (Significant) | 76% reduction
2. Productivity (lines of code per working day) | 36% increase (Not significant) | 72% increase
3. Percentage of rework | 43% reduction (Significant) | 65% reduction
4. Error detection effectiveness | 97% reduction (Not tested statistically) | 84% reduction

* According to Galin and Avrahami (2005; 2006)

table 5 Quality performance improvement results—methodology comparison

Development phases | Alpha: Identified errors %, cumulative % | Beta: Identified errors %, cumulative %

Requirement definition | 5.8, 5.8 | 33.8, 33.8
Design | 5.7, 11.5 | 9.0, 42.8
Unit testing | 3.8, 15.3 | 22.3, 65.1
Integration and system testing | 75.0, 90.3 | 34.6, 99.7
Post delivery | 9.7, 100.0 | 0.3, 100.0

table 6 Percentages of errors identified in the various development phases—Alpha vs. Beta


remarkable difference in error density, is the product of the downstream shift of the distribution of software process resources. In other words, these differences demonstrate the results of a “crippled” implementation of the development methodology, one that actually begins the software process at the programming phase. This crippled development methodology yields a software process of substantially lower productivity, accompanied by a remarkable increase in error density and a colossal reduction in error detection efficiency.

At this stage it is interesting to compare the authors’ findings regarding the differences in resource distribution between Alpha and Beta with those of Herbsleb et al.’s study. A comparison of findings on resource distribution along the development process for the current study and the Texas Instruments projects is presented in Table 8.

The findings by Herbsleb et al. for the Texas Instruments projects indicate that the new (improved) development methodology focuses on upstream development phases, while the old methodology led the team to invest mainly in coding and testing. In other words, while the improved-methodology project invested 40 percent of its development resources in the requirement definition and design phases, the old-methodology project invested only 8 percent of its resources in these phases. Herbsleb et al. also found a major difference in the resources invested in unit testing: 18 percent of the total testing resources in the old-methodology project compared with 90 percent in the improved-methodology project. Herbsleb et al. believe that the change of development methodology, as evidenced by the change in resource distribution along the software development process, yielded the significant reduction in error density (from 6.9 to 2.0 defects per thousand lines of code) and the remarkable reduction in resources invested in customer support after delivery (from 23 percent of total project resources to 7 percent). These findings closely resemble the current case study findings.
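
The 18 percent and 90 percent figures for unit testing’s share of the Texas Instruments testing resources follow from the Table 8 values; the snippet below reproduces them as illustrative arithmetic only, based on Herbsleb et al.’s reported percentages.

    # Unit testing's share of total testing resources for the Texas Instruments projects (Table 8).
    def unit_share(unit_pct, integration_system_pct):
        return 100.0 * unit_pct / (unit_pct + integration_system_pct)

    print(round(unit_share(4, 18)))   # ~18% for the old methodology project
    print(round(unit_share(26, 3)))   # ~90% for the new (improved) methodology project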

Development phase | Alpha: Resources invested %, cumulative % | Beta: Resources invested %, cumulative %

Requirement definition and design | Negligible, 0 | 34.5, 34.5
Coding | 46.5, 46.5 | 41.5, 76.0
Software testing | 26.0, 72.5 | 14.0, 90.0
Error corrections according to testing results | 22.5, 95.0 | 9.5, 99.5
Correction of failures identified by customers | 5.0, 100.0 | 0.5, 100.0

table 7 Project resources according to development phase—Alpha vs. Beta

Development phase | Old development methodology project: Resources invested %, cumulative % | New (improved) development methodology project: Resources invested %, cumulative %

Requirement definition | 4%, 4% | 13%, 13%
Design | 4%, 8% | 27%, 40%
Coding | 47%, 55% | 24%, 64%
Unit testing | 4%, 59% | 26%, 90%
Integration and system testing | 18%, 77% | 3%, 93%
Support after delivery | 23%, 100% | 7%, 100%

table 8 Texas Instruments project resources distribution according to development phase—“Old development methodology” project vs. “New (improved) development methodology” project. Source: Herbsleb et al. (1994)


CONCLUSIONS

Quantitative knowledge of the expected software process performance improvement is of great importance to the software industry. The available quantitative results are based solely on studies performed according to the before-after methodology. The current case study supports these results by applying an alternative methodology, the comparison of organizations methodology. As examining the results obtained with an alternative study methodology is important, the authors recommend performing a series of case studies applying the comparison of organizations methodology. The results of these proposed case studies may support the earlier results and add substantially to their significance.

The current case study is based on existing correction records and other data that became available to the research team. In future case studies applying the comparison of organizations methodology, researchers involved at earlier stages of the development project may participate in planning the project management data collection, enabling the collection of data for a wider variety of software process performance metrics.

REFERENCES

Blair, R. B. 2001. Software process improvement: What is the cost? What is the return on investment? In Proceedings of the Pittsburgh PMI Conference, April 12.

Diaz, M., and J. King. 2002. How CMM impacts quality, productivity, rework, and the bottom line. Crosstalk 15, no. 1: 9-14.

Franke, R. 1999. Achieving Level 3 in 30 months: The Honeywell BSCE case. Presentation at the 4th European Software Engineering Process Group Conference, London.

Galin, D., and M. Avrahami. 2005. Do SQA programs work – CMM works. A meta analysis. In Proceedings of the IEEE International Conference on Software – Science, Technology & Engineering, Herzlia, Israel, 22-23 February. Los Alamitos, Calif.: IEEE Computer Society Press, 95-100.

Galin, D., and M. Avrahami. 2006. Are CMM programs beneficial? Analyzing past studies. IEEE Software 23, no. 6: 81-87.

Goldenson, D. R., and D. L. Gibson. 2003. Demonstrating the impact and benefits of CMMI: An update and preliminary results (CMU/SEI-2003-009). Pittsburgh: Software Engineering Institute, Carnegie Mellon University.

Herbsleb, J., A. Carleton, J. Rozum, J. Siegel, and D. Zubrow. 1994. Benefits of CMM-based software process improvement: Initial results (CMU/SEI-94-TR-013). Pittsburgh: Software Engineering Institute, Carnegie Mellon University. Available at: http://www.sei.cmu.edu/publications/documents/94.reports/94.tr.013.html.

Isaac, G., C. Rajendran, and R. N. Anantharaman. 2004a. Does quality certification improve software industry’s operational performance. Software Quality Professional 5, no. 1: 30-37.

Isaac, G., C. Rajendran, and R. N. Anantharaman. 2004b. Does quality certification improve software industry’s operational performance – supplemental material. Available at http://www.asq.org.

ISO. 1997. ISO 9000-3 Guidelines for the application of ISO 9001:1994 to the development, supply, installation and maintenance of computer software. Geneva, Switzerland: International Organization for Standardization.

Jung, H. W., and D. R. Goldenson. 2003. CMM-based process improvement and schedule deviation in software maintenance (CMU/SEI-2003-TN-015). Pittsburgh: Software Engineering Institute, Carnegie Mellon University.

Keeni, G. 2000. The evolution of quality processes at Tata Consultancy Services. IEEE Software 17, no. 4: 79-88.

McGarry, F., R. Pajerski, G. Page, S. Waligora, V. Basili, and M. Zelkowitz. 1999. Software process improvement in the NASA Software Engineering Laboratory (CMU/SEI-94-TR-22). Pittsburgh: Software Engineering Institute, Carnegie Mellon University. Available at: http://www.sei.cmu.edu/publications/documents/94.reports/94.tr.022.html.

Pitterman, B. 2000. Telcordia Technologies: The journey to high maturity. IEEE Software 17, no. 4: 89-96.

RTCA. 1997. DO-178B Software considerations in airborne systems and equipment certification. Washington, D.C.: Radio Technical Commission for Aeronautics / U.S. Federal Aviation Administration.

Zubrow, D., W. Hayes, J. Siegel, and D. Goldenson. 1994. Maturity questionnaire (CMU/SEI-94-SR-7). Pittsburgh: Software Engineering Institute, Carnegie Mellon University.

® Carnegie Mellon, Capability Maturity Model, CMMI, and CMM are registered trademarks of Carnegie Mellon University.

SM CMM Integration and SEI are service marks of Carnegie Mellon University.

BIOGRAPHIES

Daniel Galin is the head of information systems studies at the Ruppin Academic Center, Israel, and an adjunct senior teaching fellow with the Faculty of Computer Science, the Technion, Haifa, Israel. He has a bachelor’s degree in industrial and management engineering, and master’s and doctorate degrees in operations research from the Israel Institute of Technology, Haifa, Israel. His professional experience includes numerous consulting projects in the areas of software quality assurance, analysis and design of information systems, and industrial engineering. He has published many papers in professional journals and conference proceedings. He is also the author of several books on software quality assurance and on analysis and design of information systems. He can be reached by e-mail at dgalin@bezeqint.net.

Motti Avrahami is VeriFone’s global supply chain quality manager. He has more than nine years of experience in software quality processes and software testing. He received his master’s degree in quality assurance and reliability from the Technion, Israel Institute of Technology. He can be contacted by e-mail at mottia@gmail.com.
