
Information-Centric Assessment of Software Metrics Practices

Helle Damborg Frederiksen and Lars Mathiassen

Abstract—This paper presents an approach to enhance managerial usage of software metrics programs. The approach combines an organizational problem-solving process with a view on metrics programs as information media. A number of information-centric views are developed to engage key stakeholders in debating current metrics practices and identifying possible improvements. We present our experiences and results of using the approach at Software Inc., we offer comparisons to related approaches to software metrics, and we discuss how to use the information-centric approach for improvement purposes.

Index Terms—Improvement, information medium, soft systems approach, software metrics.

I. INTRODUCTION

STATE-OF-THE-ART textbooks emphasize metrics programs as a key to developing and maintaining professional software practices. A metrics program [46] builds on measures, which provide quantitative indications of the extent, amount, dimensions, capacity, or size of some attribute of a software product or process. The measures result from collecting one or more data points and then aggregating them through the metrics program to obtain performance indicators. The organization can use the measures and indicators to support managerial decision-making and intervention related to software practices [5], [6], [21].
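To make the relationship between measures and indicators concrete, the following minimal sketch (ours, not part of any metrics program discussed here; the field names and formulas are assumed for illustration) aggregates a few raw measures into simple performance indicators.

# Illustrative sketch only: a few hypothetical measures aggregated into indicators.
# Field names and formulas are assumptions, not the definitions of any actual program.
from dataclasses import dataclass

@dataclass
class ProjectMeasures:
    """Raw data points collected for one project in one period."""
    person_hours: float     # effort spent on the project
    function_points: int    # size of the delivered software
    defects_reported: int   # defects found in the period
    defects_corrected: int  # defects fixed in the period

def indicators(m: ProjectMeasures) -> dict:
    """Aggregate the measures into performance indicators."""
    return {
        "productivity_fp_per_hour": m.function_points / m.person_hours,
        "defect_density_per_fp": m.defects_reported / m.function_points,
        "correction_ratio": m.defects_corrected / max(m.defects_reported, 1),
    }

print(indicators(ProjectMeasures(1200.0, 150, 30, 24)))

Indicators of this kind are what managers would read when deciding whether to intervene in a project.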

Collecting and using data about the software operation for managerial purposes is by no means a trivial task [11], [13], [19], [28], [44], [47]. Project managers and software engineers are expected to supply measures, but constant deadline pressures and weak incentives to prioritize data collection typically lead to low data quality. Software managers are expected to base their decisions and interventions on metrics program data, but the appropriate data might not be available when it is needed. When data is available, managers might not trust its quality or might react emotionally to negative indicators by questioning the metrics and the data collected.

Concerns like these have led to the development of approaches to assess and improve software metrics programs [5], [6], [32], [37], [38]. This paper contributes to this line of research by presenting a new information-centric assessment approach. The approach seeks to enhance managerial usage of software metrics programs by: 1) viewing software metrics programs as media for collecting, analyzing, and communicating information about the software operation; 2) applying different information-centric views to assess metrics practices; and 3) involving relevant stakeholders in debating as-is and could-be metrics practices. The proposed approach is adaptable to the particular organizational context in which it is used. The underlying research was conducted as part of a three-year collaborative research effort [35] to improve metrics practices within Software Inc., a large Danish software organization.

Manuscript received August 13, 2003; revised May 1, 2004, December 1, 2004, and March 1, 2005. Review of this manuscript was arranged by Department Editor R. Sabherwal. This work was supported in part by the Department of Computer Science, Aalborg University, Aalborg, Denmark, and in part by the Case Company.

H. D. Frederiksen is with the Department of Computer Science, Aalborg University, Aalborg, Denmark (e-mail: hdf@kmd.dk).

L. Mathiassen is with the Center for Process Innovation, J. Mack Robinson College of Business, Georgia State University, Atlanta, GA 30302-5029 USA (e-mail: lars.mathiassen@eci.gsu.edu).

Digital Object Identifier 10.1109/TEM.2005.850737

We first present the theoretical background for our research in Section II, followed by the research design in Section III, which describes the research approach and Software Inc.'s metrics program. As we describe in the assessment approach in Section IV, our information-centric approach consists of five activities: appreciate current situation, create information-centric viewpoints, compare situation with viewpoints, interpret findings, and identify improvements. In the assessment results in Section V, we present the results of applying the approach at Software Inc., followed by an overview of our experiences in the assessment lessons in Section VI. Finally, Section VII reviews our research contribution and related approaches to software metrics.

II. THEORETICAL BACKGROUND

Most software metrics research focuses on defining metrics, but there is also a fair amount of research into implementing metrics programs [15], [16]. It is nonetheless difficult to successfully design and implement a metrics program: by one estimate, up to 78% of metrics programs fail [13]. Recent research has, therefore, focused on critical success factors [5], [13], [20], [21], [23], [24], [27], [28], [30], [45], [48]. Among the issues explored are lack of alignment with business goals, lack of management commitment, and insufficient resources [20], [21]. With this background, we focus on how to assess and improve managerial usage of metrics programs. We first present related work on assessment of software metrics programs (Section II-A). We then present the theoretical foundation for our approach (Section II-B).

A. Assessment of Metrics Programs

The experienced difficulties in developing successful software metrics programs have led to an interest in assessing practices in software organizations that already collect data about their software operation [38]. Mendonça and Basili have developed an approach for improving data collection mechanisms and data usage within a software organization. They combine a top-down approach using goal-question-metric (GQM) and a bottom-up approach based on data mining techniques. The top-down perspective creates a structured understanding of the existing set of measurements, and the bottom-up perspective helps reveal new and relevant information that can be generated from data already collected.

Kitchenham et al. [32] argue that measures and indicators should be representative of the attributes they are supposed to reflect. Validation is therefore critical to a metrics program's success. They suggest a framework that can help validate measures and assess their appropriateness in a given situation.

Berry and Jeffery identify variables that can lead to a metrics program’s success or failure [5]. The purpose is to evaluate and predict the success of new programs on the basis of experiences with previous programs. They use a structured set of questions to collect data from people implementing and managing metrics programs. Most questions represent advice from experienced practitioners; some are based on theory. The instrument includes questions on the program’s status, context, inputs, processes, and products.

Berry and Vandenbroek have developed a complementary approach to help individual software organizations improve metrics programs [6]. They offer a meta-framework to design and deploy assessment instruments that target specific software processes, e.g., project tracking and oversight or configuration management. This approach's critical elements are the ability to build a performance model tailored to a software organization's particular needs, and the explicit inclusion of experiences and attitudes of practitioners who are involved in metrics and software practices. The model also includes social factors—such as management leadership, fear of measurement, and ethical use of measures—that software engineers are usually poorly equipped to deal with [6]. The authors suggest that an organization can use the method periodically, with or without outside assistance. The approach is complex and the results are comprehensive.

Finally, ISO/IEC 15939 defines a measurement process applicable to software-related engineering and management disciplines. The standard defines the measurement process (e.g., activities for specifying information needs) and activities for determining validity of analysis results. Practical Software Measurement (PSM) [37] serves as the basis for the ISO standard and provides details on the standard's activities. ISO/IEC 15939 offers a normative basis against which existing metrics programs can be assessed. The standard was used as a basis for creating the CMMI's measurement and analysis process area [1].

While these approaches provide valuable support for assessing and improving software metrics programs, none focus on how metrics programs are used to support software management. Several studies suggest, however, that a metrics program's success depends intrinsically on regular use of indicators and measures to inform managerial decision-making and intervention [20], [21], [27], [28], [42]. Our research is, therefore, directed toward filling this gap.

Metrics programs measure attributes of software processes and products and make data available on different levels of aggregation to stakeholders within a software organization. Given this, a metrics program's primary purpose is to support and strengthen management practices on all organizational levels. Organizations use measures and indicators to estimate projects, analyze defects, identify improvement initiatives, pinpoint best practices, and so on. Grady [22] suggests that software metrics can be used for tactical and strategic purposes. Project managers represent the tactical use in project planning and estimation, progress monitoring, and product evaluation. People involved in software process improvement represent the strategic use, identifying and prioritizing improvement initiatives, engaging in defect analysis, and validating best practices.

B. Information-Centric Assessment

We propose an information-centric approach that combines two streams of theory and applies them to assess software metrics practices. First, it views software metrics programs as media for creating and sharing information about software practices [2], [51]. This perspective focuses on the relations between the measured software processes and products, the program designer's intended meaning for measures and indicators, and the program users' interpreted meaning based on data from the metrics program [43]. Second, it engages key stakeholders in debating metrics practices in the target software organization based on soft systems methodology (SSM) [8], [10], a general approach to organizational problem-solving.

The first theoretical foundation for our approach is to view metrics programs as information media. In order to measure software processes and products, you must observe them; for the resulting data to be useful, you must interpret and act upon them. Weinberg [51] represents these fundamental activities in a four-element model: intake, meaning, significance, and response. In the intake process, people access information about the world. In the meaning process, they assign meaning to their observations. In the significance process, they give priority to these meanings. Finally, in the response process, people act by translating their observations into actions.

Software metrics programs must therefore support observation of and store data about relevant software processes and products. In addition, such programs must support data interpretation and information communication between different actors; cf. "publish objectives and data widely" and "facilitate debate" [27]. Finally, to create value for the software organization, such programs must lead to managerial responses; cf. "use the data" [27]. Fig. 1 illustrates this view of software metrics programs as information media. Three types of activities are involved: measure software practices to generate data, analyze data to create useful and relevant information, and intervene into software practices based on information from the program. Metrics programs mediate the interaction between many stakeholders, including data suppliers, software engineers, software managers, improvement agents, and metrics staff.

Fig. 1. Software metrics program as a medium.
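Read as a feedback loop, this media view can be sketched in a few lines of code. The sketch below is a toy illustration under our own assumptions; the function names, data fields, and threshold are hypothetical and are not taken from the paper or from Software Inc.'s program.

# Toy sketch of the information-medium view in Fig. 1: measure -> analyze -> intervene.
# All names, fields, and thresholds are hypothetical illustrations.
def measure(software_practice: dict) -> dict:
    """Data suppliers observe software practices and produce raw measures."""
    return {"person_hours": software_practice["hours"],
            "defects": software_practice["defects"]}

def analyze(measures: dict) -> dict:
    """Metrics staff turn measures into information: an indicator plus an interpretation."""
    rate = measures["defects"] / max(measures["person_hours"], 1)
    return {"defects_per_hour": rate,
            "interpretation": "quality risk" if rate > 0.05 else "on track"}

def intervene(information: dict) -> str:
    """Managers respond to the information with a decision or action."""
    return ("schedule a defect review" if information["interpretation"] == "quality risk"
            else "no action needed")

print(intervene(analyze(measure({"hours": 400, "defects": 30}))))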

While information is fundamental to computer and information science, there is, unfortunately, little agreement about the concept [39]. We have adopted a definition that is in line with Weinberg's [51] focus on sense-making. It emphasizes the relationships between the observed object, the intended meaning, and the interpreted meaning as follows [43].

• Information is a symbolic expression of an object, i.e., a real-world entity or an abstract concept. Programs, people, and activities are real-world entities; program size and quality, person-hours, and productivity are examples of abstract concepts related to software practices.
• Program designers create information to communicate an intended meaning about specific objects, e.g., when they define a procedure for assessing a program's function points as a way to understand and measure program size.
• Program users interpret the information's symbolic representation in a given social context in order to interpret the related object's meaning. When managers read a productivity report based on function points, they might assume that all types of function points are equally difficult to implement, while the intended meaning was to differentiate between more or less complex function point types.

There are several reasons for viewing software metrics programs as information media. First, two potential risks of metrics programs are that they might fail to make managers use the data and fail to make intentions and interpretations meet [42]. Second, metrics programs are used to support communication and interaction between different actors [27]. Third, a media perspective applies to the complete cycle of storing, accessing, interpreting, and acting upon observations about software processes and products [51]. Finally, information systems have generally become media for communication and collaboration [2].

The second theoretical foundation for our approach is organizational problem-solving based on SSM [8], [10]. SSM provides a general approach for addressing problematic situations in organizational contexts and has been used extensively to address and study information systems issues (e.g., [3], [12], [34], [49]). It has also been sporadically adopted in relation to software metrics [4], [25]. SSM's generic activities involve appreciating the situation (i.e., existing metrics practices); developing idealized viewpoints on the situation; comparing viewpoints of the situation in a debate between relevant stakeholders; and identifying actions that can lead to an improved situation. SSM is a qualitative, interpretive approach to organizational problem-solving. It is based on the assumption that actors have different beliefs and viewpoints, and it engages the involved stakeholders in a debate to learn from their differences and experiences. To do this, and thus identify possible improvements, SSM uses soft systems, i.e., idealized viewpoints on a situation expressed as adaptive systems [8], [10]. Checkland [9] provides a survey of SSM concepts and practices.
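Returning to the information-medium view, Pedersen's distinction between the measured object, the designer's intended meaning, and the user's interpreted meaning can be illustrated with a small record. The structure and the example values below are our paraphrase of the function-point example given in the list above, not part of the original work.

# Hypothetical sketch of the object / intended meaning / interpreted meaning triple [43].
# The field names and example strings are our illustration, not the paper's definitions.
from dataclasses import dataclass

@dataclass
class InformationItem:
    object_measured: str       # the real-world entity or abstract concept
    symbolic_expression: str   # how the metrics program expresses it
    intended_meaning: str      # what the program designers meant
    interpreted_meaning: str   # what a user takes it to mean in context

productivity_report = InformationItem(
    object_measured="program size and effort",
    symbolic_expression="function points delivered per person-hour",
    intended_meaning="productivity differentiated by function point complexity",
    interpreted_meaning="all function points are equally difficult to implement",
)

# A gap between the last two fields signals the kind of misreading described above.
print(productivity_report.intended_meaning != productivity_report.interpreted_meaning)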

We base our assessment approach on SSM for many reasons. First, SSM is particularly well suited to addressing complex organizational practices involving many different stakeholders. Second, SSM has proven useful in addressing issues related to information systems in organizational contexts. Third, while the approach is qualitative and interpretive, it offers rigorous techniques to apply systems thinking and practices to organizational problem-solving. Finally, SSM is adaptable to specific organizational contexts [8], [10].

III. RESEARCH DESIGN

The research we present is part of a collaborative practice study [35] carried out between Software Inc. and Aalborg University, Denmark, from January 2000 to April 2004 [17]. Collaborative practice research targets specific professional practices with the double goal of improving these practices, while also contributing to research. The basic approach is action research combined with various forms of practice studies and experiments. This paper is based on activities that we carried out in the early phase of the collaborative project. Our goal was to identify key problems and possible improvements in software metrics practices at Software Inc. Here, we present the rationale for the information-centric assessment approach together with a case study [18], [53] of its use within Software Inc. The organization had extensive experience implementing and running a metrics program, and there was a growing concern that its benefits were not as great as expected. Furthermore, senior management was willing to fund and support an R&D initiative and there was a well-established collaboration between software metrics practitioners from Software Inc. and researchers from Aalborg University.

A. Case Study

The business of Software Inc. is to develop, maintain, and run administrative software for local authorities. With more than 2400 employees and almost 700 software developers, Software Inc. is one of Denmark's largest software organizations. Software Inc. is geographically distributed at four sites and organized in a conventional hierarchical divisional structure. The software operation is largely organized in projects and teams according to application type. A unit supporting the software organization with methods, tools, and techniques is responsible for the metrics program and software process improvement. Software Inc. has a long tradition of collecting simple measures, e.g., the number of faults in an application. In 1996, senior management decided to implement an elaborate metrics program—to increase productivity and quality by benchmarking against the industry [7]. An external contractor supplied the metrics program to make benchmarking possible.

The research collaboration was initiated to address the growing concern about unsatisfactory return on investment by enhancing managerial usage of the metrics program. When we began assessing metrics practices at Software Inc. in 2000, we found no approach in the literature to identify strengths, weaknesses, and opportunities in managerial usage of metrics programs. We, therefore, decided to develop our own approach. One of us executed the assessment in close collaboration with other Software Inc. stakeholders, while the other served as coach and critical outsider during the assessment. We kept a diary of plans, events, results, and experiences throughout the assessment to record the process [29], [40].

Fig. 2. Software metrics program at Software Inc.

Two limitations apply to our research design. First, as with any case study, one should be cautious about generalizing the findings [18], [53]. The advantage of a case study is that it provides in-depth insight into practices within one organization. To transfer the information-centric assessment approach to other software organizations, one must carefully consider the conditions under which it was applied at Software Inc. Second, the results we present here are from the initial assessment of a large improvement initiative. Continued efforts to improve Software Inc.'s metrics practices will provide additional experiences and valuable feedback on the approach [17].

B. Metrics Program

Fig. 2 illustrates Software Inc.'s metrics program. The program's purpose is to monitor and improve software processes and products. Project and application managers supply data (measures) to the metrics program every three months. The data includes time spent on various activities and data on the application errors reported and corrected in the measurement period. The data is supplemented with characteristics on projects or applications, e.g., business area, technology, and size measured in function points.
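As an illustration of the data just described, a quarterly submission from a project or application manager might look roughly as follows. This is a sketch with assumed field names and invented values; the paper does not specify Software Inc.'s actual data model.

# Hypothetical sketch of one quarterly data submission to the metrics program.
# Field names and values are assumptions based on the description in the text.
from dataclasses import dataclass, field

@dataclass
class QuarterlySubmission:
    unit_id: str                                     # project or application identifier
    period: str                                      # three-month measurement period
    time_spent: dict = field(default_factory=dict)   # hours per activity
    errors_reported: int = 0
    errors_corrected: int = 0
    business_area: str = ""                          # characteristics used for benchmarking
    technology: str = ""
    size_function_points: int = 0

submission = QuarterlySubmission(
    unit_id="application-07",
    period="2000-Q3",
    time_spent={"development": 950, "error correction": 120, "testing": 210},
    errors_reported=42,
    errors_corrected=35,
    business_area="payroll",
    technology="mainframe",
    size_function_points=310,
)
print(submission.unit_id, submission.size_function_points)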

The payroll and the human resource departments are some of the external units that supply data, e.g., on personnel and expenses on software and equipment. At each site, a controller supplies data on time spent on activities unrelated to projects and applications, e.g., management, administration, and education. The controller is responsible for getting the measures from managers on time and for helping managers submit measures to the program.

The metrics staff processes, validates, and packages data and sends them to the contractor. After the contractor processes the data, results are returned, including aggregate indicators on a general level and on a project and application level. The metrics staff interprets and disseminates some results to data suppliers and senior managers. They primarily disseminate results by e-mail, but also use the company's intranet. Although the data itself is different, the presentation is the same for all users.

TABLE I. ASSESSMENT APPROACH

Each year, the contractor delivers a written report. The report includes an analysis of Software Inc.'s software operation on different levels of aggregation, along with a set of recommendations for further actions, e.g., software process improvements. The contractor and Software Inc.'s software process improvement staff present the report to senior management and facilitate a structured debate. Subsequently, results are made available to the next level of managers. Occasionally, these managers ask the metrics staff to facilitate a debate on a more detailed level. The amount of managerial response varies, however, across the organization. The project and application managers have encouraged a system in which results are traceable to projects and applications, but not to employees.

A specialist in the supporting unit is responsible for setting the measurement strategy and mapping the contractor's model to Software Inc.'s practices. Controllers are responsible for controlling data collection in each unit, supporting project and application managers in supplying data, and presenting results back to the unit. The specialist and controllers constitute the metrics staff. They meet on a regular basis to discuss and fine-tune the program.

IV. ASSESSMENT APPROACH

The proposed information-centric assessment approach consists of five activities: appreciate current situation, create information-centric viewpoints, compare situation with viewpoints, interpret findings, and identify improvements. We now describe each activity along with details about how we executed them at Software Inc. Table I summarizes the approach. Further guidelines based on our experiences are presented in the Assessment Lessons (Section VI) and in Table V.

Appreciate current situation: This activity is aimed at getting an overview and appreciation of the metrics program and its context. We used rich pictures (e.g., Fig. 2) to represent different views on the situation and to identify problems, problem owners, and possible idealized viewpoints (soft systems) for detailed analysis [8], [10]. We conducted seven semi-structured interviews with project managers, senior managers, metrics staff, and quality assurance staff. Each interview lasted for three hours. We documented the key points and gave these to the interviewee to correct and approve. We also collected plans, decision reports, and e-mail correspondence related to metrics, minutes of meetings from the metrics staff, and documents from the organization's process improvement initiative. Finally, at the time of our study, one of the authors had worked as a metrics specialist at Software Inc. for five years and had kept a personal log of experiences with using the metrics program. We used these data sources to learn about current metrics practices. We drew half-a-dozen rich pictures to express different views of the current situation and to get an initial appreciation of problematic issues. This led us to identify nine information-centric viewpoints, each of which provided a possible starting point for soft systems modeling and debate.

• A system for helping data suppliers ensure data quality.
• A system for satisfying the information needs of users.
• A system for reporting essential problems in the software process.
• A system for demonstrating to data suppliers that the data is actually used, thus increasing their motivation.
• A system for discussing results and assumptions to facilitate open debate.
• A system for acting upon negative trends in indicators.
• A system for interpreting and disseminating results.
• A system for improvements based on the results.
• A system for producing measurements from the ERP system at Software Inc.

Create information-centric viewpoints: This activity's purpose is to define and model selected information-centric viewpoints based on soft systems thinking [8], [10]. Each viewpoint is defined textually as a root definition of the involved Customers, Actors, Transformation, Weltanschauung, Owner, and Environment (the CATWOE of SSM). A conceptual model of the necessary activities is then developed, and a simplified rich picture is drawn as illustration. We selected viewpoints from the list above based on three criteria: they should address key problems at Software Inc., cover the entire lifecycle from data collection to use, and express key features of good information system design. Three viewpoints proved particularly useful: a system for measuring software practices, a system for disseminating indicators and interpretations, and a system for using metrics to generate managerial response.
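A CATWOE root definition can be thought of as a simple structured record. The sketch below illustrates the idea for the "measuring software practices" viewpoint; the field values are our paraphrase of the surrounding text, not the contents of the paper's Table II.

# Illustrative sketch of a CATWOE root definition captured as a data record.
# The example values are our paraphrase of the surrounding text, not the paper's Table II.
from dataclasses import dataclass

@dataclass
class Catwoe:
    customers: str       # who benefits from (or is affected by) the system
    actors: str          # who carries out the transformation
    transformation: str  # the input-to-output change the system performs
    weltanschauung: str  # the worldview that makes the system meaningful
    owner: str           # who could stop or change the system
    environment: str     # constraints the system must accept

measuring_practices = Catwoe(
    customers="project and application managers, improvement staff",
    actors="data suppliers and metrics staff",
    transformation="software practices observed -> quality measures supplied",
    weltanschauung="measurement supports management and improvement",
    owner="senior management",
    environment="deadline pressure, contractor-defined metrics model",
)
print(measuring_practices.transformation)

Keeping each element explicit in this way makes it easier to check a textual root definition and its rich picture against one another during iteration.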

Compare situation with viewpoints: This activity's purpose is to assess current metrics practices by systematically comparing viewpoints to current practices. The activity is carried out with relevant stakeholders [8], [10]. To begin, we systematically compared data from the "appreciate current situation" activity with each viewpoint's individual elements (see Figs. 3–5). This produced several findings, which we organized into a coherent assessment based on each particular viewpoint. We validated the assessment by discussing it with the interviewees and with other stakeholders. In general, stakeholders were enthusiastic. They felt that their experiences with and concerns about the metrics program in Software Inc. were well covered. The discussions demonstrated that the three viewpoints helped address key problems and challenges at Software Inc., and they led to a revised set of findings.

Fig. 3. Idealized view of "measuring software practices."

Fig. 4. Idealized view of "disseminating indicators and interpretations."

Fig. 5. Idealized view of "using metrics to generate response."

Interpret findings: This activity's purpose is to interpret the findings across individual viewpoints by looking at the metrics program as an information medium. This lens helps clarify the relation between the measured objects, the intended meaning of measures and indicators, and different users' interpreted meaning based on data from the metrics program. The author who led the Software Inc. assessment carried out this activity, assisted by the other author. We used the model (Fig. 1) to systematically interpret the findings, analyze each element (measure, analyze, intervene, and medium) across the findings, and discuss them in the light of Pedersen's notion of information. Key stakeholders in the organization validated the interpretation. We also presented the results to "outsiders" to test their relevance. We then presented the results to managers at Software Inc.

TABLE II. CATWOE OF "MEASURING SOFTWARE PRACTICES"

TABLE III. CATWOE OF "DISSEMINATING INDICATORS AND INTERPRETATIONS"

Identify improvements: This activity's purpose is to identify possible improvements based on the analysis. The author who led the assessment distilled the initial set of improvements that had emerged through the debates of the information-centric views and the subsequent interpretation of the findings. These were carefully checked against current practices to ensure that they would address key issues in a feasible way. We then documented the analysis in a report to a group of senior and middle managers at Software Inc. This report was presented at a meeting, which led to further elaboration of the proposed improvements.

V. ASSESSMENT RESULTS

In the following, we present the results of the Software Inc. assessment. The three viewpoints that emerged as particularly helpful are summarized as root definitions in Tables II–IV. We present each viewpoint as a simplified text and rich picture, followed by the findings that resulted from comparing the viewpoint to Software Inc. practices. Finally, we present the results of the interpretation of the findings together with possible improvements.

A. Measure Software Practices

Ideally, as illustrated in Fig. 3 and summarized in Table II, project and application managers supply measures, such as function points, directly to the metrics program. Relevant indicators, such as project productivity, are then fed back to data suppliers so they can use their own data for project management. Metrics program staff also informs data suppliers about the need for measurements to support management and improvement, such as to identify important improvement areas. Data suppliers' use of indicators and their appreciation of the need for them typically motivate them to supply quality data, which in turn reflects the progress and general perception of the project. Comparing this first, information-centric view of how measurement activities are ideally performed with metrics practices at Software Inc. led to a number of observations.

TABLE IV. CATWOE OF "USING METRICS TO GENERATE RESPONSE"

The company organizes software practice as projects and tasks for developing and maintaining applications. Projects and tasks are well defined and have an appointed manager. Generally, routines for collecting measures are well established and managers submit measures to the metrics program as required. However, managers often do not understand the precise definition of metrics and measures. In particular, it is difficult for them to understand and accept function point measures. As a result, managers often submit measures without really understanding the data or implications for managerial decision-making and intervention.

Still, the metrics program contains a lot of data that are potentially useful for several purposes, e.g., tracking the resources used to correct errors, assessing a project's productivity, or tracking application quality. The indicators are not typically used to support management of the software operation; they are primarily used for high-level software process improvement, and for feeding data into a model for estimating development projects. Project managers, however, seldom use this estimation model.

Relevant indicators are rarely presented to data suppliers, and the metrics staff has not identified when and how data suppliers could make use of indicators. The metrics program does not supply relevant indicators to project managers during the project life cycle, i.e., the indicators are only available after the project has ended. Hence, project managers cannot use indicators for controlling their projects. They can only use indicators for post-mortem evaluation. In some cases, indicators are not available for two to five months after a project has ended.

Although data suppliers do not themselves use indicators, the experienced need of other colleagues could be communicated to them, e.g., the use of data for high-level software process improvement. This is, however, not happening. The need for measures and indicators is poorly communicated to data suppliers, and most software engineers and managers do not understand the metrics program's purpose.

There is no explicit standard for data quality, but the metrics staff does perform a comprehensive validation of data in some units. Although it is difficult for data suppliers to participate in this validation, the metrics staff tries hard to define measures and support data suppliers. The metrics program's adjustment in response to data quality problems is nonetheless quite unsystematic.

B. Disseminate Indicators and Interpretations

Ideally, as illustrated in Fig. 4 and summarized in Table III, targeted information is disseminated effectively, taking into account available measures and indicators, user needs, and available media. Hence, the metrics staff interprets measures and indicators and chooses an appropriate medium to communicate targeted information to program users. To generate relevant information, the metrics staff must appreciate the users' need for information. When comparing this second, information-centric view of ideal dissemination practices with current metrics practices at Software Inc., we made several additional observations.

The metrics program has more than 100 measures and indicators. As mentioned earlier, they are not well understood by the users. Hence, the metrics staff makes targeted information available to relevant groups. A proper interpretation requires, however, insight into the metrics program, submitted measures, and actual software practices. This means that results are often presented to users in quite general terms that require further interpretation.

A few routines for disseminating information are rather well established, e.g., the presentation of indicators for senior management. In most cases, low-level managers simply collect required measures and ignore indicators. They only engage in interpretations when senior management requires that they do so.

The metrics staff does not systematically analyze target groups and information needs. This means that they cannot tailor the presentation of information to specific recipients. Moreover, because most managers do not see any need for the metrics program, it is difficult to get them actively involved in identifying information needs. Most managers believe they can do their job without the metrics program.

State-of-the-art media are available for presenting information in a variety of ways. However, choice of media is not considered systematically. Nearly all interpretations are presented in written, paper-based reports. Only a few indicators are presented on the company intranet.

Software Inc. has decided on a strategy for disseminating information, but the strategy is not applied to the metrics program. Information is supposed to be communicated from the metrics staff to senior management and from senior management to lower level managers. Project and application managers are supposed to receive indicators on their own software practices from the metrics staff. This happens only in a few cases. There are no explicit criteria for the success of information dissemination. Moreover, there is no systematic evaluation and adjustment of dissemination practices.

C. Use Metrics to Generate Response

Ideally, as illustrated in Fig. 5 and summarized in Table IV, the metrics staff facilitates managerial response and organizes structured debates among managers and engineers on the basis of measures and indicators. The program is subsequently tuned and gradually developed based on the metrics staff's appreciation of information needs and user experiences. When comparing this view of how metrics programs are used to generate managerial response with metrics practices at Software Inc., we arrived at additional observations.

The set of measures and indicators is very complex. To make the information accessible to users, the metrics staff can facilitate structured debates with users. This requires, however, substantial knowledge of the indicators and their interpretation, plus skills to organize structured debates. There are few people at Software Inc. with those skills.

The metrics staff does not consider relevant questions about organizing debates: Why should they be organized? Who should participate? When and where should they take place? Few debates to assign meaning to measures and indicators actually take place, and there are no explicit criteria for evaluating them when they do occur. Furthermore, there is no systematic evaluation and adjustment of how and when debates are organized. This is true despite the fact that senior management holds structured debates each year, and there is a strong tradition of structured debates within the software process improvement organization.

There are few managerial responses based on information from the metrics program. Senior management tries to understand and explain the indicators, but lower level managers usually ignore indicators. However, many software process improvement initiatives are based on the metrics program, e.g., those related to estimation, software configuration management, and requirements management.

Structured debates could facilitate a systematic appreciation of information needs. However, the metrics staff rarely uses such opportunities to reflect on the use and possible improvement of the metrics program. Few systematic efforts are aimed at tuning the metrics program. The metrics staff decides on minor improvements without any real commitment from management.

In summary, this assessment reveals a gap between the three information-centric views of metrics programs and the current practices at Software Inc. The organization has a high commitment to use software metrics to manage and improve the software operation and puts considerable effort into data collection. But when it comes to ensuring data quality and creating information to manage or improve software practices, our analysis reveals a number of weaknesses and dysfunctional practices.

D. Interpret Findings

To get a deeper understanding of why the software metrics program at Software Inc. fails in certain ways, we subsequently viewed the program as an information medium that shapes communication and social interaction between data suppliers, software engineers, software managers, improvement agents, and metrics staff.

The success of any metrics program is highly dependent on data quality, cf. "measure" in Fig. 1. Our analysis shows that data suppliers at Software Inc. do not appreciate the need for collecting data, nor do they use measures or indicators themselves. The data suppliers' task is to measure certain objects related to their practices. Using Pedersen's notion of information [43], data providers appear to have little insight into the intended meaning. Their actions are purely symbolic, focusing on objects and producing data about them. Put differently, measuring is detached from their practices and they have little motivation or knowledge to facilitate high data quality.

Only limited analyses are made of data in the metrics program, cf. "analyze" in Fig. 1. The analyses that are made are either carried out by the contractor or by the metrics staff. The contractor has little or no insight into the software operation at Software Inc., and its interpretations tend to be straightforward and based on general trends. Because the metrics staff and the contractor share an understanding of the program's intended underlying meaning, the metrics staff can make sense of, modify, and refine the contractor's analyses to make them more useful within Software Inc. The metrics staff also performs certain analyses on their own, but these are on an aggregate level and target senior management or the organization as a whole. No specialized interpretations are provided relating data in the metrics program to particular contexts and needs within the organization. Moreover, there is little explicit knowledge about particular needs for information about the software operation. For these reasons, little is done to convey the fact that the program's intended purpose is related to the information needs of software managers at Software Inc.

Software Inc.’s metrics program is used for some purposes, but not for others, cf. “intervene” in Fig. 1. Those involved in software process improvement use the program strategically to identify and support new improvement initiatives. In contrast, the metrics program is used very little, if at all, on a tactical level to support software management. Again, we can use Ped-ersen’s notion of information [43] to explain this variation. The improvement people have participated actively in designing and implementing the metrics program. They understand its under-lying intention and it is easy for them to interpret meanings that are relevant and useful in their context. In contrast, other man-agers have not participated in the program’s design and have little knowledge of the metrics program’s intended meaning. It is therefore more demanding, and perhaps even impossible, for them to make sense of data in their contexts.

So far, measures, indicators, and interpretations from Software Inc.'s metrics program have not been successfully disseminated. This is surprising in light of two factors. First, Software Inc.'s culture is open, i.e., people have explicitly requested that metrics information be publicly available. Second, Software Inc. is using contemporary media, e.g., an intranet, to support communication within the organization. Despite these enabling factors, dissemination from the metrics program is still quite limited. The metrics staff provides overall information to top management on a quarterly basis. Once that report is approved, lower levels in the organization can produce more focused reports relevant to their particular situations. This rarely happens. One possible explanation is that the managerial hierarchy severely limits open forms of dissemination, such as posting information on the intranet.

E. Identify Improvements

Our assessment and subsequent interpretation of the findings indicate areas in which Software Inc. can potentially improve practices. We summarized these as follows.

• Data suppliers’ needs should be given more consideration in order to increase the quality of data.

• Targeted interpretations should be made available to support increased usage.

• The intended meaning of measures and indicators should be shared across groups.

• The managerial hierarchy should not restrict information dissemination.

• Contemporary technologies should be used to facilitate dissemination.

• The metrics staff should change its role from that of data supplier to that of information provider.

The three information-centric views of software metrics programs and the insights into current practices gave us more detailed clues as to how the company can establish specific initiatives within these improvement areas. After considering other improvement activities, trends, and Software Inc.'s climate, we formulated four specific initiatives. Initially, senior management decided to launch the following two.

• Data collection should be optimized to motivate data suppliers and save time. Furthermore, results and their use should be visible to data suppliers in order to increase data quality. This initiative should increase data suppliers' appreciation of the intended meaning of measures and indicators.

• Measures and indicators should be made available for project management. This initiative's focus was to provide software quality measures of the number of bugs per function point at different stages of a project.
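The quality indicator behind this second initiative can be illustrated with a small calculation; the stage names and numbers below are invented for illustration and are not reported in the paper.

# Hypothetical illustration of the quality indicator from the second initiative:
# bugs per function point at different project stages. Stage names and numbers are invented.
def bugs_per_function_point(bugs_by_stage: dict, function_points: int) -> dict:
    """Return the defect-density indicator for each project stage."""
    return {stage: bugs / function_points for stage, bugs in bugs_by_stage.items()}

stages = {"design review": 12, "system test": 28, "first month in production": 9}
for stage, density in bugs_per_function_point(stages, function_points=250).items():
    print(f"{stage}: {density:.3f} bugs per function point")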

As these rather limited improvements were being implemented, there was a change of senior management. The new management found the assessment of the existing metrics program interesting and decided on a radical improvement strategy in which a new firm-specific metrics program was designed and the contract with the external supplier was cancelled. The new metrics program and practices were based on our assessment's recommendations and findings. Also, a process of continuous improvement was established to further develop and sustain the metrics program as a useful and integral part of Software Inc.'s management practices [17].

VI. ASSESSMENT LESSONS

Table V summarizes guidelines for the assessment approach. The guidelines build on our experiences and on the background literature [8], [10], [43], [51]. They provide advice on how to use the information-centric assessment approach for improvement purposes.

TABLE V. ASSESSMENT LESSONS

Appreciate current situation: Look at processes, structures, climate, and power [8], [10]. Each of these perspectives provides different insights into the current situation. By looking at processes, you can identify inappropriate or ineffective practices. Structures help you identify the conditions under which the metrics program operates. By looking at climate, you can identify explanations for current weaknesses. If, for example, metrics program data has been used to publicly blame project managers, they will tend to submit data that helps them avoid such blame, rather than data that reflects their projects. It is useful but often difficult to make people talk about the power structure in their organization. Personal observations are, therefore, necessary to qualify interviews on this point. Furthermore, a lot of people are not very conscious about power issues; they concentrate on doing their jobs.

Be creative and open-minded—you can always dispose of viewpoints later [8], [10]. In our case, we drew several rich pictures. From these, we listed nine possible viewpoints, of which three turned out to be particularly valuable. It is important that you do not limit yourself at this stage, as you could miss out on important viewpoints.

Be careful when you identify actors. Some actors are part of the problematic situation or have something at stake [25]. Middle managers might not support the use of data about their applications or projects. Is this because they do not believe in the idea, or because they want to prevent senior managers and others from gaining insight into their areas? In some cases, you have to decide beforehand how radical your assessment should be. Likewise, key actors will sometimes use the assessment as an opportunity to blame other actors.

Consider confidentiality in interviews and published results, at least in the early stages. Metrics programs are monitoring and control mechanisms [31] and involve contradictory interests [25]. If you can guarantee confidentiality in interviews, people will be more open. It is important to illustrate how interviews will be used, and how results will be published.

Create information-centric viewpoints: Describe concrete ideas, thoughts, and points of view, and not the situation “as-is” [8], [10]. Use viewpoints to learn about the situation, and create concrete viewpoints so that they are useful in debates. Your chance of getting relevant feedback will decrease if the viewpoints are academic and detached from the organization’s reality.

Describe a viewpoint's essence, not the detailed steps or activities [8], [10]. The simplified rich picture of a viewpoint is not supposed to be a data flow diagram. Looking at processes and structures, it is tempting to draw very detailed diagrams, as is often customary in requirements analysis. Instead, draw a simple, overall picture and elaborate from there. Even though the viewpoint is very complex and consists of concurrent elements, consider it to be linear or break it into several different viewpoints in order to simplify. Trying to cope with all issues at one time will take you nowhere.

Remember: viewpoints are vehicles for learning [8], [10], [33]. The point is to create a foundation for assessing the current situation, not to make a model that will solve current problems. Think of viewpoints as tools for learning how to improve, rather than solutions to the organization’s problems.

Iterate several times: You should do several iterations over root definitions (CATWOE), texts, and simplified rich pictures. Do a thorough check of texts and pictures against root definitions. Likewise, you should iterate and reiterate when you draw rich pictures. Explain the pictures to an outsider to see if they make sense. The pictures play a vital role in the information-centric approach, and it is important that they are relevant and make sense in your particular context. Furthermore, you will be explaining the pictures to key actors, and you should know how to present them in a straightforward way.

Compare situation with viewpoints: Aim for a structured debate [8], [10]. Work your way systematically through the debate to ensure that you cover all aspects of each viewpoint. Keeping this in mind, you will see that the information-centric viewpoints are well suited for structuring a debate with many different audiences. We have successfully performed debates with interviewees, other colleagues at the company, the metrics staff from other companies, and conference participants.

Look for differences between viewpoints and practices and identify problems and opportunities related to the metrics program. Remember that at least one actor should promote a problem to ensure you do not end up seeing everything as problematic. Also, try to identify the impact of problems to help prioritize your effort [33]. You will not be able to resolve every issue.

To begin the comparison with key stakeholders, gather material from the "appreciate current situation" activity (see Table I). Your initial findings will provide guidance for what you are looking for. Although you should debate with other stakeholders to test for relevance and usefulness, start with interviewees. They know the setup, which gives you a chance to rehearse the process. Guide participants through the viewpoints and initial findings. The viewpoints are rich in information, and the actors will not be able to read them on their own. A fruitful approach is to talk the actors through the picture as planned in the previous step. Be aware that you should adjust your presentation order in accordance with the participants' questions and responses. Start with elements that are most familiar to the participants.

Interpret findings: Look across viewpoints and findings. Your focus is now shifting from details within viewpoints to viewing the metrics program as an information medium [43], [51]. Use the theoretical framework systematically and carefully cover all elements—measure, analyze, intervene, and medium (see Fig. 1).

Let the framework challenge your findings and help identify explanations and root causes. Furthermore, try to move from isolated observations to systemic problems. For example, the decision to share or not share the information's intended meaning explained why data from Software Inc.'s metrics program were used for software process improvement and not management support. In our case, the theoretical lens challenged the analysis and helped us look beyond conventional beliefs within the organization. Also, it was helpful to have both an outsider and an insider conduct the interpretation and identify and challenge findings.

To validate your interpretations, you should present them to interviewees and other stakeholders in a structured debate. Many aspects of most situations are interrelated, and it is almost impossible to provide feedback without a structured approach to discussion. We successfully performed debates on interpretations with interviewees, other colleagues, and people from other companies.

Identify improvements: Identify improvement areas from viewpoints and findings [8], [10]. The information-centric viewpoints suggest how you can execute metrics practices, while the findings suggest where the problems and opportunities reside.

Identify and debate improvement areas with key stakeholders. The documentation from the earlier steps is well suited for this dialogue. To get the process going, you can present a set of preliminary improvements. These should serve as inspiration, and you must be open to debating them. Again, try to structure the debate with a focus on one or a few improvement areas at a time. Hopefully, stakeholders' involvement will commit them to the improvements [22], [26], [33].

Make sure to address identified problems and to compare improvements to current practices. As you work your way through the assessment approach’s steps, you can get carried away. Given this, it is important that you pause and check your improvement suggestions against the current situation. Will the improvements actually resolve the identified problems?

Test the feasibility of the improvements. Even though improvements are technically possible, you must ensure that you have the right kind of commitment from sponsors and the people who will be affected [22], [26]. Furthermore, you should be aware of general trends and the climate in the organization. Some improvements will be more uphill than others. It might be better to focus on two minor improvement areas that are likely to succeed than one major improvement that is likely to fail. Likewise, it is important to align with other efforts in the organization in order to be a credible partner. Always remember that a desirable improvement is not necessarily a feasible one [8], [10].

Communicate improvements in specific terms that make sense within the organization. Having worked your way around the rich pictures, the information-centric viewpoints, and the interpretations, you must get back to the real world. Your suggestions for improvements should be presented to relevant stakeholders in language that communicates directly to them.

We have presented the information-centric assessment approach at conferences and to colleagues, and it has been easily understood. However, actually adopting the approach requires that you appreciate the ideas of SSM and have insights into the literature on software metrics. It is important that you use the approach systematically.

VII. DISCUSSION

Our research confirms that it requires a dedicated effort to integrate metrics programs into software practices [5], [13], [19]–[21], [27], [28], [38], [42], [44], [47]. Implementing software metrics programs requires change—or, as Weinberg puts it, successfully transforming the "old status quo" to a "new status quo" [52]. When software metrics programs are introduced, they are, however, often rejected or integrated into the old status quo without achieving a new status quo. The limited success of the metrics program at Software Inc. illustrates this.

The overall purpose of our research was to enhance managerial usage of metrics programs within software organizations. The key contribution is the information-centric approach to metrics assessment, combining an information medium perspective [43], [51] with an organizational problem-solving process based on SSM [8], [10]. In the following, we explicate the contribution by comparing the proposed approach to related approaches to software metrics. First, we compare with other approaches that use soft systems thinking in relation to software metrics. Second, we compare with other approaches to assessment of metrics programs. The key features of the related approaches are summarized in Table VI.

TABLE VI. RELATED APPROACHES TO SOFTWARE METRICS

Similar to our approach, Hughes [25] and Bell et al. [4] have applied soft systems thinking to software metrics. Hughes focuses on the design of metrics programs. For that purpose, he has embedded software measurements into SSM to model software development. Hughes argues that GQM is reputed to be the primary method for defining relevant metrics, but identification of common, acceptable goals can be quite difficult. There might, for instance, be a conflict between software developers who prefer generous effort estimates to relieve pressure on their projects and software managers who prefer restricted estimates to set high productivity rates. Hughes suggests that SSM addresses these issues effectively and that SSM-like approaches should precede or substitute for GQM. Likewise, Bell et al. [4] have introduced and tested an approach to software process improvement based on SSM. Their methodology combines soft systems thinking and GQM. They argue that GQM does not provide any guidelines on how to identify relevant problems and goals. Their methodology consists of four stages: 1) framing to prepare for enquiry; 2) enquiry to identify the problems as perceived by the involved stakeholders; 3) metrication to resolve the identified problems by enabling metrics to be collected based on GQM; and 4) action to collect and store data in an accessible and transparent manner.

Our motivation to adopt SSM in relation to software metrics resonates with those of Hughes [25] and Bell et al. [4]. In their approaches, however, SSM is used as an alternative or supplement to GQM, and soft systems are used to model and debate software development practices. Our focus is on assessment of software metrics practices. We apply soft systems thinking to develop information-centric views of metrics practices, and we use SSM's learning cycle to guide metrics assessment and involve key stakeholders in the process.

Our approach complements existing approaches to assess software metrics programs, cf. Table VI and Section II. Mendonça and Basili [38] focus on collection and use of data in a software metrics program; their approach combines GQM with data-mining techniques to more effectively utilize available measures and indicators already implemented in the program. Kitchenham et al. [32] are interested in validation of metrics programs; based on measurement theory, they offer a conceptual framework for assessing how well attributes, measures, and indicators represent the real world. Berry and Jeffery [5] are concerned with predicting metrics program success; they have collected data from several programs and offer a framework for assessing a program's success through structured questions about its status, context, inputs, processes, and products. Berry and Vandenbroek [6] emphasize metrics related to specific practices; they provide a meta-framework to design and deploy assessment instruments of metrics targeting specific software processes. Finally, ISO/IEC 15939 focuses on benchmarking of metrics programs; this approach offers a standard set of measurement activities that can be used as a normative basis for assessment.

The information-centric approach adds to this line of research by focusing on how information from metrics programs can enhance managerial decision-making and intervention. The approach combines a perspective on software metrics programs as media [2], [43], [51] with an organizational problem-solving approach that engages key stakeholders in debating metrics practices [8], [10]. The approach seeks in this way to enhance managerial usage of software metrics programs by: 1) viewing software metrics programs as media for collecting, analyzing, and communicating information about the software operation; 2) applying different information-centric views to assess metrics practices; and 3) involving relevant stakeholders in debating as-is and could-be metrics practices.
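
To indicate what the media perspective implies in more concrete terms, the following sketch separates a raw measure, the indicator a program designer derives from it together with its intended meaning, and the interpretations different stakeholders attach to it. The sketch is our own simplified illustration; the indicator, threshold, and stakeholder reactions are hypothetical.

    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class Indicator:
        name: str
        value: float
        intended_meaning: str  # what the program designer meant to convey

    # Viewed as a medium, the metrics program turns raw measures into
    # indicators and delivers them to stakeholders, who interpret them.
    def defect_density(defects: int, ksloc: float) -> Indicator:
        return Indicator(
            name="defect density",
            value=defects / ksloc,
            intended_meaning="post-release quality of the delivered product",
        )

    # Hypothetical interpretations; in the assessment itself these are
    # elicited from stakeholders in workshops, not encoded in software.
    interpretations: Dict[str, Callable[[Indicator], str]] = {
        "project manager": lambda i: (
            "follow up with the test team" if i.value > 1.0 else "quality is acceptable"
        ),
        "developer": lambda i: "the figure mostly reflects legacy code, not recent work",
    }

    indicator = defect_density(defects=42, ksloc=30.0)
    for stakeholder, interpret in interpretations.items():
        print(f"{stakeholder}: {interpret(indicator)}")

The point of the sketch is not the computation but the separation of concerns: the same indicator carries one intended meaning and several interpreted meanings, and the assessment focuses on the gaps between them.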

We found in the study that the information-centric perspective in many ways contrasted with traditional perceptions of software metrics at Software Inc. The original metrics program focused on data and technology rather than on information and technology usage; management was formally seen as a hierarchy rather than as a network of communicating individuals; and computers were viewed as devices for storing and processing data rather than as media for creating and sharing information. Software Inc.'s initial metrics program was in this way focused on data, programs, computers, and hierarchy. The information-centric assessment suggested that successful metrics practices would require equal emphasis on information, interpretation, and human interaction and communication.

The general purpose of assessment is to understand practices and identify possible areas for improvement [33], [36], [50], [52]. Some assessment approaches are based on abstract models, and others emphasize identification of unique problems [41]. The information-centric approach is both problem- and model-driven, cf. Table I. It starts by collecting data about current practices without having any specific normative models of metrics practices in mind. Based on that, it creates several rich pictures of current practices and identifies a number of problematic issues. This initial exploration constitutes the problem-driven part of the information-centric approach. The approach then creates a number of idealized views of metrics practices. The systematic comparison between these views and practices, together with the subsequent interpretation of findings, constitutes the model-driven part of our assessment.
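
As a minimal illustration of the model-driven comparison step, the sketch below records how an idealized, information-centric view can be set against observed practice; the view statements and findings are hypothetical placeholders rather than Software Inc. data.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Comparison:
        idealized_view: str    # information-centric view used as a lens
        current_practice: str  # what the rich pictures showed
        issue: str             # gap to debate with stakeholders

    # Hypothetical worksheet rows; in the approach itself these emerge
    # from stakeholder debate, not from code.
    worksheet: List[Comparison] = [
        Comparison(
            idealized_view="indicators reach decision-makers when decisions are made",
            current_practice="quarterly reports arrive after project milestones",
            issue="timing of reporting",
        ),
        Comparison(
            idealized_view="stakeholders share the intended meaning of each indicator",
            current_practice="developers question what 'productivity' actually measures",
            issue="interpretation of indicators",
        ),
    ]

    for row in worksheet:
        print(f"{row.issue}: ideal='{row.idealized_view}' vs. observed='{row.current_practice}'")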

There are both important limitations and implications of the information-centric approach. The research was carried out at Software Inc., and the results are not necessarily transferable to all software organizations. The context was a large software organization that emphasizes measurements, and the study was conducted by a well-functioning collaborative team of researchers and practitioners. The information-centric approach requires strong modeling and analytic skills and knowledge about soft systems thinking and practice. Our previous experiences with SSM were instrumental in conducting the assessment. If you are trained in drawing rich pictures and formulating systemic views, the process is rather quick and iterations are easy. It does, however, require effort and practice to acquire these skills. This stresses the more general point that organizational assessment and problem-solving are crucial skills in software improvement efforts [33], [36], [50], [52].

The information-centric approach has implications for both practice and research. Many software organizations fail to get satisfactory benefits from their metrics program investments [13], [20], [21]. Metrics programs often fail to provide useful information, and when they do provide it, it is not necessarily communicated to the right people at the right time. Software managers are, therefore, advised to prioritize critical assessments of their metrics program [38]. The information-centric approach complements existing approaches by concentrating on making metrics programs useful for managerial decision-making and intervention. It also supports the active participation of different stakeholders and stresses the relation between measured objects, the program designer's intended meaning, and different users' interpreted meaning. Our findings from Software Inc. suggest that this approach can help increase the value generated by metrics programs.

We agree with Niessink and van Vliet [42] that it is important to focus on those factors that enable metrics programs to generate value and not just data. The information-centric approach illustrates how emphasis on information rather than data, interpretation rather than facts, and human interaction and communication rather than computation and storage of results can contribute to this end. Future research could further explore information-centric approaches by adapting general approaches to assessing information systems quality (for example, see [14]) to the specific area of software metrics programs. Additional research is also required to complement the assessment focus of this research with information-centric techniques for other activities in the improvement cycle of the IDEAL model [36]. The experiences reported here focus on the cycle's initiation and diagnosing phases, and some aspects of the establishing phase. Future research addressing IDEAL's action and learning phases could provide valuable new knowledge of the information-centric approach as an integral part of comprehensive software metrics improvement strategies.

ACKNOWLEDGMENT

The authors wish to thank colleagues at Software Inc. for active participation in the assessment, for providing valuable feedback, and for engaging themselves in improving the software metrics program. K. Kautz, J. Nørbjerg, and colleagues at the 24th IRIS Conference 2001 in Norway provided valuable comments on an earlier version of the paper. Finally, the authors thank the editors and reviewers for critical and very constructive comments that have helped improve the manuscript.

REFERENCES

[1] D. M. Ahern, A. Clouse, and R. Turner, CMMI Distilled: A Practical Introduction to Integrated Process Improvement. Reading, MA: Addison-Wesley, 2003.
[2] P. B. Andersen, A Theory of Computer Semiotics. Semiotic Approaches to Construction and Assessment of Computer Systems. Cambridge, U.K.: Cambridge Univ. Press, 1990.
[3] D. Avison and T. Wood-Harper, Multiview: An Exploration in Information Systems Development. New York: McGraw-Hill, 1990.
[4] G. A. Bell, M. A. Cooper, J. O. Jenkins, S. Minocha, and J. Weetman, "SSM+GQM=The Holon methodology: A case study," in Proc. 10th ESCOM, 1999, pp. 123–136.
[5] M. Berry and R. Jeffery, "An instrument for assessing software measurement programs," Empirical Softw. Eng. Int. J., vol. 5, no. 3, pp. 183–200, 2000.
[6] M. Berry and M. F. Vandenbroek, "A targeted assessment of the software measurement process," in Proc. IEEE 7th Int. Softw. Metrics Symp., London, U.K., 2001, pp. 222–235.
[7] P. Bøttcher and H. D. Frederiksen, "Experience from implementing software metrics for benchmarking," presented at the 11th Int. Conf. Softw. Qual., Pittsburgh, PA, 2001.
[8] P. Checkland, Systems Thinking, Systems Practice. New York: Wiley, 1981.
[9] P. Checkland, "Soft systems methodology: A thirty year retrospective," Syst. Res. Behav. Sci., vol. 17, pp. 11–58, 2000.
[10] P. Checkland and J. Scholes, Soft Systems Methodology in Action. New York: Wiley, 1990.
[11] M. K. Daskalantonakis, "A practical view of software measurement and implementation experiences within Motorola," IEEE Trans. Softw. Eng., vol. 18, no. 11, pp. 998–1010, Nov. 1992.
[12] L. Davies and P. Ledington, Information in Action: Soft Systems Methodology. New York: MacMillan, 1991.
[13] C. A. Dekkers, "The secrets of highly successful measurement programs," Cutter IT J., vol. 12, no. 4, pp. 29–35, 1999.
[14] W. H. DeLone and E. R. McLean, "The DeLone and McLean model of information systems success: A ten-year update," J. Manage. Inf. Syst., vol. 19, no. 4, pp. 9–30, 2003.
[15] N. E. Fenton and M. Neil, "Software metrics: Successes, failures and new directions," J. Syst. Softw., vol. 47, no. 2–3, pp. 149–157, 1999.
[16] H. D. Frederiksen and J. Iversen, "Implementing software metrics programs: A survey of lessons and approaches," in IRMA 2003, Philadelphia, PA, 2003.
[17] H. D. Frederiksen and L. Mathiassen, "Assessing improvements of software metrics practices," presented at the IFIP 8.6 Working Conf. IT Innovation for Adaptability and Competitiveness, Dublin, Ireland, May 30–Jun. 2, 2004.
[18] R. D. Galliers, "Choosing appropriate information systems research approaches: A revised taxonomy," in Information Systems Research: Contemporary Approaches & Emergent Traditions, H. E. Nissen, H. K. Klein, and R. A. Hirschheim, Eds. Amsterdam, The Netherlands: Elsevier, 1991.
[19] W. Goethert and W. Hayes, Experiences in Implementing Measurement Programs. Pittsburgh, PA: The Software Engineering Institute, Carnegie Mellon Univ., 2001. CMU/SEI-2001-TN-026.
[20] D. R. Goldenson, A. Gopal, and T. Mukhopadhyay, "Determinants of success in software measurement programs: Initial results," in Proc. 6th IEEE Int. Symp. Softw. Metrics, Boca Raton, FL, Nov. 4–6, 1999, pp. 10–21.
[21] A. Gopal, M. S. Krishnan, T. Mukhopadhyay, and D. R. Goldenson, "Measurement programs in software development: Determinants of success," IEEE Trans. Softw. Eng., vol. 28, no. 9, pp. 863–875, 2002.
[22] R. B. Grady, Practical Software Metrics for Project Management and Process Improvement. Englewood Cliffs, NJ: Prentice-Hall, 1992.
[23] T. Hall and N. Fenton, "Implementing effective software metrics programs," IEEE Softw., vol. 14, no. 2, pp. 55–64, Mar./Apr. 1997.
[24] J. D. Herbsleb and R. E. Grinter, "Conceptual simplicity meets organizational complexity: Case study of a corporate metrics program," in Proc. Int. Conf. Softw. Eng., Kyoto, Japan, 1998, pp. 271–280.
[25] R. T. Hughes, "Embedding software measurement in a soft system approach," in Proc. 11th ESCOM, 2000, pp. 143–150.
[26] W. S. Humphrey, Managing the Software Process. Reading, MA: Addison-Wesley, 1989.
[27] J. Iversen and K. Kautz, "Principles of metrics implementation," in Improving Software Organizations: From Principles to Practice, L. Mathiassen, J. Pries-Heje, and O. Ngwenyama, Eds. Reading, MA: Addison-Wesley, 2001, pp. 287–305.
[28] J. Iversen and L. Mathiassen, "Cultivation and engineering of a software metrics program," Inf. Syst. J., vol. 13, no. 1, pp. 3–20, 2003.
[29] L. O. Jepsen, L. Mathiassen, and P. A. Nielsen, "Back to thinking mode—diaries as a medium for effective management of information systems development," Behav. Inf. Technol., vol. 8, no. 3, pp. 207–217, 1989.
[30] C. Jones, "Software measurement programs and industry leadership," Crosstalk, vol. 14, no. 2, pp. 4–7, 2001.
[31] L. J. Kirsch, "The management of complex tasks in organizations: Controlling the systems development process," Org. Sci., vol. 7, no. 1, pp. 1–21, 1996.
[32] B. Kitchenham, S. L. Pfleeger, and N. Fenton, "Toward a framework for software measurement validation," IEEE Trans. Softw. Eng., vol. 21, no. 12, pp. 929–944, 1995.
[33] G. F. Lanzara and L. Mathiassen, "Mapping situations within a systems development project," Inf. Manage., vol. 8, no. 1, pp. 3–20, 1985.
[34] P. J. Lewis, "Linking soft systems methodology with data-focused information systems development," J. Inf. Syst., vol. 3, no. 3, pp. 169–186, 1993.
[35] L. Mathiassen, "Collaborative practice research," Inf., Technol., People, vol. 15, no. 4, pp. 321–345, 2002.
[36] B. McFeeley, IDEAL. A User's Guide for Software Process Improvement. Pittsburgh, PA: The Software Engineering Institute, Carnegie Mellon Univ., 1996. CMU/SEI-96-HB-001.
[37] J. McGarry, D. Card, C. Jones, B. Layman, E. Clark, J. Dean, and F. Hall, Practical Software Measurement. Objective Information for Decision Makers. Reading, MA: Addison-Wesley, 2001.
[38] M. G. Mendonça and C. R. Basili, "Validation of an approach for improving existing measurement frameworks," IEEE Trans. Softw. Eng., vol. 26, no. 6, pp. 484–499, Jun. 2000.
[39] J. C. Mingers, "Information and meaning: Foundations for an intersubjective account," Inf. Syst. J., vol. 5, pp. 285–306, 1995.
[40] P. Naur, "Program development studies based on diaries," in Lecture Notes in Computer Science, Formal Methods and Software Development, T. R. Green et al., Eds. Berlin, Germany: Springer-Verlag, 1983.
[41] P. A. Nielsen and J. Pries-Heje, "A framework for selecting an assessment strategy," in Improving Software Organizations: From Principles to Practice, L. Mathiassen, J. Pries-Heje, and O. Ngwenyama, Eds. Reading, MA: Addison-Wesley, 2001, pp. 185–198.
[42] F. Niessink and H. van Vliet, "Measurement should generate value, rather than data," in Proc. 6th Int. Softw. Metrics Symp., Boca Raton, FL, 1999, pp. 31–38.
[43] M. K. Pedersen, A Theory of Informations. Copenhagen, Denmark: Samfundslitteratur, 1996.
[44] S. L. Pfleeger, "Lessons learned in building a corporate metrics program," IEEE Softw., vol. 10, no. 3, pp. 67–74, 1993.
[45] D. Philips, "Back to basics: Metrics that work for software projects," Cutter IT J., vol. 12, no. 4, pp. 36–42, 1999.
[46] R. S. Pressman, Software Engineering—A Practitioner's Approach, European Adaptation, 5th ed. New York: McGraw-Hill, 2000.
[47] S. Rifkin and C. Cox, Measurement in Practice. Pittsburgh, PA: The Software Engineering Institute, Carnegie Mellon Univ., 1991. CMU/SEI-91-TR-016.
[48] E. C. L. Starrett, "Measurement 101," Crosstalk, vol. 11, no. 8, pp. 24–28, 1998.
[49] F. A. Stowell, Ed., Information Systems Provision: The Contribution of Soft Systems Methodology. New York: McGraw-Hill, 1995.
[50] G. M. Weinberg, Becoming a Technical Leader—An Organic Problem Solving Approach. New York: Dorset House, 1986.
[51] G. M. Weinberg, Quality Software Management, Volume 2: First-Order Measurement. New York: Dorset House, 1993.
[52] G. M. Weinberg, Quality Software Management, Volume 4: Anticipating Change. New York: Dorset House, 1997.
[53] R. K. Yin, Case Study Research: Design and Methods, 2nd ed. Newbury Park, CA: Sage, 1994, vol. 5, Applied Social Research Methods Series.

Helle Damborg Frederiksen received the M.S. and Ph.D. degrees from Aalborg University, Aalborg, Denmark, in 1992 and 2004, respectively.

She has been employed at the Case Company, Aalborg, Denmark, since 1995, and was an industrially based Ph.D. student at Aalborg University from 2000 to 2004.

Lars Mathiassen received the M.S. degree in computer science from Aarhus University, Aarhus, Denmark, in 1975, the Ph.D. degree in informatics from Oslo University, Oslo, Norway, in 1981, and the Dr.Techn. degree in software engineering from Aalborg University, Aalborg, Denmark, in 1998.

He is currently a Professor of Computer Information Systems at Georgia State University, Atlanta. He is coauthor of Computers in Context (Oxford, U.K.: Blackwell, 1993), Object Oriented Analysis & Design (Aalborg, Denmark: Marko Publishing, 2000), and Improving Software Organizations (Reading, MA: Addison-Wesley, 2002). His research interests include information systems and software engineering with a particular emphasis on process innovation.

Dr. Mathiassen is a member of the Association for Computing Machinery (ACM) and AIS.
