
Maintainability-based impact analysis for software product management

Samer I. Mohamed, Islam A. M. El-Maddah, Ayman M. Wahba

Department of Computer and System Engineering

Ain Shams University, Egypt

samer.mohamed@eds.com, islam_elmaddah@yahoo.co.uk, ayman.wahba@gmail.com

Abstract

The decision-making process about which requirements to include in each release is a critical task for the product manager. Identifying software product release contents for evolving systems is therefore one of the product success factors that any software organization in a market-driven environment focuses on, spending considerable effort and expense to maximize the satisfaction of its stakeholders. Maintenance of evolving systems is challenging because the characteristics of the system components restrict the type of requirements they can accommodate during product evolution. This paper illustrates a Hybrid-Based Maintainability Impact Analysis (HBMIA) methodology that calculates the impact of the system components (where the new requirements are to be implemented) on the release planning decisions. By factoring in the effect on the target components, the calculated impacts make the requirements scheduled for implementation more likely to be implemented successfully within the available resources and constraints. A product management framework is designed to incorporate the effect of both existing and newly developed components on the release planning decisions. A practical case study demonstrates the added value gained from the proposed HBMIA through the designed framework.

1. Introduction

Incremental software development is a process in which a software product is developed incrementally, such that additive functionality and/or fault corrections are delivered over sequential product releases. This enables the product customers to receive parts of the system early, providing higher value and gaining early feedback [9]. Release planning for incremental software development involves the decisions about which requirements are to be implemented in which release. This is a critical and challenging process, especially with stakeholders'

conflicting perspectives, competing objectives and different types of constraints. The objective of the release planning process is thus to maximize the value gained while balancing the stakeholders' perspectives and meeting the resource, budget, time and risk constraints [7].

Software maintenance is a very broad activity that covers correction of errors, enhancements, deletion and addition of capabilities, adaptation to changes in data requirements and operating environments, and improvement of performance and usability [1]. The IEEE standard definition of the software maintenance process is as follows: "Software maintenance is the process of modifying a software system or component after delivery to correct faults, improve performance or other attributes, or adapt to a changed environment [11]." This definition covers the different types of software maintenance: adaptive, perfective, corrective and preventive [3]. Adaptive maintenance involves the modifications to the software system required by changes in the software operating environment. Perfective maintenance refers to the changes originating from new user requirements. Corrective maintenance includes all the changes required to fix any faults or bugs in the system. Preventive maintenance focuses on preventing problems in the future. The maintenance definition also reflects the common view that software maintenance is a post-delivery activity that starts when the system is released and encompasses all the activities that keep it operational [1]. One of the major challenges of software maintenance is determining the impact of a proposed modification on the system components, an activity called impact analysis.

Impact analysis is the activity of assessing the potential effects of a change with the aim of minimizing unexpected side effects. It also involves the identification of the system's components that need to be modified or created as a consequence of the proposed modification [2]. Impact analysis is of great benefit in reducing the risks and unexpected outcomes in the system before the changes are implemented [9].


Impact analysis information can also be used in planning different project activities such as resource estimation, scheduling and cost allocation. It can also be used to reduce rework cost, resulting in higher quality.

The organization of the paper is as follows. The next section refers to the related work for our research. Section three elaborates the rationale and research objectives for the proposed methodology (HBMIA). Section four discusses the release planning process using the HBMIA methodology. Section five illustrates the practical advantages of HBMIA through a case study using the designed framework. The final section summarizes our conclusions and introduces our future research.

2. Related work

2.1. Difficulty of Modification (DoM)

Difficulty of Modification (DoM) is used to assess the impact on the existing components where the new requirements are to be implemented. DoM acts as a measure of how the existing components of the system to be maintained will be impacted by the new requirements. A set of factors affects the DoM measurement; these factors are assessed based on lower-level criteria that can be measured directly from the historical data available in the organization's metrics [5]. The factors that are subject to our analysis and contribute to the DoM of the existing components are size, complexity, health, understandability and functionality. These factors are not assumed to be necessarily orthogonal. Size measures the ratio of the added/modified code to the total component size; the most common metric for assessing component size is Source Lines Of Code (SLOC). Complexity measures the code complexity. Component complexity is affected by the relations between the components themselves in the system, which are measured by the coupling between components. The most common metric used to assess component complexity is McCabe's cyclomatic complexity [8]. Health measures the operational failures reported against the component during field usage of the system; unhealthy components carry a high risk for even small modifications [4]. The health of any component can be calculated as the ratio of the number of defects reported against that component to the total number of defects affecting the system over a given period.

Understandability measures the ease with which a component can be understood by the developer modifying it. This is a function of the expertise of whoever is making the change, how long the component has been part of the system, and the quality of the documentation. The most common metric used to assess component understandability is the Halstead level [6]. Functionality identifies how much functionality is implemented in each component. The most common metric used to assess functionality is Weighted Methods per Class (WMC); it can also be calculated as the ratio of the added/modified functions in a component to the total number of functions within that component.
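To make these factor definitions concrete, the sketch below derives the five DoM factors for a component from repository metrics. The field names, the normalization to [0, 1], and the use of one minus the Halstead level as a difficulty proxy are our illustrative assumptions, not formulas from the paper.

```python
# Illustrative sketch (not the paper's exact formulas): deriving the five
# DoM factors for an existing component from repository metrics.
from dataclasses import dataclass

@dataclass
class ComponentMetrics:
    sloc: int                  # total component size in source lines of code
    changed_sloc: int          # lines added/modified by the proposed change
    cyclomatic: float          # McCabe's cyclomatic complexity
    defects: int               # defects reported against this component
    total_system_defects: int  # defects reported against the whole system
    halstead_level: float      # Halstead program level in [0, 1]; higher = easier
    changed_functions: int     # functions added/modified by the change
    total_functions: int       # total functions in the component

def dom_factors(m: ComponentMetrics, max_cyclomatic: float) -> dict:
    """Return the five DoM factors, each normalized to [0, 1] (assumed scale)."""
    return {
        "size": m.changed_sloc / m.sloc,
        "complexity": m.cyclomatic / max_cyclomatic,   # relative to worst component
        "health": m.defects / m.total_system_defects,  # share of system defects
        # A low Halstead level means the code is hard to understand, so invert it.
        "understandability": 1.0 - m.halstead_level,
        "functionality": m.changed_functions / m.total_functions,
    }
```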

2.2. Difficulty of Creation (DoC)

Difficulty of Creation (DoC) is used to assess the impact on the new components where the new requirements are to be implemented. DoC acts as a measure of how the new components of the system to be maintained will be impacted by the new requirements. The factors that contribute to the DoC assessment are size, complexity, criticality, understandability and dependability. As with the DoM factors, these factors are not assumed to be orthogonal; in other words, they may affect each other [6]. The size factor can be calculated as the ratio of the new component's size to the total size of the newly implemented components within a specific period of time; the most common metric for assessing component size is Source Lines Of Code (SLOC). The complexity of a new component is assessed in the same way as for the existing components, as indicated in the previous section. Criticality measures how critical the component is [8]. The understandability of new components is likewise assessed as for existing components. Dependability measures the relation between the new component and the other system components; the most common metric used to assess dependability is the coupling between components.
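Because new components have no operational history, their DoC factor values come mainly from expert judgment. The following is a minimal sketch of how several experts' factor scores might be combined using experience weights; the 0-1 score scale and all names are assumptions for illustration.

```python
# Illustrative sketch: combining expert scores for a new component's DoC
# factors. Scores are assumed to be on a 0-1 scale; each weight reflects the
# expert's experience with the component (higher = more trusted).
DOC_FACTORS = ["size", "complexity", "criticality", "understandability", "dependability"]

def doc_from_experts(expert_scores: list[dict], expert_weights: list[float]) -> dict:
    """Weighted average of each DoC factor across experts."""
    total_weight = sum(expert_weights)
    return {
        f: sum(s[f] * w for s, w in zip(expert_scores, expert_weights)) / total_weight
        for f in DOC_FACTORS
    }
```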

3. Rationale and research objectives

3.1. Rationale

The rationale behind HBMIA is to develop a new hybrid-based methodology that combines both metric-based and expert-based approaches to gain the advantages of both. Our contribution in this paper is threefold: first, we develop a new hybrid-based methodology that combines expert-based and metric-based approaches to evaluate the Difficulty of Modification (DoM) of existing components. Second, we use the expert-based approach to evaluate the Difficulty of Creation (DoC) of new components imposed by the new requirements. Third, we integrate HBMIA with the release planning process to identify the requirements to be scheduled for implementation based on their influence on the system components.

3.2. Research objectives

1- Design a characterization methodology for the system components that can be used to assess the components based on quality attributes such as Difficulty of Modification (DoM) for existing components and Difficulty of Creation (DoC) for new components. This methodology identifies the different factors that affect both DoM and DoC and selects the proper system metrics for assessing each factor.

2- Develop a methodology to incorporate both system metrics and expert contributions together for each factor of the quality attributes.

3- Implement a technique to identify the components that would be impacted by the implementation of the proposed requirements. This technique, called Requirements-Driven Impact Analysis (RDIA), determines how the implementation of each requirement would impact the system components.

4- Use both the characterization methodology and RDIA to calculate the total impact of each requirement from the components where the requirement is to be implemented.

5- Design a framework that uses the calculated requirement impacts to identify the best requirements to select for implementation.

4. Release planning process with HBMIA

4.1. Process architecture

As shown in figure 1, we demonstrate the steps required to calculate the impact of the system components where the new requirements are to be implemented, and how this impact is used when selecting the best candidate requirements for the product releases.

Figure 1. Release planning process architecture with integrated HBMIA

Step 1: Use the historical data repository of the system within the organization to identify the system components.

Step 2: Determine the different factors that affect the Difficulty of Modification (DoM).

Step 3: Determine the different factors that affect the Difficulty of Creation (DoC).



Step 4: Get the experts' contributions for each component factor of both DoM and DoC. Each expert is assigned a weight according to his/her experience with the component. Experts are also assigned relative weights based on their importance within the organization.

Step 5: Aggregate the data collected from the experts with the data collected from the system's historical data for each component. The system historical data is assigned a weight that reflects the maturity of the data collected within the organization. This weight, along with the relative experts' weights, controls the portion by which each source influences the total calculated value for each component.
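One plausible reading of this aggregation step, under the assumption that it is a weighted average (the exact rule is given in the authors' companion paper), is sketched below.

```python
# Illustrative sketch of Step 5: blending the historical-metrics value for a
# factor with the experts' values, weighted by data maturity and experience.
def aggregate_factor(metric_value: float, metric_weight: float,
                     expert_values: list[float], expert_weights: list[float]) -> float:
    values = [metric_value] + expert_values
    weights = [metric_weight] + expert_weights
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# e.g. mature historical data (weight 3) versus two experts (weights 2 and 1):
# aggregate_factor(0.4, 3.0, [0.6, 0.8], [2.0, 1.0]) -> (1.2 + 1.2 + 0.8) / 6 = 0.533...
```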

Step 6: For each new requirement to be implemented, identify the components that will be impacted, either through modification or creation. This process is called Requirements-Driven Impact Analysis (RDIA).

Step 7: Determine the eXtent of Modification (XoM) for each impacted component. XoM determines to what extent the impacted component will be modified by the proposed requirement. For a new component, XoM equals 1; for existing components, it is a value less than or equal to 1.

Step 8: Calculate the Difficulty of Creation (DoC) value for each new component.

Step 9: Calculate the Difficulty of Modification (DoM) value for each existing component.

Step 10: Aggregate the calculated DoM and XoM values for existing components along with the DoC and XoM values for new components to calculate the total impact value of each requirement, as sketched below.
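Steps 7 through 10 suggest a combination of the following shape. The capped ratio for XoM and the weighted sum for the total impact are assumed readings on our part, since the detailed calculations are deferred to the companion paper.

```python
# Illustrative sketch of Steps 7-10 (assumed, not the paper's exact rule):
# XoM as the fraction of a component touched by the change, and the total
# requirement impact as the XoM-weighted sum of DoM over modified existing
# components plus DoC over newly created components (whose XoM is 1).
def xom(changed_sloc: int, component_sloc: int) -> float:
    """eXtent of Modification for an existing component, capped at 1."""
    return min(changed_sloc / component_sloc, 1.0)

def requirement_impact(existing: list[tuple[float, float]], new: list[float]) -> float:
    """existing: (DoM, XoM) pairs for modified components;
    new: DoC values for created components."""
    return sum(dom * x for dom, x in existing) + sum(new)
```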

Step 11: Identify the input stakeholders' priorities and interests for the requirements elicited for implementation.

Step 12: Use the impact values calculated for each requirement, along with the stakeholders' input priorities, the release objectives, and the system and environmental constraints, to identify the requirements to be scheduled for implementation in the next release.
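Step 12 is in effect a constrained selection problem. A minimal greedy sketch is shown below, ranking requirements by stakeholder priority per unit of calculated impact under a release capacity; the greedy rule and all names are our assumptions, not the framework's actual optimization.

```python
# Illustrative greedy sketch of Step 12 (not the framework's algorithm):
# fill a release with requirements ranked by stakeholder priority per unit
# of calculated impact, subject to a release capacity.
def plan_release(reqs: list[str], priorities: dict[str, float],
                 impacts: dict[str, float], capacity: float) -> list[str]:
    ranked = sorted(reqs, key=lambda r: priorities[r] / impacts[r], reverse=True)
    plan, used = [], 0.0
    for r in ranked:
        if used + impacts[r] <= capacity:
            plan.append(r)
            used += impacts[r]
    return plan

# e.g. plan_release(["G1", "G2", "G3"], {"G1": 9, "G2": 4, "G3": 7},
#                   {"G1": 3.0, "G2": 1.0, "G3": 5.0}, capacity=4.5)
# ranks G2 (4.0), G1 (3.0), G3 (1.4) and selects ["G2", "G1"].
```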

Due to the size limitation of the paper, we could not include the detailed calculations for the HBMIA process. The detailed process calculations are described in a separate paper submitted to another technical conference.

5. Case study

5.1. Test case description

HBMIA is evaluated on the components of the KC1 case study from the NASA Metrics Data Program [10]. The Metrics Data Program is a database that contains data about problems, products and metrics of a number of software projects. The main objective of the program is to gather, validate, organize, store and provide software metrics data for the software engineering community. The case study (KC1) is a software component of a data processing unit within a large ground system. The system is made up of 43 KSLOC of C++ code. The error data for this code has been collected since the beginning of the project, over five years of development and maintenance. The data from KC1 is analyzed and mapped to the class level, so each component refers to a class within the KC1 test case.

To show the practical benefit of HBMIA, we assume 20 new requirements, G1 to G20, scheduled for implementation as part of future product maintenance over three releases (Alpha, Beta and Candidate). In this test case, there are two experts (expert_1, expert_2) along with the system's historical metrics data, and three stakeholders entering their input values for each requirement. Given the components affected by each new requirement, and taking into consideration that CM1 through CM7 are existing system components while CM8, CM9 and CM10 are new components, we use the designed framework to calculate the requirement impacts. These impact values identify which requirements are selected for implementation based on the available product resources.

To show the release planning process for the 20 requirements over the three product releases using the impact values over the system's 10 components, we follow these steps:

Step 1: Each stakeholder enters the specifications for each new requirement selected for implementation. In this step, each stakeholder enters his own values for each requirement.

Step 2: The product manager identifies the different system components impacted by each requirement. This process is the Requirements-Driven Impact Analysis (RDIA), in which the requirements/components matrix is built to identify the impact on the system components where those requirements are to be implemented, as shown in figure 2. This selection includes both existing and new components.
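To make the matrix concrete, a hypothetical fragment of such a requirements/components mapping is shown below; the actual assignments for this case study are the ones in figure 2, not these.

```python
# Hypothetical fragment of the requirements/components matrix from Step 2.
# The real assignments appear in figure 2 of the paper; these are made up.
impact_matrix = {
    "G1": {"modifies": ["CM1", "CM3"], "creates": ["CM8"]},
    "G2": {"modifies": ["CM2"],        "creates": []},
    "G3": {"modifies": ["CM4", "CM5"], "creates": ["CM10"]},
}
```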


Figure 2. Requirement impacted components

Step 3: Use the system history metrics to get the Difficulty of Modification (DoM) factor values. Size, complexity, health, understandability and functionality are the factors that affect DoM, and they are entered by the product manager. The product manager also enters the experience weight for each component, which is 1 in our case since all components have equal weights from the history perspective. The volume of each component in SLOC is another input value that needs to be entered by the product manager, as shown in figure 3.

Figure 3. Difficulty of Modification (DoM) values for existing components

Step 4: Each expert enters his own values for each DoM factor based on his/her experience with the existing system components. Each expert also enters an experience weight for each component, reflecting his working experience with that component. These weights range from 1 to 9, where 1 refers to equal importance, 5 to strong importance and 9 to extreme importance; values in between refer to intermediate importance.

Step 5: Each expert enters his own values for each DoC factor based on his/her experience with the new system components. Each expert also enters an experience weight for each component, reflecting his working experience with that component, as shown in figure 4.

Figure 4. Difficulty of Creation (DoC) values for new components


Step 6: Each expert enters his own weights for each DoM/DoC factor based on his/her experience with the system components, as shown in figure 5.

Step 7: The product manager enters the details of each release to be used while generating the product release plans.

Step 8: Use the framework to generate the release plan shown in figure 6, based on the input values shown in the previous figures.

Figure 6. Resultant release plan

6. Conclusions and future work

In this paper, we have presented a new methodology, HBMIA, for calculating the impact of implementing new requirements on an existing system. Our contribution in this paper is threefold. First, HBMIA combines the strengths of both metric-based and expert-based approaches when collecting DoM/DoC factor values, balancing historical data and expert contributions. Second, HBMIA captures the effect of the new components while calculating the total requirement impacts; this improves the quality of the calculated requirement impact, especially when the new requirements result in the creation of new components. Third, HBMIA is integrated with the release planning process to identify the requirements to be scheduled for implementation based on their influence on the system components. This integration is done through a designed framework that balances the different stakeholders' priorities and incorporates their different perspectives while selecting the release requirements. The framework takes the different resource constraints, such as budget, time and staff, into account while managing the product releases.

Thus, HBMIA overcomes the drawbacks of existing similar approaches, which depend only on expert data, for which there is no evidence of correctness, and which neglect the historical data that provides good indicators of the future trends of the system components. Since only a few studies have been performed to evaluate the efficiency and suitability of HBMIA, further studies are needed on several issues that can affect the algorithm's efficiency: further evaluation of the different factors that influence both DoM and DoC; further analysis of the effect of the type of modification on the DoM assessment; and analysis of how software developer productivity [2, 8] will impact the assessment of both DoM and DoC.

7. References

[1] Aggarwal K., Singh Y., and Chhabra J. K., "An Integrated Measure of Software Maintainability", In Proceedings of Annual Reliability and Maintainability Symposium, IEEE, 2002.

[2] Arnold R. S. and Bohner S. A., "Impact Analysis – Toward a Framework for Comparison", Proceedings of the Conference on Software Maintenance, Montreal, Canada, IEEE Computer Society Press, CA, 1993, pp. 292-301.

[3] Arthur L. J., Software Evolution: The Software Maintenance Challenge, John Wiley & Sons, New York, NY, 1988.

[4] Ash D., Alderete J., Yao L., Oman P. W., and Lowther B., "Using software maintainability models to track code health", In Proceedings of International Conference on Software Maintenance, IEEE, 1994.

[5] Bengtsson P., "Towards Maintainability Metrics on Software Architecture: An Adaptation of Object-Oriented Metrics", Proceedings of the First Nordic Workshop on Software Architecture, Research Report 1998:14, ISSN 1103-1581, Blekinge Institute of Technology, Sweden, pp. 87-91.

[6] Bohner S. A. and Arnold R. S., eds., Software Change Impact Analysis, IEEE Computer Society Press, Los Alamitos, CA, 1996.

[7] Carlshamre P., "Release Planning in Market-Driven Software Product Development: Provoking an Understanding", Requirements Engineering Journal, 7(3), 2002, pp. 139-151.

[8] Conte S. D., Dunsmore H. E. and Shen V. Y., Software Engineering Metrics and Models, Benjamin/Cummings Publishing Company, 1986.

[9] Greer D., and Ruhe G., “Software Release Planning: An Evolutionary and Iterative Approach,” Information and Software Technology, 46(4), 2004, pp. 243-253.

[10] Metrics Data Program, NASA IV&V Facility, http://mdp.ivv.nasa.gov/.

[11] IEEE Standard for Software Maintenance, IEEE Std 1219-1998, Software Engineering Standards, Volume 2, 1999.
