DOD Parametric Cost Estimating Handbook 2nd Ed









Fall 1995



This Handbook is intended to be used as a general guide for implementing and evaluating parametric-based estimating systems, and as the text material for a basic course in parametric estimating techniques. The information contained in this Handbook complements and enhances the guidance provided by the DCAA Contract Audit Manual (CAM), the FAR/DFARS, and other government regulations or public laws relating to cost estimating. The Handbook is structured so that each chapter can be used as a stand-alone guide and reference: a reader interested in only one subject can read that chapter without reading any other. Expected users of this Handbook include cost model developers, analysts, auditors, and all levels of cost management.

Every phase of the government’s procurement process is being reviewed and changed to reflect the realities of today’s cost-conscious environment. Among the many aspects of the procurement process under scrutiny is the use of traditional cost estimating tools and processes. These tools and processes have at times proven expensive to maintain and operate, and have not always provided realistic or timely results. Today’s Acquisition Reform mind-set dictates that the cost estimating community consider a different set of tools and processes that are less labor intensive to operate and result in credible cost estimates.

It is partly from the impetus of Acquisition Reform that this document has been created. All of us in the cost estimating community need to find ways to perform our work more cost effectively. Pushed by the common goal of reducing proposal preparation cost, contractor and oversight management are pursuing a cooperative approach to problem solving. This Handbook is an example of such a cooperative effort. If the Handbook helps produce better estimates, analyses, or audits, then the process has been successful.

This Handbook is just one of several by-products of a joint Industry/Government Parametric Cost Estimating Initiative. The Initiative Steering Committee and implementation team include representation from contractors, buying activities, the Defense Contract Management Command, and the Defense Contract Audit Agency. The Initiative’s far-reaching action items include working with pilot contractor sites to test the expanded use of parametric cost estimating techniques and tools, developing parametric cost estimating training, and preparing and distributing this Handbook.

This is the first edition of the Handbook. It will be used and tested at pilot contractor sites. The Initiative Steering Committee will be responsible for receiving and incorporating any comments for improving the Handbook from the pilot sites and others, and publishing the second edition of the Handbook. The second edition will be issued in the summer of 1996. A user reply sheet is provided in this Preface to facilitate submission of your suggestions. An Order Form for this Handbook is included at the end of the Preface.

The preparation of this Handbook included the review of nearly four thousand pages of documentation, some of which is included here. The material has come from many sources, both individual and organizational contributors. There is no way all of the contributors can be acknowledged, since most of them are anonymous. Two, however, should be mentioned at this time, besides those acknowledged later in the text. First, the Parametric Estimating Initiative Steering Committee, for authorizing this Handbook, and specifically NAVSEA, for funding the Handbook’s development.

Thanks to the rest of the Committee for their inputs, reviews and critiques of the draft. And, especially, the Space Systems Cost Analysis Group (SSCAG), which supported the initial concept of the Handbook, and the many individuals within that organization who conceptualized, critiqued, and developed much of the material.

Bill Brundick, General Editor



Bob Scott
Executive Director of Contract Management
Defense Contract Management Command - AQC
Cameron Station
Alexandria, Virginia 22304-6190
703-274-0821 phone
703-274-0823 fax

Bob Spiker
Controller, Financial and Government Accounting
Westinghouse Electric Corporation - ESG
P.O. Box 17319, Mail Stop A585
Baltimore, Maryland 21203-7319
410-765-2913 phone
410-765-6038 fax

Michael Thibault
Deputy Director
DCAA Headquarters
Cameron Station
Alexandria, Virginia 22304-6178
703-274-7281 phone
703-617-7450 fax

Co-Chairmen

Jim Collins
Parametric Estimating Specialist
Westinghouse Electric Corporation D61-ESG
P.O. Box 1693, Mail Stop 1293
Baltimore, Maryland 21203
410-765-8033 phone
410-765-5289 fax

David Eck
Chief, Policy Formulation Division
DCAA Headquarters
Cameron Station, Room 4A250
Alexandria, Virginia 22304-6178
703-274-7314 phone
703-617-7452 fax


Jim Balinskas
Contract Pricing & Finance Division
NASA Headquarters, Code HC
300 E Street SW
Washington, DC 20546
202-358-0445 phone
202-358-3082 fax

Dean Boyle
Deputy Technical Assessment Manager
DPRO - Northrop Grumman
Defense Logistics Agency, DCMC
Bethpage, New York 11714-3593
516-575-9742 phone
516-575-6527 fax

Gary Constantine
Manager, Parametric Estimating
E-Systems - Greenville
P.O. Box 6056, CBN93
Greenville, TX 75403-6056
903-457-5666 phone
903-457-4619 fax

Marty Deutsch
Chief, Estimating Policy Review & Compliance
Martin-Marietta Astronautics
P.O. Box 179, Mail Stop DC 2503
Denver, Colorado 80201-0179
303-971-6060 phone
303-971-5143 fax

Mel Eisman
Senior Research Associate
Rand Corporation
P.O. Box 2138, Mail Stop HMRP/5
Santa Monica, California 90407-2138
310-393-0411 x6704 phone

Jim Gleason
Procurement Analyst
US Army Materiel Command, AMCAQ-E
5001 Eisenhower Avenue, Room 9N15
Alexandria, Virginia 22333-0001
703-274-4437 phone


Directorate of Pricing and Finance
US Air Force Headquarters, AFMC/PKF
Building 262, 4375 Chidlaw Road, Suite 6
Wright Patterson AFB, Ohio 45433-5006
513-257-6861 phone
513-476-2435 fax

Senior Engineer, Technical Estimating
Westinghouse Electric Corporation - ESG
P.O. Box 1693, Mail Stop 1293
Baltimore, Maryland 21203
410-765-6163 phone
410-765-5289 fax

Bernard Rudwick
Professor - Financial Management
Defense Systems Management College
Fort Belvoir, Virginia 22060-5426
703-805-3783 phone
703-805-3184 fax

Marcia Rutledge
Contracting Officer, ASW & Mine Systems Branch
US Navy, Naval Sea Systems Command
2531 Jefferson Davis Highway, NC-3, Room 5E08
Arlington, Virginia 22242-5160
703-602-0951 x639 phone
703-602-7023 fax

George Salantai
Manager, Estimating
McDonnell Douglas Aircraft Company
Building 305, Post 2E, Room 232, Mail Code 306-5485
St. Louis, Missouri 63166
314-233-8461 phone

Amir Tarmohamed
Proposal Analysis & Definitization Team
Defense Logistics Agency, DCMC-AQCOD
Cameron Station, Room 8A454
Alexandria, Virginia 22304-6190
703-274-4130 phone




Please send comments to:

Marcia Rutledge
Contracting Officer, ASW & Mine Systems Branch
US Navy, Naval Sea Systems Command
2531 Jefferson Davis Highway, NC-3, Room 5E08
Arlington, Virginia 22242-5160
703-602-0951 x639 phone
703-602-7023 fax

We would appreciate specific comments, such as suggested alternative language with supporting rationale, additional information that would add value to the Handbook, or existing information that could be deleted. Specific examples that enhance the Handbook are welcome.



Please send me _____ copies of the Parametric Cost Estimating Handbook. There is no charge for the Handbook. Requestors are encouraged to limit requests to one copy per organization.

Please send the Handbook to:

Company/Organization (Please type or print)


Street Address

City, State, Zip Code

Daytime phone number including area code

All orders should be sent to:

Naval Sea Systems Command
2531 Jefferson Davis Highway, NC-3, Room 5E08

Arlington, VA 22242-5160

Attn: M. Rutledge, SEA 0263



Early in 1994, a joint Government/Industry Committee was formed to study ways to enhance the use of parametric cost estimating techniques. The Committee found that the lack of training was one of the largest barriers to the use of parametrics. The Committee sponsored this

Handbook to provide training and background information on the use and evaluation of

parametric tools. The Committee is also working with the Defense Acquisition University to develop classroom training that would be available to both government and industry.

The Committee is also sponsoring a pilot program to test the use of parametric tools. As of September 1995, eleven companies are participating in the pilot program. The pilot program is expected to last until the Summer of 1996. This Handbook will be tested at the pilot program sites. The Committee will update the Handbook to incorporate comments, best practices, and additional examples developed at the pilot sites. The Committee also invites your comments on the Handbook and any examples you may have. A User Reply Sheet is provided in the Preface to facilitate your input. The Committee expects to publish the second edition of the Handbook by the Summer of 1996.

The Committee hopes, with your help, to make the Handbook the guide people turn to when using and evaluating parametric tools.

Robert Scott
Executive Director of Contract Management
Defense Contract Management Command

Robert Spiker
Controller, Financial & Government Accounting
Westinghouse Electric Corporation - ESG

Michael Thibault
Deputy Director
Defense Contract Audit Agency

Executive Chairmen
Joint Government/Industry Parametric Cost Estimating Initiative






Introduction ... 1

Background ... 4

Definitions and Terms ... 9

Cost Realism ... 10


Significant Adjustments to Parametric Data ... 14

Consistent Scope ... 14

Anomalies ... 14

Improved Technology ... 15

Other Types of Adjustments and Data Normalization ... 15

Inflation ... 15

Learning Curve ... 16

Production Rate ... 16

Calibration and Validation of Cost Models ... 17

Some Review and Audit Considerations ... 18

Data Normalization Process ... 19

Pitfalls to the Use of a Parametric Model ... 25

Two Illustrations of Typical Data Normalization Problems ... 27

Summary ... 29

CHAPTER III -- ELEMENTARY STATISTICAL TECHNIQUES AND CER DEVELOPMENT

CER and Model Development - Uncertainty and Risk Reduction ... 32

Developing Cost Estimating Relationships (CERs) ... 34

Hypothesis Testing of a Logical CER ... 36

The CER Model ... 36

When to Use a CER ... 36

Strengths and Weaknesses of CERs ... 37

Strengths ... 38

Weaknesses ... 38

Regression Analysis ... 38

Curve Fitting ... 41

Graphical Method ... 41

LSBF Method ... 41


Multiple Regression ... 44

Curvilinear Regression ... 45

“Goodness” of Fit, R and R² ... 45

Correlation Analysis ... 45

Coefficient of Determination ... 46

Coefficient of Correlation ... 47

The Learning Curve ... 47

Limitations, Errors and Caveats of LSBF Techniques ... 49

Extrapolation Beyond The Range of the Observed Data ... 49

Cause and Effect ... 50

Using Past Trends to Estimate Future Trends ... 50

Misinterpreting the Coefficients of Correlation and Determination ... 50

Summary ... 50

Examples of CER Use ... 51

Construction ... 51

Weapons Procurement ... 51

Electronics ... 51

Cost and Price Analysis ... 52

Information and Techniques ... 52

Estimate Reliability ... 53

Two Examples of CER Use ... 54

Common CERs ... 60

Acceptance Criteria for a Cost Estimating Relationship ... 61

Auditing and Analyzing a CER ... 63

CER Analysis ... 63

General Features ... 63

Evaluating an Estimate ... 64

Summary of CER Audit and Analysis ... 65


Overview of Hardware Cost Modeling ... 70

Micro-Circuit and Electronic Module Modeling ... 77

Hardware Operations and Support (O&S) of Life Cycle Models ... 79

Deployment and Employment ... 81

Hardware Parameters ... 82

Program Globals ... 82

Commercial Models ... 82

MicroFASTE ... 83

Price Parametric Models ... 87

SEER ... 90

Regression Based Product Specific Cost Models ... 91

Non-Commercial Cost Models ... 93

Non-Statistical Cost Models ... 94


Cost Model Audit and Analysis ... 96

Analyzing a Product Specific Cost Model ... 96

Summary of Cost Model Audit and Analysis ... 99


The Importance of Software Today ... 102

The Software Development Process ... 107

The Waterfall Model ... 107

The Software Cost Estimating Process ... 109

Define Project Objectives and Requirements ... 111

Plan the Activities ... 111

Software Estimation Risks ... 112

Estimation Methodologies ... 113

Software Cost Estimating Standards ... 116

Benefits ... 116

Examples of Parametric Software Cost Estimating ... 116

Parametric Software Cost Estimating Tools ... 119

Desired Functional Capabilities of Parametric Tools ... 120

Input Data Collection ... 122

Some Commercial Tools ... 122

Software Sizing Tools ... 127

Glossary of Terms ... 130

Model Calibration ... 130

Trends and Conclusions ... 131

Trends ... 131

Conclusions ... 132


Introduction ... 133

Background ... 134

Parametric Criteria ... 134

Logical Relationships ... 135

Significant Statistical Relationships ... 135

Verifiable Data ... 135

Reasonably Accurate Predictions ... 135

Proper System Monitoring ... 136

Audit Planning and Requirements ... 136

Observations and Suggestions ... 140

Summary ... 142

Estimating System Reviews ... 143

Forward Pricing Rate Agreements (FPRAs) ... 145


Rules of Thumb ... 150

An Example ... 151


Parametric Estimating in New Business Development ... 153

Estimating Production Buys Using Parametrics ... 154

Example: Estimating Production Buys Using Parametrics ... 155

Estimating Spares and Change Orders ... 158

Appendix A Definitions of Estimating Terminology ... 160

Appendix B Work Breakdown Structure ... 177

Appendix C DCAA CAM 9-1000 Section 10, Review of Parametric Cost Estimates ... 198

Appendix D More About Statistics ... 205

Appendix E Examples of Other Hardware Estimating Models ... 209

Appendix F Some Currently Available Software Estimation Products ... 218









This Handbook is intended to be a living document. As advances are made in parametric estimating methodology, they will be introduced into the body of this material. Suggested changes from experienced users are solicited, as are recommendations from other experts in the field. However, the Handbook is primarily intended for the beginning parametrics practitioner and for use in enhancing parametric training in the field. When using the Handbook, we assume that the reader has a basic understanding of algebra and statistics.


A parametric cost estimate is defined as one that uses Cost Estimating Relationships (CERs) and associated mathematical algorithms (or logic) to establish cost estimates. For example, detailed cost estimates for manufacturing and test of an end item (for instance, a hardware assembly) can be developed using very precise industrial engineering standards and analysis. Performed in this manner, the cost estimating process is laborious and time consuming. However, if history has demonstrated that test (the dependent variable) has normally been valued at about 25% of the manufacturing value (the independent variable), then a detailed test estimate need not be performed; test can simply be computed at the 25% (CER) level. It is important, though, that any CERs used be carefully tested for validity using standard statistical approaches. An exploration of certain statistical approaches relevant to CER development is included later in this Handbook.
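The test-as-a-percentage-of-manufacturing example above can be sketched in a few lines of code. The 25% factor comes from the example in the text; the dollar figure is purely illustrative.

```python
# Sketch of the factor CER from the example above: test cost estimated
# as a fixed historical percentage of manufacturing cost.

TEST_FACTOR = 0.25  # test has historically run ~25% of manufacturing value


def estimate_test_cost(manufacturing_cost: float) -> float:
    """Apply the cost-to-cost CER: test = factor * manufacturing."""
    return TEST_FACTOR * manufacturing_cost


manufacturing = 400_000.0                  # independent variable (hypothetical)
test = estimate_test_cost(manufacturing)   # dependent variable via the CER
print(f"Estimated test cost: ${test:,.0f}")  # prints: Estimated test cost: $100,000
```

The point is not the arithmetic but the substitution: a single validated factor replaces a laborious bottom-up test estimate.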

The need to reengineer business processes and reduce cost in the Department of Defense (DoD) has led to a parametric cost estimating initiative. In every corner of every aspect of defense contracting, Business Process Reengineering (BPR) has become a nineties buzzword. The cumbersome techniques that evolved into the “normal” cost estimating processes of today are beginning to yield to more efficient and less costly approaches to achieve the same, or superior, results. Parametric estimating approaches fit very well into overall BPR methods.

The importance of Business Process Reengineering was recently underscored by Lloyd K. Mosemann, II, Deputy Assistant Secretary of the Air Force, in his closing Keynote Address entitled “Predictability,” to the Software Technology Conference in Salt Lake City, Utah, on Thursday, April 14, 1994. Although addressing the software process, Mr. Mosemann’s comments are relevant to the cost estimating process in general. In summary, he said, in part:

“There seems to be an inability within the software community, in general, to predict how much a software system will cost, when it will become operational, and whether or not it will satisfy user requirements. We need to deliver on our promises.

“We have a poor track record regarding predictions. A 1979 GAO report concluded that only two percent of software contracted for was usable exactly as delivered. Twenty different studies have come to the same conclusion. Therefore, we in the DoD are focusing our attention on process improvements. These include: specific metrics usage plans, reuse plans, peer inspections, process controls, proposed architectures in executable code, and government access to contractor on-line development environments.

“This emphasis on process will give all of us in the software community a greater confidence that the prospective contractor will deliver the promised product on time and on budget.”

Mr. Mosemann’s emphasis on process improvements to improve the predictability of cost and schedule fits nicely with the concept of expanding the use of parametric tools in the cost estimating workplace. Parametrics can play a role in the BPR process, as was underscored by Anthony A. DeMarco in his article “CAPE: Computer Aided Parametric Estimating for Business Process Re-Engineering,” in the PRICE Newsletter, October 1994. In summary, Mr. DeMarco states that:

“Business Process Reengineering (BPR) is the reengineering of an organization by examining existing processes and then revamping and revising them for incremental improvement. It is doing more with less, and sometimes entails ‘starting over.’


There are five phases to BPR. They are:

1. Create an organization for improvement,
2. Develop an understanding of the process,
3. Streamline the process,
4. Model, implement, measure, and control, and
5. Design and implement continuous improvement.

“Parametric tools can assist BPR. On one level, they can improve and streamline the BPR phases. On another level, parametric technology is the ‘best practice’ for estimating. Parametric tools bring speed, accuracy and flexibility to estimating processes, processes that are often bogged down in bureaucracy and unnecessary detail.”

The need to reengineer the DoD cost estimating process (through Acquisition Reform initiatives) became self-evident to certain people in both government and industry. A Steering Committee was chartered by government and industry executives to explore the role played by parametrics in the cost estimating process. One goal was to determine what barriers, if any, exist to expanding the role of parametrics, and to develop action plans to overcome those barriers. The committee consists of representatives from all of the Armed Services, the oversight community, and selected contractors. This Handbook has been authorized by that Steering Committee.

The Handbook is intended to be used by both model developers and model reviewers, and by their management or oversight, whether technical or financial. Government and industry cost analysts and auditors who use CERs and/or parametric cost models to develop or evaluate estimates should find the Handbook useful. It is also intended to serve as a source document for trainees within a generic parametric cost estimating training program.

This Handbook includes basic information concerning data collection, Cost Estimating Relationship (CER) development, parametric cost models, and statistical techniques.

Parametric techniques are a credible cost estimating methodology that can provide accurate and supportable contractor estimates, lower cost proposal processes, and more cost-effective estimating systems.


An estimating workbench context model is shown in Figure I-1. The model indicates the tools required within the estimating community of contractors, customers and government agencies. Figure I-2 is a graphical representation of the complete parametric cost estimating process. The figure indicates the process from inputs through modeling and into a post processor phase. The post processor allows for the conversion of parametric output into a cost proposal.


The origins of parametric cost estimating date back to World War II. The war caused a demand for military aircraft in numbers and models that far exceeded anything the aircraft industry had manufactured before. While there had been some rudimentary work from time to time to develop parametric techniques for predicting cost, there was no widespread use of any cost estimating technique beyond a laborious buildup of labor-hours and materials. A type of statistical estimating had been suggested in 1936 by T. P. Wright in the Journal of Aeronautical Science. Wright provided equations which could be used to predict the cost of airplanes over long production runs, a theory which came to be called the learning curve. By the time the demand for airplanes had exploded in the early years of World War II, industrial engineers were using Wright’s learning curve to predict the unit cost of airplanes.
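The learning curve Wright described can be sketched as a power law: each time cumulative quantity doubles, cost is multiplied by a constant slope. The 80% slope and first-unit value below are common illustrations, not figures from the text, and the unit-cost form shown is the variant the text says industrial engineers applied (Wright's original paper worked with cumulative-average cost, but the power-law form is the same).

```python
import math

# Learning-curve sketch: Y(x) = T1 * x**b, where b = log2(slope).
# With an 80% curve, cost drops 20% each time cumulative quantity doubles.


def unit_value(t1: float, x: int, slope: float = 0.80) -> float:
    """Predicted cost at cumulative quantity x, given first-unit cost t1."""
    b = math.log(slope, 2)   # log2(0.80) is approximately -0.322
    return t1 * x ** b


t1 = 1000.0  # hypothetical first-unit hours
for x in (1, 2, 4, 8):
    print(x, round(unit_value(t1, x), 1))
# Each doubling multiplies the value by the 0.80 slope: 1000, 800, 640, 512
```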

In the late 1940’s, the DoD, and especially the United States Air Force, began studying multiple scenarios concerning how the country should proceed into the age of jet aircraft, missiles and rockets. The military saw a need for a stable, highly skilled cadre of analysts to help evaluate such alternatives. Around 1950, the military established the Rand Corporation in Santa Monica, California, as a civilian “think tank” for independent analysis. Over the years, Rand’s work has represented some of the earliest and most systematic studies of cost estimating in the airplane industry.

The first assignments given to Rand concerned studies of first- and second-generation ICBMs, jet fighters and jet bombers. While the learning curve technique still proved useful for predicting the behavior of recurring cost, there were still no techniques other than detailed labor-hour and material estimating for projecting what the first unit cost might be (a key input to the learning curve equation). Worse still, no methods were available for quickly estimating the non-recurring costs associated with research, development, testing and evaluation (RDT&E). In the defense business of the early to mid 1950’s, RDT&E had suddenly become a much more important consideration, for two reasons. First, a shrinking defense budget (between World War II and the Korean War) had cut the number of production units for most military programs; second, the cost of new technology had greatly magnified the cost of development. The inability to estimate RDT&E and first unit production costs quickly and accurately had become a distinct problem.


Fortunately, within Rand, a cost analysis department had been started in 1950. This group proved to be prolific contributors to the art and science of cost analysis -- so much so that the literature of aerospace cost estimating of the 1950’s and 1960’s is dominated by the scores of Rand cost studies that were published during that time. In the mid 1950’s, Rand developed the most basic tool of the cost estimating discipline, the Cost Estimating Relationship (CER), and merged the CER with the learning curve to form the foundation of parametric aerospace estimating. This estimating approach is still used today.

By 1951, Rand had derived CERs for aircraft cost as a function of such variables as speed, range, and altitude. Acceptable statistical correlations were observed. When the data were segregated by aircraft type (e.g., fighters, bombers, cargo aircraft), families of curves were discovered, each corresponding to a different level of product or program complexity. This parametric stratification especially helped clarify development cost trends. Eventually, a usable set of predictive equations was derived and quickly put to use in Air Force planning activities.
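A CER of the kind Rand derived can be illustrated with a least-squares best fit (LSBF), the technique covered in Chapter III. The sketch below fits a power-law CER, cost = a * weight^b, in log-log space; the data points and the choice of weight as the driver are invented for illustration, not taken from Rand's studies.

```python
import math

# Minimal LSBF sketch: fit cost = a * weight**b by ordinary least
# squares on the logs of hypothetical historical (weight, cost) data.

history = [(1000, 2.1), (2000, 3.9), (4000, 7.2), (8000, 13.5)]  # (lb, $M)

xs = [math.log(w) for w, _ in history]   # log of the cost driver
ys = [math.log(c) for _, c in history]   # log of the observed cost
n = len(history)
xbar, ybar = sum(xs) / n, sum(ys) / n

# Slope and intercept of the best-fit line in log space
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum(
    (x - xbar) ** 2 for x in xs
)
a = math.exp(ybar - b * xbar)


def cer(weight: float) -> float:
    """Predicted cost ($M) from the fitted power-law CER."""
    return a * weight ** b


print(f"cost = {a:.4f} * weight^{b:.3f}")
print(f"predicted cost at 3000 lb: {cer(3000):.2f} $M")
```

In practice the fit would be followed by the statistical validity checks (R², correlation analysis) discussed later in the Handbook before the CER is used in an estimate.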

The use of CER’s and data stratification were basic breakthroughs in cost estimating, especially for RDT&E and first unit costs. For the first time, cost analysts saw the promise of being able to estimate relatively quickly and accurately the cost of proposed new systems. Rand extended the methods throughout the 1950’s, and by the early 1960’s, the techniques were being applied to all phases of aerospace systems.

Since these rather humble beginnings, the state of the art in parametric estimating has steadily improved through explosive growth in the number of practitioners, important methodological improvements, and greatly expanded databases. All of the major aerospace contractors and government aerospace organizations have dedicated staffs of parametricians who maintain and expand databases, develop parametric cost models, and utilize the tools of parametrics to estimate new and ongoing programs. NASA and the DoD routinely use parametric estimates to form the basis of new project cost commitments to Congress. The contractor community also routinely uses parametric cost models, especially during product concept definition. These estimates are used for decision making regarding bid strategies and as submittals to the government. It is only at the production and full scale development phase that parametrics are not commonly utilized for official proposal submissions (although contractors still frequently use parametrics to generate target costs and cross-checks on the labor/material buildup estimates).

Over the past several years, industry and professional estimating associations (e.g., the International Society of Parametric Analysts (ISPA), the Society of Cost Estimating and Analysis (SCEA), and the Space Systems Cost Analysis Group (SSCAG)) have been actively working with both the Defense Contract Management Command (DCMC) and the Defense Contract Audit Agency (DCAA) to explore expanded opportunities for the use of parametric cost estimating techniques in firm business proposals. ISPA was formed in 1978 when a parametric estimating users group evolved into a more general society. The Space Systems Cost Analysis Group formed in 1977 under the sponsorship of the U.S. Air Force Space Division, with a mission to:

1. Promote Cost Analysis Research

2. Develop new tools to improve cost estimating techniques

3. Promote sound practices, and

4. Provide a forum for government and industry cost analysts concerned with the development and production of space-design hardware and software.

Then, in April 1994, a joint Industry and Government workshop on Parametric Cost Estimating was held at the Defense Contract Audit Institute in Memphis, TN. Under the initiative and leadership of the DCMC, the DCAA, and industry proponents, a group of knowledgeable government and industry executives, policy formulators, and parametric practitioners was assembled to evaluate why parametric cost estimating is not used more widely in DoD and NASA business proposals, to identify the barriers to its expanded use, and to plan actions to take advantage of identified opportunities.

At the conclusion of the workshop, it became clear to the participants that there were no barriers precluding further implementation and use of parametric cost estimating by contractors in DoD or NASA business proposals. Rather, the barrier analysis and recommended actions focused on the need for industry leaders to demonstrate that parametric estimating systems can be relied upon by Government customers, and on the need for the Government to train employees so that there is a clear message that valid parametric estimates are a useful and often cost-effective estimating approach.



A complete glossary of parametric terminology, taken from numerous sources, is included in this Handbook as Appendix A. A few of the more important definitions are noted in this chapter.

There are several definitions of parametric estimating, but for the purposes of this Handbook the formal definition adopted is as follows: a technique employing one or more CERs and associated mathematical relationships and logic. The technique is used to measure and/or estimate the cost associated with the development, manufacture, or modification of a specified end item. The measurement is based on the technical, physical, or other characteristics of the end item.

This definition establishes the clear linkage between cost and a product’s (or end item’s) technical parameters. Without this linkage, a product cost cannot be effectively defined. Non-parametric estimating systems generally do not connect technical (non-cost) and cost elements with any substantial precision.

A parametric cost model, in turn, is defined as a group of cost estimating relationships used together to estimate entire cost proposals or significant portions thereof. These models are often computerized and may include many inter-related CERs, both cost-to-cost and cost-to-noncost. Some models use a very limited number of independently estimated values and a series of inter-related cost-to-cost and cost-to-noncost estimating relationships to predict complex proposal cost structures.

Parametric cost estimating is a technique used by both contractors and the Government in planning, budgeting, and performance stages of the acquisition process. The technique is used by contractors to expedite the development of cost estimates when discrete estimating techniques would require inordinate amounts of time and resources and would produce similar results. Reliance on properly developed and carefully evaluated CER’s and parametric cost models to produce realistic cost estimates can save both Industry and the Government time and resources in the evaluation and definitization cycle of the proposal or contract.

The concept includes the use of cost-to-cost CERs, such as engineering labor overhead rates and material overhead rates, which, when reviewed using traditional evaluation criteria, are considered valid estimators by the government. However, the technique also uses cost-to-noncost CERs, which require additional analysis to determine their validity and acceptability as estimating tools.

Parametric techniques focus on the cost drivers, not the miscellaneous details. The drivers are the controllable system design or planning characteristics and have a predominant effect on system cost. Parametrics uses the few important parameters that have the most significant cost impact on the product(s), hardware or software, being estimated.


A widely used term today is “cost realism.” Now, no one expects a cost estimate to precisely predict what a hardware or software product or a time and material service will cost. So, cost realism is not about the exact cost estimate. It’s about the system of logic, the assumptions about the future, and the reasonableness of the historical basis of the estimate. That is, it’s about the things that make up the foundation of the estimate.

Cost realism analysis answers questions such as:

* Are the assumptions used in the estimating process reasonable?

* Has the historical data base used been normalized to account for environmental parameters such as inflation?

* Is the cost estimate logical? Does it make sense in the context of the hardware or software product or service being estimated?

* Does the estimate display a bias toward being too low or too high? If so, how is this bias displayed in the estimate?

* Is the cost estimating organization motivated to produce an inordinately high or low estimate in order to serve their own purposes?

* If the product is fixed price sole source, has the historical basis data been “cherry picked” to ensure the cost estimate obtained is unreasonably high (contractor) or unreasonably low (auditor or government customer)?


* If the program is competitive, has the contractor or government program office created program expectations that are far too optimistic?

The cost estimator or analyst must work toward the goal of cost realism, whether employed by private industry or by the customer as a cost analyst or an auditor. If a contractor chooses to accept a management challenge in a competitive procurement, that’s certainly acceptable. However, the basis for the challenge should be clearly identified.

There is no easy answer to the cost realism dilemma we all face. Unreasonable biases and expectations from contractor and customer have driven the cost estimating process in the past, and personal and programmatic motivations may continue to drive it in the future. But one thing is certain: the cost estimating process will continue to confront future unknowns. These unknowns are what make the cost estimating job one of the most difficult there is. But sound assumptions, high quality historical data, and unbiased analysts and estimators will improve the process for all.








This chapter provides guidelines concerning data types, data sources, and data normalization and adjustment techniques. It also includes recommendations on using data to support the development and use of auditable CERs and cost models.


A universal format for collecting technical and cost information is the Work Breakdown Structure (WBS). (See Appendix B) The WBS provides for uniform definitions and collection of cost and technical information. MIL-STD-881B provides guidelines for establishing the WBS at DoD, Service and contractor levels.

Historical cost and labor hours data are required as a basis for cost estimating, and parametric estimating is no exception. Data should be collected and maintained in a manner that provides a complete audit trail, and expenditure dates should be recorded so that dollar valued costs can be adjusted for inflation.

Also required is Technical Non-Cost Data that describes the physical, performance and engineering characteristics of a system, sub-system or individual item. For instance, weight is a common non-cost variable used in CER’s and parametric estimating models. (Other typical examples of cost driver variables include horsepower, materials of construction, watts, thrust, length, etc.)

A fundamental requirement for the inclusion of a non-cost variable in a CER is that it be a statistically significant predictor of cost (that is, it should be a cost driver).
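As an illustration, the statistical screening of a candidate non-cost variable can be sketched in a few lines of Python. The weight and cost figures below are hypothetical, and the correlation and regression routines are a minimal stand-in for a full statistical package:

```python
import math

# Hypothetical data: weights (lb) and costs ($K) for eight completed units.
weights = [10, 14, 20, 25, 32, 40, 55, 60]
costs   = [120, 150, 210, 240, 300, 380, 470, 520]

def pearson_r(x, y):
    """Correlation between a candidate cost driver and cost."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def fit_cer(x, y):
    """Least-squares line: cost = a + b * weight."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return my - b * mx, b

r = pearson_r(weights, costs)
a, b = fit_cer(weights, costs)
print(f"r = {r:.3f}; CER: cost = {a:.1f} + {b:.2f} * weight")
```

A strong correlation (here r is near 1) is necessary but not sufficient; the variable must also make engineering sense as a driver.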

Relevant program data, including development and production schedules, quantities produced, production rates, equivalent units, breaks in production, significant design changes, and anomalies such as strikes, explosions, and natural disasters, are also necessary to fully explain any perturbations in historical data. Such perturbations may exhibit themselves in a profile of monthly cost accounting data as an unusual "spike" or "depression" in the level of charged hours. Such historical information comes from knowledgeable program personnel or program records (also known as program "memory").

The collecting point for cost and labor hours data is, in most instances, called the general ledger or a company accounting system. All cost and labor hours data, used in parametric CER’s or cost models, must be consistent with, and traceable back to, the original collecting point (the source). The data should also be consistent with accounting procedures and cost accounting standards.

Technical non-cost data comes from engineering drawings, engineering specifications, certification documents, or direct experience (e.g., weighing an item). Schedule, quantity and equivalent units, and similar information comes from Industrial Engineering, Operations Departments, program files or other program intelligence.

Inflation indices normally combine external and internal information. Examples of external information used in these determinations include the Consumer Price Index (CPI), Producer Price Index (PPI), Commodity Price Indices and other forecasts of inflation from various econometric models.

There are other external sources of data including databases containing pooled and normalized information from various places (other companies or public record information). Although such information can often be useful, weaknesses of these sources include:

(1) The user's lack of knowledge of the procedures (e.g., accounting practices) used by the other contributors.

(2) The treatment of anomalies (how they were handled) in the original data.

(3) Knowledge of the manufacturing processes used and how they compare to the current scenario being estimated.

(4) The inability to accurately forecast future indices.

Internal contractor information includes analyses such as private corporate inflation studies or "market basket" analyses. Such internal information provides data specific to a company's product line(s) (e.g., radar products) that could be relevant to a generic segment of the economy as a whole (e.g., electronics). Such specific analyses would normally be prepared as part of an exercise to benchmark government provided indices (e.g., the CPI), and to compare corporate performance to broader standards.

It is important to realize that sources of data can be almost unlimited, and all relevant information should be considered in a parametric analysis, if practical. Although major sources are described above, data sources should not be constrained to a specific list.

The data included in calculating parametric parameters will vary among model developers. However, the way parametric models are calculated from historical data, and the way they are applied in the estimating process, should be consistent within an individual estimating system.


What follows below are some of the more significant adjustments that may have to be made to historical parametric cost data.

Consistent Scope

Adjustments are appropriate for differences in program or product scope between the historical data and the estimate being made. For example, suppose the systems engineering department compared five similar programs and then realized that only two of the five had design to cost (DTC) requirements. To normalize the data, the DTC hours were deleted from the two programs to create a consistent systems scope and definition for CER development.


Anomalies

Historical cost data should be adjusted for anomalies (unusual events) prior to CER analysis when it is not reasonable to expect these unusual costs to be present in the new projects. The adjustments and judgments used in preparing the historical data for analysis should be fully documented. For example, suppose the development test programs of five similar programs are compared, and observations (from history and interviews) show that one of the programs experienced a major test failure (e.g., qualification, ground test, flight test). A considerable amount of labor was required to fact-find, determine the root cause, and develop an action plan for a solution. Should those hours be left in or deleted?

Improved Technology

Cost changes due to changes in technology are a matter of judgment and analysis. All bases for such adjustments should be documented and disclosed. For example, electronic circuitry originally designed with discrete components may now be implemented in ASIC technology. A hardware enclosure once made from aluminum may now, for weight reasons, be made of magnesium. What is the impact on the hours? Perfect historical data may not exist, but judgment and analysis should supply reasonable results.

For example, suppose there are four production (manufacturing hours) lots of data that look like this:

Lot 1 = 256,000 hours = 853 hours/unit
Lot 2 = 332,000 hours = 738 hours/unit
Lot 3 = 361,760 hours = 952 hours/unit
Lot 4 = 207,000 hours = 690 hours/unit

Clearly, Lot 3's history should be investigated. It is not acceptable to merely "throw out" Lot 3 and work with the other three lots. A careful analysis should be performed on the data to determine why it behaved the way it did. There may have been a strike, or possibly an unusual and serious engineering problem impacted production costs. In any event, careful analysis is important.
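A simple screening pass over the lot data above can be sketched as follows. The lot quantities are implied by the totals and hours/unit figures, and the 15% screening threshold is an illustrative assumption, not a standard:

```python
# Screen production lots for anomalies before CER analysis.
# (total hours, quantity) per lot, from the example above.
lots = {1: (256000, 300), 2: (332000, 450), 3: (361760, 380), 4: (207000, 300)}

hpu = {lot: total / qty for lot, (total, qty) in lots.items()}
vals = sorted(hpu.values())
median = (vals[1] + vals[2]) / 2  # median of the four lots

# Flag any lot more than 15% away from the median (illustrative threshold).
flagged = [lot for lot, h in hpu.items() if abs(h / median - 1) > 0.15]
for lot, h in sorted(hpu.items()):
    print(f"Lot {lot}: {h:.0f} hours/unit")
print("Investigate lots:", flagged)
```

A screen like this only identifies candidates for investigation; the reason behind each deviation still has to be established from program records before any adjustment is made.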



Inflation

There are no fixed ways to establish universal inflation indices (past, present, or future) that fit all possible situations. Inflation indices are influenced by internal considerations as well as external inflation rates. Therefore, while generalized inflation indices may be used, it may also be possible to tailor and negotiate indices on an individual basis, tied to specific labor rate agreements (e.g., FPRAs) and the actual materials used on the project. Inflation indices should be based on the cost of materials and labor on a unit basis (piece, pound, hour) and should not include other considerations such as changes in manpower loading or the amount of materials used per unit of production. The key to inflation adjustments is consistency. If cost is adjusted to a fixed reference date for calibration purposes, the same type of inflation index must be used in escalating the cost forward or backward from the reference date to the date of the estimate.
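The consistency rule can be illustrated with a short sketch. The index values below are purely illustrative; the point is that one index series is used for every conversion, in either direction:

```python
# Escalate a historical cost to the estimate's base year using one
# consistent index series. Index values are illustrative only.
index = {1989: 100.0, 1991: 108.2, 1995: 121.5}

def escalate(cost, from_year, to_year, idx):
    """Convert dollars of from_year into dollars of to_year."""
    return cost * idx[to_year] / idx[from_year]

cost_1991 = 250000.0
cost_1995 = escalate(cost_1991, 1991, 1995, index)
print(f"${cost_1995:,.0f} in 1995 dollars")
```

Because the conversion is a simple ratio, escalating forward and then backward with the same series returns the original value; mixing series does not.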

Learning Curve

The learning curve, as originally conceived, analyzes labor hours over successive production units of a manufactured item. The curve is defined by the following equation:

Hours/Unit = First Unit Hours * U^b, or

Fixed Year Cost/Unit = First Unit Cost * U^b

Where: U = Unit number

b = Slope of the curve

In parametric models, the learning curve is often used to analyze the direct cost of successively manufactured units. Direct Cost equals the cost of both touch labor and direct materials - in fixed year dollars. Sometimes this may be called an improvement curve. The slope is calculated using hours or constant year dollars. A more detailed explanation of learning curve theory is presented in Chapter III.
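A minimal sketch of the unit curve, assuming the common convention that the curve's slope percentage applies per doubling of quantity (so b = log2(slope)):

```python
import math

def unit_hours(first_unit_hours, unit, slope_pct):
    """Hours/Unit = First Unit Hours * U^b, with b = log2(slope)."""
    b = math.log(slope_pct / 100.0, 2)
    return first_unit_hours * unit ** b

# A 90% curve: each doubling of quantity reduces unit hours to 90%.
t1 = 1000.0
for u in (1, 2, 4, 8):
    print(f"unit {u}: {unit_hours(t1, u, 90):.1f} hours")
```

With a 1,000-hour first unit on a 90% curve, unit 2 takes 900 hours and unit 4 takes 810, each doubling multiplying the unit hours by the slope.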

Production Rate

Production rate effects (changes in production rate, e.g., units/month) can be calculated in various ways. For example, by adding another term to the learning or improvement curve equation we would obtain:

Hours/Unit = First Unit Hours * U^b * R^r, or

Fixed Yr $/Unit = First Unit $ * U^b * R^r

Where: U = Unit number

R = Production rate

r = Production rate curve slope

The net effect of adding the production rate term (R^r) is to adjust First Unit $ for rate. The equation will also yield a different "b" value.

Rate effect may be ignored or treated in different ways in different models. If possible, rate effects should be derived from behavior patterns observed in historical program data as production rates change, while holding the learning curve constant.

The rate effect can vary considerably depending on what was required to effect the change. For example, were new facilities required or did the change involve only a change in manpower or overtime?
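The rate-adjusted form can be sketched the same way. The 95% rate-curve slope and the rates used here are illustrative assumptions, not recommended values:

```python
import math

def rate_adjusted_hours(t1, unit, slope_pct, rate, rate_slope_pct):
    """Hours/Unit = First Unit Hours * U^b * R^r."""
    b = math.log(slope_pct / 100.0, 2)       # learning curve exponent
    r = math.log(rate_slope_pct / 100.0, 2)  # rate curve exponent
    return t1 * unit ** b * rate ** r

# 90% learning curve, 95% rate curve: doubling the rate cuts unit hours 5%.
base = rate_adjusted_hours(1000.0, 50, 90, 5, 95)
fast = rate_adjusted_hours(1000.0, 50, 90, 10, 95)
print(f"ratio at doubled rate: {fast / base:.3f}")
```

Because U^b and R^r are independent multiplicative terms, the rate effect applies the same proportional adjustment at any point on the learning curve.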


Once data has been collected and normalized, cost models can be developed. Although we will discuss cost models in much more depth in Chapters IV and V, a few comments are relevant here.

There are two general types of cost models: internal (contractor developed) and commercially available. Internal, contractor-developed models are derived from unique contractor data and generally do not require calibration, since they have been calibrated in a de facto manner. Commercial models, on the other hand, are based on more universal data and almost always need some form of calibration to be useful.

The cost driver equation(s) utilized in a commercial cost model are based on a database external to the specific data being used to support the current estimate. Calibration, then, is the process of computing a multiplier(s), to be used with the general purpose equation(s), such that the combined equation(s) will predict the cost as reflected by the technical and programmatic data being used to support the estimate.

Specialized (Internal) cost models are based directly on the data being used to support the estimate. Since the CER’s are derived directly from the supporting data, the model is, by definition, calibrated.


The result of calibrating an item in a commercial model is a calibration factor that is applied in the model's equations so that they reproduce the known value of the item.

Cost models need to be calibrated and validated for acceptance. The validation of a cost model is a process which usually includes the following steps:

(1) Calibrate the model to historical cost data.

(2) Estimate the cost of past completed projects.

(3) Compare the estimates with actual costs to demonstrate acceptable accuracy.
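The three steps can be sketched as follows. The generic CER, the history, and the simple averaged multiplier are all hypothetical; commercial models compute their calibration factors in their own proprietary ways:

```python
# Sketch of the calibrate-then-validate loop (hypothetical data and CER).
def generic_cer(weight):
    """Commercial model's general-purpose equation ($K vs weight)."""
    return 12.0 * weight

history = [(20, 310.0), (35, 520.0), (50, 760.0)]   # (weight, actual $K)

# (1) Calibrate: one multiplier mapping the generic CER onto history
#     (here a simple average of actual/predicted ratios).
factor = sum(act / generic_cer(w) for w, act in history) / len(history)

# (2)-(3) Estimate the completed projects and compare with actuals.
for w, act in history:
    est = factor * generic_cer(w)
    print(f"weight {w}: est {est:.0f} vs actual {act:.0f} "
          f"({(est / act - 1) * 100:+.1f}%)")
```

In practice the comparison in step (3) should also be run against completed projects that were not used in the calibration, so the accuracy claim is not circular.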

It is the combined use of the model with the estimating process that must achieve acceptable results to provide a basis for the validation of the model. It may also involve disclosure of how the model works so that the effects of scaling and heuristic analysis can be evaluated by management, customers or auditors.

Validation implies that interested parties have agreed that the model is a valid and acceptable estimating tool, and predicts costs within a reasonable range of accuracy.


Almost certainly, any data utilized will have to undergo a review and audit, so proper documentation is a must. Documentation should include:

(1) A record of all mathematical formulas and "goodness of fit" and other statistics.

(2) A record of adjustments to the original cost data and why those adjustments were made.

(3) A statement of work for the historical data; judgment factors should be identified.


(4) An audit trail for the data used for validation that will allow rederivation of the adjusted data and/or CER from the original source(s). This information must be identified and available upon request.

Any CER’s and data used in a cost model will need to be updated periodically and/or as agreed to with the PCO. The updating procedure should satisfy Truth in Negotiations Act (TINA) requirements by the time of agreement on price, or at another time as agreed upon by the parties.


Specifying an estimating methodology is an important early step in the estimating process. The basic estimating methodologies (analogy, catalog prices, extrapolation, factors/ratios, grassroots and parametric) are all data-driven. To use any of these methodologies, credible and timely data inputs are required. If data required for a specific approach is not available, then that methodology cannot be used.

Given that all methodologies are data-driven, it is critical that the estimator know the best data sources. Here are nine basic sources of data and a description of what specific data can be obtained from each source. Definitions of the differences between primary and secondary sources of data are provided. Finally, there is a review of the type of information that should be available from an accounting system, and a description of how to collect and analyze data is also given.

The information presented in this Handbook will help the collection and analysis of the two data types (primary and secondary) required to specify, and apply a parametric estimating methodology. Remember - any data needs to be available, reliable and convincing before an estimating methodology can be chosen that utilizes the foundation data. The two types of data are:

1. Primary data is obtained from the original source. Primary data is considered the best in quality, and ultimately the most useful.

2. Secondary data is derived (possibly "sanitized") from primary data. It is not obtained directly from the source. Since it was derived (actually changed) from the original data, it may be of lower overall quality and usefulness.

When preparing a cost estimate, look for all credible data sources. If at all possible, use primary sources of data.



1. Basic Accounting Records Primary

2. Cost Reports Either (Primary or Secondary)

3. Historical Databases Either

4. Functional Specialist Either

5. Other Organizations Either

6. Technical Databases Either

7. Other Information Systems Either

8. Contracts Secondary

9. Cost Proposals Secondary

The following normalization process description is not intended to be all inclusive.

Normalizing Cost Data

Making Units/Elements of Cost Consistent
Making Year of Economics Consistent

Normalizing The Size Data

Weight and Density Comparisons
Weight Contingency Application (weight reduction programs?)
Percent Electronics

Normalizing Products By Mission Application

Grouping Vehicles by Complexity
Calibrating Like Vehicles

Normalizing End Items For Homogeneity

Accounting for Absent Cost Items
Removing Inapplicable Cost Items

Normalizing Recurring/Non-Recurring Cost

Prime Contractors' Estimates
Time Phased Costs
Flight-Article Equivalent Units

Normalizing State-Of-Development Variables

Mission Uniqueness
Product Uniqueness

Normalizing Environments (Platform)

Manned Space
Unmanned Space
Aerospace
Shipboard
Commercial
Collecting the data to produce an estimate, and evaluating the data for reasonableness, is a very critical and time-consuming step of the estimating process.

When collecting the data needed to integrate cost, schedule, and technical information for an estimate, it is important to obtain not only cost information but also the associated technical and schedule information. The technical and schedule characteristics of programs are important because they drive cost. They provide the basis for the final cost.

For example, assume the cost of a completed program is available and a program engineer has been asked to relate it to the program being estimated. If the engineer is not provided with specific technical and schedule information that defines the completed program, the engineer will not be able to accurately compare the programs, nor respond to questions a cost estimator may have regarding the product being estimated vis-à-vis the historical data.


The bottom line is that the cost analysts and estimators are not solely concerned with cost data. They need to have technical and schedule data available in order to adjust, interpret, and lend credence to the cost data being used for estimating purposes.

A cost estimator has to know the standard sources where historical cost data exists. This knowledge comes from experience and from the so-called local experts who are available to answer key questions.

A cost analyst or estimator should be constantly searching out new sources of data. A new source might keep cost and technical data on some item of importance to the current estimate. Do not hesitate to ask anyone who might know or be able to help, since it is critical to have relevant cost, schedule and technical information at all times.

The chart below summarizes important points about data collection and evaluation.


∗ Very Critical, Time Consuming Step

∗ Need Actual Historical Cost, Schedule, and Technical Information

∗ Know Standard Sources

∗ Search Out New Sources

∗ Capture Historical Data

Developing a parametric model requires historical cost, schedule and technical data on a set of data points; generally, more data is better than less. It is necessary to know what trends exist, and to understand why the trends are as they are. Some models have been found to be based on the opinions of experts instead of historical data. Although the opinions of experts may be germane, sound historical data is preferable for model development, audit and analysis.

In addition to the historical data points, information on the cost, technical and quantity drivers needs to be examined to determine which does the best job of predicting cost. A statistical analysis on the data is accomplished to determine the strongest predictor(s) or driver(s) of cost, that is, the independent variable(s). (See further explanations in Chapter III).


When performing a statistical analysis, be sure that functional specialists can provide realistic and reliable values for the independent variables, given the stage of the program being estimated. To illustrate: suppose a statistical relationship with very strong correlation is developed and a potential cost driver is discovered, but no value for that independent variable is available for the program being estimated. The parametric model would then be of no help with the estimate.

Finally, knowledge of basic statistics, modeling skills and an understanding of analytical techniques is necessary to develop parametric estimating relationships. (See also Chapter III).

The above information is summarized on the chart below.

TYPE OF INFORMATION NEEDED TO DEVELOP A PARAMETRIC MODEL

* Reliable Historical Cost, Schedule, and Technical Data on a Set of Data Points

* WBS, WBS Dictionary & Product Tree

* Analysis to Determine Significant Cost Drivers

* Knowledge of Basic Statistics, Modeling Skills and CER Development

* Analysis Techniques

To use a parametric model, the model needs to be well documented. The documentation should include the source of the data used to derive the parameters, and the size and range of the database. It should also describe how the parameters were derived, what the model's limitations are, the time frame of the database, and how well the parametric tool/model estimates its own database. All of this information should be located in the model's source document and should be read before deciding to use the model in an estimate. By reading the source document, the strengths and weaknesses of the parametric model can be assessed and a determination can be made about its appropriateness for use.

A statistic called the Mean Absolute Deviation (MAD) can be developed for cost models. It is a simple statistic that assesses how well a parametric model estimates its own database. For example, a MAD of 20% means that the parametric equation(s) estimate their own database within plus or minus 20%, on average. This is an important statistic to know, because if a CER does not estimate its own database well, then its credibility with data points outside its database would be questionable.
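A sketch of the MAD computation, using illustrative actuals and CER predictions:

```python
# Mean Absolute Deviation (as a percentage) of a CER against its own
# database. The actuals and predictions below are illustrative only.
actuals     = [100.0, 240.0, 410.0, 600.0]
predictions = [ 90.0, 260.0, 430.0, 540.0]

# Average of the absolute percent deviations, point by point.
mad = sum(abs(p - a) / a for a, p in zip(actuals, predictions)) / len(actuals)
print(f"MAD = {mad * 100:.1f}%")
```

Here each prediction is compared with its actual, and the absolute percent errors are averaged; a model that misses its own database badly should not be trusted outside it.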

Successful use of a parametric model requires realistic, most-likely-range estimates for the independent variable values. Sometimes functional experts are not sure what the real performance requirements are for a new program. In such cases, a most-likely range will provide values that reflect an assessment of the associated uncertainties or unknowns (a Risk Analysis).

Again, the top functional experts who know the program should identify and estimate the range of cost driving characteristics. They should also confirm the applicability of the specific parametric cost model from a technical perspective.

The information needed to use a parametric data model is listed on the chart that follows.


Source of data used to derive the parametric model
Size of database
Time frame of database
Range of database
How parameters were derived
Limitations spelled out
How well the parametric model estimates its own database
Consistent and well defined WBS dictionary
Realistic estimates of most likely range for independent variable values
Top functional experts knowledgeable about the program being estimated:

To identify most-likely range for cost drivers

To confirm applicability of parametric from technical perspective

A parametric estimating methodology can be used at any stage of a program's life cycle. For example, a general parametric model may be utilized in the early, conceptual phase of a program, although the same model could be inappropriate for the follow-on production phase. However, a detailed parametric model used in production estimating, based on the experience and actual historical data of two or three previous production lots, could yield excellent validity.

Hence, a parametric methodology can be used at any stage of a program's life cycle as long as the parametric model is based on the level and type of information available at that stage.

The methodology can be used for any WBS element, not just hardware and software. Parametrics can be successfully applied to Systems Engineering/Program Management, Test, Training and Data, etc., provided that historical data points are available to develop solid statistical relationships and that reliable estimates of the independent variables can be obtained.


When a parametric model is applied to values outside its database range, the credibility of the resulting estimate becomes questionable. In cost estimating, one rarely finds large, directly applicable databases, and the source document has to be evaluated to determine if the parametric can be applied to the current estimate. However, it is possible to develop parametric tools that relate cost to generic complexity values or tables. An experienced modeler can relate such generalized parameters to the task at hand and obtain a good cost model, but a parametric model always needs to make sense for the estimate being prepared.

Additionally, models based on expert opinion should be validated before use. This is accomplished by first obtaining some actual historical data points (technical, schedule, and cost) on completed programs similar to the current program. With this data in hand, apply the model to the actual technical and schedule information and see how well it predicts the known costs. If the model estimates the actual costs within an acceptable margin of error, the model can be considered validated for programs similar to the historical data points. Careful validation will help ensure that cost models are appropriately used.

Many times a parametric model needs to be adjusted when the new system has cost drivers and/or requirements that are not reflected in the parametric's database. In some of these cases, a combination of the parametric methodology with an approach taken from the analogy methodology can be used to develop an estimate. This is accomplished by adjusting the results of the parametric approach with scaling or complexity factors that reflect any unique requirements.

For example, parametrics and analogy approaches could be effectively combined to estimate the costs of a new program for which technology advancement is anticipated. First, either develop or use an existing parametric model, based on similar data points, to estimate the cost of the program, without technology advancement.

Second, address the technology advancement by consulting with functional experts to obtain a most-likely range for a relative scaling factor that will reflect any advancements in technology. The relative scaling or complexity factor is applied to the result of the parametric estimate, and adjusts it for the impact of technology advancement. This is a solid and valid estimating approach for assessing the cost impacts of technology advancement, or other "complexity" differences.
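The combined approach can be sketched as follows. The baseline estimate and the expert-provided factor range are illustrative values, not drawn from any real program:

```python
# Adjust a parametric estimate for technology advancement using a
# most-likely range of complexity factors from functional experts.
parametric_estimate = 4_200_000.0         # baseline, no tech advancement
low, likely, high = 1.05, 1.15, 1.30      # expert-provided factor range

# Applying the range yields a range of adjusted estimates rather than
# a single point, preserving the experts' uncertainty.
for label, f in [("low", low), ("likely", likely), ("high", high)]:
    print(f"{label}: ${parametric_estimate * f:,.0f}")
```

Carrying the low/likely/high results forward, rather than collapsing them to one number, keeps the technology-advancement uncertainty visible in the estimate.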

In such cases, the parametric model has to be adjusted so that it makes sense vis-à-vis the current estimate.

If there exist no realistic estimates for the independent variable values for the product or effort being estimated, then parametric models should not be used. The level of uncertainty in the estimate will be considerable, and the validity of the estimate questionable. In cases such as this, parametrics can only be used as a benchmark.

As stated previously, it is very difficult for functional specialists to provide a single point estimate. Moreover, a single point estimate does not reflect the uncertainty inherent in any functional expert's opinions. Consider requesting most likely range values rather than point estimates, if possible.

Even after a parametric analysis has been completed it is prudent to follow it with a risk analysis evaluation of the high risk areas and key cost drivers.

The chart below displays these points.

PITFALLS TO AVOID IN THE USE OF A PARAMETRIC ESTIMATE

* Using the parametric model outside its database range


* Using a parametric model without adjustment when new system requirements are not reflected in the parametric's database

* Using a parametric model without access to realistic estimates of the independent variables' values for product/effort being estimated

* Requesting point estimates for independent variable values instead of a most-likely range, where a range is possible and practical


Illustration 1

You plan to do a parametric estimate of a system using some company history. The system under investigation is similar to a system built several years ago. The two systems compare as follows:

Parameter             Historical System            Prospective System

Date of Fabrication   Jul 89 - Jun 91              Jul 95 - Dec 95

Production Quantity   500                          750

Size - Weight         22 lb. external case         20 lb. external case
                      5 lb. int. chassis           5 lb. int. chassis
                      8 lb. of elec parts          10 lb. elec parts

Volume                1 cu ft - roughly cubical    .75 cu ft - rect. solid
                      12.1 x 11.5 x 12.5           8 x 10 x 16.2

Other Prog Features   Manual of oper incl.         Minor chgs to manual
                      5% elec parts as spares

Normalization: In this instance, we would need to adjust for inflation, the quantity difference, the production rate effect, and the added elements in the original program (the spare parts and the manual). The analyst should be careful in normalizing these data. General inflation factors are almost certainly not appropriate in most situations. Ideally, the analyst will have a good index of costs specific to the industry and will use labor cost adjustments specific to his or her company. The quantity and rate adjustments will have to consider the quantity effects on the company's vendors and the ratio of overhead and setup to the total production cost. Likewise, for rate factors, each labor element will have to be examined to determine how strongly the rate affects labor costs. On the other hand, the physical parameters do not suggest that significant adjustments or normalizations are required.

The first-order normalization of the historic data would consist of:

1) Material escalation using industry or company material cost history.

2) Labor escalation using company history.

3) Material quantity price breaks using company history.

4) Possible production rate effects on touch labor (if any) and unit overhead costs.

Because both cases are single lot batches, and are within a factor of two in quantity, only a small learning curve or production rate adjustment is likely to be required.
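Under illustrative escalation and price-break assumptions, the first-order normalization above can be sketched as:

```python
# First-order normalization of the historical cost in Illustration 1:
# escalate material and labor separately, then apply a quantity
# price-break factor. All rates and factors below are illustrative.
hist_material = 400_000.0    # material cost in 1989-91 dollars
hist_labor    = 600_000.0    # labor cost in 1989-91 dollars

material_escalation = 1.18   # assumed industry material index, 1990 -> 1995
labor_escalation    = 1.22   # assumed company labor-rate history
quantity_factor     = 0.95   # assumed price break at 750 vs 500 units

normalized = (hist_material * material_escalation * quantity_factor
              + hist_labor * labor_escalation)
print(f"Normalized cost basis: ${normalized:,.0f}")
```

The point of the separation is that material and labor rarely escalate at the same rate, and the quantity price break applies only to the material portion here; a real normalization would use the company's own indices for both.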

Illustration 2

You are considering building some equipment for the first time. Relevant labor effort history exists for similar equipment built some time ago, but reliable data is only available from the third lot onward, beginning at unit 250. That history indicates a cost improvement curve of 95% from the fourth lot on. This history is considered the best available.

Normalization here requires two steps. Unless you believe the early lot cost improvement curve is the same as the historical lot improvement curve of the later lots (unlikely), you will need to identify cost improvement curves which may be applicable. These data points will have to be normalized to some common condition and the resulting improvement curve applied to the case at hand. Suppose the relevant history is as follows:

Case Date Quan. Lots Rate of Prod. Improvement Curve

Case A 1985-87 1000 6 5 per day 90%

Case B 1990 400 2 2 per day 83%



