On the use of LRIC models in price regulation








J. Confraria, J. Noronha, R. Vala, A. Amante Instituto das Comunicações de Portugal

DRAFT - Do not quote without permission



The use of cost models in price regulation has increased with the liberalization process, and there seems to be an increasing use of long run incremental cost models. For instance, the European Commission has issued a Recommendation for the use of models based on Long Run Incremental Costs (LRIC) for the purposes of interconnection price regulation. In the area of universal service, the use of LRIC for the purposes of calculating the net costs of universal service obligations is also recommended. Given the need for consistency of the cost accounting systems to be used for the purposes of price regulation, it is possible that the use of LRIC models may well be extended to areas like retail pricing of voice telephony and leased lines.

The purpose of this paper is to discuss the theoretical basis of LRIC models, the rationale for their use in economic regulation compared with other methods currently used, difficulties and shortcomings in implementing these models in regulatory practice, as well as the main theoretical shortcomings given the current state of the art.

One of the most important reasons for the implementation of price regulation based on the use of LRIC models is related to the need to send the appropriate signals to the market, to foster efficient entry, adequate investments and innovation.

This approach is compared with other approaches that have also been used, such as the Current and Historical Costs methodologies, as well as benchmarking. Theoretically, an approach based on long run marginal costs may be considered well suited to the main current challenges in regulatory practice, dealing with issues such as asymmetric information and

1 Views expressed in this paper should be considered as individual views of the authors and should not be taken as representing the views of the Instituto das Comunicações de Portugal.


moral hazard, constituting a powerful instrument in analyzing incumbents' costs (and efficiency), investment decisions and network designs. Moreover, in principle, regulatory decisions based on long run marginal costs are consistent with the role of the regulator as (at least in part) a replacement for competition.

However, it is argued that there are also some empirical and theoretical shortcomings to approaches based on LRIC models. For instance, in the first group, difficulties in obtaining detailed and accurate data related to customer location, geology and input prices should not be underestimated. Moreover, the application of such principles should not be independent of the institutional evolution of each country. Concerning theoretical issues, it is suggested that several theoretical developments might improve the accuracy and the use of these models in price regulation, in the following areas: a) network evolution and the dynamics to be built into future models (as current models are basically static in nature); b) analysis of non-network costs, currently excluded from modeling; c) analysis of the (optimal) financial structure of the firm – in this case, as in so many areas of regulatory economics, the conspicuous absence of financial issues in modeling is certainly a barrier to the use of some regulatory models in regulatory practice.


Price regulation and the determination of firms' costs

The provision of telecommunications services was often considered a natural monopoly and was in many instances a legal monopoly. Monopolies are usually associated with economic inefficiencies. In static terms, monopolists generate inefficiencies by charging higher than competitive prices (i.e. excessive prices) and producing the corresponding lower than competitive level of service or production. This leads to abnormal or economic profit. In dynamic terms, monopolists are said to face poor incentives to increase productive efficiency, increase quality, and, arguably, to invest and innovate. This leads to higher opportunity costs borne by society in the production of the demanded level of service or production. These economic inefficiencies arise because of the abuse of market power enjoyed in the absence of competition, and lead to social welfare losses.

Thus, from an economic point of view, the regulation of natural (or legal) monopolies implies the control of the monopolist's market power to eliminate inefficiencies and maximize social welfare; regulators are believed to have to replicate a competitive market outcome by forcing monopolists' prices to be closer to costs (i.e. eliminating economic profit), as the perfect competition model predicts, and, consequently, to satisfy the level of demand that would exist in a competitive market.

Several methods were and are used to eliminate monopoly inefficiencies. One of the most popular forms of monopoly regulation was the method known as “Rate-of-Return Regulation” (RoR), which works by imposing a maximum rate-of-return on the monopolist's investments. This method of regulation suffers from several shortcomings such as incentives to over-invest (the so-called Averch-Johnson effect) and cost padding.

Another popular method for regulating monopolies is price regulation. The objective of price regulation is to drive prices to the levels that would exist under competition. Since the perfect competition model predicts that prices will be geared to costs (including a “normal” return on investment), price regulation should mean the orientation of prices towards costs, which eliminates economic profit.

Price regulation can assume various forms, from administratively determined prices, which conceptually are very similar to Rate-of-Return Regulation and suffer from the same drawbacks, to RPI-X price-caps, which create incentives to increase efficiency, adjust prices to demand and decrease the regulatory burden.

In all of these methods of regulating monopoly power, information concerning the firms' costs is necessary. For instance, costs are necessary in order to determine and monitor the rate of return, to impose a certain cost-based price level or to determine the evolution of productive efficiency in setting price-caps.

As a result of technological advances, most telecommunications experts are now of the opinion that monopoly conditions are, theoretically, “only likely to arise in specific segments”2. At the same time, changing political factors led to market liberalization and to the introduction of competition in telecommunications markets in many countries, notably in European Union member states.

Liberalization requires the “interconnection of competing networks and service providers to ensure that “any-to-any” service is provided in an economically efficient manner. Such interconnection implies the establishment of principles for determining the charges that must be levied by one network operator or service provider for the interconnect services demanded, i.e. the interconnect charge”3. The regulatory response to this has been to impose cost-oriented interconnection.

Furthermore, where networks are owned by organizations that are competing against firms needing to interconnect with the same network, there is a risk that many of the potential welfare benefits of competition will be delayed or lost. One of the ways to deal with this, besides imposing cost-oriented interconnection, is to forbid cross-subsidization between liberalized and reserved areas and/or between competitive and non-competitive markets, as well as to prohibit predatory pricing.

Thirdly, in a liberalized market, the issue of costing and financing of universal service obligations becomes pressing.

Besides the retail regulation mentioned above, for each of these three tasks – cost-oriented interconnection, detection of cross-subsidy and predatory pricing, and universal service costing and financing – regulators need cost information.


Before FL LRIC models: HC FDC

One of the most popular cost standards among regulators was, and probably still is, the Fully Distributed Historical Cost standard (HC FDC).

The HC FDC standard

This cost standard allocates all of an organization’s costs to services and products. FDC pricing requires that each output generate enough revenue to cover its attributed and allocated cost.

The most widely used method for measuring costs is known as the top-down method. This method is based on the information available in the operator's financial accounts. That information is then traced to individual cost centers and cost objects.

Financial accounts are generally based on historical cost, under which assets are presented on the balance sheet at their value at the time of acquisition, generally represented by the purchase cost. For cost accounting purposes, the firm's costs are divided into direct costs and joint and common costs.


Direct costs have an unambiguous cause-effect relationship with costing objects. Joint costs are unambiguously caused by a group of products, while common costs are shared by the entire portfolio of products or services. Joint and common costs can be divided into directly, indirectly and arbitrarily attributable costs. Directly attributable costs are attributed through the analysis of the origin of the costs themselves. Indirectly attributable costs are allocated on a measured non-arbitrary basis, for example by establishing an indirect linkage to another cost category or group of cost categories for which a direct assignment or allocation is possible. Those joint and common costs for which no causal relationship can be found are thus arbitrarily allocated. Although allocations can be done in many ways, there are three approaches that have been used most frequently4,5:


− The relative output method (ROM), which assigns common cost to each product in proportion to its share of total output;

− The attributable cost method (ACM), which assigns common cost in proportion to the costs that can be unambiguously attributed to a certain product;

− The gross revenue method (GRM), which assigns common costs in proportion to the revenues generated by all the outputs.
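The three rules can be illustrated numerically. A minimal sketch in Python, with invented figures for two hypothetical services (the services and numbers are for illustration only, not taken from this paper):

```python
# Hypothetical two-service example of the three common-cost allocation rules.
def allocate_common_cost(common_cost, weights):
    """Allocate common_cost across services in proportion to the given weights."""
    total = sum(weights.values())
    return {s: common_cost * w / total for s, w in weights.items()}

outputs      = {"voice": 800, "leased": 200}    # output quantities (ROM weights)
attributable = {"voice": 300, "leased": 100}    # attributable costs (ACM weights)
revenues     = {"voice": 1000, "leased": 200}   # gross revenues (GRM weights)
common_cost  = 400

rom = allocate_common_cost(common_cost, outputs)       # relative output method
acm = allocate_common_cost(common_cost, attributable)  # attributable cost method
grm = allocate_common_cost(common_cost, revenues)      # gross revenue method
```

With these figures, the same 400 of common cost lands on the "voice" service as 320 under ROM, 300 under ACM and about 333 under GRM: three defensible rules, three different cost figures, which is precisely the arbitrariness objection discussed later in this paper.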

The origins of FDC pricing

According to economic theory, a multi-product utility subject to a break-even constraint achieves maximum efficiency and social welfare by applying Ramsey prices. Ramsey prices allow for the coverage of all costs by marking up the marginal cost of each service. This mark-up is inversely related to the price elasticity of demand for each service. Ramsey prices are one example of prices derived through the maximization of the weighted surplus of consumers and producers, and are thus efficient. In practice, Ramsey pricing has not been used because this approach implies a very detailed knowledge of each service's demand function, and would also lead to unfair income distribution.
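The inverse elasticity rule behind Ramsey prices can be stated compactly (standard textbook notation, not taken from this paper): for each service $i$ with price $p_i$, marginal cost $MC_i$ and own-price elasticity of demand $\varepsilon_i$,

$$\frac{p_i - MC_i}{p_i} = \frac{\lambda}{|\varepsilon_i|},$$

where the constant $\lambda$ is set so that total revenues just cover total costs (the break-even constraint). Services with less elastic demand thus bear proportionally larger mark-ups over marginal cost.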

The alternative was to allocate common costs according to an approach known as "cost-based pricing", the most popular form of which is FDC. Cost-based pricing requires that each service be assigned a portion of the common cost and that its revenues equal the cost figure given by the sum of its attributable cost and its share of the common costs.

4 See Brown (1986), p. 45. 5 See Mitchell et al (1991), p.137.


Cost-based pricing stems from two concepts. The first is the link between fairness and cost causation: that is, a consumer should not pay a price higher than the cost (including capital return) caused by the service consumed. The second concept is the efficiency of competition: that is, prices charged by the incumbent should be the same as those that would exist under competition, i.e. cost-based.

Besides these practical and conceptual reasons for the development of cost-based pricing, the history of accountancy might also explain the origins of FDC pricing. Ever since the first business surveys, the most typical pricing pattern reported by business firms has been the method designated “full-cost pricing”6 (Arthur Andersen reached the same conclusion concerning telecom operators in 19947). The likely reason for its ubiquitous use is that, for historical reasons, the collection of cost information has been driven by the need to monitor and report historical financial performance at the request of external stakeholders. FDC is the only cost standard that is capable of objective assessment and therefore independent verification. The use of this standard was often also incorporated into pricing decisions because if a firm could not recover its total historical costs in its prices this would adversely affect reported profitability8.

FDC and regulation

FDC was adopted as a regulatory rule in the US and in Europe.

Fully distributed costs “has been used in U.S. regulatory ratemaking for decades. The use of FDC methods reflects a (…) concern in the regulation of the domestic telephone industry: cross-subsidy between users of different services. (…) The FCC undertook a series of investigations lasting from 1964 until the late 1970’s concerning the impacts of different FDC methods (…) [and] consistently took the view that FDC methods provided an appropriate method to test for the existence of cross subsidy. This concern reflects the desire to avoid below-cost pricing, but it focuses on the question of whether or not customers in monopoly markets were subsidizing the low rate”9.

In Europe FDC has also been considered a regulatory rule. For instance, in the “Guidelines on the application of EEC competition rules in the telecommunications sector”10, the European Commission states that, in order to enable the Commission to ascertain whether cross-subsidization exists, telecom operators should have transparent accounting. This transparency can be provided by an accounting system which ensures the fully proportionate distribution of all costs between reserved and non-reserved activities. This system should also enable the production of recorded figures which can be verified by accountants.

6 Clark (1961).
7 Arthur Andersen (October 1994), p. 10.
8 Arthur Andersen (October 1994), pp. 55-56.
9 FCC (1972).

In the context of the ONP Directives, FDC is also considered a regulatory rule. The issues FDC is supposed to deal with are excessive pricing and the detection of cross-subsidy. For instance, Directive 92/44/EEC, on the application of open network provision to leased lines, makes a reference to the principle of fully distributed costing, stating that this is an example of a cost system that can be verified by accounting experts ensuring the production of recorded figures, and imposes the relative cost attribution method for common cost allocation11. The Directives on Voice Telephony12 and postal services13 also recommend FDC as a regulatory norm. The most recent of these Directives was published in 1998.

The use of FDC by regulatory authorities around the world for the purpose of pricing and cost recovery is explained by the following factors:

− FDC allows for the recovery of all (historical) costs, making it possible for the firm to obtain positive (historical) returns;

− It is easy to implement, since all information already exists or is easy to produce by the operators' internal information systems;

− There is a strong connection between financial accounts and internal accounts, making it easy (for regulators) to audit the latter and guaranteeing an upper bound on the overall level of cost of a diversified operator;

− When cost causation is carefully investigated and established some of the FDC disadvantages are eliminated, namely arbitrariness;

− The alternatives are too complex or are not suited for practical use and are not readily available. For instance, “the FCC retained its traditional FDC allocations methodology, while fully recognizing that its pursuit of a suitable FDC formula in Docket 18128 did not produce lawful rates. The procedures were so unwieldy and

11 Directive 92/44/EEC, of 5 June 1992, Recitals 18 and 19, and Article 10.
12 Directive 95/62/EC, of 13 December 1995.


complex, and the data they produced so massive and incomprehensible, that the Commission was unable to prescribe lawful rates to replace the rates found unlawful. Nevertheless, the FCC again rejected the economically preferable alternative of a marginal cost methodology. Its principal ground appears to be that marginal cost standards, however attractive in theory, are difficult to use for regulatory purposes because marginal costs are not readily measured by conventional accounting methods”.14

Limitations of FDC Pricing15, 16

FDC has been criticized by economists17. Common objections to FDC are the following:

− Arbitrariness.

In any given situation the adoption of each of the three methods mentioned above (ROM, ACM and GRM) implies a different common cost allocation and necessarily different prices, and these methods offer no criteria to determine which of the final results is the best18,19.


Top-down HC FDC, in practice, also involves numerous conventions regarding the categorization of assets, depreciation rates, valuation of assets, construction work in progress, accruals, capitalization or expensing of costs, etc., which are arbitrary and sometimes decided according to fiscal planning objectives rather than the economics of the business. It cannot purport to identify those costs which are caused by a product or service, and this is fundamental to economic cost determination.

13 European Directive 97/67/EC, concerning the common rules for the development of an internal market for community postal services and the improvement of the quality of service.

14 See Hillman et al (1989), p. 20.
15 See Berg et al. (1988), pp. 93-102.
16 See Mitchell et al (1991).

17 For instance, Baumol states that: “The distinguished feature of FDC is that it cannot pretend to constitute an approximation to anything. The reasonableness of the basis of allocation selected makes absolutely no difference except to the success of the advocates of the figures in deluding others (and perhaps themselves) about the defensibility of the numbers. There just can be no excuse for continued use of such an essentially random or, rather, fully manipulable calculation process as a basis for vital economic decision by regulators”. Baumol et al. (1987).

18 Leite et al. (October 1993).

19 The 1971 AT&T study, mentioned above, explored no fewer than seven FDC methods, which varied in detail but can be grouped under the three allocation methods mentioned above. Depending on the method used, there were wide variations in the FDC-based rates of return calculated for each service. FDC rates of return for Wide Area Telecommunication Service (WATS) varied from 9.4%, when calculated on a ROM basis, to 17.9% on an ACM basis. The rate of return for TELPAK, a bulk private line service, varied from 5.4% to 10.3%, depending on the method of allocation used. Similar variations were calculated for other services. FCC (March 16, 1972), pp. 1-13.


− Inefficiency.

Economists have long argued that the adoption of an FDC standard does not achieve efficient resource allocation because it is based on average rather than incremental costs. Thus efficiency and maximum social welfare are not assured. Welfare can be increased if prices move in the direction of Ramsey prices, while maintaining zero profit.

The fact that HC FDC is based on historical cost also implies inefficiencies. Since historical costs are based on past prices, service costs will not represent the true economic value of the resources used. This will distort any management decision based on HC FDC.

Another objection is that HC FDC is based upon existing physical network engineering capacity and existing business processes and work practices, and includes large elements of cost which are “sunk” or avoidable in cost determination, including productive inefficiencies20. Since the final cost values will not represent the true economic cost of the resources, they will lead to decisions that preclude efficient outcomes.

− Attempts to determine cross-subsidization are meaningless.

When FDC pricing is used for testing for cross-subsidy, the concept of attributable costs is normally used, meaning the sum of marginal cost and service-specific fixed cost. First, “attributable cost” floors are established, and then those costs are marked up to achieve the required amount of revenue. But attributable cost is not adequate for these purposes, because mark-ups should not be based upon product-specific costs but on marginal cost, and because attributable cost is a good approximation of average incremental cost only if marginal cost is nearly constant.21

− May include higher than competitive net return.

The allowance for net return, which is necessarily included in full cost, is open to the suspicion of being, in actual use, arbitrary and higher than the competitive level. Excessive profit can result from the need to specify which quantities are to be used in determining allocation criteria. “Until the 1970’s it was common to use recent test period data to get the cost, output and revenue data to be used in the allocations. The prices set by FDC methods were unlikely to be consistent with the quantities used in the allocation. The effect is that since the FDC prices were set to break even on period t quantities and costs, the firm will have non-zero profits in t+1 as demands and, hence, outputs, adjust to new prices.”22

− FDC pricing regulation as Rate of Return Regulation.

The previous point suggests that FDC pricing as a regulatory rule is similar to Rate of Return regulation and, as such, subject to all the inefficiencies attributed to this particular form of regulation, namely the Averch-Johnson effect, cost padding and poor efficiency incentives.

− Inefficient entry.

In the presence of competition, fully distributed cost pricing tends to create an inefficient amount of entry and invites inefficient bypass23. Inefficient new entrants are encouraged to enter the market as long as their costs are lower than the fully allocated cost of the incumbent, even if their marginal costs are higher than the incumbent's marginal cost.

− Perverse reporting and implementability incentives.

Since the regulator depends on the firm for cost information, incentives to misreport arise. For example, the firm will have an incentive to maximize the portion of costs allocable to the core (non-competitive) services.24 But the regulated company is not the only entity subject to perverse incentives. “Referring to the FCC’s cost allocation rules, its aim again is to reduce the regulated base and to allocate the largest possible cost and risk to the unregulated use of joint and common facilities. In all, the rules suggest a present potential for regulators to manipulate costs at least equal to that of the regulated firms”25.

21 Bradley et al (1999).
22 Brown (1986), p. 46.
23 Laffont et al (2000), p. 144.
24 Hillman et al (1989), p. 12.
25 Hillman et al (1989), p. 16.



Beyond HC FDC

Several approaches have been developed in order to tackle FDC limitations. Within the framework of full cost pricing, an axiomatic approach was developed to deal with the arbitrariness critique, and cooperative game theory was used to determine subsidy-free allocations.

Besides the two theoretical approaches mentioned, other practical approaches were developed. The first is Current Cost Accounting, which deals with the limitations associated with the use of historical cost; the second, Activity Based Costing, has been developed to reduce the level of arbitrarily allocated costs. Three other costing standards have been developed in order to substitute for FDC. The first two, Embedded Direct Cost (EDC) and stand-alone cost (SAC), will be presented in this chapter. The other, LRIC, will be the subject of the next chapters.

The Cooperative Game Theory Approach to common cost allocation

Since the early 1970's, and in part to solve FDC’s problems, some authors have proposed the use of cooperative game theory to determine the share of common cost allocated to each service. The objective here is to allocate responsibility for common costs among services so as to avoid cross-subsidy. This approach has come into fairly frequent use in policy discussions26,27,28.


Comparing the results of FDC pricing to the cross-subsidy conditions, it is possible to prove that FDC pricing rules are all in the core of the cost game, as long as the cost function takes the separable form with a fixed cost (i.e. all common cost is fixed) and as long as FDC prices are demand compatible (i.e. when the quantities used to distribute common cost are computed taking into consideration the demand effects of the resulting FDC prices).29
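With a separable cost function C(q) = F + c1q1 + … + cnqn (all common cost fixed), the cross-subsidy conditions say that every non-empty subset of services must cover its incremental cost and pay no more than its stand-alone cost. The test can be sketched in Python; the function and the figures below are illustrative only:

```python
from itertools import combinations

def subsidy_free(revenues, unit_costs, quantities, fixed_cost):
    """Core (subsidy-free) test for a separable cost function
    C(q) = F + sum(c_i * q_i), with all common cost fixed.
    Every non-empty subset S must satisfy:
        incremental cost(S) <= revenue(S) <= stand-alone cost(S)."""
    services = list(revenues)
    for size in range(1, len(services) + 1):
        for subset in combinations(services, size):
            rev = sum(revenues[s] for s in subset)
            inc = sum(unit_costs[s] * quantities[s] for s in subset)
            sac = inc + fixed_cost  # the whole fixed cost is common
            if not (inc <= rev <= sac):
                return False
    return True
```

For example, with unit costs of 3 and 1, quantities of 100 each and a fixed cost of 400, revenues of 500 and 300 (which exactly break even) pass the test; shifting revenue to 750 and 50 fails it, because the second service no longer covers its incremental cost.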

The axiomatic approach to common cost allocation

Allocating common costs may also be done through an axiomatic approach. This is a third form of cost-based pricing (after FDC and game theory). This approach30, which was proposed in the beginning of the 80's, starts by defining the desirable properties that a cost allocating system should have. The axioms proposed are the following: cost sharing, independence of scale, consistency, positivity, additivity and correlation31,32.

26 See Brown (1986), pp. 44-45 and Leite et al (1993), pp. 303-306.
27 See Brown (1986), pp. 50-54.
28 See Mitchell et al (1991).
29 See Brown (1986), p. 54.

This will not necessarily lead to efficient pricing. In fact, an important result is that as long as the cost function takes the separable form with a fixed cost, then FDC ACM is the only price mechanism satisfying the six axioms.33

Current Cost Accounting (CCA)

By valuing assets at their cost at the time of purchase, HCA assumes that the purchasing power of money is stable and that the prices of goods and services do not change. The result is that its use as a management and regulatory tool may distort operating results, hurt the firm's ability to survive (capital maintenance), distort the evaluation of management's performance and lead to wrong investment and pricing decisions. To remedy the problems associated with the use of historical cost accounting, some authors advocate the use of current cost accounting (CCA).

Current cost is the cost today of purchasing an asset currently held. This requires that the depreciation charges included in the operating costs be calculated on the basis of current valuations of equivalent assets, and consequently the reporting on the capital employed also needs to be on a current cost basis.

There are several concepts of current cost. It is possible to define at least three different concepts34:

− Current replacement costs, the cost needed to acquire the same asset in its existing condition. “Replacement cost can simply be the cost today of replacing the asset with an identical one. However, when technology is changing rapidly, the existing asset may no longer be replaceable (e.g. it is no longer manufactured). In this case it is necessary to calculate the modern equivalent asset (“MEA”) value which is the value of an asset with the same level of capacity and functionality as the existing asset”35;

− Net realizable value (NRV). NRV is the amount that the firm would receive upon selling the assets in possession;

30 See Brown (1986), pp. 57-59.
31 See Brown (1986), pp. 44-45 and Leite et al (1993), pp. 303-306.
32 See Mitchell et al (1991).
33 See Brown (1986), pp. 58-59.
34 See Zin (no date).


− Net present value (NPV) of the future cash flows generated by the asset in question.

At the same time, there are two measurement methods: constant and nominal prices.

Another option related to CCA is the choice between financial capital maintenance (FCM), where the accumulated depreciation at the end of the asset's life equals its value at the beginning, and operational capital maintenance (OCM), where the accumulated depreciation at the end of the asset's life allows the substitution of the depreciated asset for an asset that ensures the original operating capacity.

The combination of the three current cost concepts, with the two measurement methods and with the capital maintenance options leads to many different current cost methodologies. Usually, “current cost is defined by using the value to the business (VTB) method: choose replacement cost or recoverable amount, whichever is the lower. Replacement cost is defined as the cheapest possible replacement, using a modern equivalent asset, and recoverable amount is the higher of net realizable value (NRV) or present value of the additional cash flows resulting from retention of the asset (PV)”36.
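The value-to-the-business rule just quoted reduces to a simple min/max computation. A sketch in Python, with hypothetical asset figures chosen for illustration:

```python
def value_to_business(replacement_cost, nrv, pv):
    """VTB = lower of replacement cost and recoverable amount,
    where recoverable amount = higher of net realizable value (NRV)
    and present value of retained cash flows (PV)."""
    recoverable_amount = max(nrv, pv)
    return min(replacement_cost, recoverable_amount)

# An asset worth retaining: keeping it (PV = 100) beats selling it (NRV = 60),
# and both are below the cost of replacement (120), so VTB = 100.
vtb = value_to_business(replacement_cost=120, nrv=60, pv=100)
```

If replacement were cheaper than the recoverable amount, replacement cost would cap the value instead.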

The use of CCA was advocated in several countries in the 1980’s, especially those that suffered high and persistent inflation. In Portugal the obligation to re-evaluate assets was determined by law, and price indexes were published every year for every category of asset. CCA accounting standards were introduced in the UK but were made non-mandatory a few years later for lack of compliance37.

In 1986, in the UK, the Byatt Report38 supported the introduction of a CCA method in the context of regulating state-owned enterprises. “It was argued that current cost represented the cost that would be faced by a hypothetical new entrant to the industry”39.

In 1998, the European Commission recommended CCA methodology for interconnection pricing40.

The use of current cost is liable to several criticisms, namely the fact that asset valuation is subjective and difficult to audit, and that the procedures are complex and difficult to administer41. CCA becomes as arbitrary as HC FDC.

36 Whittington (1994).
37 Whittington (1994).
38 Byatt (1986).
39 Whittington (1994).
40 European Commission (8 April 1998).
41 Whittington (1994).


Another problem results from the use of CCA in privatized utilities. In privatized utilities, “stock market values have typically been below the replacement cost value of assets per share, and were so at the time of flotation. This means that the effective VTB can be regarded as being determined by the share price, rather than the cost of replacing assets. [There are several] options for dealing with this problem (…) each is likely to be unpopular with either the shareholder or the customer, depending upon who is favored by it. Even if this problem were solved, there would be an endemic problem of measuring the ‘goodwill’ that the company has created by efficiency improvements and innovation, and that should be awarded a return by the regulator”42.

Activity Based Costing (ABC)

A further attempt to partially respond to FDC limitations is Activity Based Costing (ABC), which has been developed in the US since 198843. To some extent the criticisms directed at FDC mentioned above can be overcome if greater attempts are made to ensure cost-causative attribution (either directly or indirectly) to services and to reduce the arbitrariness of any “general” allocations of residual joint and common costs.

Activity based costing is a method designed to identify and establish cause-effect relationships between costs and cost objects. The first step is to determine the firm's cost elements, which may be different from the cost categories of the organized accounts. The second step is to list all activities performed in the firm. The third step is to determine the factors that make activities use resources, that is, to determine the cost drivers. Since activities are performed in order to produce a final service, the fourth step consists in determining the factors that force activities to be performed, that is, the activity drivers. Finally, costs are allocated to the activities according to the cost drivers, and activity costs are allocated to cost objects according to the activity drivers.
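The steps above amount to a two-stage allocation: cost elements flow to activities through cost drivers, and activity costs flow to cost objects through activity drivers. A minimal sketch in Python; the cost elements, activities, services and driver quantities are all invented for illustration:

```python
def allocate(cost_pools, drivers):
    """Distribute each cost pool over its destinations in proportion
    to the driver quantities (drivers[pool] maps destination -> quantity)."""
    result = {}
    for pool, amount in cost_pools.items():
        total = sum(drivers[pool].values())
        for destination, quantity in drivers[pool].items():
            result[destination] = result.get(destination, 0.0) + amount * quantity / total
    return result

# Stage 1: cost elements -> activities, via cost drivers (e.g. labour hours, kWh).
cost_elements = {"salaries": 600, "power": 200}
cost_drivers = {
    "salaries": {"provisioning": 2, "billing": 1},
    "power":    {"provisioning": 1, "billing": 3},
}
activity_costs = allocate(cost_elements, cost_drivers)

# Stage 2: activities -> cost objects, via activity drivers (e.g. orders, bills issued).
activity_drivers = {
    "provisioning": {"voice": 3, "leased": 1},
    "billing":      {"voice": 4, "leased": 1},
}
service_costs = allocate(activity_costs, activity_drivers)
```

Every unit of cost reaches a service through an explicit causal chain of drivers, which is what distinguishes ABC from a one-step general allocation of residual common costs.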

ABC has been endorsed by regulators and accounting experts. The European Union has advocated the use of ABC in Directive 97/33/EC44, and the European Commission has recommended its use in the field of interconnection twice45. Experts have pointed out that the use of ABC may overcome many of the criticisms relating to the arbitrariness of FDC46.

42 Whittington (1994).
43 See Schweitzer (1999).
44 European Directive 97/33/EC, on interconnection in telecommunications with regard to ensuring universal service and interoperability through application of the principles of Open Network Provision (ONP), of 11 June 1997.

In fact, the European Commission believes that by establishing an ABC CCA cost accounting system, it is possible to calculate a top-down LRIC that will serve as a cross-check on bottom-up FL LRIC models47.

Embedded Direct Costs (EBC) and Stand-Alone Costs (SAC) Standards

Another way to deal with the arbitrariness surrounding the allocation of common costs is simply not to perform this allocation. There are two ways to achieve this. The first is simply to disregard residual joint and common costs in the costing process; this method is known as Embedded Direct Costs (EBC). The second, stand-alone cost (SAC), solves the allocation problem by allocating all of the residual joint and common costs to the service for which the costing exercise is being performed.
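The contrast between the two standards can be shown on hypothetical figures for a two-service firm:

```python
# EBC vs SAC on hypothetical figures: two services with directly attributable
# costs, plus a residual of joint and common costs that FDC would have to split.
direct_cost = {"voice": 700.0, "leased_lines": 200.0}
residual_common_cost = 100.0

# Embedded Direct Costs: residual joint and common costs are simply disregarded,
# so the costed services never recover them.
ebc = dict(direct_cost)

# Stand-alone cost: the service under study absorbs ALL residual joint and
# common costs, which makes SAC an upper bound (hence its use as a price ceiling).
def sac(service):
    return direct_cost[service] + residual_common_cost

print(ebc["voice"], sac("voice"))   # 700.0 800.0
print(sum(ebc.values()))            # 900.0, i.e. 100.0 short of total cost
```

Any price between a service's EBC and its SAC is free of cross-subsidy in the usual test: it covers the service's own costs without exceeding what a stand-alone provider would incur.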

EBC presents the same limitations attributed to FDC, except those related to the allocation of common costs. In a practical application of this costing standard, OPTA, the Dutch regulator, imposed EBC on KPN, the incumbent telecoms operator48.

SAC is mainly used as a price ceiling and as a cross-subsidization test.


The LRIC standard and Price Regulation

Pricing based on marginal cost, the cost of producing an extra unit of output, is grounded in economic theory. Marginal cost (MC) pricing assures, under certain assumptions, maximum social welfare, efficient resource allocation and efficient market entry.

Nevertheless, in industries where there are economies of scale and/or economies of scope, which result in large amounts of fixed, common or joint costs, marginal cost pricing may not include all the relevant costs49.

45 European Commission (15 October 1997) and European Commission (8 April 1998).

46 Arthur Andersen (October 1994).

47 European Commission (15 October 1997).

48 OPTA (2000).

49 These technological characteristics will also lead to chronic deficits. Economists have long argued that in the presence of economies of scale, two-part tariffs or subsidies should substitute or complement marginal cost pricing (cf. Coase, 1946). This paper will not deal with this issue.


A way to deal with this problem is to measure marginal cost in the long run, taking account of service-specific fixed costs, and to define larger increments in order to account for the cost effects of joint production, economies of scope and scale and indivisibilities. This led to the development of the long-run incremental cost standard (LRIC).

This cost standard considers that the cost of a service or product is equal to the change in total cost resulting from a discrete variation in output in the long run, that is, when all inputs are variable.

Two types of LRIC standards have so far been developed: Total Service Long Run Incremental Cost (TSLRIC) and Total Element Long Run Incremental Cost (TELRIC). TSLRIC may be used as a Long-Run Average Incremental Cost (LRAIC)50.

TSLRIC measures the difference in cost between producing a service and not producing it. TSLRIC is LRIC in which the increment is the total service. TSLRIC seems to have been devised to deal with the fact that the long-run incremental cost should include all items necessary to offer the product or service to the consumer, and not just be limited to the technical means of delivery. Thus TSLRIC would include such activities as billing, payment collection, network planning, etc.51. Nevertheless, it is argued that “although this notion is incremental with respect to services, it is a form of average cost-pricing with respect to quantity. Therefore it retains, on the one hand, the problem of covering costs that are common to more than one service, and loses, on the other hand, the efficiency properties of setting price equal to a cost that is marginal with respect to quantity”52.
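The difference between a small increment and a total-service increment can be illustrated with a hypothetical long-run cost function (all figures invented for the example):

```python
# Hypothetical long-run total cost function: a common fixed cost shared by two
# services, a fixed cost specific to service B, and constant unit costs.
def total_cost(qa, qb):
    common_fixed = 100.0
    fixed_b = 50.0 if qb > 0 else 0.0
    return common_fixed + fixed_b + 2.0 * qa + 1.5 * qb

# LRIC of an increment: the change in total cost, with all inputs variable.
def lric(qa, qb, inc_a=0.0, inc_b=0.0):
    return total_cost(qa + inc_a, qb + inc_b) - total_cost(qa, qb)

# TSLRIC of service B: the increment is the whole service (produce vs not produce).
tslric_b = total_cost(100, 40) - total_cost(100, 0)
print(tslric_b)                  # 110.0 = B's service-specific fixed cost 50 + 1.5 * 40
print(lric(100, 40, inc_b=10))   # 15.0: a small increment picks up no fixed cost

# Note that neither measure recovers the common fixed cost of 100.0,
# which is why mark-ups are needed for the firm to break even.
```

The example makes the quoted criticism concrete: TSLRIC averages B's fixed cost over its whole output, yet the cost common to both services is still left uncovered.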

TELRIC, on the other hand, is a standard coined by the FCC, and is connected to the “unbundling” of the incumbent’s business. TELRIC includes the incremental cost resulting from adding or subtracting a specific network element in the long run, plus an allocated portion of the joint and common costs. Generally, this unbundling is limited to certain elements of the network infrastructure, and that is why it is termed TELRIC. TELRIC is TSLRIC where the increment is a network element, plus a “reasonable allocation of forward-looking joint and common cost”. Besides being useful for pricing unbundled network elements, TELRIC was developed on the presumption that it is possible to attribute a greater share of total costs to the network's several elements than to its several services53,54.

50 Intven (2000), B14-B15.

51 Cartwright (2001), p. 62.

52 Linhart et al (no date).

53 Kahn et al (1999).

54 Linhart et al (no date).


Critics say, though, that besides not covering all costs, “TELRIC retains no vestige of marginality with respect to quantity”55.

Incremental cost may be calculated through a top-down financial model or through an economic/engineering cost model. The financial approach involves the use of current cost accounting, for instance by using the Modern Equivalent Asset (MEA) methodology to value assets. When the result is used for pricing purposes, efficiency factors must be subtracted from the computed value in order to account for incumbents’ inefficiencies.

Using incremental cost standards for pricing implies that prices are calculated as incremental cost (IC) plus a mark-up to account for residual joint and common costs, to allow the operator to recoup all of its forward-looking costs and achieve break-even. The mark-up may be uniform (i.e. price-proportional) or non-uniform (i.e. additive or usage-proportional).
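The two families of mark-up can be contrasted on hypothetical per-unit figures (service names and costs invented for the example):

```python
# Hypothetical mark-up schemes to recover residual joint and common cost on
# top of per-unit incremental cost (IC).
ic = {"termination": 1.0, "origination": 1.5, "transit": 2.5}
residual_common = 1.0   # residual joint and common cost to recover per unit bundle

# Uniform (price-proportional) mark-up: every price is scaled by the same factor.
total_ic = sum(ic.values())
factor = (total_ic + residual_common) / total_ic
uniform = {s: c * factor for s, c in ic.items()}

# Non-uniform (additive) mark-up: the same absolute amount is added to each price.
additive = {s: c + residual_common / len(ic) for s, c in ic.items()}

for s in ic:
    print(s, round(uniform[s], 3), round(additive[s], 3))
# Both schemes recover IC plus the residual common cost in full,
# but they distort relative prices differently.
```

The choice between the schemes is exactly where the arbitrariness criticized later in the paper re-enters: both achieve break-even, yet they send different relative price signals.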

LRIC models and regulatory practice

The adoption of the LRIC standard as a regulatory rule was proposed by Oftel in 1995 in the UK56, to be implemented from 1997, and was determined by the FCC in the US in 199657. In 1997 and 1998 the European Commission recommended LRIC58, based on two 1994 studies produced by WIK/EAC59 and Arthur Andersen60.

Oftel proposed the use of forward-looking long run incremental cost methodologies in connection with BT’s interconnection charges. Oftel considered that forward-looking costs are the appropriate basis for interconnection charges because they reflect resource costs. Ideally, for economic efficiency, the prices of retail services should be set in a way which encourages consumers to take account of the resource costs of their purchasing decisions. Operators would be encouraged to set efficient retail prices if they could purchase a major input (interconnection) at a charge set by reference to the cost of the resources consumed by its provision. Furthermore, since replacement costs would be the costs faced by a new entrant, signals would be given to encourage efficient entry into and exit from interconnection services, if the incumbent's interconnection charges were set on the basis of forward-looking costs. An entrant into the provision of interconnection services that was more efficient than the incumbent could make a profit by setting a charge below the incumbent's charge, whereas an inefficient firm would be unprofitable if it were to match the incumbent's charge. In addition, efficient entry into retail services would be encouraged, although this would depend also upon the nature of retail prices.

55 Linhart et al (no date).

56 Oftel (December 1995), Chapter 5 and Annex D.

57 Telecommunications Act of 1996.

58 European Directive 97/33/EC of 11 June 1997, European Commission (15 October 1997) and European Commission (8 April 1998).

59 WIK and EAC (October 1994).

60 Arthur Andersen (October 1994).

Oftel also favored the use of equal proportionate mark-ups to apportion the common costs of BT's network between conveyance and access respectively. Oftel considered that mark-ups over incremental cost were necessary if BT was to be able to recover the common costs that it necessarily incurs in providing its network. Oftel also considered that the concept of forward-looking costs required that assets be valued using the cost of replacement with the modern equivalent asset (MEA), since this would be the cost of the asset which a new entrant might be expected to employ.

In the US, one of the principal goals of the Telecommunications Act of 1996 regarding the provision of telephone service was to open the local exchange and exchange access markets to competitive entry. In advancing this goal, the interconnection section of the 1996 Act, section 251, imposes several obligations on the incumbent local exchange carriers (ILECs). Among the most relevant of these obligations are, first, the duty to offer interconnection at rates, terms, and conditions that are just, reasonable, and nondiscriminatory, and, second, the duty to provide unbundled network elements (UNEs) at rates, terms, and conditions that are just, reasonable, and nondiscriminatory61.

The FCC devised pricing rules to implement these provisions of the Act62. The FCC determined that an incumbent LEC's rates for each element it offers shall be established pursuant to the forward-looking economic cost-based pricing methodology. In general, the forward-looking economic cost of an element equals the sum of (1) the total element long-run incremental cost of the element and (2) a reasonable allocation of forward-looking common costs. The total element long-run incremental cost of an element is the forward-looking cost over the long run of the total quantity of the facilities and functions that are directly attributable to, or reasonably identifiable as incremental to, such element, calculated taking as a given the incumbent's provision of other elements63.

The total element long-run incremental cost of an element should be measured based on the use of the most efficient telecommunications technology currently available and the lowest-cost network configuration, given the existing location of the incumbent LEC's wire centers. The forward-looking cost of capital should be used in calculating the total element long-run incremental cost of an element. The depreciation rates used in calculating forward-looking economic costs of elements should be economic depreciation rates64.

61 Gabel et al (2000).

62 FCC 96-325.

63 FCC 96-325.
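One common way of combining a forward-looking cost of capital with economic depreciation in LRIC models is a tilted annuity. The sketch below is an illustration only: it assumes charges paid at the start of each year, and the investment, WACC, lifetime and MEA price trend are all hypothetical.

```python
# Sketch of annualizing a forward-looking element investment with a cost of
# capital and economic (tilted-annuity) depreciation; all figures hypothetical.
def tilted_annuity(investment, wacc, lifetime, price_trend):
    """First-year charge such that charges changing at `price_trend` per year
    over `lifetime` years recover the investment at discount rate `wacc`.
    Convention assumed here: charges fall at the start of each year."""
    g = 1.0 + price_trend      # e.g. -0.05 if MEA prices fall 5% per year
    r = 1.0 + wacc
    pv_factor = sum((g / r) ** t for t in range(lifetime))  # PV of the charge path
    return investment / pv_factor

# Hypothetical element: 1000 of investment, 10% WACC, 10-year life, MEA price
# falling 5% a year. The tilt front-loads recovery, mimicking economic
# depreciation of an asset whose replacement cost is declining.
charge = tilted_annuity(investment=1000.0, wacc=0.10, lifetime=10, price_trend=-0.05)
print(round(charge, 2))
```

By construction the present value of the declining charge path equals the investment, so the element neither over- nor under-recovers at the assumed cost of capital; a zero price trend reduces the formula to an ordinary annuity.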

Forward-looking common costs are economic costs efficiently incurred in providing a group of elements or services (which may include all elements or services provided by the incumbent LEC) that cannot be attributed directly to individual elements or services65.

Proponents of TELRIC argue that, because prices are driven to forward-looking costs in competitive markets, “the TELRIC Plus methodology is intended and expected to provide incumbents with a constitutionally sufficient approximation of the fair market value of their property in a competitive market. It is the FCC’s adoption of the TELRIC Plus methodology for the pricing of unbundled elements and interconnection that has given rise to multiple constitutional challenges by the ILECs and their supporters.”66

The European Commission has also recommended LRIC as the basis for interconnection pricing in the context of Directive 97/33/EC67. This Directive departs from the cost orientation and cost accounting framework of the previous ONP directives, and proposes LRIC for universal service costing and for interconnection pricing. The objective of interconnection pricing regulation is to encourage the rapid development of an open and competitive market as well as productivity and efficient and sustainable market entry. In order to achieve this, prices must not be excessive and, more importantly, they must not be predatory. The Directive mentions incremental cost models and the use of a forward-looking cost base, but left all recommendations for a later stage, not specifying a particular cost accounting system.

In the Commission Recommendation on interconnection in a liberalized market: Part I: Interconnection pricing68, published a few months after the Interconnection Directive, the Commission recommends FL LRIC as the cost standard for interconnection pricing.

64 FCC 96-325. 65 FCC 96-325. 66 Gabel et al (2000).

67 European Directive 97/33/EC, on interconnection in telecommunications with regard to ensuring universal service and interoperability through application of the principles of Open Network Provision (ONP), of 11 June 1997.


In so doing, the Commission recognizes that an implementation period must be granted and, in the meantime, proposes the implementation of ABC current cost systems, which will provide a “top-down” cross-check on the bottom-up engineering models to be implemented in order to calculate FL LRIC. Nevertheless, the Commission recognizes that FL LRIC must be complemented by mark-ups, to cover FL incremental common and joint costs, which opens the way for arbitrariness.

In the Commission Recommendation on Interconnection in a liberalized telecommunications market: Part 2 - Accounting separation and cost accounting69, the Commission implements Part 1 of the Recommendation, taking it to its logical consequences. Nevertheless, it goes beyond Part 1 by referring to efficiency adjustments, which may lead to increased arbitrariness in price determination. The Recommendation was drafted taking into account two documents produced by Arthur Andersen70,71.

It recommends the imposition of a time limit for the implementation of CCA accounting, based on the ABC method. This would attribute at least 90% of costs. Costs should be attributed to network elements. The sum of the incremental or avoidable costs of the use of these network elements would allow the calculation of the incremental cost of interconnection. The reconciliation of this top-down cost with the one resulting from bottom-up models would serve as a cross-check. The existence of unattributable common and joint costs and of inefficiencies would mean that the top-down result should be adjusted. Since engineering bottom-up models take time to develop, the Commission published “best practices” as another cross-check on these adjustments.

All potentially polemical issues, including the allocation of common costs, are to be dealt with in the context of public consultations, turning an objective costing exercise into a game between market players.


A critical appraisal of the LRIC approach

The existence of historical monopoly positions justifies regulatory practices in order to eliminate inefficiencies and maximize social welfare.

69 European Commission (8 April 1998).

70 Arthur Andersen (October 1997).

71 Arthur Andersen (November 1997).


Some opponents of LRIC question the whole framework of cost-based pricing, considering that economic theory is only compatible with usage-based prices, or defending the impossibility of administratively defined prices (following Hayek’s argument on the impossibility of economic calculation under socialism) and pointing out the never-ending difficulties surrounding the determination of costs. Others consider that the mainstream regulation paradigm does not take into consideration the need to facilitate the market process and to enable competition in the Schumpeterian sense of the word, in which firms in a competitive market gain a temporary degree of monopoly power through process or product innovation and leap-frog over each other in this respect; instead, it focuses on achieving the competitive market outcome.

Regulators must encourage entrants without expropriating incumbents, and for that they need to send the right “price signals” to the market.

In this context, Long Run Incremental Cost (LRIC) models were developed to answer this concern. “(…) economists generally agree that prices based on forward looking long run incremental cost (LRIC) give appropriate signals to producers and consumers and ensure efficient entry and utilization of the telecommunications infrastructure”72

The issue here is to know whether, at the current stage of model building, LRIC models answer this concern and are ready to be used directly by regulators in implementing cost-oriented prices.

Forward Looking Long Run Incremental Costs: shortcomings of a static analysis and the need for a dynamic analysis

Models, especially bottom-up models, are basically static in nature. In this context several questions arise:

a) It is unrealistic and does not mimic the market process

More generally, the use of LRIC models would force firms constantly to update their facilities in order to incorporate fully today’s lowest-cost technology, as though starting from scratch, the moment those costs fell below prevailing market prices73. However, it is arguable that such investment policies would be inconsistent with the achievement of a normal return on investment.

72 FCC 96-325, §630.

73 Kahn et al (1999), 319-365.


Cost proxy models74 have the advantage of incorporating built-in network optimization routines, which can be used to determine an optimal static network at various points in time in order to approximate certain dynamic considerations. These repeated exercises will not necessarily result in the optimal path for the network, since each one rebuilds the network from scratch.

These models assume that the “efficient firm” simply takes over the current volume of sales of the incumbent, sizing its plant to serve that demand at minimum cost. This assumption ignores the fact that many telecommunication assets are long-lived and that capacity is not deployed all at once, overnight, but expands incrementally to serve growing and changing demand.

Ignoring the dynamic character of this process inherently understates the minimum costs of serving demand as it evolves in the real world over time.

b) How to deal with inefficiencies originating in regulation or political decisions

Existing inefficiencies of the incumbent operator may be caused by regulatory constraints75. The point here is to discuss who should bear the cost of regulatory and policy inefficiencies - the firm's shareholders and workers or society as a whole.

c) Depreciation and technology evolution

The LRIC methodology has also been criticized for assuming that networks can be instantaneously and entirely reconstructed with the best-available FL technology. Indeed, if a current technology were cheaper to install and operate, and the investing firm earned an adequate (presumably high) return on the risky investment (which has to face similar threats from later competitors) and depreciated its equipment correctly to reflect the changing MEA value of its assets, then, in a competitive market, entry would occur until prices were driven down to these LRIC costs. What is not clear is whether attempts to measure LRIC use a sensible risk-adjusted rate of return on assets and appropriate economic measures of depreciation. There is a risk that they will understate the true cost76.

74 Using a bottom-up approach, when constructing the fictive network, regulators also have to consider whether to use a network built the way one would build such a network today (scorched earth) or whether to take the existing network configuration as the point of departure (scorched node). The scorched-earth approach should result in the lowest price: one can copy the present network but has the option to build it differently if this reduces costs.

75 There are some examples of inefficiencies caused by regulatory or political constraints: (1) in a country where, in the past, the incumbent was forbidden to use fixed wireless access technologies, the network plant had to be designed using cables, even in low-density rural areas, leading to network inefficiencies; (2) inadequate historically prescribed depreciation rates may have caused sunk investments in the incumbent's network plant; (3) in the past, when the incumbent was government property, some microeconomic policies within the company may have been driven by the government's macroeconomic policies, leading to inefficiencies in the firm.

Heavy-handed regulation

Others find that the LRIC approach puts the regulator in the position of having to decide which investments are relevant even before they have been undertaken, so that regulators would take over the function of the company's board.

“It is not realistic to presume that a government agency is better equipped than market participants to sort out those technological changes to determine which technology is the best available or most efficient.”77

The choice of the best available technology should not be made by the regulator but by the operator, which is in a better position to judge.

On the other hand, LRIC pricing gives the regulator a key role in managing entry78. Additionally, the regulator, when choosing a particular LRIC model, has to be aware of the different algorithms in each model and of their optimality in constructing a hypothetical network. For instance, underestimating the network length can lead to seriously insufficient support for explicitly subsidized services and to incorrect prices (and hence incorrect price signals for investment and entry) for unbundled network elements79.

When using proxy models to calculate LRIC, the regulator has to choose the algorithm that results in an optimal, most efficient network, and may not be able to do so. In the case of interconnection, the algorithm used must resolve the trade-off between switching and transmission in order to minimize cost and build an efficient interconnection network.
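One simple cross-check on a model's network length is a minimum spanning tree over the customer locations, of the kind used in the HAI critique cited in footnote 79. The sketch below uses Prim's algorithm on hypothetical coordinates; no connected cable plant linking the same points can be shorter than the MST.

```python
# Minimum spanning tree as a physical lower bound on distribution cable length.
# Customer coordinates are hypothetical; Prim's algorithm grows the tree by
# repeatedly adding the cheapest edge from the tree to an outside node.
import math

points = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 5)]   # hypothetical customer sites

def mst_length(pts):
    n = len(pts)
    in_tree = {0}
    total = 0.0
    while len(in_tree) < n:
        d, j = min(
            (math.dist(pts[i], pts[k]), k)
            for i in in_tree for k in range(n) if k not in in_tree
        )
        total += d
        in_tree.add(j)
    return total

lower_bound = mst_length(points)
print(round(lower_bound, 2))
# Any modeled distribution network connecting these points must be at least
# this long; a proxy model reporting less cable is physically infeasible.
```

A regulator can run exactly this check against a proxy model's reported route kilometres: a modeled length below the MST bound flags an over-optimistic algorithm rather than genuine efficiency.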

76 Newbery (1999), 338-341.

77 Sidak et al (p. 423).

78 Laffont et al (2000), p. 149.

79 Dippin and Train (2000) perform an analysis comparing the HAI distribution network procedure to the minimum spanning tree, and they find that lengths in the HAI model are considerably less than the length of the minimum spanning tree. This result implies that the HAI model procedure provides less distribution cable, and hence lower costs, than is physically possible in serving the customers.


Other Costs

Bottom-up models basically model the main network costs – in some cases, some network costs are not included – while non-network costs are excluded. These costs constitute a significant share of the firm's total costs, and most of them are basically common and joint costs.

The problem is: what is the long run incremental cost of the non-network costs? Is it legitimate for the regulator to choose a certain technology (software) or the optimal level of advertising? It may be said that pricing at incremental cost without joint and common costs is economically inefficient, because it enables competitors to offer the incumbents’ services without considering the common costs that the incumbents incur. Thus, technological decisions will be distorted: the incumbent is encouraged to adopt less efficient technologies that have higher incremental costs and lower common costs, which tends to reduce economies of scope. Moreover, because the firm cannot break even, it may reduce the quality of service80.

This reduction in quality of service levels would call for close regulatory monitoring and interference.

Using incremental cost standards for pricing implies that prices are calculated as Incremental Cost plus a mark-up to account for residual joint and common cost, in order to allow the operator to recoup all of its forward-looking costs and achieve break-even.

The fact that mark-ups are used to allocate residual joint and common cost, subjects LRIC models to the same criticisms directed at FDC pricing resulting from the arbitrary allocation of common costs.

Institutional rigidity of the labor market, e.g. unionized firms

It is important for a model to show clearly the contribution of each of these factors. When the regulator undertakes long-run planning of the incumbent's costs, it must take into account the restrictions or barriers faced by the company. This is the case with unionized firms.


Inefficiencies arising from former monopoly positions can persist long enough to affect the efficiency level of an incumbent coming from the monopoly environment, some of them resulting from regulatory constraints. Incumbents have, moreover, incurred sunk costs associated with the labor factor. In fact, unionized firms face difficulties when they must reduce the number of workers in response to technological development, even when the main goal is to increase efficiency.

Regulators must consider this question, since there may exist associated costs that should be recovered (the incumbents will always face these costs while the entrants will not).

This problem becomes crucial when competition is increasing and incumbents are faced with cost-oriented methodologies such as the incremental cost standards advocated by regulators.

Economic/financial methodology of the cost calculation

LRIC aims at calculating the minimum cost of efficiently providing a service.

The efficiency principle at the core of the LRIC concept has implications for calculating the cost of capital.

For instance:

a) It is necessary for the regulator to estimate the optimal financial structure.

It is not obvious that current economic theory provides the regulator with the necessary knowledge for this purpose.

b) It is necessary for the regulator to estimate the optimal tax planning policy (e.g. tax avoidance).

Apart from the problem of being involved in typical direct management decisions, the regulator in this case faces a dilemma in choosing between the private interest of the firm's shareholders, which may contribute to decreasing the cost of services and thus help pursue a restricted concept of the public interest in the telecommunications market, and the wider public interest embodied in the government's tax policy.

c) These cost models do not take into account the option value of discarding an asset if it becomes uneconomical to exploit, that is, if it becomes economically or regulatorily stranded81,82.


A LRIC model has to deal with an investment problem characterized by uncertainty (of technology, market or demand evolution), irreversibility (the initial cost cannot be fully recovered if it is decided to abandon the project) and managerial flexibility to postpone (or modify) the investment (the option to wait), that is, it must consider the impact of real options83,84. There are two solutions to deal with these problems: set a short depreciation horizon, which will result in higher prices in the short run but lower prices in the long run, or set a high return on capital (higher than that proposed according to the CAPM)85.
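A one-period sketch (all figures hypothetical) shows why the option to postpone an irreversible investment has value that a static cost model ignores:

```python
# One-period sketch of the option to wait under uncertainty; all figures are
# hypothetical. Investing today sinks an irreversible cost before demand is
# known; waiting lets the firm invest only if demand turns out high.
cost = 100.0                                # irreversible investment outlay
payoff = {"high": 180.0, "low": 60.0}       # value of the project in each state
p_high = 0.5                                # probability of the high-demand state
discount = 1.0 / 1.10                       # one-period discount factor

# Invest now: expected payoff minus the irreversible cost (static NPV rule).
npv_now = p_high * payoff["high"] + (1 - p_high) * payoff["low"] - cost

# Wait one period: invest only in the high state, discounted back.
npv_wait = discount * p_high * max(payoff["high"] - cost, 0.0)

option_value = npv_wait - npv_now
print(npv_now, round(npv_wait, 2), round(option_value, 2))
```

Even though the invest-now NPV is positive, waiting is worth more: the difference is the value of flexibility that a LRIC model pricing assets at static replacement cost leaves out, which is one reason such models risk understating the true forward-looking cost.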

There are some empirical shortcomings in implementing LRIC models in regulatory practice.

Engineering-economic models have been developed in recent years as an alternative to the traditional econometric and accounting approaches to cost assessment. Engineering models (also known as cost proxy models) offer a more detailed view of cost structures than econometric data allow. These models could enable the regulator to estimate the FL economic costs of the service without having to rely on the detailed cost studies that would otherwise be necessary. An economic cost proxy model begins with an engineering model of the physical local exchange network, and then makes a detailed set of assumptions about input prices and other factors.

However, these models have some shortcomings in regulatory practice:

a) Information resources

The information required for a proper computation of LRIC is very extensive and is therefore conducive to the exercise of discretion. It is almost impossible to audit or review the results, since the models are highly subjective and the necessary data are under the exclusive control of the party subject to the agreement86,87.

82 Newbery (1999), pp. 338-341.

83 Alleman.

84 Real option theory explores the value of a firm’s existing options (e.g. to postpone, contract or abandon a capital investment) and the value of building in options at some extra cost (e.g. the ability to switch between inputs or outputs, to expand capacity, to default when investments are staged sequentially, etc.).

85 Holm (2000).

86 FCC 96-325, §635-§642.

87 FCC 96-325, §635-§642.


The main concern in most reforming developing countries is limited access to data, which is the first barrier a regulator must overcome in developing a regulatory toolkit88.

The amount of detailed information that the proxy models must incorporate is almost forbidding (the size and cost of central switches; the size, locations and cost of fiber-optic electronics).

LRIC methods start from the lowest current cost of the equipment. However, some problems may arise related to knowledge of the cost of equipment. First, some elements may be customized and may not have a well-defined market price; second, what constitutes efficient equipment in general depends on the forecast of the future usage of the elements.

On the other hand, proper determination of LRIC for setting current access charges requires some forecast of technological progress (e.g. platform migration).

The reliability of a model's results depends not only on the engineering parameters and economic principles that are integrated into or used in the model's optimization routines; it also depends upon calculated detail related to customer location and terrain characteristics.

It should be noted that imprecise input values, owing to the quality of the available information, may imply less accuracy in output costs.

Equipment costs can be obtained from suppliers or from the incumbents themselves, but no one can be sure about the accuracy of the values. For instance, the cost of investment is determined by multiplying the input price of investment by the quantity of equipment. Investments are valued at “list” prices, which may be subject to quantity discounts that are not taken into account.

b) No cap on costs

In LRIC models there is no cap on costs, i.e. there is no maximum limit to the cost of services or elements, because the result depends on the input values used to compute the model.


By contrast, with FDC (fully distributed costs) it is possible to define the maximum costs of the network, since the costs result from the account books. LRIC models, however, do not allow us to know the dimension of the costs. Each time the input values, parameters or equipment costs change, the model gives a different result. The main problem is that, as stated above, it is very difficult to estimate accurate inputs for the model.
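The point can be illustrated with a deliberately simplified unit-cost formula; the parameter names and values are hypothetical, chosen only to show how the output moves with every input and how nothing in the model bounds it:

```python
# Sensitivity sketch: modeled unit cost of a network element as a function of
# two uncertain inputs (hypothetical equipment price and fill factor).
def unit_cost(equipment_price, fill_factor, capacity=1000, opex_share=0.3):
    # cost per line served, grossed up by an assumed opex loading
    capex_per_line = equipment_price / (capacity * fill_factor)
    return capex_per_line * (1 + opex_share)

base = unit_cost(50_000, 0.6)
for price in (45_000, 50_000, 55_000):
    for fill in (0.5, 0.6, 0.7):
        c = unit_cost(price, fill)
        print(f"price={price} fill={fill}: {c:.2f} ({(c / base - 1):+.1%} vs base)")
```

A handful of plausible input combinations already spans a wide range of "efficient" unit costs, and unlike FDC there is no accounting total against which the output can be capped; this is what turns the choice of inputs into a point of dispute between the regulator and the firm.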

LRIC models, as they are currently computed, may paradoxically weaken the regulator's position, as it has less information than the regulated firm.

The absence of a cap on costs may lead to never-ending conflicts between the regulator and the regulated firm, possibly ending in the courts.

c) The regulator should have precise knowledge of productivity levels and wages

Prospective engineering models do not allow one to extract the values for labor included in the model's results. Accounting approaches may be somewhat better for this purpose since, if data are available, one can follow a time path of labor cost, perhaps by activity type.

Labor cost is determined as a percentage of the cost of equipment, cables and outside-network infrastructure, also considering factors for maintenance and operation costs. However, output labor costs are not available separately; they are included in the expenses.

There is a trade-off between capital and labor: as the price of labor goes up, capital is substituted for labor, and vice versa. Current models are not prepared to capture this.


Concluding remarks

The construction of LRIC models and their potential use in regulation is an interesting development in regulatory economics and in regulatory practice. Certainly, one of their most important contributions relates to the role of LRIC models in reducing the traditional problem of information asymmetry between the regulator and the regulated firm. However, the previous arguments suggest that, given the current state of model building, there are important shortcomings in this approach, and some care should be taken in its use in regulatory practice. It seems clear that LRIC models currently fall short of the ideal of building a proxy for the long run marginal cost faced by a given firm – notwithstanding that the whole argument is vulnerable to the provisions of the second-best theorem.

Other distortions may arise from the use of LRIC models in regulation. For instance, combining FL LRIC wholesale pricing with HC FDC retail pricing may lead to price squeezes: since it is not assured that FL LRIC prices are lower than HC FDC prices (for example, due to increased labor costs), forward-looking access prices may turn out higher than HC FDC retail prices. Competition may also be affected, since marginal cost pricing of access prevents incumbents from making money on the access business and gives them incentives to extend their untapped market power on that segment to the competitive segments by denying access to their rivals through non-price methods. This possibility calls for close regulatory monitoring and intervention89, 90.
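The price-squeeze concern can be stated as a simple margin test (all figures are invented): an equally efficient entrant is squeezed whenever the HC FDC retail price, net of the FL LRIC access charge, does not cover the entrant's own retail costs.

```python
# Hedged sketch of the price-squeeze test implied above. An entrant buying
# access at the regulated FL LRIC charge must cover its own retail costs
# out of the retail price; a negative margin means it is squeezed out even
# if it is as efficient as the incumbent. All figures are invented.

def squeeze_margin(retail_price_hc_fdc: float,
                   access_price_fl_lric: float,
                   entrant_retail_cost: float) -> float:
    """Margin left to an entrant buying access at the regulated charge."""
    return retail_price_hc_fdc - access_price_fl_lric - entrant_retail_cost

m = squeeze_margin(retail_price_hc_fdc=0.10,
                   access_price_fl_lric=0.07,
                   entrant_retail_cost=0.04)
# m is just below zero: this price combination squeezes the entrant
# despite equal efficiency at the retail level.
```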

89 Laffont et al (2000), pp. 7-8.
90 Laffont et al (2000), p. 163.



References

Alleman, James, University of Colorado at Boulder and PHB Hagler Bailly, "Poverty of Cost Models, The Wealth of Real Options".

Arthur Andersen (1997), "Draft Guidelines prepared for DG XIII of the European Commission", October 1997.

Arthur Andersen (1997b), "Interconnection in a liberalized telecommunications market: Working document on Cost Accounting and Accounting Separation", 6 November 1997.

Arthur Andersen (1994), "Study Prepared for the Commission of the European Communities-DGXIII: A study on cost allocation and the general accounting principles to be used in the establishment of interconnect charges in the context of telephone liberalization in the European Community", October 1994.

Baumol, William J. (1983), "Minimum and Maximum Pricing Principles for Residual Regulation", in A. Danielsen and D. Kamerschen (eds), Current Issues in Public Utility Economics, Lexington Books, 1983.

Baumol, William J., Michael F. Koehn and Robert D. Willig (1987), "How Arbitrary is Arbitrary? - or Toward the Deserved Demise of Full Cost Allocation", Public Utilities Fortnightly, September 3, 1987.

Benitez, Daniel A., Antonio Estache, D. Mark Kennet and Christian A. Ruzzier, "Are cost models useful for telecoms regulators in developing countries?".

Berg, Sanford V. and John Tschirhart (1988), Natural Monopoly Regulation: Principles and Practice, Cambridge University Press, 1988.

Bradley, Michael D., Jeff Colvin and John C. Panzar (1999), "On Setting Prices and Testing Cross-Subsidy with Accounting Data", Journal of Regulatory Economics, 16:83-100, 1999.

Brown, Stephen J. (1986), The Theory of Public Utility Pricing, Cambridge University Press, Melbourne, 1986.

Byatt, I. C.R. (1986), "Accounting for economic costs and prices: A report to HM Treasury by an advisory group", London, 1986

Cartwright, Peter (2001), "Interconnect Costing: Establishing Interconnect Prices for Interconnect Prices and Services", BWCS, Ledbury, 2001.


Cave, Martin and Roger Mills (1992), “Cost Allocation in Regulated Industries”, Center for the study of Regulated Industries (CRI), Regulatory Brief 3. Public Finance Foundation, 1992.

Cave, Martin, Ken Lever, Roger Mills and Stephen Trotter (1990), "Cost Allocation and Regulatory pricing in telecommunications: A UK case study", Telecommunications Policy, Volume 14, Number 6, December 1990.

Clark, John Maurice (1961), “Competition as a dynamic process”, Brookings Institution, Washington, 1961.

Coase, Ronald (1946), "The Marginal Cost Controversy", Economica, 13, August 1946.

Dippon, Christian M. and Kenneth E. Train (2000), "The cost of the local telecommunication network: a comparison of minimum spanning trees and the HAI model", Telecommunications Policy, 24, pp. 253-262, 2000.

Ergas, Henry, “TSLRIC, TELRIC and Other Forms of Forward-Looking Cost Models in Telecommunications: A Curmudgeon’s Guide”, November 1998

European Commission (1997), "Commission Recommendation on interconnection in a liberalized market: Part I: Interconnection pricing" (C(98)50), 15 October 1997.

European Commission (1998), "Commission Recommendation on Interconnection in a liberalized telecommunications market: Part 2 - Accounting separation and cost accounting", of 8 April 1998.

European Commission (1991), Guidelines on the applications of EEC competition rules in the telecommunications sector, 91/C 233/02; OJ C233/2, 06.09.91.

European Directive 92/44/EEC, of 5 June 1992, on the application of open network provision to leased lines.

European Directive 95/62/EC, of 13 December 1995, on the application of open network provision (ONP) to voice telephony.

European Directive 97/33/EC, on interconnection in telecommunications with regard to ensuring universal service and interoperability through application of the principles of Open Network Provision (ONP), of 11 June 1997.

European Directive 97/67/EC, concerning the common rules for the development of the internal market for Community postal services and the improvement of the quality of service.

European Directive 98/10/EC, of 26 February 1998, on the application of open network provision (ONP) to voice telephony and on universal service for telecommunications in a competitive environment.

Federal Communications Commission (1996), The First Report & Order In the Matter of Implementation of the Local Competition Provisions in the Telecommunications Act of 1996 (FCC 96-325).




