
Public Reporting On Quality In The United States And The United Kingdom

In both countries the imperatives of accountability and quality improvement make the wider development and implementation of report cards inevitable.

by Martin N. Marshall, Paul G. Shekelle, Huw T.O. Davies, and Peter C. Smith

PROLOGUE: Report cards represent one of the most publicly visible aspects of hospitals’ quality improvement efforts, and they are not without controversy. Media coverage often overreacts to low “grades” given to hospitals in a region. Providers find ways to stack the decks, selecting the healthiest patients to improve their scores. Purchasers look only at costs, and consumers either can’t figure them out or ignore them completely. At least, this is the popular perception. The need for basic research to develop accurate and useful quality-related performance measures continues, but quality report cards are here to stay.

Problems, challenges, obstacles, and innovations mark the trail, but there is incremental progress toward widespread use of quality report cards in the United States and the United Kingdom, as Martin Marshall and his colleagues report. In the United States a strong emphasis on informing consumer choice and controlling costs has guided reporting. Different traditions and incentives are at work across the Atlantic. The two countries have much to learn from each other as they move toward greater accountability and quality in health care.

A part-time general practitioner (GP) in an inner-city practice and a professor of general practice at the National Primary Care Research and Development Centre, University of Manchester, England, Marshall is also vice-president of the European Working Group on Quality in Family Practice. He gained U.S. experience while a Harkness Fellow in Health Care Policy at RAND in California. Paul Shekelle is a consultant in health sciences at RAND; a professor of medicine at the University of California, Los Angeles, School of Medicine; and a staff physician at the VA Medical Center in West Los Angeles. He spent a year in the United Kingdom as an Atlantic Fellow in Public Policy. Huw Davies is a professor of health care policy and management at the University of St. Andrews, Scotland, where he also directs the Center for Public Policy and Management and the Research Unit for Research Utilization in the School of Social Science. Peter Smith is a professor of economics at the University of York, England.


ABSTRACT: The public reporting of comparative information about health care quality is becoming an accepted way of improving accountability and quality. Quality report cards have been prominent in the United States for more than a decade and are a central feature of British health system reform. In this paper we examine the common challenges and differences in implementation of the policy in the two countries. We use this information to explore some key questions relating to the content, target audience, and use of published information. We end by making specific recommendations for maximizing the effectiveness of public reporting.

Public reporting of comparative information about health care quality is becoming an important quality-improvement instrument in most developed countries.1 The products of public reporting have been variously described as “report cards,” “performance reports,” “provider profiles,” “quality assessment reports,” and “league tables.”2 In this paper we review key reporting initiatives in the United States and the United Kingdom, two countries that are rapidly assembling experience in public reporting on quality, in different ways. We start by setting out the underlying rationale for implementing quality reporting. We then describe major initiatives in both countries and compare and contrast the progress that has been made. In the light of this evidence, we discuss how the benefits of public reporting can be maximized, and we end with some comments on possible future developments.

• Historical background. An interest in public disclosure of performance data is not new in either country. In the 1860s Florence Nightingale highlighted differences in mortality rates of patients in London hospitals, and in 1917 the U.S. surgeon Ernest Codman complained that his colleagues failed to publish their results because they feared that the public might not be impressed.3 However, the vision of Nightingale and Codman has started to become a reality only in recent years, made feasible by developments in information technology and encouraged by rising popular expectations regarding patient choice and the accountability of health systems.

• Objectives of reporting. Advocates of the public release of performance data are often unclear about the objectives of reporting initiatives and how they expect the various stakeholders to respond.4 This is likely to depend upon the context within which reporting takes place, but broadly there are two reasons for putting performance data in the public domain. The first is to increase the accountability of health care organizations, professionals, and managers. This greater accountability offers patients, payers, and purchasers a more informed basis on which to hold providers to account, either directly through purchasing and treatment decisions or indirectly through political processes.5 The second reason is to maintain standards or stimulate improvements in the quality of care provided, or both. A range of mechanisms are being used in the United States and the United Kingdom to achieve this aim: economic competition, performance management with or without incentives, or appeals to the professional interest of those working in health care in doing a good job. Within each of these mechanisms, the stakeholders—patients, the insured, purchasers, managers, and health professionals—are expected to play different roles.

Quality Reporting In The United States

• Key U.S. initiatives. The United States has led the modern public disclosure movement. The first major U.S. report cards were published more than fifteen years ago by the federal government agency that administers Medicare.6 This initiative was withdrawn in 1993 following criticism of the validity of the rankings, but it led to a plethora of other performance reports produced by state governments, employers, consumer advocacy groups, the media, private enterprises, and coalitions.

Information is now readily available in the United States about the comparative performance of health insurance plans, hospitals, and individual physicians, and there has been an ongoing (and sometimes acrimonious) debate about the content of the data, the process of disclosure, and the associated merits and risks. Public reporting in the United States is now much like health care delivery in that country: It is diverse, is primarily market-based, and lacks an overarching organizational structure or strategic plan. Public reporting systems vary in what they measure, how they measure it, and how (and to whom) it is reported. Some exemplar organizations involved in public reporting of quality information are outlined below.

National Committee for Quality Assurance (NCQA). The NCQA is a nonprofit organization that evaluates health care quality, primarily of health maintenance organizations (HMOs). The NCQA’s Health Plan Employer Data and Information Set (HEDIS) is one of the oldest and best-known public reporting systems. It measures a growing number of mainly technical processes of care, collected from both administrative data and medical record review. Health plans participate voluntarily, and comparative quality information is posted on the NCQA’s Web site, www.ncqa.org.

Pacific Business Group on Health (PBGH). This is a consortium of large employers in California that promotes public reporting (Healthscope, www.healthscope.org) in order to improve the quality of health care for their employees and other California residents. Health plans, hospitals, and medical groups are all included. Data come from patient surveys and providers’ self-reports.

National Quality Forum. The forum is a public-private, not-for-profit organization that does not produce its own public reporting system but rather promotes core sets of quality measures and standardized measurement specifications, collection, verification, and audit tools (www.qualityforum.org).

Leapfrog Group. Leapfrog is a coalition of major business purchasers (Fortune 500 companies) that does not produce public report cards but rather sets certain standards for processes of care (such as the use of computerized physician order entry and volume standards for certain procedures). It then encourages health care purchasers to require providers to meet these standards as a condition of participation in health insurance plans (www.leapfroggroup.org). Data come from self-reports by urban hospitals.

Healthgrades. This for-profit company uses its own analyses and those of others (including the Leapfrog Group) to present comparative information on hospitals for a range of health conditions, nursing homes, and home health agencies (www.healthgrades.com). Some of this information is distributed free, and some must be purchased.

State-based initiatives. States also have played an important role in driving public reporting forward. For example, New York State has produced and published what many regard as the most sophisticated report cards, focusing primarily on cardiac procedures.7

Centers for Medicare and Medicaid Services (CMS). The federal CMS (formerly HCFA) launched the hospital mortality report described earlier. It is now committed to developing individual physician-specific quality report cards and has selected diabetes care as the first condition. No decisions have yet been made regarding measures, but both process and outcome measures are under consideration. The CMS has also embarked on an ambitious public reporting project on the quality of long-term care facilities that included full-page advertisements in newspapers in major U.S. cities and all fifty states.8

Consumer Assessment of Health Plans (CAHPS). This is a government-sponsored effort to develop a common survey to be used by patients to assess the care they receive. It has been the subject of numerous studies assessing its psychometric properties and utility in real-life situations. Studies have shown that in experimental situations, CAHPS information can influence patients’ choice of providers; however, observational studies in real-world situations have reported mixed results.9

• Key drivers. Public reporting of quality information in the United States is being propelled mainly by two factors. First, the “business case for quality” argues that high-quality care will lower business costs by reducing employers’ contribution to the health care costs of their employees (since U.S. employers pay for the majority of workers’ health care costs). Second, there is rapidly increasing interest in the use of “tiered pricing”—the coupling of the portion of health care costs that a person pays to the price of health care. For example, a person may face a minimal personal cost to be hospitalized at a hospital that is “preferred” by a health insurer but a personal cost of several hundred dollars a day to be hospitalized at other hospitals.

Price has been the dominant factor determining which hospitals are “preferred,” but because of concern from employers and patients, publicly reported quality information is starting to play a part. The goal is to promote the use of providers that deliver the best quality for price (“value-based purchasing”). Tiered pricing has been introduced in many health plans in California. There is much ongoing work, both in the private sector and in academe, to determine the types of quality information people desire and the format in which they can understand and use the information.


Quality Reporting In The United Kingdom

• Key U.K. initiatives. In contrast to the United States, there are relatively few examples of purposeful release of information about quality of care in the United Kingdom. Some basic information about hospital performance in England and Wales has been in the public domain since the early 1980s. These data had embraced hospital mortality rates by 1992 but, although publicly released, were intended principally for managerial rather than public use and had little discernible impact.10

There were also isolated examples of publishing comparative information of interest to a limited audience, such as renal transplant and in vitro fertilization success rates. The first reporting initiatives deliberately aimed at the public, known as the Patient’s Charter, focused on waiting times rather than clinical quality.

First report cards on hospitals. A range of hospital outcome data have been published in Scotland since the early 1990s, but, again, the release of the information was purposefully low key to discourage hostile responses from clinicians and the media.11 High-profile national reporting initiatives were not introduced in the United Kingdom until 1998, a decade later than in the United States. From this date, the United Kingdom has adopted a much more coordinated and strategic approach to public reporting than has been seen in the United States. A national Performance Assessment Framework (PAF) was introduced in 1998 that sought, among other things, to report clinical outcomes at the hospital level.12 In 2001 the NHS Plan placed this initiative at center stage and became the first government publication to refer specifically to “report cards.”13

Government hospital rating system. In September 2001 the Department of Health published a new system for rating the performance of all National Health Service (NHS) nonspecialist hospitals in England. Each was to be classified annually into one of four categories, from three to zero stars, depending upon their performance against a range of indicators and the outcome of their clinical governance review by the Commission for Health Improvement (CHI). Three stars were awarded to hospitals with the highest performance rating and a favorable CHI review and zero stars to hospitals with the poorest performance and an unfavorable review. The intention is to reward three-star organizations with increased financial and strategic autonomy. Zero-star hospitals will be investigated and senior management changed where necessary.

Private initiative. Most reporting schemes in the United Kingdom have been led by the national health ministry, but in late 2000 an independent initiative entered the arena. Dr Foster, established by two Sunday Times journalists, makes hospital performance data available on the Web and sells its information to the media (www.drfoster.co.uk). The group published its first Good Hospital Guide as a supplement to the Sunday Times in January 2001, subsequently produced a Good Birth Guide, and in March 2002 produced a second Good Hospital Guide.

The Dr Foster guides report on the performance of all public acute care hospitals and the majority of hospitals belonging to the main independent providers. The early guides contained information about hospital-specific mortality rates, number of staff, waiting times, number of complaints, and services. These data were not published in a ranked or “league table” format, but the most recent guide explicitly ranks the hospitals in terms of their relative performance. Dr Foster has invited a number of high-profile figures from the medical establishment to endorse its publications and has worked closely with the health ministry to secure compliance and coordinate reporting activities. The main contribution of the Dr Foster group has been to bring communication skills to performance reporting that government agencies had failed to achieve. It is currently focusing its resources on local rather than national reports and in the near future intends to publish data on individual specialists and primary care practices.

NHS surveys. The NHS has also initiated a series of surveys designed to monitor NHS performance from the patient’s perspective, offering systematic comparisons of experiences over time and between different parts of the country. An example is the survey of heart disease patients, conducted by Picker Europe, involving 194 NHS hospitals and more than 84,000 patients. An extensive report card on the performance of each hospital is available on the Internet.14 This reports on about seventy-five different aspects of the patient experience, with responses for each question compared with the best and worst 20 percent of hospitals nationally.
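To make the benchmarking arithmetic concrete, the sketch below shows one way such a comparison could be computed: for each survey question, hospitals’ scores are split at the 20th and 80th percentiles, and an individual hospital is flagged as falling in the worst 20 percent, the middle band, or the best 20 percent. This is an illustration only; the function and field names are ours, not those of the NHS survey programme.

```python
# Illustrative sketch only: banding one hospital's survey score against the
# best and worst 20 percent of hospitals nationally. Names are hypothetical.

def benchmark(scores_by_hospital, hospital):
    """scores_by_hospital: dict mapping hospital name -> score (e.g., percent
    of patients answering a question positively). Returns the band for
    `hospital` relative to all hospitals in the dict."""
    scores = sorted(scores_by_hospital.values())
    n = len(scores)
    low_cut = scores[int(0.2 * (n - 1))]    # crude 20th percentile (worst-20% threshold)
    high_cut = scores[int(0.8 * (n - 1))]   # crude 80th percentile (best-20% threshold)
    s = scores_by_hospital[hospital]
    if s <= low_cut:
        return "worst 20 percent"
    if s >= high_cut:
        return "best 20 percent"
    return "middle 60 percent"

# Example usage with made-up figures:
scores = {"Hospital A": 62.0, "Hospital B": 74.5, "Hospital C": 81.0,
          "Hospital D": 90.2, "Hospital E": 68.3}
print(benchmark(scores, "Hospital D"))   # -> "best 20 percent"
```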

• Future plans. The enquiry into pediatric cardiac surgery deaths at the Bristol Royal Infirmary revealed that a great deal of unpublished information on the high mortality rates at Bristol was available but not acted upon. It therefore recommended the creation of an independent Office for Information on Health Care Performance, a recommendation that has now been carried out by the national government. The new office will be responsible for the collection of data, the analysis of the data to identify good and poor performance, the publication of the data in the form of both national and local report cards, and the conduct of patient and staff surveys. In addition, the office will be responsible for assessing data quality and making recommendations to improve data systems. It will be part of a redesigned CHI, which will become an independent regulator of U.K. health care in both the public and independent sectors. Prompted in part by the Bristol Enquiry, the Society of Cardiothoracic Surgeons publishes postoperative mortality rates by unit, and there are plans to publish thirty-day mortality rates for individual surgeons by April 2004.15

A Comparison Of U.S. And U.K. Public Reporting

In both countries quality report cards are seen as central to improving the accountability of health providers, a key lever to improving quality, and an important principle to pursue.16 In an open and democratic society it is inconceivable that the policy of increasing disclosure will be reversed. However, both countries face similar challenges as they attempt to engage the key stakeholders. Politicians and the media have embraced the idea with enthusiasm, but we summarize below a growing body of evidence to suggest that many consumers, purchasers, health professionals, and, to a lesser extent, provider organizations are either ambivalent, apathetic, or actively antagonistic toward report cards. There is still much that we do not know about public reporting, and there are major opportunities for collaboration between the two countries for finding the answers.

• Reporting standards. Some important differences between the countries should be recognized when seeking to translate lessons from one setting to the other. In principle, the British health care system offers a comparative advantage for securing data standardization and universality of coverage (although the United Kingdom has until recently been reluctant to take advantage of this benefit of a unitary system). There is no reason in principle why the major payers in the U.S. system should not insist on minimum reporting standards as a condition of doing business.17 However, there is evidence that some U.S. providers are becoming reluctant to participate. For example, only 50 percent of California hospitals are now participating in a statewide initiative to report patients’ evaluations of their care.18

Diversity notwithstanding, the amount and quality of routinely available data are considerably greater in the United States. Problems with the coverage and quality of data have been a major barrier to progress in the United Kingdom, and a recent study of the quality of U.K. cardiac surgery data, supported by the Nuffield Trust, demonstrated some large inconsistencies among various data sources, even for an outcome as apparently unequivocal as death.19 There are plans to rectify this with a major investment in information technology over the next three years.

• Report card design. U.S. policymakers and researchers are searching for ways of increasing popular engagement with report cards. The most recent initiatives, using interactive Web sites that allow users to select features that they regard as relevant and then display this information in their chosen format, have great potential but have not yet been formally evaluated.20 Traditionally, the British public have not been expected (and, indeed, at present have little capacity) to exercise choice on the basis of information. The policy focus has instead been on reducing variations between providers rather than encouraging patient choice. However, recent policy announcements have signaled major changes in the philosophy of the NHS, with patient choice, diversity of providers, and funding mechanisms for diagnosis-related groups (DRGs) playing a central role. Report cards are expected to be a central element in this new arrangement.

• Provider incentives. The two health systems have focused on different incentives for providers to act on quality reports. The United Kingdom has emphasized greater autonomy and other nonfinancial rewards for provider organizations that report good performance. In contrast, the principal external motivators for U.S. providers take the form of economic incentives. High reported quality should in principle increase market share, and there is some evidence that report cards are used actively by high-performing organizations to that end.21 Experience with the U.S. “Rewarding Results” initiative and new contractual arrangements for U.K. clinicians should offer additional evidence in this domain.22

Evidence On The Impact Of Public Reporting

There is a large body of research evidence from the United States on how the various stakeholders respond to public reports and their impact on the processes and outcomes of health care. There is considerably less evidence of this kind in the United Kingdom, although what does exist tends to support the key conclusions drawn from the U.S. data.23

• Impact on quality of care. Given that one of the two broad reasons for public disclosure of quality information is to maintain standards or stimulate improvements in the quality of care, it is surprising that there are few published studies on this subject. We could find no published data from randomized controlled trials that assess the effect of public reporting specifically on quality. The strongest existing evidence is based on observational studies of short-term mortality and morbidity following cardiac surgery. These indicate that U.S. states that have public reporting systems have experienced declines in cardiac surgery mortality that are more rapid than the declines in states without public reporting.24

The Cleveland Health Quality Choice project has also been the subject of several evaluations, but the data are limited by their lack of a control group.25 There are no data regarding the effect of public reporting on long-term outcomes of cardiac surgery or outcomes from care for other health conditions. There are some observational studies of the effect of public reporting on processes of care judged to be related to health outcomes, such as influenza vaccination or screening mammography. Prominent among these are the greater improvements over time in the processes of obstetrics care for those hospitals reported as low-quality outliers compared with other hospitals, and the observation that U.S. health plans that publicly reported their data showed greater improvements over three years on some HEDIS measures than did health plans that measured but did not publicly report the assessment of their care.26

• Consumers’ use of report cards. One possible mechanism for public reporting to stimulate efforts to improve the quality of care is consumer pressure. Several studies have demonstrated that U.S. consumers want more information about providers’ performance and are willing to identify the content and format of the information of greatest use to them.27 However, most of the evidence (from both the United States and Scotland) suggests that when this information is published, the public does not search it out, does not understand it, distrusts it, and fails to make use of it.28 There are some notable exceptions; for example, higher reported performance has been associated with greater employee enrollment and a lower desire to switch health care providers.29


In the United Kingdom, recent focus group data indicate that some members of the public consider public reporting to be a punitive tool used by politicians to punish hard-working professionals, and they expressed concern about the practical implications of introducing report cards on general practice in the United Kingdom.30 Some studies have shown that those who do respond to report cards are more likely to be young and well-educated.31 In addition, where user interest has been demonstrated, it appears to decline over time, which suggests that the public responds primarily to new information.32

The predominant lack of user response has been explained in terms of difficulty in understanding the information, lack of trust in the data, problems with timely access, and lack of genuine choice.33 However, recent evaluations of even the newest state-of-the-art report cards that address many of these potential barriers have failed to demonstrate significant or sustained public interest.34 Because several of the above-cited studies dealt with consumer assessments of cardiac surgery report cards, the association between public reporting and improvements in health outcomes described above suggests that consumers’ use of report cards is not a necessary precondition if public reporting is to have an effect. Much work in the United States continues to try to understand how to present quality information in ways that are meaningful to consumers, but most experts do not believe that consumer pressure will be an important mechanism to stimulate quality improvements for the foreseeable future.35

• Purchasers’ use of report cards. Some commentators were not surprised by the lack of consumer interest in comparative data but did expect purchasers (mostly U.S. employers and U.K. Primary Care Trusts) to be more responsive on behalf of their constituency groups. Published evidence from the United States suggests that early interest in report cards quickly waned and that most purchasers were more interested in costs, or in gross indicators of quality such as accreditation, than in detailed comparative information.36 However, in the past few years there has been an accelerated interest by some U.S. employers in the public reporting of quality information, stimulated by the burden placed on business by health care costs and the perception that higher-quality care may lead to healthier and more productive employees. In contrast, there is little evidence to indicate that U.K. purchasers of health care are yet using the comparative hospital performance data to guide their contracting decisions to any great extent.

• Physicians’ use of report cards. While both American and British physicians are more aware of report cards than consumers are, they too make little use of report card information.37 Evidence in the United States suggests that physicians distrust and attempt to discredit the data, and there are some examples of their responding defensively by demanding that their managers’ performance be judged using report cards.38 Exceptions are emerging in the United Kingdom, where some professionally based collaboratives and networks (such as cardiothoracic surgeons) are beginning to support benchmarking arrangements based on publicly released data.


• Hospitals and other provider organizations’ use of report cards. A growing body of evidence indicates that both U.S. and U.K. provider organizations are the most sensitive of the various stakeholder groups to report cards and can respond in ways that improve the quality of the care they provide.39 U.S. hospitals that are shown by the data to be performing poorly are inclined, at least initially, to discredit the reports and question their value. However, a recent study suggests that for these organizations, the considered and private response is different: They use the published information to help them focus on quality issues, improve their internal data systems, and improve the quality of their care.40

• The media’s response. The media have played a leading role in promoting the use of report cards in both countries. However, considerable anxiety has been expressed about the media coverage of comparative information: in particular, the propensity of the media to be alarmist, to engender a culture of blame, and to present complex data as overly simplistic league tables. Evidence from both countries suggests that these claims are probably exaggerated, particularly when those responsible for the release of the data work closely with the reporting journalists.41

• Unintended consequences. The focus of some quality reporting systems has been criticized for measuring “what can be easily measured” rather than “what is important.” The evidence and perception that organizations and provider groups will devote resources to improve on any quality measures that are publicly reported create the possibility that other important areas of health care may suffer for lack of those same resources. There are no randomized trial data or observational data that associate public reporting in one health domain with worsening performance in other health domains. However, the potential for this and other unintended consequences of public disclosure has been described in health and nonhealth sectors and should be taken seriously.42

Maximizing The Benefits Of Public Reporting

The debates over the merits or deficits of reporting health care quality have been extraordinarily heated. On the one hand, advocates of public reporting see the current reservations as a necessary evolutionary step and think that report cards will soon become an integral and accepted tool in a modern health care system. On the other hand, opponents see report cards as largely unproven and a distraction, with potentially unhelpful side effects.

Our view is that—whatever the merits of the two arguments—the imperatives of accountability and quality improvement make the wider development and implementation of report cards inevitable. Public reports are here to stay, and the debate should now be moving on from whether to use them to how best to deploy them in particular circumstances. In this respect, public reporting should be treated like any other technology or policy option. Its benefits against stated objectives should be evaluated in the light of its costs, including both direct costs and inadvertent side effects.


• Mandatory reporting. This paper has sought to summarize the principal costs and benefits. In the light of this evidence, how should policy on public reporting evolve? The first point is that—if it is to be effective—public reporting may need to be mandatory. Recent evidence from the United States suggests that where this is not the case, health care organizations may simply withdraw from a reporting scheme when they perceive that participation is not in their interest, thereby severely compromising its usefulness.43 Mandatory reporting of course places a duty on the regulator to ensure that all reporting requirements are manifestly useful to stakeholders and cost-effective to the health system, and during developmental phases it may be more effective to rely on voluntary participation.

• Tailoring the data. The next design issue concerns what data to report. Most public reporting schemes until now have opportunistically relied on readily available information. In the future there will be increasing pressure to tailor reports more closely to the needs of users, necessitating the implementation of new data collection mechanisms. The most obvious pressures are toward increased use of outcome data. However, for many specialities, especially those dealing with chronic conditions or public health, it is likely that reporting schemes will have to rely on measures of process, preferably those that are known from research evidence or professional opinion to be strongly linked to good eventual outcomes.

• Broadening the scope of data. The scope of reporting schemes is of course circumscribed by the data collection capacity of participating organizations, and the development of electronic health records is likely to offer a good opportunity to broaden the scope of information reported. The unitary U.K. system offers particularly fertile ground for seizing new information opportunities. For example, there will be a dramatic increase in information relating to primary care arising from new general practitioner (GP) remuneration systems, information that could readily be used for reporting purposes.44

• Adequate risk adjustment. Almost all performance reporting requires some sort of risk adjustment to secure meaningful comparability, particularly when data on outcomes are reported and when the information is used to make definitive judgments. The credibility of a reporting scheme may be fatally compromised if risk adjustment is inadequate. In addition, inadequate risk adjustment gives physicians the incentive to discriminate against higher-risk patients (cream skimming). The science of risk adjustment is developing rapidly, but many issues remain unresolved. Further development of methodology is a clear research priority, and as new domains of performance reporting are introduced, associated risk adjustment methods need to be put in place.
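To ground the idea, the sketch below illustrates one common form of risk adjustment, indirect standardization: each hospital’s observed deaths are compared with the deaths expected given its case mix, where the expected figure comes from a patient-level risk model. The calculation and the field names (hospital, died, predicted_risk) are illustrative assumptions on our part, not the method of any particular reporting scheme.

```python
# Illustrative sketch (not any scheme's actual method): indirect standardization
# of hospital mortality. Assumes each patient record already carries a predicted
# risk of death from some case-mix model; field names are hypothetical.

from collections import defaultdict

def risk_adjusted_rates(patients):
    """patients: iterable of dicts with 'hospital', 'died' (0 or 1), and
    'predicted_risk' (model-estimated probability of death).
    Returns a dict of hospital -> risk-adjusted mortality rate."""
    observed = defaultdict(float)
    expected = defaultdict(float)
    total_deaths = total_patients = 0.0
    for p in patients:
        h = p["hospital"]
        observed[h] += p["died"]
        expected[h] += p["predicted_risk"]   # sum of risks = expected deaths
        total_deaths += p["died"]
        total_patients += 1
    overall_rate = total_deaths / total_patients
    # Risk-adjusted rate = (observed / expected) * overall rate.
    return {
        h: (observed[h] / expected[h]) * overall_rate if expected[h] else None
        for h in observed
    }
```

An observed-to-expected ratio above 1 flags more deaths than the case mix would predict. The comparison is only as credible as the risk model supplying the predicted probabilities, which is precisely why inadequate risk adjustment both misleads users and invites cream skimming.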

• Organizational focus levels. In designing a reporting scheme, a fundamental question is at what organizational level to report results. The New York cardiac surgery scheme (and its proposed U.K. counterpart) focuses on the individual physician, and there are clear benefits to the user in being able to scrutinize a specific provider of care. However, cardiac surgery may be a special case. For many other procedures and specialities, it is likely that outcome measures and risk adjustment methodologies are less well developed and that volumes of activity are lower. In these circumstances, focusing publicly on individual physicians may provoke negative responses from physicians. It is therefore likely that, for many circumstances, some broader organizational level will be most appropriate, such as hospitals, larger group practices, health plans, and purchasers. Of course, scrutiny of individual physicians and teams should continue to be an important internal task of health care organizations.

• Increasing the public’s interest. A consistent finding in both countries is the lack of public interest in quality reports. This may change in the future, as the public becomes better informed and more assertive (and, in the United Kingdom, enjoys greater choice). Increased interest may be particularly marked among patients with chronic conditions, supported by advocacy groups. However, the main scope for increased engagement in both countries seems to be among provider and purchaser organizations. The former need more help and support than they now receive if they are to respond positively to public reports: The benefits are more likely to be maximized and the adverse consequences minimized in an environment that values learning and improvement. Purchaser organizations also need encouragement, since, with some notable exceptions, they seem to be remarkably passive in scrutinizing the quality of care secured for their patients.

• Using incentives. This raises the important issue of what incentives to attach to public reports. This is an underdeveloped area, but both countries are evaluating schemes to reward quality, and we await the results with interest. It is also important not to neglect the intrinsic incentives associated with being seen as doing a good job. But we feel that explicit incentives need to be built into the system if the public reports are to achieve maximum effectiveness. In the same vein, many of the alleged adverse side effects of public reporting arise from accidental incentives that may need to be countered. For example, if cream skimming is found to take place, then it may be necessary to adjust both payment and risk adjustment schemes. It will also be important to nurture a professional climate in which gaming and misreporting are considered unacceptable. Given the frequent professional hostility to public reporting, it is essential to monitor all schemes for unintended consequences to allay fears or take remedial action as appropriate.

The trend toward increased disclosure of quality information is in our view irresistible, and, therefore, determining what style of reporting works best and in what circumstances is a major policy task. To that end, the emerging experience from the two countries offers a rich source of contrasting evidence for policymakers to consider.


An earlier version of this paper was presented at the international symposium, Improving Quality of Health Care in the United States and the United Kingdom: Strategies for Change and Action, organized by the Commonwealth Fund and the Nuffield Trust and held at Pennyhill Park Conference Centre, Bagshot, England, 12–14 July 2002.

NOTES

1. P. Smith, ed., Measuring Up: Improving Health System Performance in OECD Countries (Paris: Organization for Economic Cooperation and Development, 2002).

2. M.N. Marshall et al., “Public Disclosure of Performance Data: Learning from the U.S. Experience,” Quality in Health Care 9, no. 1 (2000): 53–57.

3. F. Nightingale, Notes on Hospitals, 3d ed. (London: Longman, Green, Longman, Roberts, and Green, 1863); and E.A. Codman, A Study in Hospital Efficiency (1917; reprint, Oakbrook Terrace, Ill.: Joint Commission on Accreditation of Healthcare Organizations, 1996).

4. M.N. Marshall et al., The Public Disclosure of Performance Data in Health Care: Learning from the U.S. Experience (London: Nuffield Trust, 2000); and M.N. Marshall and H.T.O. Davies, “Public Release of Information on Quality of Care: How Are Health Services and the Public Expected to Respond?” Journal of Health Services Research and Policy 6, no. 3 (2001): 158–162.

5. A. O’Neil, “A Question of Trust,” BBC Radio 4 Reith Lectures (2002); and H.T.O. Davies, “Falling Public Trust in Health Services: Implications for Accountability,” Journal of Health Services Research and Policy 4, no. 4 (1999): 193–194.

6. A. Epstein, “Sounding Board: Performance Reports on Quality—Prototypes, Problems, and Prospects,” New England Journal of Medicine 333, no. 1 (1995): 57–61.

7. New York State Department of Health, “Info for Researchers, Heart Disease,” October 2000, www.health.state.ny.us/nysdoh/research/heart/heart.htm (19 January 2003); and M.R. Chassin, “Achieving and Sustaining Improved Quality: Lessons from New York State and Cardiac Surgery,” Health Affairs (Jul/Aug 2002): 40–51.

8. Centers for Medicare and Medicaid Services, “Nursing Home Quality Initiative,” 14 February 2003, www.cms.hhs.gov/providers/nursinghomes/nhi (21 February 2003).

9. M. Spranca et al., “Do Consumer Reports of Health Plan Quality Affect Health Plan Selection?” Health Services Research 35, no. 5 (2000): 933–947; and E. Guadagnoli, “Providing Consumers with Information about the Quality of Health Plans: The Consumer Assessment of Health Plans (CAHPS) Demonstration in Washington State,” Joint Commission Journal on Quality Improvement 26, no. 7 (2000): 410–420.

10. A. Street, “The Resurrection of Hospital Mortality Statistics in England,” Journal of Health Services Research and Policy 7, no. 2 (2002): 104–110.

11. R. Mannion and M. Goddard, The Impact of Performance Measurement in the NHS—Report 1: Empirical Analysis of the Impact of Public Dissemination of the Scottish Clinical Resource and Audit Group Data (York: Centre for Health Economics, University of York, 2000).

12. Department of Health, A First Class Service, Quality in the NHS (London: Department of Health, 1998); and Department of Health, The NHS Performance Assessment Framework (London: Department of Health, 1999).

13. Department of Health, The NHS Plan: A Plan for Investment, a Plan for Reform (London: Department of Health, 2001).

14. Department of Health, “The National Survey of NHS Patients, Coronary Heart Disease Survey, Trust Level Data,” 19 February 2003, www.doh.gov.uk/nhspatients/chdsurvey2a.htm (21 February 2003).

15. Department of Health, Learning from Bristol: The Department of Health’s Response to the Report of the Public Inquiry into Children’s Heart Surgery at the Bristol Royal Infirmary 1984–1995 (London: Department of Health, 2002). For details on the society’s activities, see www.scts.org.

16. For a detailed description of the U.S. and U.K. reporting systems, see V. Raleigh, Performance Assessment in the NHS (London: Commission for Health Improvement, 2002); R. Mannion and M. Goddard, The Impact of Performance Measurement in the NHS—Report 3: Performance Measurement Systems; A Cross-Sectoral Study (York: Centre for Health Economics, University of York, 2000); M.N. Marshall et al., Dying to Know: Public Release of Comparative Data in Health Care (London: Nuffield Trust, 2000); and E.C. Schneider and T. Lieberman, “Publicly Disclosed Information about the Quality of Health Care: Response of the U.S. Public,” Quality in Health Care 10, no. 2 (2001): 96–103.

17. D. Lansky, “Improving Quality through Public Disclosure of Performance Information,” Health Affairs

18. Schneider and Lieberman, “Publicly Disclosed Information.”

19. L. Fine et al., “How to Evaluate and Improve the Quality and Credibility of an Outcomes Database: Validation and Feedback Study on the U.K. Cardiac Surgery Experience,” British Medical Journal 326, no. 7379 (2003): 25–28.

20. M.E. Vaiana and E.A. McGlynn, “What Cognitive Science Tells Us about the Design of Reports for Consumers,” Medical Care Research and Review 59, no. 1 (2002): 3–35.

21. Schneider and Lieberman, “Publicly Disclosed Information.”

22. National Health Care Purchasing Institute, “Rewarding Results: About the Program,” www.nhcpi.net/rewardingresults/nac.cfm (7 March 2003).

23. For a review of the literature, see M.N. Marshall et al., “The Public Release of Performance Data: What Do We Expect to Gain? A Review of the Evidence,” Journal of the American Medical Association 283, no. 14 (2000): 1866–1874.

24. E.L. Hannan et al., “Improving the Outcomes of Coronary Artery Bypass Surgery in New York State,” Journal of the American Medical Association 271, no. 10 (1994): 761–766; and E.D. Peterson et al., “The Effects of New York’s Bypass Surgery Provider Profiling on Access to Care and Patient Outcomes in the Elderly,” Journal of the American College of Cardiology 32, no. 4 (1998): 993–999.

25. G.E. Rosenthal et al., “Using Hospital Performance Data in Quality Improvement: The Cleveland Health Quality Choice Experience,” Joint Commission Journal on Quality Improvement 24, no. 7 (1998): 347–360; and G.E. Rosenthal, L. Quinn, and D.L. Harper, “Declines in Hospital Mortality Associated with a Regional Initiative to Measure Hospital Performance,” American Journal of Medical Quality 12, no. 2 (1997): 103–112.

26. D.R. Longo et al., “Consumer Reports in Health Care: Do They Make a Difference in Patient Care?” Journal of the American Medical Association 278, no. 19 (1997): 1579–1584; and J. Bost, “Managed Care Organizations Publicly Reporting Three Years of HEDIS Data,” Managed Care Interface 14 (2001): 50–54.

27. S. Edgman-Levitan and P.D. Cleary, “What Information Do Consumers Want and Need?” Health Affairs (Winter 1996): 42–56; and S. Robinson and M. Brodie, “Understanding the Quality Challenge for Health Consumers: The Kaiser/AHCPR Survey,” Joint Commission Journal on Quality Improvement 23, no. 5 (1997): 239–244.

28. Schneider and Lieberman, “Publicly Disclosed Information”; Mannion and Goddard, The Impact of Performance Measurement in the NHS—Report 1; and M.N. Marshall, J. Hiscock, and B. Sibbald, “Attitudes to the Public Release of Comparative Information on the Quality of General Practice Care: A Qualitative Study,” British Medical Journal 325, no. 7375 (2002): 1278.

29. D. Scanlon et al., “The Impact of Health Report Cards on Managed Care Enrollment,” Journal of Health Economics 21, no. 1 (2002): 19–41; and N.D. Beaulieu, “Quality Information and Consumer Health Plan Choices,” Journal of Health Economics 21, no. 1 (2002): 43–63.

30. Marshall et al., “Attitudes to the Public Release of Comparative Information.”

31. E.C. Schneider and A.M. Epstein, “Use of Public Performance Reports,” Journal of the American Medical Association 279, no. 20 (1998): 1638–1642.

32. D.B. Mukamel and A.I. Mushlin, “Quality Care Information Makes a Difference: An Analysis of Market Share and Price Changes after Publication of the New York State Cardiac Surgery Mortality Reports,” Medical Care 36, no. 7 (1998): 945–954.

33. Hannan et al., “Improving the Outcomes”; Schneider and Epstein, “Use of Public Performance Reports”; S.T. Mennemeyer, M.A. Morrisey, and L.Z. Howard, “Death and Reputation: How Consumers Acted upon HCFA Mortality Information,” Inquiry (Summer 1997): 117–128; J.R. Gabel et al., When Employers Choose Health Plans: Do NCQA Accreditation and HEDIS Data Count? (New York: Commonwealth Fund, 1998); and P.S. Romano, J.A. Rainwater, and D. Antonius, “Grading the Graders: How Hospitals in California and New York Perceive and Interpret Their Report Cards,” Medical Care 37, no. 3 (1999): 295–305.

34. H.H. Schauffler and J.K. Mordavsky, “Consumer Reports in Health Care: Do They Make a Difference?” Annual Review of Public Health 22 (2001): 69–89; and K. Harris, J. Schultz, and R. Feldman, “Measuring Consumer Perceptions of Quality Differences among Competing Health Benefit Plans,” Journal of Health Economics 21, no. 1 (2002): 1–17.

35. Schneider and Lieberman, “Publicly Disclosed Information.”

36. Gabel et al., When Employers Choose Health Plans; and J.H. Hibbard et al., “Choosing a Health Plan: Do Large Employers Use the Data?” Health Affairs (Nov/Dec 1997): 172–180.

37. …of Cardiac Surgery Outcomes Data in New York: What Do New York State Cardiologists Think of It?” American Heart Journal 134, no. 1 (1997): 55–61; and E.C. Schneider and A.M. Epstein, “Influence of Cardiac-Surgery Performance Reports on Referral Practices and Access to Care: A Survey of Cardiovascular Specialists,” New England Journal of Medicine 335, no. 4 (1996): 251–256.

38. J. Kaplan et al., “Managed Care Report Cards: Evaluating Those Who Evaluate Physicians,” Managed Care Interface 13 (2000): 88–94.

39. Mannion et al., The Impact of Performance Measurement in the NHS—Report 1; Rosenthal et al., “Using Hospital Performance Data in Quality Improvement”; Longo et al., “Consumer Reports in Health Care”; J.M. Bentley and D.B. Nash, “How Pennsylvania Hospitals Have Responded to Publicly Released Reports on Coronary Artery Bypass Graft Surgery,” Joint Commission Journal on Quality Improvement 24, no. 1 (1998): 40–49; S.W. Dziuban et al., “How a New York Cardiac Surgery Program Uses Outcomes Data,” Annals of Thoracic Surgery 58, no. 6 (1994): 1871–1876; D.M. Berwick and D.L. Wald, “Hospital Leaders’ Opinions of the HCFA Mortality Data,” Journal of the American Medical Association 263, no. 2 (1990): 247–249; and J.A. Rainwater, P.S. Romano, and D.M. Antonius, “The California Hospital Outcomes Project: How Useful Is California’s Report Card for Quality Improvement?” Joint Commission Journal on Quality Improvement 24, no. 1 (1998): 31–39.

40. H.T.O. Davies, “Public Release of Performance Data and Quality Improvement: Internal Responses to External Data by U.S. Health Care Providers,” Quality in Health Care 10, no. 2 (2001): 104–110.

41. M.R. Chassin, E.L. Hannan, and B.A. DeBuono, “Benefits and Hazards of Reporting Medical Outcomes Publicly,” New England Journal of Medicine 334, no. 6 (1996): 394–398. Similar results were found in an unpublished audit of the British media’s coverage of hospital performance reports conducted by Simon Rinaldi and Martin Marshall in 2002.

42. P. Smith, “On the Unintended Consequences of Publishing Performance Data in the Public Sector,” International Journal of Public Administration 18, nos. 2 and 3 (1995): 277–310; and D. Dranove et al., Is More Information Better? The Effects of Report Cards on Health Care Providers (Cambridge, Mass.: National Bureau of Economic Research, 2002).

43. D. McCormick et al., “Relationship between Low Quality-of-Care Scores and HMOs’ Subsequent Public Disclosure of Quality-of-Care Scores,” Journal of the American Medical Association 288, no. 12 (2002): 1484–1490.

44. M.N. Marshall and M.O. Roland, “The New Contract: Renaissance or Requiem for General Practice?” British Journal of General Practice 52, no. 480 (2002): 531–532.
