
Implementing IHC in a Health Plan

In document Wired for Health and Well-Being (Page 86-89)

Janice Rodriguez, an accomplished project manager, has just been recruited from an interactive health software company to be the regional chief information officer for a large progressive health plan with several facilities in the area. The CEO of the health plan wants the organization to be on the “cutting-edge” of using IHC applications to improve member health and to reduce costs of delivering care.

Eager to please, Janice embarks on an ambitious plan to eventually implement a comprehensive suite of Web-based applications to provide seamless health information and support for members, allow electronic member transactions, provide clinical decision support for members and clinicians, and facilitate electronic communication between clinicians and members. For the first phase of her plan, she focuses on selecting and implementing a shared decision-support application for members and clinicians.

Janice ponders whether to outsource the development of the application or to do it in-house. After an assessment of her department’s resources and an environmental scan of available applications, she decides on the former option. She surmises that, with a pilot project, it is better to minimize uncertainty. The health plan’s IT department has some expertise in general Web site development, but they are unfamiliar with decision-support applications. If she outsources to a company with an established product, she is assured of prompt delivery and can more easily predict costs. In addition, she is worried about potential liability issues. Relying on an established and rigorously evaluated application, she concludes, may help in that regard.

She sends out notice of what she is looking for to her former company and a few other contacts. Within a week, Janice is deluged with calls from more than a dozen companies that want to sell their product. Each of them claims that their product is of high quality and very effective.

Janice understands that she needs to base her decision on more than surface appeal or which salesperson can do a better presentation, because her boss is looking for bottom-line benefits to the health plan, such as increased member satisfaction and retention. She promptly assembles a team of people from the IT and clinical research departments with experience in evaluation. They review the literature on evaluation of IHC, talk to colleagues in other plans, and brainstorm, coming up with a list of questions for themselves and prospective vendors. The questions include: What are reasonable objectives for such an application within the context of the organization? Was the application evaluated with regard to these objectives, and what were the results? Has it been shown to be cost-effective? Is the product flexible and adaptable to the plan’s changing needs?

After reviewing additional information from the vendors, the team decides to purchase a hundred licenses of the application from a company that presented very convincing data on effectiveness, published in a respected scientific journal. Janice and the team make plans to deploy the application in the primary care department in one of the plan’s facilities because their medical director happens to be a “techie” and is very enthusiastic about the program. Things look positive; the application runs well with their existing clinical information system, the training session is well attended, and several of the primary care staff say that they are impressed with the technology.

Janice and her team check in weekly to see how things are going. After a month, only a small proportion of the clinicians has used the application with their patients. Janice interviews the staff. “I don’t want to support using something that may eventually replace me,” says one physician. “I like the program but I don’t have time to use it during my appointments…I am also afraid that it will actually increase the amount of time I have to spend with some patients,” says another. Yet a third comments, “It’s too much of a hassle for me to use it because it crashes my personal scheduler program when I do. When I ask our IT administrator to help, he says he will not support it because he is too busy troubleshooting the clinical information system that the plan just purchased last year.”

Janice’s team realizes that they have formidable implementation problems on their hands, so they start again from square one, and recruit more troops. They fan out, interview additional stakeholders, hold extensive “town meetings,” and form a plan-wide implementation team.

The team is divided into a number of smaller groups that are dedicated to a specific aspect of the implementation process. The development team takes responsibility for assessing the value of available applications, rating them according to compatibility with the health plan, and modifying and designing interfaces to reflect the organizational culture. The implementation working group will oversee the process of defining constituencies and determining how best to represent their needs in the dissemination process. The operations working group considers how the application will be merged with existing clinical care, how quality will be assessed, and how the application will be maintained. The evaluation group defines how the impact of the program will be determined, designs usability testing protocols, analyzes potential return on the investment, and determines how study results will be disseminated. The technology working group hammers out the “nuts and bolts” of the installation and designs the architecture to support the smooth operation of the application. Finally, a planning working group is formed to examine future opportunities and directions for new applications and development.

Meanwhile, a communication plan is set into action so that support for the program can be gathered from the stakeholders and opinion leaders throughout the organization. Key decisionmakers are identified to ensure continued financial support; they are encouraged to join the working groups and inject their concerns into the development process. Clinical staff members from all the facilities are asked to visit and report on changes in medical care and health plan policies. In addition, organization leaders continually address the working groups to inform them of organizational changes so that the applications are consistent with organizational goals.

Finally, the working groups meet every few months with their customers, including members of the health plan, clinical staff, and plan administrators. They define and refine the dissemination process, provide feedback on the success of the program, and brainstorm on future options for added technology.

A year later, a survey shows that 70 percent of members who have used the decision-support application are highly satisfied with their experience and believe that it not only helped them make a better decision, but also improved their relationship with their health care provider. A similar survey among providers shows that more than half have already adopted the application into their routine practice and believe that it improves the quality of care they deliver.

Key Considerations

Health care organizations and other potential purchasers should carefully assess their internal capabilities for application development and maintenance before attempting to develop their own applications.

Purchasing decisions require careful consideration of the objectives for the application and evidence of application effectiveness.

Substantial attention to, and planning of, the implementation process are critical. The process should be consistent with the organization’s structure and culture, and address institutional barriers.

Continuous evaluation and quality improvement are essential.

Successful implementation is a team process; involvement of, and communication with, all stakeholders are necessary.

Consumers

Consumers, perhaps more than any other stakeholder group, vary in terms of their ability and experience in evaluating applications, and, thus, may be the most “vulnerable” stakeholder group. This is because a “consumer” may be a scientist or health professional by training, someone trained in a different field, or someone who has no formal education. Therefore, of all the stakeholders, consumers are probably at greatest risk of potential harm and need to be cautious because of the general lack of disclosure of information about developers and sponsors of IHC applications. Many applications are currently being used with no or limited guidance from a health professional. Most consumers are concerned about being able to select and use the best applications for their needs and require guidance and tools for doing so (Gustafson, Robinson et al., 1999). A “consumer’s guide” for interpreting evaluation results reported by developers is presented in Appendix E.