M.D.'S DESCRIBE CONCERNS ABOUT PUBLIC RELEASE OF OUTCOME STATISTICS

The following material was excerpted from a Sounding Board article in the February 8, 1996, issue of The New England Journal of Medicine. Contributed by Mark R. Chassin, M.D., M.P.P., Mount Sinai School of Medicine; Edward L. Hannan, Ph.D., State University of New York (Albany); and Barbara A. DeBuono, M.D., M.P.H., New York State Department of Health. Copyright 1996 by the Massachusetts Medical Society.


In 1990, the [New York State Department of Health] made public the 1989 data on crude, expected, and risk-adjusted mortality rates and the volume of CABG [coronary-artery bypass grafting] procedures performed at each hospital, with the intention of releasing comparable data each year. A newspaper, Newsday, sued the department under the state's Freedom of Information Law to gain access to surgeon-specific data on mortality. We at the department resisted this lawsuit, because we believed that low annual numbers of operations per surgeon would probably result in overly variable mortality rates, with wide confidence intervals that could easily be misinterpreted. The department lost the lawsuit and was ordered to give the surgeon-specific data to Newsday, which published them in December 1991. Physicians reacted angrily. As a result, the Cardiac Advisory Committee voted overwhelmingly to recommend to hospitals that they submit data to the department in a fashion that would make it impossible to identify specific physicians.
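The statistical concern is easy to illustrate with a hypothetical example (the volume and death count below are invented for illustration, not drawn from the New York data). A surgeon who performs 50 CABG operations in a year and loses 2 patients has a crude mortality rate of 4.0 percent, but a 95 percent confidence interval around a rate based on so few cases remains very wide, as this minimal Python sketch shows:

    import math

    def wilson_ci(deaths, n, z=1.96):
        """95 percent Wilson score confidence interval for a binomial proportion."""
        p = deaths / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return center - half, center + half

    # Hypothetical low-volume surgeon: 2 deaths in 50 operations in one year
    lo, hi = wilson_ci(2, 50)
    print(f"crude rate 4.0%, 95% CI {lo:.1%} to {hi:.1%}")
    # -> 95% CI roughly 1.1% to 13.5%: compatible with both excellent
    #    and alarming performance

With numbers this small, a single additional death would move the crude rate by two full percentage points, which is why single-year surgeon rankings invite misinterpretation.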

We believed, however, that continuing to receive data identifying individual physicians was important to understanding how the skill of surgeons combines with other factors to affect outcomes after CABG. Previous research in New York had documented, for example, a strong inverse relation between a surgeon's volume of CABG procedures and the operative mortality associated with such surgery. We began intensive discussions with the Cardiac Advisory Committee and other physician leaders, seeking agreement on how the department could continue to disclose surgeon-specific data to the public while also addressing the main concerns of physicians, which we understood and shared.

In 1992, it was agreed that data on operative mortality would be compiled by surgeon for the most recent three years and attributed by name only to surgeons who performed at least 200 operations in a single hospital during that period. Members of the department and of the committee believed that this method would yield statistically stable mortality rates. Accordingly, surgeon-specific death rates have been released in this format since 1992.
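The effect of the 200-operation floor on statistical stability can be sketched with the same hypothetical arithmetic (the 200-operation threshold is from the article; the death counts are invented, and a 4.0 percent crude rate is assumed at both volumes):

    import math

    def wilson_ci(deaths, n, z=1.96):
        """95 percent Wilson score confidence interval for a binomial proportion."""
        p = deaths / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return center - half, center + half

    # The same 4.0% crude rate at two hypothetical volumes
    for deaths, n in [(2, 50), (8, 200)]:
        lo, hi = wilson_ci(deaths, n)
        print(f"n = {n:3d}: 95% CI {lo:.1%} to {hi:.1%}")
    # n =  50: 95% CI 1.1% to 13.5%
    # n = 200: 95% CI 2.0% to 7.7%

Pooling three years of cases per surgeon roughly halves the width of the interval in this example, which is the sense in which the resulting rates are "statistically stable."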

The initial press accounts disclosing the 1989 data were superficial and misleading, emphasizing numerical rankings of hospital performance. These news stories made differences between hospitals of a few tenths of a percentage point appear important when the differences were neither statistically nor clinically meaningful. The media failed to present the reason for the program's existence: the need to provide comparative data on the performance of cardiac-surgery programs in order to spur efforts to improve the quality of care.
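A quick calculation shows why differences of a few tenths of a percentage point are not statistically meaningful at typical program volumes (the hospital sizes and death counts below are assumptions for illustration, not figures from the article). A two-proportion z-test comparing crude rates of 2.8 percent and 3.2 percent at two hospitals of 600 cases each comes nowhere near significance:

    import math

    def two_prop_z(d1, n1, d2, n2):
        """Two-sided two-proportion z-test; returns (z, p-value)."""
        p1, p2 = d1 / n1, d2 / n2
        pooled = (d1 + d2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        return z, math.erfc(abs(z) / math.sqrt(2))

    # Hypothetical hospitals: 17 deaths in 600 cases (2.8%) vs. 19 in 600 (3.2%)
    z, p = two_prop_z(17, 600, 19, 600)
    print(f"z = {z:.2f}, p = {p:.2f}")  # z = -0.34, p = 0.73: far from significant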

Beginning in 1992, we made a major effort to educate members of the media. This effort was continuous and persistent, taking advantage of many opportunities besides the annual release of data. The majority of journalists responded positively. In the past several years, news stories have stressed how hospitals and surgeons have used the data to improve outcomes for their patients. Editorial comment has been favorable. Although this response has been gratifying, we believe the effort to educate the media and the public about this program must continue.

Problems with Data Release

Both the early newspaper reports and the initial publication of surgeon-specific death rates by Newsday elicited understandably unfavorable reactions from physicians and hospitals. They were concerned that patients might overreact to the reports and avoid hospitals or physicians associated with high reported mortality rates. They questioned the accuracy of the data on risk factors and whether the risk-adjustment process adequately accounted for patients at higher risk. More recently, there has been concern about whether some providers avoid high-risk patients in an attempt to lower their risk-adjusted mortality rates.

No movement of patients away from hospitals with high mortality rates has occurred, nor have patients moved to hospitals with low rates. In 1989, for example, 8.7 percent of all patients undergoing CABG were treated at hospitals whose risk-adjusted mortality rates were significantly higher than the state average, and 15.7 percent were treated at hospitals with significantly lower rates. The comparable figures in 1993 were 9.5 percent and 17.0 percent, respectively. Tracking the data on hospitals identified as high and low statistical outliers in 1990 yielded similar results.

We at the department share the concern about the accuracy of data on risk factors. Before the data are made public, each hospital's list of patients is checked against the statewide database of hospital discharges to ensure that all patients are included. The department also verifies accuracy through an independent audit, in which a sample of records from a sample of hospitals is compared with the information in the medical records. The first such audit was conducted in 1989; annual audits began in 1992. Each audit has produced similar findings: we found no statistically significant difference between the expected mortality rate calculated from the audited data and that calculated from the data originally submitted by the hospitals.

#
