Who does your doctor work for? In recent decades, the answer to that question has become increasingly muddled. As health care becomes more fragmented, increasingly a committee effort among institutional actors, the immediate decision-making responsibility that properly belongs to independent doctors is being lost.
The rise of the “hospitalist” exemplifies this trend. According to the American Hospital Association, the number of hospitalists working in the U.S. grew from fewer than 1,000 in 1996 to more than 44,000 in 2014. The reasons behind this broad shift are mostly financial and have little to do with patient welfare.
Prior to the introduction of the hospitalist model of care, hospitalization in the U.S. was considered just one event in the continuous relationship between patients and their primary physicians, who would attend to patients in the hospital and guide their care during their stay. Under this system, hospitals were keen to attract particular doctors to serve as attending physicians; patients often selected hospitals specifically because they had working relationships with their primary doctors. Importantly, these private physicians were chosen by patients rather than assigned to them.
Because these doctors often had longstanding relationships with patients, they had an immediate and thorough understanding of their medical histories. While the visiting primary doctors would consult with physicians employed by the hospitals, they were ultimately the ones responsible for crucial decisions made during hospitalization. That meant they were responsible not to patients they had never met before and might never see again, but to people and families they actually knew.
With the expansion of the hospitalist system, medical responsibility has become far more diffuse and impersonal. A hospitalist—a term reportedly coined in the mid-’90s by Drs. Robert Wachter and Lee Goldman—is a specific kind of hospital-employed physician charged with overseeing the care of hospitalized patients and, theoretically at least, making critical medical decisions. Hospitalists must swiftly coordinate with specialists, whose time is at a premium, while also attempting to gather complicated medical histories from patients or family members. The sort of cumulative knowledge that comes with years of caring for a particular patient is no longer inherent to hospital treatment, which can be a serious problem for patients with complicated conditions. Today, as I witnessed during a recent series of hospital stays by a close relative, decision making is often dispersed to the point where no one is really in charge and care proceeds by committee. In many cases, particularly high-risk situations, this can mean that decisions are critically delayed or that the course of action is muddled.
What led America’s hospital system down this road? For nearly two decades following the 1965 passage of Medicare, Congress reimbursed hospitals on a per diem basis. Under this system, hospitals had mostly free rein to calculate the costs of inpatient treatment. While the "cost reports" filed by hospitals were meant to reflect the actual price of hospital services, as Chris Pope has noted in National Affairs, Medicare reimbursements were also used to "bankroll major capital investments, expansions of capacity, staffing, or technology." Seemingly endless government funds, combined with a lack of accountability, incentivized hospitals to spend with abandon, and from 1970 to 1980, Medicare hospital payments increased by 88%.
In response, Congress sought to curtail Medicare’s hospital payments. Instead of retrospectively paying for a range of services at costs largely determined by hospitals, in 1983 Congress implemented a system in which Medicare would pay a flat rate based on a patient’s diagnosis. Regardless of how long patients stayed and how much hospitals actually ended up spending on them, hospitals were paid according to this diagnostic code. This was effective in reducing Medicare hospital payments, which declined by 52% between 1985 and 1990, and again by 37% between 1990 and 1995. It was around this time that the hospitalist model began to take hold.
This is at least in part because the new payment framework created a new set of incentives for hospitals. While the per diem Medicare reimbursement system had allowed for extended hospital stays and encouraged hospitals to treat very ill patients at length, hospital administrators now had reason to push for discharging patients swiftly: the so-called “quicker and sicker” paradigm. At the same time, reimbursements for hospital visits by primary doctors substantially decreased. This combination of financial pressures led to the old model’s gradual decline.
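The arithmetic behind that shift in incentives is simple, and a minimal sketch makes it concrete. The dollar figures below are purely hypothetical, chosen only to show the direction of the incentive rather than actual Medicare rates: under a per diem arrangement, each additional inpatient day adds revenue, while under a flat, diagnosis-based payment, each additional day only adds cost.

```python
# Illustrative sketch only: the dollar figures are hypothetical,
# not actual Medicare rates.

DAILY_COST = 1200      # hypothetical cost to the hospital per inpatient day
DAILY_PAYMENT = 1500   # hypothetical per diem reimbursement (pre-1983 style)
FLAT_PAYMENT = 9000    # hypothetical flat payment per diagnosis (post-1983 style)

def per_diem_margin(days: int) -> int:
    """Old model: revenue grows with each additional inpatient day."""
    return days * (DAILY_PAYMENT - DAILY_COST)

def flat_rate_margin(days: int) -> int:
    """Flat-rate model: payment is fixed, so each added day erodes the margin."""
    return FLAT_PAYMENT - days * DAILY_COST

for days in (4, 6, 8):
    print(f"{days}-day stay: per diem margin ${per_diem_margin(days):,}, "
          f"flat-rate margin ${flat_rate_margin(days):,}")
```

In this toy comparison, the per diem margin keeps growing as the stay lengthens, while the flat-rate margin shrinks and eventually turns negative. That is the financial logic behind "quicker and sicker."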
Dr. Joseph Ming Wah Li noted in the December 2008 issue of the American Medical Association’s Journal of Ethics that “cost savings was the predominant force behind the development of… hospitalist programs. Hospitals and payers recognized that hospitalist care was linked to decreased length of stays and resource use. Greater patient throughput correlated with hospital revenue, giving managed care and hospital executives reasons to develop hospitalist programs.”
The new system has not resulted in better patient outcomes. One 2017 JAMA study found that patients under the care of hospitalists are more likely to die within 30 days of leaving the hospital, and that such patients are less likely to be discharged home, as opposed to being discharged to a nursing or rehab facility. A 2020 study published in the Journal of General Internal Medicine found that, among hospitalized Medicare patients who had been released to nursing facilities, re-admission rates and Medicare costs were higher for those who had been under the care of hospitalists. A 2012 study in the Annals of Internal Medicine similarly found that patients under hospitalist care were more likely to be re-admitted within 30 days of discharge; it concluded that any cost saving from shorter hospital visits was being offset by the expense of swift re-admission. And a 2022 investigation published in JAMA noted the likelihood that the “increase in the number of hospitalists over time may be contributing to rising national costs related to hospital care.”
The replacement of the attending physician with the hospitalist is treated as unavoidable, an organic consequence of necessary policy shifts. But the legislation guiding these developments was not inevitable, nor was it driven by concern for patients. The hospitalist model appears to be increasing rather than curtailing hospital costs, and it has placed another wedge between patients and the doctors who know them best. In many ways, hospitalists are less accountable to the patients they serve than to the large facilities that employ them. In the past, doctors worked for and were primarily responsible to the patients who sought them out, rather than to administrative institutions. At moments of fraught medical decision making, of the sort that happen frequently in hospitals, there was one clear authority to take the credit or the blame.