Mihm: Why the US doesn’t have enough hospital beds
Hospitals often speak of what’s called “surge capacity” — the ability to absorb a sudden influx of patients because of a terrorist attack, a natural disaster, or even, yes, a pandemic. Given the possible influx of patients sickened by the new coronavirus, how much of a surge can U.S. hospitals accommodate?
Not as much as you might think. For years, cost-conscious hospitals have emulated the lean, just-in-time principles that have revolutionized manufacturing. The result has been a health care system that is far more efficient but unprepared to handle a sudden influx of seriously ill patients.
Over the first half of the 20th century, hospitals expanded, adding beds at a steady clip. By the late 1950s, the United States had approximately nine hospital beds for every 1,000 people. Many of these beds went unused in normal times, but they came in handy during periods of extreme stress.
For example, in the winter of 1957-58, an influenza epidemic swept the country. It was the worst outbreak since the famous pandemic of 1918, yet hospitals absorbed the surge without serious strain. Even in New York City, which was harder hit than most, hospitals met the increased demand by adding a few extra beds. The epidemic came and went with little damage: wards never overflowed, and things kept running.
In the 1960s and 1970s, hospitals expanded their facilities, thanks to government subsidies that encouraged capital spending. This eventually created a surplus of hospital beds at precisely the moment when growing concern over medical costs began to fuel a search for ways to limit hospital stays or eliminate them altogether.
The turning point came in 1983, when Congress transformed how Medicare reimbursed hospitals. Up to that point, the government had followed a fee-for-service arrangement: if a Medicare patient occupied a bed, the hospital could bill the government for whatever charges the patient incurred. If anything, this arrangement gave hospitals an incentive to be inefficient, keeping patients in beds longer than necessary.
By contrast, under the new system introduced that year, a patient’s particular diagnosis was translated into a flat payment for their entire hospital visit. This unleashed market forces on hospitals as they began billing by the case, not by the day.
The effects of this change can be glimpsed in data on occupancy rates at the nation’s hospitals. From 1945 to 1983, hospitals in the United States kept approximately 75% of beds in use, leaving a quarter ready for unanticipated surges. But after Congress acted, occupancy rates began to plunge, reaching 65% within two years and bottoming out at around 60% the following decade.
Hospitals began consolidating and cutting capacity, just as Congress intended. The rise of managed care and other cost-conscious models also moved much of the business of delivering health care out of hospitals and into doctors’ offices and other settings.
As the need for hospital care declined, the number of beds per capita declined, too. By the early 1990s, the number of beds per 1,000 population had fallen by 50% from its peak. Over the next decade, a growing number of analysts began to worry about the unthinkable: a future shortage of hospital beds. The aging of the larger population — especially baby boomers — looked particularly worrisome.