Machine Learning Study Shows Potential to Optimize Medical Resource Sharing During Crises

Addressing supply shortages in an organization can be like hitting a moving target — the problem may shift along with immediate supply and demand as the situation evolves.

U.S. hospitals saw this dilemma play out in real time during the early waves of the COVID-19 pandemic: Shortages of vital supplies spiked in various states as the virus spread rapidly.

However, modern analytics models can be powerful tools in solving these types of complex problems — particularly models that incorporate machine learning, a form of artificial intelligence.

A team of Olin Business School researchers at Washington University in St. Louis tested algorithmic models for sharing ventilators, based on real data regarding ventilator needs across the U.S. during a three-week period early in the pandemic. Durai Sundaramoorthi, a professor of practice in data analytics, noted that during that period, there were instances where there was a critical need for ventilators in one state, while another state had a surplus of unused devices.

“Often, you have resources available in one place, with a lot of demand in another place, and there’s no real mechanism to share,” Sundaramoorthi said. “We thought this is certainly one of those problems where we can create a model to share resources in a systematic way.”

The team found that the most effective models to do this use deep Q-learning, a form of machine learning that allows the model to learn from many iterations of solving the problem. Their work, titled “Management of Resource Sharing in Emergency Response Using Data-driven Analytics,” was published recently in the journal Annals of Operations Research.

The ability to train the deep Q-learning model once and then reuse it gives it an edge over more traditional integer programming, which solves the problem only once, for a single set of conditions.

“The beauty of the Q-learning model is that once you train the (decision-making) agent, you can use it in the future,” Sundaramoorthi said. “In integer programming, if you want to do the same thing in the future, you have to solve the problem again.”

Training the agent

The Olin team measured the effectiveness of their models by three metrics: unmet demand for ventilators, the total distance the devices had to travel, and the total number of shipments between locations. The lower the agent kept each of these numbers, the greater its reward.
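In reinforcement-learning terms, those three metrics can be folded into a single reward signal the agent tries to maximize. The short Python sketch below illustrates that idea; the weights are assumptions chosen for the example, not values from the paper.

```python
# Illustrative reward in the spirit of the three metrics above.
# The weights are assumptions for this sketch, not the paper's values.
def reward(unmet_demand, total_distance_km, num_shipments,
           w_unmet=1.0, w_distance=0.001, w_shipments=0.05):
    # Lower unmet demand, shorter shipping distances, and fewer shipments
    # all make this value less negative, i.e. a higher reward for the agent.
    return -(w_unmet * unmet_demand
             + w_distance * total_distance_km
             + w_shipments * num_shipments)

# Example: 3 patients without ventilators, 1,200 km of shipping, 2 shipments.
print(reward(unmet_demand=3, total_distance_km=1200, num_shipments=2))  # roughly -4.3
```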

“The agent is trying and learning: For example, after we deliver one ventilator to one state, the agent says, ‘Do we have any cost?’” said Salih Tutun, a lecturer in data analytics who specializes in machine learning. “The agent tries again and again, like a baby, until the agent understands which pattern to follow.”

Tutun said the deep learning-based model is particularly helpful in situations such as the ventilator problem, in which the facts on the ground are changing.

“This is a dynamic system — we send ventilators from Missouri to California, and tomorrow, maybe Missouri needs ventilators, and another state is supposed to send ventilators to us,” he said. “An agent is supposed to learn this dynamic pattern.”
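Putting those pieces together, the sketch below (in Python, using PyTorch) shows the deep Q-learning loop in a deliberately tiny form. It is not the authors' model: the two-region world, the daily random demands, the reward terms and the network size are all illustrative assumptions. What it shows is the agent learning by trial and error over many simulated days, and, at the end, why a trained agent can simply be queried on a new scenario rather than re-solving an optimization problem, the advantage over integer programming noted earlier.

```python
# A deliberately tiny sketch of the deep Q-learning loop described above,
# not the authors' model. The two-region setup, daily random demands,
# reward terms and network size are all illustrative assumptions.
import random
import torch
import torch.nn as nn

ACTIONS = list(range(-4, 5))   # negative: ship B -> A, positive: ship A -> B
N_ACTIONS = len(ACTIONS)

def random_state():
    # State vector: [supply_A, demand_A, supply_B, demand_B]
    return torch.tensor([float(random.randint(0, 10)) for _ in range(4)])

def simulate_day(state, action_idx):
    # Move ventilators (capped by the sender's surplus), score the outcome,
    # then draw tomorrow's demands -- the "dynamic pattern" Tutun describes.
    supply_a, demand_a, supply_b, demand_b = state.tolist()
    want = ACTIONS[action_idx]
    if want >= 0:
        shipped = min(want, max(supply_a - demand_a, 0.0))
        supply_a, supply_b = supply_a - shipped, supply_b + shipped
    else:
        shipped = min(-want, max(supply_b - demand_b, 0.0))
        supply_a, supply_b = supply_a + shipped, supply_b - shipped
    unmet = max(demand_a - supply_a, 0.0) + max(demand_b - supply_b, 0.0)
    reward = -(unmet + 0.1 * shipped)      # penalize unmet demand and shipping
    next_state = torch.tensor([supply_a, float(random.randint(0, 10)),
                               supply_b, float(random.randint(0, 10))])
    return reward, next_state

# A small Q-network: given a state, estimate the value of each shipping action.
q_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.9

# Train once, over many simulated days ("again and again, like a baby").
state = random_state()
for _ in range(5000):
    if random.random() < 0.1:                         # explore occasionally
        action = random.randrange(N_ACTIONS)
    else:                                             # otherwise act greedily
        with torch.no_grad():
            action = int(q_net(state).argmax())
    reward, next_state = simulate_day(state, action)
    with torch.no_grad():                             # one-step Q-learning target
        target = reward + gamma * q_net(next_state).max()
    loss = nn.functional.mse_loss(q_net(state)[action], target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    state = next_state

# Reusing the trained agent: a new scenario needs only one forward pass,
# whereas an integer program would have to be solved again from scratch.
scenario = torch.tensor([9.0, 2.0, 1.0, 7.0])         # A has surplus, B has need
print("suggested shipment (positive = A to B):", ACTIONS[int(q_net(scenario).argmax())])
```

A production system would add standard refinements such as a replay buffer and a separate target network, and a far richer state covering every state's supplies, demands and shipping distances; the sketch only shows the shape of the approach.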

The team also found that it was more effective in some cases if hospitals sent out ventilators based entirely on need — a so-called “just ship” policy — rather than holding on to a core number of devices in case they needed them later for themselves.
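The trade-off between the two policies shows up even in a very small worked example. The numbers below, including the two-device reserve a hospital might hold back, are hypothetical; they exist only to illustrate why shipping strictly on need can leave fewer patients without a ventilator.

```python
# Hypothetical snapshot: region A has 4 spare ventilators,
# region B is 4 ventilators short of what its patients need.
def shippable(surplus, policy):
    # "just_ship": send every device beyond current local need.
    # "hold_back": keep a 2-device reserve in case of a later local surge.
    return surplus if policy == "just_ship" else max(surplus - 2, 0)

def unmet_after_sharing(policy, surplus_a=4, shortfall_b=4):
    shipped = min(shippable(surplus_a, policy), shortfall_b)
    return shortfall_b - shipped

print("just ship:", unmet_after_sharing("just_ship"), "patients left without a ventilator")  # 0
print("hold back:", unmet_after_sharing("hold_back"), "patients left without a ventilator")  # 2
```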

Tutun noted that organizations’ desire to retain ventilators contributed to shortages during the pandemic: “Because of this, people died, actually. It’s a very big challenge.”

The researchers said that in an emergency, an authority at a national level could intervene to implement a model such as the one they tested.

The sharing model could also be implemented on a smaller scale, at a regional or citywide level, Sundaramoorthi said. In addition, lecturer Samira Fazel Anvaryazdi noted that instead of sharing ventilators, neighboring hospitals could move patients to take advantage of unused devices.

Preparing for future emergencies

Taking these findings from the operations research realm to more practical health-care applications can be a slow process, the authors said.

“What I’ve learned from my prior research is that sometimes you do research and you’re excited about it, but nobody talks about it,” Sundaramoorthi said. “And two or three years later, somebody calls you because they saw it and they got excited.”

Tutun said many industries are interested in optimizing resources through reinforcement-learning models like these, citing companies such as Amazon, Tesla and Netflix.

“I believe that in the coming years, everybody is going to focus on this kind of research. This is the future,” Tutun said. “I believe we’re on the right path. That’s why I take great pleasure in working on this research.”

In addition to Sundaramoorthi, Tutun and Fazel Anvaryazdi, co-authors of the paper included Jifan Zhang and Mohammadhossein Amini at Olin Business School and Hema Sundaramoorthi at the School of Medicine.