Scientists Use AI to Predict Brain Cancer Outcomes
Glioblastoma is a swift and aggressive brain cancer, with an average life expectancy of about one year after diagnosis. It’s difficult to treat, in part because the cellular makeup of each tumor varies greatly from person to person.
“Because of the heterogeneity of this disease, scientists haven’t found good ways of tackling it,” said Olivier Gevaert, PhD, associate professor of biomedical informatics and of data science.
Doctors and scientists also struggle with prognosis, as it can be difficult to parse which cancerous cells are driving each patient’s glioblastoma.
But Stanford Medicine scientists and their colleagues recently developed an artificial intelligence model that assesses stained images of glioblastoma tissue to predict the aggressiveness of a patient’s tumor, determine the genetic makeup of the tumor cells and evaluate whether a substantial number of cancerous cells remains after surgery.
“It’s sort of a decision support system for the physicians,” said Yuanning Zheng, PhD, a postdoctoral scholar in Gevaert’s lab. Their team recently published a study in Nature Communications describing how the model could help doctors identify patients with cellular characteristics that indicate more aggressive tumors, and flag them for accelerated follow-up.
A new view on glioblastoma
Even after glioblastoma patients undergo surgery, radiation and chemotherapy, some cancer cells almost always remain. Nearly all glioblastoma patients relapse — some sooner than others.
Doctors and scientists typically use something called histology images, or pictures of dyed, diseased tissue, to help them identify tumor cells and design treatment plans. While the images often reveal the shape and location of cancer cells, they don’t paint a complete picture of the tumor. In recent years, a more advanced technique called spatial transcriptomics was developed. It reveals the location and genetic makeup of dozens of cell types, using specific molecules to identify genetic material in tumor tissue.
“The spatial transcriptomics data allows us to look at these types of tumors in a way that was not possible previously,” Gevaert said. “But it’s currently an expensive technology. It takes a few thousand dollars to generate data for a single patient.”
Gevaert and Zheng turned to AI to make the process more affordable, developing a model that draws on spatial transcriptomics to enhance basic histology images, creating a more detailed tumor map.
“The model showed which cells like to be together, which cells don’t want to communicate and how this correlates with patient outcomes,” Gevaert said.
Developing the model
The researchers trained the model on spatial transcriptomics images and genetic data from more than 20 glioblastoma patients. From these detailed pictures, the model learned which cell types, cell-cell interactions and gene expression profiles were linked to more favorable (or unfavorable) cancer outcomes.
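In broad strokes, this kind of training pairs each spatial transcriptomics measurement with the small patch of histology image surrounding it, so the model can learn to connect tissue appearance with cell identity. The sketch below illustrates that pairing step in Python; the file names, patch size and label column are illustrative assumptions, not the study’s actual pipeline.

```python
# Minimal sketch (not the authors' code): pair spatial transcriptomics spots
# with histology image patches to form a supervised training set.
# File names, patch size, and the "cell_type" column are assumed for illustration.
import numpy as np
import pandas as pd
from PIL import Image

PATCH = 128  # pixels per side around each transcriptomics spot (assumed)

def build_training_pairs(slide_path: str, spots_csv: str):
    """Crop a histology patch around every spot and keep its expression-derived label."""
    slide = Image.open(slide_path).convert("RGB")
    spots = pd.read_csv(spots_csv)  # expected columns: x, y, cell_type (assumed schema)
    patches, labels = [], []
    for _, row in spots.iterrows():
        x, y = int(row["x"]), int(row["y"])
        box = (x - PATCH // 2, y - PATCH // 2, x + PATCH // 2, y + PATCH // 2)
        patches.append(np.asarray(slide.crop(box), dtype=np.float32) / 255.0)
        labels.append(row["cell_type"])
    return np.stack(patches), np.array(labels)
```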
For example, the model found that when tumor cells resembling neuron support cells, called astrocytes, clustered together abnormally, patients seemed to have swifter, more aggressive cancers. Other studies have found that when astrocytes bunch together they communicate biological signals that drive tumor growth.
By revealing cellular patterns like this telltale clumping, the model may help drug developers design more effective treatments to target glioblastoma, Gevaert said.
Spatial transcriptomics data from the same glioblastoma patients also taught the model to identify different tumor cells in corresponding histology images with an accuracy of 78% or higher. Essentially, it used the cells’ shape to predict which genes are turned “on” and “off,” information that reveals a cell’s identity.
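Conceptually, this amounts to training a classifier that takes a histology patch as input and outputs the cell state the transcriptomics data would have assigned. The snippet below is a deliberately simplified stand-in: the published model is a deep network, while a linear classifier on flattened pixels is used here only to show the shape of the task, reusing the hypothetical helper from the previous sketch.

```python
# Minimal sketch (an assumption, not the published model): fit a simple
# classifier that predicts each patch's cell type and report its accuracy.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

patches, labels = build_training_pairs("slide_01.tif", "spots_01.csv")  # hypothetical files
X = patches.reshape(len(patches), -1)  # flatten pixel values into feature vectors

X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("cell-type accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```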
Zheng also hopes that clinicians can use this application to infer how much of a tumor was successfully removed during surgery, and how much remains within the brain. Their model showed that tumor cells with genetic traces of oxygen deprivation are often located in the center of a patient’s tumor. When these cells appeared in higher proportions, patients tended to have worse cancer outcomes. By illuminating the oxygen-deprived cells in stained surgical samples, the model can help surgeons understand how many cancerous cells may be left in the brain, and how soon to resume treatment after surgery, Zheng said.
Once the model was trained to identify the location of different cell types from basic images, researchers evaluated its utility on a larger, separate data set of histology images from 410 patients. From those images, the model began to infer cancer outcomes. The researchers saw that the model was able to identify cell patterns that corresponded to cancer aggression.
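One common way to connect image-derived cell patterns to outcomes, in a validation cohort like this, is to roll the per-patch predictions up into a single risk score per patient and test whether that score tracks survival. The sketch below shows that general idea; the score definition, the "hypoxic_tumor" label, the cohort table and its columns are hypothetical, and the Cox regression here is a stand-in for whatever analysis the study actually used.

```python
# Minimal sketch (assumed workflow, not the study's analysis): aggregate
# per-patch predictions into one slide-level score per patient, then test
# whether that score is associated with survival using a Cox model.
import pandas as pd
from lifelines import CoxPHFitter

def slide_risk_score(patch_predictions: pd.Series) -> float:
    """Fraction of patches called as an aggressive cell state (illustrative definition)."""
    return (patch_predictions == "hypoxic_tumor").mean()

# cohort.csv is a hypothetical table with columns: risk_score, survival_months, event
cohort = pd.read_csv("cohort.csv")
cph = CoxPHFitter().fit(
    cohort[["risk_score", "survival_months", "event"]],
    duration_col="survival_months",
    event_col="event",
)
cph.print_summary()  # hazard ratio for the image-derived risk score
```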
The idea is that the model could someday help physicians identify those patients who have cell patterning that indicates a more aggressive tumor and could pose an imminent threat, whether through relapse or rapid growth.
What’s next?
Zheng is excited about the model’s predictive potential, but it needs to be trained on more patients before it can be released to physicians, he said. He plans to refine the model so it can create even more granular cellular maps of glioblastoma tumors.
Right now, a proof-of-concept version of their model, GBM360, is available for researchers to upload diagnostic images and predict glioblastoma patient outcomes. Zheng emphasizes, however, that the model is still in a research phase, and that results from the algorithm should not guide patient care just yet.
Zheng hopes the algorithm could someday predict outcomes for other conditions, such as breast or lung cancers. “I think these multimodal data integrations can shape improvement of personalized medicine in the future.”
Researchers affiliated with the University of Granada, Ghent University, and the University of Freiburg contributed to the work.