Exploring the Intersection: Math and Machine Learning Forge a Two-Way Street
Traditionally, mathematicians jot down their formulas using paper and pencil, seeking out what they call pure and elegant solutions. In the 1970s, they hesitantly began turning to computers to assist with some of their problems. Decades later, computers are often used to crack the hardest math puzzles. Now, in a similar vein, some mathematicians are turning to machine learning tools to aid in their numerical pursuits.
“Mathematicians are beginning to embrace machine learning,” says Sergei Gukov, the John D. MacArthur Professor of Theoretical Physics and Mathematics at Caltech, who put together the upcoming Mathematics and Machine Learning 2023 conference, which takes place at Caltech December 10–13.
“There are some mathematicians who may still be skeptical about using the tools,” Gukov says. “The tools are mischievous and not as pure as using paper and pencil, but they work.”
Machine learning is a subfield of AI, or artificial intelligence, in which a computer program is trained on large datasets and learns to find new patterns and make predictions. The conference, the first put on by the new Richard N. Merkin Center for Pure and Applied Mathematics, will help bridge the gap between developers of machine learning tools (the data scientists) and the mathematicians. The goal is to discuss ways in which the two fields can complement each other.
“It’s a two-way street,” says Gukov, who is the director of the new Merkin Center, which was established by Caltech Trustee Richard Merkin.
“Mathematicians can help come up with clever new algorithms for machine learning tools like the ones used in generative AI programs like ChatGPT, while machine learning can help us crack difficult math problems.”
Yi Ni, a professor of mathematics at Caltech, plans to attend the conference, though he says he does not use machine learning in his own research, which involves the field of topology and, specifically, the study of mathematical knots in lower dimensions. “Some mathematicians are more familiar with these advanced tools than others,” Ni says. “You need to know somebody who is an expert in machine learning and willing to help. Ultimately, I think AI for math will become a subfield of math.”
One tough problem that may unravel with the help of machine learning, according to Gukov, is known as the Riemann hypothesis. Named after the 19th-century mathematician Bernhard Riemann, this problem is one of seven Millennium Problems selected by the Clay Mathematics Institute; a $1 million prize will be awarded for the solution to each problem.
The Riemann hypothesis centers on a formula known as the Riemann zeta function, which packages information about the prime numbers. If proved true, the hypothesis would provide a new understanding of how prime numbers are distributed. Machine learning tools could help crack the problem by offering a new way to search through far more examples and candidate patterns than mathematicians could examine by hand.
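For readers who want the formula itself, the zeta function and the statement of the hypothesis can be written down compactly. The lines below are the standard textbook formulation, not anything specific to a machine-learning approach:

```latex
% The Riemann zeta function, defined for Re(s) > 1 and extended to the rest of
% the complex plane (except s = 1) by analytic continuation:
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}
         = \prod_{p\ \mathrm{prime}} \frac{1}{1 - p^{-s}}
% The Euler product on the right is what ties the function to the primes.
% The Riemann hypothesis asserts that every nontrivial zero of \zeta(s)
% lies on the critical line Re(s) = 1/2.
```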
“Machine learning tools are very good at recognizing patterns and analyzing very complex problems,” Gukov says.
Ni agrees that machine learning can serve as a helpful assistant. “Machine learning solutions may not be as beautiful, but they can find new connections,” he says. “But you still need a mathematician to turn the questions into something computers can solve.”
Gukov has used machine learning himself to untangle problems in knot theory, the study of abstract knots. These are similar to the knots you might find on a shoestring, except that the ends of the string are joined to form a closed loop. Mathematical knots can be entwined in various ways, and mathematicians like Gukov want to understand their structures and how they relate to one another. The work has connections to other fields of mathematics, such as representation theory and quantum algebra, and even to quantum physics.
In particular, Gukov and his colleagues are working to solve what is called the smooth Poincaré conjecture in four dimensions. The original Poincaré conjecture, which is also a Millennium Problem, was proposed by mathematician Henri Poincaré early in the 20th century. It was ultimately solved in 2002 and 2003 by Grigori Perelman (who famously turned down his prize of $1 million). The problem involves comparing spheres to certain types of manifolds that look like spheres; manifolds are shapes that locally resemble ordinary flat space, even when their overall structure is more complicated. Gukov says the problem is like asking, “Are objects that look like spheres really spheres?”
The four-dimensional smooth Poincaré conjecture holds that, in four dimensions, all manifolds that look like spheres are indeed spheres. In an attempt to solve this conjecture, Gukov and his team developed a machine learning approach to evaluate so-called ribbon knots.
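Stated more precisely (this is the standard formulation of the conjecture, not a quotation from Gukov), “looking like” a sphere means being homeomorphic to one, while “being” a sphere means being diffeomorphic to one:

```latex
% Smooth Poincare conjecture in dimension four: any smooth 4-manifold M that is
% homeomorphic to the 4-sphere is also diffeomorphic to it.
M \cong_{\mathrm{homeo}} S^{4} \;\Longrightarrow\; M \cong_{\mathrm{diffeo}} S^{4}
```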
“Our brain cannot handle four dimensions, so we package shapes into knots,” Gukov says. “A ribbon is where the string in a knot pierces through a different part of the string in three dimensions but doesn’t pierce through anything in four dimensions. Machine learning lets us analyze the ‘ribbonness’ of knots, a yes-or-no property that has applications to the smooth Poincaré conjecture.”
“This is where machine learning comes to the rescue,” write Gukov and his team in a preprint paper titled “Searching for Ribbons with Machine Learning.” “It has the ability to quickly search through many potential solutions and, more importantly, to improve the search based on the successful ‘games’ it plays. We use the word ‘games’ since the same types of algorithms and architectures can be employed to play complex board games, such as Go or chess, where the goals and winning strategies are similar to those in math problems.”
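To give a rough sense of that game-playing loop, here is a minimal sketch of a reward-guided search that reinforces moves drawn from successful episodes. The knot “moves,” the scoring function, and the update rule are placeholder assumptions chosen for illustration; they are not the representation or architecture used in the preprint.

```python
import random

# Illustrative sketch only: a reward-guided search in the spirit of
# "learning from successful games." The moves, state, and scoring are
# placeholders, not the setup used in the Gukov et al. preprint.

MOVES = ["twist", "slide", "cross"]          # hypothetical knot moves

def score(state):
    """Placeholder reward: here, shorter diagrams count as 'better'."""
    return -len(state)

def play_game(weights, start_state, steps=20):
    """Play one 'game': apply moves sampled according to learned preferences."""
    state, history = list(start_state), []
    for _ in range(steps):
        move = random.choices(MOVES, weights=[weights[m] for m in MOVES])[0]
        history.append(move)
        if move == "twist" and state:
            state.pop()                      # toy effect of a simplifying move
        else:
            state.append(move)
    return score(state), history

def search(start_state, games=200):
    """Improve the move preferences using the most successful games so far."""
    weights = {m: 1.0 for m in MOVES}
    best = float("-inf")
    for _ in range(games):
        result, history = play_game(weights, start_state)
        if result > best:                    # reinforce moves from a winning game
            best = result
            for move in history:
                weights[move] += 0.1
    return best, weights

if __name__ == "__main__":
    print(search(["cross"] * 10))
```

Even in this toy version, the idea the quote describes is visible: each completed “game” nudges the search toward the kinds of moves that paid off previously.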
On the flip side, math can help in developing machine learning algorithms, Gukov explains. A mathematical mindset, he says, can bring fresh ideas to the development of the algorithms behind AI tools. He cites Peter Shor as an example of a mathematician who brought insight to computer science problems. Shor, who graduated from Caltech with a bachelor’s degree in mathematics in 1981, famously came up with what is known as Shor’s algorithm, a procedure that would allow future quantum computers to factor integers far faster than classical computers, thereby breaking digital encryption codes.
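The mathematical insight at the heart of Shor’s algorithm is a classical reduction: factoring a number N comes down to finding the multiplicative order of a random base modulo N, and only that order-finding step needs a quantum computer. The sketch below shows the reduction with the quantum step replaced by brute force; it is an illustration of the idea, not an implementation of the algorithm.

```python
from math import gcd

# Classical skeleton of Shor's algorithm: factoring n reduces to finding the
# multiplicative order r of a base a modulo n. The quantum speedup comes
# entirely from computing r quickly; here it is found by brute force.

def find_order(a, n):
    """Smallest r > 0 with a**r == 1 (mod n) -- the step a quantum computer accelerates."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    """Return a nontrivial factor of n, or None if this choice of a fails."""
    if gcd(a, n) != 1:
        return gcd(a, n)                 # lucky guess already shares a factor
    r = find_order(a, n)
    if r % 2 == 1:
        return None                      # odd order: try a different a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                      # trivial square root: try a different a
    return gcd(y - 1, n)                 # nontrivial factor of n

if __name__ == "__main__":
    print(shor_classical(15, 7))         # order of 7 mod 15 is 4, yielding the factor 3
```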
Today’s machine learning algorithms are trained on large sets of data. They churn through mountains of data on language, images, and more to recognize patterns and come up with new connections. But data scientists don’t always know how the programs reach their conclusions; the inner workings are hidden in a so-called “black box.” A mathematical approach to developing the algorithms would reveal what’s happening “under the hood,” as Gukov says, leading to a deeper understanding of how the algorithms work and, in turn, how they can be improved.
“Math,” says Gukov, “is fertile ground for new ideas.”
The conference will take place at the Merkin Center on the eighth floor of Caltech Hall.