OpenAI Welcomes Carnegie Mellon’s Kolter to Board of Directors

OpenAI has announced the appointment of Carnegie Mellon University Professor Zico Kolter to its board of directors.

Kolter will also join the board’s Safety and Security Committee. The committee is responsible for making recommendations on critical safety and security decisions for all OpenAI projects.

“I’m excited to announce that I am joining the OpenAI board of directors,” said Kolter, the head of the Machine Learning Department in CMU’s School of Computer Science. “I’m looking forward to sharing my perspectives and expertise on AI safety and robustness to help guide the amazing work being done at OpenAI.”

Kolter’s work predominantly focuses on artificial intelligence safety, alignment and the robustness of machine learning classifiers. His research and expertise span new deep network architectures, methodologies for understanding the influence of data on models, and automated methods for evaluating AI model robustness, making him an invaluable technical voice in OpenAI’s governance.

“Zico adds deep technical understanding and perspective in AI safety and robustness that will help us ensure artificial general intelligence benefits all of humanity,” said Bret Taylor, chair of OpenAI’s board.

Kolter has been a key figure at CMU for 12 years, with affiliations in the Computer Science Department, the Software and Societal Systems Department, the Robotics Institute, the CyLab Security and Privacy Institute, and the College of Engineering’s Electrical and Computer Engineering Department. He completed his Ph.D. in computer science at Stanford University in 2010, followed by a postdoctoral fellowship at the Massachusetts Institute of Technology from 2010 to 2012. Throughout his career, he has made significant contributions to the field of machine learning, authoring numerous award-winning papers at prestigious conferences such as the Conference on Neural Information Processing Systems (NeurIPS), the International Conference on Machine Learning (ICML) and the International Conference on Artificial Intelligence and Statistics (AISTATS).

Kolter’s research includes developing the first methods for creating deep learning models with guaranteed robustness. He also pioneered techniques for embedding hard constraints into AI models using classical optimization within neural network layers. More recently, in 2023, his team developed methods for automatically assessing the safety of large language models, demonstrating that existing model safeguards can be bypassed through automated optimization techniques. Alongside his academic pursuits, Kolter has worked closely with industry throughout his career, formerly as chief data scientist at C3.ai, and currently as chief expert at Bosch and chief technical adviser at Gray Swan, a startup specializing in AI safety and security.
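To give a flavor of the constraint-embedding idea mentioned above, the sketch below is a minimal, illustrative PyTorch example (not Kolter’s own OptNet or any published code of his): a network layer whose forward pass solves a small quadratic program, the Euclidean projection onto the probability simplex, so its outputs are guaranteed to be non-negative and sum to one by construction.

```python
# Illustrative sketch only: a layer that enforces hard constraints by solving
# a small optimization problem (projection onto the probability simplex) in
# its forward pass. Not an implementation of any specific published method.
import torch
import torch.nn as nn


def project_to_simplex(v: torch.Tensor) -> torch.Tensor:
    """Project each row of v onto the probability simplex.

    Solves min_x ||x - v||^2 subject to x >= 0 and sum(x) = 1, a classic
    quadratic program with a closed-form, sort-based solution.
    """
    n = v.shape[-1]
    u, _ = torch.sort(v, dim=-1, descending=True)
    cssv = torch.cumsum(u, dim=-1) - 1.0
    k = torch.arange(1, n + 1, device=v.device, dtype=v.dtype)
    cond = u - cssv / k > 0                      # True on a prefix of indices
    rho = cond.to(v.dtype).cumsum(dim=-1).argmax(dim=-1, keepdim=True)
    theta = cssv.gather(-1, rho) / (rho + 1).to(v.dtype)
    return torch.clamp(v - theta, min=0.0)


class SimplexConstrainedHead(nn.Module):
    """Linear layer followed by an exact projection onto the simplex."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return project_to_simplex(self.linear(x))


if __name__ == "__main__":
    head = SimplexConstrainedHead(8, 5)
    y = head(torch.randn(4, 8))
    print(y.sum(dim=-1))  # each row sums to 1; all entries are non-negative
```

Because the projection is itself the solution of a constrained optimization problem, the constraints hold exactly for every input rather than being encouraged approximately through a penalty term, which is the general spirit of optimization-as-a-layer approaches.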