George Mason University’s AI Strategies team holds inaugural summer institute
The transdisciplinary AI Strategies team examines how cultural values and institutional policies shape AI infrastructures in national and global contexts.
In May, a cohort of 20 selected AI and Tech fellows gathered at Mason Square for the first AI and Tech Policy Summer Institute hosted by George Mason University’s AI Strategies team. The event, also sponsored by Mason’s Institute for Philosophy and Public Policy, the Schar School of Policy and Government, the Institute for Digital Innovation, and the Center for Advancing Human-Machine Partnership, brought together scholars, industry experts, government officials, and civil society activists from multiple academic disciplines, backgrounds, and research interests.
The institute introduced Mason master’s and doctoral students in the social sciences, humanities, and select professional schools to the fundamental engineering concepts behind how artificial intelligence (AI) works, the policy and regulatory frameworks evolving to govern AI, debates on AI ethics, and security, economic, and human rights concerns from the local to the global level.
“AI now impacts every kind of work and even play, from writing an email to ordering a book,” said Schar School of Policy and Government Distinguished University Professor J. P. Singh, who leads the AI Strategies team. “The knowledge from the summer institute is important for students who will eventually be responsible for using and controlling AI, which is already considered an existential threat in some quarters.
“The institute demystified how AI works, whether in ‘recommender systems’ that prompt words in emails, or algorithms that drive users on social media,” Singh continued. “The interdisciplinary work in this field is just beginning.”
AI Strategies is funded by a three-year, $1.39 million Department of Defense (DoD) grant to study the economic and cultural determinants of global artificial intelligence infrastructures and to describe their implications for national and international security. The grant was awarded by the DoD’s Minerva Research Initiative, a joint program of the Office of Basic Research and the Office of Policy that supports social science research focused on expanding basic understanding of security.
Researchers from the College of Humanities and Social Sciences’ Institute for Philosophy and Public Policy (i3p) have played a key role in the project, from the pre-proposal stage to the present, providing insight on the ethical, social, and policy implications of emerging technologies. At the institute, i3p Acting Director Jesse Kirkpatrick, a member of the AI Strategies team, presented “Responsible Innovation and National Security,” which addressed existing efforts, challenges, and opportunities in responsible AI and drew on his experience in responsible AI research, policy, and practice across academia, industry, and government.
“It’s no secret that there is a vital need for transdisciplinary mentorship and training in AI for our graduate students. What may be less obvious is that this [training] must occur across disciplines,” said Kirkpatrick, who is a research associate professor of philosophy. “By engaging nearly 30 speakers and faculty, our 20 AI & Tech fellows got just that—a broad and deep look at the cutting-edge of AI, inclusive of numerous perspectives.”
Kirkpatrick said that from the composition of the research team to the design and structure of the project and its research outputs, the people, process, and products have been thoroughly transdisciplinary. “This is a testimony to the team’s leadership; the support we have from our respective academic units, schools, and colleges; and the wonderful constellation of research centers and institutes,” Kirkpatrick said.
The cohort of fellows will participate in a year-long fellowship through Mason’s Center for Advancing Human-Machine Partnership.