Carnegie Mellon University’s Artificial Intelligence Experts Provide Briefings to Policymakers
This September, Carnegie Mellon University researchers — leaders in developing artificial intelligence (AI) tools to benefit society — lent their expertise to a series of policy briefings in Washington, D.C., and at the 78th United Nations General Assembly in New York.
As AI continues to rapidly evolve, CMU is leading national conversations around its fair use and development.
The Need for Transparency in Artificial Intelligence
Ramayya Krishnan, dean of CMU’s Heinz College of Information Systems and Public Policy, appeared before the U.S. Senate Committee on Commerce, Science and Transportation’s Subcommittee on Consumer Protection, Product Safety and Data Security on Tuesday, Sept. 12. Chaired by U.S. Sen. John Hickenlooper, the hearing explored how to increase the transparency of AI technologies for consumers, identified uses of AI that are beneficial or “high-risk,” and evaluated the potential impact of policies designed to increase trust in this transformational technology.
Krishnan presented senators with four recommendations:
- Require all federal agencies to use the NIST (National Institute of Standards and Technology) AI Risk Management Framework during the design, development, procurement, use and management of their AI use cases.
- Require all AI models (open source and closed source) that produce content to label that content with watermarking and provenance technology, and to provide a tool that detects the label (a minimal sketch of one such labeling scheme follows this list).
- Require standardized documentation that, like audited financial statements, would be verifiable by a trusted third party (e.g., an auditor). Akin to nutrition labels on food packaging, this documentation would make clear what went into producing the model.
- Invest in a trust infrastructure, such as an AI lead response team (ALRT), to connect vendors, AI system deployers and users. The ALRT would catalog incidents, record vulnerabilities, test and verify models, recommend solutions and share best practices to minimize systemic risks as well as harm stemming from vulnerability exploits. ALRT is modeled after the computer emergency response team (CERT) established by the government at Carnegie Mellon in the late 1980s in response to cybersecurity vulnerabilities and threats.
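The labeling recommendation pairs two pieces: a label attached at generation time and a tool anyone can run to detect it. As a loose illustration only — not any specific standard or Krishnan’s proposal — the Python sketch below attaches a signed provenance manifest to generated content and verifies it. The key, function names (`attach_provenance`, `detect_label`) and manifest fields are all hypothetical; a production system would use public-key signatures and open standards such as C2PA rather than a shared secret.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical shared secret, for illustration only; a real provenance
# system would rely on public-key signatures, not a symmetric key.
PROVENANCE_KEY = b"example-key-not-for-production"


def attach_provenance(content: str, model_id: str) -> dict:
    """Bundle content with a signed manifest identifying the producing model."""
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    manifest = {"model_id": model_id, "content_sha256": digest}
    payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
    signature = hmac.new(PROVENANCE_KEY, payload, hashlib.sha256).digest()
    manifest["signature"] = base64.b64encode(signature).decode("ascii")
    return {"content": content, "manifest": manifest}


def detect_label(labeled: dict) -> bool:
    """The 'detection tool': check the manifest is authentic and matches the content."""
    manifest = dict(labeled["manifest"])
    signature = base64.b64decode(manifest.pop("signature"))
    payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
    expected = hmac.new(PROVENANCE_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(signature, expected):
        return False  # manifest was forged or altered
    digest = hashlib.sha256(labeled["content"].encode("utf-8")).hexdigest()
    return digest == manifest["content_sha256"]  # content unchanged since labeling


labeled = attach_provenance("An AI-generated paragraph.", model_id="example-model-v1")
print(detect_label(labeled))  # True
labeled["content"] = "Tampered text."
print(detect_label(labeled))  # False: label no longer matches the content
```

A metadata manifest like this is only one approach; statistical watermarks embedded in the output itself are another, and the recommendation as presented leaves the specific mechanism open.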