July 25, 2024
WASHINGTON, D.C. — Following reports from whistleblowers and former employees at OpenAI voicing safety and security concerns, U.S. Senator Angus King (I-ME) joined four of his colleagues in calling on artificial intelligence (AI) research company OpenAI to honor its “public promises and mission” regarding essential safety standards. Since its founding in 2015, OpenAI has branded itself as a safety-conscious and responsible research organization.
In the letter to OpenAI CEO Sam Altman, the senators highlight recent reporting that OpenAI whistleblowers and former employees have sounded alarms about the company's focus on 'shiny products' over safety and societal impacts, about AI systems being deployed without adequate safety review and with insufficient cybersecurity, and about possible retribution against former employees who publicly air concerns. The senators also ask whether OpenAI's commitments on AI safety remain in effect and request that the company reform non-disparagement agreement practices that could deter whistleblowers from coming forward.
“We write to you regarding recent reports about OpenAI’s safety and employment practices. OpenAI has announced a guiding commitment to the safe, secure, and responsible development of artificial intelligence (AI) in the public interest. These reports raise questions about how OpenAI is addressing emerging safety concerns,” the Senators wrote.
The Senators continued, “Given OpenAI’s position as a leading AI company, it is important that the public can trust in the safety and security of its systems. This includes the integrity of the company’s governance structure and safety testing, its employment practices, its fidelity to its public promises and mission, and its cybersecurity policies.”
According to reports, the company has failed to honor its public commitment to allocate 20 percent of its computing resources to AI safety, has reassigned members of its long-term AI safety team, and has required departing employees to sign lifelong non-disparagement agreements under threat of forfeiting previously earned compensation.
On the letter, Senator King was joined by U.S. Senators Brian Schatz (D-HI), Ben Ray Luján (D-NM), Peter Welch (D-VT), and Mark Warner (D-VA).
Senator King has been a leading voice in fighting threats from emerging technology, having served as Co-Chair of the Cyberspace Solarium Commission, which has had dozens of its recommendations become law since its launch in 2019. As a member of the Senate Intelligence and Armed Services committees, Senator King has been a strong supporter of increased watermarking regulations. In a September 2023 open hearing of the Intelligence Committee, King asked Dr. Yann LeCun, a Professor of Computer Science and Data Science at New York University, what is technologically feasible in terms of implementing watermarks (a small icon or caption) so that users can discern between real and artificially created content.
The FY2024 National Defense Authorization Act includes a Senator King-led provision to evaluate technology, including applications, tools, and models, that can detect and watermark generative artificial intelligence content. He also joined the bipartisan Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024 (DEFIANCE Act), which would allow victims to sue, for up to $150,000, perpetrators who create and share fake visual depictions made to falsely appear authentic. During a Senate Energy and Natural Resources Committee hearing, he raised the question of what Congress and the private sector can do to combat fake content and misinformation online. Most recently, he introduced legislation to combat non-consensual explicit deepfake images online.
The full text of the letter can be found here and below.
+++
Dear Mr. Altman,
We write to you regarding recent reports about OpenAI's safety and employment practices. OpenAI has announced a guiding commitment to the safe, secure, and responsible development of artificial intelligence (AI) in the public interest. These reports raise questions about how OpenAI is addressing emerging safety concerns. We seek additional information from OpenAI about the steps that the company is taking to meet its public commitments on safety, how the company is internally evaluating its progress on those commitments, and the company's identification and mitigation of cybersecurity threats.
Safe and secure AI is widely viewed as vital to the nation's economic competitiveness and geopolitical standing in the twenty-first century. Moreover, OpenAI is now partnering with the U.S. government and national security and defense agencies to develop cybersecurity tools to protect our nation's critical infrastructure. National and economic security are among the most important responsibilities of the United States Government, and insecure or otherwise vulnerable AI systems are not acceptable.
Given OpenAI’s position as a leading AI company, it is important that the public can trust in the safety and security of its systems. This includes the integrity of the company’s governance structure and safety testing, its employment practices, its fidelity to its public promises and mission, and its cybersecurity policies. The voluntary commitments that you and other leading AI companies made with the White House last year were an important step towards building this trust.
We therefore request the following information by August 13, 2024:
Thank you very much for your attention to these matters.
Sincerely,
###