
September 07, 2023

With Increasing Misinformation Online, King Suggests that Americans Need Clear Labeling to Distinguish the Source of Content

WASHINGTON, D.C. – U.S. Senator Angus King today shared a productive exchange with a top scientist about the urgent need for consistent labeling on AI-generated content to help Americans assess the authenticity of the information they are consuming. In a hearing of the Senate Energy and Natural Resources Committee, King asked Dr. Rick L. Stevens, Associate Laboratory Director at Argonne National Laboratory, what both Congress and private companies need to do to combat the threat of fake information and misinformation. Senator King successfully added a provision to the FY2024 National Defense Authorization Act to require that both the private and public sectors develop effective “watermarks” (a small icon or caption) when companies detect enhanced or adjusted content.

King began, “The word watermarking was used earlier. I don't want the government deciding what is true and not true. That is just not the direction we want to go. It is not consistent with our principles and values. On the other hand, it seems to me people who use information on the internet or otherwise have a right to know its source. You mentioned watermarking. What we are talking about, for me, is labeling: this film or this article was produced with AI. That would be important information for people to have in assessing the validity of what they are seeing. How close are we to having that technology?”

“We know how to do it. It is a question of getting agreement that AI companies would use some kind of common approach, not some proprietary approach, and then also how we enforce or require it,” Stevens responded.

King asked, “Could Congress require of the platforms that it be labeled?”

Stevens responded, “That is the current approach. I think it is flawed in the sense that there would ultimately be many hundreds or thousands of generators of AI, some of which will be the big companies like Google and OpenAI. But there will be many open models produced outside the United States that would not be bound by a U.S. regulation. I think what we are ultimately going to end up having to do is validate real sources. We could have a law that says watermark AI-generated content, but a rogue player outside the U.S., like China or Russia, would not be bound by that and could produce a ton of material without those watermarks, which could perhaps pass the test. We are going to have to be more nuanced, or more strategic, and authenticate real content down to the source. Whether it is true or not is a separate issue; the question is whether it was produced by real humans in a real meeting. You know that is real versus something that is synthetic.”
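Stevens's idea of authenticating real content “down to the source” can be illustrated with a minimal cryptographic sketch. This is not any system referenced in the hearing; the publisher key and HMAC scheme below are assumptions chosen for brevity (real provenance standards such as C2PA use public-key signatures and richer metadata):

```python
import hashlib
import hmac

# Hypothetical key a publisher would keep private; stands in for a real
# signing credential in this sketch.
PUBLISHER_KEY = b"example-publisher-secret"

def sign_content(content: bytes, key: bytes = PUBLISHER_KEY) -> str:
    """Produce a provenance tag binding the content to the source key."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str, key: bytes = PUBLISHER_KEY) -> bool:
    """Check that the content still matches the tag issued by the source."""
    expected = hmac.new(key, content, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, tag)

article = b"Committee hearing transcript, September 7, 2023"
tag = sign_content(article)
assert verify_content(article, tag)             # authentic, unmodified content
assert not verify_content(b"edited text", tag)  # altered content fails the check
```

The design point mirrors Stevens's argument: rather than trusting that every generator labels synthetic output, a verifier checks a tag that only the genuine source could have produced, so untagged or tampered material is simply unverified.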

According to a Forbes study, one major area of concern when it comes to AI and content is accuracy: nearly 76% of people worry that AI content will mislead or misinform them.

Senator King is seen as a national leader in fighting threats from technology, having served as Co-Chair of the Cyberspace Solarium Commission, which has had dozens of recommendations become law. As a member of the Senate Energy and Natural Resources and Armed Services committees, Senator King has been a strong supporter of increased watermarking regulations. The FY2024 National Defense Authorization Act includes a Senator King-led provision to evaluate technology, including applications, tools, and models, to detect and watermark generative artificial intelligence.

###
