Why Congress’s AI Plan Was Doomed From the Start

Lawmakers took a year of study to avoid repeating the same mistakes they made with social media. But their report suggests they've once again fallen behind the industry they say they want to regulate.
Senate Majority Leader Chuck Schumer (D-NY) speaks during a news conference at the U.S. Capitol on May 15, 2024 in Washington, DC. Kent Nishimura/Getty Images

After close to a year of study, Majority Leader Chuck Schumer and a bipartisan group of senators on Wednesday released their “roadmap” to regulating artificial intelligence—a report, the working group said, designed to “stimulate momentum” for AI legislation that will “ensure the United States remains at the forefront of innovation in this technology.”

“Harnessing the potential of AI demands an all-hands-on-deck approach,” Schumer told reporters as the senators released their review. “And that’s exactly what our bipartisan AI working group has been leading.”

The report, broad in scope but often thin on details, calls for $32 billion in emergency spending—to put safeguards around the rapidly developing technology, but also to “avoid creating a policy vacuum that China and Russia will fill.” The goal, the senators say, is not to establish a sweeping package regulating the technology, but to inform individual bills addressing AI as it relates to national security, job loss, and, most immediately, the risks it poses to the upcoming election. “If we’re not careful,” Schumer warned at a Senate Rules and Administration Committee markup of three AI election bills Wednesday, “AI has the potential to jaundice or even totally discredit our election systems.”

But the prospects for such legislation remain unclear, particularly in a divided Washington. The approach by Schumer, Democrat Martin Heinrich, and Republicans Todd Young and Mike Rounds also raises questions about how meaningful any regulations that come from the report would be. Indeed, while lawmakers have sought to avoid repeating the same mistakes they made in their handling of social media, the working group’s framework echoes that faulty effort. “Congress failed to meet the moment on social media,” Connecticut Democrat Richard Blumenthal said a year ago, as OpenAI CEO Sam Altman testified before the Senate Judiciary Committee. “Now, we have the obligation to do it on AI before the threats and the risks become real.”

Schumer's cohort doesn’t ignore those risks, but they also don’t put much forward in terms of regulations to mitigate them—falling back, instead, on some boilerplate language about lawmakers’ “dedication to harnessing the full potential of AI while minimizing the risks of AI in the near and long term.” If such platitudes sound similar to what you might hear from Altman and other AI evangelists, that makes sense: Just as lawmakers’ approach to social media was guided by Mark Zuckerberg and others with an interest in stifling more significant regulation, AI proponents and the tech lobby seemed to wield significant influence over the report—and to support the finished product.

“This road map leads to a dead end,” Evan Greer, director of the advocacy group Fight for the Future, told the Washington Post, adding that it was a “pathetic” report. “They heard from experts about the urgency of addressing AI harms and then paid lip service to that while giving industry most of what they want: money and ‘light touch’ regulatory proposals.”

It’s true, as Schumer has said, that developing substantial regulations for an evolving technology like this isn’t easy: “We’ve never ever dealt with anything like this before,” he told reporters Tuesday. But just because it’s challenging doesn’t mean it’s impossible: Last year, the European Union agreed to the AI Act, a set of rules meant to address the most significant risks posed by the technology, including the spread of misinformation and the threat of automation. There are questions, of course, about how effective that outline will be. But it is a stronger regulatory step than any that has been taken in the U.S., where the tech industry itself seems still to be sitting in the driver’s seat. The AI roadmap is “but another proof point of Big Tech’s profound and pervasive power to shape the policymaking process,” as Accountable Tech Co-Founder and Executive Director Nicole Gill put it Wednesday. “Lawmakers must move quickly to enact AI legislation that centers the public interest and addresses the damage AI is currently causing in communities all across the country.”