Rachel Klarman is executive director of Governing for Impact. Adam Conner is vice president of technology policy at the Center for American Progress.
If there’s one thing everyone seems to agree on when it comes to combating the foreseeable harms of AI, it’s that we should start with rigorous enforcement of the law.
But which existing laws and regulations apply, and how can they best be used to address the challenges at hand? Too little work has gone into answering that question, beyond the laudable executive orders the White House issued last year, leaving regulators in the dark while important tools in their arsenal go unused.
Until now.
Our organizations, the Center for American Progress (CAP) and Governing for Impact (GFI), have spent months answering these questions, and in June we released a report identifying underutilized legal authorities. We found that even if Congress cannot pass new laws to address the current frenzy, there is a lot the federal government can do right now to mitigate significant AI risks.
For example, the White House Office of Management and Budget could impose broad, binding AI-related obligations and worker protections on federal contractors, which employ roughly one-fifth of the U.S. workforce. It could require these companies to subject all automated hiring systems to strong transparency regimes, conduct pre-market testing and ongoing evaluation, and guarantee workers' rights to health, safety, privacy, fair compensation, organizing, and non-discrimination.
This is just one of more than 80 executive actions available under current law that we identify in our new report.
The federal government could also use existing powers to strengthen protections for workers' health and retirement benefits, including by requiring proactive disclosure and plain-language explanations of AI systems involved in benefit decisions, guaranteeing a right to appeal to a human when claims are denied, and expanding protections against sudden job terminations driven by algorithmic management tools.
These actions would significantly expand U.S. preparedness for a foreseeable AI-related national emergency by outlining in detail the likely scenarios, the range of legally authorized tools the government could use to counter these threats (including freezing assets or restricting transactions involving AI technologies contributing to the crisis), and the criteria for invoking such measures.
Regulators could also begin a standard-setting process to address electronic surveillance and automated management (ESAM) in the workplace where it poses risks to workers' physical and mental health and safety, and could require providers of workplace monitoring technology, and the employers that use it, to comply with the Fair Credit Reporting Act.
Financial regulators could require credit reporting agencies to disclose whether and to what extent AI is involved in producing their reports and scores, and could require financial institutions to adopt reasonable AI safeguards, including minimum risk-management practices for high-impact AI systems such as red-team exercises, audits, and explainable decisions.
Under the Dodd-Frank Act, passed after the 2008 financial crisis, large cloud service providers like Amazon, Microsoft, and Google could also be designated as "systemically important financial market utilities," subjecting them to oversight and regulation by the Federal Reserve. That designation would recognize the outsized influence these companies now hold over the stability of the entire U.S. financial system as the explosion of new AI products and services continues atop their expanding infrastructure.
Again, we've only scratched the surface of the existing powers the federal government has to address AI harms, as we document in our new report. And that's a good thing, because it has long been clear that waiting for Congress to meaningfully regulate the tech industry would be folly.
Will our proposed executive actions be enough, by themselves, to meet this unprecedented moment? Almost certainly not. But unlike many proposals out there, they are substantive and immediately actionable, and they would meaningfully change the trajectory of AI and protect people from preventable harm.
We already have two decades of evidence showing the damage that failing to safeguard cutting-edge technology can do to society. We've seen the heroes of previous tech revolutions transform from disruptive start-ups into innovation-stifling monopolies; we've seen platforms that promised to be great democratizers weaponized as tools of surveillance and repression.
We don't need any more damning evidence. In fact, we can't afford it. What we need is swift and substantive action, the kind existing law already allows. Our report lays out a blueprint for federal agencies to do just that.