Alondra Nelson and Ami Fields-Meyer both served as technology policy advisers to President Biden and Vice President Kamala Harris in the White House.
Republicans gathered in Milwaukee last week to nominate former President Donald Trump and Senator J.D. Vance of Ohio and to formally adopt a party platform, parts of which were reportedly drafted and edited by Trump himself. In a departure from past Democratic and Republican platforms (except for 2020, when there was no new Republican platform at all), this one does not assert a vision for America's future but instead offers a list of grievances that closely mirrors the rallying cries of the Republican flag-bearer. Among this extensive list of outlandish grievances and threats is Trump's pledge to repeal a historic White House executive order on the development of safe, secure and trustworthy artificial intelligence.
Unleash the robots.
Perhaps motivated by this cynical posture toward one of the least understood and most anxiety-inducing technologies of our time, or perhaps by Trump's selection of a Silicon Valley insider as his running mate, tech billionaires quickly showered his campaign with huge contributions.
We both served as technology policy advisers to President Biden and Vice President Kamala Harris in the White House. We were both at the U.S. Embassy in London last November when the Vice President shared the Administration's philosophical vision for AI on the world stage, and we were both in the East Room when the President signed his AI directive that same week. In the absence of congressional leadership, the Biden-Harris Administration has put forward a serious, rigorous, and comprehensive theory of AI governance, laying out a framework for responsibly managing the technology and unlocking its potential benefits while addressing a wide range of concerns the American public has voiced in surveys, from job losses to data privacy to a general distrust of technology companies.
The order directed the federal government to prioritize workers by studying the labor market impacts of using the technology in the workplace and involving workers in decisions about future technology transitions. With a focus on innovation and competition, the order created a foothold in the AI ecosystem for small businesses, startups, and workers, and encouraged federal agencies to set an example in the responsible use of AI tools to improve public services.
More important, the policy directs U.S. government agencies to use their existing authorities over these new technologies wisely to protect people from current and potential future AI harms, from tenant-screening algorithms that have been shown to discriminate against qualified renters to new AI tools that, some fear, could be used to design dangerous biological materials.
“As the World Steps Back, America Moves to Set the Rules for AI,” read a Financial Times headline. Politico trumpeted that Harris was wresting the AI leadership agenda from the U.K., promoting a balanced and responsible vision for the technology in a competitive geopolitical environment. The Washington Post called the executive order the “most far-reaching” attempt the U.S. has ever made to tackle AI. Many civil rights leaders, who have long criticized the federal government for abdicating its oversight responsibilities and leaving the tech industry to regulate itself, praised the administration's forward-thinking approach to governing AI's impact on discrimination and privacy.
Americans are right to be anxious and fearful about the potential impact of these tools. Even Senator Vance has expressed concern about the unbridled power of big tech companies. And after more than a year of alarming headlines about AI’s potential and two decades of disastrous self-regulation by Silicon Valley, the public is right to hold our elected leaders to account.
Democratic AI policy, shaped by the ideas and priorities of the party's leading candidates, sends a blunt message to American workers, parents, entrepreneurs, and advocates: “We're with you.”
Trump says something entirely different. He claims that the tenets of America's current approach to AI, including support for workers and small businesses and strong protections for civil and consumer rights, are “radical left ideology.” Meanwhile, Trump-allied think tanks have laid out a detailed plan to put the responsibility for assessing the safety of AI tools in the hands of the powerful industry players who develop and profit from them. When it comes to perhaps the most powerful technology of our time, its risks and its potential opportunities, the Republican platform is clear and unambiguous: “You're on your own.”
If big tech companies exclude you from the infrastructure that small AI businesses need to thrive, you're on your own. If you're denied important health care benefits because of an AI system's decision, you're on your own. If your employer uses automated tools to surveil you or to prevent you from unionizing, you're on your own. If an AI-enabled retailer scams you, you're out of luck. If you're a teenage girl victimized by deepfake nudes or a disabled veteran whose life-saving benefits have been unfairly cut by an algorithm, you're on your own.
As a new meta-analysis of public opinion on AI suggests, Americans want government leadership on AI. The executive order is the nation's best defense against the risks AI poses and a path to its potential benefits. We must not only sustain this effort but also bolster it with complementary legislation, like California's bill aimed at protecting the rights and privacy of young people with respect to AI.
Trump's AI pledges aren't policy solutions. They're campaign promises to big tech companies, promises that would further embolden an already out-of-control industry that stands to harm nearly every aspect of our lives.