May 15, 2024: At a podium in the Capitol, Senate Majority Leader Chuck Schumer (D-NY) announces the Senate AI Policy Roadmap, flanked left to right by Sen. Todd Young (R-IN), Sen. Martin Heinrich (D-NM), and Sen. Mike Rounds (R-SD).
Congressional working groups, committees, and subcommittees were active on tech policy this month, advancing AI, data privacy, and kids' online safety bills. In the Senate, the bipartisan Senate AI Working Group released its AI policy roadmap, and the Senate Rules Committee advanced three bills on the use of AI in elections. On the House side, the House Energy and Commerce Subcommittee on Innovation, Data, and Commerce advanced the American Privacy Rights Act (APRA) and the Kids Online Safety Act (KOSA, S. 1409 / H.R. 7891). Read on to learn more about May developments on AI policy, progress on privacy and election-related AI bills in Congress, and more.
The Freedman Consulting team is excited to announce that the monthly US Tech Policy Roundup will now be produced in closer collaboration with Tech Policy Press to bring you updates on tech policy happenings and legislation and litigation tracking. Look out for exciting improvements to our monthly roundup and, as always, please feel free to reach out to Alex Hart at alex@tfreedmanconsulting.com and contributions@techpolicy.press with thoughts and feedback.
Senate Working Group Unveils a Roadmap for AI Policy Priorities
Summary: This month, the bipartisan Senate AI Working Group, led by Senate Majority Leader Chuck Schumer (D-NY) and Sens. Mike Rounds (R-SD), Martin Heinrich (D-NM), and Todd Young (R-IN), released a 31-page roadmap recommending various funding, legislative, and policy priorities. The roadmap caps off Sen. Schumer’s months-long effort to educate the Senate on AI issues through nine closed-door AI Insight Forums and various briefings and committee hearings. Central to the roadmap is a recommended allocation of at least $32 billion per year for non-defense AI research and development, with the goal of bolstering AI innovation in the US and maintaining global competitiveness. Additional priorities include passing a comprehensive data privacy law, setting nationwide standards for AI safety and fairness, protecting American workers from AI-driven job displacement, and combating AI-generated deepfakes, especially those related to the election.

Stakeholder Response: Tech industry groups showed broad support for the Senate AI policy roadmap, with many praising the plan’s efforts to boost AI research and development. TechNet, whose members include leading AI companies such as OpenAI, Google, and Meta, said in a statement that the plan would “strengthen America’s global competitiveness in AI and emerging technologies” and “empower a new generation of AI leaders.” The Consumer Technology Association, a trade organization representing over 1,300 consumer technology companies in the US, released a statement in support of the roadmap, emphasizing that “the U.S. needs a clear national AI policy with guardrails so American innovation in AI can safely flourish.” Civil society groups, in contrast, have largely criticized the plan, arguing that the document is far too vague about how it will protect people from AI’s everyday harms. Dr. Alondra Nelson, Harold F. Linder Professor at the Institute for Advanced Study and former acting director of the White House Office of Science and Technology Policy, voiced concern that the roadmap is “too flimsy to protect our values,” and that it does not “point us toward a future in which patients, workers, and communities are protected from the current and future excesses of the use of AI tools and systems.” Open Markets Institute Executive Director Barry Lynn criticized the roadmap’s investment of “$32 billion to ‘democratize’ AI – without any guarantees that these funds won’t simply end up with the same corporations responsible for enclosing it.” Lynn also urged lawmakers to “ignore this report, and most of its recommendations.”

What We’re Reading: Tech Policy Press compiled civil society’s responses to the AI roadmap, as well as an in-depth analysis of the funding priorities, legislative developments, and areas for further study highlighted in the roadmap. In response to the roadmap, a group of 13 civil society organizations, including Accountable Tech, the Georgetown Center on Privacy and Technology, AI Now Institute, and Data & Society, released a shadow report that aims to refocus legislative priorities on ensuring AI serves the public interest; the report also gathered over 40 signatures from other civil society organizations, researchers, and advocates. Representatives from AI Now Institute, Color of Change, Just Futures Law, and Surveillance Resistance Lab spoke with Tech Policy Press editor Justin Hendrix about the shadow report on a podcast. Politico highlighted a policy brief by Data & Society’s Serena Oduro and Tamara Kneese exploring the importance of integrating sociotechnical expertise into AI governance, which Data & Society policy director Brian Chen referenced in a piece for Tech Policy Press.
Landmark Privacy Bill Advances in House; Other Tech Bills Progress
Summary:

APRA: Following Rep. Cathy McMorris Rodgers’ (R-WA) and Sen. Maria Cantwell’s (D-WA) unveiling of a discussion draft last month, the House Energy and Commerce Subcommittee on Innovation, Data, and Commerce forwarded a revised discussion draft of the American Privacy Rights Act (APRA) to the full committee by voice vote, without amendments. APRA would create a national data privacy framework, establish data security and data minimization standards, mandate transparency about data usage, and “give consumers the right to access, correct, delete, and export their data, as well as opt out of targeted advertising and data transfers.” APRA would also address some algorithmic discrimination concerns, establishing opt-out rights for consequential decisions and “prohibiting the use of covered data to discriminate against consumers.” Finally, the latest version of APRA incorporated parts of COPPA 2.0 (S. 1418), but COPPA 2.0’s House co-sponsors, Reps. Tim Walberg (R-MI) and Kathy Castor (D-FL), criticized the partial inclusion as having “the skin, but not the meat and bones” of their bill.

KOSA: In the same session, the subcommittee forwarded the Kids Online Safety Act (KOSA, S. 1409 / H.R. 7891) despite criticism from subcommittee members. Rep. Frank Pallone (D-NJ) expressed concern that the “duty of care” provision would lead platforms to overly limit content due to legal risk, and other subcommittee members criticized insufficient harm mitigation requirements.

AI in Elections: The Senate Rules Committee was also busy in May, advancing three bills introduced by Sen. Amy Klobuchar (D-MN) on the use of AI in elections. The Protect Elections from Deceptive AI Act (S. 2770) would ban “the distribution of materially deceptive AI-generated audio or visual media relating to candidates for Federal office, and for other purposes.” The AI Transparency in Elections Act of 2024 (S. 3875) would require a disclosure statement when image, video, or audio content in a political ad is “substantially generated” by generative AI. Finally, the Preparing Election Administrators for AI Act (S. 3897) would direct the Election Assistance Commission to create voluntary guidelines addressing the uses and risks of AI technologies for election administrators. Reps. Chrissy Houlahan (D-PA), Brian Fitzpatrick (R-PA), Abigail Spanberger (D-VA), and Doug Lamborn (R-CO) introduced a House companion to the Preparing Election Administrators for AI Act this month.

Stakeholder Response:

APRA: Various civil society organizations celebrated APRA’s movement and urged Congress to pass the privacy law. The Electronic Privacy Information Center (EPIC) “commended the Subcommittee and Chair McMorris Rodgers for their work.” The Center for American Progress urged Congress to pass APRA and made a case for the pressing need for comprehensive privacy and children’s safety laws. In contrast, some organizations criticized APRA for its shortcomings or its impact on specific industries. A coalition of seven financial trade associations published a letter pushing Congress to exempt certain financial institutions from APRA’s requirements. Small business owners at the Connected Commerce Council also expressed concerns about APRA’s impact on their companies.

KOSA: Civil society organizations remained divided on KOSA this month. NetChoice voiced opposition to both KOSA and APRA, arguing that “the proposed bills up for review today will fail” to protect online privacy or children online. In contrast, Parents for Safe Online Spaces (ParentsSOS) celebrated the markup.

AI in Elections: Although the three bills on AI in elections received less coverage, civil society was generally supportive. The Campaign Legal Center and Protect Democracy both published letters of support for all three bills, pointing to the potential for AI to exacerbate election threats like disinformation and voter suppression.
What We’re Reading: Gabby Miller and Ben Lennett published an analysis and transcript of the House Energy and Commerce Subcommittee on Innovation, Data, and Commerce meeting in Tech Policy Press, while John Perrino looked at the debate over provisions around researcher access to platform data. The Verge covered the Senate Rules Committee’s passage of the bills on AI in elections.
Tech TidBits & Bytes
Tech TidBits & Bytes aims to provide short updates on tech policy happenings across the executive branch and agencies, Congress, civil society, and industry.
In the executive branch and agencies:

- The White House published a fact sheet on the Department of Labor’s key principles to protect workers, which include “centering worker empowerment” and “establishing AI governance and human oversight,” among other provisions.
- The White House also published a call to action for industry leaders and other stakeholders to make voluntary commitments to address non-consensual AI-generated image-based sexual abuse.
- The Department of Housing and Urban Development’s (HUD) Office of Fair Housing and Equal Opportunity (FHEO) announced guidance clarifying which discriminatory tenant screening practices violate the Fair Housing Act, to protect rental applicants.
- The National Institute of Standards and Technology (NIST) published the AI Safety Institute’s strategic vision for AI safety, focusing on three main goals: “Advance the science of AI safety; articulate, demonstrate, and disseminate the practices of AI safety; and support institutions, communities, and coordination around AI safety.”
- NIST launched the Assessing Risks and Impacts of AI (ARIA) program, which will serve as a “new testing, evaluation, validation, and verification (TEVV)” initiative to advance the understanding of AI’s capabilities and societal impacts. ARIA expands on NIST’s AI Risk Management Framework and will support the US AI Safety Institute’s testing capabilities to “help build the foundation for trustworthy AI systems.”
- The US Access Board signed a Memorandum of Understanding with the American Association of People with Disabilities (AAPD) and the Center for Democracy & Technology (CDT), beginning a collaboration to unite disability rights and technology groups on AI issues.
- The Department of Defense (DoD) released an RFI seeking public comments on actions it can take to enable the defense industrial base to adopt AI for defense applications. Public comments close on July 22.
- The Federal Communications Commission (FCC) issued a final rule on net neutrality, classifying broadband internet access as a telecommunications service under Title II and expanding the FCC’s authority to “safeguard national security, advance public safety, protect consumers, and facilitate broadband deployment.”
- The FCC proposed a $6 million fine for a political consultant and a $2 million fine for the transmitting company that broadcast deepfake robocalls of President Biden ahead of the New Hampshire presidential primary. Steven Kramer, the consultant responsible, also faces over two dozen criminal charges.
- The Department of Justice held a workshop at Stanford University with industry leaders, researchers, and government officials to discuss competition in the AI industry. The convening came amid rising pressure on large technology companies, including Google, Amazon, and Meta, from the Biden Administration’s antitrust agencies.

In Congress:

- Sen. Ed Markey (D-MA) and other legislators pushed for the Kids Online Safety Act (S. 1409, KOSA) and the Children and Teens’ Online Privacy Protection Act (S. 1418, COPPA 2.0) to be added as amendments to the Federal Aviation Administration (FAA) reauthorization bill. Despite bipartisan support for the two technology bills, key officials opposed their incorporation into the FAA package, and differences between the Senate and House versions could complicate the provisions’ path forward. Civil society, tech interests, and LGBTQ groups remain split on KOSA and COPPA 2.0 – some groups have opposed the bills and their potential inclusion in the FAA bill, while others have published letters in support of the bills.
- The New Democrat Coalition Artificial Intelligence Working Group released a statement endorsing 10 bipartisan AI bills addressing a myriad of issues, including consumer protections, privacy, threats to the workforce, and risks to elections and democracy, among others.
- The House Energy and Commerce Subcommittee on Communications and Technology hosted a hearing on a bill introduced by Reps. Cathy McMorris Rodgers (R-WA) and Frank Pallone (D-NJ) to sunset Section 230.

In civil society:

- A coalition of 19 tech and civil society organizations published a letter supporting the American Innovation and Choice Online Act (AICOA, S. 2033) and the Open App Markets Act (OAMA, S. 2710).
- Airlines for America and the US Travel Association, top trade associations for the airline and travel industries, published a letter opposing efforts to stop the Transportation Security Administration (TSA) from using facial recognition technologies.
- LGBTQ media advocacy organization GLAAD published its annual Social Media Safety Index for 2024, which “provides an overview of the current state of LGBTQ social media safety, privacy, and expression, including sections on: the economy of hate and disinformation, predominant anti-LGBTQ tropes, policy best practices, suppression of LGBTQ content, the connections between online hate and offline harm, regulation and oversight, AI, data protection and privacy, and more.”
- The Special Competitive Studies Project, a nonprofit organization founded by former Google CEO Eric Schmidt, released a report titled “Vision for Competitiveness: Mid-Decade Opportunities for Strategic Victory.” The report details a strategy for the US to strengthen its national security, ensure an economic advantage, and remain at the forefront of AI-driven technological innovation.

In industry:

- ByteDance and TikTok filed a petition with the US Court of Appeals for the District of Columbia Circuit challenging a law that would require the sale of TikTok to a non-Chinese entity or, failing that, ban the app from US app stores.
- Also in TikTok news, tech lobbying group NetChoice removed the company from its membership lists without announcement this month.
- Microsoft and OpenAI launched the Societal Resilience Fund, a $2 million effort to promote AI education and literacy for voters worldwide. The Fund is the latest action following Microsoft and OpenAI’s adoption of the White House Voluntary Commitments and the Tech Accord to Combat Deceptive Use of AI in the 2024 Elections.
Other Legislation Updates
Other legislation of note includes:
- REPORT Act (S. 474, sponsored by Sens. Jon Ossoff (D-GA) and Marsha Blackburn (R-TN)): This bill requires electronic communication service and remote computing service providers to report online child sexual exploitation materials to the National Center for Missing and Exploited Children (NCMEC). It also increases the amount of time reports must be preserved and extends the reporting requirement to additional situations. The REPORT Act became law in May.
- Enhancing National Frameworks for Overseas Critical Exports (ENFORCE) Act (H.R. 8315, sponsored by Rep. Michael McCaul (R-TX)): This bill would allow the Commerce Department’s Bureau of Industry and Security (BIS) to apply export controls to AI and other “national security-related emerging technology that can potentially be used by our enemies in the future.” The bill was introduced and forwarded, as amended, by the House Foreign Affairs Committee on a 43-3 vote in May.
- NSF AI Education Act of 2024 (sponsored by Sens. Maria Cantwell (D-WA) and Jerry Moran (R-KS)): This bill would authorize the National Science Foundation (NSF) to award scholarships and professional development opportunities for students to study AI and quantum technologies. The bill would also direct NSF to create AI resources for K-12 students and to help workers upskill, ensuring the US remains a leader in innovation. The bill was introduced in May.
We welcome feedback on how this roundup could be most helpful in your work – please contact Alex Hart with your thoughts.