Happy Tuesday! I’m Gerrit De Vynck, a reporter covering Google and artificial intelligence, filling in for Cristiano today. Send your news tips to gerrit.devynck@washpost.com.
Microsoft calls for new legislation on deepfake scams and AI-generated sexual abuse images
Tech giant Microsoft is calling on Congress to pass legislation that would make it illegal to use AI-generated voices or images to commit fraud, and to require AI companies to develop technology that can identify AI-generated images created by their own products.
The recommendations are part of a 50-page document released by Microsoft on Tuesday that lays out a broader vision for how governments should approach AI.
As lawmakers and regulators across the country debate how to regulate AI, the companies behind the emerging technology have released a number of proposals for how policymakers should handle the industry.
Microsoft has long experience lobbying governments on issues that affect its business, and by aggressively pushing for regulation it has sought to position itself as an active, helpful player shaping the debate and its eventual legislative outcomes.
Smaller technology companies and venture capitalists are skeptical of that approach, accusing big AI companies like Microsoft, Google and OpenAI of pushing for rules that would make it harder for startups to compete with them. Supporters of regulation, including the California politicians leading the national push for broad AI legislation, point to the government's early failure to rein in social media, which allowed problems like cyberbullying and disinformation to flourish unchecked.
“Ultimately, the danger is not that we move too fast, but that we move too slow, or not at all,” Microsoft President Brad Smith wrote in the policy paper.
In the document, Microsoft called for the enactment of a “deepfake fraud law” that would specifically make it illegal to use AI to deceive people.
As AI gets better at generating voices and images, scammers are already using it to trick people into thinking they are sending money to their loved ones. Other tech lobbyists argue that existing anti-fraud laws are enough to crack down on AI-enabled scams, and that the government doesn't need to enact additional legislation.
Microsoft split with other tech companies on a separate issue last year, suggesting the government create a new independent agency to regulate AI, while others argued that existing regulators such as the Federal Trade Commission and the Justice Department already have the authority to do so.
Microsoft also called on Congress to require AI companies to build “provenance” tools into their AI products.
AI images and audio are already being used around the world for propaganda and to mislead voters. AI companies are working on technology to embed hidden signatures into AI images and videos that can be used to identify whether content is AI-generated. But deepfake detection is notoriously unreliable, and some experts question whether AI content can be reliably separated from real images and audio.
State legislatures and Congress should also update laws to address AI-generated sexual abuse images of children and the creation and sharing of intimate images without a person's consent, Microsoft said. AI tools are already being used to make sexual images of real people, including children, without their consent.
Agency scanner
Federal court rules US Border Patrol must get warrant before searching cell phones (TechCrunch)
Google’s Anthropic AI Deal Faces Scrutiny From U.K. Regulators (Bloomberg)
New US Commerce Department report encourages ‘open’ AI models (TechCrunch)
Hill happenings
Senators turn to online content creators to push bill (Taylor Lorenz)
Low-income families lose internet service as Congress eliminates discount program (Ars Technica)
Inside the Industry
Trump v. Harris is splitting Silicon Valley into opposing political camps (Trisha Thadani, Elizabeth Dwoskin, Nitasha Tiku and Gerrit De Vynck)
TikTok has a Nazi problem (Wired)
Amazon paid about $1 billion for Twitch in 2014, and it’s still losing money. (Wall Street Journal)
Fraudsters Use Meta’s Proprietary Tools to Target Middle Eastern Influencers (Bloomberg)
Competition watch
Adobe and Canva are losing users to ByteDance’s CapCut, especially on TikTok (Bloomberg)
Websites are blocking the wrong AI scrapers because AI companies keep making new ones (404 Media)
Trending
How Elon Musk came to support Donald Trump (Josh Dawsey, Eva Dou and Faiz Siddiqui)
A Field Guide to Spotting Fake Photos (Chris Velazco and Monique Woo)
AI Gives Weather Forecasters a New Advantage (New York Times)
Daybook
The Information Technology and Innovation Foundation will host an event, “Can China Innovate with Electric Vehicles?,” at 1 p.m. Tuesday in Rayburn House Office Building 2045.
The Consumer Technology Association will host a conversation with White House National Cyber Director Harry Coker Jr. at 4 p.m. Tuesday at the CTA Innovation House.
The Senate Budget Committee will hold a hearing on the future of electric vehicles at 10 a.m. Wednesday in Dirksen Senate Office Building 608.
The Center for Democracy and Technology will host a virtual event, “What You Need to Know About Artificial Intelligence,” at noon Wednesday.
Sens. Ben Ray Luján (D-N.M.) and Alex Padilla (D-Calif.) will host a public panel, “Countering Digital Election Disinformation in Languages Other Than English,” at 4 p.m. Wednesday in Room G50 of the Dirksen Senate Office Building.
The U.S. General Services Administration will host a Federal AI Hackathon at 9 a.m. Thursday.
Before you log off
That’s all for today. Thank you for joining us, and please tell others to subscribe to the Tech Brief. Get in touch with Cristiano (via email or social media) and Will (via email or social media) with tips, feedback or greetings!