June 13, 2024 – United States Capitol. Justin Hendrix/Tech Policy Press
Until this week, the US Congress was considering the most promising bipartisan privacy bill in years, the American Privacy Rights Act (APRA). The bill is currently scheduled for markup by the House Energy and Commerce Committee on June 27, but many civil society groups, including UnidosUS, are calling for a delay because the new draft would strip important consumer protections from the bill and open an unfortunate loophole for data stored on “devices” such as mobile phones.
While the law already prohibits discrimination in some important areas, such as housing, employment, and credit, those standards are poorly enforced, and accountability is virtually impossible for those affected. Many other important areas of data-driven decision-making, such as education, workplace monitoring, policing, and sentencing, remain largely unregulated and lack even basic fairness protections.
The explosion of artificial intelligence (AI) means that these complex decision-making models will shape opportunities in almost every area of our lives. Earlier drafts of the bill were therefore right to ban the use of sensitive personal data to discriminate against consumers and to require tech companies to test their algorithms and submit data showing that the results are fair.
Indeed, it would be naive and ignorant to claim, as Sen. Ted Cruz (R-Texas) has, that the earlier drafts were “woke” simply because they included such fairness guarantees. Bias is a major concern in AI systems because they make predictions based on patterns, and there is ample evidence that those biases can be surprisingly specific and capable of discriminating against anyone, regardless of race or ethnicity.
For example, in 2024, Bloomberg investigated a hiring system built on ChatGPT and found that it favored certain demographic groups for certain roles, such as recommending Asian women as top hires for investment roles while disfavoring white men. Across all four demographic groups studied, it ranked men lower for human resources roles, regardless of race or ethnicity. In every role studied, at least one demographic group fared poorly enough to have an actionable claim under federal employment law. Another example of such systems’ inherent unpredictability is the hiring algorithm that became infamous for favoring applicants named Jared who had played lacrosse in high school.
AI models essentially match patterns found in vast amounts of data and “learn” to make inferences in the process. Basic fairness checks help ensure that mistakes are caught and that bias in a particular direction, even an unanticipated one, doesn’t skew the results. Without these checks, as AI spreads into more areas of life, we can’t be sure it is treating applicants, students, and patients fairly.
As the Federal Trade Commission warned in a blog post last week, AI “avatars or bots can collect or infer very personal information at scale.” Now more than ever, we urgently need a national privacy law. And the bill has bipartisan support. Rep. Gus Bilirakis (R-Fla.), chairman of the Energy and Commerce Committee’s Innovation, Data, and Commerce Subcommittee, praised the bill for “giving Americans the right to control their personal information, including how and where it is used, collected and stored,” and for creating a national framework that gives all consumers “consistent rights, protections and obligations.” During the subcommittee’s markup hearing on the bill, he also hailed algorithmic assessments as a positive step to “prevent the manipulation of Americans.”
Lawmakers’ sense of urgency reflects public anxiety. A 2023 Pew Research Center survey found that 81% of Americans are concerned about how companies use the data they collect, with 68% of Republicans and 78% of Democrats supporting more regulation of what companies can do with personal information. The same poll shows that the more people learn about data practices, the more anxious they become. The lack of basic privacy protections in the United States is a sore point for voters across party lines, and a 2023 UnidosUS poll found that Hispanic voters’ biggest concern about AI is that it will infringe on their personal privacy.
But moving forward without fundamental safeguards could allow discriminatory data practices to go undetected and unchallenged. APRA will be a major step forward only if it pairs practical new safeguards against unfair bias with limits on collecting personal data without consent. Lawmakers who fail to recognize the need for fundamental fairness are being shortsighted, and that failure could harm any of us as AI is integrated into nearly everything.
To be clear, APRA is far from perfect. The bill avoids tough questions about unjustified government surveillance; those risks must be addressed in future legislation. Remedies for damages should also remain available to people in states with existing privacy protections, as the bill currently provides. And the latest draft opens a major loophole for data stored on devices; lawmakers need to eliminate or significantly narrow this exemption to achieve any real privacy gains.
Still, a version of APRA that restores the fairness checks will be the best tool federal lawmakers have to advance consumer privacy in the near future. Consumer privacy is a key election concern for millions of Americans across the political spectrum. Policymakers should not squander this opportunity for progress or gut its provisions out of ignorance of how AI works and the risks it poses to everyone. And they should keep their eyes on the real goal: creating the tools to stop companies from constantly extracting and selling consumers’ highly personal data without their consent.