If you were looking for online therapy between 2017 and 2021 (as many people were), you likely ended up on BetterHelp, which currently bills itself as the world’s largest online therapy provider, with more than 2 million users. Once on the site, with just a few clicks, you’d fill out a form similar to the paper questionnaires you’d complete in a therapist’s office. Are you new to therapy? Are you on medication? Are you having problems in your intimate relationships? Are you feeling overwhelming sadness? Are you thinking about self-harm? BetterHelp asked whether you were religious, LGBTQ, or a teenager. These questions were meant to match you with the counselor best suited to your needs, and small text assured you that your information would remain private.
But BetterHelp isn’t a therapist’s office, and users’ information may not have been as private as promised. In fact, according to a complaint filed by federal regulators, for years BetterHelp shared user data (such as email addresses, IP addresses, and questionnaire answers) with third parties like Facebook and Snapchat in order to target ads for its own services. The FTC also found that the company did not adequately limit what those third parties could do with the data once they had it. In July, the company finalized a settlement with the FTC, agreeing to pay $7.8 million in refunds to consumers whose privacy, regulators said, had been violated. (In a statement, BetterHelp denied any wrongdoing and called the sharing of user information an “industry-standard practice.”)
We leave digital traces of our health wherever we go: when we fill out forms like BetterHelp’s, request prescription refills online, click on links, ask search engines about medication dosages, directions to a clinic, or whether chest pains are likely to kill us, shop online or off, take direct-to-consumer genetic tests, step on a smart scale, use a smart thermometer, join Facebook groups or Discord servers for people with specific medical conditions, work out on internet-connected exercise equipment, or use apps and services that count steps, track menstrual cycles, or log workouts. Even demographic and financial data with no obvious connection to health can be aggregated and analyzed to reveal or infer sensitive information about a person’s physical or mental health.
All of this information is valuable to advertisers and to the technology companies that sell them ad space and targeting. It’s valuable precisely because it’s private. Our health drives our behavior, perhaps more than anything else, and the more companies know, the more they can influence us. Over the past year or so, reporting has revealed that Meta’s tracking tools harvested patient information from hospital websites, and that the Drugs.com and WebMD apps shared search terms such as “herpes” and “depression,” along with user-identifying information, with advertisers. (Meta denies receiving or using this data, and Drugs.com says it does not share data that constitutes “sensitive personal information.”) In 2021, the FTC settled with Flo, a period- and ovulation-tracking app that reportedly has more than 100 million users, over allegations that Flo had disclosed information about users’ reproductive health to third-party marketing and analytics services, even though its privacy policy explicitly promised it would not. (Flo, like BetterHelp, said its settlement with the FTC was not an admission of wrongdoing and that it did not share users’ names, addresses, or birthdates.)
Of course, not all of our health information will end up in the hands of people who want to misuse it. But when it does, the risks are significant. People’s opportunities in life can be limited when advertising or social media algorithms infer that they have certain medical conditions or disabilities and then exclude them from seeing information about housing, employment, and other important resources. When our personal information falls into the wrong hands, the risk of fraud and identity theft rises: someone could use our data to open lines of credit, or impersonate us to obtain medical services or drugs illegally. That could mean not just a damaged credit rating but canceled insurance policies and denied treatment. Our sensitive personal information could also be made public, exposing us to harassment and discrimination.
Many people believe that their medical information is private under the federal Health Insurance Portability and Accountability Act, which protects medical records and other personal health information. But that’s simply not true. HIPAA covers only information held by “covered entities” and their “business associates”: health insurers, doctors, hospitals, and some of the companies they do business with are limited in how they can collect, use, and share it. Many of the companies that handle our health information, including social media platforms, advertisers, and the makers of most direct-to-consumer health tools, are not covered at all.
“Once someone downloads an app onto their phone and starts inputting health data or data that could indicate a health condition, there’s absolutely no protection for that data beyond what the app promises,” Deven McGraw, former deputy director for health information privacy in the Department of Health and Human Services’ Office for Civil Rights, told me. (McGraw now works as director of data management and data sharing at the genetic testing company Invitae.) And even then, consumers have no way of knowing whether an app is actually following its stated policies. (In BetterHelp’s case, the FTC’s complaint notes that the company displayed a HIPAA seal on its website from September 2013 to December 2020, even though no government agency or other third party had reviewed its information practices for compliance with HIPAA or determined that those practices met HIPAA’s requirements.)
Companies that sell ads are often quick to point out that the information is aggregated: tech companies use our data to target broad segments of people based on demographics and behaviors, not individuals. But those segments can be very narrow. Recent reports suggest that advertisers could target Ashkenazi Jewish women of childbearing age, men who live in certain zip codes, or people whose online activity suggests an interest in particular illnesses. At best, those groups then see targeted pharmaceutical ads; at worst, they see unscientific “cures” and medical misinformation. They may also face discrimination. Last year, the Department of Justice settled with Meta over allegations that the company violated the Fair Housing Act by allowing advertisers to withhold housing ads from users whom its data-collection systems had inferred were interested in topics like “service animals” and “accessibility.”
The recent settlement shows that the FTC is taking an increasing interest in policing health privacy. But most FTC actions, including that one, take the form of consent orders, commissioner-approved settlements in which the parties resolve a dispute without admitting wrongdoing (as happened with both Flo and BetterHelp). If a company appears to violate the terms of a consent order, the FTC can seek enforcement in federal court. But the agency’s enforcement resources are limited. In 2022, a coalition of privacy and consumer advocacy groups wrote to the chairs and ranking members of the House and Senate Appropriations Committees urging them to increase the FTC’s funding. The commission, pointing to a significant rise in consumer complaints and reported consumer fraud, had requested $490 million for fiscal year 2023, up from the $376.5 million it received in 2022; it ultimately got $430 million.
The FTC has created an interactive tool to help app developers comply with the law as they build and market their products, and the HHS Office for Civil Rights has issued guidance on the use of online tracking technologies by HIPAA-covered entities and their business associates, which may help head off privacy problems before apps cause harm.
The nonprofit Center for Democracy & Technology has also proposed its own consumer privacy framework in response to the fact that “vast amounts of information reflecting mental and physical health are being created and held” by organizations that are not bound by HIPAA’s mandates. The framework emphasizes placing appropriate limits on the collection, disclosure, and use of health data, meaning any information that can be used to infer an individual’s physical or mental health. It shifts the burden away from consumers, patients, and users (who may already be weighed down by their health conditions) and onto the organizations that collect, share, and use the information. And it would limit the use of that data to purposes people expect and want, rather than purposes that are unknown or unwelcome to them.
But for now, this framework remains just a proposal. Without a comprehensive federal data privacy law that accounts for all the new technologies that can reach our health information, our most personal data is governed by a patchwork of laws and regulations that is no match for the giant corporations profiting from access to it, or for the very real needs that drive patients to these tools in the first place. Patients don’t type their symptoms into search engines, fill out online questionnaires, or download apps because they don’t care about privacy. They do these things because they want help, and the internet is the easiest, fastest, cheapest, or most natural place to get it. Technology-enabled health products provide an undeniable service, especially in countries plagued by health disparities, and their popularity is unlikely to wane. It’s time for the laws that are supposed to protect our health information to catch up.