Jonathan Joseph, Opinion Contributor 07/27/24 12:00 PM ET
Human Rights Watch completed a thorough audit of AI training data and found that photos of children harvested from the internet were being used to train AI models without the consent of the children or their families.
This is already bad, but it gets even worse.
According to HRW, “the child’s name may also be included in the caption accompanying the photo or the URL where the image is saved. In many cases the child’s identity can be easily traced, along with information about the child’s time and location when the photograph was taken.”
And many of the scraped images weren't even publicly available; they were hidden behind privacy settings on popular social media sites.
In other words, some parents who thought they were doing everything right about sharing photos of their kids may find out just how wrong they were.
I’m not without sympathy. I’m originally from Australia and live in the US with my wife and kids. At one time, I thought social media was a great way to keep friends and loved ones updated on what’s going on with my growing family. But eventually, I realized I was invading my kids’ privacy, and that in the future, they may not want these photos posted online.
Sharenting (posting information, photos, and stories from a child’s life online) is increasingly coming under fire, for very good reasons. A three-year-old is not capable of giving meaningful consent to their parents posting a video of their potty-training fail for the world to see. It may seem like innocent enough fun, but three-year-olds don’t stay three forever, and today’s kids will have extensive information about them online long before they are old enough to consent to any of it.
But beyond their children’s inability to consent, the HRW report makes clear that parents themselves have no way of knowing what the long-term effects of sharenting will be. Ten years ago, no one would have imagined that shared photo albums from family vacations would be fed into machine-learning models. The unintended consequences are already here.
Of course, a reasonable response is that this shouldn’t be allowed at all: Why would a for-profit AI company have the right to train on other people’s data, let alone children’s data hidden behind privacy settings?
The FTC will likely have something to say about this, although last month the FTC and every other federal agency had their hands tied by a Supreme Court ruling, Loper Bright Enterprises v. Raimondo, that overturned the Chevron doctrine, stripping federal agencies of their interpretive authority and handing it to the courts.
“In one fell swoop, the majority today gives itself exclusive power over every open issue—no matter how expertise-driven or policy-laden—involving the meaning of regulatory law,” Justice Elena Kagan wrote in her dissent. “As if it did not have enough on its plate, the majority turns itself into the country’s administrative czar.”
If the case for federal privacy legislation wasn’t clear before, it certainly is now. The most likely result of the ruling will be to push privacy regulation back to the states, while federal decisions are left in limbo and understaffed courts with no particular privacy expertise try to handle a workload for which they are neither prepared nor equipped.
While we wait, AI companies will continue to collect data on children, and whether that practice is legal at all will depend on the state you live in.
Sharing photos from your child’s Little League game may be a fun way to stay connected with family near and far, but until meaningful protections are in place, I urge everyone not to take the risk. We are entitled to data dignity, ethical technology, and sane, responsible guardrails for AI. Right now, we have none of those things, and the Supreme Court’s decision adds a major hurdle to getting them.
Meanwhile, big tech companies have been left to make their own rules, and perhaps the only way to get their attention is to delete the apps, stop posting, and stop feeding the monster.
State lawmakers can’t act fast enough.
Jonathan Joseph is the director of The Ethical Tech Project.