Us when that commercial comes on. Photo: Getty Images
If you’ve been watching the Olympics, you’ve likely seen a Google ad called “Dear Sydney.” The ad tells the story of a father whose grade-school daughter runs track and wants to write a fan letter to Olympian and 400m-hurdles world-record holder Sydney McLaughlin-Levrone. “She wants to show Sydney some love, and I’m pretty good with words, so this would be a perfect fit,” says the father, who narrates the ad.
So what does this made-up guy do? Does he sit down with his daughter to share this sweet moment? No. He prompts Gemini, Google’s artificial-intelligence model, to help his daughter write a letter telling Sydney how much she inspires her, and to make sure to mention that his daughter plans to break her world record one day (“sorry, not sorry”).
What? How could a father who is “pretty good with words” need an AI model to help his daughter write a heartfelt message to her favorite athlete? Aren’t moments like these what being a parent is all about? And what kind of lesson is this? Not only are we suggesting to our kids that it’s okay to hand off their writing assignments to an AI, but that it’s a good idea to let a computer express your feelings for you. That seems like a troubling precedent. Judging by the “sorry, not sorry” joke, it sounds like his daughter could do this all on her own. Isn’t the whole premise of the Olympics to celebrate human achievement?
Like so much about AI itself, this is something no one seems to want. The prime-time ad caused quite a stir. “I categorically reject the future that Google is promoting,” wrote Syracuse University media professor Shelly Palmer. “I want to live in a culturally diverse world where billions of people use AI to improve their human skills, not a world where we are exploited by AI pretending to be human.” Brand strategist Michael Miraflor wrote that the ad is strikingly similar to Apple’s widely criticized iPad ad from May. “Both give off the same sense that something is off, a kind of insensitivity to the legitimate concerns and fears of ordinary people,” he wrote, adding that both ads were developed in-house.
Ad-industry types weren’t the only ones put off: Google has turned off comments on the ad on its YouTube channel. It’s easy to imagine what those comments said about the ad’s creepy tone and bad premise, but if Google is trying to play down the ad’s dystopian vibe, silencing dissenting voices on a company-owned platform might not be the best look.