The Cadillac V-Series.R is one of General Motors’ factory-sponsored racing programs.
James Moy Photography/Getty Images
It’s hard to escape the impression that too many companies are jumping on the AI bandwagon because of the hype, rather than because AI brings fundamental benefits to their operations. So when General Motors reached out to show off some of the new AI/ML tools they’re using to win more races in NASCAR, sports car racing, and IndyCar, I’ll admit to being a little skeptical and maybe a little morbidly curious. As it turns out, that skepticism was misplaced.
GM has a broad involvement in motorsports, but there are four top-level programs that it is particularly committed to. The American automaker’s primary focus is NASCAR, which remains the king of motorsports, with Chevrolet supplying engines to six Cup teams. IndyCar, once the most popular form of racing in America, also has six Chevrolet-powered teams. And then there’s sports car racing: Cadillac currently competes in IMSA’s GTP class and in the Hypercar class of the World Endurance Championship, and GM also fields a factory Corvette Racing team in IMSA.
“Every series that we race in, we have key partners and specific teams that run our cars, and some of the technical support that they get from us is due to the capabilities of my team,” said Jonathan Bolenbaugh, GM’s motorsports analytics leader, based at GM’s Charlotte Technical Center in North Carolina.
Unlike generative AI, which is so often pitched as a replacement for human creativity, GM sees the role of AI and ML as supporting human experts so they can make the cars go faster, and it’s using these tools across a variety of applications.
One of GM’s command centers at its Charlotte Technical Center in North Carolina.
General Motors
Each team in these various series has staff at the track (obviously), with more engineers and strategists assisting from Indianapolis, Charlotte, or the race team’s home base. They also work with GM Motorsports’ own people, who operate out of one of the many command centers at the Charlotte Technical Center.
What did they say?
When you connect all of those people, a ton of data flows between them: not just telemetry from the car itself (in series that allow live telemetry from the car to the pits), but also voice communications, text-based messages, timing and scoring data from the officials, trackside photos, and more. One of the things Bolenbaugh’s team and its suite of tools can do is make sense of that data quickly and make it actionable.
“In a series like F1, a lot of the teams have students who are maybe new members of the team, and they literally listen to the radio, type in what’s going on and say, ‘This is about the pit stops, this is about the condition of the track,’” Bolenbaugh said.
Instead of hiring an intern to do the job, GM developed a real-time speech-to-text tool. After trying commercial solutions, the company decided to develop its own solution, “a mix of open source and our own proprietary code,” Bolenbaugh said. As anyone who’s been to a race track knows, it’s a noisy environment, so GM had to train its models in the presence of all that background noise.
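Bolenbaugh didn’t go into the specifics of how those models were trained, but a common approach to noise robustness is to mix recordings of trackside noise into otherwise clean radio audio at varying signal-to-noise ratios before training. A minimal sketch of that augmentation step might look like this (the audio arrays here are stand-ins, and this is not GM’s actual code):

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Overlay engine/track noise onto clean radio audio at a target SNR (in dB)."""
    # Loop or trim the noise clip so it matches the speech length.
    if len(noise) < len(speech):
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[: len(speech)]

    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # Scale the noise so that 10*log10(speech_power / scaled_noise_power) == snr_db.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Example: augment one clip at several SNRs to build a noisier training set.
rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)   # stand-in for 1 s of 16 kHz radio audio
track = rng.standard_normal(8000)    # stand-in for recorded trackside noise
augmented = [mix_at_snr(clean, track, snr) for snr in (0, 5, 10, 20)]
```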
“We’ve been able to improve the accuracy and ease of use of the tool so much that we’re now seeing some of the manual support for this feature decrease,” he said, adding that the benefit is that the humans who would normally be transcribing the text can now use their brains in more useful ways.
Look at this
Another tool Bolenbaugh’s team developed quickly analyzes images taken by the trackside photographers working for teams and OEMs. While some of what they shoot might be used for marketing or PR, a lot of it is for the engineers.
Two years ago, it took two to three minutes for a photo to go from a photographer’s camera to a team. Now, “it takes seven seconds from when you press the shutter at a NASCAR racetrack to when the photo is AI-tagged and available in an application that’s used to extract information from it,” Bolenbaugh said.
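GM hasn’t described the internals of that pipeline, but the basic shape of such a system is straightforward: watch for new files arriving from the photographers, run a classifier over each one, and publish the image with its tags to the engineers’ application. A rough, hypothetical sketch, in which the drop folder, classify_photo, and publish are all stand-ins:

```python
import time
from pathlib import Path

INCOMING = Path("/photos/incoming")   # hypothetical drop folder fed by the cameras
SEEN: set[Path] = set()

def classify_photo(path: Path) -> list[str]:
    """Stand-in for the real model: return tags like car number, angle, damage state."""
    # In practice this would run an image classifier/detector over the file.
    return ["car_3", "right_side", "no_damage"]

def publish(path: Path, tags: list[str]) -> None:
    """Stand-in for pushing the tagged photo to the engineers' application."""
    print(f"{path.name}: {', '.join(tags)}")

while True:
    for photo in sorted(INCOMING.glob("*.jpg")):
        if photo not in SEEN:
            SEEN.add(photo)
            publish(photo, classify_photo(photo))
    time.sleep(0.5)   # poll frequently to keep latency well under the ~7 s budget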
You may not need ML tools to analyze photos to determine if a car is damaged.
Jeffrey Best/Icon Sportswire via Getty Images
“It’s all about time. The shortest lap time we’ll run, with the exception of the Coliseum, is probably around 18 seconds, so between the time they go through the pit lane entrance and when they come back in we’ve got to be quicker,” he said.
When the tool made its debut at a NASCAR race last year, it helped one of GM’s partner teams avoid an unnecessary pit stop after its driver scraped the wall: the young engineer who developed it showed the team a photo of the right side of the car, taken seconds earlier, confirming there was no damage.
“They didn’t have to wait for a spotter to look, they didn’t have to wait for the driver’s opinion. They knew there was no damage to the car. That team was four points back in that series and in the playoffs, so if they had pitted, there’s a good chance they wouldn’t have made it,” he said. And if a car is damaged, the image analysis tools can flag it automatically and alert the team right away.
Not all images are used for such snap decisions: engineers can also learn a lot about their rivals from photos.
“We’re very interested in things related to the geometry of the car for setup configurations – what’s the wing angle, what’s the ride height, how close is the car to the ground. Those are all useful things to know from an engineering standpoint and are what we aim for when doing image analysis,” said Patrick Canupp, GM’s director of motorsports competition engineering.
Many of the photographers working trackside are shooting on behalf of teams or manufacturers.
Steve Russell/Toronto Star via Getty Images
“It’s not easy to just take a set of still images and determine a lot of engineering information from them, so we’re actively working to leverage all of the photos that are sent to us over the race weekend – thousands of photos. We have access to a huge amount of information, and we want to maximize the engineering information we glean from all that data. This is a big data problem that AI is really good at,” Canupp said.
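Canupp didn’t say how those measurements are extracted, but one plausible approach is to use a reference feature with a known real-world size (a wheel, say) to establish a meters-per-pixel scale, then measure other dimensions against it. A hypothetical sketch, assuming an upstream detector has already returned pixel bounding boxes for the wheel and the rocker panel:

```python
from dataclasses import dataclass

TIRE_DIAMETER_M = 0.70   # assumed known tire size for the car being photographed

@dataclass
class Box:
    x: float
    y: float
    w: float
    h: float   # pixel-space bounding box: top-left corner plus width and height

def estimate_ride_height(wheel: Box, rocker: Box) -> float:
    """Estimate the ground-to-rocker gap in meters from a broadside photo.

    Uses the wheel's apparent diameter to get a meters-per-pixel scale, then
    measures how far the rocker's lower edge sits above the contact patch.
    """
    meters_per_pixel = TIRE_DIAMETER_M / wheel.h
    ground_y = wheel.y + wheel.h              # bottom of the tire ~ track surface
    gap_pixels = ground_y - (rocker.y + rocker.h)
    return gap_pixels * meters_per_pixel

# The boxes would come from an object detector; these values are made up.
print(estimate_ride_height(Box(400, 300, 180, 180), Box(520, 330, 600, 120)))
```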
The computer says we should pit now.
Remember that audio feed transcript from earlier? “During the race, if a number of drivers start talking about similar things, like the condition of the track, we can start to infer that the condition of the track is changing based on the occurrence of certain words,” Bolenbaugh said. “It may not just be an issue with their own car… If the drivers are talking about something on the track, it could raise the likelihood of a caution, which is part of our strategy model.”
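Bolenbaugh is describing something like keyword-frequency monitoring across all the transcribed radio channels: when several different drivers start using the same track-condition words in a short window, that becomes a signal for the strategy model. A toy illustration of the idea, with an invented word list and threshold:

```python
# Hypothetical trigger words; the real list and weighting would be tuned by GM.
TRACK_CONDITION_WORDS = {"slick", "loose", "debris", "marbles", "vibration"}

def caution_signal(recent_transcripts: list[tuple[str, str]], min_drivers: int = 3) -> bool:
    """Return True if several different drivers mention track-condition words.

    recent_transcripts holds (driver, transcribed_message) pairs from a
    sliding window covering, say, the last couple of laps.
    """
    drivers_flagging = {
        driver
        for driver, text in recent_transcripts
        if TRACK_CONDITION_WORDS & set(text.lower().split())
    }
    return len(drivers_flagging) >= min_drivers

window = [
    ("5", "car feels loose into turn three"),
    ("24", "lots of marbles up high"),
    ("48", "fuel number please"),
    ("9", "something slick on the backstretch"),
]
print(caution_signal(window))   # True: three drivers are describing the track
```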
That information is fed into the strategy tools, alongside lap times derived from timing and scoring, fuel models (built on the fuel-economy data some series provide for every car, or on predictions in series like NASCAR and IndyCar, where teams don’t get to see that data for their competitors), and tire-wear models.
“One of the biggest things we have to manage is tires, fuel and lap times. It’s all a trade-off with trying to run the race the fastest,” Bolenbaugh said.
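GM hasn’t published its strategy models, but the trade-off Bolenbaugh describes can be written down compactly: lap times degrade as tires wear, fresh tires cost a stop’s worth of time, and the model picks the pit lap that minimizes total race time. A deliberately simplified, one-stop version with made-up numbers:

```python
BASE_LAP = 31.0        # s, lap time on fresh tires (made-up number)
DEG_PER_LAP = 0.08     # s of additional time lost per lap of tire wear (made-up)
PIT_LOSS = 38.0        # s lost driving through the pits and stopping (made-up)

def stint_time(laps: int) -> float:
    """Total time for a stint: each lap is slower than the last as tires wear."""
    return sum(BASE_LAP + DEG_PER_LAP * lap for lap in range(laps))

def best_stop_lap(race_laps: int) -> int:
    """Pick the single pit-stop lap that minimizes total race time."""
    return min(
        range(1, race_laps),
        key=lambda stop: stint_time(stop) + PIT_LOSS + stint_time(race_laps - stop),
    )

print(best_stop_lap(80))   # -> 40 for this symmetric, linear wear model
```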
Racing is obviously a dynamic situation, and “we update our recommendations every lap as the scenario changes. If a tire starts to fall off [as the tire wears and loses grip], it tracks that in real time and predicts where it’s going to go. It’s constantly evolving and doing transfer learning during the race, so we’ll continue to train our models in real time as the race unfolds over the weekend,” Bolenbaugh said.
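That continuous retraining amounts to refitting the degradation model as real lap times arrive, so the predicted drop-off matches what the tires are actually doing that day. A bare-bones illustration, using an ordinary least-squares refit each lap (NumPy’s polyfit stands in for whatever GM actually runs):

```python
import numpy as np

laps_run: list[int] = []
lap_times: list[float] = []

def update_model(lap: int, lap_time: float) -> tuple[float, float]:
    """Refit a linear tire-degradation model after every completed lap.

    Returns (base_lap_time, seconds_lost_per_lap), re-estimated from all
    laps seen so far in the current stint.
    """
    laps_run.append(lap)
    lap_times.append(lap_time)
    if len(laps_run) < 2:
        return lap_time, 0.0
    slope, intercept = np.polyfit(laps_run, lap_times, 1)
    return intercept, slope

# Feed in lap times as they arrive; the wear estimate sharpens every lap.
for lap, t in enumerate([31.02, 31.11, 31.17, 31.30, 31.33], start=1):
    base, deg = update_model(lap, t)
    print(f"lap {lap}: est. degradation {deg:.3f} s/lap")
```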