Jacob Ridley, Senior Hardware Editor
(Image credit: Future)
This week I went to CES 2024. My feet were sore from a week of walking around Las Vegas, but it was worth it to see so many new products for the first time. An overwhelming number of them were showing off AI capabilities this year, which got me thinking about what to make of it all.
There’s always another buzzword at CES. Previous years have been about blockchain, the metaverse, and attaching the word “smart” to everything, but there’s no need to guess what it is in 2024. Artificial intelligence is the talk of the town, and there was little chance of avoiding it during my trip to the Las Vegas tech show.
It’s easy to understand why AI is taking the world by storm. The rapid success of ChatGPT and image generation has caught the world’s attention. Now, most of the big tech companies are thinking about how to integrate AI into their products, like Microsoft with Copilot for Windows, or Intel with its latest Meteor Lake Core Ultra processors.
There’s a money-making opportunity in AI. Nvidia is a trillion-dollar company thanks to its GPUs powering the AI boom. OpenAI, the developer of ChatGPT, has raised billions of dollars. These companies are offering something genuinely new, but not everything touted as AI is so obviously novel.
Some of the things being touted as AI at CES 2024 seemed like smart features even before the AI hype surged, like sleep monitoring, automatic timers, head tracking, and automatic mode switching.
Many of these capabilities existed before the AI boom. Is there something going on beneath the surface that makes any of these capabilities worthy of the AI tagline? Or is this all just clever marketing?
What distinguishes AI from other somewhat intelligent capabilities is often not entirely clear.
MSI’s MEG 321URX QD-OLED, shown at CES 2024, is equipped with a feature that analyzes the League of Legends minimap to provide advance warning of enemies. (Image credit: Future)
Some AI products explicitly mention training and learning, but they are often tied into cloud-based systems that do most of the actual AI processing. Adobe, for example, offers AI algorithms that handle functions regular smart algorithms can’t achieve, such as generative image creation. Admittedly, I still believe the standard Magic Eraser tool is often more effective than the new AI background remover, but that will change over time, as AI capabilities can be improved by deploying new models trained on new examples and feedback. It’s that training and feedback that starts to move AI away from traditional code made up of if statements and logic loops.
The MSI monitor pictured above also comes with an app that allows you to train the software to work with games other than League of Legends.
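That training distinction is worth making concrete. Here’s a minimal sketch in Python, where everything (the noise-gate scenario, the thresholds, and the feedback data) is made up for illustration, contrasting a fixed, hand-written rule with a parameter fitted to labeled examples:

```python
# Minimal sketch: a hand-written rule vs. a parameter learned from examples.
# All thresholds and sample data here are made up for illustration.

# Traditional "smart" feature: a fixed rule, written once by a programmer.
def is_noise_rule_based(volume_db: float) -> bool:
    return volume_db > 40.0  # threshold chosen by a human; it never changes

# Learning-based feature: the threshold is fitted to labeled examples,
# and can keep improving as new feedback arrives.
def fit_threshold(samples: list[tuple[float, bool]],
                  lr: float = 0.5, epochs: int = 200) -> float:
    threshold = 0.0
    for _ in range(epochs):
        for volume_db, is_noise in samples:
            predicted = volume_db > threshold
            if predicted and not is_noise:
                threshold += lr  # false positive: raise the bar
            elif not predicted and is_noise:
                threshold -= lr  # false negative: lower the bar
    return threshold

# Feedback pairs of (volume in dB, "was this actually noise?")
feedback = [(30.0, False), (35.0, False), (48.0, True), (60.0, True)]
print(f"learned threshold: {fit_threshold(feedback):.1f} dB")
```

The rule-based version only ever does what it was told; the fitted version changes its behavior as the examples and feedback do, which is the property the AI label is supposed to signal.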
In other cases, you may struggle to define what makes an AI product in 2024 different from existing smart features. The exact definition of artificial intelligence will change, especially as we move through the AI era and new models and systems emerge. However, there are some key terms to keep in mind:
The first is artificial general intelligence, generally defined as a system with human-like or greater intelligence, meaning it can achieve results comparable to or better than humans across a wide range of tasks. This is different from a bot designed to play Go, which may already surpass humans at that one task precisely because it is so specialized. Most people consider true AI to be something with the ability to surpass humans in many ways through self-learning and decision-making.
OpenAI CEO Sam Altman wrote a blog post about artificial general intelligence. (Image credit: Justin Sullivan via Getty Images)
We hear a lot about companies like OpenAI working towards general intelligence, but we’re still a long way from it, and once systems can credibly claim human-level intelligence, this form of AI may well end up being the only one considered worthy of the name.
This raises an interesting question: will what we define as AI change as AI gets smarter? I think this is true to some extent. The various chatbots released before ChatGPT already look pretty formulaic compared to newer versions. Even if AI reaches general intelligence, I doubt we would still classify today’s ChatGPT as artificial intelligence. But at least among the publicly available models, it’s probably the best we have today.
Stanford University’s Institute for Human-Centered Artificial Intelligence publishes a list of definitions of AI [PDF warning]. These include a definition from Professor Emeritus John McCarthy, who in 1955 defined AI as “the science and engineering of making intelligent machines.” By this definition, we could class all kinds of technology from the past few decades as advances in AI. And that’s true, since AI wouldn’t exist without decades of computer science. But Stanford also adds its own definition to clarify things:
“While much of the research has focused on humans programming machines to perform clever actions, such as playing chess, today the emphasis is on machines that can learn at least as well as humans can.”
What we’re looking for, in essence, is machines that can learn at least as well as we can.
Intel plans to bring desktop gaming processors with built-in AI accelerators to market this year. Both Intel and AMD currently offer mobile chips with AI acceleration. (Image credit: Future)
The human part of this definition doesn’t quite apply to the AI PCs and appliances at CES 2024, which operate under a slightly different definition: one about learning and automation, or what many call machine learning.
Most definitions consider machine learning a subset of AI: related, but crucially different. Machine learning is about analyzing data and images, recognizing patterns, and using what it finds to improve its own performance. That’s the kind of application you’ll see in AI noise-canceling software.
As Microsoft outlines, machine learning and AI come together to create truly useful machines. Machine learning lays the foundation for device understanding, which AI leverages to make decisions. We see this symbiotic relationship in computer vision, where machine learning is used to teach a car about hazards on the road, but it’s up to AI to mimic a human response, like braking before hitting a cow.
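As a toy illustration of that split (the function names, labels, and numbers here are my own assumptions, not any real system’s API), the perception side recognizes patterns while the decision side acts on them:

```python
# Toy sketch of the perception -> decision split: machine learning recognizes
# patterns, while a decision layer turns them into a human-like response.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the perception model thinks it sees
    confidence: float  # how sure it is, from 0.0 to 1.0
    distance_m: float  # estimated distance to the object in meters

# Machine learning side: pattern recognition over sensor data.
# (A stub standing in for a model trained on labeled road imagery.)
def perceive(frame: bytes) -> list[Detection]:
    return [Detection(label="cow", confidence=0.92, distance_m=18.0)]

# Decision side: mimic the human response of braking for a close hazard.
HAZARDS = {"cow", "pedestrian", "cyclist"}

def decide(detections: list[Detection]) -> str:
    for d in detections:
        if d.label in HAZARDS and d.confidence > 0.8 and d.distance_m < 25.0:
            return "brake"
    return "cruise"

print(decide(perceive(b"camera frame bytes...")))  # -> brake
```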
The difficulty is in how these definitions are then applied to products, and how they apply retroactively to older ones.
(Image credit: Future)
Take the MSI Prestige 16 AI Evo, for example: a laptop equipped with MSI’s “AI Engine” that “detects user scenarios and automatically adjusts hardware settings for best performance.” My question is how this AI differs from, say, Nvidia’s GPU Boost algorithm. GPU Boost uses various sensors on the graphics card to automatically raise performance as thermal and power constraints allow, yet that feature has been around since 2012 and no one calls it AI. And for good reason.
There is certainly an overlap between intelligent features that react to human input and features that learn from it. I’d love to know whether MSI’s profile-switching feature actually learns from user actions and improves over time, because that would be the strongest case for calling it AI. Learning seems the best way to distinguish genuine artificial intelligence features from merely smart ones.
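For what that difference might look like in code, here’s a minimal sketch (the profile names, app names, and the whole design are illustrative assumptions, not MSI’s implementation) of a switcher that merely looks things up versus one that learns from the user’s manual overrides:

```python
# Sketch: a profile switcher that merely reacts vs. one that learns.
# Profile names, app names, and the design are illustrative assumptions.
from collections import Counter, defaultdict

# Static version: a fixed lookup table, written once by an engineer.
STATIC_PROFILES = {"game.exe": "performance", "player.exe": "quiet"}

def static_switch(app: str) -> str:
    return STATIC_PROFILES.get(app, "balanced")

# Learning version: starts from the same defaults, but records every manual
# override as feedback, and the user's most frequent choice wins over time.
class LearningSwitcher:
    def __init__(self) -> None:
        self.votes: defaultdict[str, Counter] = defaultdict(Counter)

    def record_override(self, app: str, chosen_profile: str) -> None:
        self.votes[app][chosen_profile] += 1  # user behavior as training data

    def switch(self, app: str) -> str:
        if self.votes[app]:
            return self.votes[app].most_common(1)[0][0]
        return static_switch(app)  # no feedback yet: fall back to fixed rules

switcher = LearningSwitcher()
switcher.record_override("game.exe", "quiet")  # user keeps flipping to quiet
switcher.record_override("game.exe", "quiet")
print(switcher.switch("game.exe"))  # -> quiet, learned from behavior
```

The static version gives the same answer on day one and day one thousand; the learning version changes its answer once the user’s corrections outvote the defaults.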
Moreover, AI is not an entirely local technology. Most of these AI capabilities are enabled by large data centers, and the AI revolution is riding the wave of cloud computing. That said, AI processing power is increasingly being placed locally, such as on your own PC: Nvidia GPUs have tons of AI processing cores, and Intel’s latest Core Ultra chips have NPUs to accelerate AI. But many of today’s AI capabilities still require an internet connection, which further blurs the line between what is actually an AI product and what is simply using an AI service running elsewhere.
AI is inherently difficult to define, and its various subsets make it even harder to sniff out AI’s real-world use cases. The only practical conclusion I can reach is that, with the wide variety of uses classified as AI and machine learning today, it may be pointless to slap the umbrella term “AI” on products.
If you’re looking for a new laptop, processor, toaster, or pillow, ignore the noise and focus on the truly effective features that you will actually use. The AI and machine learning features you really need should be pretty obvious.