A British startup that hopes to challenge Nvidia’s dominance in chips for AI applications with an innovative hardware design has emerged from stealth with $15 million in seed funding.
The startup, Fractile, is the brainchild of Walter Goodwin, a 28-year-old PhD graduate from the University of Oxford’s Robotics Institute. Like other teams hoping to take on Nvidia, Goodwin is pursuing a chip design that is very different from the graphics processing units (GPUs) that Nvidia makes.
The British startup is the latest in a slew of startups and big tech companies trying to offer chips that can compete with Nvidia’s GPUs in the fast-growing market for running AI applications. Other startups targeting the same market include Groq, Mythic, Rain AI, Cerebras and Graphcore (recently acquired by SoftBank). Meanwhile, AMD, which already makes GPUs, is stepping up efforts to challenge Nvidia, and major cloud providers like Microsoft, Google and Amazon’s AWS are already making their own AI-specific chips.
Fractile was founded in 2022 but has been operating in “stealth mode” for two years while working on chip designs. The company secured seed funding from the defense alliance NATO’s Innovation Fund, Kindred Capital, and Oxford Science Enterprises, which led the fundraising. Also participating were Cocoa and Innovia Capital, along with prominent angel investors who are alumni of AI and semiconductor companies.
GPUs were originally designed in the late 1990s to speed up the execution of graphics-intensive applications like video games and computer-aided design software. Their advantage is that they can process large amounts of data in parallel, rather than having to execute programs in a linear sequence as a standard central processing unit (CPU) does. As it happens, those parallel processing capabilities make GPUs ideally suited to running the large neural networks (a type of software very loosely based on how the human brain works) that underpin modern AI applications.
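To make that concrete, here is a minimal sketch in Python (illustrative only; the sizes and variable names are assumptions, not anything published by Fractile or Nvidia) showing how a neural-network layer boils down to one large matrix operation whose output elements can all be computed independently, which is exactly the kind of work parallel hardware accelerates.

```python
# Toy illustration of why parallel hardware suits neural networks:
# a single layer is mostly one big matrix multiply, and every output
# element can be computed independently of the others.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((4096, 4096))   # one layer's weight matrix (assumed size)
activations = rng.standard_normal(4096)       # input vector to that layer

# "CPU-style" sequential view: compute one output element at a time.
out_sequential = np.array([weights[i] @ activations for i in range(weights.shape[0])])

# "GPU-style" view: the whole layer expressed as one parallelizable operation.
out_parallel = weights @ activations

# Both views produce the same result; the difference is how much of the
# work can be done at the same time.
assert np.allclose(out_sequential, out_parallel)
```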
Although GPUs can run AI software much faster than CPUs, there are still aspects of their design that limit how fast AI models can run. One of the biggest issues is that GPUs typically rely on memory stored elsewhere in the system, in separate memory chips known as DRAM (dynamic random access memory). Shuttling data back and forth between this memory and the GPU itself creates a bottleneck in how fast a GPU can run AI models. Goodwin told Fortune that in Fractile’s design, the data needed for a calculation is stored right next to the transistors that perform the operation, which dramatically cuts down on AI run times.
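A rough back-of-the-envelope calculation shows why that data movement can dominate. The sketch below uses entirely assumed figures (a hypothetical 70-billion-parameter model, 16-bit weights, and assumed memory bandwidth and compute throughput; none of these numbers come from Fractile or Nvidia) to compare how long generating one token of output spends waiting on memory versus doing arithmetic.

```python
# Back-of-the-envelope sketch; all figures below are illustrative assumptions.
model_params = 70e9          # assume a 70-billion-parameter language model
bytes_per_param = 2          # assume 16-bit weights
weight_bytes = model_params * bytes_per_param

dram_bandwidth = 3.35e12     # assumed off-chip memory bandwidth, bytes per second
compute_rate = 1e15          # assumed usable arithmetic throughput, operations per second

# Generating one token requires streaming roughly all of the weights from
# memory once, and doing about two operations per weight (multiply and add).
time_memory = weight_bytes / dram_bandwidth
time_compute = (2 * model_params) / compute_rate

print(f"time moving weights from memory: {time_memory * 1e3:.1f} ms per token")
print(f"time doing the arithmetic:       {time_compute * 1e3:.2f} ms per token")
# Under these assumptions, moving the data takes far longer than the math itself,
# which is the bottleneck that in-memory compute designs aim to remove.
```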
AI startup Groq, which already makes chips and offers them through its own cloud-based AI computing service, is taking a similar approach by moving a system’s memory closer to where the processing happens. Groq does this by placing SRAM (static random-access memory) directly on the chip, rather than relying on off-chip DRAM. But Goodwin says Fractile has gone a step further, integrating memory and processing into a single component, which should make its chips even faster.
So far, Fractile has only tested its designs in computer simulations and has not yet produced test chips. But Goodwin said that, based on these simulations, Fractile is confident it can run large language models, the AI models that power today’s consumer chatbots and form the basis of most generative AI applications, 100 times faster and 10 times cheaper than Nvidia’s GPUs.
The company also said it is targeting significant power savings over competing AI hardware. AI chip energy consumption has become a hot topic as people grow concerned about the potential carbon footprint and energy costs of the AI boom. Google and Microsoft have said their efforts to achieve net-zero carbon emissions have been thrown off track by the global expansion of datacenter infrastructure and the fact that AI computing loads make up an increasingly large share of the work performed in those datacenters. Fractile said its goal is to develop chips that deliver 20 times more performance per watt than existing AI hardware.
Perhaps the key to Nvidia’s dominance of the AI market is not just the flexibility of its GPUs, but the software programming system, called CUDA, that the company provides to run those chips. Nvidia has invested heavily in building a large developer community around CUDA, which has made it hard to convince developers to try alternative hardware. In the past, some AI chip startups had a hard time winning developers over from Nvidia because they invested relatively little in developing easy-to-use software to run their chips.
Goodwin says Fractile took that lesson to heart and built its own software stack in parallel with its hardware. He says much of the work CUDA does is necessary only because GPUs aren’t really optimized for running AI workloads, and the extra calculations the software performs to compensate slow AI applications down further and waste energy. Because Fractile’s chip doesn’t need those workarounds, its software can be simpler while still offering developers what CUDA does, he says.
Goodwin declined to say when Fractile, which currently has just 14 employees and plans to grow to 18 by the end of August, will start producing chips. He said the seed funding will be used to help the company further test its designs in simulations and move toward manufacturing its first physical test chips.
Sam Herman, head of deep tech at Oxford Science Enterprises, praised Fractile’s “radically innovative approach” to building AI chips. Kindred Capital partner John Cassidy said the firm appreciates that Fractile’s team has a deep understanding of how AI software evolves. The speed of that evolution is a big challenge for AI chip companies, because it takes at least two years to bring a new chip design to full production, and by then the computing requirements of the AI field may have changed. This phenomenon has thwarted previous attempts to replace GPUs as the workhorse of AI computing: GPUs are usually versatile and flexible enough to adapt to the next wave of AI software, whereas more specialized chips often can’t. But Cassidy said he thinks Fractile’s team “has the deep knowledge to understand how AI models are likely to evolve and how to build hardware for the requirements not just two years from now but five to 10 years from now.”
This story originally appeared on Fortune.com.