Over the last week, the world has been on fire because of DeepSeek’s new R1 reasoning model. But the stock predictions surrounding R1 don’t matter, and neither do the conspiracy theories. Even the model itself—while impressive—doesn’t really matter. The reason DeepSeek R1 really matters is that it means the number of frontier AI models is about to explode.
The bar to building state-of-the-art LLMs has dropped by an order of magnitude overnight. This doesn’t mean we will have 10x the number of frontier model providers—the power law of economic resource distribution means that in the next three years, we will have 1,000-10,000x the number of frontier model providers. This is not about a long tail of open-source lookalike models that fall short of the leading LLMs, but about true distributed innovation across hundreds of domains, use cases, and applications. There will be no AI oligopoly, no comfortable single-provider ecosystem, no walled garden. Humpty Dumpty just fell off that wall. If you felt exhausted by the arms race between OpenAI, Anthropic, Google, and Meta, get ready—because we have barely scratched the surface.
Web 1.0 didn’t have any winners. Because of R1, neither will AI 1.0. Until now, it’s as if we’ve been paying a few giant companies to build all of our websites for us. R1 will do for AI what WordPress did for the web: the barrier to contributing to the foundational AI model ecosystem is going to trend towards zero. This means more innovation, more specialization, and more choice. Everyone wins.
But this is not the future most companies have been building for. Overwhelmingly, teams have been working within a single family of models, perhaps two. While many companies have already provisioned access to dozens of different AI models, the truth is that 90% of enterprise applications are still built on OpenAI. Staving off vendor lock-in and improving diversification are already good reasons to use more than one model, but the real existential concern is that the traditional approach to building with AI—waterfalling development through a single model at a time—will put a hard ceiling on your innovation and speed. You will fall behind.
Companies are beginning to ship AI-powered applications at an astonishing clip, but as soon as they’re released they begin to drift into obsolescence. When it’s time to upgrade the model, or the old version gets deprecated, or you need to deploy in a geography that doesn’t support the current model, it’s a huge, painful manual lift with lots of knob-turning. There’s a constant chicken-and-egg problem—do you change the model or the prompt? Even changing prompts across different releases of the same model takes work. Any developer will tell you it’s not as simple as a one-line change; there are many variables that all need to change in parallel, or you risk significant quality degradation in your outputs.
This problem is hard enough when you have a handful of applications within one or two model families. Now imagine doing this across dozens of frontier LLMs and hundreds of AI pipelines in production. If you believe the future is multi-model, then the need for multi-model infrastructure is obvious. Instead of building applications one model at a time, the way we develop, evaluate, and evolve AI applications must shift from single-path to multi-path. We need to explore multiple models, prompts, and parameters in parallel. And we need to do so in a data-driven way.
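To make the multi-path idea concrete, here is a minimal sketch of what data-driven, parallel evaluation of several (model, prompt) candidates might look like. Everything here is hypothetical: `call_model` is a stub standing in for a real provider API call, and the scoring metric is a deliberately naive token-overlap check—real evaluation would use task-specific metrics or an LLM judge.

```python
# Hypothetical sketch: evaluate several (model, prompt) candidates in
# parallel against a small eval set, instead of tuning one path at a time.
from concurrent.futures import ThreadPoolExecutor


def call_model(model: str, prompt: str, question: str) -> str:
    # Stub: a real implementation would call the provider's API here.
    return f"[{model}] {prompt.format(question=question)}"


def score(answer: str, expected: str) -> float:
    # Naive stand-in metric: fraction of expected tokens found in the answer.
    expected_tokens = expected.lower().split()
    return sum(t in answer.lower() for t in expected_tokens) / len(expected_tokens)


def evaluate(candidates, eval_set):
    """Return (mean_score, model, prompt) tuples, best candidate first."""
    def run(candidate):
        model, prompt = candidate
        scores = [score(call_model(model, prompt, q), a) for q, a in eval_set]
        return (sum(scores) / len(scores), model, prompt)

    # Run every candidate path concurrently rather than one at a time.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(run, candidates))
    return sorted(results, reverse=True)


candidates = [
    ("model-a", "Answer concisely: {question}"),
    ("model-b", "You are an expert. {question}"),
]
eval_set = [("What is 2+2?", "4"), ("Capital of France?", "Paris")]
ranked = evaluate(candidates, eval_set)
```

The point of the sketch is the shape, not the details: candidates are data, evaluation is automated and parallel, and the winning model/prompt pair is chosen by measured quality rather than by whichever path a developer happened to hand-tune first.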
For leading Fortune 500s with global footprints and hundreds of AI-powered applications in production, switching a model out of an application is a nightmare—the prompt does not transfer, other parameters need to be adjusted, and there are many candidate models to evaluate. Almost every company we’ve worked with found it extremely hard to update their models without tons of manual work. As soon as they launched an application to production, they began to accumulate passive technical debt whenever the model landscape evolved. As we’ve moved our customers to principled, data-driven multi-model infrastructure, the number of engineer hours we’re seeing organizations get back is mind-boggling. Not only that, but they’re outperforming the quality of their previous manual workflows.
A few years from now, nobody will remember R1. But everyone will feel its effects. The number and diversity of frontier-quality, value-creating, life-changing AI models is going to increase exponentially. If 2024 strained the limits of human-powered model management, in 2025 this approach will become impossible. It’s time to build multi-model infrastructure, because we’re going to need a lot of baskets.