What Happens After Model Supremacy?

Feb 19, 2026


Author: Maxime Pasquier, Principal @ BlackWood

Models, infrastructure, and the race for ecosystem control

It was hard to miss the AI M&A headlines across tech channels this week.

Mistral AI’s acquisition of cloud infrastructure startup Koyeb and OpenAI’s recruitment of OpenClaw creator Peter Steinberger are easy to mistake for isolated curiosities. Seen in isolation, they can appear as “modest” transactions: a model builder integrating a deployment platform, and a leading lab absorbing an open-source project founder. Taken together with earlier strategic moves across the industry, however, they reveal the first coherent consolidation wave of the generative AI era.

The prevailing assumption had been that frontier model companies, flush with capital and preoccupied with benchmark supremacy, would postpone acquisitions until the market matured. Instead, they are moving early, as competition is already shifting beyond raw model performance.

Mistral’s acquisition of Koyeb is the company’s first, though not the first among frontier labs. The Paris-based AI company, founded in 2023 and now valued in the tens of billions, has made clear its ambition to build not merely competitive open-weight models, but an integrated AI platform. Koyeb’s serverless cloud deployment technology, designed to simplify the global rollout of applications, strengthens Mistral’s compute offering and reinforces its ambition to provide a sovereign AI cloud for European workloads.

In a world where model capabilities are converging and open models proliferate, differentiation will increasingly hinge on where models run, how easily enterprises deploy them, and who controls the surrounding infrastructure. By bringing deployment capabilities closer to the model layer, Mistral reduces dependency and deepens its claim to being a European alternative to US hyperscaler ecosystems.

Anthropic’s acquisition of Bun in 2025 followed similar logic. Bun, a high-performance JavaScript runtime, had already become foundational to Anthropic’s agent-oriented workflows. Rather than rely on a third-party runtime for systems that execute and manage AI-generated code, Anthropic internalised the dependency. The deal effectively secured part of its execution environment and reinforced the company’s broader ambitions around Claude Code and agent tooling.

OpenAI’s acquisition history points in the same direction. Since 2023, it has absorbed companies and teams across product design, collaboration tooling, experimentation infrastructure, domain-specific applications and MLOps. Global Illumination brought consumer product talent. Multi strengthened real-time collaboration. Statsig added experimentation and feature-flagging capabilities. Neptune contributed tooling for tracking and debugging large-scale model training. In aggregate, these moves form a pattern: progressive vertical integration of the AI development and deployment stack.

This week’s OpenClaw episode fits squarely within that trajectory. OpenClaw, an open-source agent framework that generated notable developer attention, represented more than a technical curiosity. Open-source projects with sufficient momentum can crystallise into alternative ecosystems. Ecosystems, once formed, are difficult to displace.

By bringing Steinberger into OpenAI and aligning the project within its orbit, OpenAI secured both scarce talent and influence over a fast-moving segment of the agent ecosystem.

Early absorption is often far less costly than confronting a mature rival later :)

But why is consolidation occurring so early in the generative AI cycle?

  • First, model commoditisation risk. Frontier models continue to leapfrog one another, but their capabilities are converging along key dimensions. As raw intelligence becomes less differentiating, value migrates outward, toward tooling, deployment, workflow integration and enterprise controls.

  • Second, capital intensity. Training and serving frontier systems requires immense compute expenditure. Dependency on external infrastructure introduces both margin pressure and strategic vulnerability. Bringing more of the stack in-house improves cost visibility, operational resilience and long-term leverage.

  • Third, platform formation. AI models are no longer standalone products. They are becoming foundational platforms upon which agents, workflows and enterprise systems are built. History suggests that once platforms reach critical mass, they consolidate the surrounding layers: runtimes, deployment tools, developer environments and governance systems. The AI sector is compressing that historical pattern into a few short years…

There is also an undercurrent of strategic caution. In open ecosystems, value tends to leak to intermediary layers: tool builders, runtime environments or independent open-source communities. Early acquisitions and talent integrations are mechanisms to reduce that leakage, secure scarce expertise and anchor emerging ecosystems before they develop autonomous gravitational pull.

The first phase of generative AI was defined by capability: who could build the most powerful model? The second phase seems to be defined by control: who governs the stack around it?

Sources

Anthropic: Anthropic acquires Bun as Claude Code reaches $1B milestone

OpenAI: OpenAI acquires Global Illumination

OpenAI acquisition history: comprehensive list with all of these deals

Sifted: Mistral seals first acquisition deal with cloud startup Koyeb

TechCrunch: OpenClaw creator Peter Steinberger joins OpenAI

Reuters: OpenClaw founder Steinberger joins OpenAI, open-source bot becomes foundation

VentureBeat: Coverage on OpenAI’s acquisition of OpenClaw