Open standards are the path to the next AI breakthrough 

Why we're working with the UALink and UEC consortiums to drive innovation in AI.

Published: October 14, 2025
By: Sree Ganesan

We are inevitably moving toward a world where AI applications need substantially better performance, and classic GPUs alone can’t keep up. At the same time, customers need flexibility in their infrastructure.

With those constraints in mind, we still need to ensure these AI workloads are both high-performance and efficient, even at scale. But rather than reinventing the wheel, we’re following the lead of some of the most successful companies before us: working with open hardware standards.

As part of that approach, we will be working with many of the top hardware companies in the world to evolve these open standards through the UALink and UEC consortiums.

Rather than forcing customers into proprietary infrastructure, open standards ensure a fully interoperable ecosystem. They also accelerate innovation in a way that isn’t possible when building in isolation, since every company committed to a standard has a vested interest in its growth.

Users currently tolerate the speed and performance GPUs deliver (often measured in the hundreds of tokens generated per second) simply because the technology is so new and transformational. That will change as applications become more complex and users come to expect better experiences requiring high-performance computing that existing GPU-oriented systems won’t be able to deliver.

For us, UALink and UEC represented the best starting point to grow our expertise and contribute our depth of knowledge back to the developer community.

Next-generation AI workflows need new scale-up and scale-out systems 

Classic paradigms are collapsing under the weight of runaway demand. But the data center infrastructure needed to keep up is already there: PCIe-enabled, ethernet-based systems. The UALink and UEC consortiums recognized this the same way we did, and built a blueprint to enable collaboration and push that infrastructure forward.

We set tight bounds for ourselves: low-power, ethernet-based hardware with a PCIe form factor. We worked with ecosystem partners and built our own technology so customers can scale up Corsair systems with SquadRack to handle models of up to 100B parameters. We also acknowledge that open standards are going to be the best pathway to keeping up with AI inference demand, particularly as reasoning models and agentic workflows become deeply integrated into everyday work.

UALink and UEC enable efficient ways to scale AI workflows up and out without adopting a new interconnect fabric, instead letting hardware operators lean on existing ethernet-based systems. These standards address the shortfalls of NVLink and InfiniBand while giving customers optionality across hardware setups.

With current GPU-based compute paradigms, companies end up either over-provisioning or under-provisioning expensive hardware that, at best, delivers performance in the hundreds of tokens per second. Model sizes and architectures change rapidly, while customers are locked into hardware until it is fully amortized, which can take years.

There are significantly more options for high-performance accelerators today than there were just a few years ago, and customers will want the opportunity to choose the best infrastructure for their goals. That will inevitably mean heterogeneous hardware configurations that hit a sweet spot between AI workload requirements and budgets. This also makes open standards the clearest path to customer success.

We created our technology with interoperability—and choice—as a first principle, starting with a design built around a classic PCIe system that you can just drop onto a rack. The open standards from the UEC and the UALink consortiums have clearly aligned with our own goals of adapting existing technology (PCIe) and infrastructure design (ethernet) to operate efficiently for AI-powered workflows. 

Why both improved scale-up and scale-out tools are necessary—with existing infrastructure 

Cloud computing was designed with elasticity as a first principle: compute needs ebb and flow throughout the day. That elasticity, delivered by hyperscalers like AWS, Microsoft Azure, and Google Cloud, enabled app developers to ensure app quality while keeping costs under control.

AI-powered applications follow essentially the same principle, with two differences: first, the inference is powered by accelerators (like GPUs); and second, the complexity of the applications can scale up and down in unusual ways. Here are a few examples, followed by a short sketch of how these dimensions compound:

  • Agentic network size: AI-powered workflows that use agentic networks can span between 1 and n agents with unique models selected for each. 
  • Context length: Applications may be quick-fire, one-shot inferences that total a few thousand tokens—or they could be operating in the millions of tokens. 
  • Model selection: Models come in multiple configurations, including different sizes that require different levels of memory. 
  • End-user SLAs: Applications may have different performance requirements, such as initial response time (time to first token) or the speed of each subsequent response, and sometimes both at once. 
  • Multi-tenancy: Your user base could jump from 100 to 100,000 overnight, which means keeping enough power-hungry GPUs ready to absorb the spike while they sit idle the rest of the time. 
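
To make these dimensions concrete, here is a minimal Python sketch of how a few of them multiply into wildly different provisioning needs. Every name and number in it is an illustrative assumption, not a real API or benchmark:

```python
from dataclasses import dataclass

# Illustrative only: all fields and figures below are assumptions.
@dataclass
class WorkloadProfile:
    num_agents: int        # agentic network size (1..n)
    context_tokens: int    # tokens per request (prompt + generation)
    model_size_b: int      # billions of parameters per model
    concurrent_users: int  # multi-tenancy

def relative_demand(p: WorkloadProfile) -> float:
    """Crude proxy for compute demand: work per request times concurrency."""
    return p.num_agents * p.context_tokens * p.model_size_b * p.concurrent_users

chatbot = WorkloadProfile(num_agents=1, context_tokens=4_000,
                          model_size_b=8, concurrent_users=100)
agent_net = WorkloadProfile(num_agents=6, context_tokens=1_000_000,
                            model_size_b=70, concurrent_users=100_000)

print(f"demand swing: {relative_demand(agent_net) / relative_demand(chatbot):,.0f}x")
# demand swing: 13,125,000x
```

The exact numbers don't matter; the point is that these dimensions compound, so any static provisioning choice is wrong in one direction or the other most of the time.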

Developers already have workflows in place that adhere to elasticity requirements, and many had them in place for managing existing machine learning workloads, such as recommendation engines. The challenge is adapting all of this to the ravenous demand that new AI tools, such as large language models and diffusion models, bring to the table. 
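
As a rough illustration of how that existing elasticity thinking carries over, here is a minimal sketch of a queue-depth scaling rule of the kind already common for classic ML services; the names and thresholds are hypothetical:

```python
import math

# Hypothetical rule: size the replica count to the request queue depth,
# clamped to a floor (availability) and a ceiling (budget/capacity).
def desired_replicas(queue_depth: int,
                     target_per_replica: int = 8,
                     min_replicas: int = 1,
                     max_replicas: int = 64) -> int:
    wanted = math.ceil(queue_depth / target_per_replica)
    return max(min_replicas, min(max_replicas, wanted))

assert desired_replicas(0) == 1        # idle: keep a minimal floor warm
assert desired_replicas(100) == 13     # ceil(100 / 8) replicas
assert desired_replicas(10_000) == 64  # spike: clamp at the ceiling
```

The hard part for LLM and diffusion serving isn't this control loop; it's that each "replica" is a far larger and far more expensive unit of accelerator capacity, so the cost of getting the rule wrong is much higher.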

The next generation of AI workflows demands flexibility and efficiency. It isn’t feasible to run 100,000 GPUs that sit at half-utilization most of the time, cost more than $2 per GPU hour, and rely on proprietary technology. We need to rethink our economics from first principles, and that requires being nimble and adapting to a rapidly shifting environment. 
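
To put rough numbers on that claim, using the figures in the paragraph above plus an assumed utilization, so treat this as a back-of-the-envelope sketch rather than a quote:

```python
# Back-of-the-envelope: figures from the text plus stated assumptions.
gpus = 100_000
cost_per_gpu_hour = 2.00      # "more than $2 per GPU hour"
hours_per_year = 24 * 365
utilization = 0.5             # assumed: half-utilized most of the time

annual_spend = gpus * cost_per_gpu_hour * hours_per_year
idle_spend = annual_spend * (1 - utilization)
print(f"annual spend: ${annual_spend / 1e9:.2f}B")  # annual spend: $1.75B
print(f"idle spend:   ${idle_spend / 1e9:.2f}B")    # idle spend:   $0.88B
```

Under those assumptions, nearly a billion dollars a year goes to idle silicon, which is exactly the economics that elasticity and hardware optionality are meant to fix.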

Committing to open standards 

The UALink consortium came together last year with founding members that included AMD, Apple, AWS, Intel, Microsoft, Meta, and Google. The UEC consortium, which came together in 2023, also included founding members like AMD, Arista, Broadcom, Cisco, Eviden, HPE, Intel, Meta, and Microsoft. This is a colossal, powerful ecosystem—and one that is needed to push the whole industry forward. It was a no-brainer for us to join: We can contribute our ideas and technology to the whole hardware ecosystem in addition to taking advantage of the newest innovations. 

The world of AI-powered workflows will continue to change, through both predictable, scaling-law-driven progress and unpredictable breakthroughs. We live in a world of dense and mixture-of-experts models today, but other architectures could easily crash the party. That’s why our hardware and software have to be adaptable enough to handle both memory-hungry models and compute-hungry applications. 

There’s no better place to innovate than on the technology that already pervades our data centers. No single company can take on that task alone; if one did, we would be back to relying on proprietary technology and accepting the risks that come with it. Ethernet, which already serves as the backbone of our connected world, has turned out to be entirely capable of powering our AI future; it just needs further development on that front. 

UALink and UEC are just two of the open standards we hope to contribute our knowledge and research to, and we intend to support open standards broadly as they emerge. A vibrant open development ecosystem is already forming beyond these two consortiums: Broadcom’s Scale-Up Ethernet (SUE) offers another option for scale-up systems, and more will surely come. 

Working with open technology is built into our DNA, and we are incredibly excited to see where these new tools take us as we adapt to a new generation of AI. 
