Innovating together with the ecosystem to build a better future.
We are collaborating with the AI hardware and software ecosystem to build the best AI inference solution for you.
“Arista’s cloud networking fabric is designed to meet the rigorous demands of AI infrastructure. JetStream’s ability to enable accelerator-to-accelerator communication over standard Ethernet pairs perfectly with Arista’s high-performance switches. Together, we’re demonstrating how AI inference can scale efficiently without requiring proprietary networking fabrics.”
— Vijay Vusirikala, Distinguished Lead, AI Systems and Networks, Arista Networks
“As a leader in high-performance PCIe and Ethernet connectivity, Broadcom is excited to see d-Matrix advancing AI infrastructure solutions. d-Matrix is unlocking a new level of performance and efficiency in AI inference while leveraging the standards-based networking ecosystem that Broadcom has long supported.”
— Jas Tremblay, Vice President and General Manager, Data Center Solutions Group, Broadcom
“Combining d-Matrix’s Corsair PCIe card with GigaIO SuperNODE’s industry-leading scale-up architecture creates a transformative solution for enterprises deploying next-generation AI inference at scale. Our single-node server supports 64 or more Corsairs, delivering massive processing power and low-latency communication between cards. The Corsair SuperNODE eliminates complex multi-node configurations and simplifies deployment, enabling enterprises to quickly adapt to evolving AI workloads while significantly improving their TCO and operational efficiency.”
— Alan Benjamin, CEO, GigaIO
“By integrating d-Matrix Corsair, Liqid enables unmatched capability, flexibility, and efficiency, overcoming traditional limitations to deliver exceptional inference performance. In the rapidly advancing AI landscape, we enable customers to meet stringent inference demands with Corsair’s ultra-low-latency solution.”
— Sumit Puri, Co-Founder, Liqid
“Supermicro is proud to collaborate with d-Matrix in delivering an efficient AI inference rack solution that combines compute acceleration, efficient networking, and server density in one integrated platform. Our proven track record in rack-level integration, along with d-Matrix’s inference acceleration products, offers customers a practical path to scaling AI inference across the enterprise and cloud.”
— Vik Malyala, President & Managing Director, EMEA and SVP Technology & AI, Supermicro