d-Matrix Blog

Featured

The Complete Recipe to Unlock AI Reasoning at Enterprise Scale
February 13, 2025

Impact of the DeepSeek Moment on Inference Compute
January 31, 2025

Transforming AI through Hardware-Software Codesign for Gen AI Inference
November 13, 2024

GigaIO Partners with d-Matrix to Deliver Ultra-Efficient Scale-Up AI Inference Platform
“When we started d-Matrix in 2019, we looked at the landscape of AI compute and made a bet that inference would be the largest computing opportunity of our lifetime,” said…
May 7, 2025

Eye on AI’s Craig S. Smith deep dive with d-Matrix’s CEO Sid Sheth

Breaking the Memory Wall: How d-Matrix Is Redefining AI Inference with Chiplets

How to Bridge Speed and Scale: Redefining AI Inference with Ultra-Low Latency Batched Throughput

The Complete Recipe to Unlock AI Reasoning at Enterprise Scale

Impact of the DeepSeek Moment on Inference Compute

Think more vs. Train more

View All Posts >

Trending

How to Bridge Speed and Scale: Redefining AI Inference with Ultra-Low Latency Batched Throughput

The Complete Recipe to Unlock AI Reasoning at Enterprise Scale

Impact of the DeepSeek Moment on Inference Compute 

Transforming AI: d-Matrix’s Pivotal Moments in Pursuit of Gen AI Inference At Scale

Transforming AI through Hardware-Software Codesign for Gen AI Inference 

Featured Video

The DeepSeek Moment

In this short talk, d-Matrix CTO Sudeep Bhoja discusses the release of the DeepSeek-R1 model and its impact on inference compute. He traces the evolution of reasoning models and explains the significance of inference-time compute in enhancing model performance.

Learn more about d-Matrix

From the Media

HPCwire: GigaIO Partners with d-Matrix to Deliver Ultra-Efficient Scale-Up AI Inference Platform

Embedded: Breaking the Memory Wall: How d-Matrix Is Redefining AI Inference with Chiplets

D-Matrix Targets Fast LLM Inference for ‘Real World Scenarios’

Cerebral Valley: Forget GPUs, d-Matrix uses DIMC to win AI Inference

AP: Nvidia rivals focus on building a different kind of chip to power AI products

Forbes: d-Matrix Emerges From Stealth With Strong AI Performance And Efficiency

View all media articles >

For all press inquiries, please email pr@d-matrix.ai
Transforming AI from unsustainable to attainable.
© d-Matrix, 2025