Forget GPUs, d-Matrix uses DIMC to win AI Inference

November 26, 2024

Cerebral Valley ‘Deep Dive’ with CEO Sid Sheth, exploring our DIMC architecture as a new way forward for AI, the competitive landscape of AI inference, and Sid’s personal journey as a serial entrepreneur.

Today, we’re talking with Sid Sheth, co-founder and CEO of d-Matrix.

d-Matrix is tackling AI inference with its novel Digital In-Memory Compute (DIMC) architecture to boost efficiency and speed for generative AI workloads in cloud data centers. Founded in 2019, d-Matrix is targeting the growing demand for AI inference, building what Sid describes as “the most efficient computing platform for AI inference” by focusing on customer-driven innovation and first-principles engineering.

Key Takeaways:

Inference Dominance: d-Matrix bet early on inference becoming the dominant AI computing challenge, a vision validated by the explosion of generative AI demand.

DIMC Innovation: The DIMC architecture combines digital accuracy with in-memory compute efficiency, chiplet-based scaling, and block floating point numerics (sketched after this list), achieving industry-leading performance, energy efficiency, and throughput.

Future Expansion: While currently focused on cloud data centers, d-Matrix envisions scaling its chiplet-based platform into workstations and client PCs in the future.
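For readers unfamiliar with block floating point, here is a minimal, illustrative NumPy sketch of the general idea: each block of values shares a single exponent while every value keeps only a low-bit signed mantissa, trading a little precision for much cheaper arithmetic and storage. The block size, mantissa width, and rounding scheme below are assumptions chosen for illustration, not d-Matrix’s actual numerics.

```python
import numpy as np

def bfp_round_trip(x, block_size=16, mantissa_bits=8):
    """Quantize a 1-D array with block floating point (BFP),
    then dequantize it, so the representation error is visible.

    In BFP, each block of `block_size` values shares one exponent;
    each value stores only a signed `mantissa_bits`-bit mantissa.
    """
    x = np.asarray(x, dtype=np.float64)
    pad = (-len(x)) % block_size
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)

    # One shared exponent per block, large enough to cover the
    # block's biggest magnitude (tiny floor avoids log2(0)).
    max_mag = np.maximum(np.abs(blocks).max(axis=1, keepdims=True),
                         np.finfo(np.float64).tiny)
    exponents = np.ceil(np.log2(max_mag))

    # Scale so mantissas fit in a signed `mantissa_bits` integer.
    scale = 2.0 ** (exponents - (mantissa_bits - 1))
    lo, hi = -(2 ** (mantissa_bits - 1)), 2 ** (mantissa_bits - 1) - 1
    mantissas = np.clip(np.round(blocks / scale), lo, hi)

    # Dequantize: the shared exponent restores each block's range.
    return (mantissas * scale).reshape(-1)[: len(x)]

weights = np.random.randn(64)
approx = bfp_round_trip(weights, block_size=16, mantissa_bits=8)
print("max abs error:", np.max(np.abs(weights - approx)))
```

The appeal for inference hardware is that within a block, multiply-accumulate operations reduce to cheap integer arithmetic on the mantissas, with the shared exponents handled once per block rather than once per value.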

Read the full article on Cerebral Valley
