The Next 100x

Follow our journey as we aim to disrupt the economics of AI compute.

Get in touch with our media team at media@d-matrix.ai
[Image: overhead view of a large warehouse filled with server racks]
Explore Blog
Press Article

Generative AI drives an explosion in compute: The looming need for sustainable AI

By:
Sid Sheth

The memory wall refers to the physical barriers limiting how fast data can be moved in and out of memory; it is a fundamental limitation of traditional architectures. In-memory computing (IMC) addresses this challenge by running AI matrix calculations directly in the memory module, avoiding the overhead of sending data across the memory bus.

Read Article
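
To make the memory-wall framing above concrete, here is a minimal back-of-envelope sketch in Python. The per-byte and per-MAC energy constants are illustrative assumptions for the sake of the sketch, not figures from the article or from any d-Matrix product.

# Rough model of the memory wall for one matrix-vector product:
# for large weight matrices, the energy spent moving weights across
# the memory bus can dwarf the energy of the arithmetic itself.
DRAM_ACCESS_PJ_PER_BYTE = 100.0  # assumed bus-transfer cost per byte
MAC_PJ_PER_OP = 1.0              # assumed multiply-accumulate cost

def matvec_energy_pj(rows, cols, bytes_per_weight=1):
    """Estimate compute vs. data-movement energy (picojoules)."""
    macs = rows * cols
    compute = macs * MAC_PJ_PER_OP
    movement = macs * bytes_per_weight * DRAM_ACCESS_PJ_PER_BYTE
    return compute, movement

compute, movement = matvec_energy_pj(4096, 4096)
print(f"compute: {compute / 1e6:.1f} uJ, movement: {movement / 1e6:.1f} uJ")
# -> compute: 16.8 uJ, movement: 1677.7 uJ

Under these assumed constants, moving the weights costs roughly 100x the arithmetic itself; IMC targets exactly that term by performing the multiply where the weights are stored, so the bus transfer never happens.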
News

New Microsoft partnership accelerates generative AI development

By:
VentureBeat

One of the hottest trends in artificial intelligence (AI) this year has been the emergence of popular generative AI models. With technologies such as DALL-E and Stable Diffusion, a growing number of startups and use cases are emerging.

Read Article
Conference Presentation

Accelerating Transformers for Efficient Inference of Giant NLP Models

By:
Sudeep Bhoja (Co-founder / CTO)

Large Transformer Models are finding uses across speech, text, video, and images. In this presentation, we explain the challenges of accelerating these large models in hardware.

Watch Video
News

AI Compute Company Banks on Chiplets for Future Processors

By:
Spencer Chin

Chiplet packaging is catching on with companies designing high-performance processors for data center and AI applications. While familiar names such as Intel and AMD are in this space, so are some smaller startup companies. One of them is d-Matrix, a young company developing technology for AI-compute and inference processors.

Read Article
News

A Better Approach to Hyperscale Computing

By:
Playground Global

Why We Invested in d-Matrix

Read Article
News

d-Matrix Announces $44 Million in Funding

By:
Business Wire

d-Matrix Announces $44 Million in Funding to Build a One-of-a-kind Compute Platform Targeted for At-Scale Transformer AI Datacenter Inference

Read Article
News

D-Matrix lands $44M to build AI-specific chipsets

By:
SiliconAngle

Three-year-old startup d-Matrix Corp. said today that it has closed a $44 million funding round to support its efforts to build a new type of computing platform that supports transformer artificial intelligence workloads.

Read Article
News

D-Matrix’s new chip will optimize matrix calculations

By:
VentureBeat

Today, D-Matrix, a company focused on building accelerators for complex matrix math supporting machine learning, announced a $44 million series A round. Playground Global led the round with support from Microsoft’s M12 and SK Hynix. The three join existing investors Nautilus Venture Partners, Marvell Technology and Entrada Ventures.

Read Article
News

VMblog Expert Interview

By:
VMBlog

Recently, a company called d-Matrix launched out of stealth mode with a $44M Series A round. The co-founders are proven, longtime Silicon Valley tech innovators. They've developed the first 'baked' in-memory AI model which, for a change, is actually different from what's been out there.

Read Article
White Paper

Designing Next-Gen AI Inferencing Chips Using Azure's Scalable IT Cloud Infrastructure

By:
d-Matrix, Microsoft & Six Nines

A case study describing how d-Matrix built its first proof-of-concept AI chip entirely in Microsoft's Azure cloud.

Read Article
Conference Presentation

Developing Scalable AI Inference Chip with Cadence Flow in Azure Cloud

By:
Farhad Shakeri (Sr. Director IT/Cloud)

How d-Matrix set up a productive Azure cloud infrastructure running the Cadence flow, the lessons learned, and the key success factors that led to delivering its first AI chip within 14 months.

Read Article