d-Matrix has fundamentally changed the physics of memory-compute integration with our digital in-memory compute (DIMC) engine. The “holy grail” of AI compute has been to break through the memory wall and minimize data movement. We’ve achieved this with a first-of-its-kind DIMC engine.
We take a holistic approach to AI compute, leveraging five pillars:
Custom digital circuits that integrate compute directly into programmable memory (IMC) deliver large gains in efficiency while retaining datacenter-class accuracy.
Seamless mapping of existing trained models, plus the option to design new ones, to take advantage of IMC hardware that intrinsically supports multiple data types.
Embracing the open-source software movement in AI frameworks and compilers, combined with the maturity and extensibility of our libraries and ML tools, makes adoption straightforward.
A fabric of chiplets built with low-power open interconnects, enhanced with low-latency forward error correction (FEC), takes advantage of the “More Than Moore” paradigm. This Lego-block approach, from chip design to system design, provides modularity and scalability across multiple applications.
A Hetero-Modular™ organic package enables chiplet heterogeneity and scalability while remaining readily available and cost-effective.
Our Nighthawk proof-of-concept is now being showcased to select customers and partners.