New technologies and developments

Industrial supercomputers

One of Mineberg’s main goals is to build a modern high-end supercomputer as part of each data center. To achieve this, we will use the most advanced and well-tested solutions available. Bitware specialists are developing a new platform whose main aim is to deliver maximum performance with minimum energy consumption. The supercomputer is the heart of our data centers, and we have designed it with great attention to detail.

The system consists of 5U blocks installed in a rack. The core of every block is eight high-end SoC FPGA+HBM2 units based on Xilinx Kintex-7 or Altera Stratix 10 devices. These boards were selected for their functionality: according to Bitware Group planners, they are a perfect fit for a system intended for exceptionally large-scale computations.
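As a rough illustration of the resulting density, the Python sketch below estimates how many compute blocks and FPGA+HBM2 units fit into one rack. The rack height and the space reserved for the switch and host are assumptions for illustration, not figures from our specification.

    # Back-of-the-envelope sketch of the rack layout described above.
    # Assumptions (not from the source): a standard 42U rack with 2U
    # reserved for the PCIe switch and host infrastructure.

    BLOCK_HEIGHT_U = 5    # each compute block occupies 5U (from the text)
    UNITS_PER_BLOCK = 8   # SoC FPGA + HBM2 units per block (from the text)
    RACK_HEIGHT_U = 42    # assumed standard rack height
    RESERVED_U = 2        # assumed space for switch/host

    blocks_per_rack = (RACK_HEIGHT_U - RESERVED_U) // BLOCK_HEIGHT_U
    fpga_units_per_rack = blocks_per_rack * UNITS_PER_BLOCK

    print(f"Blocks per rack:      {blocks_per_rack}")        # 8
    print(f"FPGA+HBM2 units/rack: {fpga_units_per_rack}")    # 64

Under these assumptions a single rack holds 8 blocks, or 64 FPGA+HBM2 units.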

To decrease energy consumption, we will use a cutting-edge external memory technology called Hybrid Memory Cube (HMC). It is up to 70% more energy-efficient than existing DDR3-based solutions, delivers roughly 15 times higher throughput, and requires about 10 times less board space. Its 3D architecture allows more memory banks to be placed than in DDR-based designs, giving the new system a significant advantage over other memory schemes.
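The sketch below simply applies the quoted ratios to an assumed DDR3 baseline to show what they mean in absolute terms. The baseline figures (pJ/bit, GB/s, board area) are hypothetical, and “70% more energy-efficient” is interpreted here as 70% less energy per bit transferred.

    # Illustration of the relative HMC-vs-DDR3 figures quoted above.
    # Only the ratios come from the text; the baseline values are assumed.

    ddr3_energy_pj_per_bit = 60.0   # assumed DDR3 baseline, pJ/bit
    ddr3_bandwidth_gb_s = 12.8      # assumed DDR3-1600 module bandwidth, GB/s
    ddr3_board_area_cm2 = 90.0      # assumed module footprint, cm^2

    hmc_energy_pj_per_bit = ddr3_energy_pj_per_bit * (1 - 0.70)  # 70% less energy
    hmc_bandwidth_gb_s = ddr3_bandwidth_gb_s * 15                # 15x throughput
    hmc_board_area_cm2 = ddr3_board_area_cm2 / 10                # 10x less space

    print(f"Energy:    {hmc_energy_pj_per_bit:.1f} vs {ddr3_energy_pj_per_bit} pJ/bit")
    print(f"Bandwidth: {hmc_bandwidth_gb_s:.1f} vs {ddr3_bandwidth_gb_s} GB/s")
    print(f"Area:      {hmc_board_area_cm2:.1f} vs {ddr3_board_area_cm2} cm^2")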

The blocks are connected to the host over two PCIe 4.0 x16 interfaces through a PCIe switch. We will use only high-quality optical cables to ensure uninterrupted operation of all system components.
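For a sense of the host bandwidth this provides, the estimate below works out the throughput of a PCIe 4.0 x16 link from its 16 GT/s line rate and 128b/130b encoding. Higher-layer protocol overheads are ignored, so real-world figures will be somewhat lower.

    # Rough per-block host bandwidth implied by two PCIe 4.0 x16 links
    # (sanity-check only; protocol overheads beyond line encoding ignored).

    GT_PER_S = 16e9          # PCIe 4.0 raw rate per lane, transfers/s
    ENCODING = 128 / 130     # 128b/130b line-encoding efficiency
    LANES = 16
    LINKS = 2

    lane_gb_s = GT_PER_S * ENCODING / 8 / 1e9   # GB/s per lane, one direction
    link_gb_s = lane_gb_s * LANES               # GB/s per x16 link
    total_gb_s = link_gb_s * LINKS              # both links combined

    print(f"Per lane:            {lane_gb_s:.2f} GB/s")   # ~1.97 GB/s
    print(f"Per x16 link:        {link_gb_s:.1f} GB/s")   # ~31.5 GB/s
    print(f"Two links per block: {total_gb_s:.1f} GB/s")  # ~63.0 GB/s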

Intelligence Processing Unit - 14nm IPU ASIC

The IPU ASIC IP library includes functions found in most AI designs, resulting in a core architecture that is both optimized and robust with respect to changes in AI algorithms. Specific algorithm modifications can be accommodated through a combination of minor chip revisions that integrate the appropriate AI “tiles”, or modifications of the 2.5D package to integrate the appropriate memory components.

The AI-targeted “tiles” developed by Bitware Group include subsystems such as convolution engines, whose MAC blocks are tightly coupled with memory subsystems optimized for AI, resulting in the lowest area and power. Special innovative structures, including a transpose memory, have been developed for data transfer across the memory subsystems. The physical interface (PHY) to the HBM2 memory stack is also part of the library. Approximately 100 Bitware Group engineers are working on the design and silicon hardening of this AI IP.
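To make the role of the convolution tiles concrete, the following NumPy sketch shows the multiply-accumulate (MAC) pattern such an engine executes in hardware, together with a software analogue of a transpose-memory access. It is purely illustrative and does not represent the tile’s actual microarchitecture.

    # A minimal software sketch of what a convolution "tile" computes: each
    # output element is a sum of multiply-accumulate (MAC) operations over a
    # window of the input. Illustrative only; the real tile is a hardware block.

    import numpy as np

    def conv2d_mac(image, kernel):
        kh, kw = kernel.shape
        oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
        out = np.zeros((oh, ow))
        for y in range(oh):
            for x in range(ow):
                # one MAC sequence per output pixel: multiply, then accumulate
                out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
        return out

    image = np.arange(36, dtype=float).reshape(6, 6)
    kernel = np.ones((3, 3)) / 9.0        # simple averaging kernel
    result = conv2d_mac(image, kernel)
    print(result.shape)                   # (4, 4)

    # A software analogue of "transpose memory": reading the same buffer in
    # column order, an access pattern that dedicated hardware makes cheap.
    column_order = image.T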

A typical AI design requires access to large amounts of memory. This is usually accomplished with a combination of customized memory structures on the AI chip itself and off-chip access to dense 3D memory stacks called high-bandwidth memory (HBM). Access to these HBM stacks is provided through a technology called 2.5D integration, which uses a silicon interposer to tightly integrate the chip with HBM memory in a sophisticated multi-chip package. The current standard for this interface is HBM2. Customized on-chip memory and 2.5D integration are core competencies of Bitware Group that are required for a successful AI design.
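The bandwidth motivation for HBM2 over 2.5D integration can be seen from a quick estimate: each HBM2 stack exposes a 1024-bit interface (eight 128-bit channels) at a nominal 2 Gb/s per pin. The number of stacks per package used below is an assumption for illustration only.

    # Quick bandwidth estimate per HBM2 memory stack (nominal HBM2 figures;
    # the stack count in the 2.5D package is assumed, not from the source).

    BUS_WIDTH_BITS = 1024      # HBM2 stack interface width (8 x 128-bit channels)
    PIN_RATE_GBPS = 2.0        # nominal HBM2 data rate per pin, Gb/s
    STACKS = 4                 # assumed number of stacks in the package

    stack_gb_s = BUS_WIDTH_BITS * PIN_RATE_GBPS / 8   # GB/s per stack
    package_gb_s = stack_gb_s * STACKS

    print(f"Per HBM2 stack:          {stack_gb_s:.0f} GB/s")    # 256 GB/s
    print(f"Assumed 4-stack package: {package_gb_s:.0f} GB/s")  # 1024 GB/s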

Bitware Group has built the industry’s leading AI ASIC. We are currently engaged with several tier-one system providers and high-profile startups to deploy the IPU ASIC platform and its associated IP. Initial applications will focus on the data center and information optimization, human/machine interaction, and autonomous vehicles.