Silicon Valley AI startup Wave Computing has launched the new TritonAI 64 platform. The platform is said to integrate a triad of powerful technologies into a single, future-proof intellectual property (IP) licensable solution for AI System on Chip (SoC) designers targeting automotive, enterprise and other high-growth AI edge markets.
The TritonAI 64 platform delivers 8- to 32-bit integer support for high-performance AI inferencing at the edge today, with bfloat16 and 32-bit floating-point support for edge training planned for the future.
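The two numeric regimes mentioned above can be illustrated in a few lines of plain Python. This is a hedged sketch, not Wave's implementation: `quantize_int8`, `dequantize_int8`, and `to_bfloat16` are hypothetical helper names, but the arithmetic they show (symmetric int8 scaling, and bfloat16 as the top 16 bits of a float32) is the standard definition of each format.

```python
import struct

# Illustrative helpers for the two numeric regimes the platform targets:
# int8 quantization for edge inferencing, bfloat16 for edge training.
# These names are hypothetical; only the arithmetic is standard.

def quantize_int8(x: float, scale: float) -> int:
    """Symmetric quantization: round x/scale, then clamp to int8 range."""
    q = round(x / scale)
    return max(-128, min(127, q))

def dequantize_int8(q: int, scale: float) -> float:
    """Recover an approximation of the original value."""
    return q * scale

def to_bfloat16(x: float) -> float:
    """bfloat16 keeps the sign, 8-bit exponent, and top 7 mantissa bits
    of a float32, i.e. the top 16 bits of its bit pattern."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

scale = 0.05
print(quantize_int8(3.14159, scale))   # 63
print(dequantize_int8(63, scale))      # ~3.15 (quantization error vs 3.14159)
print(to_bfloat16(3.14159))            # 3.140625 (reduced mantissa precision)
```

The example shows why int8 is attractive for edge inference (small, fast, with bounded error) while bfloat16 keeps the full float32 exponent range that training tends to need.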
The platform comes equipped with a MIPS 64-bit SIMD engine that is integrated with Wave’s dataflow and tensor-based configurable technology. Additional features include access to Wave’s MIPS integrated developer environment (IDE), as well as a Linux-based TensorFlow programming environment.
The MIPS IP subsystem in the TritonAI 64 platform enables SoCs to be configured with up to six MIPS64 CPUs, each with up to four hardware threads. The subsystem hosts the execution of Google's TensorFlow framework on a Debian-based Linux operating system, enabling the development of both inferencing and edge-learning applications.
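With up to six CPUs of four hardware threads each, the subsystem can expose up to 24 hardware threads to software. The sketch below shows one generic way an application might spread a batch of inference requests across those threads; `run_inference` is a hypothetical stand-in for a real TensorFlow model call, and the thread count is simply the 6 × 4 configuration described above.

```python
from concurrent.futures import ThreadPoolExecutor

# Up to six MIPS64 CPUs, each with four hardware threads.
HW_THREADS = 6 * 4

def run_inference(sample):
    # Hypothetical placeholder for a real TensorFlow model invocation;
    # here it just averages the input values.
    return sum(sample) / len(sample)

def batch_infer(batch):
    """Fan a batch of requests out across the available hardware threads."""
    with ThreadPoolExecutor(max_workers=HW_THREADS) as pool:
        return list(pool.map(run_inference, batch))

results = batch_infer([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(results)  # [2.0, 5.0]
```

A thread pool sized to the hardware thread count is a common pattern for throughput-oriented edge inference, since each request is independent and can run on its own hardware thread.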
Additional AI frameworks, such as Caffe2, can be ported to the MIPS subsystem, and a variety of AI networks can be supported through ONNX conversion.