The World's Fastest AI Platform
Our vision is to create the world's fastest AI infrastructure. Guided by this vision, we are developing infrastructure that enables every user of AI technology to run inference and training freely, at the highest possible speed.
Our goal is not to manufacture physical products but to provide high-performance AI infrastructure, optimized to the limit in both hardware and software. We deliver it as IP (intellectual property) that other companies can license and incorporate into their own products and services.
This AI platform delivers its performance in any hardware environment, from desktops to supercomputers to edge devices. Our vision is to unify the fragmented worlds of inference and training, of edge and server, and to solve everything with a single, optimized, high-speed AI platform.
ARM, which holds the largest market share in the world, does not manufacture its own chips; it licenses its designs, which other companies then use to create their own products. Likewise, we provide the design of the world's fastest AI infrastructure, on which engineers around the world can shape their ideas freely and rapidly. This is our vision and the true meaning of our slogan, "Creating the world's fastest AI infrastructure."
We provide IP cores for FPGA-based deep learning inference accelerators. Using our proprietary multi-core technology running at up to 500 MHz together with a sparse matrix computation accelerator, the cores can process more than 1,000 images per second.
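To illustrate why sparse matrix computation speeds up inference, here is a minimal Python sketch of a compressed sparse row (CSR) matrix-vector product, the kind of operation a sparsity accelerator exploits: only the nonzero weights are stored and touched. This is an illustrative sketch only, not the design of our IP cores; all function names here are hypothetical.

```python
import numpy as np

def to_csr(dense):
    """Convert a dense matrix into CSR form: (values, column indices, row pointers)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        nz = np.nonzero(row)[0]          # positions of nonzero weights in this row
        values.extend(row[nz])
        col_idx.extend(nz)
        row_ptr.append(len(values))      # each row owns values[row_ptr[i]:row_ptr[i+1]]
    return np.array(values), np.array(col_idx, dtype=int), np.array(row_ptr)

def spmv(values, col_idx, row_ptr, x):
    """Sparse matrix-vector product: work scales with nonzeros, not matrix size."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        s, e = row_ptr[i], row_ptr[i + 1]
        y[i] = values[s:e] @ x[col_idx[s:e]]
    return y
```

With 90% of the weights zeroed, the multiply-accumulate count drops by roughly 10x, which is the kind of gain a hardware sparsity engine turns into throughput.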
An FPGA-based deep learning training accelerator is under development. By parallelizing multiple low-power FPGAs, it will deliver AI training up to 100 times faster, at one-tenth the cost of conventional GPU-based training platforms.
We provide a library that automatically compresses deep learning models. Pruning and quantization compress trained models for our inference accelerators, reducing inference time and memory usage by a factor of 10 or more.
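The two techniques named above can be sketched in a few lines of Python: magnitude pruning zeros out the smallest weights, and uniform symmetric quantization maps the survivors to 8-bit integers. This is a generic sketch of the standard techniques, not our library's actual API; the function names and parameters are assumptions for illustration.

```python
import numpy as np

def prune(weights, sparsity=0.9):
    """Magnitude pruning: zero out the fraction `sparsity` of smallest weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

def quantize(weights, bits=8):
    """Uniform symmetric quantization to signed `bits`-bit integers."""
    qmax = 2 ** (bits - 1) - 1                 # 127 for int8
    scale = np.max(np.abs(weights)) / qmax     # one scale per tensor
    q = np.round(weights / scale).astype(np.int8)
    return q, scale                            # approximate weights as q * scale

# Hypothetical usage on one weight matrix:
w = np.random.randn(256, 256).astype(np.float32)
w_pruned = prune(w, sparsity=0.9)              # ~90% of entries become zero
q, scale = quantize(w_pruned, bits=8)          # int8 storage: 4x smaller than float32
```

Pruning at 90% sparsity plus int8 storage is roughly where a 10x memory reduction comes from, since only nonzero 8-bit values need to be kept on the accelerator.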