WHAT WE DO

CNN and LLM inference accelerators, from FPGA to ASIC.

SpiceEngine provides end-to-end design and implementation of dedicated AI accelerators. Our staged strategy begins with FPGA validation, advances to IP-core standardization, and then transitions to ASIC.

Strengths

Staged Strategy: FPGA → IP core → ASIC
Value Indicators: 1 ms recognition / under 5 W
Coverage: from edge to cloud

CNN Inference Accelerator

An accelerator delivering up to 21x CNN inference performance versus Jetson baselines. The implementation targets low-latency, low-power computer vision workloads.

LLM Inference Accelerator

Dedicated hardware for efficient LLM inference. The architecture is designed to reduce system overhead and provide practical throughput across deployment environments.

Technology

We combine a dataflow architecture with dedicated hardware design to minimize CPU/GPU overhead. WASABI 2.0 integrates model compression, a multicore architecture, and 8-bit quantization.

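To make the 8-bit quantization step concrete, the snippet below is a minimal Python/NumPy sketch of symmetric per-tensor int8 quantization, one common variant of the technique. The scheme, function names, and test tensor are illustrative assumptions, not SpiceEngine's implementation.

    import numpy as np

    def quantize_int8(x):
        # Symmetric per-tensor quantization: map float32 values onto [-127, 127].
        # This scheme is an assumption for illustration, not the WASABI 2.0 method.
        scale = max(np.max(np.abs(x)) / 127.0, 1e-12)  # guard against all-zero input
        q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize_int8(q, scale):
        # Recover an approximation of the original float32 values.
        return q.astype(np.float32) * scale

    w = np.random.randn(64, 64).astype(np.float32)   # hypothetical weight tensor
    q, scale = quantize_int8(w)
    err = np.max(np.abs(dequantize_int8(q, scale) - w))
    print(f"max abs quantization error: {err:.5f}")

Quantizing weights and activations to 8 bits cuts memory traffic and lets the datapath use narrow integer multipliers, which is what makes the low-power targets above plausible on FPGA and ASIC fabric.
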
[Figure: WASABI 2.0 with AXI architecture]
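
The figure references an AXI interconnect. As a hedged sketch of how a host CPU commonly drives an FPGA accelerator over a memory-mapped AXI-Lite control interface, the Python snippet below maps the register window via /dev/mem. The base address and every register offset are hypothetical placeholders, not the actual WASABI 2.0 register map.

    import mmap
    import os
    import struct

    # Hypothetical register map -- offsets and base address are illustrative
    # assumptions, not the published WASABI 2.0 interface.
    REG_CTRL   = 0x00   # bit 0: start
    REG_STATUS = 0x04   # bit 0: done
    REG_SRC    = 0x10   # input buffer physical address
    REG_DST    = 0x14   # output buffer physical address

    BASE_ADDR = 0xA0000000   # assumed AXI-Lite window of the accelerator
    MAP_SIZE  = 0x1000       # one page of control registers

    fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)
    regs = mmap.mmap(fd, MAP_SIZE, offset=BASE_ADDR)

    def write_reg(offset, value):
        struct.pack_into("<I", regs, offset, value)

    def read_reg(offset):
        return struct.unpack_from("<I", regs, offset)[0]

    # Point the accelerator at pre-staged buffers, start it, and poll for completion.
    write_reg(REG_SRC, 0x10000000)
    write_reg(REG_DST, 0x20000000)
    write_reg(REG_CTRL, 0x1)
    while not read_reg(REG_STATUS) & 0x1:
        pass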

Use Cases / Applications

Statements on performance and the roadmap follow the wording published on our public pages.