A ULP 22 nm System-on-Chip with Dual-engine Hardware Acceleration for Edge ML Inference
Author
Jokic, P.
Azarkhish, E.
Cattenoz, R.
Türetken, E.
Moser, V.
Nussbaum, P.
Emery, S.
Abstract
Neural network-based object detection algorithms have disrupted the field of computer vision, achieving unprecedented detection accuracies in application domains ranging from large-scale automotive systems to miniaturized IoT devices. This advance was enabled by the introduction of increasingly deep and thus more computationally intensive network architectures, which place growing demands on the processing hardware. IoT platforms are constrained in size and power, requiring efficient hardware engines to enable on-board execution of such neural network algorithms. We present a system-on-chip, fabricated in an advanced 22 nm CMOS process, that provides end-to-end embedded machine learning inference capabilities at the edge.
Publication Reference
CSEM Scientific and Technical Report 2020, p. 106
Year
2020