Show simple item record

dc.contributor.author  Narduzzi, Simon
dc.contributor.author  Bigdeli, Siavash A.
dc.contributor.author  Liu, Shih-Chii
dc.contributor.author  Dunbar, L. Andrea
dc.identifier.citation  Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, 2022
dc.description.abstract  Reducing energy consumption is critical for neural network models running on edge devices. In particular, reducing the number of multiply-accumulate (MAC) operations of Deep Neural Networks (DNNs) running on edge hardware accelerators reduces the energy consumed during inference. Spiking Neural Networks (SNNs) are a bio-inspired alternative that can save further energy by using binary activations and by consuming no energy when not spiking. DNN-to-SNN conversion frameworks can configure such networks for equivalent accuracy on a task, but because the conversion is based on rate coding, the number of synaptic operations can be high. In this work, we examine different techniques for enforcing sparsity on the neural network activation maps and compare the effect of different training regularizers on the efficiency of the optimized DNNs and SNNs.
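The activity regularization named in the title can be sketched as an L1 penalty on activation maps added to the training loss: driving activations toward zero yields sparser maps, which in turn means fewer spikes (and fewer synaptic operations) after rate-coded DNN-to-SNN conversion. The following minimal NumPy sketch is illustrative only; all names (`l1_activity_penalty`, the layer shapes, the penalty weight) are hypothetical and the paper's actual regularizers may differ.

```python
import numpy as np

def relu(x):
    # Standard ReLU nonlinearity; its nonnegative outputs map
    # naturally to spike rates after DNN-to-SNN conversion.
    return np.maximum(x, 0.0)

def l1_activity_penalty(activations, weight=1e-4):
    # L1 penalty on the activation map: pushes activations toward
    # zero, encouraging sparse maps and hence fewer synaptic
    # operations in the converted SNN. Hypothetical helper.
    return weight * np.abs(activations).sum()

# Hypothetical forward pass through one dense layer.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))         # batch of inputs
W = rng.normal(size=(16, 32)) * 0.1  # layer weights
a = relu(x @ W)                      # activation map

task_loss = 0.0  # placeholder for the task loss (e.g. cross-entropy)
total_loss = task_loss + l1_activity_penalty(a, weight=1e-3)
```

In a real training loop the penalty would be summed over all regularized layers and its gradient would flow back through the network alongside the task loss; the `weight` hyperparameter trades task accuracy against activation sparsity.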
dc.rights  CC0 1.0 Universal
dc.subject  Deep Neural Networks
dc.subject  Spiking Neural Networks
dc.title  Optimizing the Consumption of Spiking Neural Networks with Activity Regularization
dc.type.csemresearchareas  Data & AI
dc.type.csemresearchareas  ASICs for the Edge

This item appears in the following Collection(s)

  • Research Publications
    The “Research Publications” collection provides bibliographic information for scientific papers including conference proceedings and presentations.


CC0 1.0 Universal
Except where otherwise noted, this item's license is described as CC0 1.0 Universal