Constrained Zero-Shot Neural Architecture Search on Small Classification Dataset

Vuagniaux, Rémy
Narduzzi, Simon
Maamari, Nadim
Dunbar, L. Andrea
The rapid evolution of deep learning (DL) has transformed many scientific domains, driven by increasingly intricate models that demand powerful GPU platforms. However, edge applications such as wearables and monitoring systems impose stringent constraints on memory, size, and energy, making on-device processing imperative. To address these constraints, we employ an efficient zero-shot, data-dependent neural architecture search (NAS) strategy that accelerates the search through proxy functions. Additionally, we integrate knowledge distillation (KD) into the learning process, harnessing insights from pre-trained models to improve the performance and adaptability of our approach. This combined method not only improves accuracy but also reduces the model's memory footprint. Our validation on CUB-200-2011 demonstrates the feasibility of obtaining a competitive NAS-optimized architecture on a small dataset, compared to models pre-trained on larger ones.
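As a rough illustration of the knowledge-distillation component mentioned in the abstract (the paper's exact loss, temperature, and weighting are not specified here, so all names and values below are illustrative assumptions), a standard KD objective blends a temperature-softened KL term between teacher and student with the usual hard-label cross-entropy:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Illustrative KD objective: alpha * soft-target KL + (1 - alpha) * hard CE.

    T and alpha are hypothetical hyperparameters, not values from the paper.
    """
    # Soft-target term: KL(teacher || student) at temperature T,
    # scaled by T^2 so gradients keep a comparable magnitude.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    soft = (T ** 2) * soft.mean()
    # Hard-label cross-entropy on the student's ordinary (T=1) predictions.
    p = softmax(student_logits)
    hard = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * soft + (1 - alpha) * hard
```

When student and teacher logits coincide, the KL term vanishes and only the hard-label cross-entropy remains, which is a quick sanity check for the implementation.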
Publication Reference
11th Swiss Conference on Data Science (SDS), May 30-31, 2024, Zürich, Switzerland
This project was partially funded by EU H2020, through ANDANTE grant no. 876925.