This workshop covers the latest developments in novel methodologies for Binary Neural Networks and their application to Computer Vision, bringing together a diverse group of researchers working in several related areas.
Authors are welcome to submit full 8-page papers or short 2-page extended abstracts on any of the following topics:
Paper submission deadline: | |
Decisions: | |
Camera ready papers due: | |
Extended abstract submission: | |
Extended abstract decisions: | |
Workshop Date: | June 25th, 2021 |
Please upload submissions via CMT.
The Workshop will take place on the 25th of June according to the following schedule. All times are in BST (UTC+1).
20:00 - 20:10 | Opening remarks and workshop kickoff |
20:10 - 20:40 | Invited talk: Daniel Soudry - On depth and data limitations with extreme quantization
We examine three aspects of quantized neural nets: |
20:40 - 21:10 | Invited talk: Nicholas Lane - What is Next for the Efficient Machine Learning Revolution?
Mobile and embedded devices increasingly rely on deep neural networks to understand the world -- a formerly impossible feat that would have overwhelmed their system resources just a few years ago. The age of on-device artificial intelligence is upon us, but incredibly, these dramatic changes are just the beginning. Looking ahead, mobile machine learning will extend beyond just classifying categories and perceptual tasks, to roles that alter how every part of the systems stack of smart devices functions. This evolutionary step in constrained-resource computing will finally produce devices that meet our expectations in how they can learn, reason and react to the real world. In this talk, I will briefly discuss the initial breakthroughs that allowed us to reach this point, and outline the next set of open problems we must overcome to bring about this next deep transformation of mobile and embedded computing. |
21:10 - 21:15 | Short Break |
21:15 - 21:30 | Training Dynamical Binary Neural Networks with Equilibrium Propagation - Jeremie Laydevant, Maxence Ernoult, Damien Querlioz and Julie Grollier |
21:30 - 21:45 | Enabling Binary Neural Network Training on the Edge - Erwei Wang, James Davis, Daniele Moro, Piotr Zielinski, Jia Jie Lim, Claudionor Coelho, Satrajit Chatterjee, Peter Cheung, George Constantinides |
21:45 - 22:15 | Invited talk: Diana Marculescu - milliJoules for 1000 Inferences: Machine Learning Systems ‘on the Cheap’
Machine learning (ML) applications have entered and impacted our lives unlike any other technology advance from the recent past. While the holy grail for judging the quality of an ML model has largely been serving accuracy, and only recently its resource usage, neither of these metrics translates directly to energy efficiency, runtime, or mobile device battery lifetime. This talk uncovers the need for designing efficient convolutional neural networks (CNNs) for deep learning mobile applications that operate under stringent energy and latency constraints. We show that while CNN model quantization and pruning are effective tools in bringing down the model size and resulting energy cost by up to 1000x while maintaining baseline accuracy, the interplay between bitwidth, channel count, and CNN memory footprint uncovers a non-trivial trade-off. Surprisingly, there exists a single weight bitwidth that is superior to others for a given storage constraint, even outperforming mixed-precision quantization. Our results show that even when the channel count is allowed to change, a single weight bitwidth can be sufficient for model compression, which greatly reduces the software and hardware optimization costs for CNN-based ML systems. (A small illustrative sketch of this bitwidth/channel-count trade-off appears after the schedule.) |
22:15 - 22:30 | Break |
22:30 - 23:00 | Invited talk: Tim de Bruin - BNNs for TinyML: performance beyond accuracy
Over the past few years, there has been a lot of exciting progress in the field of Binary Neural Networks. New training methods and network architectures have enabled rapid increases in accuracy, especially on traditional computer vision benchmarks such as ImageNet -- closing the gap to higher bit-width models while delivering on the promise of increased inference efficiency. At Plumerai, we are strong believers in BNNs. We think that their reduced memory, energy, and computational needs will be especially relevant in the subfield of TinyML, where they can enable previously infeasible products. However, the TinyML field does bring a unique set of challenges: from the quality of the data coming from the low-cost sensors to extreme constraints on the model architectures imposed by the available hardware. This means that solutions developed for ImageNet do not always generalize to this domain. These challenges also extend beyond simply obtaining a high enough accuracy, as real-world performance is often more nuanced, requiring stable predictions and a good understanding of model biases. We demonstrate the effects of binarization within this domain. We start by demonstrating how binary convolutions make networks more sensitive to small changes to their inputs (a toy illustration of this sensitivity appears after the schedule). We then show how changes in network architectures designed to more easily carry gradients during training cause models to pick up on different biases in their training data. We also explain how we combine our own collected data with our tiny BNNs into a tool for examining publicly available datasets and some of the sampling biases they contain. Finally, we make the case for an increased research focus on BNNs in the TinyML domain. Given the need for the strengths of BNNs in this domain, the lower computational cost of experiments, and the fact that smaller networks bring some of the remaining challenges of BNNs more clearly into focus, we believe that research into TinyML-BNNs could be especially impactful. |
23:00 - 23:15 | "BNN - BN = ?": Training Binary Neural Networks without Batch Normalization - Tianlong Chen, Zhenyu Zhang, Xu Ouyang, Zechun Liu, Zhiqiang Shen, Zhangyang Wang |
23:15 - 23:30 | Monte Carlo optimization for training Binary Neural Networks - Yurii Antentyk, Ivan Kosarevych, Volodymyr Tsapiv, Oles Dobosevych, Volodymyr Karpiv, Mykola Maksymenko and Maciej Koch-Janusz |
23:30 - 23:35 | Short Break |
23:35 - 00:05 | Invited talk: Mohammad Rastegari and Maxwell Horton - Data-Free Model Compression
Compressing a trained neural network efficiently without using any data is very challenging. Our data-free method requires 14x-450x fewer FLOPs than comparable state-of-the-art methods. We break the problem of data-free network compression into a number of independent layer-wise compressions (a minimal sketch of this layer-wise idea appears after the schedule). We show how to efficiently generate layer-wise training data, and how to precondition the network to maintain accuracy during layer-wise compression. We show state-of-the-art performance on MobileNetV1 for data-free low-bit-width quantization. We also show state-of-the-art performance on data-free pruning of EfficientNet B0 when combining our method with end-to-end generative methods. |
00:05 - 00:10 | Closing remarks and Conclusions |
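The storage trade-off mentioned in Diana Marculescu's talk can be made concrete with a quick back-of-the-envelope calculation. The sketch below is our own illustration, not material from the talk: it assumes a single k x k convolution, and all function names and layer sizes are made-up example values. It simply asks how much wider a layer can be made at lower weight bitwidths while staying inside the storage budget of an 8-bit baseline.

```python
# Illustrative sketch (not from the talk) of the bitwidth / channel-count
# trade-off under a fixed weight-storage budget. For a convolution with
# c_in -> c_out channels and k x k kernels, weight storage is roughly
# c_in * c_out * k * k * bits / 8 bytes, so halving the bitwidth lets the
# channel count grow by about sqrt(2) at equal storage.

def conv_weight_bytes(c_in: int, c_out: int, k: int, bits: int) -> float:
    """Approximate weight storage of one convolution, in bytes."""
    return c_in * c_out * k * k * bits / 8


def max_width_multiplier(budget_bytes: float, c_in: int, c_out: int,
                         k: int, bits: int) -> float:
    """Largest uniform width multiplier w such that a layer with
    (w * c_in) x (w * c_out) channels still fits in the budget.
    Storage scales with w**2, so w = sqrt(budget / base_storage)."""
    base = conv_weight_bytes(c_in, c_out, k, bits)
    return (budget_bytes / base) ** 0.5


if __name__ == "__main__":
    # Budget: a hypothetical 8-bit 128 -> 128 layer with 3x3 kernels.
    budget = conv_weight_bytes(128, 128, 3, 8)
    for bits in (8, 4, 2, 1):
        w = max_width_multiplier(budget, 128, 128, 3, bits)
        print(f"{bits}-bit weights -> width multiplier {w:.2f} "
              f"({int(128 * w)} channels in the same storage)")
```

The accuracy reached at each (bitwidth, width) point is what the talk's result is about: among all these equal-storage configurations, a single weight bitwidth turns out to dominate.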
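Tim de Bruin's abstract notes that binary convolutions make networks more sensitive to small input changes. The toy NumPy snippet below is a hedged illustration of why this can happen; it is not Plumerai code, and the sizes and noise levels are arbitrary. Pre-activations near zero flip sign under a tiny perturbation, so a binarized dot product can jump while its full-precision counterpart barely moves.

```python
# Toy illustration (our own sketch) of the sensitivity of binarized
# activations: values near zero flip sign under a small perturbation,
# so the downstream binary dot product can change abruptly.
import numpy as np

rng = np.random.default_rng(0)
w = np.sign(rng.standard_normal(256))       # binary weights in {-1, +1}
x = rng.standard_normal(256) * 0.01         # pre-activations near zero
eps = 0.02 * rng.standard_normal(256)       # small input perturbation

real_out      = float(w @ x)                # full-precision response
real_out_pert = float(w @ (x + eps))
bin_out       = float(w @ np.sign(x))       # binarized response
bin_out_pert  = float(w @ np.sign(x + eps))

print(f"full-precision output: {real_out:+.3f} -> {real_out_pert:+.3f}")
print(f"binarized output:      {bin_out:+.1f} -> {bin_out_pert:+.1f}")
# Each pre-activation that crosses zero changes the binarized sum by 2,
# while the full-precision sum only shifts by the (small) perturbation.
```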
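The layer-wise formulation in Mohammad Rastegari and Maxwell Horton's talk can be sketched in a few lines. The snippet below is only a minimal, hypothetical illustration of the general idea, assuming synthetic Gaussian inputs and a simple per-output-channel rescaling; `quantize_weights` and `compress_layer` are names we made up, and the speakers' actual methods for generating layer-wise data and preconditioning the network are described in their work.

```python
# Minimal sketch (our illustration, not the speakers' implementation) of
# treating data-free compression as independent layer-wise problems: for
# each linear layer, draw synthetic inputs, quantize the weights, and fit
# a per-output-channel scale so the compressed layer's outputs match the
# original layer's outputs on that synthetic data.
import numpy as np

def quantize_weights(w: np.ndarray, bits: int) -> np.ndarray:
    """Uniform symmetric quantization of a weight matrix."""
    levels = 2 ** (bits - 1) - 1
    step = np.abs(w).max() / levels
    return np.round(w / step) * step

def compress_layer(w: np.ndarray, bits: int, n_samples: int = 2048) -> np.ndarray:
    """Quantize, then rescale each output channel so the layer's response
    to synthetic Gaussian inputs matches the original layer."""
    x = np.random.default_rng(0).standard_normal((n_samples, w.shape[1]))
    y_ref = x @ w.T                          # original layer outputs
    wq = quantize_weights(w, bits)
    y_q = x @ wq.T                           # quantized layer outputs
    # Least-squares scale per channel: argmin_s ||y_ref - s * y_q||^2
    scale = (y_ref * y_q).sum(axis=0) / np.maximum((y_q * y_q).sum(axis=0), 1e-12)
    return wq * scale[:, None]

if __name__ == "__main__":
    w = np.random.default_rng(1).standard_normal((64, 128))
    wc = compress_layer(w, bits=2)
    err = np.linalg.norm(w - wc) / np.linalg.norm(w)
    print(f"relative weight error after 2-bit layer-wise compression: {err:.3f}")
```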
BCNN: A Binary CNN With All Matrix Ops Quantized To 1 Bit Precision - Arthur J Redfern, Lijun Zhu and Molly Newquist | [Download] |
Learning Accurate BNNs by Pruning A Random Network - James Diffenderfer, Shreya Chaganti and Bhavya Kailkhura | [Download] |
Adaptive Binary-Ternary Quantization - Ryan Razani, Gregoire Morin, Eyyüb Sari and Vahid Partovi Nia | [Download] |
"BNN - BN = ?": Training Binary Neural Networks without Batch Normalization - Tianlong Chen, Zhenyu Zhang, Xu Ouyang, Zechun Liu, Zhiqiang Shen and Zhangyang Wang | [Download] |
On the Application of Binary Neural Networks in Oblivious Inference - Mohammad Samragh, Siam Umar Hussain, Xinqiao Zhang, Ke Huang and Farinaz Koushanfar | [Download] |
Training Dynamical Binary Neural Networks with Equilibrium Propagation - Jeremie Laydevant, Maxence Ernoult, Damien Querlioz and Julie Grollier | [Download] |
BNAS v2: A Summary with Empirical Improvements - Dahyun Kim, Kunal Pratap Singh and Jonghyun Choi | [Download] |
Fast Walsh-Hadamard Transform and Smooth-Thresholding Based Binary Layers in Deep Neural Networks - Hongyi Pan, Diaa Badawi and Ahmet E Cetin | [Download] |
Improving Accuracy of Binary Neural Networks using Unbalanced Activation Distribution - Hyungjun Kim, Jihoon Park, Changhun Lee and Jae-Joon Kim | [Download] |
Initialization and Transfer Learning of Stochastic Binary Networks from Real-Valued Ones - Anastasiia Livochka and Alexander Shekhovtsov | [Download] |
Binary Graph Neural Networks - Mehdi Bahri, Gaetan Bahl and Stefanos Zafeiriou | [Download] |
Enabling Binary Neural Network Training on the Edge - Erwei Wang, James Davis, Daniele Moro, Piotr Zielinski, Jia Jie Lim, Claudionor Coelho, Satrajit Chatterjee, Peter Cheung and George Constantinides | [Download] |
Resistive RAM-based Implementation of Binarized Neural Networks Inference and Training - Atreya Majumdar | [Download] |
Synaptic metaplasticity in binarized neural networks - Axel Laborieux, Maxence Ernoult, Tifenn Hirtzlin and Damien Querlioz | [Download] |
Monte Carlo optimization for training Binary Neural Networks - Yurii Antentyk, Ivan Kosarevych, Volodymyr Tsapiv, Oles Dobosevych, Volodymyr Karpiv, Mykola Maksymenko and Maciej Koch-Janusz | [Download] |
Fully Binary CNNs - Martin Lukac, Kamila Abdiyeva and Tagir Nukenov | [Download] |
Daniel Soudry - Technion
Diana Marculescu - The University of Texas at Austin
Nicholas Lane - University of Cambridge and Samsung AI
Mohammad Rastegari and Maxwell Horton - Apple
Tim de Bruin - Plumerai
Samsung AI
HKUST and CMU
Samsung AI
Samsung AI
QMUL and Samsung AI