Binary and Extreme Quantization for Computer Vision

ICCV 2025 Workshop

Covering the latest developments in novel methodologies for extreme quantization and binary neural networks, and their applications to computer vision, this workshop brings together a diverse group of researchers working in several related areas.

Workshop Description

The pervasive deployment of deep learning models on resource-constrained devices necessitates a critical focus on model compactness, computational efficiency, and power consumption. A key pathway to achieving these goals lies in low-bit quantization, a technique that represents model weights and/or activations with a reduced number of bits (e.g., 2, 4 bits) instead of the standard 32-bit floating point. This significantly reduces model size and computational demands, enabling efficient on-device inference.

While low-bit quantization offers immense potential for model compression and accelerated computation, a central challenge remains: how to train and deploy these models while maintaining accuracy comparable to their full-precision counterparts. Recent advancements have demonstrated the feasibility of achieving highly accurate low-bit quantized networks, opening new avenues for their application across diverse domains.

This workshop aims to bring together a diverse group of researchers and practitioners from academia and industry to discuss the latest advancements, identify open problems, and foster collaborations in the exciting and rapidly evolving field of efficient deep learning through low-bit quantization. We invite submissions and presentations on novel algorithms, theoretical insights, and practical applications related to this critical area of research.
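To make the idea concrete, the following is a minimal NumPy sketch of two textbook techniques from this field: uniform symmetric low-bit quantization (simulated, or "fake", quantization of weights) and 1-bit binarization with a per-tensor scaling factor in the style of XNOR-Net. The function names and the per-tensor scale choice are illustrative assumptions, not any specific workshop method.

```python
import numpy as np

def quantize_symmetric(w, bits=2):
    """Uniform symmetric quantization of a weight tensor to `bits` bits.

    Maps float weights to integer codes in [-2^(bits-1), 2^(bits-1) - 1]
    with a per-tensor scale, then de-quantizes back to floats
    (simulated quantization, as commonly used during training).
    """
    qmax = 2 ** (bits - 1) - 1
    qmin = -(2 ** (bits - 1))
    scale = np.abs(w).max() / qmax            # per-tensor scale (one assumption among many possible)
    q = np.clip(np.round(w / scale), qmin, qmax)  # integer codes
    return q * scale                           # de-quantized ("fake-quantized") weights

def binarize(w):
    """1-bit quantization: sign of the weights times a per-tensor
    scaling factor alpha = mean(|w|), as in XNOR-Net-style methods."""
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

w = np.array([0.9, -0.4, 0.05, -1.2])
w2 = quantize_symmetric(w, bits=2)  # at most 4 distinct levels remain
wb = binarize(w)                    # two levels: +alpha and -alpha
```

With `bits=2` the weights collapse onto at most four representable values, which is what makes the accuracy-versus-compression trade-off discussed above so challenging.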

Call for Papers

Authors are welcome to submit full 8-page papers or short 2-page extended abstracts on any of the following topics:

  • Binary and low-bit quantized models for large vision-language models (LVLMs)
  • Post-training quantization for neural networks
  • New methodologies and architectures for training low-bit quantized neural networks
  • Applications of low-bit NNs in computer vision (e.g., image classification, segmentation, object detection, 3D and video recognition)
  • Binary and low-bit quantization for generative models (e.g., Diffusion, Visual Autoregressive models)
  • Hardware implementation and on-device deployment of low-bit NNs
  • New methodologies combining quantization with other efficiency techniques (e.g., pruning, dynamic modeling)
  • Federated learning with low-bit quantization
  • On-device learning

Important Dates

Paper submission deadline: July 1st, 2025 (11:59pm PST)
Decisions: July 10th, 2025 (11:59pm PST)
Camera ready papers due: August 17th, 2025 (11:59pm PST)
Extended abstract submission: August 11th, 2025 (11:59pm PST)
Extended abstract decisions: August 17th, 2025 (11:59pm PST)
Workshop Date: TBD

Submission Guidelines

  • Papers included in the ICCV proceedings: Full 8-page submissions must be formatted using the ICCV 2025 template and adhere to the ICCV submission guidelines. The maximum file size for submissions is 50MB. Reviewing, conducted via OpenReview, will be double-blind. These submissions will be included in the proceedings and must contain new, previously unpublished material.
  • Extended abstracts NOT included in ICCV proceedings: We encourage the submission of extended abstracts (2 pages plus references) that summarize previously published or unpublished work. Extended abstracts will undergo a light single-blind review process. Please use the standard ICCV template, adjusting only the length.
    • Previously published work: We welcome papers previously published at CV/ML conferences, including ICCV 2025, that fall within the scope of the workshop.
    • Unpublished work: We also encourage abstracts summarizing work in progress. This track is intended for disseminating preliminary results or methods that fall within the overall scope of the workshop.

Please upload submissions at: link

Schedule

The workshop schedule has not yet been finalized. Please check back closer to the conference date.

Invited Speakers

TBD

Organizers

Adrian Bulat

Samsung AI

Zechun Liu

Meta Reality Labs

Nic Lane

University of Cambridge and Flower Labs

Georgios Tzimiropoulos

QMUL and Samsung AI

Supported by

RAIDO

Previous editions

CVPR 2021

ICCV 2023