Architecture-Agnostic Masked Image Modeling - From ViT back to CNN

Siyuan Li, Di Wu, Fang Wu, Zelin Zang, Stan Z. Li

Research output: Contribution to journal › Conference article › peer-review

3 Scopus citations

Abstract

Masked image modeling (MIM), an emerging self-supervised pre-training method, has shown impressive success across numerous downstream vision tasks with Vision Transformers (ViTs). Its underlying idea is simple: a portion of the input image is masked out and then reconstructed via a pretext task. However, the working principle behind MIM is not well explained, and previous studies insist that MIM primarily works for the Transformer family but is incompatible with CNNs. In this work, we observe that MIM essentially teaches the model to learn better middle-order interactions among patches for more generalized feature extraction. We then propose an Architecture-Agnostic Masked Image Modeling framework (A2MIM), which is compatible with both Transformers and CNNs in a unified way. Extensive experiments on popular benchmarks show that A2MIM learns better representations without explicit design and endows the backbone model with a stronger capability to transfer to various downstream tasks.
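To illustrate the generic MIM recipe the abstract describes (mask a portion of the input image's patches, then reconstruct them), here is a minimal, hypothetical PyTorch sketch. It is not the authors' A2MIM implementation: `random_patch_mask`, the toy convolutional `model`, and the 0.6 mask ratio are placeholders chosen only to make the example self-contained and runnable.

```python
# Minimal sketch of the generic masked-image-modeling idea: mask a random
# subset of patches, then train a model to reconstruct the masked pixels.
# NOT the authors' A2MIM code; names and hyperparameters are hypothetical.
import torch
import torch.nn as nn

def random_patch_mask(images, patch_size=16, mask_ratio=0.6):
    """Zero out a random subset of non-overlapping patches.

    Returns the masked images and a boolean pixel-level mask
    (True = pixel belongs to a masked patch).
    """
    b, c, h, w = images.shape
    ph, pw = h // patch_size, w // patch_size
    # One Bernoulli draw per patch.
    patch_mask = torch.rand(b, ph, pw, device=images.device) < mask_ratio
    # Upsample the patch-level mask to pixel resolution.
    pixel_mask = patch_mask.repeat_interleave(patch_size, dim=1)
    pixel_mask = pixel_mask.repeat_interleave(patch_size, dim=2)
    masked = images * (~pixel_mask).unsqueeze(1).float()  # broadcast over channels
    return masked, pixel_mask

# A2MIM's point is that either a ViT or a CNN backbone could stand in for
# `model`; a tiny CNN keeps this sketch runnable.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)

images = torch.randn(4, 3, 224, 224)  # dummy batch
masked, pixel_mask = random_patch_mask(images)
recon = model(masked)

# Reconstruction loss computed only on the masked pixels.
err = (recon - images) ** 2
mask = pixel_mask.unsqueeze(1).float()
loss = (err * mask).sum() / mask.sum().clamp(min=1) / images.shape[1]
loss.backward()
```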

Original language: English
Pages (from-to): 19460-19470
Number of pages: 11
Journal: Proceedings of Machine Learning Research
Volume: 202
State: Published - 1 Jan 2023
Externally published: Yes
Event: 40th International Conference on Machine Learning, ICML 2023 - Honolulu, United States
Duration: 23 Jul 2023 - 29 Jul 2023

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability

