---
license: apache-2.0
---

<div align="center">
<h3>RapidNet: Multi-Level Dilated Convolution Based Mobile Backbone (WACV 2025)</h3>

[Mustafa Munir](https://github.com/mmunir127), Md Mostafijur Rahman, and [Radu Marculescu](https://scholar.google.com/citations?user=ZCmYP5cAAAAJ&hl=en)

The University of Texas at Austin

[PDF](https://openaccess.thecvf.com/content/WACV2025/html/Munir_RapidNet_Multi-Level_Dilated_Convolution_Based_Mobile_Backbone_WACV_2025_paper.html) | [arXiv](https://arxiv.org/abs/2412.10995)

[GAIN 2025 Best Poster Award](https://sites.utexas.edu/gain/gain-2025-winners/)

</div>

## Abstract
Vision transformers (ViTs) have dominated computer vision in recent years. However, ViTs are computationally expensive and not well suited for mobile devices; this has led to the prevalence of convolutional neural network (CNN) and ViT-based hybrid models for mobile vision applications. Recently, Vision GNN (ViG) and CNN hybrid models have also been proposed for mobile vision tasks. However, all of these methods remain slower than pure CNN-based models. In this work, we propose Multi-Level Dilated Convolutions to devise a purely CNN-based mobile backbone. Using Multi-Level Dilated Convolutions allows for a larger theoretical receptive field than standard convolutions. Different levels of dilation also allow for interactions between short-range and long-range features in an image. Experiments show that our proposed model outperforms state-of-the-art (SOTA) mobile CNN, ViT, ViG, and hybrid architectures in terms of accuracy and/or speed on image classification, object detection, instance segmentation, and semantic segmentation. Our fastest model, RapidNet-Ti, achieves 76.3% top-1 accuracy on ImageNet-1K with 0.9 ms inference latency on an iPhone 13 mini NPU, which is faster and more accurate than MobileNetV2x1.4 (74.7% top-1 with 1.0 ms latency). Our work shows that, when designed properly, pure CNN architectures can beat SOTA hybrid and ViT models in terms of accuracy and speed.

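The receptive-field claim above can be checked with a little arithmetic: a k×k convolution with dilation d covers d·(k−1)+1 input positions along each axis, so stacking convolutions at several dilation levels grows the theoretical receptive field much faster than stacking standard convolutions. The sketch below illustrates this for stride-1 3×3 convolutions; the dilation rates (1, 2, 3) are illustrative only, not necessarily the rates used in RapidNet:

```python
def effective_kernel(k: int, d: int) -> int:
    """Effective size of a k x k kernel with dilation d along one axis."""
    return d * (k - 1) + 1

def receptive_field(layers) -> int:
    """Theoretical receptive field of stacked stride-1 convolutions.

    layers: list of (kernel_size, dilation) pairs, applied in order.
    """
    rf = 1
    for k, d in layers:
        rf += effective_kernel(k, d) - 1
    return rf

# Three standard 3x3 convolutions (dilation 1 everywhere)
print(receptive_field([(3, 1), (3, 1), (3, 1)]))  # 7

# Three 3x3 convolutions with multi-level dilations 1, 2, 3
print(receptive_field([(3, 1), (3, 2), (3, 3)]))  # 13
```

Same parameter count and FLOPs per layer in both cases, but the multi-level dilated stack sees almost twice as much context.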
## Code
- Repo: https://github.com/mmunir127/RapidNet-Official

### Citation
If our code or models help your work, please cite MobileViG (CVPRW 2023) and RapidNet (WACV 2025):

```bibtex
@InProceedings{RapidNet_2025_WACV,
    author    = {Munir, Mustafa and Rahman, Md Mostafijur and Marculescu, Radu},
    title     = {RapidNet: Multi-Level Dilated Convolution Based Mobile Backbone},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {8291-8301}
}
```

```bibtex
@InProceedings{mobilevig2023,
    author    = {Munir, Mustafa and Avery, William and Marculescu, Radu},
    title     = {MobileViG: Graph-Based Sparse Attention for Mobile Vision Applications},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {2211-2219}
}
```