# EfficientNet-B4: Optimized for Qualcomm Devices

EfficientNet-B4 is a machine learning model that classifies images from the ImageNet dataset. It can also serve as a backbone for building more complex models for specific use cases.

This is based on the implementation of EfficientNet-B4 found here. This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the Qualcomm® AI Hub Models library to export the model with custom configurations. More details on model performance across various devices can be found here.

Qualcomm AI Hub Models uses Qualcomm AI Hub Workbench to compile, profile, and evaluate this model. Sign up to run these models on a hosted Qualcomm® device.
## Getting Started

There are two ways to deploy this model on your device:

### Option 1: Download Pre-Exported Models

Below are pre-exported model assets ready for deployment.
| Runtime | Precision | Chipset | SDK Versions | Download |
|---|---|---|---|---|
| ONNX | float | Universal | QAIRT 2.42, ONNX Runtime 1.24.1 | Download |
| ONNX | w8a16 | Universal | QAIRT 2.42, ONNX Runtime 1.24.1 | Download |
| QNN_DLC | float | Universal | QAIRT 2.43 | Download |
| QNN_DLC | w8a16 | Universal | QAIRT 2.43 | Download |
| TFLITE | float | Universal | QAIRT 2.43, TFLite 2.17.0 | Download |
For more device-specific assets and performance metrics, visit EfficientNet-B4 on Qualcomm® AI Hub.
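A downloaded float ONNX asset can be run locally with ONNX Runtime. The sketch below shows the standard ImageNet preprocessing for the model's 380x380 input; the model file name and the exact input layout (NCHW, normalized float32) are assumptions based on the typical export, so verify them against the asset you download.

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 RGB image (already resized to 380x380)
    into a 1x3x380x380 float32 tensor, using standard ImageNet
    normalization (assumed to match this export's expected input)."""
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    x = image.astype(np.float32) / 255.0        # scale to [0, 1]
    x = (x - mean) / std                        # per-channel normalization
    x = x.transpose(2, 0, 1)[np.newaxis, ...]   # HWC -> NCHW, add batch dim
    return x

if __name__ == "__main__":
    # Requires `pip install onnxruntime` and the downloaded asset;
    # the file name below is illustrative, not the actual asset name.
    import onnxruntime as ort
    session = ort.InferenceSession("EfficientNet-B4.onnx")
    input_name = session.get_inputs()[0].name
    dummy = np.zeros((380, 380, 3), dtype=np.uint8)
    (logits,) = session.run(None, {input_name: preprocess(dummy)})
    print(logits.argmax())  # predicted ImageNet class index
```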
### Option 2: Export with Custom Configurations

Use the Qualcomm® AI Hub Models Python library to compile and export the model with your own:
- Custom weights (e.g., fine-tuned checkpoints)
- Custom input shapes
- Target device and runtime configurations
This option is ideal if you need to customize the model beyond the default configuration provided here.
See our repository for EfficientNet-B4 on GitHub for usage instructions.
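A custom export with the qai-hub client could look roughly like the sketch below. The device name, input shape key, and option flags are illustrative assumptions, not verified values; consult the Qualcomm AI Hub documentation for the supported devices and compile options before running it.

```python
# Sketch of a custom export via the qai-hub client (pip install qai-hub).
def compile_options(target_runtime: str, precision: str) -> str:
    """Build a compile-job options string.
    Hypothetical helper; flag names are assumptions, not confirmed API."""
    opts = f"--target_runtime {target_runtime}"
    if precision == "w8a16":
        opts += " --quantize_io"  # assumed flag for quantized I/O
    return opts

if __name__ == "__main__":
    # Requires an AI Hub account/API token; not runnable offline.
    import torch
    import qai_hub as hub
    from qai_hub_models.models.efficientnet_b4 import Model

    model = Model.from_pretrained()  # or load a fine-tuned checkpoint
    traced = torch.jit.trace(model, torch.rand(1, 3, 380, 380))
    job = hub.submit_compile_job(
        model=traced,
        device=hub.Device("Samsung Galaxy S24 (Family)"),  # illustrative
        options=compile_options("onnx", "float"),
        input_specs={"image": (1, 3, 380, 380)},
    )
    # Retrieve the compiled asset once the job finishes.
    job.get_target_model().download("efficientnet_b4_custom.onnx")
```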
## Model Details

Model Type: Image classification

Model Stats:
- Model checkpoint: ImageNet
- Input resolution: 380x380
- Number of parameters: 19.3M
- Model size (float): 73.6 MB
- Model size (w8a16): 24.0 MB
## Performance Summary
| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|---|---|---|---|---|---|---|
| EfficientNet-B4 | ONNX | float | Snapdragon® X Elite | 3.351 | 45 - 45 | NPU |
| EfficientNet-B4 | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 2.249 | 0 - 127 | NPU |
| EfficientNet-B4 | ONNX | float | Qualcomm® QCS8550 (Proxy) | 3.049 | 0 - 52 | NPU |
| EfficientNet-B4 | ONNX | float | Qualcomm® QCS9075 | 4.017 | 0 - 4 | NPU |
| EfficientNet-B4 | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 1.763 | 0 - 77 | NPU |
| EfficientNet-B4 | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 1.471 | 0 - 77 | NPU |
| EfficientNet-B4 | ONNX | float | Snapdragon® X2 Elite | 1.632 | 45 - 45 | NPU |
| EfficientNet-B4 | QNN_DLC | float | Snapdragon® X Elite | 3.662 | 1 - 1 | NPU |
| EfficientNet-B4 | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 2.416 | 0 - 125 | NPU |
| EfficientNet-B4 | QNN_DLC | float | Qualcomm® QCS8275 (Proxy) | 12.016 | 1 - 69 | NPU |
| EfficientNet-B4 | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 3.36 | 1 - 16 | NPU |
| EfficientNet-B4 | QNN_DLC | float | Qualcomm® QCS9075 | 4.199 | 3 - 5 | NPU |
| EfficientNet-B4 | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 7.863 | 0 - 142 | NPU |
| EfficientNet-B4 | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 1.862 | 0 - 74 | NPU |
| EfficientNet-B4 | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 1.51 | 1 - 75 | NPU |
| EfficientNet-B4 | QNN_DLC | float | Snapdragon® X2 Elite | 1.951 | 1 - 1 | NPU |
| EfficientNet-B4 | QNN_DLC | w8a16 | Snapdragon® X Elite | 3.794 | 0 - 0 | NPU |
| EfficientNet-B4 | QNN_DLC | w8a16 | Snapdragon® 8 Gen 3 Mobile | 2.305 | 0 - 154 | NPU |
| EfficientNet-B4 | QNN_DLC | w8a16 | Qualcomm® QCS6490 | 8.883 | 2 - 4 | NPU |
| EfficientNet-B4 | QNN_DLC | w8a16 | Qualcomm® QCS8275 (Proxy) | 6.65 | 0 - 99 | NPU |
| EfficientNet-B4 | QNN_DLC | w8a16 | Qualcomm® QCS8550 (Proxy) | 3.452 | 0 - 2 | NPU |
| EfficientNet-B4 | QNN_DLC | w8a16 | Qualcomm® QCS9075 | 3.8 | 0 - 2 | NPU |
| EfficientNet-B4 | QNN_DLC | w8a16 | Qualcomm® QCM6690 | 17.122 | 0 - 231 | NPU |
| EfficientNet-B4 | QNN_DLC | w8a16 | Qualcomm® QCS8450 (Proxy) | 4.106 | 0 - 154 | NPU |
| EfficientNet-B4 | QNN_DLC | w8a16 | Snapdragon® 8 Elite For Galaxy Mobile | 1.602 | 0 - 102 | NPU |
| EfficientNet-B4 | QNN_DLC | w8a16 | Snapdragon® 7 Gen 4 Mobile | 3.599 | 0 - 109 | NPU |
| EfficientNet-B4 | QNN_DLC | w8a16 | Snapdragon® 8 Elite Gen 5 Mobile | 1.323 | 0 - 102 | NPU |
| EfficientNet-B4 | QNN_DLC | w8a16 | Snapdragon® X2 Elite | 1.711 | 0 - 0 | NPU |
| EfficientNet-B4 | TFLITE | float | Snapdragon® 8 Gen 3 Mobile | 2.408 | 0 - 165 | NPU |
| EfficientNet-B4 | TFLITE | float | Qualcomm® QCS8275 (Proxy) | 11.994 | 0 - 105 | NPU |
| EfficientNet-B4 | TFLITE | float | Qualcomm® QCS8550 (Proxy) | 3.344 | 0 - 5 | NPU |
| EfficientNet-B4 | TFLITE | float | Qualcomm® QCS9075 | 4.195 | 0 - 48 | NPU |
| EfficientNet-B4 | TFLITE | float | Qualcomm® QCS8450 (Proxy) | 7.804 | 0 - 186 | NPU |
| EfficientNet-B4 | TFLITE | float | Snapdragon® 8 Elite For Galaxy Mobile | 1.854 | 0 - 110 | NPU |
| EfficientNet-B4 | TFLITE | float | Snapdragon® 8 Elite Gen 5 Mobile | 1.511 | 0 - 106 | NPU |
## License
- The license for the original implementation of EfficientNet-B4 can be found here.
## References
- EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
- Source Model Implementation
## Community

- Join our AI Hub Slack community to collaborate, post questions, and learn more about on-device AI.
- For questions or feedback, please reach out to us.
