# MobileSAM ONNX Models
MobileSAM models in ONNX format for on-device mobile inference.
## Files

- `mobile_sam_encoder.onnx` + `.onnx.data` - Image encoder (27 MB)
- `mobile_sam_decoder.onnx` + `.onnx.data` - Mask decoder (23 MB)
## Usage
These models are designed for React Native applications using onnxruntime-react-native.
### Model URLs
```ts
const ENCODER_MODEL_URL = 'https://huggingface.co/gifty-so/shoppy-mobilesam/resolve/main/mobile_sam_encoder.onnx';
const DECODER_MODEL_URL = 'https://huggingface.co/gifty-so/shoppy-mobilesam/resolve/main/mobile_sam_decoder.onnx';
```
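Below is a minimal sketch of creating inference sessions with `onnxruntime-react-native`. It assumes both `.onnx` files (and their companion `.onnx.data` files) have already been downloaded to device storage; the local paths and the `loadSessions` helper are hypothetical.

```ts
import { InferenceSession } from 'onnxruntime-react-native';

// Hypothetical local paths: each .onnx file and its .onnx.data file are assumed
// to sit in the same directory so ONNX Runtime can resolve the external weights.
const ENCODER_LOCAL_PATH = '/data/models/mobile_sam_encoder.onnx';
const DECODER_LOCAL_PATH = '/data/models/mobile_sam_decoder.onnx';

export async function loadSessions() {
  // Create each session once and reuse it for every image / prompt.
  const encoder = await InferenceSession.create(ENCODER_LOCAL_PATH);
  const decoder = await InferenceSession.create(DECODER_LOCAL_PATH);
  return { encoder, decoder };
}
```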
### Input Format
**Encoder:**

- Input: `input` (float32, shape `[1, 3, 1024, 1024]`) - ImageNet-normalized RGB image
- Output: `image_embeddings` (float32, shape `[1, 256, 64, 64]`)
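As a sketch of the encoder call, the snippet below builds the normalized input tensor and runs the session from above. It assumes the standard ImageNet per-channel mean/std (on the 0-255 scale) and a `rgb` buffer already resized and padded to 1024×1024; both the helper name and the preprocessing details are assumptions, not part of this repo.

```ts
import { InferenceSession, Tensor } from 'onnxruntime-react-native';

// Standard ImageNet channel statistics (assumed), expressed on the 0-255 scale.
const MEAN = [123.675, 116.28, 103.53];
const STD = [58.395, 57.12, 57.375];

// `rgb` is assumed to be a Uint8Array of length 1024*1024*3 in HWC order,
// i.e. the image already resized/padded to 1024x1024.
export async function encodeImage(encoder: InferenceSession, rgb: Uint8Array): Promise<Tensor> {
  const size = 1024 * 1024;
  const chw = new Float32Array(3 * size);
  for (let i = 0; i < size; i++) {
    for (let c = 0; c < 3; c++) {
      // Convert HWC uint8 -> normalized CHW float32.
      chw[c * size + i] = (rgb[i * 3 + c] - MEAN[c]) / STD[c];
    }
  }
  const input = new Tensor('float32', chw, [1, 3, 1024, 1024]);
  const outputs = await encoder.run({ input });
  return outputs.image_embeddings; // float32, shape [1, 256, 64, 64]
}
```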
**Decoder:**

- Inputs:
  - `image_embeddings` (from encoder)
  - `point_coords` (float32, shape `[1, N, 2]`) - Point coordinates normalized to [0, 1024]
  - `point_labels` (float32, shape `[1, N]`) - 1 for positive, 0 for negative
  - `mask_input` (float32, shape `[1, 1, 256, 256]`) - Previous mask or zeros
  - `has_mask_input` (float32, shape `[1]`) - 0 or 1
  - `orig_im_size` (float32, shape `[2]`) - Original image dimensions
- Outputs:
  - `masks` (float32) - Segmentation masks
  - `iou_predictions` (float32) - Confidence scores
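The sketch below wires these inputs together for a single positive point prompt. The `decodeMask` helper is hypothetical; it assumes the point is already scaled into the 1024×1024 input space and that `orig_im_size` is ordered `[height, width]`, following the usual SAM ONNX export convention.

```ts
import { InferenceSession, Tensor } from 'onnxruntime-react-native';

// Run the decoder with one positive point prompt.
// (x, y) are assumed to be in the 1024x1024 encoder input space.
export async function decodeMask(
  decoder: InferenceSession,
  imageEmbeddings: Tensor,
  x: number,
  y: number,
  origHeight: number,
  origWidth: number
) {
  const feeds = {
    image_embeddings: imageEmbeddings,
    point_coords: new Tensor('float32', Float32Array.from([x, y]), [1, 1, 2]),
    point_labels: new Tensor('float32', Float32Array.from([1]), [1, 1]),
    // No previous mask: pass zeros and set has_mask_input to 0.
    mask_input: new Tensor('float32', new Float32Array(256 * 256), [1, 1, 256, 256]),
    has_mask_input: new Tensor('float32', Float32Array.from([0]), [1]),
    // Assumed order: [height, width].
    orig_im_size: new Tensor('float32', Float32Array.from([origHeight, origWidth]), [2]),
  };
  const outputs = await decoder.run(feeds);
  // `masks` holds mask logits; `iou_predictions` holds per-mask confidence scores.
  return { masks: outputs.masks, iouPredictions: outputs.iou_predictions };
}
```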
## License
Apache 2.0
## Citation
MobileSAM: https://github.com/ChaoningZhang/MobileSAM