Evaluation results for Shimin/LLaMA-embeeding

All scores below are self-reported and computed on the test split of each MTEB task.
| Task | Metric | Value |
|------|--------|-------|
| MTEB AmazonCounterfactualClassification (en) | accuracy | 84.821 |
| MTEB AmazonCounterfactualClassification (en) | ap | 52.244 |
| MTEB AmazonCounterfactualClassification (en) | f1 | 79.356 |
| MTEB AmazonPolarityClassification | accuracy | 76.880 |
| MTEB AmazonPolarityClassification | ap | 71.756 |
| MTEB AmazonPolarityClassification | f1 | 76.759 |
| MTEB AmazonReviewsClassification (en) | accuracy | 36.716 |
| MTEB AmazonReviewsClassification (en) | f1 | 36.332 |
| MTEB ArxivClusteringS2S | v_measure | 30.154 |
| MTEB AskUbuntuDupQuestions | map | 48.656 |
| MTEB AskUbuntuDupQuestions | mrr | 62.226 |
| MTEB BIOSSES | cos_sim_pearson | 69.187 |
| MTEB BIOSSES | cos_sim_spearman | 67.850 |
| MTEB BIOSSES | euclidean_pearson | 63.366 |
| MTEB BIOSSES | euclidean_spearman | 63.015 |
| MTEB BIOSSES | manhattan_pearson | 63.692 |
| MTEB BIOSSES | manhattan_spearman | 63.597 |
| MTEB BiorxivClusteringS2S | v_measure | 24.294 |
| MTEB EmotionClassification | accuracy | 41.930 |
| MTEB EmotionClassification | f1 | 38.451 |
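
The sketch below shows how scores like these are typically produced with the `mteb` benchmark runner. It is a minimal example, not this repository's documented procedure: it assumes the `Shimin/LLaMA-embeeding` checkpoint can be loaded with `sentence_transformers.SentenceTransformer`, which this card does not confirm, and it uses `BIOSSES` only because that task appears in the table above.

```python
# Hedged sketch: re-running one MTEB task for this model.
# Assumption: the checkpoint loads as a Sentence Transformers model;
# adjust the loading step if the repository requires a custom encoder.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# Repo id as listed on this page (note the "embeeding" spelling in the id).
model = SentenceTransformer("Shimin/LLaMA-embeeding")

# BIOSSES is one of the STS tasks reported above
# (cos_sim_pearson / cos_sim_spearman rows).
evaluation = MTEB(tasks=["BIOSSES"])
evaluation.run(model, output_folder="results/LLaMA-embeeding")
```

The runner writes per-task JSON result files into `output_folder`; comparing those numbers against the table above is the usual way to check that a local setup reproduces the self-reported values.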