---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
license: llama3.1
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
tags:
- facebook
- unsloth
- meta
- pytorch
- llama
- llama-3
- heretic
- uncensored
- decensored
- abliterated
extra_gated_prompt: "### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT\nLlama 3.1 Version\
  \ Release Date: July 23, 2024\n\"Agreement\" means the terms and conditions for\
  \ use, reproduction, distribution and modification of the Llama Materials set forth\
  \ herein.\n\"Documentation\" means the specifications, manuals and documentation\
  \ accompanying Llama 3.1 distributed by Meta at https://llama.meta.com/doc/overview.\n\
  \"Licensee\" or \"you\" means you, or your employer or any other person or entity\
  \ (if you are entering into this Agreement on such person or entity’s behalf), of\
  \ the age required under applicable laws, rules or regulations to provide legal\
  \ consent and that has legal authority to bind your employer or such other person\
  \ or entity if you are entering in this Agreement on their behalf.\n\"Llama 3.1\"\
  \ means the foundational large language models and software and algorithms, including\
  \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
  \ code, fine-tuning enabling code and other elements of the foregoing distributed\
  \ by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means,\
  \ collectively, Meta’s proprietary Llama 3.1 and Documentation (and any portion\
  \ thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms\
  \ Ireland Limited (if you are located in or, if you are an entity, your principal\
  \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you\
  \ are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\n\
  a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
  \ and royalty-free limited license under Meta’s intellectual property or other rights\
  \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy,\
  \ create derivative works of, and make modifications to the Llama Materials.\nb.\
  \ Redistribution and Use.\ni. If you distribute or make available the Llama Materials\
  \ (or any derivative works thereof), or a product or service (including another\
  \ AI model) that contains any of them, you shall (A) provide a copy of this Agreement\
  \ with any such Llama Materials; and (B) prominently display “Built with Llama”\
  \ on a related website, user interface, blogpost, about page, or product documentation.\
  \ If you use the Llama Materials or any outputs or results of the Llama Materials\
  \ to create, train, fine tune, or otherwise improve an AI model, which is distributed\
  \ or made available, you shall also include “Llama” at the beginning of any such\
  \ AI model name.\nii. If you receive Llama Materials, or any derivative works thereof,\
  \ from a Licensee as part of an integrated end user product, then Section 2 of\
  \ this Agreement will not apply to you.\niii. You must retain in all copies of the\
  \ Llama Materials that you distribute the following attribution notice within a\
  \ “Notice” text file distributed as a part of such copies: “Llama 3.1 is licensed\
  \ under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights\
  \ Reserved.”\niv. Your use of the Llama Materials must comply with applicable laws\
  \ and regulations (including trade compliance laws and regulations) and adhere to\
  \ the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3_1/use-policy),\
  \ which is hereby incorporated by reference into this Agreement.\n2. Additional\
  \ Commercial Terms. If, on the Llama 3.1 version release date, the monthly active\
  \ users of the products or services made available by or for Licensee, or Licensee’s\
  \ affiliates, is greater than 700 million monthly active users in the preceding\
  \ calendar month, you must request a license from Meta, which Meta may grant to\
  \ you in its sole discretion, and you are not authorized to exercise any of the\
  \ rights under this Agreement unless or until Meta otherwise expressly grants you\
  \ such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE\
  \ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS”\
  \ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY\
  \ KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\
  \ OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.\
  \ YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\
  \ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA\
  \ MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT\
  \ WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN\
  \ CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS\
  \ AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,\
  \ EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED\
  \ OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No\
  \ trademark licenses are granted under this Agreement, and in connection with the\
  \ Llama Materials, neither Meta nor Licensee may use any name or mark owned by or\
  \ associated with the other or any of its affiliates, except as required for reasonable\
  \ and customary use in describing and redistributing the Llama Materials or as set\
  \ forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the\
  \ “Mark”) solely as required to comply with the last sentence of Section 1.b.i.\
  \ You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/\
  \ ). All goodwill arising out of your use of the Mark will inure to the benefit\
  \ of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made\
  \ by or for Meta, with respect to any derivative works and modifications of the\
  \ Llama Materials that are made by you, as between you and Meta, you are and will\
  \ be the owner of such derivative works and modifications.\nc. If you institute\
  \ litigation or other proceedings against Meta or any entity (including a cross-claim\
  \ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs\
  \ or results, or any portion of any of the foregoing, constitutes infringement of\
  \ intellectual property or other rights owned or licensable by you, then any licenses\
  \ granted to you under this Agreement shall terminate as of the date such litigation\
  \ or claim is filed or instituted. You will indemnify and hold harmless Meta from\
  \ and against any claim by any third party arising out of or related to your use\
  \ or distribution of the Llama Materials.\n6. Term and Termination. The term of\
  \ this Agreement will commence upon your acceptance of this Agreement or access\
  \ to the Llama Materials and will continue in full force and effect until terminated\
  \ in accordance with the terms and conditions herein. Meta may terminate this Agreement\
  \ if you are in breach of any term or condition of this Agreement. Upon termination\
  \ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
  \ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
  \ and Jurisdiction. This Agreement will be governed and construed under the laws\
  \ of the State of California without regard to choice of law principles, and the\
  \ UN Convention on Contracts for the International Sale of Goods does not apply\
  \ to this Agreement. The courts of California shall have exclusive jurisdiction\
  \ of any dispute arising out of this Agreement.\n### Llama 3.1 Acceptable Use Policy\n\
  Meta is committed to promoting safe and fair use of its tools and features, including\
  \ Llama 3.1. If you access or use Llama 3.1, you agree to this Acceptable Use Policy\
  \ (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)\n\
  #### Prohibited Uses\nWe want everyone to use Llama 3.1 safely and responsibly.\
  \ You agree you will not use, or allow others to use, Llama 3.1 to:\n 1. Violate\
  \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
  \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
  \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
  \ or harm to children, including the solicitation, creation, acquisition, or dissemination\
  \ of child exploitative content or failure to report Child Sexual Abuse Material\n\
  \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
  \ illegal distribution of information or materials to minors, including obscene\
  \ materials, or failure to employ legally required age-gating in connection with\
  \ such information or materials.\n 5. Sexual solicitation\n 6. Any\
  \ other criminal activity\n 3. Engage in, promote, incite, or facilitate the\
  \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
  \ 4. Engage in, promote, incite, or facilitate discrimination or other unlawful\
  \ or harmful conduct in the provision of employment, employment benefits, credit,\
  \ housing, other economic benefits, or other essential goods and services\n 5.\
  \ Engage in the unauthorized or unlicensed practice of any profession including,\
  \ but not limited to, financial, legal, medical/health, or related professional\
  \ practices\n 6. Collect, process, disclose, generate, or infer health, demographic,\
  \ or other sensitive personal or private information about individuals without rights\
  \ and consents required by applicable laws\n 7. Engage in or facilitate any action\
  \ or generate any content that infringes, misappropriates, or otherwise violates\
  \ any third-party rights, including the outputs or results of any products or services\
  \ using the Llama Materials\n 8. Create, generate, or facilitate the creation\
  \ of malicious code, malware, computer viruses or do anything else that could disable,\
  \ overburden, interfere with or impair the proper working, integrity, operation\
  \ or appearance of a website or computer system\n2. Engage in, promote, incite,\
  \ facilitate, or assist in the planning or development of activities that present\
  \ a risk of death or bodily harm to individuals, including use of Llama 3.1 related\
  \ to the following:\n 1. Military, warfare, nuclear industries or applications,\
  \ espionage, use for materials or activities that are subject to the International\
  \ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
  \ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
  \ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
  \ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
  \ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
  \ content intended to incite or promote violence, abuse, or any infliction of bodily\
  \ harm to an individual\n3. Intentionally deceive or mislead others, including use\
  \ of Llama 3.1 related to the following:\n 1. Generating, promoting, or furthering\
  \ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
  \ or furthering defamatory content, including the creation of defamatory statements,\
  \ images, or other content\n 3. Generating, promoting, or further distributing\
  \ spam\n 4. Impersonating another individual without consent, authorization,\
  \ or legal right\n 5. Representing that the use of Llama 3.1 or outputs are human-generated\n\
  \ 6. Generating or facilitating false online engagement, including fake reviews\
  \ and other means of fake online engagement\n4. Fail to appropriately disclose to\
  \ end users any known dangers of your AI system\nPlease report any violation of\
  \ this Policy, software “bug,” or other problems that could lead to a violation\
  \ of this Policy through one of the following means:\n * Reporting issues with\
  \ the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)\n\
  \ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
  \ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
  \ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: LlamaUseReport@meta.com"
extra_gated_fields:
  First Name: text
  Last Name: text
  Date of birth: date_picker
  Country: country
  Affiliation: text
  Job title:
    type: select
    options:
    - Student
    - Research Graduate
    - AI researcher
    - AI developer/engineer
    - Reporter
    - Other
  geo: ip_location
  ? By clicking Submit below I accept the terms of the license and acknowledge that
    the information I provide will be collected stored processed and shared in accordance
    with the Meta Privacy Policy
  : checkbox
extra_gated_description: The information you provide will be collected, stored, processed
  and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
# This is a decensored version of [unsloth/Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Llama-3.1-8B-Instruct), made using [Heretic](https://github.com/p-e-w/heretic) v1.0.0

## Abliteration parameters

| Parameter | Value |
| :-------- | :---: |
| **direction_index** | 13.20 |
| **attn.o_proj.max_weight** | 1.45 |
| **attn.o_proj.max_weight_position** | 23.85 |
| **attn.o_proj.min_weight** | 1.34 |
| **attn.o_proj.min_weight_distance** | 15.98 |
| **mlp.down_proj.max_weight** | 0.97 |
| **mlp.down_proj.max_weight_position** | 20.83 |
| **mlp.down_proj.min_weight** | 0.61 |
| **mlp.down_proj.min_weight_distance** | 12.90 |

## Performance

| Metric | This model | Original model ([unsloth/Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Llama-3.1-8B-Instruct)) |
| :----- | :--------: | :---------------------------: |
| **KL divergence** | 0.02 | 0 *(by definition)* |
| **Refusals** | 3/100 | 96/100 |
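
The KL divergence row measures how far this model's next-token distribution drifts from the original model's on harmless prompts (0 would mean identical behavior). As a rough illustration of how such a number could be estimated (this is not Heretic's exact evaluation procedure; the prompt list and the `decensored_id` path are placeholders):

```python
# Hedged sketch: estimate the mean next-token KL divergence KL(original || decensored)
# over a handful of prompts. Prompts and model path are placeholders, not
# Heretic's actual evaluation set or procedure.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

original_id = "unsloth/Llama-3.1-8B-Instruct"
decensored_id = "./path/to/this/model"  # placeholder for this repository

tokenizer = AutoTokenizer.from_pretrained(original_id)
original = AutoModelForCausalLM.from_pretrained(original_id, torch_dtype=torch.bfloat16, device_map="auto")
decensored = AutoModelForCausalLM.from_pretrained(decensored_id, torch_dtype=torch.bfloat16, device_map="auto")

prompts = ["What is the capital of France?", "Explain photosynthesis in one sentence."]
kls = []
for prompt in prompts:
    chat = [{"role": "user", "content": prompt}]
    ids = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt")
    with torch.no_grad():
        # Next-token log-probabilities from each model at the last position
        log_p = F.log_softmax(original(ids.to(original.device)).logits[0, -1].float(), dim=-1).cpu()
        log_q = F.log_softmax(decensored(ids.to(decensored.device)).logits[0, -1].float(), dim=-1).cpu()
    kls.append(torch.sum(log_p.exp() * (log_p - log_q)).item())  # KL(P || Q)

print(f"Mean next-token KL divergence: {sum(kls) / len(kls):.4f}")
```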

-----

## Model Information

The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.

**Model developer:** Meta

**Model Architecture:** Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

| | Training Data | Params | Input modalities | Output modalities | Context length | GQA | Token count | Knowledge cutoff |
| :-- | :-- | :-- | :-- | :-- | :-- | :-- | :-- | :-- |
| Llama 3.1 (text only) | A new mix of publicly available online data. | 8B | Multilingual Text | Multilingual Text and code | 128k | Yes | 15T+ | December 2023 |
| | | 70B | Multilingual Text | Multilingual Text and code | 128k | Yes | | |
| | | 405B | Multilingual Text | Multilingual Text and code | 128k | Yes | | |

**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

**Llama 3.1 family of models.** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.

**Model Release Date:** July 23, 2024.

**Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

**License:** A custom commercial license, the Llama 3.1 Community License, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)

**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).

## Intended Use

**Intended Use Cases** Llama 3.1 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. The Llama 3.1 model collection also supports the ability to leverage the outputs of its models to improve other models, including synthetic data generation and distillation. The Llama 3.1 Community License allows for these use cases.

**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.1 Community License. Use in languages beyond those explicitly referenced as supported in this model card.

**<span style="text-decoration:underline;">Note</span>:** Llama 3.1 has been trained on a broader collection of languages than the 8 supported languages. Developers may fine-tune Llama 3.1 models for languages beyond the 8 supported languages provided they comply with the Llama 3.1 Community License and the Acceptable Use Policy, and in such cases are responsible for ensuring that any use of Llama 3.1 in additional languages is done in a safe and responsible manner.

## How to use

This repository contains two versions of Meta-Llama-3.1-8B-Instruct, for use with transformers and with the original `llama` codebase.

### Use with transformers

Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.

Make sure to update your transformers installation via `pip install --upgrade transformers`.

```python
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```

Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generation, quantization, and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes).
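
For instance, here is a minimal sketch of the `torch.compile()` pattern (this is an illustrative setup, not copied from the recipes repo; see that repo for tested version combinations):

```python
# Hedged sketch: compile the model's forward pass to speed up repeated
# generation on recent PyTorch / transformers versions. Details may differ
# from the official recipes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model.forward = torch.compile(model.forward, mode="reduce-overhead")

inputs = tokenizer("The key to sailing the seven seas is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```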

### Tool use with transformers

LLaMA-3.1 supports multiple tool use formats. You can see a full guide to prompt formatting [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/).

Tool use is also supported through [chat templates](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling) in Transformers.
Here is a quick example showing a single simple tool:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

# First, define a tool
def get_current_temperature(location: str) -> float:
    """
    Get the current temperature at a location.

    Args:
        location: The location to get the temperature for, in the format "City, Country"
    Returns:
        The current temperature at the specified location, as a float.
    """
    return 22.  # A real function should probably actually get the temperature!

# Next, create a chat and apply the chat template
messages = [
    {"role": "system", "content": "You are a bot that responds to weather queries."},
    {"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
]

inputs = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True)
```

You can then generate text from this input as normal. If the model generates a tool call, you should add it to the chat like so:

```python
tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}
messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})
```

and then call the tool and append the result, with the `tool` role, like so:

```python
messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"})
```

After that, you can `generate()` again to let the model use the tool result in the chat. Note that this was a very brief introduction to tool calling - for more information, see the [LLaMA prompt format docs](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/) and the Transformers [tool use documentation](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling).
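
Putting the pieces together, here is a hedged sketch of that final generation step, continuing with the `tokenizer`, `messages`, and `get_current_temperature` defined above (model loading and decoding settings are illustrative, not prescriptive):

```python
# A minimal sketch, not an official recipe: after the tool result has been
# appended to `messages`, re-apply the chat template and generate the reply.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

input_ids = tokenizer.apply_chat_template(
    messages,
    tools=[get_current_temperature],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens (the model's answer)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```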

### Use with `llama`

Please follow the instructions in the [repository](https://github.com/meta-llama/llama).

To download the original checkpoints, see the example command below leveraging `huggingface-cli`:

```
huggingface-cli download meta-llama/Meta-Llama-3.1-8B-Instruct --include "original/*" --local-dir Meta-Llama-3.1-8B-Instruct
```

## Hardware and Software

**Training Factors** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, annotation, and evaluation were also performed on production infrastructure.

**Training utilized a cumulative total of** 39.3M GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model, and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.

**Training Greenhouse Gas Emissions** Estimated total location-based greenhouse gas emissions were **11,390** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.

| | Training Time (GPU hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :-- | :--: | :--: | :--: | :--: |
| Llama 3.1 8B | 1.46M | 700 | 420 | 0 |
| Llama 3.1 70B | 7.0M | 700 | 2,040 | 0 |
| Llama 3.1 405B | 30.84M | 700 | 8,930 | 0 |
| Total | 39.3M | | 11,390 | 0 |

The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.

## Training Data

**Overview:** Llama 3.1 was pretrained on ~15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25M synthetically generated examples.

**Data Freshness:** The pretraining data has a cutoff of December 2023.

## Benchmark scores

In this section, we report the results for Llama 3.1 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library.

### Base pretrained models

| Category | Benchmark | # Shots | Metric | Llama 3 8B | Llama 3.1 8B | Llama 3 70B | Llama 3.1 70B | Llama 3.1 405B |
| :-- | :-- | :--: | :-- | :--: | :--: | :--: | :--: | :--: |
| General | MMLU | 5 | macro_avg/acc_char | 66.7 | 66.7 | 79.5 | 79.3 | 85.2 |
| | MMLU-Pro (CoT) | 5 | macro_avg/acc_char | 36.2 | 37.1 | 55.0 | 53.8 | 61.6 |
| | AGIEval English | 3-5 | average/acc_char | 47.1 | 47.8 | 63.0 | 64.6 | 71.6 |
| | CommonSenseQA | 7 | acc_char | 72.6 | 75.0 | 83.8 | 84.1 | 85.8 |
| | Winogrande | 5 | acc_char | - | 60.5 | - | 83.3 | 86.7 |
| | BIG-Bench Hard (CoT) | 3 | average/em | 61.1 | 64.2 | 81.3 | 81.6 | 85.9 |
| | ARC-Challenge | 25 | acc_char | 79.4 | 79.7 | 93.1 | 92.9 | 96.1 |
| Knowledge reasoning | TriviaQA-Wiki | 5 | em | 78.5 | 77.6 | 89.7 | 89.8 | 91.8 |
| Reading comprehension | SQuAD | 1 | em | 76.4 | 77.0 | 85.6 | 81.8 | 89.3 |
| | QuAC (F1) | 1 | f1 | 44.4 | 44.9 | 51.1 | 51.1 | 53.6 |
| | BoolQ | 0 | acc_char | 75.7 | 75.0 | 79.0 | 79.4 | 80.0 |
| | DROP (F1) | 3 | f1 | 58.4 | 59.5 | 79.7 | 79.6 | 84.8 |

### Instruction tuned models

| Category | Benchmark | # Shots | Metric | Llama 3 8B Instruct | Llama 3.1 8B Instruct | Llama 3 70B Instruct | Llama 3.1 70B Instruct | Llama 3.1 405B Instruct |
| :-- | :-- | :--: | :-- | :--: | :--: | :--: | :--: | :--: |
| General | MMLU | 5 | macro_avg/acc | 68.5 | 69.4 | 82.0 | 83.6 | 87.3 |
| | MMLU (CoT) | 0 | macro_avg/acc | 65.3 | 73.0 | 80.9 | 86.0 | 88.6 |
| | MMLU-Pro (CoT) | 5 | micro_avg/acc_char | 45.5 | 48.3 | 63.4 | 66.4 | 73.3 |
| | IFEval | | | 76.8 | 80.4 | 82.9 | 87.5 | 88.6 |
| Reasoning | ARC-C | 0 | acc | 82.4 | 83.4 | 94.4 | 94.8 | 96.9 |
| | GPQA | 0 | em | 34.6 | 30.4 | 39.5 | 46.7 | 50.7 |
| Code | HumanEval | 0 | pass@1 | 60.4 | 72.6 | 81.7 | 80.5 | 89.0 |
| | MBPP++ base version | 0 | pass@1 | 70.6 | 72.8 | 82.5 | 86.0 | 88.6 |
| | Multipl-E HumanEval | 0 | pass@1 | - | 50.8 | - | 65.5 | 75.2 |
| | Multipl-E MBPP | 0 | pass@1 | - | 52.4 | - | 62.0 | 65.7 |
| Math | GSM-8K (CoT) | 8 | em_maj1@1 | 80.6 | 84.5 | 93.0 | 95.1 | 96.8 |
| | MATH (CoT) | 0 | final_em | 29.1 | 51.9 | 51.0 | 68.0 | 73.8 |
| Tool Use | API-Bank | 0 | acc | 48.3 | 82.6 | 85.1 | 90.0 | 92.0 |
| | BFCL | 0 | acc | 60.3 | 76.1 | 83.0 | 84.8 | 88.5 |
| | Gorilla Benchmark API Bench | 0 | acc | 1.7 | 8.2 | 14.7 | 29.7 | 35.3 |
| | Nexus (0-shot) | 0 | macro_avg/acc | 18.1 | 38.5 | 47.8 | 56.7 | 58.7 |
| Multilingual | Multilingual MGSM (CoT) | 0 | em | - | 68.9 | - | 86.9 | 91.6 |

#### Multilingual benchmarks

| Category | Benchmark | Language | Llama 3.1 8B | Llama 3.1 70B | Llama 3.1 405B |
| :-- | :-- | :-- | :--: | :--: | :--: |
| General | MMLU (5-shot, macro_avg/acc) | Portuguese | 62.12 | 80.13 | 84.95 |
| | | Spanish | 62.45 | 80.05 | 85.08 |
| | | Italian | 61.63 | 80.4 | 85.04 |
| | | German | 60.59 | 79.27 | 84.36 |
| | | French | 62.34 | 79.82 | 84.66 |
| | | Hindi | 50.88 | 74.52 | 80.31 |
| | | Thai | 50.32 | 72.95 | 78.21 |

## Responsibility & Safety

As part of our responsible release approach, we followed a three-pronged strategy for managing trust & safety risks:

* Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama.
* Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm.
* Provide protections for the community to help prevent the misuse of our models.

### Responsible deployment

Llama is a foundational technology designed to be used in a variety of use cases; examples of how Meta’s Llama models have been responsibly deployed can be found on our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology’s power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use case, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.1 was developed following the best practices outlined in the [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/); refer to it to learn more.

#### Llama 3.1 instruct

Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, and to offer developers a readily available, safe, and powerful model for various applications, reducing the workload for developers deploying safe AI systems. For more details on the safety mitigations implemented, please read the Llama 3 paper.

**Fine-tuning data**

We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.

**Refusals and Tone**

Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.

#### Llama 3.1 systems

**Large language models, including Llama 3.1, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required.** Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieving the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools.

As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard 3, Prompt Guard and Code Shield. All our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out of the box.
1232
 
 

#### New capabilities

Note that this release introduces new capabilities, including a longer context window, multilingual inputs and outputs, and possible integrations by developers with third-party tools. Building with these new capabilities requires specific considerations in addition to the best practices that generally apply across all Generative AI use cases.

**Tool-use**: Just like in standard software development, developers are responsible for the integration of the LLM with the tools and services of their choice. They should define a clear policy for their use case and assess the integrity of the third-party services they use, to be aware of the safety and security limitations when using this capability. Refer to the Responsible Use Guide for best practices on the safe deployment of third-party safeguards.

**Multilinguality**: Llama 3.1 supports 7 languages in addition to English: French, German, Hindi, Italian, Portuguese, Spanish, and Thai. Llama may be able to output text in languages other than those that meet performance thresholds for safety and helpfulness. We strongly discourage developers from using this model to converse in non-supported languages without implementing fine-tuning and system controls in alignment with their policies and the best practices shared in the Responsible Use Guide.

### Evaluations

We evaluated Llama models for common use cases as well as specific capabilities. Common use case evaluations measure the safety risks of systems for the most commonly built applications, including chatbots, coding assistants, and tool calls. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case. Prompt Guard and Code Shield are also available if relevant to the application.

Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which we crafted dedicated benchmarks, including long context, multilingual, tool calls, coding, and memorization.

**Red teaming**

For both scenarios, we conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting, and we used the learnings to improve our benchmarks and safety tuning datasets.

We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity, in addition to multilingual content specialists with backgrounds in integrity issues in specific geographic markets.

### Critical and other risks

We specifically focused our efforts on mitigating the following critical risk areas:

**1. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness**

To assess risks related to the proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons.

**2. Child Safety**

Child Safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in Child Safety risks, and to inform on any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors, including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances and experiences.

**3. Cyber attack enablement**

Our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.

Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention.

Our study of Llama-3.1-405B’s social engineering uplift for cyber attackers was conducted to assess the effectiveness of AI models in aiding cyber threat actors in spear phishing campaigns. Please read our Llama 3.1 Cyber security whitepaper to learn more.

### Community

Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI, and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners, including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).

We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).

Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and a [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

The core values of Llama 3.1 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.1 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3.1 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.1’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.1 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.