---
license: apache-2.0
pipeline_tag: text-generation
base_model: Bin12345/AutoCoder_S_6.7B
---

# QuantFactory/AutoCoder_S_6.7B-GGUF

This is a quantized GGUF version of [Bin12345/AutoCoder_S_6.7B](https://huggingface.co/Bin12345/AutoCoder_S_6.7B), created using llama.cpp.

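The GGUF files in this repository are intended to be run with llama.cpp or one of its bindings rather than with `transformers`. The snippet below is a minimal sketch using the `llama-cpp-python` bindings (an assumption; any llama.cpp-compatible runtime works), with a placeholder GGUF filename to be replaced by whichever quantization you download:

```python
# Minimal sketch, assuming the llama-cpp-python bindings are installed
# (pip install llama-cpp-python). The GGUF filename below is a placeholder;
# replace it with the quantized file downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="./AutoCoder_S_6.7B.gguf",  # placeholder filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that checks whether a number is prime."}],
    max_tokens=512,
    temperature=0.0,
)
print(response["choices"][0]["message"]["content"])
```
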
# Model Description

We introduce AutoCoder, a new model designed for the code generation task. The 33B version's test accuracy on the HumanEval base dataset surpasses that of GPT-4 Turbo (April 2024): 90.9% vs. 90.2%.

Additionally, compared to previous open-source models, AutoCoder offers a new feature: whenever the user wishes to execute the code, it can **automatically install the required packages** and attempt to run the code until it deems there are no issues.

This is the 6.7B version of AutoCoder. Its base model is deepseek-coder.

See details on the [AutoCoder GitHub](https://github.com/bin123apple/AutoCoder).

Simple test script (using the `transformers` library):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from datasets import load_dataset

model_path = ""  # path to the AutoCoder_S_6.7B checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             device_map="auto")

# HumanEval+ benchmark, loaded here in case you want to test on its prompts
HumanEval = load_dataset("evalplus/humanevalplus")

Input = ""  # input your question here

messages = [
    {'role': 'user', 'content': Input}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)

# Greedy decoding of up to 1024 new tokens
outputs = model.generate(inputs,
                         max_new_tokens=1024,
                         do_sample=False,
                         num_return_sequences=1,
                         eos_token_id=tokenizer.eos_token_id)

answer = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
print(answer)
```
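The script above loads the HumanEval+ benchmark but never reads from it; to test on a benchmark prompt instead of a hand-written question, an entry can be pulled from the dataset. The field names below (`test` split, `prompt` column) follow the published evalplus dataset and are an assumption here:

```python
from datasets import load_dataset

# Assumes the evalplus/humanevalplus dataset exposes a "test" split with a
# "prompt" field; adjust the names if the schema differs.
HumanEval = load_dataset("evalplus/humanevalplus")
Input = HumanEval["test"][0]["prompt"]
print(Input)
```
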
Paper: https://arxiv.org/abs/2405.14906