# Gherkin Scenario Generator (DeepSeek Coder LoRA)
A fine-tuned LoRA adapter for generating Gherkin BDD test scenarios, built on top of DeepSeek Coder 6.7B Instruct.
## Model Description
This model generates Gherkin/Cucumber test scenarios for data management systems. It was fine-tuned on real-world BDD test cases covering:
- Data import/export (CSV, JSON, Excel)
- REST and SOAP API testing
- UI navigation and search
- Job scheduling and reporting
- IBOR and financial data operations
## Usage

### With Unsloth (Recommended)
```python
from unsloth import FastLanguageModel

# Load the base model with the LoRA adapter, quantized to 4-bit
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Ghaythfd/gherkin-deepseek-lora",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to inference mode

prompt = "### Instruction:\nWrite a Gherkin scenario for testing CSV file import\n\n### Response:\n"
inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,  # required for temperature/top_p to take effect
    temperature=0.5,
    top_p=0.9,
    repetition_penalty=1.15,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
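Because `generate` returns the prompt tokens followed by the completion, the decoded string still contains the `### Instruction:` block. A small hypothetical helper (not part of this repository) can strip the echoed prompt:

```python
def extract_response(decoded: str) -> str:
    """Return only the generated scenario, dropping the echoed prompt.

    The decoded output contains the full prompt followed by the
    completion, so everything up to "### Response:" is discarded.
    """
    marker = "### Response:"
    _, sep, response = decoded.partition(marker)
    # Fall back to the whole string if the marker is absent
    return response.strip() if sep else decoded.strip()

decoded = (
    "### Instruction:\nWrite a Gherkin scenario for testing CSV file import\n\n"
    "### Response:\nScenario: Import a valid CSV file"
)
print(extract_response(decoded))  # → Scenario: Import a valid CSV file
```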
### With PEFT/Transformers
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Load the base model in 4-bit, then attach the LoRA adapter on top
base_model = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-coder-6.7b-instruct",
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
)
model = PeftModel.from_pretrained(base_model, "Ghaythfd/gherkin-deepseek-lora")
tokenizer = AutoTokenizer.from_pretrained("Ghaythfd/gherkin-deepseek-lora")
```
## Prompt Format

```text
### Instruction:
{your request here}

### Response:
```
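To avoid mistyping the template, the format above can be wrapped in a small helper (an illustrative function, not shipped with the model):

```python
def build_prompt(instruction: str) -> str:
    """Format a request in the instruction/response template the adapter was trained on."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_prompt("Write a Gherkin scenario for testing CSV file import")
```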
## Example Output

**Prompt:** "Write a Gherkin scenario for testing CSV file import"

**Output:**
```gherkin
Scenario Outline: Testing CSV file import
  Given I am logged as TAV_standard.user on <screen_name>
  And I go to detail screen <detail_screen> of <object_type>
  When I select the tab <tab>
  Then The fields <fields> should be displayed with values <values>

  Examples:
    | screen_name   | object_type | detail_screen | tab      | fields | values |
    | My Securities | Equity      | Detail        | Overview | Name   | Test   |
```
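If you want to post-process generated Scenario Outlines, the `Examples:` table can be read back into Python with a few lines of string handling. This is a minimal sketch (a hypothetical helper, not part of the model) that assumes well-formed pipe-delimited rows like those above:

```python
def parse_examples_table(scenario: str) -> list[dict[str, str]]:
    """Parse the Examples table of a Scenario Outline into one dict per row."""
    rows = [
        [cell.strip() for cell in line.strip().strip("|").split("|")]
        for line in scenario.splitlines()
        if line.strip().startswith("|")  # keep only table rows
    ]
    if len(rows) < 2:  # need a header row plus at least one data row
        return []
    header, *data = rows
    return [dict(zip(header, values)) for values in data]

sample = """Scenario Outline: Testing CSV file import
  Examples:
    | screen_name   | object_type |
    | My Securities | Equity      |"""
rows = parse_examples_table(sample)
print(rows)  # → [{'screen_name': 'My Securities', 'object_type': 'Equity'}]
```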
## Training Details
- Base Model: deepseek-ai/deepseek-coder-6.7b-instruct
- Method: LoRA (Low-Rank Adaptation)
- LoRA Rank: 16
- LoRA Alpha: 16
- Training Data: 726 examples from Gherkin feature files
- Epochs: 1
- Framework: Unsloth + TRL
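The hyperparameters above correspond to a PEFT `LoraConfig` along these lines. Note that `target_modules` is an assumption (the attention and MLP projections typically targeted on Llama-style architectures like DeepSeek Coder); the exact modules used in training are not documented here.

```python
from peft import LoraConfig

# Rank and alpha taken from the table above; target_modules is an
# assumption, not a documented training detail.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```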
## Limitations
- Generates scenarios in the style of the training data (data management domain)
- May hallucinate specific field names or values
- Works best for scenarios similar to the training examples
## License
Apache 2.0