EO-Mistral: Endless Online Knowledge Model
Created by: https://luls.lol
EO-Mistral is a fine-tuned variant of Mistral-7B-Instruct-v0.2, trained specifically on structured data from the MMORPG Endless Online (classic + Recharged).
The model specializes in:
- NPC data
- Item descriptions
- Monster drops
- EO history & lore
- Player community culture
- EO drama / historical events
- Clean question/answer formatting for game-related queries
The model is intended as an EO-aware conversational assistant that answers Endless Online questions quickly and accurately.
Model Details
• Model Description
EO-Mistral is a supervised fine-tune (SFT) of Mistral-7B-Instruct, trained with LoRA adapters.
It uses a curated dataset of:
- Item drop tables
- NPC metadata
- EO community history
- EO "drama dataset" (expanded historical context)
- Clean instruction-style prompts rendered with the Mistral chat template (see the sketch below)
This gives the model a strong understanding of EO mechanics and terminology.
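For illustration, here is a minimal sketch of how one drop-table fact could be rendered through the Mistral chat template into an SFT example. The record and field names below are hypothetical, not the actual dataset schema:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# Hypothetical normalized drop-table record.
record = {"item": "Eon", "npc": "Example NPC"}

example = [
    {"role": "user", "content": f"In Endless Online, what drops the item '{record['item']}'?"},
    {"role": "assistant", "content": f"{record['npc']} drops {record['item']}."},
]

# Render into the [INST] ... [/INST] format that Mistral-Instruct expects.
print(tokenizer.apply_chat_template(example, tokenize=False))
```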
• Developed by
Luls (https://luls.lol)
• License
Same license as Mistral-7B-Instruct-v0.2 (Apache-2.0)
• Finetuned From
mistralai/Mistral-7B-Instruct-v0.2
Model Sources
- Base Model: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2
- Creator Website: https://luls.lol
- Dataset: Private LoRA SFT dataset (items, NPCs, EO history & drama)
Intended Uses
Direct / Recommended Use
- Endless Online information queries
- NPC / item / monster lookup
- EO lore responses
- Community discussions
- Text-based EO companion or chatbot
- Server moderation helpers (EO-themed)
- Game knowledge lookup for EO private servers
Downstream Use
- Custom EO bots (a minimal chat-loop sketch follows this list)
- EO server NPC AI dialog
- EO knowledgebase assistants
- EO game guide generators
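As a rough starting point for a custom bot, here is a minimal multi-turn chat loop. It assumes the repo id Lulslol/EOMistral and the standard Transformers generation API; this is a sketch, not a production bot:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Lulslol/EOMistral"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

history = []
while True:
    user = input("you> ")
    history.append({"role": "user", "content": user})
    # Re-render the whole conversation through the Mistral chat template each turn.
    ids = tokenizer.apply_chat_template(
        history, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(ids, max_new_tokens=200)
    reply = tokenizer.decode(out[0][ids.shape[-1]:], skip_special_tokens=True)
    history.append({"role": "assistant", "content": reply})
    print("bot>", reply)
```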
Out-of-Scope / Not Recommended
- Real-world factual predictions
- High-stakes decision making
- Advice requiring verified accuracy
- Impersonation of real people
- Any malicious usage
Bias, Risks & Limitations
This model is fine-tuned only on Endless Online content and therefore:
- May hallucinate when asked non-EO questions
- Not suited for legal, medical, or financial advice
- EO drama data may contain biased perspectives
- Responses may reflect the culture of the EO community
Recommendations
Always verify in-game details if accuracy is critical (e.g., drop rates may change over time).
Getting Started
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Lulslol/EOMistral"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Mistral-Instruct models expect the [INST] chat format, so go through the chat template.
messages = [{"role": "user", "content": "In Endless Online, what drops the item 'Eon'?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
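If the repository ships LoRA adapter weights rather than a merged checkpoint, the adapters can instead be attached to the base model with PEFT. A minimal sketch, assuming the adapter files live under the same repo id:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Lulslol/EOMistral"  # assumption: this repo holds the LoRA adapters

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights
```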
Training Details
• Training Data
The dataset includes:
- EO Item Drop Dataset (cleaned & deduped)
- EO NPC Dataset
- EO Drama Dataset (expanded historical text)
- EO map summaries & game lore
All formatted into Mistral-style instruction prompts.
• Preprocessing
- Normalized drop tables
- Duplicate removal (type A strong dedupe)
- Chat-template embedding
- Clean instruction / answer formatting (a rough sketch follows this list)
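A rough illustration of the dedupe and formatting steps. The record schema and the exact "type A" dedupe rule below are assumptions, not the actual pipeline:

```python
import json

# Hypothetical raw drop-table rows.
rows = [
    {"item": "Eon", "npc": "Example NPC"},
    {"item": "eon ", "npc": "Example  NPC"},  # near-duplicate that should be dropped
]

def normalize(row):
    # Collapse whitespace and casing so near-duplicate rows hash identically.
    return {k: " ".join(str(v).split()).lower() for k, v in row.items()}

def to_example(row):
    # One instruction/answer pair per drop-table row.
    return {"messages": [
        {"role": "user", "content": f"In Endless Online, what drops the item '{row['item']}'?"},
        {"role": "assistant", "content": f"{row['npc']} drops {row['item']}."},
    ]}

seen, examples = set(), []
for row in rows:
    key = json.dumps(normalize(row), sort_keys=True)
    if key in seen:  # strong dedupe: keep only the first normalized copy
        continue
    seen.add(key)
    examples.append(to_example(row))

print(len(examples))  # -> 1
```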
• Training Hyperparameters
- Method: LoRA + SFT
- Precision: bf16
- Batch size: 2
- Gradient accumulation: 4
- Epochs: 3
- Learning rate: 3e-5
- Max sequence length: 2048
A TRL + PEFT setup matching these settings is sketched below.
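A minimal TRL + PEFT sketch matching the hyperparameters above. The LoRA rank, alpha, and target modules are assumptions (the card does not state them), and the SFTConfig field names vary slightly across TRL versions:

```python
from datasets import Dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Tiny stand-in dataset in the conversational format TRL understands.
train_ds = Dataset.from_list([{"messages": [
    {"role": "user", "content": "In Endless Online, what drops the item 'Eon'?"},
    {"role": "assistant", "content": "Example NPC drops Eon."},  # placeholder answer
]}])

lora = LoraConfig(
    r=16, lora_alpha=32,  # assumed values; not stated in this card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

args = SFTConfig(
    output_dir="eo-mistral-lora",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    num_train_epochs=3,
    learning_rate=3e-5,
    max_seq_length=2048,  # renamed to `max_length` in newer TRL releases
    bf16=True,
)

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    args=args,
    train_dataset=train_ds,
    peft_config=lora,
)
trainer.train()
```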
Evaluation
The model was tested informally by querying it on:
- Item drop accuracy
- EO-specific terminology
- NPC identification
- EO historical trivia
- Multi-step reasoning about EO server design
Results:
- Very strong performance on EO items/NPCs
- Consistently accurate responses to structured questions
- High reliability in explaining EO drama and historical context
- Weak outside the EO domain (expected)
A sketch of this style of spot-check appears below.
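For reference, a sketch of this kind of informal spot-check, reusing the model and tokenizer from the Getting Started snippet. The questions and expected substrings are placeholders, not the actual test set:

```python
# Ask known EO questions and grep the answers for expected keywords.
checks = [
    ("In Endless Online, what drops the item 'Eon'?", "Eon"),  # placeholder expectation
    ("What is the main city in Endless Online?", "Aeven"),     # placeholder expectation
]
for question, expected in checks:
    msgs = [{"role": "user", "content": question}]
    ids = tokenizer.apply_chat_template(
        msgs, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(ids, max_new_tokens=200)
    answer = tokenizer.decode(out[0][ids.shape[-1]:], skip_special_tokens=True)
    status = "PASS" if expected.lower() in answer.lower() else "CHECK"
    print(status, "|", question)
```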
Environmental Impact
Training used a single Google Colab GPU (T4/A100) for LoRA SFT.
Estimated carbon footprint is minimal due to small-scale fine-tuning.
Technical Specifications
• Model Architecture
- Mistral-7B transformer
- LoRA adapters
- SFT training using TRL + PEFT
• Software
- Transformers
- TRL
- PEFT 0.18
- Hugging Face Hub
- Python 3.10 / Colab
Citation
```bibtex
@misc{eo-mistral,
  title        = {EO-Mistral: Endless Online Knowledge Model},
  author       = {Luls},
  howpublished = {\url{https://luls.lol}},
  year         = {2025}
}
```
Contact
- Creator Website: https://luls.lol
- Hugging Face User: Lulslol
- For questions/support: open an issue on the repo