EO-Mistral: Endless Online Knowledge Model

Created by: https://luls.lol

EO-Mistral is a fine-tuned variant of Mistral-7B-Instruct-v0.2, trained specifically on structured data from the MMORPG Endless Online (classic + Recharged).
The model specializes in:

  • NPC data
  • Item descriptions
  • Monster drops
  • EO history & lore
  • Player community culture
  • EO drama / historical events
  • Clean question–answer formatting for game-related queries

This model is optimized to answer Endless Online questions quickly and accurately, serving as an EO-aware conversational assistant.


🧠 Model Details

• Model Description

EO-Mistral is a LoRA adapter trained with supervised fine-tuning (SFT) on top of Mistral-7B-Instruct-v0.2.
It uses a curated dataset of:

  • Item drop tables
  • NPC metadata
  • EO community history
  • EO "drama dataset" (expanded historical context)
  • Clean instruction-style prompts via Mistral chat template

This gives the model a strong understanding of EO mechanics and terminology.
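
For illustration only, a single dataset record rendered through the Mistral-7B-Instruct-v0.2 chat template would look roughly like the snippet below; the question/answer wording is a hypothetical sample, not an actual dataset entry.

# One hypothetical SFT example after Mistral chat-template rendering:
example = (
    "<s>[INST] In Endless Online, which monsters drop the item 'Eon'? [/INST] "
    "<answer assembled from the drop table></s>"
)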

• Developed by

Luls (https://luls.lol)

• License

Same license as Mistral-7B-Instruct-v0.2 (Apache-2.0)

• Fine-tuned From

mistralai/Mistral-7B-Instruct-v0.2


🔧 Model Sources

  • Repository: https://huggingface.co/Lulslol/EOMistral
  • Base model: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2

🎯 Intended Uses

✔ Direct / Recommended Use

  • Endless Online information queries
  • NPC / item / monster lookup
  • EO lore responses
  • Community discussions
  • Text-based EO companion or chatbot
  • Server moderation helpers (EO-themed)
  • Game knowledge lookup for EO private servers

✔ Downstream Use

  • Custom EO bots (see the sketch after this list)
  • EO server NPC AI dialog
  • EO knowledgebase assistants
  • EO game guide generators
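
As a starting point for the bot use cases above, a minimal chat-loop sketch might look like the following. It assumes a recent transformers version that accepts chat-format message lists in the text-generation pipeline, and peft installed so the adapter repo loads directly.

from transformers import pipeline

# Minimal EO chatbot loop; model id taken from this card's repo.
chat = pipeline("text-generation", model="Lulslol/EOMistral", device_map="auto")

history = []
while True:
    question = input("EO question> ")
    history.append({"role": "user", "content": question})
    # Recent transformers versions return the full conversation,
    # with the new assistant turn appended at the end.
    result = chat(history, max_new_tokens=200)
    reply = result[0]["generated_text"][-1]["content"]
    history.append({"role": "assistant", "content": reply})
    print(reply)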

❌ Out-of-Scope / Not Recommended

  • Real-world factual predictions
  • High-stakes decision making
  • Advice requiring verified accuracy
  • Impersonation of real people
  • Any malicious usage

⚠️ Bias, Risks & Limitations

This model is fine-tuned only on Endless Online content and therefore:

  • May hallucinate when asked non-EO questions
  • Not suited for legal, medical, or financial advice
  • EO drama data may contain biased perspectives
  • Responses may reflect the culture of the EO community

Recommendations

Always verify in-game details if accuracy is critical (e.g., drop rates may change over time).


🚀 Getting Started

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Loading this LoRA adapter repo directly requires `peft` to be installed.
model_id = "Lulslol/EOMistral"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto"
)

# The model was trained on Mistral chat-template prompts, so wrap the
# question in a chat turn rather than passing raw text.
messages = [{"role": "user", "content": "In Endless Online, what drops the item 'Eon'?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))


πŸ‹οΈ Training Details
β€’ Training Data

Dataset includes:

EO Item Drop Dataset (cleaned & deduped)

EO NPC Dataset

EO Drama Dataset (expanded historical text)

EO map summaries & game lore

All formatted into Mistral-style instruction prompts.

• Preprocessing

  • Normalized drop tables
  • Duplicate removal ("type A" strong dedupe)
  • Chat-template embedding
  • Clean instruction/answer format
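
As a rough illustration, the chat-template embedding and dedupe steps might look like the sketch below. The record fields and sample pair are hypothetical; the actual dataset schema is not published with this card.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# Hypothetical question/answer records standing in for the real EO data.
records = [
    {"question": "Which NPC sells lockpicks?", "answer": "<answer from the NPC dataset>"},
]

def to_training_text(record):
    # Render one instruction/answer pair through the Mistral chat template.
    messages = [
        {"role": "user", "content": record["question"]},
        {"role": "assistant", "content": record["answer"]},
    ]
    return tokenizer.apply_chat_template(messages, tokenize=False)

# Exact-match dedupe, a simple stand-in for the card's "type A strong dedupe".
seen, cleaned = set(), []
for rec in records:
    text = to_training_text(rec)
    if text not in seen:
        seen.add(text)
        cleaned.append({"text": text})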

• Training Hyperparameters

  • Method: LoRA + SFT
  • Precision: bf16
  • Batch size: 2
  • Gradient accumulation: 4
  • Epochs: 3
  • Learning rate: 3e-5
  • Max sequence length: 2048
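
A minimal TRL + PEFT training sketch consistent with these hyperparameters is shown below. The dataset file, LoRA rank/alpha, and target modules are assumptions (the card does not list them), and some SFTConfig argument names differ slightly across TRL versions.

from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Hypothetical JSONL file of pre-rendered training texts; the actual EO
# dataset is not published with this card.
dataset = load_dataset("json", data_files="eo_sft_data.jsonl", split="train")

# LoRA rank/alpha/targets are assumptions; the card does not list them.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

args = SFTConfig(
    output_dir="eo-mistral",
    per_device_train_batch_size=2,   # batch size: 2
    gradient_accumulation_steps=4,   # gradient accumulation: 4
    num_train_epochs=3,              # epochs: 3
    learning_rate=3e-5,              # learning rate: 3e-5
    max_seq_length=2048,             # max sequence length (max_length in newer TRL)
    bf16=True,                       # precision: bf16
)

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    args=args,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()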

📊 Evaluation

This model was tested informally by querying:

  • Item drop accuracy
  • EO-specific terminology
  • NPC identification
  • EO historical trivia
  • Multi-step reasoning about EO server design

Results:

  • Very strong performance on EO items/NPCs
  • Consistently accurate responses to structured questions
  • High reliability in explaining EO drama and historical context
  • Weak outside the EO domain (expected)

🌱 Environmental Impact

Training used a single Google Colab GPU (T4/A100) for LoRA SFT.
Estimated carbon footprint is minimal due to small-scale fine-tuning.

πŸ— Technical Specifications
Model Architecture

Mistral-7B transformer

LoRA adapters

SFT training using TRL + PEFT

Software

Transformers

TRL

PEFT 0.18

HuggingFace Hub

Python 3.10 / Colab

✍ Citation

@misc{eo-mistral,
  title        = {EO-Mistral: Endless Online Knowledge Model},
  author       = {Luls},
  howpublished = {\url{https://luls.lol}},
  year         = {2025}
}

📩 Contact

  • Creator website: https://luls.lol
  • Hugging Face user: Lulslol
  • For questions/support: open an issue on the repo.