Fix for KV Cache Bug - ouro-cache-fix Package
# Solution for KV Cache Bug in Ouro-1.4B
I've created a Python package that fixes the KV cache indexing bug affecting inference speed.
## 📦 Package: ouro-cache-fix
- PyPI: https://pypi.org/project/ouro-cache-fix/
- GitHub: https://github.com/Antizana/ouro-cache-fix
## ✨ What it fixes
- Resolves cache out-of-bounds errors during generation
- Improves inference speed by 1.3-1.7x
- Drop-in replacement for DynamicCache (a rough sketch of the idea is below)
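
For illustration, here is a minimal sketch of what a drop-in DynamicCache replacement can look like, assuming the out-of-bounds errors come from Ouro's looped layers reusing layer indices beyond the cache's allocated slots. The class and parameter names (`BoundedLayerCache`, `num_physical_layers`) are hypothetical; the actual `UniversalTransformerCache` implementation may differ.

```python
from typing import Optional, Tuple

import torch
from transformers import DynamicCache


class BoundedLayerCache(DynamicCache):
    """Illustrative drop-in replacement: reuses DynamicCache but folds
    repeated (looped) layer indices back onto the physical cache slots."""

    def __init__(self, num_physical_layers: int):
        super().__init__()
        # Assumption: the looped model has this many distinct cache slots.
        self.num_physical_layers = num_physical_layers

    def update(
        self,
        key_states: torch.Tensor,
        value_states: torch.Tensor,
        layer_idx: int,
        cache_kwargs: Optional[dict] = None,
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        # Clamp the layer index so a model that loops over shared layers
        # never indexes past the caches it has actually allocated.
        physical_idx = layer_idx % self.num_physical_layers
        return super().update(key_states, value_states, physical_idx, cache_kwargs)
```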
## 🚀 Usage
```python
from transformers import AutoModelForCausalLM
from ouro_cache_fix import UniversalTransformerCache
model = AutoModelForCausalLM.from_pretrained(
    "ByteDance/Ouro-1.4B",
    trust_remote_code=True,
    cache_implementation=UniversalTransformerCache,
)
```
Tested with transformers 4.36.0+ and torch 2.0.0+.
Would appreciate feedback and testing!
Thank you for this valuable contribution! We've verified that your fix successfully enables Ouro to run with transformers>=4.56.0. We've now merged your KV cache optimization into our model repository and credited your work in the model card's Acknowledgments section. We really appreciate your effort in improving the model's compatibility!
After your last update, I get

```
AttributeError: property 'key_cache' of 'UniversalTransformerCache' object has no setter
```

even when running the example code from the model card.
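
For context, this kind of error does not depend on the model itself: assigning to a read-only property raises it in plain Python, which suggests `key_cache` is exposed as a property without a setter somewhere in the cache class or the installed transformers version. A standalone reproduction:

```python
# Standalone reproduction of the same AttributeError: writing to a
# property that defines no setter fails exactly like the report above.
class Demo:
    @property
    def key_cache(self):
        return []


d = Demo()
d.key_cache = []  # AttributeError: property 'key_cache' of 'Demo' object has no setter
```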
ByteDance has already included this fix as native functionality in the model, as mentioned in the previous comment.
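
If so, loading the model directly on a recent transformers version should work without installing ouro-cache-fix; a quick way to verify, assuming the standard model-card loading path with `trust_remote_code`:

```python
# Sanity check that the upstream model runs without the ouro-cache-fix package,
# assuming the merged fix ships with the model's remote code.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ByteDance/Ouro-1.4B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("ByteDance/Ouro-1.4B", trust_remote_code=True)

inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```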