# Llama-3.1-8B-Instruct GGUF (ShapeLearn Quantized)
This is a GGUF-quantized version of Llama-3.1-8B-Instruct, produced with ByteShape's ShapeLearn, which learns the optimal datatype for each tensor to maintain high quality even at very low bit lengths. This release focuses exclusively on those low-bit variants.
To learn more about ShapeLearn and to see detailed benchmarks across GPUs, CPUs, and even the Raspberry Pi, please visit our blog.
If you have questions or want to share feedback, reach us on Reddit.
## How to Pick a Model
We provide CPU- and GPU-optimized variants for llama.cpp:
- CPUs: KQ quantization is preferred due to GGML kernel efficiency.
- Nvidia GPUs: IQ quantization delivers faster throughput on modern architectures.
Each hardware target includes a range of models covering different size and quality tradeoffs.
The charts below show quality vs tokens per second for each device, comparing ShapeLearn models with Unsloth baselines.
**Selection rule:** choose the model with the highest quality at your target throughput, or the fastest model that still meets your required quality.
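If you do not know your target throughput, you can measure it directly with llama.cpp's bundled `llama-bench` tool. A minimal sketch (the filename is a placeholder; substitute whichever variant you downloaded):

```bash
# Benchmark prefill (-p, prompt tokens) and generation (-n, output tokens)
# speed for one GGUF file on this machine. The filename is a placeholder.
./llama-bench -m ./Llama-3.1-8B-Instruct-IQ4_XS-3.57bpw.gguf -p 512 -n 128
```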
## GGUF-KQ Models (Best for CPU)
The table below is sorted by inference speed; the numbered points in the chart correspond to the Model IDs:
| Model ID | Bits/Weight | Model Size | Normalized Quality |
|---|---|---|---|
| KQ-1 | 2.91 | 2.93 GB | 83.03% |
| KQ-2 | 3.06 | 3.08 GB | 87.68% |
| KQ-3 | 3.24 | 3.26 GB | 90.10% |
| KQ-4 | 3.34 | 3.36 GB | 92.40% |
| KQ-5 | 3.41 | 3.43 GB | 93.20% |
| KQ-6 | 3.60 | 3.63 GB | 94.85% |
| KQ-7 | 3.83 | 3.85 GB | 92.89% |
| KQ-8 | 4.21 | 4.23 GB | 96.15% |
| KQ-9 | 4.31 | 4.33 GB | 97.94% |
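As a concrete starting point, a KQ variant can be run on CPU with llama.cpp's `llama-cli`. A minimal sketch, assuming a hypothetical KQ filename (check the repo's file list for the exact names):

```bash
# CPU inference with llama.cpp: -t sets the number of CPU threads,
# -p supplies the prompt. The KQ filename below is hypothetical.
./llama-cli -m ./Llama-3.1-8B-Instruct-KQ4.gguf -t 8 \
  -p "Explain GGUF quantization in one paragraph."
```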
## GGUF-IQ Models (Best for higher-end GPUs)
The table below is sorted by inference speed; the numbered points in the chart correspond to the Model IDs:
| Model ID | Bits/Weight | Model Size | Normalized Quality |
|---|---|---|---|
| IQ-1 | 2.54 | 2.56 GB | 68.48% |
| IQ-2 | 2.72 | 2.74 GB | 81.97% |
| IQ-3 | 2.87 | 2.89 GB | 83.63% |
| IQ-4 | 3.01 | 3.03 GB | 86.02% |
| IQ-5 | 3.09 | 3.11 GB | 87.75% |
| IQ-6 | 3.31 | 3.33 GB | 89.56% |
| IQ-7 | 3.57 | 3.59 GB | 93.21% |
| IQ-8 | 3.94 | 3.96 GB | 95.65% |
| IQ-9 | 4.05 | 4.07 GB | 95.71% |
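Likewise, the same `llama-cli` invocation works for GPU inference when llama.cpp is built with GPU support; `-ngl` offloads transformer layers to the GPU. A minimal sketch using the IQ filename from the Ollama example below:

```bash
# GPU inference with llama.cpp: -ngl 99 offloads all layers to the GPU.
# Requires a build with GPU (e.g. CUDA) support enabled.
./llama-cli -m ./Llama-3.1-8B-Instruct-IQ4_XS-3.57bpw.gguf -ngl 99 \
  -p "Explain GGUF quantization in one paragraph."
```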
## Notes on Quantization Labels
The labels you see (for example, IQ4_XS) exist only so that Hugging Face lists our models in its GGUF table. We do not use the conventional quantization profiles defined in llama.cpp; in our case, the labels simply indicate whether a model uses KQ or IQ quantization and its average bit length, which is why several models can share the same tag.
## Running These Models with Ollama
All GGUF files in this repo can be used directly with Ollama.
To run a model with Ollama, use:

```bash
ollama run hf.co/byteshape/Llama-3.1-8B-Instruct-GGUF:FILE_NAME.gguf
```

Replace `FILE_NAME.gguf` with the GGUF filename you want. For example:

```bash
ollama run hf.co/byteshape/Llama-3.1-8B-Instruct-GGUF:Llama-3.1-8B-Instruct-IQ4_XS-3.57bpw.gguf
```
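Once the model is running, you can also query it through Ollama's local HTTP API (served on port 11434 by default). A minimal sketch:

```bash
# Send a single non-streaming generation request to the local Ollama server.
curl http://localhost:11434/api/generate -d '{
  "model": "hf.co/byteshape/Llama-3.1-8B-Instruct-GGUF:Llama-3.1-8B-Instruct-IQ4_XS-3.57bpw.gguf",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```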
Base model: meta-llama/Llama-3.1-8B
