Qwen3-VL-8B-Instruct-abliterated-v1-GGUF

Qwen3-VL-8B-Instruct-abliterated-v1 by prithivMLmods is an 8B-parameter vision-language model derived from Alibaba's Qwen3-VL-8B-Instruct. The v1.0 abliteration removes safety refusals and content filters, enabling uncensored, highly detailed captioning, reasoning, and instruction following across complex, sensitive, artistic, technical, or abstract visual content. The model retains the base model's strengths: multimodal fusion with Interleaved-MRoPE, OCR in 32 languages, a 262K-token context length, video understanding, and robust spatial reasoning.

It produces descriptive, reasoning-focused outputs with controllable depth, from concise summaries to intricate multi-level analyses, and supports diverse resolutions, aspect ratios, and layouts. Output is primarily English, with multilingual prompt adaptability. These properties make it suited to research, red-teaming, creative generation, and agentic tasks without guardrails. The model runs on high-end GPUs (16-24 GB VRAM in BF16/FP8) and is compatible with Transformers (Qwen3VLForConditionalGeneration) and vLLM for efficient local inference in unrestricted visual applications.
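The Transformers route mentioned above might look like the following sketch. It is illustrative only: the image URL and generation settings are placeholders, and it assumes a transformers release that ships Qwen3VLForConditionalGeneration (as the card states the checkpoint supports).

```python
# Hedged sketch of BF16 inference via Transformers, per the card's note that the
# checkpoint works with Qwen3VLForConditionalGeneration. Heavy imports are kept
# inside run() so the message format can be inspected without transformers installed.

MODEL_ID = "prithivMLmods/Qwen3-VL-8B-Instruct-abliterated-v1"

# One user turn pairing an image with a text instruction (chat-template format;
# the URL is a placeholder, not a real asset).
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/photo.jpg"},
            {"type": "text", "text": "Describe this image in detail."},
        ],
    }
]

def run(max_new_tokens: int = 256) -> str:
    from transformers import AutoProcessor, Qwen3VLForConditionalGeneration

    processor = AutoProcessor.from_pretrained(MODEL_ID)
    model = Qwen3VLForConditionalGeneration.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Render the multimodal chat into model inputs and generate.
    inputs = processor.apply_chat_template(
        messages,
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens.
    return processor.batch_decode(
        out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )[0]
```

Call `run()` after the checkpoint has been downloaded; on a 16-24 GB GPU the model should fit in BF16 as noted above.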

Qwen3-VL-8B-Instruct-abliterated-v1 [GGUF]

| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| Qwen3-VL-8B-Instruct-abliterated-v1.IQ4_XS.gguf | IQ4_XS | 4.59 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v1.Q2_K.gguf | Q2_K | 3.28 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v1.Q3_K_L.gguf | Q3_K_L | 4.43 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v1.Q3_K_M.gguf | Q3_K_M | 4.12 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v1.Q3_K_S.gguf | Q3_K_S | 3.77 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v1.Q4_K_M.gguf | Q4_K_M | 5.03 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v1.Q4_K_S.gguf | Q4_K_S | 4.80 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v1.Q5_K_M.gguf | Q5_K_M | 5.85 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v1.Q5_K_S.gguf | Q5_K_S | 5.72 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v1.Q6_K.gguf | Q6_K | 6.73 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v1.Q8_0.gguf | Q8_0 | 8.71 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v1.f16.gguf | F16 | 16.4 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v1.mmproj-Q8_0.gguf | mmproj-Q8_0 | 752 MB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v1.mmproj-f16.gguf | mmproj-f16 | 1.16 GB | Download |
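As a rough cross-check on the table, effective bits per weight can be estimated from file size. The calculation below is illustrative only: it assumes approximately 8.0e9 parameters and decimal gigabytes, and GGUF files mix tensor precisions and carry metadata, so the figures are ballpark, not exact quant bit widths.

```python
# Estimate bits per weight for a few quants from the table above.
# Assumption: ~8.0e9 parameters; sizes taken as decimal GB (1e9 bytes).
PARAMS = 8.0e9
GB = 1e9

sizes_gb = {"Q2_K": 3.28, "Q4_K_M": 5.03, "Q8_0": 8.71, "F16": 16.4}

def bits_per_weight(size_gb: float, params: float = PARAMS) -> float:
    # bytes -> bits, divided by the parameter count
    return size_gb * GB * 8 / params

for name, gb in sizes_gb.items():
    print(f"{name}: ~{bits_per_weight(gb):.2f} bits/weight")
```

Note that low-bit quants come out above their nominal width (e.g. Q2_K at ~3.3 bits/weight) because K-quants keep some sensitive tensors at higher precision.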

Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similarly sized non-IQ quants.)
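Running a vision GGUF with llama.cpp requires both a language-model file and one of the mmproj (vision projector) files from the table above. A minimal sketch, assuming a llama.cpp build that includes the `llama-mtmd-cli` multimodal tool; the image path is a placeholder:

```shell
# Hypothetical llama.cpp invocation; file names match the quant table above.
MODEL="Qwen3-VL-8B-Instruct-abliterated-v1.Q4_K_M.gguf"
MMPROJ="Qwen3-VL-8B-Instruct-abliterated-v1.mmproj-f16.gguf"

# `echo` keeps this sketch runnable without llama.cpp installed;
# remove it (and download the files) to actually run the model.
echo llama-mtmd-cli \
  -m "$MODEL" \
  --mmproj "$MMPROJ" \
  --image ./photo.jpg \
  -p "Describe this image in detail."
```

Pairing a quantized language model with the f16 mmproj is a common choice, since the vision projector is small and benefits from full precision.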

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):


Model size: 8B params
Architecture: qwen3vl



