EXL3 quants of Olmo-3.1-32B-Instruct
4.00 bits per weight. At 4 bpw, a 32B-parameter model comes to roughly 16 GB of quantized weights, before KV cache and runtime overhead.
(more to come maybe?)
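A minimal sketch of fetching the quant with huggingface_hub (the local_dir path is illustrative); the downloaded files can then be loaded with exllamav3 or any EXL3-compatible frontend:

```python
# Sketch: download the 4.00 bpw EXL3 quant from the Hub.
# Assumes huggingface_hub is installed; local_dir is a hypothetical target.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="turboderp/Olmo-3.1-32B-Instruct-exl3",
    local_dir="models/Olmo-3.1-32B-Instruct-exl3",
)
print(f"Model files downloaded to {local_path}")
```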
Model tree for turboderp/Olmo-3.1-32B-Instruct-exl3

Base model: allenai/Olmo-3-1125-32B
Finetuned: allenai/Olmo-3.1-32B-Instruct-SFT
Finetuned: allenai/Olmo-3.1-32B-Instruct-DPO
Finetuned: allenai/Olmo-3.1-32B-Instruct (the model quantized here)
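For reference, a minimal generation sketch following the exllamav3 quickstart pattern; class and argument names are assumptions based on that pattern and may differ between library versions, and the model path is illustrative:

```python
# Sketch, assuming the exllamav3 quickstart API; verify against the
# installed version, as names and signatures may have changed.
from exllamav3 import Config, Model, Cache, Tokenizer, Generator

config = Config.from_directory("models/Olmo-3.1-32B-Instruct-exl3")
model = Model.from_config(config)
cache = Cache(model, max_num_tokens=8192)  # KV cache sized for 8k context
model.load()

tokenizer = Tokenizer.from_config(config)
generator = Generator(model=model, cache=cache, tokenizer=tokenizer)

output = generator.generate(prompt="Briefly explain quantization.",
                            max_new_tokens=200)
print(output)
```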