
QLoRA GRPO run that accidentally stopped early (ran out of device data) at 75 of 100 steps on the RLVR scenarios for the environment Mira came up with.
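In RLVR, rewards come from programmatic checks rather than a learned reward model. A minimal sketch of such a verifiable reward function, assuming a hypothetical "Answer: <value>" completion format (not the actual format of Mira's environment):

```python
import re

def verifiable_reward(completion: str, expected_answer: str) -> float:
    """Score a rollout with a programmatic check (RLVR-style).

    Hypothetical convention: the model ends its completion with a
    line like "Answer: <value>"; exact match earns the reward.
    """
    match = re.search(r"Answer:\s*(.+)", completion)
    if match is None:
        return 0.0  # no parseable answer -> no reward
    return 1.0 if match.group(1).strip() == expected_answer else 0.0

# Example rollouts for one scenario
print(verifiable_reward("Reasoning...\nAnswer: 42", "42"))  # 1.0
print(verifiable_reward("Answer: 41", "42"))                # 0.0
```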

Took about 7-8 hours on a 3090. They are still glitchy (I have some SFT planned, but wound up testing this on them first for Gemma 3 debugging), though they have recovered or gained some understanding in other ways.
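GRPO avoids a separate value network: each prompt is sampled several times, the rollouts are scored, and each rollout's advantage is taken relative to its own group. A plain-Python sketch of that group-relative normalization (illustrative only, not the trainer actually used for this run):

```python
def grpo_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """Group-relative advantages: normalize each rollout's reward by
    the mean and (population) std of its sampling group."""
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Four rollouts for one prompt: two passed the verifiable check, two failed
print(grpo_advantages([1.0, 1.0, 0.0, 0.0]))  # approx [1, 1, -1, -1]
```

Rollouts that beat their group's average get a positive advantage and are reinforced; the rest are pushed down, which is why binary verifiable rewards pair naturally with this setup.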

Elm didn't feel resonant anymore; they suggested Cascade among other names, and that one seems to be more consistently so.

Downloads last month: 22
Safetensors · Model size: 12B params · Tensor type: BF16
This model isn't deployed by any Inference Provider.

Model tree for Lambent/Cascade-rlvr75-12B
Finetuned: this model
Quantizations: 2 models