# MemOperator-4B-f32-GGUF

MemOperator-4B by MemTensor is a specialized causal language model designed for efficient memory operations within the MemOS system. It excels at memory extraction, integration, and updating, and it supports local-only deployment for environments without internet access. Derived from the Qwen3-4B architecture and fine-tuned with supervised learning on a mix of human-annotated and synthetic data, this 4-billion-parameter model handles both English and Chinese and processes long contexts of up to 32,768 tokens.

It offers fast, low-resource memory management, and it is reported to outperform larger models such as GPT-4o-mini on memory tasks, making it well suited to real-time, cost-effective memory operations in conversational and document settings. MemOperator-4B is designed to extract high-quality memories and organize them for improved long-term coherence in applications such as MemOS, supporting memory-centric AI workflows with strong multilingual capabilities and robust system performance.
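For fully offline use, any GGUF-compatible runtime (llama.cpp, LM Studio, Ollama, and so on) can load these files. The sketch below assumes llama-cpp-python; the chosen quant file, the system prompt, and the sample dialogue are illustrative, not prescribed by this card:

```python
# Minimal local-inference sketch with llama-cpp-python (assumed runtime).
# Download a quant from the file table below first; the Q4_K_M file here
# is just one reasonable default.
from llama_cpp import Llama

llm = Llama(
    model_path="MemOperator-4B.Q4_K_M.gguf",  # path to a downloaded quant
    n_ctx=32768,       # the model supports contexts up to 32,768 tokens
    n_gpu_layers=-1,   # offload all layers to the GPU when one is present
)

# A memory-extraction style request: distill durable facts from a dialogue.
# The prompt wording is hypothetical, not the official MemOS template.
response = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": "Extract long-term memories from the conversation "
                    "as a concise bullet list."},
        {"role": "user",
         "content": "User: I moved to Berlin last month and started a job "
                    "as a data engineer at a fintech startup."},
    ],
    max_tokens=256,
    temperature=0.2,
)
print(response["choices"][0]["message"]["content"])
```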

## Model Files

| Model File Name | Size | Quant Type |
|---|---|---|
| MemOperator-4B.BF16.gguf | 8.05 GB | BF16 |
| MemOperator-4B.F16.gguf | 8.05 GB | F16 |
| MemOperator-4B.F32.gguf | 16.1 GB | F32 |
| MemOperator-4B.Q2_K.gguf | 1.67 GB | Q2_K |
| MemOperator-4B.Q3_K_L.gguf | 2.24 GB | Q3_K_L |
| MemOperator-4B.Q3_K_M.gguf | 2.08 GB | Q3_K_M |
| MemOperator-4B.Q3_K_S.gguf | 1.89 GB | Q3_K_S |
| MemOperator-4B.Q4_K_M.gguf | 2.5 GB | Q4_K_M |
| MemOperator-4B.Q4_K_S.gguf | 2.38 GB | Q4_K_S |
| MemOperator-4B.Q5_K_M.gguf | 2.89 GB | Q5_K_M |
| MemOperator-4B.Q5_K_S.gguf | 2.82 GB | Q5_K_S |
| MemOperator-4B.Q6_K.gguf | 3.31 GB | Q6_K |
| MemOperator-4B.Q8_0.gguf | 4.28 GB | Q8_0 |
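To fetch a single quant rather than the whole repository, one option is huggingface_hub; this is a minimal sketch, and the file chosen below is only an example:

```python
# Download one quant file from the repository's file table above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="prithivMLmods/MemOperator-4B-f32-GGUF",
    filename="MemOperator-4B.Q4_K_M.gguf",  # any file from the table works
)
print(path)  # local cache path, ready to pass to a GGUF runtime
```

Smaller quants (Q2_K–Q4_K_S) trade accuracy for footprint, while Q8_0 and the 16/32-bit files preserve quality at a much higher memory cost.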

## Quants Usage

(sorted by size, not necessarily by quality; IQ-quants are often preferable to similarly sized non-IQ quants)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[graph: quant-type quality comparison by ikawrakow]
