---
language:
- tr
- en
- de
- es
- fr
- ru
- zh
- ja
- ko
license: mit
tags:
- turkish
- türkiye
- reasoning
- ai
- lamapi
- gemma3
- next
- next-x1
- text-generation
- open-source
- 14b
- large-language-model
- llm
- transformer
- artificial-intelligence
- machine-learning
- nlp
- multilingual
- instruction-tuned
- chat
- generative-ai
- optimized
- trl
- sft
- cognitive
- analytical
- enterprise
pipeline_tag: text-generation
datasets:
- mlabonne/FineTome-100k
- CognitiveKernel/CognitiveKernel-Pro-SFT
- OpenSPG/KAG-Thinker-training-dataset
- Gryphe/ChatGPT-4o-Writing-Prompts
- QuixiAI/dolphin-r1
- uclanlp/Brief-Pro
library_name: transformers
---

<img src='assets/banner.png'>

# 🧠 Next 14B (l310)

### *Türkiye’s First Reasoning-Capable AI Model — Logical, Analytical, and Enterprise-Ready*

[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
[![Language: Multilingual](https://img.shields.io/badge/Language-Multilingual-red.svg)]()
[![HuggingFace](https://img.shields.io/badge/🤗-Lamapi/Next--14B-orange.svg)](https://huggingface.co/Lamapi/next-14b)

---

## 📖 Overview

**Next 14B** is a **14-billion-parameter large language model (LLM)** built on the **Qwen 3 architecture** and trained for **strong reasoning and analytical capability**.  
It is **Türkiye’s first reasoning-capable AI model**, designed to think, infer, and make decisions — **not just respond**.

Unlike multimodal models, **Next 14B is text-only and focuses on cognitive performance**: complex problem solving, abstract logic, and nuanced comprehension in both **Turkish and English**.

---

## ⚡ Highlights

- 🇹🇷 **Türkiye’s first reasoning-capable AI model**
- 🧠 **Advanced logical, analytical, and inferential reasoning**
- 🌍 **High multilingual understanding (Turkish, English, and beyond)**
- 🏢 **Enterprise-grade stability and consistency**
- 💬 **Instruction-tuned for dialogue, problem solving, and analysis**

---

## 📊 Benchmark Performance

<table>
  <thead>
    <tr>
      <th>Model</th>
      <th>MMLU (5-shot) %</th>
      <th>MMLU-Pro %</th>
      <th>GSM8K %</th>
      <th>MATH %</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>Next 14B (Thinking)</strong></td>
      <td><strong>94.6</strong></td>
      <td><strong>93.2</strong></td>
      <td><strong>98.8</strong></td>
      <td>92.7</td>
    </tr>
    <tr>
      <td>Next 12B</td>
      <td>92.7</td>
      <td>84.4</td>
      <td>95.3</td>
      <td>87.2</td>
    </tr>
    <tr>
      <td>Next 8B (Thinking)</td>
      <td>91.0</td>
      <td>88.5</td>
      <td>96.2</td>
      <td>88.0</td>
    </tr>
    <tr>
      <td>GPT-5</td>
      <td>92.5</td>
      <td>87.0</td>
      <td>98.4</td>
      <td><strong>96.0</strong></td>
    </tr>
    <tr>
      <td>Claude Opus 4.1 (Thinking)</td>
      <td>~92.0</td>
      <td>87.8</td>
      <td>84.7</td>
      <td>95.4</td>
    </tr>
  </tbody>
</table>

---

## 🚀 Installation & Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "Lamapi/next-14b"

# Load the tokenizer and the model in half precision; device_map="auto"
# lets Accelerate spread the weights across available GPU(s) and CPU.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are Next-X1, a reasoning-capable AI assistant created by Lamapi. You think deeply, reason logically, and always answer concisely. Proudly made in Turkey."},
    {"role": "user", "content": "Explain why the sky appears blue using logical reasoning."}
]

# Render the conversation with the model's chat template, then generate.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
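
For interactive use, generation can also be streamed token by token. Below is a minimal sketch using the `TextStreamer` utility from `transformers`, reusing `model`, `tokenizer`, and `inputs` from the block above; the sampling values are illustrative, not tuned recommendations for this model:

```python
from transformers import TextStreamer

# Print tokens as they are generated instead of waiting for the full
# completion; skip_prompt suppresses the echoed input prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    **inputs,
    streamer=streamer,
    max_new_tokens=512,
    do_sample=True,   # sampling instead of greedy decoding
    temperature=0.7,  # illustrative value, not a tuned recommendation
    top_p=0.9,
)
```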

---

## 🧩 Key Features

| Feature                                       | Description                                                                    |
| --------------------------------------------- | ------------------------------------------------------------------------------ |
| 🧠 **Advanced Reasoning**                     | Excels in abstract logic, critical thinking, and long-form analysis.           |
| 🇹🇷 **Cultural & Multilingual Intelligence** | Deep Turkish understanding, alongside fluent English and 30+ languages.        |
| ⚙️ **Optimized for Efficiency**               | Available in quantized formats (Q8_0, Q4_K_M, FP16); see the loading sketch below the table. |
| 🧮 **Mathematical & Analytical Skill**        | Performs exceptionally in structured problem solving and scientific reasoning. |
| 🧩 **Non-Vision Architecture**                | Focused purely on cognitive and linguistic understanding.                      |
| 🏢 **Enterprise Reliability**                 | Consistent, interpretable outputs for professional use cases.                  |
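
If quantized GGUF builds (such as the Q8_0 and Q4_K_M formats mentioned above) are published for this repository, they can run on CPU-only machines via `llama-cpp-python`. A minimal sketch; the `filename` glob below is an assumption about how the quant files are named, so check the repository's file listing first:

```python
from llama_cpp import Llama

# Download a quantized build from the Hugging Face Hub and load it for
# local inference. The filename pattern is hypothetical; verify the
# actual GGUF file names in the Lamapi/next-14b repository.
llm = Llama.from_pretrained(
    repo_id="Lamapi/next-14b",
    filename="*Q4_K_M.gguf",  # glob; assumes a Q4_K_M quant exists
    n_ctx=4096,               # context window to allocate
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give one concise reason the sky looks blue."}]
)
print(response["choices"][0]["message"]["content"])
```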

---

## 📐 Model Specifications

| Specification     | Details                                                            |
| ----------------- | ------------------------------------------------------------------ |
| **Base Model**    | Qwen 3                                                             |
| **Parameters**    | 14 Billion                                                         |
| **Architecture**  | Transformer (Causal LLM)                                           |
| **Modalities**    | Text-only                                                          |
| **Fine-Tuning**   | Instruction-tuned and reinforced with cognitive reasoning datasets |
| **Optimizations** | Quantization-ready, FP16 support                                   |
| **Primary Focus** | Reasoning, logic, decision-making, and language understanding      |

---

## 🎯 Ideal Use Cases

* **Analytical Chatbots** for business and enterprise logic
* **Research Assistance** — scientific, legal, or data-heavy reasoning
* **Education & Tutoring** — explain concepts step-by-step
* **Creative Writing** — coherent story logic and worldbuilding
* **Code & Algorithm Design** — reasoning-based code generation
* **Decision Support Systems** — scenario evaluation and inference

---

## 💡 Performance Highlights

* **Superior Reasoning:** Outperforms previous-generation 12B models in logic-based benchmarks.
* **Robust Mathematical Understanding:** Handles symbolic reasoning and complex equations.
* **Consistent Long-Context Memory:** Tracks context across multi-turn conversations (see the sketch after this list).
* **Professional Reliability:** Built for critical enterprise and research applications.
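
Multi-turn use follows the standard `transformers` chat pattern: append each assistant reply back onto the message list and re-apply the chat template on every turn. A minimal sketch reusing `model` and `tokenizer` from the usage example above; the helper name `chat_turn` is ours, not part of the model's API:

```python
def chat_turn(messages, user_text, max_new_tokens=256):
    """Append a user turn, generate a reply, and add it to the history."""
    messages.append({"role": "user", "content": user_text})
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    reply = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    messages.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "system", "content": "You are a concise reasoning assistant."}]
print(chat_turn(history, "I have 3 boxes with 4 apples each. How many apples?"))
print(chat_turn(history, "Now I eat two. How many are left?"))  # relies on history
```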

---

## 📄 License

Licensed under the **MIT License** — free for commercial and non-commercial use. Attribution is appreciated.

---

## 📞 Contact & Support

* 📧 **Email:** [[email protected]](mailto:[email protected])
* 🤗 **HuggingFace:** [Lamapi](https://huggingface.co/Lamapi)

---

> **Next 14B** — Türkiye’s first *reasoning-capable* large language model, combining **logical depth**, **analytical intelligence**, and **enterprise reliability**.

[![Follow on HuggingFace](https://img.shields.io/badge/Follow-HuggingFace-yellow?logo=huggingface)](https://huggingface.co/Lamapi)
