---
language:
- tr
- en
- de
- es
- fr
- ru
- zh
- ja
- ko
license: mit
tags:
- turkish
- türkiye
- reasoning
- ai
- lamapi
- gemma3
- next
- next-x1
- text-generation
- open-source
- 14b
- large-language-model
- llm
- transformer
- artificial-intelligence
- machine-learning
- nlp
- multilingual
- instruction-tuned
- chat
- generative-ai
- optimized
- trl
- sft
- cognitive
- analytical
- enterprise
pipeline_tag: text-generation
datasets:
- mlabonne/FineTome-100k
- CognitiveKernel/CognitiveKernel-Pro-SFT
- OpenSPG/KAG-Thinker-training-dataset
- Gryphe/ChatGPT-4o-Writing-Prompts
- QuixiAI/dolphin-r1
- uclanlp/Brief-Pro
library_name: transformers
---
<img src='assets/banner.png'>
# 🧠 Next 14B (l310)
### *Türkiye’s First Reasoning-Capable AI Model — Logical, Analytical, and Enterprise-Ready*
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
[![Language: Multilingual](https://img.shields.io/badge/Language-Multilingual-red.svg)]()
[![HuggingFace](https://img.shields.io/badge/🤗-Lamapi/Next--14B-orange.svg)](https://huggingface.co/Lamapi/next-14b)
---
## 📖 Overview
**Next 14B** is a **14-billion-parameter large language model (LLM)** built on the **Qwen 3 architecture** and trained for **superior reasoning and analytical capabilities**.
It is **Türkiye’s first reasoning-capable AI model**, designed to think, infer, and make decisions — **not just respond**.
Unlike vision-based models, **Next 14B focuses on pure cognitive performance**, mastering complex problem solving, abstract logic, and human-level understanding in both **Turkish and English**.
---
## ⚡ Highlights
- 🇹🇷 **Türkiye’s first reasoning-capable AI model**
- 🧠 **Advanced logical, analytical, and inferential reasoning**
- 🌍 **High multilingual understanding (Turkish, English, and beyond)**
- 🏢 **Enterprise-grade stability and consistency**
- 💬 **Instruction-tuned for dialogue, problem solving, and analysis**
---
## 📊 Benchmark Performance
| Model | MMLU (5-shot) % | MMLU-Pro % | GSM8K % | MATH % |
| ----- | --------------- | ---------- | ------- | ------ |
| **Next 14B (Thinking)** | **94.6** | **93.2** | **98.8** | 92.7 |
| Next 12B | 92.7 | 84.4 | 95.3 | 87.2 |
| Next 8B (Thinking) | 91.0 | 88.5 | 96.2 | 88.0 |
| GPT-5 | 92.5 | 87.0 | 98.4 | **96.0** |
| Claude Opus 4.1 (Thinking) | ~92.0 | 87.8 | 84.7 | 95.4 |
---
## 🚀 Installation & Usage
```python
# Requires: pip install transformers accelerate torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "Lamapi/next-14b"

# Load the tokenizer and the model in FP16, spreading layers across available devices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Next-X1, a reasoning-capable AI assistant created by Lamapi. You think deeply, reason logically, and always answer concisely. Proudly made in Turkey."},
    {"role": "user", "content": "Explain why the sky appears blue using logical reasoning."},
]

# Build the chat prompt, tokenize it, and generate a response.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
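If you prefer to see tokens as they are produced rather than waiting for the full completion, `transformers` ships a `TextStreamer` that prints decoded text to stdout during generation. A minimal sketch reusing the `tokenizer`, `model`, and `inputs` from the snippet above:

```python
from transformers import TextStreamer

# Print decoded tokens to stdout as they are generated; skip_prompt avoids
# echoing the input prompt back to the console.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
outputs = model.generate(**inputs, max_new_tokens=150, streamer=streamer)
```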
---
## 🧩 Key Features
| Feature | Description |
| --------------------------------------------- | ------------------------------------------------------------------------------ |
| 🧠 **Advanced Reasoning** | Excels in abstract logic, critical thinking, and long-form analysis. |
| 🇹🇷 **Cultural & Multilingual Intelligence** | Deep Turkish understanding, alongside fluent English and 30+ languages. |
| ⚙️ **Optimized for Efficiency** | Available in quantized formats (Q8_0, Q4_K_M, FP16); see the loading sketch after this table. |
| 🧮 **Mathematical & Analytical Skill** | Performs exceptionally in structured problem solving and scientific reasoning. |
| 🧩 **Non-Vision Architecture** | Focused purely on cognitive and linguistic understanding. |
| 🏢 **Enterprise Reliability** | Consistent, interpretable outputs for professional use cases. |
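
If GGUF builds of the quantized formats listed above (Q8_0, Q4_K_M) are published for this model, they can typically be run on CPU or modest GPUs with `llama-cpp-python`. A minimal sketch, assuming the GGUF files are hosted on the Hugging Face Hub; the repo id and filename glob below are assumptions, so check the actual file listing before use:

```python
from llama_cpp import Llama

# Download and load a quantized GGUF build from the Hugging Face Hub.
# The repo id and filename glob are assumptions; verify them against the
# published file listing.
llm = Llama.from_pretrained(
    repo_id="Lamapi/next-14b",   # assumed location of the GGUF files
    filename="*Q4_K_M.gguf",     # glob matching the 4-bit quantization
    n_ctx=4096,                  # context window to allocate
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize Next 14B in one sentence."}]
)
print(response["choices"][0]["message"]["content"])
```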
---
## 📐 Model Specifications
| Specification | Details |
| ----------------- | ------------------------------------------------------------------ |
| **Base Model** | Qwen 3 |
| **Parameters** | 14 Billion |
| **Architecture** | Transformer (Causal LLM) |
| **Modalities** | Text-only |
| **Fine-Tuning** | Instruction-tuned and reinforced with cognitive reasoning datasets |
| **Optimizations** | Quantization-ready, FP16 support |
| **Primary Focus** | Reasoning, logic, decision-making, and language understanding |
---
## 🎯 Ideal Use Cases
* **Analytical Chatbots** for business and enterprise logic
* **Research Assistance** — scientific, legal, or data-heavy reasoning
* **Education & Tutoring** — explain concepts step-by-step
* **Creative Writing** — coherent story logic and worldbuilding
* **Code & Algorithm Design** — reasoning-based code generation
* **Decision Support Systems** — scenario evaluation and inference
---
## 💡 Performance Highlights
* **Superior Reasoning:** Outperforms previous-generation 12B models in logic-based benchmarks.
* **Robust Mathematical Understanding:** Handles symbolic reasoning and complex equations.
* **Consistent Long-Context Memory:** Capable of tracking context across multi-turn conversations (see the sketch after this list).
* **Professional Reliability:** Built for critical enterprise and research applications.
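
As a rough illustration of the multi-turn point above, a minimal sketch that keeps the conversation history and feeds it back through the chat template on every turn. It reuses `tokenizer` and `model` from the usage snippet; the `ask` helper is hypothetical, not part of the model's API:

```python
# Keep the full message history so later turns can refer back to earlier ones.
history = [
    {"role": "system", "content": "You are Next-X1, a reasoning-capable AI assistant created by Lamapi."},
]

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    prompt = tokenizer.apply_chat_template(history, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, not the echoed prompt.
    answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("A train leaves at 09:40 and arrives at 11:05. How long is the trip?"))
print(ask("And if it had left 25 minutes later?"))
```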
---
## 📄 License
Licensed under the **MIT License** — free for commercial and non-commercial use. Attribution is appreciated.
---
## 📞 Contact & Support
* 📧 **Email:** [[email protected]](mailto:[email protected])
* 🤗 **HuggingFace:** [Lamapi](https://huggingface.co/Lamapi)
---
> **Next 14B** — Türkiye’s first *reasoning-capable* large language model, combining **logical depth**, **analytical intelligence**, and **enterprise reliability**.
[![Follow on HuggingFace](https://img.shields.io/badge/Follow-HuggingFace-yellow?logo=huggingface)](https://huggingface.co/Lamapi)