Update README.md
README.md
---
language: en
license: apache-2.0  # update according to your license
tags:
- dataset
- text
- language-model
- ai
- machine-learning
- transformers
- pytorch
---

# Orion-Spark-30M Dataset

The **Orion-Spark-30M** dataset is a curated corpus containing 10,846 lines of text gathered from reputable internet sources, including Wikipedia pages, technology news websites, and educational platforms. The dataset focuses on foundational and advanced topics related to artificial intelligence, machine learning, large language models, and generative pretrained transformers. It also covers major technology companies, influential figures, and key concepts in the AI and tech industries.

## Data Collection Process

The dataset was compiled by scraping and extracting text from multiple publicly available sources such as Wikipedia articles on AI-related subjects, technology sections of news outlets like BBC and The New York Times, and educational websites like fast.ai. The collected text was cleaned, filtered, and segmented into sentences or meaningful paragraph segments to ensure high quality and consistency.
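The exact preprocessing scripts are not published here; as a rough illustration, a cleaning and segmentation step of this kind might look like the following minimal sketch (the function names and the `MIN_CHARS` length threshold are assumptions, not the actual pipeline):

```python
import re

MIN_CHARS = 40  # assumed minimum segment length used to drop low-quality fragments

def clean_text(raw: str) -> str:
    """Strip leftover markup and normalize whitespace."""
    text = re.sub(r"<[^>]+>", " ", raw)       # remove stray HTML tags
    return re.sub(r"\s+", " ", text).strip()  # collapse runs of whitespace

def segment(text: str) -> list[str]:
    """Split cleaned text into sentence-like segments, one per corpus line."""
    parts = re.split(r"(?<=[.!?])\s+", text)
    return [p for p in parts if len(p) >= MIN_CHARS]
```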

## Dataset Structure

- The dataset consists of plain text lines, each representing a meaningful sentence or paragraph segment.
- Total number of lines: 10,846
- Text is tokenized using the GPT-2 tokenizer, with added end-of-sequence tokens for training purposes (see the sketch after this list).
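A minimal sketch of that tokenization step, using the GPT-2 tokenizer from Hugging Face `transformers` (the actual preprocessing script is not included in this card, and the sample sentence is made up, so treat this as illustrative):

```python
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# Append GPT-2's end-of-text token so consecutive corpus lines are
# delimited during training, as described above.
line = "Transformers are a neural network architecture based on attention."
ids = tokenizer.encode(line + tokenizer.eos_token)

assert ids[-1] == tokenizer.eos_token_id  # 50256 for GPT-2
```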

## Usage

This dataset is primarily designed for training and fine-tuning transformer-based generative language models, specifically the Orion-Spark-30M model implemented in PyTorch. However, it can also be leveraged for other natural language processing tasks within the AI and technology domains.
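Since the corpus is one segment per line, it can be loaded as plain text with the Hugging Face `datasets` library. The file name below is a placeholder assumption; substitute the actual data file shipped with this repository:

```python
from datasets import load_dataset

# "orion_spark_30m.txt" is a hypothetical file name for the corpus.
ds = load_dataset("text", data_files={"train": "orion_spark_30m.txt"})

print(len(ds["train"]))        # expected: 10,846 lines
print(ds["train"][0]["text"])  # one sentence/paragraph segment
```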

## License

This dataset is licensed under the Apache 2.0 License.
*(Replace with your chosen license if different)*

---

For additional information about the model architecture and training procedures, please refer to the [Orion-Spark-30M repository](#).