Dataset: SciYu/HiPhO

Commit: Upload folder using huggingface_hub

Changed files:
- README.md (+3 −1)
- intro/HiPhO_IPhO2025.png (+2 −2)
- intro/HiPhO_main_results.png (+2 −2)
- intro/HiPhO_score.png (+2 −2)
README.md CHANGED

@@ -20,6 +20,7 @@ tags:
 
 <p align="center" style="font-size:28px"><b>HiPhO: High School Physics Olympiad Benchmark</b></p>
 <p align="center">
+<a href="https://phyarena.github.io/">[Leaderboard]</a>
 <a href="https://huggingface.co/datasets/SciYu/HiPhO">[Dataset]</a>
 <a href="https://github.com/SciYu/HiPhO">[GitHub]</a>
 <a href="https://huggingface.co/papers/2509.07894">[Paper]</a>
@@ -28,6 +29,7 @@ tags:
 [](https://opensource.org/license/mit)
 </div>
 
+**New (Sep. 16):** We launched "[**PhyArena**](https://phyarena.github.io/)", a physics reasoning leaderboard incorporating the HiPhO benchmark.
 
 ## Introduction
 
@@ -103,7 +105,7 @@ Evaluation is conducted using:
 <img src="intro/HiPhO_main_results.png" alt="main results medal table" width="700"/>
 </div>
 
-- **Closed-source reasoning MLLMs** lead the benchmark, earning **6–12 gold medals** (Top 5: Gemini-2.5-Pro, Gemini-2.5-Flash, GPT-5, o3, Grok-4)
+- **Closed-source reasoning MLLMs** lead the benchmark, earning **6–12 gold medals** (Top 5: Gemini-2.5-Pro, Gemini-2.5-Flash-Thinking, GPT-5, o3, Grok-4)
 - **Open-source MLLMs** mostly score at or below the **bronze** level
 - **Open-source LLMs** demonstrate **stronger reasoning** and generally outperform open-source MLLMs
 
intro/HiPhO_IPhO2025.png CHANGED (Git LFS pointer updated)

intro/HiPhO_main_results.png CHANGED (Git LFS pointer updated)

intro/HiPhO_score.png CHANGED (Git LFS pointer updated)
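The three PNGs are stored via Git LFS, so this commit only rewrites their small pointer files; the image blobs live in LFS storage. That tracking typically comes from the repo's `.gitattributes`. A sketch based on the Hugging Face Hub defaults (not copied from this repo's actual file):

```text
# .gitattributes (excerpt) — assumed Hub-default LFS rule for images
*.png filter=lfs diff=lfs merge=lfs -text
```

With this rule in place, `upload_folder` and `git push` both commit a short pointer (OID and size) instead of the binary itself.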