abao committed (verified) · Commit 0d18042 · 1 Parent(s): e83d160

Update README.md

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -7,7 +7,7 @@ license: cc-by-nc-4.0
 This is a scaled-up version of the checkpoint originally presented in our preprint. We will include results with this checkpoint in the appendix of our next preprint update. This model has 12 layers with 12 attention heads each.
 
 Trained with larger dataset of multiple initial conditions per system, with mixed periods as well.
-Specifically, using 8 out of the 16 initial conditions (ICs) per system that we provide in our [skew-mixedp-ic16 dataset](https://huggingface.co/datasets/GilpinLab/skew-mixedp-ic16)
+Specifically, using 8 out of the 16 initial conditions (ICs) per system that we provide in our [skew-mixedp-ic16 dataset](https://huggingface.co/datasets/GilpinLab/skew-mixedp-ic16).
 We trained this model for 800k iterations, with per-device batch size 384, across 6 AMD MI100X GPUs.
 *Panda*: Patched Attention for Nonlinear Dynamics.
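
Below is a minimal sketch of fetching this checkpoint and the linked training dataset from the Hub using the standard `huggingface_hub` and `datasets` APIs. The model repo id `GilpinLab/panda` is an assumption for illustration only; this page does not show the full repo path.

```python
# Sketch: pull the Safetensors checkpoint and the training dataset locally.
from huggingface_hub import snapshot_download
from datasets import load_dataset

# Hypothetical repo id -- substitute the actual model repo for this commit.
checkpoint_dir = snapshot_download(repo_id="GilpinLab/panda")
print(f"Checkpoint files in: {checkpoint_dir}")

# The skew-mixedp-ic16 dataset linked in the README; this call works if the
# dataset is stored in a `datasets`-compatible format (otherwise download
# the files directly from the dataset repo).
ds = load_dataset("GilpinLab/skew-mixedp-ic16")
print(ds)
```

As a side note on the training setup described in the README: a per-device batch size of 384 across 6 GPUs implies an effective global batch of 384 × 6 = 2304 sequences per optimizer step, assuming no gradient accumulation (not stated here).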