This release adds a new human supervised learning ("Human SL") model trained on a large number of human games to predict human moves across players of different ranks and time periods! Not much experimentation with it has been done yet and there is probably low-hanging fruit in ways to use and visualize it; this is open for interested devs and enthusiasts to try.
This version of KataGo adds support for a new and improved neural net architecture!
This new architecture, together with other improvements, speeds up training:
The new neural nets use a new nested residual bottleneck structure, along with other major improvements in training. They train faster than KataGo's old nets and learn more effectively.
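The release notes don't spell out the exact block layout, but the rough shape of a "nested bottleneck" residual block is an outer residual block whose body squeezes the channels with a 1x1 convolution, runs a couple of ordinary residual blocks at the reduced width, and then expands back. Here is a loose PyTorch sketch of that idea; the bottleneck ratio and the number of inner blocks are my own assumptions, not KataGo's actual code:

```python
import torch.nn as nn

class InnerResBlock(nn.Module):
    """Ordinary pre-activation residual block at the reduced width."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1, bias=False),
        )

    def forward(self, x):
        return x + self.body(x)

class NestedBottleneckBlock(nn.Module):
    """Outer block: 1x1 squeeze -> inner residual blocks -> 1x1 expand."""
    def __init__(self, ch, num_inner=2):
        super().__init__()
        mid = ch // 2  # bottleneck width; the exact ratio is an assumption
        self.squeeze = nn.Conv2d(ch, mid, 1, bias=False)
        self.inner = nn.Sequential(*[InnerResBlock(mid) for _ in range(num_inner)])
        self.expand = nn.Conv2d(mid, ch, 1, bias=False)

    def forward(self, x):
        return x + self.expand(self.inner(self.squeeze(x)))

# Reading the naming convention, a net like b18c384nbt would presumably
# stack 18 of these at ch=384.
```

The idea, as with bottleneck blocks in ResNets generally, is to get more depth and nonlinearity per unit of compute, since the expensive 3x3 convolutions all run at the reduced width.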
Also, he released the model used at the UEC Cup tournament. Interestingly, it is a b18c384 net, while KataGo Distributed Training currently runs mainly b40c256 and b60c320, so it looks like a one-off net trained just for the tournament.
Attached to this release is a one-off net b18c384nbt-uec.bin.gz that was trained for a tournament in 2022. It should be of similar strength to the 60-block nets on http://katagotraining.org/, but on many machines it will run much faster: on some machines between 40-block and 60-block speed, and on some even as fast as or faster than 40-block.
KataGo is currently the strongest open-source Go engine, but its drawback so far has been that it only ran through OpenCL or CUDA, so you basically needed a decently powerful graphics card.
If you want to run it on a CPU (without a GPU), one way is to emulate OpenCL; on Linux this can be done with pocl. The performance is mediocre, but it at least works. On Windows, though, it doesn't seem easy to set up... which is why quite a few people have kept using Leela Zero.
Recently, KataGo 1.5 implemented a pure-CPU backend using the Eigen library, but once people tested it they found it was hilariously slow XDDD
Since the author doesn't ship a CPU binary, I grabbed the code and compiled it on Linux myself. It turns out it only uses one CPU core (no multithreading): compared with the OpenCL version on a 1080Ti, which does about 150 visits/sec (40b), the CPU version does 0.0x visits/sec XDDD
The author himself mentioned in the GitHub discussion that this version was only checked for correctness, with no thought given to performance at all...
But then other people stepped in to improve it. In "Optimization of Eigen backend #288" you can see kaorahi submitting quite a few changes, going from the initial eigen_naive_loop (a 13x speedup over 1.5) all the way to the borrow_tensorflow version (1400x), which gets a 15b net to 10 visits/sec on a CPU:
"borrow_tensorflow" version: x1400 speed up from 1.5.0 (70% of libtensorflow backend). Now 15b net is usable for me. I get 19 visits/s in benchmark and 10 visits/s in GUI with 15b net.
That is already a lot faster, so Leela Zero will probably fade out gradually; CPU-only was about the last piece of territory Leela Zero could still contest...
The first serious run of KataGo ran for 7 days in February 2019 on up to 35xV100 GPUs. This is the run featured in the paper. It reached close to LZ130 strength before it was halted, which is just barely superhuman.
Following some further improvements and much-improved hyperparameters, KataGo performed a second serious run in May-June on up to 28xV100 GPUs, surpassing the February run after just three and a half days. The run was halted after 19 days, with the final 20-block networks ending slightly stronger than LZ-ELFv2! (This is Facebook's very strong 20-block ELF network, running on Leela Zero's search architecture.) Compared to the yet larger Leela Zero 40-block networks, KataGo's network falls somewhere around LZ200 at visit parity, despite itself being only 20 blocks.
From the paper you can see that, like Leela Zero, the network was scaled up step by step (presumably also using Net2Net) rather than starting at 20x256 from the beginning:
In KataGo’s main 19-day run, (b, c) began at (6, 96) and switched to (10, 128), (15, 192), and (20, 256), at roughly 0.75 days, 1.75 days, and 7.5 days, respectively. The final size approximately matches that of AlphaZero and ELF.
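For reference, Net2Net (Chen et al., 2015) grows a trained small net into a larger one while preserving the function it computes, so training can continue from (b, c) = (6, 96) instead of restarting at (20, 256). In the widening direction, new units are copies of randomly chosen existing units, and each outgoing weight is split among the copies. A minimal sketch for one fully connected layer; the paper doesn't detail KataGo's exact procedure, and all names here are my own:

```python
import numpy as np

def net2wider(W1, b1, W2, new_width, rng=None):
    """Widen a hidden layer from h to new_width units, preserving the function.

    W1: (in, h) weights into the hidden layer, b1: (h,) its biases,
    W2: (h, out) weights out of it.
    """
    rng = rng or np.random.default_rng(0)
    h = W1.shape[1]
    # New units replicate randomly chosen existing units.
    mapping = np.concatenate([np.arange(h), rng.integers(0, h, new_width - h)])
    counts = np.bincount(mapping, minlength=h)  # copies of each original unit
    W1_new = W1[:, mapping]
    b1_new = b1[mapping]
    # Split each outgoing weight across the copies so the output is unchanged.
    W2_new = W2[mapping, :] / counts[mapping][:, None]
    return W1_new, b1_new, W2_new
```

Deepening (Net2DeeperNet) similarly inserts layers initialized to the identity, and for convolutions the same copy-and-rescale trick applies along the channel dimension.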
The big improvements in training speed fall into two categories: general ones (in the "Major General Improvements" section) and Go-specific ones (in the "Major Domain-Specific Improvements" section).
You can also see a lot of news about KataGo in Leela Zero's issue tracker, and it looks like the author is joining those discussions too, so something should come of it...