Recently I've started seeing the name KataGo pop up in various places (both on Twitter and on YouTube). After digging around a bit, it turns out the big deal is a major breakthrough in training cost: according to the paper, it is roughly fifty times faster...
In its first run, using only 35 V100s for seven days, it reached roughly the strength of Leela Zero's network #130:
The first serious run of KataGo ran for 7 days in February 2019 on up to 35xV100 GPUs. This is the run featured in the paper. It achieved close to LZ130 strength before it was halted, or up to just barely superhuman.
In the second run, training a 20-block network on 28 V100s, it surpassed the ELFv2 network Facebook had released after only 19 days, which corresponds to roughly Leela Zero's network #200 in strength (note that by then Leela Zero had already been training a 40-block architecture for some time):
Following some further improvements and much-improved hyperparameters, KataGo performed a second serious run in May-June on a max of 28xV100 GPUs, surpassing the February run after just three and a half days. The run was halted after 19 days, with the final 20-block networks reaching a final strength slightly stronger than LZ-ELFv2! (This is Facebook's very strong 20-block ELF network, running on Leela Zero's search architecture). Comparing to the yet larger Leela Zero 40-block networks, KataGo's network falls somewhere around LZ200 at visit parity, despite only itself being 20 blocks.
From the paper, you can see that, like Leela Zero, the network size is increased step by step (presumably also via Net2Net) rather than starting at 20x256 from the outset:
In KataGo’s main 19-day run, (b, c) began at (6, 96) and switched to (10, 128), (15, 192), and (20, 256), at roughly 0.75 days, 1.75 days, and 7.5 days, respectively. The final size approximately matches that of AlphaZero and ELF.
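The quote only gives the (blocks, channels) schedule; Net2Net is just a guess about how each jump is made. Purely to illustrate the idea, here is a minimal NumPy sketch of a Net2Net-style "net2wider" step that grows a conv layer's channel count while keeping the network's output unchanged. The net2wider helper and all shapes are made up for illustration and are not KataGo's actual code:

```python
import numpy as np

def net2wider(w1, b1, w2, new_width, rng=None):
    # Widen a conv layer to `new_width` output channels, function-preserving
    # in the style of Net2Net (Chen et al., 2016).
    #   w1: weights of the layer being widened, shape (old_width, in_c, k, k)
    #   b1: bias of that layer, shape (old_width,)
    #   w2: weights of the following layer, shape (out_c, old_width, k, k)
    rng = rng or np.random.default_rng(0)
    old_width = w1.shape[0]

    # Each new channel is a copy of a randomly chosen existing channel.
    mapping = np.concatenate([
        np.arange(old_width),
        rng.integers(0, old_width, new_width - old_width),
    ])
    # How many copies each original channel now has (used to rescale below).
    counts = np.bincount(mapping, minlength=old_width)

    new_w1 = w1[mapping]          # duplicated output channels
    new_b1 = b1[mapping]
    # Split the next layer's incoming weights across the duplicates so the
    # summed contribution stays exactly the same as before widening.
    new_w2 = w2[:, mapping] / counts[mapping][None, :, None, None]
    return new_w1, new_b1, new_w2

# Example: grow a 96-channel layer to 128 channels, roughly like the
# (6, 96) -> (10, 128) step mentioned above (random weights, toy shapes).
w1 = np.random.randn(96, 64, 3, 3)
b1 = np.zeros(96)
w2 = np.random.randn(96, 96, 3, 3)
w1_new, b1_new, w2_new = net2wider(w1, b1, w2, 128)
print(w1_new.shape, w2_new.shape)   # (128, 64, 3, 3) (96, 128, 3, 3)
```

Growing the number of blocks is the other half of the trick (net2deeper in the Net2Net paper), where the new layers are initialized close to the identity so strength isn't lost at the switch.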
The big improvement in training speed comes from two kinds of changes: general ones (covered in the "Major General Improvements" chapter) and Go-specific ones (covered in the "Major Domain-Specific Improvements" chapter).
There is also a lot of KataGo-related discussion in Leela Zero's issue tracker, and the author appears to be participating there as well, so something should come out of that...