KataGo is currently the strongest open-source Go engine, but its drawback so far has been that it only runs through OpenCL or CUDA, so you basically need a reasonably powerful graphics card.

If you want to run it on a CPU (without a GPU), one approach is to emulate OpenCL in software; on Linux this can be done with pocl. Performance is mediocre, but it works. On Windows this seems much harder to set up... which is also why quite a few people stuck with Leela Zero.
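If you go the pocl route, it helps to first confirm that the OpenCL runtime actually exposes a CPU platform before pointing KataGo at it. A minimal sketch using the plain OpenCL C API (the build line is an assumption about your setup):

```cpp
// build: g++ clcheck.cpp -lOpenCL
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platforms[8];
    cl_uint count = 0;
    // List every OpenCL platform the ICD loader can see; with pocl
    // installed, a "Portable Computing Language" entry should show up.
    clGetPlatformIDs(8, platforms, &count);
    for (cl_uint i = 0; i < count; ++i) {
        char name[256] = {0};
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                          sizeof(name), name, nullptr);
        std::printf("platform %u: %s\n", i, name);
    }
    return 0;
}
```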
Recently, in version 1.5, KataGo gained a pure-CPU backend implemented with the Eigen library, but once people tested it they found it was hilariously slow XDDD

Since the author doesn't provide CPU binaries, I grabbed the source and compiled it myself on Linux. Testing showed it only used a single core (no multithreading), and while the OpenCL backend on a 1080Ti does roughly 150 visits/sec (40b network), the CPU backend managed something like 0.0x visits/sec XDDD

In the discussion on GitHub the author also mentioned that this backend had only been checked for correctness, with no thought given to performance at all...

But then someone else stepped up to improve it. In "Optimization of Eigen backend #288" you can see kaorahi submitting a series of changes, going from the initial eigen_naive_loop (a 13x speedup over 1.5) all the way to the borrow_tensorflow version (1400x), which gets a 15b network up to about 10 visits/sec on a CPU:
"borrow_tensorflow" version: x1400 speed up from 1.5.0 (70% of libtensorflow backend). Now 15b net is usable for me. I get 19 visits/s in benchmark and 10 visits/s in GUI with 15b net.
At this point it is already a lot faster, so Leela Zero will probably fade out gradually; CPU-only was about the last piece of turf where Leela Zero could still compete...
I figured that since we were to replace that matrix function anyway, I could try replacing it with XMMatrixInverse being a “modern” replacement for D3DXMatrixInverse. XMMatrixInverse also uses SSE2 instructions so it should be equally optimal to the D3DX function, but I was nearly sure it would break the same way.
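A drop-in swap along those lines might look like the following minimal sketch; the wrapper name and the use of XMFLOAT4X4 for storage are my assumptions, not the article's actual code:

```cpp
#include <DirectXMath.h>
using namespace DirectX;

// Hypothetical replacement for a D3DXMatrixInverse call site: load the
// stored 4x4 matrix, invert it with DirectXMath's XMMatrixInverse, and
// store the result back.
void InvertMatrix(XMFLOAT4X4* out, const XMFLOAT4X4* in) {
    XMMATRIX m = XMLoadFloat4x4(in);
    XMVECTOR det;                            // determinant out-param, unused here
    XMMATRIX inv = XMMatrixInverse(&det, m);
    XMStoreFloat4x4(out, inv);
}
```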
Here’s Intel versus AMD relative error of RCPPS instruction: http://const.me/tmp/vrcpps-errors-chart.png AMD is Ryzen 5 3600, Intel is Core i3 6157U.
Over the complete range of floats, AMD is more precise on average, 0.000078 versus 0.000095 relative error. However, Intel has 0.000300 maximum relative error, AMD 0.000315.
Both are well within the spec. The documentation says “maximum relative error for this approximation is less than 1.5*2^-12”, in human language that would be 3.6621E-4.
Source code that compares them by creating 16GB binary files with the complete range of floats: https://gist.github.com/Const-me/a6d36f70a3a77de00c61cf4f6c17c7ac
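The linked gist does the exhaustive scan over all 2^32 bit patterns; as an illustration, a stripped-down version of the same measurement might look like this, sampling the positive normal range instead of writing out 16GB files:

```cpp
#include <immintrin.h>
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    double maxRel = 0.0, sumRel = 0.0;
    uint64_t count = 0;
    // Walk the positive normal floats, sampling every 0x1000th bit
    // pattern; the exhaustive test would step by 1.
    for (uint32_t bits = 0x00800000u; bits < 0x7F800000u; bits += 0x1000u) {
        float x;
        std::memcpy(&x, &bits, sizeof(x));
        // RCPPS: the hardware's approximate reciprocal.
        float approx = _mm_cvtss_f32(_mm_rcp_ps(_mm_set1_ps(x)));
        double exact = 1.0 / static_cast<double>(x);
        double rel = std::fabs((approx - exact) / exact);
        maxRel = std::max(maxRel, rel);
        sumRel += rel;
        ++count;
    }
    // The spec promises max relative error < 1.5*2^-12 = 3.6621e-4.
    std::printf("max relative error: %.6e\n", maxRel);
    std::printf("avg relative error: %.6e\n", sumRel / count);
    return 0;
}
```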
As for what produced the NaNs in the first place, it's a little unfortunate that the root cause was never found, but the fix is workable; essentially the mindset of "the new version of the library behaves correctly, so let's not dwell too much on the old version's bug"...
WebP lossless images are 26% smaller in size compared to PNGs. WebP lossy images are 25-34% smaller than comparable JPEG images at equivalent SSIM quality index.
But the author found that Google can reach that 25%~34% figure because the baseline is the Independent JPEG Group's cjpeg; compare against MozJPEG instead and you don't get the same result. He also pulled AVIF, based on AV1, into the test:
I think Google’s result of 25-34% smaller files is mostly caused by the fact that they compared their WebP encoder to the JPEG reference implementation, Independent JPEG Group’s cjpeg, not Mozilla’s improved MozJPEG encoder. I decided to run some tests to see how cjpeg, MozJPEG and WebP compare. I also tested the new AVIF format, based on the open AV1 video codec. AVIF support is already in Firefox behind a flag and should be coming soon to Chrome if this ticket is to be believed.
WebP seems to have about 10% better compression compared to libjpeg in most cases, except with 1500px images where the compression is about equal.
However, when compared to MozJPEG, WebP only performs better with small 500px images. With other image sizes the compression is equal or worse.
I think MozJPEG is the clear winner here with consistently about 10% better compression than libjpeg.
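Reproducing this kind of comparison mostly amounts to "encode the same source image with each encoder and compare bytes". A rough sketch of such a harness, where the MozJPEG install path, the file names, and the quality settings are my assumptions (the article additionally matched quality by SSIM, which this skips):

```cpp
#include <cstdio>
#include <cstdlib>
#include <filesystem>

// Encode one image with IJG cjpeg, MozJPEG's cjpeg, and cwebp, then
// print the resulting file sizes. input.ppm and input.png are assumed
// to hold the same picture; MozJPEG ships its own cjpeg binary, so it
// is told apart from the IJG one purely by path.
int main() {
    std::system("cjpeg -quality 85 -outfile out_ijg.jpg input.ppm");
    std::system("/opt/mozjpeg/bin/cjpeg -quality 85 -outfile out_moz.jpg input.ppm");
    std::system("cwebp -q 85 input.png -o out.webp");

    const char* files[] = {"out_ijg.jpg", "out_moz.jpg", "out.webp"};
    for (const char* f : files)
        std::printf("%s: %ju bytes\n", f,
                    (std::uintmax_t)std::filesystem::file_size(f));
    return 0;
}
```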
He also mentions that AVIF's compression ratio is very good, but note that the codec tends to smear away detail in the less important parts of the image:
I think AVIF is a really exciting development and compared to WebP it seems like a true next-generation codec with about 30% better compression ratio compared to libjpeg. Only concern I have is the excessive blurring of low detail areas. It remains to be seen if this can be improved when more advanced tooling becomes available.