Mouse Wheel Scroll Speed on Ubuntu

Lately I've been switching back to Windows a lot for D2R, and noticed that wheel scrolling is much faster there. Back on Ubuntu 20.04 I found there's no built-in way to adjust the mouse wheel scroll speed, so I tested a few solutions and, surprisingly, most of them are still landmines XD

Searching turns up "Increase mouse wheel scroll speed" and "How to change my mouse wheel scroll rate?", and the most upvoted answer in both is imwheel. But the latest release of that software is from 2004, and once you actually run it on a modern system you hit plenty of bugs...

The solution I ended up using is "Mouse scroll wheel acceleration, implemented in user space", where the author controls the acceleration with a Python script; after a quick test it behaves much more normally. The example given is ./main.py -v --exp 1, but --exp 1 feels a bit too fast in practice, so I changed it to 0.75, which suits me better.

Following the author's instructions, install the dependencies first, then hook it into Session and Startup so it runs after login, and that's it.

How File Ordering Affects Compression Ratio

Saw "Why are tar.xz files 15x smaller when using Python's tar library compared to macOS tar?" on Hacker News Daily. The author asks why the archive he creates with Python's tarfile comes out 15 times smaller than the one created with tar; the files are all JSON, compressed into XZ format:

I'm compressing ~1.3 GB folders each filled with 1440 JSON files and find that there's a 15-fold difference between using the tar command on macOS or Raspbian 10 (Buster) and using Python's built-in tarfile library.

Seeing 1440 files, your intuition should be one file per minute, for a full day's worth of data...

The next day he found the cause: after installing GNU Tar and adding the --sort='name' option, the resulting archive is about the same size as the one from Python's tarfile:

Ok, I think I found the issue: BSD tar and GNU tar without any sort options put the files in the archive in an undefined order.

After installing GNU tar on my Mac with:

brew install gnu-tar

And then tarring the same folder, but with the --sort option:

gtar --sort='name' -cJf zsh-archive-sorted.tar.xz /Users/user/Desktop/temp/tar/2021-03-11

I get a .tar.xz archive of 1.5 MB, equal to the archive created by the Python library.

The underlying reason is that the file names correlate with the file contents (since these are all sensor readings), so compressing in file-name order effectively puts similar JSON files next to each other, which xz can exploit to drastically increase the compression ratio (see the short sketch after the quote):

My JSON files contain measurements from hundreds of sensors. Every minute I read out all sensors, but only a few of these sensors have a different value from minute to minute.

By sorting the files by name (which has the creation unixtime at the beginning of it), two subsequent files have very little different characters between them. Apparently this is very favourable for the compression efficiency.
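As a side note, here is a minimal Python sketch of the same trick, explicitly sorting by file name before adding everything into a .tar.xz. The directory path is just the one from the quoted command above; newer versions of tarfile already add directory contents in sorted order when you add a directory recursively, but sorting explicitly doesn't depend on that behavior:

import tarfile
from pathlib import Path

# Hypothetical input directory, taken from the quoted gtar command above.
src = Path("/Users/user/Desktop/temp/tar/2021-03-11")

# "w:xz" streams the tar archive through LZMA/xz compression.
with tarfile.open("archive-sorted.tar.xz", "w:xz") as tar:
    # Adding files in name order keeps similar JSON files adjacent,
    # which is what lets xz find the long repeated runs.
    for path in sorted(src.iterdir()):
        tar.add(path, arcname=path.name)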

If you run into a similar situation, this is worth keeping in mind as one more tuning knob; test whether it makes the output much smaller...

Backblaze's 2020 Hard Drive Stats Review

A few days ago Backblaze published their 2020 review: "Backblaze Hard Drive Stats for 2020".

The overall AFR (Annualized Failure Rate) is around 0.93%. Broken down by brand, HGST still has the prettiest numbers (even though it is a WD brand now), at roughly 0.36%: 111 failures over 1083774+4663049+372000+820272+275779+3968475 drive days. Toshiba comes next, a bit below the average at about 0.89%, and Seagate, just from a glance, is clearly above 1%...
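Spelling out the arithmetic (the drive-day numbers are the ones listed above; AFR is simply failures per drive-day scaled up to a year):

# HGST drive days for 2020, summed across the HGST models in the report.
drive_days = 1083774 + 4663049 + 372000 + 820272 + 275779 + 3968475
failures = 111

# Annualized Failure Rate: failures per drive-day, scaled to a 365-day year.
afr = failures / drive_days * 365
print(f"{afr:.2%}")  # ~0.36%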

Backblaze also notes that figures based on fewer than 250,000 drive days are for reference only; there isn't enough data to draw statistically sound conclusions:

For drives which have less than 250,000 drive days, any conclusions about drive failure rates are not justified. There is not enough data over the year-long period to reach any conclusions. We present the models with less than 250,000 drive days for completeness only.

Also, WD's own-brand drives are back in the game; I recall they were basically shunned before XDDD

Another table covers the last three years, and you can see the 2020 AFR dropped quite a bit; the post also explains why (it looks like the drives that survived have settled down...):

The answer: It was a group effort. To start, the older drives: 4TB, 6TB, 8TB, and 10TB drives as a group were significantly better in 2020, decreasing from a 1.35% AFR in 2019 to a 0.96% AFR in 2020. At the other end of the size spectrum, we added over 30,000 larger drives: 14TB, 16TB, and 18TB, which as a group recorded an AFR of 0.89% for 2020. Finally, the 12TB drives as a group had a 2020 AFR of 0.98%. In other words, whether a drive was old or new, or big or small, they performed well in our environment in 2020.

The Blog Is Getting Scraped, Blocking It with nginx's limit_req_zone...

Update: this approach still seems to have quite a few problems; I've taken it out for now...

In the past few days the blog has been getting scraped on pages that are relatively heavy to render, so CPU load shot way up; the control panel shows it's frequently pegged.

A quick look shows the requests are all coming from Azure, several batches of them, and the IP addresses change every so often, so simply blocking IP addresses with a firewall doesn't look like it will work...

I remembered that nginx can rate limit on its own; a quick search through the docs turns up "Module ngx_http_limit_req_module", so I set it up as a temporary way to block this, roughly like so:

limit_conn_status 429;
limit_req_status 429;
limit_req_zone $binary_remote_addr zone=myzone:10m rate=10r/m;

The default is to return a 5xx-series service unavailable, but 429 should be more correct here; Wikipedia's "List of HTTP status codes" has a good description:

429 Too Many Requests (RFC 6585)
The user has sent too many requests in a given amount of time. Intended for use with rate-limiting schemes.

Then, in the virtual host config, put the path you want to protect into this zone; the annoying part right now is having to copy & paste the try_files and FastCGI-related settings:

    location /path/subpath {
        limit_req zone=myzone;
        try_files $uri $uri/ /index.php?$args;

        include fastcgi.conf;
        fastcgi_intercept_errors on;
        fastcgi_pass php74;
    }

This automatically blocks the bots that are hammering the site; at least at this stage it should still be useful...

If other tactics show up later, I'll deal with them as they come and see how to harden things further :o

Backblaze Partners with Cloudflare to Waive Transfer Fees

I've known for a while that many organizations choose CloudFront precisely because the S3-to-CloudFront leg incurs no transfer fees. After all, a CDN's hit rate is limited, and with another vendor's CDN you have to pay for that traffic.

Now Backblaze has announced a partnership with Cloudflare that waives the fees from Backblaze to Cloudflare: "Backblaze and Cloudflare Partner to Provide Free Data Transfer".

Today we are announcing that beginning immediately, Backblaze B2 customers will be able to download data stored in B2 to Cloudflare for zero transfer fees.

Will AWS respond with something of its own...

Amazon S3 Now Supports Higher Request Rates...

AWS announced improved Amazon S3 performance: "Amazon S3 Announces Increased Request Rate Performance".

Each S3 prefix can now go up to 5,500 read requests/sec and 3,500 write requests/sec:

Amazon S3 now provides increased performance to support up to 3,500 requests per second to add data and 5,500 requests per second to retrieve data, which can save significant processing time for no additional charge. Each S3 prefix can support these request rates, making it simple to increase performance exponentially.

For the old numbers, see "Request Rate and Performance Considerations"; it doesn't state hard limits, but it does say that if you expect to exceed 800 GET requests/sec or 300 PUT/LIST/DELETE requests/sec, you should open a support case:

However, if you expect a rapid increase in the request rate for a bucket to more than 300 PUT/LIST/DELETE requests per second or more than 800 GET requests per second, we recommend that you open a support case to prepare for the workload and avoid any temporary limits on your request rate.

That said, if you have real volume, it's still better to follow the original prefix advice and spread the keys out. The CDN in front (CloudFront itself, or Cloudflare) can usually handle this with a simple URL rewrite, e.g. turning a unix-timestamp (ms) key like https://www.example.com/1531843366123.jpg into https://www.example.com/6123/1531843366123.jpg, so that Amazon S3's backend can spread the load across prefixes and you avoid a painful migration once the site keeps growing.
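A rough sketch of that rewrite, just to illustrate the idea (taking the last four digits of the timestamp as the prefix is an arbitrary choice matching the example URLs above; in practice this logic would live in the CDN's rewrite rules or an edge function rather than in Python):

import os

def shard_key(key: str) -> str:
    """Rewrite '1531843366123.jpg' into '6123/1531843366123.jpg'.

    Using the last few digits of a unix timestamp (ms) as the prefix
    spreads otherwise sequential keys across many S3 prefixes.
    """
    name, _ext = os.path.splitext(key)
    return f"{name[-4:]}/{key}"

print(shard_key("1531843366123.jpg"))  # 6123/1531843366123.jpg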

Stripe Lowered Its Error Rate by Moving Redis from a Single Instance to Cluster Mode

"Scaling a High-traffic Rate Limiting Stack With Redis Cluster" describes how Stripe migrated its rate limiters from a single Redis instance to a 10-node cluster, after which the error rate dropped dramatically:

Stripe’s rate limiters are built on top of Redis, and until recently, they ran on a single very hot instance of Redis. The server had followers in place for failover, but at any given time, one node was handling every operation.

We eventually solved it by migrating to a 10-node Redis Cluster.

You can also see there is quite a bit to watch out for after moving to the cluster version, such as rebalancing because of sharding. Write confirmation in cluster mode also doesn't behave quite the way you'd normally expect, but that's acceptable for a rate-limiting use case, where losing some data is tolerable...
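For context, the core of such a stack is roughly a counter like the sketch below. This is not Stripe's implementation, just a minimal fixed-window limiter using redis-py with made-up key names; note that in cluster mode each key still hashes to a single node, which is why the balancing issue above matters:

import time
import redis

r = redis.Redis()  # assumes a local Redis; Stripe's setup is a 10-node cluster

def allow_request(user_id: str, limit: int = 100, window: int = 60) -> bool:
    """Fixed-window rate limit: at most `limit` requests per `window` seconds."""
    bucket = int(time.time()) // window
    key = f"ratelimit:{user_id}:{bucket}"
    pipe = r.pipeline()
    pipe.incr(key)
    pipe.expire(key, window)  # let old windows expire on their own
    count, _ = pipe.execute()
    return count <= limit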

Limiting I/O Speed with Linux Kernel 4.5+

In "Using cgroups to limit I/O" the author tries to use cgroups to limit I/O speed.

The author spends a fair amount of the article explaining why cgroups v1 cannot limit I/O speed correctly, and then covers how to do it with cgroups v2:

So, in order to limit I/O when this I/O may hit the writeback kernel cache, we need to use both memory and io controllers in the cgroups v2!

This requires kernel 4.5+, so you may need to update manually, or just use a newer distribution:

Since kernel 4.5, the cgroups v2 implementation was marked non-experimental.

Then you can basically copy the commands (all of them require root, so it's a bit odd that the author writes them with a $ prompt; 8:0 is the target block device's major:minor number, which you can check with lsblk, and wbps means write bytes per second):

# mount -t cgroup2 nodev /cgroup2
# mkdir /cgroup2/cg2
# echo "+io" > /cgroup2/cgroup.subtree_control
# echo "8:0 wbps=1048576" > io.max
# echo $$ > /cgroup2/cg2/cgroup.procs

Then you can run dd to test the speed, and run iostat at the same time to watch it.

Amazon's Multivariate Optimization

"An efficient bandit algorithm for real-time multivariate optimization" describes how Amazon, instead of running traditional A/B testing, optimizes multiple variables at the same time:

Consider the problem of trying to find a near-optimal version of a promotional message such as this one, which has 5 variable parts and 48 different combinations in total.

With that many treatments, the authors estimate it would take 66 days to get valid results, which also means the problem only gets worse as the number of variables grows:

Based on the Amazon success rate and traffic size, the authors calculated it would take 66 days to conduct a 48 treatment randomized experiment. Often this isn’t practical.

Which is what the line at the top refers to: how to lift conversion rate by 21% in a single week:

Aka, “How Amazon improved conversion by 21% in a single week!”

The author also points out that this approach rather contradicts another paper he covered earlier, a 2014 one arguing that experiments should be kept as simple as possible XDDD:

Yesterday we saw the hard-won wisdom on display in ‘seven myths‘ recommending that experiments be kept simple and only test one thing at a time, otherwise interpreting the results can get really complicated.

All I can say is that things keep getting more complex, which calls for new methods. On top of that, these e-commerce sites run into dependencies between factors during testing (i.e. the factors are not i.i.d.), which the traditional approach simply cannot detect.
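To get a feel for the bandit side, here is a minimal Thompson sampling sketch that treats each of the 48 combinations as an independent arm with a Beta prior. The part sizes below are made up, and the paper's actual algorithm goes further by modeling the individual components (and their interactions) so that information is shared across combinations instead of learning each arm separately:

import random
from itertools import product

# Hypothetical: 5 variable parts, with sizes chosen so they multiply to 48.
parts = [2, 2, 2, 2, 3]
arms = list(product(*[range(n) for n in parts]))  # 48 combinations
stats = {arm: [1, 1] for arm in arms}  # Beta(successes + 1, failures + 1) prior

def choose_arm():
    # Thompson sampling: draw a conversion rate per arm, pick the best draw.
    return max(arms, key=lambda a: random.betavariate(*stats[a]))

def update(arm, converted: bool):
    stats[arm][0 if converted else 1] += 1

# Usage: for each impression, show choose_arm(), then call update() with the outcome.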

Ranking Items by Votes

A few days ago I saw an old article from 2009 discussing how to rank items after users vote them up or down: "How Not To Sort By Average Rating".

The basic idea is that the more votes there are, the more accurate the estimate, so you compute a confidence interval with statistical methods and sort by the interval's lower bound... but I didn't expect the formula to "look" this complicated XDDD

Score = Lower bound of Wilson score confidence interval for a Bernoulli parameter

But the actual computation isn't that complicated; from the Ruby code you can see it's mostly basic built-in arithmetic. The z value is a constant in most cases.

require 'statistics2'

def ci_lower_bound(pos, n, confidence)
    if n == 0
        return 0
    end
    z = Statistics2.pnormaldist(1-(1-confidence)/2)
    phat = 1.0*pos/n
    (phat + z*z/(2*n) - z * Math.sqrt((phat*(1-phat)+z*z/(4*n))/n))/(1+z*z/n)
end

The z-score in this function never changes, so if you don't have a statistics package handy or if performance is an issue you can always hard-code a value here for z. (Use 1.96 for a confidence level of 0.95.)
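If Python is handier, the same function ports over directly using the standard library's statistics.NormalDist for the quantile (this is just a translation of the Ruby above, not code from the article):

from math import sqrt
from statistics import NormalDist

def ci_lower_bound(pos: int, n: int, confidence: float = 0.95) -> float:
    if n == 0:
        return 0.0
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # 1.96 for 95% confidence
    phat = pos / n
    return (phat + z * z / (2 * n)
            - z * sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)) / (1 + z * z / n)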

The author later provided SQL (2012) and Excel (2016) sample code as well, with the 95% confidence interval hard-coded:

SELECT widget_id, ((positive + 1.9208) / (positive + negative) - 
                   1.96 * SQRT((positive * negative) / (positive + negative) + 0.9604) / 
                          (positive + negative)) / (1 + 3.8416 / (positive + negative)) 
       AS ci_lower_bound FROM widgets WHERE positive + negative > 0 
       ORDER BY ci_lower_bound DESC;

=IFERROR((([@[Up Votes]] + 1.9208) / ([@[Up Votes]] + [@[Down Votes]]) - 1.96 * 
    SQRT(([@[Up Votes]] *  [@[Down Votes]]) / ([@[Up Votes]] +  [@[Down Votes]]) + 0.9604) / 
    ([@[Up Votes]] +  [@[Down Votes]])) / (1 + 3.8416 / ([@[Up Votes]] +  [@[Down Votes]])),0)

More details can be found in Wikipedia's "Binomial proportion confidence interval", which also lists other methods you can use.