Looptap, a QTE mini game

Saw this little game on the Hacker News front page: "Show HN: Looptap – A minimal game to waste your time (vasanthv.com)". As the title says, it's a game for wasting your time XDDD

The QTE in my title stands for "quick time event", a mechanic that is pretty common in games these days: a design where you have to react within the event's valid window (reacting too early or too late both fail).
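
As a rough illustration of the mechanic (my own sketch, nothing to do with Looptap's actual implementation), the core of a QTE check is just testing whether the input time lands inside the valid window:

package main

import (
	"fmt"
	"time"
)

// judge reports whether the player's input falls inside the valid window:
// pressing before windowStart or at/after windowStart+windowLen both fail.
func judge(windowStart, pressedAt time.Time, windowLen time.Duration) bool {
	return !pressedAt.Before(windowStart) && pressedAt.Before(windowStart.Add(windowLen))
}

func main() {
	start := time.Now()
	fmt.Println(judge(start, start.Add(150*time.Millisecond), 300*time.Millisecond)) // true: inside the window
	fmt.Println(judge(start, start.Add(500*time.Millisecond), 300*time.Millisecond)) // false: too late
}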

SSH's StrictHostKeyChecking=accept-new

When OpenSSH connects to a new host it shows the key fingerprint and asks the user to confirm. For automation this is often bypassed with StrictHostKeyChecking=no, but on Lobsters Daily I saw a newer option that can be used instead: StrictHostKeyChecking=accept-new.

It does what the name suggests. Checking the OpenSSH Release Notes shows the setting was introduced in OpenSSH 7.5, released on March 20, 2017:

* ssh(1): expand the StrictHostKeyChecking option with two new settings. The first "accept-new" will automatically accept hitherto-unseen keys but will refuse connections for changed or invalid hostkeys. This is a safer subset of the current behaviour of StrictHostKeyChecking=no. The second setting "off", is a synonym for the current behaviour of StrictHostKeyChecking=no: accept new host keys, and continue connection for hosts with incorrect hostkeys. A future release will change the meaning of StrictHostKeyChecking=no to the behaviour of "accept-new". bz#2400

For most automation flows this should be enough, without having to turn the check off completely with no.
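
The option works both on the command line and in ~/.ssh/config (the hostnames below are just placeholders):

$ ssh -o StrictHostKeyChecking=accept-new user@new-host.example.com

Host *.internal.example.com
    StrictHostKeyChecking accept-new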

Checking "Ubuntu – Package Search Results -- openssh-client" shows that 18.04 and later all ship 7.5 or newer, so availability shouldn't be much of an issue...

Searching which streaming platform has a title

Saw "Show HN: API to query catalogs of 20 streaming services across 60 countries (movieofthenight.com)" on Hacker News Daily, but the service itself turned out not to be the point: quite a few people found its error rate rather high, and it has no data for Taiwan. Instead, someone in the thread mentioned JustWatch, which looks a lot more usable...

For example, "Friends" (searched with the Chinese translated title) shows up on Netflix in Taiwan, and in the US on HBO Max (streaming) and Apple TV (purchase). MythBusters, on the other hand, has no data on either platform...

Overall, though, the quality of JustWatch's results is still quite a bit better...

Uber's Golang GC tuning

Saw "How We Saved 70K Cores Across 30 Mission-Critical Services (Large-Scale, Semi-Automated Go GC Tuning @Uber)" on Hacker News, on how the folks at Uber tune the Golang GC; the Hacker News thread "Large-scale, semi-automated Go GC tuning (uber.com)" also has some discussion worth reading.

Their initial approach was to keep adjusting the GOGC value dynamically:

Our initial approach was to have a ticker to run every second to monitor the heap metrics, and then adjust GOGC value accordingly.
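
A minimal sketch of that idea in Go (my own reconstruction with made-up numbers and a hypothetical policy, not Uber's actual code):

package main

import (
	"runtime"
	"runtime/debug"
	"time"
)

func main() {
	const target = 4 << 30 // hypothetical per-service heap target: 4 GiB

	for range time.Tick(time.Second) {
		var m runtime.MemStats
		runtime.ReadMemStats(&m) // stop-the-world read, the overhead mentioned right below

		// hypothetical policy: run the GC more aggressively as the live heap
		// approaches the target, relax it when there is plenty of headroom
		if m.HeapAlloc > target/2 {
			debug.SetGCPercent(50)
		} else {
			debug.SetGCPercent(200)
		}
	}
}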

But the overhead of that approach turned out to be too heavy:

The disadvantage of this approach is that the overhead starts to become considerable, because in order to read heap metrics Go needs to do a STW (ReadMemStats) and it is somewhat inaccurate, because we can have more than one garbage collection per second.

The approach they ended up with uses SetFinalizer instead (and for some reason that piece of code is only shown as an image...):

Luckily we were able to find a good alternative. Go has finalizers (SetFinalizer), which are functions that run when the object is going to be garbage collected. They are mainly useful for cleaning memory in C code or some other resources. We were able to employ a self-referencing finalizer that resets itself on every GC invocation. This allows us to reduce any CPU overhead.
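
Since the post only shows this part as a screenshot, here is my own rough reconstruction of the trick (an assumption about how it could look, not Uber's actual code): arm a finalizer on a throwaway object and re-register it inside the handler, so the handler fires once per GC cycle and GOGC can be recomputed there without polling ReadMemStats:

package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
	"time"
)

type gcHook struct{ onGC func() }

// hookHandler runs as the finalizer: a GC cycle has just finished, so run the
// callback and re-register the finalizer, which makes it fire again next cycle.
func hookHandler(h *gcHook) {
	h.onGC()
	runtime.SetFinalizer(h, hookHandler)
}

// setGCHook arms the self-referencing finalizer; once the reference goes away,
// every GC cycle will (asynchronously) invoke onGC.
func setGCHook(onGC func()) {
	runtime.SetFinalizer(&gcHook{onGC: onGC}, hookHandler)
}

func main() {
	setGCHook(func() {
		// hypothetical policy: this is where the GOGC value would be recomputed,
		// e.g. from the heap goal vs. a per-service memory target.
		debug.SetGCPercent(100)
		fmt.Println("GC cycle finished, GOGC re-evaluated")
	})

	for i := 0; i < 3; i++ {
		runtime.GC()
		time.Sleep(10 * time.Millisecond) // finalizers run asynchronously after GC
	}
}

The nice property is that the hook only runs when a collection actually happened, so there is no fixed per-second cost.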

Some people on Hacker News were also surprised that 30 services burn 70K cores, which is quite a bit more than you'd expect for Uber's services, and that's only the Golang side, and only the portion saved this time...

Also mentioned on Hacker News: Golang has been thinking about a soft memory limit design, which is worth a look as well: "runtime/debug: soft memory limit #48409" and "Proposal: Soft memory limit".

qBittorrent supports BitTorrent v2

"Thursday January 06th 2022 - qBittorrent v4.4.0 release" shows that qBittorrent v4.4.0 now supports BitTorrent v2 (BEP 52):

FEATURE: Support for v2 torrents along with libtorrent 2.0.x support (glassez, Chocobo1)

I mentioned libtorrent's plan to support BitTorrent v2 earlier in "libtorrent 宣佈支援 BitTorrent v2", and since qBittorrent is one of the heaviest users of libtorrent, its support should push BitTorrent v2 usage up quite a bit.

It took more than a year for a major client to jump in, so it looks like the motivation really isn't that high...

Using Akamai's akahelp to inspect DNS resolver information

While tidying up notes I ran into something I'd seen before: Akamai provides a tool for looking at DNS resolver information: "Introducing a New whoami Tool for DNS Resolver Information".

It's quite handy for poking at services like 168.95.1.1 or 8.8.8.8: they expose a single IP address to the outside, but behind it sits a whole cluster, so Akamai's tool can be used to see what is actually answering.

For example, 8.8.8.8 passes along a nearby EDNS Client Subnet (ECS) value (the ip part looks like an arbitrarily picked address):

$ dig whoami.ds.akahelp.net txt @8.8.8.8

[...]

;; ANSWER SECTION:
whoami.ds.akahelp.net.  20      IN      TXT     "ns" "172.217.43.194"
whoami.ds.akahelp.net.  20      IN      TXT     "ecs" "111.250.35.0/24/24"
whoami.ds.akahelp.net.  20      IN      TXT     "ip" "111.250.35.149"

1.1.1.1 reports a fake ECS value:

$ dig whoami.ds.akahelp.net txt @1.1.1.1

[...]

;; ANSWER SECTION:
whoami.ds.akahelp.net.  20      IN      TXT     "ns" "2400:cb00:80:1024::a29e:f134"
whoami.ds.akahelp.net.  20      IN      TXT     "ip" "2400:cb00:80:1024::a29e:f134"
whoami.ds.akahelp.net.  20      IN      TXT     "ecs" "111.250.0.0/24/24"

And 168.95.1.1 doesn't even send ECS XDDD

$ dig whoami.ds.akahelp.net txt @168.95.1.1

[...]

;; ANSWER SECTION:
whoami.ds.akahelp.net.  20      IN      TXT     "ns" "2001:b000:180:8002:0:2:9:114"

It's been a reasonably useful tool when chasing down DNS-related issues...

Moving the blog behind CloudFront

As mentioned earlier in "AWS 流量相關的 Free Tier 增加不少...", the general traffic free tier went from 1GB/month per region to 100GB/month, while CloudFront got a much bigger bump, from 50GB/month (first 12 months after signing up only) to 1TB/month (no 12-month limit); on top of that, traffic between CloudFront and EC2 isn't billed.

I just spent some time moving the blog from Cloudflare to CloudFront, setting the default /* behavior to no cache and adding a separate cache behavior for /wp-content/*; I'll let it run for a while and see whether anything breaks...

The most visible improvement so far is latency: from HiNet, the free tier of Cloudflare routes you to the US, while with CloudFront it stays in Taiwan.

On top of that, the international leg now rides AWS's backbone instead of HiNet connecting straight to a US PoP, so in theory it should be a bit faster...

Ingo Molnár's mega patch series to speed up Linux kernel builds

Saw "Massive ~2.3k Patch Series Would Improve Linux Build Times 50~80% & Fix "Dependency Hell"" on the Hacker News front page; the corresponding mailing list post is "* [PATCH 0000/2297] [ANNOUNCE, RFC] "Fast Kernel Headers" Tree -v1: Eliminate the Linux kernel's "Dependency Hell"". Just look at that "0000/2297" prefix XDDD

The main goal is to improve the Linux kernel's compile time (as the project name "Fast Kernel Headers" suggests); the surprise is how much it shortens it. Along the way it also tackles the dependency hell problem (better maintainability).

The measured results are quite impressive: from 231.34 +- 0.60 secs (15.5 builds/hour) down to 129.97 +- 0.51 secs (27.7 builds/hour), a 78% improvement in build throughput (231.34 / 129.97 ≈ 1.78). In CPU time it goes from 11,474,982.05 msec cpu-clock down to 7,100,730.37 msec cpu-clock, which counted the same way is a 61.6% improvement (11,474,982 / 7,100,730 ≈ 1.616)...

It took more than a year of trying different approaches to get here: the first attempts did improve things, but the gains were too small for the amount of churn, so he didn't feel they were worth posting; it wasn't until the third approach that he reached this kind of result...

First attempt:

When I started this project, late 2020, I expected there to be maybe 50-100 patches. I did a few crude measurements that suggested that about 20% build speed improvement could be gained by reducing header dependencies, without having a substantial runtime effect on the kernel. Seemed substantial enough to justify 50-100 commits.

Second attempt:

But as the number of patches increased, I saw only limited performance increases. By mid-2021 I got to over 500 commits in this tree and had to throw away my second attempt (!), the first two approaches simply didn't scale, weren't maintainable and barely offered a 4% build speedup, not worth the churn of 500 patches and not worth even announcing.

Third attempt:

With the third attempt I introduced the per_task() machinery which brought the necessary flexibility to reduce dependencies drastically, and it was a type-clean approach that improved maintainability. But even at 1,000 commits I barely got to a 10% build speed improvement. Again this was not something I felt comfortable pushing upstream, or even announcing. :-/

Then, with the third attempt looking promising, he kept going and was surprised to find the later speedups were far larger than expected:

But the numbers were pretty clear: 20% performance gains were very much possible. So I kept developing this tree, and most of the speedups started arriving after over 1,500 commits, in the fall of 2021. I was very surprised when it went beyond 20% speedup and more, then arrived at the current 78% with my reference config. There's a clear super-linear improvement property of kernel build overhead, once the number of dependencies is reduced to the bare minimum.

The patch series is huge, but it looks like it cuts compile time by a lot, so there should be plenty of discussion... The news is still very fresh (the mail went out at 5am Taiwan time today), so it'll be worth checking back later to see what the other heavyweights have to say...

Google releases a tool to scan JAR files for vulnerable Log4j

Saw on the Hacker News front page that Google has released a tool, written in Golang, that scans JAR files for affected copies of Log4j: "log4jscanner"; the corresponding discussion is at "Log4jscanner (github.com/google)".
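
If I remember the README correctly, basic usage is just pointing it at a directory to walk (double-check the repo for the exact flags and output format):

$ log4jscanner /path/to/jars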

It looks like an internal tool, with the vcs history squashed before open sourcing:

We unfortunately had to squash the history when open sourcing. The following contributors were instrumental in this project's development: [...]

The thread also mentions that "OWASP Dependency-Check" can scan for this as well, and that one is a more general-purpose tool:

Dependency-check automatically updates itself using the NVD Data Feeds hosted by NIST.

Tuning PostgreSQL on ZFS

"Everything I've seen on optimizing Postgres on ZFS" collects tuning advice for running PostgreSQL on top of ZFS; the author seems to keep updating the page, so it's worth going back to when needed...

The main audience is people running self-hosted PostgreSQL. Compared to ext4 or XFS, having ZFS underneath enables things like compression and snapshots, which make a lot of DBA work much more convenient, but precisely because of ZFS, both sides (ZFS & PostgreSQL) need to be tuned together to keep performance up...
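
The starting points that usually come up look roughly like this (the dataset name is just a placeholder, and the exact values depend on the workload, so check the article for the reasoning):

$ zfs set recordsize=8K tank/pgdata
$ zfs set compression=lz4 tank/pgdata
$ zfs set atime=off tank/pgdata

On the PostgreSQL side, full_page_writes = off in postgresql.conf is the setting most often mentioned, since ZFS's copy-on-write semantics already avoid torn pages.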

In the short term I'll probably still just use RDS, though...