Improving Wikipedia's JavaScript: 300ms less blocking time

Saw 「300ms Faster: Reducing Wikipedia's Total Blocking Time」 on the Hacker News front page. The author, Nicholas Ray, is an engineer at the Wikimedia Foundation; it's posted on his personal blog, but it still reads as semi-official... The post describes two improvements, both on the front-end JavaScript side.

The approach was to profile in the browser, generate flame graphs, read off which blocks were the big offenders, and then see whether there was room to improve them.

Looking at the end result first: the first fix cut blocking time by roughly 200ms, and the second fix by roughly another 80ms.

The first improvement came from noticing in the flame graph that l._enable was eating a lot of blocking time.

The author traced it to find() collecting all the links (a elements) and then binding an event handler on every single one of them:

The .on("click") call attached a click event listener to nearly every link in the content so that the corresponding section would open if the clicked link contained a hash fragment. For short articles with few links, the performance impact was negligible. But long articles like ”United States” included over 4,000 links, leading to over 200ms of execution time on low-end devices.

The thing is, this turned out to be redundant code, already handled elsewhere, so the fix was simple: removing it immediately saved about 200ms:

Worse yet, this behavior was unnecessary. The downstream code that listened to the hashchange event already called the same method that the click event listener called. Unless the window’s location already pointed at the link’s destination, clicking a link called the checkHash method twice — once for the link click event handler and once more for the hashchange handler.
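
Roughly the shape being described, as a hedged sketch: checkHash comes from the quote above, but the selector and handler bodies here are made up for illustration, not the actual MediaWiki code.

function checkHash(): void {
  // open the section that corresponds to the current location.hash
}

// The redundant part: one click handler per link, all funnelling into checkHash().
document.querySelectorAll<HTMLAnchorElement>("#content a").forEach((link) => {
  link.addEventListener("click", () => {
    if (link.hash) {
      checkHash();
    }
  });
});

// The part that already existed: clicking an in-page link changes the hash,
// which fires this handler anyway, so checkHash() ran twice per click.
window.addEventListener("hashchange", checkHash);

Deleting the per-link block is essentially the whole fix; the hashchange listener already covers the behaviour.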

The second improvement targets the blocking time eaten by initMediaViewer. From the code the problem looks similar: a loop attaching an event handler to every matching element.

The fix here is event delegation: attach the handler to a parent element so only one handler needs to be registered, and add a check that the event actually originated from a matching element. That drastically reduces the cost of attaching handlers.

This is a pretty standard trick. For example, when a table would otherwise need handlers on lots of individual td elements, you attach a single handler to the table and add the matching check there.
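
As a minimal before/after sketch (TypeScript against the DOM; the table selector and handler are hypothetical, not Wikipedia's actual initMediaViewer code):

// Hypothetical handler shared by both versions.
function handleCellClick(cell: HTMLTableCellElement): void {
  console.log("clicked", cell.textContent);
}

// Before: one listener per element; on a page with thousands of matches,
// just attaching these handlers adds real blocking time.
document.querySelectorAll<HTMLTableCellElement>("table.data td").forEach((cell) => {
  cell.addEventListener("click", () => handleCellClick(cell));
});

// After (event delegation): a single listener on the parent,
// plus a check on where the event actually originated.
const table = document.querySelector<HTMLTableElement>("table.data");
table?.addEventListener("click", (event) => {
  const cell = (event.target as HTMLElement).closest("td");
  if (cell !== null) {
    handleCellClick(cell);
  }
});

However many cells the table has, the delegated version registers exactly one listener, which is why it scales so much better on pages like the 4,000-link article mentioned above.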

It's a fairly textbook profiling exercise: pull out real-world data, then go after the parts with the biggest impact.

Speeding up llama.cpp's model loading

Saw 「Llama.cpp 30B runs with only 6GB of RAM now (github.com/ggerganov)」 on Hacker News; the original pull request is 「Make loading weights 10-100x faster #613」.

In the PR, the author Justine Tunney explains that she changed the model file format so the weights can be mmap()ed directly, which slashes the up-front loading time (it effectively becomes lazy loading) and lets the system reuse the kernel page cache, avoiding double buffering:

This was accomplished by changing the file format so we can mmap() weights directly into memory without having to read() or copy them thereby ensuring the kernel can make its file cache pages directly accessible to our inference processes; and secondly, that the file cache pages are much less likely to get evicted (which would force loads to hit disk) because they're no longer competing with memory pages that were needlessly created by gigabytes of standard i/o.

This reminds me of the database world; PostgreSQL, for instance, leans heavily on the OS page cache (and uses mmap() in a few places), which is a somewhat similar idea.

In this comment Justine also mentions an unexpected observation: the portion of the model actually touched during inference is surprisingly small. With a simple test prompt, only about 1.6GB of the 20GB 30B model file was actually used on her Intel machine:

If I run 30B on my Intel machine:

[...]

As we can see, 400k page faults happen, which means only 1.6 gigabytes ((411522 * 4096) / (1024 * 1024)) of the 20 gigabyte weights file actually needed to be used.

She's still wondering whether this is a bug in her change, but for now she doesn't think so and can't spot one:

Now, since my change is so new, it's possible my theory is wrong and this is just a bug. I don't actually understand the inner workings of LLaMA 30B well enough to know why it's sparse. Maybe we made some kind of rare mistake where llama.cpp is somehow evaluating 30B as though it were the 7B model. Anything's possible, however I don't think it's likely. I was pretty careful in writing this change, to compare the deterministic output of the LLaMA model, before and after the Git commit occurred. I haven't however actually found the time to reconcile the output of LLaMA C++ with something like PyTorch. It'd be great if someone could help with that, and possibly help us know why, from more a data science (rather than systems engineering perspective) why 30B is sparse.

If it isn't a bug, this is a rather interesting signal: it suggests these models could probably be slimmed down further?

Heroku publishes the timeline for removing its free plans

The top story when I opened Hacker News: Heroku has published the timeline for killing off its free plans, 「Removal of Heroku free product plans (heroku.com)」, with the details in 「Removal of Heroku Free Product Plans FAQ」.

Inactive accounts start getting deleted on 2022/10/26, and the existing free plans (free dynos, hobby-dev Postgres and hobby-dev Redis) are removed starting 2022/11/28:

Focus on what's mission-critical: Removal of free dynos, hobby-dev Heroku Postgres and hobby-dev Heroku Data for Redis plans starting November 28, 2022 and inactive account deletion starting October 26, 2022.

What replaces them is conditional access for specific groups, in three categories: students, nonprofits, and open source projects. The programs for the first two aren't ready yet and will be announced later; open source projects have to apply by email.

That said, it feels like not many people are picking Heroku these days; it's mostly people who were already on it and just kept going. If you want the "sexy" product (the playing-with-something-new feeling), Fly.io is probably the more common choice now?

So there's a name for this: TOCTOU (Time-of-check to time-of-use)

Came across the term while reading 「The trouble with symbolic links」: 「TOCTOU (Time-of-check to time-of-use)」, roughly "check first, then use". It's a common security (hole) pattern: what you checked can be changed by someone else before you use it, and that window is where the vulnerability lives.

In a database-style environment, isolation (the I in ACID) protects you from this class of problem (you need REPEATABLE-READ or a stricter isolation level).

On file systems it doesn't look nearly as good: a 2004 result showed there is no portable, deterministic way to avoid TOCTOU races:

In the context of file system TOCTOU race conditions, the fundamental challenge is ensuring that the file system cannot be changed between two system calls. In 2004, an impossibility result was published, showing that there was no portable, deterministic technique for avoiding TOCTOU race conditions.

One mitigation is to track everything through file descriptors:

Since this impossibility result, libraries for tracking file descriptors and ensuring correctness have been proposed by researchers.
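
A minimal Node.js/TypeScript sketch of both the race and the fd-based idea (the path is a placeholder; this is the classic access()-then-open() textbook example, not code from the article):

import { promises as fsp, constants } from "node:fs";

const path = "/tmp/report.txt"; // hypothetical path

// TOCTOU-prone: the "check" and the "use" are two separate system calls,
// so the file can be replaced (e.g. with a symlink) in between them.
export async function readIfReadableRacy(): Promise<string> {
  await fsp.access(path, constants.R_OK); // check
  return fsp.readFile(path, "utf8");      // use, at a later moment
}

// In the spirit of the fd-based mitigation: open once, then do every further
// check on the file handle itself (fstat() on the fd, not stat() on the path),
// so all of them refer to the same underlying file. Note the open() itself can
// still follow a symlink unless you add flags like O_NOFOLLOW on POSIX systems.
export async function readViaHandle(): Promise<string> {
  const handle = await fsp.open(path, "r");
  try {
    const st = await handle.stat();
    if (!st.isFile()) {
      throw new Error("not a regular file");
    }
    return await handle.readFile({ encoding: "utf8" });
  } finally {
    await handle.close();
  }
}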

The other (more fundamental) approach is for the file system API to support transactions, but that doesn't seem to have been accepted by the mainstream:

An alternative solution proposed in the research community is for UNIX systems to adopt transactions in the file system or the OS kernel. Transactions provide a concurrency control abstraction for the OS, and can be used to prevent TOCTOU races. While no production UNIX kernel has yet adopted transactions, proof-of-concept research prototypes have been developed for Linux, including the Valor file system and the TxOS kernel. Microsoft Windows has added transactions to its NTFS file system, but Microsoft discourages their use, and has indicated that they may be removed in a future version of Windows.

Right now the problem seems to be that nobody has produced an API design the Linux community can accept?

A plea to put units in parameter names

Saw the discussion at 「Please put units in names (ruudvanasseldonk.com)」 on Hacker News; it was already past 900 points when I looked (and still climbing fast), fully viral. You can feel the collective rage of a whole lot of developers XDDD

The original post is 「Please put units in names」: put the unit in the parameter name, especially for anything time-related.

For time parameters there is simply no convention you can rely on. Without reading the docs you can't tell whether a value means ns, ms or seconds, which is how you end up with the post's example where the same-looking code in three different languages runs for three different amounts of time.

And then you flip the table XDDD
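
A tiny TypeScript illustration of the complaint (all names here are made up for the example): a bare number tells you nothing, while a unit in the name, or a dedicated type, removes the guesswork.

// Ambiguous: seconds? milliseconds? nanoseconds? You have to go read the docs.
export function setRequestTimeout(timeout: number): void {
  // ...
}

// Better: the unit is in the name, so every call site reads unambiguously.
export function setRequestTimeoutMs(timeoutMs: number): void {
  // ...
}

// Even better, where the language allows it: encode the unit in a type.
type Milliseconds = number & { readonly __unit: "ms" };
const ms = (n: number): Milliseconds => n as Milliseconds;

export function setRequestTimeoutTyped(timeout: Milliseconds): void {
  // ...
}

setRequestTimeoutMs(1500);        // clearly 1.5 seconds
setRequestTimeoutTyped(ms(1500)); // the type documents the unit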

A performance boost for Amazon EFS

AWS announced a major latency reduction for Amazon EFS: 「Amazon Elastic File System Update – Sub-Millisecond Read Latency」.

On Linux, EFS is normally mounted over NFS, so single-digit milliseconds really do hurt performance; with reads now claimed to drop to 0.6ms, the difference should be quite noticeable:

Up until today, EFS latency for read operations (both data and metadata) was typically in the low single-digit milliseconds. Effective today, new and existing EFS file systems now provide average latency as low as 600 microseconds for the majority of read operations on data and metadata.

And there's no extra charge:

This performance boost applies to One Zone and Standard General Purpose EFS file systems. New or old, you will still get the same availability, durability, scalability, and strong read-after-write consistency that you have come to expect from EFS, at no additional cost and with no configuration changes.

They also flipped existing EFS file systems over during the past few weeks:

We “flipped the switch” and enabled this performance boost for all existing EFS General Purpose mode file systems over the course of the last few weeks, so you may already have noticed the improvement. Of course, any new file systems that you create will also benefit.

The other problem with EFS is still that it's painfully expensive; you're paying for the convenience...

Looptap, a QTE mini game

Saw this little game on the Hacker News front page: 「Show HN: Looptap – A minimal game to waste your time (vasanthv.com)」. Exactly what the title says, a little game to waste your time XDDD

The QTE in my title is a "quick time event", a pretty common mechanic in today's games: a design where you have to react within the event's valid window (reacting too early or too late both fail).

Ingo Molnár's mega patch series to speed up Linux kernel builds

Saw 「Massive ~2.3k Patch Series Would Improve Linux Build Times 50~80% & Fix "Dependency Hell"」 on the Hacker News front page; the corresponding mailing list post is 「[PATCH 0000/2297] [ANNOUNCE, RFC] "Fast Kernel Headers" Tree -v1: Eliminate the Linux kernel's "Dependency Hell"」. Just look at that 0000/2297 prefix XDDD

The main goal is to improve the Linux kernel's compile time (as the project name 「Fast Kernel Headers」 suggests); he just didn't expect it to shrink this much. It also cleans up the dependency hell along the way (better maintainability).

The measured results are striking: from 231.34 +- 0.60 secs (15.5 builds/hour) down to 129.97 +- 0.51 secs (27.7 builds/hour), a 78% improvement when measured in builds per hour (27.7 / 15.5 ≈ 1.78). In CPU time it drops from 11,474,982.05 msec cpu-clock to 7,100,730.37 msec cpu-clock, which by the same builds-per-CPU-time measure is a 61.6% improvement (11,474,982 / 7,100,730 ≈ 1.616)...

It took more than a year of trying different approaches to get here. The earlier attempts did improve things, but the gains were too small for the amount of churn and he didn't feel they were worth posting; it was only the third approach that got there...

First attempt:

When I started this project, late 2020, I expected there to be maybe 50-100 patches. I did a few crude measurements that suggested that about 20% build speed improvement could be gained by reducing header dependencies, without having a substantial runtime effect on the kernel. Seemed substantial enough to justify 50-100 commits.

Second attempt:

But as the number of patches increased, I saw only limited performance increases. By mid-2021 I got to over 500 commits in this tree and had to throw away my second attempt (!), the first two approaches simply didn't scale, weren't maintainable and barely offered a 4% build speedup, not worth the churn of 500 patches and not worth even announcing.

Third attempt:

With the third attempt I introduced the per_task() machinery which brought the necessary flexibility to reduce dependencies drastically, and it was a type-clean approach that improved maintainability. But even at 1,000 commits I barely got to a 10% build speed improvement. Again this was not something I felt comfortable pushing upstream, or even announcing. :-/

Then, with the third approach looking promising, he kept at it and was surprised to find the later speedups were far bigger than expected:

But the numbers were pretty clear: 20% performance gains were very much possible. So I kept developing this tree, and most of the speedups started arriving after over 1,500 commits, in the fall of 2021. I was very surprised when it went beyond 20% speedup and more, then arrived at the current 78% with my reference config. There's a clear super-linear improvement property of kernel build overhead, once the number of dependencies is reduced to the bare minimum.

The patch series is enormous, but the compile-time improvement looks huge, so it should spark plenty of discussion... The news is still very fresh (the mail went out around 5am Taiwan time this morning), so it'll be worth checking back later to see what the other big names have to say...

Printing curl's internal connection timing

Came across an old discussion on Twitter that pointed to 「Timing Details With cURL」: you can hand curl a template and have it print its internal timing details:

time_namelookup: %{time_namelookup}
time_connect: %{time_connect}
time_appconnect: %{time_appconnect}
time_pretransfer: %{time_pretransfer}
time_redirect: %{time_redirect}
time_starttransfer: %{time_starttransfer}
----------
time_total: %{time_total}

One thing I noticed when actually trying it: with current curl versions you have to append \n to each line of the template to get the line breaks. Handy to keep around for debugging...
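
For instance, since the other sketches in this post are in TypeScript, here is the same template driven from a small Node script, with a newline appended to every line as noted above (the URL is just a placeholder):

import { execFileSync } from "node:child_process";

// Same fields as the template above, one per line, each followed by a newline.
const format =
  [
    "time_namelookup: %{time_namelookup}",
    "time_connect: %{time_connect}",
    "time_appconnect: %{time_appconnect}",
    "time_pretransfer: %{time_pretransfer}",
    "time_redirect: %{time_redirect}",
    "time_starttransfer: %{time_starttransfer}",
    "----------",
    "time_total: %{time_total}",
  ].join("\n") + "\n";

// -s silences the progress meter, -o /dev/null throws the body away,
// and -w prints the filled-in template on stdout.
const out = execFileSync(
  "curl",
  ["-s", "-o", "/dev/null", "-w", format, "https://example.com/"],
  { encoding: "utf8" }
);
console.log(out);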

QOI, a lossless image compression algorithm

Saw 「Lossless Image Compression in O(n) Time」 via Hacker News Daily. The author came up with a lossless image compression algorithm that compresses and decompresses extremely fast while not losing much compression ratio to PNG. The Hacker News discussion is also worth a look: 「QOI: Lossless Image Compression in O(n) Time (phoboslab.org)」.

The post mentions stb_image.h, which the game industry relies on all the time:

Yes, stb_image saved us all from the pains of dealing with libpng and is therefore used in countless games and apps. A while ago I aimed to do the same for video with pl_mpeg, with some success.

From the author's bio you can also see that games are his main line of work:

My name is Dominic Szablewski. I build games, experiment with JavaScript and occasionally tinker with low-level C.

Lossless image compression and decompression is something game developers use fairly often, so he wanted to see whether there was room for a better tool here, and he ended up building QOI out of four very simple per-pixel cases (sketched in code right after this list), which turned out to work remarkably well:

  • A run of the previous pixel
  • An index into a previously seen pixel
  • The difference to the previous pixel
  • Full rgba values
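
To make that list concrete, here is a heavily simplified sketch of how those four cases can cover a pixel stream in a single pass. It's written in TypeScript for readability and emits tagged objects instead of packed bytes, so it is not the real QOI bitstream format, just the idea behind it:

type Pixel = { r: number; g: number; b: number; a: number };

type Op =
  | { op: "run"; length: number }
  | { op: "index"; index: number }
  | { op: "diff"; dr: number; dg: number; db: number }
  | { op: "rgba"; px: Pixel };

const hash = (p: Pixel): number => (p.r * 3 + p.g * 5 + p.b * 7 + p.a * 11) % 64;

const samePixel = (a: Pixel, b: Pixel): boolean =>
  a.r === b.r && a.g === b.g && a.b === b.b && a.a === b.a;

function encode(pixels: Pixel[]): Op[] {
  const ops: Op[] = [];
  // Small table of recently seen pixels, addressed by a simple hash.
  const seen: Pixel[] = new Array(64).fill({ r: 0, g: 0, b: 0, a: 0 });
  let prev: Pixel = { r: 0, g: 0, b: 0, a: 255 };
  let run = 0;

  for (const px of pixels) {
    // 1. A run of the previous pixel.
    if (samePixel(px, prev)) {
      run++;
      continue;
    }
    if (run > 0) {
      ops.push({ op: "run", length: run });
      run = 0;
    }

    const idx = hash(px);
    const dr = px.r - prev.r;
    const dg = px.g - prev.g;
    const db = px.b - prev.b;

    if (samePixel(seen[idx], px)) {
      // 2. An index into a previously seen pixel.
      ops.push({ op: "index", index: idx });
    } else if (px.a === prev.a && [dr, dg, db].every((d) => d >= -2 && d <= 1)) {
      // 3. A small difference to the previous pixel (alpha unchanged).
      ops.push({ op: "diff", dr, dg, db });
    } else {
      // 4. A full rgba value.
      ops.push({ op: "rgba", px });
    }

    seen[idx] = px;
    prev = px;
  }
  if (run > 0) {
    ops.push({ op: "run", length: run });
  }
  return ops;
}

// Example: six pixels that exercise all four cases
// (three reds in a row, a near-red, an unrelated colour, then red again).
const red: Pixel = { r: 255, g: 0, b: 0, a: 255 };
console.log(
  encode([
    red,
    red,
    red,
    { r: 254, g: 1, b: 0, a: 255 },
    { r: 10, g: 200, b: 30, a: 255 },
    red,
  ])
);

The real format packs each of these cases into one or a few tagged bytes, which is where the actual compression comes from.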

As the Hacker News discussion points out, these same building blocks show up constantly in modern compression algorithms, so even though the author says he's not a compression guy, what he's using is actually quite solid...

Sticking to a single thread mainly avoids the complexity and overhead of threading. On the 「QOI Benchmark Results」 page you can see that, across file types, both compression and decompression speeds look great while the compression ratio stays close to libpng.

And the author notes he hasn't even reached for SIMD instructions yet, so presumably there's still quite a bit of headroom...