Improving Wikipedia's JavaScript to Shave 300ms off Blocking Time

Saw 「300ms Faster: Reducing Wikipedia's Total Blocking Time」 on the Hacker News front page. The author, Nicholas Ray, is an engineer at the Wikimedia Foundation; although the post is on his personal blog, it's semi-official in nature... The article covers two improvements, both related to front-end JavaScript.

The author used in-browser profiling to generate flame graphs, read them to find the big problem areas, and then looked for opportunities to improve.

Looking at the end result first: the first fix cut blocking time by about 200ms, and the second fix cut it by about 80ms:

The first improvement came from the flame graph showing that l._enable was eating a lot of blocking time:

The author found that the cause was find() selecting all the links (a elements) and then binding an event handler to every single one of them, which created the performance problem:

The .on("click") call attached a click event listener to nearly every link in the content so that the corresponding section would open if the clicked link contained a hash fragment. For short articles with few links, the performance impact was negligible. But long articles like ”United States” included over 4,000 links, leading to over 200ms of execution time on low-end devices.

But this turned out to be redundant code, already handled elsewhere, so the fix was simple: removing it immediately shaved off 200ms:

Worse yet, this behavior was unnecessary. The downstream code that listened to the hashchange event already called the same method that the click event listener called. Unless the window’s location already pointed at the link’s destination, clicking a link called the checkHash method twice — once for the link click event handler and once more for the hashchange handler.

The second improvement targeted the blocking time consumed by initMediaViewer; from the code you can see the problem is similar, a loop attaching event handlers to every matching element:

The fix here is event delegation: attach the handler to a parent element, so only one handler is needed, plus a check that the element where the event originated matches the condition. This greatly reduces the cost of attaching handlers.

This is a fairly common technique; for example, when a table would otherwise need handlers on many td elements, you attach a single handler to the table instead and add a matching check there, as in the sketch below.
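Here's a minimal sketch of the two patterns in TypeScript-flavored DOM code; the two blocks are alternatives, and the "table.data" selector and handleCellClick are made-up names for illustration, not anything from the actual Wikipedia patch:

function handleCellClick(cell: HTMLTableCellElement): void {
  console.log("clicked cell:", cell.textContent);
}

const table = document.querySelector<HTMLTableElement>("table.data");

// The slow pattern: one addEventListener call per <td>, so the cost grows
// with the number of cells (the same shape as the Wikipedia link problem).
table?.querySelectorAll("td").forEach((td) => {
  td.addEventListener("click", () => handleCellClick(td));
});

// Event delegation instead: a single listener on the <table>. closest()
// walks up from whatever was actually clicked, so nested content inside a
// cell still matches, and the contains() check guards against nested tables.
table?.addEventListener("click", (event) => {
  const root = event.currentTarget as HTMLTableElement;
  const cell = (event.target as HTMLElement | null)?.closest("td");
  if (cell && root.contains(cell)) {
    handleCellClick(cell);
  }
});

A nice side effect of delegation is that cells added to the table later are covered automatically, since the single listener never needs to be re-attached.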

This is a pretty textbook profiling workflow: pull up real-world data, then tune the parts with the biggest impact.

KeyDB: Using Multithreading to Improve Redis Performance

Saw a multithreaded fork of Redis on Hacker News: 「KeyDB – A Multithreaded Fork of Redis (keydb.dev)」; the official site is at 「KeyDB - The Faster Redis Alternative」.

This post is mainly to record the red flags raised on Hacker News, so it'll be easier to find them the next time I look this up.

In 36022425, someone who jumped in and used it found it didn't work out; they ended up implementing the features they needed on the application side, with stock Redis remaining as the backend:

To counter what the other active business said, we tried using KeyDB for about 6 months and every fear you concern you stated came true. Numerous outages, obscure bugs, etc. Its not that the devs aren’t good, its just a complex problem and they went wide with a variety of enhancements. We changed client architecture to work better with tradition Redis. Combined with with recent Redis updates, its rock solid and back to being an after-thought rather than a pain point. Its only worth the high price if it solves problems without creating worse ones. I wish those guys luck but I wont try it again anytime soon.

* its been around 2 years since our last use as a paying customer. YMMV.

Also, searching the project for 「is:open is:issue label:"Priority 1"」 turns up results that don't look good; 36021108 mentions this problem:

Filed July, eventually marked priority 1 in early December, not a single comment or signs of fix on it since. That doesn't look good at all.

Then 36020184 mentions that since Snap acquired them, not much attention has gone to the open source project side of things:

I think I'll stay far away from this thing anyway. Numerous show-stopper bug reports open and there hasn't been a substantial commit on the main branch in at least a few weeks, and possibly months. I'll be surprised if Snap is actually paying anybody to work on this.

llama.cpp Now Supports GPUs

I reinstalled my desktop a while back, so I've been rebuilding a lot of environments... one of them is llama.cpp, and when I went to the project page I was surprised to find these two new features:

OpenBLAS support
cuBLAS and CLBlast support

This means GPU acceleration is now possible, so I followed the instructions and built a version to test.

After building it I ran the 7B model, which looked quite a bit faster, then switched to the 13B model, and was able to offload all 40 layers onto a 3060 (12GB version) GPU:

./main -m models/13B/ggml-model-q4_0.bin -p "Building a website can be done in 10 simple steps:" -n 512 -ngl 40

From the log you can see all 40 layers are on the GPU, using about 7.5GB:

llama.cpp: loading model from models/13B/ggml-model-q4_0.bin
llama_model_load_internal: format     = ggjt v2 (latest)
llama_model_load_internal: n_vocab    = 32000
llama_model_load_internal: n_ctx      = 512
llama_model_load_internal: n_embd     = 5120
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 40
llama_model_load_internal: n_layer    = 40
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: ftype      = 2 (mostly Q4_0)
llama_model_load_internal: n_ff       = 13824
llama_model_load_internal: n_parts    = 1
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size =  90.75 KB
llama_model_load_internal: mem required  = 9807.48 MB (+ 1608.00 MB per state)
llama_model_load_internal: [cublas] offloading 40 layers to GPU
llama_model_load_internal: [cublas] total VRAM used: 7562 MB
llama_init_from_file: kv self size  =  400.00 MB

I also tried the 30B model, but could only offload 28 layers (out of 60 total); any more and the GPU memory couldn't handle it.

Still, GPU support is a big step forward; this version only cuts the runtime roughly in half, and I don't know how much room there is left to tune...

Ruff: A Python Linter Written in Rust

Saw 「Astral (astral.sh)」 on Hacker News Daily; the site is at 「Astral: Next-gen Python tooling」.

The Ruff project mentioned there is a Python linter written in Rust, with speed as its main selling point; the benchmark on the official site shows the gap:

Since it's part of the Python ecosystem, you can install the prebuilt package directly with pip instead of compiling it yourself with cargo (though you can still build it with cargo if you want).

I tried it on feedgen and it really is fast, which makes it much harder to complain about... Note that it creates a .ruff_cache/ directory, so you'll want to add that to .gitignore.

With the default settings I first scanned for unused imports and fixed those; I'll look at the rest when I get the chance.

Speeding Up llama.cpp's Loading Time

Saw 「Llama.cpp 30B runs with only 6GB of RAM now (github.com/ggerganov)」 on Hacker News; the original pull request is at 「Make loading weights 10-100x faster #613」.

The PR's author, Justine Tunney, mentions in the PR that she changed the model file format so it can be loaded with mmap(), dramatically reducing the up-front read time (since it becomes lazy-loading style); this also lets the system use the page cache, avoiding the double-buffering problem:

This was accomplished by changing the file format so we can mmap() weights directly into memory without having to read() or copy them thereby ensuring the kernel can make its file cache pages directly accessible to our inference processes; and secondly, that the file cache pages are much less likely to get evicted (which would force loads to hit disk) because they're no longer competing with memory pages that were needlessly created by gigabytes of standard i/o.

This reminds me that in the database world, PostgreSQL also makes use of mmap(), which is a somewhat similar idea.

In a comment there, Justine Tunney also mentions an unexpected observation: the amount of model data actually touched during computation is surprisingly small. Testing with a simple prompt, she found that of the 20GB 30B model file, only about 1.6GB was actually used on her Intel machine:

If I run 30B on my Intel machine:

[...]

As we can see, 400k page faults happen, which means only 1.6 gigabytes ((411522 * 4096) / (1024 * 1024)) of the 20 gigabyte weights file actually needed to be used.

She's still wondering whether this is a bug in her change, but for now she doesn't think so, and can't see one either:

Now, since my change is so new, it's possible my theory is wrong and this is just a bug. I don't actually understand the inner workings of LLaMA 30B well enough to know why it's sparse. Maybe we made some kind of rare mistake where llama.cpp is somehow evaluating 30B as though it were the 7B model. Anything's possible, however I don't think it's likely. I was pretty careful in writing this change, to compare the deterministic output of the LLaMA model, before and after the Git commit occurred. I haven't however actually found the time to reconcile the output of LLaMA C++ with something like PyTorch. It'd be great if someone could help with that, and possibly help us know why, from more a data science (rather than systems engineering perspective) why 30B is sparse.

If it isn't a bug, this is actually a very interesting signal: it suggests these models could potentially be slimmed down further?

Some Background on Why Hacker News Was Slow a While Back

Saw the discussion at Ask HN: Is Hacker News slow for anyone else?, where dang (Hacker News's moderator) came out to explain in 35157344:

All: our poor server is smoking today* so I've had to reduce the page size of comments. There are 1500+ comments in this thread but if you want to read more than a few dozen you'll need to page through them by clicking the More link at the bottom. I apologize!

Also, if you're cool with read-only access, just log out (edit: or use an incognito tab) and all will be fast again.

* yes, HN still runs on one core, at least the part that serves logged-in requests, and yes this will all get better someday...it kills me that this isn't done yet but one day you will all see

Another notable point is that Hacker News is written in Arc (a Lisp), and it doesn't look like optimization was much of a consideration; on top of that, Reddit was also down that day, which really did drive people to hit and refresh Hacker News more often...

Ruby Introduces Yet Another JIT Implementation: RJIT

Saw 「RJIT #7448」 on Hacker News Daily, a new JIT implementation for Ruby.

This RJIT replaces the previous MJIT:

This PR replaces the current implementation of MJIT with a new JIT called "RJIT"

It has a few notable traits; one is that RJIT needs no compiler at either build time or runtime, because RJIT is implemented directly in Ruby:

RJIT uses a pure-Ruby assembler to generate native code

  • MJIT requires a C compiler at runtime. YJIT requires a Rust compiler at build time. RJIT doesn't require them.
  • This means that RJIT's warmup could be slower than YJIT, but it's still much faster than MJIT's.

Also worth noting: RJIT's author k0kubun and YJIT's author Maxime Chevalier-Boisvert are both Shopify employees, which shows how much of a pain point Ruby performance is for Shopify, enough that they decided to hire people to improve it themselves.

Back to the benchmarks run for RJIT: you can see it uses YJIT's benchmark suite, which, given the above, isn't too surprising.

Compared with the MJIT it replaces, RJIT does fine across the Headline suite, trades wins and losses in the Other suite, and loses on quite a few items in the Micro suite (relative to the previous two suites):

Overall it looks like progress, and it should land in the next Ruby release.

A Performance Comparison from pnpm: npm, yarn, and pnpm

While looking something up I found that pnpm maintains a comparison of the common JavaScript package managers: 「Benchmarks of JavaScript Package Managers」.

It tests nine scenarios; the first eight are exactly the combinations of with/without cache, with/without lockfile, and with/without node_modules (three binary factors, so eight combinations), and the last one measures update speed.

Since pnpm uses hard links, I'll set it aside for now and focus only on the difference between npm and yarn.

A few of the more important scenarios are (listed starting from the most important):

  • with cache, with lockfile, with node_modules: this is the typical day-to-day development scenario.
  • update: the time taken when packages have updates available.
  • with lockfile: this is the environment right after a fresh clone, as well as the CI environment.

In the day-to-day development scenario, npm is a bit faster than yarn, which surprised me; I knew npm had been improving its performance over the past few years, but I didn't expect it to pull ahead in benchmarks. This used to be one of yarn's big selling points, and it's gone now.

The second is the update (upgrade) scenario, where yarn is a bit faster than npm.

The third is the CI environment, or the environment right after a fresh clone, where being slower is usually acceptable as long as the gap isn't too big; here npm is a bit faster than yarn.

Either way, the gaps are nowhere near as large as they used to be; in terms of performance, the official npm no longer has any obvious weakness, and the performance argument for choosing yarn no longer holds.

AWS Launches Graviton3 Instance Types

Amazon EC2 has launched Graviton3-based instance types: 「New Graviton3-Based General Purpose (m7g) and Memory-Optimized (r7g) Amazon EC2 Instances」.

The first wave only includes the general-purpose m7g and the memory-optimized r7g; the compute-optimized c7g had already launched last May: 「AWS 推出 c7g 機種」.

For now they're only available in the US and European regions us-east-1, us-east-2, us-west-2, and eu-west-1; none of the Asian regions have them yet:

M7g and R7g instances are available today in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) AWS Regions in On-Demand, Spot, Reserved Instance, and Savings Plan form.

AWS claims up to 25% better performance than Graviton2's m6g & r6g, but I also checked the prices in us-east-1 and they're about 6% more expensive; going by the official numbers, that works out to roughly an 18% improvement in price-performance, which is a decent gain for anyone whose CPUs are actually running at full load:

Today I am happy to tell you about the newest Amazon EC2 instance types, the M7g and the R7g. Both types are powered by the latest generation AWS Graviton3 processors, and are designed to deliver up to 25% better performance than the equivalent sixth-generation (M6g and R6g) instances, making them the best performers in EC2.
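To spell out where that ~18% figure comes from (taking the claimed 25% performance gain together with the roughly 6% higher price observed above): 1.25 / 1.06 ≈ 1.18, i.e. about 18% more performance per dollar.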

The post mentions that one big architectural change in Graviton3 is the move from DDR4 to DDR5 memory, which raises memory bandwidth by up to 50%:

Both types of instances are equipped with DDR5 memory, which provides up to 50% higher memory bandwidth than the DDR4 memory used in previous generations.

Next is whether there's a plan to bring this down to the t series, something like a t5g; if so I might give it a try, but the machine running this blog is already on a three-year RI, and by the time that expires there may well be a Graviton4 or Graviton5...

Tiny11, a Slimmed-Down Windows 11

Tiny11 is a slimmed-down Windows 11 put together by NTDev: 「De-Bloated Windows 11 Build Runs on 2GB of RAM」. The corresponding discussion on HN is at 「De-Bloated Windows 11 Build Runs on 2GB of RAM (tomshardware.com)」.

It just uses around 8GB of space compared to the 20+GB that a standard installation does.

But there are some limitations, such as having to take care of keeping the system updated yourself:

This OS install “is not serviceable,” notes NTDev. “.NET, drivers and security definition updates can still be installed from Windows Update,” so this isn’t an install which you can set and forget.

Also, features installed through WinSxS (including languages) can no longer be installed:

Moreover, removing the Windows Component Store (WinSxS), which is responsible for a fair degree of Tiny11’s compactness, means that installing new features or languages isn’t possible.

But as I recall, removing WinSxS should break quite a few things? A system like this is probably fine for running CI or other fixed-purpose workloads, but for general use I'm not sure how much would get stuck...

Besides the smaller disk footprint, memory usage also drops significantly, since a pile of bloated software has been stripped out as well:

In testing, NTDev said that Tiny11 could “run great” on a system with just 2GB of RAM.