Some background on why Hacker News was slow a while back

Saw the discussion in "Ask HN: Is Hacker News slow for anyone else?", where dang (the Hacker News moderator) came out and explained things in comment 35157344:

All: our poor server is smoking today* so I've had to reduce the page size of comments. There are 1500+ comments in this thread but if you want to read more than a few dozen you'll need to page through them by clicking the More link at the bottom. I apologize!

Also, if you're cool with read-only access, just log out (edit: or use an incognito tab) and all will be fast again.

* yes, HN still runs on one core, at least the part that serves logged-in requests, and yes this will all get better someday...it kills me that this isn't done yet but one day you will all see

Another notable detail: Hacker News is written in Arc (a Lisp dialect), apparently without much attention to optimization. On top of that, Reddit happened to be down that day, which really did push people to refresh Hacker News a lot more often...

The speed difference between a 14KB page and a 15KB page

Saw "Why your website should be under 14kB in size" on Hacker News, with the discussion at "A 14kb page can load much faster than a 15kb page (endtimes.dev)". It explains why the speed difference between a 14KB page and a 15KB page is much larger than the difference between 15KB and 16KB:

What is surprising is that a 14kB page can load much faster than a 15kB page — maybe 612ms faster — while the difference between a 15kB and a 16kB page is trivial.

The cause is the TCP slow start algorithm:

This is because of the TCP slow start algorithm.

For web traffic, most current TCP slow start implementations begin with an initial window of 10 packets:

Most web servers TCP slow start algorithm starts by sending 10 TCP packets.

Combine that with 1500 bytes per packet and the header overhead, and you land at roughly 14KB:

The maximum size of a TCP packet is 1500 bytes.

This maximum is not set by the TCP specification, it comes from the ethernet standard

Each TCP packet uses 40 bytes in its header — 16 bytes for IP and an additional 24 bytes for TCP

That leaves 1460 bytes per TCP packet. 10 x 1460 = 14600 bytes or roughly 14kB!
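A quick back-of-the-envelope check of that arithmetic; a minimal sketch only, taking the 40-byte header figure from the article as quoted above:

#include <stdio.h>

int main(void)
{
    int initcwnd   = 10;            /* initial congestion window, in packets */
    int mtu        = 1500;          /* ethernet MTU, in bytes */
    int header     = 40;            /* per-packet header overhead, per the article */
    int per_packet = mtu - header;  /* 1460 bytes of payload per packet */

    /* 10 * 1460 = 14600 bytes: what fits into the first round trip */
    printf("%d bytes fit before slow start needs another round trip\n",
           initcwnd * per_packet);
    return 0;
}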

A similar design also shows up in HTTP/3 (from "QUIC Loss Detection and Congestion Control"):

Sending multiple packets into the network without any delay between them creates a packet burst that might cause short-term congestion and losses. Implementations MUST either use pacing or limit such bursts to the initial congestion window, which is recommended to be the minimum of 10 * max_datagram_size and max(2* max_datagram_size, 14720)), where max_datagram_size is the current maximum size of a datagram for the connection, not including UDP or IP overhead.
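Plugging typical numbers into that formula gives roughly the same ballpark; a small sketch, where 1200 bytes is QUIC's baseline datagram size and 1472 is a common value on a 1500-byte MTU path (both inputs are illustrative, not taken from the quote above):

#include <stdio.h>

/* The recommended burst limit from the quote above:
 * min(10 * max_datagram_size, max(2 * max_datagram_size, 14720)) */
static int quic_initial_window(int max_datagram_size)
{
    int ten = 10 * max_datagram_size;
    int low = 2 * max_datagram_size;
    int cap = low > 14720 ? low : 14720;
    return ten < cap ? ten : cap;
}

int main(void)
{
    printf("%d\n", quic_initial_window(1200));   /* 12000 */
    printf("%d\n", quic_initial_window(1472));   /* 14720 */
    return 0;
}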

A nice bit of trivia... though it doesn't help much with today's bloated pages, and for larger sites that push many requests over a single TCP connection, the slow start threshold was passed long ago anyway.

GTA's loading performance problem

A week has already gone by since this happened, so here's a recap of what went on...

It started with GTA Online being very slow to launch, and someone reverse engineered it all the way down, found the problem, and fixed it: "How I cut GTA Online loading times by 70%". The corresponding Hacker News discussion brings up other interesting things worth reading too: "How I cut GTA Online loading times by 70% (nee.lv)".

The author's machine is not bad, yet just launching GTA Online took six minutes. There was even an online poll collecting everyone's wait times, and plenty of people reported similar numbers.

Then the reverse engineering started. Observing the system showed the bottleneck was CPU, not network or disk I/O, so the author profiled it with Luke Stackwalker. Without debug symbols to group the samples, the results had to be interpreted by hand, and two problems stood out:

The first problem: the time was going into strlen(), and the call stack shows it being reached through sscanf().

Tracing back, this came from parsing a 10MB JSON file: because sscanf() pulls in strlen(), the entire 10MB of JSON gets scanned over and over (10MB at the start, shrinking as parsing progresses, so roughly 5MB per scan on average).
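A minimal sketch of the pattern (not the game's actual parser; the names are made up): on glibc, sscanf() wraps the input string in a stream and effectively takes its length first, so calling it once per token on a huge buffer rescans the whole remaining buffer every time:

#include <stdio.h>
#include <string.h>

static void parse_items(const char *buf, size_t len)
{
    size_t pos = 0;

    while (pos < len) {
        int value = 0, consumed = 0;

        /* O(remaining length) per item on glibc => O(n^2) overall */
        if (sscanf(buf + pos, "%d%n", &value, &consumed) != 1 || consumed == 0)
            break;
        pos += (size_t)consumed;

        /* skip separators; a real parser would store value here */
        while (pos < len && (buf[pos] == ',' || buf[pos] == ' '))
            pos++;
    }
}

int main(void)
{
    const char *data = "1, 2, 3, 4, 5";   /* imagine 10MB of this */
    parse_items(data, strlen(data));
    return 0;
}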

The second problem kicks in after the first one finishes, and judging by its nature it is tied to the same JSON processing: the parsed data gets pushed into an array, with each entry looking like this:

struct {
    uint64_t *hash;   /* hash of the item, used for the duplicate check */
    item_t   *item;   /* the parsed item itself */
} entry;

Pushing into an array is fine in itself, but the code also needs to check whether an entry is a duplicate, and it does so without any hash or tree structure. With roughly 63k entries, doing that with a plain array turns into an O(n^2) algorithm:

But before it’s stored? It checks the entire array, one by one, comparing the hash of the item to see if it’s in the list or not. With ~63k entries that’s (n^2+n)/2 = (63000^2+63000)/2 = 1984531500 checks if my math is right. Most of them useless. You have unique hashes why not use a hash map.
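A minimal sketch of the two approaches, with made-up names (the hashes array mirrors the hash field in the struct above): the linear scan is what the game does, the hash set is what the commenter suggests:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TABLE_SIZE 131072u              /* power of two, comfortably above 2 * 63k */

/* What the game does: scan every stored hash before each insert,
 * roughly (n^2 + n) / 2 comparisons in total for n entries. */
static bool seen_linear(const uint64_t *hashes, size_t count, uint64_t h)
{
    for (size_t i = 0; i < count; i++)
        if (hashes[i] == h)
            return true;
    return false;
}

/* What the commenter suggests: an open-addressing hash set makes each
 * check O(1) expected, so the whole pass becomes O(n). A slot value of
 * 0 marks "empty" (assuming 0 never occurs as a real hash in this sketch). */
static bool seen_hashset(uint64_t *table, uint64_t h)
{
    size_t i = (size_t)(h & (TABLE_SIZE - 1));

    while (table[i] != 0) {
        if (table[i] == h)
            return true;                /* duplicate */
        i = (i + 1) & (TABLE_SIZE - 1);
    }
    table[i] = h;                       /* first time seen: remember it */
    return false;
}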

In the PoC section the author describes how he solved both problems.

The proper fix for the first problem would be to fix the JSON parser, but that's too involved, so he solved it with a workaround: wrap strlen() and add a cache layer for long strings (a rough sketch follows the list below):

  • hook strlen
  • wait for a long string
  • “cache” the start and length of it
  • if it’s called again within the string’s range, return cached value
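A rough sketch of that cached strlen(), assuming made-up names and a single-entry cache; the real fix hooks the game's own strlen() by patching the binary, which is out of scope here:

#include <string.h>

static const char *cached_start;
static size_t      cached_len;

static size_t strlen_cached(const char *s)
{
    /* If s points inside the previously measured long string, answer
     * from the cache instead of rescanning up to the terminator. */
    if (cached_start && s >= cached_start && s < cached_start + cached_len)
        return cached_len - (size_t)(s - cached_start);

    size_t len = strlen(s);

    /* Only bother caching long strings (like the 10MB JSON buffer). */
    if (len > 1024 * 1024) {
        cached_start = s;
        cached_len   = len;
    }
    return len;
}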

For the second problem he simply skips the duplicate check entirely, since the data contains no duplicates to begin with:

And as for the hash-array problem, it’s more straightforward - just skip the duplicate checks entirely and insert the items directly since we know the values are unique.

Overall launch time dropped from six minutes to one minute fifty seconds. Still on the slow side, but it counts as a major improvement to GTA Online's startup time.

The story doesn't end there, though. Someone kept digging and found this isn't the first time sscanf() has been a performance landmine: a YAML parser hit the same problem: "Parsing can become accidentally quadratic because of sscanf", which also made it onto Hacker News: "Parsing can become accidentally quadratic because of sscanf (github.com/biojppm)".

That in turn surfaced a question someone had asked on Stack Overflow six years earlier: "Why is glibc's sscanf vastly slower than fscanf on Linux?".

Someone also compiled a list of which C libraries are affected; presumably they all implemented the same algorithm:

JdeBP 3 days ago

I found this while making a collection of what C implementation does what at https://news.ycombinator.com/item?id=26298300.

There are two basic implementation strategies. The BSD (FreeBSD and OpenBSD and more than likely NetBSD too), Microsoft, GNU, and MUSL C libraries use one, and suffer from this; whereas the OpenWatcom, P.J. Plauger, Tru64 Unix, and my standard C libraries use another, and do not.

The 2002 report in the comp.lang.c Usenet newsgroup (listed in that discussion) is the earliest that I've found so far.

Worth following up on the later fixes (both in GTA Online and in the various libc implementations).

Android plans to disable JavaScript for Data Saver users on 2G networks

Saw this in "Chrome for Android may start disabling JavaScript on 2G connections", which cites XDA's "Google Chrome on Android to disable JavaScript automatically on 2G connections".

The plan can be seen in "Disable scripts for Data Saver users on slow connections": with Data Saver turned on, Chrome will disable JavaScript and show the user a notice:

If a Data Saver user is on a 2G-speed or slower network according to the NetInfo API, Chrome disables scripts and sends an intervention header on every resource request. Users are shown a UI at the bottom of the screen indicating the page has been modified to save data. Users can enable scripts on the page by tapping “Show original” in the UI.

The corresponding code change is in "Enabled NoScript Preview feature by default on Android".

This would actually cut out a lot of ads; I'd rather it were enabled globally XDDD

ALB now supports Slow Start

I had asked AWS for this feature back in the ELB Classic days, and ALB finally supports it: "Application Load Balancer Announces Slow Start Support for its Load Balancing Algorithm".

Application Load Balancers now support a slow start mode that allows you to add new targets without overwhelming them with a flood of requests. With the slow start mode, targets warm up before accepting their fair share of requests based on a ramp-up period that you specify.

The duration is configurable, from 30 seconds to 15 minutes:

Slow start mode can be enabled by target group and can be configured for a duration of 30 seconds to 15 minutes. The load balancer linearly increases the number of requests sent to a new target in a target group up to its fair share during the slow start ramp-up window.
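As a toy model only (not ALB's actual internals), the ramp described above amounts to a newly registered target's share of traffic growing linearly from zero to its fair share over the configured duration:

/* share of the target's fair traffic at a given time since registration */
static double target_share(double seconds_since_registered,
                           double slow_start_duration)
{
    if (seconds_since_registered >= slow_start_duration)
        return 1.0;                     /* full fair share */
    return seconds_since_registered / slow_start_duration;
}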

From past experience, this is something you really want when running PHP (I used to configure it on F5 appliances). Other languages behave differently and may not need it as much.

Mainly because PHP compiles and caches scripts as requests come in. So when a machine has just come up, you want to leave the CPU to the opcache, letting the frequently used pages get compiled and cached, rather than letting a flood of connections pour in, which would hurt the user experience... (the point is to avoid pegging the CPU at 100% for a long stretch, where every connection takes forever to finish)

With Slow Start available on AWS, auto scaling should go a lot more smoothly...

The MySQL edition of Amazon Aurora can now publish its various logs to CloudWatch Logs...

Just today I was asked whether slow queries can be dug out of Amazon Aurora (MySQL-Compatible Edition), and it reminded me that this feature was announced a few days ago: "Amazon Aurora Publishes General, Slow Query and Error Logs to Amazon CloudWatch".

You can now configure the MySQL-compatible edition of Amazon Aurora to publish general logs, slow query logs, and error logs to Amazon CloudWatch Logs. Previously, you could only publish audit logs.

It looks like you have to enable it explicitly (CloudWatch Logs isn't free, after all XD), but given the rate and volume at which these logs are produced, it should be fine...

Facebook fined in South Korea for being too slow???

Saw the news "South Korea fines Facebook $369K for slowing user internet connections", which describes Facebook's rerouting behavior:

The Korea Communications Commission (KCC) began investigating Facebook last May and found that the company had illegally limited user access, as reported by ABC News. Local South Korean laws prohibit internet services from rerouting users’ connections to networks in Hong Kong and US instead of local ISPs without notifying those users. In a few cases, such rerouting slowed down users’ connections by as much as 4.5 times.

Routing users to servers in Hong Kong or the US without telling them sounds like something a GeoDNS setup plus Facebook's CDN architecture would do? But the original report carries another accusation:

The KCC probed claims that Facebook intentionally slowed access while it negotiated network usage fees with internet service providers.

The South Korean regulator also refused to accept that the notice in the terms of use counts as valid disclosure:

Facebook said it did not violate the law in part because its terms of use say it cannot guarantee its services will operate without delays or interference. KCC officials rejected that argument, saying the terms were unfair. It recommended the company amend its terms of use.

At this point it looks like this is headed for a lawsuit?

Things to watch out for when searching CHAR/VARCHAR (string) columns in MySQL

When a MySQL column is CHAR or VARCHAR, remember to write the search condition as a string, not a number. In other words, avoid SQL queries like this:

SELECT * FROM foo WHERE `column_string` = 123456;

The reason is that even if column_string has a B-tree index, this query cannot use that index to speed things up.

That's because, besides the obvious '123456', many other values also match:

mysql> SELECT 123456 = '0123456';
+--------------------+
| 123456 = '0123456' |
+--------------------+
|                  1 |
+--------------------+
1 row in set (0.00 sec)

mysql> SELECT 123456 = ' 123456';
+--------------------+
| 123456 = ' 123456' |
+--------------------+
|                  1 |
+--------------------+
1 row in set (0.00 sec)

This leaves the index with no way to help.

But if the column itself is numeric (INT/BIGINT), searching with a string is fine: MySQL converts the string to a number first and then compares, so the index still gets used.

In short:

  • When INT/BIGINT will do, don't store the data as CHAR/VARCHAR.
  • When a CHAR/VARCHAR column is used as a search condition, pass the condition as a string (unless you know exactly what you're doing); see the example below.
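A minimal example, reusing the hypothetical foo / column_string names from above and assuming column_string has a B-tree index; running EXPLAIN on the two forms shows whether the index is used:

-- Cannot use the index: every row's column_string has to be cast and compared.
SELECT * FROM foo WHERE `column_string` = 123456;

-- Can use the index: the constant matches the column's type.
SELECT * FROM foo WHERE `column_string` = '123456';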

A problem I fixed today while getting called in as the relief pitcher...