AWS Announces Block Express Volumes, a New Twist on EBS io2

Saw "AWS Announces General Availability of Amazon EBS io2 Block Express Volumes", which adds yet another new twist on top of EBS io2, called Block Express Volumes:

Today AWS announced general availability of io2 Block Express volumes that deliver up to 4x higher throughput, IOPS, and capacity than io2 volumes, and are designed to deliver sub-millisecond latency and 99.999% durability.

To push performance even higher, on R5b instances a single volume can reach 256k IOPS and 4,000 MB/s of throughput, and in a well-tuned setup (presumably with multiple volumes) you can reach 260k IOPS (a little more) and 7,500 MB/s (nearly double) of throughput:

Using R5b instances customers can now provision a single io2 volume with up to 256,000 IOPS, 4000 MB/s of throughput, and storage capacity of 64 TiB.

R5b instances are well-suited to run business-critical and storage-intensive applications as they offer the highest EBS-optimized performance of up to 260,000 IOPS and 7,500 MB/s throughput.
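
If you want to try it, provisioning looks like any other io2 volume; here is a minimal boto3 sketch, where the region and AZ are placeholders and actually getting 256k IOPS assumes Block Express is available to your account there:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",  # placeholder AZ
        VolumeType="io2",
        Size=65536,    # in GiB, i.e. the new 64 TiB ceiling
        Iops=256000,   # the new per-volume maximum
    )
    print(volume["VolumeId"])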

It's a throw-money-at-performance kind of product; if you actually need it, use it...

Generating SQLite Data Fast: One Billion Rows in Under a Minute

In "Towards Inserting One Billion Rows in SQLite Under A Minute", the author tries to write 1B rows into SQLite within one minute on a 2019 MacBook Pro, and several of the techniques are worth playing with. The machine's specs:

The machine I am using is MacBook Pro, 2019 (2.4 GHz Quad Core i5, 8GB, 256GB SSD, Big Sur 11.1)

The first version was written in Python and took 15 minutes to insert 10M rows:

In this script, I tried to insert 10M rows, one by one, in a for loop. This version took close to 15 minutes, sparked my curiosity and made me explore further to reduce the time.

Adding five PRAGMAs brought it to 100M rows in ten minutes:

The naive for loop version took about 10 minutes to insert 100M rows.
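
The post lists the exact PRAGMAs; they are the usual speed-over-safety settings, roughly along these lines (a sketch with an illustrative schema; these settings throw away durability, so only use them for a throwaway bulk load):

    import sqlite3

    con = sqlite3.connect("naive.db")
    # Every one of these trades safety for speed.
    con.executescript("""
        PRAGMA journal_mode = OFF;
        PRAGMA synchronous = 0;
        PRAGMA cache_size = 1000000;
        PRAGMA locking_mode = EXCLUSIVE;
        PRAGMA temp_store = MEMORY;
    """)
    con.execute("CREATE TABLE user (id INTEGER, area TEXT, age INTEGER, active INTEGER)")
    for i in range(100_000_000):
        con.execute("INSERT INTO user VALUES (?, ?, ?, ?)", (i, "86", 30, 1))
    con.commit()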

Batching brought it down to eight and a half minutes:

The batched version took about 8.5 minutes to insert 100M rows.
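
Batching just means building chunks of rows in Python and handing each chunk to SQLite in one executemany() call; a sketch with the same illustrative schema:

    import sqlite3

    con = sqlite3.connect("batched.db")
    con.execute("CREATE TABLE user (id INTEGER, area TEXT, age INTEGER, active INTEGER)")
    for start in range(0, 100_000_000, 100_000):
        rows = [(i, "86", 30, 1) for i in range(start, start + 100_000)]
        con.executemany("INSERT INTO user VALUES (?, ?, ?, ?)", rows)
    con.commit()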

Next, bringing out the classic workhorse PyPy cut it to two and a half minutes:

All I had to do was run my existing code, without any change, using PyPy. It worked and the speed bump was phenomenal. The batched version took only 2.5 minutes to insert 100M rows. I got close to 3.5x speed :)

Then came the switch to Rust; there's a fair amount of tuning discussion along the way, but skipping straight to the end... the final version did 100M rows in about 33 seconds:

I created a threaded version, where I had one writer thread that received data from a channel and four other threads which pushed data to the channel. This is the current best version which took about 32.37 seconds.
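
The Rust code is in the post; just to show the shape of that final design, here is the same one-writer/four-producer channel structure sketched in Python with queue.Queue (the schema and counts are made up, and in CPython the GIL means you won't see the Rust-level speedup; this only mirrors the architecture):

    import queue
    import sqlite3
    import threading

    TOTAL = 1_000_000
    channel = queue.Queue(maxsize=10_000)

    def producer(count):
        # Generate rows and push them into the channel.
        for i in range(count):
            channel.put((i, "86", 30, 1))

    def writer():
        # A single writer thread owns the SQLite connection.
        con = sqlite3.connect("threaded.db")
        con.execute("CREATE TABLE user (id INTEGER, area TEXT, age INTEGER, active INTEGER)")
        for _ in range(TOTAL):
            con.execute("INSERT INT" "O user VALUES (?, ?, ?, ?)".replace('"', ''), channel.get()) if False else con.execute("INSERT INTO user VALUES (?, ?, ?, ?)", channel.get())
        con.commit()

    threads = [threading.Thread(target=producer, args=(TOTAL // 4,)) for _ in range(4)]
    threads.append(threading.Thread(target=writer))
    for t in threads:
        t.start()
    for t in threads:
        t.join()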

Wherever PyPy is an option, it's worth considering...

Chromium-Based Browsers' Unfair Optimization for Google Search

A link that's been sitting in a tab for a while (I forget where I saw it), about Chromium-based browsers optimizing specifically for Google Search: "Google’s unfair performance advantage in Chrome".

The author found that Chromium preemptively opens an HTTPS connection to the search engine, which substantially cuts the time needed to establish an HTTPS connection: the DNS query, the TCP handshake, and the TLS handshake:

I was looking for something else when I stumbled upon a feature called PreconnectToSearch. When enabled, the feature preemptively opens and maintains a connection to the default search engine.
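
To get a feel for how much a pre-opened connection saves, here is a rough Python sketch that times the three steps the preconnect skips; the host is just an example:

    import socket
    import ssl
    import time

    host = "www.example.com"  # any HTTPS host
    t0 = time.monotonic()
    addr = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)[0][4]        # DNS query
    sock = socket.create_connection(addr[:2], timeout=5)                        # TCP handshake
    tls = ssl.create_default_context().wrap_socket(sock, server_hostname=host)  # TLS handshake
    print(f"cold connection setup: {(time.monotonic() - t0) * 1000:.0f} ms")
    tls.close()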

The problem is that this feature is only enabled for Google Search:

There’s just one small catch: Chromium checks the default search engine setting, and only enables the feature when it’s set to Google Search.

You can see the code in question in search_engine_preconnector.cc (the HEAD version at the time):

// Feature to limit experimentation to Google search only.
const base::Feature kPreconnectToSearchNonGoogle{
    "PreconnectToSearchNonGoogle", base::FEATURE_DISABLED_BY_DEFAULT};
}  // namespace features

The author notes that, yes, this feature can put a fair bit of load on a search engine, but it could be offered through an extension to OpenSearch Descriptions or a Well-Known URI; hardcoding it like this is simply unfair competition.

CloudFront Announces Support for ECDSA Certificates

Amazon CloudFront announced support for ECDSA certificates: "Amazon CloudFront now supports ECDSA certificates for HTTPS connections to viewers".

The main point is that the certificates are smaller, which makes HTTPS connection setup faster (both in transfer and in computation):

As a result, conducting TLS handshakes with ECDSA certificates requires less networking and computing resources making them a good option for IoT devices that have limited storage and processing capabilities.

I vaguely recall seeing figures long ago saying the computational cost of 256-bit EC is roughly that of 768~1024-bit RSA, but I can't find the source right now...

For now CloudFront only supports NIST P-256 (secp256r1, also known as prime256v1):

Starting today, you can use Elliptic Curve Digital Signature Algorithm (ECDSA) P256 certificates to negotiate HTTPS connections between your viewers and Amazon CloudFront.
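
To use it you need an ECDSA certificate in ACM; as a sketch, generating a P-256 private key and a CSR for a CA to sign with the Python cryptography package looks like this (the domain name is a placeholder):

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    key = ec.generate_private_key(ec.SECP256R1())  # secp256r1 / prime256v1
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")]))
        .sign(key, hashes.SHA256())
    )
    print(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ).decode())
    print(csr.public_bytes(serialization.Encoding.PEM).decode())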

But NIST P-256 has long been criticized; in "SafeCurves: choosing safe curves for elliptic-curve cryptography" you can see that the efficiency claims behind NIST's design choices don't actually hold up:

Subsequent research (and to some extent previous research) showed that essentially all of these efficiency-related decisions were suboptimal, that many of them actively damaged efficiency, and that some of them were bad for security.

That said, the standards have converged on NIST P-256, NIST P-384, and NIST P-521 (mainly due to CA/Browser Forum constraints), so you can't get certificates for other curves issued anyway; for now I'll probably keep watching from the sidelines...

Performance of MySQL on ZFS vs. ext4

Percona's "MySQL/ZFS Performance Update" benchmarks ZFS once again, this time with fairly recent software, though note that the ZFS version here is still not the latest:

ZFS 0.8.6-1 is not bleeding edge, there have been more than 1700 commits since and after 0.8.6, the ZFS release number jumped to 2.0. The big addition included in the 2.0 release is native encryption.

The machines are in the cloud (on Azure); I'm not familiar with Azure's instance types, but judging by the RAM and CPU counts they don't look like top-spec machines:

Benchmark host
Standard D2ds_v4 instance
2 vCpu, 8GB of Ram and 75 GB of temporary storage
Debian Buster

Database host
Standard E4-2ds-v4 instance
2 vCpu, 32GB of Ram and 150GB of temporary storage
256GB SSD Premium (SSD Premium LRS P15 – 1100 IOPS (3500 burst), 125 MB/s)
Debian Buster
Percona server 8.0.22-13

The results look decent (the charts are in the original post).

Looking at the test configuration, it seems only the compression side was tested; there's nothing on how snapshots or other features would affect performance, but at least the baseline looks pretty good?

Learning All Sorts of Tricks from an Article on Tuning an HTTP Server

In "Extreme HTTP Performance Tuning: 1.2M API req/s on a 4 vCPU EC2 Instance", the author demonstrates all sorts of tricks for tuning an HTTP server.

The Hacker News discussion is also quite interesting: "Extreme HTTP Performance Tuning (talawah.io)".

Although it's about an HTTP server, many of the techniques can be pulled out and used on their own.

The big item worth calling out is the "Speculative Execution Mitigations" section; the author explains that unless you really know what you're doing, you shouldn't turn these security fixes off:

You should probably leave the mitigations enabled for that system.
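
For reference, you can check which mitigations a host is actually running before deciding anything; on Linux the kernel exposes the state under sysfs:

    # Print each speculative-execution vulnerability the kernel knows about
    # together with its mitigation state (Linux only).
    from pathlib import Path

    for entry in sorted(Path("/sys/devices/system/cpu/vulnerabilities").iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")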

The author decided to turn them off on the premise that AWS offers AWS Nitro Enclaves, but I'd recommend doing this only on *.metal instances, so you can be sure no other AWS account's code is running on the same machine.

The tests disabled a truckload of mitigations and got a 28% performance boost:

Disabling these mitigations gives us a performance boost of around 28%.

That's quite a bit more than expected, and for people running Intel platforms in their own data centers, it's very tempting...

Trying Out Cloudflare's Argo Tunnel

Cloudflare announced that Argo Tunnel is now free for everyone, and renamed it Cloudflare Tunnel while they were at it: "A Boring Announcement: Free Tunnels for Everyone".

Starting today, we’re excited to announce that any organization can use the secure, outbound-only connection feature of the product at no cost. You can still add the paid Argo Smart Routing feature to accelerate traffic.

As part of that change (and to reduce confusion), we’re also renaming the product to Cloudflare Tunnel. To get started, sign up today.

Cloudflare Tunnel works like ngrok: you run an agent on the client machine that connects out to Cloudflare's (or ngrok's) servers, and external traffic hitting those servers is relayed over this pre-established connection to the local service; the common use case is of course an HTTP(S) server.

It used to be a paid feature, and most ordinary users probably don't need it; I'm not sure what the intent is behind giving it away for free...

Still, since it's free now, I spent some time testing it. ngrok is simpler to set up; Cloudflare's cloudflared is considerably more involved, but the documentation is reasonably clear, so just follow along.

Anyway, some things get easier with Cloudflare Tunnel. For example, some tiny VPSes share an IPv4 address and have no IPv6 address; with cloudflared you can punch back in and serve traffic from them. Likewise, machines behind NAT can be reached easily the same way.

By the way, blog.gslin.org is now running on cloudflared; the official ARM64 binary on an EC2 t4g looks fine so far. And whereas nginx used to see only Cloudflare's own IPs, with these two lines added it now gets the real client IP address:

    set_real_ip_from 127.0.0.1;
    real_ip_header X-Forwarded-For;
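
Both directives come from ngx_http_realip_module; note that anything on the box can connect from 127.0.0.1 and set X-Forwarded-For, so this setup assumes cloudflared is the only thing proxying to nginx locally.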

I'll run it for a while and see how it goes...

Amazon EC2 Auto Scaling Supports Warm Pools

A new EC2 feature: "Amazon EC2 Auto Scaling introduces Warm Pools to accelerate scale out while saving money".

The key point is just this: prepare the instances ahead of time, then shut them down and park them in the stopped state:

Additionally, Warm Pools offer a way to save compute costs by placing pre-initialized instances in a stopped state.
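
Setting one up is a single API call on an existing Auto Scaling group; a boto3 sketch, where the group name and sizes are placeholders:

    import boto3

    asg = boto3.client("autoscaling", region_name="us-east-1")
    asg.put_warm_pool(
        AutoScalingGroupName="my-asg",  # placeholder group name
        PoolState="Stopped",            # park pre-initialized instances stopped
        MinSize=2,                      # always keep at least two instances warmed
    )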

In theory it can be as fast as 30 seconds:

Now, these applications can start pre-initialized, stopped instances to serve traffic in as low as 30 seconds.

Still, even a stopped instance has to check for a newer version of the application when it boots... From what I can tell so far, the win is presumably in speeding up the EBS preparation time?

GTA Online Ships an Official Fix That Greatly Improves Load Times

Saw this in "GTA Online load time fix released, shaves off actual minutes of waiting for some". I previously covered GTA Online's slow startup in "GTA 的啟動讀取效能問題" (my earlier post on GTA's loading performance problem), and the official fix is now out: "GTAV Title Update 1.53 Notes (PS4 / Xbox One / PC)".

Here are a few data points from the Reddit thread "Loading Times Have FINALLY been patched - Discussion Thread".

This report's improvement is about the same ratio as the earlier workaround patch:

Insane. GTA menu -> GTA: Online.

Dropped from 7 minutes to 1:57

i7-2600k, GTX1070, 16GB RAM and the game is on HDD.

This one is a bit wild; that's around 90%, right?

Dropped from 5-8 minutes to 35 seconds

This one is roughly 70%~80%:

Loading time 2m 20s for online directly from steam. Before it was like 8-10 minutes for me. Damn

Edit: 50s for story mode. 35s from story mode to online. So it seems it's still faster to load into online from story mode.

This one is also around 70%:

From 4-5 minutes to a minute and 22 seconds. Y e s p l e a s e

And apparently the PS4 version was affected the same way all along?

Currently tested on PS4, from main menu to online: 3min 45 sec. From story mode to online: 1min 20sec (😩 i can't tell for sure)

Overall it looks positive; people have waited a very long time for this. It also shows that the original workaround patch nailed the problem quite precisely, since the official fix isn't much faster.

Now to keep watching the libc side of the story...

Cloudflare Tries ARM Servers Again

Back in 2018 I wrote about Cloudflare's progress with ARM servers: "Cloudflare 用 ARM 當伺服器的進展..." (my earlier post on Cloudflare's progress using ARM for servers). There wasn't much public news after that, until "ARMs Race: Ampere Altra takes on the AWS Graviton2" a few days ago finally explained why:

By the time we completed porting our software stack to be compatible with ARM, Qualcomm decided to exit the server business.

So the testing was mostly done and Cloudflare's own software stack had been ported, but Qualcomm decided to exit the business, leaving no machines to use...

This second foray into ARM brings to mind the recent Apple M1, which showed everyone what ARM entering the desktop and laptop space can look like...

This time Cloudflare chose the Ampere Altra, a platform based on the Neoverse N1; the other well-known product on this platform is AWS's Graviton2, so that's what it gets compared against.

From the numbers in the post, the Ampere Altra has 25% more cores (64 vs. 80) and a 20% higher clock (2.5 GHz vs. 3.0 GHz). The benchmark results swing both ways, differing anywhere from 10% to 40%.

The odd one out is Brotli level 9, which tests notably worse (while levels 8 and 10 are both normal).

According to Cloudflare, they don't actually use Brotli level 7 and above, but since the anomaly showed up, they still took the time to find the root cause:

Although we do not use Brotli level 7 and above when performing dynamic compression, we decided to investigate further.
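
The quality level they are talking about is just a parameter on the encoder; a quick sketch with the Python brotli bindings to compare levels (the payload is made up):

    import brotli  # pip install brotli

    data = b"a fairly compressible payload " * 4096
    for quality in (4, 7, 9, 11):
        out = brotli.compress(data, quality=quality)
        print(f"quality={quality}: {len(out)} bytes")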

Digging in, they found it was related to page faults and pipeline backend stalls, but it can be worked around; once avoided, the Altra reaches a level similar to the Graviton2:

By analyzing our dataset further, we found the common underlying cause appeared to be the high number of page faults incurred at level 9. Ampere has demonstrated that by increasing the page size from 4K to 64K bytes, we can alleviate the bottleneck and bring the Ampere Altra at parity with the AWS Graviton2. We plan to experiment with large page sizes in the future as we continue to evaluate Altra.

Overall it looks positive, and if supply is stable there's a decent chance of switching? After all, the power the ARM platform saves is just too much to ignore, and with the M1's stunning PR effect for ARM, explaining the move is now much easier...