Learning all sorts of tricks from an HTTP server tuning article

In "Extreme HTTP Performance Tuning: 1.2M API req/s on a 4 vCPU EC2 Instance", the author demonstrates all sorts of tricks for tuning an HTTP server.

The discussion on Hacker News is also quite interesting: "Extreme HTTP Performance Tuning (talawah.io)".

Although the article is about HTTP servers, many of the techniques can be pulled out and used on their own.

The big item I want to single out is the "Speculative Execution Mitigations" section. The author is explicit that unless you really know what you are doing, you should not turn off these security-related fixes:

You should probably leave the mitigations enabled for that system.

The author decided to turn them off on the premise that AWS has rolled out AWS Nitro Enclaves, but I would suggest only doing this on *.metal machines, so you can be sure no other AWS account's code is running on the same host.

The tests turned off a whole truckload of mitigations and got roughly a 28% performance boost:

Disabling these mitigations gives us a performance boost of around 28%.

That is quite a bit more than I expected, and it is very tempting for people running Intel platforms in their own data centers...
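
If you want to see what a Linux machine currently reports before touching anything, a minimal sketch (assuming a kernel new enough to expose /sys/devices/system/cpu/vulnerabilities) could look like this:

#!/usr/bin/env python3
# Print the speculative execution mitigation status reported by the kernel,
# and whether the machine was booted with "mitigations=off" (the knob used
# in the article). Assumes Linux with the vulnerabilities sysfs directory.
import pathlib

vuln_dir = pathlib.Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    # Each file holds a one-line status such as "Mitigation: PTI" or "Vulnerable".
    print(f"{entry.name}: {entry.read_text().strip()}")

cmdline = pathlib.Path("/proc/cmdline").read_text()
print("booted with mitigations=off:", "mitigations=off" in cmdline)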

EC2's big-memory machines get new offerings again...

Amazon EC2 has rolled out some new instance offerings again: "Amazon EC2 High Memory Instances now available for on-demand usage".

And every time these machines are announced, SAP HANA is the only example mentioned; there is never anything else... Is this really the only system left in the industry that has never considered a distributed architecture? XDDD (I've never used it.)

SAP customers continue to leverage AWS as their platform of choice and innovation. Some are in the early stages of their SAP cloud journeys and are focused on executing their migration. Others have hardened their SAP systems on AWS and are innovating around their core business processes with advanced AWS services.

They also mention the 24TB machine; you can find u-24tb1.metal on the Amazon EC2 Instance Types page:

In 2018, we released High Memory Instances in response, which now offer up to 24TB of memory in a single instance.

However, you will notice the 24TB price does not show up on the Amazon EC2 On-Demand Pricing page. As mentioned earlier in "EC2 推出 18TB 與 24TB 的機器...", those machines require a three-year RI, so the 12TB machines released this time are the largest-memory instances you can rent on demand...

u-12tb1.112xlarge in us-east-1 costs USD$109.2/hour; if you want to play with one you can give it a spin — it should at least be able to handle whatever you throw at it XD

The seven-day retention limit on CPU credits for Amazon EC2 t3/t3a/t4g instances

On Twitter, a friend mentioned that t3-series machines keep their CPU credits for seven days after a stop.

The "CPU credits and baseline utilization for burstable performance instances" page explains that t3/t3a/t4g are designed to let you bank up to 24 hours' worth of credits.

The seven days in question comes from this part:

CPU credits on a running instance do not expire.

For T2, the CPU credit balance does not persist between instance stops and starts. If you stop a T2 instance, the instance loses all its accrued credits.

For T3 and T4g, the CPU credit balance persists for seven days after an instance stops and the credits are lost thereafter. If you start the instance within seven days, no credits are lost.

CPU credits on a running machine do not expire; they just stop accruing once you hit the maximum balance (listed in a table in the same document). For t2, the credits disappear as soon as you stop the instance, while t3/t3a/t4g keep them for seven days after a stop.

I had not noticed this point in the documentation before.

Also, when I was testing a self-hosted Sentry setup I switched instance types t3a.medium -> r5a.large -> t3a.medium, and the original CPU credits were still usable afterwards. It looks like CPU credits do not vanish just because the family type changes (though I am not sure whether that is undefined behavior...).
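
If you want to keep an eye on the balance yourself (say, before stopping an instance), a minimal sketch using boto3 and the CloudWatch CPUCreditBalance metric might look like this; the instance id and region are placeholders:

# Fetch the CPUCreditBalance metric for a burstable instance from CloudWatch.
# The instance id and region below are placeholders.
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-southeast-1")
now = datetime.datetime.utcnow()

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUCreditBalance",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - datetime.timedelta(hours=6),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])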

Amazon EC2 Auto Scaling supports Warm Pools

A new EC2 feature: "Amazon EC2 Auto Scaling introduces Warm Pools to accelerate scale out while saving money".

The key point is just this: prepare the machines ahead of time and then park them in the stopped state:

Additionally, Warm Pools offer a way to save compute costs by placing pre-initialized instances in a stopped state.

In theory it can be as fast as 30 seconds:

Now, these applications can start pre-initialized, stopped instances to serve traffic in as low as 30 seconds.

Still, even a stopped machine has to check for a newer build of your application when it starts up... From what I can tell, the main win should be cutting down the time needed to get EBS ready?
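
For reference, adding a warm pool to an existing Auto Scaling group is a single API call; a minimal boto3 sketch, with the group name, region and sizes as placeholders:

# Attach a warm pool of pre-initialized, stopped instances to an existing
# Auto Scaling group. The group name, region and sizes are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="ap-southeast-1")

autoscaling.put_warm_pool(
    AutoScalingGroupName="my-web-asg",
    PoolState="Stopped",          # keep the warm instances stopped to save compute cost
    MinSize=2,                    # always keep at least two pre-initialized instances around
    MaxGroupPreparedCapacity=10,  # upper bound on running + warm instances kept prepared
)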

Amazon EC2 can now copy AMIs (images) directly across regions

Amazon EC2 AMIs can now be copied across regions and partitions: "Amazon EC2 now allows you to copy Amazon Machine Images across AWS GovCloud, AWS China and other AWS Regions".

As the announcement mentions, before this feature, getting an identical image meant rebuilding it from scratch in each region:

Previously, to copy AMIs across these AWS regions, you had to rebuild the AMI in each of them. These partitions enabled data isolation but often made this copy process complex, time-consuming and expensive.

There are some limits: the image has to be 1TB or smaller, and it has to be staged in S3 along the way, but these limits should be fine for most people:

This feature provides a packaged format that allows AMIs of size 1TB or less to be stored in AWS Simple Storage Service (S3) and later moved to any other region.

Also, for now it is only available through the CLI, or by calling the API directly via the SDK; the web console does not seem to offer it yet:

This functionality is available through the AWS Command Line Interface (AWS CLI) and the AWS Software Development Kit (AWS SDK). To learn more about copying AMIs across these partitions, please refer to the documentation.
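
As a rough idea of what the SDK flow looks like, here is a minimal boto3 sketch that stores an AMI into an S3 bucket and then restores it elsewhere; the bucket names, image id and regions are placeholders, and the store/restore task calls and object naming should be double-checked against the documentation mentioned above:

# Package an AMI into an S3 object in the source region, then restore it as a
# new AMI in the destination region. Bucket names, image id and regions are
# placeholders; verify the exact API and object-key conventions in the docs.
import boto3

src = boto3.client("ec2", region_name="us-east-1")
dst = boto3.client("ec2", region_name="ap-southeast-1")

# 1. Store the AMI as an object in an S3 bucket.
store = src.create_store_image_task(
    ImageId="ami-0123456789abcdef0",
    Bucket="my-ami-transfer-source-bucket",
)
print("stored object key:", store["ObjectKey"])

# 2. After the object has been copied/replicated to a bucket the destination
#    region (or partition) can read, restore it as an AMI there.
restore = dst.create_restore_image_task(
    Bucket="my-ami-transfer-destination-bucket",
    ObjectKey=store["ObjectKey"],
    Name="my-restored-ami",
)
print("restored image id:", restore["ImageId"])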

EC2 finally supports access through a Serial Console...

Previously, when an Amazon EC2 machine was broken to the point it would not boot, all you could do was "look at" the console output, then detach the root volume, attach it to another machine to fix it, and attach it back (and if it still was not fixed, repeat...). Now you can finally work on it through the EC2 Serial Console: "Troubleshoot Boot and Networking Issues with New EC2 Serial Console".

There are some restrictions though. First, the instance must be based on the AWS Nitro System; you can check whether a family is Nitro on the "Amazon EC2 Instance Types" page, and the newer family types mostly are (for example t3/t3a/t4g are Nitro, but t2 is not):

EC2 Serial Console access is available for EC2 instances based on the AWS Nitro System. It supports all major Linux distributions, FreeBSD, NetBSD, Microsoft Windows, and VMWare.

Also, it is currently only supported in a handful of major regions, but the region my company mainly uses (Singapore) made the list, so we can try it next time something breaks:

  • US East (N. Virginia), US West (Oregon), US East (Ohio)
  • Europe (Ireland), Europe (Frankfurt)
  • Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore)

On Twitter I saw a few people saying things along the lines of "finally..."
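
If you want to script the connection, my understanding (worth verifying against the documentation) is that you push a one-time SSH public key through the EC2 Instance Connect API and then SSH to the regional serial-console endpoint; a minimal sketch, where the instance id, key path and endpoint format are assumptions to double-check:

# Push a short-lived SSH public key for the serial console, then print the ssh
# command to run. Instance id, key path and the endpoint format are assumptions
# to verify against the documentation.
import boto3

REGION = "ap-southeast-1"
INSTANCE_ID = "i-0123456789abcdef0"

eic = boto3.client("ec2-instance-connect", region_name=REGION)

with open("/home/me/.ssh/id_ed25519.pub") as f:
    public_key = f.read()

eic.send_serial_console_ssh_public_key(
    InstanceId=INSTANCE_ID,
    SerialPort=0,          # ttyS0, the port the serial console is attached to
    SSHPublicKey=public_key,
)

# The key is only valid for a short window, so connect right away.
print(f"ssh {INSTANCE_ID}.port0@serial-console.ec2-instance-connect.{REGION}.aws")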

AWS launches the X2gd family, an even cheaper option for memory

AWS has launched the X2gd family: ARM CPUs, fewer vCPUs, and smaller disks, but in exchange the price is lower: "New Amazon EC2 X2gd Instances – Graviton2 Power for Memory-Intensive Workloads".

Comparing two ARM-based machines at us-east-1 prices: the first is the new x2gd.medium, with only 1 vCPU + 59 GB SSD but 16 GB RAM, currently priced at $0.0835/hr.

The other is r6g.large, with 2 vCPU + 16 GB RAM and EBS only, at $0.1008/hr.

Then there is Intel's x1e.xlarge, with 4 vCPU + 122 GB RAM + 120 GB SSD at $0.834/hr; per GB of memory the price is in the same ballpark, though you get a bit less memory for the money.

Intel also has r5.large, with 2 vCPU + 16 GB RAM and EBS only, at $0.126/hr.

The last one is AMD's r5a.large, very similar to Intel's r5.large: 2 vCPU + 16 GB RAM, EBS only, at $0.113/hr.
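
Putting those numbers on a common per-GB-of-RAM basis makes the comparison easier to see; a quick sketch using the prices listed above:

# Normalize the us-east-1 hourly prices quoted above to price per GB of RAM.
instances = {
    "x2gd.medium": (0.0835, 16),
    "r6g.large":   (0.1008, 16),
    "x1e.xlarge":  (0.8340, 122),
    "r5.large":    (0.1260, 16),
    "r5a.large":   (0.1130, 16),
}

for name, (price, ram_gb) in instances.items():
    print(f"{name:12s} ${price / ram_gb:.5f} per GB-hour")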

The new X2gd family gives you an extreme "all I need is memory" option, and going by past experience Graviton2 is genuinely fast, so 1 vCPU may well be enough... At the very least, the PHP and MariaDB behind this blog run on t4g, and they look noticeably faster than when they were on a VPS :o

Cloudflare tries ARM servers again

Back in 2018 I wrote about Cloudflare's progress with ARM servers: "Cloudflare 用 ARM 當伺服器的進展...". After that there was not much public news, and it was only when "ARMs Race: Ampere Altra takes on the AWS Graviton2" came out a few days ago that the reason became clear:

By the time we completed porting our software stack to be compatible with ARM, Qualcomm decided to exit the server business.

So they had finished the evaluation and even ported Cloudflare's own software stack, but Qualcomm decided to exit the server business, leaving them with no machines to use...

Stepping back into ARM territory brings to mind Apple's M1 from a little while ago, which showed everyone what ARM can look like in desktops and laptops...

This time Cloudflare chose the Ampere Altra, a platform based on the Neoverse N1; the other well-known user of that platform is AWS's Graviton2, so that is what they compared it against.

The Ampere Altra has 25% more cores (64 vs. 80) and a 20% higher clock (2.5GHz vs. 3.0GHz). The benchmark results swing both ways, with gaps anywhere from 10% to 40%.

The odd one out, though, is the Brotli level 9 test, which is especially bad (while levels 8 and 10 look normal).

According to Cloudflare, they do not actually use Brotli level 7 or above, but since the result showed up in the benchmarks they still spent time tracking down the root cause:

Although we do not use Brotli level 7 and above when performing dynamic compression, we decided to investigate further.

Digging further, the problem turned out to be related to page faults and pipeline backend stalls, but it can be worked around; with the workaround the Altra reaches a level comparable to the Graviton2:

By analyzing our dataset further, we found the common underlying cause appeared to be the high number of page faults incurred at level 9. Ampere has demonstrated that by increasing the page size from 4K to 64K bytes, we can alleviate the bottleneck and bring the Ampere Altra at parity with the AWS Graviton2. We plan to experiment with large page sizes in the future as we continue to evaluate Altra.
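
Out of curiosity, you can get a rough local feel for this: the Python brotli module exposes the same quality levels, and resource.getrusage() reports the page faults each run incurs. This is just a toy reproduction of the idea, not Cloudflare's benchmark:

# Compress the same buffer at Brotli quality 8/9/10 and count the page faults
# each run incurs. A toy illustration only, not Cloudflare's benchmark.
# Requires the "brotli" package (pip install brotli).
import os
import resource
import time

import brotli

data = os.urandom(4 * 1024 * 1024) + b"some compressible text " * 200_000

print("system page size:", os.sysconf("SC_PAGE_SIZE"), "bytes")

for quality in (8, 9, 10):
    before = resource.getrusage(resource.RUSAGE_SELF)
    start = time.perf_counter()
    brotli.compress(data, quality=quality)
    elapsed = time.perf_counter() - start
    after = resource.getrusage(resource.RUSAGE_SELF)
    print(f"quality={quality}: {elapsed:.2f}s, "
          f"minor page faults={after.ru_minflt - before.ru_minflt}")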

Overall it still looks fairly positive; if supply turns out to be stable, there should be a decent chance of switching over? The power ARM can save is just too big, and with M1 giving ARM such an outsized PR boost, it is also much easier to explain these days...

Eventbrite's MySQL upgrade plan

In 2021 we get to see Eventbrite's MySQL upgrade plan: "MySQL High Availability at Eventbrite".

It looks like MySQL 5.1 blew up in early 2019, an upgrade was scheduled as a result, and by mid-2019 the systems had been upgraded to MySQL 5.7 (the Percona Server build):

Our first major hurdle was to get current with our version of MySQL. In July, 2019 we completed the MySQL 5.1 to MySQL 5.7 (v5.7.19-17-log Percona Server to be precise) upgrade across all MySQL instances.

It also looks like they run directly on EC2, though I am not sure about the storage problem mentioned here: did they really hit the EBS size ceiling? The commonly used gp2 and gp3 volumes both top out at 16TB, so I am not sure whether they were actually about to run out:

Not only was support for MySQL 5.1 at End-of-Life (more than 5 years ago) but our MySQL 5.1 instances on EC2/AWS had limited storage and we were scheduled to run out of space at the end of July. Our backs were up against the wall and we had to deliver!

As part of the 5.7 upgrade, they also took the chance to convert the primary keys that were INT to BIGINT:

As part of the cut-over to MySQL 5.7, we also took the opportunity to bake in a number of improvements. We converted all primary key columns from INT to BIGINT to prevent hitting MAX value.

The system also had to be upgraded to Django 1.6 because the old Django could not cope with MySQL 5.7 (note that the last release in the Django 1.x series is 1.11, so it looks like getting to a barely-working 1.6 was as far as they could climb?):

In parallel with the MySQL 5.7 upgrade we also Upgraded Django to 1.6 due a behavioral change in MySQL 5.7 related to how transactions/commits were handled for SELECT statements. This behavior change was resulting in errors with older version of Python/Django running on MySQL 5.7

They adopted gh-ost, developed at GitHub, as their schema-change tool:

In December 2019, the Eventbrite DBRE successfully implemented a table ALTER via gh-ost on one of our larger MySQL tables.

The main reason seems to be that they ran into the limitations of pt-online-schema-change (which I covered in "GitHub 發展出來的 ALTER TABLE 方式"):

Eventbrite had traditionally used pt-online-schema-change (pt-osc) to ALTER MySQL tables in production. pt-osc uses MySQL triggers to move data from the original to the “duplicate” table which is a very expensive operation and can cause replication lag. Matter of fact, it had directly resulted in several outages in H1 of 2019 due to replication lag or breakage.
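
To get a feel for what a triggerless gh-ost migration looks like (for example the INT to BIGINT conversion mentioned earlier), here is a minimal sketch that shells out to gh-ost; the host, credentials and table names are placeholders, and the flags are gh-ost's commonly documented ones rather than Eventbrite's actual configuration:

# Widen a primary key from INT to BIGINT with gh-ost. Host, credentials and
# table names are placeholders; flags follow gh-ost's documented usage, not
# Eventbrite's actual setup. Drop --execute to do a dry run first.
import subprocess

subprocess.run(
    [
        "gh-ost",
        "--host=replica.db.example.internal",  # gh-ost prefers reading binlogs from a replica
        "--user=ghost",
        "--password=REDACTED",
        "--database=events",
        "--table=orders",
        "--alter=MODIFY COLUMN id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT",
        "--chunk-size=1000",
        "--max-lag-millis=1500",
        "--verbose",
        "--execute",
    ],
    check=True,
)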

Another piece they brought in is Orchestrator, paired with HAProxy for now, though they plan to move to ProxySQL later:

Next on the list was implementing improvements to MySQL high availability and automatic failover using Orchestrator. In February of 2020 we implemented a new HAProxy layer in front of all DB clusters and we released Orchestrator to production!

Orchestrator can successfully detect the primary failure and promote a new primary. The goal was to implement Orchestrator with HAProxy first and then eventually move to Orchestrator with ProxySQL.

Finally they mention Shift, developed at Square, which wraps gh-ost in a web UI.

It is still fun to see this kind of article in 2021...

AWS's T4g instances expand to more regions

Earlier, in "AWS 推出了 ARM 平台上 T 系列的機器", I mentioned that Amazon EC2 had launched the ARM-based t4g.*; at the time the only Asian regions were Tokyo and Mumbai. They are now available in more regions: "Announcing new Amazon EC2 T4g instances powered by AWS Graviton2 processors along with a T4g free trial in Asia Pacific (Sydney, Singapore), Europe (London), North Americas (Canada Central, San Francisco), and South Americas (Sao Paulo) regions".

Here are the Singapore prices:

Instance    vCPU   ECU        Memory    Storage     Price
t4g.nano    2      N/A        0.5 GiB   EBS Only    $0.0053 per Hour
t3.nano     2      Variable   0.5 GiB   EBS Only    $0.0066 per Hour
t3a.nano    2      Variable   0.5 GiB   EBS Only    $0.0059 per Hour
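
A quick calculation of how much cheaper the ARM option is, using the Singapore prices above:

# How much cheaper t4g.nano is than t3.nano / t3a.nano at the prices above.
prices = {"t4g.nano": 0.0053, "t3.nano": 0.0066, "t3a.nano": 0.0059}

for name in ("t3.nano", "t3a.nano"):
    saving = (prices[name] - prices["t4g.nano"]) / prices[name] * 100
    print(f"t4g.nano vs {name}: {saving:.1f}% cheaper")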

Time to test a few things and see how it goes...